Civilization state
A civilization state, or civilizational state, is a country that aims to represent not just a historical territory, ethnolinguistic group, or body of governance, but a unique civilization in its own right. It is distinguished from the concept of a nation state by describing a country's dominant sociopolitical modes as constituting a category larger than a single nation. When classifying states as civilization states, emphasis is often placed on a country's historical continuity and cultural unity across a large geographic region. The term was coined in the 1990s to describe China and was later applied to India; it has also been used to describe countries such as Egypt, Russia, Turkey, Iran and the United States. The term has been popularized by Bruno Maçães in a series of essays since 2018.

China as a civilization state

The term "civilization-state" was first used by American political scientist Lucian Pye in 1990 to categorize China as having a distinct sociopolitical character, as opposed to viewing it as a nation state in the European model. The use of this new term implies that China was and still is an "empire state" with a unique political tradition and governmental structure, and its proponents assert that the nation state model fails to properly describe the evolution of the Chinese state. Proponents of the label describe China as having a unique historical and cultural unity, derived from a continuous process of cultural syncretism. The term was further popularized by its use in When China Rules the World by British political scientist Martin Jacques.

According to Li Xing and Timothy M. Shaw, the central feature of analyzing China as a civilization state is the view that the Chinese state derives its legitimacy from the continuation of a sociopolitical order which posits that the state maintains natural authority over its subjects and is the "guardian" of both its citizens and their society, a view of the state that is completely distinct from the Westphalian nation-state model. Other scholars argue that the key features of a civilization-state are the maintenance of an ethos of cultural unity, despite significant cultural diversity, across centuries of history and a large geographic space. Some specifically draw attention to the longevity of the Chinese writing system, or describe China's existence as being uniquely and inexorably tied to the past.

Guang Xia pushes back on the idea of the uniqueness of a Chinese civilization-state. Xia argues that civilization-state discourse in China studies is an important and positive development, as it allows characteristics of the modern Chinese state to be properly analyzed in the context of their history. However, Xia concludes that ultimately all civilizations must reinvent themselves in the context of their history, and that it is a mistake to view China as a static entity or to portray it as being more tied to its past than the rest of the world.

India, Egypt and other proposed civilization states

Egypt

Egypt has also been described as a civilization state on the grounds that it asserts a civilizational continuity between ancient Egypt and contemporary Egypt, with its Muslim characteristics, centering a continuous historical and cultural identity and tradition that contrasts with the West's global cultural dominance.

India

India is another example of a civilization state, with political commentators arguing that a shared Indian identity predates British colonization and the Islamic invasions.
As a Hindu-majority state, India is perhaps the only nation that still follows a Bronze Age religion, namely Hinduism. Some scholars, drawing on archaeological evidence together with analogies to later cult divinities and religious practices, have suggested that the roots of Hinduism can be traced to the Bronze Age civilization.

Russia

Vladimir Putin's administration has at times embraced the rhetoric of portraying Russia as a distinct Eurasian civilization-state.

Criticism

British journalist Gideon Rachman argued in a 2019 article that the concept of a civilization state is at odds with modern conceptions of universal human rights and common democratic standards, and that it is inherently exclusionary towards minority groups who do not share the feature(s) that define a particular civilization state (for example, because they have a different religion).

See also

Chinese exceptionalism
Cradle of civilization
Empire
Four Great Ancient Civilizations
Five thousand years of Chinese civilization
Greater India
Imperialism
Nation state
Superstate
Tributary system of China
Westphalian sovereignty

Scholars

Christopher Coker
Lucian Pye
Zhang Weiwei and his 2011 book The China Wave: Rise of a Civilizational State

References

Zhang Weiwei (2012). The China Wave: Rise of a Civilizational State. Singapore: World Scientific Publishing.

External links

Video by Kraut that mentions the Chinese civilization state (at the marked timestamp).
Podcast about Christopher Coker's book The Rise of the Civilizational State (Polity Press, 2019).
Neo-nationalism
Neo-nationalism, or new nationalism, is an ideology and political movement built on the basic characteristics of classical nationalism. It developed into its present form by incorporating reactionary elements that emerged in response to the political, economic and socio-cultural changes brought by the second wave of globalization in the 1980s. Neo-nationalism is associated with several positions such as right-wing populism, anti-globalization, nativism, protectionism, opposition to immigration, Islamophobia in non-Muslim-majority countries, and, where applicable, Euroscepticism. Confronted with globalisation and the idea of a single global nation, neo-nationalists see problems of identification and threatened identities. They call for the protection of symbolic heritage, such as art and folk traditions, a concern they share with cultural nationalism. Particularly notable expressions of new nationalism include the vote for Brexit in the 2016 United Kingdom European Union membership referendum and the 2016 election of Donald Trump as president of the United States. Several neo-nationalist politicians have come to power or run strongly during the 2010s and 2020s, including Giorgia Meloni in Italy, Marine Le Pen in France, Rodrigo Duterte and Bongbong Marcos in the Philippines, and Jair Bolsonaro in Brazil.

Origins

Neo-nationalism is considered a pan-Western European phenomenon. It has its origins in the post-Cold War period and the changes that the third phase of globalization brought to Western European states. European Union integration and enlargement gave rise to a series of economic, social and political changes, causing uncertainty at both the individual and the collective level. The empowerment of the European Union through the admission of new members and the referendums on a European Constitution promoted the idea of a transnational quasi-state and a global nation, with liberal democracy as the single political ideology governing that transnational state. After the referendum on the Treaty establishing a Constitution for Europe was rejected, the delegation of national sovereignty to the European Union was seen by neo-nationalists as a strategic act aimed at an accumulation of power that undermines states' national sovereignty and their right of self-determination.

External factors

The dramatic events that marked the Islamic world in the 1980s, such as the Iranian Revolution, set off increased immigration towards Western European states. The problems that immigrants encountered with their arrival, accommodation and integration into the domestic society of the host state prompted a restructuring of the political agenda and policy adjustments that accommodated the diversity of immigrants. The inclusion of "foreign principles" alongside the traditional elements that constitute the character of the host state as criteria for policy fed the sense of threat felt by neo-nationalists. This process was framed as "Islamization" and became the explanatory factor for a specific defensive collective behaviour. The conflicts and violence that followed political destabilization in some Islamic states led to the categorisation of Islam as anti-democratic and anti-modern in character and at odds with Western liberal democracy. After the September 11 attacks, this image of Islam became dominant.
The sense of an "Islamic threat" to modern societies and their culture that spread across Western European states resulted in a rise of national awareness and pride in culture and folklore, and in a perceived need to protect national cultural identity.

Roots in nationalism

Neo-nationalism is the successor to classical nationalism. Both nationalists and neo-nationalists see the nation as one family but differ in the criteria for affiliation. Nationalists see the state and the nation as a family whose members are inextricably linked, with ethnic, racial, genetic, religious or cultural homogeneity as the criteria of belonging. In contrast, neo-nationalists take historical association as the major factor for granting membership in the national family, which makes them fundamentally different from their predecessors in terms of inclusiveness.

Overview and characteristics

Writing for Politico, Michael Hirsh described new nationalism as "a bitter populist rejection of the status quo that global elites have imposed on the international system since the Cold War ended, and which lower-income voters have decided—understandably—is unfair." Michael Brendan Dougherty wrote in The Week that new nationalism is a "broad nativist revolt" against post-Cold War politics long "characterized by an orthodoxy of free trade, nurturing the service economy, neoliberal trading arrangements, and liberalized immigration policies." Political science professor Eirikur Bergmann defines new nationalism as a specific kind of nativist populism. The Economist wrote in November 2016 that "new nationalists are riding high on promises to close borders and restore societies to a past homogeneity." Clarence Page wrote in the Las Vegas Sun that "a new neo-tribal nationalism has boiled up in European politics and to a lesser degree in the United States since the global economic meltdown of 2008". In The Week, Ryan Cooper and researchers with the Centre for Economic Policy Research have linked 21st-century right-wing populism to the Great Recession. According to Harvard political theorist Yascha Mounk, "economic stagnation among lower- and middle-class whites [has been] a main driver for nationalism's rise around the globe."

According to religion scholar Mark L. Movesian, new nationalism "sets the nation-state against supranational, liberal regimes like the EU or NAFTA, and local customs and traditions, including religious traditions, against alien, outside trends." David Brog and Yoram Hazony wrote in National Review that some conservatives view the new nationalism associated with Brexit, Rodrigo Duterte, and Donald Trump as a betrayal of conservative ideology, while they themselves see it as a "return". According to conservative commentator Jonah Goldberg, the nationalism associated with Trump is "really little more than a brand name for generic white identity politics." Writing for The Week, Damon Linker called the idea that neo-nationalism is racist "nonsense" and went on to say that "the tendency of progressives to describe it as nothing but 'racism, Islamophobia, and xenophobia'—is the desire to delegitimize any particularistic attachment or form of solidarity, be it national, linguistic, religious, territorial, or ethnic."

Regarding new nationalism, The Economist said that "Mr Trump needs to realise that his policies will unfold in the context of other countries' jealous nationalism" and called nationalism itself a "slippery concept" that is "easy to manipulate".
The Economist also repeatedly contrasted ethnic nationalism with civic nationalism and implied that new nationalism could become "angry" and difficult to control, citing Chinese nationalism as an example.

Associated politicians, parties and events

Brazil

The president of Brazil, Jair Bolsonaro of the country's Liberal Party, has been described as a leading new nationalist. Bolsonaro's ideology and policies have been heavily influenced by his adviser, the nationalist thinker Olavo de Carvalho.

China

Chinese Communist Party general secretary Xi Jinping's concept of the "Chinese Dream" has been described as an expression of new nationalism. His form of nationalism stresses pride in the historic Chinese civilisation and embraces the teachings of Confucius and other ancient Chinese sages, thereby rejecting the anti-Confucius campaign of Party chairman Mao Zedong.

Egypt

Egyptian President Abdel Fattah el-Sisi (assumed office in 2014) and his Nation's Future Party have been described as new nationalist.

Hungary

Hungarian Prime Minister Viktor Orbán (assumed office in 2010), the leader of the ruling Fidesz party, has been described as a new nationalist.

India

Indian Prime Minister Narendra Modi (assumed office in 2014) and his Bharatiya Janata Party (BJP) have been referred to as neo-nationalist. Modi is a volunteer in the Rashtriya Swayamsevak Sangh (RSS), a religio-socio-cultural voluntary organisation with which the BJP is aligned and which has also been said to advocate a neo-nationalist ideology. Modi's nationalist campaigns have been directed by BJP strategist Amit Shah, who currently serves as the Indian Home Minister (assumed office in 2019) and has been touted as a potential successor to Modi as Prime Minister. Yogi Adityanath, Chief Minister of the Indian state of Uttar Pradesh (assumed office in 2017), has also been identified as a neo-nationalist and has likewise been touted as a future Prime Minister of the country.

Israel

Israeli Prime Minister Benjamin Netanyahu (in office 2009–2021), the leader of the Likud party, has been described both as promoting new nationalism and as pursuing a foreign policy of close ties with other new nationalist leaders, including Trump, Orbán, Salvini, Putin, Modi, Bolsonaro, Duterte and Sisi. In 2019, Netanyahu forged a political alliance with the ultranationalist Union of Right-Wing Parties.

Italy

Italian Prime Minister Giuseppe Conte (assumed office in 2018), head of the populist coalition Government of Change, and in particular the former Deputy Prime Minister and Interior Minister and League leader Matteo Salvini (2018–2019), were often described as new nationalists. While in office, Salvini was described by some media outlets as the most powerful politician in the country and a "de facto prime minister". In August 2019, Salvini filed a motion of no confidence in the coalition government, calling for a new election so as to take "full powers", but Conte instead formed a new government between the Five Star Movement (M5S) and the Democratic Party (PD). At the head of this new cabinet, Conte toned down his neo-nationalist rhetoric. In the 2022 Italian general election, the neo-nationalist Brothers of Italy emerged as the most voted party, and its leader, Giorgia Meloni, became the new prime minister on 22 October 2022, at the head of what was described as the most right-wing government in Italy since 1945.
Japan

Prime Minister Shinzō Abe (in office 2012–2020), a member of the far-right organisation Nippon Kaigi, promoted ideas of new nationalism, as did the Liberal Democratic Party of Japan, which he led.

Mexico

Mexican President Andrés Manuel López Obrador (assumed office in 2018) has been described as a neo-nationalist and is often dubbed the "Mexican Donald Trump" by the media.

Pakistan

Former Pakistani prime minister Imran Khan (2018–2022), the leader of the then-ruling Pakistan Tehreek-e-Insaf (Pakistan Movement for Justice), was compared to Donald Trump and described as a neo-nationalist populist during his tenure.

Philippines

Philippine President Rodrigo Duterte (assumed office in 2016) has been described as a new nationalist. His party, PDP–Laban, has adopted Filipino nationalism as a platform. The country also has a "far-right" political reputation. Bongbong Marcos, elected in 2022, is expected to govern in continuity with Duterte, with a more far-right agenda.

Poland

The Confederation party, and especially its National Movement component, is the main party promoting new nationalism in Poland. There is also the neofascist and national-radical Narodowe Odrodzenie Polski, which promotes a harshly anti-globalist, anti-immigrant and anti-liberal agenda.

Russia

President of Russia Vladimir Putin (second President of Russia from 2000 to 2008 and fourth President of Russia from 2012) has been labelled a new nationalist. Putin has been described by Hirsh as "the harbinger of this new global nationalism". Charles Clover, the Moscow bureau chief of the Financial Times from 2008 to 2013, wrote a 2016 book titled Black Wind, White Snow: The Rise of Russia's New Nationalism. The Russian nationalist thinker Aleksandr Dugin in particular has had influence over the Kremlin, serving as an adviser to key members of the ruling United Russia party, including now-SVR Director Sergey Naryshkin. Russia has been accused of supporting new nationalist movements across the Western world.

Saudi Arabia

The Crown Prince of Saudi Arabia, Mohammad bin Salman (assumed office in 2017), has been described by Kristin Diwan of The Arab Gulf States Institute as being attached to a "strong new nationalism". This "new Saudi nationalism" has been used to bolster support for the Kingdom's economic and foreign policies and represents a shift away from the Kingdom's earlier dependence on religion for legitimacy. Many of the country's foreign policy actions from 2017 onwards, such as its blockade of Qatar and its diplomatic dispute with Canada, have been described as motivated by this nationalism. The policies of Mohammad bin Salman's administration have been heavily influenced by his adviser Saud al-Qahtani, who has been described as a "nationalist ideologue" and whose role has been compared to that formerly played by Steve Bannon.

Turkey

In 2014, Mustafa Akyol wrote of a new "brand of Turkish neonationalism" promoted by the Justice and Development Party (AKP), the country's ruling party, whose leader is President Recep Tayyip Erdoğan (assumed office in 2014). This Turkish "new nationalism" replaces the secular character of traditional forms of Turkish nationalism with an "assertively Muslim" identity. Devlet Bahçeli, the leader of the Nationalist Movement Party (MHP), has been described as creating a "new nationalist front" by forming the People's Alliance with Erdoğan's AKP in 2018. The MHP is affiliated with the Grey Wolves paramilitary organisation, for which Erdoğan has also expressed support.
United Arab Emirates

The United Arab Emirates, under the leadership of Crown Prince of Abu Dhabi Mohammed bin Zayed (assumed office in 2004), has been described as propagating a "new Arab nationalism", which replaces the older, leftist form of Arab nationalist ideology with a more conservative form, through its strong support for the rise of the new leaders of Egypt and Saudi Arabia, Abdel Fattah el-Sisi and Prince Mohammad bin Salman respectively, as a means of countering Iranian and Turkish influence in the Arab states.

United Kingdom

The 23 June 2016 referendum in the United Kingdom on leaving the European Union ("Brexit") has been described as a milestone of neo-nationalism. Owen Matthews noted similarities in the motives for supporting the Brexit movement and Donald Trump in the United States. He wrote in Newsweek that supporters of both are motivated by "a yearning to control immigration, reverse globalization and restore national greatness by disengaging from the wide, threatening world". Matt O'Brien wrote of Brexit as "the most shocking success for the new nationalism sweeping the Western world". Leaders of the Brexit campaign, such as Nigel Farage, the former leader of the eurosceptic UK Independence Party (now of Reform UK); London Mayor (now former prime minister and Conservative Party leader) Boris Johnson; Vote Leave co-convenor Michael Gove; former Brexit Secretary David Davis; and European Research Group chairman Jacob Rees-Mogg, have been called "new nationalists".

United States

Donald Trump's rise to the Republican candidacy was widely described as a sign of growing new nationalism in the United States. A Chicago Sun-Times editorial on the day of Donald Trump's inauguration called him "our new nationalist president". The appointment of Steve Bannon, the executive of Breitbart News (and later a co-founder of The Movement), as White House Chief Strategist was described by one analyst as the arousal of a "new world order, driven by patriotism and a fierce urge to look after your own, a neo-nationalism that endlessly smears Muslims and strives to turn back the clock on free trade and globalization, a world where military might counts for far more than diplomacy and compromise". In the wake of Trump's election, U.S. Senator Marco Rubio called for the Republican Party to embrace a "new nationalism" to oppose "economic elitism that has replaced a commitment to the dignity of work with a blind faith in financial markets and that views America simply as an economy instead of a nation."

People

The following politicians have all been described in some way as being neo-nationalists:

Africa

Muhammadu Buhari, President of Nigeria (2015–2023)
Hamid Chabat, former mayor of Fez (2003–2015) and leader of the Moroccan Istiqlal Party
Uhuru Kenyatta, President of Kenya (assumed office in 2013) and leader of the Jubilee Party of Kenya
Pieter Groenewald, leader of the Freedom Front Plus and member of the South African National Assembly
Julius Malema, president of the Economic Freedom Fighters and member of the South African National Assembly
Herman Mashaba, former mayor of Johannesburg (2016–2019) and ex-member of the Democratic Alliance
John Magufuli, President of Tanzania (2015–2021)
Isaias Afwerki, President of Eritrea (1993–)

Americas

Jair Bolsonaro, President of Brazil (2019–2023) and former member of the Social Liberal Party
Olavo de Carvalho, Brazilian political pundit and journalist
Mario Abdo Benítez, President of Paraguay (2018–) and candidate from the Colorado Party
Chi Hyun Chung, presidential candidate in the 2019 Bolivian general election
Luis Fernando Camacho, Governor of Santa Cruz (2021–)
Maxime Bernier, MP, 2017 candidate for the leadership of the Conservative Party of Canada and leader of the People's Party of Canada
Nayib Bukele, former mayor of San Salvador (2015–2018) and President of El Salvador (2019–)
Horacio Cartes, former president of Paraguay (2013–2018) and candidate from the Colorado Party
Andrés Chadwick, Interior Minister of Chile (2012–2014; 2018–2019) and member of the Independent Democratic Union
Juan Orlando Hernández, President of Honduras (2014–2022) and candidate from the National Party of Honduras
José Antonio Kast, member of the Chamber of Deputies of Chile (2002–2018), independent presidential candidate in the 2017 election, right-wing presidential candidate in the 2021 election and leader of the Republican Party
François Legault, Premier of Quebec (2018–) and leader of the Coalition Avenir Québec
Kellie Leitch, MP and 2017 candidate for the leadership of the Conservative Party of Canada
Iván Duque Márquez, President of Colombia (2018–2022) and candidate from the Democratic Center
Jimmy Morales, President of Guatemala (2016–2020) and candidate from the National Convergence Front
Alejandro Giammattei, President of Guatemala (2020–)
Fabricio Alvarado Muñoz, 2018 and 2022 presidential candidate
Juan Diego Castro Fernández, 2018 Costa Rican presidential candidate
Rodrigo Chaves Robles, President of Costa Rica
Andrés Manuel López Obrador, President of Mexico (2018–) and founder of the National Regeneration Movement
Kevin O'Leary, businessman and 2017 candidate for the leadership of the Conservative Party of Canada
Donald Trump, businessman, television personality, politician, former president of the United States (2017–2021) and member of the Republican Party
Marco Rubio, U.S. Senator from Florida and member of the Republican Party
Steve Bannon, American political figure, former White House Chief Strategist and former executive chairman of Breitbart News
Tucker Carlson, American political commentator and host for Fox News
Josh Hawley, U.S. Senator from Missouri and member of the Republican Party
Nicolás Maduro, President of Venezuela and leader of the PSUV
Daniel Ortega, President of Nicaragua
Elise Stefanik, U.S. representative from New York and member of the Republican Party
Marjorie Taylor Greene, U.S. representative from Georgia and member of the Republican Party
Lauren Boebert, U.S. representative from Colorado and member of the Republican Party
Mary Miller, U.S. representative from Illinois and member of the Republican Party
Matt Gaetz, U.S. representative from Florida and member of the Republican Party
Keiko Fujimori, former First Lady of Peru and presidential candidate in 2011, 2016, and 2021

Asia-Pacific

Tony Abbott, former prime minister of Australia (2013–2015) and former leader of the Liberal Party of Australia
Xi Jinping, paramount leader of China (2012–) and General Secretary of the Chinese Communist Party
Kim Jong-un, Supreme Leader of North Korea (2011–) and general secretary of the Workers' Party of Korea
Khaltmaagiin Battulga, President of Mongolia (2017–) and candidate of the Mongolian Democratic Party
Prayut Chan-o-cha, former Prime Minister of Thailand (2014–2023) and prime ministerial candidate of the Phalang Pracharat Party in the 2019 general election
Peter Dutton, Minister for Defence (2021–), Minister for Home Affairs (2018–2021) and member of the Liberal Party of Australia
Park Geun-hye, former president of South Korea (2013–2017) and former leader of the Saenuri Party
Hong Jun-pyo, former leader of the Liberty Korea Party and candidate in the 2017 presidential election
Bongbong Marcos, President of the Philippines (2022–)
Narendra Modi, Prime Minister of India (2014–) and member of the Bharatiya Janata Party
Shinzō Abe, former prime minister of Japan (2006–2007, 2012–2020) and former leader of the Liberal Democratic Party (2006–2007, 2012–2020)
Tarō Asō, Deputy Prime Minister of Japan (2012–2021) and Minister of Finance (2012–2021)
Imran Khan, Prime Minister of Pakistan (2018–) and leader of Pakistan Tehreek-e-Insaf
Rodrigo Duterte, President of the Philippines (2016–2022) and leader of PDP–Laban
Winston Peters, former deputy prime minister of New Zealand (2017–2020) and leader of New Zealand First
Najib Razak, former prime minister of Malaysia (2009–2018) and former leader of Barisan Nasional and the United Malays National Organisation
Hun Sen, Prime Minister of Cambodia (1985–) and leader of the Cambodian People's Party
Prabowo Subianto, President of Indonesia (2024–present), Defense Minister of Indonesia (2019–2024) and leader of the Great Indonesia Movement Party
Abdulla Yameen, former president of the Maldives (2013–2018) and leader of the Progressive Party of Maldives
Pauline Hanson, leader of One Nation
Min Aung Hlaing, leader of the Tatmadaw and Chairman of the State Administration Council
Lukar Jam Atsok, Sikyong candidate for the Central Tibetan Administration

Europe

Sebastian Kurz, former Chancellor of Austria (2017–2019, 2020–2021) and former leader of the Austrian People's Party
Heinz-Christian Strache, former Vice Chancellor of Austria (2017–2019) and former leader of the Freedom Party of Austria
Norbert Hofer, former Transport, Innovation and Technology Minister of Austria (2017–2019), leader of the Freedom Party of Austria and candidate in the 2016 presidential election
Tom Van Grieken, leader of the Belgian Vlaams Belang
Theo Francken, member of the Belgian Chamber of Representatives, former Secretary of State for Asylum and member of the N-VA
Mischaël Modrikamen, Belgian politician and lawyer, former leader of the People's Party and former executive director of The Movement
Tomislav Karamarko, Deputy Prime Minister of Croatia (2016) and former leader of the Croatian Democratic Union
Boyko Borisov, Prime Minister of Bulgaria (2009–2013, 2014–2021) and leader of GERB
Krasimir Karakachanov, Defence Minister of Bulgaria, leader of IMRO – Bulgarian National Movement and spokesperson for United Patriots
Veselin Mareshki, Bulgarian businessman and leader of Volya
Miloš Zeman, President of the Czech Republic (2013–), former Prime Minister of the Czech Republic (1998–2002) and leader of the Party of Civic Rights
Andrej Babiš, Prime Minister of the Czech Republic (2017–2021) and leader of ANO 2011
Tomio Okamura, leader of the Czech Freedom and Direct Democracy
Kristian Thulesen Dahl, member of the Folketing and leader of the Danish People's Party
Mart Helme, Deputy Prime Minister and Interior Minister of Estonia (assumed office in 2019) and leader of the Conservative People's Party of Estonia
Jussi Halla-aho, member of the Finnish Parliament and leader of the Finns Party
Marine Le Pen, leader of the French National Rally and candidate in the 2017 presidential election
Éric Zemmour, leader of Reconquête and candidate in the 2022 French presidential election
Alexander Gauland, member of the German Bundestag and co-leader of Alternative for Germany
Jörg Meuthen, member of the German Bundestag and co-leader of Alternative for Germany
Alice Weidel, member of the German Bundestag and parliamentary leader of Alternative for Germany
Adonis Georgiadis, Minister for Development and Investment of Greece and member of New Democracy
Panos Kammenos, former Defence Minister of Greece (2015–2019) and leader of the Independent Greeks
Georgios Karatzaferis, leader of the Popular Orthodox Rally
Ilias Kasidiaris, Greek agronomist and former member of the Greek Parliament
Ioannis Lagos, Greek MEP and leader of the National People's Conscience
Nikos Michaloliakos, Greek mathematician and leader of the Golden Dawn
Kyriakos Velopoulos, Greek television personality, politician and leader of the Greek Solution party
Makis Voridis, Agricultural Development Minister of Greece (assumed office in 2019) and member of New Democracy
Failos Kranidiotis, Greek lawyer and leader of New Right
Viktor Orbán, Prime Minister of Hungary (1998–2002, 2010–) and leader of Fidesz
Sigmundur Davíð Gunnlaugsson, former prime minister of Iceland (2013–2016) and leader of the Centre Party
Giorgia Meloni, Prime Minister of Italy (2022–present) and leader of Brothers of Italy
Matteo Salvini, Deputy Prime Minister of Italy (2018–2019, 2022–present) and current leader of the League
Raivis Zeltīts, Latvian politician and Secretary General of the National Alliance
Rolandas Paksas, former prime minister and President of Lithuania and former leader of Order and Justice
Nebojša Medojević, candidate in the 2008 Montenegrin presidential election and leader of the Movement for Changes
Geert Wilders, leader of the Dutch Party for Freedom
Thierry Baudet, member of the House of Representatives and leader of Forum for Democracy
Janusz Korwin-Mikke, Polish politician, philosopher, writer, former member of the European Parliament and leader of Confederation
Victor Ponta, former prime minister of Romania (2012–2015) and former leader of the Social Democratic Party
Vladimir Putin, President of Russia, former prime minister of Russia and leader of United Russia
Robert Fico, former prime minister of Slovakia and leader of Direction–Social Democracy
Andrej Danko, Speaker of the Slovak National Council and leader of the Slovak National Party
Janez Janša, Prime Minister of Slovenia and leader of the Slovenian Democratic Party
Santiago Abascal, former member of the Basque Parliament and leader of VOX
Pablo Casado, Spanish opposition leader and president of the People's Party
Jimmie Åkesson, member of the Swedish Riksdag and leader of the Sweden Democrats
Christoph Blocher, former member of the Swiss Federal Council and former vice president of the Swiss People's Party
Gerard Batten, deputy leader of the UK Independence Party, former Member of the European Parliament and former leader of the UK Independence Party
Christian Tybring-Gjedde, Progress Party member of the Norwegian Parliament

Middle East

Abdel Fattah el-Sisi, President of Egypt (2014–) and former Minister of Defence (2012–2014)
Muqtada al-Sadr, leader of the Iraqi Sadrist Movement
Benjamin Netanyahu, Prime Minister of Israel (1996–1999, 2009–2021, 2022–) and leader of Likud
Naftali Bennett, Prime Minister of Israel (2021–), former Israeli Minister of Education, former leader of The Jewish Home and current member of New Right
Khalifa Haftar, commander of the Libyan National Army (assumed office in 2015)
Tamim bin Hamad, Emir of Qatar (2013–)
Recep Tayyip Erdoğan, President of Turkey (2014–), former Prime Minister of Turkey (2003–2014) and leader of the Justice and Development Party
Mohammad bin Salman, Crown Prince of Saudi Arabia (2017–) and Deputy Prime Minister
Saud al-Qahtani, Saudi Arabian consultant and former Royal Court Advisor
Devlet Bahçeli, former Deputy Prime Minister of Turkey and leader of the Nationalist Movement Party
Mohammed bin Zayed Al Nahyan, Crown Prince of the United Arab Emirates
Bashar al-Assad, President of Syria (2000–)
Mohammed Dahlan, Palestinian politician and advisor to Crown Prince Mohammed bin Zayed Al Nahyan

Parties

The following parties have all been described in some way as being neo-nationalist parties:

Africa

Economic Freedom Fighters
Freedom Front Plus
National Council for the Defense of Democracy – Forces for the Defense of Democracy (Burundi)
ZANU-PF

Americas

Republican Party (United States), especially the Trumpist wing
Communist Party of Cuba
Liberal Party (Brazil, 2006)
Republican Party (Chile, 2019)
United Socialist Party of Venezuela

Asia-Pacific

Chinese Communist Party
Communist Party of Vietnam
Katipunan ng Demokratikong Pilipino
Kuomintang
Lao People's Revolutionary Party
Liberal Democratic Party (Japan)
One Nation in Australia

Europe

Alternative for Germany
The Danish People's Party, which provided parliamentary support for the centre-right governing coalition led by Venstre (2001–2011, 2015–2019)
The Dutch Forum for Democracy
The National Alliance, a member of the governing coalition in Latvia (since 2016)
The Slovak National Party, a member of the governing coalition in Slovakia (2016–2020)
We Are Family, a member of the governing coalition in Slovakia (since 2020)
The Sweden Democrats
The Swiss People's Party, a member of the governing coalition in Switzerland (since 1929)
The United Patriots, a member of the governing coalition in Bulgaria (since 2014)
The Flemish Vlaams Belang
The Portuguese Chega ("Enough")

See also

Alt-right
Anti-globalization movement
Christian right
Conservative wave
Ethnic nationalism
Japanese nationalism
European Alliance of People and Nations
Political influence of Evangelicalism in Latin America
Evangelical political parties in Latin America
Far-right politics
Illiberal democracy
National conservatism
Neoconservatism
Nippon Kaigi
Paleoconservatism
Pan-nationalism
Pasokification
Radical right (disambiguation)
Right-wing populism
The Movement
Traditional conservatism
Trumpism
Ultranationalism
Western hunter-gatherer
In archaeogenetics, western hunter-gatherer (WHG, also known as west European hunter-gatherer, western European hunter-gatherer or Oberkassel cluster) is a distinct ancestral component of modern Europeans, representing descent from a population of Mesolithic hunter-gatherers who scattered over western, southern and central Europe, from the British Isles in the west to the Carpathians in the east, following the retreat of the ice sheet of the Last Glacial Maximum. It is closely associated and sometimes considered synonymous with the concept of the Villabruna cluster, named after the Ripari Villabruna cave in Italy and known from the terminal Pleistocene of Europe, which is largely ancestral to later WHG populations. WHGs share a closer genetic relationship to ancient and modern peoples in the Middle East and the Caucasus than earlier European hunter-gatherers did. Their precise relationships to other groups are somewhat obscure, with the origin of the Villabruna cluster likely lying somewhere in the vicinity of the Balkans. The Villabruna cluster had expanded into the Italian and Iberian Peninsulas by approximately 19,000 years ago, with the WHG cluster subsequently expanding across Western Europe at the end of the Pleistocene, around 14,000–12,000 years ago, largely replacing the Magdalenians who previously dominated the region. The Magdalenians largely descended from earlier Western European Cro-Magnon groups that had arrived in the region over 30,000 years ago, prior to the Last Glacial Maximum.

WHGs constituted one of the main genetic groups in the postglacial period of early Holocene Europe, along with eastern hunter-gatherers (EHG) in Eastern Europe. The border between WHGs and EHGs ran roughly from the lower Danube, northward along the western forests of the Dnieper, towards the western Baltic Sea. EHGs primarily consisted of a mixture of WHG-related and Ancient North Eurasian (ANE) ancestry. Scandinavia was inhabited by Scandinavian hunter-gatherers (SHGs), who were a mixture between WHG and EHG. In the Iberian Peninsula, early Holocene hunter-gatherers consisted of a mixture of WHG and Magdalenian Cro-Magnon (GoyetQ2) ancestry. Once the main population throughout Europe, the WHGs were largely replaced by successive expansions of Early European Farmers (EEFs) of Anatolian origin during the early Neolithic, who generally carried a minor amount of WHG ancestry due to admixture with WHG groups during their European expansion. Among modern-day populations, WHG ancestry is most common among populations of the eastern Baltic region.

Research

Western hunter-gatherers (WHG) are recognised as a distinct ancestral component contributing to the ancestry of most modern Europeans. Most Europeans can be modeled as a mixture of WHG, EEF, and Western Steppe Herders (WSH) from the Pontic–Caspian steppe. WHGs also contributed ancestry to other ancient groups such as the Early European Farmers (EEF), who were, however, mostly of Anatolian descent. With the Neolithic expansion, EEF came to dominate the gene pool in most parts of Europe, although WHG ancestry had a resurgence in Western Europe from the Early Neolithic to the Middle Neolithic.

Origin and expansion into continental Europe

WHGs represent a major population shift within Europe at the end of the Ice Age, probably a population expansion into continental Europe from Southeastern European or West Asian refugia.
It is thought that their ancestors separated from eastern Eurasians around 40,000 BP, and from Ancient North Eurasians (ANE) prior to 24,000 BP (the estimated age of the Mal'ta boy). This date was subsequently pushed further back in time by the findings of the Yana Rhinoceros Horn Site, to around 38,000 years ago, shortly after the divergence of the West Eurasian and East Eurasian lineages. Vallini et al. 2022 argue that the dispersal and split of West Eurasian lineages occurred no earlier than c. 38,000 years ago, with older Initial Upper Paleolithic European specimens, such as those found in the Zlaty Kun, Peștera cu Oase and Bacho Kiro caves, being unrelated to western hunter-gatherers and instead closer to Ancient East Eurasians or basal to both.

The relationships of the WHG/Villabruna cluster to other Paleolithic human groups in Europe and West Asia are obscure and subject to conflicting interpretations. A 2022 study proposed that the WHG/Villabruna population genetically diverged from hunter-gatherers in the Middle East and the Caucasus around 26,000 years ago, during the Last Glacial Maximum. WHG genomes display higher affinity for ancient and modern Middle Eastern populations when compared against earlier Paleolithic Europeans such as the Gravettians. The affinity for ancient Middle Eastern populations in Europe increased after the Last Glacial Maximum, correlating with the expansion of WHG (Villabruna or Oberkassel) ancestry. There is also evidence for bi-directional gene flow between WHG and Middle Eastern populations as early as 15,000 years ago.

WHG-associated remains belonged primarily to the human Y-chromosome haplogroup I-M170, with a lower frequency of C-F3393 (specifically the clade C-V20/C1a2), which has been found commonly among earlier Paleolithic European remains such as Kostenki-14 and Sungir. The paternal haplogroup C-V20 can still be found in men living in modern Spain, attesting to this lineage's longstanding presence in Western Europe. The Villabruna cluster also carried the Y-haplogroup R1b (R1b-L754), derived from the Ancient North Eurasian haplogroup R*, indicating "an early link between Europe and the western edge of the Steppe Belt of Eurasia." Their mitochondrial chromosomes belonged primarily to haplogroup U5.

A 2023 study proposed that the Villabruna cluster emerged from the mixing, in roughly equal proportions, of a divergent West Eurasian ancestry with a West Eurasian ancestry closely related to the 35,000-year-old BK1653 individual from Bacho Kiro Cave in Bulgaria, with this BK1653-related ancestry also significantly (~59%) ancestral to the Věstonice cluster characteristic of eastern Gravettian-producing Cro-Magnon groups, which may reflect shared ancestry in the Balkan region. The earliest known individuals of predominantly WHG/Villabruna ancestry in Europe are known from Italy, dating to around 17,000 years ago, though an individual from El Mirón cave in northern Spain with 43% Villabruna ancestry is known from 19,000 years ago. While not confirmed, the Villabruna cluster was probably present in the Balkans earlier than elsewhere in Southern Europe. Early WHG/Villabruna populations are associated with the Epigravettian archaeological culture, which largely replaced populations associated with the Magdalenian culture about 14,000 years ago (the Magdalenian-associated Goyet-Q2 cluster primarily descended from the earlier Solutrean and from western Gravettian-producing groups in France and Spain).
A 2023 study found that, relative to earlier Western European Cro-Magnon populations like the Gravettians, the Magdalenian-associated Goyet-Q2 cluster carried significant (~30%) Villabruna ancestry even prior to the major expansion of WHG-related groups north of the Alps. This study also found that, relative to earlier members of the Villabruna cluster from Italy, WHG-related groups which appeared north of the Alps beginning around 14,000 years ago carried around 25% ancestry from the Goyet-Q2 cluster (or alternatively 10% from the western Gravettian-associated Fournol cluster). This paper proposed that WHG should be named the Oberkassel cluster, after one of the oldest WHG individuals found north of the Alps. The study suggests that Oberkassel ancestry was mostly already formed before expanding, possibly around the west side of the Alps, to Western and Central Europe and Britain, where sampled WHG individuals are genetically homogeneous. This is in contrast to the arrival of Villabruna and Oberkassel ancestry in Iberia, which seems to have involved repeated admixture events with local populations carrying high levels of Goyet-Q2 ancestry. This, and the survival of specific Y-DNA haplogroup C1 clades previously observed among early European hunter-gatherers, suggests relatively higher genetic continuity in southwest Europe during this period.

Interaction with other populations

The WHG were also found to have contributed ancestry to populations on the borders of Europe, such as early Anatolian farmers and Ancient Northwestern Africans, as well as to other European groups such as the eastern hunter-gatherers. The relationship of WHGs to the EHGs remains inconclusive. EHGs are modeled to derive varying degrees of ancestry from a WHG-related lineage, ranging from merely 25% to up to 91%, with the remainder being linked to gene flow from Paleolithic Siberians (ANE) and perhaps Caucasus hunter-gatherers. Another lineage, known as the Scandinavian hunter-gatherers (SHGs), was found to be a mix of EHGs and WHGs. In the Iberian Peninsula, early Holocene hunter-gatherers consisted of populations with a mixture of WHG and Magdalenian Cro-Magnon (GoyetQ2) ancestry. People of the Mesolithic Kunda culture and the Narva culture of the eastern Baltic were a mix of WHG and EHG, showing the closest affinity with WHG. Samples from the Ukrainian Mesolithic and Neolithic were found to cluster tightly together between WHG and EHG, suggesting genetic continuity in the Dnieper Rapids area for a period of 4,000 years. The Ukrainian samples belonged exclusively to the maternal haplogroup U, which is found in around 80% of all European hunter-gatherer samples. People of the Pit–Comb Ware culture (CCC) of the eastern Baltic were closely related to EHG. Unlike most WHGs, the WHGs of the eastern Baltic did not receive European farmer admixture during the Neolithic. Modern populations of the eastern Baltic thus harbor a larger amount of WHG ancestry than any other population in Europe.

SHGs have been found to contain a mix of WHG components, who had likely migrated into Scandinavia from the south, and EHGs, who had later migrated into Scandinavia from the northeast along the Norwegian coast. This hypothesis is supported by evidence that SHGs from western and northern Scandinavia had less WHG ancestry (ca. 51%) than individuals from eastern Scandinavia (ca. 62%). The WHGs who entered Scandinavia are believed to have belonged to the Ahrensburg culture.
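The ancestry percentages quoted above (for example, EHG modeled as partly WHG-related with an ANE remainder, or SHG as a WHG–EHG mix) come from formal admixture modeling of ancient genomes. The snippet below is only a toy illustration of the underlying idea, not the method used in the cited studies, which rely on dedicated tools such as qpAdm or ADMIXTURE: it fits a target population's allele frequencies as a non-negative mixture of hypothetical source frequencies. All population labels and numbers are invented placeholders.

```python
import numpy as np
from scipy.optimize import nnls

# Rows = SNPs, columns = hypothetical source populations (WHG, ANE, CHG).
# All allele frequencies below are made up purely for illustration.
sources = np.array([
    [0.10, 0.80, 0.45],
    [0.70, 0.15, 0.40],
    [0.30, 0.90, 0.55],
    [0.05, 0.60, 0.35],
    [0.60, 0.20, 0.50],
])
target = np.array([0.35, 0.48, 0.52, 0.26, 0.45])  # made-up "EHG-like" frequencies

weights, _ = nnls(sources, target)        # non-negative least-squares fit
proportions = weights / weights.sum()     # crude renormalisation to ancestry fractions
print(dict(zip(["WHG", "ANE", "CHG"], proportions.round(2))))
```

Real analyses additionally test whether a proposed set of sources is adequate at all and report standard errors; the toy fit above only conveys what "X% WHG ancestry" means operationally.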
EHGs and WHGs displayed lower allele frequencies of SLC45A2 and SLC24A5, which cause depigmentation, and of OCA2/HERC2, which causes light eye color, than SHGs did.

The DNA of eleven WHGs from the Upper Palaeolithic and Mesolithic in Western Europe, Central Europe and the Balkans has been analyzed with regard to their Y-DNA and mtDNA haplogroups. The analysis suggested that WHGs were once widely distributed from the Atlantic coast in the west, to Sicily in the south, to the Balkans in the southeast, for more than six thousand years. The study also included an analysis of a large number of individuals from prehistoric Eastern Europe. Thirty-seven samples were collected from Mesolithic and Neolithic Ukraine (9500–6000 BC). These were determined to be intermediate between EHG and SHG, although WHG ancestry in this population increased during the Neolithic. Samples of Y-DNA extracted from these individuals belonged exclusively to R haplotypes (particularly subclades of R1b1) and I haplotypes (particularly subclades of I2). The mtDNA belonged almost exclusively to U (particularly subclades of U5 and U4).

A large number of individuals from the Zvejnieki burial ground, which mostly belonged to the Kunda culture and Narva culture of the eastern Baltic, were analyzed. These individuals were mostly of WHG descent in the earlier phases, but over time EHG ancestry became predominant. The Y-DNA of this site belonged almost exclusively to haplotypes of haplogroups R1b1a1a and I2a1. The mtDNA belonged exclusively to haplogroup U (particularly subclades of U2, U4 and U5). Forty individuals from three sites of the Iron Gates Mesolithic in the Balkans were also analyzed and estimated to be of 85% WHG and 15% EHG descent. The males at these sites carried exclusively haplogroup R1b1a and I (mostly subclades of I2a) haplotypes. The mtDNA belonged mostly to U (particularly subclades of U5 and U4).

People of the Balkan Neolithic were found to harbor 98% Anatolian ancestry and 2% WHG ancestry. By the Chalcolithic, people of the Cucuteni–Trypillia culture were found to harbor about 20% hunter-gatherer ancestry, which was intermediate between EHG and WHG. People of the Globular Amphora culture were found to harbor ca. 25% WHG ancestry, which is significantly higher than in Middle Neolithic groups of Central Europe.

Replacement by Neolithic farmers

A seminal 2014 study first identified the contribution of three main components to modern European lineages: the Western Hunter-Gatherers (WHG, in proportions of up to 50% in Northern Europeans), the Ancient North Eurasians (ANE, Upper Palaeolithic Siberians later associated with the Indo-European expansion, present in proportions of up to 20%), and finally the Early European Farmers (EEF, agriculturalists of mainly Near Eastern origin who migrated to Europe from circa 8,000 BP, now present in proportions from around 30% in the Baltic region to around 90% in the Mediterranean). The Early European Farmer (EEF) component was identified based on the genome of a woman buried c. 7,000 years ago in a Linear Pottery culture grave in Stuttgart, Germany. This 2014 study found evidence for genetic mixing between WHG and EEF throughout Europe, with the largest contribution of EEF in Mediterranean Europe (especially in Sardinia, Sicily, Malta and among Ashkenazi Jews), and the largest contribution of WHG in Northern Europe and among Basque people. Since 2014, further studies have refined the picture of interbreeding between EEF and WHG.
In a 2017 analysis of 180 ancient DNA datasets of the Chalcolithic and Neolithic periods from Hungary, Germany and Spain, evidence was found of a prolonged period of interbreeding. Admixture took place regionally, from local hunter-gatherer populations, so that populations from the three regions (Germany, Iberia and Hungary) were genetically distinguishable at all stages of the Neolithic period, with a gradually increasing ratio of WHG ancestry in farming populations over time. This suggests that after the initial expansion of early farmers, there were no further long-range migrations substantial enough to homogenize the farming population, and that farming and hunter-gatherer populations existed side by side for many centuries, with ongoing gradual admixture throughout the 5th to 4th millennia BC (rather than a single admixture event on initial contact). Admixture rates varied geographically; in the late Neolithic, WHG ancestry in farmers was around 10% in Hungary, around 25% in Germany and as high as 50% in Iberia.

Analysis of six sets of remains from the Grotta Continenza in Italy showed that three of the individuals belonged to Y-haplogroup I2a-P214, with the maternal haplogroups U5b1 (two individuals) and U5b3 (one individual) represented among them. Around 6000 BC, the WHGs of Italy were almost completely genetically replaced by EEFs (two individuals carrying G2a2 and one carrying haplogroup R1b), although WHG ancestry slightly increased in subsequent millennia.

Neolithic individuals in the British Isles were close to Iberian and Central European Early and Middle Neolithic populations, modeled as having about 75% ancestry from EEF with the rest coming from WHG in continental Europe. These farmers subsequently replaced most of the WHG population in the British Isles without mixing much with them. The WHG are estimated to have contributed between 20% and 30% of the ancestry of Neolithic EEF groups throughout Europe. Specific adaptations against local pathogens may have been introduced into Neolithic EEF populations via Mesolithic WHG admixture.

A study on Mesolithic hunter-gatherers from Denmark found that they were related to contemporary western hunter-gatherers and are associated with the Maglemose, Kongemose and Ertebølle cultures. They displayed "genetic homogeneity from around 10,500 to 5,900 calibrated years before present", until "Neolithic farmers with Anatolian-derived ancestry arrived". The transition to the Neolithic period was "very abrupt and resulted in a population turnover with limited genetic contribution from local hunter-gatherers". The succeeding Neolithic population has been associated with the Funnelbeaker culture.

Physical appearance

According to David Reich, DNA analysis has shown that western hunter-gatherers were typically dark-skinned, dark-haired, and blue-eyed. The dark skin was due to their Out-of-Africa origin (all Homo sapiens populations having initially had dark skin), while the blue eyes were the result of a variation in their OCA2 gene, which caused iris depigmentation. Archaeologist Graeme Warren has said that their skin color ranged from olive to black, and speculated that they may have had some regional variety of eye and hair colors. This is strikingly different from the distantly related eastern hunter-gatherers (EHG), who have been suggested to be light-skinned, brown-eyed or blue-eyed, and dark-haired or light-haired.
Two WHG skeletons with incomplete SNPs, La Braña and Cheddar Man, are predicted to have had dark or dark-to-black skin, whereas two other WHG skeletons with complete SNPs, "Sven" and Loschbour man, are predicted to have had dark or intermediate-to-dark skin and intermediate skin, respectively. Spanish biologist Carles Lalueza-Fox said the La Braña-1 individual had dark skin, "although we cannot know the exact shade."

According to a 2020 study, the arrival of Early European Farmers (EEFs) from western Anatolia from 8500 to 5000 years ago, along with Western Steppe Herders during the Bronze Age, caused a rapid evolution of European populations towards lighter skin and hair. Admixture between hunter-gatherer and agriculturalist populations was apparently occasional, but not extensive.

Some authors have expressed caution regarding skin pigmentation reconstructions. Quillen et al. (2019) acknowledge studies that generally show that "lighter skin color was uncommon across much of Europe during the Mesolithic", including studies regarding the "dark or dark to black" predictions for Cheddar Man, but warn that "reconstructions of Mesolithic and Neolithic pigmentation phenotype using loci common in modern populations should be interpreted with some caution, as it is possible that other as yet unexamined loci may have also influenced phenotype." Geneticist Susan Walsh at Indiana University–Purdue University Indianapolis, who worked on the Cheddar Man project, said that "we simply don't know his skin colour". German biochemist Johannes Krause stated that we do not know whether the skin color of Western European hunter-gatherers was more similar to that of people from present-day Central Africa or of people from the Arab region; it is only certain that they did not carry any known mutation responsible for the light skin of subsequent populations of Europeans.

A 2024 study of the genomic ancestry and social dynamics of the last hunter-gatherers of Atlantic France stated that "phenotypically, we find some diversity during the Late Mesolithic in France": two of the WHGs sequenced in the study "likely had pale to intermediate skin pigmentation", while "most individuals carry the dark skin and blue eyes characteristic of WHGs".
MECE principle
The MECE principle (mutually exclusive and collectively exhaustive) is a grouping principle for separating a set of items into subsets that are mutually exclusive (ME) and collectively exhaustive (CE). It was developed in the late 1960s by Barbara Minto at McKinsey & Company and underlies her Minto Pyramid Principle. Although Minto takes credit for MECE, she has said in an interview with McKinsey that the idea goes back as far as Aristotle. The MECE principle has been used in the business mapping process, wherein the optimum arrangement of information is exhaustive and does not double count at any level of the hierarchy.

Examples of MECE arrangements include categorizing people by year of birth (assuming all years are known), apartments by their building number, letters by postmark, and dice rolls. A non-MECE example would be categorization by nationality, because nationalities are neither mutually exclusive (some people have dual nationality) nor collectively exhaustive (some people have none). A minimal programmatic check of the two conditions is sketched at the end of this entry.

Common uses

Strategy consultants use MECE problem structuring to break down client problems into logical, clean buckets of analysis that they can then hand out as work streams to consulting staff on the project. Similarly, MECE can be used in technical problem solving and communication. In some technical projects, like Six Sigma projects, the most effective method of communication is not the same as the problem-solving process. In Six Sigma, the DMAIC process is used, but executive audiences looking for a summary or overview may not be interested in the details. By reorganizing the information using MECE and the related storytelling framework, the point of the topic can be addressed quickly and supported with appropriate detail. The aim is more effective communication.

Criticisms

The MECE concept has been criticized for not being exhaustive, as it does not exclude superfluous or extraneous items. MECE thinking can also be too limiting, as mutual exclusiveness is not necessarily desirable. For instance, while it may be desirable to classify the answers to a question in a MECE framework so as to consider all of them exactly once, forcing the answers themselves to be MECE can be unnecessarily limiting. Another attribute of MECE thinking is that, by definition, it precludes redundancies; however, there are cases where redundancies are desirable or even necessary.

Acronym pronunciation

There is some debate regarding the pronunciation of the acronym MECE; although many pronounce it one way, the author insisted on a different pronunciation.

See also

Proof by cases or case analysis
Partition of a set, for a mathematical treatment
Work breakdown structure, for application in project management
Algebraic data type in programming, which makes it possible to define analogous structures
Carroll diagram in logic, which divides a set into partitions of attributes
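To make the two conditions concrete, here is a minimal sketch (not from Minto's work; the function and example data are invented for illustration) that checks whether a proposed grouping of a known set of items is MECE:

```python
from typing import Hashable, Iterable, Sequence

def is_mece(universe: Iterable[Hashable], groups: Sequence[Iterable[Hashable]]) -> bool:
    """Return True if `groups` partition `universe`: no item appears in two
    groups (mutually exclusive) and every item appears in exactly one group
    (collectively exhaustive), with no stray items outside the universe."""
    universe = set(universe)
    seen: set = set()
    for group in groups:
        group = set(group)
        if group & seen:       # overlap with an earlier group: not mutually exclusive
            return False
        seen |= group
    return seen == universe    # anything missing or extra: not collectively exhaustive

# Mirrors the article's examples: grouping by birth year is MECE,
# grouping by nationality is not (dual nationals, stateless people).
people = {"ana", "bo", "chen"}
assert is_mece(people, [{"ana"}, {"bo", "chen"}])    # each person in exactly one bucket
assert not is_mece(people, [{"ana", "bo"}, {"bo"}])  # "bo" double-counted, "chen" missing
```

In consulting practice the check is done informally rather than in code, but the two tests are the same: no overlaps, no gaps.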
Why the West Rules—For Now
Why the West Rules—For Now: The Patterns of History, and What They Reveal About the Future is a history book by a British historian Ian Morris, published in 2010. Content The book compares East and West across the last 15,000 years, arguing that physical geography rather than culture, religion, politics, genetics, or great men explains Western domination of the globe. Morris' Social Development Index considers the amount of energy a civilization can usefully capture, its ability to organize (measured by the size of its largest cities), war-making capability (weapons, troop strength, logistics), and information technology (speed and reach of writing, printing, telecommunication, etc.). The evidence and statistical methods used in this book are explained in more detail in Social Development, a free eBook, and by the published volume, The Measure of Civilization. Morris argues that: When agriculture was first invented, areas with reliable rainfall benefited most. Irrigation benefited drier areas such as Egypt and the Fertile Crescent. Plants and animals that were more easily domesticated gave certain areas an early advantage, especially the Fertile Crescent and China. (See cradle of civilization.) Development of Africa and the Americas started on the same path, but it was delayed by thousands of years. With the development of ships in Eurasia, rivers became trade routes. Europe and empires in Greece and Rome benefited from the Mediterranean, compared to Chinese empires (who later built the Grand Canal for similar purposes). Raids from the Eurasian Steppe brought diseases that caused epidemics in settled populations. The Social Development Index shows the West leading until the 6th century, China leading until the 18th century, and the West leading again in the modern era. After the development of ocean-going ships, the significantly greater size of the Pacific Ocean made trans-Atlantic exploration and trade more feasible and profitable for Europe than trans-Pacific exploration and trade for East Asia. Though the mariner's compass was invented in China in the 11th century, Chinese exploration was less successful than the European Age of Discovery and subsequent colonization. Eurasian diseases to which people in the Americas had no immunity were a byproduct of Eurasian development that devastated Native Americans after contact, in addition to superior European weapons. Globalization and advances in information technology are leveling differences between civilizational areas. Reception The book won several literary awards, including the 2011 PEN Center USA Literary Award for Creative Nonfiction and 2011 GetAbstract International Book Award, and was named as one of the books of the year by Newsweek, Foreign Affairs, Foreign Policy, The New York Times, and a number of other newspapers. It has been translated into 13 languages. The Economist has called it "an important book—one that challenges, stimulates and entertains. Anyone who does not believe there are lessons to be learned from history should start here." The book has been criticized by the controversial historical sociologist Ricardo Duchesne for offering a 'diffuse definition of the West which Morris envisions encompassing not only Europe but all civilizations descending from the Fertile Crescent, including Islam, as well as a propensity to level out fundamental differences between the development of the West and the rest, which disregards the singular role of Europe in shaping the modern world'. 
Morris replied, saying that "despite his review’s length, rather little of it takes on my book’s central thesis", and defending his focus on China. The notion that the Middle East and Europe are in the same system was introduced by David Wilkinson in 1987. Sverre Bagge criticizes the book for underestimating the importance of institutional factors (such as state formation) and for downplaying cultural explanations in favor of materialist explanations. See also Guns, Germs, and Steel References 2010 non-fiction books Books about the West History books about civilization
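Morris's Social Development Index, described above, is at heart a composite score over four traits: energy capture, organization, war-making capacity, and information technology. The sketch below illustrates only the general mechanics of such a composite index. The four-trait structure follows Morris; the 250-points-per-trait ceiling (1,000 in total) is our reading of his scaling and should be treated as an assumption here, as should every numeric value, the normalization against a per-trait maximum, and the equal weighting, none of which reproduce his published calibration (documented in Social Development and The Measure of Civilization).

```python
# Illustrative composite index in the spirit of Morris's Social Development
# Index: four traits, each normalized against an assumed per-trait ceiling
# and summed. All numbers below are placeholders, not Morris's published data.

TRAITS = ("energy_capture", "organization", "war_making", "information_technology")

def social_development(scores, ceilings, points_per_trait=250.0):
    """Sum each trait's share of its ceiling, scaled to `points_per_trait`."""
    return sum(
        points_per_trait * min(scores[t] / ceilings[t], 1.0) for t in TRAITS
    )

# Hypothetical ceilings (the assumed maximum value for each trait).
ceilings = {
    "energy_capture": 230.0,         # e.g. kcal/person/day, in thousands
    "organization": 25.0,            # e.g. largest-city population, millions
    "war_making": 5000.0,            # arbitrary capability units
    "information_technology": 100.0, # arbitrary reach/speed units
}

# Two made-up regions at a made-up date.
west = {"energy_capture": 230.0, "organization": 20.0,
        "war_making": 5000.0, "information_technology": 100.0}
east = {"energy_capture": 150.0, "organization": 25.0,
        "war_making": 3000.0, "information_technology": 70.0}

print(round(social_development(west, ceilings), 1))  # 950.0
print(round(social_development(east, ceilings), 1))  # 738.0
```

The point of the exercise is simply that a single comparable number per region and period falls out of four separately measured traits, which is what allows the East–West comparison across 15,000 years.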
Modernization theory
Modernization theory holds that as societies become more economically modernized, wealthier and more educated, their political institutions become increasingly liberal democratic. The "classical" theories of modernization of the 1950s and 1960s, most influentially articulated by Seymour Lipset, drew on the sociological analyses of Karl Marx, Emile Durkheim, Max Weber, and Talcott Parsons. Modernization theory was a dominant paradigm in the social sciences in the 1950s and 1960s, and saw a resurgence after 1991, when Francis Fukuyama wrote about the end of the Cold War as confirmation of modernization theory. The theory is the subject of much debate among scholars. Critics have highlighted cases where industrialization did not prompt stable democratization, such as Japan, Germany, and the Soviet Union, as well as cases of democratic backsliding in economically advanced parts of Latin America. Other critics argue that the causal relationship is the reverse (democracy is more likely to lead to economic modernization), or that economic modernization helps democracies survive but does not prompt democratization. Other scholars provide supporting evidence, showing that economic development significantly predicts democratization. The rise and fall of modernization theory The modernization theory of the 1950s and 1960s drew on classical evolutionary theory and a Parsonian reading of Weber's ideas about a transition from traditional to modern society. Parsons had translated Weber's works into English in the 1930s and provided his own interpretation. After 1945 the Parsonian version became widely used in sociology and other social sciences. Some of the thinkers associated with modernization theory are Marion J. Levy Jr., Gabriel Almond, Seymour Martin Lipset, Walt Rostow, Daniel Lerner, Lucian Pye, David Apter, Alex Inkeles, Cyril Edwin Black, Bert F. Hoselitz, Myron Weiner, and Karl Deutsch. By the late 1960s opposition to modernization theory had developed, because the theory was too general and did not fit all societies in quite the same way. Yet, with the end of the Cold War, a few attempts to revive modernization theory were carried out. Francis Fukuyama argued for the use of modernization theory as universal history. A more academic effort to revise modernization theory was that of Ronald Inglehart and Christian Welzel in Modernization, Cultural Change, and Democracy (2005). Inglehart and Welzel amended the 1960s version of modernization theory in significant ways. Counter to Lipset, who associated industrial growth with democratization, Inglehart and Welzel did not see an association between industrialization and democratization. Rather, they held that only at a later stage in the process of economic modernization, which various authors have characterized as post-industrial, did values conducive to democratization – which Inglehart and Welzel call "self-expression values" – emerge. Nonetheless, these efforts to revive modernization theory were criticized by many (see the section on "Criticisms and alternatives" below), and the theory remained a controversial one. Modernization and democracy The relationship between modernization and democracy or democratization is one of the most researched topics in comparative politics. Many studies show that modernization has contributed to democracy in some countries. For example, Seymour Martin Lipset argued that modernization can lead to democracy.
There is academic debate over the drivers of democracy because there are theories that support economic growth as both a cause and an effect of the institution of democracy. "Lipset's observation that democracy is related to economic development, first advanced in 1959, has generated the largest body of research on any topic in comparative politics." Anderson uses the image of an elongated diamond to describe the concentration of power in the hands of a few at the top under authoritarian leadership. He develops this by describing the shift in power from the elite class to the middle class that occurs as modernization takes hold. Socioeconomic modernization allows a democracy to develop further and influences its chances of success. The conclusion drawn from this is that as socioeconomic conditions equalize, levels of democracy increase. Larry Diamond and Juan Linz, who worked with Lipset on the book Democracy in Developing Countries: Latin America, argue that economic performance affects the development of democracy in at least three ways. First, they argue that economic growth is more important for democracy than given levels of socioeconomic development. Second, socioeconomic development generates social changes that can potentially facilitate democratization. Third, socioeconomic development promotes other changes, like organization of the middle class, which is conducive to democracy. As Seymour Martin Lipset put it, "All the various aspects of economic development—industrialization, urbanization, wealth and education—are so closely interrelated as to form one major factor which has the political correlate of democracy". The argument also appears in Walt W. Rostow, Politics and the Stages of Growth (1971); A. F. K. Organski, The Stages of Political Development (1965); and David Apter, The Politics of Modernization (1965). In the 1960s, some critics argued that the link between modernization and democracy was based too much on the example of European history and neglected the Third World. One historical problem with that argument has always been Germany, whose economic modernization in the 19th century came long before the democratization after 1918. Berman, however, concludes that a process of democratization was underway in Imperial Germany, for "during these years Germans developed many of the habits and mores that are now thought by political scientists to augur healthy political development". One contemporary problem for modernization theory is whether modernization implies more human rights for citizens. China, one of the most rapidly growing economies in the world, provides an example. Modernization theory implies that this growth should correlate with democratization, especially in relation to the liberalization of the middle and lower classes. However, ongoing human rights abuses and oppression of Chinese citizens by the government seem to contradict the theory strongly. Ironically, increasing restrictions on Chinese citizens have been described as a consequence of modernization itself. In the 1990s, the Chinese government wanted to reform the legal system and emphasized governing the country by law. This led to a legal awakening: citizens became better educated about the law, and more aware of their inequality in relation to the government.
In the 2000s, Chinese citizens saw further opportunities to liberalize, took part in urbanization, and gained access to higher levels of education. This in turn shifted the attitudes of the lower and middle classes toward more liberal ideas, which ran counter to the CCP. Over time, this has led to active participation in civil society and adjacent political groups as citizens seek to make their voices heard. Consequently, the Chinese government has repressed its citizens more aggressively, a dynamic that, on this reading, follows from the very processes modernization theory describes. Ronald Inglehart and Christian Welzel contend that the realization of democracy is not based solely on an expressed desire for that form of government, but that democracies are born as a result of the admixture of certain social and cultural factors. They argue the ideal social and cultural conditions for the foundation of a democracy are born of significant modernization and economic development that result in mass political participation. Randall Peerenboom explores the relationships among democracy, the rule of law and wealth by pointing to examples of Asian countries, such as Taiwan and South Korea, which successfully democratized only after economic growth reached relatively high levels, and to examples of countries such as the Philippines, Bangladesh, Cambodia, Thailand, Indonesia and India, which sought to democratize at lower levels of wealth but have not done as well. Adam Przeworski and others have challenged Lipset's argument. They say political regimes do not transition to democracy as per capita incomes rise. Rather, democratic transitions occur randomly, but once they have occurred, countries with higher levels of gross domestic product per capita remain democratic. Epstein et al. (2006) retest the modernization hypothesis using new data, new techniques, and a three-way, rather than dichotomous, classification of regimes. Contrary to Przeworski, this study finds that the modernization hypothesis stands up well. Partial democracies emerge as among the most important and least understood regime types. Daron Acemoglu and James A. Robinson (2008) further weaken the case for Lipset's argument by showing that even though there is a strong cross-country correlation between income and democracy, once one controls for country fixed effects, the association between income per capita and various measures of democracy disappears, and there is "no causal effect of income on democracy." (A schematic illustration of such a fixed-effects specification is sketched below.) In "Non-Modernization" (2022), they further argue that modernization theory cannot account for various paths of political development "because it posits a link between economics and politics that is not conditional on institutions and culture and that presumes a definite endpoint—for example, an 'end of history'." Sirianne Dahlum and Carl Henrik Knutsen offer a test of the Ronald Inglehart and Christian Welzel revised version of modernization theory, which focuses on cultural traits triggered by economic development that are presumed to be conducive to democratization. They find "no empirical support" for the Inglehart and Welzel thesis and conclude that "self-expression values do not enhance democracy levels or democratization chances, and neither do they stabilize existing democracies." A meta-analysis by Gerardo L. Munck of research on Lipset's argument shows that a majority of studies do not support the thesis that higher levels of economic development lead to more democracy.
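To make the fixed-effects point concrete, the sketch below contrasts a pooled regression of a democracy score on income with a specification that adds country and year dummies, which is the general form of the two-way fixed-effects design invoked in the Acemoglu and Robinson critique. The data are synthetic and the variable names (`log_gdp`, `democracy`) are ours; the sketch illustrates the estimation strategy, not their dataset or results.

```python
# Minimal illustration of pooled OLS vs. a two-way fixed-effects regression
# (country and year dummies) of democracy on income. Data are synthetic; this
# sketches the estimation strategy discussed above, not the authors' data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
countries = [f"c{i}" for i in range(30)]
years = list(range(1960, 2001, 5))

rows = []
for c in countries:
    country_effect = rng.normal()          # unobserved, time-invariant trait
    for t in years:
        log_gdp = 8 + 0.02 * (t - 1960) + country_effect + rng.normal(0, 0.2)
        # Democracy depends on the country effect, not on income itself, so
        # the pooled estimate picks up a spurious income "effect".
        democracy = 0.5 * country_effect + 0.01 * (t - 1960) + rng.normal(0, 0.3)
        rows.append({"country": c, "year": t,
                     "log_gdp": log_gdp, "democracy": democracy})

panel = pd.DataFrame(rows)

pooled = smf.ols("democracy ~ log_gdp", data=panel).fit()
fixed_effects = smf.ols("democracy ~ log_gdp + C(country) + C(year)",
                        data=panel).fit()

print("pooled coefficient on income:       ", round(pooled.params["log_gdp"], 3))
print("fixed-effects coefficient on income:", round(fixed_effects.params["log_gdp"], 3))
```

Under this data-generating process the pooled coefficient on income comes out positive while the fixed-effects coefficient is close to zero, which is the pattern the cross-country critique turns on.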
Modernization and economic development Modernization theorists often saw traditions as obstacles to economic development. According to Seymour Martin Lipset, economic conditions are heavily determined by the cultural and social values present in a given society. Furthermore, while modernization might deliver violent, radical change for traditional societies, it was thought worth the price. Critics insist that traditional societies were often destroyed without ever gaining the promised advantages. Others point to improvements in living standards, physical infrastructure, education and economic opportunity to refute such criticisms. Modernization theorists such as Samuel P. Huntington held in the 1960s and 1970s that authoritarian regimes yielded greater economic growth than democracies. However, this view has been challenged. In Democracy and Development: Political Institutions and Well-Being in the World, 1950–1990 (2000), Adam Przeworski argued that "democracies perform as well economically as do authoritarian regimes." A study by Daron Acemoglu, Suresh Naidu, Pascual Restrepo, and James A. Robinson shows that "democracy has a positive effect on GDP per capita." Modernization and globalization Globalization can be defined as the integration of economic, political and social cultures. It is argued that globalization is related to the spreading of modernization across borders. Global trade has grown continuously since the European discovery of new continents in the early modern period; it increased particularly as a result of the Industrial Revolution and the mid-20th century adoption of the shipping container. Annual trans-border tourist arrivals rose to 456 million by 1990 and have almost tripled since, reaching a total of over 1.2 billion in 2016. Communication is another major area that has grown due to modernization. Communication industries have enabled capitalism to spread throughout the world. Telephony, television broadcasts, news services and online service providers have played a crucial part in globalization. Former U.S. president Lyndon B. Johnson was a supporter of modernization theory and believed that television had the potential to provide educational tools in development. Alongside the many apparent positive attributes of globalization there are also negative consequences. The dominant, neoliberal model of globalization often increases disparities between a society's rich and its poor. In major cities of developing countries there are pockets where technologies of the modernized world, such as computers, cell phones and satellite television, exist alongside stark poverty. Globalists are modernization theorists of globalization who argue that globalization is positive for everyone, as its benefits must eventually extend to all members of society, including vulnerable groups such as women and children. Applications United States foreign aid in the 1960s President John F. Kennedy (1961–1963) relied on the economists W. W. Rostow (a member of his staff) and John Kenneth Galbraith (an outsider) for ideas on how to promote rapid economic development in the "Third World", as it was called at the time. They promoted modernization models in order to reorient American aid to Asia, Africa and Latin America. In Rostow's version, set out in The Stages of Economic Growth (1960), progress must pass through five stages; for the underdeveloped world, the critical ones were the second stage, the transition, and the third stage, the takeoff into self-sustaining growth.
Rostow argued that American intervention could propel a country from the second to the third stage, and he expected that once a country reached maturity, it would have a large, energized middle class that would establish democracy and civil liberties and institutionalize human rights. The result was a comprehensive theory that could be used to challenge Marxist ideologies, and thereby repel communist advances. The model provided the foundation for the Alliance for Progress in Latin America, the Peace Corps, Food for Peace, and the Agency for International Development (AID). Kennedy proclaimed the 1960s the "Development Decade" and substantially increased the budget for foreign assistance. Modernization theory supplied the design, rationale, and justification for these programs. The goals proved much too ambitious, and within a few years the economists abandoned the European-based modernization model as inappropriate to the cultures they were trying to impact. Kennedy and his top advisers were working from implicit ideological assumptions regarding modernization. They firmly believed modernity was not only good for the target populations, but was essential to avoid communism on the one hand or extreme control of traditional rural society by the very rich landowners on the other. They believed America had a duty, as the most modern country in the world, to promulgate this ideal to the poor nations of the Third World. They wanted programs that were altruistic and benevolent, as well as tough, energetic, and determined. It was benevolence with a foreign policy purpose. Michael Latham has identified how this ideology worked out in three major programs: the Alliance for Progress, the Peace Corps, and the strategic hamlet program in South Vietnam. However, Latham argues that the ideology was a non-coercive version of the modernization goals pursued by imperialist Britain, France and other European countries in the 19th century. Criticisms and alternatives From the 1970s, modernization theory has been criticized by numerous scholars, including Andre Gunder Frank (1929–2005) and Immanuel Wallerstein (1930–2019). In the model these critics attacked, the modernization of a society required the destruction of the indigenous culture and its replacement by a more Westernized one. By one definition, modern simply refers to the present, and any society still in existence is therefore modern. Proponents of modernization typically view only Western society as being truly modern and argue that others are primitive or unevolved by comparison. That view sees unmodernized societies as inferior even if they have the same standard of living as Western societies. Opponents argue that modernity is independent of culture and can be adapted to any society. Japan is cited as an example by both sides. Some see it as proof that a thoroughly modern way of life can exist in a non-Western society. Others argue that Japan has become distinctly more Western as a result of its modernization. As Tipps has argued, by conflating modernization with other processes that theorists use interchangeably with it (democratization, liberalization, development), the term becomes imprecise and therefore difficult to disprove. The theory has also been criticized empirically, as modernization theorists ignore external sources of change in societies. The binary between traditional and modern is unhelpful, as the two are linked and often interdependent, and "modernization" does not come as a whole.
Modernization theory has also been accused of being Eurocentric, as modernization began in Europe, with the Industrial Revolution, the French Revolution and the Revolutions of 1848, and has long been regarded as reaching its most advanced stage in Europe. Anthropologists typically take their criticism one step further and say that the view is ethnocentric and specific to Western culture. Dependency theory One alternative model is dependency theory. It emerged in the 1950s and argues that the underdevelopment of poor nations in the Third World derived from systematic imperial and neo-colonial exploitation of raw materials. Its proponents argue that resources typically flow from a "periphery" of poor and underdeveloped states to a "core" of wealthy states, enriching the latter at the expense of the former. It is a central contention of dependency theorists such as Andre Gunder Frank that poor states are impoverished and rich ones enriched by the way poor states are integrated into the "world system". Dependency models arose from a growing association of southern hemisphere nationalists (from Latin America and Africa) and Marxists. It was their reaction against modernization theory, which held that all societies progress through similar stages of development, that today's underdeveloped areas are thus in a similar situation to that of today's developed areas at some time in the past, and that, therefore, the task of helping the underdeveloped areas out of poverty is to accelerate them along this supposed common path of development, by various means such as investment, technology transfers, and closer integration into the world market. Dependency theory rejected this view, arguing that underdeveloped countries are not merely primitive versions of developed countries, but have unique features and structures of their own; and, importantly, are in the situation of being the weaker members in a world market economy. Barrington Moore and comparative historical analysis Another line of critique of modernization theory came from sociologist Barrington Moore Jr. in his Social Origins of Dictatorship and Democracy (1966). In this classic book, Moore argues there were at least "three routes to the modern world": the liberal democratic, the fascist, and the communist, each deriving from the timing of industrialization and the social structure at the time of transition. Counter to modernization theory, Moore held that there was not one path to the modern world and that economic development did not always bring about democracy. Guillermo O'Donnell and bureaucratic authoritarianism Political scientist Guillermo O'Donnell, in his Modernization and Bureaucratic Authoritarianism (1973), challenged the thesis, advanced most notably by Seymour Martin Lipset, that industrialization produced democracy. In South America, O'Donnell argued, industrialization generated not democracy, but bureaucratic authoritarianism. Acemoglu and Robinson and institutional economics Economists Daron Acemoglu and James A. Robinson (2022) argue that modernization theory cannot account for various paths of political development "because it posits a link between economics and politics that is not conditional on institutions and culture and that presumes a definite endpoint—for example, an 'end of history'."
See also Anti-modernization Bielefeld School Consumerism Dependency theory Development criticism Ecological modernization Globalization Gwangmu Reform timeline Idea of Progress Mass society Mediatization (media) Modernism Modernization theory (Nationalism) Outline of organizational theory Progressive Era (US, early 20th century) Postmodernism Postmodernity Western education References Further reading Cammack, Paul Anthony. Capitalism and Democracy in the Third World: The Doctrine for Political Development. London: Leicester University Press, 1997. Garon, Sheldon. "Rethinking Modernization and Modernity in Japanese History: A Focus on State-Society Relations." Journal of Asian Studies 53#2 (1994), pp. 346–366. Huntington, Samuel P. (1966). "Political Modernization: America vs. Europe". World Politics 18 (3): 378–414. Janos, Andrew C. Politics and Paradigms: Changing Theories of Change in Social Science. Stanford University Press, 1986. Munck, Gerardo L. "Modernization Theory as a Case of Failed Knowledge Production." The Annals of Comparative Democratization 16, 3 (2018): 37–41. Simmons, Joel W. (2024). "Democracy and Economic Growth: Theoretical Debates and Empirical Contributions". World Politics. Wucherpfennig, Julian, and Franziska Deutsch. 2009. "Modernization and Democracy: Theories and Evidence Revisited." Living Reviews in Democracy Vol. 1, pp. 1–9. External links Comparative politics Development studies Management cybernetics Modernity Sociocultural evolution theory Sociological theories
Kyriarchy
In feminist theory, kyriarchy is a social system or set of connecting social systems built around domination, oppression, and submission. The word was coined by Elisabeth Schüssler Fiorenza in 1992 to describe her theory of interconnected, interacting, and self-extending systems of domination and submission, in which a single individual might be oppressed in some relationships and privileged in others. It is an intersectional extension of the idea of patriarchy beyond gender. Kyriarchy encompasses sexism, racism, ableism, ageism, antisemitism, Islamophobia, anti-Catholicism, homophobia, transphobia, fatphobia, classism, xenophobia, economic injustice, the prison-industrial complex, colonialism, militarism, ethnocentrism, speciesism, linguicism and other forms of dominating hierarchies in which the subordination of one person or group to another is internalized and institutionalized. Whenever the term is taken to encompass topics that were not and could not be addressed by the original theory, the kyriarchic aspects in emerging fields of study such as mononormativity, allonormativity, and chrononormativity are likewise included. Etymology The term was coined into English by Elisabeth Schüssler Fiorenza in 1992 when she published her book But She Said: Feminist Practices of Biblical Interpretation. It is derived from , "lord, master" and , "lead, rule, govern". The word kyriarchy, already existed in Modern Greek, and means "sovereignty". Usage The term was originally developed in the context of feminist theological discourse, and has been used in some other areas of academia as a non–gender-based descriptor of systems of power, as opposed to patriarchy. It is also widely used outside of scholarly contexts. The Kurdish-Iranian asylum seeker Behrouz Boochani has described the Australian-run Manus Island prison as a kyriarchal system: one where different forms of oppression intersect; oppression is not random but purposeful, designed to isolate and create friction amongst prisoners, leading to despair and broken spirits. He elaborates on this in his autobiographical account of the prison, No Friend But the Mountains. Structural positions Schüssler Fiorenza describes interdependent "stratifications of gender, race, class, religion, heterosexualism, and age" as structural positions assigned at birth. She suggests that people inhabit several positions, and that positions with privilege become nodal points through which other positions are experienced. For example, in a context where gender is the primary privileged position (e.g. patriarchy, matriarchy), gender becomes the nodal point through which sexuality, race, and class are experienced. In a context where class is the primary privileged position (i.e. classism), gender and race are experienced through class dynamics. Fiorenza stresses that kyriarchy is not a hierarchical system as it does not focus on one point of domination. Instead it is described as a "complex pyramidal system" with those on the bottom of the pyramid experiencing the "full power of kyriarchal oppression". The kyriarchy is recognized as the status quo and therefore its oppressive structures may not be recognized. To maintain this system, kyriarchy relies on the creation of a servant class, race, gender, or people. The position of this class is reinforced through "education, socialization, and brute violence and malestream rationalization". 
Tēraudkalns suggests that these structures of oppression are self-sustained by internalized oppression; those with relative power tend to remain in power, while those without tend to remain disenfranchised. In addition, structures of oppression amplify and feed into each other. See also References Further reading Giannacopoulos, M. "Kyriarchy, Nomopoly, and Patriarchal White Sovereignty." Biography, (2020) 43(4), 736–747. Thompson, Margaret Susan. "Circles of sisterhood: formal and informal collaboration among American nuns in response to conflict with Vatican Kyriarchy." Journal of feminist studies in religion 32.2 (2016): 63-82. Thompson, Margaret Susan. "Sacraments as Weapons: Patriarchal Coercion and Engendered Power in the Nineteenth-Century Convent." Journal of Feminist Studies in Religion 38.2 (2022): 89-104. External links Feminism and society Intersectionality Social inequality Social systems 1992 neologisms
Russification
Russification, Russianisation or Russianization, is a form of cultural assimilation in which non-Russians, whether involuntarily or voluntarily, give up their culture and language in favor of the Russian culture and the Russian language. In a historical sense, the term refers to both official and unofficial policies of the Russian Empire and the Soviet Union concerning their national constituents and to national minorities in Russia, aimed at Russian domination and hegemony. The major areas of Russification are politics and culture. In politics, an element of Russification is assigning Russian nationals to lead administrative positions in national institutions. In culture, Russification primarily amounts to the domination of the Russian language in official business and the strong influence of the Russian language on national idioms. The shifts in demographics in favour of the ethnic Russian population are sometimes considered a form of Russification as well. Some researchers distinguish Russification, as a process of changing one's ethnic self-label or identity from a non-Russian ethnonym to Russian, from Russianization, the spread of the Russian language, culture, and people into non-Russian cultures and regions, distinct also from Sovietization or the imposition of institutional forms established by the Communist Party of the Soviet Union throughout the territory ruled by that party. In this sense, although Russification is usually conflated across Russification, Russianization, and Russian-led Sovietization, each can be considered a distinct process. Russianization and Sovietization, for example, did not automatically lead to Russification – a change in language or self-identity of non-Russian people to being Russian. Thus, despite long exposure to the Russian language and culture, as well as to Sovietization, at the end of the Soviet era, non-Russians were on the verge of becoming a majority of the population in the Soviet Union. After the two collapses: of Russian Empire in 1917 and Soviet Union in 1991 major processes of derussification took place. History The Russification of Uralic-speaking people, such as Vepsians, Mordvins, Maris, and Permians, indigenous to large parts of western and central Russia had already begun with the original eastward expansion of East Slavs. Written records of the oldest period are scarce, but toponymic evidence indicates that this expansion was accomplished at the expense of various Volga-Finnic peoples, who were gradually assimilated by Russians; beginning with the Merya and the Muroma early in the 2nd millennium AD. In the 13th to 14th century, the Russification of the Komi began but it did not penetrate the Komi heartlands until the 18th century. However, by the 19th century, Komi-Russian bilingualism had become the norm and there was an increasing Russian influence on the Komi language. After the Russian defeat in the Crimean War in 1856 and the January Uprising of 1863, Tsar Alexander II increased Russification to reduce the threat of future rebellions. Russia was populated by many minority groups, and forcing them to accept the Russian culture was an attempt to prevent self-determination tendencies and separatism. In the 19th century, Russian settlers on traditional Kazakh land (misidentified as Kyrgyz at the time) drove many of the Kazakhs over the border to China. 
Russification was extended to non-Muscovite ethnographic groups that composed former Kievan Rus, namely Ukrainians and Belarusians, whose vernacular language and culture developed differently from that of Muscovy due to separation after the partitioning of Kievan Rus. The mentality behind Russification when applied to these groups differed from that applied to others, in that they were claimed to be part of the All-Russian or Triune Russian nation by the Russian Imperial government and by subscribers to Russophilia. Russification competed with contemporary nationalist movements in Ukraine and Belarus that were developing during the 19th century. Russian Imperial authorities as well as modern Russian nationalists asserted that Russification was an organic national consolidation process that would accomplish the goals of homogenizing the Russian nation as they saw it, and reversing the effects of Polonization. In the Soviet Union After the 1917 revolution, authorities in the USSR decided to abolish the use of the Arabic alphabet in native languages in Soviet-controlled Central Asia, in the Caucasus, and in the Volga region (including Tatarstan). This detached the local Muslim populations from exposure to the language and writing system of the Quran. The new alphabet for these languages was based on the Latin alphabet and was also inspired by the Turkish alphabet. By the late 1930s, the policy had changed. In 1939–1940, the Soviets decided that a number of these languages (including Tatar, Kazakh, Uzbek, Turkmen, Tajik, Kyrgyz, Azerbaijani, and Bashkir) would henceforth use variations of the Cyrillic script (see Cyrillization in the Soviet union). Not only that, the spelling and writing of these new Cyrillic words must also be in accordance with the Russian language. Some historians evaluating the Soviet Union as a colonial empire, applied the "prison of nations" idea to the USSR. Thomas Winderl wrote "The USSR became in a certain sense more a prison-house of nations than the old Empire had ever been." Korenizatsiya Stalin's Marxism and the National Question (1913) provided the basic framework for nationality policy in the Soviet Union. The early years of said policy, from the early 1920s to the mid-1930s, were guided by the policy of korenizatsiya ("indigenization"), during which the new Soviet regime sought to reverse the long-term effects of Russification on the non-Russian populations. As the regime was trying to establish its power and legitimacy throughout the former Russian empire, it went about constructing regional administrative units, recruiting non-Russians into leadership positions, and promoting non-Russian languages in government administration, the courts, the schools, and the mass media. The slogan then established was that local cultures should be "socialist in content but national in form." That is, these cultures should be transformed to conform with the Communist Party's socialist project for the Soviet society as a whole but have active participation and leadership by the indigenous nationalities and operate primarily in the local languages. Early nationality policies shared with later policy the object of assuring control by the Communist Party over all aspects of Soviet political, economic, and social life. The early Soviet policy of promoting what one scholar has described as "ethnic particularism" and another as "institutionalized multinationality", had a double goal. 
On the one hand, it had been an effort to counter Russian chauvinism by assuring a place for non-Russian languages and cultures in the newly formed Soviet Union. On the other hand, it was a means to prevent the formation of alternative ethnically based political movements, including pan-Islamism and pan-Turkism. One way of accomplishing this was to promote what some regard as artificial distinctions between ethnic groups and languages rather than promoting the amalgamation of these groups and a common set of languages based on Turkish or another regional language. The Soviet nationalities policy from its early years sought to counter these two tendencies by assuring a modicum of cultural autonomy to non-Russian nationalities within a federal system or structure of government, though maintaining that the ruling Communist Party was monolithic, not federal. A process of "national-territorial delimitation" (:ru:национально-территориальное размежевание) was undertaken to define the official territories of the non-Russian populations within the Soviet Union. The federal system conferred the highest status to the titular nationalities of union republics, and lower status to the titular nationalities of autonomous republics, autonomous provinces, and autonomous okrugs. In all, some 50 nationalities had a republic, province, or okrug of which they held nominal control in the federal system. Federalism and the provision of native-language education ultimately left as a legacy a large non-Russian public that was educated in the languages of their ethnic groups and that identified a particular homeland on the territory of the Soviet Union. World War II By the late 1930s, policies had shifted. Purges in some of the national regions, such as Ukraine, had occurred already in the early 1930s. Before the turnabout in Ukraine in 1933, a purge of Veli İbraimov and his leadership in the Crimean ASSR in 1929 for "national deviation" led to the Russianization of government, education, and the media and to the creation of a special alphabet for Crimean Tatar to replace the Latin alphabet. Of the two dangers that Joseph Stalin had identified in 1923, now bourgeois nationalism (local nationalism) was said to be a greater threat than Great Russian chauvinism (great power chauvinism). In 1937, Faizullah Khojaev and Akmal Ikramov were removed as leaders of the Uzbek SSR, and in 1938, during the third great Moscow show trial, convicted and subsequently put to death for alleged anti-Soviet nationalist activities. After Stalin, an ethnic Georgian, became the undisputed leader of the Soviet Union, the Russian language gained greater emphasis. In 1938, Russian became a required subject of study in every Soviet school, including those in which a non-Russian language was the principal medium of instruction for other subjects (e.g., mathematics, science, and social studies). In 1939, non-Russian languages that had been given Latin-based scripts in the late 1920s were given new scripts based on the Cyrillic script. Before and during World War II, Joseph Stalin deported to Central Asia and Siberia many entire nationalities for their alleged and largely disproven collaboration with the German invaders: Volga Germans, Crimean Tatars, Chechens, Ingush, Balkars, Kalmyks, and others. Shortly after the war, he deported many Ukrainians, Balts, and Estonians to Siberia as well. After the war, the leading role of the Russian people in the Soviet family of nations and nationalities was promoted by Stalin and his successors. 
This shift was most clearly underscored by Communist Party General Secretary Stalin's Victory Day toast to the Russian people in May 1945: The view was reflected in the new State Anthem of the Soviet Union which started with: "An unbreakable union of free republics, Great Russia has sealed forever." Anthems of nearly all Soviet republics mentioned "Russia" or "Russian nation" singled out as "brother", "friend", "elder brother" (Uzbek SSR) or "stronghold of friendship" (Turkmen SSR). Although the official literature on nationalities and languages in subsequent years continued to speak of there being 130 equal languages in the USSR, in practice a hierarchy was endorsed in which some nationalities and languages were given special roles or viewed as having different long-term futures. Educational reforms An analysis of textbook publishing found that education was offered for at least one year and it was also offered to children who were in at least the first class (grade) in 67 languages between 1934 and 1980. Educational reforms were undertaken after Nikita Khrushchev became First Secretary of the Communist Party in the late 1950s and launched a process of replacing non-Russian schools with Russian ones for the nationalities that had lower status in the federal system, the nationalities whose populations were smaller and the nationalities which were already bilingual on a large scale. Nominally, this process was guided by the principle of "voluntary parental choice." But other factors also came into play, including the size and formal political status of the group in the Soviet federal hierarchy and the prevailing level of bilingualism among parents. By the early 1970s schools in which non-Russian languages served as the principal medium of instruction operated in 45 languages, while seven more indigenous languages were taught as subjects of study for at least one class year. By 1980, instruction was offered in 35 non-Russian languages of the peoples of the USSR, just over half the number in the early 1930s. In most of these languages, schooling was not offered for the complete ten-year curriculum. For example, within the Russian SFSR in 1958–59, full 10-year schooling in the native language was offered in only three languages: Russian, Tatar, and Bashkir. And some nationalities had minimal or no native-language schooling. By 1962–1963, among non-Russian nationalities that were indigenous to the RSFSR, whereas 27% of children in classes I-IV (primary school) studied in Russian-language schools, 53% of those in classes V-VIII (incomplete secondary school) studied in Russian-language schools, and 66% of those in classes IX-X studied in Russian-language schools. Although many non-Russian languages were still offered as a subject of study at a higher class level (in some cases through complete general secondary school – the 10th class), the pattern of using the Russian language as the main medium of instruction accelerated after Khrushchev's parental choice program got underway. Pressure to convert the main medium of instruction to Russian was evidently higher in urban areas. For example, in 1961–62, reportedly only 6% of Tatar children living in urban areas attended schools in which Tatar was the main medium of instruction. Similarly in Dagestan in 1965, schools in which the indigenous language was the medium of instruction existed only in rural areas. 
The pattern was probably similar, if less extreme, in most of the non-Russian union republics, although in Belarus and Ukraine, schooling in urban areas was highly Russianized. Rapprochement The promotion of federalism and of non-Russian languages had always been a strategic decision aimed at expanding and maintaining Communist Party rule. On the theoretical plane, the Communist Party's official doctrine was that national differences, and nationalities as such, would eventually disappear. In official party doctrine as reformulated in the Third Program of the Communist Party of the Soviet Union, introduced by Nikita Khrushchev at the 22nd Party Congress in 1961, the program stated that ethnic distinctions would eventually disappear and that a single common language would be adopted by all nationalities in the Soviet Union, but that "the obliteration of national distinctions, and especially language distinctions, is a considerably more drawn-out process than the obliteration of class distinctions." In the meantime, the program held, Soviet nations and nationalities were continuing to develop their cultures while drawing together (сближение – sblizhenie) into a stronger union. In his Report on the Program to the Congress, Khrushchev used even stronger language: that the process of further rapprochement (sblizhenie) and greater unity of nations would eventually lead to a merging or fusion (слияние – sliyanie) of nationalities. Khrushchev's formula of rapprochement and fusion was moderated slightly when Leonid Brezhnev replaced Khrushchev as General Secretary of the Communist Party in 1964 (a post he held until his death in 1982). Brezhnev asserted that rapprochement would lead ultimately to the complete unity of nationalities. "Unity" is an ambiguous term because it can imply either the maintenance of separate national identities at a higher stage of mutual attraction and similarity between nationalities, or the total disappearance of ethnic differences. In the political context of the time, rapprochement-unity was regarded as a softening of the pressure toward Russification that Khrushchev had promoted with his endorsement of sliyanie. The 24th Party Congress in 1971 launched the idea that a new "Soviet people" was forming on the territory of the USSR, a community for which the common language – the language of the "Soviet people" – was the Russian language, consistent with the role that Russian was already playing for the fraternal nations and nationalities in the territory. This new community was labeled a people (народ – narod), not a nation (нация – natsiya), but in that context the Russian word narod ("people") implied an ethnic community, not just a civic or political community. On October 13, 1978, the Soviet Council of Ministers enacted (but did not officially publish) Decree No. 835, titled "On measures to further improve the teaching and learning of the Russian language in the Union Republics", mandating the teaching of Russian, starting in first grade, in the other 14 republics. The new rule was accompanied by a statement that Russian was a "second native language" for all Soviet citizens and "the only means of participation in social life across the nation." The Councils of Ministers of the republics across the USSR enacted resolutions based on Decree No. 835. Other aspects of Russification envisaged that native languages would gradually be removed from newspapers, radio and television in favor of Russian.
Thus, until the end of the Soviet era, doctrinal rationalization had been provided for some of the practical policy steps that were taken in the areas of education and the media. First of all, the transfer of many "national schools" (schools based on local languages) to Russian as a medium of instruction accelerated under Khrushchev in the late 1950s and continued into the 1980s. Second, the new doctrine was used to justify the special place of the Russian language as the "language of inter-nationality communication" (язык межнационального общения) in the USSR. Use of the term "inter-nationality" (межнациональное) rather than the more conventional "international" (международное) focused on the special internal role of Russian language rather than on its role as a language of international discourse. That Russian was the most widely spoken language, and that Russians were the majority of the population of the country, were also cited in justification of the special place of the Russian language in government, education, and the media. At the 27th CPSU Party Congress in 1986, presided over by Mikhail Gorbachev, the 4th Party Program reiterated the formulas of the previous program: During the Soviet era, a significant number of ethnic Russians and Ukrainians migrated to other Soviet republics, and many of them settled there. According to the last census in 1989, the Russian 'diaspora' in the non-Russian Soviet republics had reached 25 million. Linguistics Progress in the spread of the Russian language as a second language and the gradual displacement of other languages was monitored in Soviet censuses. The Soviet censuses of 1926, 1937, 1939, and 1959, had included questions on "native language" (родной язык) as well as "nationality." The 1970, 1979, and 1989 censuses added to these questions one on "other language of the peoples of the USSR" that an individual could "use fluently" (свободно владеть). It is speculated that the explicit goal of the new question on the "second language" was to monitor the spread of Russian as the language of internationality communication. Each of the official homelands within the Soviet Union was regarded as the only homeland of the titular nationality and its language, while the Russian language was regarded as the language for interethnic communication for the whole Soviet Union. Therefore, for most of the Soviet era, especially after the korenizatsiya (indigenization) policy ended in the 1930s, schools in which non-Russian Soviet languages would be taught were not generally available outside the respective ethnically based administrative units of these ethnicities. Some exceptions appeared to involve cases of historic rivalries or patterns of assimilation between neighboring non-Russian groups, such as between Tatars and Bashkirs in Russia or among major Central Asian nationalities. For example, even in the 1970s schooling was offered in at least seven languages in Uzbekistan: Russian, Uzbek, Tajik, Kazakh, Turkmen, Kyrgyz, and Karakalpak. While formally all languages were equal, in almost all Soviet republics the Russian/local bilingualism was "asymmetric": the titular nation learned Russian, whereas immigrant Russians generally did not learn the local language. 
In addition, many non-Russians who lived outside their respective administrative units tended to become Russified linguistically; that is, they not only learned Russian as a second language but they also adopted it as their home language or mother tongue – although some still retained their sense of ethnic identity or origins even after shifting their native language to Russian. This includes both the traditional communities (e.g., Lithuanians in the northwestern Belarus (see Eastern Vilnius region) or the Kaliningrad Oblast (see Lithuania Minor)) and the communities that appeared during Soviet times such as Ukrainian or Belarusian workers in Kazakhstan or Latvia, whose children attended primarily the Russian-language schools and thus the further generations are primarily speaking Russian as their native language; for example, 57% of Estonia's Ukrainians, 70% of Estonia's Belarusians and 37% of Estonia's Latvians claimed Russian as the native language in the last Soviet census of 1989. Russian replaced Yiddish and other languages as the main language of many Jewish communities inside the Soviet Union as well. Another consequence of the mixing of nationalities and the spread of bilingualism and linguistic Russification was the growth of ethnic intermarriage and a process of ethnic Russification—coming to call oneself Russian by nationality or ethnicity, not just speaking Russian as a second language or using it as a primary language. In the last decades of the Soviet Union, ethnic Russification (or ethnic assimilation) was moving very rapidly for a few nationalities such as the Karelians and Mordvinians. Whether children born in mixed families to one Russian parent were likely to be raised as Russians depended on the context. For example, the majority of children in North Kazakhstan with one of each parent chose Russian as their nationality on their internal passport at age 16. Children of mixed Russian and Estonian parents living in Tallinn (the capital city of Estonia), or mixed Russian and Latvian parents living in Riga (the capital of Latvia), or mixed Russian and Lithuanian parents living in Vilnius (the capital of Lithuania) most often chose as their own nationality that of the titular nationality of their republic – not Russian. More generally, patterns of linguistic and ethnic assimilation (Russification) were complex and cannot be accounted for by any single factor such as educational policy. Also relevant were the traditional cultures and religions of the groups, their residence in urban or rural areas, their contact with and exposure to the Russian language and to ethnic Russians, and other factors. In the Russian Federation (1991–present) The enforced Russification of Russia's remaining indigenous minorities continued in Russia after the collapse of the Soviet Union, especially in connection with urbanization and the declining population replacement rates (particularly low among the more western groups). As a result, several of Russia's indigenous languages and cultures are currently considered endangered. E.g. between the 1989 and 2002 censuses, the assimilation numbers of the Mordvins have totalled over 100,000, a major loss for a people totalling less than one million in number. On 19 June 2018, the Russian State Duma adopted a bill that made education in all languages but Russian optional, overruling previous laws by ethnic autonomies, and reducing instruction in minority languages to only two hours a week. 
This bill has been likened by some commentators, such as in Foreign Affairs, to the policy of Russification. When the bill was still being considered, advocates for minorities warned that the bill could endanger their languages and traditional cultures. The law came after a lawsuit in the summer of 2017, where a Russian mother claimed that her son had been "materially harmed" by learning the Tatar language, while in a speech Putin argued that it was wrong to force someone to learn a language that is not their own. The later "language crackdown" in which autonomous units were forced to stop mandatory hours of native languages was also seen as a move by Putin to "build identity in Russian society". Protests and petitions against the bill by either civic society, groups of public intellectuals or regional governments came from Tatarstan (with attempts for demonstrations suppressed), Chuvashia, Mari El, North Ossetia, Kabardino-Balkaria, the Karachays, the Kumyks, the Avars, Chechnya, and Ingushetia. Although the Duma representatives from the Caucasus did not oppose the bill, it prompted a large outcry in the North Caucasus with representatives from the region being accused of cowardice. The law was also seen as possibly destabilizing, threatening ethnic relations and revitalizing the various North Caucasian nationalist movements. The International Circassian Organization called for the law to be rescinded before it came into effect. Twelve of Russia's ethnic autonomies, including five in the Caucasus called for the legislation to be blocked. On 10 September 2019, Udmurt activist Albert Razin self-immolated in front of the regional government building in Izhevsk as it was considering passing the controversial bill to reduce the status of the Udmurt language. Between 2002 and 2010 the number of Udmurt speakers dwindled from 463,000 to 324,000. Other languages in the Volga region recorded similar declines in the number of speakers; between the 2002 and 2010 censuses the number of Mari speakers declined from 254,000 to 204,000 while Chuvash recorded only 1,042,989 speakers in 2010, a 21.6% drop from 2002. This is attributed to a gradual phasing out of indigenous language teaching both in the cities and rural areas while regional media and governments shift exclusively to Russian. In the North Caucasus, the law came after a decade in which educational opportunities in the indigenous languages was reduced by more than 50%, due to budget reductions and federal efforts to decrease the role of languages other than Russian. During this period, numerous indigenous languages in the North Caucasus showed significant decreases in their numbers of speakers even though the numbers of the corresponding nationalities increased, leading to fears of language replacement. The numbers of Ossetian, Kumyk and Avar speakers dropped by 43,000, 63,000 and 80,000 respectively. As of 2018, it has been reported that the North Caucasus is nearly devoid of schools that teach in mainly their native languages, with the exception of one school in North Ossetia, and a few in rural regions of Dagestan; this is true even in largely monoethnic Chechnya and Ingushetia. Chechen and Ingush are still used as languages of everyday communication to a greater degree than their North Caucasian neighbours, but sociolinguistics argue that the current situation will lead to their degradation relative to Russian as well. In 2020, a set of amendments to the Russian constitution was approved by the State Duma and later the Federation Council. 
One of the amendments enshrined the Russian nation as the "state-forming nationality" (Russian: государствообразующий народ) and Russian as the "language of the state-forming nationality". The amendment has been met with criticism from Russia's minorities, who argue that it goes against the principle that Russia is a multinational state and will only marginalize them further. The amendments were welcomed by Russian nationalists such as Konstantin Malofeev and Nikolai Starikov. The constitutional changes were preceded by the "Strategy of the State National Policy of the Russian Federation", issued in December 2018, which stated that "all-Russian civic identity is founded on the Russian cultural dominant, inherent to all nations of the Russian Federation". The results of the 2022 census, the latest available, showed a catastrophic decline in the numbers of many ethnic groups, particularly the peoples of the Volga region. Between 2010 and 2022, the number of people identifying as ethnic Mari dropped by 22.6%, from 548,000 to 424,000 people. The numbers of ethnic Chuvash and Udmurts dropped by 25% and 30% respectively. More vulnerable groups such as the Mordvins and Komi-Permyaks saw even larger declines, of 35% and 40% respectively; as a result, the Mordvins are no longer among the ten largest ethnic groups in Russia.
By country/region
Azerbaijan
Russian was introduced to the South Caucasus following its colonisation in the first half of the nineteenth century, after Qajar Iran was forced to cede its Caucasian territories to Russia under the Treaty of Gulistan (1813) and the Treaty of Turkmenchay (1828). By 1830 there were schools with Russian as the language of instruction in the cities of Shusha, Baku, Yelisavetpol (Ganja), and Shemakha (Shamakhi); later such schools were established in Kuba (Quba), Ordubad, and Zakataly (Zaqatala). Education in Russian was unpopular amongst ethnic Azerbaijanis until 1887, when Habib bey Mahmudbeyov and Sultan Majid Ganizadeh founded the first Russian–Azerbaijani school in Baku. A secular school with instruction in both Russian and Azeri, its programs were designed to be consistent with the cultural values and traditions of the Muslim population. Eventually, 240 such schools for both boys and girls, including a women's college founded in 1901, were established prior to the "Sovietization" of the South Caucasus. The first Russian–Azeri reference library opened in 1894. In 1918, during the short period of Azerbaijan's independence, the government declared Azeri the official language, but the use of Russian in government documents was permitted until all civil servants mastered the official language. In the Soviet era, the large Russian population of Baku, the quality and prospects of education in Russia, increased access to Russian literature, and other factors contributed to the intensive Russification of Baku's population. Its direct result by the mid-twentieth century was the formation of a supra-ethnic urban Baku subculture, uniting people of Russian, Azerbaijani, Armenian, Jewish, and other origins, whose defining features were being cosmopolitan and Russian-speaking. The widespread use of Russian resulted in the phenomenon of "Russian-speaking Azeris", i.e. the emergence of an urban community of Azerbaijani-born ethnic Azeris who considered Russian their native language. In 1970, 57,500 Azeris (1.3%) identified Russian as their native language.
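The census statistics quoted above are all simple relative changes between two counts. The short Python sketch below is a minimal illustration of that arithmetic, using only the figures cited in the text; the function name is illustrative rather than taken from any source, and the back-solved 2002 Chuvash baseline is an implied estimate, not a figure reported by the census itself.

# Verifying the relative declines quoted above from the raw census counts.
def percent_change(earlier, later):
    """Relative change between two census counts, as a percentage."""
    return (later - earlier) / earlier * 100

print(round(percent_change(463_000, 324_000), 1))  # Udmurt speakers, 2002 -> 2010: about -30.0
print(round(percent_change(548_000, 424_000), 1))  # ethnic Mari, 2010 -> 2022: about -22.6, matching the text
# Chuvash: 1,042,989 speakers in 2010, described as a 21.6% drop from 2002,
# which implies a 2002 baseline of roughly 1.33 million speakers.
print(round(1_042_989 / (1 - 0.216)))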
Belarus Russian and Soviet authorities conducted policies of Russification of Belarus from 1772 to 1991, interrupted by the Belarusization policy in the 1920s. When the pro-Russian president Alexander Lukashenko gained power in 1994, the Russification policy was renewed. Finland The Russification of Finland (1899–1905, 1908–1917), sortokaudet ("times of oppression" in Finnish) was a governmental policy of the Russian Empire aimed at the termination of Finland's autonomy. Finnish opposition to Russification was one of the main factors that ultimately led to Finland's declaration of independence in 1917. East Prussia The Northern part of the German province of East Prussia was annexed by the Soviet Union to RSFSR after World War II becoming the Kaliningrad Oblast. While the former German population was expelled or deported to the Soviet Union for forced labor, a systematic settlement of the Kaliningrad Oblast with Russians, Belarusians, and Ukrainians took place. Almost all cultural assets reminiscent of the Germans (e.g., churches, castles, palaces, monuments, drainage systems, etc.) were demolished or left to decay. All settlements were given names in the Russian language, the same for bodies of water, forests and other geographical features. Northern East Prussia was completely Russified. Latvia On September 14, 1885, an ukaz was signed by Alexander III setting the mandatory use of Russian for Baltic governorate officials. In 1889, it was extended to apply to official proceedings of the Baltic municipal governments as well. By the beginning of the 1890s, Russian was enforced as the language of instruction in Baltic governorate schools. After Soviet re-occupation of Latvia in 1944, Russian became the language of State business, and Russian served as the language of inter-ethnic communication among the increasingly urbanized non-Russian ethnic groups, making cities major centres for the use of the Russian language and making functional bilingualism in Russian a minimum necessary for the local population. In an attempt to partially reverse the Soviet Russification policies and give the Latvian language more equal positions to Russian, the so-called Latvian national communist faction within the Communist Party of Latvia passed a bill in 1957 that made the knowledge of both Latvian and Russian obligatory for all Communist Party employees, government functionaries, and service sector staff. The law included a 2-year deadline for gaining proficiency in both languages. In 1958, as the two-year deadline for the bill was approaching, the Communist Party of the Soviet Union set out to enact an education reform, a component of which, the so-called Thesis 19, would give parents in all of the Soviet Republics, with the exception of Russian SFSR, a choice for their children in public schools to study either the language of the republic's titular nation (in this case Latvian) or Russian, as well as one foreign language, in contrast, to the previous education system, where it was mandatory for school children to learn all three languages. Due to strong opposition from the Latvian national communists and the Latvian public, Latvian SSR was only one of two of the 12 Soviet Republics that did not yield to the increasing pressure to adopt Thesis 19 and excluded its contents from their ratified statutes. This led to the eventual purge of the Latvian national communists from the Communist Party ranks between 1959 and 1962. 
A month after the removal of the Latvian National Communist leader Eduards Berklavs All-Union legislation was implemented in Latvia by Arvīds Pelše. In an attempt to further widen the use of Russian and reverse the work of the national communists, a bilingual school system was established in Latvia, with parallel classes being taught in both Russian and Latvian. The number of such schools increased dramatically, including regions where the Russian population was minimal, and by July 1963 there were already 240 bilingual schools. The effect of the reform was the gradual decline in the number of assigned hours for learning Latvian in Russian schools and the increase in hours allocated for learning Russian in Latvian schools. In 1964–1965 the total weekly average of Latvian language classes and Russian language and literature classes in Latvian schools across all grades was reported to be 38.5 and 72.5 hours respectively, in comparison with 79 hours being devoted to the Russian language and 26 hours being devoted to Latvian language and literature in Russian schools. The reform has been attributed to the persistence of poor Latvian language knowledge among Russians living in Latvia and the increasing language gap between Latvians and Russians. In 1972, the Letter of 17 Latvian communists, was smuggled outside the Latvian SSR and circulated in the Western world, accusing the Communist Party of the Soviet Union of "Great Russian chauvinism" and "progressive Russification of all life in Latvia": Lithuania and Poland In the 19th century, the Russian Empire strove to replace the Ukrainian, Polish, Lithuanian, and Belarusian languages and dialects with Russian in those areas, which were annexed by the Russian Empire after the Partitions of Poland (1772–1795) and the Congress of Vienna (1815). Imperial Russia faced a crucial critical cultural situation by 1815: Russification in Congress Poland intensified after the November Uprising of 1831, and in particular after the January Uprising of 1863. In 1864, the Polish and Belarusian languages were banned in public places; in the 1880s, Polish was banned in schools, on school grounds, and in the offices of Congress Poland. Research and teaching of the Polish language, Polish history, or Catholicism were forbidden. Illiteracy rose as Poles refused to learn Russian. Students were beaten for resisting Russification. A Polish underground education network was formed, including the famous Flying University. According to Russian estimates, by the start of the 20th century, around one-third of the inhabitants in the territory of Congress Poland participated in secret teaching with use of Polish literary works. Starting in the 1840s, Russia considered introducing Cyrillic script for spelling the Polish language, with the first school books printed in the 1860s; the reform was eventually deemed unnecessary because of the introduction of school education in the Russian language. A similar development took place in Lithuania. Its Governor General, Mikhail Muravyov (in office 1863–1865), prohibited the public use of spoken Polish and Lithuanian and closed Polish and Lithuanian schools; teachers from other parts of Russia who did not speak these languages were moved in to teach pupils. Muravyov also banned the use of Latin and Gothic scripts in publishing. He was reported as saying, "What the Russian bayonet didn't accomplish, the Russian school will." 
("Что не додѣлалъ русскій штыкъ – додѣлаетъ русская школа.") This ban, lifted only in 1904, was disregarded by the Knygnešiai, the Lithuanian book smugglers, who brought Lithuanian publications printed in the Latin alphabet, the historic orthography of the Lithuanian language, from Lithuania Minor (part of East Prussia) and from the United States into the Lithuanian-speaking areas of Imperial Russia. The knygnešiai came to symbolise the resistance of Lithuanians against Russification. The Russification campaign also promoted the Russian Orthodox faith over Catholicism. The measures used included closing down Catholic monasteries, officially banning the building of new churches and giving many of the old ones to the Russian Orthodox church, banning Catholic schools and establishing state schools that taught only the Orthodox religion, requiring Catholic priests to preach only officially approved sermons, requiring that Catholics who married members of the Orthodox church convert, requiring Catholic nobles to pay an additional tax in the amount of 10% of their profits, limiting the amount of land a Catholic peasant could own, and switching from the Gregorian calendar (used by Catholics) to the Julian one (used by members of the Orthodox church). Most of the Orthodox Church property in the 19th century Congress Poland was acquired at the expense of the Catholic Church of both rites (Roman and Greek Catholic). After the 1863 January Uprising, many manors and great chunks of land were confiscated from nobles of Polish and Lithuanian descent who were accused of helping the uprising; these properties were later given or sold to Russian nobles. Villages where supporters of the uprising lived were repopulated by ethnic Russians. Vilnius University, where the language of instruction had been Polish rather than Russian, closed in 1832. Lithuanians and Poles were banned from holding any public jobs (including professional positions, such as teachers and doctors) in Lithuania; this forced educated Lithuanians to move to other parts of the Russian Empire. The old legal code was dismantled and a new one based on the Russian code and written in the Russian language was enacted; Russian became the only administrative and juridical language in the area. Most of these actions ended at the beginning of the Russo-Japanese War of 1904–1905, but others took longer to be reversed; Vilnius University re-opened only after Russia had lost control of the city in 1919. Bessarabia/Moldova Bessarabia was annexed by the Russian Empire in 1812. In 1816 Bessarabia became an autonomous state, but only until 1828. In 1829, the use of the Romanian language was forbidden in the administration. In 1833, the use of the Romanian language was forbidden in churches. In 1842, teaching in Romanian was forbidden in secondary schools; it was forbidden in elementary schools in 1860. The Russian authorities encouraged the migration of Moldovans to other provinces of the Russian Empire (especially in Kuban, Kazakhstan, and Siberia), while foreign ethnic groups (especially Russians and Ukrainians, called in the 19th century "Little Russians") were encouraged to settle there. Though the 1817 census did not record ethnicity, Romanian authors have claimed that Bessarabia was populated at the time by 86% Moldovans, 6.5% Ukrainians, 1.5% Russians (Lipovans), and 6% other ethnic groups. 80 years later, in 1897, the ethnic structure was very different: only 56% Moldovans, but 11.7% Ukrainians, 18.9% Russians, and 13.4% other ethnic groups. 
Over those 80 years, between 1817 and 1897, the Moldovan share of the population thus dropped by 30 percentage points (from 86% to 56%, a relative decline of roughly a third). After the Soviet occupation of Bessarabia in 1940, the Romanian population of Bessarabia was persecuted by Soviet authorities, especially in the years following the annexation, based mostly on social, educational, and political grounds; because of this, Russification laws were imposed again on the Romanian population. The Moldovan language, promoted during the interwar period by the Soviet authorities first in the Moldavian Autonomous Soviet Socialist Republic and, after 1940, taught in the Moldavian Soviet Socialist Republic, was actually the Romanian language, but written in a version of the Cyrillic script derived from the Russian alphabet. Proponents of Cyrillic orthography argue that the Romanian language was historically written with the Cyrillic script, albeit a different version of it (see Moldovan alphabet and Romanian Cyrillic alphabet for a discussion of this controversy).
Ukraine
Russian and Soviet authorities conducted policies of Russification of Ukraine from 1709 to 1991, interrupted by the Korenizatsiya policy in the 1920s. Since Ukraine's independence, its government has implemented Ukrainization policies to decrease the use of Russian and favour Ukrainian. The Russification policy included various instruments, most notably an explicit ban, from 1876, on using the Ukrainian language in print, importing Ukrainian-language literature, and staging plays or lectures in Ukrainian (the Ems Ukaz). A number of Ukrainian activists died by suicide in protest against Russification, including Vasyl Makukh in 1968 and Oleksa Hirnyk in 1978. Following the 2014 Russian annexation of Crimea and the emergence of unrecognized Russian-backed entities in Eastern Ukraine, a subtle form of Russification was initiated, despite these areas being predominantly Russian-speaking.
See also
Geographical distribution of Russian speakers
Territorial evolution of Russia
Orthodoxy, Autocracy, and Nationality
Education in the Soviet Union
Slavophilia
Population transfer in the Soviet Union
Russian diaspora
Russian imperialism
Russian nationalism
Sovietization
Ems Ukaz – 1876 decree banning use of Ukrainian language in Russian Empire
"The Changing Status of Russian in the Soviet Union," International Journal of the Sociology of Language 33: 7–39. Lewis, E. Glyn. 1972. Multilingualism in the Soviet Union: Aspects of Language Policy and its Implementation. The Hague: Mouton. Pavlenko, Aneta. 2008. Multilingualism in Post-Soviet Countries. Multilingual Matters, Tonawanda, NY. . Silver, Brian D. 1974. "The Status of National Minority Languages in Soviet Education: An Assessment of Recent Changes," Soviet Studies 26 (January): 28–40. Silver, Brian D. 1986. "The Ethnic and Language Dimensions in Russian and Soviet Censuses," in Ralph S. Clem, Ed., Research Guide to the Russian and Soviet Censuses (Ithaca: Cornell Univ. Press): 70–97. Thaden, Edward C., Ed. 1981. Russification in the Baltic Provinces and Finland, 1855–1914. Princeton: Princeton University Press. Wixman, Ronald. 1984. The Peoples of the USSR: An Ethnographic Handbook. New York: M.E. Sharpe and London, Macmillan. External links Russification in Lithuania LTV online documentary examines 'Russification' and its effects. 16 April 2020. Public Broadcasting of Latvia. The Civic Identity of Russifying Officials in the Empire’s Northwestern Region after 1863 by Mikhail Dolbilov Permanent mission of Caucasian Institute for Democracy Foundation opened in Tskhinvali – Regnum News Agency (Russia), 9 December 2005 Tatarstan Rejects Dominant Role of Russians – Kommersant, 6 March 2006 Forgetting How to Speak Russian | Fast forward | OZY (7 January 2014) Politics of the Russian Empire Ethnic groups in the Soviet Union History of Belarus (1795–1918) History of Lithuania (1795–1918) History of the Lithuanian language Congress Poland Social history of Ukraine Social history of Armenia Soviet internal politics Poland–Russia relations Slavicization Soviet ethnic policy Russian nationalism
0.766987
0.995772
0.763744
Quasi-state
A quasi-state (sometimes referred to as a state-like entity or formatively a proto-state) is a political entity that does not represent a fully autonomous sovereign state with its own institutions. The precise definition of quasi-state in political literature fluctuates depending on the context in which it is used. It has been used by some modern scholars to describe the self-governing British colonies and dependencies that exercised a form of home rule but remained crucial parts of the British Empire and subject firstly to the metropole's administration. Similarly, the Republics of the Soviet Union, which represented administrative units with their own respective national distinctions, have also been described as quasi-states. In the 21st century usage, the term quasi-state has most often been evoked in reference to militant secessionist groups who claim, and exercise some form of territorial control over, a specific region, but which lack institutional cohesion. Such quasi-states include the Republika Srpska and Herzeg-Bosnia during the Bosnian War, the Republic of Serbian Krajina during the Croatian War of Independence, and Azawad during the 2012 Tuareg rebellion. The Islamic State is also widely held to be an example of a modern quasi-state or proto-state. History The term "proto-state" has been used in reference to contexts as far back as Ancient Greece, to refer to the phenomenon that the formation of a large and cohesive nation would often be preceded by very small and loose forms of statehood. For instance, historical sociologist Garry Runciman describes the evolution of social organisation in the Greek Dark Ages from statelessness, to what he calls semistates based on patriarchal domination but lacking inherent potential to achieve the requirements for statehood, sometimes transitioning into protostates with governmental roles able to maintain themselves generationally, which could evolve into larger, more centralised entities fulfilling the requirements of statehood by 700 BC in the archaic period. Most ancient proto-states were the product of tribal societies, consisting of relatively short-lived confederations of communities that united under a single warlord or chieftain endowed with symbolic authority and military rank. These were not considered sovereign states since they rarely achieved any degree of institutional permanence and authority was often exercised over a mobile people rather than measurable territory. Loose confederacies of this nature were the primary means of embracing a common statehood by people in many regions, such as the Central Asian steppes, throughout ancient history. Proto-states proliferated in Western Europe during the Middle Ages, likely as a result of a trend towards political decentralisation following the collapse of the Western Roman Empire and the adoption of feudalism. While theoretically owing allegiance to a single monarch under the feudal system, many lesser nobles administered their own fiefs as miniature "states within states" that were independent of each other. This practice was especially notable with regards to large, decentralised political entities such as the Holy Roman Empire, that incorporated many autonomous and semi-autonomous proto-states. Following the Age of Discovery, the emergence of European colonialism resulted in the formation of colonial proto-states in Asia, Africa, and the Americas. 
A few colonies were given the unique status of protectorates, which were effectively controlled by the metropole but retained limited ability to administer themselves, self-governing colonies, dominions, and dependencies. These were distinct administrative units that each fulfilled many of the functions of a state without actually exercising full sovereignty or independence. Colonies without a sub-national home rule status, on the other hand, were considered administrative extensions of the colonising power rather than true proto-states. Colonial proto-states later served as the basis for a number of modern nation states, particularly on the Asian and African continents. During the twentieth century, some proto-states existed as not only distinct administrative units, but their own theoretically self-governing republics joined to each other in a political union such as the socialist federal systems observed in Yugoslavia, Czechoslovakia, and the Soviet Union. Another form of proto-state that has become especially common since the end of World War II is established through the unconstitutional seizure of territory by an insurgent or militant group that proceeds to assume the role of a de facto government. Although denied recognition and bereft of civil institutions, insurgent proto-states may engage in external trade, provide social services, and even undertake limited diplomatic activity. These proto-states are usually formed by movements drawn from geographically concentrated ethnic or religious minorities, and are thus a common feature of inter-ethnic civil conflicts. This is often due to the inclinations of an internal cultural identity group seeking to reject the legitimacy of a sovereign state's political order, and create its own enclave where it is free to live under its own sphere of laws, social mores, and ordering. Since the 1980s a special kind of insurgent statehood has emerged in form of the "Jihadi proto-state", as the Islamist concept of statehood is extremely flexible. For instance, a Jihadi emirate can be simply understood as a territory or group ruled by an emir; accordingly, it might rule a significant area or just a neighborhood. Regardless of its extent, the assumption of statehood provides Jihadi militants with important internal legitimacy and cementes their self-identification as frontline society opposed to certain enemies. The accumulation of territory by an insurgent force to form a sub-national geopolitical system and eventually, a proto-state, was a calculated process in China during the Chinese Civil War that set a precedent for many similar attempts throughout the twentieth and twenty-first centuries. Proto-states established as a result of civil conflict typically exist in a perpetual state of warfare and their wealth and populations may be limited accordingly. One of the most prominent examples of a wartime proto-state in the twenty-first century is the Islamic State of Iraq and the Levant, that maintained its own administrative bureaucracy and imposed taxes. Theoretical basis The definition of a proto-state is not concise, and has been confused by the interchangeable use of the terms state, country, and nation to describe a given territory. The term proto-state is preferred to "proto-nation" in an academic context, however, since some authorities also use nation to denote a social, ethnic, or cultural group capable of forming its own state. 
A proto-state does not meet the four essential criteria for statehood as elaborated upon in the declarative theory of statehood of the 1933 Montevideo Convention: a permanent population, a defined territory, a government with its own institutions, and the capacity to enter into relations with other states. A proto-state is not necessarily synonymous with a state with limited recognition that otherwise has all the hallmarks of a fully functioning sovereign state, such as Rhodesia or the Republic of China, also known as Taiwan. However, proto-states frequently go unrecognised since a state actor that recognises a proto-state does so in violation of another state actor's external sovereignty. If full diplomatic recognition is extended to a proto-state and embassies exchanged, it is defined as a sovereign state in its own right and may no longer be classified as a proto-state. Throughout modern history, partially autonomous regions of larger recognised states, especially those based on a historical precedent or ethnic and cultural distinctiveness that places them apart from those who dominate the state as a whole, have been considered proto-states. Home rule generates a sub-national institutional structure that may justifiably be defined as a proto-state. When a rebellion or insurrection seizes control and begins to establish some semblance of administration in regions within national territories under its effective rule, it has also metamorphosed into a proto-state. These wartime proto-states, sometimes known as insurgent states, may eventually transform the structure of a state altogether, or demarcate their own autonomous political spaces. While not a new phenomenon, the modern formation of a proto-states in territory held by a militant non-state entity was popularised by Mao Zedong during the Chinese Civil War, and the national liberation movements worldwide that adopted his military philosophies. The rise of an insurgent proto-state was sometimes also an indirect consequence of a movement adopting Che Guevara's foco theory of guerrilla warfare. Secessionist proto-states are likeliest to form in preexisting states that lack secure boundaries, a concise and well-defined body of citizens, or a single sovereign power with a monopoly on the legitimate use of military force. They may be created as a result of putsches, insurrections, separatist political campaigns, foreign intervention, sectarian violence, civil war, and even the bloodless dissolution or division of the state. Proto-states can be important regional players, as their existence affects the options available to state actors, either as potential allies or as impediments to their political or economic policy articulations. List of proto-states Constituent proto-states Current Former Secessionist, insurgent, and self-proclaimed autonomous proto-states Current Former See also Aspirant state Deep state Failed state List of sovereign states Nation-building Rump state Sovereign state State-building List of rebel groups that control territory Notes and references Annotations References Bibliography Types of countries International law Political science terminology Political geography Political metaphors Political neologisms Forms of government Former countries Former countries by form of government Political anthropology
0.768048
0.994344
0.763704
Sectarianism
Sectarianism is a debated concept. Some scholars and journalists define it as pre-existing fixed communal categories in society, and use it to explain political, cultural, or religious conflicts between groups. Others conceive of sectarianism as a set of social practices where daily life is organised on the basis of communal norms and rules that individuals strategically use and transcend. This definition highlights the co-constitutive aspect of sectarianism and people's agency, as opposed to understanding sectarianism as being fixed and incompatible communal boundaries. While sectarianism is often labelled as 'religious' and/or 'political', the reality of a sectarian situation is usually much more complex. In its most basic form, sectarianism has been defined as, 'the existence, within a locality, of two or more divided and actively competing communal identities, resulting in a strong sense of dualism which unremittingly transcends commonality, and is both culturally and physically manifest.' Definition The term "sectarianism" is defined in the Oxford English Dictionary as "excessive attachment to a particular sect or party, especially in religion". The phrase "sectarian conflict" usually refers to violent conflict along religious or political lines, such as the conflicts between Nationalists and Unionists in Northern Ireland (religious and class-divisions may play major roles as well). It may also refer to general philosophical, political disparity between different schools of thought, such as that between Shia and Sunni Muslims. Non-sectarians see free association and tolerance of different beliefs as the cornerstones to successful, peaceful human interaction. They adopt political and religious pluralism. Polemics against the term "sectarianism" Some scholars identify the problems with using the term "sectarianism" in articles. Western mainstream media and politicians often presume "sectarianism" as ancient and long-lasting. For example, Obama in his final State of the Union address phrased the sectarian violence in the Middle East as "rooted in conflicts that dated back millennia", but many pointed out that some sectarian tensions don't even date back a decade. "Sectarianism" is also too ambiguous, which makes it a slogan whose meanings are up to the observers. Scholars argued that the use of term "sectarianism" has become a catch-all explanation to conflicts, which drives analytical attention away from underlying political and socioeconomic issues, lacks coherence, and is often associated with emotional negativity. Many scholars find the term "sectarianism" problematic, and therefore three alternatives are proposed. Alternative: Sectarianization Hashemi and Postel and other scholars differentiate between "sectarianism" and "sectarianization". While "sectarianism" describes antipathy, prejudice, and discrimination between subdivisions within a group, e.g. based on their religious or ethnic identity, the latter describes a process mobilized by political actors operating within authoritarian contexts to pursue their political goals that involve popular mobilization around religious or identity markers. The use of the word sectarianism to explain sectarian violence and its upsurge in i.e. the Middle East is insufficient, as it does not take into account complex political realities. In the past and present, religious identities have been politicized and mobilized by state actors inside and outside of the Middle East in pursuit of political gain and power. 
The term sectarianization conceptualizes this notion. Sectarianization is an active, multi-layered process and a set of practices, not a static condition, that is set in motion and shaped by political actors pursuing political goals. The sectarianization thesis focuses on the intersection of politics and sectarian identity from a top-down state-centric perspective. While religious identity is salient in the Middle East and has contributed to and intensified conflicts across the region, it is the politicization and mobilization of popular sentiments around certain identity markers ("sectarianization") that explains the extent and upsurge of sectarian violence in the Middle East. The Ottoman Tanzimat, European colonialism and authoritarianism are key in the process of sectarianization in the Middle East. Alternative: Sectarian as a prefix Haddad argues "sectarianism" cannot capture sectarian relations in reality nor represent the complex expressions of sectarian identities. Haddad calls for an abandonment of -ism in "sectarianism" in scholarly research as it "has overshadowed the root" and direct use of 'sectarian' as a qualifier to "direct our analytical focus towards understanding sectarian identity". Sectarian identity is "simultaneously formulated along four overlapping, interconnected and mutually informing dimensions: doctrinal, subnational, national, and transnational". The relevance of these factors is context-dependent and works on four layers in chorus. The multi-layered work provides more clarity and enables more accurate diagnoses of problems at certain dimensions to find more specific solutions. Alternative: Sextarianism In her book Sextarianism, Mikdashi emphasizes the relationship between sect, sex and sexuality. She argues that sectarianism cannot be studied in isolation, because the practice of sectarianism always goes hand in hand with the practice of sexism. For this reason she proposes the term "sextarianism". Sex, sexuality and sect together define citizenship, and, since the concept of citizenship is the basis of the modern nation-state, sextarianism therefore forms the basis for the legal bureaucratic systems of the state and thus for state power. Intersectionality in Sectarianism The analytical framework of intersectionality in examining sectarianism has gained increasing prominence in the study of this subject. Intersectionality highlights the nature of religious, ethnic, political, and social identities in contexts marked by sectarian tensions and conflicts. Acknowledging that individuals' experiences of sectarianism are shaped not only by their religious affiliation or other sectarian categories but also by other dimensions such as sex, class, and nationality among others, are essentially contributing to those experiences. Religious Dimension Intersectionality reveals that factors like sex, ethnicity, and socioeconomic status intersect with religious identity to shape individuals' experiences of sectarianism. Authors such as Maya Mikdashi introduced the concept of 'Sextarianism', particularly showing how the role of gender is crucially influencing the individual's experience of religious sectarian differences in political sectarian systems such as in Lebanon. In the case of Sectarianism in Lebanon, she highlights how Sextarian differences are decisive vectors in determining woman's experiences of power and sovereignty in a political sectarian system. 
Political Dimension In the political dimensions, the intersectional lens recognizes the intricate connections between political identities and other social categories. Political parties or other factions may exploit religious divisions for political gain, exacerbating sectarian tensions. Intersectionality helps to understand how for instance political affiliations intersect with factors such as socioeconomic status and regional background, providing insights into the motivation behind political mobilization and the dynamics of power in sectarian settings. Implications for Communities The intersectionality of sectarianism has profound implications for affected communities, particularly for individuals who belong to multiple marginalized groups such as women, migrants, and marginalized ethnicities living under sectarian systems. The recognition of these intersecting forms of discrimination and marginalization is decisive for developing inclusive strategies to promote peace, tolerance, and increased social cohesion within diverse societies. Political sectarianism Sectarianism in the 21st century Sectarian tendencies in politics are visible in countries and cities associated with sectarian violence in the present, and the past. Notable examples where sectarianism affects lives are street-art expression, urban planning, and sports club affiliation. United Kingdom Across the United Kingdom, Scottish and Irish sectarian tendencies are often reflected in team-sports competitions. Affiliations are regarded as a latent representation of sectarianism tendencies. (Since the early 1900s, cricket teams were established via patronage of sectarian affiliated landlords. In response to the Protestant representation of the sport, many Catholic schools founded their own Cricket schools.) Modern day examples include tensions in sports such as football and have led to the passing of the "Offensive Behaviour at Football and Threatening Communications (Scotland) Act 2012". Iran World leaders have criticised the political ambitions of Iran and have condemned its involvement and support for opposition groups such as Hezbollah. The political authority of the Islamic Republic of Iran has extended into neighboring countries, and has led to an increase of tensions in the region. An important figure in this process of expansion was the major general of Iran's Quds Force (the foreign arm of the IRGC), Qasem Soleimani. Soleimani was assassinated in Iraq by an American drone in January 2020 leading to an increase of tension between the United States of America and Iran. Soleimani was responsible for strengthening Iran's ties with foreign powers such as Hezbollah in Lebanon, Syria's al-Assad, and Shia militia groups in Iraq. Soleimani was seen as the number-one commander of Iran's foreign troops and played a crucial role in the spread of Iran's ideology in the region. According to President Donald Trump, Soleimani was the world's most wanted terrorist and had to be assassinated in order to bring more peace to the Middle-East region and the rest of the world. however this was shown to be incorrect as his murder had little effect on Iranian ambitions and only increased support for Iran as it was an illegal act under international law. Authoritarian regimes In recent years, authoritarian regimes have been particularly prone to sectarianization. This is because their key strategy of survival lies in manipulating sectarian identities to deflect demands for change and justice, and preserve and perpetuate their power. 
Sectarianization as a theory and process extending beyond the Middle East was introduced by Saleena Saleem. Christian communities, and other religious and ethnic minorities in the Middle East, have been socially, economically and politically excluded and harmed primarily by regimes that focus on "securing power and manipulating their base by appeals to Arab nationalism and/or to Islam". An example of this is the Middle Eastern regional response to the Iranian revolution of 1979. Middle Eastern dictatorships backed by the United States, especially Saudi Arabia, feared that the spread of the revolutionary spirit and ideology would affect their power and dominance in the region. Therefore, efforts were made to undermine the Iranian revolution by labeling it as a Shi'a conspiracy to corrupt the Sunni Islamic tradition. This was followed by a rise in anti-Shi'a sentiment across the region and a deterioration of Shi'a-Sunni relations, impelled by funds from the Gulf states. In this way, the process of sectarianization, the mobilization and politicization of sectarian identities, is a political tool for authoritarian regimes to perpetuate their power and justify violence. Western powers indirectly take part in the process of sectarianization by supporting undemocratic regimes in the Middle East. As Nader Hashemi asserts:
The U.S. invasion of Iraq; the support of various Western governments for the Kingdom of Saudi Arabia, which commits war crime upon war crime in Yemen and disseminates poisonous sectarian propaganda throughout the Sunni world; not to mention longstanding Western support for highly repressive dictators who manipulate sectarian fears and anxieties as a strategy of control and regime survival – the "ancient hatreds" narrative [between Sunnis and Shi'as] washes this all away and lays the blame for the region's problems on supposedly trans-historical religious passions. It's absurd in the extreme and an exercise in bad faith.
Approaches to studying sectarian identities in authoritarian regimes
Scholars have adopted three approaches to the study of sectarian discourses: primordialism, instrumentalism, and constructivism. Primordialism sees sectarian identity as rooted in biology and ingrained in history and culture. Makdisi describes the process of tracing sectarian discourses back to early Islamic history as "pervasive medievalization". The centuries-old narrative is problematic as it treats sectarian identities in the Middle East as sui generis rather than as modern collective identities. Scholars should be cautious of the sectarian essentialism and Middle East exceptionalism that the primordial narrative reinforces, since primordialism suggests that sectarian tensions are bound to persist even though theological differences do not in themselves guarantee conflict. Instrumentalism emphasizes that ruling elites manipulate identities to create violent conflicts in pursuit of their own interests. Instrumentalists see the Sunni-Shi'a divide as a modern invention and challenge the myths of primordial narratives, since sectarian harmony has existed for centuries. Constructivism occupies the middle ground between primordialism and instrumentalism.
Religious sectarianism
Wherever people of different religions live in close proximity to each other, religious sectarianism can often be found in varying forms and degrees.
In some areas, religious sectarians (for example Protestant and Catholic Christians) exist peacefully side by side for the most part, although these differences have resulted in violence, death, and outright warfare as recently as the 1990s. Probably the best-known example in recent times was The Troubles. Catholic-Protestant sectarianism has also been a factor in U.S. presidential campaigns. Prior to John F. Kennedy, only one Catholic (Al Smith) had ever been a major party presidential nominee, and he had been solidly defeated largely because of claims based on his Catholicism. JFK chose to tackle the sectarian issue head-on during the West Virginia primary, but that only sufficed to win him barely enough Protestant votes to eventually win the presidency by one of the narrowest margins ever. Within Islam, there have been tensions at various periods between Sunnis and Shias; Shias consider Sunni belief to be in error because of the Sunni refusal to accept Ali as the rightful first caliph and to accept his descendants as infallible and divinely guided. Many Sunni religious leaders, including those inspired by Wahhabism and other ideologies, have declared Shias to be heretics or apostates.
Europe
Long before the Reformation, dating back to the 12th century, there had been sectarian conflict of varying intensity in Ireland. Historically, some Catholic countries persecuted Protestants as heretics. In Scotland, for example, the Roman Catholic Church "arrested" (in effect kidnapped) entire Protestant families, including newborns, subjecting them to torture, starvation and rape for the sole purpose of forcing them to renounce their beliefs and affirm that the Pope's word was God's word; these Protestant martyrs were then burned alive publicly at the stake. In some countries where the Reformation was successful, there was persecution of Roman Catholics. This was motivated by the perception that Catholics retained allegiance to a 'foreign' power (the papacy or the Vatican), causing them to be regarded with suspicion. Sometimes this mistrust manifested itself in Catholics being subjected to restrictions and discrimination, which itself led to further conflict. For example, before Catholic Emancipation was introduced with the Roman Catholic Relief Act 1829, Catholics were forbidden from voting, becoming MPs, or buying land in Ireland.
Ireland
Protestant-Catholic sectarianism is prominent in Irish history; during the period of English (and later British) rule, Protestant settlers from Britain were "planted" in Ireland, which along with the Protestant Reformation led to increasing sectarian tensions between Irish Catholics and British Protestants. These tensions eventually boiled over into widespread violence during the Irish Rebellion of 1641. At the end of that war, the lands of Catholics were confiscated, with over ten million acres granted to new English owners under the Act for the Settlement of Ireland 1652. The Cromwellian conquest of Ireland (1649–1653) saw a series of massacres perpetrated by the Protestant New Model Army against Catholic English royalists and Irish civilians. Sectarianism between Catholics and Protestants continued in the Kingdom of Ireland, with the Irish Rebellion of 1798 against British rule leading to more sectarian violence on the island, most infamously the Scullabogue Barn massacre, in which Protestants were burned alive in County Wexford.
The British response to the rebellion which included the public executions of dozens of suspected rebels in Dunlavin and Carnew, also inflamed sectarian sentiments. Northern Ireland After the Partition of Ireland in 1922, Northern Ireland witnessed decades of intensified conflict, tension, and sporadic violence (see The Troubles (1920–1922) and The Troubles) between the dominant Protestant majority and the Catholic minority. In 1969 the Northern Ireland Civil Rights Association was formed to support civil rights and end discrimination (based on religion) in voting rights (see Gerrymandering), housing allocation and employment. Also in 1969, 25 years of violence erupted, becoming what is known as “The Troubles” between Irish Republicans whose goal is a United Ireland and Ulster loyalists who wish for Northern Ireland to remain a part of the United Kingdom. The conflict was primarily fought over the existence of the Northern Irish state rather than religion, though sectarian relations within Northern Ireland fueled the conflict. However, religion is commonly used as a marker to differentiate the two sides of the community. The Catholic minority primarily favour the nationalist, and to some degree, republican, goal of unity with the Republic of Ireland, while the Protestant majority favour Northern Ireland continuing the union with Great Britain. England In June 1780 a series of riots (see the Gordon Riots) occurred in London motivated by anti-Catholic sentiment. These riots were described as being the most destructive in the history of London and resulted in approximately 300-700 deaths. A long history of politically and religious motivated sectarian violence already existed in Ireland (see Irish Rebellions). The sectarian divisions related to the "Irish question" influenced local constituent politics in England. Liverpool is an English city sometimes associated with sectarian politics. Halfway through the 19th century, Liverpool faced a wave of mass-immigration from Irish Catholics as a consequence of the Great Famine in Ireland. Most of the Irish-Catholic immigrants were unskilled workers and aligned themselves with the Labour party. The Labour-Catholic party saw a larger political electorate in the many Liverpool-Irish, and often ran on the slogan of "Home Rule" - the independence of Ireland, to gain the support of Irish voters. During the first half of the 20th century, Liverpool politics were divided not only between Catholics and Protestants, but between two polarized groups consisting of multiple identities: Catholic-Liberal-Labour and Protestant-Conservative-Tory/Orangeists. From early 1900 onwards, the polarized Catholic Labour and Protestant Conservative affiliations gradually broke apart and created the opportunity for mixed alliances. The Irish National party gained its first electoral victory in 1875, and kept growing until the realization of Irish independence in 1921, after which it became less reliant on Labour support. On the Protestant side, Tory opposition in 1902 to vote in line with Protestant proposed bills indicated a split between the working class Protestants and the Tory party, which were regarded as "too distant" from its electorate. After the First and Second World War, religiously mixed battalions provided a counterweight to anti-Roman Catholic and anti-Protestant propaganda from either side. 
While the IRA bombing campaign of 1939 (see S-Plan) somewhat increased violence between the Irish Catholic-aligned Labour party and Conservative Protestants, the German May Blitz destroyed the property of more than 40,000 households. Rebuilding Liverpool after the war created a new sense of community across religious lines. Inter-church relations also improved, as seen in the warming of relations between Archbishop Worlock and Anglican Bishop David Sheppard after 1976, a symbol of decreasing religious hostility. Rising education rates and the growth of trade and labour unions further shifted political identification from religious affiliation to class affiliation, bringing Protestant and Catholic voters together under a Labour umbrella. By the 1980s, class division had outgrown religious division, replacing religious sectarianism with class struggle. Growing non-English immigration from other parts of the Commonwealth around the turn of the 21st century has also provided new political lines of division in identity affiliation. Since 2007, Northern Ireland has observed a Private Day of Reflection to mark the transition to a post-sectarian-conflict society, an initiative of the cross-community Healing Through Remembering organization and research project.
The Balkans
The civil wars in the Balkans that followed the breakup of Yugoslavia in the 1990s were heavily tinged with sectarianism. Croats and Slovenes have traditionally been Catholic, Serbs and Macedonians Eastern Orthodox, and Bosniaks and most Albanians Muslim. Religious affiliation served as a marker of group identity in this conflict, despite relatively low rates of religious practice and belief among these various groups after decades of communism.
Africa
Over 1,000 Muslims and Christians were killed in the sectarian violence in the Central African Republic in 2013–2014. Nearly 1 million people, a quarter of the population, were displaced.
Australia
Sectarianism in Australia is a historical legacy from the 18th, 19th and 20th centuries, between Catholics of mainly Celtic heritage and Protestants of mainly English descent. It has largely disappeared in the 21st century. In the late 20th and early 21st centuries, religious tensions were centered more on Muslim immigrants and non-Muslim nationalists, amid the backdrop of the War on Terror.
Asia
Japan
For the violent conflict between Buddhist sects in Japan, see Japanese Buddhism.
Pakistan
Pakistan, one of the largest Muslim countries in the world, has seen serious Shia-Sunni sectarian violence. An estimated 85–90% of Pakistan's Muslim population is Sunni, and another 10–15% is Shia. However, this Shia minority forms the second largest Shia population of any country, larger than the Shia majority in Iraq. In the last two decades, as many as 4,000 people are estimated to have died in sectarian fighting in Pakistan, 300 of them in 2006. Among the culprits blamed for the killings is Al Qaeda, working "with local sectarian groups" to kill those they perceive as Shi'a apostates.
Sri Lanka
Most Muslims in Sri Lanka are Sunnis. There are also a few Shia Muslims, mainly from the relatively small Bohra trading community. Divisiveness is not a new phenomenon in Beruwala. Sunni Muslims in the Kalutara district are split into two different subgroups. One group, known as the Alaviya sect, historically holds its annual feast at the Ketchimalai mosque located on the palm-fringed promontory adjoining the fisheries harbour in Beruwala. It is a microcosm of the Muslim identity in many ways.
The Galle Road that hugs the coast from Colombo veers inland just ahead of the town and forms the divide. On the left of the road lies China Fort, the area where some of the wealthiest Sri Lankan Muslims live. The palatial houses, with all modern conveniences, could equal if not outdo those in the Colombo 7 sector. Most of the wealthy Muslims, gem dealers, even have a home in the capital, not to mention other property. Strict Wahhabis believe that all those who do not practise their form of religion are heathens and enemies. There are others who say Wahhabism's rigidity has led it to misinterpret and distort Islam, pointing to the Taliban as well as Osama bin Laden. What has caused concern in intelligence and security circles is the manifestation of this new phenomenon in Beruwala. It had earlier seen its emergence in the east.
Turkey
Ottoman Empire
In 1511, a pro-Shia revolt known as the Şahkulu Rebellion was brutally suppressed by the Ottomans: 40,000 were massacred on the order of the sultan.
Republican era (1923–)
Alevis were targeted in various massacres, including the 1978 Maraş massacre, the 1980 Çorum massacre and the 1993 Sivas massacre. During his campaign for the 2023 Turkish presidential election, Kemal Kılıçdaroğlu was attacked with sectarian insults in Adıyaman.
Iran
Overview
Sectarianism in Iran has existed for centuries, dating back to the Islamic conquest of the country in the early Islamic era and continuing throughout Iranian history until the present. During the Safavid dynasty's reign, sectarianism started to play an important role in shaping the path of the country. Under Safavid rule between 1501 and 1722, Shiism evolved and became established as the official state religion, leading to the creation of the first religiously legitimate government since the occultation of the Twelfth Imam. This pattern of sectarianism prevailed throughout Iranian history. The approach that sectarianism has taken since the Iranian 1979 revolution has shifted compared to earlier periods. Never before the Iranian 1979 revolution did the Shiite leadership gain as much authority. Due to this change, the sectarian timeline in Iran can be divided into pre- and post-1979 periods, in which the religious leadership changed course.
Pre-1979 Revolution
Shiism had been an important factor in shaping politics, culture and religion within Iran long before the Iranian 1979 revolution. During the Safavid dynasty, Shiism was established as the official ideology. The establishment of Shiism as an official government ideology opened the door for the clergy to gain new cultural, political and religious rights that had been denied prior to Safavid rule. Safavid rule allowed greater freedom for religious leaders: by establishing Shiism as the state religion, the Safavids legitimised religious authority. After this establishment of power, religious leaders began to play a crucial role within the political system while remaining socially and economically independent. The monarchical power balance during the Safavid era changed every few years, resulting in shifting limits on the power of the clergy. Tensions over the power relations between the religious authorities and the ruling power eventually played a pivotal role in the 1906 constitutional revolution, which limited the power of the monarch and increased the power of religious leaders.
The 1906 constitutional revolution involved both constitutionalist and anti-constitutionalist clerical leaders. Individuals such as Sayyid Jamal al-Din Va'iz were constitutionalist clerics, whereas others such as Mohammed Kazem Yazdi were considered anti-constitutionalist. The establishment of a Shiite government under Safavid rule increased the power of this religious sect. That religious power grew over the years and brought fundamental changes to Iranian society in the twentieth century, eventually leading to the establishment of the Shiite Islamic Republic of Iran in 1979.
Post-1979 Revolution: Islamic Republic of Iran
The 1979 revolution led to the overthrow of the Pahlavi dynasty and the establishment of the Islamic Government of Iran. The governing body of Iran displays clear elements of sectarianism, visible within different layers of its system. The revolution changed the political system, leading to the establishment of a bureaucratic clerical regime that has created its own interpretation of the Shia sect in Iran. Religious differentiation is often used by authoritarian regimes to express hostility towards other groups, such as ethnic minorities and political opponents. Authoritarian regimes can use religion as a weapon to create an "us and them" paradigm, which leads to hostility amongst the involved parties both internally and externally. A notable example is the suppression of religious minorities such as the Sunnis and the Bahá'ís. With the establishment of the Islamic Republic of Iran, sectarian discourses arose in the Middle East as the Iranian religious regime attempted, and in some cases succeeded, to spread its religious and political ideas in the region. These sectarian-labelled issues are politically charged. The most notable religious leaders in Iran are the Supreme Leaders, whose role has proved pivotal in the evolution of sectarianism within the country and in the region. The following part discusses Iran's supreme leadership in further detail.
Ruhollah Khomeini and Ali Khamenei
During the Iran-Iraq war, Iran's first Supreme Leader, Ayatollah Khomeini, called for the participation of all Iranians in the war. His invocation of Shia martyrdom led to the creation of a national consensus. In the early aftermath of the 1979 revolution, Khomeini adopted an increasingly sectarian tone in his speeches. His focus on Shiism and Shia Islam grew, and this was also reflected in the country's changing policies. In one of his speeches Khomeini declared that "the path to Jerusalem passes through Karbala." The phrase was open to many different interpretations and led to turmoil in the region as well as within the country. From a religious and historical viewpoint, Karbala and Najaf, which are both situated in Iraq, serve as important sites for Shia Muslims around the world. By invoking these two cities, Khomeini gave impetus to Shia expansionism. Khomeini's war with the Iraqi Ba'ath regime had many underlying causes, and sectarianism can be considered one of the main ones. The tensions between Iran and Iraq are of course not only sectarian, but religion is often a weapon used by the Iranian regime to justify its actions. Khomeini's words also resonated in other Arab countries that had been fighting for Palestinian liberation against Israel.
By naming Jerusalem, Khomeini expressed his desire to liberate Palestine from the hands of what he would later often call "the enemy of Islam." Iran has supported rebellious groups throughout the region, and its support for Hamas and Hezbollah has resulted in international condemnation. This desire for Shia expansionism did not disappear after Khomeini's death; it can even be argued that the sectarian tone within the Islamic Republic of Iran has grown since then. The Friday prayers held in Tehran by Ali Khamenei can be seen as proof of the growing sectarian tone within the regime. Khamenei's speeches are extremely political and sectarian: he often voices extreme wishes, such as the removal of Israel from the world map, and directs fatwas towards those opposing the regime.
Iraq
The Sunni Iraqi insurgency and foreign Sunni terrorist organizations that came to Iraq after the fall of Saddam Hussein have targeted Shia civilians in sectarian attacks. Following the civil war, Sunnis have complained of discrimination by Iraq's Shia-majority governments, a perception bolstered by reports that Sunni detainees had allegedly been tortured in a compound used by government forces, discovered on 15 November 2005. This sectarianism has fueled massive emigration and internal displacement. Oppression of the Shia majority by the Sunni minority has a long history in Iraq. After the fall of the Ottoman Empire, the British government placed a Sunni Hashemite monarchy on the Iraqi throne, which suppressed various uprisings against its rule by Christian Assyrians and Shi'ites.
Syria
Although sectarianism has been described as one of the characteristic features of the Syrian civil war, the narrative of sectarianism has its origins in Syria's past.
Ottoman rule
The hostilities that took place in 1850 in Aleppo and subsequently in 1860 in Damascus had many causes and reflected long-standing tensions. However, scholars have argued that these eruptions of violence can also be partly attributed to the modernizing reforms, the Tanzimat, taking place within the Ottoman Empire, which had ruled Syria since 1516. The Tanzimat reforms attempted to bring about equality between Muslims and non-Muslims living in the Ottoman Empire. These reforms, combined with European interference on behalf of the Ottoman Christians, caused the non-Muslims to gain privileges and influence. In the silk trade, European powers formed ties with local sects, usually opting for a sect that adhered to a religion similar to that of their home countries, and thus not with Muslims. These developments caused new social classes to emerge, consisting mainly of Christians, Druzes and Jews, which stripped the previously dominant Muslim classes of their privileges. The involvement of another foreign power, though this time non-European, also influenced communal relations in Syria. Ibrahim Pasha of Egypt ruled Syria between 1831 and 1840. His divide-and-rule strategy contributed to the hostilities between the Druze and Maronite communities by arming the Maronite Christians. However, it is worth noting that the different sects did not fight each other out of religious motives, nor did Ibrahim Pasha aim to divide society along communal lines. This is illustrated by the unification of Druzes and Maronites in their revolts to oust Ibrahim Pasha in 1840, which shows the fluidity of communal alliances and animosities and the different, at times non-religious, reasons that may underlie sectarianism.
After Ottoman rule
Before the fall of the Ottoman Empire and the French Mandate in Syria, the Syrian territory had already witnessed massacres of Maronite Christians, other Christians, Alawites, Shias and Ismailis, which had left distrust between the members of different sects. In an attempt to protect the minority communities against the majority Sunni population, France, under the command of Henri Gouraud, created five states for the following sects: Armenians, Alawites, Druzes, Maronite Christians and Sunni Muslims. This focus on minorities was new and part of a French divide-and-rule strategy, which enhanced and politicized differences between sects. The French restructuring allowed the Alawite community to advance from its marginalized position. In addition, the Alawites were able to obtain a position of power through the granting of top-level positions to family members of the ruling clan or other tribal allies of the Alawite community. During the period 1961–1980, Syria was not necessarily ruled exclusively by the Alawite sect, but owing to the efforts of the Sunni Muslim extremist opponents of the Ba'ath regime in Syria, it was perceived as such. The Ba'ath regime, like other institutions of power, was dominated by the Alawite community. As a result, the regime was considered sectarian, which caused the Alawite community to cluster together as it feared for its position. This period is somewhat contradictory: Hafez al-Assad tried to create a Syrian Arab nationalism, yet the regime was still regarded as sectarian, and sectarian identities were reproduced and politicized. Sectarian tensions that later gave rise to the Syrian civil war had already appeared in society due to events in the decades preceding the uprising. One example is President Hafez al-Assad's involvement in the Lebanese civil war, in which he gave political aid to Maronite Christians in Lebanon. Many Sunni Muslims viewed this as an act of treason and linked al-Assad's actions to his Alawite identity. The Muslim Brothers, a segment of the Sunni Muslim community, used these tensions towards the Alawites as a tool to advance their political agenda and plans. The Muslim Brothers carried out several assassinations, mostly against Alawites but also against some Sunni Muslims; the failed assassination attempt on President Hafez al-Assad is arguably the most well known. Part of the animosity between the Alawites and the Sunni Islamists of the Muslim Brothers stems from the secularization of Syria, for which the latter hold the Alawites in power responsible.
Syrian Civil War
As of 2015, Sunni Muslims made up the majority of the Syrian population, about two-thirds, and are found throughout the country. The Alawites are the second largest group, making up around 10 percent of the population, which makes them a ruling minority. The Alawites were originally settled in the highlands of northwest Syria, but since the twentieth century have spread to places such as Latakia, Homs and Damascus. Other groups found in Syria include Christians, among them the Maronites, as well as Druzes and Twelver Shias. Although sectarian identities played a role in the unfolding of events of the Syrian Civil War, the importance of tribal and kinship relationships should not be underestimated, as they can be used to obtain and maintain power and loyalty.
At the start of the protests against President Bashar al-Assad in March 2011, there was no sectarian nature or approach involved. The opposition had national, inclusive goals and spoke in the name of a collective Syria, although the protesters were mainly Sunni Muslims. This changed after the protests and the ensuing civil war began to be portrayed in sectarian terms by the regime, as a result of which people started to mobilize along ethnic lines. However, this does not mean that the conflict is solely or primarily a sectarian conflict, as there were also socio-economic factors at play, mainly the result of Bashar al-Assad's mismanaged economic restructuring. The conflict has therefore been described as semi-sectarian: sectarianism was a factor in the civil war, but it did not stand alone in causing the war, and its importance has varied across time and place. In addition to local forces, the role of external actors, both in the conflict in general and in its sectarian aspect, should not be overlooked. Although foreign regimes at first supported the Free Syrian Army, they eventually ended up supporting sectarian militias with money and arms. However, it has to be said that the militias' sectarian nature did not only attract these flows of support; the militias also adopted a more sectarian and Islamic appearance in order to attract it.
Yemen
Introduction
In Yemen, there have been many clashes between Salafis and Shia Houthis. According to The Washington Post, "In today's Middle East, activated sectarianism affects the political cost of alliances, making them easier between co-religionists. That helps explain why Sunni-majority states are lining up against Iran, Iraq and Hezbollah over Yemen." Historically, divisions in Yemen along religious lines (sects) used to be less intense than those in Pakistan, Lebanon, Syria, Iraq, Saudi Arabia, and Bahrain. However, the situation changed dramatically after the Houthi takeover in 2014. Most political forces in Yemen are primarily characterized by regional interests and not by religious sectarianism. Regional interests include, for example, the north's proximity to the Hejaz, the south's coast along the Indian Ocean trade route, and the southeast's oil and gas fields. Yemen's northern population consists to a substantial extent of Zaydis, and its southern population predominantly of Shafi'is. Hadhramaut in Yemen's southeast has a distinct Sufi Ba'Alawi profile.
Ottoman era, 1849–1918
Sectarianism reached the region once known as Arabia Felix with the 1911 Treaty of Daan, which divided the Yemen Vilayet into an Ottoman-controlled section and an Ottoman-Zaydi-controlled section, the former dominated by Sunni Islam and the latter by Zaydi Shia Islam, thus dividing the Yemen Vilayet along Islamic sectarian lines. Yahya Muhammad Hamid ed-Din became the ruler of the Zaydi community within this Ottoman entity. Before the agreement, inter-communal battles between Shafi'is and Zaydis had never occurred in the Yemen Vilayet. After the agreement, sectarian strife still did not surface between religious communities: feuds between Yemenis were nonsectarian in nature, and when Zaydis attacked Ottoman officials, it was not because they were Sunnis. Following the collapse of the Ottoman Empire, the divide between Shafi'is and Zaydis changed with the establishment of the Kingdom of Yemen.
Shafi'i scholars were compelled to accept the supreme authority of Yahya Muhammad Hamid ed-Din, and the army "institutionalized the supremacy of the Zaydi tribesman over the Shafi'is".
Unification period, 1918–1990
Before the 1990 Yemeni unification, the region had never been united as one country. In order to create unity and overcome sectarianism, the Qahtanite myth was used as a nationalist narrative, although not all ethnic groups of Yemen, such as the Al-Akhdam and the Teimanim, fit into this narrative. The latter established a Jewish kingdom in ancient Yemen, the only one ever created outside Palestine. A massacre of Christians, carried out by the Jewish king Dhu Nuwas, eventually led to the fall of the Himyarite Kingdom. In modern times, the establishment of the Jewish state resulted in the 1947 Aden riots, after which most Teimanim left the country during Operation Magic Carpet. Conflicting geopolitical interests surfaced during the North Yemen Civil War (1962–1970). Wahhabist Saudi Arabia and other Arab monarchies supported Muhammad al-Badr, the deposed Zaydi imam of the Kingdom of Yemen. His adversary, Abdullah al-Sallal, received support from Egypt and other Arab republics. Neither international backing was based on religious sectarian affiliation. In Yemen, however, President Abdullah al-Sallal (a Zaydi) sidelined his vice-president Abdurrahman al-Baidani (a Shafi'i) for not being a member of the Zaydi sect. Shafi'i officials of North Yemen also lobbied for "the establishment of a separate Shaffi'i state in Lower Yemen" in this period.
Contemporary Sunni-Shia rivalry
According to Lisa Wedeen, the perceived sectarian rivalry between Sunnis and Shias in the Muslim world is not the same as Yemen's sectarian rivalry between Salafists and Houthis. Not all supporters of the Houthis' Ansar Allah movement are Shia, and not all Zaydis are Houthis. Although most Houthis are followers of the Zaydi branch of Shia Islam, most Shias in the world belong to the Twelver branch. Yemen is not geographically in proximity to the so-called Shia Crescent. Linking Hezbollah and Iran, whose populations are overwhelmingly Twelver Shia, organically with the Houthis has been exploited for political purposes. Saudi Arabia emphasized alleged Iranian military support for the Houthis during Operation Scorched Earth. The slogan of the Houthi movement is "Death to America, death to Israel, a curse upon the Jews". This is a trope of Iran and Hezbollah, so the Houthis seem to have no qualms about a perceived association with them.
Tribes and political movements
Tribal culture in the southern regions has virtually disappeared through policies of the People's Democratic Republic of Yemen. However, Yemen's northern part is still home to the powerful tribal confederations of Bakil and Hashid. These tribal confederations maintain their own institutions, such as prisons, courts, and armed forces, without state interference. Unlike the Bakils, the Hashids adopted Salafist tenets, and during the Sa'dah War (2004–2015) sectarian tensions materialized. Yemen's Salafists attacked the Zaydi Mosque of Razih in Sa'dah and destroyed tombs of Zaydi imams across Yemen. In turn, Houthis attacked Yemen's main Salafist center, that of Muqbil bin Hadi al-Wadi'i, during the Siege of Dammaj. Houthis also attacked the Salafist Bin Salman Mosque and threatened various Teimanim families. Members of Hashid's elite founded the Sunni Islamist party Al-Islah and, as a counterpart, Hizb al-Haqq was founded by Zaydis with the support of Bakil's elite.
Violent non-state actors
Al-Qaeda, Ansar al-Sharia and Daesh, particularly active in southern cities like Mukalla, fuel sectarian tendencies with their animosity towards Yemen's Isma'ilis, Zaydis, and others. An assassination attempt in 1995 on Hosni Mubarak, carried out by Yemen's Islamists, damaged the country's international reputation. The war on terror further strengthened the impact of Salafist-jihadist groups on Yemen's politics. The 2000 USS Cole bombing resulted in US military operations on Yemeni soil. Collateral damage caused by cruise missiles, cluster bombs, and drone attacks deployed by the United States compromised Yemen's sovereignty.
Ali Abdullah Saleh's reign
Ali Abdullah Saleh is a Zaydi from the Hashid's Sanhan clan and founder of the nationalist party General People's Congress. During his decades-long reign as head of state, he used the ideological dissemination of Sa'dah's Salafists against Zaydi advocacy of Islamic revival. In addition, the Armed Forces of Yemen used Salafists as mercenaries to fight against the Houthis. However, Ali Abdullah Saleh also used the Houthis as a political counterweight to Yemen's Muslim Brotherhood. Because of the Houthis' persistent opposition to the central government, Upper Yemen was economically marginalized by the state. This policy of divide and rule executed by Ali Abdullah Saleh worsened Yemen's social cohesion and nourished sectarian persuasions within Yemeni society. Following the Arab Spring and the Yemeni Revolution, Ali Abdullah Saleh was forced to step down as president in 2012. Subsequently, a complex and violent power struggle broke out between three national alliances: (1) Ali Abdullah Saleh, his political party the General People's Congress, and the Houthis; (2) Ali Mohsen al-Ahmar, supported by the political party Al-Islah; (3) Abdrabbuh Mansur Hadi, supported by the Joint Meeting Parties. According to Ibrahim Fraihat, "Yemen's conflict has never been about sectarianism, as the Houthis were originally motivated by economic and political grievances. However, in 2014, the regional context substantially changed". The Houthi takeover in 2014–2015 provoked a Saudi-led intervention, strengthening the sectarian dimension of the conflict. Hezbollah's Hassan Nasrallah heavily criticized the Saudi intervention, bolstering the regional Sunni-Shia geopolitical dynamic behind it.
Saudi Arabia
Sectarianism in Saudi Arabia is exemplified by the tensions with its Shi'ite population, which constitutes up to 15% of the Saudi population. This includes the anti-Shi'ite policies and persecution of the Shi'ites by the Saudi government. According to Human Rights Watch, Shi'ites face marginalisation socially, politically, religiously, legally and economically, as well as discrimination in education and in the workplace. This history dates back to 1744, with the establishment of a coalition between the House of Saud and the Wahhabis, who equate Shi'ism with polytheism. Over the course of the twentieth century, clashes and tensions unfolded between the Shi'ites and the Saudi regime, including the 1979 Qatif Uprising and the repercussions of the 1987 Makkah Incident. Though relations underwent a détente in the 1990s and the early 2000s, tensions rose again after the 2003 US-led invasion of Iraq (owing to a broader rise of Shi'ism in the region) and peaked during the Arab Spring.
Sectarianism in Saudi Arabia has attracted widespread attention from human rights groups, including Human Rights Watch and Amnesty International, especially after the 2016 execution of the Shi'ite cleric Nimr al-Nimr, who had been active in the 2011 domestic protests. Despite Crown Prince Mohammed bin Salman's reforms, Shi'ites continue to face discrimination today.
Lebanon
Sectarianism in Lebanon has been formalized and legalized within state and non-state institutions and is inscribed in its constitution. Lebanon recognizes 18 different sects, mainly within the Muslim and Christian worlds. The foundations of sectarianism in Lebanon date back to the mid-19th century during Ottoman rule. Sectarianism was subsequently reinforced with the creation of the Republic of Lebanon in 1920, its 1926 constitution, and the National Pact of 1943. In 1990, with the Taif Agreement, the constitution was revised but did not structurally change aspects relating to political sectarianism. The dynamic nature of sectarianism in Lebanon has prompted some historians and authors to refer to it as "the sectarian state par excellence", because it is a mixture of religious communities and their myriad sub-divisions, with a constitutional and political order to match. Yet the reality on the ground has been more complex than such a conclusion, because, as Nadya Sbaiti has shown in her research, in the aftermath of the First World War the "need of shaping a collective future that paralleled shifting conceptions of the newly territorialized nation-state of Lebanon" was clearly present. "Over the course of the Mandate, educational practitioners and the wide range of schools that proliferated helped shape the epistemological infrastructure en route to creating this entity. By 'epistemological infrastructure', one means the vast array of ideas that become validated as truths and convincing explanations." In other words, contrary to the colonial sectarian education system, "students, parents, and teachers created educational content through curricula, and educational practices so as to produce new 'communities of knowledge'. These communities of knowledge, connected as they were by worlds of ideas and networks of knowledge, often transcended confessional, sociopolitical, and even at times regional subjectivities."
Palaeogeography
Palaeogeography (or paleogeography) is the study of historical geography, generally physical landscapes. Palaeogeography can also include the study of human or cultural environments. When the focus is specifically on landforms, the term paleogeomorphology is sometimes used instead. Paleomagnetism, paleobiogeography, and tectonic history are among its main tools. Palaeogeography yields information that is crucial to scientific understanding in a variety of contexts. For example, palaeogeographical analysis of sedimentary basins plays a key role in the field of petroleum geology, because ancient geomorphological environments of the Earth's surface are preserved in the stratigraphic record. Palaeogeographers also study the sedimentary environment associated with fossils for clues to the evolutionary development of extinct species. Palaeogeography is furthermore crucial to the understanding of palaeoclimatology, due to the influence of the positions of continents and oceans on global and regional climates. Palaeogeographical evidence contributed to the development of continental drift theory, and continues to inform current plate tectonic theories, yielding information about the shape and latitudinal location of supercontinents such as Pangaea and ancient oceans such as Panthalassa, thus enabling reconstruction of prehistoric continents and oceans.
Paleoanthropology
Paleoanthropology or paleo-anthropology is a branch of paleontology and anthropology which seeks to understand the early development of anatomically modern humans, a process known as hominization, through the reconstruction of evolutionary kinship lines within the family Hominidae, working from biological evidence (such as petrified skeletal remains, bone fragments, footprints) and cultural evidence (such as stone tools, artifacts, and settlement localities). The field draws from and combines primatology, paleontology, biological anthropology, and cultural anthropology. As technologies and methods advance, genetics plays an ever-increasing role, in particular the examination and comparison of DNA structure as a vital research tool for tracing the evolutionary kinship lines of related species and genera. Etymology The term paleoanthropology derives from Greek palaiós (παλαιός) "old, ancient", ánthrōpos (ἄνθρωπος) "man, human" and the suffix -logía (-λογία) "study of". Hominoid taxonomies Hominoids are a primate superfamily; the hominid family is currently considered to comprise both the great ape lineages and the human lineages within the hominoid superfamily. The "Homininae" comprise both the human lineages and the African ape lineages. The term "African apes" refers only to chimpanzees and gorillas. The terminology of the immediate biological family is currently in flux. The term "hominin" refers to any genus in the human tribe (Hominini), of which Homo sapiens (modern humans) is the only living species. History 18th century In 1758 Carl Linnaeus introduced the name Homo sapiens as a species name in the 10th edition of his work Systema Naturae, although without a scientific description of the species-specific characteristics. Since the great apes were considered the closest relatives of human beings on the basis of morphological similarity, it was speculated in the 19th century that the closest living relatives of humans were the chimpanzees (genus Pan) and gorillas (genus Gorilla), and, based on the natural range of these creatures, it was surmised that humans shared a common ancestor with the African apes and that fossils of these ancestors would ultimately be found in Africa. 19th century The science arguably began in the late 19th century when important discoveries occurred that led to the study of human evolution. The discovery of the Neanderthal in Germany, Thomas Huxley's Evidence as to Man's Place in Nature, and Charles Darwin's The Descent of Man were all important to early paleoanthropological research. The modern field of paleoanthropology began in the 19th century with the discovery of "Neanderthal man" (the eponymous skeleton was found in 1856, but there had been finds elsewhere since 1830), and with evidence of so-called cave men. The idea that humans are similar to certain great apes had been obvious to people for some time, but the idea of the biological evolution of species in general was not legitimized until after Charles Darwin published On the Origin of Species in 1859. Though Darwin's first book on evolution did not address the specific question of human evolution ("light will be thrown on the origin of man and his history" was all Darwin wrote on the subject), the implications of evolutionary theory were clear to contemporary readers. Debates between Thomas Huxley and Richard Owen focused on the idea of human evolution. Huxley convincingly illustrated many of the similarities and differences between humans and apes in his 1863 book Evidence as to Man's Place in Nature.
By the time Darwin published his own book on the subject, Descent of Man, it was already a well-known interpretation of his theory—and the interpretation which made the theory highly controversial. Even many of Darwin's original supporters (such as Alfred Russel Wallace and Charles Lyell) balked at the idea that human beings could have evolved their apparently boundless mental capacities and moral sensibilities through natural selection. Asia Prior to the general acceptance of Africa as the root of genus Homo, 19th-century naturalists sought the origin of humans in Asia. So-called "dragon bones" (fossil bones and teeth) from Chinese apothecary shops were known, but it was not until the early 20th century that German paleontologist, Max Schlosser, first described a single human tooth from Beijing. Although Schlosser (1903) was very cautious, identifying the tooth only as "?Anthropoide g. et sp. indet?," he was hopeful that future work would discover a new anthropoid in China. Eleven years later, the Swedish geologist Johan Gunnar Andersson was sent to China as a mining advisor and soon developed an interest in "dragon bones". It was he who, in 1918, discovered the sites around Zhoukoudian, a village about 50 kilometers southwest of Beijing. However, because of the sparse nature of the initial finds, the site was abandoned. Work did not resume until 1921, when the Austrian paleontologist, Otto Zdansky, fresh with his doctoral degree from Vienna, came to Beijing to work for Andersson. Zdansky conducted short-term excavations at Locality 1 in 1921 and 1923, and recovered only two teeth of significance (one premolar and one molar) that he subsequently described, cautiously, as "?Homo sp." (Zdansky, 1927). With that done, Zdansky returned to Austria and suspended all fieldwork. News of the fossil hominin teeth delighted the scientific community in Beijing, and plans for developing a larger, more systematic project at Zhoukoudian were soon formulated. At the epicenter of excitement was Davidson Black, a Canadian-born anatomist working at Peking Union Medical College. Black shared Andersson’s interest, as well as his view that central Asia was a promising home for early humankind. In late 1926, Black submitted a proposal to the Rockefeller Foundation seeking financial support for systematic excavation at Zhoukoudian and the establishment of an institute for the study of human biology in China. The Zhoukoudian Project came into existence in the spring of 1927, and two years later, the Cenozoic Research Laboratory of the Geological Survey of China was formally established. Being the first institution of its kind, the Cenozoic Laboratory opened up new avenues for the study of paleogeology and paleontology in China. The Laboratory was the precursor of the Institute of Vertebrate Paleontology and Paleoanthropology (IVPP) of the Chinese Academy of Science, which took its modern form after 1949. The first of the major project finds are attributed to the young Swedish paleontologist, Anders Birger Bohlin, then serving as the field advisor at Zhoukoudian. He recovered a left lower molar that Black (1927) identified as unmistakably human (it compared favorably to the previous find made by Zdansky), and subsequently coined it Sinanthropus pekinensis. The news was at first met with skepticism, and many scholars had reservations that a single tooth was sufficient to justify the naming of a new type of early hominin. 
Yet within a little more than two years, in the winter of 1929, Pei Wenzhong, then the field director at Zhoukoudian, unearthed the first complete calvaria of Peking Man. Twenty-seven years after Schlosser's initial description, the antiquity of early humans in East Asia was no longer a speculation, but a reality. Excavations continued at the site and remained fruitful until the outbreak of the Second Sino-Japanese War in 1937. The decade-long research yielded a wealth of faunal and lithic materials, as well as hominin fossils. These included 5 more complete calvaria, 9 large cranial fragments, 6 facial fragments, 14 partial mandibles, 147 isolated teeth, and 11 postcranial elements, estimated to represent at least 40 individuals. Evidence of fire, marked by ash lenses and burned bones and stones, was apparently also present, although recent studies have challenged this view. Franz Weidenreich came to Beijing soon after Black's untimely death in 1934, and took charge of the study of the hominin specimens. Following the loss of the Peking Man materials in late 1941, scientific endeavors at Zhoukoudian slowed, primarily because of lack of funding. A frantic search for the missing fossils took place and continued well into the 1950s. After the establishment of the People's Republic of China in 1949, excavations resumed at Zhoukoudian. But with political instability and social unrest brewing in China beginning in 1966, and major discoveries at Olduvai Gorge and East Turkana (Koobi Fora), the paleoanthropological spotlight shifted westward to East Africa. Although China re-opened its doors to the West in the late 1970s, national policy calling for self-reliance, coupled with a widened language barrier, thwarted the possibility of renewed scientific relationships. Indeed, Harvard anthropologist K. C. Chang noted, "international collaboration (in developing nations very often a disguise for Western domination) became a thing of the past" (1977: 139). Africa 1920s – 1940s The first paleoanthropological find made in Africa was the 1921 discovery of the Kabwe 1 skull at Kabwe (Broken Hill), Zambia. Initially, this specimen was named Homo rhodesiensis; however, today it is considered part of the species Homo heidelbergensis. In 1924, in a limestone quarry at Taung, Professor Raymond Dart discovered a remarkably well-preserved juvenile specimen (face and brain endocast), which he named Australopithecus africanus (Australopithecus meaning "Southern Ape"). Although the brain was small (410 cm3), its shape was rounded, unlike the brain shape of chimpanzees and gorillas, and more like the shape seen in modern humans. In addition, the specimen exhibited short canine teeth, and the anterior placement of the foramen magnum was more like that seen in modern humans than that seen in chimpanzees and gorillas, suggesting that this species was bipedal. All of these traits convinced Dart that the Taung child was a bipedal human ancestor, a transitional form between ape and human. However, Dart's conclusions were largely ignored for decades, as the prevailing view of the time was that a large brain evolved before bipedality. It took the discovery of additional australopith fossils in Africa that resembled his specimen, and the rejection of the Piltdown Man hoax, for Dart's claims to be taken seriously. In the 1930s, paleontologist Robert Broom discovered and described a new species at Kromdraai, South Africa.
Although similar in some ways to Dart's Australopithecus africanus, Broom's specimen had much larger cheek teeth. Because of this difference, Broom named his specimen Paranthropus robustus, using a new genus name. In doing so, he established the practice of grouping gracile australopiths in the genus Australopithecus and robust australopiths in the genus Paranthropus. During the 1960s, the robust variety was commonly moved into Australopithecus. A more recent consensus has been to return to the original classification of Paranthropus as a separate genus. 1950s – 1990s The second half of the twentieth century saw a significant increase in the number of paleoanthropological finds made in Africa. Many of these finds were associated with the work of the Leakey family in eastern Africa. In 1959, Mary Leakey's discovery of the Zinj fossil (OH 5) at Olduvai Gorge, Tanzania, led to the identification of a new species, Paranthropus boisei. In 1960, the Leakeys discovered the fossil OH 7, also at Olduvai Gorge, and assigned it to a new species, Homo habilis. In 1972, Bernard Ngeneo, a fieldworker working for Richard Leakey, discovered the fossil KNM-ER 1470 near Lake Turkana in Kenya. KNM-ER 1470 has been interpreted as either a distinct species, Homo rudolfensis, or alternatively as evidence of sexual dimorphism in Homo habilis. In 1967, Richard Leakey reported the earliest definitive examples of anatomically modern Homo sapiens from the site of Omo Kibish in Ethiopia, known as the Omo remains. In the late 1970s, Mary Leakey excavated the famous Laetoli footprints in Tanzania, which demonstrated the antiquity of bipedality in the human lineage. In 1985, Richard Leakey and Alan Walker discovered a specimen they called the Black Skull, found near Lake Turkana. This specimen was assigned to another species, Paranthropus aethiopicus. In 1994, a team led by Meave Leakey announced a new species, Australopithecus anamensis, based on specimens found near Lake Turkana. Numerous other researchers have made important discoveries in eastern Africa. Possibly the most famous is the Lucy skeleton, discovered in 1974 by Donald Johanson and Maurice Taieb in Ethiopia's Afar Triangle at the site of Hadar. On the basis of this skeleton and subsequent discoveries, the researchers described a new species, Australopithecus afarensis. In 1975, Colin Groves and Vratislav Mazák announced a new species of human they called Homo ergaster. Homo ergaster specimens have been found at numerous sites in eastern and southern Africa. In 1994, Tim D. White announced a new species, Ardipithecus ramidus, based on fossils from Ethiopia. In 1999, Berhane Asfaw and Tim D. White named Australopithecus garhi based on specimens discovered in Ethiopia's Awash valley. In 2001, Meave Leakey announced another new species, Kenyanthropus platyops, based on the cranium KNM-WT 40000, discovered in 1999 at Lake Turkana. 21st century In the 21st century, numerous fossils have been found that add to current knowledge of existing species. For example, in 2000, Zeresenay Alemseged discovered an Australopithecus afarensis child fossil, called Selam, at the site of Dikika in the Afar region of Ethiopia. This find is particularly important because the fossil included a preserved hyoid bone, something rarely found in other paleoanthropological fossils but important for understanding the evolution of speech capacities. Two new species from southern Africa have been discovered and described in recent years.
In 2010, a team led by Lee Berger announced a new species, Australopithecus sediba, based on fossils they had discovered in 2008 in the Malapa cave in South Africa. In 2015, a team also led by Lee Berger announced another species, Homo naledi, based on fossils representing 15 individuals from the Rising Star Cave system in South Africa. New species have also been found in eastern Africa. In 2001, Brigitte Senut and Martin Pickford described the species Orrorin tugenensis, based on fossils found in Kenya in 2000. In 2004, Yohannes Haile-Selassie announced that some specimens previously labeled as Ardipithecus ramidus made up a different species, Ardipithecus kadabba. In 2015, Haile-Selassie announced another new species, Australopithecus deyiremeda, though some scholars are skeptical that the associated fossils truly represent a unique species. Although most hominin fossils from Africa have been found in eastern and southern Africa, there are a few exceptions. One is Sahelanthropus tchadensis, discovered in the central African country of Chad and described in 2002. This find is important because it widens the assumed geographic range of early hominins. Renowned paleoanthropologists Robert Ardrey (1908–1980) Lee Berger (1965–) Davidson Black (1884–1934) Robert Broom (1866–1951) Michel Brunet (1940–) J. Desmond Clark (1916–2002) Carleton S. Coon (1904–1981) Raymond Dart (1893–1988) Eugene Dubois (1858–1940) Johann Carl Fuhlrott (1803–1877) Yohannes Haile-Selassie (1961–) Sonia Harmand (1974–) John D. Hawks Aleš Hrdlička (1869–1943) Glynn Isaac (1937–1985) Donald C. Johanson (1943–) Kamoya Kimeu (1938–2022) Jeffrey Laitman (1951–) Louis Leakey (1903–1972) Meave Leakey (1942–) Mary Leakey (1913–1996) Richard Leakey (1944–2022) André Leroi-Gourhan (1911–1986) Peter Wilhelm Lund (1801–1880) Kenneth Oakley (1911–1981) David Pilbeam (1940–) Gustav Heinrich Ralph von Koenigswald (1902–1982) John T. Robinson (1923–2001) Jeffrey H. Schwartz (1948–) Chris Stringer (1947–) Ian Tattersall (1945–) Pierre Teilhard de Chardin (1881–1955) Phillip V. Tobias (1925–2012) Erik Trinkaus (1948–) Alan Walker (1938–2017) Franz Weidenreich (1873–1948) Tim D. White (1950–) Milford H. Wolpoff (1942–) Bernard Wood (1945–)
18th century
The 18th century lasted from 1 January 1701 (represented by the Roman numerals MDCCI) to 31 December 1800 (MDCCC). During the 18th century, elements of Enlightenment thinking culminated in the Atlantic Revolutions. Revolutions began to challenge the legitimacy of monarchical and aristocratic power structures. The Industrial Revolution began during mid-century, leading to radical changes in human society and the environment. The European colonization of the Americas and other parts of the world intensified and associated mass migrations of people grew in size as part of the Age of Sail. During the century, slave trading expanded across the shores of the Atlantic Ocean, while declining in Russia and China. Western historians have occasionally defined the 18th century otherwise for the purposes of their work. For example, the "short" 18th century may be defined as 1715–1789, denoting the period of time between the death of Louis XIV of France and the start of the French Revolution, with an emphasis on directly interconnected events. To historians who expand the century to include larger historical movements, the "long" 18th century may run from the Glorious Revolution of 1688 to the Battle of Waterloo in 1815 or even later. In Europe, philosophers ushered in the Age of Enlightenment. This period coincided with the French Revolution of 1789, and was later compromised by the excesses of the Reign of Terror. At first, many monarchies of Europe embraced Enlightenment ideals, but in the wake of the French Revolution they feared loss of power and formed broad coalitions to oppose the French Republic in the French Revolutionary Wars. Various conflicts throughout the century, including the War of the Spanish Succession and the Seven Years' War, saw Great Britain triumph over its rivals to become the preeminent power in Europe. However, Britain's attempts to exert its authority over the Thirteen Colonies became a catalyst for the American Revolution. The 18th century also marked the end of the Polish–Lithuanian Commonwealth as an independent state. Its semi-democratic government system was not robust enough to prevent partition by the neighboring states of Austria, Prussia, and Russia. In West Asia, Nader Shah led Persia in successful military campaigns. The Ottoman Empire experienced an unprecedented period of peace and economic expansion, taking no part in European wars from 1740 to 1768. As a result, the empire was not exposed to Europe's military improvements during the Seven Years' War. The Ottoman military consequently lagged behind and suffered several defeats against Russia in the second half of the century. In South Asia, the death of Mughal emperor Aurangzeb was followed by the expansion of the Maratha Confederacy and an increasing level of European influence and control in the region. In 1739, Persian emperor Nader Shah invaded and plundered Delhi, the capital of the Mughal Empire. Later, his general Ahmad Shah Durrani scored another victory against the Marathas, the then dominant power in India, in the Third Battle of Panipat in 1761. By the middle of the century, the British East India Company began to conquer eastern India, and by the end of the century, the Anglo-Mysore Wars against Tipu Sultan and his father Hyder Ali, led to Company rule over the south. In East Asia, the century marked the High Qing era and the continual seclusion policy of the Tokugawa shogunate. 
In Southeast Asia, the Konbaung–Ayutthaya Wars and the Tây Sơn Wars broke out while the Dutch East India Company established increasing levels of control over the Mataram Sultanate. In Africa, the Ethiopian Empire underwent the Zemene Mesafint, a period when the country was ruled by a class of regional noblemen and the emperor was merely a figurehead. The Atlantic slave trade also saw the continued involvement of states such as the Oyo Empire. In Oceania, the European colonization of Australia and New Zealand began during the late half of the century. In the Americas, the United States declared its independence from Great Britain. In 1776, Thomas Jefferson wrote the Declaration of Independence. In 1789, George Washington was inaugurated as the first president. Benjamin Franklin traveled to Europe where he was hailed as an inventor. Examples of his inventions include the lightning rod and bifocal glasses. Túpac Amaru II led an uprising that sought to end Spanish colonial rule in Peru. Events 1701–1750 1700–1721: Great Northern War between the Russian and Swedish Empires. 1701: Kingdom of Prussia declared under King Frederick I. 1701: The Battle of Feyiase marks the rise of the Ashanti Empire. 1701–1714: The War of the Spanish Succession is fought, involving most of continental Europe. 1702–1715: Camisard rebellion in France. 1703: Saint Petersburg is founded by Peter the Great; it is the Russian capital until 1918. 1703–1711: The Rákóczi uprising against the Habsburg monarchy. 1704: End of Japan's Genroku period. 1704: First Javanese War of Succession. 1706–1713: The War of the Spanish Succession: French troops defeated at the Battle of Ramillies and the Siege of Turin. 1707: Death of Mughal Emperor Aurangzeb leads to the fragmentation of the Mughal Empire. 1707: The Act of Union is passed, merging the Scottish and English Parliaments, thus establishing the Kingdom of Great Britain. 1708: The Company of Merchants of London Trading into the East Indies and English Company Trading to the East Indies merge to form the United Company of Merchants of England Trading to the East Indies. 1708–1709: Famine kills one-third of East Prussia's population. 1709: Foundation of the Hotak Empire. 1709: The Great Frost of 1709 marks the coldest winter in 500 years, contributing to the defeat of Sweden at Poltava. 1710: The world's first copyright legislation, Britain's Statute of Anne, takes effect. 1710–1711: Ottoman Empire fights Russia in the Russo-Turkish War and regains Azov. 1711: Bukhara Khanate dissolves as local begs seize power. 1711–1715: Tuscarora War between British, Dutch, and German settlers and the Tuscarora people of North Carolina. 1713: The Kangxi Emperor acknowledges the full recovery of the Chinese economy since its apex during the Ming. 1714: In Amsterdam, Daniel Gabriel Fahrenheit invents the mercury-in-glass thermometer, which remains the most reliable and accurate thermometer until the electronic era. 1715: The first Jacobite rising breaks out; the British halt the Jacobite advance at the Battle of Sheriffmuir; Battle of Preston. 1716: Establishment of the Sikh Confederacy along the present-day India-Pakistan border. 1716–1718: Austro-Venetian-Turkish War. 1718: The city of New Orleans is founded by the French in North America. 1718–1720: War of the Quadruple Alliance with Spain versus France, Britain, Austria, and the Netherlands. 1718–1730: Tulip period of the Ottoman Empire. 1719: Second Javanese War of Succession. 1720: The South Sea Bubble. 
1720–1721: The Great Plague of Marseille. 1720: Qing forces oust Dzungar invaders from Tibet. 1721: The Treaty of Nystad is signed, ending the Great Northern War. 1721: Sack of Shamakhi, massacre of its Shia population by Sunni Lezgins. 1722: Siege of Isfahan results in the handover of Iran to the Hotaki Afghans. 1722–1723: Russo-Persian War. 1722–1725: Controversy over William Wood's halfpence leads to the Drapier's Letters and begins the Irish economic independence from England movement. 1723: Slavery is abolished in Russia; Peter the Great converts household slaves into house serfs. 1723–1730: The "Great Disaster", an invasion of Kazakh territories by the Dzungars. 1723–1732: The Qing and the Dzungars fight a series of wars across Qinghai, Dzungaria, and Outer Mongolia, with inconclusive results. 1724: Daniel Gabriel Fahrenheit proposes the Fahrenheit temperature scale. 1725: Austro-Spanish alliance revived. Russia joins in 1726. 1727–1729: Anglo-Spanish War ends inconclusively. 1730: Mahmud I takes over Ottoman Empire after the Patrona Halil revolt, ending the Tulip period. 1730–1760: The First Great Awakening takes place in Great Britain and North America. 1732–1734: Crimean Tatar raids into Russia. 1733–1738: War of the Polish Succession. 1735–1739: Austro-Russo-Turkish War. 1735–1799: The Qianlong Emperor of China oversees a huge expansion in territory. 1738–1756: Famine across the Sahel; half the population of Timbuktu dies. 1737–1738: Hotak Empire ends after the siege of Kandahar by Nader Shah. 1739: Great Britain and Spain fight the War of Jenkins' Ear in the Caribbean. 1739: Nader Shah defeats a pan-Indian army of 300,000 at the Battle of Karnal. Taxation is stopped in Iran for three years. 1739–1740: Nader Shah's Sindh expedition. 1740: George Whitefield brings the First Great Awakening to New England 1740–1741: Famine in Ireland kills 20 percent of the population. 1741–1743: Iran invades Uzbekistan, Khwarazm, Dagestan, and Oman. 1741–1751: Maratha invasions of Bengal. 1740–1748: War of the Austrian Succession. 1742: Marvel's Mill, the first water-powered cotton mill, begins operation in England. 1742: Anders Celsius proposes an inverted form of the centigrade temperature, which is later renamed Celsius in his honor. 1742: Premiere of George Frideric Handel's Messiah. 1743–1746: Another Ottoman-Persian War involves 375,000 men but ultimately ends in a stalemate. 1744: The First Saudi State is founded by Mohammed Ibn Saud. 1744: Battle of Toulon is fought off the coast of France. 1744–1748: The First Carnatic War is fought between the British, the French, the Marathas, and Mysore in India. 1745: Second Jacobite rising is begun by Charles Edward Stuart in Scotland. 1747: The Durrani Empire is founded by Ahmad Shah Durrani. 1748: The Treaty of Aix-La-Chapelle ends the War of the Austrian Succession and First Carnatic War. 1748–1754: The Second Carnatic War is fought between the British, the French, the Marathas, and Mysore in India. 1750: Peak of the Little Ice Age. 1751–1800 1752: The British Empire adopts the Gregorian Calendar, skipping 11 days from 3 September to 13 September. On the calendar, 2 September is followed directly by 14 September. 1754: The Treaty of Pondicherry ends the Second Carnatic War and recognizes Muhammed Ali Khan Wallajah as Nawab of the Carnatic. 1754: King's College is founded by a royal charter of George II of Great Britain. 
1754–1763: The French and Indian War, the North American chapter of the Seven Years' War, is fought in colonial North America, mostly by the French and their allies against the English and their allies. 1755: The great Lisbon earthquake destroys most of Portugal's capital and kills up to 100,000. 1755: The Dzungar genocide depopulates much of northern Xinjiang, allowing for Han, Uyghur, Khalkha Mongol, and Manchu colonization. 1755–1763: The Great Upheaval forces transfer of the French Acadian population from Nova Scotia and New Brunswick. 1756–1763: The Seven Years' War is fought among European powers in various theaters around the world. 1756–1763: The Third Carnatic War is fought between the British, the French, and Mysore in India. 1757: British conquest of Bengal. 1760: George III becomes King of Britain. 1761: Maratha Empire defeated at Battle of Panipat. 1762–1796: Reign of Catherine the Great of Russia. 1763: The Treaty of Paris ends the Seven Years' War and Third Carnatic War. 1764: Dahomey and the Oyo Empire defeat the Ashanti army at the Battle of Atakpamé. 1764: The Mughals are defeated at the Battle of Buxar. 1765: The Stamp Act is introduced into the American colonies by the British Parliament. 1765–1767: The Burmese invade Thailand and utterly destroy Attuthaya. 1765–1769: Burma under Hsinbyushin repels four invasions from Qing China, securing hegemony over the Shan states. 1766: Christian VII becomes king of Denmark. He was king of Denmark to 1808. 1766–1799: Anglo-Mysore Wars. 1767: Taksin expels Burmese invaders and reunites Thailand under an authoritarian regime. 1768–1772: War of the Bar Confederation. 1768–1774: Russo-Turkish War. 1769: Spanish missionaries establish the first of 21 missions in California. 1769–1770: James Cook explores and maps New Zealand and Australia. 1769–1773: The Bengal famine of 1770 kills one-third of the Bengal population. 1769: The French East India Company dissolves, only to be revived in 1785. 1769: French expeditions capture clove plants in Ambon, ending the Dutch East India Company's (VOC) monopoly of the plant. 1770–1771: Famine in Czech lands kills hundreds of thousands. 1771: The Plague Riot in Moscow. 1771: The Kalmyk Khanate dissolves as the territory becomes colonized by Russians. More than a hundred thousand Kalmyks migrate back to Qing Dzungaria. 1772: Gustav III of Sweden stages a coup d'état, becoming almost an absolute monarch. 1772–1779: Maratha Empire fights Britain and Raghunathrao's forces during the First Anglo-Maratha War. 1772–1795: The Partitions of Poland end the Polish–Lithuanian Commonwealth and erase Poland from the map for 123 years. 1773–1775: Pugachev's Rebellion, the largest peasant revolt in Russian history. 1773: East India Company starts operations in Bengal to smuggle opium into China. 1775: Russia imposes a reduction in autonomy on the Zaporizhian Cossacks of Ukraine. 1775–1782: First Anglo-Maratha War. 1775–1783: American Revolutionary War. 1776: Several kongsi republics are founded by Chinese settlers in the island of Borneo. They are some of the first democracies in Asia. 1776–1777: A Spanish-Portuguese War occurs over land in the South American frontiers. 1776: Illuminati founded by Adam Weishaupt. 1776: The United States Declaration of Independence is adopted by the Second Continental Congress in Philadelphia. 1776: Adam Smith publishes The Wealth of Nations. 1778: James Cook becomes the first European to land on the Hawaiian Islands. 1778: Franco-American alliance signed. 
1778: Spain acquires its first permanent holding in Africa from the Portuguese, which is administered by the newly established La Plata Viceroyalty. 1778: Vietnam is reunified for the first time in 200 years by the Tây Sơn brothers. The Tây Sơn dynasty is established, terminating the Lê dynasty. 1779–1879: Xhosa Wars between British and Boer settlers and the Xhosas in the South African Republic. 1779–1783: Britain loses several islands and colonial outposts all over the world to the combined Franco-Spanish navy. 1779: Iran enters yet another period of conflict and civil war after the prosperous reign of Karim Khan Zand. 1780: Outbreak of the indigenous rebellion against Spanish colonization led by Túpac Amaru II in Peru. 1781: The city of Los Angeles is founded by Spanish settlers. 1781–1785: Serfdom is abolished in the Austrian monarchy (first step; second step in 1848). 1782: The Thonburi Kingdom of Thailand is dissolved after a palace coup. 1783: The Treaty of Paris formally ends the American Revolutionary War. 1783: Russian annexation of Crimea. 1785–1791: Imam Sheikh Mansur, a Chechen warrior and Muslim mystic, leads a coalition of Muslim Caucasian tribes from throughout the Caucasus in a holy war against Russian settlers and military bases in the Caucasus, as well as against local traditionalists, who followed the traditional customs and common law (Adat) rather than the theocratic Sharia. 1785–1795: The Northwest Indian War is fought between the United States and Native Americans. 1785–1787: The Maratha–Mysore War concludes with an exchange of territories in the Deccan. 1786–1787: Wolfgang Amadeus Mozart premieres The Marriage of Figaro and Don Giovanni. 1787: The Tuareg occupy Timbuktu until the 19th century. 1787–1792: Russo-Turkish War. 1788: The First Fleet arrives in Australia. 1788–1790: Russo-Swedish War (1788–1790). 1788: Dutchman Geert Adriaans Boomgaard (1788–1899) would become the first generally accepted validated case of a supercentenarian on record. 1788–1789: A Qing attempt to reinstall an exiled Vietnamese king in northern Vietnam ends in disaster. 1789: George Washington is elected the first President of the United States; he serves until 1797. 1789: Quang Trung defeats the Qing army. 1789–1799: French Revolution. 1789: The Liège Revolution. 1789: The Brabant Revolution. 1789: The Inconfidência Mineira, an unsuccessful separatist movement in central Brazil led by Tiradentes. 1791: Suppression of the Liège Revolution by Austrian forces and re-establishment of the Prince-Bishopric of Liège. 1791–1795: George Vancouver explores the world during the Vancouver Expedition. 1791–1804: The Haitian Revolution. 1791: Mozart premieres The Magic Flute. 1792–1802: The French Revolutionary Wars lead into the Napoleonic Wars, which last from 1803–1815. 1792: The New York Stock & Exchange Board is founded. 1792: Polish–Russian War of 1792. 1792: Margaret Ann Neve (1792–1903) would become the first recorded female supercentenarian to reach the age of 110. 1793: Upper Canada bans slavery. 1793: The largest yellow fever epidemic in American history kills as many as 5,000 people in Philadelphia, roughly 10% of the population. 1793–1796: Revolt in the Vendée against the French Republic at the time of the Revolution. 1794–1816: The Hawkesbury and Nepean Wars, a series of incidents between settlers and the New South Wales Corps and the Aboriginal Australian clans of the Hawkesbury River in Sydney, Australia. 1795: The Marseillaise is officially adopted as the French national anthem.
1795: The Battle of Nuuanu is fought in the final days of King Kamehameha I's wars to unify the Hawaiian Islands.
1795–1796: Iran invades and devastates Georgia, prompting Russia to intervene and launch a retaliatory expedition against Persia the following year.
1796: Edward Jenner administers the first smallpox vaccination; smallpox killed an estimated 400,000 Europeans each year during the 18th century, including five reigning monarchs.
1796: War of the First Coalition: the Battle of Montenotte marks Napoleon Bonaparte's first victory as an army commander.
1796: The British eject the Dutch from Ceylon and South Africa.
1796–1804: The White Lotus Rebellion against the Manchu Qing dynasty in China.
1798: The Irish Rebellion fails to overthrow British rule in Ireland.
1798–1800: The Quasi-War is fought between the United States and France.
1799: The Dutch East India Company is dissolved.
1799: Austro-Russian forces under Alexander Suvorov liberate much of Italy and Switzerland from French occupation.
1799: The Coup of 18 Brumaire: Napoleon Bonaparte's coup d'état brings the French Revolution to an end.
1799: Death of the Qianlong Emperor after 60 years of rule over China. His favorite official, Heshen, is ordered to commit suicide.
1800: On 1 January, the bankrupt VOC is formally dissolved and the nationalized Dutch East Indies are established.
Inventions, discoveries, and introductions
1709: The first piano was built by Bartolomeo Cristofori.
1711: The tuning fork was invented by John Shore.
1712: The steam engine was invented by Thomas Newcomen.
1714: The mercury thermometer was invented by Daniel Gabriel Fahrenheit.
1717: The diving bell was successfully tested by Edmond Halley, sustainable to a depth of 55 ft.
c. 1730: The octant navigational tool was developed by John Hadley in England and Thomas Godfrey in America.
1733: The flying shuttle was invented by John Kay.
1736: Europeans encountered rubber; the discovery was made by Charles Marie de La Condamine while on expedition in South America. It was named in 1770 by Joseph Priestley.
c. 1740: Modern steel was developed by Benjamin Huntsman.
1741: Vitus Bering discovers Alaska.
1745: The Leyden jar, invented by Ewald Georg von Kleist, was the first electrical capacitor.
1751: Jacques de Vaucanson perfects the first precision lathe.
1752: The lightning rod was invented by Benjamin Franklin.
1753: Benjamin Banneker builds a wooden striking clock, often cited as the first clock built entirely in North America.
1755: The tallest wooden Bodhisattva statue in the world is erected at Puning Temple, Chengde, China.
1761: The problem of longitude was finally resolved by the fourth chronometer of John Harrison.
1763: Thomas Bayes' first version of Bayes' theorem is published posthumously, paving the way for Bayesian probability.
1764: The spinning jenny, created by James Hargreaves, helps bring on the Industrial Revolution.
1765: James Watt enhances Newcomen's steam engine with a separate condenser.
1768–1779: James Cook mapped the boundaries of the Pacific Ocean and discovered many Pacific islands.
1774: Joseph Priestley discovers "dephlogisticated air", oxygen.
1775: Joseph Priestley's first synthesis of "phlogisticated nitrous air", nitrous oxide, "laughing gas".
1776: First improved steam engines installed by James Watt.
1776: Steamboat invented by Claude de Jouffroy.
1777: Circular saw invented by Samuel Miller.
1779: Photosynthesis was first discovered by Jan Ingenhousz.
1781: William Herschel announces the discovery of Uranus.
1784: Bifocals invented by Benjamin Franklin.
1784: Argand lamp invented by Aimé Argand.
1785: Power loom invented by Edmund Cartwright.
1785: Automatic flour mill invented by Oliver Evans.
1786: Threshing machine invented by Andrew Meikle.
1787: Jacques Charles discovers Charles's law.
1789: Antoine Lavoisier publishes the law of conservation of mass, laying the foundation of modern chemistry.
1798: Edward Jenner publishes a treatise about smallpox vaccination.
1798: The lithographic printing process is invented by Alois Senefelder.
1799: The Rosetta Stone is discovered by Napoleon's troops.
Literary and philosophical achievements
1703: The Love Suicides at Sonezaki by Chikamatsu first performed.
1704–1717: One Thousand and One Nights translated into French by Antoine Galland. The work becomes immensely popular throughout Europe.
1704: A Tale of a Tub by Jonathan Swift first published.
1712: The Rape of the Lock by Alexander Pope (publication of first version).
1719: Robinson Crusoe by Daniel Defoe.
1725: The New Science by Giambattista Vico.
1726: Gulliver's Travels by Jonathan Swift.
1728: The Dunciad by Alexander Pope (publication of first version).
1744: A Little Pretty Pocket-Book becomes one of the first books marketed for children.
1748: Chushingura (The Treasury of Loyal Retainers), popular Japanese puppet play, composed.
1748: Clarissa by Samuel Richardson.
1749: The History of Tom Jones, a Foundling by Henry Fielding.
1751: Elegy Written in a Country Churchyard by Thomas Gray published.
1751–1785: The French Encyclopédie.
1755: A Dictionary of the English Language by Samuel Johnson.
1758: Arithmetika Horvatzka by Mihalj Šilobod Bolšić.
1759: Candide by Voltaire.
1759: The Theory of Moral Sentiments by Adam Smith.
1759–1767: Tristram Shandy by Laurence Sterne.
1762: Emile: or, On Education by Jean-Jacques Rousseau.
1762: The Social Contract, Or Principles of Political Right by Jean-Jacques Rousseau.
1774: The Sorrows of Young Werther by Goethe first published.
1776: Ugetsu Monogatari (Tales of Moonlight and Rain) by Ueda Akinari.
1776: The Wealth of Nations, the foundation of the modern theory of economics, published by Adam Smith.
1776–1789: The History of the Decline and Fall of the Roman Empire published by Edward Gibbon.
1779: Amazing Grace published by John Newton.
1779–1782: Lives of the Most Eminent English Poets by Samuel Johnson.
1781: Critique of Pure Reason by Immanuel Kant (publication of first edition).
1781: The Robbers by Friedrich Schiller first published.
1782: Les Liaisons dangereuses by Pierre Choderlos de Laclos.
1786: Poems, Chiefly in the Scottish Dialect by Robert Burns.
1787–1788: The Federalist Papers by Alexander Hamilton, James Madison, and John Jay.
1788: Critique of Practical Reason by Immanuel Kant.
1789: Songs of Innocence by William Blake.
1789: The Interesting Narrative of the Life of Olaudah Equiano by Olaudah Equiano.
1790: Journey from St. Petersburg to Moscow by Alexander Radishchev.
1790: Reflections on the Revolution in France by Edmund Burke.
1791: Rights of Man by Thomas Paine.
1792: A Vindication of the Rights of Woman by Mary Wollstonecraft.
1794: Songs of Experience by William Blake.
1798: Lyrical Ballads by William Wordsworth and Samuel Taylor Coleridge.
1798: An Essay on the Principle of Population published by Thomas Malthus.
Mid-18th century: The Dream of the Red Chamber (authorship attributed to Cao Xueqin), one of the most famous Chinese novels.
Musical works
1711: Rinaldo, Handel's first opera for the London stage, premiered.
1721: Brandenburg Concertos by J.S. Bach.
1723: The Four Seasons, violin concertos by Antonio Vivaldi, composed.
1724: St John Passion by J.S. Bach.
1727: St Matthew Passion composed by J.S. Bach.
1727: Zadok the Priest is composed by Handel for the coronation of George II of Great Britain. It has been performed at every subsequent British coronation.
1733: Hippolyte et Aricie, first opera by Jean-Philippe Rameau.
1741: Goldberg Variations for harpsichord published by Bach.
1742: Messiah, oratorio by Handel, premiered in Dublin.
1749: Mass in B minor by J.S. Bach assembled in its current form.
1751: The Art of Fugue by J.S. Bach.
1762: Orfeo ed Euridice, first "reform opera" by Gluck, performed in Vienna.
1786: The Marriage of Figaro, opera by Mozart.
1787: Don Giovanni, opera by Mozart.
1788: Jupiter Symphony (Symphony No. 41) composed by Mozart.
1791: The Magic Flute, opera by Mozart.
1791–1795: London symphonies by Haydn.
1798: The Pathétique, piano sonata by Beethoven.
1798: The Creation, oratorio by Haydn, first performed.
Political geography
Political geography is concerned with the study of both the spatially uneven outcomes of political processes and the ways in which political processes are themselves affected by spatial structures. Conventionally, for the purposes of analysis, political geography adopts a three-scale structure with the study of the state at the centre, the study of international relations (or geopolitics) above it, and the study of localities below it. The primary concerns of the subdiscipline can be summarized as the inter-relationships between people, state, and territory.
History
The origins of political geography lie in the origins of human geography itself, and the early practitioners were concerned mainly with the military and political consequences of the relationships between physical geography, state territories, and state power. In particular, there was a close association with both regional geography, with its focus on the unique characteristics of regions, and environmental determinism, with its emphasis on the influence of the physical environment on human activities. This association found expression in the work of the German geographer Friedrich Ratzel, who, in his 1897 book Politische Geographie, developed the concept of Lebensraum (living space), which explicitly linked the cultural growth of a nation with territorial expansion, and which was later used to provide academic legitimisation for the imperialist expansion of the German Third Reich in the 1930s. The British geographer Halford Mackinder was also heavily influenced by environmental determinism, and in developing his concept of the 'geographical pivot of history', or the Heartland Theory, in 1904, he argued that the era of sea power was coming to an end and that land-based powers were in the ascendant, and, in particular, that whoever controlled the heartland of 'Euro-Asia' would control the world. This theory involved concepts diametrically opposed to the ideas of Alfred Thayer Mahan about the significance of sea power in world conflict. The heartland theory hypothesized the possibility of a huge empire being created which did not need to use coastal or transoceanic transport to supply its military–industrial complex, and that this empire could not be defeated by the rest of the world allied against it. This perspective proved influential throughout the period of the Cold War, underpinning military thinking about the creation of buffer states between East and West in central Europe. The heartland theory depicted a world divided into a Heartland (Eastern Europe/Western Russia); World Island (Eurasia and Africa); Peripheral Islands (British Isles, Japan, Indonesia and Australia) and New World (the Americas). Mackinder argued that whoever controlled the Heartland would have control of the world. He used these ideas to politically influence events such as the Treaty of Versailles, where buffer states were created between the USSR and Germany, to prevent either of them controlling the Heartland. At the same time, Ratzel was creating a theory of states based around the concepts of Lebensraum and Social Darwinism. He argued that states were analogous to 'organisms' that needed sufficient room in which to live. Both of these writers created the idea of a political and geographical science, with an objective view of the world.
Prior to World War II, political geography was concerned largely with these issues of global power struggles and influencing state policy, and the above theories were taken on board by German geopoliticians (see Geopolitik) such as Karl Haushofer who, perhaps inadvertently, greatly influenced Nazi political theory, a form of politics seen to be legitimated by such 'scientific' theories. The close association with environmental determinism and the freezing of political boundaries during the Cold War led to a significant decline in the perceived importance of political geography, which was described by Brian Berry in 1968 as a 'moribund backwater'. Although at this time in most other areas of human geography new approaches, including quantitative spatial science, behavioural studies, and structural Marxism, were invigorating academic research, these were largely ignored by political geographers, whose main point of reference remained the regional approach. As a result, most of the political geography texts produced during this period were descriptive, and it was not until 1976 that Richard Muir could argue that political geography was no longer a dead duck, but could in fact be a phoenix.
Areas of study
From the late 1970s onwards, political geography has undergone a renaissance, and could fairly be described as one of the most dynamic of the sub-disciplines today. The revival was underpinned by the launch of the journal Political Geography Quarterly (and its expansion to bi-monthly production as Political Geography). In part this growth has been associated with the adoption by political geographers of the approaches taken up earlier in other areas of human geography; for example, Ron J. Johnston's (1979) work on electoral geography relied heavily on the adoption of quantitative spatial science, Robert Sack's (1986) work on territoriality was based on the behavioural approach, Henry Bakis (1987) showed the impact of information and telecommunications networks on political geography, and Peter Taylor's (e.g. 2007) work on World Systems Theory owed much to developments within structural Marxism. However, the recent growth in vitality and importance of this sub-discipline is also related to the changes in the world as a result of the end of the Cold War, including the emergence of a new world order (which, as yet, is only poorly defined) and the development of new research agendas, such as a focus on social movements and political struggles that goes beyond the study of nationalism with its explicit territorial basis. There has also been increasing interest in the geography of green politics (see, for example, David Pepper's (1996) work), including the geopolitics of environmental protest, and in the capacity of our existing state apparatus and wider political institutions to address any contemporary and future environmental problems competently. Political geography has extended the scope of traditional political science approaches by acknowledging that the exercise of power is not restricted to states and bureaucracies, but is part of everyday life. This has resulted in the concerns of political geography increasingly overlapping with those of other human geography sub-disciplines such as economic geography, and, particularly, with those of social and cultural geography in relation to the study of the politics of place (see, for example, the books by David Harvey (1996) and Joe Painter (1995)).
Although contemporary political geography maintains many of its traditional concerns (see below), the multi-disciplinary expansion into related areas is part of a general process within human geography which involves the blurring of boundaries between formerly discrete areas of study, and through which the discipline as a whole is enriched. In particular, contemporary political geography often considers:
How and why states are organized into regional groupings, both formally (e.g. the European Union) and informally (e.g. the Third World)
The relationship between states and former colonies, and how these are propagated over time, for example through neo-colonialism
The relationship between a government and its people
The relationships between states, including international trade and treaties
The functions, demarcations and policing of boundaries
How imagined geographies have political implications
The influence of political power on geographical space
The political implications of modern media (e.g. radio, TV, ICT, Internet, social networks)
The study of election results (electoral geography)
Critical political geography
Critical political geography is mainly concerned with the criticism of traditional political geographies vis-à-vis modern trends. As with much of the move towards 'critical geographies', the arguments have drawn largely from postmodern, poststructural and postcolonial theories. Examples include:
Feminist geography, which argues for recognition of power relations as patriarchal and attempts to theorise alternative conceptions of identity and identity politics, alongside related concerns such as queer theory and youth studies
Postcolonial theories, which recognise the imperialistic, universalising nature of much political geography, especially in development geography
Notable political geographers
John A. Agnew
Simon Dalby
Klaus Dodds
Derek Gregory
Richard Hartshorne
Karl Haushofer
Ron J. Johnston
Reece Jones
Cindi Katz
Peter Kropotkin
Yves Lacoste
Halford Mackinder
Doreen Massey
Joe Painter
Friedrich Ratzel
Rachel Pain
Gillian Rose
Linda McDowell
Ellen Churchill Semple
Peter J. Taylor
See also
Index of geography articles
History of geography
Critical geography
List of sovereign states
Tobler's first law of geography
Tobler's second law of geography
References
Bakis H (1987) Géopolitique de l'information. Paris: Presses Universitaires de France
Harvey D (1996) Justice, nature and the geography of difference. Oxford: Blackwell
Johnston RJ (1979) Political, electoral and spatial systems. Oxford: Clarendon Press
Painter J (1995) Politics, geography and 'political geography': a critical perspective. London: Arnold
Pepper D (1996) Modern environmentalism. London: Routledge
Ratzel F (1897) Politische Geographie. Munich: Oldenbourg
Sack RD (1986) Human territoriality: its theory and history. Cambridge: Cambridge University Press
Further reading
Agnew J (1997) Political geography: a reader. London: Arnold
Bakis H (1995) 'Communication and Political Geography in a Changing World'. Revue Internationale de Science Politique 16 (3), pp. 219–311 – http://ips.sagepub.com/content/16/3.toc
Buleon P (1992) 'The state of political geography in France in the 1970s and 1980s'. Progress in Human Geography 16 (1), pp. 24–40
Claval P (1978) Espace et pouvoir. Paris: Presses Universitaires de France
Cox KR, Low M & Robinson J (2008) Handbook of Political Geography. London: Sage
Okunev I (2021) Political Geography. Brussels: Peter Lang
Sanguin A-L & Prevelakis G (1996) 'Jean Gottmann (1915–1994), un pionnier de la géographie politique'. Annales de Géographie, 105, 587, pp. 73–78
Short JR (1993) An introduction to political geography, 2nd edn. London: Routledge
Spykman NJ (1944) The Geography of the Peace. New York: Harcourt, Brace and Co.
Sutton I (1991) 'The Political Geography of Indian Country'. American Indian Culture and Research Journal 15 (2), pp. 1–169
Taylor PJ & Flint C (2007) Political geography: world-economy, nation-state and locality. Harlow: Pearson Education Ltd.
Auxiliary sciences of history
Auxiliary (or ancillary) sciences of history are scholarly disciplines which help evaluate and use historical sources and are seen as auxiliary for historical research. Many of these areas of study, classification and analysis were originally developed between the 16th and 19th centuries by antiquaries, and would then have been regarded as falling under the broad heading of antiquarianism. "History" was at that time regarded as a largely literary skill. However, with the spread of the principles of empirical source-based history championed by the Göttingen school of history in the late 18th century and later by Leopold von Ranke from the mid-19th century onwards, they have been increasingly regarded as falling within the skill-set of the trained historian.
Examples
Auxiliary sciences of history include, but are not limited to:
Archaeology, the study of human activity through the recovery and analysis of material culture
Archaeography, the study of ancient (historical) documents (antique writings)
Archival science, the study and theory of creating and maintaining archives
Chorography, the study of regions and places
Chronology, the study of the sequence of past events
Cliometrics, the systematic application of economic theory, econometric techniques, and other formal or mathematical methods to the study of history
Codicology, the study of books as physical objects
Diplomatics, the study and textual analysis of historical documents
Encyclopaedistics, the study of encyclopaedias as sources of encyclopaedic knowledge
Epigraphy, the study of ancient inscriptions
Genealogy, the study of family relationships
Heraldry, the study of armorial devices
Numismatics, the study of coins
Onomastics, the study of proper names
Palaeography, the study of old handwriting
Paleoanthropology, the study of human evolution and ecology through the fossil record
Phaleristics, the study of military orders, fraternities, and award items
Philately, the study of postage stamps
Philology, the study of the language of historical sources
Prosopography, the investigation of a historical group of individuals through a collective study of their lives
Sigillography (or sphragistics), the study of seals
Toponymy, the study of place names
Vexillology, the study of flags
See also
Library of Congress Classification: Class C – Auxiliary Sciences of History
Postmodernism
Postmodernism is a term used to refer to a variety of artistic, cultural, and philosophical movements that claim to mark a break from modernism. They have in common the conviction that it is no longer possible to rely upon previous ways of representing reality. Still, there is disagreement among experts about its more precise meaning even within narrow contexts. The term began to acquire its current range of meanings in literary criticism and architectural theory during the 1950s–1960s. In opposition to modernism's alleged self-seriousness, postmodernism is characterized by its playful use of eclectic styles and performative irony, among other features. Critics claim it supplants moral, political, and aesthetic ideals with mere style and spectacle. In the 1990s, "postmodernism" came to denote a general – and, in general, celebratory – response to cultural pluralism. Proponents align themselves with feminism, multiculturalism, and postcolonialism. Building upon poststructural theory, postmodern thought defined itself by the rejection of any single, foundational historical narrative. This called into question the legitimacy of the Enlightenment account of progress and rationality. Critics allege that its premises lead to a nihilistic form of relativism. In this sense, it has become a term of abuse in popular culture. Definitions "Postmodernism" is "a highly contested term", referring to "a particularly unstable concept", that "names many different kinds of cultural objects and phenomena in many different ways". It is "diffuse, fragmentary, [and] multi-dimensional". Critics have described it as "an exasperating term" and claim that its indefinability is "a truism". Put otherwise, postmodernism is "several things at once". It has no single definition, and the term does not name any single unified phenomenon, but rather many diverse phenomena: "postmodernisms rather than one postmodernism". Although postmodernisms are generally united in their effort to transcend the perceived limits of modernism, "modernism" also means different things to different critics in various arts. Further, there are outliers on even this basic stance; for instance, literary critic William Spanos conceives postmodernism, not in period terms, but in terms of a certain kind of literary imagination so that pre-modern texts such as Euripides' Orestes or Cervantes' Don Quixote count as postmodern. All this notwithstanding, scholar Hans Bertens offers the following: If there is a common denominator to all these postmodernisms, it is that of a crisis in representation: a deeply felt loss of faith in our ability to represent the real, in the widest sense. No matter whether they are aesthestic [sic], epistemological, moral, or political in nature, the representations that we used to rely on can no longer be taken for granted.In practice, across its many manifestations, postmodernism shares an attitude of skepticism towards grand explanations and established ways of doing things. In art, literature, and architecture, it blurs boundaries between styles and genres, and encourages freely mixing elements, challenging traditional distinctions like high art versus "popular art". In science, it emphasizes multiple ways of seeing things, and how our cultural and personal backgrounds shape our realities, making it impossible to be completely neutral and "objective". 
In philosophy, education, history, politics, and many other fields, it encourages critical re-examination of established institutions and social norms, embracing diversity and breaking down disciplinary boundaries. Though these ideas weren't strictly new, postmodernism amplified them, using an often playful, at times deeply critical, attitude of pervasive skepticism to turn them into defining features. Historical overview The term first appeared in print in 1870, but it only began to enter circulation with its current range of meanings in the 1950s—60s. Early appearances The term "postmodern" was first used in 1870 by the artist John Watkins Chapman, who described "a Postmodern style of painting" as a departure from French Impressionism. Similarly, the first citation given by the Oxford English Dictionary is dated to 1916, describing Gus Mager as "one of the few 'post' modern painters whose style is convincing". Episcopal priest and cultural commentator J. M. Thompson, in a 1914 article, uses the term to describe changes in attitudes and beliefs in the critique of religion, writing, "the raison d'être of Post-Modernism is to escape from the double-mindedness of modernism by being thorough in its criticism by extending it to religion as well as theology, to Catholic feeling as well as to Catholic tradition". In 1926, Bernard Iddings Bell, president of St. Stephen's College and also an Episcopal priest, published Postmodernism and Other Essays, which marks the first use of the term to describe an historical period following modernity. The essay criticizes lingering socio-cultural norms, attitudes, and practices of the Enlightenment. It is also critical of a purported cultural shift away from traditional Christian beliefs. The term "postmodernity" was first used in an academic historical context as a general concept for a movement by Arnold J. Toynbee in a 1939 essay, which states that "Our own Post-Modern Age has been inaugurated by the general war of 1914–1918". In 1942, the literary critic and author H. R. Hays describes postmodernism as a new literary form. Also in the arts, the term was first used in 1949 to describe a dissatisfaction with the modernist architectural movement known as the International Style. Although these early uses anticipate some of the concerns of the debate in the second part of the 20th century, there is little direct continuity in the discussion. Just when the new discussion begins, however, is also a matter of dispute. Various authors place its beginnings in the 1950s, 1960s, 1970s, and 1980s. Theoretical development In the mid-1970s, the American sociologist Daniel Bell provided a general account of the postmodern as an effectively nihilistic response to modernism's alleged assault on the Protestant work ethic and its rejection of what he upheld as traditional values. The ideals of modernity, per his diagnosis, were degraded to the level of consumer choice. This research project, however, was not taken up in a significant way by others until the mid-1980s when the work of Jean Baudrillard and Fredric Jameson, building upon art and literary criticism, reintroduced the term to sociology. Discussion about the postmodern in the second part of the 20th century was most articulate in areas with a large body of critical discourse around the modernist movement. 
Even here, however, there continued to be disagreement about such basic issues as whether postmodernism is a break with modernism, a renewal and intensification of modernism, or even, both at once, a rejection and a radicalization of its historical predecessor. According to scholar Steven Connor, discussions of the 1970s were dominated by literary criticism, to be supplanted by architectural theory in the 1980s. Some of these conversations made use of French poststructuralist thought, but only after these innovations and critical discourse in the arts did postmodernism emerge as a philosophical term in its own right.
In literary and architectural theory
According to Hans Bertens and Perry Anderson, the Black Mountain poets Charles Olson and Robert Creeley first introduced the term "postmodern" in its current sense during the 1950s. Their stance against modernist poetry – and Olson's Heideggerian orientation – were influential in the identification of postmodernism as a polemical position opposed to the rationalist values championed by the Enlightenment project. During the 1960s, this affirmative use gave way to a pejorative use by the New Left, who used it to describe a waning commitment among youth to the political ideals of socialism and communism. The literary critic Irving Howe, for instance, denounced postmodern literature for being content to merely reflect, rather than actively attempt to refashion, what he saw as the "increasingly shapeless" character of contemporary society. In the 1970s, this changed again, largely under the influence of the literary critic Ihab Hassan's large-scale survey of works that he said could no longer be called modern. Taking the Black Mountain poets as an exemplary instance of the new postmodern type, Hassan celebrates its Nietzschean playfulness and cheerfully anarchic spirit, which he sets off against the high seriousness of modernism. (Yet, from another perspective, Friedrich Nietzsche's attack on Western philosophy and Martin Heidegger's critique of metaphysics posed deep theoretical problems that were not necessarily a cause for aesthetic celebration. Their further influence on the conversation about postmodernism, however, would be largely mediated by French poststructuralism.) If literature was at the center of the discussion in the 1970s, architecture was at the center in the 1980s. The architectural theorist Charles Jencks, in particular, connects the artistic avant-garde to social change in a way that captures attention outside of academia. Jencks, much influenced by the American architect Robert Venturi, celebrates a plurality of forms and encourages participation and active engagement with the local context of the built environment. He presents this in opposition to the "authoritarian style" of International Modernism.
The influence of poststructuralism
In the 1970s, postmodern criticism increasingly came to incorporate poststructuralist theory, particularly the deconstructive approach to texts most strongly associated with Jacques Derrida. Derrida attempted to demonstrate that the whole foundationalist approach to language and knowledge was untenable and misguided. He was also critical of what he claimed to expose as artificial binary oppositions (e.g., subject/object, speech/writing) at the heart of Western culture and philosophy. It is during this period that postmodernism comes to be particularly equated with a kind of anti-representational self-reflexivity.
In the 1980s, some critics begin to take an interest in the work of Michel Foucault. This introduces a political concern about social power-relations into discussions about postmodernism. Much of Foucault's project is, against the Enlightenment tradition, to expose modern social institutions and forms of knowledge as historically contingent forces of domination. He aims to detotalize or decenter historical narratives to display modern consciousness as it is constituted by specific discourses and institutions that shape individuals into the docile subjects of social systems. This is also the beginning of the affiliation of postmodernism with feminism and multiculturalism. The art critic Craig Owens, in particular, not only made the connection to feminism explicit, but went so far as to claim feminism for postmodernism wholesale, a broad claim resisted by even many sympathetic feminists such as Nancy Fraser and Linda Nicholson. In social theory Although postmodern criticism and thought drew on philosophical ideas from early on, "postmodernism" was only introduced to the expressly philosophical lexicon by Jean-François Lyotard in his 1979 The Postmodern Condition: A Report on Knowledge. In this influential work, Lyotard offers the following definition: "Simplifying to the extreme, I define postmodern as incredulity towards metanarratives [such as Enlightenment progress or Marxist revolution]". In a society with no unifying narrative, he argues, we are left with heterogeneous, group-specific narratives (or "language games", as adopted from Ludwig Wittgenstein) with no universal perspective from which to adjudicate among them. According to Lyotard, this introduces a general crisis of legitimacy, a theme he adopts from the philosopher Jürgen Habermas, whose theory of communicative rationality Lyotard rejects. While he was particularly concerned with the way that this insight undermines claims of scientific objectivity, Lyotard's argument undermines the entire principle of transcendent legitimization. Instead, proponents of a language game must make the case for their legitimacy with reference to such considerations as efficiency or practicality. Far from celebrating the apparently relativistic consequences of this argument, however, Lyotard focused much of his subsequent work on how links among games could be established, particularly with respect to ethics and politics. Nevertheless, the appearance of linguistic relativism inspired an extensive rebuttal by the Marxist critic Fredric Jameson. Building upon the theoretical foundations laid out by the Marxist economist Ernst Mandel and observations in the early work of the French sociologist Jean Baudrillard, Jameson develops his own conception of the postmodern as "the cultural logic of late capitalism" in the form of an enormous cultural expansion into an economy of spectacle and style, rather than the production of goods. Baudrillard himself broke with Marxism, but continued to theorize the postmodern as the condition in which the domain of reality has become so heavily mediated by signs as to become inaccessible in itself, leaving us entirely in the domain of the simulacrum, an image that bears no relation to anything outside of itself. Scholars, however, disagree about whether his later works are intended as science fiction or truthful theoretical claims. In the 1990s, postmodernism became increasingly identified with critical and philosophical discourse directly about postmodernity or the postmodern idiom itself. 
No longer centered on any particular art or even the arts in general, it instead turns to address the more general problems posed to society in general by a new proliferation of cultures and forms. It is during this period that it also comes to be associated with postcolonialism and identity politics. Around this time, postmodernism also begins to be conceived in popular culture as a general "philosophical disposition" associated with a loose sort of relativism. In this sense, the term also starts to appear as a "casual term of abuse" in non-academic contexts. Others identify it as an aesthetic "lifestyle" of eclecticism and playful self-irony. Others argue that postmodernism utilizes compositional and semantic practices such as inclusivity, intentional indiscrimination, nonselection, and "logical impossibility." In various arts Architecture Scholarship regarding postmodernism and architecture is closely linked with the writings of critic-turned-architect Charles Jencks, beginning with lectures in the early 1970s and his essay "The Rise of Post-Modern Architecture" from 1975. His magnum opus, however, is the book The Language of Post-Modern Architecture, first published in 1977, and since running to seven editions (in which he famously wrote: "Modern architecture died in St. Louis, Missouri, on 15 July 1972 at 3:32 p.m. (or thereabouts) when the infamous Pruitt–Igoe scheme, or rather several of its slab blocks, were given the final coup de grâce by dynamite."). Jencks makes the point that postmodernism (like modernism) varies for each field of art, and that for architecture it is not just a reaction to modernism but what he terms double coding: "Double Coding: the combination of Modern techniques with something else (usually traditional building) in order for architecture to communicate with the public and a concerned minority, usually other architects." In their book, "Revisiting Postmodernism", Terry Farrell and Adam Furman argue that postmodernism brought a more joyous and sensual experience to the culture, particularly in architecture. For instance, in response to the modernist slogan of Ludwig Mies van der Rohe that "less is more", the postmodernist Robert Venturi rejoined that "less is a bore". Dance The term "postmodern dance" is most strongly associated with the Judson Dance Theater located in New York's Greenwich Village during the 1960s and 1970s. Arguably its most important principle is taken from the composer John Cage's efforts to break down the distinction between art and life. The Judson dancers "[stripped] dance of its theatrical conventions such as virtuoso technique, fanciful costumes, complex storylines, and the traditional stage [and] drew on everyday movements (sitting, walking, kneeling, and other gestures) to create their pieces, often performing them in ordinary spaces." This was developed in particular by the American dancer and choreographer Merce Cunningham, Cage's partner. In the 1980s and 1990s, dance began to incorporate other typically postmodern features such as the mixing of genres, challenging high–low cultural distinctions, and incorporating a political dimension. Fashion One manifestation of postmodernism in fashion explored alternatives to conventional concepts of elegance. Rei Kawakubo’s Spring/Summer 1997 collection featured "dresses asymmetrically padded with goose down, creating bumps in unexpected areas of the body". Issey Miyake’s 1985 dreadlocks hat "offered an immediate, yet impermanent, 'multi-culti' fashion experience". 
Vivienne Westwood took "an extremely polyglot approach", from early work with copies of 1950s clothes, to exploration of historic modes and ethnic influences: her first runway show, "Pirate", merged British history, 18th- and 19th-century dress, and African textile design. Film Postmodern film aims to subvert the mainstream conventions of narrative structure and characterization, and to test the audience's suspension of disbelief. Typically, such films also break down the cultural divide between high and low art and often upend typical portrayals of gender, race, class, genre, and time with the goal of creating something that does not abide by traditional narrative expression. Postmodern film is often separated from modernist cinema and traditional narrative film by three key characteristics. One is an extensive use of homage or pastiche. The second is meta-reference or self-reflexivity, highlighting the construction and relation of the image to other images in media and not to any kind of external reality. A self-referential film reminds the viewer – either through characters' knowledge of their own fictional nature, or through visuals – that the film itself is only a film. One technique used to achieve meta-reference is the use of intertextuality, in which the film's characters reference or discuss other works of fiction. A third characteristic is stories that unfold out of chronological order, deconstructing or fragmenting time to highlight that what is appearing on screen is constructed. Another common element is a bridging of the gap between highbrow and lowbrow activities and artistic styles, for example, a parody of Michelangelo's Sistine Chapel ceiling in which Adam is reaching for a McDonald's burger rather than the hand of God. Contradictions of all sorts – whether it be in visual technique, characters' morals, etc. – are crucial to postmodernism. Ridley Scott's Blade Runner (1982) might be the best-known postmodernist film, about a future dystopia where "replicants", androids with enhanced abilities and all but indistinguishable from humans, have been invented and are deemed dangerous enough to hunt down when they escape. There is extensive blurring of boundaries between genres and cultures, and styles that are generally more separate, along with the fusion of disparate styles and times, a common trope in postmodern cinema. In particular, the blending of film noir and science-fiction into tech noir is an example of the film deconstructing cinema and genre. Graphic design Early mention of postmodernism as an element of graphic design appeared in the British magazine, "Design". A characteristic of postmodern graphic design is that "retro, techno, punk, grunge, beach, parody, and pastiche were all conspicuous trends. Each had its own sites and venues, detractors and advocates." Literature In 1971, the American scholar Ihab Hassan made the term popular in literary studies as a description of the new art emerging in the 1960s. According to scholar David Herwitz, writers such as John Barth and Donald Barthelme (and, later, Thomas Pynchon) responded in various ways to the aesthetic innovations of Finnegans Wake and the late work of Samuel Beckett. Postmodern literature often calls attention to issues regarding its own complicated connection to reality. The French critic Roland Barthes declared the novel to be an exhaustive form and explored what it means to continue to write novels under such a condition. 
In Postmodernist Fiction (1987), Brian McHale details the shift from modernism to postmodernism, arguing that the former is characterized by an epistemological dominant and that postmodern works have developed out of modernism and are primarily concerned with questions of ontology. McHale's "What Was Postmodernism?" (2007) follows Raymond Federman's lead in now using the past tense when discussing postmodernism.
Music
Music critic Andy Cush described Talking Heads as "New York art-punks" whose "blend of nervy postmodernism and undeniable groove made them one of the defining rock bands of the late 1970s and '80s." Media theorist Dick Hebdige, examining the "Road to Nowhere" (1985) music video, said the group "draw eclectically on a wide range of visual and aural sources to create a distinctive pastiche or hybrid 'house style' which they have used since their formation in the mid-1970s deliberately to stretch received (industrial) definitions of what rock/pop/video/Art/performance/audience are", calling them "a properly postmodernist band." According to lead vocalist/guitarist/songwriter David Byrne, commenting for a 2011 museum exhibition, Postmodernism: Style and Subversion 1970–1990: "Anything could be mixed and matched – or mashed up, as is said today – and anything was fair game for inspiration." The composer Jonathan Kramer has written that avant-garde musical compositions (which some would consider modernist rather than postmodernist) "defy more than seduce the listener, and they extend by potentially unsettling means the very idea of what music is." In the 1960s, composers such as Terry Riley, Henryk Górecki, Bradley Joseph, John Adams, Steve Reich, Philip Glass, Michael Nyman, and Lou Harrison reacted to the perceived elitism and dissonant sound of atonal academic modernism by producing music with simple textures and relatively consonant harmonies, whilst others, most notably John Cage, challenged the prevailing narratives of beauty and objectivity common to Modernism. Dominic Strinati, an author on postmodernism, has noted that it is also important "to include in this category the so-called 'art rock' musical innovations and mixing of styles associated with groups like Talking Heads, and performers like Laurie Anderson, together with the self-conscious 'reinvention of disco' by the Pet Shop Boys". In the late 20th century, avant-garde academics labelled American singer Madonna as the "personification of the postmodern" because "the postmodern condition is characterized by fragmentation, de-differentiation, pastiche, retrospection and anti-foundationalism", which they argued Madonna embodied. Christian writer Graham Cray also said that "Madonna is perhaps the most visible example of what is called post-modernism", and Martin Amis described her as "perhaps the most postmodern personage on the planet". She was also suggested by literary critic Olivier Sécardin to epitomise postmodernism.
Sculpture
Sculptor Claes Oldenburg, at the forefront of the pop art movement, declared in 1961: "I am for an art that is political-erotical-mystical … I am for an art that embroils itself with everyday crap and still comes out on top." That year, he opened The Store in New York's Lower East Side, where he blurred the line between art and commerce by selling brightly painted plaster reliefs and sculptures of commercial and manufactured objects.
Oldenburg was one of the most recognizable sculptors identified with postmodernism, a group that included Jeff Koons, Eva Hesse, Louise Bourgeois, Anish Kapoor, Damien Hirst, Rachel Whiteread, and Richard Serra.
Theater
Postmodern theater emerged as a reaction against modernist theater. Most postmodern productions are centered on highlighting the fallibility of definite truth, instead encouraging the audience to reach their own individual understanding. Essentially, thus, postmodern theater raises questions rather than attempting to supply answers.
In philosophy
In the 1970s, a disparate group of poststructuralists in France developed a critique of modern philosophy with roots discernible in Friedrich Nietzsche, Søren Kierkegaard, and Martin Heidegger. Although few themselves relied upon the term, they became known to many as postmodern theorists. Notable figures include Jacques Derrida, Michel Foucault, Jean-François Lyotard, Jean Baudrillard, and others. By the 1980s, this spread to America in the work of Richard Rorty and others. According to scholar Stuart Sim, one of the best ways to describe a specifically philosophical conception of postmodernism is as an anti-foundational "scepticism about authority, received wisdom, cultural and political norms and so on", which he says places it within a tradition dating back to ancient Greece.
Poststructuralism
Poststructuralists, like structuralists, start from the assumption that people's identities, values, and economic conditions determine each other rather than having intrinsic properties that can be understood in isolation. While structuralism explores how meaning is produced by a set of essential relationships in an overarching quasi-linguistic system, poststructuralism accepts this premise, but rejects the assumption that such systems can ever be fixed or centered.
Deconstruction
Deconstruction is a practice of philosophy, literary criticism, and textual analysis developed by Jacques Derrida. Derrida's work has been seen as rooted in a statement found in Of Grammatology: "il n'y a pas de hors-texte" ("there is no outside-text"). This statement is part of a critique of "inside" and "outside" metaphors when referring to the text, and is a corollary to the observation that there is no "inside" of a text as well. This attention to a text's unacknowledged reliance on metaphors and figures embedded within its discourse is characteristic of Derrida's approach. Derrida's method sometimes involves demonstrating that a given philosophical discourse depends on binary oppositions or excluding terms that the discourse itself has declared to be irrelevant or inapplicable. Derrida's philosophy inspired a postmodern movement called deconstructivism among architects, characterized by a design that rejects structural "centers" and encourages decentralized play among its elements. Derrida discontinued his involvement with the movement after the publication of his collaborative project with architect Peter Eisenman in Chora L Works: Jacques Derrida and Peter Eisenman.
Michel Foucault on power relations
French philosopher and social theorist Michel Foucault argued that power operates according to the logics of social institutions that have become unmoored from the intentions of any actual individuals. Individuals, according to Foucault, are both products of and participants in these dynamics. In the 1970s, Foucault employed a Nietzsche-inspired "genealogical method" to analyze power relations across their historical permutations.
Both his political orientation and the consistency of his positions continue to be debated among critics and defenders alike. Nevertheless, Foucault's political works share two common elements: a historical perspective and a discursive methodology. He analyzed social phenomena in historical contexts and focused on how they have evolved over time. Additionally, he employed the study of texts, usually academic texts, as the material for his inquiries. In this way, Foucault sought to understand how the historical formation of discourses has shaped contemporary political thinking and institutions.
Gilles Deleuze on productive difference
The work of Gilles Deleuze develops a concept of difference as a productive mechanism, rather than as a merely negative phenomenon. He advocates for a critique of reason that emphasizes sensibility and feeling over rational judgment. Following Nietzsche, Deleuze argues that philosophical critique is an encounter between thought and what forces it into action, and that this requires training, discipline, inventiveness, and even a certain "cruelty". He believes that thought cannot activate itself, but needs external forces to awaken and move it. Art, science, and philosophy can provide such activation through their transformative and experimental nature.
The criticisms of Jürgen Habermas
The philosopher Jürgen Habermas, a prominent critic of philosophical postmodernism, argues in his 1985 work The Philosophical Discourse of Modernity that postmodern thinkers are caught in a performative contradiction, more specifically, that their critiques of modernism rely on concepts and methods that are themselves products of modern reason. Habermas criticizes these thinkers for their rejection of the subject and their embrace of experimental, avant-garde strategies. He asserts that their critiques of modernism ultimately lead to a longing for the very subject they seek to dismantle. Habermas also takes issue with postmodernists' leveling of the distinction between philosophy and literature. He argues that such rhetorical strategies undermine the importance of argument and communicative reason. Habermas's critique of postmodernism set the stage for much of the subsequent debate by clarifying some of its key underlying issues. Additionally, according to scholar Gary Aylesworth, "that he is able to read postmodernist texts closely and discursively testifies to their intelligibility", against those who would dismiss them as simple nonsense. His engagement with their ideas has led some postmodern philosophers, such as Lyotard, to similarly engage with Habermas's criticisms.
The Postmodern Condition
Jean-François Lyotard is credited with being the first to use the term "postmodern" in a philosophical context, in his 1979 work The Postmodern Condition: A Report on Knowledge. In it, he follows Wittgenstein's language games model and speech act theory, contrasting two different language games, that of the expert, and that of the philosopher. He talks about the transformation of knowledge into information in the computer age and likens the transmission or reception of coded messages (information) to a position within a language game. Lyotard defined philosophical postmodernism in The Postmodern Condition, writing: "Simplifying to the extreme, I define postmodern as incredulity towards metanarratives...", where what he means by metanarrative (in French, grands récits) is something like a unified, complete, universal, and epistemically certain story about everything that is.
Against totalizing metanarratives, Lyotard and other postmodern philosophers argue that truth is always dependent upon historical and social context rather than being absolute and universal, and that truth is always partial and "at issue" rather than being complete and certain.
Jean Baudrillard on hyperreality
In postmodernism, hyperreality refers to a state where experiences are mediated by technology, resulting in a network of images and signs without a corresponding external reality. Baudrillard describes hyperreality as the terminal stage of simulation, where signs and images become entirely self-referential. Drawing upon some of the technical vocabulary of the psychoanalyst Jacques Lacan, Baudrillard argues that production has shifted from creating real objects to producing signs and symbols. This system of symbolic exchange, detached from the real, constitutes hyperreality. In the words of one commentator, "the hyperreal is a system of simulation that simulates itself."
Richard Rorty's neopragmatism
Richard Rorty was an American philosopher known for his linguistic form of neopragmatism. Initially attracted to analytic philosophy, Rorty later rejected its representationalism. His major influences include Charles Darwin, Hans-Georg Gadamer, G. W. F. Hegel, and Martin Heidegger. In his Philosophy and the Mirror of Nature, Rorty challenged the notion of a mind-independent, language-independent reality. He argued that language is a tool used to adapt to the environment and achieve desired ends. This naturalistic approach led him to abandon the traditional quest for a privileged mental power that allows direct access to things-in-themselves. Instead, Rorty advocated for a focus on imaginative alternatives to present beliefs rather than the pursuit of well-grounded truths. He believed that creative, secular humanism, free from authoritarian assertions about truth and goodness, is the key to a better future. Rorty saw his neopragmatism as a continuation of the Enlightenment project, aiming to demystify human life and replace traditional power relations with those based on tolerance and freedom.
In society
Postmodernism has influenced society at large, in such diverse fields as law, education, media, urban planning, science, religious studies, politics and others.
Law
Postmodern interpretations of the law can involve critically considering legal inequalities connected to gender, class, race and ethnicity by acknowledging "diversity and multiplicity". Critical practices connected to postmodern philosophy, such as critical literacy and deconstruction, can be used as an interpretative tool to ensure that a range of different and diverse values and norms are acknowledged or considered.
Marketing
Postmodern marketing focuses on customized experiences where broad market generalizations are no longer applied. According to academic Stephen Brown, from the University of Ulster, "Marketers know about consumers, consumers know about marketers, marketers know consumers know about marketers, and consumers know marketers know consumers know about marketers." Brown, writing in the European Journal of Marketing in 1993, stated that the postmodern approach in many ways rejects attempts to impose order and work in silos. Instead, marketers should work collectively with "artistic" attributes of intuition, creativity, spontaneity, speculation, emotion and involvement.
A 2020 paper in the Journal of Business Research sought to identify the transition from postmodernism to post-postmodernism, to benefit marketing efforts. Focusing on "the changing social conditions that lead the consumer to consume in a particular manner", the study takes the approach of analyzing and comparing song lyrics. Madonna is identified as postmodern and Taylor Swift as post-postmodern, with Lady Gaga used as a transitional example. Noting that "definitions of postmodernism are notoriously messy, frequently paradoxical and multi-faceted", five themes and characteristics of postmodernism consistently found in marketing literature – anti-foundationalism, de-differentiation, fragmentation, the reversal of production and consumption, and hyper-reality – were employed in the comparative analysis. Urban planning Modernism sought to design and plan cities that followed the logic of the new model of industrial mass production; reverting to large-scale solutions, aesthetic standardisation, and prefabricated design solutions. Modernism eroded urban living by its failure to recognise differences and aim towards homogeneous landscapes (Simonsen 1990, 57). Jane Jacobs' 1961 book The Death and Life of Great American Cities was a sustained critique of urban planning as it had developed within modernism and marked a transition from modernity to postmodernity in thinking about urban planning. Postmodernism has involved theories that embrace and aim to create diversity. It exalts uncertainty, flexibility and change and rejects utopianism while embracing a utopian way of thinking and acting. Postmodernity of 'resistance' seeks to deconstruct modernism and is a critique of the origins without necessarily returning to them. As a result of postmodernism, planners are much less inclined to lay a firm or steady claim to there being one single 'right way' of engaging in urban planning and are more open to different styles and ideas of 'how to plan'. Emerging in the mid-1980s, the "Los Angeles School" of urbanism, an academic movement loosely centered around the University of California, Los Angeles' Urban Planning Department, considered contemporary Los Angeles to be the quintessential postmodern city. This was in contrast with what had been the dominant ideas of the Chicago School, formed in the 1920s at the University of Chicago, with its framework of urban ecology and emphasis on functional areas of use within a city, and the concentric circles to understand the sorting of different population groups. Edward Soja of the Los Angeles School combined Marxist and postmodern perspectives and focused on the economic and social changes (globalization, specialization, industrialization/deindustrialization, neo-liberalism, mass migration) that lead to the creation of large city-regions with their patchwork of population groups and economic uses. Legacy Since the late 1990s, there has been a growing sentiment in popular culture and in academia that postmodernism "has gone out of fashion". Others argue that postmodernism is dead in the context of current cultural production. In "White Noise/White Heat, or Why the Postmodern Turn in Rock Music Led to Nothing but Road" (2004), literary critic and professor of English and comparative literature Larry McCaffery reexamined his rock music essay, "White Noise", published in the journal American Book Review in 1990. 
He noted "the almost casual assurance" of its definition of postmodernism, and the "easy assumption throughout that it is possible to draw analogies about the 'innovative features' of fundamentally different media, such as music and fiction." From his 2004 perspective, he says, "If I were writing such an essay today I would omit 'postmodernism' entirely because I no longer believe that I (or anyone else for that matter) can articulate with any degree of coherence or specificity what 'postmodernism' is, or was, what it's supposed to mean, or, indeed, whether it ever existed at all." In 2011, Postmodernism: Style and Subversion 1970 –1990, at the Victoria and Albert Museum in London, was billed as "the first in-depth survey of art, design and architecture of the 1970s and 1980s". The exhibition was organized in three "broadly chronological" sections. The first focused mainly on architecture, "the discipline in which the ideas of postmodernism first emerged", introducing architects like Aldo Rossi, Charles Moore and James Stirling, also designers like Ron Arad, Vivienne Westwood and Rei Kawakubo. The second focused on 1980s design, art, music, fashion, performance, and club culture, with artists like Grace Jones, Leigh Bowery, Klaus Nomi, Guy Bourdin, and Helmut Newton, and artifacts employed by Annie Lennox, Devo, Grandmaster Flash, Karole Armitage, Kazuo Ohno, and Michael Clark. The final section examined "the hyper-inflated commodity culture of the 1980s", focusing on money as "a source of endless fascination for artists, designers and authors", including Andy Warhol, Karl Lagerfeld, Swatch, MTV and Disney. A review in the journal Design Issues noted the "daunting prospect" of reviewing an exhibition "on what might be considered the most slippery, indefinable 'movement'", and wondered what the curators must have felt: "One reviewer thought it 'a risky curatorial undertaking,' and even the curators themselves admit it could be seen as 'a fool's errand.'" Post-postmodernism The connection between postmodernism, posthumanism, and cyborgism has led to a challenge to postmodernism, for which the terms Post-postmodernism and postpoststructuralism were first coined in 2003: More recently metamodernism, post-postmodernism and the "death of postmodernism" have been widely debated: in 2007 Andrew Hoberek noted in his introduction to a special issue of the journal Twentieth-Century Literature titled "After Postmodernism" that "declarations of postmodernism's demise have become a critical commonplace". A small group of critics has put forth a range of theories that aim to describe culture or society in the alleged aftermath of postmodernism, most notably Raoul Eshelman (performatism), Gilles Lipovetsky (hypermodernity), Nicolas Bourriaud (altermodern), and Alan Kirby (digimodernism, formerly called pseudo-modernism). None of these new theories or labels have so far gained very widespread acceptance. Sociocultural anthropologist Nina Müller-Schwarze offers neostructuralism as a possible direction. Criticisms Criticisms of postmodernism are intellectually diverse. Since postmodernism criticizes both conservative and modernist values as well as universalist concepts such as objective reality, morality, truth, reason, and social progress, critics of postmodernism often defend such concepts from various angles. 
Media theorist Dick Hebdige criticized the vagueness of the term, enumerating a long list of otherwise unrelated concepts that people have designated as postmodernism, from "the décor of a room" or "a 'scratch' video", to fear of nuclear armageddon and the "implosion of meaning", and stated that anything that could signify all of those things was "a buzzword". The analytic philosopher Daniel Dennett criticized its impact on the humanities, characterizing it as producing "conversations in which nobody is wrong and nothing can be confirmed, only asserted with whatever style you can muster." Criticisms of postmodernist movements in the arts include objections to the departure from beauty, the reliance on language for the art to have meaning, a lack of coherence or comprehensibility, deviation from clear structure, and the consistent use of dark and negative themes.
Social constructivism
Social constructivism is a sociological theory of knowledge according to which human development is socially situated, and knowledge is constructed through interaction with others. Like social constructionism, social constructivism states that people work together to actively construct artifacts. But while social constructivism focuses on cognition, social constructionism focuses on the making of social reality. A very simple example is an object like a cup. The object can be used for many things, but its shape does suggest some 'knowledge' about carrying liquids (see also Affordance). A more complex example is an online course—not only do the 'shapes' of the software tools indicate certain things about the way online courses should work, but the activities and texts produced within the group as a whole will help shape how each person behaves within that group. A person's cognitive development will also be influenced by the culture that they are involved in, such as the language, history, and social context. For a philosophical account of one possible social-constructionist ontology, see the 'Criticism' section of Representative realism. Philosophy Strong social constructivism as a philosophical approach tends to suggest that "the natural world has a small or non-existent role in the construction of scientific knowledge". According to Maarten Boudry and Filip Buekens, Freudian psychoanalysis is a good example of this approach in action. However, Boudry and Buekens do not claim that 'bona fide' science is completely immune from all socialisation and paradigm shifts, merely that the strong social constructivist claim that all scientific knowledge is constructed ignores the reality of scientific success. One characteristic of social constructivism is that it rejects the role of superhuman necessity in either the invention/discovery of knowledge or its justification. In the field of invention it looks to contingency as playing an important part in the origin of knowledge, with historical interests and resourcing swaying the direction of mathematical and scientific knowledge growth. In the area of justification while acknowledging the role of logic and reason in testing, it also accepts that the criteria for acceptance vary and change over time. Thus mathematical proofs follow different standards in the present and throughout different periods in the past, as Paul Ernest argues. Education Social constructivism has been studied by many educational psychologists, who are concerned with its implications for teaching and learning. Social constructivism extends constructivism by incorporating the role of other actors and culture in development. In this sense it can also be contrasted with social learning theory by stressing interaction over observation. For more on the psychological dimensions of social constructivism, see the work of A. Sullivan Palincsar. Psychological tools are one of the key concepts in Lev Vygotsky's sociocultural perspective. Studies on increasing the use of student discussion in the classroom both support and are grounded in theories of social constructivism. There is a full range of advantages that results from the implementation of discussion in the classroom. Participating in group discussion allows students to generalize and transfer their knowledge of classroom learning and builds a strong foundation for communicating ideas orally. 
Many studies argue that discussion plays a vital role in increasing student ability to test their ideas, synthesize the ideas of others, and build deeper understanding of what they are learning. Large and small group discussion also affords students opportunities to exercise self-regulation, self-determination, and a desire to persevere with tasks. Additionally, discussion increases student motivation, collaborative skills, and the ability to problem solve. Increasing students’ opportunity to talk with one another and discuss their ideas increases their ability to support their thinking, develop reasoning skills, and to argue their opinions persuasively and respectfully. Furthermore, the feeling of community and collaboration in classrooms increases through offering more chances for students to talk together. Studies have found that students are not regularly accustomed to participating in academic discourse. Martin Nystrand argues that teachers rarely choose classroom discussion as an instructional format. The results of Nystrand’s (1996) three-year study focusing on 2400 students in 60 different classrooms indicate that the typical classroom teacher spends under three minutes an hour allowing students to talk about ideas with one another and the teacher. Even within those three minutes of discussion, most talk is not true discussion because it depends upon teacher-directed questions with predetermined answers. Multiple observations indicate that students in low socioeconomic schools and lower track classrooms are allowed even fewer opportunities for discussion. Discussion and interactive discourse promote learning because they afford students the opportunity to use language as a demonstration of their independent thoughts. Discussion elicits sustained responses from students that encourage meaning-making through negotiating with the ideas of others. This type of learning “promotes retention and in-depth processing associated with the cognitive manipulation of information”. One recent branch of work exploring social constructivist perspectives on learning focuses on the role of social technologies and social media in facilitating the generation of socially constructed knowledge and understanding in online environments. Academic writing In a constructivist approach, the focus is on the sociocultural conventions of academic discourse such as citing evidence, hedging and boosting claims, interpreting the literature to back one's own claims, and addressing counter claims. These conventions are inherent to a constructivist approach as they place value on the communicative, interpersonal nature of academic writing with a strong focus on how the reader receives the message. The act of citing others’ work is more than accurate attribution; it is an important exercise in critical thinking in the construction of an authorial self. See also Constructivist epistemology Educational psychology Experiential learning Learning theory Virtual community References Further reading Books Dyson, A. H. (2004). Writing and the sea of voices: Oral language in, around, and about writing. In R.B. Ruddell, & N.J. Unrau (Eds.), Theoretical Models and Processes of Reading (pp. 146–162). Newark, DE: International Reading Association. Paul Ernest (1998), Social Constructivism as a Philosophy of Mathematics, Albany NY: SUNY Press Fry, H & Kettering, S & Marshall, S (Eds.) (2008). A Handbook for Teaching and Learning in Higher Education. Routledge Glasersfeld, Ernst von (1995). 
Radical Constructivism: A Way of Knowing and Learning. London: RoutledgeFalmer. Grant, Colin B. (2000). Functions and Fictions of Communication. Oxford and Bern: Peter Lang. Grant, Colin B. (2007). Uncertainty and Communication: New Theoretical Investigations. Basingstoke: Palgrave Macmillan. Hale, M.S. & City, E.A. (2002). “But how do you do that?”: Decision making for the seminar facilitator. In J. Holden & J.S. Schmit. Inquiry and the literary text: Constructing discussions in the English classroom / Classroom practices in teaching English, volume 32. Urbana, IL: National Council of Teachers of English. André Kukla (2000), Social Constructivism and the Philosophy of Science, London: Routledge Nystrand, M. (1996). Opening dialogue: Understanding the dynamics of language and learning in the English classroom. New York: Teachers College Press. Poerksen, Bernhard (2004), The Certainty of Uncertainty: Dialogues Introducing Constructivism. Exeter: Imprint-Academic. Schmidt, Siegfried J. (2007). Histories & Discourses: Rewriting Constructivism. Exeter: Imprint-Academic. Vygotsky, L. (1978). Mind in Society. London: Harvard University Press. Chapter 6, Social Constructivism in Introduction to International Relations: Theories and Approaches, Robert Jackson and Georg Sørensen, Third Edition, OUP 2006 Papers Barab, S., Dodge, T. Thomas, M.K., Jackson, C. & Tuzun, H. (2007). Our designs and the social agendas they carry. Journal of the Learning Sciences, 16(2), 263-305. Boudry, M & Buekens, F (2011) The Epistemic Predicament of a Pseudoscience: Social Constructivism Confronts Freudian Psychoanalysis. Theoria, 77, 159–179 Collins, H. M. (1981) Stages in the Empirical Program of Relativism - Introduction. Social Studies of Science. 11(1) 3-10 Corden, R.E. (2001). Group discussion and the importance of a shared perspective: Learning from collaborative research. Qualitative Research, 1(3), 347-367. Paul Ernest, Social constructivism as a philosophy of mathematics: Radical constructivism rehabilitated? 1990 Mark McMahon, Social Constructivism and the World Wide Web - A Paradigm for Learning, ASCILITE 1997 Carlson, J. D., Social Constructivism, Moral Reasoning and the Liberal Peace: From Kant to Kohlberg, Paper presented at the annual meeting of The Midwest Political Science Association, Palmer House Hilton, Chicago, Illinois 2005 Glasersfeld, Ernst von, 1981. ‘An attentional model for the conceptual construction of units and number’, Journal for Research in Mathematics Education, 12:2, 83-94. Glasersfeld, Ernst von, 1989. Cognition, construction of knowledge, and teaching, Synthese, 80, 121-40. Matsumura, L.C., Slater, S.C., & Crosson, A. (2008). Classroom climate, rigorous instruction and curriculum, and students’ interactions in urban middle schools. The Elementary School Journal, 108(4), 294-312. McKinley, J. (2015). Critical argument and writer identity: social constructivism as a theoretical framework for EFL academic writing. Critical Inquiry in Language Studies, 12(3), 184-207. Reznitskaya, A., Anderson, R.C., & Kuo, L. (2007). Teaching and learning argumentation, The Elementary School Journal, 107(5), 449-472. Ronald Elly Wanda. "The Contributions of Social Constructivism in Political Studies". Weber, K., Maher, C., Powell, A., & Lee, H.S. (2008). Learning opportunities from group discussions: Warrants become the objects of debate. Educational Studies in Mathematics, 68 (3), 247-261.
Downshifting (lifestyle)
In social behavior, downshifting is a trend in which individuals adopt simpler lives to escape what critics call the "rat race". The long-term effects of downshifting can include an escape from what has been described as economic materialism, as well as a reduction in the "stress and psychological expense that may accompany economic materialism". This new social trend emphasizes finding an improved balance between leisure and work, while also focusing life goals on personal fulfillment, as well as building personal relationships instead of the all-consuming pursuit of economic success. Downshifting, as a concept, shares characteristics with simple living. However, it is distinguished as an alternative form by its focus on moderate change and concentration on an individual comfort level and a gradual approach to living. In the 1990s, this new form of simple living began appearing in the mainstream media, and has continually grown in popularity among populations living in industrial societies, especially the United States, the United Kingdom, New Zealand, and Australia, as well as Russia. Values and motives "Down-shifters" refers to people who adopt long-term voluntary simplicity in their lives. A few of the main practices of down-shifters include accepting less money for fewer hours worked, while placing an emphasis on consuming less in order to reduce their ecological footprint. One of the main results of these practices is being able to enjoy more leisure time in the company of others, especially loved ones. The primary motivations for downshifting are gaining leisure time, escaping from the work-and-spend cycle, and removing the clutter of unnecessary possessions. The personal goals of downshifting are simple: to reach a holistic self-understanding and satisfying meaning in life. Because of its personalized nature and emphasis on many minor changes, rather than a complete lifestyle overhaul, downshifting attracts participants from across the socioeconomic spectrum. An intrinsic consequence of downshifting is increased time for non-work-related activities, which, combined with the diverse demographics of downshifters, cultivates higher levels of civic engagement and social interaction. The scope of participation is limitless, because all members of society—adults, children, businesses, institutions, organizations, and governments—are able to downshift even if many demographic strata do not start "high" enough to "down"-shift. In practice, down-shifting involves a variety of behavioral and lifestyle changes. The majority of these down-shifts are voluntary choices. Natural life course events, such as the loss of a job or the birth of a child, can prompt involuntary down-shifting. There is also a temporal dimension, because a down-shift could be either temporary or permanent. Methods Work and income The most common form of down-shifting is work (or income) down-shifting. Down-shifting is fundamentally based on dissatisfaction with the conditions and consequences of the workplace environment. The philosophy of work-to-live replaces the social ideology of live-to-work. Reorienting economic priorities shifts the work–life balance away from the workplace. Economically, work downshifts are defined in terms of reductions in either actual or potential income, work hours, and spending levels. Following a path of earnings that is lower than the established market path is a downshift in potential earnings in favor of gaining other non-material benefits.
On an individual level, work downshifting is a voluntary reduction in annual income. Downshifters desire meaning in life outside of work and, therefore, will opt to decrease the amount of time they spend at work. Reducing the number of hours of work, consequently, lowers the amount earned. Simply not working overtime, or taking a half-day a week for leisure time, counts as a work downshift. Career downshifts are another way of downshifting economically and entail lowering previous aspirations of wealth, a promotion or higher social status. Quitting a job to work locally in the community, to work from home, or to start a business are all examples of career downshifts. Although more radical, these changes do not mean stopping work altogether. Many reasons are cited by workers for this choice and usually center on a personal cost–benefit analysis of current working situations and desired extracurricular activities. High stress, pressure from employers to increase productivity, and long commutes can be factors that contribute to the costs of being employed. If the down-shifter wants more non-material benefits like leisure time, a healthy family life, or personal freedom, then switching jobs could be a desirable option. Work down-shifting may also be a key to considerable health benefits as well as a healthy retirement. People are retiring later in life than previous generations. The Health and Retirement Study, conducted by the Health and Retirement Study Survey Research Center, suggests that women can gain long-term health benefits from down-shifting their work lives by working part-time hours over many years. Men, however, tend to be less healthy if they work part time from middle age until retirement. Men who down-shift to part-time hours between the ages of 60 and 65, by contrast, benefit from continuing to work a part-time job through a semi-retirement, even past the age of 70. This is an example of how flexible working policies can be a key to a healthy retirement. Spending habits Another aspect of down-shifting is being a conscious consumer or actively practicing alternative forms of consumption. Proponents of down-shifting point to consumerism as a primary source of stress and dissatisfaction because it creates a society of individualistic consumers who measure both social status and general happiness by an unattainable quantity of material possessions. Instead of buying goods for personal satisfaction, consumption down-shifting, purchasing only the necessities, is a way to focus on quality of life rather than quantity. This realignment of spending priorities promotes the functional utility of goods over their ability to convey status, which is evident in downshifters being generally less brand-conscious. These consumption habits also facilitate the option of working and earning less because annual spending is proportionally lower. Reducing spending is less demanding than more extreme downshifts in other areas, like employment, as it requires only minor lifestyle changes. Policies that enable downshifting Unions, businesses, and governments could implement more flexible working hours, part-time work, and other non-traditional work arrangements that enable people to work less, while still maintaining employment. Small business legislation, reduced filing requirements and reduced tax rates encourage small-scale individual entrepreneurship and therefore help individuals quit their jobs altogether and work for themselves on their own terms.
Environmental consequences The catch-phrase of International Downshifting Week is "Slow Down and Green Up". Whether intentional or unintentional, generally, the choices and practices of down-shifters nurture environmental health because they reject the fast-paced lifestyle fueled by fossil fuels and adopt more sustainable lifestyles. The latent function of consumption down-shifting is to reduce, to some degree, the carbon footprint of the individual down-shifter. An example is to shift from a corporate suburban rat race lifestyle to a small eco friendly farming lifestyle. Down-shifting geographically Downshifting geographically is a relocation to a smaller, rural, or more slow-paced community. This is often a response to the hectic pace of life and stresses in urban areas. It is a significant change but does not bring total removal from mainstream culture. Sociopolitical implications Although downshifting is primarily motivated by personal desire and not by a conscious political stance, it does define societal overconsumption as the source of much personal discontent. By redefining life satisfaction in non-material terms, downshifters assume an alternative lifestyle but continue to coexist in a society and political system preoccupied with the economy. In general, downshifters are politically apathetic because mainstream politicians mobilize voters by proposing governmental solutions to periods of financial hardship and economic recessions. This economic rhetoric is meaningless to downshifters who have forgone worrying about money. In the United States, the UK, and Australia, a significant minority, approximately 20 to 25 percent, of these countries' citizens identify themselves in some respect as downshifters. Downshifting is not an isolated or unusual choice. Politics still centers around consumerism and unrestricted growth, but downshifting values, such as family priorities and workplace regulation, appear in political debates and campaigns. Like downshifters, the Cultural Creatives is another social movement whose ideology and practices diverge from mainstream consumerism and according to Paul Ray, are followed by at least a quarter of U.S. citizens. In his book In Praise of Slowness, Carl Honoré relates followers of downshifting and simple living to the global slow movement. The significant number and diversity of downshifters are a challenge to economic approaches to improving society. The rise in popularity of downshifting and similar, post-materialist ideologies represents unorganized social movements without political aspirations or motivating grievances. This is a result of their grassroots nature and relatively inconspicuous, non-confrontational subcultures. See also Anti-consumerism Conspicuous consumption Degrowth Demotion Downsizing Eco-communalism Ecological economics Ecovillage Ethical consumerism FIRE movement Frugality Homesteading Intentional community Intentional living Minimalism / Simple living Permaculture Slow living Sustainable living Transition towns Workaholic References Further reading Blanchard, Elisa A. (1994). Beyond Consumer Culture: A Study of Revaluation and Voluntary Action. Unpublished thesis, Tufts University. Bull, Andy. (1998). Downshifting: The Ultimate Handbook. London: Thorsons Etziomi, Amitai. (1998). Voluntary simplicity: Characterization, select psychological implications, and societal consequences. Journal of Economic Psychology 19:619–43. Hamilton, Clive (November 2003). Downshifting in Britain: A sea-change in the pursuit of happiness. 
The Australia Institute Discussion Paper No. 58. 42p. Hamilton, C., Mail, E. (January 2003). Downshifting in Australia: A sea-change in the pursuit of happiness. The Australia Institute Discussion Paper No. 50. 12p. ISSN 1322-5421 Juniu, Susana (2000). Downshifting: Regaining the Essence of Leisure, Journal of Leisure Research, 1st Quarter, Vol. 32 Issue 1, p69, 5p. Levy, Neil (2005). Downshifting and Meaning in Life, Ratio, Vol. 18, Issue 2, 176–89. J. B. MacKinnon (2021). The Day the World Stops Shopping: How ending consumerism gives us a better life and a greener world, Penguin Random House. Mazza, P. (1997). Keeping it simple. Reflections 36 (March): 10–12. Nelson, Michelle R., Paek, Hye-Jin, Rademacher, Mark A. (2007). Downshifting Consumer = Upshifting Citizen?: An Examination of a Local Freecycle Community. The Annals of the American Academy of Political and Social Science, 141–56. Saltzman, Amy. (1991). Downshifting: Reinventing Success on a Slower Track. New York: Harper Collins. Schor, Juliet B (1998). Voluntary Downshifting in the 1990s. In E. Houston, J. Stanford, & L. Taylor (Eds.), Power, Employment, and Accumulation: Social Structures in Economic Theory and Practice (pp. 66–79). Armonk, NY: M. E. Sharpe, 2003. Text from University of Chapel Hill Library Collections.
Uniformitarianism
Uniformitarianism, also known as the Doctrine of Uniformity or the Uniformitarian Principle, is the assumption that the same natural laws and processes that operate in our present-day scientific observations have always operated in the universe in the past and apply everywhere in the universe. It refers to invariance in the metaphysical principles underpinning science, such as the constancy of cause and effect throughout space-time, but has also been used to describe spatiotemporal invariance of physical laws. Though an unprovable postulate that cannot be verified using the scientific method, some consider that uniformitarianism should be a required first principle in scientific research. Other scientists disagree and consider that nature is not absolutely uniform, even though it does exhibit certain regularities. In geology, uniformitarianism has included the gradualistic concept that "the present is the key to the past" and that geological events occur at the same rate now as they have always done, though many modern geologists no longer hold to a strict gradualism. Coined by William Whewell, uniformitarianism was originally proposed in contrast to catastrophism by British naturalists in the late 18th century, starting with the work of the geologist James Hutton in his many books including Theory of the Earth. Hutton's work was later refined by scientist John Playfair and popularised by geologist Charles Lyell's Principles of Geology in 1830. Today, Earth's history is considered to have been a slow, gradual process, punctuated by occasional natural catastrophic events. History 18th century Abraham Gottlob Werner (1749–1817) proposed Neptunism, where strata represented deposits from shrinking seas precipitated onto primordial rocks such as granite. In 1785 James Hutton proposed an opposing, self-maintaining infinite cycle based on natural history and not on the Biblical account. Hutton then sought evidence to support his idea that there must have been repeated cycles, each involving deposition on the seabed, uplift with tilting and erosion, and then moving undersea again for further layers to be deposited. At Glen Tilt in the Cairngorm mountains he found granite penetrating metamorphic schists, in a way which indicated to him that the presumed primordial rock had been molten after the strata had formed. He had read about angular unconformities as interpreted by Neptunists, and found an unconformity at Jedburgh where layers of greywacke in the lower layers of the cliff face have been tilted almost vertically before being eroded to form a level plane, under horizontal layers of Old Red Sandstone. In the spring of 1788 he took a boat trip along the Berwickshire coast with John Playfair and the geologist Sir James Hall, and found a dramatic unconformity showing the same sequence at Siccar Point. Playfair later recalled that "the mind seemed to grow giddy by looking so far into the abyss of time", and Hutton concluded a 1788 paper he presented at the Royal Society of Edinburgh, later rewritten as a book, with the phrase "we find no vestige of a beginning, no prospect of an end". Both Playfair and Hall wrote their own books on the theory, and for decades robust debate continued between Hutton's supporters and the Neptunists. Georges Cuvier's paleontological work in the 1790s, which established the reality of extinction, explained this by local catastrophes, after which other fixed species repopulated the affected areas. 
In Britain, geologists adapted this idea into "diluvial theory", which proposed repeated worldwide annihilation and creation of new fixed species adapted to a changed environment, initially identifying the most recent catastrophe as the biblical flood. 19th century From 1830 to 1833 Charles Lyell's multi-volume Principles of Geology was published. The work's subtitle was "An attempt to explain the former changes of the Earth's surface by reference to causes now in operation". He drew his explanations from field studies conducted directly before he went to work on the founding geology text, and developed Hutton's idea that the earth was shaped entirely by slow-moving forces still in operation today, acting over a very long period of time. The terms uniformitarianism for this idea, and catastrophism for the opposing viewpoint, were coined by William Whewell in a review of Lyell's book. Principles of Geology was the most influential geological work in the middle of the 19th century. Systems of inorganic earth history Geoscientists support diverse systems of Earth history, the nature of which rests on a certain mixture of views about the process, control, rate, and state which are preferred. Because geologists and geomorphologists tend to adopt opposite views over process, rate, and state in the inorganic world, there are eight different systems of beliefs in the development of the terrestrial sphere. All geoscientists stand by the principle of uniformity of law. Most, but not all, are directed by the principle of simplicity. All make definite assertions about the quality of rate and state in the inorganic realm. Lyell Lyell's uniformitarianism is a family of four related propositions, not a single idea: Uniformity of law – the laws of nature are constant across time and space. Uniformity of methodology – the appropriate hypotheses for explaining the geological past are those with analogy today. Uniformity of kind – past and present causes are all of the same kind, have the same energy, and produce the same effects. Uniformity of degree – geological circumstances have remained the same over time. None of these connotations requires another, and they are not all equally inferred by uniformitarians. Gould explained Lyell's propositions in Time's Arrow, Time's Cycle (1987), stating that Lyell conflated two different types of propositions: a pair of methodological assumptions with a pair of substantive hypotheses. The four together make up Lyell's uniformitarianism. Methodological assumptions The two methodological assumptions below are accepted to be true by the majority of scientists and geologists. Gould claims that these philosophical propositions must be assumed before you can proceed as a scientist doing science. "You cannot go to a rocky outcrop and observe either the constancy of nature's laws or the working of unknown processes. It works the other way around." You first assume these propositions and "then you go to the outcrop." Uniformity of law across time and space: Natural laws are constant across space and time. The axiom of uniformity of law is necessary in order for scientists to extrapolate (by inductive inference) into the unobservable past. As Gould (1987) puts it, "Making inferences about the past is wrapped up in the difference between studying the observable and the unobservable. In the observable, erroneous beliefs can be proven wrong and be inductively corrected by other observations. This is Popper's principle of falsifiability. However, past processes are not observable by their very nature. Therefore, the invariance of nature's laws must be assumed to come to conclusions about the past." The constancy of natural laws must be assumed in the study of the past; else we cannot meaningfully study it. Uniformity of process across time and space: Natural processes are constant across time and space. Though similar to uniformity of law, this second a priori assumption, shared by the vast majority of scientists, deals with geological causes, not physicochemical laws. The past is to be explained by processes acting currently in time and space rather than by inventing extra esoteric or unknown processes without good reason, a principle otherwise known as parsimony or Occam's razor. As one commentator cautions, "Strict uniformitarianism may often be a guarantee against pseudo-scientific phantasies and loose conjectures, but it makes one easily forget that the principle of uniformity is not a law, not a rule established after comparison of facts, but a methodological principle, preceding the observation of facts ... It is the logical principle of parsimony of causes and of the economy of scientific notions. By explaining past changes by analogy with present phenomena, a limit is set to conjecture, for there is only one way in which two things are equal, but there is an infinity of ways in which they could be supposed different." Substantive hypotheses The substantive hypotheses were controversial and, in some cases, accepted by few. These hypotheses are judged true or false on empirical grounds through scientific observation and repeated experimental data. This is in contrast with the previous two philosophical assumptions that come before one can do science and so cannot be tested or falsified by science. Uniformity of rate across time and space: Change is typically slow, steady, and gradual. Uniformity of rate (or gradualism) is what most people (including geologists) think of when they hear the word "uniformitarianism", confusing this hypothesis with the entire definition. As late as 1990, Lemon, in his textbook of stratigraphy, affirmed that "The uniformitarian view of earth history held that all geologic processes proceed continuously and at a very slow pace." Gould explained Hutton's view of uniformity of rate: mountain ranges or grand canyons are built by the accumulation of nearly insensible changes added up through vast time. Some major events, such as floods, earthquakes, and eruptions, do occur. But these catastrophes are strictly local. They neither occurred in the past nor shall happen in the future, at any greater frequency or extent than they display at present. In particular, the whole earth is never convulsed at once. Uniformity of state across time and space: Change is evenly distributed throughout space and time. The uniformity of state hypothesis implies that throughout the history of our earth there is no progress in any inexorable direction. The planet has almost always looked and behaved as it does now. Change is continuous but leads nowhere. The earth is in balance: a dynamic steady state. 20th century Stephen Jay Gould's first scientific paper, "Is uniformitarianism necessary?" (1965), reduced these four assumptions to two. He dismissed the first principle, which asserted spatial and temporal invariance of natural laws, as no longer an issue of debate.
He rejected the third (uniformity of rate) as an unjustified limitation on scientific inquiry, as it constrains past geologic rates and conditions to those of the present. So, Lyell's uniformitarianism was deemed unnecessary. Uniformitarianism was proposed in contrast to catastrophism, which states that the distant past "consisted of epochs of paroxysmal and catastrophic action interposed between periods of comparative tranquility". Especially in the late 19th and early 20th centuries, most geologists took this interpretation to mean that catastrophic events are not important in geologic time; one example of this is the debate over the formation of the Channeled Scablands due to the catastrophic Missoula glacial outburst floods. An important result of this debate and others was the re-clarification that, while the same principles operate in geologic time, catastrophic events that are infrequent on human time-scales can have important consequences in geologic history. Derek Ager has noted that "geologists do not deny uniformitarianism in its true sense, that is to say, of interpreting the past by means of the processes that are seen going on at the present day, so long as we remember that the periodic catastrophe is one of those processes. Those periodic catastrophes make more showing in the stratigraphical record than we have hitherto assumed." Modern geologists do not apply uniformitarianism in the same way as Lyell. They question whether rates of processes were uniform through time and whether only those values measured during the history of geology should be accepted. The present may not be a long enough key to penetrating the deep lock of the past. Geologic processes may have been active at different rates in the past that humans have not observed. "By force of popularity, uniformity of rate has persisted to our present day. For more than a century, Lyell's rhetoric conflating axiom with hypotheses has descended in unmodified form. Many geologists have been stifled by the belief that proper methodology includes an a priori commitment to gradual change, and by a preference for explaining large-scale phenomena as the concatenation of innumerable tiny changes." The current consensus is that Earth's history is a slow, gradual process punctuated by occasional natural catastrophic events that have affected Earth and its inhabitants. In practice it is reduced from Lyell's conflation, or blending, to simply the two philosophical assumptions. This is also known as the principle of geological actualism, which states that all past geological action was like all present geological action. The principle of actualism is the cornerstone of paleoecology. Social sciences Uniformitarianism has also been applied in historical linguistics, where it is considered a foundational principle of the field. Linguist Donald Ringe notes that the principle is known in linguistics, after William Labov and associates, as the Uniformitarian Principle or Uniformitarian Hypothesis. See also Conservation law Noether's theorem Law of universal gravitation Astronomical spectroscopy Cosmological principle History of paleontology Paradigm shift Physical constant Physical cosmology Scientific consensus Time-variation of fundamental constants
Pseudohistory
Pseudohistory is a form of pseudoscholarship that attempts to distort or misrepresent the historical record, often by employing methods resembling those used in scholarly historical research. The related term cryptohistory is applied to pseudohistory derived from the superstitions intrinsic to occultism. Pseudohistory is related to pseudoscience and pseudoarchaeology, and usage of the terms may occasionally overlap. Although pseudohistory comes in many forms, scholars have identified many features that tend to be common in pseudohistorical works; one example is that the use of pseudohistory is almost always motivated by a contemporary political, religious, or personal agenda. Pseudohistory also frequently presents sensational claims or a big lie about historical facts that would require unwarranted revision of the historical record. Another hallmark of pseudohistory is an underlying premise that scholars have a furtive agenda to suppress the promoter's thesis—a premise commonly supported by elaborate conspiracy theories. Works of pseudohistory often point exclusively to unreliable sources—including myths and legends, often treated as literal historical truth—to support the thesis being promoted while ignoring valid sources that contradict it. Sometimes a work of pseudohistory will adopt a position of historical relativism, insisting that there is really no such thing as historical truth and that any hypothesis is just as good as any other. Many works of pseudohistory conflate mere possibility with actuality, assuming that if something could have happened, then it did. Notable examples of pseudohistory include British Israelism, the Lost Cause of the Confederacy, the Irish slaves myth, the witch-cult, Armenian genocide denial, Holocaust denial, the clean Wehrmacht myth, the 16th- and 17th-century Spanish Black Legend, and the claim that the Katyn massacre was not committed by the Soviet NKVD. Definition and etymology The term pseudohistory was coined in the early nineteenth century, which makes the word older than the related terms pseudo-scholarship and pseudoscience. In an attestation from 1815, it is used to refer to the Contest of Homer and Hesiod, a purportedly historical narrative describing an entirely fictional contest between the Greek poets Homer and Hesiod. The pejorative sense of the term, labelling a flawed or disingenuous work of historiography, is found in another 1815 attestation. Pseudohistory is akin to pseudoscience in that both forms of falsification are achieved using a methodology that purports to, but does not, adhere to the established standards of research for the given field of intellectual enquiry of which the pseudoscience claims to be a part, and which offers little or no supporting evidence for its plausibility. Writers Michael Shermer and Alex Grobman define pseudohistory as "the rewriting of the past for present personal or political purposes". Other writers take a broader definition; Douglas Allchin, a historian of science, contends that when the history of scientific discovery is presented in a simplified way, with drama exaggerated and scientists romanticized, this creates wrong stereotypes about how science works, and in fact constitutes pseudohistory, despite being based on real facts. Characteristics Robert Todd Carroll has developed a list of criteria to identify pseudo-historic works. Nicholas Goodrick-Clarke prefers the term "cryptohistory".
He identifies two necessary elements as "a complete ignorance of the primary sources" and the repetition of "inaccuracies and wild claims". Other common characteristics of pseudohistory are: The arbitrary linking of disparate events so as to form – in the theorist's opinion – a pattern. This is typically then developed into a conspiracy theory postulating a hidden agent responsible for creating and maintaining the pattern. For example, the pseudohistorical The Holy Blood and the Holy Grail links the Knights Templar, the medieval Grail Romances, the Merovingian Frankish dynasty and the artist Nicolas Poussin in an attempt to identify lineal descendants of Jesus. Hypothesising the consequences of unlikely events that "could" have happened, thereby assuming tacitly that they did. Sensationalism, or shock value. Cherry picking, or "law office history": selecting evidence that helps the historical argument being made while suppressing evidence that hurts it. Categories and examples The following are some common categories of pseudohistorical theory, with examples. Not all theories in a listed category are necessarily pseudohistorical; they are rather categories that seem to attract pseudohistorians. Main categories Alternative chronologies An alternative chronology is a revised sequence of events that deviates from the standard timeline of world history accepted by mainstream scholars. An example of an "alternative chronology" is Anatoly Fomenko's New Chronology, which claims that recorded history actually began around AD 800 and all events that allegedly occurred prior to that point either never really happened at all or are simply inaccurate retellings of events that happened later. One of its outgrowths is the Tartary conspiracy theory. Other, less extreme examples are the phantom time hypothesis, which asserts that the years AD 614–911 never took place, and the New Chronology of David Rohl, which claims that the accepted timelines for ancient Egyptian and Israelite history are wrong. Historical falsification In the eighth century, a forged document known as the Donation of Constantine, which supposedly transferred authority over Rome and the western part of the Roman Empire to the Pope, became widely circulated. In the twelfth century, Geoffrey of Monmouth published the History of the Kings of Britain, a pseudohistorical work purporting to describe the ancient history and origins of the British people. The book synthesises earlier Celtic mythical traditions to inflate the deeds of the mythical King Arthur. The contemporary historian William of Newburgh wrote around 1190 that "it is quite clear that everything this man wrote about Arthur and his successors, or indeed about his predecessors from Vortigern onwards, was made up, partly by himself and partly by others". Historical revisionism The Shakespeare authorship question is a fringe theory that claims that the works attributed to William Shakespeare were actually written by someone other than William Shakespeare of Stratford-upon-Avon. Another example of historical revisionism is the thesis, found in the writings of David Barton and others, asserting that the United States was founded as an exclusively Christian nation. Mainstream historians instead support the traditional position, which holds that the American founding fathers intended for church and state to be kept separate. Confederate revisionists (a.k.a.
Civil War revisionists), "Lost Cause" advocates, and Neo-Confederates argue that the Confederate States of America's prime motivation was the maintenance of states' rights and limited government, rather than the preservation and expansion of slavery. Connected to the Lost Cause is the Irish slaves myth, a pseudo-historical narrative which conflates the experiences of Irish indentured servants and enslaved Africans in the Americas. This myth, which was historically promoted by Irish nationalists such as John Mitchel, has in the modern day been promoted by white supremacists in the United States to minimize the mistreatment experienced by African Americans (such as racism and segregation) and oppose demands for slavery reparations. The myth has also been used to obscure and downplay Irish involvement in the transatlantic slave trade. Historical negationism While closely related to previous categories, historical negationism or denialism specifically aims to deny outright the existence of confirmed events, often including various massacres, genocides, and national histories. Some examples include Holocaust denial, Armenian genocide denial, and Nakba denial, as in the 1984 work From Time Immemorial by Joan Peters. Psychohistory Mainstream historians have categorized psychohistory as pseudohistory. Psychohistory is an amalgam of psychology, history, and related social sciences and the humanities. Its stated goal is to examine the "why" of history, especially the difference between stated intention and actual behavior. It also states as its goal the combination of the insights of psychology, especially psychoanalysis, with the research methodology of the social sciences and humanities to understand the emotional origin of the behavior of individuals, groups and nations, past and present. Pseudoarchaeology Pseudoarchaeology refers to false interpretations of records, chiefly physical ones, often by unqualified or otherwise amateur archaeologists. These interpretations are often baseless and seldom align with established consensus. Nazi archaeology is a prominent example of this technique. Frequently, people who engage in pseudoarchaeology have a very strict interpretation of evidence and are unwilling to alter their stance, resulting in interpretations that often appear overly simplistic and that fail to capture the complexity and nuance of the complete narrative. Various examples of pseudohistory (The following examples may belong to several of the categories mentioned above, or to others not mentioned). Ancient aliens, ancient technologies, and lost lands Immanuel Velikovsky's books Worlds in Collision (1950), Ages in Chaos (1952), and Earth in Upheaval (1955), which became "instant bestsellers", demonstrated that pseudohistory based on ancient mythology held potential for tremendous financial success; they became models of success for future works in the genre. In 1968, Erich von Däniken published Chariots of the Gods?, which claims that ancient visitors from outer space constructed the pyramids and other monuments. He has since published other books in which he makes similar claims. These claims have all been categorized as pseudohistory. Similarly, Zechariah Sitchin has published numerous books claiming that a race of extraterrestrial beings from the planet Nibiru, known as the Anunnaki, visited Earth in ancient times in search of gold, and that they genetically engineered humans to serve as their slaves.
He claims that memories of these occurrences are recorded in Sumerian mythology, as well as other mythologies all across the globe. These speculations have likewise been categorized as pseudohistory. The ancient astronaut hypothesis was further popularized in the United States by the History Channel television series Ancient Aliens. History professor Ronald H. Fritze observed that the pseudohistorical claims promoted by von Däniken and the Ancient Aliens program have a periodic popularity in the US: "In a pop culture with a short memory and a voracious appetite, aliens and pyramids and lost civilizations are recycled like fashions." The author Graham Hancock has sold over four million copies of books promoting the pseudohistorical thesis that all the major monuments of the ancient world, including Stonehenge, the Egyptian pyramids, and the moai of Easter Island, were built by a single ancient supercivilization, which Hancock claims thrived from 15,000 to 10,000 BC and possessed technological and scientific knowledge equal to or surpassing that of modern civilization. He first advanced the full form of this argument in his 1995 bestseller Fingerprints of the Gods, which won popular acclaim but scholarly disdain. Christopher Knight has published numerous books, including Uriel's Machine (2000), expounding pseudohistorical assertions that ancient civilizations possessed technology far more advanced than the technology of today. The claim that a lost continent known as Lemuria once existed in the Pacific Ocean has likewise been categorized as pseudohistory. Furthermore, similar conspiracy theories promote the idea of embellished, fabricated accounts of historical civilizations, namely Khazaria and Tartaria. Antisemitic pseudohistory The Protocols of the Learned Elders of Zion is a fraudulent work purporting to show a historical conspiracy for world domination by Jews. The work was conclusively proven to be a forgery in August 1921, when The Times revealed that extensive portions of the document were directly plagiarized from Maurice Joly's 1864 satirical dialogue The Dialogue in Hell Between Machiavelli and Montesquieu, as well as Hermann Goedsche's 1868 anti-Semitic novel Biarritz. The Khazar theory is an academic fringe theory that postulates that the bulk of European Jewry is of Central Asian (Turkic) origin. In spite of the mainstream academic consensus that conclusively rejects it, this theory has been promoted in anti-Semitic and some anti-Zionist circles, which argue that Jews are an alien element in both Europe and Palestine. Holocaust denial in particular and genocide denial in general are widely categorized as pseudohistory. Major proponents of Holocaust denial include David Irving. Genocide deniers argue that the Holocaust, the Holodomor, the Armenian genocide, the Assyrian genocide, the Greek genocide and other genocides did not occur, or that accounts of them were greatly exaggerated. Ethnocentric or nationalist revisionism Most Afrocentric ideas (for example, pre-Columbian Africa–Americas contact theories; see also the Ancient Egyptian race controversy) have been identified as pseudohistorical, alongside the "Indigenous Aryans" theories published by Hindu nationalists during the 1990s and 2000s. The "crypto-history" developed within Germanic mysticism and Nazi occultism has likewise been placed under this categorization.
Among leading Nazis, Heinrich Himmler is believed to have been influenced by occultism and, according to one theory, developed the SS base at Wewelsburg in accordance with an esoteric plan. The Sun Language Theory is a pseudohistorical ideology which argues that all languages are descended from a form of proto-Turkish. The theory may have been partially devised to legitimize the Arabic and Semitic loanwords occurring in the Turkish language by asserting that the Arabic and Semitic words were derived from the Turkish ones rather than vice versa. A large number of nationalist pseudohistorical theories deal with the legendary Ten Lost Tribes of ancient Israel. British-Israelism, also known as Anglo-Israelism, the most famous example of this type, has been conclusively refuted by mainstream historians using evidence from a vast array of fields of study. Another nationalistic pseudohistorical theory is Antiquization, or Ancient Macedonism, which postulates direct demographic, cultural and linguistic continuity between the ancient Macedonians and the main ethnic group in present-day North Macedonia. The Bulgarian medieval dynasty of the Komitopules, which ruled the First Bulgarian Empire in the late 10th and early 11th centuries AD, is presented as "Macedonian", ruling a "medieval Macedonian state", because its capitals were located in what was previously the ancient kingdom of Macedonia. North Macedonian historians often replace the ethnonym "Bulgarians" with "Macedonians", or avoid it. North Macedonian scholars say the theory is intended to forge a national identity distinct from modern Bulgaria, which regards North Macedonia as an artificial nation. The theory is controversial in Greece and sparked mass protests there in 2018. A particular item of dispute is North Macedonian veneration of Alexander the Great; mainstream scholarship holds that Alexander was of Greek ancestry, that he was born in an area of ancient Macedonia that is now part of Greece, and that, although he ruled over what is now North Macedonia, he never lived there and did not speak the local language. To placate Greece and thereby facilitate the country's entry into the European Union and NATO, the Macedonian government formally renounced claims of ancient Macedonian heritage with the 2018 Prespa Agreement. Dacianism is a Romanian pseudohistorical current that attempts to attribute to the Dacians far more influence over European and world history than they actually had. Dacianist historiography claims that the Dacians held primacy over all other civilizations, including the Romans; that the Dacian language was the origin of Latin and all other languages, such as Hindi and Babylonian; and sometimes that the Zalmoxis cult has structural links to Christianity. Dacianism was most prevalent in National Communist Romania, as the Ceaușescu regime portrayed the Dacians as insurgents defying an "imperialist" Rome; the Communist Party had formally attached "protochronism", as Dacianism was known, to Marxist ideology by 1974.
Matriarchy
The consensus among academics is that no unambiguously and strictly matriarchal society is known to have existed, though many societies are known to have, or to have had, some matriarchal features, in particular matrilineality, matrilocality, and/or matrifocality. Anthropologist Donald Brown's list of human cultural universals (i.e., features shared by nearly all current human societies) includes men being the "dominant element" in public political affairs, which is the contemporary opinion of mainstream anthropology.
Some societies that are matrilineal or matrifocal may in fact have patriarchal power structures, and thus be misidentified as matriarchal. The idea that matriarchal societies existed and that they preceded patriarchal societies was first raised among Western academics in the 19th century, but it has since been discredited. Despite this, however, some second-wave feminists assert that a matriarchy preceded the patriarchy. The Goddess Movement and Riane Eisler's The Chalice and the Blade cite Venus figurines as evidence that the societies of Paleolithic and Neolithic Europe were matriarchies that worshipped a goddess. This belief is not supported by mainstream academics.
Pre-Columbian trans-oceanic contact theories
Excluding the Norse colonization of the Americas, most theories of pre-Columbian trans-oceanic contact have been classified as pseudohistory, including claims that the Americas were actually discovered by Arabs or Muslims. Gavin Menzies' book 1421: The Year China Discovered the World, which argues that Chinese sailors discovered America, has also been categorized as a work of pseudohistory.
Racist pseudohistory
Josiah Priest and other nineteenth-century American writers wrote pseudohistorical narratives that portrayed African Americans and Native Americans in an extremely negative light. Priest's first book was The Wonders of Nature and Providence, Displayed (1826). The book is regarded by modern critics as one of the earliest works of modern American pseudohistory. Priest attacked Native Americans in American Antiquities and Discoveries of the West (1833) and African Americans in Slavery, As It Relates to the Negro (1843). Other nineteenth-century writers, such as Thomas Gold Appleton, in his A Sheaf of Papers (1875), and George Perkins Marsh, in his The Goths in New England, seized upon false notions of Viking history to promote the superiority of white people (as well as to oppose the Catholic Church). Such misuse of Viking history and imagery reemerged in the twentieth century among some groups promoting white supremacy.
Soviet communist pseudohistory
Supporters of Soviet communist pseudohistory claim, among other things, that Joseph Stalin and other top Soviet leaders did not realize the scope of the mass killings perpetrated under the Stalin regime, that executions of prisoners were legally justifiable, and that prisoners in Soviet gulags performed important construction work that helped the Soviet Union economically, particularly during World War II. Scholars point to overwhelming evidence that Stalin directly helped plan the mass killings, that many prisoners were sent to gulags or executed extrajudicially, and that many prisoners did no productive work, often being isolated in remote camps or given pointless and menial tasks.
Israeli pseudohistory
In 2015, Israeli Prime Minister Benjamin Netanyahu asserted that Amin al-Husseini, the Mufti of Jerusalem, gave Adolf Hitler and other Nazi leaders the idea for the Holocaust. Historians across the world, along with the Palestine Liberation Organization (PLO) and the German government, characterized the claim as historically baseless. The PLO and the Zionist Union said the statement was politically motivated because it wrongly placed blame for the Holocaust on Palestinian nationalists, whom Netanyahu opposes, while implicitly absolving Hitler.
Anti-religious pseudohistory
The Christ myth theory claims that Jesus of Nazareth never existed as a historical figure and that he was invented by early Christians.
This argument currently finds very little support among scholars and historians of all faiths and has been described as pseudohistorical. Likewise, a small minority of historians have asserted that Muhammad either did not exist or was not central to the founding of Islam.
Religious pseudohistory
The Holy Blood and the Holy Grail (1982) by Michael Baigent, Richard Leigh, and Henry Lincoln is a book that purports to show that certain historical figures, such as Godfrey of Bouillon, and contemporary aristocrats are the lineal descendants of Jesus. Mainstream historians have widely panned the book, categorizing it as pseudohistory and pointing out that the genealogical tables used in it are now known to be spurious. Nonetheless, the book was an international best-seller and inspired Dan Brown's bestselling mystery thriller novel The Da Vinci Code. Although historians and archaeologists consider the Book of Mormon to be an anachronistic invention of Joseph Smith, many members of The Church of Jesus Christ of Latter-day Saints (LDS Church) believe that it describes ancient historical events in the Americas. Searches for Noah's Ark have also been categorized as pseudohistory. In her books, starting with The Witch-Cult in Western Europe (1921), English author Margaret Murray claimed that the witch trials of the early modern period were actually an attempt by chauvinistic Christians to annihilate a secret, pagan religion, which she claimed worshipped a Horned God. Murray's claims have now been widely rejected by respected historians. Nonetheless, her ideas have become the foundation myth of modern Wicca, a contemporary Neopagan religion. Belief in Murray's alleged witch-cult is still prevalent among Wiccans, but is gradually declining.
Hinduism
The belief that ancient India was technologically advanced, even to the extent of being a nuclear power, is gaining popularity in India. Emerging extreme nationalist trends and Hinduism-based ideologies in the political arena promote such claims. Vasudev Devnani, the education minister for the western state of Rajasthan, said in January 2017 that it was important to "understand the scientific significance" of the cow, as it was the only animal in the world to both inhale and exhale oxygen. In 2014, Prime Minister Narendra Modi told a gathering of doctors and medical staff at a Mumbai hospital that the story of the Hindu god Ganesha showed genetic science existed in ancient India. Many New Age pseudohistorians who focus on recasting mythological stories as history have found receptive audiences. A related event was the Indian Science Congress ancient aircraft controversy, in which Capt. Anand J. Bodas, a retired principal of a pilot training facility, claimed at the Indian Science Congress that ancient India had aircraft more advanced than those of today.
As a topic of study
Undergraduate courses critiquing pseudohistory are offered in liberal arts settings, one example being at Claremont McKenna College.
See also
List of pseudohistorians
Pseudoscientific metrology
Disinformation
External links
"Pseudohistory and Pseudoscience", Program in the History of Science and Technology, University of Minnesota, Minneapolis, Minnesota, United States.
Pseudohistory entry at Skeptic's Dictionary
The Hall of Ma'at
"The Restoration of History", from the American Skeptic magazine.
Descent from antiquity
In European genealogy, a descent from antiquity (DFA or DfA) is a proven unbroken line of descent between specific individuals from ancient history and people living today. Descents can readily be traced back to the Early Middle Ages, but beyond that, insufficient documentation of the ancestry of the new royal and noble families of the period makes tracing them to historical figures from antiquity challenging. Though the subject of ongoing effort, no well-researched, historically-documented generation-by-generation genealogical descents are known to exist in Europe. Past claims The idea of descent from antiquity is by no means new to genealogy. Hellenistic dynasties, such as the Ptolemies, claimed ancestry from deities and mythical figures. In the Middle Ages, major royal dynasties of Europe sponsored compilations claiming their descent from Julius Caesar, Alexander the Great, and in particular the rulers of Troy (see also Euhemerism). Such claims were intended as propaganda glorifying a royal patron by trumpeting the antiquity and nobility of his ancestry. These lines of descent included not only mythical figures but also outright invention, much of which is still widely perpetuated today. Current efforts The distinguishing feature of a DFA compared to such traditional pedigrees is the intent to establish an ancestry that is historically accurate and verifiable in each generation of the descent, setting the DFA apart from the legendary descents found in medieval genealogical sources and from modern pseudogenealogical descents appearing in books like The Holy Blood and the Holy Grail and The Da Vinci Code. DFA research has focused on the ancestries of royal and noble families, since the historical record is most complete for such families. Particular attention has focused on possible genealogical links between the new dynasties of western Europe from which well-documented descents are known, such as the Carolingians, Robertians, Cerdicings, and the Astur-Leonese dynasty, through the ruling families of the post-Roman Germanic dynasties and Franco-Romans to the gentility of the Roman Empire, or in the Eastern Mediterranean linking the royal Armenian wives of some Byzantine emperors through the ruling families of the Caucasus to the rulers of the Hellenistic, Parthian, and Roman-client kingdoms of the Middle East. The phrase descent from antiquity was used by Tobias Smollett in the 18th-century newspaper The Critical Review. Reviewing William Betham's Genealogical Tables of the Sovereigns of the World, from the earliest to the present period, he wrote "From a barren list of names we learn who were the fathers or mothers, or more distant progenitors, of the select few, who are able to trace what is called their descent from antiquity." The possibility of establishing a DFA as a result of serious genealogical research was raised in a pair of influential essays, by Iain Moncreiffe and Anthony Wagner. Wagner explored the reasons why it was difficult to do and suggested several possible routes. The following years have seen a number of studies of possible routes through which an appropriately documented descent might be found. These routes typically involve either linkages among the ruling dynasties of the post-Roman Empire Germanic states, or those between the ancient dynasties of the Caucasus and the rulers of the Byzantine Empire. 
Though largely based on historical documentation, these proposed routes invariably rely on speculation based on known political relationships and on onomastics: the tendency of families to name children in honor of relatives is used as evidence for hypothesized relationships between people bearing the same name. Proposed DFAs vary greatly both in the quality of their research and in the degree to which speculation plays a role in their proposed connections. No European DFA is accepted as established. The outlines of several possible ancestries that could become DFAs have been proposed, but each lacks crucial evidence. Nonetheless, the pursuit of DFAs has stimulated detailed inquiry into the prosopography of ancient and early medieval societies.
See also
Descent from Genghis Khan
Family tree of Confucius in the main line of descent
Imperial House of Japan
Kohen – the ancient Jewish priesthood upon whom some make claims of direct patrilineal descent
References
I. Moncreiffe of that Ilk & D. Pottinger, Blood Royal (Nelson, London, 1956).
T. S. M. Mommaerts-Browne, 'A Key to Descents from Antiquity', Journal of Ancient and Medieval Studies III (1984–85), 76–107.
Walter Pohl, "Genealogy: A Comparative Perspective from the Early Medieval West", in Walter Pohl et al., eds., Meanings of Community across Medieval Eurasia: Comparative Approaches (Brill, 2016).
N. L. Taylor, "Roman Genealogical Continuity and the 'Descents from Antiquity' Question: A Review Article", The American Genealogist 76 (2001), 129–136. Also available at Roman Genealogical Continuity.
A. R. Wagner, "Bridges to Antiquity", in Pedigree and Progress: Essays in the Genealogical Interpretation of History (Phillimore, London, 1975).
Neanderthal
Neanderthals ( ; Homo neanderthalensis or H. sapiens neanderthalensis) are an extinct group of archaic humans (generally regarded as a distinct species, though some regard it as a subspecies of Homo sapiens) who lived in Eurasia until about 40,000 years ago. The type specimen, Neanderthal 1, was found in 1856 in the Neander Valley in present-day Germany. It is not clear when the line of Neanderthals split from that of modern humans; studies have produced various times ranging from 315,000 to more than 800,000 years ago. The date of divergence of Neanderthals from their ancestor H. heidelbergensis is also unclear. The oldest potential Neanderthal bones date to 430,000 years ago, but the classification remains uncertain. Neanderthals are known from numerous fossils, especially from after 130,000 years ago. The reasons for Neanderthal extinction are disputed. Theories for their extinction include demographic factors such as small population size and inbreeding, competitive replacement, interbreeding and assimilation with modern humans, change of climate, disease, or a combination of these factors. For much of the early 20th century, European researchers depicted Neanderthals as primitive, unintelligent and brutish. Although knowledge and perception of them has markedly changed since then in the scientific community, the image of the unevolved caveman archetype remains prevalent in popular culture. In truth, Neanderthal technology was quite sophisticated. It includes the Mousterian stone-tool industry as well as the abilities to create fire, build cave hearths (to cook food, keep warm, defend themselves from animals, placing it at the centre of their homes), make adhesive birch bark tar, craft at least simple clothes similar to blankets and ponchos, weave, go seafaring through the Mediterranean, make use of medicinal plants, treat severe injuries, store food, and use various cooking techniques such as roasting, boiling, and smoking. Neanderthals consumed a wide array of food, mainly hoofed mammals, but also megafauna, plants, small mammals, birds, and aquatic and marine resources. Although they were probably apex predators, they still competed with cave lions, cave hyenas and other large predators. A number of examples of symbolic thought and Palaeolithic art have been inconclusively attributed to Neanderthals, namely possible ornaments made from bird claws and feathers, shells, collections of unusual objects including crystals and fossils, engravings, music production (possibly indicated by the Divje Babe flute), and Spanish cave paintings contentiously dated to before 65,000 years ago. Some claims of religious beliefs have been made. Neanderthals were likely capable of speech, possibly articulate, although the complexity of their language is not known. Compared with modern humans, Neanderthals had a more robust build and proportionally shorter limbs. Researchers often explain these features as adaptations to conserve heat in a cold climate, but they may also have been adaptations for sprinting in the warmer, forested landscape that Neanderthals often inhabited. They had cold-specific adaptations, such as specialised body-fat storage and an enlarged nose to warm air (although the nose could have been caused by genetic drift). Average Neanderthal men stood around and women tall, similar to pre-industrial modern Europeans. The braincases of Neanderthal men and women averaged about and , respectively, which is considerably larger than the modern human average ( and , respectively). 
The Neanderthal skull was more elongated and the brain had smaller parietal lobes and cerebellum, but larger temporal, occipital and orbitofrontal regions. The total population of Neanderthals remained low, proliferating weakly harmful gene variants and precluding effective long-distance networks. Despite this, there is evidence of regional cultures and regular communication between communities. They may have frequented caves and moved between them seasonally. Neanderthals lived in a high-stress environment with high trauma rates, and about 80% died before the age of 40. The 2010 Neanderthal genome project's draft report presented evidence for interbreeding between Neanderthals and modern humans. It possibly occurred 316,000 to 219,000 years ago, but more likely 100,000 years ago and again 65,000 years ago. Neanderthals also appear to have interbred with Denisovans, a different group of archaic humans, in Siberia. Around 1–4% of the genome of Eurasians, Indigenous Australians, Melanesians, Native Americans and North Africans is of Neanderthal ancestry, while most inhabitants of sub-Saharan Africa have around 0.3% Neanderthal genes, save possible traces from early sapiens-to-Neanderthal gene flow and/or more recent back-migration of Eurasians to Africa. In all, about 20% of distinctly Neanderthal gene variants survive in modern humans. Although many of the gene variants inherited from Neanderthals may have been detrimental and selected out, Neanderthal introgression appears to have affected the modern human immune system and is also implicated in several other biological functions and structures, but a large portion appears to be non-coding DNA.
Taxonomy
Etymology
Neanderthals are named after the Neander Valley, in which the first identified specimen was found. The valley was spelled Neanderthal and the species was spelled Neanderthaler in German until the spelling reform of 1901. The spelling Neandertal for the species is occasionally seen in English, even in scientific publications, but the scientific name, H. neanderthalensis, is always spelled with th according to the principle of priority. The vernacular name of the species in German is always Neandertaler ("inhabitant of the Neander Valley"), whereas Neandertal always refers to the valley. The valley itself was named after the late 17th-century German theologian and hymn writer Joachim Neander, who often visited the area. His name in turn means 'new man', being a learned Graecisation of the German surname Neumann. In English, Neanderthal can be pronounced with a /t/, as in the German, or with the standard English pronunciation of th as the fricative /θ/. The latter pronunciation, however, has no basis in the original German word, which is always pronounced with a t regardless of the historical spelling. Neanderthal 1, the type specimen, was known as the "Neanderthal cranium" or "Neanderthal skull" in anthropological literature, and the individual reconstructed on the basis of the skull was occasionally called "the Neanderthal man". The binomial name Homo neanderthalensis—extending the name "Neanderthal man" from the individual specimen to the entire species, and formally recognising it as distinct from humans—was first proposed by Irish geologist William King in a paper read to the 33rd meeting of the British Science Association in 1863.
However, in 1864, he recommended that Neanderthals and modern humans be classified in different genera, as he compared the Neanderthal braincase to that of a chimpanzee and argued that they were "incapable of moral and [theistic] conceptions".
Research history
The first Neanderthal remains—Engis 2 (a skull)—were discovered in 1829 by Dutch/Belgian prehistorian Philippe-Charles Schmerling in the Grottes d'Engis, Belgium. He concluded that these "poorly developed" human remains must have been buried at the same time and by the same causes as the co-existing remains of extinct animal species. In 1848, Gibraltar 1 from Forbes' Quarry was presented to the Gibraltar Scientific Society by their Secretary, Lieutenant Edmund Henry Réné Flint, but was thought to be a modern human skull. In 1856, local schoolteacher Johann Carl Fuhlrott recognised bones from Kleine Feldhofer Grotte in the Neander Valley—Neanderthal 1 (the holotype specimen)—as distinct from modern humans, and gave them to German anthropologist Hermann Schaaffhausen to study in 1857. The find comprised the cranium, thigh bones, right arm, left humerus and ulna, left ilium (hip bone), part of the right shoulder blade, and pieces of the ribs. Following Charles Darwin's On the Origin of Species, Fuhlrott and Schaaffhausen argued that the bones represented an ancient modern human form; Schaaffhausen, a social Darwinist, believed that humans linearly progressed from savage to civilised, and so concluded that Neanderthals were barbarous cave-dwellers. Fuhlrott and Schaaffhausen met opposition, most notably from the prolific pathologist Rudolf Virchow, who argued against defining new species based on only a single find. In 1872, Virchow erroneously interpreted Neanderthal characteristics as evidence of senility, disease and malformation rather than archaic anatomy, which stalled Neanderthal research until the end of the century. By the early 20th century, numerous other Neanderthal discoveries had been made, establishing H. neanderthalensis as a legitimate species. The most influential specimen was La Chapelle-aux-Saints 1 ("The Old Man") from La Chapelle-aux-Saints, France. French palaeontologist Marcellin Boule, in several publications that were among the first to establish palaeontology as a science, described the specimen in detail, but reconstructed him as slouching, ape-like, and only remotely related to modern humans. The 1912 'discovery' of Piltdown Man (a hoax), appearing much more similar to modern humans than Neanderthals did, was used as evidence that multiple different and unrelated branches of primitive humans existed, and supported Boule's reconstruction of H. neanderthalensis as a far distant relative and an evolutionary dead end. Boule fuelled the popular image of Neanderthals as barbarous, slouching, club-wielding primitives; this image was reproduced for several decades and popularised in science fiction works, such as the 1911 The Quest for Fire by J.-H. Rosny aîné and the 1927 The Grisly Folk by H. G. Wells, in which they are depicted as monsters. In 1911, Scottish anthropologist Arthur Keith reconstructed La Chapelle-aux-Saints 1 as an immediate precursor to modern humans, sitting next to a fire, producing tools, wearing a necklace, and having a more humanlike posture, but this failed to garner much scientific support, and Keith abandoned his thesis in 1915.
By the middle of the century, based on the exposure of Piltdown Man as a hoax, a reexamination of La Chapelle-aux-Saints 1 (who had osteoarthritis, which caused slouching in life), and new discoveries, the scientific community began to rework its understanding of Neanderthals. Ideas such as Neanderthal behaviour, intelligence and culture were being discussed, and a more humanlike image of them emerged. In 1939, American anthropologist Carleton Coon reconstructed a Neanderthal in a modern business suit and hat to emphasise that they would be, more or less, indistinguishable from modern humans had they survived into the present. William Golding's 1955 novel The Inheritors depicts Neanderthals as much more emotional and civilised. However, Boule's image continued to influence works until the 1960s. Modern-day Neanderthal reconstructions are often very humanlike. Hybridisation between Neanderthals and early modern humans had been suggested early on, such as by English anthropologist Thomas Huxley in 1890, Danish ethnographer Hans Peder Steensby in 1907, and Coon in 1962. In the early 2000s, supposed hybrid specimens were discovered: Lagar Velho 1 and Muierii 1. However, similar anatomy could also have been caused by adaptation to a similar environment rather than by interbreeding. Neanderthal admixture was found to be present in modern populations in 2010 with the mapping of the first Neanderthal genome sequence. This was based on three specimens from Vindija Cave, Croatia, which contained almost 4% archaic DNA (allowing for near-complete sequencing of the genome). However, the sequence contained approximately 1 error for every 200 letters (base pairs), as indicated by an implausibly high apparent mutation rate, probably due to the preservation of the sample. In 2012, British-American geneticist Graham Coop hypothesised that the researchers had instead found evidence of a different archaic human species interbreeding with modern humans; this was disproven in 2013 by the sequencing of a high-quality Neanderthal genome preserved in a toe bone from Denisova Cave, Siberia.
Classification
Neanderthals are hominids in the genus Homo (humans) and are generally classified as a distinct species, H. neanderthalensis, although sometimes as a subspecies of modern human, Homo sapiens neanderthalensis. The latter would necessitate the classification of modern humans as H. sapiens sapiens. A large part of the controversy stems from the vagueness of the term "species", as it is generally used to distinguish two genetically isolated populations, but admixture between modern humans and Neanderthals is known to have occurred. However, the absence of Neanderthal-derived patrilineal Y-chromosome and matrilineal mitochondrial DNA (mtDNA) in modern humans, along with the underrepresentation of Neanderthal X-chromosome DNA, could imply reduced fertility or frequent sterility of some hybrid crosses, representing a partial biological reproductive barrier between the groups and therefore a species distinction. In 2014, geneticist Svante Pääbo summarised the controversy, describing such "taxonomic wars" as unresolvable, "since there is no definition of species perfectly describing the case". Neanderthals are thought to have been more closely related to Denisovans than to modern humans. Likewise, based on nuclear DNA (nDNA), Neanderthals and Denisovans share a more recent last common ancestor (LCA) with each other than either does with modern humans. However, Neanderthals and modern humans share a more recent mitochondrial LCA (observable by studying mtDNA) and Y-chromosome LCA.
This likely resulted from an interbreeding event subsequent to the Neanderthal/Denisovan split. It involved either introgression from an unknown archaic human into Denisovans, or introgression from an earlier unidentified modern human wave out of Africa into Neanderthals. The fact that the mtDNA of a ~430,000-year-old early Neanderthal-line archaic human from Sima de los Huesos in Spain is more closely related to that of Denisovans than to those of other Neanderthals or modern humans has been cited as evidence in favour of the latter hypothesis.
Evolution
It is largely thought that H. heidelbergensis was the last common ancestor of Neanderthals, Denisovans and modern humans before the populations became isolated in Europe, Asia and Africa, respectively. The taxonomic distinction between H. heidelbergensis and Neanderthals is mostly based on a fossil gap in Europe between 300,000 and 243,000 years ago, during marine isotope stage 8. "Neanderthals", by convention, are fossils which date to after this gap. DNA from archaic humans from the 430,000-year-old Sima de los Huesos site in Spain indicates that they are more closely related to Neanderthals than to Denisovans, meaning that the split between Neanderthals and Denisovans must predate this time. The 400,000-year-old Aroeira 3 skull may also represent an early member of the Neanderthal line. It is possible that gene flow between Western Europe and Africa during the Middle Pleistocene may have obscured Neanderthal characteristics in some Middle Pleistocene European hominin specimens, such as those from Ceprano, Italy, and Sićevo Gorge, Serbia. The fossil record is much more complete from 130,000 years ago onwards, and specimens from this period make up the bulk of known Neanderthal skeletons. Dental remains from the Italian Visogliano and Fontana Ranuccio sites indicate that Neanderthal dental features had evolved by around 450–430,000 years ago, during the Middle Pleistocene. There are two main hypotheses regarding the evolution of Neanderthals following the Neanderthal/human split: two-phase and accretion. Two-phase argues that a single major environmental event—such as the Saale glaciation—caused European H. heidelbergensis to increase rapidly in body size and robustness and to undergo a lengthening of the head (phase 1), which then led to other changes in skull anatomy (phase 2). However, Neanderthal anatomy may not have been driven entirely by adaptation to cold weather. Accretion holds that Neanderthals slowly evolved over time from the ancestral H. heidelbergensis, divided into four stages: early pre-Neanderthals (MIS 12, Elster glaciation), pre-Neanderthals (MIS 11–9, Holstein interglacial), early Neanderthals (MIS 7–5, Saale glaciation–Eemian), and classic Neanderthals (MIS 4–3, Würm glaciation). Numerous dates for the Neanderthal/human split have been suggested. The date of around 250,000 years ago cites "H. helmei" as the last common ancestor (LCA), and the split is associated with the Levallois technique of making stone tools. The date of about 400,000 years ago uses H. heidelbergensis as the LCA. Estimates of 600,000 years ago assume that "H. rhodesiensis" was the LCA, splitting into the modern human lineage and a Neanderthal/H. heidelbergensis lineage. The estimate of 800,000 years ago has H. antecessor as the LCA, but different variations of this model would push the date back to 1 million years ago. However, a 2020 analysis of H. antecessor enamel proteomes suggests that H. antecessor is related but not a direct ancestor.
DNA studies have yielded various results for the Neanderthal/human divergence time, such as 538–315, 553–321, 565–503, 654–475, 690–550, 765–550, 741–317, and 800–520 thousand years ago; a dental analysis concluded that the split occurred before 800,000 years ago. Neanderthals and Denisovans are more closely related to each other than they are to modern humans, meaning the Neanderthal/Denisovan split occurred after their split with modern humans. Assuming a mutation rate of 1 × 10−9 or 0.5 × 10−9 per base pair (bp) per year, the Neanderthal/Denisovan split occurred around either 236–190,000 or 473–381,000 years ago, respectively. Using a rate of 1.1 × 10−8 per bp per generation, with a new generation every 29 years, the time is 744,000 years ago. Using a rate of 5 × 10−10 per nucleotide site per year, it is 616,000 years ago. The inferred date thus scales roughly inversely with the assumed mutation rate (see the illustrative note below). Using the latter dates, the split had likely already occurred by the time hominins spread out across Europe, and unique Neanderthal features had begun evolving by 600–500,000 years ago. Before splitting, Neanderthal/Denisovans (or "Neandersovans") migrating out of Africa into Europe apparently interbred with an unidentified "superarchaic" human species who were already present there; these superarchaics were the descendants of a very early migration out of Africa around 1.9 mya.
Demographics
Range
Pre- and early Neanderthals, living before the Eemian interglacial (130,000 years ago), are poorly known and come mostly from Western European sites. From 130,000 years ago onwards, the quality of the fossil record increases dramatically with classic Neanderthals, who are recorded from Western, Central, Eastern and Mediterranean Europe, as well as Southwest, Central and Northern Asia up to the Altai Mountains in southern Siberia. Pre- and early Neanderthals, on the other hand, seem to have continuously occupied only France, Spain and Italy, although some appear to have moved out of this "core area" to form temporary settlements eastward (although without leaving Europe). Nonetheless, southwestern France has the highest density of sites for pre-, early and classic Neanderthals. The Neanderthals were the first human species to permanently occupy Europe, as the continent had only sporadically been occupied by earlier humans. The southernmost find was recorded at Shuqba Cave, Levant; reports of Neanderthals from the North African Jebel Irhoud and Haua Fteah have been reidentified as H. sapiens. Their easternmost presence is recorded at Denisova Cave, Siberia (85°E); the southeast Chinese Maba Man, a skull, shares several physical attributes with Neanderthals, although these may be the result of convergent evolution rather than of Neanderthals extending their range to the Pacific Ocean. The northernmost bound is generally accepted to have been 55°N, with unambiguous sites known between 50 and 53°N, although this is difficult to assess because glacial advances destroy most human remains, and palaeoanthropologist Trine Kellberg Nielsen has argued that the lack of evidence of Southern Scandinavian occupation is (at least during the Eemian interglacial) due to the former explanation and to a lack of research in the area. Middle Palaeolithic artefacts have been found up to 60°N on the Russian plains, but these are more likely attributable to modern humans. A 2017 study claimed the presence of Homo at the 130,000-year-old Californian Cerutti Mastodon site in North America, but this is largely considered implausible.
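A brief illustrative note on the divergence dates quoted above (a minimal sketch, assuming a strict molecular clock and ignoring ancestral polymorphism, which the published studies do correct for): two lineages that split t years ago each accumulate roughly μt substitutions per site, where μ is the assumed per-site mutation rate per year, so the expected pairwise divergence d and the inferred split time are
\[ d \approx 2\mu t \quad\Longrightarrow\quad t \approx \frac{d}{2\mu}. \]
Because t is inversely proportional to μ, halving the assumed rate from 1 × 10−9 to 0.5 × 10−9 per base pair per year roughly doubles the inferred date, which is why the two Neanderthal/Denisovan ranges quoted above (236–190 and 473–381 thousand years ago) differ by about a factor of two.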
It is unknown how the rapidly fluctuating climate of the last glacial period (Dansgaard–Oeschger events) impacted Neanderthals, as warming periods would produce more favourable temperatures but encourage forest growth and deter megafauna, whereas frigid periods would produce the opposite. However, Neanderthals may have preferred a forested landscape. Stable environments with mild mean annual temperatures may have been the most suitable Neanderthal habitats. Populations may have peaked in cold but not extreme intervals, such as marine isotope stages 8 and 6 (respectively, 300,000 and 191,000 years ago during the Saale glaciation). It is possible their range expanded and contracted as the ice retreated and grew, respectively, to avoid permafrost areas, residing in certain refuge zones during glacial maxima. In 2021, Israeli anthropologist Israel Hershkovitz and colleagues suggested the 140- to 120,000-year-old Israeli Nesher Ramla remains, which feature a mix of Neanderthal and more ancient H. erectus traits, represent one such source population which recolonised Europe following a glacial period. Population Like modern humans, Neanderthals probably descended from a very small population with an effective population—the number of individuals who can bear or father children—of 3,000 to 12,000 approximately. However, Neanderthals maintained this very low population, proliferating weakly harmful genes due to the reduced effectivity of natural selection. Various studies, using mtDNA analysis, yield varying effective populations, such as about 1,000 to 5,000; 5,000 to 9,000 remaining constant; or 3,000 to 25,000 steadily increasing until 52,000 years ago before declining until extinction. Archaeological evidence suggests that there was a tenfold increase in the modern human population in Western Europe during the period of the Neanderthal/modern human transition, and Neanderthals may have been at a demographic disadvantage due to a lower fertility rate, a higher infant mortality rate, or a combination of the two. Estimates giving a total population in the higher tens of thousands are contested. A consistently low population may be explained in the context of the "Boserupian Trap": a population's carrying capacity is limited by the amount of food it can obtain, which in turn is limited by its technology. Innovation increases with population, but if the population is too low, innovation will not occur very rapidly and the population will remain low. This is consistent with the apparent 150,000 year stagnation in Neanderthal lithic technology. In a sample of 206 Neanderthals, based on the abundance of young and mature adults in comparison to other age demographics, about 80% of them above the age of 20 died before reaching 40. This high mortality rate was probably due to their high-stress environment. However, it has also been estimated that the age pyramids for Neanderthals and contemporary modern humans were the same. Infant mortality was estimated to have been very high for Neanderthals, about 43% in northern Eurasia. Anatomy Build Neanderthals had more robust and stockier builds than typical modern humans, wider and barrel-shaped rib cages; wider pelvises; and proportionally shorter forearms and forelegs. Based on 45 Neanderthal long bones from 14 men and 7 women, the average height was for males and for females. 
For comparison, the average height of 20 males and 10 females Upper Palaeolithic humans is, respectively, and , although this decreases by nearer the end of the period based on 21 males and 15 females; and the average in the year 1900 was and , respectively. The fossil record shows that adult Neanderthals varied from about in height, although some may have grown much taller (73.8 to 184.8 cm based on footprint length and from 65.8 to 189.3 cm based on footprint width). For Neanderthal weight, samples of 26 specimens found an average of for males and for females. Using , the body mass index for Neanderthal males was calculated to be 26.9–28.2, which in modern humans correlates to being overweight. This indicates a very robust build. The Neanderthal LEPR gene concerned with storing fat and body heat production is similar to that of the woolly mammoth, and so was likely an adaptation for cold climate. The neck vertebrae of Neanderthals are thicker from the front to the rear and transversely than those of (most) modern humans, leading to stability, possibly to accommodate a different head shape and size. Although the Neanderthal thorax (where the ribcage is) was similar in size to modern humans, the longer and straighter ribs would have equated to a widened mid-lower thorax and stronger breathing in the lower thorax, which are indicative of a larger diaphragm and possibly greater lung capacity. The lung capacity of Kebara 2 was estimated to have been , compared to the average human capacity of for males and for females. The Neanderthal chest was also more pronounced (expanded front-to-back, or antero-posteriorly). The sacrum (where the pelvis connects to the spine) was more vertically inclined, and was placed lower in relation to the pelvis, causing the spine to be less curved (exhibit less lordosis) and to fold in on itself somewhat (to be invaginated). In modern populations, this condition affects just a proportion of the population, and is known as a lumbarised sacrum. Such modifications to the spine would have enhanced side-to-side (mediolateral) flexion, better supporting the wider lower thorax. It is claimed by some that this feature would be normal for all Homo, even tropically-adapted Homo ergaster or erectus, with the condition of a narrower thorax in most modern humans being a unique characteristic. Body proportions are usually cited as being "hyperarctic" as adaptations to the cold, because they are similar to those of human populations which developed in cold climates—the Neanderthal build is most similar to that of Inuit and Siberian Yupiks among modern humans—and shorter limbs result in higher retention of body heat. Nonetheless, Neanderthals from more temperate climates—such as Iberia—still retain the "hyperarctic" physique. In 2019, English anthropologist John Stewart and colleagues suggested Neanderthals instead were adapted for sprinting, because of evidence of Neanderthals preferring warmer wooded areas over the colder mammoth steppe, and DNA analysis indicating a higher proportion of fast-twitch muscle fibres in Neanderthals than in modern humans. He explained their body proportions and greater muscle mass as adaptations to sprinting as opposed to the endurance-oriented modern human physique, as persistence hunting may only be effective in hot climates where the hunter can run prey to the point of heat exhaustion (hyperthermia). 
They had longer heel bones, reducing their ability for endurance running, and their shorter limbs would have reduced the moment arms of the limbs, allowing for greater net rotational force at the wrists and ankles and thus faster acceleration. In 1981, American palaeoanthropologist Erik Trinkaus made note of this alternative explanation, but considered it less likely.
Face
Neanderthals had less developed chins, sloping foreheads, and longer, broader, more projecting noses. The Neanderthal skull is typically more elongated, but also wider, and less globular than that of most modern humans, and features much more of an occipital bun, or "chignon", a protrusion on the back of the skull, although it is within the range of variation for the modern humans who have it. It is caused by the cranial base and temporal bones being placed higher and more towards the front of the skull, and by a flatter skullcap. The Neanderthal face is characterised by subnasal as well as mid-facial prognathism: the zygomatic arches are positioned further back than in modern humans, while the maxillary and nasal bones are positioned further forward. Neanderthal eyeballs are larger than those of modern humans. One study proposed that this was due to Neanderthals having enhanced visual abilities at the expense of neocortical and social development. However, this study was rejected by other researchers, who concluded that eyeball size does not offer any evidence for the cognitive abilities of Neanderthals or modern humans. The projecting Neanderthal nose and paranasal sinuses have generally been explained as having warmed air as it entered the lungs and retained moisture (the "nasal radiator" hypothesis); however, a wide nose would differ from the generally narrowed nasal shape of cold-adapted creatures, and the nose shape may instead have been caused by genetic drift. Also, the sinuses, even when reconstructed as wide, are not grossly large, being comparable in size to those of modern humans. However, if sinus size is not an important factor in breathing cold air, then their actual function is unclear, so they may not be a good indicator of the evolutionary pressures that shaped such a nose. Further, a computer reconstruction of the Neanderthal nose and predicted soft-tissue patterns shows some similarities to those of modern Arctic peoples, potentially meaning the noses of both populations convergently evolved for breathing cold, dry air. Neanderthals featured a rather large jaw, which was once cited as a response to a large bite force evidenced by heavy wear on Neanderthal front teeth (the "anterior dental loading" hypothesis), but similar wear trends are seen in contemporary humans. It could also have evolved to fit larger teeth in the jaw, which would better resist wear and abrasion, and the increased wear on the front teeth compared with the back teeth probably stems from repetitive use. Neanderthal dental wear patterns are most similar to those of modern Inuit. The incisors are large and shovel-shaped and, compared to modern humans, there was an unusually high frequency of taurodontism, a condition in which the molars are bulkier due to an enlarged pulp (tooth core). Taurodontism was once thought to have been a distinguishing characteristic of Neanderthals which lent some mechanical advantage or stemmed from repetitive use, but it was more likely simply a product of genetic drift.
The bite force of Neanderthals and modern humans is now thought to be about the same, about and in modern human males and females, respectively. Brain The Neanderthal braincase averages for males and for females, which is significantly larger than the averages for all groups of extant humans; for example, modern European males average and females . For 28 modern human specimens from 190,000 to 25,000 years ago, the average was about disregarding sex, and modern human brain size is suggested to have decreased since the Upper Palaeolithic. The largest Neanderthal brain, Amud 1, was calculated to be , one of the largest ever recorded in hominids. Both Neanderthal and human infants measure about . When viewed from the rear, the Neanderthal braincase has lower, wider, rounder appearance than in anatomically modern humans. This characteristic shape is referred to as "en bombe" (bomb-like), and is unique to Neanderthals, with all other hominid species (including most modern humans) generally having narrow and relatively upright cranial vaults, when viewed from behind. The Neanderthal brain would have been characterised by relatively smaller parietal lobes and a larger cerebellum. Neanderthal brains also have larger occipital lobes (relating to the classic occurrence of an occipital bun in Neanderthal skull anatomy, as well as the greater width of their skulls), which implies internal differences in the proportionality of brain-internal regions, relative to Homo sapiens, consistent with external measurements obtained with fossil skulls. Their brains also have larger temporal lobe poles, wider orbitofrontal cortex, and larger olfactory bulbs, suggesting potential differences in language comprehension and associations with emotions (temporal functions), decision making (the orbitofrontal cortex) and sense of smell (olfactory bulbs). Their brains also show different rates of brain growth and development. Such differences, while slight, would have been visible to natural selection and may underlie and explain differences in the material record in things like social behaviours, technological innovation and artistic output. Hair and skin colour The lack of sunlight most likely led to the proliferation of lighter skin in Neanderthals; however, it has been recently claimed that light skin in modern Europeans was not particularly prolific until perhaps the Bronze Age. Genetically, BNC2 was present in Neanderthals, which is associated with light skin colour; however, a second variation of BNC2 was also present, which in modern populations is associated with darker skin colour in the UK Biobank. DNA analysis of three Neanderthal females from southeastern Europe indicates that they had brown eyes, dark skin colour and brown hair, with one having red hair. In modern humans, skin and hair colour is regulated by the melanocyte-stimulating hormone—which increases the proportion of eumelanin (black pigment) to phaeomelanin (red pigment)—which is encoded by the MC1R gene. There are five known variants in modern humans of the gene which cause loss-of-function and are associated with light skin and hair colour, and another unknown variant in Neanderthals (the R307G variant) which could be associated with pale skin and red hair. The R307G variant was identified in a Neanderthal from Monti Lessini, Italy, and possibly Cueva del Sidrón, Spain. However, as in modern humans, red was probably not a very common hair colour because the variant is not present in many other sequenced Neanderthals. 
Metabolism Maximum natural lifespan and the timing of adulthood, menopause and gestation were most likely very similar to modern humans. However, it has been hypothesised, based on the growth rates of teeth and tooth enamel, that Neanderthals matured faster than modern humans, although this is not backed up by age biomarkers. The main differences in maturation are the atlas bone in the neck as well as the middle thoracic vertebrae fused about 2 years later in Neanderthals than in modern humans, but this was more likely caused by a difference in anatomy rather than growth rate. Generally, models on Neanderthal caloric requirements report significantly higher intakes than those of modern humans because they typically assume Neanderthals had higher basal metabolic rates (BMRs) due to higher muscle mass, faster growth rate and greater body heat production against the cold; and higher daily physical activity levels (PALs) due to greater daily travelling distances while foraging. However, using a high BMR and PAL, American archaeologist Bryan Hockett estimated that a pregnant Neanderthal would have consumed 5,500 calories per day, which would have necessitated a heavy reliance on big game meat; such a diet would have caused numerous deficiencies or nutrient poisonings, so he concluded that these are poorly warranted assumptions to make. Neanderthals may have been more active during dimmer light conditions rather than broad daylight because they lived in regions with reduced daytime hours in the winter, hunted large game (such predators typically hunt at night to enhance ambush tactics), and had large eyes and visual processing neural centres. Genetically, colour blindness (which may enhance mesopic vision) is typically correlated with northern-latitude populations, and the Neanderthals from Vindija Cave, Croatia, had some substitutions in the Opsin genes which could have influenced colour vision. However, the functional implications of these substitutions are inconclusive. Neanderthal-derived alleles near ASB1 and EXOC6 are associated with being an evening person, narcolepsy and day-time napping. Pathology Neanderthals suffered a high rate of traumatic injury, with an estimated 79–94% of specimens showing evidence of healed major trauma, of which 37–52% were severely injured, and 13–19% injured before reaching adulthood. One extreme example is Shanidar 1, who shows signs of an amputation of the right arm likely due to a nonunion after breaking a bone in adolescence, osteomyelitis (a bone infection) on the left clavicle, an abnormal gait, vision problems in the left eye, and possible hearing loss (perhaps swimmer's ear). In 1995, Trinkaus estimated that about 80% succumbed to their injuries and died before reaching 40, and thus theorised that Neanderthals employed a risky hunting strategy ("rodeo rider" hypothesis). However, rates of cranial trauma are not significantly different between Neanderthals and Middle Palaeolithic modern humans (although Neanderthals seem to have had a higher mortality risk), there are few specimens of both Upper Palaeolithic modern humans and Neanderthals who died after the age of 40, and there are overall similar injury patterns between them. In 2012, Trinkaus concluded that Neanderthals instead injured themselves in the same way as contemporary humans, such as by interpersonal violence. 
A 2016 study looking at 124 Neanderthal specimens argued that high trauma rates were instead caused by animal attacks, and found that about 36% of the sample were victims of bear attacks, 21% big cat attacks, and 17% wolf attacks (totalling 92 positive cases, 74%). There were no cases of hyena attacks, although hyenas still nonetheless probably attacked Neanderthals, at least opportunistically. Such intense predation probably stemmed from common confrontations due to competition over food and cave space, and from Neanderthals hunting these carnivores. Low population caused a low genetic diversity and probably inbreeding, which reduced the population's ability to filter out harmful mutations (inbreeding depression). However, it is unknown how this affected a single Neanderthal's genetic burden and, thus, if this caused a higher rate of birth defects than in modern humans. It is known, however, that the 13 inhabitants of Sidrón Cave collectively exhibited 17 different birth defects likely due to inbreeding or recessive disorders. Likely due to advanced age (60s or 70s), La Chapelle-aux-Saints 1 had signs of Baastrup's disease, affecting the spine, and osteoarthritis. Shanidar 1, who likely died at about 30 or 40, was diagnosed with the most ancient case of diffuse idiopathic skeletal hyperostosis (DISH), a degenerative disease which can restrict movement, which, if correct, would indicate a moderately high incident rate for older Neanderthals. Neanderthals were subject to several infectious diseases and parasites. Modern humans likely transmitted diseases to them; one possible candidate is the stomach bacteria Helicobacter pylori. The modern human papillomavirus variant 16A may descend from Neanderthal introgression. A Neanderthal at Cueva del Sidrón, Spain, shows evidence of a gastrointestinal Enterocytozoon bieneusi infection. The leg bones of the French La Ferrassie 1 feature lesions that are consistent with periostitis—inflammation of the tissue enveloping the bone—likely a result of hypertrophic osteoarthropathy, which is primarily caused by a chest infection or lung cancer. Neanderthals had a lower cavity rate than modern humans, despite some populations consuming typically cavity-causing foods in great quantity, which could indicate a lack of cavity-causing oral bacteria, namely Streptococcus mutans. Two 250,000-year-old Neanderthaloid children from Payré, France, present the earliest known cases of lead exposure of any hominin. They were exposed on two distinct occasions either by eating or drinking contaminated food or water, or inhaling lead-laced smoke from a fire. There are two lead mines within of the site. Culture Social structure Group dynamics Neanderthals likely lived in more sparsely distributed groups than contemporary modern humans, but group size is thought to have averaged 10 to 30 individuals, similar to modern hunter-gatherers. Reliable evidence of Neanderthal group composition comes from Cueva del Sidrón, Spain, and the footprints at Le Rozel, France: the former shows 7 adults, 3 adolescents, 2 juveniles and an infant; whereas the latter, based on footprint size, shows a group of 10 to 13 members where juveniles and adolescents made up 90%. A Neanderthal child's teeth analysed in 2018 showed it was weaned after 2.5 years, similar to modern hunter gatherers, and was born in the spring, which is consistent with modern humans and other mammals whose birth cycles coincide with environmental cycles. 
Based on ailments indicating high stress at a young age, such as stunted growth, British archaeologist Paul Pettitt hypothesised that children of both sexes were put to work directly after weaning, and Trinkaus suggested that, upon reaching adolescence, an individual may have been expected to join in hunting large and dangerous game. However, the bone trauma is comparable to that of modern Inuit, which could suggest a similar childhood for Neanderthals and contemporary modern humans. Further, such stunting may also have resulted from harsh winters and bouts of low food resources.

Sites showing evidence of no more than three individuals may have represented nuclear families or temporary camps for special task groups (such as a hunting party). Bands likely moved between certain caves depending on the season, indicated by remains of seasonal materials such as certain foods, and returned to the same locations generation after generation. Some sites may have been used for over 100 years. Cave bears may have competed intensely with Neanderthals for cave space, and cave bear populations declined from 50,000 years ago onwards (although their extinction occurred well after Neanderthals had died out). Neanderthals also preferred caves whose openings faced south.

Although Neanderthals are generally considered to have been cave dwellers, with a cave serving as a 'home base', open-air settlements near contemporaneously inhabited cave systems in the Levant could indicate mobility between cave and open-air bases in this area. Evidence for long-term open-air settlements is known from the 'Ein Qashish site in Israel and Moldova I in Ukraine. Although Neanderthals appear to have had the ability to inhabit a range of environments—including plains and plateaux—open-air Neanderthal sites are generally interpreted as having been used as slaughtering and butchering grounds rather than living spaces.

In 2022, the remains of the first known Neanderthal family (six adults and five children) were excavated from Chagyrskaya Cave in the Altai Mountains of southern Siberia in Russia. The family, which included a father, a daughter, and what appear to be cousins, most likely died together, presumably from starvation.

Neanderthals, like contemporaneous modern humans, were most likely polygynous, based on their low second-to-fourth digit ratios, a biomarker for prenatal androgen effects that corresponds to a high incidence of polygyny in haplorhine primates.

Inter-group relations

Canadian ethnoarchaeologist Brian Hayden calculated that a self-sustaining population which avoids inbreeding would consist of about 450–500 individuals, which would necessitate each band interacting with 8–53 other bands, with the larger estimate more likely given their low population density. Analysis of the mtDNA of the Neanderthals of Cueva del Sidrón, Spain, showed that the three adult men belonged to the same maternal lineage, while the three adult women belonged to different ones. This suggests patrilocal residence (that a woman moved out of her group to live with her partner). However, the DNA of a Neanderthal woman from Denisova Cave, Russia, shows that she was highly inbred (her parents were either half-siblings with a common mother, double first cousins, an uncle and niece or aunt and nephew, or a grandfather and granddaughter or grandmother and grandson), and the inhabitants of Cueva del Sidrón show several defects, which may have been caused by inbreeding or recessive disorders.
Considering that most Neanderthal artefacts were sourced from the immediate vicinity of the main settlement, Hayden considered it unlikely that these bands interacted very often, and mapping of the Neanderthal brain, together with their small group size and population density, could indicate that they had a reduced ability for inter-group interaction and trade. However, a few Neanderthal artefacts in a settlement could have originated 20, 30, 100 and 300 km (12.5, 18.5, 62 and 186 mi) away. Based on this, Hayden also speculated that macro-bands formed which functioned much like those of the low-density hunter-gatherer societies of the Western Desert of Australia. Such macro-bands would collectively have encompassed a large territory, with each band claiming its own area, and maintained strong alliances for mating networks or to cope with leaner times and enemies. Similarly, British anthropologist Eiluned Pearce and Cypriot archaeologist Theodora Moutsiou speculated that Neanderthals were possibly capable of forming geographically expansive ethnolinguistic tribes encompassing upwards of 800 people, based on the long-distance transport of obsidian from its source compared to trends seen in obsidian transfer distance and tribe size in modern hunter-gatherers. However, according to their model, Neanderthals would not have been as efficient at maintaining long-distance networks as modern humans, probably due to a significantly lower population. Hayden noted an apparent cemetery of six or seven individuals at La Ferrassie, France, which, in modern humans, is typically used as evidence of a corporate group which maintained a distinct social identity and controlled some resource, trading, manufacturing and so on. La Ferrassie is also located in one of the richest animal-migration routes of Pleistocene Europe.

Genetic analysis indicates there were at least three distinct geographical groups—Western Europe, the Mediterranean coast, and east of the Caucasus—with some migration among these regions. Post-Eemian Western European Mousterian lithics can also be broadly grouped into three distinct macro-regions: Acheulean-tradition Mousterian in the southwest, Micoquien in the northeast, and Mousterian with bifacial tools (MBT) between the two; MBT may actually represent the interaction and fusion of the two different cultures. Southern Neanderthals exhibit regional anatomical differences from their northern counterparts: a less protrusive jaw, a shorter gap behind the molars, and a vertically higher jawbone. All of this suggests that Neanderthal communities regularly interacted with neighbouring communities within a region, but not as often beyond it.

Nonetheless, over long periods of time, there is evidence of large-scale cross-continental migration. Early specimens from Mezmaiskaya Cave in the Caucasus and Denisova Cave in the Siberian Altai Mountains differ genetically from those found in Western Europe, whereas later specimens from these caves have genetic profiles more similar to Western European Neanderthals than to the earlier specimens from the same locations, suggesting long-range migration and population replacement over time. Similarly, artefacts and DNA from Chagyrskaya and Okladnikov Caves, also in the Altai Mountains, resemble those of distant eastern European Neanderthal sites more than they do the artefacts and DNA of the older Neanderthals from Denisova Cave, suggesting two distinct migration events into Siberia.
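Hayden's band-network figures can be illustrated with a rough back-of-the-envelope calculation. The sketch below is not Hayden's published method; the band sizes it uses are assumptions loosely based on the 10–30 individual group sizes cited earlier, so its output only approximates his 8–53 range.

```python
# Rough illustration of Hayden's band-network estimate (not his published method).
# Band sizes are assumptions loosely based on the 10-30 individual groups cited above.

MIN_VIABLE_POPULATIONS = (450, 500)  # self-sustaining, inbreeding-avoiding population
ASSUMED_BAND_SIZES = (10, 55)        # assumed small and large band sizes

def other_bands_needed(viable_population: int, band_size: int) -> int:
    """Number of other bands one band must keep contact with so that the
    combined mating network reaches the viable population size."""
    total_bands = -(-viable_population // band_size)  # ceiling division
    return total_bands - 1

for population in MIN_VIABLE_POPULATIONS:
    for band_size in ASSUMED_BAND_SIZES:
        print(f"population {population}, band size {band_size}: "
              f"{other_bands_needed(population, band_size)} other bands")
```

With these assumed inputs the result spans roughly 8–49 other bands, the same order of magnitude as Hayden's 8–53; the smaller the bands, the more partner bands each one would have needed.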
Neanderthals seem to have suffered a major population decline during MIS 4 (71,000–57,000 years ago), and the distribution of the Micoquian tradition could indicate that Central Europe and the Caucasus were repopulated by communities from a refuge zone either in eastern France or in Hungary (the fringes of the Micoquian tradition) who dispersed along the rivers Prut and Dniester. There is also evidence of inter-group conflict: a skeleton from La Roche à Pierrot, France, shows a healed fracture on top of the skull apparently caused by a deep blade wound, and another from Shanidar Cave, Iraq, has a rib lesion characteristic of projectile weapon injuries.

Social hierarchy

It is sometimes suggested that, since they were hunters of challenging big game and lived in small groups, there was no sexual division of labour as seen in modern hunter-gatherer societies; that is, men, women and children all had to be involved in hunting, rather than men hunting while women and children foraged. However, among modern hunter-gatherers, the higher the dependence on meat, the greater the division of labour. Further, tooth-wear patterns in Neanderthal men and women suggest they commonly used their teeth for carrying items, but men exhibit more wear on the upper teeth and women on the lower, suggesting some cultural differences in tasks.

It is controversially proposed that some Neanderthals wore decorative clothing or jewellery—such as a leopard skin or raptor feathers—to display elevated status in the group. Hayden postulated that the small number of Neanderthal graves found was because only high-ranking members would receive an elaborate burial, as is the case for some modern hunter-gatherers. Trinkaus suggested that elderly Neanderthals were given special burial rites for having survived so long despite the high mortality rate. Alternatively, many more Neanderthals may have received burials, but the graves were infiltrated and destroyed by bears. Given that 20 graves of Neanderthals aged under 4 have been found—over a third of all known graves—deceased children may have received greater care during burial than other age groups.

Looking at Neanderthal skeletons recovered from several natural rock shelters, Trinkaus noted that, although Neanderthals bore several trauma-related injuries, none of them had significant trauma to the legs that would have debilitated movement. He suggested that self-worth in Neanderthal culture derived from contributing food to the group; a debilitating injury would remove this self-worth and result in near-immediate death, as individuals who could not keep up with the group while moving from cave to cave were left behind. However, there are examples of individuals with highly debilitating injuries being nursed for several years, and caring for the most vulnerable within the community dates even further back, to H. heidelbergensis. Especially given the high trauma rates, it is possible that such an altruistic strategy helped ensure their survival as a species for so long.

Food

Hunting and gathering

Neanderthals were once thought of as scavengers, but are now considered to have been apex predators. In 1980, it was hypothesised that two piles of mammoth skulls at La Cotte de St Brelade, Jersey, at the base of a gully were evidence of mammoth drive hunting (causing them to stampede off a ledge), but this is contested.
Living in a forested environment, Neanderthals were likely ambush hunters, getting close to and attacking their target—a prime adult—in a short burst of speed, thrusting in a spear at close quarters. Younger or wounded animals may have been hunted using traps, projectiles, or pursuit. Some sites show evidence that Neanderthals slaughtered whole herds of animals in large, indiscriminate hunts and then carefully selected which carcasses to process. Nonetheless, they were able to adapt to a variety of habitats. They appear to have eaten predominantly what was abundant within their immediate surroundings, with steppe-dwelling communities (generally outside of the Mediterranean) subsisting almost entirely on meat from large game, forest-dwelling communities consuming a wide array of plants and smaller animals, and waterside communities gathering aquatic resources, although even in more southerly, temperate areas such as the southeastern Iberian Peninsula, large game still featured prominently in Neanderthal diets. Contemporary humans, in contrast, seem to have used more complex food extraction strategies and generally had a more diverse diet. Nonetheless, Neanderthals still would have had to have eaten a varied enough diet to prevent nutrient deficiencies and protein poisoning, especially in the winter when they presumably ate mostly lean meat. Any food with high contents of other essential nutrients not provided by lean meat would have been vital components of their diet, such as fat-rich brains, carbohydrate-rich and abundant underground storage organs (including roots and tubers), or, like modern Inuit, the stomach contents of herbivorous prey items. For meat, Neanderthals appear to have fed predominantly on hoofed mammals. They primarily consumed red deer and reindeer, as these two were the most abundant game; however, they also ate other Pleistocene megafauna such as chamois, ibex, wild boar, steppe wisent, aurochs, Irish elk, woolly mammoth, straight-tusked elephant, woolly rhinoceros, Merck's rhinoceros the narrow-nosed rhinoceros, wild horse, and so on. There is evidence of directed cave and brown bear hunting both in and out of hibernation, as well as butchering. Analysis of Neanderthal bone collagen from Vindija Cave, Croatia, shows nearly all of their protein needs derived from animal meat. Some caves show evidence of regular rabbit and tortoise consumption. At Gibraltar sites, there are remains of 143 different bird species, many ground-dwelling such as the common quail, corn crake, woodlark, and crested lark. Scavenging birds such as corvids and eagles were commonly exploited. Neanderthals also exploited marine resources on the Iberian, Italian and Peloponnesian Peninsulas, where they waded or dived for shellfish, as early as 150,000 years ago at Cueva Bajondillo, Spain, similar to the fishing record of modern humans. At Vanguard Cave, Gibraltar, the inhabitants consumed Mediterranean monk seal, short-beaked common dolphin, common bottlenose dolphin, Atlantic bluefin tuna, sea bream and purple sea urchin; and at Gruta da Figueira Brava, Portugal, there is evidence of large-scale harvest of shellfish, crabs and fish. Evidence of freshwater fishing was found in Grotte di Castelcivita, Italy, for trout, chub and eel; Abri du Maras, France, for chub and European perch; Payré, France; and Kudaro Cave, Russia, for Black Sea salmon. Edible plant and mushroom remains are recorded from several caves. 
Neanderthals from Cueva del Sidrón, Spain, based on dental tartar, likely had a meatless diet of mushrooms, pine nuts and moss, indicating they were forest foragers. Remnants from Amud Cave, Israel, indicates a diet of figs, palm tree fruits and various cereals and edible grasses. Several bone traumas in the leg joints could possibly suggest habitual squatting, which, if the case, was likely done while gathering food. Dental tartar from Grotte de Spy, Belgium, indicates the inhabitants had a meat-heavy diet including woolly rhinoceros and mouflon sheep, while also regularly consuming mushrooms. Neanderthal faecal matter from El Salt, Spain, dated to 50,000 years ago—the oldest human faecal matter remains recorded—show a diet mainly of meat but with a significant component of plants. Evidence of cooked plant foods—mainly legumes and, to a far lesser extent, acorns—was discovered at the Kebara Cave site in Israel, with its inhabitants possibly gathering plants in spring and fall and hunting in all seasons except fall, although the cave was probably abandoned in late summer to early fall. At Shanidar Cave, Iraq, Neanderthals collected plants with various harvest seasons, indicating they scheduled returns to the area to harvest certain plants, and that they had complex food-gathering behaviours for both meat and plants. Food preparation Neanderthals probably could employ a wide range of cooking techniques, such as roasting, and they may have been able to heat up or boil soup, stew, or animal stock. The abundance of animal bone fragments at settlements may indicate the making of fat stocks from boiling bone marrow, possibly taken from animals that had already died of starvation. These methods would have substantially increased fat consumption, which was a major nutritional requirement of communities with low carbohydrate and high protein intake. Neanderthal tooth size had a decreasing trend after 100,000 years ago, which could indicate an increased dependence on cooking or the advent of boiling, a technique that would have softened food. At Cueva del Sidrón, Spain, Neanderthals likely cooked and possibly smoked food, as well as used certain plants—such as yarrow and camomile—as flavouring, although these plants may have instead been used for their medicinal properties. At Gorham's Cave, Gibraltar, Neanderthals may have been roasting pinecones to access pine nuts. At Grotte du Lazaret, France, a total of twenty-three red deer, six ibexes, three aurochs, and one roe deer appear to have been hunted in a single autumn hunting season, when strong male and female deer herds would group together for rut. The entire carcasses seem to have been transported to the cave and then butchered. Because this is such a large amount of food to consume before spoilage, it is possible these Neanderthals were curing and preserving it before winter set in. At 160,000 years old, it is the oldest potential evidence of food storage. The great quantities of meat and fat which could have been gathered in general from typical prey items (namely mammoths) could also indicate food storage capability. With shellfish, Neanderthals needed to eat, cook, or in some manner preserve them soon after collection, as shellfish spoils very quickly. At Cueva de los Aviones, Spain, the remains of edible, algae eating shellfish associated with the alga Jania rubens could indicate that, like some modern hunter gatherer societies, harvested shellfish were held in water-soaked algae to keep them alive and fresh until consumption. 
Competition

Competition from large Ice Age predators was rather high. Cave lions likely targeted horses, large deer and wild cattle, and leopards primarily reindeer and roe deer—prey which heavily overlapped with the Neanderthal diet. To defend a kill against such ferocious predators, Neanderthals may have engaged in a group display of yelling, arm waving or stone throwing, or quickly gathered the meat and abandoned the kill. However, at Grotte de Spy, Belgium, the remains of wolves, cave lions and cave bears—all major predators of the time—indicate Neanderthals hunted their competitors to some extent.

Neanderthals and cave hyenas may have exemplified niche differentiation, actively avoiding competition with each other. Although both mainly targeted the same groups of creatures—deer, horses and cattle—Neanderthals mainly hunted the former and cave hyenas the latter two. Further, animal remains from Neanderthal caves indicate they preferred to hunt prime individuals, whereas cave hyenas hunted weaker or younger prey, and cave hyena caves have a higher abundance of carnivore remains. Nonetheless, there is evidence that cave hyenas stole food and leftovers from Neanderthal campsites and scavenged dead Neanderthal bodies. Similarly, evidence from the site of Payre in southern France shows that Neanderthals exhibited resource partitioning with wolves.

Cannibalism

There are several instances of Neanderthals practising cannibalism across their range. The first example came from Krapina, Croatia, in 1899, and other examples were found at Cueva del Sidrón and Zafarraya in Spain, and at the French sites of Grotte de Moula-Guercy, Les Pradelles and La Quina. For the five cannibalised Neanderthals at the Grottes de Goyet, Belgium, there is evidence that the upper limbs were disarticulated, the lower limbs defleshed and also smashed (likely to extract bone marrow), the chest cavity disembowelled, and the jaw dismembered. There is also evidence that the butchers used some bones to retouch their tools. The processing of Neanderthal meat at Grottes de Goyet is similar to how they processed horse and reindeer.

About 35% of the Neanderthals at Marillac-le-Franc, France, show clear signs of butchery, and the presence of digested teeth indicates that the bodies were abandoned and eaten by scavengers, likely hyenas. These cannibalistic tendencies have been explained as ritual defleshing, pre-burial defleshing (to prevent scavenging or foul smell), an act of war, or simply consumption for food. Due to the small number of cases, and the higher number of cut marks seen on cannibalised individuals than on butchered animals (indicating inexperience), cannibalism was probably not a very common practice, and it may have only been done in times of extreme food shortage, as in some cases in recorded human history.

The arts

Personal adornment

Neanderthals used ochre, a clay earth pigment. Ochre is well documented from 60,000 to 45,000 years ago in Neanderthal sites, with the earliest example dating to 250,000–200,000 years ago from Maastricht-Belvédère, the Netherlands (a similar timespan to the ochre record of H. sapiens). It has been hypothesised to have functioned as body paint, and analyses of pigments from Pech de l'Azé, France, indicate they were applied to soft materials (such as a hide or human skin). However, modern hunter-gatherers, in addition to using ochre as body paint, also use it for medicine, for tanning hides, as a food preservative and as an insect repellent, so its use as decorative paint by Neanderthals is speculative.
Containers apparently used for mixing ochre pigments were found in Peștera Cioarei, Romania, which could indicate modification of ochre for solely aesthetic purposes. Neanderthals collected uniquely shaped objects and are suggested to have modified them into pendants, such as a fossil Aspa marginata sea snail shell possibly painted red from Grotta di Fumane, Italy, transported over to the site about 47,500 years ago; three shells, dated to about 120–115,000 years ago, perforated through the umbo belonging to a rough cockle, a Glycymeris insubrica, and a Spondylus gaederopus from Cueva de los Aviones, Spain, the former two associated with red and yellow pigments, and the latter a red-to-black mix of hematite and pyrite; and a king scallop shell with traces of an orange mix of goethite and hematite from Cueva Antón, Spain. The discoverers of the latter two claim that pigment was applied to the exterior to make it match the naturally vibrant inside colouration. Excavated from 1949 to 1963 from the French Grotte du Renne, Châtelperronian beads made from animal teeth, shells and ivory were found associated with Neanderthal bones, but the dating is uncertain and Châtelperronian artefacts may actually have been crafted by modern humans and simply redeposited with Neanderthal remains. Gibraltarian palaeoanthropologists Clive and Geraldine Finlayson suggested that Neanderthals used various bird parts as artistic media, specifically black feathers. In 2012, the Finlaysons and colleagues examined 1,699 sites across Eurasia, and argued that raptors and corvids, species not typically consumed by any human species, were overrepresented and show processing of only the wing bones instead of the fleshier torso, and thus are evidence of feather plucking of specifically the large flight feathers for use as personal adornment. They specifically noted the cinereous vulture, red-billed chough, kestrel, lesser kestrel, alpine chough, rook, jackdaw and the white tailed eagle in Middle Palaeolithic sites. Other birds claimed to present evidence of modifications by Neanderthals are the golden eagle, rock pigeon, common raven and the bearded vulture. The earliest claim of bird bone jewellery is a number of 130,000-year-old white tailed eagle talons found in a cache near Krapina, Croatia, speculated, in 2015, to have been a necklace. A similar 39,000-year-old Spanish imperial eagle talon necklace was reported in 2019 at Cova Foradà in Spain, though from the contentious Châtelperronian layer. In 2017, 17 incision-decorated raven bones from the Zaskalnaya VI rock shelter, Ukraine, dated to 43–38,000 years ago were reported. Because the notches are more-or-less equidistant to each other, they are the first modified bird bones that cannot be explained by simple butchery, and for which the argument of design intent is based on direct evidence. Discovered in 1975, the so-called Mask of la Roche-Cotard, a mostly flat piece of flint with a bone pushed through a hole on the midsection—dated to 32, 40, or 75,000 years ago—has been purported to resemble the upper half of a face, with the bone representing eyes. It is contested whether it represents a face, or if it even counts as art. In 1988, American archaeologist Alexander Marshack speculated that a Neanderthal at Grotte de L'Hortus, France, wore a leopard pelt as personal adornment to indicate elevated status in the group based on a recovered leopard skull, phalanges and tail vertebrae. 
Abstraction As of 2014, 63 purported engravings have been reported from 27 different European and Middle Eastern Lower-to-Middle Palaeolithic sites, of which 20 are on flint cortexes from 11 sites, 7 are on slabs from 7 sites, and 36 are on pebbles from 13 sites. It is debated whether or not these were made with symbolic intent. In 2012, deep scratches on the floor of Gorham's Cave, Gibraltar, were discovered, dated to older than 39,000 years ago, which the discoverers have interpreted as Neanderthal abstract art. The scratches could have also been produced by a bear. In 2021, an Irish elk phalanx with five engraved offset chevrons stacked above each other was discovered at the entrance to the Einhornhöhle cave in Germany, dating to about 51,000 years ago. In 2018, some red-painted dots, disks, lines and hand stencils on the cave walls of the Spanish La Pasiega, Maltravieso, and Doña Trinidad were dated to be older than 66,000 years ago, at least 20,000 years prior to the arrival of modern humans in Western Europe. This would indicate Neanderthal authorship, and similar iconography recorded in other Western European sites—such as Les Merveilles, France, and Cueva del Castillo, Spain—could potentially also have Neanderthal origins. However, the dating of these Spanish caves, and thus attribution to Neanderthals, is contested. Neanderthals are known to have collected a variety of unusual objects—such as crystals or fossils—without any real functional purpose or any indication of damage caused by use. It is unclear if these objects were simply picked up for their aesthetic qualities, or if some symbolic significance was applied to them. These items are mainly quartz crystals, but also other minerals such as cerussite, iron pyrite, calcite and galena. A few findings feature modifications, such as a mammoth tooth with an incision and a fossil nummulite shell with a cross etched in from Tata, Hungary; a large slab with 18 cupstones hollowed out from a grave in La Ferrassie, France; and a geode from Peștera Cioarei, Romania, coated with red ochre. A number of fossil shells are also known from French Neanderthals sites, such as a rhynchonellid and a Taraebratulina from Combe Grenal; a belemnite beak from Grottes des Canalettes; a polyp from Grotte de l'Hyène; a sea urchin from La Gonterie-Boulouneix; and a rhynchonella, feather star and belemnite beak from the contentious Châtelperronian layer of Grotte du Renne. Music Purported Neanderthal bone flute fragments made of bear long bones were reported from Potočka zijalka, Slovenia, in the 1920s, and Istállós-kői-barlang, Hungary, and Mokriška jama, Slovenia, in 1985; but these are now attributed to modern human activities. The 43,000-year-old Divje Babe flute from Slovenia, found in 1995, has been attributed by some researchers to Neanderthals, though its status as a flute is heavily disputed. Many researchers consider it to be most likely the product of a carnivorous animal chewing the bone, but its discoverer Ivan Turk and other researchers have maintained an argument that it was manufactured by Neanderthal as a musical instrument. Technology Despite the apparent 150,000-year stagnation in Neanderthal lithic innovation, there is evidence that Neanderthal technology was more sophisticated than was previously thought. However, the high frequency of potentially debilitating injuries could have prevented very complex technologies from emerging, as a major injury would have impeded an expert's ability to effectively teach a novice. 
Stone tools Neanderthals made stone tools, and are associated with the Mousterian industry. The Mousterian is also associated with North African H. sapiens as early as 315,000 years ago and was found in Northern China about 47–37,000 years ago in caves such as Jinsitai or Tongtiandong. It evolved around 300,000 years ago with the Levallois technique which developed directly from the preceding Acheulean industry (invented by H. erectus about 1.8 mya). Levallois made it easier to control flake shape and size, and as a difficult-to-learn and unintuitive process, the Levallois technique may have been directly taught generation to generation rather than via purely observational learning. There are distinct regional variants of the Mousterian industry, such as: the Quina and La Ferrassie subtypes of the Charentian industry in southwestern France, Acheulean-tradition Mousterian subtypes A and B along the Atlantic and northwestern European coasts, the Micoquien industry of Central and Eastern Europe and the related Sibiryachikha variant in the Siberian Altai Mountains, the Denticulate Mousterian industry in Western Europe, the racloir industry around the Zagros Mountains, and the flake cleaver industry of Cantabria, Spain, and both sides of the Pyrenees. In the mid-20th century, French archaeologist François Bordes debated against American archaeologist Lewis Binford to explain this diversity (the "Bordes–Binford debate"), with Bordes arguing that these represent unique ethnic traditions and Binford that they were caused by varying environments (essentially, form vs. function). The latter sentiment would indicate a lower degree of inventiveness compared to modern humans, adapting the same tools to different environments rather than creating new technologies. A continuous sequence of occupation is well documented in Grotte du Renne, France, where the lithic tradition can be divided into the Levallois–Charentian, Discoid–Denticulate (43,300  ±929 – 40,900 ±719 years ago), Levallois Mousterian (40,200 ±1,500 – 38,400 ±1,300 years ago) and Châtelperronian (40,930 ±393 – 33,670 ±450 years ago). There is some debate if Neanderthals had long-ranged weapons. A wound on the neck of an African wild ass from Umm el Tlel, Syria, was likely inflicted by a heavy Levallois-point javelin, and bone trauma consistent with habitual throwing has been reported in Neanderthals. Some spear tips from Abri du Maras, France, may have been too fragile to have been used as thrusting spears, possibly suggesting their use as darts. Organic tools The Châtelperronian in central France and northern Spain is a distinct industry from the Mousterian, and is controversially hypothesised to represent a culture of Neanderthals borrowing (or by process of acculturation) tool-making techniques from immigrating modern humans, crafting bone tools and ornaments. In this frame, the makers would have been a transitional culture between the Neanderthal Mousterian and the modern human Aurignacian. The opposing viewpoint is that the Châtelperronian was manufactured by modern humans instead. Abrupt transitions similar to the Mousterian/Châtelperronian could also simply represent natural innovation, like the La Quina–Neronian transition 50,000 years ago featuring technologies generally associated with modern humans such as bladelets and microliths. Other ambiguous transitional cultures include the Italian Uluzzian industry, and the Balkan Szeletian industry. 
Before immigration, the only evidence of Neanderthal bone tools are animal rib lissoirs—which are rubbed against hide to make it more supple or waterproof—although this could also be evidence for modern humans immigrating earlier than expected. In 2013, two 51,400- to 41,100-year-old deer rib lissoirs were reported from Pech-de-l'Azé and the nearby Abri Peyrony in France. In 2020, five more lissoirs made of aurochs or bison ribs were reported from Abri Peyrony, with one dating to about 51,400 years ago and the other four to 47,700–41,100 years ago. This indicates the technology was in use in this region for a long time. Since reindeer remains were the most abundant, the use of less abundant bovine ribs may indicate a specific preference for bovine ribs. Potential lissoirs have also been reported from Grosse Grotte, Germany (made of mammoth), and Grottes des Canalettes, France (red deer). The Neanderthals in 10 coastal sites in Italy (namely Grotta del Cavallo and Grotta dei Moscerini) and Kalamakia Cave, Greece, are known to have crafted scrapers using smooth clam shells, and possibly hafted them to a wooden handle. They probably chose this clam species because it has the most durable shell. At Grotta dei Moscerini, about 24% of the shells were gathered alive from the seafloor, meaning these Neanderthals had to wade or dive into shallow waters to collect them. At Grotta di Santa Lucia, Italy, in the Campanian volcanic arc, Neanderthals collected the porous volcanic pumice, which, for contemporary humans, was probably used for polishing points and needles. The pumices are associated with shell tools. At Abri du Maras, France, twisted fibres and a 3-ply inner-bark-fibre cord fragment associated with Neanderthals show that they produced string and cordage, but it is unclear how widespread this technology was because the materials used to make them (such as animal hair, hide, sinew, or plant fibres) are biodegradable and preserve very poorly. This technology could indicate at least a basic knowledge of weaving and knotting, which would have made possible the production of nets, containers, packaging, baskets, carrying devices, ties, straps, harnesses, clothes, shoes, beds, bedding, mats, flooring, roofing, walls and snares, and would have been important in hafting, fishing and seafaring. Dating to 52–41,000 years ago, the cord fragment is the oldest direct evidence of fibre technology, although 115,000-year-old perforated shell beads from Cueva Antón possibly strung together to make a necklace are the oldest indirect evidence. In 2020, British archaeologist Rebecca Wragg Sykes expressed cautious support for the genuineness of the find, but pointed out that the string would have been so weak that it would have had limited functions. One possibility is as a thread for attaching or stringing small objects. The archaeological record shows that Neanderthals commonly used animal hide and birch bark, and may have used them to make cooking containers. However, this is based largely on circumstantial evidence, as neither fossilises well. It is possible that the Neanderthals at Kebara Cave in Israel, used the shells of the spur-thighed tortoise as containers. At the Italian Poggetti Vecchi site, there is evidence that they used fire to process boxwood branches to make digging sticks, a common implement in hunter-gatherer societies. The Schöningen spears are a collection of wooden spears probably made by early Neanderthals found in Germany, dating to around 300,000 years ago. 
They were likely both thrown and used as handheld thrusting spears. The spears were made specifically of spruce (or possibly larch in some specimens) and pine, despite the scarcity of these trees in the environment, suggesting the wood had been deliberately selected for its material properties. The spears had been deliberately debarked, and the ends then sharpened by cutting and scraping. Other wooden tools made of split wood were also found at the site, some rounded and some pointed, which may have served domestic tasks: the pointed ones as awls (used to make holes) and the rounded ones as hide smoothers. The wooden artefacts show evidence of having been repurposed and reshaped.

Fire and construction

Many Mousterian sites have evidence of fire, some for extended periods of time, though it is unclear whether the inhabitants were capable of starting fire or simply scavenged it from naturally occurring wildfires. Indirect evidence of fire-starting ability includes pyrite residue on a couple of dozen bifaces from the late Mousterian (c. 50,000 years ago) of northwestern France (which could indicate they were used as percussion fire starters), and the collection by late Neanderthals of manganese dioxide, which can lower the combustion temperature of wood. They were also capable of zoning areas for specific activities, such as knapping, butchering, hearths and wood storage. Many Neanderthal sites lack evidence for such activity, perhaps due to natural degradation of the area over tens of thousands of years, such as by bear infiltration after abandonment of the settlement.

In a number of caves, evidence of hearths has been detected. Neanderthals likely considered air circulation when making hearths, as a lack of proper ventilation for even a single hearth can render a cave uninhabitable within minutes. At the Abric Romaní rock shelter, Spain, eight evenly spaced hearths were lined up against the rock wall, likely used to stay warm while sleeping, with one person lying on either side of each fire. At Cueva de Bolomor, Spain, with the hearths lined up against the wall, the smoke flowed up to the ceiling and out of the cave. In Grotte du Lazaret, France, smoke was probably naturally ventilated during the winter, as the interior cave temperature was higher than the outside temperature; accordingly, the cave was likely only inhabited in the winter.

In 1990, two 176,000-year-old ring structures, several metres wide and made of broken stalagmite pieces, were discovered in a large chamber deep within Grotte de Bruniquel, France, far from the entrance. One ring was larger than the other, and there were also four additional piles of stalagmite pieces, amounting to a substantial total quantity of material. Evidence of fire use and burnt bones also suggests human activity. A team of Neanderthals was likely necessary to construct the structures, but the chamber's actual purpose is uncertain. Building complex structures so deep in a cave is unprecedented in the archaeological record, and indicates sophisticated lighting and construction technology, and great familiarity with subterranean environments.

The 44,000-year-old Moldova I open-air site, Ukraine, shows evidence of a ring-shaped dwelling made out of mammoth bones, meant for long-term habitation by several Neanderthals, which would have taken a long time to build. It appears to have contained hearths, cooking areas and a flint workshop, and there are traces of woodworking.
Upper Palaeolithic modern humans in the Russian plains are thought to have also made housing structures out of mammoth bones.

Birch tar

Neanderthals produced birch bark tar, an adhesive used for hafting, from the bark of birch trees. It was long believed that making birch bark tar required a complex recipe to be followed, and that it thus demonstrated complex cognitive skills and cultural transmission. However, a 2019 study showed that it can be made simply by burning birch bark beside smooth vertical surfaces, such as a flat, inclined rock; tar making therefore does not require cultural transmission per se. However, at Königsaue, Germany, Neanderthals did not make tar with such an aboveground method but rather employed a technically more demanding underground production method. This is one of the best indicators that some of their techniques were passed on by cultural processes.

Clothes

Neanderthals were likely able to survive a similar range of sleeping temperatures to modern humans, whether naked in the open with wind exposure or naked in an enclosed space. Since ambient temperatures during the Eemian interglacial were markedly lower than this range, even in July, and dropped much further in January and on the coldest days, Danish physicist Bent Sørensen hypothesised that Neanderthals required tailored clothing capable of preventing airflow to the skin. Especially during extended periods of travelling (such as a hunting trip), tailored footwear completely enwrapping the feet may have been necessary. Nonetheless, as opposed to the bone sewing-needles and stitching awls assumed to have been in use by contemporary modern humans, the only known Neanderthal tools that could have been used to fashion clothes are hide scrapers, which could have produced items similar to blankets or ponchos, and there is no direct evidence that Neanderthals could produce fitted clothes. Indirect evidence of tailoring includes the ability to manufacture string, which could indicate weaving ability, and a naturally pointed horse metatarsal bone from Cueva de los Aviones, Spain, speculated to have been used as an awl for perforating dyed hides, based on the presence of orange pigments. Whatever the case, Neanderthals would have needed to cover most of their body, while contemporary modern humans are estimated to have covered 80–90% of theirs. Since human/Neanderthal admixture is known to have occurred in the Middle East, and no modern body louse species descends from a Neanderthal counterpart (body lice only inhabit clothed individuals), it is possible that Neanderthals (and/or humans) in hotter climates did not wear clothes, or that Neanderthal lice were highly specialised.

Seafaring

Remains of Middle Palaeolithic stone tools on Greek islands indicate early seafaring by Neanderthals in the Ionian Sea, possibly starting as far back as 200,000–150,000 years ago. The oldest stone artefacts from Crete date to 130,000–107,000 years ago, those from Cephalonia to 125,000 years ago, and those from Zakynthos to 110,000–35,000 years ago. The makers of these artefacts likely employed simple reed boats and made one-day crossings back and forth. Other Mediterranean islands with such remains include Sardinia, Melos, Alonnisos and Naxos (although Naxos may have been connected to land), and it is possible they crossed the Strait of Gibraltar. If this interpretation is correct, Neanderthals' ability to engineer boats and navigate open waters would speak to their advanced cognitive and technical skills.
Medicine

Given their dangerous hunting practices and the extensive skeletal evidence of healing, Neanderthals appear to have lived lives of frequent traumatic injury and recovery. Well-healed fractures on many bones indicate the setting of splints. Individuals with severe head and rib traumas (which would have caused massive blood loss) indicate they had some manner of dressing major wounds, such as bandages made from animal skin. By and large, they appear to have avoided severe infections, indicating good long-term treatment of such wounds.

Their knowledge of medicinal plants was comparable to that of contemporary humans. An individual at Cueva del Sidrón, Spain, seems to have been medicating a dental abscess using poplar—which contains salicylic acid, the active ingredient in aspirin—and there were also traces of the antibiotic-producing mould Penicillium chrysogenum. They may also have used yarrow and camomile; the bitter taste of these plants—which should act as a deterrent, as it could indicate poison—suggests their use was a deliberate act. At Kebara Cave in Israel, plant remains which have historically been used for their medicinal properties were found, including the common grape vine, the pistachios of the Persian turpentine tree, ervil seeds and oak acorns.

Language

The degree of language complexity is difficult to establish, but given that Neanderthals achieved some technical and cultural complexity and interbred with humans, it is reasonable to assume they were at least fairly articulate, comparable to modern humans. Some researchers have argued that a somewhat complex language—possibly using syntax—was likely necessary to survive in their harsh environment, with Neanderthals needing to communicate about topics such as locations, hunting and gathering, and tool-making techniques.

The FOXP2 gene in modern humans is associated with speech and language development. FOXP2 was present in Neanderthals, but not the gene's modern human variant. Neurologically, Neanderthals had an expanded Broca's area—which governs the formulation of sentences and speech comprehension—but out of a group of 48 genes believed to affect the neural substrate of language, 11 had different methylation patterns in Neanderthals and modern humans. This could indicate a stronger ability to express language in modern humans than in Neanderthals.

In 1971, cognitive scientist Philip Lieberman attempted to reconstruct the Neanderthal vocal tract and concluded that it was similar to that of a newborn and incapable of producing a large range of speech sounds, due to the large size of the mouth and the small size of the pharyngeal cavity (according to his reconstruction), which would have left no need for a descended larynx to fit the entire tongue inside the mouth. He claimed that Neanderthals were anatomically unable to produce the sounds /a/, /i/, /u/, /ɔ/, /g/ and /k/ and thus lacked the capacity for articulate speech, though they were still able to speak at a level higher than non-human primates. However, the lack of a descended larynx does not necessarily equate to a reduced vowel capacity. The 1983 discovery in Kebara 2 of a Neanderthal hyoid bone—a bone used in speech production in humans—that is almost identical to that of modern humans suggests Neanderthals were capable of speech. Also, the ancestral Sima de los Huesos hominins had humanlike hyoid and ear bones, which could suggest an early evolution of the modern human vocal apparatus. However, the hyoid alone does not definitively provide insight into vocal tract anatomy.
Subsequent studies have reconstructed the Neanderthal vocal apparatus as comparable to that of modern humans, with a similar vocal repertoire. In 2015, Lieberman hypothesised that Neanderthals were capable of syntactical language, although nonetheless incapable of mastering any human dialect. It is debated whether behavioural modernity is a recent and uniquely modern human innovation, or whether Neanderthals also possessed it.

Religion

Funerals

Although Neanderthals did bury their dead, at least occasionally—which may explain the abundance of fossil remains—the behaviour is not indicative of a religious belief in life after death, because it could also have had non-symbolic motivations, such as great emotion or the prevention of scavenging. Estimates of the number of known Neanderthal burials range from thirty-six to sixty. The oldest confirmed burials do not seem to occur before approximately 70,000 years ago. The small number of recorded Neanderthal burials implies that the activity was not particularly common. Inhumation in Neanderthal culture largely consisted of simple, shallow graves and pits. Sites such as La Ferrassie in France or Shanidar in Iraq, given the number of individuals found buried at them, may imply the existence of mortuary centres or cemeteries in Neanderthal culture.

The debate on Neanderthal funerals has been active since the 1908 discovery of La Chapelle-aux-Saints 1 in a small, artificial hole in a cave in southwestern France, very controversially postulated to have been buried in a symbolic fashion. Another grave, at Shanidar Cave, Iraq, was associated with the pollen of several flowers that may have been in bloom at the time of deposition—yarrow, centaury, ragwort, grape hyacinth, joint pine and hollyhock. The medicinal properties of the plants led American archaeologist Ralph Solecki to claim that the man buried was some leader, healer or shaman, and that "The association of flowers with Neanderthals adds a whole new dimension to our knowledge of his humanness, indicating that he had 'soul'". However, it is also possible the pollen was deposited by a small rodent after the man's death.

The graves of children and infants, especially, are associated with grave goods such as artefacts and bones. The grave of a newborn from La Ferrassie, France, contained three flint scrapers, and an infant from a cave in Syria was found with a triangular flint placed on its chest. A 10-month-old from Amud Cave, Israel, was associated with a red deer mandible, likely purposefully placed there given that other animal remains at the site are reduced to fragments. Teshik-Tash 1 from Uzbekistan was associated with a circle of ibex horns and a limestone slab argued to have supported the head. A child from Kiik-Koba, Crimea, Ukraine, had a flint flake with some purposeful engraving on it, likely requiring a great deal of skill. Nonetheless, whether these finds constitute evidence of symbolic meaning is contentious, as the significance and worth of the grave goods are unclear.

Cults

It was once argued that the bones of the cave bear, particularly the skull, in some European caves were arranged in a specific order, indicating an ancient bear cult that killed bears and then ceremoniously arranged the bones. This would be consistent with the bear-related rituals of modern human Arctic hunter-gatherers.
However, the alleged peculiarity of the arrangement could also be sufficiently explained by natural causes, and bias could be introduced because the existence of a bear cult would conform with the idea that totemism was the earliest religion, leading to undue extrapolation of the evidence. It was also once thought that Neanderthals ritually hunted, killed and cannibalised other Neanderthals and used the skull as the focus of some ceremony. In 1962, Italian palaeontologist Alberto Blanc believed a skull from Grotta Guattari, Italy, had evidence of a swift blow to the head—indicative of ritual murder—and a precise and deliberate incision at the base to access the brain. He compared it to the victims of headhunters in Malaysia and Borneo, putting it forward as evidence of a skull cult. However, the damage is now thought to have been a result of cave hyena scavenging. Although Neanderthals are known to have practised cannibalism, there is little substantive evidence to suggest ritual defleshing.

In 2019, Gibraltarian palaeoanthropologists Stewart, Geraldine and Clive Finlayson and Spanish archaeologist Francisco Guzmán speculated that the golden eagle had iconic value to Neanderthals, as it does in some modern human societies, because they reported that golden eagle bones show a conspicuously high rate of modification compared to the bones of other birds. They proposed a "Cult of the Sun Bird", in which the golden eagle was a symbol of power. Evidence from Krapina, Croatia—use wear and even remnants of string—suggests that raptor talons were worn as personal ornaments.

Interbreeding

Interbreeding with modern humans

The first Neanderthal genome sequence was published in 2010, and strongly indicated interbreeding between Neanderthals and early modern humans. The genomes of all studied modern populations contain Neanderthal DNA. Various estimates exist for the proportion, such as 1–4% or 3.4–7.9% in modern Eurasians, or 1.8–2.4% in modern Europeans and 2.3–2.6% in modern East Asians. Pre-agricultural Europeans appear to have had percentages similar to, or slightly higher than, those of modern East Asians; the numbers may have decreased in Europeans due to dilution by a group of people which had split off before Neanderthal introgression. Studies have typically reported no significant levels of Neanderthal DNA in Sub-Saharan Africans, but a 2020 study detected 0.3–0.5% in the genomes of five African sample populations, likely the result of Eurasians back-migrating and interbreeding with Africans, as well as of human-to-Neanderthal gene flow from dispersals of Homo sapiens preceding the larger Out-of-Africa migration; the same study also reported more nearly equal Neanderthal DNA percentages for European and Asian populations. Such low percentages of Neanderthal DNA in all present-day populations indicate infrequent past interbreeding, unless interbreeding was more common with a different population of modern humans which did not contribute to the present-day gene pool. Of the inherited Neanderthal genome, 25% in modern Europeans and 32% in modern East Asians may be related to viral immunity. In all, approximately 20% of the Neanderthal genome appears to have survived in the modern human gene pool.
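The difference between the roughly 2% of Neanderthal ancestry carried by any one person and the roughly 20% of the Neanderthal genome surviving collectively can be illustrated with a toy simulation: different people carry different archaic segments, so the union across many genomes is much larger than any individual's share. The sketch below is only an illustration with assumed parameters, not the published estimation method.

```python
# Toy illustration (not the published method): why ~2% archaic ancestry per person
# can add up to a much larger share of the Neanderthal genome across many people.
import random

GENOME_BLOCKS = 50_000       # genome modelled as equal-sized blocks (assumption)
PER_PERSON_FRACTION = 0.02   # ~2% Neanderthal ancestry per individual

def union_coverage(n_people: int, seed: int = 1) -> float:
    """Fraction of blocks carried by at least one person in the sample."""
    rng = random.Random(seed)
    blocks_per_person = int(GENOME_BLOCKS * PER_PERSON_FRACTION)
    covered = set()
    for _ in range(n_people):
        covered.update(rng.sample(range(GENOME_BLOCKS), blocks_per_person))
    return len(covered) / GENOME_BLOCKS

for n in (1, 10, 100, 1000):
    print(f"{n:>4} sampled genomes -> union covers {union_coverage(n):.1%} of blocks")
```

In this idealised model the union keeps growing with sample size; in reality it levels off around 20%, because introgressed segments are strongly correlated across people and much of the Neanderthal genome was purged by selection or never entered the modern gene pool.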
However, due to their small population and resulting reduced effectivity of natural selection, Neanderthals accumulated several weakly harmful mutations, which were introduced to and slowly selected out of the much larger modern human population; the initial hybridised population may have experienced up to a 94% reduction in fitness compared to contemporary humans. By this measure, Neanderthals may have substantially increased in fitness. A 2017 study focusing on archaic genes in Turkey found associations with coeliac disease, malaria severity and Costello syndrome. Nonetheless, some genes may have helped modern East Asians adapt to the environment; the putatively Neanderthal Val92Met variant of the MC1R gene, which may be weakly associated with red hair and UV radiation sensitivity, is primarily found in East Asian, rather than European, individuals. Some genes related to the immune system appear to have been affected by introgression, which may have aided migration, such as OAS1, STAT2, TLR6, TLR1, TLR10, and several related to immune response. In addition, Neanderthal genes have also been implicated in the structure and function of the brain, keratin filaments, sugar metabolism, muscle contraction, body fat distribution, enamel thickness and oocyte meiosis. Nonetheless, a large portion of surviving introgression appears to be non-coding ("junk") DNA with few biological functions. There is considerably less Neanderthal ancestry on the X-chromosome as compared to the autosomal chromosomes. This has led to suggestions that admixture with modern humans was sex biased, and primarily the result of mating between modern human females and Neanderthal males. Other authors have suggested that this may be due to negative selection against Neanderthal alleles, however these two proposals are not mutually exclusive. A 2023 study confirmed that the low level of Neanderthal ancestry on the X-chromosomes is best explained by sex bias in the admixture events, and these authors also found evidence for negative selection on archaic genes. Neanderthal mtDNA (which is passed on from mother to child) is absent in modern humans. This is evidence that interbreeding occurred mainly between Neanderthal males and modern human females. According to Svante Pääbo, it is not clear that modern humans were socially dominant over Neanderthals, which may explain why the interbreeding occurred primarily between Neanderthal males and modern human females. Furthermore, even if Neanderthal women and modern human males had interbred, Neanderthal mtDNA lineages may have gone extinct if women who carried them only gave birth to sons. Due to the lack of Neanderthal-derived Y-chromosomes in modern humans (which is passed on from father to son), it has also been suggested that the hybrids that contributed ancestry to modern populations were predominantly females, or the Neanderthal Y-chromosome was not compatible with H. sapiens and became extinct. According to linkage disequilibrium mapping, the last Neanderthal gene flow into the modern human genome occurred 86–37,000 years ago, but most likely 65–47,000 years ago. It is thought that Neanderthal genes which contributed to the present day human genome stemmed from interbreeding in the Near East rather than the entirety of Europe. However, interbreeding still occurred without contributing to the modern genome. 
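The intuition behind dating gene flow from linkage disequilibrium can be sketched roughly as follows; this is an illustration of the underlying logic rather than the specific method of the study cited above, and the 29-year generation time and ~1 cM per Mb recombination rate are assumptions. Each generation, recombination breaks archaic ancestry tracts at about one crossover per morgan, so after g generations the surviving tracts are expected to be on the order of 1/g morgans long, meaning that shorter tracts imply older admixture.

```python
# Sketch of the logic behind dating admixture from introgressed segment lengths.
# Assumptions: 29-year generation time, ~1 cM per Mb average recombination rate.

GENERATION_YEARS = 29
CM_PER_MB = 1.0

def expected_tract_length_kb(years_since_admixture: float) -> float:
    """Expected surviving tract length in kilobases: ~1/g morgans = 100/g cM
    after g generations of recombination."""
    generations = years_since_admixture / GENERATION_YEARS
    length_cm = 100.0 / generations
    return length_cm / CM_PER_MB * 1000  # cM -> Mb -> kb

for years in (37_000, 47_000, 65_000, 86_000):
    print(f"{years:>6} years ago -> expected tracts of ~{expected_tract_length_kb(years):.0f} kb")
```

In practice the inference runs in the opposite direction: the observed distribution of Neanderthal segment lengths, typically tens of kilobases, is fitted to recover the admixture date, which is how ranges such as 86,000–37,000 years ago are obtained.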
The approximately 40,000-year-old modern human Oase 2 was found, in 2015, to have had 6–9% (point estimate 7.3%) Neanderthal DNA, indicating a Neanderthal ancestor four to six generations earlier, but this hybrid population does not appear to have made a substantial contribution to the genomes of later Europeans. In 2016, the DNA of Neanderthals from Denisova Cave revealed evidence of interbreeding 100,000 years ago, and interbreeding with an earlier dispersal of H. sapiens may have occurred as early as 120,000 years ago in places such as the Levant. The earliest H. sapiens remains outside of Africa occur at Misliya Cave, 194,000–177,000 years ago, and at Skhul and Qafzeh, 120,000–90,000 years ago. The Qafzeh humans lived at approximately the same time as the Neanderthals from the nearby Tabun Cave. The Neanderthals of the German Hohlenstein-Stadel have deeply divergent mtDNA compared to more recent Neanderthals, possibly due to introgression of human mtDNA between 316,000 and 219,000 years ago, or simply because they were genetically isolated. Whatever the case, these first interbreeding events have not left any trace in modern human genomes.

Genetic evidence suggests that, following their split from Denisovans, Neanderthals experienced gene flow (amounting to around 3% of their genome) from the lineage leading to modern humans prior to the expansion of modern humans outside of Africa during the Last Glacial Period, with this interbreeding suggested to have taken place around 200,000–300,000 years ago. Detractors of the interbreeding model argue that the genetic similarity is only a remnant of a common ancestor rather than of interbreeding, although this is unlikely, as it fails to explain why sub-Saharan Africans carry little to no Neanderthal DNA.

Interbreeding with Denisovans

Although nDNA confirms that Neanderthals and Denisovans are more closely related to each other than they are to modern humans, Neanderthals and modern humans share a more recent maternally transmitted mtDNA common ancestor, possibly due to interbreeding between Denisovans and some unknown human species. Based on mtDNA, the 400,000-year-old Neanderthal-like humans from Sima de los Huesos in northern Spain are more closely related to Denisovans than to Neanderthals. Several Neanderthal-like fossils in Eurasia from a similar time period are often grouped into H. heidelbergensis, of which some may be relict populations of earlier humans which could have interbred with Denisovans. This is also used to explain an approximately 124,000-year-old German Neanderthal specimen with mtDNA that diverged from that of other Neanderthals (except for Sima de los Huesos) about 270,000 years ago, while its genomic DNA indicated a divergence of less than 150,000 years ago.

Sequencing of the genome of a Denisovan from Denisova Cave has shown that 17% of its genome derives from Neanderthals. This Neanderthal DNA more closely resembled that of a 120,000-year-old Neanderthal bone from the same cave than that of Neanderthals from Vindija Cave, Croatia, or Mezmaiskaya Cave in the Caucasus, suggesting that interbreeding was local. The 90,000-year-old Denisova 11 was found to have had a Denisovan father, related to more recent inhabitants of the region, and a Neanderthal mother, related to more recent European Neanderthals at Vindija Cave, Croatia.
Given how few Denisovan bones are known, the discovery of a first-generation hybrid indicates interbreeding was very common between these species, and Neanderthal migration across Eurasia likely occurred sometime after 120,000 years ago.
Extinction
Transition
The extinction of Neanderthals was part of the broader Late Pleistocene megafaunal extinction event. Whatever the cause of their extinction, Neanderthals were replaced by modern humans, indicated by the near-full replacement of Middle Palaeolithic Mousterian stone technology with modern human Upper Palaeolithic Aurignacian stone technology across Europe (the Middle-to-Upper Palaeolithic Transition) from 41,000 to 39,000 years ago. By 44,200–40,600 BP, Neanderthals had vanished from northwestern Europe. However, it is postulated that Iberian Neanderthals persisted until about 35,000 years ago, as indicated by the date range of transitional lithic assemblages—Châtelperronian, Uluzzian, Protoaurignacian and Early Aurignacian. The latter two are attributed to modern humans, but the former two have unconfirmed authorship, potentially products of Neanderthal/modern human cohabitation and cultural transmission. Further, the appearance of the Aurignacian south of the Ebro River has been dated to roughly 37,500 years ago, prompting the "Ebro Frontier" hypothesis, which holds that the river presented a geographic barrier preventing modern human immigration and thus prolonged Neanderthal persistence. However, the dating of the Iberian Transition is debated, with a contested timing of 43,000–40,800 years ago at Cueva Bajondillo, Spain. The Châtelperronian appears in northeastern Iberia about 42,500–41,600 years ago. Some Neanderthals in southern Iberia were dated to much later than this—such as at Zafarraya (30,000 years ago) and Gorham's Cave, Gibraltar (28,000 years ago)—but these dates may be inaccurate as they were based on ambiguous artefacts instead of direct dating. A claim of Neanderthals surviving in a polar refuge in the Ural Mountains is loosely supported by Mousterian stone tools dating to 34,000 years ago from the northern Siberian Byzovaya site, at a time when modern humans may not yet have colonised the northern reaches of Europe; however, modern human remains are known from the nearby Mamontovaya Kurya site dating to 40,000 years ago. Indirect dating of Neanderthal remains from Mezmaiskaya Cave reported a date of about 30,000 years ago, but direct dating instead yielded 39,700 ±1,100 years ago, more in line with trends exhibited in the rest of Europe. The earliest indication of Upper Palaeolithic modern human immigration into Europe is a series of modern human teeth with Neronian industry stone tools found at Mandrin Cave, Malataverne in France, dated in 2022 to between 56,800 and 51,700 years ago. The earliest modern human bones in Europe date to roughly 45–43,000 years ago, in Bulgaria, Italy, and Britain. This wave of modern humans replaced Neanderthals. However, Neanderthals and H. sapiens have a much longer contact history: DNA evidence indicates H. sapiens contact with Neanderthals and admixture as early as 120–100,000 years ago. A 2019 reanalysis of 210,000-year-old skull fragments from the Greek Apidima Cave, previously assumed to have belonged to a Neanderthal, concluded that they belonged to a modern human; together with a Neanderthal skull from the same cave dating to 170,000 years ago, this would indicate that these early H. sapiens were replaced by Neanderthals and only returned about 40,000 years ago. This identification was, however, refuted by a 2020 study.
Archaeological evidence suggests that Neanderthals displaced modern humans in the Near East around 100,000 years ago until about 60–50,000 years ago.
Cause
Modern humans
Historically, modern human technology was viewed as vastly superior to that of Neanderthals, with more efficient weaponry and subsistence strategies, and Neanderthals simply went extinct because they could not compete. The discovery of Neanderthal/modern human introgression has caused the resurgence of the multiregional hypothesis, wherein the present day genetic makeup of all humans is the result of complex genetic contact among several different populations of humans dispersed across the world. By this model, Neanderthals and other recent archaic humans were simply assimilated into the modern human genome – that is, they were effectively bred out into extinction. Modern humans coexisted with Neanderthals in Europe for around 3,000 to 5,000 years.
Climate change
Their ultimate extinction coincides with Heinrich event 4, a period of intense seasonality; later Heinrich events are also associated with massive cultural turnovers when European human populations collapsed. This climate change may have depopulated several regions of Neanderthals, like previous cold spikes, but these areas were instead repopulated by immigrating humans, leading to Neanderthal extinction. In southern Iberia, there is evidence that Neanderthal populations declined during H4 and the associated proliferation of Artemisia-dominated desert-steppes. It has also been proposed that climate change was the primary driver, as their low population left them vulnerable to any environmental change, with even a small drop in survival or fertility rates possibly quickly leading to their extinction. However, Neanderthals and their ancestors had survived through several glacial periods over their hundreds of thousands of years of European habitation. It is also proposed that around 40,000 years ago, when Neanderthal populations may have already been dwindling from other factors, the Campanian Ignimbrite Eruption in Italy could have led to their final demise, as it produced cooling for a year and acid rain for several more years.
Disease
Modern humans may have introduced African diseases to Neanderthals, contributing to their extinction. A lack of immunity, compounded by an already low population, was potentially devastating to the Neanderthal population, and low genetic diversity could have also rendered fewer Neanderthals naturally immune to these new diseases ("differential pathogen resistance" hypothesis). However, compared to modern humans, Neanderthals had a similar or higher genetic diversity for 12 major histocompatibility complex (MHC) genes associated with the adaptive immune system, casting doubt on this model. Low population and inbreeding depression may have caused maladaptive birth defects, which could have contributed to their decline (mutational meltdown). In late-20th-century New Guinea, due to cannibalistic funerary practices, the Fore people were decimated by transmissible spongiform encephalopathies, specifically kuru, a highly virulent disease spread by ingestion of prions found in brain tissue. However, individuals with the 129 variant of the PRNP gene were naturally immune to the prions. Studying this gene led to the discovery that the 129 variant was widespread among all modern humans, which could indicate widespread cannibalism at some point in human prehistory.
Because Neanderthals are known to have practised cannibalism to an extent and to have co-existed with modern humans, British palaeoanthropologist Simon Underdown speculated that modern humans transmitted a kuru-like spongiform disease to Neanderthals, and, because the 129 variant appears to have been absent in Neanderthals, it quickly killed them off.
In popular culture
Neanderthals have been portrayed in popular culture, including appearances in literature, visual media and comedy. The "caveman" archetype often mocks Neanderthals and depicts them as primitive, hunchbacked, knuckle-dragging, club-wielding, grunting, nonsocial characters driven solely by animal instinct. "Neanderthal" can also be used as an insult. In literature, they are sometimes depicted as brutish or monstrous, such as in H. G. Wells' The Grisly Folk and Elizabeth Marshall Thomas' The Animal Wife, but sometimes with a civilised but unfamiliar culture, as in William Golding's The Inheritors, Björn Kurtén's Dance of the Tiger, and Jean M. Auel's Clan of the Cave Bear and her Earth's Children series.
See also
Early human migrations
External links
Proof that Neanderthals ate crabs is another 'nail in the coffin' for primitive cave dweller stereotypes – Phys.org, February 7, 2023
Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016)
GenBank records for H. s. neanderthalensis maintained by the National Center for Biotechnology Information (NCBI); includes Neanderthal mtDNA sequences
Temporality
In philosophy, temporality refers to the idea of a linear progression of past, present, and future. The term is frequently used, however, in the context of critiques of commonly held ideas of linear time. In the social sciences, temporality is studied with respect to the human perception of time and the social organization of time. The perception of time in Western thought underwent significant changes in the three hundred years between the Middle Ages and modernity. Examples in continental philosophy of work raising questions of temporality include Edmund Husserl's analysis of internal time consciousness, Martin Heidegger's Being and Time, J. M. E. McTaggart's article "The Unreality of Time", George Herbert Mead's Philosophy of the Present, and Jacques Derrida's criticisms of Husserl's analysis. Temporality has been described as "deeply intertwined with the rhetorical act of harnessing and subverting power in the unfolding struggle for justice." Temporalities, particularly in European settler colonialism, have been examined in critical theory as a tool both for the subjugation and oppression of Indigenous communities and for Native resistance to that oppression.
Temporal turn
In historiography, questioning periodization, and as a further development after the spatial turn, the social sciences have started re-investigating time and its different social understandings. Social science after the temporal turn investigates different understandings of time at different times and locations, giving rise to concepts such as timespace, in which time and space are thought together.
See also
Futures studies
Historicity (philosophy)
Impermanence
Linear temporal logic
Philosophy of space and time
Time series
Vanitas
External links
CEITT – Time and Temporality Research Center
Industrial society
In sociology, an industrial society is a society driven by the use of technology and machinery to enable mass production, supporting a large population with a high capacity for division of labour. Such a structure developed in the Western world in the period of time following the Industrial Revolution, and replaced the agrarian societies of the pre-modern, pre-industrial age. Industrial societies are generally mass societies, and may be succeeded by an information society. They are often contrasted with traditional societies. Industrial societies use external energy sources, such as fossil fuels, to increase the rate and scale of production. The production of food is shifted to large commercial farms where the products of industry, such as combine harvesters and fossil fuel-based fertilizers, are used to decrease required human labor while increasing production. No longer needed for the production of food, excess labor is moved into these factories where mechanization is utilized to further increase efficiency. As populations grow, and mechanization is further refined, often to the level of automation, many workers shift to expanding service industries. Industrial society makes urbanization desirable, in part so that workers can be closer to centers of production, and the service industry can provide labor to workers and those that benefit financially from them, in exchange for a piece of production profits with which they can buy goods. This leads to the rise of very large cities and surrounding suburb areas with a high rate of economic activity. These urban centers require the input of external energy sources in order to overcome the diminishing returns of agricultural consolidation, due partially to the lack of nearby arable land, associated transportation and storage costs, and are otherwise unsustainable. This makes the reliable availability of the needed energy resources high priority in industrial government policies. Industrial development Prior to the Industrial Revolution in Europe and North America, followed by further industrialization throughout the world in the 20th century, most economies were largely agrarian. Basics were often made within the household and most other manufacturing was carried out in smaller workshops by artisans with limited specialization or machinery. In Europe during the late Middle Ages, artisans in many towns formed guilds to self-regulate their trades and collectively pursue their business interests. Economic historian Sheilagh Ogilvie has suggested the guilds further restrained the quality and productivity of manufacturing. There is some evidence, however, that even in ancient times, large economies such as the Roman empire or Chinese Han dynasty had developed factories for more centralized production in certain industries. With the Industrial Revolution, the manufacturing sector became a major part of European and North American economies, both in terms of labor and production, contributing possibly a third of all economic activity. Along with rapid advances in technology, such as steam power and mass steel production, the new manufacturing drastically reconfigured previously mercantile and feudal economies. Even today, industrial manufacturing is significant to many developed and semi-developed economies. Deindustrialisation Historically certain manufacturing industries have gone into a decline due to various economic factors, including the development of replacement technology or the loss of competitive advantage. 
An example of the former is the decline in carriage manufacturing when the automobile was mass-produced. A recent trend has been the migration of prosperous, industrialized nations towards a post-industrial society. This has come with a major shift in labor and production away from manufacturing and towards the service sector, a process dubbed tertiarization. Additionally, since the late 20th century, rapid changes in communication and information technology (sometimes called an information revolution) have allowed sections of some economies to specialize in a quaternary sector of knowledge and information-based services. For these and other reasons, in a post-industrial society, manufacturers can and often do relocate their industrial operations to lower-cost regions in a process known as off-shoring. Measurements of manufacturing industries outputs and economic effect are not historically stable. Traditionally, success has been measured in the number of jobs created. The reduced number of employees in the manufacturing sector has been assumed to result from a decline in the competitiveness of the sector, or the introduction of the lean manufacturing process. Related to this change is the upgrading of the quality of the product being manufactured. While it is possible to produce a low-technology product with low-skill labour, the ability to manufacture high-technology products well is dependent on a highly skilled staff. Industrial policy Today, as industry is an important part of most societies and nations, many governments will have at least some role in planning and regulating industry. This can include issues such as industrial pollution, financing, vocational education, and labour law. Industrial labour In an industrial society, industry employs a major part of the population. This occurs typically in the manufacturing sector. A labour union is an organization of workers who have banded together to achieve common goals in key areas such as wages, hours, and other working conditions. The trade union, through its leadership, bargains with the employer on behalf of union members (rank and file members) and negotiates labour contracts with employers. This movement first rose among industrial workers. Effects on slavery Ancient Mediterranean cultures relied on slavery throughout their economy. While serfdom largely supplanted the practice in Europe during the Middle Ages, several European powers reintroduced slavery extensively in the early modern period, particularly for the harshest labor in their colonies. The Industrial revolution played a central role in the later abolition of slavery, partly because domestic manufacturing's new economic dominance undercut interests in the slave trade. Additionally, the new industrial methods required a complex division of labor with less worker supervision, which may have been incompatible with forced labor. War The Industrial Revolution changed warfare, with mass-produced weaponry and supplies, machine-powered transportation, mobilization, the total war concept and weapons of mass destruction. Early instances of industrial warfare were the Crimean War and the American Civil War, but its full potential showed during the world wars. See also military-industrial complex, arms industries, military industry and modern warfare. 
Use in 20th century social science and politics “Industrial society” took on a more specific meaning after World War II in the context of the Cold War, the internationalization of sociology through organizations like UNESCO, and the spread of American industrial relations to Europe. The cementation of the Soviet Union’s position as a world power inspired reflection on whether the sociological association of highly-developed industrial economies with capitalism required updating. The transformation of capitalist societies in Europe and the United States to state-managed, regulated welfare capitalism, often with significant sectors of nationalized industry, also contributed to the impression that they might be evolving beyond capitalism, or toward some sort of “convergence” common to all “types” of industrial societies, whether capitalist or communist. State management, automation, bureaucracy, institutionalized collective bargaining, and the rise of the tertiary sector were taken as common markers of industrial society. The “industrial society” paradigm of the 1950s and 1960s was strongly marked by the unprecedented economic growth in Europe and the United States after World War II, and drew heavily on the work of economists like Colin Clark, John Kenneth Galbraith, W.W. Rostow, and Jean Fourastié. The fusion of sociology with development economics gave the industrial society paradigm strong resemblances to modernization theory, which achieved major influence in social science in the context of postwar decolonization and the development of post-colonial states. The French sociologist Raymond Aron, who gave the most developed definition to the concept of “industrial society” in the 1950s, used the term as a comparative method to identify common features of the Western capitalist and Soviet-style communist societies. Other sociologists, including Daniel Bell, Reinhard Bendix, Ralf Dahrendorf, Georges Friedmann, Seymour Martin Lipset, and Alain Touraine, used similar ideas in their own work, though with sometimes very different definitions and emphases. The principal notions of industrial-society theory were also commonly expressed in the ideas of reformists in European social-democratic parties who advocated a turn away from Marxism and an end to revolutionary politics. Because of its association with non-Marxist modernization theory and American anticommunist organizations like the Congress for Cultural Freedom, “industrial society” theory was often criticized by left-wing sociologists and Communists as a liberal ideology that aimed to justify the postwar status quo and undermine opposition to capitalism. However, some left-wing thinkers like André Gorz, Serge Mallet, Herbert Marcuse, and the Frankfurt School used aspects of industrial society theory in their critiques of capitalism. Selected bibliography of industrial society theory Adorno, Theodor. "Late Capitalism or Industrial Society?" (1968) Aron, Raymond. Dix-huit leçons sur la société industrielle. Paris: Gallimard, 1961. Aron, Raymond. La lutte des classes: nouvelles leçons sur les sociétés industrielles. Paris: Gallimard, 1964. Bell, Daniel. The End of Ideology: On the Exhaustion of Political Ideas in the Fifties. New York: Free Press, 1960. Dahrendorf, Ralf. Class and Class Conflict in Industrial Society. Stanford: Stanford University Press, 1959. Gorz, André. Stratégie ouvrière et néo-capitalisme. Paris: Seuil, 1964. Friedmann, Georges. Le Travail en miettes. Paris: Gallimard, 1956. Kaczynski, Theodore J. 
"Industrial Society and its Future". Berkeley, CA: Jolly Roger Press, 1995. Kerr, Clark, et al. Industrialism and Industrial Man. Oxford: Oxford University Press, 1960. Lipset, Seymour Martin. Political Man: The Social Bases of Politics. Garden City, NJ: Doubleday, 1959. Marcuse, Herbert. One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society. Boston: Beacon Press, 1964. Touraine, Alain. Sociologie de l'action. Paris: Seuil, 1965. See also Developed country Food industry Industrialization North–South divide Post-industrial society Western world Industrial Revolution Newly industrialized country Mechanization Dystopian future References Sociological theories Secondary sector of the economy Society Society Stages of history
Indigenization
Indigenization is the act of making something more indigenous; the transformation of a service, idea, etc. to suit a local culture, especially through the use of more indigenous people in public administration, employment and other fields. The term is primarily used by anthropologists to describe what happens when locals take something from the outside and make it their own (such as Africanization or Americanization).
History
History of the word
The first use of the word indigenization recorded by the OED is in a 1951 paper about studies of Christian missionaries conducted in India. The word was used to describe the process of making churches indigenous in southern India. It was used in The Economist in 1962 to describe managerial positions and in the 1971 book English Language in West Africa by John Spencer, where it described the adoption of English. Indigenization is often used to describe the adoption of colonial culture in Africa because of the effects of European colonialism in the 19th and early 20th centuries.
History of the use
Throughout history, the process of making something indigenous has taken different forms. Other words that describe similar processes of making something local are Africanization, localization, glocalization, and Americanization. However, those terms describe specific cases of the process of making something indigenous. They may be rejected in favor of the more general term indigenization because the others may have too narrow a scope. For example, Christianization was a form of indigenization, converting areas and groups to follow Christianity.
Types
Linguistics
In this context, indigenization refers to how a language is adopted in a certain area, such as French in Africa. One case in which a language needed to be indigenized was in Africa, where the ex-colonizer's language required references to African religion and culture even though the original language had no vocabulary for them. As this process is carried out, a metalanguage is usually created that combines the original language and the introduced language. This language shares cultural aspects of both cultures, making it distinct, and is usually created in order to understand the foreign language in the context of the local region. Sometimes the term indigenization is preferred over other terms such as Africanization because it carries no negative connotations and does not imply any underlying meaning.
Economy
Indigenization is also seen as the process of bringing a person into closer alignment with their surroundings, and a large part of that process is the economy of those surroundings. Indigenization has played an important part in the economic roles of society. Under the Indigenization and Economic Empowerment Act, black Zimbabweans were offered a more prominent position in the economy, with foreigners having to give up 51% of their businesses to black people. China's Open Door Policy is seen as a major step of indigenization for its economy, as it opened the country to the Western world. This allowed different cultures to experience one another and opened up China's businesses to the Western world as well, setting China on a course of economic reform.
Social work
Another large part of indigenization is social work, although it is unclear what the actual process is when social forces come into play.
Indigenization is seen by some as less a process of naturalization and more a process of culturally relevant social work. Indigenization was not the standard, but it was a way both to accustom others to a local point of view and to help them understand where the people came from and their heritage. However, some argue that while the indigenization of social work may work when foreigners are brought into Western cultures, it would not work as well in non-Western cultures. They also argue that Western cultures seem to exaggerate the similarities and the differences between Western and foreign cultures.
Indigenization and Economic Empowerment Act
The Indigenization and Economic Empowerment Act was passed by the Zimbabwean Parliament in 2008. It is a set of regulations meant to regulate businesses, compelling foreign-owned firms to sell 51 percent of their business to native Zimbabweans over the following years. Five-year jail terms are assigned to foreigners who do not submit an indigenisation plan or who use natives as fronts for their businesses. The intent of the law is to ensure that the country's indigenous members fulfill a more prominent role in the economy. Controversy arose over this intent, with opponents stating that the law would scare away foreign investors. Indigenous Zimbabweans are defined as "any person who, before the 18th April, 1980 [when Zimbabwe gained independence from Britain], was disadvantaged by unfair discrimination on the grounds of his or her race, and any descendant of such person, and includes any company, association, syndicate or partnership of which indigenous Zimbabweans form the majority of the members or hold the controlling interest". This provision allows the minister of youth development, indigenization and economic empowerment, Saviour Kasukuwere, to keep a database of indigenous businesses from which foreign interests can pick partners. At the time the law was passed, the ruling party in Zimbabwe was Zanu-PF, led by President Robert Mugabe. Saviour Kasukuwere is a member of this party, which raised skepticism among economists, who speculated that the database might be used by the party to give its allies the best deals. Mr. Kasukuwere stated that he would implement the law regardless of objections.
Place names
Federal government organizations like the Geographical Names Board of Canada may change existing place names with feedback and action from provincial and local authorities, as well as accepting submissions for change from the public via accessible forms. Indigenous names may be revived as a result; notable examples include Sanirajak, Kinngait, qathet, Haida Gwaii, and the Salish Sea.
See also
Angolanidade
Cultural homogenization
Indigenism
Indigenismo
Korenizatsiya
Language localisation
Nation-building
Westernization
Nobility
Nobility is a social class found in many societies that have an aristocracy. It is normally ranked immediately below royalty. Nobility has often been an estate of the realm with many exclusive functions and characteristics. The characteristics associated with nobility may constitute substantial advantages over or relative to non-nobles or simply formal functions (e.g., precedence), and vary by country and by era. Membership in the nobility, including rights and responsibilities, is typically hereditary and patrilineal. Membership in the nobility has historically been granted by a monarch or government, and acquisition of sufficient power, wealth, ownerships, or royal favour has occasionally enabled commoners to ascend into the nobility. There are often a variety of ranks within the noble class. Legal recognition of nobility has been much more common in monarchies, but nobility also existed in such regimes as the Dutch Republic (1581–1795), the Republic of Genoa (1005–1815), the Republic of Venice (697–1797), and the Old Swiss Confederacy (1300–1798), and remains part of the legal social structure of some small non-hereditary regimes, e.g., San Marino and the Vatican City in Europe. In Classical Antiquity, the nobiles (nobles) of the Roman Republic were families descended from persons who had achieved the consulship. Those who belonged to the hereditary patrician families were nobles, but plebeians whose ancestors were consuls were also considered nobiles. In the Roman Empire, the nobility were descendants of this Republican aristocracy. While ancestry of contemporary noble families from ancient Roman nobility might technically be possible, no well-researched, historically documented generation-by-generation genealogical descents from ancient Roman times are known to exist in Europe. Hereditary titles and styles added to names (such as "Prince", "Lord", or "Lady"), as well as honorifics, often distinguish nobles from non-nobles in conversation and written speech. In many nations, most of the nobility have been untitled, and some hereditary titles do not indicate nobility (e.g., vidame). Some countries have had non-hereditary nobility, such as the Empire of Brazil or life peers in the United Kingdom.
History
The term derives from Latin nobilitas, the abstract noun of the adjective nobilis ("noble, but also secondarily well-known, famous, notable"). In ancient Roman society, nobilitas originated as an informal designation for the political governing class who had allied interests, including both patricians and plebeian families with an ancestor who had risen to the consulship through his own merit (see novus homo, "new man"). In modern usage, "nobility" is applied to the highest social class in pre-modern societies. In the feudal system (in Europe and elsewhere), the nobility were generally those who held a fief, often land or office, under vassalage, i.e., in exchange for allegiance and various, mainly military, services to a suzerain, who might be a higher-ranking nobleman or a monarch. It rapidly became a hereditary caste, sometimes associated with a right to bear a hereditary title and, for example in pre-revolutionary France, enjoying fiscal and other privileges. While noble status formerly conferred significant privileges in most jurisdictions, by the 21st century it had become a largely honorary dignity in most societies, although a few residual privileges may still be preserved legally (e.g. Spain, UK) and some Asian, Pacific and African cultures continue to attach considerable significance to formal hereditary rank or titles.
(Compare the entrenched position and leadership expectations of the nobility of the Kingdom of Tonga.) More than a third of British land is in the hands of aristocrats and traditional landed gentry. Nobility is a historical, social, and often legal notion, differing from high socio-economic status in that the latter is mainly based on pedigree, income, possessions, or lifestyle. Being wealthy or influential cannot make one noble, nor are all nobles wealthy or influential (aristocratic families have lost their fortunes in various ways, and the concept of the 'poor nobleman' is almost as old as nobility itself). Although many societies have a privileged upper class with substantial wealth and power, the status is not necessarily hereditary and does not entail a distinct legal status, nor differentiated forms of address. Various republics, including European countries such as Greece, Turkey, and Austria, and former Iron Curtain countries and places in the Americas such as Mexico and the United States, have expressly abolished the conferral and use of titles of nobility for their citizens. This is distinct from countries that have not abolished the right to inherit titles, but which do not grant legal recognition or protection to them, such as Germany and Italy, although Germany recognizes their use as part of the legal surname. Still, other countries and authorities allow their use, but forbid attachment of any privilege thereto, e.g., Finland, Norway, and the European Union, while French law also protects lawful titles against usurpation. Noble privileges Not all of the benefits of nobility derived from noble status . Usually privileges were granted or recognized by the monarch in association with possession of a specific title, office or estate. Most nobles' wealth derived from one or more estates, large or small, that might include fields, pasture, orchards, timberland, hunting grounds, streams, etc. It also included infrastructure such as a castle, well and mill to which local peasants were allowed some access, although often at a price. Nobles were expected to live "nobly", that is, from the proceeds of these possessions. Work involving manual labor or subordination to those of lower rank (with specific exceptions, such as in military or ecclesiastic service) was either forbidden (as derogation from noble status) or frowned upon socially. On the other hand, membership in the nobility was usually a prerequisite for holding offices of trust in the realm and for career promotion, especially in the military, at court and often the higher functions in the government, judiciary and church. Prior to the French Revolution, European nobles typically commanded tribute in the form of entitlement to cash rents or usage taxes, labor or a portion of the annual crop yield from commoners or nobles of lower rank who lived or worked on the noble's manor or within his seigneurial domain. In some countries, the local lord could impose restrictions on such a commoner's movements, religion or legal undertakings. Nobles exclusively enjoyed the privilege of hunting. In France, nobles were exempt from paying the taille, the major direct tax. Peasants were not only bound to the nobility by dues and services, but the exercise of their rights was often also subject to the jurisdiction of courts and police from whose authority the actions of nobles were entirely or partially exempt. In some parts of Europe the right of private war long remained the privilege of every noble. 
During the early Renaissance, duelling established the status of a respectable gentleman and was an accepted manner of resolving disputes. Since the end of World War I the hereditary nobility entitled to special rights has largely been abolished in the Western World as intrinsically discriminatory, and discredited as inferior in efficiency to individual meritocracy in the allocation of societal resources. Nobility came to be associated with social rather than legal privilege, expressed in a general expectation of deference from those of lower rank. By the 21st century even that deference had become increasingly minimized. In general, the present nobility present in the European monarchies has no more privileges than the citizens decorated in republics. Ennoblement In France, a (lordship) might include one or more manors surrounded by land and villages subject to a noble's prerogatives and disposition. could be bought, sold or mortgaged. If erected by the crown into, e.g., a barony or countship, it became legally entailed for a specific family, which could use it as their title. Yet most French nobles were untitled ("seigneur of Montagne" simply meant ownership of that lordship but not, if one was not otherwise noble, the right to use a title of nobility, as commoners often purchased lordships). Only a member of the nobility who owned a countship was allowed, , to style himself as its , although this restriction came to be increasingly ignored as the drew to its close. In other parts of Europe, sovereign rulers arrogated to themselves the exclusive prerogative to act as within their realms. For example, in the United Kingdom royal letters patent are necessary to obtain a title of the peerage, which also carries nobility and formerly a seat in the House of Lords, but never came with automatic entail of land nor rights to the local peasants' output. Rank within the nobility Nobility might be either inherited or conferred by a fons honorum. It is usually an acknowledged preeminence that is hereditary, i.e. the status descends exclusively to some or all of the legitimate, and usually male-line, descendants of a nobleman. In this respect, the nobility as a class has always been much more extensive than the primogeniture-based titled nobility, which included peerages in France and in the United Kingdom, grandezas in Portugal and Spain, and some noble titles in Belgium, Italy, the Netherlands, Prussia, and Scandinavia. In Russia, Scandinavia and non-Prussian Germany, titles usually descended to all male-line descendants of the original titleholder, including females. In Spain, noble titles are now equally heritable by females and males alike. Noble estates, on the other hand, gradually came to descend by primogeniture in much of western Europe aside from Germany. In Eastern Europe, by contrast, with the exception of a few Hungarian estates, they usually descended to all sons or even all children. In France, some wealthy bourgeois, most particularly the members of the various parlements, were ennobled by the king, constituting the noblesse de robe. The old nobility of landed or knightly origin, the noblesse d'épée, increasingly resented the influence and pretensions of this parvenu nobility. In the last years of the ancien régime the old nobility pushed for restrictions of certain offices and orders of chivalry to noblemen who could demonstrate that their lineage had extended "quarterings", i.e. 
several generations of noble ancestry, to be eligible for offices and favours at court along with nobles of medieval descent, although historians such as William Doyle have disputed this so-called "Aristocratic Reaction". Various court and military positions were reserved by tradition for nobles who could "prove" an ancestry of at least seize quartiers (16 quarterings), indicating exclusively noble descent (as displayed, ideally, in the family's coat of arms) extending back five generations (all 16 great-great-grandparents). This illustrates the traditional link in many countries between heraldry and nobility; in those countries where heraldry is used, nobles have almost always been armigerous, and have used heraldry to demonstrate their ancestry and family history. However, heraldry has never been restricted to the noble classes in most countries, and being armigerous does not necessarily demonstrate nobility. Scotland, however, is an exception. In a number of recent cases in Scotland the Lord Lyon King of Arms has controversially ( Scotland's Salic law) granted the arms and allocated the chiefships of medieval noble families to female-line descendants of lords, even when they were not of noble lineage in the male line, while persons of legitimate male-line descent may still survive (e.g. the modern Chiefs of Clan MacLeod). In some nations, hereditary titles, as distinct from noble rank, were not always recognised in law, e.g., Poland's Szlachta. European ranks of nobility lower than baron or its equivalent, are commonly referred to as the petty nobility, although baronets of the British Isles are deemed titled gentry. Most nations traditionally had an untitled lower nobility in addition to titled nobles. An example is the landed gentry of the British Isles. Unlike England's gentry, the Junkers of Germany, the noblesse de robe of France, the hidalgos of Spain and the nobili of Italy were explicitly acknowledged by the monarchs of those countries as members of the nobility, although untitled. In Scandinavia, the Benelux nations and Spain there are still untitled as well as titled families recognised in law as noble. In Hungary members of the nobility always theoretically enjoyed the same rights. In practice, however, a noble family's financial assets largely defined its significance. Medieval Hungary's concept of nobility originated in the notion that nobles were "free men", eligible to own land. This basic standard explains why the noble population was relatively large, although the economic status of its members varied widely. Untitled nobles were not infrequently wealthier than titled families, while considerable differences in wealth were also to be found within the titled nobility. The custom of granting titles was introduced to Hungary in the 16th century by the House of Habsburg. Historically, once nobility was granted, if a nobleman served the monarch well he might obtain the title of baron, and might later be elevated to the rank of count. As in other countries of post-medieval central Europe, hereditary titles were not attached to a particular land or estate but to the noble family itself, so that all patrilineal descendants shared a title of baron or count (cf. peerage). Neither nobility nor titles could be transmitted through women. Some con artists sell fake titles of nobility, often with impressive-looking documentation. This may be illegal, depending on local law. They are more often illegal in countries that actually have nobilities, such as European monarchies. 
In the United States, such commerce may constitute actionable fraud rather than criminal usurpation of an exclusive right to use of any given title by an established class. Other terms "Aristocrat" and "aristocracy", in modern usage, refer colloquially and broadly to persons who inherit elevated social status, whether due to membership in the (formerly) official nobility or the monied upper class. Blue blood is an English idiom recorded since 1811 in the Annual Register and in 1834 for noble birth or descent; it is also known as a translation of the Spanish phrase sangre azul, which described the Spanish royal family and high nobility who claimed to be of Visigothic descent, in contrast to the Moors. The idiom originates from ancient and medieval societies of Europe and distinguishes an upper class (whose superficial veins appeared blue through their untanned skin) from a working class of the time. The latter consisted mainly of agricultural peasants who spent most of their time working outdoors and thus had tanned skin, through which superficial veins appear less prominently. Robert Lacey explains the genesis of the blue blood concept: It was the Spaniards who gave the world the notion that an aristocrat's blood is not red but blue. The Spanish nobility started taking shape around the ninth century in classic military fashion, occupying land as warriors on horseback. They were to continue the process for more than five hundred years, clawing back sections of the peninsula from its Moorish occupiers, and a nobleman demonstrated his pedigree by holding up his sword arm to display the filigree of blue-blooded veins beneath his pale skin—proof that his birth had not been contaminated by the dark-skinned enemy. Africa Africa has a plethora of ancient lineages in its various constituent nations. Some, such as the numerous sharifian families of North Africa, the Keita dynasty of Mali, the Solomonic dynasty of Ethiopia, the De Souza family of Benin, the Zulfikar family of Egypt and the Sherbro Tucker clan of Sierra Leone, claim descent from notables from outside of the continent. Most, such as those composed of the descendants of Shaka and those of Moshoeshoe of Southern Africa, belong to peoples that have been resident in the continent for millennia. Generally their royal or noble status is recognized by and derived from the authority of traditional custom. A number of them also enjoy either a constitutional or a statutory recognition of their high social positions. Ethiopia Ethiopia has a nobility that is almost as old as the country itself. Throughout the history of the Ethiopian Empire most of the titles of nobility have been tribal or military in nature. However the Ethiopian nobility resembled its European counterparts in some respects; until 1855, when Tewodros II ended the Zemene Mesafint its aristocracy was organized similarly to the feudal system in Europe during the Middle Ages. For more than seven centuries, Ethiopia (or Abyssinia, as it was then known) was made up of many small kingdoms, principalities, emirates and imamates, which owed their allegiance to the nəgusä nägäst (literally "King of Kings"). Despite its being a Christian monarchy, various Muslim states paid tribute to the emperors of Ethiopia for centuries: including the Adal Sultanate, the Emirate of Harar, and the Awsa sultanate. 
Ethiopian nobility were divided into two different categories: Mesafint ("prince"), the hereditary nobility that formed the upper echelon of the ruling class; and the Mekwanin ("governor") who were appointed nobles, often of humble birth, who formed the bulk of the nobility (cf. the Ministerialis of the Holy Roman Empire). In Ethiopia there were titles of nobility among the Mesafint borne by those at the apex of medieval Ethiopian society. The highest royal title (after that of emperor) was Negus ("king") which was held by hereditary governors of the provinces of Begemder, Shewa, Gojjam, and Wollo. The next highest seven titles were Ras, Dejazmach, Fit'awrari, Grazmach, Qenyazmach, Azmach and Balambaras. The title of Le'ul Ras was accorded to the heads of various noble families and cadet branches of the Solomonic dynasty, such as the princes of Gojjam, Tigray, and Selalle. The heirs of the Le'ul Rases were titled Le'ul Dejazmach, indicative of the higher status they enjoyed relative to Dejazmaches who were not of the blood imperial. There were various hereditary titles in Ethiopia: including that of Jantirar, reserved for males of the family of Empress Menen Asfaw who ruled over the mountain fortress of Ambassel in Wollo; Wagshum, a title created for the descendants of the deposed Zagwe dynasty; and Shum Agame, held by the descendants of Dejazmach Sabagadis, who ruled over the Agame district of Tigray. The vast majority of titles borne by nobles were not, however, hereditary. Despite being largely dominated by Christian elements, some Muslims obtained entrée into the Ethiopian nobility as part of their quest for aggrandizement during the 1800s. To do so they were generally obliged to abandon their faith and some are believed to have feigned conversion to Christianity for the sake of acceptance by the old Christian aristocratic families. One such family, the Wara Seh (more commonly called the "Yejju dynasty") converted to Christianity and eventually wielded power for over a century, ruling with the sanction of the Solomonic emperors. The last such Muslim noble to join the ranks of Ethiopian society was Mikael of Wollo who converted, was made Negus of Wollo, and later King of Zion, and even married into the Imperial family. He lived to see his son, Lij Iyasu, inherit the throne in 1913—only to be deposed in 1916 because of his conversion to Islam. Madagascar The nobility in Madagascar are known as the Andriana. In much of Madagascar, before French colonization of the island, the Malagasy people were organised into a rigid social caste system, within which the Andriana exercised both spiritual and political leadership. The word "Andriana" has been used to denote nobility in various ethnicities in Madagascar: including the Merina, the Betsileo, the Betsimisaraka, the Tsimihety, the Bezanozano, the Antambahoaka and the Antemoro. The word Andriana has often formed part of the names of Malagasy kings, princes and nobles. Linguistic evidence suggests that the origin of the title Andriana is traceable back to an ancient Javanese title of nobility. Before the colonization by France in the 1890s, the Andriana held various privileges, including land ownership, preferment for senior government posts, free labor from members of lower classes, the right to have their tombs constructed within town limits, etc. 
The Andriana rarely married outside their caste: a high-ranking woman who married a lower-ranking man took on her husband's lower rank, but a high-ranking man marrying a woman of lower rank did not forfeit his status, although his children could not inherit his rank or property (cf. morganatic marriage). In 2011, the Council of Kings and Princes of Madagascar endorsed the revival of a Christian Andriana monarchy that would blend modernity and tradition. Nigeria Contemporary Nigeria has a class of traditional notables which is led by its reigning monarchs, the Nigerian traditional rulers. Though their functions are largely ceremonial, the titles of the country's royals and nobles are often centuries old and are usually vested in the membership of historically prominent families in the various subnational kingdoms of the country. Membership of initiatory societies that have inalienable functions within the kingdoms is also a common feature of Nigerian nobility, particularly among the southern tribes, where such figures as the Ogboni of the Yoruba, the Nze na Ozo of the Igbo and the Ekpe of the Efik are some of the most famous examples. Although many of their traditional functions have become dormant due to the advent of modern governance, their members retain precedence of a traditional nature and are especially prominent during festivals. Outside of this, many of the traditional nobles of Nigeria continue to serve as privy counsellors and viceroys in the service of their traditional sovereigns in a symbolic continuation of the way that their titled ancestors and predecessors did during the pre-colonial and colonial periods. Many of them are also members of the country's political elite due to their not being covered by the prohibition from involvement in politics that governs the activities of the traditional rulers. Holding a chieftaincy title, either of the traditional variety (which involves taking part in ritual re-enactments of your title's history during annual festivals, roughly akin to a British peerage) or the honorary variety (which does not involve the said re-enactments, roughly akin to a knighthood), grants an individual the right to use the word "chief" as a pre-nominal honorific while in Nigeria. Asia India, Pakistan, Bangladesh, and Nepal Historically Rajputs formed a class of aristocracy associated with warriorhood, developing after the 10th century in the Indian subcontinent. During the Mughal era, a class of administrators known as Nawabs emerged who initially served as governors of provinces, later becoming independent. In the British Raj, many members of the nobility were elevated to royalty as they became the monarchs of their princely states, but as many princely state rulers were reduced from royals to noble zamindars. Hence, many nobles in the subcontinent had royal titles of Raja, Rai, Rana, Rao, etc. In Nepal, Kaji was a title and position used by nobility of Gorkha Kingdom (1559–1768) and Kingdom of Nepal (1768–1846). Historian Mahesh Chandra Regmi suggests that Kaji is derived from Sanskrit word Karyi which meant functionary. Other noble and aristocratic titles were Thakur, Sardar, Jagirdar, Mankari, Dewan, Pradhan, Kaji etc. China In East Asia, the system was often modeled on imperial China, the leading culture. Emperors conferred titles of nobility. Imperial descendants formed the highest class of ancient Chinese nobility, their status based upon the rank of the empress or concubine from which they descend maternally (as emperors were polygamous). 
Numerous titles such as Taizi (crown prince), and equivalents of "prince" were accorded, and due to complexities in dynastic rules, rules were introduced for Imperial descendants. The titles of the junior princes were gradually lowered in rank by each generation while the senior heir continued to inherit their father's titles. It was a custom in China for the new dynasty to ennoble and enfeoff a member of the dynasty which they overthrew with a title of nobility and a fief of land so that they could offer sacrifices to their ancestors, in addition to members of other preceding dynasties. China had a feudal system in the Shang and Zhou dynasties, which gradually gave way to a more bureaucratic one beginning in the Qin dynasty (221 BC). This continued through the Song dynasty, and by its peak power shifted from nobility to bureaucrats. This development was gradual and generally only completed in full by the Song dynasty. In the Han dynasty, for example, even though noble titles were no longer given to those other than the emperor's relatives, the fact that the process of selecting officials was mostly based on a vouching system by current officials as officials usually vouched for their own sons or those of other officials meant that a de facto aristocracy continued to exist. This process was further deepened during the Three Kingdoms period with the introduction of the Nine-rank system. By the Sui dynasty, however, the institution of the Imperial examination system marked the transformation of a power shift towards a full bureaucracy, though the process would not be truly completed until the Song dynasty. Titles of nobility became symbolic along with a stipend while governance of the country shifted to scholar officials. In the Qing dynasty, titles of nobility were still granted by the emperor, but served merely as honorifics based on a loose system of favours to the Qing emperor. Under a centralized system, the empire's governance was the responsibility of the Confucian-educated scholar-officials and the local gentry, while the literati were accorded gentry status. For male citizens, advancement in status was possible via garnering the top three positions in imperial examinations. The Qing appointed the Ming imperial descendants to the title of Marquis of Extended Grace. The oldest held continuous noble title in Chinese history was that held by the descendants of Confucius, as Duke Yansheng, which was renamed as the Sacrificial Official to Confucius in 1935 by the Republic of China. The title is held by Kung Tsui-chang. There is also a "Sacrificial Official to Mencius" for a descendant of Mencius, a "Sacrificial Official to Zengzi" for a descendant of Zengzi, and a "Sacrificial Official to Yan Hui" for a descendant of Yan Hui. The bestowal of titles was abolished upon the establishment of the People's Republic of China in 1949, as part of a larger effort to remove feudal influences and practises from Chinese society. Islamic world In some Islamic countries, there are no definite noble titles (titles of hereditary rulers being distinct from those of hereditary intermediaries between monarchs and commoners). Persons who can trace legitimate descent from Muhammad or the clans of Quraysh, as can members of several present or formerly reigning dynasties, are widely regarded as belonging to the ancient, hereditary Islamic nobility. 
In some Islamic countries they inherit (through mother or father) hereditary titles, although without any other associated privilege, e.g., variations of the title Sayyid and Sharif. Regarded as more religious than the general population, many people turn to them for clarification or guidance in religious matters. In Iran, historical titles of the nobility, including Mirza, Khan, ed-Dowleh and Shahzada ("Son of a Shah"), are no longer recognised. An aristocratic family is now recognised by its family name, often derived from the post held by its ancestors, given that family names in Iran only appeared at the beginning of the 20th century. Sultans have been an integral part of Islamic history. See: Zarabi. During the Ottoman Empire, in the Imperial Court and the provinces, there were many Ottoman titles and appellations forming a somewhat unusual and complex system in comparison with the other Islamic countries. The bestowal of noble and aristocratic titles was widespread across the empire, and was continued by independent monarchs even after its fall. One of the most elaborate examples is that of the Egyptian aristocracy's largest clan, the Abaza family, of maternal Abazin and Circassian origin.
Japan
Medieval Japan developed a feudal system similar to the European system, where land was held in exchange for military service. The daimyō class, or hereditary landowning nobles, held great socio-political power. As in Europe, they commanded private armies made up of samurai, an elite warrior class; for long periods, these held the real power without a real central government, and often plunged the country into a state of civil war. The daimyō class can be compared to European peers, and the samurai to European knights, but important differences exist. Following the Meiji Restoration in 1868, feudal titles and ranks were reorganised into the kazoku, a five-rank peerage system modelled on the British example, which granted seats in the upper house of the Imperial Diet; this ended in 1947 following Japan's defeat in World War II.
Philippines
Like other Southeast Asian countries, many regions in the Philippines have indigenous nobility, partially influenced by Hindu, Chinese, and Islamic custom. Since ancient times, Datu was the common title of a chief or monarch of the many pre-colonial principalities and sovereign dominions throughout the isles; in some areas the term Apo was also used. With the titles Sultan and Rajah, Datu (and its Malay cognate, Datok) are currently used in some parts of the Philippines, Indonesia, Malaysia and Brunei. These titles are the rough equivalents of European titles, albeit dependent on the actual wealth and prestige of the bearer.
Recognition by the Spanish Crown
Upon the islands' Christianization, the datus retained governance of their territories despite annexation to the Spanish Empire. In a law signed 11 June 1594, King Philip II of Spain ordered that the indigenous rulers continue to receive the same honors and privileges accorded them prior to their conversion to Catholicism. The baptized nobility subsequently coalesced into the exclusive, landed ruling class of the lowlands known as the principalía. On 22 March 1697, King Charles II of Spain confirmed the privileges granted by his predecessors (in Title VII, Book VI of the Laws of the Indies) to indigenous nobilities of the Crown colonies, including the principalía of the Philippines, and extended to them and to their descendants the preeminence and honors customarily attributed to the hidalgos of Castile.
Filipino nobles during the Spanish era
The Laws of the Indies and other pertinent royal decrees were enforced in the Philippines and benefited many indigenous nobles. During the colonial period, indigenous chiefs were clearly equated with the Spanish hidalgos; the most resounding proof of this is the General Military Archive in Segovia, where the qualification of "nobility" (found in the service records) is attributed to those Filipinos who were admitted to the Spanish military academies and whose ancestors were caciques, encomenderos, notable Tagalogs, chieftains, governors, or holders of positions in the municipal administration or government in the different regions of the large islands of the Archipelago, or of the many small islands of which it is composed. In the context of the ancient tradition and norms of Castilian nobility, all descendants of a noble are considered noble, regardless of fortune. At the Real Academia de la Historia there is a substantial number of records referring to the Philippine Islands; while most concern the history of the islands, the Academia's holdings also include many genealogical records. The archives of the Academia and its royal stamp recognized the appointments of hundreds of natives of the Philippines who, by virtue of their social position, occupied posts in the administration of the territories and were classified as "nobles". The presence of these notables demonstrates Spain's concern to educate the natives of those islands and to secure their collaboration in the government of the Archipelago. This aspect of Spanish rule appears to have been much more strongly implemented in the Philippines than in the Americas. Hence in the Philippines the local nobility, by reason of the offices accorded to their social class, acquired greater importance than in the Indies of the New World. With the recognition of the Spanish monarchs came the privilege of being addressed as Don or Doña, a mark of esteem and distinction in Europe reserved for a person of noble or royal status during the colonial period. Other honors and high regard were also accorded to the Christianized Datus by the Spanish Empire. For example, the Gobernadorcillos (the elected leaders of the Cabezas de Barangay, or Christianized Datus) and Filipino officials of justice received the greatest consideration from the Spanish Crown officials. The colonial officials were under obligation to show them the honor corresponding to their respective duties. They were allowed to sit in the houses of the Spanish provincial governors and in other places; they were not left standing. It was not permitted for Spanish parish priests to treat these Filipino nobles with less consideration. The Gobernadorcillos exercised the command of the towns and were port captains in coastal towns. They also had the rights and powers to elect assistants and several lieutenants and alguaciles, proportionate in number to the inhabitants of the town.

Current status quaestionis
The recognition of the rights and privileges accorded to the Filipino principalía as hijosdalgos of Castile seems to have facilitated the entrance of Filipino nobles into institutions under the Spanish Crown, either civil or religious, which required proofs of nobility.
However, to see such recognition as an approximation or comparative estimation of rank or status might not be correct, since in reality, although the principales were vassals of the Crown, their rights as sovereigns in their former dominions were guaranteed by the Laws of the Indies, more particularly by the Royal Decree of Philip II of 11 June 1594, which Charles II confirmed for the purpose stated above, in order to satisfy the requirements of the existing laws in the Peninsula. It must be recalled that from the beginning of the colonization, the conquistador Miguel López de Legazpi did not strip the ancient sovereign rulers of the Archipelago (who vowed allegiance to the Spanish Crown) of their legitimate rights. Many of them accepted the Catholic religion and were his allies from the very beginning. He demanded from these local rulers only vassalage to the Spanish Crown, replacing the similar overlordship which had previously existed in a few cases, e.g., the Sultanate of Brunei's overlordship of the Kingdom of Maynila. Other independent polities that were not vassals to other states, e.g., the Confederation of Madja-as and the Rajahnate of Cebu, were more like protectorates or suzerainties, having had alliances with the Spanish Crown before it took total control of most parts of the Archipelago.

Europe
European nobility originated in the feudal/seignorial system that arose in Europe during the Middle Ages. Originally, knights or nobles were mounted warriors who swore allegiance to their sovereign and promised to fight for him in exchange for an allocation of land (usually together with serfs living thereon). During the period known as the Military Revolution, nobles gradually lost their role in raising and commanding private armies, as many nations created cohesive national armies. This was coupled with a loss of the socio-economic power of the nobility, owing to the economic changes of the Renaissance and the growing economic importance of the merchant classes, which increased still further during the Industrial Revolution. In countries where the nobility was the dominant class, the bourgeoisie gradually grew in power; a rich city merchant came to be more influential than a nobleman, and the latter sometimes sought intermarriage with families of the former to maintain their noble lifestyles. However, in many countries at this time, the nobility retained substantial political importance and social influence: for instance, the United Kingdom's government was dominated by the (unusually small) nobility until the middle of the 19th century. Thereafter the powers of the nobility were progressively reduced by legislation. However, until 1999, all hereditary peers were entitled to sit and vote in the House of Lords. Since then, only 92 of them have this entitlement, of whom 90 are elected by the hereditary peers as a whole to represent the peerage. The countries with the highest proportion of nobles were the Polish–Lithuanian Commonwealth (15% of an 18th-century population of 800,000), Castile (probably 10%), and Spain (722,000 in 1768, which was 7–8% of the entire population), with lower percentages elsewhere, such as Russia in 1760 with 500,000–600,000 nobles (2–3% of the entire population), and pre-revolutionary France, where there were no more than 300,000 nobles prior to 1789, about 1% of the population (although some scholars believe this figure is an overestimate). In 1718 Sweden had between 10,000 and 15,000 nobles, which was 0.5% of the population. In Germany it was 0.01%.
In the Kingdom of Hungary nobles made up 5% of the population. All the nobles in 18th-century Europe numbered perhaps 3–4 million out of a total of 170–190 million inhabitants. By contrast, in 1707, when England and Scotland united into Great Britain, there were only 168 English peers, and 154 Scottish ones, though their immediate families were recognised as noble. Apart from the hierarchy of noble titles, in England rising through baron, viscount, earl, and marquess to duke, many countries had categories at the top or bottom of the nobility. The gentry, relatively small landowners with perhaps one or two villages, were mostly noble in most countries, for example the Polish landed gentry. At the top, Poland had a far smaller class of "magnates", who were hugely rich and politically powerful. In other countries the small groups of Spanish Grandee or Peer of France had great prestige but little additional power. Latin America In addition to the nobility of a variety of native populations in what is now Latin America (such as the Aymara, Aztecs, Maya, and Quechua) who had long traditions of being led by monarchs and nobles, peerage traditions dating to the colonial and post-colonial imperial periods (in the case of such countries as Mexico and Brazil), have left noble families in each of them that have ancestral ties to those nations' Indigenous and European families, especially the Spanish nobility, but also the Portuguese and French nobility. Bolivia From the many historical native chiefs and rulers of pre-Columbian Bolivia to the Criollo upper class that dates to the era of colonial Bolivia and that has ancestral ties to the Spanish nobility, Bolivia has several groups that may fit into the category of nobility. For example, there is a ceremonial monarchy led by a titular ruler who is known as the Afro-Bolivian king. The members of his dynasty are the direct descendants of an old African tribal monarchy that were brought to Bolivia as slaves. They have provided leadership to the Afro-Bolivian community ever since that event and have been officially recognized by Bolivia's government since 2007. Brazil The nobility in Brazil began during the colonial era with the Portuguese nobility. When Brazil became a united kingdom with Portugal in 1815, the first Brazilian titles of nobility were granted by the king of Portugal, Brazil and the Algarves. With the independence of Brazil in 1822 as a constitutional monarchy, the titles of nobility initiated by the king of Portugal were continued and new titles of nobility were created by the emperor of Brazil. However, according to the Brazilian Constitution of 1824, the emperor conferred titles of nobility, which were personal and therefore non-hereditary, unlike the earlier Portuguese and Portuguese-Brazilian titles, being inherited exclusively to the royal titles of the Brazilian imperial family. During the existence of the Empire of Brazil, 1,211 noble titles were acknowledged. With the proclamation of the First Brazilian Republic, in 1889, the Brazilian nobility was discontinued. It was also prohibited, under penalty of accusation of high treason and the suspension of political rights, to accept noble titles and foreign decorations without the proper permission of the state. In particular, the nobles of greater distinction, by respect and tradition, were allowed to use their titles during the republican regime. The imperial family also could not return to the Brazilian soil until 1921, when the Banishment Law was repealed. 
Mexico The Mexican nobility were a hereditary nobility of Mexico, with specific privileges and obligations determined in the various political systems that historically ruled over the Mexican territory. The term is used in reference to various groups throughout the entirety of Mexican history, from formerly ruling indigenous families of the pre-Columbian states of present-day Mexico, to noble Mexican families of Spanish, mestizo, and other European descent, which include conquistadors and their descendants (ennobled by King Philip II in 1573), untitled noble families of Mexico, and holders of titles of nobility acquired during the Viceroyalty of the New Spain (1521–1821), the First Mexican Empire (1821–1823), and the Second Mexican Empire (1862–1867); as well as bearers of titles and other noble prerogatives granted by foreign powers who have settled in Mexico. The Political Constitution of Mexico has prohibited the state from recognizing any titles of nobility since 1917. The present United Mexican States does not issue or recognize titles of nobility or any hereditary prerogatives and honors. Informally, however, a Mexican aristocracy remains a part of Mexican culture and its hierarchical society. Nobility by nation A list of noble titles for different European countries can be found at Royal and noble ranks. Africa Botswanan chieftaincy Kgosi Burundian nobility Egyptian nobility Ethiopian nobility Ras Jantirar Ghanaian chieftaincy Akan chieftaincy Malagasy nobility Malian nobility Nigerian Chieftaincy Nigerian traditional rulers Lamido Hakimi Oba Ogboni Eze Nze na Ozo Rwandan nobility Somali nobility Zimbabwean chieftaincy Americas Canadian peers and baronets French-Canadian nobility Brazilian nobility Cuban nobility Kuraka (Peru) Mexican nobility Pipiltin United States – While its constitution bars the federal and state governments from granting titles of nobility, in most cases citizens are not barred from accepting, holding or inheriting them. And, since at least 1953, the U.S. requires applicants for naturalization to renounce any titles. 
Asia Armenian nobility Chinese nobility Indian peers and baronets Kaji (Nepal) Basnyat family Kunwar family Pande family Rana dynasty Thapa family Indonesian (Dutch East Indies) nobility Japanese nobility Daimyō Kazoku Kuge Fujiwara family Minamoto family Tachibana family Taira family Burmese nobility Burmese Mon nobility Korean nobility Vietnamese nobility Malay nobility Mongolian nobility Ottoman titles Principalía of the Philippines Thai nobility Europe Albanian nobility Austrian nobility Baltic nobility – ethnically Baltic German nobility in the modern area of Estonia and Latvia Belgian nobility British nobility British peerage Peerage of Great Britain Peerage of the United Kingdom English peerage Scottish noblesse Scottish peerage Barons Lairds Welsh Peers Irish peerage Chiefs of the Name British gentry/minor nobility Baronets Knights Byzantine aristocracy and bureaucracy Phanariotes Croatian nobility Czech nobility Danish nobility Dutch nobility Finnish nobility French nobility German nobility Graf Junker Hungarian nobility Icelandic nobility Irish nobility Italian nobility Black Nobility Lithuanian nobility Maltese nobility Montenegrin nobility Norwegian nobility Polish nobility Magnates Portuguese nobility Russian nobility Boyars Ruthenian nobility Serbian nobility Spanish nobility Swedish nobility Swiss nobility Oceania Australian peers and baronets Fijian nobility Polynesian nobility Samoan nobility Tongan nobles See also Almanach de Gotha Aristocracy (class) Ascribed status Baig Caste (social hierarchy of India) Debutante False titles of nobility Gentleman Gentry Grand Burgher (German: Großbürger) Heraldry Honour Kaji (Nepal) King List of fictional nobility List of noble houses Magnate Nobiliary particle Noblesse oblige Noble women Nze na Ozo Ogboni Pasha Patrician (ancient Rome) Patrician (post-Roman Europe) Peerage Petty nobility Princely state Raja Redorer son blason Royal descent Social environment Symbolic capital References External links WW-Person, an on-line database of European noble genealogy (archived) Worldroots, a selection of art and genealogy of European nobility Worldwidewords Etymology OnLine Genesis of European Nobility A few notes about grants of titles of nobility by modern Serbian Monarchs Estates (social groups) Feudalism Oligarchy Social classes
Thaumaturgy
Thaumaturgy, derived from the Greek words thauma (wonder) and ergon (work), refers to the practical application of magic to effect change in the physical world. Historically, thaumaturgy has been associated with the manipulation of natural forces, the creation of wonders, and the performance of magical feats through esoteric knowledge and ritual practice. Unlike theurgy, which focuses on invoking divine powers, thaumaturgy is more concerned with utilizing occult principles to achieve specific outcomes, often in a tangible and observable manner. It is sometimes translated into English as wonderworking. This concept has evolved from its ancient roots in magical traditions to its incorporation into modern Western esotericism. Thaumaturgy has been practiced by individuals seeking to exert influence over the material world through both subtle and overt magical means. It has played a significant role in the development of magical systems, particularly those that emphasize the practical aspects of esoteric work. In modern times, thaumaturgy continues to be a subject of interest within the broader field of occultism, where it is studied and practiced as part of a larger system of magical knowledge. Its principles are often applied in conjunction with other forms of esoteric practice, such as alchemy and Hermeticism, to achieve a deeper understanding and mastery of the forces that govern the natural and supernatural worlds. A practitioner of thaumaturgy is a "thaumaturge", "thaumaturgist", "thaumaturgus", "miracle worker", or "wonderworker". Etymology The word thaumaturgy derives from Greek thaûma, meaning "miracle" or "marvel" (final t from genitive thaûmatos) and érgon, meaning "work". In the 16th century, the word thaumaturgy entered the English language meaning miraculous or magical powers. The word was first anglicized and used in the magical sense in John Dee's book The Mathematicall Praeface to Elements of Geometrie of Euclid of Megara (1570). He mentions an "art mathematical" called "thaumaturgy... which giveth certain order to make strange works, of the sense to be perceived and of men greatly to be wondered at". Historical development Ancient roots The origins of thaumaturgy can be traced back to ancient civilizations where magical practices were integral to both religious rituals and daily life. In ancient Egypt, priests were often regarded as thaumaturges, wielding their knowledge of rituals and incantations to influence natural and supernatural forces. These practices were aimed at protecting the Pharaoh, ensuring a successful harvest, or even controlling the weather. Similarly, in ancient Greece, certain figures were believed to possess the ability to perform miraculous feats, often attributed to their deep understanding of the mysteries of the gods and nature. This blending of religious and magical practices laid the groundwork for what would later be recognized as thaumaturgy in Western esotericism. In Greek writings, the term thaumaturge also referred to several Christian saints. In this context, the word is usually translated into English as 'wonderworker'. Notable early Christian thaumaturges include Gregory Thaumaturgus (c. 213–270), Saint Menas of Egypt (285–c. 309), Saint Nicholas (270–343), and Philomena ( 300 (?)). Medieval and Renaissance Europe During the medieval period, thaumaturgy evolved within the context of Christian mysticism and early scientific thought. 
The medieval understanding of thaumaturgy was closely linked to the idea of miracles, with saints and holy men often credited with thaumaturgic powers. The seventeenth-century Irish Franciscan editor John Colgan called the three early Irish saints, Patrick, Brigid, and Columba, thaumaturges in his Acta Triadis Thaumaturgae (Louvain, 1647). Later notable medieval Christian thaumaturges include Anthony of Padua (1195–1231) and the bishop of Fiesole, Andrew Corsini of the Carmelites (1302–1373), who was called a thaumaturge during his lifetime. This period also saw the development of grimoires—manuals for magical practices—where rituals and spells were documented, often blending Christian and pagan traditions. In the Renaissance, the concept of thaumaturgy expanded as scholars like John Dee explored the intersections between magic, science, and religion. Dee's Mathematicall Praeface to Elements of Geometrie of Euclid of Megara (1570) is one of the earliest English texts to discuss thaumaturgy, describing it as the art of creating "strange works" through a combination of natural and mathematical principles. Dee's work reflects the Renaissance pursuit of knowledge that blurred the lines between the magical and the mechanical, as thaumaturges were often seen as early scientists who harnessed the hidden powers of nature. In Dee's time, "the Mathematicks" referred not merely to the abstract computations associated with the term today, but to physical mechanical devices which employed mathematical principles in their design. These devices, operated by means of compressed air, springs, strings, pulleys or levers, were seen by unsophisticated people (who did not understand their working principles) as magical devices which could only have been made with the aid of demons and devils. By building such mechanical devices, Dee earned a reputation as a conjurer "dreaded" by neighborhood children. He complained of this assessment in his Mathematicall Praeface. Notable Christian thaumaturges of the Renaissance, the Age of Enlightenment, and later periods include Gerard Majella (1726–1755), Ambrose of Optina (1812–1891), and John of Kronstadt (1829–1908).

Incorporation into modern esotericism
The transition into modern esotericism saw thaumaturgy taking on a more structured role within various magical systems, particularly those developed in the 18th and 19th centuries. In Hermeticism and the Western occult tradition, thaumaturgy was often practiced alongside alchemy and theurgy, with a focus on manipulating the material world through ritual and symbolic action. The Hermetic Order of the Golden Dawn, a prominent magical order founded in the late 19th century, incorporated thaumaturgy into its curriculum, emphasizing the importance of both theory and practice in the mastery of magical arts. Thaumaturgy's role in modern esotericism also intersects with the rise of ceremonial magic, where it is often employed to achieve specific, practical outcomes—ranging from healing to the invocation of spirits. Contemporary magicians continue to explore and adapt thaumaturgic practices, often drawing from a wide range of historical and cultural sources to create eclectic and personalized systems of magic.

Core principles and practices

Principles of sympathy and contagion
Thaumaturgy is often governed by two key magical principles: the Principle of Sympathy and the Principle of Contagion. These principles are foundational in understanding how thaumaturges influence the physical world through magical means.
The Principle of Sympathy operates on the idea that "like affects like", meaning that objects or symbols that resemble each other can influence each other. For example, a miniature representation of a desired outcome, such as a model of a bridge, could be used in a ritual to ensure the successful construction of an actual bridge. The Principle of Contagion, on the other hand, is based on the belief that objects that were once in contact continue to influence each other even after they are separated. This principle is often employed in the use of personal items, such as hair or clothing, in rituals to affect the person to whom those items belong. These principles are not unique to thaumaturgy but are integral to many forms of magic across cultures. However, in the context of thaumaturgy, they are particularly important because they provide a theoretical framework for understanding how magical actions can produce tangible results in the material world. This focus on practical outcomes distinguishes thaumaturgy from other forms of magic that may be more concerned with spiritual or symbolic meanings. Tools and rituals Thaumaturgical practices often involve the use of specific tools and rituals designed to channel and direct magical energy. Common tools include wands, staffs, talismans, and ritual knives, each of which serves a particular purpose in the practice of magic. For instance, a wand might be used to direct energy during a ritual, while a talisman could serve as a focal point for the thaumaturge's intent. The creation and consecration of these tools are themselves ritualized processes, often requiring specific materials and astrological timing to ensure their effectiveness. Rituals in thaumaturgy are typically elaborate and may involve the recitation of incantations, the drawing of protective circles, and the invocation of spirits or deities. These rituals are designed to create a controlled environment in which the thaumaturge can manipulate natural forces according to their will. The complexity of these rituals varies depending on the desired outcome, with more significant or ambitious goals requiring more intricate and time-consuming procedures. Energy manipulation At the heart of thaumaturgy is the metaphor of energy manipulation. Thaumaturges believe that the world is filled with various forms of energy that can be harnessed and directed through magical practices. This energy is often conceptualized as a natural force that permeates the universe, and through the use of specific techniques, thaumaturges believe that they can influence this energy to bring about desired changes in the physical world. Energy manipulation in thaumaturgy involves both drawing energy from the surrounding environment and directing it toward a specific goal. This process often requires a deep understanding of the natural world, as well as the ability to focus and control one's own mental and spiritual energies. In many traditions, this energy is also linked to the practitioner's life force, meaning that the act of performing thaumaturgy can be physically and spiritually taxing. As a result, practitioners often undergo rigorous training and preparation to build their capacity to manipulate energy effectively and safely. In esoteric traditions Hermetic Qabalah In Hermetic Qabalah, thaumaturgy occupies a significant role as it involves the practical application of mystical principles to influence the physical world. 
This tradition is deeply rooted in the concept of correspondences, where different elements of the cosmos are seen as interconnected. In the Hermetic tradition, a thaumaturge seeks to manipulate these correspondences to bring about desired changes. The sephiroth on the Tree of Life serve as a map for these interactions, with specific rituals and symbols corresponding to different sephiroth and their associated powers. For example, a ritual focusing on Yesod (the sephirah of the Moon) might involve elements such as silver, the color white, and the invocation of lunar deities to influence matters of intuition, dreams, or the subconscious mind. The manipulation of these correspondences through ritual is not just symbolic but is believed to produce real effects in the material world. Practitioners use complex rituals that might include the use of sacred geometry, invocations, and the creation of talismans. These practices are believed to align the practitioner with the forces they wish to control, creating a sympathetic connection that enables them to direct these forces effectively. Aleister Crowley's Magick (Book 4) provides an extensive discussion on the use of ritual tools such as the wand, cup, and sword, each of which corresponds to different elements and powers within the Qabalistic system, emphasizing the practical aspect of these tools in thaumaturgic practices. Alchemy and thaumaturgy Alchemy and thaumaturgy are often intertwined, particularly in the context of spiritual transformation and the pursuit of enlightenment. Alchemy, with its focus on the transmutation of base metals into gold and the quest for the philosopher's stone, can be seen as a form of thaumaturgy where the practitioner seeks to transform not just physical substances but also the self. This process, known as the Great Work, involves the purification and refinement of both matter and spirit. Thaumaturgy comes into play as the practical aspect of alchemy, where rituals, symbols, and substances are used to facilitate these transformations. The alchemical process is heavily laden with symbolic meanings, with each stage representing a different phase of transformation. The stages of nigredo (blackening), albedo (whitening), citrinitas (yellowing), and rubedo (reddening) correspond not only to physical changes in the material being worked on but also to stages of spiritual purification and enlightenment. Thaumaturgy, in this context, is the application of these principles to achieve tangible results, whether in the form of creating alchemical elixirs, talismans, or achieving spiritual goals. Crowley also elaborates on these alchemical principles in Magick (Book 4), particularly in his discussions on the symbolic and practical uses of alchemical symbols and processes within magical rituals. Other esoteric systems Thaumaturgy also plays a role in various other esoteric systems, where it is often viewed as a means of bridging the gap between the mundane and the divine. In Theosophy, for example, thaumaturgy is seen as part of the esoteric knowledge that allows practitioners to manipulate spiritual and material forces. Theosophical teachings emphasize the unity of all life and the interconnection of the cosmos, with thaumaturgy being a practical tool for engaging with these truths. Rituals and meditative practices are used to align the practitioner's will with higher spiritual forces, enabling them to effect change in the physical world. 
In Rosicrucianism, thaumaturgy is similarly regarded as a method of spiritual practice that leads to the mastery of natural and spiritual laws. Rosicrucians believe that through the study of nature and the application of esoteric principles, one can achieve a deep understanding of the cosmos and develop the ability to influence it. This includes the use of rituals, symbols, and sacred texts to bring about spiritual growth and material success. In the introduction of his translation of the "Spiritual Powers (神通 Jinzū)" chapter of Dōgen's Shōbōgenzō, Carl Bielefeldt refers to the powers developed by adepts of Esoteric Buddhism as belonging to the "thaumaturgical tradition". These powers, known as siddhi or abhijñā, were ascribed to the Buddha and subsequent disciples. Legendary monks like Bodhidharma, Upagupta, Padmasambhava, and others were depicted in popular legends and hagiographical accounts as wielding various supernatural powers. Misconceptions and modern interpretations Distinction from theurgy A common misconception about thaumaturgy is its conflation with theurgy. While both involve the practice of magic, they serve distinct purposes and operate on different principles. Theurgy is primarily concerned with invoking divine or spiritual beings to achieve union with the divine, often for purposes of spiritual ascent or enlightenment. Thaumaturgy, on the other hand, focuses on the manipulation of natural forces to produce tangible effects in the physical world. This distinction is crucial in understanding the differing objectives of these practices: theurgy is inherently religious and mystical, while thaumaturgy is more pragmatic and results-oriented. Aleister Crowley, in his Magick (Book 4), emphasizes the importance of understanding these differences, noting that while theurgic practices seek to align the practitioner with divine will, thaumaturgy allows the practitioner to exert their will over the material world through the application of esoteric knowledge and ritual. Modern misunderstandings In modern times, thaumaturgy is often misunderstood, particularly in popular culture where it is sometimes depicted as synonymous with fantasy magic or "miracle-working" in a religious sense. These portrayals can dilute the rich historical and esoteric significance of thaumaturgy, reducing it to a mere trope of magical fiction. For instance, the term is frequently used in fantasy literature and role-playing games to describe a generic form of magic, without consideration for its historical roots or the complex practices associated with it in esoteric traditions. This modern misunderstanding is partly due to the broadening of the term "thaumaturgy" in contemporary discourse, where it is often detached from its original context and used more loosely. As a result, the nuanced distinctions between different types of magic, such as thaumaturgy and theurgy, are often overlooked, leading to a homogenized view of magical practices. In popular culture The term thaumaturgy is used in various games as a synonym for magic, a particular sub-school (often mechanical) of magic, or as the "science" of magic. Thaumaturgy is defined as the "science" or "physics" of magic by Isaac Bonewits in his 1971 book Real Magic, a definition he also used in creating an RPG reference called Authentic Thaumaturgy (1978, 1998, 2005). 
Post-glacial rebound
Post-glacial rebound (also called isostatic rebound or crustal rebound) is the rise of land masses after the removal of the huge weight of ice sheets during the last glacial period, which had caused isostatic depression. Post-glacial rebound and isostatic depression are phases of glacial isostasy (glacial isostatic adjustment, glacioisostasy), the deformation of the Earth's crust in response to changes in ice mass distribution. The direct raising effects of post-glacial rebound are readily apparent in parts of Northern Eurasia, Northern America, Patagonia, and Antarctica. However, through the processes of ocean siphoning and continental levering, the effects of post-glacial rebound on sea level are felt globally far from the locations of current and former ice sheets. Overview During the last glacial period, much of northern Europe, Asia, North America, Greenland and Antarctica was covered by ice sheets, which reached up to three kilometres thick during the glacial maximum about 20,000 years ago. The enormous weight of this ice caused the surface of the Earth's crust to deform and warp downward, forcing the viscoelastic mantle material to flow away from the loaded region. At the end of each glacial period when the glaciers retreated, the removal of this weight led to slow (and still ongoing) uplift or rebound of the land and the return flow of mantle material back under the deglaciated area. Due to the extreme viscosity of the mantle, it will take many thousands of years for the land to reach an equilibrium level. The uplift has taken place in two distinct stages. The initial uplift following deglaciation was almost immediate due to the elastic response of the crust as the ice load was removed. After this elastic phase, uplift proceeded by slow viscous flow at an exponentially decreasing rate. Today, typical uplift rates are of the order of 1 cm/year or less. In northern Europe, this is clearly shown by the GPS data obtained by the BIFROST GPS network; for example in Finland, the total area of the country is growing by about seven square kilometers per year. Studies suggest that rebound will continue for at least another 10,000 years. The total uplift from the end of deglaciation depends on the local ice load and could be several hundred metres near the centre of rebound. Recently, the term "post-glacial rebound" is gradually being replaced by the term "glacial isostatic adjustment". This is in recognition that the response of the Earth to glacial loading and unloading is not limited to the upward rebound movement, but also involves downward land movement, horizontal crustal motion, changes in global sea levels and the Earth's gravity field, induced earthquakes, and changes in the Earth's rotation. Another alternate term is "glacial isostasy", because the uplift near the centre of rebound is due to the tendency towards the restoration of isostatic equilibrium (as in the case of isostasy of mountains). Unfortunately, that term gives the wrong impression that isostatic equilibrium is somehow reached, so by appending "adjustment" at the end, the motion of restoration is emphasized. Effects Post-glacial rebound produces measurable effects on vertical crustal motion, global sea levels, horizontal crustal motion, gravity field, Earth's rotation, crustal stress, and earthquakes. Studies of glacial rebound give us information about the flow law of mantle rocks, which is important to the study of mantle convection, plate tectonics and the thermal evolution of the Earth. 
It also gives insight into past ice sheet history, which is important to glaciology, paleoclimate, and changes in global sea level. Understanding postglacial rebound is also important to our ability to monitor recent global change.

Vertical crustal motion
Erratic boulders, U-shaped valleys, drumlins, eskers, kettle lakes, and bedrock striations are among the common signatures of the Ice Age. In addition, post-glacial rebound has caused numerous significant changes to coastlines and landscapes over the last several thousand years, and the effects continue to be significant. In Sweden, Lake Mälaren was formerly an arm of the Baltic Sea, but uplift eventually cut it off and led to its becoming a freshwater lake in about the 12th century, at the time when Stockholm was founded at its outlet. Marine seashells found in Lake Ontario sediments imply a similar event in prehistoric times. Other pronounced effects can be seen on the island of Öland, Sweden, which has little topographic relief due to the presence of the very level Stora Alvaret. The rising land has caused the Iron Age settlement area to recede from the Baltic Sea, leaving the present day villages on the west coast set back unexpectedly far from the shore. These effects are quite dramatic at the village of Alby, for example, where the Iron Age inhabitants were known to subsist on substantial coastal fishing. As a result of post-glacial rebound, the Gulf of Bothnia is predicted to eventually close up at Kvarken in more than 2,000 years. The Kvarken is a UNESCO World Natural Heritage Site, selected as a "type area" illustrating the effects of post-glacial rebound and the Holocene glacial retreat. In several other Nordic ports, like Tornio and Pori (formerly at Ulvila), the harbour has had to be relocated several times. Place names in the coastal regions also illustrate the rising land: there are inland places named 'island', 'skerry', 'rock', 'point' and 'sound'. For example, Oulunsalo "island of Oulujoki" is a peninsula, with inland names such as Koivukari "Birch Rock", Santaniemi "Sandy Cape", and Salmioja "the brook of the Sound". In Great Britain, glaciation affected Scotland but not southern England, and the post-glacial rebound of northern Great Britain (up to 10 cm per century) is causing a corresponding downward movement of the southern half of the island (up to 5 cm per century). This will eventually lead to an increased risk of floods in southern England and south-western Ireland. Since the glacial isostatic adjustment process causes the land to move relative to the sea, ancient shorelines are found to lie above present day sea level in areas that were once glaciated. On the other hand, places in the peripheral bulge area, which was uplifted during glaciation, now begin to subside. Therefore, ancient beaches are found below present day sea level in the bulge area. The "relative sea level data", which consists of height and age measurements of the ancient beaches around the world, tells us that glacial isostatic adjustment proceeded at a higher rate near the end of deglaciation than today. The present-day uplift motion in northern Europe is also monitored by a GPS network called BIFROST. Results of GPS data show a peak rate of about 11 mm/year in the north part of the Gulf of Bothnia, but this uplift rate decreases with distance from the centre of rebound and becomes negative outside the former ice margin. In the near field outside the former ice margin, the land sinks relative to the sea.
This is the case along the east coast of the United States, where ancient beaches are found submerged below present day sea level and Florida is expected to be submerged in the future. GPS data in North America also confirms that land uplift becomes subsidence outside the former ice margin. Global sea levels To form the ice sheets of the last Ice Age, water from the oceans evaporated, condensed as snow and was deposited as ice in high latitudes. Thus global sea level fell during glaciation. The ice sheets at the last glacial maximum were so massive that global sea level fell by about 120 metres. Thus continental shelves were exposed and many islands became connected with the continents through dry land. This was the case between the British Isles and Europe (Doggerland), or between Taiwan, the Indonesian islands and Asia (Sundaland). A land bridge also existed between Siberia and Alaska that allowed the migration of people and animals during the last glacial maximum. The fall in sea level also affects the circulation of ocean currents and thus has important impact on climate during the glacial maximum. During deglaciation, the melted ice water returns to the oceans, thus sea level in the ocean increases again. However, geological records of sea level changes show that the redistribution of the melted ice water is not the same everywhere in the oceans. In other words, depending upon the location, the rise in sea level at a certain site may be more than that at another site. This is due to the gravitational attraction between the mass of the melted water and the other masses, such as remaining ice sheets, glaciers, water masses and mantle rocks and the changes in centrifugal potential due to Earth's variable rotation. Horizontal crustal motion Accompanying vertical motion is the horizontal motion of the crust. The BIFROST GPS network shows that the motion diverges from the centre of rebound. However, the largest horizontal velocity is found near the former ice margin. The situation in North America is less certain; this is due to the sparse distribution of GPS stations in northern Canada, which is rather inaccessible. Tilt The combination of horizontal and vertical motion changes the tilt of the surface. That is, locations farther north rise faster, an effect that becomes apparent in lakes. The bottoms of the lakes gradually tilt away from the direction of the former ice maximum, such that lake shores on the side of the maximum (typically north) recede and the opposite (southern) shores sink. This causes the formation of new rapids and rivers. For example, Lake Pielinen in Finland, which is large (90 x 30 km) and oriented perpendicularly to the former ice margin, originally drained through an outlet in the middle of the lake near Nunnanlahti to Lake Höytiäinen. The change of tilt caused Pielinen to burst through the Uimaharju esker at the southwestern end of the lake, creating a new river (Pielisjoki) that runs to the sea via Lake Pyhäselkä to Lake Saimaa. The effects are similar to that concerning seashores, but occur above sea level. Tilting of land will also affect the flow of water in lakes and rivers in the future, and thus is important for water resource management planning. In Sweden Lake Sommen's outlet in the northwest has a rebound of 2.36 mm/a while in the eastern Svanaviken it is 2.05 mm/a. This means the lake is being slowly tilted and the southeastern shores drowned. 
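The tilting just described can be quantified directly from the two Lake Sommen uplift rates quoted above. The short sketch below is an added illustration rather than part of the source: the uplift rates are the values given in the text, while the basin length and beach slope are assumed, order-of-magnitude values only.

```python
# A minimal illustrative sketch (added here, not from the source): converting the
# differential uplift across Lake Sommen into a basin tilt rate and an apparent
# water-level rise on the slower-rising southeastern shore.
# The basin length and beach slope are assumed, illustrative values.

uplift_nw_mm_per_yr = 2.36   # rebound at the northwestern outlet (value from the text)
uplift_se_mm_per_yr = 2.05   # rebound at Svanaviken in the east (value from the text)
basin_length_km = 40.0       # assumed separation of the two sites
beach_slope = 0.01           # assumed shore gradient: 1 m of rise per 100 m inland

# Differential uplift tilts the basin toward the slower-rising shore.
differential_m_per_yr = (uplift_nw_mm_per_yr - uplift_se_mm_per_yr) / 1000.0
tilt_rate_rad_per_yr = differential_m_per_yr / (basin_length_km * 1000.0)

# Apparent water-level rise on the southeastern shore after a century, and the
# horizontal shoreline migration it implies on a planar beach of the assumed slope.
years = 100
relative_rise_m = differential_m_per_yr * years
shoreline_migration_m = relative_rise_m / beach_slope

print(f"tilt rate: {tilt_rate_rad_per_yr:.2e} rad/yr")
print(f"relative water-level rise over {years} yr: {relative_rise_m:.3f} m")
print(f"implied shoreline migration: {shoreline_migration_m:.1f} m")
```

With these assumed geometry values, the 0.31 mm/year differential uplift amounts to a few centimetres of relative water-level change per century, enough to shift a gently sloping shoreline by several metres.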
Gravity field
Ice, water, and mantle rocks have mass, and as they move around, they exert a gravitational pull on other masses towards them. Thus, the gravity field, which is sensitive to all mass on the surface and within the Earth, is affected by the redistribution of ice/melted water on the surface of the Earth and the flow of mantle rocks within. Today, more than 6000 years after the last deglaciation terminated, the flow of mantle material back to the glaciated area causes the overall shape of the Earth to become less oblate. This change in the topography of Earth's surface affects the long-wavelength components of the gravity field. The changing gravity field can be detected by repeated land measurements with absolute gravimeters and recently by the GRACE satellite mission. The change in long-wavelength components of Earth's gravity field also perturbs the orbital motion of satellites and has been detected in the motion of the LAGEOS satellites.

Vertical datum
The vertical datum is a reference surface for altitude measurement and plays a vital role in many human activities, including land surveying and the construction of buildings and bridges. Since postglacial rebound continuously deforms the crustal surface and the gravitational field, the vertical datum needs to be redefined repeatedly through time.

State of stress, intraplate earthquakes and volcanism
According to the theory of plate tectonics, plate-plate interaction results in earthquakes near plate boundaries. However, large earthquakes are found in intraplate environments like eastern Canada (up to M7) and northern Europe (up to M5), which are far away from present-day plate boundaries. An important intraplate earthquake was the magnitude 8 New Madrid earthquake that occurred in the mid-continental US in 1811. Glacial loads provided more than 30 MPa of vertical stress in northern Canada and more than 20 MPa in northern Europe during the glacial maximum. This vertical stress is supported by the mantle and the flexure of the lithosphere. Since the mantle and the lithosphere continuously respond to the changing ice and water loads, the state of stress at any location continuously changes in time. The changes in the orientation of the state of stress are recorded in the postglacial faults in southeastern Canada. When the postglacial faults formed at the end of deglaciation 9000 years ago, the horizontal principal stress orientation was almost perpendicular to the former ice margin, but today the orientation is northeast–southwest, along the direction of seafloor spreading at the Mid-Atlantic Ridge. This shows that the stress due to postglacial rebound played an important role at deglacial time, but has gradually relaxed so that tectonic stress has become more dominant today. According to the Mohr–Coulomb theory of rock failure, large glacial loads generally suppress earthquakes, but rapid deglaciation promotes earthquakes. According to Wu & Hasegawa, the rebound stress that is available to trigger earthquakes today is of the order of 1 MPa. This stress level is not large enough to rupture intact rocks but is large enough to reactivate pre-existing faults that are close to failure. Thus, both postglacial rebound and past tectonics play important roles in today's intraplate earthquakes in eastern Canada and the southeastern US.
Generally postglacial rebound stress could have triggered the intraplate earthquakes in eastern Canada and may have played some role in triggering earthquakes in the eastern US including the New Madrid earthquakes of 1811. The situation in northern Europe today is complicated by the current tectonic activities nearby and by coastal loading and weakening. Increasing pressure due to the weight of the ice during glaciation may have suppressed melt generation and volcanic activities below Iceland and Greenland. On the other hand, decreasing pressure due to deglaciation can increase the melt production and volcanic activities by 20-30 times. Recent global warming Recent global warming has caused mountain glaciers and the ice sheets in Greenland and Antarctica to melt and global sea level to rise. Therefore, monitoring sea level rise and the mass balance of ice sheets and glaciers allows people to understand more about global warming. Recent rise in sea levels has been monitored by tide gauges and satellite altimetry (e.g. TOPEX/Poseidon). As well as the addition of melted ice water from glaciers and ice sheets, recent sea level changes are affected by the thermal expansion of sea water due to global warming, sea level change due to deglaciation of the last glacial maximum (postglacial sea level change), deformation of the land and ocean floor and other factors. Thus, to understand global warming from sea level change, one must be able to separate all these factors, especially postglacial rebound, since it is one of the leading factors. Mass changes of ice sheets can be monitored by measuring changes in the ice surface height, the deformation of the ground below and the changes in the gravity field over the ice sheet. Thus ICESat, GPS and GRACE satellite mission are useful for such purpose. However, glacial isostatic adjustment of the ice sheets affect ground deformation and the gravity field today. Thus understanding glacial isostatic adjustment is important in monitoring recent global warming. One of the possible impacts of global warming-triggered rebound may be more volcanic activity in previously ice-capped areas such as Iceland and Greenland. It may also trigger intraplate earthquakes near the ice margins of Greenland and Antarctica. Unusually rapid (up to 4.1 cm/year) present glacial isostatic rebound due to recent ice mass losses in the Amundsen Sea embayment region of Antarctica coupled with low regional mantle viscosity is predicted to provide a modest stabilizing influence on marine ice sheet instability in West Antarctica, but likely not to a sufficient degree to arrest it. Applications The speed and amount of postglacial rebound is determined by two factors: the viscosity or rheology (i.e., the flow) of the mantle, and the ice loading and unloading histories on the surface of Earth. The viscosity of the mantle is important in understanding mantle convection, plate tectonics, the dynamical processes in Earth, and the thermal state and thermal evolution of Earth. However viscosity is difficult to observe because creep experiments of mantle rocks at natural strain rates would take thousands of years to observe and the ambient temperature and pressure conditions are not easy to attain for a long enough time. Thus, the observations of postglacial rebound provide a natural experiment to measure mantle rheology. 
Modelling of glacial isostatic adjustment addresses the question of how viscosity changes in the radial and lateral directions and whether the flow law is linear, nonlinear, or composite rheology. Mantle viscosity may additionally be estimated using seismic tomography, where seismic velocity is used as a proxy observable. Ice thickness histories are useful in the study of paleoclimatology, glaciology and paleo-oceanography. Ice thickness histories are traditionally deduced from the three types of information: First, the sea level data at stable sites far away from the centers of deglaciation give an estimate of how much water entered the oceans or equivalently how much ice was locked up at glacial maximum. Secondly, the location and dates of terminal moraines tell us the areal extent and retreat of past ice sheets. Physics of glaciers gives us the theoretical profile of ice sheets at equilibrium, it also says that the thickness and horizontal extent of equilibrium ice sheets are closely related to the basal condition of the ice sheets. Thus the volume of ice locked up is proportional to their instantaneous area. Finally, the heights of ancient beaches in the sea level data and observed land uplift rates (e.g. from GPS or VLBI) can be used to constrain local ice thickness. A popular ice model deduced this way is the ICE5G model. Because the response of the Earth to changes in ice height is slow, it cannot record rapid fluctuation or surges of ice sheets, thus the ice sheet profiles deduced this way only gives the "average height" over a thousand years or so. Glacial isostatic adjustment also plays an important role in understanding recent global warming and climate change. Discovery Before the eighteenth century, it was thought, in Sweden, that sea levels were falling. On the initiative of Anders Celsius a number of marks were made in rock on different locations along the Swedish coast. In 1765 it was possible to conclude that it was not a lowering of sea levels but an uneven rise of land. In 1865 Thomas Jamieson came up with a theory that the rise of land was connected with the ice age that had been first discovered in 1837. The theory was accepted after investigations by Gerard De Geer of old shorelines in Scandinavia published in 1890. Legal implications In areas where the rising of land is seen, it is necessary to define the exact limits of property. In Finland, the "new land" is legally the property of the owner of the water area, not any land owners on the shore. Therefore, if the owner of the land wishes to build a pier over the "new land", they need the permission of the owner of the (former) water area. The landowner of the shore may redeem the new land at market price. Usually the owner of the water area is the partition unit of the landowners of the shores, a collective holding corporation. Formulation: sea-level equation The sea-level equation (SLE) is a linear integral equation that describes the sea-level variations associated with the PGR. The basic idea of the SLE dates back to 1888, when Woodward published his pioneering work on the form and position of mean sea level, and only later has been refined by Platzman and Farrell in the context of the study of the ocean tides. In the words of Wu and Peltier, the solution of the SLE yields the space– and time–dependent change of ocean bathymetry which is required to keep the gravitational potential of the sea surface constant for a specific deglaciation chronology and viscoelastic earth model. 
The SLE theory was then developed by other authors such as Mitrovica & Peltier, Mitrovica et al., and Spada & Stocchi. In its simplest form, the SLE reads

$$ S = N - U, $$

where $S$ is the sea-level change, $N$ is the sea surface variation as seen from Earth's center of mass, and $U$ is vertical displacement. In a more explicit form the SLE can be written as follows:

$$ S(\theta,\lambda,t) = \frac{\rho_i}{\gamma}\, G_s \otimes_i I \;+\; \frac{\rho_o}{\gamma}\, G_s \otimes_o S \;+\; S^E \;-\; \frac{\rho_i}{\gamma}\, \overline{G_s \otimes_i I} \;-\; \frac{\rho_o}{\gamma}\, \overline{G_s \otimes_o S}, $$

where $\theta$ is colatitude and $\lambda$ is longitude, $t$ is time, $\rho_i$ and $\rho_o$ are the densities of ice and water, respectively, $\gamma$ is the reference surface gravity, $G_s$ is the sea-level Green's function (dependent upon the $h$ and $k$ viscoelastic load-deformation coefficients, LDCs), $I$ is the ice thickness variation, $S^E$ represents the eustatic term (i.e. the ocean-averaged value of $S$), $\otimes_i$ and $\otimes_o$ denote spatio-temporal convolutions over the ice- and ocean-covered regions, and the overbar indicates an average over the surface of the oceans that ensures mass conservation.

See also
Isostatic depression – the opposite of isostatic rebound
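As a rough numerical companion to the eustatic term $S^E$ in the sea-level equation above, the sketch below is an added illustration rather than code from the SLE literature: it simply spreads a change in grounded ice volume uniformly over the modern oceans, with round reference values for the densities and ocean area, and deliberately ignores the spatially variable Green's-function terms.

```python
# A minimal sketch (an added illustration, not code from the SLE literature): the
# eustatic term S^E is the ocean-averaged sea-level change obtained by spreading
# melted ice uniformly over the oceans, ignoring gravitational and deformational
# effects. Densities and ocean area are round reference values.

RHO_ICE = 917.0        # kg/m^3, density of ice (rho_i)
RHO_WATER = 1000.0     # kg/m^3, density of meltwater (rho_o)
OCEAN_AREA = 3.61e14   # m^2, approximate present-day ocean area (assumed constant)

def eustatic_term(delta_ice_volume_m3: float) -> float:
    """Ocean-averaged sea-level change S^E in metres for a change in grounded ice
    volume; a negative ice-volume change (melting) yields a sea-level rise."""
    melt_mass_kg = -delta_ice_volume_m3 * RHO_ICE     # mass transferred to the ocean
    water_volume_m3 = melt_mass_kg / RHO_WATER        # volume it occupies as water
    return water_volume_m3 / OCEAN_AREA               # thickness of a uniform layer

# Example: the ice volume consistent with the roughly 120 m eustatic lowstand
# quoted earlier for the last glacial maximum.
ice_volume_lgm_m3 = 120.0 * OCEAN_AREA * RHO_WATER / RHO_ICE
print(f"ice volume for a 120 m eustatic fall: {ice_volume_lgm_m3:.2e} m^3")
print(f"S^E if all of it melts: {eustatic_term(-ice_volume_lgm_m3):.1f} m")
```

Under these simplifications, an ice volume of about 4.7 × 10^16 m^3 accounts for the ~120 m glacial-maximum lowstand mentioned earlier; the full SLE redistributes that same meltwater non-uniformly through the convolution terms.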
Reification (Marxism)
In Marxist philosophy, reification (Verdinglichung, "making into a thing") is the process by which human social relations are perceived as inherent attributes of the people involved in them, or attributes of some product of the relation, such as a traded commodity. As a practice of economics, reification transforms objects into subjects and subjects into objects, with the result that subjects (people) are rendered passive (of determined identity), whilst objects (commodities) are rendered as the active factor that determines the nature of a social relation. Analogously, the term hypostatization describes an effect of reification that results from presuming that any object that can be named, or abstractly conceived, actually exists; this is a fallacy of reification in both ontological and epistemological interpretation. Reification is conceptually related to, but different from, Marx's theory of alienation and theory of commodity fetishism; alienation is the general condition of human estrangement; reification is a specific form of alienation; and commodity fetishism is a specific form of reification.

Georg Lukács
The concept of reification arose through the work of Georg Lukács, in the essay "Reification and the Consciousness of the Proletariat" in his 1923 book History and Class Consciousness, which defines the term. Lukács treats reification as a problem of capitalist society that is related to the prevalence of the commodity form, through a close reading of "The Fetishism of the Commodity and its Secret" in the first volume of Capital. Those who have written about this concept include Max Stirner, Guy Debord, Raya Dunayevskaya, Raymond Williams, Timothy Bewes, and Slavoj Žižek. Marxist humanist Gajo Petrović (1965), drawing from Lukács, defines reification as the transformation of human properties, relations and actions into properties, relations and actions of things produced by humans, which then become independent of their producers and govern their lives. Andrew Feenberg (1981) reinterprets Lukács's central category of "consciousness" as similar to anthropological notions of culture as a set of practices. The reification of consciousness in particular, therefore, is more than just an act of misrecognition; it affects everyday social practice at a fundamental level beyond the individual subject.

Frankfurt School
Lukács's account was influential for the philosophers of the Frankfurt School, for example in Horkheimer's and Adorno's Dialectic of Enlightenment, and in the works of Herbert Marcuse and Axel Honneth. Frankfurt School philosopher Axel Honneth (2008) reformulates this "Western Marxist" concept in terms of intersubjective relations of recognition and power. Instead of being an effect of the structural character of social systems such as capitalism, as Karl Marx and György Lukács argued, Honneth contends that all forms of reification are due to pathologies of intersubjectively based struggles for recognition.

Social construction
Reification occurs when specifically human creations are misconceived as "facts of nature, results of cosmic laws, or manifestations of divine will." However, some scholarship on Lukács's (1923) use of the term "reification" in History and Class Consciousness has challenged this interpretation of the concept, according to which reification implies that a pre-existing subject creates an objective social world from which it is then alienated.

Phenomenology
Other scholarship has suggested that Lukács's use of the term may have been strongly influenced by Edmund Husserl's phenomenology, which would help explain his preoccupation with the reification of consciousness in particular.
On this reading, reification entails a stance that separates the subject from the objective world, creating a mistaken relation between subject and object that is reduced to disengaged knowing. Applied to the social world, this leaves individual subjects feeling that society is something they can only know as an alien power, rather than interact with. In this respect, Lukács's use of the term could be seen as prefiguring some of the themes Martin Heidegger (1927) touches on in Being and Time, supporting the suggestion of Lucien Goldmann (2009) that Lukács and Heidegger were much closer in their philosophical concerns than typically thought.

Louis Althusser

French philosopher Louis Althusser criticized what he called the "ideology of reification" that sees "'things' everywhere in human relations." Althusser's critique derives from his view that Marx underwent a significant theoretical and methodological change, an "epistemological break", between his early and his mature work. Though the concept of reification is used in Marx's Das Kapital, Althusser finds in it an important influence from the similar concept of alienation developed in the earlier works The German Ideology and the Economic and Philosophical Manuscripts of 1844.

See also
The Secret of Hegel
Character mask
Objectification
Caste
Reification (fallacy)

Further reading
Arato, Andrew. 1972. "Lukács's Theory of Reification." Telos.
Bewes, Timothy. 2002. Reification, or The Anxiety of Late Capitalism. Verso.
Burris, Val. 1988. "Reification: A Marxist Perspective." California Sociologist 10(1): 22–43.
Dabrowski, Tomash. 2014. "Reification." Blackwell Encyclopedia of Political Thought. Blackwell.
Dahms, Harry. 1998. "Beyond the Carousel of Reification: Critical Social Theory after Lukács, Adorno, and Habermas." Current Perspectives in Social Theory 18(1): 3–62.
Duarte, German A. 2011. Reificación Mediática. Sic Editorial.
Dunayevskaya, Raya. "Reification of People and the Fetishism of Commodities." Pp. 167–91 in The Raya Dunayevskaya Collection.
Floyd, Kevin. 2009. "Introduction: On Capital, Sexuality, and the Situations of Knowledge." In The Reification of Desire: Toward a Queer Marxism. Minneapolis: University of Minnesota Press.
Gabel, Joseph. 1975. False Consciousness: An Essay on Reification. New York: Harper & Row.
Goldmann, Lucien. 1959. "Réification." In Recherches Dialectiques. Paris: Gallimard.
Honneth, Axel. 2005. "Reification: A Recognition-Theoretical View." The Tanner Lectures on Human Values, delivered March 14–16 at the University of California, Berkeley.
Kangrga, Milan. 1968. Was ist Verdinglichung?
Larsen, Neil. 2011. "Lukács sans Proletariat, or Can History and Class Consciousness Be Rehistoricized?" Pp. 81–100 in Georg Lukács: The Fundamental Dissonance of Existence, edited by T. Bewes and T. Hall. London: Continuum.
Löwith, Karl. 1982 [1932]. Max Weber and Karl Marx.
Lukács, Georg. 1967 [1923]. "Reification and the Consciousness of the Proletariat." In History and Class Consciousness. Merlin Press.
Rubin, I. I. 1972 [1928]. Essays on Marx's Theory of Value.
Schaff, Adam. 1980. Alienation as a Social Phenomenon.
Tadić, Ljubomir. 1969. "Bureaucracy—Reified Organization." Praxis, edited by M. Marković and G. Petrović.
Vandenberghe, Frederic. 2009. A Philosophical History of German Sociology. London: Routledge.
Westerman, Richard. 2018. Lukács' Phenomenology of Capitalism: Reification Revalued. New York: Palgrave Macmillan.
Pluralism (political theory)
Classical pluralism is the view that politics and decision-making are located mostly in the framework of government, but that many non-governmental groups use their resources to exert influence. The central question for classical pluralism is how power and influence are distributed in a political process. Groups of individuals try to maximize their interests. Lines of conflict are multiple and shifting, as power is a continuous bargaining process between competing groups. There may be inequalities, but they tend to be distributed and evened out by the various forms and distributions of resources throughout a population. Any change under this view will be slow and incremental, as groups have different interests and may act as "veto groups" to destroy legislation. The existence of diverse and competing interests is the basis for a democratic equilibrium and is crucial for individuals to attain their goals. A polyarchy—a situation of open competition for electoral support within a significant part of the adult population—ensures competition of group interests and relative equality. Pluralists stress civil rights, such as freedom of expression and organization, and an electoral system with at least two parties. On the other hand, since the participants in this process constitute only a tiny fraction of the populace, the public acts mainly as bystanders. This is not necessarily undesirable for two reasons: (1) it may be representative of a population content with the political happenings, or (2) political issues require continuous and expert attention, which the average citizen may not have. Important theorists of pluralism include Robert A. Dahl (who wrote the seminal pluralist work, Who Governs?), David Truman, and Seymour Martin Lipset. The Anti-Pluralism Index in the V-Party Dataset is modeled as a lack of commitment to the democratic process, disrespect for fundamental minority rights, demonization of opponents, and acceptance of political violence.

Pluralist conception of power

The list of possible sources of power is virtually endless: legal authority, money, prestige, skill, knowledge, charisma, legitimacy, free time, and experience. Pluralists also stress the difference between potential and actual power. Actual power means the ability to compel someone to do something and is the view of power as causation. Dahl describes power as a "realistic relationship, such as A's capacity for acting in such a manner as to control B's responses". Potential power refers to the possibility of turning resources into actual power. Cash, one of many resources, is only a stack of bills until it is put to work. Malcolm X, for example, was certainly not a rich person growing up, but received money from many groups after his prison term and used other resources such as his forceful personality and organizational skills. He had a greater impact on American politics than most wealthy people. A particular resource like money cannot automatically be equated with power, because the resource can be used skillfully or clumsily, fully or partially, or not at all. Pluralists believe that social heterogeneity prevents any single group from gaining dominance. In their view, politics is essentially a matter of aggregating preferences. This means that coalitions are inherently unstable (Polsby, 1980), hence competition is easily preserved. In Dahl's view, because "political heterogeneity follows socioeconomic heterogeneity", social differentiation increasingly disperses power.
Hamed Kazemzadeh, a Canadian pluralist and human rights activist, argues that organizational membership socializes individuals to democratic norms, increases participation, and moderates the politics of society so that bargaining and negotiation are possible. The pluralist approach to the study of power states that nothing categorical about power can be assumed in any community. The question then is not who runs a community, but whether any group in fact does. To determine this, pluralists study specific outcomes. The reason for this is that they believe human behavior is governed in large part by inertia. That said, actual involvement in overt activity is a more valid marker of leadership than mere reputation. Pluralists also believe that there is no one particular issue or point in time at which any group must assert itself to stay true to its own expressed values, but rather that there are a variety of issues and points at which this is possible. There are also costs involved in taking action at all: not only the risk of losing, but the expenditure of time and effort. While a structuralist may argue that power distributions have a rather permanent nature, this rationale says that power may in fact be tied to issues, which vary widely in duration. Also, instead of focusing on actors within a system, the emphasis is on the leadership roles themselves. By studying these, it can be determined to what extent a power structure is present in a society.

Three of the major tenets of the pluralist school are: (1) resources, and hence potential power, are widely scattered throughout society; (2) at least some resources are available to nearly everyone; and (3) at any time the amount of potential power exceeds the amount of actual power. Finally, and perhaps most important, no one is all-powerful unless proven so through empirical observation. An individual or group that is influential in one realm may be weak in another. Large military contractors certainly throw their weight around on defense matters, but how much sway do they have on agricultural or health policies? A measure of power, therefore, is its scope, or the range of areas where it is successfully applied as observed by a researcher. Pluralists believe that, with few exceptions, power holders usually have a relatively limited scope of influence. Pluralism does leave room for an elitist situation, should group A continuously exert power over multiple groups. For a pluralist to accept this notion, it must be empirically observed and not assumed so by definition.

For all these reasons power cannot be taken for granted. One has to observe it empirically in order to know who really governs. The best way to do this, pluralists believe, is to examine a wide range of specific decisions, noting who took which side and who ultimately won and lost. Only by keeping score on a variety of controversies can one begin to identify actual power holders. Pluralism was associated with behavioralism.

A contradiction to pluralist power is often cited from the origin of one's power. Although certain groups may share power, people within those groups set agendas, decide issues, and take on leadership roles through their own qualities. Some theorists argue that these qualities cannot be transferred, thus creating a system where elitism still exists. What this theory fails to take into account is the prospect of overcoming these qualities by garnering support from other groups.
By aggregating power with other organizations, interest groups can overpower these non-transferable qualities. In this sense, political pluralism still applies to these aspects.

Elite pluralism

Elite pluralists agree with classical pluralists that there is a "plurality" of power; however, this plurality is not "pure" when the supposedly democratic equilibrium maintains or increases inequities (social, economic, or political) because elites hold greatly disproportionate societal power in the forms mentioned above, or because of systemic distortions of the political process itself, perpetuated by, for example, regulatory or cultural capture. Thus, with elite pluralism, it has been said that representative democracy is flawed and tends to deteriorate towards particracy or oligarchy, for example by the iron law of oligarchy (see Robert Michels, Political Parties, 1911).

Neo-pluralism

While pluralism as a political theory of the state and policy formation gained its greatest traction during the 1950s and 1960s in America, some scholars argued that the theory was too simplistic (see Connolly (1969), The Challenge to Pluralist Theory), leading to the formulation of neo-pluralism. Views differed about the division of power in democratic society. Although neo-pluralism sees multiple pressure groups competing over political influence, the political agenda is biased towards corporate power. Neo-pluralism no longer sees the state as an umpire mediating and adjudicating between the demands of different interest groups, but as a relatively autonomous actor (with different departments) that forges and looks after its own (sectional) interests. Constitutional rules, which in pluralism are embedded in a supportive political culture, should be seen in the context of a diverse, and not necessarily supportive, political culture and a system of radically uneven economic resources. This diverse culture exists because of an uneven distribution of socioeconomic power. This creates possibilities for some groups while limiting others in their political options. In the international realm, order is distorted by powerful multinational interests and dominant states, while in classical pluralism emphasis is put on stability through a framework of pluralist rules and a free market society.

Charles Lindblom

Charles E. Lindblom, who is seen as positing a strong neo-pluralist argument, still attributed primacy to the competition between interest groups in the policy process, but recognized the disproportionate influence business interests have in that process.

Corporatism

Classical pluralism was criticized because it did not seem to apply to Westminster-style democracies or the European context.
This led to the development of corporatist theories. Corporatism is the idea that a few select interest groups are actually (often formally) involved in the policy formulation process, to the exclusion of the myriad other 'interest groups'. For example, trade unions and major sectoral business associations are often consulted about (if not the drivers of) specific policies. These policies often concern tripartite relations between workers, employers and the state, with a coordinating role for the latter. The state constructs a framework in which it can address political and economic issues with these organized and centralized groups. In this view, parliament and party politics lose influence in the policy-forming process.

In foreign policy

Pluralism also shapes the process and decision-making involved in formulating policy. In international security, different parties may have a chance to take part in decision-making during the policymaking process; the more power a party has, the more opportunities it gains and the more likely it is to get what it wants. According to M. Frances Klein (1991), "decision making appears to be a maze of influence and power."

Democratization

The V-Party Dataset shows higher autocratization for high anti-pluralism.

See also
Agonism
Decision-making
Distributism
Foreign policy
Elite theory
International relations
Legitimation crisis
Marxism
New institutionalism
Salad bowl (cultural idea)

References
Ankerl, Guy. 2000. Coexisting Contemporary Civilizations. Geneva: INU Press.
Socialstudieshelp.com, "Pluralism." Accessed 13 February 2007.
Schattschneider, Elmer Eric. 1960. The Semi-Sovereign People. New York: Holt, Rinehart and Winston.
Barzilai, Gad. 2003. Communities and Law: Politics and Cultures of Legal Identities. Ann Arbor: University of Michigan Press.
Polsby, Nelson W. 1960. "How to Study Community Power: The Pluralist Alternative." The Journal of Politics 22(3): 474–484.
Connolly, William E. 1995. The Ethos of Pluralization. University of Minnesota Press.
Alden, C. 2011. Foreign Policy Analysis. London: University of London.
Kazemzadeh, H. 2020. "Democratic Platform in Social Pluralism." Internal Journal of ACPCS, Winter No. 10, pp. 237–253.
Klein, M. Frances. 1991. The Politics of Curriculum Decision-Making: Issues in Centralizing the Curriculum. New York: SUNY Press.
History of medicine
The history of medicine is both a study of medicine throughout history and a multidisciplinary field of study that seeks to explore and understand medical practices, past and present, throughout human societies. The history of medicine is the study and documentation of the evolution of medical treatments, practices, and knowledge over time. Medical historians often draw from other humanities fields of study, including economics, health sciences, sociology, and politics, to better understand the institutions, practices, people, professions, and social systems that have shaped medicine. For periods that predate or lack written sources regarding medicine, information is instead drawn from archaeological sources. This field tracks the evolution of human societies' approach to health, illness, and injury from prehistory to the modern day, the events that shaped these approaches, and their impact on populations.

Early medical traditions include those of Babylon, China, Egypt and India. The invention of the microscope was a consequence of improved understanding during the Renaissance. Prior to the 19th century, humorism (also known as humoralism) was thought to explain the cause of disease, but it was gradually replaced by the germ theory of disease, leading to effective treatments and even cures for many infectious diseases. Military doctors advanced the methods of trauma treatment and surgery. Public health measures were developed especially in the 19th century, as the rapid growth of cities required systematic sanitary measures. Advanced research centers opened in the early 20th century, often connected with major hospitals. The mid-20th century was characterized by new biological treatments, such as antibiotics. These advancements, along with developments in chemistry, genetics, and radiography, led to modern medicine. Medicine was heavily professionalized in the 20th century, and new careers opened to women as nurses (from the 1870s) and as physicians (especially after 1970).

Prehistoric medicine

Prehistoric medicine is a field of study focused on understanding the use of medicinal plants, healing practices, illnesses, and wellness of humans before written records existed. Although styled prehistoric "medicine", prehistoric healthcare practices were vastly different from what we understand medicine to be in the present era, and the term more accurately refers to the study and exploration of early healing practices. This period extends from the first use of stone tools by early humans 3.3 million years ago to the beginning of writing systems and subsequent recorded history 5,000 years ago. As human populations were once scattered across the world, forming isolated communities and cultures that sporadically interacted, a range of archaeological periods have been developed to account for the differing contexts of technology, sociocultural developments, and uptake of writing systems throughout early human societies. Prehistoric medicine is therefore highly contextual to the location and people in question, creating a non-uniform period of study that reflects varying degrees of societal development. Without written records, insights into prehistoric medicine come indirectly from interpreting evidence left behind by prehistoric humans. One branch of this is the archaeology of medicine, a discipline that uses a range of archaeological techniques, from observing illness in human remains and plant fossils to excavations, to uncover medical practices.
There is evidence of healing practices among Neanderthals and other early human species. Prehistoric evidence of human engagement with medicine includes the discovery of psychoactive plant sources such as psilocybin mushrooms in the Sahara around 6000 BCE, and primitive dental care around 10,900 BCE (13,000 BP) at Riparo Fredian (present-day Italy) and around 7000 BCE at Mehrgarh (present-day Pakistan). Anthropology is another academic branch that contributes to understanding prehistoric medicine by uncovering the sociocultural relationships, meaning, and interpretation of prehistoric evidence. The overlap of medicine as a route to healing the body as well as the spirit throughout prehistoric periods highlights the multiple purposes that healing practices and plants could potentially have. From proto-religions to developed spiritual systems, relationships between humans and supernatural entities, from gods to shamans, have played an interwoven part in prehistoric medicine.

Ancient medicine

Ancient history covers the time between 3000 BCE and 500 CE, from the evidenced development of writing systems to the end of the classical era and the beginning of the post-classical period. This periodisation presents history as if it were the same everywhere; however, sociocultural and technological developments could differ locally from settlement to settlement as well as globally from one society to the next. Ancient medicine covers a similar period of time and presented a range of similar healing theories from across the world connecting nature, religion, and humans within ideas of circulating fluids and energy. Although prominent scholars and texts detailed well-defined medical insights, their real-world applications were marred by knowledge destruction and loss, poor communication, localised reinterpretations, and subsequent inconsistent applications.

Ancient Mesopotamian medicine

The Mesopotamian region, covering much of present-day Iraq, Kuwait, Syria, Iran, and Turkey, was dominated by a series of civilisations including Sumer, the earliest known civilisation in the Fertile Crescent region, alongside the Akkadians (including Assyrians and Babylonians). Overlapping ideas of what we now understand as medicine, science, magic, and religion characterised early Mesopotamian healing practices as a hybrid naturalistic and supernatural belief system. The Sumerians, having developed one of the earliest known writing systems in the 3rd millennium BCE, created numerous cuneiform clay tablets regarding their civilisation, including detailed accounts of drug prescriptions, operations, and exorcisms. These were administered and carried out by highly defined professionals, including bârû (seers), âshipu (exorcists), and asû (physician-priests). An example of an early, prescription-like medication appears in Sumerian records from the Third Dynasty of Ur (c. 2112 – 2004 BCE). Following the conquest of the Sumerian civilisation by the Akkadian Empire and the empire's eventual collapse from a number of social and environmental factors, the Babylonian civilisation began to dominate the region. Examples of Babylonian medicine include the extensive Babylonian medical text, the Diagnostic Handbook, written by the ummânū, or chief scholar, Esagil-kin-apli of Borsippa, in the middle of the 11th century BCE during the reign of the Babylonian king Adad-apla-iddina (1069–1046 BCE). This medical treatise paid great attention to the practice of diagnosis, prognosis, physical examination, and remedies.
The text contains a list of medical symptoms and often detailed empirical observations, along with logical rules used in combining observed symptoms on the body of a patient with a diagnosis and prognosis. Here, clearly developed rationales were used to understand the causes of disease and injury, supported by theories agreed upon at the time that combined elements we might now understand as natural causes with supernatural magic and religious explanations. Most known and recovered artefacts from the ancient Mesopotamian civilisations centre on the neo-Assyrian (c. 900 – 600 BCE) and neo-Babylonian (c. 600 – 500 BCE) periods, the last empires ruled by native Mesopotamian rulers. These discoveries include a huge array of medical clay tablets from this period, although damage to the clay documents creates large gaps in our understanding of medical practices. Throughout the civilisations of Mesopotamia there is a wide range of medical innovations, including evidenced practices of prophylaxis (taking measures to prevent the spread of disease), accounts of stroke, and an awareness of mental illnesses.

Ancient Egyptian medicine

Ancient Egypt, a civilisation spanning the river Nile (throughout parts of present-day Egypt, Sudan, and South Sudan), existed from its unification in 3150 BCE to its collapse via Persian conquest in 525 BCE and ultimate downfall in the conquest of Alexander the Great in 332 BCE. Throughout unique dynasties, golden eras, and intermediate periods of instability, ancient Egyptians developed a complex, experimental, and communicative medical tradition that has been uncovered through surviving documents, most made of papyrus, such as the Kahun Gynaecological Papyrus, the Edwin Smith Papyrus, the Ebers Papyrus, the London Medical Papyrus, and the Greek Magical Papyri. Herodotus described the Egyptians as "the healthiest of all men, next to the Libyans", because of the dry climate and the notable public health system that they possessed. According to him, "the practice of medicine is so specialized among them that each physician is a healer of one disease and no more." Although Egyptian medicine, to a considerable extent, dealt with the supernatural, it eventually developed a practical use in the fields of anatomy, public health, and clinical diagnostics.

Medical information in the Edwin Smith Papyrus may date to a time as early as 3000 BCE. Imhotep in the 3rd dynasty is sometimes credited with being the founder of ancient Egyptian medicine and with being the original author of the Edwin Smith Papyrus, detailing cures, ailments and anatomical observations. The Edwin Smith Papyrus is regarded as a copy of several earlier works and was written c. 1600 BCE. It is an ancient textbook on surgery almost completely devoid of magical thinking and describes in exquisite detail the examination, diagnosis, treatment, and prognosis of numerous ailments. The Kahun Gynaecological Papyrus treats women's complaints, including problems with conception. Thirty-four cases detailing diagnosis and treatment survive, some of them fragmentarily. Dating to c. 1800 BCE, it is the oldest surviving medical text of any kind. Medical institutions, referred to as Houses of Life, are known to have been established in ancient Egypt as early as 2200 BCE. The Ebers Papyrus is the oldest written text mentioning enemas. Many medications were administered by enemas, and one of the many types of medical specialists was an Iri, the Shepherd of the Anus.
The earliest known physician is also credited to ancient Egypt: Hesy-Ra, "Chief of Dentists and Physicians" for King Djoser in the 27th century BCE. Also, the earliest known woman physician, Peseshet, practiced in ancient Egypt at the time of the 4th dynasty. Her title was "Lady Overseer of the Lady Physicians."

Ancient Chinese medicine

Medical and healing practices in early Chinese dynasties were heavily shaped by the practice of traditional Chinese medicine (TCM). Starting around the Zhou dynasty, parts of this system were being developed, as demonstrated in early writings on herbs in the Classic of Changes (Yi Jing) and the Classic of Poetry (Shi Jing). China also developed a large body of traditional medicine. Much of the philosophy of traditional Chinese medicine derived from empirical observations of disease and illness by Taoist physicians and reflects the classical Chinese belief that individual human experiences express causative principles effective in the environment at all scales. These causative principles, whether material, essential, or mystical, correlate as the expression of the natural order of the universe.

The foundational text of Chinese medicine is the Huangdi Neijing (or Yellow Emperor's Inner Canon), written between the 5th and 3rd centuries BCE. Near the end of the 2nd century CE, during the Han dynasty, Zhang Zhongjing wrote a Treatise on Cold Damage, which contains the earliest known reference to the Neijing Suwen. The Jin dynasty practitioner and advocate of acupuncture and moxibustion, Huangfu Mi (215–282), also quotes the Yellow Emperor in his Jiayi jing, c. 265. During the Tang dynasty, the Suwen was expanded and revised and is now the best extant representation of the foundational roots of traditional Chinese medicine. Traditional Chinese medicine, based on the use of herbal medicine, acupuncture, massage and other forms of therapy, has been practiced in China for thousands of years.

Critics say that TCM theory and practice have no basis in modern science, and TCM practitioners do not agree on what diagnosis and treatments should be used for any given person. A 2007 editorial in the journal Nature wrote that TCM "remains poorly researched and supported, and most of its treatments have no logical mechanism of action." It also described TCM as "fraught with pseudoscience". A review of the literature in 2008 found that scientists are "still unable to find a shred of evidence", according to the standards of science-based medicine, for traditional Chinese concepts such as qi, meridians, and acupuncture points, and that the traditional principles of acupuncture are deeply flawed. There are concerns over a number of potentially toxic plants, animal parts, and Chinese mineral compounds, as well as the facilitation of disease. Trafficked and farm-raised animals used in TCM are a source of several fatal zoonotic diseases. There are additional concerns over the illegal trade and transport of endangered species, including rhinoceroses and tigers, and the welfare of specially farmed animals, including bears.

Ancient Indian medicine

The Atharvaveda, a sacred text of Hinduism dating from the middle Vedic age (c. 1200–900 BCE), is one of the first Indian texts dealing with medicine. It is a text filled with magical charms, spells, and incantations used for various purposes, such as protection against demons, rekindling love, ensuring childbirth, and achieving success in battle, trade, and even gambling.
It also includes numerous charms aimed at curing diseases and several remedies from medicinal herbs, overall making it a key source of medical knowledge during the Vedic period. The use of herbs to treat ailments would later form a large part of Ayurveda.

Ayurveda, meaning the "complete knowledge for long life", is another medical system of India. Its two most famous texts (samhitas) belong to the schools of Charaka and Sushruta. The samhitas represent later revised versions (recensions) of their original works. The earliest foundations of Ayurveda were built on a synthesis of traditional herbal practices together with a massive addition of theoretical conceptualizations, new nosologies and new therapies dating from about 600 BCE onwards, coming out of the communities of thinkers which included the Buddha and others. According to the compendium of Charaka, the Charakasamhitā, health and disease are not predetermined and life may be prolonged by human effort. The compendium of Suśruta, the Suśrutasamhitā, defines the purpose of medicine as curing the diseases of the sick, protecting the healthy, and prolonging life. Both these ancient compendia include details of the examination, diagnosis, treatment, and prognosis of numerous ailments. The Suśrutasamhitā is notable for describing procedures on various forms of surgery, including rhinoplasty, the repair of torn ear lobes, perineal lithotomy, cataract surgery, and several other excisions and surgical procedures. Most remarkable was Sushruta's surgery, especially the rhinoplasty for which he is called the father of plastic surgery. Sushruta also described more than 125 surgical instruments in detail. Also remarkable is Sushruta's penchant for scientific classification: his medical treatise consists of 184 chapters and lists 1,120 conditions, including injuries and illnesses relating to aging and mental illness.

The Ayurvedic classics mention eight branches of medicine: kāyācikitsā (internal medicine), śalyacikitsā (surgery including anatomy), śālākyacikitsā (eye, ear, nose, and throat diseases), kaumārabhṛtya (pediatrics with obstetrics and gynaecology), bhūtavidyā (spirit and psychiatric medicine), agada tantra (toxicology with treatments of stings and bites), rasāyana (science of rejuvenation), and vājīkaraṇa (aphrodisiac and fertility). Apart from learning these, the student of Āyurveda was expected to know ten arts that were indispensable in the preparation and application of his medicines: distillation, operative skills, cooking, horticulture, metallurgy, sugar manufacture, pharmacy, analysis and separation of minerals, compounding of metals, and preparation of alkalis. The teaching of various subjects was done during the instruction of relevant clinical subjects. For example, the teaching of anatomy was a part of the teaching of surgery, embryology was a part of training in pediatrics and obstetrics, and the knowledge of physiology and pathology was interwoven in the teaching of all the clinical disciplines. Even today Ayurvedic treatment is practiced, but it is considered pseudoscientific because its premises are not based on science, and some Ayurvedic medicines have been found to contain toxic substances. Both the lack of scientific soundness in the theoretical foundations of Ayurveda and the quality of research have been criticized.
Ancient Greek medicine

Humors

The theory of humors was derived from ancient medical works, dominated Western medicine until the 19th century, and is credited to the Greek philosopher and surgeon Galen of Pergamon (129 – c. 216 CE). In Greek medicine there are thought to be four humors, or bodily fluids, that are linked to illness: blood, phlegm, yellow bile, and black bile. Early scientists believed that food is digested into blood, muscle, and bones, while the humors that were not blood were formed by the indigestible materials left over. An excess or shortage of any one of the four humors is theorized to cause an imbalance that results in sickness; this idea was already hypothesized by sources before Hippocrates. Hippocrates deduced that the four seasons of the year and the four ages of man affect the body in relation to the humors. The four ages of man are childhood, youth, prime age, and old age. Black bile is associated with autumn, phlegm with winter, blood with spring, and yellow bile with summer.

In De temperamentis, Galen linked what he called temperaments, or personality characteristics, to a person's natural mixture of humors. He also said that the best place to check the balance of temperaments was in the palm of the hand. A person considered to be phlegmatic is said to be an introvert, even-tempered, calm, and peaceful. This person would have an excess of phlegm, which is described as a viscous substance or mucous. Similarly, a melancholic temperament is related to being moody, anxious, depressed, introverted, and pessimistic. A melancholic temperament is caused by an excess of black bile, which is sedimentary and dark in colour. Being extroverted, talkative, easygoing, carefree, and sociable coincides with a sanguine temperament, which is linked to too much blood. Finally, a choleric temperament is related to too much yellow bile, which is actually red in colour and has the texture of foam; it is associated with being aggressive, excitable, impulsive, and also extroverted.

There are numerous ways to treat a disproportion of the humors. For example, if someone was suspected to have too much blood, then the physician would perform bloodletting as a treatment. Likewise, a person believed to have too much phlegm should feel better after expectorating, and someone with too much yellow bile would purge. Another factor to be considered in the balance of humors is the quality of the air where one resides, such as the climate and elevation. Also important are the standard of food and drink, the balance of sleeping and waking, exercise and rest, and retention and evacuation. Moods such as anger, sadness, joy, and love can affect the balance. During that time, the importance of balance was demonstrated by the fact that women lose blood monthly during menstruation and have a lesser occurrence of gout, arthritis, and epilepsy than men do. Galen also hypothesized that there are three faculties. The natural faculty affects growth and reproduction and is produced in the liver. The animal or vital faculty controls respiration and emotion, coming from the heart. In the brain, the psychic faculty commands the senses and thoughts. The structure of bodily functions is related to the humors as well. Greek physicians understood that food was cooked in the stomach; this is where the nutrients are extracted. The best, most potent and pure nutrients from food are reserved for blood, which is produced in the liver and carried through veins to organs.
Blood enhanced with pneuma, which means wind or breath, is carried by the arteries. The path that blood takes is as follows: venous blood passes through the vena cava and is moved into the right ventricle of the heart; then the pulmonary artery takes it to the lungs. The pulmonary vein then mixes air from the lungs with blood to form arterial blood, which has different observable characteristics. After leaving the liver, half of the yellow bile that is produced travels to the blood, while the other half travels to the gallbladder. Similarly, half of the black bile produced gets mixed in with blood, and the other half is used by the spleen.

People

Around 800 BCE Homer in the Iliad gives descriptions of wound treatment by the two sons of Asklepios, the admirable physicians Podaleirius and Machaon, and one acting doctor, Patroclus. Because Machaon is wounded and Podaleirius is in combat, Eurypylus asks Patroclus to "cut out the arrow-head, and wash the dark blood from my thigh with warm water, and sprinkle soothing herbs with power to heal on my wound". Asklepios, like Imhotep, came to be associated as a god of healing over time.

Temples dedicated to the healer-god Asclepius, known as Asclepieia (sing. Asclepieion), functioned as centers of medical advice, prognosis, and healing. At these shrines, patients would enter a dream-like state of induced sleep known as enkoimesis, not unlike anesthesia, in which they either received guidance from the deity in a dream or were cured by surgery. Asclepeia provided carefully controlled spaces conducive to healing and fulfilled several of the requirements of institutions created for healing. In the Asclepeion of Epidaurus, three large marble boards dated to 350 BCE preserve the names, case histories, complaints, and cures of about 70 patients who came to the temple with a problem and shed it there. Some of the surgical cures listed, such as the opening of an abdominal abscess or the removal of traumatic foreign material, are realistic enough to have taken place, but with the patient in a state of enkoimesis induced with the help of soporific substances such as opium.

Alcmaeon of Croton wrote on medicine between 500 and 450 BCE. He argued that channels linked the sensory organs to the brain, and it is possible that he discovered one type of channel, the optic nerves, by dissection.

Hippocrates of Kos is considered the "father of modern medicine." The Hippocratic Corpus is a collection of around seventy early medical works from ancient Greece strongly associated with Hippocrates and his students. Most famously, the Hippocratics invented the Hippocratic Oath for physicians. Contemporary physicians swear an oath of office which includes aspects found in early editions of the Hippocratic Oath. Hippocrates and his followers were first to describe many diseases and medical conditions. Though humorism (humoralism) as a medical system predates 5th-century Greek medicine, Hippocrates and his students systematized the thinking that illness can be explained by an imbalance of blood, phlegm, black bile, and yellow bile. Hippocrates is given credit for the first description of clubbing of the fingers, an important diagnostic sign in chronic suppurative lung disease, lung cancer and cyanotic heart disease. For this reason, clubbed fingers are sometimes referred to as "Hippocratic fingers". Hippocrates was also the first physician to describe the Hippocratic face in Prognosis.
Shakespeare famously alludes to this description when writing of Falstaff's death in Act II, Scene iii of Henry V. Hippocrates began to categorize illnesses as acute, chronic, endemic and epidemic, and to use terms such as "exacerbation, relapse, resolution, crisis, paroxysm, peak, and convalescence."

The Greek Galen (c. 129 – c. 216 CE) was one of the greatest physicians of the ancient world, as his theories dominated all medical studies for nearly 1500 years. His theories and experimentation laid the foundation for modern medicine surrounding the heart and blood. Galen's influence and innovations in medicine can be attributed to the experiments he conducted, which were unlike any other medical experiments of his time. Galen strongly believed that medical dissection was one of the essential procedures in truly understanding medicine. He began to dissect different animals that were anatomically similar to humans, which allowed him to learn more about the internal organs and extrapolate the surgical studies to the human body. In addition, he performed many audacious operations—including brain and eye surgeries—that were not tried again for almost two millennia. Through the dissections and surgical procedures, Galen concluded that blood is able to circulate throughout the human body, and that the heart is most similar to the human soul. In Ars medica ("Arts of Medicine"), he further explains the mental properties in terms of specific mixtures of the bodily organs. While much of his work surrounded the physical anatomy, he also worked heavily in humoral physiology.

Galen's medical work was regarded as authoritative until well into the Middle Ages. He left a physiological model of the human body that became the mainstay of the medieval physician's university anatomy curriculum. Although he attempted to extrapolate the animal dissections towards the model of the human body, some of Galen's theories were incorrect. This caused his model to suffer greatly from stasis and intellectual stagnation. Greek and Roman taboos meant that dissection of the human body was usually banned in ancient times, but in the Middle Ages this changed. In 1523 Galen's On the Natural Faculties was published in London. In the 1530s the Belgian anatomist and physician Andreas Vesalius launched a project to translate many of Galen's Greek texts into Latin. Vesalius's most famous work, De humani corporis fabrica, was greatly influenced by Galenic writing and form.

Herophilus and Erasistratus

Two great Alexandrians laid the foundations for the scientific study of anatomy and physiology, Herophilus of Chalcedon and Erasistratus of Ceos. Other Alexandrian surgeons gave us ligature (hemostasis), lithotomy, hernia operations, ophthalmic surgery, plastic surgery, methods of reduction of dislocations and fractures, tracheotomy, and mandrake as an anaesthetic. Some of what we know of them comes from Celsus and Galen of Pergamum. Herophilus of Chalcedon, the renowned Alexandrian physician, was one of the pioneers of human anatomy. Though his knowledge of the anatomical structure of the human body was vast, he specialized in the aspects of neural anatomy. Thus, his experimentation was centered around the anatomical composition of the blood-vascular system and the pulsations that can be analyzed from the system. Furthermore, the surgical experimentation he administered caused him to become very prominent throughout the field of medicine, as he was one of the first physicians to initiate the exploration and dissection of the human body.
The ban on the practice of human dissection was lifted during his time within the scholastic community. This brief moment in the history of Greek medicine allowed him to further study the brain, which he believed was the core of the nervous system. He also distinguished between veins and arteries, noting that the latter pulse and the former do not. Thus, while working at the medical school of Alexandria, Herophilus placed intelligence in the brain based on his surgical exploration of the body, and he connected the nervous system to motion and sensation. In addition, he and his contemporary, Erasistratus of Ceos, continued to research the role of veins and nerves. After conducting extensive research, the two Alexandrians mapped out the course of the veins and nerves across the human body. Erasistratus connected the increased complexity of the surface of the human brain compared to other animals to its superior intelligence. He sometimes employed experiments to further his research, at one time repeatedly weighing a caged bird and noting its weight loss between feeding times. In Erasistratus' physiology, air enters the body, is then drawn by the lungs into the heart, where it is transformed into vital spirit, and is then pumped by the arteries throughout the body. Some of this vital spirit reaches the brain, where it is transformed into animal spirit, which is then distributed by the nerves.

Ancient Roman medicine

The Romans invented numerous surgical instruments, including the first instruments unique to women, as well as the surgical uses of forceps, scalpels, cautery, cross-bladed scissors, the surgical needle, the sound, and specula. Romans also performed cataract surgery. The Roman army physician Dioscorides (c. 40 – 90 CE) was a Greek botanist and pharmacologist. He wrote the encyclopedia De Materia Medica describing over 600 herbal cures, forming an influential pharmacopoeia which was used extensively for the following 1,500 years. Early Christians in the Roman Empire incorporated medicine into their theology, ritual practices, and metaphors.

Post-classical medicine

Middle East

Byzantine medicine

Byzantine medicine encompasses the common medical practices of the Byzantine Empire from about 400 CE to 1453 CE. Byzantine medicine was notable for building upon the knowledge base developed by its Greco-Roman predecessors. In preserving medical practices from antiquity, Byzantine medicine influenced Islamic medicine as well as fostering the Western rebirth of medicine during the Renaissance. Byzantine physicians often compiled and standardized medical knowledge into textbooks. Their records tended to include both diagnostic explanations and technical drawings. The Medical Compendium in Seven Books, written by the leading physician Paul of Aegina, survived as a particularly thorough source of medical knowledge. This compendium, written in the late seventh century, remained in use as a standard textbook for the following 800 years. Late antiquity ushered in a revolution in medical science, and historical records often mention civilian hospitals (although battlefield medicine and wartime triage were recorded well before Imperial Rome). Constantinople stood out as a center of medicine during the Middle Ages, aided by its crossroads location, wealth, and accumulated knowledge. The first known example of separating conjoined twins occurred in the Byzantine Empire in the 10th century.
The next example of separating conjoined twins would be recorded many centuries later, in Germany in 1689. The Byzantine Empire's neighbors, the Persian Sassanid Empire, also made noteworthy contributions, mainly through the establishment of the Academy of Gondeshapur, which was "the most important medical center of the ancient world during the 6th and 7th centuries." In addition, Cyril Elgood, a British physician and historian of medicine in Persia, commented that thanks to medical centers like the Academy of Gondeshapur, "to a very large extent, the credit for the whole hospital system must be given to Persia."

Islamic medicine

The Islamic civilization rose to primacy in medical science as its physicians contributed significantly to the field of medicine, including anatomy, ophthalmology, pharmacology, pharmacy, physiology, and surgery. Islamic civilization's contribution to these fields was a gradual process that took hundreds of years. During the time of the first great Muslim dynasty, the Umayyad Caliphate (661–750 CE), these fields were in their very early stages of development, and not much progress was made. One reason for the limited advancement in medicine during the Umayyad Caliphate was the Caliphate's focus on expansion after the death of Muhammad (632 CE). The focus on expansionism redirected resources from other fields, such as medicine, and the emphasis on spirituality led a large percentage of the population to believe that God would provide cures for their illnesses and diseases. There were also many other areas of interest during that time before there was a rising interest in the field of medicine. Abd al-Malik ibn Marwan, the fifth caliph of the Umayyad, developed governmental administration, adopted Arabic as the main language, and focused on many other areas.

However, interest in Islamic medicine grew significantly when the Abbasid Caliphate (750–1258 CE) overthrew the Umayyad Caliphate in 750 CE. This change in dynasty served as a turning point towards scientific and medical developments. A large contributor to this was that, under Abbasid rule, much of the Greek legacy was translated into Arabic, which by then was the main language of Islamic nations. Because of this, many Islamic physicians were heavily influenced by the works of the Greek scholars of Alexandria and Egypt and were able to further expand on those texts to produce new medical knowledge. This period is also known as the Islamic Golden Age, a period of development and flourishing of technology, commerce, and the sciences, including medicine. Additionally, during this time the creation of the first Islamic hospital in 805 CE by the Abbasid caliph Harun al-Rashid in Baghdad was recounted as a glorious event of the Golden Age. This hospital in Baghdad contributed immensely to Baghdad's success and also provided educational opportunities for Islamic physicians. During the Islamic Golden Age, there were many famous Islamic physicians who paved the way for medical advancements and understandings. However, this would not have been possible without the influence of many different areas of the world. Muslims were influenced by ancient Indian, Persian, Greek, Roman and Byzantine medical practices, and helped them develop further. Galen and Hippocrates were pre-eminent authorities.
The translation of 129 of Galen's works into Arabic by the Nestorian Christian Hunayn ibn Ishaq and his assistants, and in particular Galen's insistence on a rational, systematic approach to medicine, set the template for Islamic medicine, which rapidly spread throughout the Arab Empire. Its most famous physicians included the Persian polymaths Muhammad ibn Zakarīya al-Rāzi and Avicenna, who wrote more than 40 works on health, medicine, and well-being. Taking leads from Greece and Rome, Islamic scholars kept both the art and science of medicine alive and moving forward. The Persian polymath Avicenna has also been called the "father of medicine". He wrote The Canon of Medicine, which became a standard medical text at many medieval European universities and is considered one of the most famous books in the history of medicine. The Canon of Medicine presents an overview of the contemporary medical knowledge of the medieval Islamic world, which had been influenced by earlier traditions including Greco-Roman medicine (particularly Galen), Persian medicine, Chinese medicine and Indian medicine. The Persian physician al-Rāzi was one of the first to question the Greek theory of humorism, which nevertheless remained influential in both medieval Western and medieval Islamic medicine. Some volumes of al-Rāzi's work Al-Mansuri, namely "On Surgery" and "A General Book on Therapy", became part of the medical curriculum in European universities. Additionally, he has been described as a doctor's doctor, the father of pediatrics, and a pioneer of ophthalmology. For example, he was the first to recognize the reaction of the eye's pupil to light.

In addition to contributions to humanity's understanding of human anatomy, Islamicate scientists and scholars, physicians specifically, played an invaluable role in the development of the modern hospital system, creating the foundations on which more contemporary medical professionals would build models of public health systems in Europe and elsewhere. During the time of the Safavid empire (16th–18th centuries) in Iran and the Mughal empire (16th–19th centuries) in India, Muslim scholars radically transformed the institution of the hospital, creating an environment in which rapidly developing medical knowledge of the time could be passed among students and teachers from a wide range of cultures. There were two main schools of thought on patient care at the time: humoral physiology from the Persians and Ayurvedic practice. After these theories were translated from Sanskrit to Persian and vice versa, hospitals could have a mix of cultures and techniques. This allowed for a sense of collaborative medicine. Hospitals became increasingly common during this period as wealthy patrons commonly founded them. Many features that are still in use today, such as an emphasis on hygiene, a staff fully dedicated to the care of patients, and the separation of individual patients from each other, were developed in Islamicate hospitals long before they came into practice in Europe. At the time, the patient care aspects of European hospitals had not yet taken hold; European hospitals were places of religion rather than institutions of science. As was the case with much of the scientific work done by Islamicate scholars, many of these novel developments in medical practice were transmitted to European cultures hundreds of years after they had long been used throughout the Islamicate world.
Although Islamicate scientists were responsible for discovering much of the knowledge that allows the hospital system to function safely today, European scholars who built on this work still receive the majority of the credit historically. Before the development of scientific medical practices in the Islamicate empires, medical care was mainly performed by religious figures such as priests. Without a profound understanding of how infectious diseases worked and why sickness spread from person to person, these early attempts at caring for the ill and injured often did more harm than good. By contrast, with the development of new and safer practices by scholars and physicians in the hospitals of the Islamic world, ideas vital for the effective care of patients were developed, learned, and transmitted widely. Hospitals developed novel "concepts and structures" which are still in use today: separate wards for male and female patients, pharmacies, medical record-keeping, and personal and institutional sanitation and hygiene. Much of this knowledge was recorded and passed on through Islamicate medical texts, many of which were carried to Europe and translated for the use of European medical workers. The Tasrif, written by the surgeon Abu Al-Qasim Al-Zahrawi, was translated into Latin; it became one of the most important medical texts in European universities during the Middle Ages and contained useful information on surgical techniques and the spread of bacterial infection. The hospital was a typical institution included in the majority of Muslim cities, and although hospitals were often physically attached to religious institutions, they were not themselves places of religious practice. Rather, they served as facilities in which education and scientific innovation could flourish. If they had places of worship, these were secondary to the medical side of the hospital. Islamicate hospitals, along with observatories used for astronomical science, were some of the most important points of exchange for the spread of scientific knowledge. Undoubtedly, the hospital system developed in the Islamicate world played an invaluable role in the creation and evolution of the hospitals we as a society know and depend on today.

Europe

After 400 CE, the study and practice of medicine in the Western Roman Empire went into deep decline. Medical services were provided, especially for the poor, in the thousands of monastic hospitals that sprang up across Europe, but the care was rudimentary and mainly palliative. Most of the writings of Galen and Hippocrates were lost to the West, with the summaries and compendia of St. Isidore of Seville being the primary channel for transmitting Greek medical ideas. The Carolingian Renaissance brought increased contact with Byzantium and a greater awareness of ancient medicine, but only with the Renaissance of the 12th century, the new translations coming from Muslim and Jewish sources in Spain, and the fifteenth-century flood of resources after the fall of Constantinople did the West fully recover its acquaintance with classical antiquity. Greek and Roman taboos had meant that dissection was usually banned in ancient times, but in the Middle Ages this changed: medical teachers and students at Bologna began to open human bodies, and Mondino de Luzzi (d. 1326) produced the first known anatomy textbook based on human dissection.
Wallis identifies a prestige hierarchy with university-educated physicians on top, followed by learned surgeons; craft-trained surgeons; barber surgeons; itinerant specialists such as dentists and oculists; empirics; and midwives.
Institutions
The first medical schools were opened in the 9th century, most notably the Schola Medica Salernitana at Salerno in southern Italy. The cosmopolitan influences from Greek, Latin, Arabic, and Hebrew sources gave it an international reputation as the Hippocratic City. Students from wealthy families came for three years of preliminary studies and five of medical studies. Following the laws of Frederick II, who founded the university in 1224 and improved the Schola Salernitana, medicine in Sicily underwent a particular development between 1200 and 1400 (the so-called Sicilian Middle Ages), so much so that a true school of Jewish medicine was created. As a result, after a legal examination, a Jewish Sicilian woman, Virdimura, wife of the physician Pasquale of Catania, gained the historical distinction of being the first woman officially trained to practice the medical profession. At the University of Bologna the training of physicians began in 1219. The Italian city attracted students from across Europe. Taddeo Alderotti built a tradition of medical education that established the characteristic features of Italian learned medicine and was copied by medical schools elsewhere. Turisanus (d. 1320) was his student. The University of Padua was founded about 1220 by walkouts from the University of Bologna, and began teaching medicine in 1222. It played a leading role in the identification and treatment of diseases and ailments, specializing in autopsies and the inner workings of the body. Starting in 1595, Padua's famous anatomical theatre drew artists and scientists studying the human body during public dissections. The intensive study of Galen led to critiques of Galen modeled on his own writing, as in the first book of Vesalius's De humani corporis fabrica. Andreas Vesalius held the chair of Surgery and Anatomy (explicator chirurgiae) and in 1543 published his anatomical discoveries in De Humani Corporis Fabrica. He portrayed the human body as an interdependent system of organ groupings. The book triggered great public interest in dissections and caused many other European cities to establish anatomical theatres. By the thirteenth century, the medical school at Montpellier began to eclipse the Salernitan school. In the 12th century, universities were founded in Italy, France, and England, which soon developed schools of medicine. The University of Montpellier in France and Italy's University of Padua and University of Bologna were leading schools. Nearly all the learning was from lectures and readings in Hippocrates, Galen, Avicenna, and Aristotle. In later centuries, the importance of universities founded in the late Middle Ages gradually increased, e.g. Charles University in Prague (established in 1348), Jagiellonian University in Kraków (1364), University of Vienna (1365), Heidelberg University (1386) and University of Greifswald (1456).
People
Early modern medicine
Places
England
In England, there were but three small hospitals after 1550. Pelling and Webster estimate that in London in the 1580 to 1600 period, out of a population of nearly 200,000 people, there were about 500 medical practitioners. Nurses and midwives are not included.
There were about 50 physicians, 100 licensed surgeons, 100 apothecaries, and 250 additional unlicensed practitioners. In the last category about 25% were women. All across England—and indeed all of the world—the vast majority of the people in city, town or countryside depended for medical care on local amateurs with no professional training but with a reputation as wise healers who could diagnose problems and advise sick people what to do—and perhaps set broken bones, pull a tooth, give some traditional herbs or brews or perform a little magic to cure what ailed them. People Europe The Renaissance brought an intense focus on scholarship to Christian Europe. A major effort to translate the Arabic and Greek scientific works into Latin emerged. Europeans gradually became experts not only in the ancient writings of the Romans and Greeks, but in the contemporary writings of Islamic scientists. During the later centuries of the Renaissance came an increase in experimental investigation, particularly in the field of dissection and body examination, thus advancing our knowledge of human anatomy. Ideas Animalcules: In 1677 Antonie van Leeuwenhoek identified "animalcules", which we now know as microorganisms, within their paper "letter on the protozoa". Blood circulation: In 1628 the English physician William Harvey made a ground-breaking discovery when he correctly described the circulation of the blood in his Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus. Before this time the most useful manual in medicine used both by students and expert physicians was Dioscorides' De Materia Medica, a pharmacopoeia. Inventions Microscopes: Bacteria and protists were first observed with a microscope by Antonie van Leeuwenhoek in 1676, initiating the scientific field of microbiology. Institutions At the University of Bologna the curriculum was revised and strengthened in 1560–1590. A representative professor was Julius Caesar Aranzi (Arantius) (1530–1589). He became Professor of Anatomy and Surgery at the University of Bologna in 1556, where he established anatomy as a major branch of medicine for the first time. Aranzi combined anatomy with a description of pathological processes, based largely on his own research, Galen, and the work of his contemporary Italians. Aranzi discovered the 'Nodules of Aranzio' in the semilunar valves of the heart and wrote the first description of the superior levator palpebral and the coracobrachialis muscles. His books (in Latin) covered surgical techniques for many conditions, including hydrocephalus, nasal polyp, goitre and tumours to phimosis, ascites, haemorrhoids, anal abscess and fistulae. People Women Catholic women played large roles in health and healing in medieval and early modern Europe. A life as a nun was a prestigious role; wealthy families provided dowries for their daughters, and these funded the convents, while the nuns provided free nursing care for the poor. The Catholic elites provided hospital services because of their theology of salvation that good works were the route to heaven. The Protestant reformers rejected the notion that rich men could gain God's grace through good works—and thereby escape purgatory—by providing cash endowments to charitable institutions. They also rejected the Catholic idea that the poor patients earned grace and salvation through their suffering. Protestants generally closed all the convents and most of the hospitals, sending women home to become housewives, often against their will. 
On the other hand, local officials recognized the public value of hospitals, and some were continued in Protestant lands, but without monks or nuns and in the control of local governments. In London, the crown allowed two hospitals to continue their charitable work, under nonreligious control of city officials. The convents were all shut down but Harkness finds that women—some of them former nuns—were part of a new system that delivered essential medical services to people outside their family. They were employed by parishes and hospitals, as well as by private families, and provided nursing care as well as some medical, pharmaceutical, and surgical services. Meanwhile, in Catholic lands such as France, rich families continued to fund convents and monasteries, and enrolled their daughters as nuns who provided free health services to the poor. Nursing was a religious role for the nurse, and there was little call for science. Asia China In the 18th century, during the Qing dynasty, there was a proliferation of popular books as well as more advanced encyclopedias on traditional medicine. Jesuit missionaries introduced Western science and medicine to the royal court, although the Chinese physicians ignored them. India Unani medicine, based on Avicenna's Canon of Medicine (ca. 1025), was developed in India throughout the Medieval and Early Modern periods. Its use continued, especially in Muslim communities, during the Indian Sultanate and Mughal periods. Unani medicine is in some respects close to Ayurveda and to Early Modern European medicine. All share a theory of the presence of the elements (in Unani, as in Europe, they are considered to be fire, water, earth, and air) and humors in the human body. According to Unani physicians, these elements are present in different humoral fluids and their balance leads to health and their imbalance leads to illness. Sanskrit medical literature of the Early Modern period included innovative works such as the Compendium of Śārṅgadhara (Skt. Śārṅgadharasaṃhitā, ca. 1350) and especially The Illumination of Bhāva (Bhāvaprakāśa, by Bhāvamiśra, ca. 1550). The latter work also contained an extensive dictionary of materia medica, and became a standard textbook used widely by ayurvedic practitioners in north India up to the present day (2024). Medical innovations of this period included pulse diagnosis, urine diagnosis, the use of mercury and china root to treat syphilis, and the increasing use of metallic ingredients in drugs. By the 18th century CE, Ayurvedic medical therapy was still widely used among most of the population. Muslim rulers built large hospitals in 1595 in Hyderabad, and in Delhi in 1719, and numerous commentaries on ancient texts were written. Europe Events European Age of Enlightenment During the Age of Enlightenment, the 18th century, science was held in high esteem and physicians upgraded their social status by becoming more scientific. The health field was crowded with self-trained barber-surgeons, apothecaries, midwives, drug peddlers, and charlatans. Across Europe medical schools relied primarily on lectures and readings. The final year student would have limited clinical experience by trailing the professor through the wards. Laboratory work was uncommon, and dissections were rarely done because of legal restrictions on cadavers. Most schools were small, and only Edinburgh Medical School, Scotland, with 11,000 alumni, produced large numbers of graduates. 
Places Spain and the Spanish Empire In the Spanish Empire, the viceregal capital of Mexico City was a site of medical training for physicians and the creation of hospitals. Epidemic disease had decimated indigenous populations starting with the early sixteenth-century Spanish conquest of the Aztec empire, when a black auxiliary in the armed forces of conqueror Hernán Cortés, with an active case of smallpox, set off a virgin land epidemic among indigenous peoples, Spanish allies and enemies alike. Aztec emperor Cuitlahuac died of smallpox. Disease was a significant factor in the Spanish conquest elsewhere as well. Medical education instituted at the Royal and Pontifical University of Mexico chiefly served the needs of urban elites. Male and female curanderos or lay practitioners, attended to the ills of the popular classes. The Spanish crown began regulating the medical profession just a few years after the conquest, setting up the Royal Tribunal of the Protomedicato, a board for licensing medical personnel in 1527. Licensing became more systematic after 1646 with physicians, druggists, surgeons, and bleeders requiring a license before they could publicly practice. Crown regulation of medical practice became more general in the Spanish empire. Elites and the popular classes alike called on divine intervention in personal and society-wide health crises, such as the epidemic of 1737. The intervention of the Virgin of Guadalupe was depicted in a scene of dead and dying Indians, with elites on their knees praying for her aid. In the late eighteenth century, the crown began implementing secularizing policies on the Iberian peninsula and its overseas empire to control disease more systematically and scientifically. Spanish Quest for Medicinal Spices Botanical medicines also became popular during the 16th, 17th, and 18th Centuries. Spanish pharmaceutical books during this time contain medicinal recipes consisting of spices, herbs, and other botanical products. For example, nutmeg oil was documented for curing stomach ailments and cardamom oil was believed to relieve intestinal ailments. During the rise of the global trade market, spices and herbs, along with many other goods, that were indigenous to different territories began to appear in different locations across the globe. Herbs and spices were especially popular for their utility in cooking and medicines. As a result of this popularity and increased demand for spices, some areas in Asia, like China and Indonesia, became hubs for spice cultivation and trade. The Spanish Empire also wanted to benefit from the international spice trade, so they looked towards their American colonies. The Spanish American colonies became an area where the Spanish searched to discover new spices and indigenous American medicinal recipes. The Florentine Codex, a 16th-century ethnographic research study in Mesoamerica by the Spanish Franciscan friar Bernardino de Sahagún, is a major contribution to the history of Nahua medicine. The Spanish did discover many spices and herbs new to them, some of which were reportedly similar to Asian spices. A Spanish physician by the name of Nicolás Monardes studied many of the American spices coming into Spain. He documented many of the new American spices and their medicinal properties in his survey Historia medicinal de las cosas que se traen de nuestras Indias Occidentales. 
For example, Monardes describes the "Long Pepper" (Pimienta luenga), found along the coasts of the countries that are now known as Panama and Colombia, as a pepper that was more flavorful, healthy, and spicy in comparison to the Eastern black pepper. The Spanish interest in American spices can first be seen in the commissioning of the Libellus de Medicinalibus Indorum Herbis, a Spanish-American codex describing indigenous American spices and herbs and the ways that these were used in natural Aztec medicines. The codex was commissioned in the year 1552 by Francisco de Mendoza, the son of Antonio de Mendoza, the first Viceroy of New Spain. Francisco de Mendoza was interested in studying the properties of these herbs and spices so that he would be able to profit from the trade of these herbs and the medicines that could be produced from them. Francisco de Mendoza recruited the help of Monardes in studying the traditional medicines of the indigenous people living in what were then the Spanish colonies. Monardes researched these medicines and performed experiments to discover the possibilities of spice cultivation and medicine creation in the Spanish colonies. The Spanish transplanted some herbs from Asia, but only a few foreign crops were successfully grown in the Spanish colonies. One notable crop brought from Asia and successfully grown in the Spanish colonies was ginger, which was considered Hispaniola's leading crop at the end of the 16th century. The Spanish Empire did profit from cultivating herbs and spices, and it also introduced pre-Columbian American medicinal knowledge to Europe. Other Europeans were inspired by the actions of Spain and decided to try to establish a botanical transplant system in colonies that they controlled; however, these subsequent attempts were not successful.
United Kingdom and the British Empire
The London Dispensary opened in 1696, the first clinic in the British Empire to dispense medicines to poor sick people. The innovation was slow to catch on, but new dispensaries were opened in the 1770s. In the colonies, small hospitals opened in Philadelphia in 1752, New York in 1771, and Boston (Massachusetts General Hospital) in 1811. Guy's Hospital, the first great British hospital with a modern foundation, opened in 1721 in London, with funding from businessman Thomas Guy. It had been preceded by St Bartholomew's Hospital and St Thomas's Hospital, both medieval foundations. A bequest of £200,000 by William Hunt in 1829 funded expansion for an additional hundred beds at Guy's. Samuel Sharp (1709–78), a surgeon at Guy's Hospital from 1733 to 1757, was internationally famous; his A Treatise on the Operations of Surgery (1st ed., 1739) was the first British study focused exclusively on operative technique. English physician Thomas Percival (1740–1804) wrote a comprehensive system of medical conduct, Medical Ethics; or, a Code of Institutes and Precepts, Adapted to the Professional Conduct of Physicians and Surgeons (1803), that set the standard for many textbooks.
Late modern medicine
Germ theory and bacteriology
In the 1830s in Italy, Agostino Bassi traced the silkworm disease muscardine to microorganisms. Meanwhile, in Germany, Theodor Schwann led research on alcoholic fermentation by yeast, proposing that living microorganisms were responsible. Leading chemists, such as Justus von Liebig, seeking solely physicochemical explanations, derided this claim and alleged that Schwann was regressing to vitalism.
In 1847 in Vienna, Ignaz Semmelweis (1818–1865) dramatically reduced the death rate of new mothers (due to childbed fever) by requiring physicians to clean their hands before attending childbirth, yet his principles were marginalized and attacked by professional peers. At that time most people still believed that infections were caused by foul odors called miasmas. French scientist Louis Pasteur confirmed Schwann's fermentation experiments in 1857 and afterwards supported the hypothesis that yeast were microorganisms. Moreover, he suggested that such a process might also explain contagious disease. In 1860, Pasteur's report on bacterial fermentation of butyric acid motivated fellow Frenchman Casimir Davaine to identify a similar species (which he called bacteridia) as the pathogen of the deadly disease anthrax. Others dismissed the bacteridia as a mere byproduct of the disease. British surgeon Joseph Lister, however, took these findings seriously and subsequently introduced antisepsis to wound treatment in 1865. German physician Robert Koch, noting fellow German Ferdinand Cohn's report of a spore stage of a certain bacterial species, traced the life cycle of Davaine's bacteridia, identified spores, inoculated laboratory animals with them, and reproduced anthrax—a breakthrough for experimental pathology and the germ theory of disease. Pasteur's group added ecological investigations confirming the spores' role in the natural setting, while Koch published a landmark treatise in 1878 on the bacterial pathology of wounds. In 1882, Koch reported discovery of the "tubercle bacillus", cementing germ theory and Koch's acclaim. Upon the outbreak of a cholera epidemic in Alexandria, Egypt, two medical missions went to investigate and attend the sick: one was sent out by Pasteur and the other was led by Koch. Koch's group returned in 1883, having successfully discovered the cholera pathogen. In Germany, however, Koch's bacteriologists had to vie against Max von Pettenkofer, Germany's leading proponent of miasmatic theory. Pettenkofer conceded bacteria's causal involvement, but maintained that other, environmental factors were required to turn them pathogenic, and opposed water treatment as a misdirected effort amid more important ways to improve public health. The massive cholera epidemic in Hamburg in 1892 devastated Pettenkofer's position and yielded German public health to "Koch's bacteriology". On losing the 1883 rivalry in Alexandria, Pasteur switched research direction and introduced his third vaccine—rabies vaccine—the first vaccine for humans since Jenner's for smallpox. From across the globe, donations poured in, funding the founding of the Pasteur Institute, the world's first biomedical institute, which opened in 1888. Along with Koch's bacteriologists, Pasteur's group—which preferred the term microbiology—led medicine into the new era of "scientific medicine" built upon bacteriology and germ theory. Adapted from criteria set out earlier by Jakob Henle, Koch's steps for confirming a species' pathogenicity became famed as "Koch's postulates". Although his proposed tuberculosis treatment, tuberculin, seemingly failed, it was soon used to test for infection with the involved species. In 1905, Koch was awarded the Nobel Prize in Physiology or Medicine, and remains renowned as the founder of medical microbiology.
Nursing
The breakthrough to professionalization based on knowledge of advanced medicine was led by Florence Nightingale in England. She resolved to provide more advanced training than she saw on the Continent.
At Kaiserswerth, where the first German nursing schools were founded in 1836 by Theodor Fliedner, she said, "The nursing was nil and the hygiene horrible." Britain's male doctors preferred the old system, but Nightingale won out and her Nightingale Training School opened in 1860 and became a model. The Nightingale solution depended on the patronage of upper-class women, and they proved eager to serve. Royalty became involved. In 1902 the wife of the British king took control of the nursing unit of the British army, became its president, and renamed it after herself as the Queen Alexandra's Royal Army Nursing Corps; when she died the next queen became president. Today its Colonel in Chief is Sophie, Countess of Wessex, the daughter-in-law of Queen Elizabeth II. In the United States, upper-middle-class women who already supported hospitals promoted nursing. The new profession proved highly attractive to women of all backgrounds, and schools of nursing opened in the late 19th century. Nurses were soon a part of large hospitals, where they provided a steady stream of low-paid idealistic workers. The International Red Cross began operations in numerous countries in the late 19th century, promoting nursing as an ideal profession for middle-class women.
Statistical methods
A major breakthrough in epidemiology came with the introduction of statistical maps and graphs. They allowed careful analysis of seasonal patterns in disease incidence, and the maps allowed public health officials to identify critical loci for the dissemination of disease. John Snow in London developed the methods. In 1849, he observed that the symptoms of cholera, which had already claimed around 500 lives within a month, were vomiting and diarrhoea. He concluded that the source of contamination must be through ingestion, rather than inhalation as was previously thought. It was this insight that resulted in the removal of the handle of the Broad Street pump, after which deaths from cholera plummeted. English nurse Florence Nightingale pioneered analysis of large amounts of statistical data, using graphs and tables, regarding the condition of thousands of patients in the Crimean War to evaluate the efficacy of hospital services. Her methods proved convincing and led to reforms in military and civilian hospitals, usually with the full support of the government. By the late 19th and early 20th century English statisticians led by Francis Galton, Karl Pearson and Ronald Fisher developed mathematical tools such as correlation and hypothesis testing that made possible much more sophisticated analysis of statistical data. During the U.S. Civil War the Sanitary Commission collected enormous amounts of statistical data, and opened up the problems of storing information for fast access and mechanically searching for data patterns. The pioneer was John Shaw Billings (1838–1913). A senior surgeon in the war, Billings built the Library of the Surgeon General's Office (now the National Library of Medicine), the centerpiece of modern medical information systems. Billings figured out how to mechanically analyze medical and demographic data by turning facts into numbers and punching the numbers onto cardboard cards that could be sorted and counted by machine. The applications were developed by his assistant Herman Hollerith; Hollerith invented the punch card and counter-sorter system that dominated statistical data manipulation until the 1970s. Hollerith's company became International Business Machines (IBM) in 1911.
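The kind of analysis that Snow's maps and Billings's punched cards mechanized amounts to grouping and counting case records by a chosen field. The sketch below is purely illustrative: the records are invented, and the grouping fields (suspected water source and calendar week) are assumptions chosen to echo Snow's spatial clustering and the seasonality questions mentioned above.

```python
from collections import Counter
from datetime import date

# Hypothetical death records: (date of death, suspected water source).
# These values are invented for illustration only.
records = [
    (date(1854, 8, 31), "Broad Street pump"),
    (date(1854, 9, 1), "Broad Street pump"),
    (date(1854, 9, 1), "other source"),
    (date(1854, 9, 2), "Broad Street pump"),
    (date(1854, 9, 3), "other source"),
]

# Tally deaths by suspected source, much as a counter-sorter would
# tally punched cards grouped on a single field.
by_source = Counter(source for _, source in records)

# Tally deaths by calendar week to expose seasonal or epidemic clustering.
by_week = Counter(d.isocalendar()[1] for d, _ in records)

print(by_source.most_common())  # e.g. [('Broad Street pump', 3), ('other source', 2)]
print(sorted(by_week.items()))  # deaths per ISO week number
```

The same grouping logic, scaled up to thousands of cards and run on tabulating machines, is what made the Sanitary Commission's records searchable and countable.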
Psychiatry Until the nineteenth century, the care of the insane was largely a communal and family responsibility rather than a medical one. The vast majority of the mentally ill were treated in domestic contexts with only the most unmanageable or burdensome likely to be institutionally confined. This situation was transformed radically from the late eighteenth century as, amid changing cultural conceptions of madness, a new-found optimism in the curability of insanity within the asylum setting emerged. Increasingly, lunacy was perceived less as a physiological condition than as a mental and moral one to which the correct response was persuasion, aimed at inculcating internal restraint, rather than external coercion. This new therapeutic sensibility, referred to as moral treatment, was epitomised in French physician Philippe Pinel's quasi-mythological unchaining of the lunatics of the Bicêtre Hospital in Paris and realised in an institutional setting with the foundation in 1796 of the Quaker-run York Retreat in England. From the early nineteenth century, as lay-led lunacy reform movements gained in influence, ever more state governments in the West extended their authority and responsibility over the mentally ill. Small-scale asylums, conceived as instruments to reshape both the mind and behaviour of the disturbed, proliferated across these regions. By the 1830s, moral treatment, together with the asylum itself, became increasingly medicalised and asylum doctors began to establish a distinct medical identity with the establishment in the 1840s of associations for their members in France, Germany, the United Kingdom and America, together with the founding of medico-psychological journals. Medical optimism in the capacity of the asylum to cure insanity soured by the close of the nineteenth century as the growth of the asylum population far outstripped that of the general population. Processes of long-term institutional segregation, allowing for the psychiatric conceptualisation of the natural course of mental illness, supported the perspective that the insane were a distinct population, subject to mental pathologies stemming from specific medical causes. As degeneration theory grew in influence from the mid-nineteenth century, heredity was seen as the central causal element in chronic mental illness, and, with national asylum systems overcrowded and insanity apparently undergoing an inexorable rise, the focus of psychiatric therapeutics shifted from a concern with treating the individual to maintaining the racial and biological health of national populations. Emil Kraepelin (1856–1926) introduced new medical categories of mental illness, which eventually came into psychiatric usage despite their basis in behavior rather than pathology or underlying cause. Shell shock among frontline soldiers exposed to heavy artillery bombardment was first diagnosed by British Army doctors in 1915. By 1916, similar symptoms were also noted in soldiers not exposed to explosive shocks, leading to questions as to whether the disorder was physical or psychiatric. In the 1920s surrealist opposition to psychiatry was expressed in a number of surrealist publications. In the 1930s several controversial medical practices were introduced including inducing seizures (by electroshock, insulin or other drugs) or cutting parts of the brain apart (leucotomy or lobotomy). Both came into widespread use by psychiatry, but there were grave concerns and much opposition on grounds of basic morality, harmful effects, or misuse. 
In the 1950s new psychiatric drugs, notably the antipsychotic chlorpromazine, were designed in laboratories and slowly came into preferred use. Although often accepted as an advance in some ways, there was some opposition, due to serious adverse effects such as tardive dyskinesia. Patients often opposed psychiatry and refused or stopped taking the drugs when not subject to psychiatric control. There was also increasing opposition to the use of psychiatric hospitals, and attempts to move people back into the community on a collaborative user-led group approach ("therapeutic communities") not controlled by psychiatry. Campaigns against masturbation were done in the Victorian era and elsewhere. Lobotomy was used until the 1970s to treat schizophrenia. This was denounced by the anti-psychiatric movement in the 1960s and later. Women It was very difficult for women to become doctors in any field before the 1970s. Elizabeth Blackwell became the first woman to formally study and practice medicine in the United States. She was a leader in women's medical education. While Blackwell viewed medicine as a means for social and moral reform, her student Mary Putnam Jacobi (1842–1906) focused on curing disease. At a deeper level of disagreement, Blackwell felt that women would succeed in medicine because of their humane female values, but Jacobi believed that women should participate as the equals of men in all medical specialties using identical methods, values and insights. In the Soviet Union although the majority of medical doctors were women, they were paid less than the mostly male factory workers. Asia Places China Finally in the 19th century, Western medicine was introduced at the local level by Christian medical missionaries from the London Missionary Society (Britain), the Methodist Church (Britain) and the Presbyterian Church (US). Benjamin Hobson (1816–1873) in 1839, set up a highly successful Wai Ai Clinic in Guangzhou, China. The Hong Kong College of Medicine for Chinese was founded in 1887 by the London Missionary Society, with its first graduate (in 1892) being Sun Yat-sen, who later led the Chinese Revolution (1911). The Hong Kong College of Medicine for Chinese was the forerunner of the School of Medicine of the University of Hong Kong, which started in 1911. Because of the social custom that men and women should not be near to one another, the women of China were reluctant to be treated by male doctors. The missionaries sent women doctors such as Dr. Mary Hannah Fulton (1854–1927). Supported by the Foreign Missions Board of the Presbyterian Church (US) she in 1902 founded the first medical college for women in China, the Hackett Medical College for Women, in Guangzhou. Japan European ideas of modern medicine were spread widely through the world by medical missionaries, and the dissemination of textbooks. Japanese elites enthusiastically embraced Western medicine after the Meiji Restoration of the 1860s. However they had been prepared by their knowledge of the Dutch and German medicine, for they had some contact with Europe through the Dutch. Highly influential was the 1765 edition of Hendrik van Deventer's pioneer work Nieuw Ligt ("A New Light") on Japanese obstetrics, especially on Katakura Kakuryo's publication in 1799 of Sanka Hatsumo ("Enlightenment of Obstetrics"). A cadre of Japanese physicians began to interact with Dutch doctors, who introduced smallpox vaccinations. 
By 1820 Japanese ranpô medical practitioners not only translated Dutch medical texts, they integrated their readings with clinical diagnoses. These men became leaders of the modernization of medicine in their country. They broke from Japanese traditions of closed medical fraternities and adopted the European approach of an open community of collaboration based on expertise in the latest scientific methods. Kitasato Shibasaburō (1853–1931) studied bacteriology in Germany under Robert Koch. In 1891 he founded the Institute of Infectious Diseases in Tokyo, which introduced the study of bacteriology to Japan. He and French researcher Alexandre Yersin went to Hong Kong in 1894, where Kitasato confirmed Yersin's discovery that the bacterium Yersinia pestis is the agent of the plague. In 1897 he isolated and described the organism that caused dysentery. He became the first dean of medicine at Keio University, and the first president of the Japan Medical Association. Japanese physicians immediately recognized the value of X-rays. They were able to purchase the equipment locally from the Shimadzu Company, which developed, manufactured, marketed, and distributed X-ray machines after 1900. Japan not only adopted German methods of public health in the home islands, but implemented them in its colonies, especially Korea and Taiwan, and after 1931 in Manchuria. A heavy investment in sanitation resulted in a dramatic increase in life expectancy.
Europe
The practice of medicine changed in the face of rapid advances in science, as well as new approaches by physicians. Hospital doctors began much more systematic analysis of patients' symptoms in diagnosis. Among the more powerful new techniques were anaesthesia, and the development of both antiseptic and aseptic operating theatres. Effective cures were developed for certain endemic infectious diseases. However, the decline in many of the most lethal diseases was due more to improvements in public health and nutrition than to advances in medicine. Medicine was revolutionized in the 19th century and beyond by advances in chemistry, laboratory techniques, and equipment. Old ideas of infectious disease epidemiology were gradually replaced by advances in bacteriology and virology. The Russian Orthodox Church sponsored seven orders of nursing sisters in the late 19th century. They ran hospitals, clinics, almshouses, pharmacies, and shelters as well as training schools for nurses. In the Soviet era (1917–1991), with the aristocratic sponsors gone, nursing became a low-prestige occupation based in poorly maintained hospitals.
Places
France
Paris and Vienna were the two leading medical centers on the Continent in the era 1750–1914. In the 1770s–1850s Paris became a world center of medical research and teaching. The "Paris School" emphasized that teaching and research should be based in large hospitals and promoted the professionalization of the medical profession and the emphasis on sanitation and public health. A major reformer was Jean-Antoine Chaptal (1756–1832), a physician who was Minister of Internal Affairs. He created the Paris Hospital, health councils, and other bodies. Louis Pasteur (1822–1895) was one of the most important founders of medical microbiology. He is remembered for his remarkable breakthroughs in the causes and prevention of diseases. His discoveries reduced mortality from puerperal fever, and he created the first vaccines for rabies and anthrax. His experiments supported the germ theory of disease.
He was best known to the general public for inventing a method to treat milk and wine to prevent them from causing sickness, a process that came to be called pasteurization. He is regarded as one of the three main founders of microbiology, together with Ferdinand Cohn and Robert Koch. He worked chiefly in Paris and in 1887 founded the Pasteur Institute there to perpetuate his commitment to basic research and its practical applications. As soon as his institute was created, Pasteur brought together scientists with various specialties. The first five departments were directed by Emile Duclaux (general microbiology research) and Charles Chamberland (microbe research applied to hygiene), as well as a biologist, Ilya Ilyich Mechnikov (morphological microbe research), and two physicians, Jacques-Joseph Grancher (rabies) and Emile Roux (technical microbe research). One year after the inauguration of the Institut Pasteur, Roux set up the first course of microbiology ever taught in the world, then entitled Cours de Microbie Technique (Course of microbe research techniques). It became the model for numerous research centers around the world named "Pasteur Institutes".
Vienna
The First Viennese School of Medicine, 1750–1800, was led by the Dutchman Gerard van Swieten (1700–1772), who aimed to put medicine on new scientific foundations—promoting unprejudiced clinical observation, botanical and chemical research, and introducing simple but powerful remedies. When the Vienna General Hospital opened in 1784, it at once became the world's largest hospital and physicians acquired a facility that gradually developed into the most important research centre. Progress ended with the Napoleonic wars and the government shutdown in 1819 of all liberal journals and schools; this caused a general return to traditionalism and eclecticism in medicine. Vienna was the capital of a diverse empire and attracted not just Germans but Czechs, Hungarians, Jews, Poles and others to its world-class medical facilities. After 1820 the Second Viennese School of Medicine emerged with the contributions of physicians such as Carl Freiherr von Rokitansky, Josef Škoda, Ferdinand Ritter von Hebra, and Ignaz Philipp Semmelweis. Basic medical science expanded and specialization advanced. Furthermore, the first dermatology, eye, as well as ear, nose, and throat clinics in the world were founded in Vienna. The textbook of ophthalmologist Georg Joseph Beer (1763–1821), Lehre von den Augenkrankheiten, combined practical research and philosophical speculations, and became the standard reference work for decades.
Berlin
After 1871 Berlin, the capital of the new German Empire, became a leading center for medical research. The Charité traces its origins to 1710. More than half of all German Nobel Prize winners in Physiology or Medicine, including Emil von Behring, Robert Koch and Paul Ehrlich, worked there. Koch (1843–1910) was a representative leader. He became famous for isolating Bacillus anthracis (1877), the tuberculosis bacillus (1882) and Vibrio cholerae (1883) and for his development of Koch's postulates. He was awarded the Nobel Prize in Physiology or Medicine in 1905 for his tuberculosis findings. Koch is one of the founders of microbiology and modern medicine. He inspired such major figures as Ehrlich, who discovered the first antibiotic, arsphenamine, and Gerhard Domagk, who created the first commercially available antibiotic, Prontosil.
North America
Events
American Civil War
In the American Civil War (1861–65), as was typical of the 19th century, more soldiers died of disease than in battle, and even larger numbers were temporarily incapacitated by wounds, disease and accidents. Conditions were poor in the Confederacy, where doctors and medical supplies were in short supply. The war had a dramatic long-term impact on medicine in the U.S., from surgical technique to hospitals to nursing and to research facilities. Weapon development, particularly the appearance of the Springfield Model 1861, mass-produced and much more accurate than muskets, led to generals underestimating the risks of long-range rifle fire, risks exemplified in the death of John Sedgwick and the disastrous Pickett's Charge. The rifles could shatter bone, forcing amputation, and longer ranges meant casualties were sometimes not quickly found. Evacuation of the wounded from the Second Battle of Bull Run took a week. As in earlier wars, untreated casualties sometimes survived unexpectedly due to maggots debriding the wound, an observation which led to the surgical use of maggots, still a useful method in the absence of effective antibiotics. The hygiene of the training and field camps was poor, especially at the beginning of the war when men who had seldom been far from home were brought together for training with thousands of strangers. First came epidemics of the childhood diseases of chicken pox, mumps, whooping cough, and, especially, measles. Operations in the South meant a dangerous and new disease environment, bringing diarrhea, dysentery, typhoid fever, and malaria. There were no antibiotics, so the surgeons prescribed coffee, whiskey, and quinine. Harsh weather, bad water, inadequate shelter in winter quarters, poor policing of camps, and dirty camp hospitals took their toll. This was a common scenario in wars from time immemorial, and conditions faced by the Confederate army were even worse. The Union responded by building army hospitals in every state. What was different in the Union was the emergence of skilled, well-funded medical organizers who took proactive action, especially in the much enlarged United States Army Medical Department, and the United States Sanitary Commission, a new private agency. Numerous other new agencies also targeted the medical and morale needs of soldiers, including the United States Christian Commission as well as smaller private agencies. The U.S. Army learned many lessons and in August 1886 it established the Hospital Corps.
Institutions
Johns Hopkins Hospital, founded in 1889, originated several modern medical practices, including residency and rounds.
People
Cardiovascular
Blood groups
The ABO blood group system was discovered in 1901 by Karl Landsteiner at the University of Vienna. Landsteiner experimented on his staff, mixing their various blood components together, and found that some people's blood agglutinated (clumped together) with other blood, while some did not. This led him to identify three blood groups, A, B, and C, which were later renamed A, B, and O. The less frequently found blood group AB was discovered later, in 1902, by Alfred von Decastello and Adriano Sturli. In 1937 Landsteiner and Alexander S. Wiener further discovered the Rh factor (misnamed from early thinking that this blood group was similar to that found in rhesus monkeys), whose antigens further determine blood reactions between people.
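The matching logic implied by these agglutination experiments can be written down in a few lines. The sketch below is only an illustration under a deliberately simplified model of red-cell antigens and ABO antibodies (Rh compatibility is reduced to a single rule, and plasma, subgroups, and sensitisation are ignored); it is not a clinical reference.

```python
# Red-cell antigens carried by each ABO/Rh group (simplified model).
ANTIGENS = {
    "O-": set(),         "O+": {"D"},
    "A-": {"A"},         "A+": {"A", "D"},
    "B-": {"B"},         "B+": {"B", "D"},
    "AB-": {"A", "B"},   "AB+": {"A", "B", "D"},
}

def abo_antibodies(group):
    # A recipient forms antibodies against the ABO antigens their own cells lack.
    return {"A", "B"} - ANTIGENS[group]

def red_cells_compatible(donor, recipient):
    # Donor red cells are compatible when they carry no ABO antigen the
    # recipient has antibodies against, and no D (Rh) antigen the recipient lacks.
    donor_ag = ANTIGENS[donor]
    abo_ok = not (donor_ag & abo_antibodies(recipient))
    rh_ok = "D" not in donor_ag or "D" in ANTIGENS[recipient]
    return abo_ok and rh_ok

# The familiar "universal donor" and "universal recipient" fall out of the rules:
assert all(red_cells_compatible("O-", r) for r in ANTIGENS)
assert all(red_cells_compatible(d, "AB+") for d in ANTIGENS)
```

Landsteiner's observation that some mixtures agglutinated and others did not corresponds here to the cases where the donor's antigens intersect the recipient's antibodies.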
The clinical significance of the Rh factor was demonstrated in a 1939 case study by Phillip Levine and Rufus Stetson, in which a mother who had recently given birth reacted to a transfusion of her husband's blood.
Blood transfusion
Canadian physician Norman Bethune developed a mobile blood-transfusion service for frontline operations in the Spanish Civil War (1936–1939), but ironically, he himself died of sepsis.
Pacemaker
In 1958, Arne Larsson in Sweden became the first patient to depend on an artificial cardiac pacemaker. He died in 2001 at age 86, having outlived its inventor, the surgeon, and 26 pacemakers.
Cancer
Cancer treatment has been developed with radiotherapy, chemotherapy and surgical oncology.
Diagnosis
X-ray imaging was the first kind of medical imaging; later, ultrasonic imaging, CT scanning, MR scanning and other imaging methods became available.
Disabilities
Prosthetics have improved with lightweight materials, and neural prosthetics emerged at the end of the 20th century.
Diseases
Oral rehydration therapy has been extensively used since the 1970s to treat cholera and other diarrhea-inducing infections. As infectious diseases have become less lethal, and the most common causes of death in developed countries are now tumors and cardiovascular diseases, these conditions have received increased attention in medical research.
Disease eradication
Malaria eradication
Starting in World War II, DDT was used as an insecticide to combat insect vectors carrying malaria, which was endemic in most tropical regions of the world. The first goal was to protect soldiers, but it was widely adopted as a public health device. In Liberia, for example, the United States had large military operations during the war and the U.S. Public Health Service began the use of DDT for indoor residual spraying (IRS) and as a larvicide, with the goal of controlling malaria in Monrovia, the Liberian capital. In the early 1950s, the project was expanded to nearby villages. In 1953, the World Health Organization (WHO) launched an antimalaria program in parts of Liberia as a pilot project to determine the feasibility of malaria eradication in tropical Africa. However, these projects encountered a spate of difficulties that foreshadowed the general retreat from malaria eradication efforts across tropical Africa by the mid-1960s.
Pandemics
1918 influenza pandemic (1918–1920)
The 1918 influenza pandemic was a global pandemic that occurred between 1918 and 1920, spreading at the end of World War I. Sometimes known as the Spanish flu, because contemporary opinion held that it had originated in Spain, the pandemic caused close to 50 million deaths around the world.
Public health
Public health measures became particularly important during the 1918 flu pandemic, which killed at least 50 million people around the world. It became an important case study in epidemiology. Bristow shows there was a gendered response of health caregivers to the pandemic in the United States. Male doctors were unable to cure the patients, and they felt like failures. Women nurses also saw their patients die, but they took pride in their success in fulfilling their professional role of caring for, ministering, comforting, and easing the last hours of their patients, and helping the families of the patients cope as well.
Research
Evidence-based medicine is a modern concept, not introduced into the literature until the 1990s.
Sexual and reproductive health The sexual revolution included taboo-breaking research in human sexuality such as the 1948 and 1953 Kinsey reports, invention of hormonal contraception, and the normalization of abortion and homosexuality in many countries. Family planning has promoted a demographic transition in most of the world. With threatening sexually transmitted infections, not least HIV, use of barrier contraception has become imperative. The struggle against HIV has improved antiretroviral treatments. Smoking Tobacco smoking as a cause of lung cancer was first researched in the 1920s, but was not widely supported by publications until the 1950s. Surgery Cardiac surgery was revolutionized in 1948 as open-heart surgery was introduced for the first time since 1925. In 1954 Joseph Murray, J. Hartwell Harrison and others accomplished the first kidney transplantation. Transplantations of other organs, such as heart, liver and pancreas, were also introduced during the later 20th century. The first partial face transplant was performed in 2005, and the first full one in 2010. By the end of the 20th century, microtechnology had been used to create tiny robotic devices to assist microsurgery using micro-video and fiber-optic cameras to view internal tissues during surgery with minimally invasive practices. Laparoscopic surgery was broadly introduced in the 1990s. Natural orifice surgery has followed. War Mexican Revolution (1910-1920) During the 19th century, large-scale wars were attended with medics and mobile hospital units which developed advanced techniques for healing massive injuries and controlling infections rampant in battlefield conditions. During the Mexican Revolution (1910–1920), General Pancho Villa organized hospital trains for wounded soldiers. Boxcars marked Servicio Sanitario ("sanitary service") were re-purposed as surgical operating theaters and areas for recuperation, and staffed by up to 40 Mexican and U.S. physicians. Severely wounded soldiers were shuttled back to base hospitals. World War I (1914-1918) Thousands of scarred troops provided the need for improved prosthetic limbs and expanded techniques in plastic surgery or reconstructive surgery. Those practices were combined to broaden cosmetic surgery and other forms of elective surgery. Interwar period (1918–1939) From 1917 to 1932, the American Red Cross moved into Europe with a battery of long-term child health projects. It built and operated hospitals and clinics, and organized antituberculosis and antityphus campaigns. A high priority involved child health programs such as clinics, better baby shows, playgrounds, fresh air camps, and courses for women on infant hygiene. Hundreds of U.S. doctors, nurses, and welfare professionals administered these programs, which aimed to reform the health of European youth and to reshape European public health and welfare along American lines. World War II (1939-1945) The advances in medicine made a dramatic difference for Allied troops, while the Germans and especially the Japanese and Chinese suffered from a severe lack of newer medicines, techniques and facilities. Harrison finds that the chances of recovery for a badly wounded British infantryman were as much as 25 times better than in the First World War. The reason was that: "By 1944 most casualties were receiving treatment within hours of wounding, due to the increased mobility of field hospitals and the extensive use of aeroplanes as ambulances. 
The care of the sick and wounded had also been revolutionized by new medical technologies, such as active immunization against tetanus, sulphonamide drugs, and penicillin." During the First World War, Alexis Carrel and Henry Dakin had developed the Carrel-Dakin method of treating wounds with an irrigation of Dakin's solution, a germicide which helped prevent gangrene. That war spurred the usage of Roentgen's X-rays and the electrocardiograph for the monitoring of internal bodily functions. It was followed in the inter-war period by the development of the first anti-bacterial agents such as the sulpha antibiotics.
Nazi and Japanese medical research
Unethical human subject research, and the killing of patients with disabilities, peaked during the Nazi era, with Nazi human experimentation and Aktion T4 during the Holocaust as the most significant examples. Many of the details of these and related events were the focus of the Doctors' Trial. Subsequently, principles of medical ethics, such as the Nuremberg Code, were introduced to prevent a recurrence of such atrocities. After 1937, the Japanese Army established programs of biological warfare in China. In Unit 731, Japanese doctors and research scientists conducted large numbers of vivisections and experiments on human beings, mostly Chinese victims.
Institutions
World Health Organization
The World Health Organization was founded in 1948 as a United Nations agency to improve global health. In most of the world, life expectancy has improved since then, and was about 67 years, and well above 80 years in some countries. Eradication of infectious diseases is an international effort, and several new vaccines have been developed during the post-war years, against infections such as measles, mumps, several strains of influenza and human papilloma virus. The long-known vaccine against smallpox finally eradicated the disease in the 1970s, and rinderpest was wiped out in 2011. Eradication of polio is underway. Tissue culture is important for the development of vaccines. Despite the early success of antiviral vaccines and antibacterial drugs, antiviral drugs were not introduced until the 1970s. Through the WHO, the international community has developed a response protocol against epidemics, displayed during the SARS epidemic in 2003, the Influenza A virus subtype H5N1 outbreaks from 2004 onwards, and the Ebola virus epidemic in West Africa.
People
Contemporary medicine
Antibiotics and antibiotic resistance
The discovery of penicillin in the 20th century by Alexander Fleming provided a vital line of defence against bacterial infections which, left untreated, often cause prolonged recovery periods and greatly increased chances of death. Its discovery and application within medicine allowed previously impossible treatments to take place, from cancer treatments and organ transplants to open-heart surgery. Throughout the 20th century, though, the overprescription of antibiotics to humans, as well as to animals kept under the conditions of intensive animal farming, has led to the development of antibiotic-resistant bacteria.
Robotics
HIV
First death
The early 21st century, facilitated by extensive global connections, international travel, and unprecedented human disruption of ecological systems, has been defined by a number of novel pandemics as well as continuing global pandemics from the 20th century.
Past
The 2002–2004 SARS outbreak affected a number of countries around the world and killed hundreds of people.
This outbreak gave rise to a number of lessons learnt in viral infection control, ranging from more effective isolation-room protocols to better hand-washing techniques for medical staff. A related coronavirus, SARS-CoV-2, would later cause the COVID-19 pandemic. A significant influenza strain, H1N1, caused a further pandemic between 2009 and 2010. Known as swine flu due to its indirect origin in pigs, it went on to infect over 700 million people.
Ongoing
The continuing HIV pandemic, starting in 1981, has infected and led to the deaths of millions of people around the world. Emerging and improved pre-exposure prophylaxis (PrEP) and post-exposure prophylaxis (PEP) treatments have proven effective in limiting the spread of HIV, alongside the combined use of safe sex methods, sexual health education, needle exchange programmes, and sexual health screenings. Efforts to find an HIV vaccine are ongoing, while health inequities have left certain population groups, such as trans women, and resource-limited regions, such as sub-Saharan Africa, at greater risk of contracting HIV compared with, for example, developed countries. The outbreak of COVID-19, starting in 2019, and the subsequent declaration of the COVID-19 pandemic by the WHO, was a major pandemic event of the early 21st century. It caused global disruption, millions of infections and deaths, and suffering throughout communities. The pandemic has also seen some of the largest logistical mobilisations of goods, medical equipment, medical professionals, and military personnel since World War II, highlighting its far-reaching impact.
Personalised medicine
The rise of personalised medicine in the 21st century has made it possible to develop diagnoses and treatments based on the individual characteristics of a person, rather than on the generic practices that defined 20th-century medicine. Areas like DNA sequencing, genetic mapping, gene therapy, imaging protocols, proteomics, stem cell therapy, and wireless health monitoring devices are all rising innovations that can help medical professionals fine-tune treatment to the individual.
Telemedicine
Remote surgery is another recent development, with the transatlantic Lindbergh operation in 2001 as a groundbreaking example.
Institutions
People
Themes in medical history
Racism in medicine
Racism has a long history in how medicine has evolved and established itself, both in terms of racism experienced by patients and professionals, and wider systemic violence within medical institutions and systems. See: medical racism in the United States, race and health, and scientific racism.
Women in medicine
Women have always served as healers and midwives since ancient times. However, the professionalization of medicine forced them increasingly to the sidelines. As hospitals multiplied they relied in Europe on orders of Roman Catholic nun-nurses, and German Protestant and Anglican deaconesses in the early 19th century. They were trained in traditional methods of physical care that involved little knowledge of medicine.
See also
Health care in the United States
History of dental treatments
History of herbalism
History of hospitals
History of medicine in Canada
History of medicine in the United States
History of nursing
History of pathology
History of pharmacy
History of surgery
Timeline of nursing history
Timeline of medicine and medical technology
History of health care (disambiguation)
External links
The history of medicine and surgery as portrayed by various artists
Directory of History of Medicine Collections, an index to the major collections in the United States and Canada, selected by the US National Institute of Health
Newsletter / Hannah Institute for the History of Medicine (1988–1997), Wellcome Collection
Anthropocentrism
Anthropocentrism is the belief that human beings are the central or most important entity on the planet. The term can be used interchangeably with humanocentrism, and some refer to the concept as human supremacy or human exceptionalism. From an anthropocentric perspective, humankind is seen as separate from nature and superior to it, and other entities (animals, plants, minerals, etc.) are viewed as resources for humans to use. It is possible to distinguish between at least three types of anthropocentrism: perceptual anthropocentrism (which "characterizes paradigms informed by sense-data from human sensory organs"); descriptive anthropocentrism (which "characterizes paradigms that begin from, center upon, or are ordered around Homo sapiens / 'the human'"); and normative anthropocentrism (which "characterizes paradigms that make assumptions or assertions about the superiority of Homo sapiens, its capacities, the primacy of its values, [or] its position in the universe"). Anthropocentrism tends to interpret the world in terms of human values and experiences. It is considered to be profoundly embedded in many modern human cultures and conscious acts. It is a major concept in the field of environmental ethics and environmental philosophy, where it is often considered to be the root cause of problems created by human action within the ecosphere. However, many proponents of anthropocentrism state that this is not necessarily the case: they argue that a sound long-term view acknowledges that the global environment must be made continually suitable for humans and that the real issue is shallow anthropocentrism.
Environmental philosophy
Some environmental philosophers have argued that anthropocentrism is a core part of a perceived human drive to dominate or "master" the Earth. Anthropocentrism is believed by some to be the central problematic concept in environmental philosophy, where it is used to draw attention to claims of a systematic bias in traditional Western attitudes to the non-human world that shapes humans' sense of self and identities. Val Plumwood argued that anthropocentrism plays an analogous role in green theory to androcentrism in feminist theory and ethnocentrism in anti-racist theory. Plumwood called human-centredness "anthrocentrism" to emphasise this parallel. One of the first extended philosophical essays addressing environmental ethics, John Passmore's Man's Responsibility for Nature has been criticised by defenders of deep ecology because of its anthropocentrism, which is often claimed to be constitutive of traditional Western moral thought. Indeed, defenders of anthropocentrism concerned with the ecological crisis contend that the maintenance of a healthy, sustainable environment is necessary for human well-being, as opposed to being valuable for its own sake. According to William Grey, the problem with a "shallow" viewpoint is not that it is human-centred: "What's wrong with shallow views is not their concern about the well-being of humans, but that they do not really consider enough in what that well-being consists. According to this view, we need to develop an enriched, fortified anthropocentric notion of human interest to replace the dominant short-term, sectional and self-regarding conception." In turn, Plumwood in Environmental Culture: The Ecological Crisis of Reason argued that Grey's anthropocentrism is inadequate.
Many devoted environmentalists hold a somewhat anthropocentric philosophical view, in that they argue in favor of saving the environment for the sake of human populations. Grey writes: "We should be concerned to promote a rich, diverse, and vibrant biosphere. Human flourishing may certainly be included as a legitimate part of such a flourishing." Such a concern for human flourishing amidst the flourishing of life as a whole, however, is said to be indistinguishable from that of deep ecology and biocentrism, which has been proposed as both an antithesis of anthropocentrism and as a generalised form of anthropocentrism. Judaeo–Christian traditions In the 1985 CBC series "A Planet For the Taking", David Suzuki explored the Old Testament roots of anthropocentrism and how it shaped human views of non-human animals. Some Christian proponents of anthropocentrism base their belief on the Bible, such as Genesis 1:26, in which God grants humankind dominion over the other living creatures of the Earth. The word "dominion" in Genesis has been used to justify an anthropocentric worldview, but recently some have found it controversial, viewing it as possibly a mistranslation from the Hebrew. However, an argument can be made that the Bible actually places all the importance on God as creator, and humans as merely another part of creation. Moses Maimonides, a Torah scholar who lived in the twelfth century AD, was renowned for his staunch opposition to anthropocentrism. He referred to humans as "just a drop in the bucket" and asserted that "humans are not the axis of the world". He also claimed that anthropocentric thinking is what leads humans to believe in the existence of evil things in nature. According to Rabbi Norman Lamm, Moses Maimonides "refuted the exaggerated ideas about the importance of man and urged us to abandon these fantasies". Catholic social teaching sees the pre-eminence of human beings over the rest of creation in terms of service rather than domination. Pope Francis, in his 2015 encyclical letter Laudato si', notes that "an obsession with denying any pre-eminence to the human person" endangers the concern which should be shown to protecting and upholding the welfare of all people, which he argues should rank alongside the "care for our common home" which is the subject of his letter. In the same text he acknowledges that "a mistaken understanding" of Christian belief "has at times led us to justify mistreating nature, to exercise tyranny over creation": in such actions, Christian believers have "not [been] faithful to the treasures of wisdom which we have been called to protect and preserve". In his follow-up exhortation, Laudate Deum (2023), he refers to a preferable understanding of "the unique and central value of the human being amid the marvellous concert of all God's creatures" as a "situated anthropocentrism". Human rights Anthropocentrism is the grounding for some naturalistic concepts of human rights. Defenders of anthropocentrism argue that it is the necessary fundamental premise to defend universal human rights, since what matters morally is simply being human. For example, noted philosopher Mortimer J. Adler wrote, "Those who oppose injurious discrimination on the moral ground that all human beings, being equal in their humanity, should be treated equally in all those respects that concern their common humanity, would have no solid basis in fact to support their normative principle."
Adler is stating here that denying what is now called human exceptionalism could lead to tyranny, writing that if humans ever came to believe that they do not possess a unique moral status, the intellectual foundation of their liberties collapses: "Why, then, should not groups of superior men be able to justify their enslavement, exploitation, or even genocide of inferior human groups on factual and moral grounds akin to those we now rely on to justify our treatment of the animals we harness as beasts of burden, that we butcher for food and clothing, or that we destroy as disease-bearing pests or as dangerous predators?" Author and anthropocentrism defender Wesley J. Smith from the Discovery Institute has written that human exceptionalism is what gives rise to human duties to each other, the natural world, and to treat animals humanely. Writing in A Rat is a Pig is a Dog is a Boy, a critique of animal rights ideology, he argues: "Because we are unquestionably a unique species—the only species capable of even contemplating ethical issues and assuming responsibilities—we uniquely are capable of apprehending the difference between right and wrong, good and evil, proper and improper conduct toward animals. Or to put it more succinctly, if being human isn't what requires us to treat animals humanely, what in the world does?" Moral status of animals Anthropocentrism is closely related to the notion of speciesism, defined by Richard D. Ryder as "a prejudice or attitude of bias in favour of the interests of members of one's own species and against those of members of other species". One of the earliest critics of anthropocentrism was J. Howard Moore, who in The Universal Kinship (1906) argued that Charles Darwin's On the Origin of Species (1859) "sealed the doom" of anthropocentrism. While human cognition is relatively advanced, many traits traditionally used to justify human exceptionalism (such as rationality, emotional complexity and social bonds) are not unique to humans. Research in ethology has shown that non-human animals, such as primates, elephants, and cetaceans, also demonstrate complex social structures, emotional depth, and problem-solving abilities. This challenges the claim that humans possess qualities absent in other animals that would justify denying them moral status. Animal welfare proponents attribute moral consideration to all sentient animals, proportional to their ability to have positive or negative mental experiences. This position is closely associated with the ethical theory of utilitarianism, which aims to maximize well-being, and is notably defended by Peter Singer. According to David Pearce, "other things being equal, equally strong interests should count equally." Jeremy Bentham is also known for raising the issue of animal welfare early, arguing that "the question is not, Can they reason? nor, Can they talk? but, Can they suffer?". Animal welfare proponents can in theory accept animal exploitation if the benefits outweigh the harms. But in practice, they generally consider that intensive animal farming causes a massive amount of suffering that outweighs the relatively minor benefit that humans get from consuming animals. Animal rights proponents argue that all animals have inherent rights, similar to human rights, and should not be used as means to human ends. Unlike animal welfare advocates, who focus on minimizing suffering, animal rights supporters often call for the total abolition of practices that exploit animals, such as intensive animal farming, animal testing, and hunting.
Prominent figures like Tom Regan argue that animals are "subjects of a life" with inherent value, deserving moral consideration regardless of the potential benefits humans may derive from using them. Cognitive psychology In cognitive psychology, the term anthropocentric thinking has been defined as "the tendency to reason about unfamiliar biological species or processes by analogy to humans." Reasoning by analogy is an attractive thinking strategy, and it can be tempting to apply one's own experience of being human to other biological systems. For example, because death is commonly felt to be undesirable, it may be tempting to form the misconception that death at a cellular level or elsewhere in nature is similarly undesirable (whereas in reality programmed cell death is an essential physiological phenomenon, and ecosystems also rely on death). Conversely, anthropocentric thinking can also lead people to underattribute human characteristics to other organisms. For instance, it may be tempting to wrongly assume that an animal that is very different from humans, such as an insect, will not share particular biological characteristics, such as reproduction or blood circulation. Anthropocentric thinking has predominantly been studied in young children (mostly up to the age of 10) by developmental psychologists interested in its relevance to biology education. In Japan, children as young as 6 have been found to attribute human characteristics to species unfamiliar to them, such as rabbits, grasshoppers or tulips. Although relatively little is known about its persistence at a later age, evidence exists that this pattern of human exceptionalist thinking can continue through young adulthood at least, even among students who have been increasingly educated in biology. The notion that anthropocentric thinking is an innate human characteristic has been challenged by studies of American children raised in urban environments, among whom it appears to emerge between the ages of 3 and 5 years as an acquired perspective. Children's recourse to anthropocentric thinking seems to vary with their experience of nature, and cultural assumptions about the place of humans in the natural world. For example, whereas young children who kept goldfish were found to think of frogs as being more goldfish-like, other children tended to think of frogs in terms of humans. More generally, children raised in rural environments appear to use anthropocentric thinking less than their urban counterparts because of their greater familiarity with different species of animals and plants. Studies involving children from some of the indigenous peoples of the Americas have found little use of anthropocentric thinking. A study of children among the Wichí people in South America showed a tendency to think of living organisms in terms of their perceived taxonomic similarities, ecological considerations, and animistic traditions, resulting in a much less anthropocentric view of the natural world than is experienced by many children in Western societies. In popular culture Fiction from all eras and societies depicts humans riding, eating, milking, and otherwise treating (non-human) animals as inferior. Occasional exceptions, such as talking animals, are treated as aberrations to the rule distinguishing people from animals. In science fiction, humanocentrism is the idea that humans, as both beings and as a species, are the superior sentients.
Essentially the equivalent of racial supremacy on a galactic scale, it entails intolerant discrimination against sentient non-humans, much like race supremacists discriminate against those not of their race. A prime example of this concept is utilized as a story element for the Mass Effect series. After humanity's first contact results in a brief war, many humans in the series develop suspicious or even hostile attitudes towards the game's various alien races. By the time of the first game, which takes place several decades after the war, many humans still retain such sentiments in addition to forming 'pro-human' organizations. This idea is countered by anti-humanism. At times, this ideal also includes fear of and superiority over strong AIs and cyborgs, downplaying the ideas of integration, cybernetic revolts, machine rule and Tilden's Laws of Robotics. Mark Twain mocked the belief in human supremacy in Letters from the Earth (written c. 1909, published 1962). The Planet of the Apes franchise focuses on the analogy of apes becoming the dominant species in society and the fall of humans (see also human extinction). In the 1968 film, Taylor, a human, states "take your stinking paws off me, you damn dirty ape!". In the 2001 film, this is contrasted with the quote of Attar, a gorilla: "take your stinking hands off me, you damn dirty human!". This parallels allusions that, in becoming the dominant species, the apes are becoming more like humans (anthropomorphism). In the film Battle for the Planet of the Apes, Virgil, an orangutan, states "ape has never killed ape, let alone an ape child. Aldo has killed an ape child. The branch did not break. It was cut with a sword." in reference to premeditated murder, a stereotypically human concept. Additionally, in Dawn of the Planet of the Apes, Caesar states "I always think...ape better than human. I see now...how much like them we are." In George Orwell's novel Animal Farm, this theme of anthropocentrism is also present. Whereas originally the animals planned for liberation from humans and animal equality, as evident from the "seven commandments" such as "whatever goes upon two legs is an enemy", "Whatever goes upon four legs, or has wings, is a friend", and "All animals are equal", the pigs would later abridge the commandments with statements such as "All animals are equal, but some animals are more equal than others" and "Four legs good, two legs better." The 2012 documentary The Superior Human? systematically analyzes anthropocentrism and concludes that value is fundamentally an opinion, and since life forms naturally value their own traits, most humans are misled to believe that they are actually more valuable than other species. This natural bias, according to the film, combined with a received sense of comfort and an excuse for the exploitation of non-humans, causes anthropocentrism to persist in society. In his 2009 book Eating Animals, Jonathan Safran Foer describes anthropocentrism as "The conviction that humans are the pinnacle of evolution, the appropriate yardstick by which to measure the lives of other animals, and the rightful owners of everything that lives."
Liberalism
Liberalism is a political and moral philosophy based on the rights of the individual, liberty, consent of the governed, political equality, right to private property and equality before the law. Liberals espouse various and often mutually warring views depending on their understanding of these principles but generally support private property, market economies, individual rights (including civil rights and human rights), liberal democracy, secularism, rule of law, economic and political freedom, freedom of speech, freedom of the press, freedom of assembly, and freedom of religion. Liberalism is frequently cited as the dominant ideology of modern history. Liberalism became a distinct movement in the Age of Enlightenment, gaining popularity among Western philosophers and economists. Liberalism sought to replace the norms of hereditary privilege, state religion, absolute monarchy, the divine right of kings and traditional conservatism with representative democracy, rule of law, and equality under the law. Liberals also ended mercantilist policies, royal monopolies, and other trade barriers, instead promoting free trade and marketization. Philosopher John Locke is often credited with founding liberalism as a distinct tradition based on the social contract, arguing that each man has a natural right to life, liberty and property, and governments must not violate these rights. While the British liberal tradition has emphasized expanding democracy, French liberalism has emphasized rejecting authoritarianism and is linked to nation-building. Leaders in the British Glorious Revolution of 1688, the American Revolution of 1776, and the French Revolution of 1789 used liberal philosophy to justify the armed overthrow of royal sovereignty. The 19th century saw liberal governments established in Europe and South America, and it was well-established alongside republicanism in the United States. In Victorian Britain, it was used to critique the political establishment, appealing to science and reason on behalf of the people. During the 19th and early 20th centuries, liberalism in the Ottoman Empire and the Middle East influenced periods of reform, such as the Tanzimat and Al-Nahda, and the rise of constitutionalism, nationalism, and secularism. These changes, along with other factors, helped to create a sense of crisis within Islam, which continues to this day, leading to Islamic revivalism. Before 1920, the main ideological opponents of liberalism were communism, conservatism, and socialism; liberalism then faced major ideological challenges from fascism and Marxism–Leninism as new opponents. During the 20th century, liberal ideas spread even further, especially in Western Europe, as liberal democracies found themselves as the winners in both world wars and the Cold War. Liberals sought and established a constitutional order that prized important individual freedoms, such as freedom of speech and freedom of association; an independent judiciary and public trial by jury; and the abolition of aristocratic privileges. Later waves of modern liberal thought and struggle were strongly influenced by the need to expand civil rights. Liberals have advocated gender and racial equality in their drive to promote civil rights, and global civil rights movements in the 20th century achieved several objectives towards both goals. Other goals often accepted by liberals include universal suffrage and universal access to education. 
In Europe and North America, the establishment of social liberalism (often called simply liberalism in the United States) became a key component in expanding the welfare state. Today, liberal parties continue to wield power and influence throughout the world. The fundamental elements of contemporary society have liberal roots. The early waves of liberalism popularised economic individualism while expanding constitutional government and parliamentary authority. Etymology and definition Liberal, liberty, libertarian, and libertine all trace their etymology to liber, a root from Latin that means "free". One of the first recorded instances of liberal occurred in 1375 when it was used to describe the liberal arts in the context of an education desirable for a free-born man. The word's early connection with the classical education of a medieval university soon gave way to a proliferation of different denotations and connotations. Liberal could refer to "free in bestowing" as early as 1387, "made without stint" in 1433, "freely permitted" in 1530, and "free from restraint"—often as a pejorative remark—in the 16th and the 17th centuries. In the 16th-century Kingdom of England, liberal could have positive or negative attributes in referring to someone's generosity or indiscretion. In Much Ado About Nothing, William Shakespeare wrote of "a liberal villaine" who "hath ... confest his vile encounters". With the rise of the Enlightenment, the word acquired decisively more positive undertones, defined as "free from narrow prejudice" in 1781 and "free from bigotry" in 1823. In 1815, the first use of liberalism appeared in English. In Spain, the liberales, the first group to use the liberal label in a political context, fought for decades to implement the Spanish Constitution of 1812. From 1820 to 1823, during the Trienio Liberal, King Ferdinand VII was compelled by the liberales to swear to uphold the 1812 Constitution. By the middle of the 19th century, liberal was used as a politicised term for parties and movements worldwide. Over time, the meaning of liberalism began to diverge in different parts of the world. According to the Encyclopædia Britannica: "In the United States, liberalism is associated with the welfare-state policies of the New Deal programme of the Democratic administration of Pres. Franklin D. Roosevelt, whereas in Europe it is more commonly associated with a commitment to limited government and laissez-faire economic policies." Consequently, the ideas of individualism and laissez-faire economics previously associated with classical liberalism are key components of modern American conservatism and movement conservatism, and became the basis for the emerging school of modern American libertarian thought. In this American context, liberal is often used as a pejorative. Yellow is the political colour most commonly associated with liberalism. In Europe and Latin America, liberalism means a moderate form of classical liberalism and includes both conservative liberalism (centre-right liberalism) and social liberalism (centre-left liberalism). In North America, liberalism almost exclusively refers to social liberalism. The dominant Canadian party is the Liberal Party, and the Democratic Party is usually considered liberal in the United States. In the United States, conservative liberals are usually called conservatives in a broad sense. 
Philosophy Liberalism—both as a political current and an intellectual tradition—is mostly a modern phenomenon that started in the 17th century, although some liberal philosophical ideas had precursors in classical antiquity and Imperial China. The Roman Emperor Marcus Aurelius praised "the idea of a polity administered with regard to equal rights and equal freedom of speech, and the idea of a kingly government which respects most of all the freedom of the governed". Scholars have also recognised many principles familiar to contemporary liberals in the works of several Sophists and the Funeral Oration by Pericles. Liberal philosophy is the culmination of an extensive intellectual tradition that has examined and popularized some of the modern world's most important and controversial principles. Its immense scholarly output has been characterized as containing "richness and diversity", but that diversity often has meant that liberalism comes in different formulations and presents a challenge to anyone looking for a clear definition. Major themes Although all liberal doctrines possess a common heritage, scholars frequently assume that those doctrines contain "separate and often contradictory streams of thought". The objectives of liberal theorists and philosophers have differed across various times, cultures and continents. The diversity of liberalism can be gleaned from the numerous qualifiers that liberal thinkers and movements have attached to the term "liberalism", including classical, egalitarian, economic, social, the welfare state, ethical, humanist, deontological, perfectionist, democratic, and institutional, to name a few. Despite these variations, liberal thought does exhibit a few definite and fundamental conceptions. Political philosopher John Gray identified the common strands in liberal thought as individualist, egalitarian, meliorist and universalist. The individualist element avers the ethical primacy of the human being against the pressures of social collectivism; the egalitarian element assigns the same moral worth and status to all individuals; the meliorist element asserts that successive generations can improve their sociopolitical arrangements, and the universalist element affirms the moral unity of the human species and marginalises local cultural differences. The meliorist element has been the subject of much controversy, defended by thinkers such as Immanuel Kant, who believed in human progress, while suffering criticism from thinkers such as Jean-Jacques Rousseau, who instead believed that human attempts to improve themselves through social cooperation would fail. The liberal philosophical tradition has searched for validation and justification through several intellectual projects. The moral and political suppositions of liberalism have been based on traditions such as natural rights and utilitarian theory, although sometimes liberals even request support from scientific and religious circles. Through all these strands and traditions, scholars have identified the following major common facets of liberal thought: believing in equality and individual liberty; supporting private property and individual rights; supporting the idea of limited constitutional government; and recognising the importance of related values such as pluralism, toleration, autonomy, bodily integrity, and consent. Classical and modern Enlightenment philosophers such as John Locke and Thomas Hobbes are given credit for shaping liberal ideas.
These ideas were first drawn together and systematized as a distinct ideology by the English philosopher John Locke, generally regarded as the father of modern liberalism. Thomas Hobbes attempted to determine the purpose and the justification of governing authority in post-civil war England. Employing the idea of a state of nature — a hypothetical war-like scenario prior to the state — he constructed the idea of a social contract that individuals enter into to guarantee their security and, in so doing, form the State, concluding that only an absolute sovereign would be fully able to sustain such security. Hobbes had developed the concept of the social contract, according to which individuals in the anarchic and brutal state of nature came together and voluntarily ceded some of their rights to an established state authority, which would create laws to regulate social interactions to mitigate or mediate conflicts and enforce justice. Whereas Hobbes advocated a strong monarchical commonwealth (the Leviathan), Locke developed the then-radical notion that government acquires consent from the governed, which has to be constantly present for the government to remain legitimate. While adopting Hobbes's idea of a state of nature and social contract, Locke nevertheless argued that when the monarch becomes a tyrant, it violates the social contract, which protects life, liberty and property as a natural right. He concluded that the people have a right to overthrow a tyrant. By placing the security of life, liberty and property as the supreme value of law and authority, Locke formulated the basis of liberalism based on social contract theory. To these early enlightenment thinkers, securing the essential amenities of life—liberty and private property—required forming a "sovereign" authority with universal jurisdiction. His influential Two Treatises (1690), the foundational text of liberal ideology, outlined his major ideas. Once humans moved out of their natural state and formed societies, Locke argued, "that which begins and actually constitutes any political society is nothing but the consent of any number of freemen capable of a majority to unite and incorporate into such a society. And this is that, and that only, which did or could give beginning to any lawful government in the world". The stringent insistence that lawful government did not have a supernatural basis was a sharp break with the dominant theories of governance, which advocated the divine right of kings and echoed the earlier thought of Aristotle. Dr John Zvesper described this new thinking: "In the liberal understanding, there are no citizens within the regime who can claim to rule by natural or supernatural right, without the consent of the governed". Locke had other intellectual opponents besides Hobbes. In the First Treatise, Locke aimed his arguments first and foremost at one of the doyens of 17th-century English conservative philosophy: Robert Filmer. Filmer's Patriarcha (1680) argued for the divine right of kings by appealing to biblical teaching, claiming that the authority granted to Adam by God gave successors of Adam in the male line of descent a right of dominion over all other humans and creatures in the world. However, Locke disagreed so thoroughly and obsessively with Filmer that the First Treatise is almost a sentence-by-sentence refutation of Patriarcha. Reinforcing his respect for consensus, Locke argued that "conjugal society is made up by a voluntary compact between men and women". 
Locke maintained that the grant of dominion in Genesis was not to men over women, as Filmer believed, but to humans over animals. Locke was not a feminist by modern standards, but the first major liberal thinker in history accomplished an equally major task on the road to making the world more pluralistic: integrating women into social theory. Locke also originated the concept of the separation of church and state. Based on the social contract principle, Locke argued that the government lacked authority in the realm of individual conscience, as this was something rational people could not cede to the government for it or others to control. For Locke, this created a natural right to the liberty of conscience, which he argued must remain protected from any government authority. In his Letters Concerning Toleration, he also formulated a general defence for religious toleration. Three arguments are central: Earthly judges, the state in particular, and human beings generally, cannot dependably evaluate the truth claims of competing religious standpoints; Even if they could, enforcing a single "true religion" would not have the desired effect because belief cannot be compelled by violence; Coercing religious uniformity would lead to more social disorder than allowing diversity. Locke was also influenced by the liberal ideas of Presbyterian politician and poet John Milton, who was a staunch advocate of freedom in all its forms. Milton argued for disestablishment as the only effective way of achieving broad toleration. Rather than force a man's conscience, the government should recognise the persuasive force of the gospel. As assistant to Oliver Cromwell, Milton also drafted a constitution of the independents (Agreement of the People; 1647) that strongly stressed the equality of all humans as a consequence of democratic tendencies. In his Areopagitica, Milton provided one of the first arguments for the importance of freedom of speech—"the liberty to know, to utter, and to argue freely according to conscience, above all liberties". His central argument was that the individual could use reason to distinguish right from wrong. To exercise this right, everyone must have unlimited access to the ideas of his fellow men in "a free and open encounter", which will allow good arguments to prevail. In a natural state of affairs, liberals argued, humans were driven by the instincts of survival and self-preservation, and the only way to escape from such a dangerous existence was to form a common and supreme power capable of arbitrating between competing human desires. This power could be formed in the framework of a civil society that allows individuals to make a voluntary social contract with the sovereign authority, transferring their natural rights to that authority in return for the protection of life, liberty and property. These early liberals often disagreed about the most appropriate form of government, but all believed that liberty was natural and its restriction needed strong justification. Liberals generally believed in limited government, although several liberal philosophers decried government outright, with Thomas Paine writing, "government even in its best state is a necessary evil". James Madison and Montesquieu As part of the project to limit the powers of government, liberal theorists such as James Madison and Montesquieu conceived the notion of separation of powers, a system designed to equally distribute governmental authority among the executive, legislative and judicial branches. 
Governments had to realise, liberals maintained, that legitimate government only exists with the consent of the governed, so poor and improper governance gave the people the authority to overthrow the ruling order through all possible means, even through outright violence and revolution, if needed. Contemporary liberals, heavily influenced by social liberalism, have supported limited constitutional government while advocating for state services and provisions to ensure equal rights. Modern liberals claim that formal or official guarantees of individual rights are irrelevant when individuals lack the material means to benefit from those rights and call for a greater role for government in the administration of economic affairs. Early liberals also laid the groundwork for the separation of church and state. As heirs of the Enlightenment, liberals believed that any given social and political order emanated from human interactions, not from divine will. Many liberals were openly hostile to religious belief but most concentrated their opposition to the union of religious and political authority, arguing that faith could prosper independently without official sponsorship or administration by the state. Beyond identifying a clear role for government in modern society, liberals have also argued over the meaning and nature of the most important principle in liberal philosophy: liberty. From the 17th century until the 19th century, liberals (from Adam Smith to John Stuart Mill) conceptualised liberty as the absence of interference from government and other individuals, claiming that all people should have the freedom to develop their unique abilities and capacities without being sabotaged by others. Mill's On Liberty (1859), one of the classic texts in liberal philosophy, proclaimed, "the only freedom which deserves the name, is that of pursuing our own good in our own way". Support for laissez-faire capitalism is often associated with this principle, with Friedrich Hayek arguing in The Road to Serfdom (1944) that reliance on free markets would preclude totalitarian control by the state. Coppet Group and Benjamin Constant The development into maturity of modern classical in contrast to ancient liberalism took place before and soon after the French Revolution. One of the historic centres of this development was at Coppet Castle near Geneva, where the eponymous Coppet group gathered under the aegis of the exiled writer and salonnière, Madame de Staël, in the period between the establishment of Napoleon's First Empire (1804) and the Bourbon Restoration of 1814–1815. The unprecedented concentration of European thinkers who met there was to have a considerable influence on the development of nineteenth-century liberalism and, incidentally, romanticism. They included Wilhelm von Humboldt, Jean de Sismondi, Charles Victor de Bonstetten, Prosper de Barante, Henry Brougham, Lord Byron, Alphonse de Lamartine, Sir James Mackintosh, Juliette Récamier and August Wilhelm Schlegel. Among them was also one of the first thinkers to go by the name of "liberal", the Edinburgh University-educated Swiss Protestant, Benjamin Constant, who looked to the United Kingdom rather than to ancient Rome for a practical model of freedom in a large mercantile society. He distinguished between the "Liberty of the Ancients" and the "Liberty of the Moderns". 
The Liberty of the Ancients was a participatory republican liberty, which gave the citizens the right to influence politics directly through debates and votes in the public assembly. In order to support this degree of participation, citizenship was a burdensome moral obligation requiring a considerable investment of time and energy. Generally, this required a sub-group of slaves to do much of the productive work, leaving citizens free to deliberate on public affairs. Ancient Liberty was also limited to relatively small and homogenous male societies, in which citizens could congregate in one place to transact public affairs. In contrast, the Liberty of the Moderns was based on the possession of civil liberties, the rule of law, and freedom from excessive state interference. Direct participation would be limited: a necessary consequence of the size of modern states and the inevitable result of creating a mercantile society where there were no slaves, but almost everybody had to earn a living through work. Instead, the voters would elect representatives who would deliberate in Parliament on the people's behalf and would save citizens from daily political involvement. The importance of Constant's writings on the liberty of the ancients and that of the "moderns" has informed the understanding of liberalism, as has his critique of the French Revolution. The British philosopher and historian of ideas, Sir Isaiah Berlin, has pointed to the debt owed to Constant. British liberalism Liberalism in Britain was based on core concepts such as classical economics, free trade, laissez-faire government with minimal intervention and taxation and a balanced budget. Classical liberals were committed to individualism, liberty and equal rights. Writers such as John Bright and Richard Cobden opposed aristocratic privilege and property, which they saw as an impediment to developing a class of yeoman farmers. Beginning in the late 19th century, a new conception of liberty entered the liberal intellectual arena. This new kind of liberty became known as positive liberty to distinguish it from the prior negative version, and it was first developed by British philosopher T. H. Green. Green rejected the idea that humans were driven solely by self-interest, emphasising instead the complex circumstances involved in the evolution of our moral character. In a very profound step for the future of modern liberalism, he also tasked society and political institutions with enhancing individual freedom and identity and developing moral character, will and reason, and tasked the state with creating the conditions that allow for these, so that genuine choice becomes possible. Green thus foreshadowed the new liberty as the freedom to act rather than merely to avoid suffering from the acts of others. Rather than viewing society as populated by selfish individuals, as previous liberal conceptions had, Green viewed society as an organic whole in which all individuals have a duty to promote the common good. His ideas spread rapidly and were developed by other thinkers such as Leonard Trelawny Hobhouse and John A. Hobson. In a few years, this New Liberalism had become the essential social and political programme of the Liberal Party in Britain, and it would encircle much of the world in the 20th century. In addition to examining negative and positive liberty, liberals have tried to understand the proper relationship between liberty and democracy.
As they struggled to expand suffrage rights, liberals increasingly understood that people left out of the democratic decision-making process were liable to the "tyranny of the majority", a concept explained in Mill's On Liberty and in Alexis de Tocqueville's Democracy in America (1835). As a response, liberals began demanding proper safeguards to thwart majorities in their attempts at suppressing the rights of minorities. Besides liberty, liberals have developed several other principles important to the construction of their philosophical structure, such as equality, pluralism and tolerance. Highlighting the confusion over the first principle, Voltaire commented, "equality is at once the most natural and at times the most chimerical of things". All forms of liberalism assume in some basic sense that individuals are equal. In maintaining that people are naturally equal, liberals assume they all possess the same right to liberty. In other words, no one is inherently entitled to enjoy the benefits of liberal society more than anyone else, and all people are equal subjects before the law. Beyond this basic conception, liberal theorists diverge in their understanding of equality. American philosopher John Rawls emphasised the need to ensure equality under the law and the equal distribution of material resources that individuals required to develop their aspirations in life. Libertarian thinker Robert Nozick disagreed with Rawls, championing the former version of equality (equality under the law) in its Lockean form. To contribute to the development of liberty, liberals also have promoted concepts like pluralism and tolerance. By pluralism, liberals refer to the proliferation of opinions and beliefs that characterise a stable social order. Unlike many of their competitors and predecessors, liberals do not seek conformity and homogeneity in how people think. Their efforts have been geared towards establishing a governing framework that harmonises and minimises conflicting views but still allows those views to exist and flourish. For liberal philosophy, pluralism leads easily to toleration. Since individuals will hold diverging viewpoints, liberals argue, they ought to uphold and respect the right of one another to disagree. From the liberal perspective, toleration was initially connected to religious toleration, with Baruch Spinoza condemning "the stupidity of religious persecution and ideological wars". Toleration also played a central role in the ideas of Kant and John Stuart Mill. Both thinkers believed that society would contain different conceptions of a good ethical life and that people should be allowed to make their own choices without interference from the state or other individuals. Liberal economic theory Adam Smith's The Wealth of Nations, published in 1776, followed by the French liberal economist Jean-Baptiste Say's treatise on Political Economy, published in 1803 and expanded in 1830 with practical applications, were to provide most of the ideas of economics until the publication of John Stuart Mill's Principles in 1848. Smith addressed the motivation for economic activity, the causes of prices and wealth distribution, and the policies the state should follow to maximise wealth. Smith wrote that as long as supply, demand, prices and competition were left free of government regulation, the pursuit of material self-interest, rather than altruism, maximises society's wealth through profit-driven production of goods and services.
An "invisible hand" directed individuals and firms to work toward the nation's good as an unintended consequence of efforts to maximise their gain. This provided a moral justification for accumulating wealth, which some had previously viewed as sinful. Smith assumed that workers could be paid as low as was necessary for their survival, which David Ricardo and Thomas Robert Malthus later transformed into the "iron law of wages". His main emphasis was on the benefit of free internal and international trade, which he thought could increase wealth through specialisation in production. He also opposed restrictive trade preferences, state grants of monopolies and employers' organisations and trade unions. Government should be limited to defence, public works and the administration of justice, financed by taxes based on income. Smith was one of the progenitors of the idea, which was long central to classical liberalism and has resurfaced in the globalisation literature of the later 20th and early 21st centuries, that free trade promotes peace. Smith's economics was carried into practice in the 19th century with the lowering of tariffs in the 1820s, the repeal of the Poor Relief Act that had restricted the mobility of labour in 1834 and the end of the rule of the East India Company over India in 1858. In his Treatise (Traité d'économie politique), Say states that any production process requires effort, knowledge and the "application" of the entrepreneur. He sees entrepreneurs as intermediaries in the production process who combine productive factors such as land, capital and labour to meet the consumers' demands. As a result, they play a central role in the economy through their coordinating function. He also highlights qualities essential for successful entrepreneurship and focuses on judgement, in that they have continued to assess market needs and the means to meet them. This requires an "unerring market sense". Say views entrepreneurial income primarily as the high revenue paid in compensation for their skills and expert knowledge. He does so by contrasting the enterprise and supply-of-capital functions, distinguishing the entrepreneur's earnings on the one hand and the remuneration of capital on the other. This differentiates his theory from that of Joseph Schumpeter, who describes entrepreneurial rent as short-term profits which compensate for high risk (Schumpeterian rent). Say himself also refers to risk and uncertainty along with innovation without analysing them in detail. Say is also credited with Say's law, or the law of markets which may be summarised as "Aggregate supply creates its own aggregate demand", and "Supply creates its own demand", or "Supply constitutes its own demand" and "Inherent in supply is the need for its own consumption". The related phrase "supply creates its own demand" was coined by John Maynard Keynes, who criticized Say's separate formulations as amounting to the same thing. Some advocates of Say's law who disagree with Keynes have claimed that Say's law can be summarized more accurately as "production precedes consumption" and that what Say is stating is that for consumption to happen, one must produce something of value so that it can be traded for money or barter for consumption later. Say argues, "products are paid for with products" (1803, p. 153) or "a glut occurs only when too much resource is applied to making one product and not enough to another" (1803, pp. 178–179). 
Related reasoning appears in the work of John Stuart Mill and earlier in that of his Scottish classical economist father, James Mill (1808). Mill senior restates Say's law in 1808: "production of commodities creates, and is the one and universal cause which creates a market for the commodities produced". In addition to Smith's and Say's legacies, Thomas Malthus' theories of population and David Ricardo's Iron law of wages became central doctrines of classical economics. Meanwhile, Jean-Baptiste Say challenged Smith's labour theory of value, believing that prices were determined by utility, and also emphasised the critical role of the entrepreneur in the economy. However, neither of those observations became accepted by British economists at the time. Malthus wrote An Essay on the Principle of Population in 1798, becoming a major influence on classical liberalism. Malthus claimed that population growth would outstrip food production because the population grew geometrically while food production grew arithmetically. As people were provided with food, they would reproduce until their growth outstripped the food supply. Nature would then provide a check to growth in the forms of vice and misery. No gains in income could prevent this, and any welfare for the poor would be self-defeating. The poor were, in fact, responsible for their problems, which could have been avoided through self-restraint. Several liberals, including Adam Smith and Richard Cobden, argued that the free exchange of goods between nations would lead to world peace. Smith argued that as societies progressed, the spoils of war would rise, but the costs of war would rise further, making war difficult and costly for industrialised nations. Cobden believed that military expenditures worsened the state's welfare and benefited a small but concentrated elite minority, combining his Little Englander beliefs with opposition to the economic restrictions of mercantilist policies. To Cobden and many classical liberals, those who advocated peace must also advocate free markets. Utilitarianism was seen by British governments as a political justification for implementing economic liberalism, an idea dominating economic policy from the 1840s. Although utilitarianism prompted legislative and administrative reform, and John Stuart Mill's later writings foreshadowed the welfare state, it was mainly used as a premise for a laissez-faire approach. The central concept of utilitarianism, developed by Jeremy Bentham, was that public policy should seek to provide "the greatest happiness of the greatest number". While this could be interpreted as a justification for state action to reduce poverty, it was used by classical liberals to justify inaction with the argument that the net benefit to all individuals would be higher. His philosophy proved highly influential on government policy and led to increased Benthamite attempts at government social control, including Robert Peel's Metropolitan Police, prison reforms, the workhouses and asylums for the mentally ill. Keynesian economics During the Great Depression, the English economist John Maynard Keynes (1883–1946) gave the definitive liberal response to the economic crisis. Keynes had been "brought up" as a classical liberal but, especially after World War I, became increasingly a welfare or social liberal. A prolific writer, he had, among many other works, begun a theoretical work in the 1920s examining the relationship between unemployment, money and prices.
Keynes was deeply critical of the British government's austerity measures during the Great Depression. He believed budget deficits were a good thing, a product of recessions. He wrote: "For Government borrowing of one kind or another is nature's remedy, so to speak, for preventing business losses from being, in so severe a slump as the present one, so great as to bring production altogether to a standstill". At the height of the Great Depression in 1933, Keynes published The Means to Prosperity, which contained specific policy recommendations for tackling unemployment in a global recession, chiefly counter cyclical public spending. The Means to Prosperity contains one of the first mentions of the multiplier effect. Keynes's magnum opus, The General Theory of Employment, Interest and Money, was published in 1936 and served as a theoretical justification for the interventionist policies Keynes favoured for tackling a recession. The General Theory challenged the earlier neo-classical economic paradigm, which had held that the market would naturally establish full employment equilibrium if it were unfettered by government interference. Classical economists believed in Say's law, which states that "supply creates its own demand" and that in a free market, workers would always be willing to lower their wages to a level where employers could profitably offer them jobs. An innovation from Keynes was the concept of price stickiness, i.e. the recognition that, in reality, workers often refuse to lower their wage demands even in cases where a classical economist might argue it is rational for them to do so. Due in part to price stickiness, it was established that the interaction of "aggregate demand" and "aggregate supply" may lead to stable unemployment equilibria, and in those cases, it is the state and not the market that economies must depend on for their salvation. The book advocated activist economic policy by the government to stimulate demand in times of high unemployment, for example, by spending on public works. In 1928, he wrote: "Let us be up and doing, using our idle resources to increase our wealth. ... With men and plants unemployed, it is ridiculous to say that we cannot afford these new developments. It is precisely with these plants and these men that we shall afford them". Where the market failed to allocate resources properly, the government was required to stimulate the economy until private funds could start flowing again—a "prime the pump" kind of strategy designed to boost industrial production. Liberal feminist theory Liberal feminism, the dominant tradition in feminist history, is an individualistic form of feminist theory that focuses on women's ability to maintain their equality through their actions and choices. Liberal feminists hope to eradicate all barriers to gender equality, claiming that the continued existence of such barriers eviscerates the individual rights and freedoms ostensibly guaranteed by a liberal social order. They argue that society believes women are naturally less intellectually and physically capable than men; thus, it tends to discriminate against women in the academy, the forum and the marketplace. Liberal feminists believe that "female subordination is rooted in a set of customary and legal constraints that blocks women's entrance to and success in the so-called public world". They strive for sexual equality via political and legal reform. 
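As a purely illustrative aside that is not drawn from the source text, the "multiplier effect" mentioned above can be sketched with a simple worked equation, assuming a closed economy in which households spend a constant fraction c (the marginal propensity to consume) of any additional income:

\Delta Y = \Delta G \left(1 + c + c^{2} + c^{3} + \cdots\right) = \frac{\Delta G}{1 - c}, \qquad 0 < c < 1.

For example, with c = 0.8, an initial increase in public spending of one million pounds would, in this idealised model, raise aggregate income by roughly five million pounds; this is the arithmetic behind Keynes's case that counter-cyclical public works can stimulate demand during a slump.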
British philosopher Mary Wollstonecraft (1759–1797) is widely regarded as the pioneer of liberal feminism, with A Vindication of the Rights of Woman (1792) expanding the boundaries of liberalism to include women in the political structure of liberal society. In her writings, such as A Vindication of the Rights of Woman, Wollstonecraft commented on society's view of women and encouraged women to use their voices in making decisions separate from those previously made for them. Wollstonecraft "denied that women are, by nature, more pleasure seeking and pleasure giving than men. She reasoned that if they were confined to the same cages that trap women, men would develop the same flawed characters. What Wollstonecraft most wanted for women was personhood". John Stuart Mill was also an early proponent of feminism. In his essay The Subjection of Women (1861, published 1869), Mill attempted to prove that the legal subjugation of women is wrong and that it should give way to perfect equality. He believed that both sexes should have equal rights under the law and that "until conditions of equality exist, no one can possibly assess the natural differences between women and men, distorted as they have been. What is natural to the two sexes can only be found out by allowing both to develop and use their faculties freely". Mill frequently spoke of this imbalance and wondered if women were able to feel the same "genuine unselfishness" that men did in providing for their families. This unselfishness Mill advocated is the one "that motivates people to take into account the good of society as well as the good of the individual person or small family unit". Like Mary Wollstonecraft, Mill compared sexual inequality to slavery, arguing that women's husbands are often just as abusive as masters and that one human being controls nearly every aspect of life for another. In his book The Subjection of Women, Mill argues that three major parts of women's lives are hindering them: society and gender construction, education and marriage. Equity feminism is a form of liberal feminism discussed since the 1980s, specifically a kind of classically liberal or libertarian feminism. Steven Pinker, an evolutionary psychologist, defines equity feminism as "a moral doctrine about equal treatment that makes no commitments regarding open empirical issues in psychology or biology". Barry Kuhle asserts that equity feminism is compatible with evolutionary psychology in contrast to gender feminism. Social liberal theory Jean Charles Léonard Simonde de Sismondi's New Principles of Political Economy (French: Nouveaux principes d'économie politique, ou de la richesse dans ses rapports avec la population) (1819) represents the first comprehensive liberal critique of early capitalism and laissez-faire economics, and his writings, which were studied by John Stuart Mill and Karl Marx among many others, had a profound influence on both liberal and socialist responses to the failures and contradictions of industrial society. By the end of the 19th century, the principles of classical liberalism were being increasingly challenged by downturns in economic growth, a growing perception of the evils of poverty, unemployment and relative deprivation present within modern industrial cities, as well as the agitation of organised labour. The ideal of the self-made individual who could make his or her place in the world through hard work and talent seemed increasingly implausible.
A major political reaction against the changes introduced by industrialisation and laissez-faire capitalism came from conservatives concerned about social balance, although socialism later became a more important force for change and reform. Some Victorian writers, including Charles Dickens, Thomas Carlyle and Matthew Arnold, became early influential critics of social injustice. New liberals began to adapt the old language of liberalism to confront these difficult circumstances, which they believed could only be resolved through a broader and more interventionist conception of the state. An equal right to liberty could not be established merely by ensuring that individuals did not physically interfere with each other or by having impartially formulated and applied laws. More positive and proactive measures were required to ensure that every individual would have an equal opportunity for success. John Stuart Mill contributed enormously to liberal thought by combining elements of classical liberalism with what eventually became known as the new liberalism. Mill's 1859 On Liberty addressed the nature and limits of the power that can be legitimately exercised by society over the individual. He gave an impassioned defence of free speech, arguing that free discourse is a necessary condition for intellectual and social progress. Mill defined "social liberty" as protection from "the tyranny of political rulers". He introduced many different concepts of the form tyranny can take, referred to as social tyranny and tyranny of the majority. Social liberty meant limits on the ruler's power through obtaining recognition of political liberties or rights and establishing a system of "constitutional checks". His definition of liberty, influenced by Joseph Priestley and Josiah Warren, was that the individual ought to be free to do as he wishes unless he harms others. However, although Mill's initial economic philosophy supported free markets and argued that progressive taxation penalised those who worked harder, he later altered his views toward a more socialist bent, adding chapters to his Principles of Political Economy in defence of a socialist outlook and defending some socialist causes, including the radical proposal that the whole wage system be abolished in favour of a co-operative wage system. Another early liberal convert to greater government intervention was T. H. Green. Seeing the effects of alcohol, he believed that the state should foster and protect the social, political and economic environments in which individuals will have the best chance of acting according to their consciences. The state should intervene only where there is a clear, proven and strong tendency of liberty to enslave the individual. Green regarded the national state as legitimate only to the extent that it upholds a system of rights and obligations most likely to foster individual self-realisation. The New Liberalism or social liberalism movement emerged in about 1900 in Britain. The New Liberals, including intellectuals like L. T. Hobhouse and John A. Hobson, saw individual liberty as something achievable only under favourable social and economic circumstances. In their view, the poverty, squalor and ignorance in which many people lived made it impossible for freedom and individuality to flourish. New Liberals believed these conditions could be ameliorated only through collective action coordinated by a strong, welfare-oriented, interventionist state. 
It supports a mixed economy that includes public and private property in capital goods. Principles that can be described as social liberal have been based upon or developed by philosophers such as John Stuart Mill, Eduard Bernstein, John Dewey, Carlo Rosselli, Norberto Bobbio and Chantal Mouffe. Other important social liberal figures include Guido Calogero, Piero Gobetti, Leonard Trelawny Hobhouse and R. H. Tawney. Liberal socialism has been particularly prominent in British and Italian politics. Anarcho-capitalist theory Classical liberalism advocates free trade under the rule of law. Anarcho-capitalism goes one step further, with law enforcement and the courts being provided by private companies. Various theorists have espoused legal philosophies similar to anarcho-capitalism. One of the first liberals to discuss the possibility of privatizing the protection of individual liberty and property was France's Jakob Mauvillon in the 18th century. Later in the 1840s, Julius Faucher and Gustave de Molinari advocated the same. In his essay The Production of Security, Molinari argued: "No government should have the right to prevent another government from going into competition with it, or to require consumers of security to come exclusively to it for this commodity". Molinari and this new type of anti-state liberal grounded their reasoning on liberal ideals and classical economics. Historian and libertarian Ralph Raico argued that what these liberal philosophers "had come up with was a form of individualist anarchism, or, as it would be called today, anarcho-capitalism or market anarchism". Unlike the liberalism of Locke, which saw the state as evolving from society, the anti-state liberals saw a fundamental conflict between the voluntary interactions of people, i.e. society, and the institutions of force, i.e. the state. This society versus state idea was expressed in various ways: natural society vs artificial society, liberty vs authority, society of contract vs society of authority and industrial society vs militant society, to name a few. The anti-state liberal tradition in Europe and the United States continued after Molinari in the early writings of Herbert Spencer and thinkers such as Paul Émile de Puydt and Auberon Herbert. However, the first person to use the term anarcho-capitalism was Murray Rothbard. In the mid-20th century, Rothbard synthesized elements from the Austrian School of economics, classical liberalism and 19th-century American individualist anarchists Lysander Spooner and Benjamin Tucker (while rejecting their labour theory of value and the norms they derived from it). Anarcho-capitalism advocates the elimination of the state in favour of individual sovereignty, private property and free markets. Anarcho-capitalists believe that in the absence of statute (law by decree or legislation), society would improve itself through the discipline of the free market (or what its proponents describe as a "voluntary society"). In a theoretical anarcho-capitalist society, law enforcement, courts and all other security services would be operated by privately funded competitors rather than centrally through taxation. Money and other goods and services would be privately and competitively provided in an open market. 
Anarcho-capitalists say personal and economic activities under anarcho-capitalism would be regulated by victim-based dispute resolution organizations under tort and contract law rather than by statute through centrally determined punishment under what they describe as "political monopolies". A Rothbardian anarcho-capitalist society would operate under a mutually agreed-upon libertarian "legal code which would be generally accepted, and which the courts would pledge themselves to follow". Although enforcement methods vary, this pact would recognize self-ownership and the non-aggression principle (NAP). History Isolated strands of liberal thought had existed in Eastern philosophy since the Chinese Spring and Autumn period and Western philosophy since the Ancient Greeks. The economist Murray Rothbard suggested that Chinese Taoist philosopher Laozi was the first libertarian, likening Laozi's ideas on government to Friedrich Hayek's theory of spontaneous order. These ideas were first drawn together and systematized as a distinct ideology by the English philosopher John Locke, generally regarded as the father of modern liberalism. The first major signs of liberal politics emerged in modern times. These ideas began to coalesce at the time of the English Civil War. The Levellers, a largely ignored minority political movement that primarily consisted of Puritans, Presbyterians, and Quakers, called for freedom of religion, frequent convening of parliament and equality under the law. The Glorious Revolution of 1688 enshrined parliamentary sovereignty and the right of revolution in Britain and was referred to by author Steven Pincus as the "first modern liberal revolution". The development of liberalism continued throughout the 18th century with the burgeoning Enlightenment ideals of the era. This period of profound intellectual vitality questioned old traditions and influenced several European monarchies throughout the 18th century. Political tension between England and its American colonies grew after 1765 and the Seven Years' War over the issue of taxation without representation, culminating in the American Revolutionary War and, eventually, the Declaration of Independence. After the war, the leaders debated about how to move forward. The Articles of Confederation, written in 1776, now appeared inadequate to provide security or even a functional government. The Confederation Congress called a Constitutional Convention in 1787, which resulted in the writing of a new Constitution of the United States establishing a federal government. In the context of the times, the Constitution was a republican and liberal document. It remains the oldest liberal governing document in effect worldwide. The two key events that marked the triumph of liberalism in France were the abolition of feudalism in France on the night of 4 August 1789, which marked the collapse of feudal and old traditional rights and privileges and restrictions, as well as the passage of the Declaration of the Rights of Man and of the Citizen in August, itself based on the U.S. Declaration of Independence from 1776. 
During the Napoleonic Wars, the French brought to Western Europe the liquidation of the feudal system, the liberalization of property laws, the end of seigneurial dues, the abolition of guilds, the legalization of divorce, the disintegration of Jewish ghettos, the collapse of the Inquisition, the end of the Holy Roman Empire, the elimination of church courts and religious authority, the establishment of the metric system and equality under the law for all men. Napoleon's most lasting achievement, the Civil Code, served as "an object of emulation all over the globe" but also perpetuated further discrimination against women under the banner of the "natural order". Classical liberalism developed into maturity in Britain before and after the French Revolution. Adam Smith's The Wealth of Nations, published in 1776, was to provide most of the ideas of economics, at least until the publication of John Stuart Mill's Principles in 1848. Smith addressed the motivation for economic activity, the causes of prices and wealth distribution, and the policies the state should follow to maximise wealth. The radical liberal movement began in the 1790s in England and concentrated on parliamentary and electoral reform, emphasizing natural rights and popular sovereignty. Radicals like Richard Price and Joseph Priestley saw parliamentary reform as a first step toward dealing with their many grievances, including the treatment of Protestant Dissenters, the slave trade, high prices and high taxes. In Latin America, liberal unrest dated back to the 18th century, and liberal agitation there eventually led to independence from the imperial powers of Spain and Portugal. The new regimes were generally liberal in their political outlook and employed the philosophy of positivism, which emphasized the truth of modern science, to buttress their positions. In the United States, a vicious war ensured the integrity of the nation and the abolition of slavery in the South. Historian Don H. Doyle has argued that the Union victory in the American Civil War (1861–1865) greatly boosted the course of liberalism. In the 19th century, English liberal political philosophers were the most influential in the global tradition of liberalism. During the 19th and early 20th century, in the Ottoman Empire and the Middle East, liberalism influenced periods of reform, such as the Tanzimat and Al-Nahda; the rise of secularism, constitutionalism and nationalism; and different intellectuals and religious groups and movements, like the Young Ottomans and Islamic Modernism. Prominent figures of the era included Rifa'a al-Tahtawi, Namık Kemal and İbrahim Şinasi. However, the reformist ideas and trends did not reach the common population successfully, as the books, periodicals, and newspapers were accessible primarily to intellectuals and segments of the emerging middle class. Many Muslims saw them as foreign influences on the world of Islam. That perception complicated reformist efforts made by Middle Eastern states. These changes, along with other factors, helped to create a sense of crisis within Islam, which continues to this day. This sense of crisis led to Islamic revivalism. Abolitionist and suffrage movements spread, along with representative and democratic ideals. France established an enduring republic in the 1870s. However, nationalism also spread rapidly after 1815. A mixture of liberal and nationalist sentiments in Italy and Germany brought about the unification of the two countries in the late 19th century.
A liberal regime came to power in Italy and ended the secular power of the Popes. However, the Vatican launched a counter-crusade against liberalism. Pope Pius IX issued the Syllabus of Errors in 1864, condemning liberalism in all its forms. In many countries, liberal forces responded by expelling the Jesuit order. By the end of the nineteenth century, the principles of classical liberalism were being increasingly challenged, and the ideal of the self-made individual seemed increasingly implausible. Victorian writers like Charles Dickens, Thomas Carlyle and Matthew Arnold were early influential critics of social injustice. Liberalism gained momentum at the beginning of the 20th century. The bastion of autocracy, the Russian Tsar, was overthrown in the first phase of the Russian Revolution. The Allied victory in the First World War and the collapse of four empires seemed to mark the triumph of liberalism across the European continent, not just among the victorious allies but also in Germany and the newly created states of Eastern Europe. Militarism, as typified by Germany, was defeated and discredited. As Blinkhorn argues, the liberal themes were ascendant in terms of "cultural pluralism, religious and ethnic toleration, national self-determination, free market economics, representative and responsible government, free trade, unionism, and the peaceful settlement of international disputes through a new body, the League of Nations". In the Middle East, liberalism led to constitutional periods, like the Ottoman First and Second Constitutional Era and the Persian constitutional period, but it declined in the late 1930s due to the growth and opposition of Islamism and pan-Arab nationalism. However, many intellectuals advocated liberal values and ideas. Prominent liberals were Taha Hussein, Ahmed Lutfi el-Sayed, Tawfiq al-Hakim, Abd El-Razzak El-Sanhuri and Muhammad Mandur. In the United States, modern liberalism traces its history to the popular presidency of Franklin D. Roosevelt, who initiated the New Deal in response to the Great Depression and won an unprecedented four elections. The New Deal coalition established by Roosevelt left a strong legacy and influenced many future American presidents, including John F. Kennedy. Meanwhile, the definitive liberal response to the Great Depression was given by the British economist John Maynard Keynes, who had begun a theoretical work examining the relationship between unemployment, money and prices back in the 1920s. The worldwide Great Depression, starting in 1929, hastened the discrediting of liberal economics and strengthened calls for state control over economic affairs. Economic woes prompted widespread unrest in the European political world, leading to the rise of fascism as an ideology and a movement against liberalism and communism, especially in Nazi Germany and Italy. The rise of fascism in the 1930s eventually culminated in World War II, the deadliest conflict in human history. The Allies prevailed in the war by 1945, and their victory set the stage for the Cold War between the Communist Eastern Bloc and the liberal Western Bloc. In Iran, liberalism enjoyed wide popularity. In April 1951, the National Front became the governing coalition when democratically elected Mohammad Mosaddegh, a liberal nationalist, took office as the Prime Minister. However, his way of governing conflicted with Western interests, and he was removed from power in a coup on 19 August 1953. The coup ended the dominance of liberalism in the country's politics. 
Among the various regional and national movements, the civil rights movement in the United States during the 1960s strongly highlighted the liberal efforts for equal rights. The Great Society project launched by President Lyndon B. Johnson oversaw the creation of Medicare and Medicaid, the establishment of Head Start and the Job Corps as part of the War on Poverty and the passage of the landmark Civil Rights Act of 1964, an altogether rapid series of events that some historians have dubbed the "Liberal Hour". The Cold War featured extensive ideological competition and several proxy wars, but the widely feared World War III between the Soviet Union and the United States never occurred. While communist states and liberal democracies competed against one another, an economic crisis in the 1970s inspired a move away from Keynesian economics, especially under Margaret Thatcher in the United Kingdom and Ronald Reagan in the United States. This trend, known as neoliberalism, constituted a paradigm shift away from the post-war Keynesian consensus, which lasted from 1945 to 1980. Meanwhile, nearing the end of the 20th century, communist states in Eastern Europe collapsed precipitously, leaving liberal democracies as the only major forms of government in the West. At the beginning of World War II, the number of democracies worldwide was about the same as it had been forty years before. After 1945, liberal democracies spread very quickly but then retreated. In The Spirit of Democracy, Larry Diamond argues that by 1974 "dictatorship, not democracy, was the way of the world" and that "barely a quarter of independent states chose their governments through competitive, free, and fair elections". Diamond says that democracy bounced back, and by 1995 the world was "predominantly democratic". However, liberalism still faces challenges, especially with the phenomenal growth of China as a model combination of authoritarian government and economic liberalism. Liberalism is frequently cited as the dominant ideology of the modern era. Criticism and support Liberalism has drawn criticism and support from various ideological groups throughout its history. Despite these complex relationships, some scholars have argued that liberalism actually "rejects ideological thinking" altogether, largely because such thinking could lead to unrealistic expectations for human society. Conservatism The first major proponent of modern conservative thought, Edmund Burke, offered a blistering critique of the French Revolution by assailing the liberal pretensions to the power of rationality and the natural equality of all humans. Conservatives have also attacked what they perceive as the reckless liberal pursuit of progress and material gains, arguing that such preoccupations undermine traditional social values rooted in community and continuity. However, a few variations of conservatism, like liberal conservatism, expound some of the same ideas and principles championed by classical liberalism, including "small government and thriving capitalism". In the book Why Liberalism Failed (2018), Patrick Deneen argued that liberalism has led to income inequality, cultural decline, atomization, nihilism, the erosion of freedoms, and the growth of powerful, centralized bureaucracies. The book also argues that liberalism has replaced old values of community, religion and tradition with self-interest. 
Russian President Vladimir Putin believes that "liberalism has become obsolete" and claims that the vast majority of people in the world oppose multiculturalism, immigration, and rights for LGBT people.
Catholicism
One of the most outspoken early critics of liberalism was the Roman Catholic Church, whose opposition resulted in lengthy power struggles between national governments and the Church. A movement associated with modern democracy, Christian democracy, hopes to spread Catholic social ideas and has gained a large following in some European nations. The early roots of Christian democracy developed as a reaction against the industrialisation and urbanisation associated with laissez-faire liberalism in the 19th century.
Anarchism
Anarchists criticize the liberal social contract, arguing that it creates a state that is "oppressive, violent, corrupt, and inimical to liberty."
Marxism
Karl Marx rejected the foundational aspects of liberal theory, hoping to destroy both the state and the liberal distinction between society and the individual while fusing the two into a collective whole designed to overthrow the developing capitalist order of the 19th century. Vladimir Lenin stated that, in contrast with Marxism, liberal science defends wage slavery. However, some proponents of liberalism, such as Thomas Paine, George Henry Evans, and Silvio Gesell, were critics of wage slavery. Deng Xiaoping argued that liberalization would destroy the political stability of the People's Republic of China and the Chinese Communist Party and make development difficult, and that it was inherently capitalistic; he termed it bourgeois liberalization. Thus some socialists accuse the economic doctrines of liberalism, such as individual economic freedom, of giving rise to what they view as a system of exploitation that goes against the democratic principles of liberalism, while some liberals oppose the wage slavery that the economic doctrines of capitalism allow.
Feminism
Some feminists argue that liberalism's emphasis on distinguishing between the private and public spheres in society "allow[s] the flourishing of bigotry and intolerance in the private sphere and to require respect for equality only in the public sphere", making "liberalism vulnerable to the right-wing populist attack. Political liberalism has rejected the feminist call to recognize that the personal is political and has relied on political institutions and processes as barriers against illiberalism."
Social democracy
Social democracy, an ideology advocating modification of capitalism along progressive lines, emerged in the 20th century and was influenced by socialism. Broadly defined as a project that aims to correct, through government reform, what it regards as the intrinsic defects of capitalism by reducing inequality, social democracy does not oppose the existence of the state. Several commentators have noted strong similarities between social liberalism and social democracy, with one political scientist calling American liberalism "bootleg social democracy" due to the absence of a significant social democratic tradition in the United States.
Fascism
Fascists accuse liberalism of materialism and a lack of spiritual values. In particular, fascism opposes liberalism for its materialism, rationalism, individualism and utilitarianism. Fascists believe that the liberal emphasis on individual freedom produces national divisiveness, but many fascists agree with liberals in their support of private property rights and a market economy.
See also The American Prospect, an American political magazine that backs social liberal policies Black liberalism Constitutional liberalism Friedrich Naumann Foundation, a global advocacy organisation that supports liberal ideas and policies The Liberal, a former British magazine dedicated to coverage of liberal politics and liberal culture Liberalism by country Muscular liberalism Old Liberals Orange Book liberalism Paradox of tolerance Rule according to higher law References Notes Bibliography and further reading Alterman, Eric. Why We're Liberals. New York: Viking Adult, 2008. . Ameringer, Charles. Political parties of the Americas, 1980s to 1990s. Westport: Greenwood Publishing Group, 1992. . Amin, Samir. The liberal virus: permanent war and the americanization of the world. New York: Monthly Review Press, 2004. Antoninus, Marcus Aurelius. The Meditations of Marcus Aurelius Antoninus. New York: Oxford University Press, 2008. . Arnold, N. Scott. Imposing values: an essay on liberalism and regulation. New York: Oxford University Press, 2009. . Auerbach, Alan and Kotlikoff, Laurence. Macroeconomics Cambridge: MIT Press, 1998. . Barzilai, Gad. Communities and Law: Politics and Cultures of Legal Identities University of Michigan Press, 2003. . Bell, Duncan. "What is Liberalism?" Political Theory, 42/6 (2014). Brack, Duncan and Randall, Ed (eds.). Dictionary of Liberal Thought. London: Politico's Publishing, 2007. . George Brandis, Tom Harley & Donald Markwell (editors). Liberals Face the Future: Essays on Australian Liberalism, Melbourne: Oxford University Press, 1984. Alan Bullock & Maurice Shock (editors). The Liberal Tradition: From Fox to Keynes, Oxford: Clarendon Press, 1967. Chodos, Robert et al. The unmaking of Canada: the hidden theme in Canadian history since 1945. Halifax: James Lorimer & Company, 1991. . Coker, Christopher. Twilight of the West. Boulder: Westview Press, 1998. . Taverne, Dick. The march of unreason: science, democracy, and the new fundamentalism. New York: Oxford University Press, 2005. . Diamond, Larry. The Spirit of Democracy. New York: Macmillan, 2008. . Dobson, John. Bulls, Bears, Boom, and Bust. Santa Barbara: ABC-CLIO, 2006. . Dorrien, Gary. The making of American liberal theology. Louisville: Westminster John Knox Press, 2001. . Farr, Thomas. World of Faith and Freedom. New York: Oxford University Press US, 2008. . Fawcett, Edmund. Liberalism: The Life of an Idea. Princeton: Princeton University Press, 2014. . Feuer, Lewis. Spinoza and the Rise of Liberalism. New Brunswick: Transaction 1984. Flamm, Michael and Steigerwald, David. Debating the 1960s: liberal, conservative, and radical perspectives. Lanham: Rowman & Littlefield, 2008. . Freeden, Michael, Javier Fernández-Sebastián, et al. In Search of European Liberalisms: Concepts, Languages, Ideologies (2019) Gallagher, Michael et al. Representative government in modern Europe. New York: McGraw Hill, 2001. . Gifford, Rob. China Road: A Journey into the Future of a Rising Power. Random House, 2008. . Godwin, Kenneth et al. School choice tradeoffs: liberty, equity, and diversity. Austin: University of Texas Press, 2002. . Gould, Andrew. Origins of liberal dominance. Ann Arbor: University of Michigan Press, 1999. . Gray, John. Liberalism. Minneapolis: University of Minnesota Press, 1995. . Grigsby, Ellen. Analyzing Politics: An Introduction to Political Science. Florence: Cengage Learning, 2008. . Gross, Jonathan. Byron: the erotic liberal. Lanham: Rowman & Littlefield Publishers, Inc., 2001. . 
Hafner, Danica and Ramet, Sabrina. Democratic transition in Slovenia: value transformation, education, and media. College Station: Texas A&M University Press, 2006. . Handelsman, Michael. Culture and Customs of Ecuador. Westport: Greenwood Press, 2000. . Hartz, Louis. The liberal tradition in America. New York: Houghton Mifflin Harcourt, 1955. . Hodge, Carl. Encyclopedia of the Age of Imperialism, 1800–1944. Westport: Greenwood Publishing Group, 2008. . Jensen, Pamela Grande. Finding a new feminism: rethinking the woman question for liberal democracy. Lanham: Rowman & Littlefield, 1996. . Johnson, Paul. The Renaissance: A Short History. New York: Modern Library, 2002. . Karatnycky, Adrian. Freedom in the World. Piscataway: Transaction Publishers, 2000. . Karatnycky, Adrian et al. Nations in transit, 2001. Piscataway: Transaction Publishers, 2001. . Kelly, Paul. Liberalism. Cambridge: Polity Press, 2005. . Kirchner, Emil. Liberal parties in Western Europe. Cambridge: Cambridge University Press, 1988. . Knoop, Todd. Recessions and Depressions Westport: Greenwood Press, 2004. . Koerner, Kirk. Liberalism and its critics. Oxford: Taylor & Francis, 1985. . Lightfoot, Simon. Europeanizing social democracy?: The rise of the Party of European Socialists. New York: Routledge, 2005. . Losurdo, Domenico. Liberalism: a counter-history. London: Verso, 2011. Mackenzie, G. Calvin and Weisbrot, Robert. The liberal hour: Washington and the politics of change in the 1960s. New York: Penguin Group, 2008. . Manent, Pierre and Seigel, Jerrold. An Intellectual History of Liberalism. Princeton: Princeton University Press, 1996. . Donald Markwell. John Maynard Keynes and International Relations: Economic Paths to War and Peace, Oxford University Press, 2006. Mazower, Mark. Dark Continent. New York: Vintage Books, 1998. . Monsma, Stephen and Soper, J. Christopher. The Challenge of Pluralism: Church and State in Five Democracies. Lanham: Rowman & Littlefield, 2008. . Palmer, R.R. and Joel Colton. A History of the Modern World. New York: McGraw-Hill, Inc., 1995. . Perry, Marvin et al. Western Civilization: Ideas, Politics, and Society. Florence, KY: Cengage Learning, 2008. . Pierson, Paul. The New Politics of the Welfare State. New York: Oxford University Press, 2001. . Puddington, Arch. Freedom in the World: The Annual Survey of Political Rights and Civil Liberties. Lanham: Rowman & Littlefield, 2007. . Riff, Michael. Dictionary of modern political ideologies. Manchester: Manchester University Press, 1990. . Rivlin, Alice. Reviving the American Dream Washington D.C.: Brookings Institution Press, 1992. . Ros, Agustin. Profits for all?: the cost and benefits of employee ownership. New York: Nova Publishers, 2001. . Routledge, Paul et al. The geopolitics reader. New York: Routledge, 2006. . Ryan, Alan. The Philosophy of John Stuart Mill. Humanity Books: 1970. . Ryan, Alan. The Making of Modern Liberalism (Princeton University Press, 2012). Ryan, Alan. On Politics: A History of Political Thought: From Herodotus to the Present. Allen Lane, 2012. . Shell, Jonathan. The Unconquerable World: Power, Nonviolence, and the Will of the People. New York: Macmillan, 2004. . Shaw, G. K. Keynesian Economics: The Permanent Revolution. Aldershot, England: Edward Elgar Publishing Company, 1988. . Sinclair, Timothy. Global governance: critical concepts in political science. Oxford: Taylor & Francis, 2004. . Smith, Steven B. Spinoza, Liberalism, and the Question of Jewish Identity. New Haven: Yale University Press 1997. Song, Robert. 
Christianity and Liberal Society. Oxford: Oxford University Press, 2006. . Stacy, Lee. Mexico and the United States. New York: Marshall Cavendish Corporation, 2002. . Steindl, Frank. Understanding Economic Recovery in the 1930s. Ann Arbor: University of Michigan Press, 2004. . Susser, Bernard. Political ideology in the modern world. Upper Saddle River: Allyn and Bacon, 1995. . . Van den Berghe, Pierre. The Liberal dilemma in South Africa. Oxford: Taylor & Francis, 1979. . Van Schie, P. G. C. and Voermann, Gerrit. The dividing line between success and failure: a comparison of Liberalism in the Netherlands and Germany in the 19th and 20th Centuries. Berlin: LIT Verlag Berlin-Hamburg-Münster, 2006. . Venturelli, Shalini. Liberalizing the European media: politics, regulation, and the public sphere. New York: Oxford University Press, 1998. . Wallerstein, Immanuel. The Modern World-System IV: Centrist Liberalism trimphant 1789–1914. Berkeley and Los Angeles: University of California Press, 2011. Whitfield, Stephen. Companion to twentieth-century America. Hoboken: Wiley-Blackwell, 2004. . Wolfe, Alan. The Future of Liberalism. New York: Random House, Inc., 2009. . Zvesper, John. Nature and liberty. New York: Routledge, 1993. . Britain Adams, Ian. Ideology and politics in Britain today. Manchester: Manchester University Press, 1998. . Cook, Richard. The Grand Old Man. Whitefish: Kessinger Publishing, 2004. on Gladstone. Falco, Maria. Feminist interpretations of Mary Wollstonecraft. State College: Penn State Press, 1996. . Forster, Greg. John Locke's politics of moral consensus. Cambridge: Cambridge University Press, 2005. . Locke, John. A Letter Concerning Toleration. 1689. Locke, John. Two Treatises of Government. reprint, New York: Hafner Publishing Company, Inc., 1947. . Wempe, Ben. T. H. Green's theory of positive freedom: from metaphysics to political theory. Exeter: Imprint Academic, 2004. . France Frey, Linda and Frey, Marsha. The French Revolution. Westport: Greenwood Press, 2004. . Hanson, Paul. Contesting the French Revolution. Hoboken: Blackwell Publishing, 2009. . Leroux, Robert, Political Economy and Liberalism in France: The Contributions of Frédéric Bastiat, London and New York, Routledge, 2011. Leroux, Robert, and David Hart (eds), French Liberalism in the 19th century. An Anthology, London and New York, Routledge, 2012. Lyons, Martyn. Napoleon Bonaparte and the Legacy of the French Revolution. New York: St. Martin's Press, Inc., 1994. . Shlapentokh, Dmitry. The French Revolution and the Russian Anti-Democratic Tradition. Edison, NJ: Transaction Publishers, 1997. . External links Liberalism—entry at Encyclopædia Britannica "Guide to Classical Liberal Scholarship". Egalitarianism History of political thought Human rights concepts Individualism Political culture Political science terminology Social theories
Hundred Years' War
The Hundred Years' War (; 1337–1453) was a conflict between the kingdoms of England and France and a civil war in France during the Late Middle Ages. It emerged from feudal disputes over the Duchy of Aquitaine and was triggered by a claim to the French throne made by Edward III of England. The war grew into a broader military, economic, and political struggle involving factions from across Western Europe, fuelled by emerging nationalism on both sides. The periodisation of the war typically charts it as taking place over 116 years. However, it was an intermittent conflict which was frequently interrupted by external factors, such as the Black Death, and several years of truces. The Hundred Years' War was a significant conflict in the Middle Ages. During the war, five generations of kings from two rival dynasties fought for the throne of France, which was then the dominant kingdom in Western Europe. The war had a lasting effect on European history: both sides produced innovations in military technology and tactics, including professional standing armies and artillery, that permanently changed European warfare. Chivalry, which reached its height during the conflict, subsequently declined. Stronger national identities took root in both kingdoms, which became more centralized and gradually emerged as global powers. The term "Hundred Years' War" was adopted by later historians as a historiographical periodisation to encompass dynastically related conflicts, constructing the longest military conflict in European history. The war is commonly divided into three phases separated by truces: the Edwardian War (1337–1360), the Caroline War (1369–1389), and the Lancastrian War (1415–1453). Each side drew many allies into the conflict, with English forces initially prevailing; however, the French forces under the House of Valois ultimately retained control over the Kingdom of France. The French and English monarchies thereafter remained separate, despite the monarchs of England (later Britain) styling themselves as sovereigns of France until 1802. Overview Origins The root causes of the conflict can be traced to the crisis of 14th-century Europe. The outbreak of war was motivated by a gradual rise in tension between the kings of France and England over territory; the official pretext was the interruption of the direct male line of the Capetian dynasty. Tensions between the French and English crowns had gone back centuries to the origins of the English royal family, which was French (Norman, and later, Angevin) in origin through William the Conqueror, the Norman duke who became King of England in 1066. English monarchs had, therefore, historically held titles and lands within France, which made them vassals to the kings of France. The status of the English king's French fiefs was a significant source of conflict between the two monarchies throughout the Middle Ages. French monarchs systematically sought to check the growth of English power, stripping away lands as the opportunity arose, mainly whenever England was at war with Scotland, an ally of France. English holdings in France had varied in size, at some points dwarfing even the French royal domain; by 1337, however, only Guyenne and Gascony were English. In 1328, Charles IV of France died without any sons or brothers, and a new principle, Salic law, disallowed female succession. Charles's closest male relative was his nephew Edward III of England, whose mother, Isabella, was Charles's sister. 
Isabella claimed the throne of France for her son by the rule of proximity of blood, but the French nobility rejected this, maintaining that Isabella could not transmit a right she did not possess. An assembly of French barons decided that a native Frenchman should receive the crown, rather than Edward. The throne passed instead to Charles's patrilineal cousin, Philip, Count of Valois. Edward protested but ultimately submitted and did homage for Gascony. Further French disagreements with Edward induced Philip, during May 1337, to meet with his Great Council in Paris. It was agreed that Gascony should be taken back into Philip's hands, which prompted Edward to renew his claim for the French throne, this time by force of arms.
Edwardian phase
In the early years of the war, the English, led by their king and his son Edward, the Black Prince, saw resounding successes, notably at Crécy (1346) and at Poitiers (1356), where King John II of France was taken prisoner.
Caroline phase and Black Death
By 1378, under King Charles V the Wise and the leadership of Bertrand du Guesclin, the French had reconquered most of the lands ceded to King Edward in the Treaty of Brétigny (signed in 1360), leaving the English with only a few cities on the continent. In the following decades, the weakening of royal authority, combined with the devastation caused by the Black Death of 1347–1351 (which killed nearly half of France's population and 20–33% of England's) and the significant economic crisis that followed, led to a period of civil unrest in both countries. These crises were resolved in England earlier than in France.
Lancastrian phase and after
The newly crowned Henry V of England seized the opportunity presented by the mental illness of Charles VI of France and the French civil war between Armagnacs and Burgundians to revive the conflict. Overwhelming victories at Agincourt (1415) and Verneuil (1424), as well as an alliance with the Burgundians, raised the prospect of an ultimate English triumph and persuaded the English to continue the war over many decades. A variety of factors prevented this, however. Notable influences include the deaths of both Henry and Charles in 1422, the emergence of Joan of Arc (which boosted French morale), and the loss of Burgundy as an ally (concluding the French civil war). The failure of the Siege of Orléans (1429) made English aspirations for conquest all but infeasible. Despite Joan's capture by the Burgundians and her subsequent execution (1431), a series of crushing French victories concluded the war in favour of the Valois dynasty; Patay (1429), Formigny (1450) and Castillon (1453) proved decisive in ending it. England permanently lost most of its continental possessions, with only the Pale of Calais remaining under its control on the continent until the Siege of Calais (1558).
Related conflicts and after-effects
Local conflicts in neighbouring areas that were contemporaneous with the war, including the War of the Breton Succession (1341–1364), the Castilian Civil War (1366–1369), the War of the Two Peters (1356–1369) in Aragon, and the 1383–1385 crisis in Portugal, were used by the parties to advance their agendas. By the war's end, feudal armies had mainly been replaced by professional troops, and aristocratic dominance had yielded to a democratization of the manpower and weapons of armies. Although primarily a dynastic conflict, the war inspired French and English nationalism.
The broader introduction of new weapons and tactics supplanted the feudal armies in which heavy cavalry had dominated, and artillery became important. The war precipitated the creation of the first standing armies in Western Europe since the Western Roman Empire and helped change their role in warfare. Civil wars, deadly epidemics, famines, and bandit free-companies of mercenaries reduced the population drastically in France. At the end of the war, however, the French had the upper hand thanks to better supply and equipment, such as small hand-held cannons and other weapons. In England, political forces over time came to oppose the costly venture. After the war, England was left insolvent, leaving the victorious French in complete control of all of France except Calais. The dissatisfaction of English nobles, resulting from the loss of their continental landholdings, as well as the general shock at losing a war in which investment had been so significant, helped lead to the Wars of the Roses (1455–1487). The economic consequences of the Hundred Years' War not only produced a decline in trade but also led to heavy taxation in both countries, which played a significant role in civil disorder.
Causes and prelude
Dynastic turmoil in France: 1316–1328
The question of female succession to the French throne was raised after the death of Louis X in 1316. Louis left behind a young daughter, Joan II of Navarre, and a posthumous son, John I of France, who lived for only five days. However, Joan's paternity was in question, as her mother, Margaret of Burgundy, had been accused of adultery in the Tour de Nesle affair. Given the situation, Philip, Count of Poitiers and brother of Louis X, positioned himself to take the crown, advancing the stance that women should be ineligible to succeed to the French throne. He won over his adversaries through his political sagacity and succeeded to the French throne as Philip V. When he died in 1322, leaving only daughters behind, the crown passed to his younger brother, Charles IV. Charles IV died in 1328, leaving behind his young daughter and pregnant wife, Joan of Évreux. He decreed that the unborn child, if male, would become king; if not, Charles left the choice of his successor to the nobles. Joan gave birth to a girl, Blanche of France (later Duchess of Orleans). With Charles IV's death and Blanche's birth, the main male line of the House of Capet was rendered extinct. By proximity of blood, the nearest male relative of Charles IV was his nephew, Edward III of England. Edward was the son of Isabella, the sister of the dead Charles IV, but the question arose whether she could transmit a right to inherit that she did not possess. Moreover, the French nobility baulked at the prospect of being ruled by an Englishman, especially one whose mother, Isabella, and her lover, Roger Mortimer, were widely suspected of having murdered the previous English king, Edward II. Assemblies of the French barons and prelates and of the University of Paris decided that males who derived their right to inheritance through their mother should be excluded from consideration. Therefore, excluding Edward, the nearest heir through the male line was Charles IV's first cousin, Philip, Count of Valois, and it was decided that he should take the throne. He was crowned Philip VI in 1328. In 1340, the Avignon papacy confirmed that, under Salic law, males would not be able to inherit through their mothers.
Eventually, Edward III reluctantly recognized Philip VI and paid him homage for the duchy of Aquitaine and Gascony in 1329. He made concessions in Guyenne but reserved the right to reclaim territories arbitrarily confiscated. After that, he expected to be left undisturbed while he made war on Scotland. Dispute over Guyenne: a problem of sovereignty Tensions between the French and English monarchies can be traced back to the 1066 Norman Conquest of England, in which the English throne was seized by the Duke of Normandy, a vassal of the King of France. As a result, the crown of England was held by a succession of nobles who already owned lands in France, which put them among the most influential subjects of the French king, as they could now draw upon the economic power of England to enforce their interests in the mainland. To the kings of France, this threatened their royal authority, and so they would constantly try to undermine English rule in France, while the English monarchs would struggle to protect and expand their lands. This clash of interests was the root cause of much of the conflict between the French and English monarchies throughout the medieval era. The Anglo-Norman dynasty that had ruled England since the Norman conquest of 1066 was brought to an end when Henry, the son of Geoffrey of Anjou and Empress Matilda, and great-grandson of William the Conqueror, became the first of the Angevin kings of England in 1154 as Henry II. The Angevin kings ruled over what was later known as the Angevin Empire, which included more French territory than that under the kings of France. The Angevins still owed homage to the French king for these territories. From the 11th century, the Angevins had autonomy within their French domains, neutralizing the issue. King John of England inherited the Angevin domains from his brother Richard I. However, Philip II of France acted decisively to exploit the weaknesses of John, both legally and militarily, and by 1204 had succeeded in taking control of much of the Angevin continental possessions. Following John's reign, the Battle of Bouvines (1214), the Saintonge War (1242), and finally the War of Saint-Sardos (1324), the English king's holdings on the continent, as Duke of Aquitaine, were limited roughly to provinces in Gascony. The dispute over Guyenne is even more important than the dynastic question in explaining the outbreak of the war. Guyenne posed a significant problem to the kings of France and England: Edward III was a vassal of Philip VI of France because of his French possessions and was required to recognize the suzerainty of the King of France over them. In practical terms, a judgment in Guyenne might be subject to an appeal to the French royal court. The King of France had the power to revoke all legal decisions made by the King of England in Aquitaine, which was unacceptable to the English. Therefore, sovereignty over Guyenne was a latent conflict between the two monarchies for several generations. During the War of Saint-Sardos, Charles of Valois, father of Philip VI, invaded Aquitaine on behalf of Charles IV and conquered the duchy after a local insurrection, which the French believed had been incited by Edward II of England. Charles IV grudgingly agreed to return this territory in 1325. Edward II had to compromise to recover his duchy: he sent his son, the future Edward III, to pay homage. The King of France agreed to restore Guyenne, minus Agen, but the French delayed the return of the lands, which helped Philip VI. 
On 6 June 1329, Edward III finally paid homage to the King of France. However, at the ceremony, Philip VI had it recorded that the homage was not due to the fiefs detached from the duchy of Guyenne by Charles IV (especially Agen). For Edward, the homage did not imply the renunciation of his claim to the extorted lands. Gascony under the King of England In the 11th century, Gascony in southwest France had been incorporated into Aquitaine (also known as Guyenne or Guienne) and formed with it the province of Guyenne and Gascony (French: Guyenne-et-Gascogne). The Angevin kings of England became dukes of Aquitaine after Henry II married the former Queen of France, Eleanor of Aquitaine, in 1152, from which point the lands were held in vassalage to the French crown. By the 13th century the terms Aquitaine, Guyenne and Gascony were virtually synonymous. At the beginning of Edward III's reign on 1 February 1327, the only part of Aquitaine that remained in his hands was the Duchy of Gascony. The term Gascony came to be used for the territory held by the Angevin (Plantagenet) kings of England in southwest France, although they still used the title Duke of Aquitaine. For the first 10 years of Edward III's reign, Gascony had been a significant friction point. The English argued that, as Charles IV had not acted properly towards his tenant, Edward should be able to hold the duchy free of French suzerainty. The French rejected this argument, so in 1329, the 17-year-old Edward III paid homage to Philip VI. Tradition demanded that vassals approach their liege unarmed, with heads bare. Edward protested by attending the ceremony wearing his crown and sword. Even after this pledge of homage, the French continued to pressure the English administration. Gascony was not the only sore point. One of Edward's influential advisers was Robert III of Artois. Robert was an exile from the French court, having fallen out with Philip VI over an inheritance claim. He urged Edward to start a war to reclaim France, and was able to provide extensive intelligence on the French court. Franco-Scot alliance France was an ally of the Kingdom of Scotland as English kings had tried to subjugate the country for some time. In 1295, a treaty was signed between France and Scotland during the reign of Philip the Fair, known as the Auld Alliance. Charles IV formally renewed the treaty in 1326, promising Scotland that France would support the Scots if England invaded their country. Similarly, France would have Scotland's support if its own kingdom were attacked. Edward could not succeed in his plans for Scotland if the Scots could count on French support. Philip VI had assembled a large naval fleet off Marseilles as part of an ambitious plan for a crusade to the Holy Land. However, the plan was abandoned and the fleet, including elements of the Scottish navy, moved to the English Channel off Normandy in 1336, threatening England. To deal with this crisis, Edward proposed that the English raise two armies, one to deal with the Scots "at a suitable time" and the other to proceed at once to Gascony. At the same time, ambassadors were to be sent to France with a proposed treaty for the French king. Beginning of the war: 1337–1360 End of homage At the end of April 1337, Philip of France was invited to meet the delegation from England but refused. The arrière-ban, a call to arms, was proclaimed throughout France starting on 30 April 1337. Then, in May 1337, Philip met with his Great Council in Paris. 
It was agreed that the Duchy of Aquitaine, effectively Gascony, should be taken back into the King's hands because Edward III was in breach of his obligations as a vassal and had sheltered the King's "mortal enemy" Robert d'Artois. Edward responded to the confiscation of Aquitaine by challenging Philip's right to the French throne. When Charles IV died, Edward claimed the succession of the French throne through the right of his mother, Isabella (Charles IV's sister), daughter of Philip IV. His claim was considered invalidated by Edward's homage to Philip VI in 1329. Edward revived his claim and in 1340 formally assumed the title "King of France and the French Royal Arms". On 26 January 1340, Edward III formally received homage from Guy, half-brother of the Count of Flanders. The civic authorities of Ghent, Ypres, and Bruges proclaimed Edward King of France. Edward aimed to strengthen his alliances with the Low Countries. His supporters could claim that they were loyal to the "true" King of France and did not rebel against Philip. In February 1340, Edward returned to England to try to raise more funds and also deal with political difficulties. Relations with Flanders were also tied to the English wool trade since Flanders' principal cities relied heavily on textile production, and England supplied much of the raw material they needed. Edward III had commanded that his chancellor sit on the woolsack in council as a symbol of the pre-eminence of the wool trade. At the time there were about 110,000 sheep in Sussex alone. The great medieval English monasteries produced large wool surpluses sold to mainland Europe. Successive governments were able to make large amounts of money by taxing it. France's sea power led to economic disruptions for England, shrinking the wool trade to Flanders and the wine trade from Gascony. Outbreak, the English Channel and Brittany On 22 June 1340, Edward and his fleet sailed from England and arrived off the Zwin estuary the next day. The French fleet assumed a defensive formation off the port of Sluis. The English fleet deceived the French into believing they were withdrawing. When the wind turned in the late afternoon, the English attacked with the wind and sun behind them. The French fleet was almost destroyed in what became known as the Battle of Sluys. England dominated the English Channel for the rest of the war, preventing French invasions. At this point, Edward's funds ran out and the war probably would have ended were it not for the death of the Duke of Brittany in 1341 precipitating a succession dispute between the duke's half-brother John of Montfort and Charles of Blois, nephew of Philip VI. In 1341, this inheritance dispute over the Duchy of Brittany set off the War of the Breton Succession, in which Edward backed John of Montfort and Philip backed Charles of Blois. Action for the next few years focused on a back-and-forth struggle in Brittany. The city of Vannes in Brittany changed hands several times, while further campaigns in Gascony met with mixed success for both sides. The English-backed Montfort finally took the duchy but not until 1364. Battle of Crécy and the taking of Calais In July 1346, Edward mounted a major invasion across the channel, landing on Normandy's Cotentin Peninsula at St Vaast. The English army captured the city of Caen in just one day, surprising the French. Philip mustered a large army to oppose Edward, who chose to march northward toward the Low Countries, pillaging as he went. 
He reached the river Seine to find most of the crossings destroyed. He moved further south, worryingly close to Paris, until he found the crossing at Poissy. This had only been partially destroyed, so the carpenters within his army were able to fix it. He then continued towards Flanders until he reached the river Somme. The army crossed at a tidal ford at Blanchetaque, stranding Philip's army. Assisted by this head start, Edward continued towards Flanders once more until, finding himself unable to outmaneuver Philip, he positioned his forces for battle, and Philip's army attacked. The Battle of Crécy of 1346 was a complete disaster for the French, a result largely credited to the English longbowmen and to the French king, who allowed his army to attack before it was ready. Philip appealed to his Scottish allies to help with a diversionary attack on England. King David II of Scotland responded by invading northern England, but his army was defeated, and he was captured at the Battle of Neville's Cross on 17 October 1346. This greatly reduced the threat from Scotland. In France, Edward proceeded north unopposed and besieged the city of Calais on the English Channel, capturing it in 1347. This became an important strategic asset for the English, allowing them to keep troops safely in northern France. Calais would remain under English control, even after the end of the Hundred Years' War, until the successful French siege in 1558.
Battle of Poitiers
The Black Death, which had just arrived in Paris in 1348, ravaged Europe. In 1355, after the plague had passed and England was able to recover financially, King Edward's son and namesake, the Prince of Wales, later known as the Black Prince, led a chevauchée from Gascony into France, during which he pillaged Avignonet, Castelnaudary, Carcassonne, and Narbonne. The next year, during another chevauchée, he ravaged Auvergne, Limousin, and Berry but failed to take Bourges. He offered terms of peace to King John II of France (known as John the Good), who had outflanked him near Poitiers, but the prince refused to surrender himself as the price of their acceptance. This led to the Battle of Poitiers (19 September 1356), where the Black Prince's army routed the French. During the battle, the Gascon noble Jean de Grailly, captal de Buch, led a mounted unit that was concealed in a forest. The French advance was contained, at which point de Grailly led a flanking movement with his horsemen, cutting off the French retreat and successfully capturing King John and many of his nobles. With John held hostage, his son the Dauphin (later to become Charles V) assumed the powers of the king as regent. After the Battle of Poitiers, many French nobles and mercenaries rampaged, and chaos ruled, as contemporary reports recounted.
Reims campaign and Black Monday
Edward invaded France, for the third and last time, hoping to capitalise on the discontent and seize the throne. The Dauphin's strategy was that of non-engagement with the English army in the field. However, Edward wanted the crown and chose the cathedral city of Reims for his coronation (Reims was the traditional coronation city). The citizens of Reims, however, built and reinforced the city's defences before Edward and his army arrived. Edward besieged the city for five weeks, but the defences held and there was no coronation. Edward moved on to Paris, but retreated after a few skirmishes in the suburbs. Next was the town of Chartres.
Disaster struck when a freak hailstorm hit the encamped army, causing over 1,000 English deaths – the so-called Black Monday at Easter 1360. This devastated Edward's army and forced him to negotiate when approached by the French. A conference was held at Brétigny that resulted in the Treaty of Brétigny (8 May 1360). The treaty was ratified at Calais in October. In return for increased lands in Aquitaine, Edward renounced Normandy, Touraine, Anjou and Maine and consented to reduce King John's ransom by a million crowns. Edward also abandoned his claim to the crown of France.
First peace: 1360–1369
The French king, John II, was held captive in England for four years. The Treaty of Brétigny set his ransom at 3 million crowns and allowed for hostages to be held in lieu of John. The hostages included two of his sons, several princes and nobles, four inhabitants of Paris, and two citizens from each of the nineteen principal towns of France. While these hostages were held, John returned to France to try to raise funds to pay the ransom. In 1362, John's son Louis of Anjou, a hostage in English-held Calais, escaped captivity. With his stand-in hostage gone, John felt honour-bound to return to captivity in England. The French crown had been at odds with Navarre (near southern Gascony) since 1354, and in 1363, the Navarrese used the captivity of John II in London and the political weakness of the Dauphin to try to seize power. Although there was no formal treaty, Edward III supported the Navarrese moves, particularly as there was a prospect that he might gain control over the northern and western provinces as a consequence. With this in mind, Edward deliberately slowed the peace negotiations. In 1364, John II died in London while still in honourable captivity. Charles V succeeded him as king of France. On 16 May, one month after the dauphin's accession and three days before his coronation as Charles V, the Navarrese suffered a crushing defeat at the Battle of Cocherel.
French ascendancy under Charles V: 1369–1389
Aquitaine and Castile
In 1366, there was a civil war of succession in Castile (part of modern Spain). The forces of the ruler Peter of Castile were pitted against those of his half-brother Henry of Trastámara. The English crown supported Peter; the French supported Henry. French forces were led by Bertrand du Guesclin, a Breton, who rose from relatively humble beginnings to prominence as one of France's war leaders. Charles V provided a force of 12,000, with du Guesclin at its head, to support Trastámara in his invasion of Castile. Peter appealed to England and Aquitaine's Black Prince, Edward of Woodstock, for help, but none was forthcoming, forcing Peter into exile in Aquitaine. The Black Prince had previously agreed to support Peter's claims but concerns over the terms of the Treaty of Brétigny led him to assist Peter as a representative of Aquitaine, rather than England. He then led an Anglo-Gascon army into Castile. Peter was restored to power after Trastámara's army was defeated at the Battle of Nájera. Although the Castilians had agreed to fund the Black Prince, they failed to do so. The Prince was suffering from ill health and returned with his army to Aquitaine. To pay off debts incurred during the Castile campaign, the prince instituted a hearth tax. Arnaud-Amanieu VIII, Lord of Albret, had fought on the Black Prince's side during the war.
Albret, who had already become discontented by the influx of English administrators into the enlarged Aquitaine, refused to allow the tax to be collected in his fief. He then joined a group of Gascon lords who appealed to Charles V for support in their refusal to pay the tax. Charles V summoned one Gascon lord and the Black Prince to hear the case in his High Court in Paris. The Black Prince answered that he would go to Paris with sixty thousand men behind him. War broke out again and Edward III resumed the title of King of France. Charles V declared that all the English possessions in France were forfeited, and before the end of 1369 all of Aquitaine was in full revolt. With the Black Prince gone from Castile, Henry of Trastámara led a second invasion that ended with Peter's death at the Battle of Montiel in March 1369. The new Castilian regime provided naval support to French campaigns against Aquitaine and England. In 1372, the Castilian fleet defeated the English fleet in the Battle of La Rochelle. 1373 campaign of John of Gaunt In August 1373, John of Gaunt, accompanied by John de Montfort, Duke of Brittany, led a force of 9,000 men from Calais on a chevauchée. Although initially successful, as French forces were insufficiently concentrated to oppose them, the English met more resistance as they moved south. French forces began to concentrate around the English force, but under orders from Charles V the French avoided a set battle. Instead, they fell on forces detached from the main body to raid or forage. The French shadowed the English and in October the English found themselves trapped against the River Allier by four French forces. With some difficulty, the English crossed at the bridge at Moulins but lost all their baggage and loot. The English carried on south across the Limousin plateau, but the weather was turning severe. Men and horses died in great numbers and many soldiers, forced to march on foot, discarded their armour. At the beginning of December, the English army entered friendly territory in Gascony. By the end of December, they were in Bordeaux, starving, ill-equipped, and having lost over half of the 30,000 horses with which they had left Calais. Although the march across France had been a remarkable feat, it was a military failure. English turmoil With his health deteriorating, the Black Prince returned to England in January 1371, where his father Edward III was elderly and also in poor health. The prince's illness was debilitating, and he died on 8 June 1376. Edward III died the following year, on 21 June 1377, and was succeeded by the Black Prince's second son Richard II, who was still a child of 10 (Edward of Angoulême, the Black Prince's first son, had died sometime earlier). The Treaty of Brétigny had left Edward III and England with enlarged holdings in France, but a small professional French army under the leadership of du Guesclin pushed the English back; by the time Charles V died in 1380, the English held only Calais and a few other ports. It was usual to appoint a regent in the case of a child monarch, but no regent was appointed for Richard II, who nominally exercised the power of kingship from the date of his accession in 1377. Between 1377 and 1380, actual power was in the hands of a series of councils. The political community preferred this to a regency led by the king's uncle, John of Gaunt, although Gaunt remained highly influential.
Richard faced many challenges during his reign, including the Peasants' Revolt led by Wat Tyler in 1381 and an Anglo-Scottish war in 1384–1385. His attempts to raise taxes to pay for his Scottish adventure and for the protection of Calais against the French made him increasingly unpopular. 1380 campaign of the Earl of Buckingham In July 1380, the Earl of Buckingham commanded an expedition to France to aid England's ally, the Duke of Brittany. The French refused battle before the walls of Troyes on 25 August; Buckingham's forces continued their chevauchée and in November laid siege to Nantes. The support expected from the Duke of Brittany did not appear and, in the face of severe losses in men and horses, Buckingham was forced to abandon the siege in January 1381. In February, having been reconciled to the regime of the new French king Charles VI by the Treaty of Guérande, Brittany paid Buckingham 50,000 francs to abandon the siege and the campaign. French turmoil After the deaths of Charles V and du Guesclin in 1380, France lost its main leadership and overall momentum in the war. Charles VI succeeded his father as king of France at the age of 11, and he was thus put under a regency led by his uncles, who managed to maintain an effective grip on government affairs until about 1388, well after Charles had achieved royal majority. With France facing widespread destruction, plague, and economic recession, high taxation put a heavy burden on the French peasantry and urban communities. The war effort against England largely depended on royal taxation, but the population was increasingly unwilling to pay for it, as was demonstrated by the Harelle and Maillotin revolts in 1382. Charles V had abolished many of these taxes on his deathbed, but subsequent attempts to reinstate them stirred up hostility between the French government and populace. Philip II of Burgundy, the uncle of the French king, brought together a Burgundian-French army and a fleet of 1,200 ships near the Zeeland town of Sluis in the summer and autumn of 1386 to attempt an invasion of England, but the venture failed: Philip's brother John of Berry arrived deliberately late, so that the autumn weather prevented the fleet from leaving, and the invading army then dispersed. Difficulties in raising taxes and revenue hampered the ability of the French to fight the English. At this point, the war's pace had largely slowed down, and both nations found themselves fighting mainly through proxy wars, such as during the 1383–1385 Portuguese interregnum. The independence party in the Kingdom of Portugal, which was supported by the English, won against the supporters of the King of Castile's claim to the Portuguese throne, who in turn was backed by the French. Second peace: 1389–1415 The war became increasingly unpopular with the English public due to the high taxes needed for the war effort. These taxes were seen as one of the reasons for the Peasants' Revolt. Richard II's indifference to the war, together with his preferential treatment of a select few close friends and advisors, angered an alliance of lords that included one of his uncles. This group, known as the Lords Appellant, managed to press charges of treason against five of Richard's advisors and friends in the Merciless Parliament. The Lords Appellant were able to gain control of the council in 1388 but failed to reignite the war in France.
Although the will was there, the funds to pay the troops were lacking, so in the autumn of 1388 the Council agreed to resume negotiations with the French crown, beginning on 18 June 1389 with the signing of the three-year Truce of Leulinghem. In 1389, Richard's uncle and supporter, John of Gaunt, returned from Spain and Richard was able to rebuild his power gradually until 1397, when he reasserted his authority and destroyed the principal three among the Lords Appellant. In 1399, after John of Gaunt died, Richard II disinherited Gaunt's son, the exiled Henry of Bolingbroke. Bolingbroke returned to England with his supporters, deposed Richard and had himself crowned Henry IV. In Scotland, the problems brought about by the English regime change prompted border raids that were countered by an invasion in 1402 and the defeat of a Scottish army at the Battle of Homildon Hill. A dispute over the spoils between Henry and Henry Percy, 1st Earl of Northumberland, resulted in a long and bloody struggle between the two for control of northern England, resolved only with the almost complete destruction of the House of Percy by 1408. In Wales, Owain Glyndŵr was declared Prince of Wales on 16 September 1400. He was the leader of the most serious and widespread rebellion against English authority in Wales since the conquest of 1282–1283. In 1405, the French allied with Glyndŵr and the Castilians in Spain; a Franco-Welsh army advanced as far as Worcester, while the Spaniards used galleys to raid and burn all the way from Cornwall to Southampton, before taking refuge in Harfleur for the winter. The Glyndŵr Rising, which had secured Welsh semi-independence for a number of years, was finally put down in 1415. In 1392, Charles VI suddenly descended into madness, forcing France into a regency dominated by his uncles and his brother. A conflict for control over the regency began between the king's uncle Philip the Bold, Duke of Burgundy, and the king's brother, Louis of Valois, Duke of Orléans. After Philip's death, his son and heir John the Fearless continued the struggle against Louis, but with the disadvantage of having no close relation to the king. Finding himself outmanoeuvred politically, John ordered the assassination of Louis in retaliation. His involvement in the murder was quickly revealed and the Armagnac family took political power in opposition to John. By 1410, both sides were bidding for the help of English forces in a civil war. In 1418 Paris was taken by the Burgundians, who were unable to stop the massacre of the Count of Armagnac and his followers by a Parisian crowd, with an estimated death toll between 1,000 and 5,000. Throughout this period, England confronted repeated raids by pirates that damaged trade and the navy. There is some evidence that Henry IV used state-legalised piracy as a form of warfare in the English Channel. He used such privateering campaigns to pressure enemies without risking open war. The French responded in kind and French pirates, under Scottish protection, raided many English coastal towns. The domestic and dynastic difficulties faced by England and France in this period quieted the war for a decade. Henry IV died in 1413 and was replaced by his eldest son Henry V. The mental illness of Charles VI of France allowed his power to be exercised by royal princes whose rivalries caused deep divisions in France. In 1414, while Henry held court at Leicester, he received ambassadors from Burgundy.
Henry accredited envoys to the French king to make clear his territorial claims in France; he also demanded the hand of Charles VI's youngest daughter Catherine of Valois. The French rejected his demands, leading Henry to prepare for war. Resumption of the war under Henry V: 1415–1429 Burgundian alliance and the seizure of Paris Battle of Agincourt (1415) In August 1415, Henry V sailed from England with a force of about 10,500 and laid siege to Harfleur. The city resisted for longer than expected, but finally surrendered on 22 September. Because of the unexpected delay, most of the campaign season was gone. Rather than march on Paris directly, Henry elected to make a raiding expedition across France toward English-occupied Calais. In a campaign reminiscent of Crécy, he found himself outmanoeuvred and low on supplies and had to fight a much larger French army at the Battle of Agincourt, north of the Somme. Despite the problems and having a smaller force, his victory was near total; the French defeat was catastrophic, costing the lives of many of the Armagnac leaders. About 40% of the French nobility was killed. Henry was apparently concerned that the large number of prisoners taken were a security risk (there were more French prisoners than there were soldiers in the entire English army) and ordered their deaths; once the French reserves had fled the field, Henry rescinded the order. Treaty of Troyes (1420) Henry retook much of Normandy, including Caen in 1417 and Rouen on 19 January 1419, bringing Normandy under English rule for the first time in two centuries. A formal alliance was made with Burgundy, which had taken Paris in 1418 before the assassination of Duke John the Fearless in 1419. In 1420, Henry met with King Charles VI. They signed the Treaty of Troyes, by which Henry finally married Charles' daughter Catherine of Valois and Henry's heirs would inherit the throne of France. The Dauphin Charles (the future Charles VII) was declared illegitimate. Henry formally entered Paris later that year and the agreement was ratified by the Estates-General. Death of the Duke of Clarence (1421) On 22 March 1421 Henry V's progress in his French campaign experienced an unexpected reversal. Henry had left his brother and presumptive heir Thomas, Duke of Clarence in charge while he returned to England. The Duke of Clarence engaged a Franco-Scottish force of 5,000 men, led by Gilbert Motier de La Fayette and John Stewart, Earl of Buchan, at the Battle of Baugé. Against the advice of his lieutenants, and before his army had been fully assembled, the Duke of Clarence attacked with a force of no more than 1,500 men-at-arms. Then, during the course of the battle, he led a charge of a few hundred men into the main body of the Franco-Scottish army, who quickly enveloped the English. In the ensuing mêlée, the Scot John Carmichael of Douglasdale broke his lance unhorsing the Duke of Clarence. Once on the ground, the duke was slain by Alexander Buchanan. The body of the Duke of Clarence was recovered from the field by Thomas Montacute, 4th Earl of Salisbury, who conducted the English retreat. English success Henry V returned to France and went to Paris, then visited Chartres and Gâtinais before returning to Paris. From there, he decided to attack the Dauphin-held town of Meaux. It turned out to be more difficult to overcome than first thought. The siege began about 6 October 1421, and the town held for seven months before finally falling on 11 May 1422.
At the end of May, Henry was joined by his queen and together with the French court, they went to rest at Senlis. While there, it became apparent that he was ill (possibly dysentery), and when he set out to the Upper Loire, he diverted to the royal castle at Vincennes, near Paris, where he died on 31 August. The elderly and insane Charles VI of France died two months later on 21 October. Henry left an only child, his nine-month-old son, Henry, later to become Henry VI. On his deathbed, as Henry VI was only an infant, Henry V had given the Duke of Bedford responsibility for English France. The war in France continued under Bedford's generalship and several battles were won. The English won an emphatic victory at the Battle of Verneuil (17 August 1424). At the Battle of Baugé, the Duke of Clarence had rushed into battle without the support of his archers; by contrast, at Verneuil the archers fought to devastating effect against the Franco-Scottish army. The effect of the battle was to virtually destroy the Dauphin's field army and to eliminate the Scots as a significant military force for the rest of the war. French victory: 1429–1453 Joan of Arc and French revival The English laid siege to Orléans in October 1428, which created a stalemate for months. Food shortages within the city led to the likelihood that the city would be forced to surrender. In April 1429 Joan of Arc persuaded the Dauphin to send her to the siege, stating she had received visions from God telling her to drive out the English. She entered the city on April 29, after which the tide began to turn against the English within a matter of days. She raised the morale of the troops, and they attacked the English redoubts, forcing the English to lift the siege. Inspired by Joan, the French took several English strongholds on the Loire River. The English retreated from the Loire Valley, pursued by a French army. Near the village of Patay, French cavalry broke through a unit of English longbowmen that had been sent to block the road, then swept through the retreating English army. The English lost 2,200 men, and the commander, John Talbot, 1st Earl of Shrewsbury, was taken prisoner. This victory opened the way for the Dauphin to march to Reims for his coronation as Charles VII, on 16 July 1429. After the coronation, Charles VII's army fared less well. An attempted French siege of Paris was defeated on 8 September 1429, and Charles VII withdrew to the Loire Valley. Henry's coronations and the desertion of Burgundy Henry VI was crowned king of England at Westminster Abbey on 5 November 1429 and king of France at Notre-Dame, in Paris, on 16 December 1431. Joan of Arc was captured by the Burgundians at the siege of Compiègne on 23 May 1430. The Burgundians then transferred her to the English, who organised a trial headed by Pierre Cauchon, Bishop of Beauvais and a collaborator with the English government who served as a member of the English Council at Rouen. Joan was convicted and burned at the stake on 30 May 1431 (she was rehabilitated 25 years later by Pope Callixtus III). After the death of Joan of Arc, the fortunes of war turned dramatically against the English. Most of Henry's royal advisers were against making peace. Among the factions, the Duke of Bedford wanted to defend Normandy, the Duke of Gloucester was committed to just Calais, whereas Cardinal Beaufort was inclined to peace. Negotiations stalled. 
It seems that at the congress of Arras, in the summer of 1435, where Cardinal Beaufort led the English delegation, the English were unrealistic in their demands. A few days after the congress ended in September, Philip the Good, Duke of Burgundy, deserted to Charles VII, signing the Treaty of Arras that returned Paris to the King of France. This was a major blow to English sovereignty in France. The Duke of Bedford died on 14 September 1435 and was later replaced by Richard Plantagenet, 3rd Duke of York. French resurgence The allegiance of Burgundy remained fickle, but the Burgundian focus on expanding their domains in the Low Countries left them little energy to intervene in the rest of France. The long truces that marked the war gave Charles time to centralise the French state and reorganise his army and government, replacing his feudal levies with a more modern professional army that could put its superior numbers to good use. A castle that once could only be captured after a prolonged siege would now fall after a few days from cannon bombardment. The French artillery developed a reputation as the best in the world. By 1449, the French had retaken Rouen. In 1450 the Count of Clermont and Arthur de Richemont, Earl of Richmond, of the Montfort family (the future Arthur III, Duke of Brittany), caught an English army attempting to relieve Caen and defeated it at the Battle of Formigny. Richemont's force attacked the English army from the flank and rear just as they were on the verge of beating Clermont's army. French conquest of Gascony After Charles VII's successful Normandy campaign in 1450, he concentrated his efforts on Gascony, the last province held by the English. Bordeaux, Gascony's capital, was besieged and surrendered to the French on 30 June 1451. Largely due to the English sympathies of the Gascon people, this was reversed when John Talbot and his army retook the city on 23 October 1452. However, the English were decisively defeated at the Battle of Castillon on 17 July 1453. Talbot had been persuaded to engage the French army at Castillon near Bordeaux. During the battle the French appeared to retreat towards their camp. The French camp at Castillon had been laid out by Charles VII's ordnance officer Jean Bureau, and this proved instrumental in the French success: when the French cannon opened fire from their positions in the camp, the English took severe casualties, losing both Talbot and his son. End of the war Although the Battle of Castillon is considered the last battle of the Hundred Years' War, England and France remained formally at war for another 20 years, but the English were in no position to carry on the war as they faced unrest at home. Bordeaux fell to the French on 19 October 1453 and there were no more hostilities afterwards. Following defeat in the Hundred Years' War, English landowners complained vociferously about the financial losses resulting from the loss of their continental holdings; this is often considered a major cause of the Wars of the Roses that started in 1455. The Hundred Years' War almost resumed in 1474, when Charles, Duke of Burgundy, counting on English support, took up arms against Louis XI. Louis managed to isolate the Burgundians by buying Edward IV of England off with a large cash sum and an annual pension, in the Treaty of Picquigny (1475). The treaty formally ended the Hundred Years' War with Edward renouncing his claim to the throne of France.
However, future Kings of England (and later of Great Britain) continued to claim the title until 1803, when it was dropped in deference to the exiled Count of Provence, titular King Louis XVIII, who was living in England after the French Revolution. Significance Historical significance The French victory marked the end of a long period of instability that had been seeded by the Norman Conquest (1066), when William the Conqueror added "King of England" to his titles, becoming both the vassal of the king of France (as Duke of Normandy) and his equal (as king of England). When the war ended, England was bereft of its continental possessions, leaving it with only Calais on the Continent (until 1558). The war destroyed the English dream of a joint monarchy and led to the rejection in England of all things French, although the French language in England, which had served as the language of the ruling classes and commerce there from the time of the Norman conquest, left many vestiges in English vocabulary. English became the official language in 1362 and French was no longer used for teaching from 1385. National feeling that emerged from the war unified both France and England further. Despite the devastation on its soil, the Hundred Years' War accelerated the process of transforming France from a feudal monarchy to a centralised state. In England the political and financial troubles which emerged from the defeat were a major cause of the Wars of the Roses (1455–1487). Historian Ben Lowe argued in 1997 that opposition to the war helped to shape England's early modern political culture. Although anti-war and pro-peace spokesmen generally failed to influence outcomes at the time, they had a long-term impact. England showed decreasing enthusiasm for conflict deemed not in the national interest, yielding only losses in return for high economic burdens. In comparing this English cost-benefit analysis with French attitudes, given that both countries suffered from weak leaders and undisciplined soldiers, Lowe noted that the French understood that warfare was necessary to expel the foreigners occupying their homeland. Furthermore, French kings found alternative ways to finance the war – sales taxes, debasing the coinage – and were less dependent than the English on tax levies passed by national legislatures. English anti-war critics thus had more to work with than the French. A 2021 theory about the early formation of state capacity holds that interstate war was responsible for initiating a strong move toward states implementing tax systems with higher capabilities. France in the Hundred Years' War is an example: when the English occupation threatened the independence of the French kingdom, the king and his ruling elite demanded consistent and permanent taxation, which would allow a permanent standing army to be financed. The French nobility, which had always opposed such an extension of state capacity, agreed in this exceptional situation. Hence, the interstate war with England increased French state capability. Bubonic plague and warfare reduced population numbers throughout Europe during this period. France lost half its population during the Hundred Years' War, with Normandy reduced by three-quarters and Paris by two-thirds. During the same period, England's population fell by 20 to 33 per cent. Military significance The first regular standing army in Western Europe since Roman times was organised in France in 1445, partly as a solution to marauding free companies.
The mercenary companies were given a choice of either joining the Royal army as compagnies d'ordonnance on a permanent basis or being hunted down and destroyed if they refused. France gained a total standing army of around 6,000 men, which was sent out to gradually eliminate the remaining mercenaries who insisted on operating on their own. The new standing army had a more disciplined and professional approach to warfare than its predecessors. The Hundred Years' War was a time of rapid military evolution. Weapons, tactics, army structure and the social meaning of war all changed, partly in response to the war's costs, partly through advancement in technology and partly through lessons that warfare taught. The feudal system slowly disintegrated, as did the concept of chivalry. By the war's end, although heavy cavalry was still considered the most powerful unit in an army, the heavily armoured horse had to deal with several tactics developed to deny or mitigate its effective use on a battlefield. The English began using lightly armoured mounted troops, known as hobelars. Hobelars' tactics had been developed against the Scots in the Anglo-Scottish wars of the 14th century. Hobelars rode smaller unarmoured horses, enabling them to move through difficult or boggy terrain where heavier cavalry would struggle. Rather than fight while seated on the horse, they would dismount to engage the enemy. The closing battle of the war, the Battle of Castillon, was the first major battle won through the extensive use of field artillery.
Gymnasium (school)
Gymnasium (and variations of the word) is a term in various European languages for a secondary school that prepares students for higher education at a university. It is comparable to the US English term preparatory high school or the British term grammar school. Before the 20th century, the gymnasium system was a widespread feature of educational systems throughout many European countries. The word, from Greek gymnós, 'naked' or 'nude', was first used in Ancient Greece, in the sense of a place for both physical and intellectual education of young men. The latter meaning of a place of intellectual education persisted in many European languages (including Albanian, Bulgarian, Czech, Dutch, Estonian, Greek, German, Hungarian, Macedonian, Polish, Russian, Scandinavian languages, Croatian, Serbian, Slovak, Slovenian and Ukrainian), whereas in other languages, like English (gymnasium, gym) and Spanish (gimnasio), the former meaning of a place for physical education was retained. School structure Because gymnasia prepare students for university study, they are meant for the more academically minded students, who are selected between the ages of 10 and 13. In addition to the usual curriculum, students of a gymnasium often study Latin and Ancient Greek. Some gymnasia provide general education, while others have a specific focus. (This also differs from country to country.) The four traditional branches are: humanities (specializing in classical languages, such as Latin and Greek); modern languages (students are required to study at least three languages); mathematics and physical sciences; and economics and other social sciences (students are required to study economics, world history, social studies or business informatics). Curricula differ from school to school but generally include literature, mathematics, informatics, physics, chemistry, biology, geography, art (as well as crafts and design), music, history, philosophy, civics/citizenship, social sciences, and several foreign languages. Schools concentrate not only on academic subjects, but also on producing well-rounded individuals, so physical education and religion or ethics are compulsory, even in non-denominational schools, which are prevalent. For example, the German constitution guarantees the separation of church and state, so although religion or ethics classes are compulsory, students may choose to study a specific religion or none at all. Today, a number of other areas of specialization exist, such as gymnasia specializing in economics, technology or domestic sciences. In some countries, there is a notion of a progymnasium, which is equivalent to the beginning classes of the full gymnasium, with the right to continue education in a gymnasium. Here, the prefix pro- is equivalent to pre-, indicating that this curriculum precedes normal gymnasium studies. History In Central European, Nordic, Benelux and Baltic countries, this meaning for "gymnasium" (that is, a secondary school preparing the student for higher education at a university) has been the same at least since the Protestant Reformation in the 16th century. The term was derived from the classical Greek word gymnásion, which was originally applied to an exercising ground in ancient Athens. Here teachers gathered and gave instruction between the hours devoted to physical exercises and sports, and thus the term became associated with and came to mean an institution of learning.
This use of the term did not prevail among the Romans, but was revived during the Renaissance in Italy, and from there passed into the Netherlands and Germany during the 15th century. In 1538, Johannes Sturm founded at Strasbourg the school which became the model of the modern German gymnasium. In 1812, a Prussian regulation ordered all schools with the right to send their students to the university to bear the name of gymnasium. By the 20th century, this practice was followed in almost the entire Austro-Hungarian, German, and Russian Empires. In the modern era, many countries which have gymnasia were once part of these three empires. By country Albania In Albania, a gymnasium education takes three years following a compulsory nine-year elementary education and ending with a final aptitude test called the Matura Shtetërore. The final test is standardized at the state level and serves as an entrance qualification for universities. These can be either public (state-run, tuition-free) or private (fee-paying). The subjects taught are mathematics, Albanian language, one to three foreign languages, history, geography, computer science, the natural sciences (biology, chemistry, physics), history of art, music, philosophy, logic, physical education, and the social sciences (sociology, ethics, psychology, politics and economy). The gymnasium is generally viewed as a destination for the best-performing students and as the type of school that serves primarily to prepare students for university, while other students go to technical/vocational schools. Therefore, gymnasia often base their admittance criteria on an entrance exam, elementary school grades, or some combination of the two. Austria In Austria the Gymnasium has two stages, from the age of 11 to 14, and from 15 to 18, concluding with Matura. Historically, three types existed. The Humanistisches Gymnasium focuses on Ancient Greek and Latin. The Neusprachliches Gymnasium puts its focus on actively spoken languages. The usual combination is English, French, and Latin; sometimes French can be swapped with another foreign language (like Italian, Spanish or Russian). The Realgymnasium emphasizes the sciences. In the last few decades, more autonomy has been granted to schools, and various types have been developed, focusing on sports, music, or economics, for example. Belarus In Belarus, gymnasium is the highest variant of secondary education, which provides advanced knowledge in various subjects. The number of years of instruction at a gymnasium is 11. However, it is possible to cover all required credits in 11 years, by taking additional subjects each semester. In Belarus, gymnasium is generally viewed as a destination for the best-performing students and as the type of school that serves primarily to prepare students for university. Czech Republic and Slovakia In the Czech Republic and Slovakia, gymnázium (also spelled gymnasium) is a type of school that provides secondary education. Secondary schools, including gymnázia, lead to the maturita exam. There are different types of gymnázium, distinguished by the length of study. In the Czech Republic there are eight-year, six-year, and four-year types, and in Slovakia there are eight-year and four-year types, of which the latter is more common. In both countries, there are also bilingual gymnázia (Czech or Slovak with English, French, Spanish, Italian, German, or Russian; in Slovakia, bilingual gymnázia are five-year) and private gymnázia. Germany German gymnasia are selective schools.
They offer the most academically promising youngsters a quality education that is free in all state-run schools (and generally not above €50/month in Church-run schools, though there are some expensive private schools). Gymnasia may expel students who academically under-perform their classmates or behave in a way that is often seen as undesirable and unacceptable. Historically, the German Gymnasium also included in its overall accelerated curriculum post-secondary education at college level, and the degree awarded substituted for the bachelor's degree (baccalaureate) previously awarded by a college or university, so that universities in Germany became exclusively graduate schools. In the United States, the German Gymnasium curriculum was used at a number of prestigious universities, such as the University of Michigan, as a model for their undergraduate college programs. Pupils study subjects such as German, mathematics, physics, chemistry, geography, biology, arts, music, physical education, religion, history and civics/citizenship/social sciences and computer science. They are also required to study at least two foreign languages. The usual combinations are English and French or English and Latin, although many schools make it possible to combine English with another language, most often Spanish, Ancient Greek, or Russian. Religious education classes are a part of the curricula of all German schools, though not compulsory; a student or their parents or guardians can conscientiously object to taking them, in which case the student (along with those whose religion is not being taught in the school) is taught ethics or philosophy. In state schools, a student who is not baptized into either the Catholic or Protestant faith is allowed to choose which of these classes to take. The only exception to this is in the state of Berlin, where the subject ethics is mandatory for all students and (Christian) religious studies can only be chosen additionally. A similar situation is found in Brandenburg, where the subject life skills, ethics, and religious education (Lebensgestaltung, Ethik, Religionskunde, LER) is the primary subject, but parents/guardians or students older than 13 can choose to replace it with (Christian) religious studies or take both. The intention behind LER is that students should get an objective insight on questions of personal development and ethics as well as on the major world religions. For younger students nearly the entire curriculum of a gymnasium is compulsory; in higher years additional subjects are available and some of the hitherto compulsory subjects can be dropped, but the choice is not as wide as in other school systems, such as US high schools. Although some specialist gymnasia have English or French as the language of instruction, at most gymnasia lessons (apart from foreign language courses) are conducted in Standard German. The number of years of instruction at a gymnasium differs between the states. It varies between six and seven years in Berlin and Brandenburg (primary school is six years in both as opposed to four years in the rest of Germany) and eight in Bavaria, Hesse and Baden-Württemberg among others. While in Saxony and Thuringia students have never been taught more than eight years in Gymnasium (by default), nearly all states now conduct the Abitur examinations, which complete the Gymnasium education, after 13 years of primary school and Gymnasium combined. In addition, some states offer a 12-year curriculum leading to the Abitur.
These final examinations are now centrally drafted and controlled in all German states except for Rhineland-Palatinate and provide a qualification to attend any German university. Italy In Italy, the ginnasio originally indicated a type of five-year junior high school (ages 11 to 16), preparing students for the three-year Classical Lyceum (ages 16 to 19), a high school focusing on classical studies and humanities. After the school reform that unified the junior high school system, the term remained to indicate the first two years of the liceo classico, now five years long. An Italian high school student who enrolls in the liceo classico follows this study path: the gymnasium fourth year (age 14), the gymnasium fifth year (age 15), the lyceum first year (age 16), the lyceum second year (age 17) and the lyceum third year (age 18). Some believe this distinction still makes sense, since the two-year gymnasium stage has a differently oriented curriculum from the lyceum years. Gymnasium students spend the majority of their schooling studying Greek and Latin grammar, laying the basis for the "higher" and more in-depth studies of the lyceum, such as Greek and Latin literature and philosophy. In July 1940 the fascist Minister of National Education Giuseppe Bottai got a bill of law approved that abolished the first three years of the gymnasium and instituted a unified path of studies for children aged from 12 to 14. The last two years of the gymnasium kept the previous denomination and the related scholastic curriculum for the following decades. Netherlands In the Netherlands, gymnasium is the highest variant of secondary education, offering the academically most promising youngsters (top 5%) a quality education that is in most cases free (and in other cases at low cost). It consists of six years, after eight years (including kindergarten) of primary school, in which pupils study the same subjects as their German counterparts, with the addition of compulsory Ancient Greek, Latin and Classical Cultural Education, the history of Ancient Greek and Roman culture and literature. Schools have some freedom in choosing their specific curriculum, with, for example, Spanish, Philosophy and certain very technical and highly demanding courses being available as final exams. Usually, schools will have all classes mandatory in switching combinations for the first three or so years (with the exception of a subject that is a free choice from the second year onward), after which students will choose their subjects in the directions of Economics and Society, Culture and Society, Nature and Health, or Nature and Technology. The equivalent without classical languages is called the atheneum, and gives access to the same university studies (although some extra classes are needed when starting a degree in classical languages or theology). All are government-funded. See voorbereidend wetenschappelijk onderwijs (vwo) for the full article on Dutch "preparatory scientific education". Nordic and Baltic countries In Denmark, Estonia, the Faroe Islands, Finland, Greenland, Iceland, Latvia, Norway and Sweden, gymnasium consists of three years, usually starting in the year the students turn 16, after nine or ten years of primary school. In Lithuania, the gymnasium usually consists of four years of schooling starting at the age of 15–16, the last year roughly corresponding to the first year of college. Most gymnasia in the Nordic countries are free. Universal student grants are also available in certain countries for students over 18.
In Denmark (see also Gymnasium (Denmark)), there are four kinds of gymnasia: STX (Regular Examination Programme), HHX (Higher Business Examination Programme), HTX (Higher Technical Examination Programme) and HF (Higher Preparatory Examination Programme). HF is only two years, instead of the three required for STX, HHX, and HTX. All the different types of gymnasia (except for HF) theoretically give the same eligibility for university. However, because of the different subjects offered, students may be better qualified in an area of further study. For example, HHX students have subjects that make them more eligible for studies such as business studies or economics at university, while HTX offers applied science and mathematics that benefit studies in science or engineering. There is also EUX, which takes four to five years and ends with both the HTX (or HHX for EUX-business) exam and status as a journeyman of a craft. Compared to the somewhat equivalent A-levels in the UK, Danish gymnasia have more mandatory subjects. The subjects are divided into levels, where A-levels usually run through all three years, B-levels usually two years and C-levels one year (apart from PE, which exists as a C-level lasting three years). In Sweden, there are two different branches of studies: the first branch focuses on giving a vocational education while the second branch focuses on giving preparation for higher education. While students from both branches can go on to study at a university, students of the vocational branch graduate with a degree within their attended program. There are 18 national programs, 12 vocational and 6 preparatory. In the Faroe Islands, there are also four kinds of gymnasia, which are the equivalents of the Danish STX, HHX, HTX and HF programmes. The studentaskúli (the STX equivalent) and HF are usually located at the same institutions, as can be seen in the name of the institute in Eysturoy: Studentaskúlin og HF-skeiðið í Eysturoy. In Greenland, there is a single kind of gymnasium, Den Gymnasiale Uddannelse (Ilinniarnertuunngorniarneq), that replaced the earlier Greenlandic Secondary Education Programme (GU), the Greenland Higher Commercial Examination Programme (HHX) and the Greenland Higher Technical Examination Programme (HTX), which were based on the Danish system. This programme allows a more flexible Greenlandic gymnasium, where students, building on a common foundation course, can choose between different fields of study that meet the individual student's abilities and interests. The course is offered in Aasiaat, Nuuk, Sisimiut and Qaqortoq, with one in Ilulissat to be opened in 2015, or 2016 at the latest, if approved. In Finland, admission to gymnasia is competitive, with those accepted comprising 51% of the age group. The gymnasium concludes with the matriculation examination, an exam whose grades are the main criteria for university admissions. Switzerland In Switzerland, gymnasia are selective schools that provide a three- to six-year (depending on the canton) course of advanced secondary education intended to prepare students to attend university. They conclude with a nationally standardized exam, the Maturität, often shortened to Matura or Matur, which if passed allows students to attend a Swiss university. The gymnasia are operated by the cantons of Switzerland, and accordingly in many cantons they are called Kantonsschule (cantonal school).
Former Yugoslav countries In Bosnia and Herzegovina, Croatia, Montenegro, North Macedonia, Serbia, and Slovenia, a gymnasium education takes four years, following a compulsory eight- or nine-year elementary education, and ends with a final aptitude test called the Matura. In these countries, the final test is standardized at the state level and can serve as an entrance qualification for universities. There are either public (state-run and tuition-free), religious (church-run with a secular curriculum and tuition-free) or private (fee-paying) gymnasium schools in these countries. The subjects taught are mathematics, the native language, one to three foreign languages, history, geography, informatics (computers), the natural sciences (biology, chemistry, physics), history of art, music, philosophy, logic, physical education, and the social sciences (sociology, ethics or religious education, psychology, politics, and economy). Religious studies are optional. In Bosnia and Herzegovina, Croatia, Montenegro, Serbia and North Macedonia, Latin is also a mandatory subject in all gymnasia, while Ancient Greek is taught, together with Latin, in a certain type of gymnasium called the Classical Gymnasium. In all of these countries, the gymnasium (gimnazija) is generally viewed as a destination for the best-performing students and as the type of school that serves primarily to prepare students for university studies, while other students go to technical/vocational schools. Therefore, gymnasia often base their admittance criteria on an entrance exam, elementary school grades, or a combination of the two. Countries with gymnasium systems Albania: three years, after nine years (four years primary school and five years lower high school) of education, ends with the Matura at the age of 18. Argentina: Colegio Nacional de Buenos Aires, 6 years; Rafael Hernández National College of La Plata, five years (formerly 6 years), after 7 years of primary school; and Gymnasium UNT, eight years, ends at the age of 18. Austria: eight years, after four years of primary school; or four years, after primary school and four years of lower secondary school; ending in Matura at the age of 18. Belarus: 7 years, after four years of primary school. Belgium: 6 years, starting at age 11/13, after 6 years of primary school, ends at the age of 18, when students progress to a university. Bolivia: Deutsche Schule Mariscal Braun La Paz, 6 years, ends with Abitur. Bosnia and Herzegovina: four years, starting at age 14/15 after nine years in elementary school, ends with Matura. Brazil: Humboldt Schule of São Paulo is a German school in São Paulo. There are more Gymnasia in the country and some of them receive resources from the German government. Bulgaria: five years, after 7 years of primary school. Currently graduation after passing at least two Maturas. Colombia: Gimnasio Moderno (all-male, traditional Pre-K to 11th grade private school located in Bogotá, Colombia. Its founders were inspired by the original Greek gymnasion to name the first "Gimnasio" in Colombia). Croatia: four years, starting at age 14/15 after eight years in elementary school, with five different educational tracks: general education; classical (focused on Latin and Ancient Greek); modern languages; natural sciences (biology, chemistry, physics); and mathematics, physics and computer science; ends with the Matura exam. Students of all tracks have compulsory classes in Latin and English as well as in at least one additional foreign language (most commonly German, Italian, Spanish and French). Cyprus: three years, starting at age 12 and following 6 years of elementary school. Compulsory for all students.
Followed by the non-mandatory Lyceum (ages 15 to 18) for students with academic aspirations, or the Secondary Technical and Vocational Lyceum (TVE) for students who prefer vocational training. After successfully completing the program, students of TVE are awarded a School Leaving Certificate, which is recognized as equivalent to a Lyceum School Leaving Certificate (three-grade Senior Secondary School). Czech Republic: four years, starting at age 15 or 16; 6 years, starting at age 13 or 14 (not usual); eight years, starting at age 11 or 12; all ending in matura. Denmark: three years, or four years for athletes who are part of the Team Danmark elite sports program, and musicians, artists and actors who have chosen MGK ("Musical Elementary Course"), BGK ("Visual Arts Elementary Course") or SGK ("Performing Arts Elementary Course"), usually starting after 10 or 11 years of primary school. This is more like a prep school or the first years of college than high school. Everyone is eligible to go to a US high school, but one needs to be deemed competent to get into a gymnasium. (For more information, see Gymnasium (Denmark).) Gymnasium is also available in an intensive 2-year program leading to the HF ("Higher Preparatory Exam"), which doesn't give the same eligibility for university. Estonia: three years, after nine years of primary school. Faroe Islands: three years, usually starting after 9 or 10 years of primary school. The system is similar to the Danish system. A gymnasium-level education is also available in an intensive 2-year programme leading to the HF ("Higher Preparatory Exam"). Finland: lukio (educational language is Finnish) or gymnasium (educational language is Swedish) takes two–five years (most students spend three years), after nine years of primary school (peruskoulu, grundskola); it usually starts in the autumn of the year when the student turns 16 and ends with the matriculation examination; it is not compulsory and its entrance can be competitive, especially in larger cities. France: the French equivalent of a gymnasium is called a lycée (three years, after 5 years of primary school and 4 years of secondary school, ages 15–18). The last year (called terminale) ends with passing the baccalauréat, an examination to enter university. Germany: formerly eight–nine years depending on the state, now being changed to eight years nationwide, starting at 5th grade (at age 11) and ending in 12th or 13th grade; for more information, see Gymnasium (Germany). Greece: three years, starting at age 12 after six years of primary school. Compulsory for all children, it is followed by the non-mandatory Lyceum (ages 15–18) or the Vocational Lyceum (EPAL). The EPAL School Leaving Certificate is recognized equally as a Senior Secondary School Leaving Certificate (high school). Hungary: four/six/eight years, starting after eight/six/four years of primary school, ends with Matura; see Education in Hungary. Iceland: usually 3–4 years, starting at age 15 or 16 after 10 years of elementary school. Israel: five schools termed "gymnasium", located in Tel Aviv, Rishon LeZion, Jerusalem, and Haifa. Italy: ginnasio is the name of the first two years of the liceo classico; 7 years, after 5 years of primary school. Latvia: three or six years, depending on whether you start from the 7th or 10th grade. Liechtenstein: ends with Matura. Lithuania: usually 4 years: 2 years of basic school after 4 years of basic school and 2 years of secondary school; sometimes eight years: 6 of basic school and 2 of secondary school; 12 years in rural areas or in art/music gymnasia. Luxembourg: usually 7 years, starting at age 12–13 after six years of primary school. : 4 years, starting at age 14/15 after nine years in elementary school, ends with Matura.
Netherlands: six years, starting at age 11–13, after eight years of primary school. Prepares for admission to university. Gymnasia in the Netherlands have compulsory classes in Ancient Greek and/or Latin; the same high-level secondary school without the classical languages is called the atheneum. They are both variants of vwo (voorbereidend wetenschappelijk onderwijs). Norway: the traditional but now discontinued gymnasium led to the completion of the examen artium. This has now been succeeded by a 2-, 3-, or 4-year program, depending on the course path taken, starting at the age of 15/16 and culminating with an exam that qualifies for university matriculation. Poland: gimnazjum was the name of the 3-year Polish compulsory middle school, starting for pupils aged 12 or 13, following six years of primary school. Gimnazjum ended with a standardized test. Further education was encouraged but optional, consisting of either a 3-year liceum, a 4-year technikum, or 2 to 3 years of vocational school (potentially followed by a supplementary liceum or technikum). In 2017, Poland reverted to a compulsory 8-year primary school, optionally followed by a 4-year liceum, a 5-year technikum, or 2 to 3 years of vocational school. Romania: 4 years, starting at age 10 and ending at the age of 14. Primary education lasts for four years. Secondary education consists of: 1) lower secondary school education, organized in a gymnasium for grades 5 to 8, and the lower cycle of high school or arts and trades schools (vocational) for grades 9 and 10; 2) upper secondary school education, organized in the upper cycle of high school for grades 11, 12, and 13, followed, if necessary, by an additional high school year for those who want to move from vocational training (grade 10) to upper secondary school education. High school education (lower cycle of high school and upper secondary school education) offers three different orientations (academic, technological, specialization). Imperial Russia: gymnasia existed from 1726, lasting eight years from 1871. Women's gymnasia from 1862: 7 years plus an optional 8th year for specialisation in pedagogy. Progymnasia: equivalent to the first 4 years of a gymnasium. Russian Federation: a full 11 years, or 6–7 years after primary school. There are very few classical gymnasia in modern Russia. The notable exception is the St Petersburg Classical Gymnasium, where Latin, Ancient Greek, and mathematics are the three core subjects. In the majority of other cases, Russian gymnasia are schools specialised in a certain subject (or several subjects) in the humanities (e.g. Chelyabinsk School No. 1). Serbia: 4 years, starting at age 14/15 after eight years in elementary/primary school. There are three most common types of gymnasia: 1) the general gymnasium, which offers a broad education in all sciences; 2) natural sciences; and 3) social studies, available all over Serbia, and a few specialised ones, e.g. mathematics—only one in all of Serbia, in Belgrade; sports—just two in Serbia; language—a total of four in Serbia; and the military gymnasium—only one in all of Serbia. In the end, all students take a final exam—a Matura. Completion of the gymnasium is a prerequisite for enrollment into a university. English and another foreign language (chosen from German, French and Russian (the most common), Italian or Spanish (far less common), or Chinese and Japanese (only philological gymnasia have these two)), in addition to the mother tongue (and, in the case of minorities, also Serbian), are compulsory throughout. Slovakia: 4 years, starting at age 15/16 after completing nine years of elementary school (more common); or eight years, starting at age 11/12 after completing 5 years of elementary school; both end with the Maturita.
Slovenia: 4 years, starting at age 14/15; ends with Matura. South Africa: Paul Roos Gymnasium is a well-known gymnasium for boys in the town of Stellenbosch. The school is a boarding school, based on the classic British boarding schools; however, it was more influenced by the Protestant faith, hence the German name Gymnasium. Foreign languages such as French, German, Mandarin, and Latin are studied; Afrikaans and English are compulsory. School in South Africa: 5 years, starting at age 13/14, at a secondary institution, after 7 years of primary school, ends with Matric. Sweden: Upper secondary school in Sweden lasts for three years (formerly four years on some programmes). "Gymnasium" is the word used to describe this stage of the education system in Sweden. The National Agency for Education has decided that gymnasium is equivalent to the international upper secondary school. The gymnasium is optional and follows after nine years in elementary school. However, the Swedish term högskola ("high school") may cause some confusion. In Swedish it is used almost synonymously with "university", with the only difference being that universities have the right to issue doctoral examinations. In the case of technical universities, these can also be called högskola even when they have the right to issue doctoral examinations (e.g., one officially named a "Technical University" in English; Lunds tekniska högskola, the Faculty of Engineering at Lund University; and Kungliga Tekniska högskolan, the Royal Institute of Technology). Switzerland: usually 4 years after nine years of compulsory schooling (primary and secondary I); it is also possible to attend a gymnasium lasting 6 years, following a six-year primary schooling; the Gymnasium ends with Matura at the age of 18/19. : eight years, starting after four years of primary school. United Kingdom: historically, grammar schools have been the English equivalent of the gymnasium, selecting pupils on the basis of academic ability (usually through the 11+ entrance examination in year 6, at the age of 10 or 11) and educating them with the assumption that they would go on to study at a university; such schools were largely phased out from 1965 under the Wilson and Heath governments, and less than 5% of pupils now attend the remaining 146 grammar schools. The UK therefore no longer has a widespread equivalent of the gymnasium. The exception is Northern Ireland and some parts of England within the counties of Buckinghamshire, Lincolnshire, and Kent, which have retained the system. Grammar schools are also to be found in some London boroughs, North Yorkshire, Essex, Lancashire, Warwickshire, and Devon in varying degrees. Many fee-paying private schools, including all those commonly referred to as "public" schools, seek to fulfill a similar role to the state grammar school (and thus to the gymnasium in other countries) if the scholar has the ability and, most importantly, the money to attend them. United States: Public school: As school districts continue to experiment with educational styles, the magnet school has become a popular type of high school. Boston Latin School and Central High School in Philadelphia are the two oldest public schools in the country and the oldest magnet schools. As the concept has not become entrenched in the various American educational systems, due partly to the federal—rather than unitary—style of education in the United States, the term may vary among states.
Private school: the equivalent among private schools is the preparatory school.

Final degree
Depending on the country, the final degree (if any) is called Abitur, Artium, Diploma, Matura, Maturita or Student, and it usually opens the way to professional schools directly. However, these degrees are occasionally not fully accredited internationally, so students wanting to attend a foreign university often have to sit further exams to be permitted access to them.

Relationship with other education facilities
In countries like Austria, most university faculties accept only students from secondary schools that last four years (rather than three). This includes all Gymnasium students but only a part of vocational high schools, in effect making the Gymnasium the preferred choice for all pupils aiming for university diplomas.

In Germany, the other types of secondary school are the Hauptschule, the Realschule and the Gesamtschule. These are attended by about two thirds of the students, and the first two are practically unknown in other parts of the world. A Gesamtschule largely corresponds to a British or American comprehensive school. However, it offers the same school-leaving certificates as the other three types: the Hauptschulabschluss (the school-leaving certificate of a Hauptschule after 9th grade, or in Berlin and North Rhine-Westphalia after 10th grade), the Realschulabschluss (also called Mittlere Reife, the school-leaving certificate of a Realschule after 10th grade) and the Abitur (the school-leaving certificate after 12th grade). Students who graduate from the Hauptschule or the Realschule may continue their schooling at a vocational school until they have full job qualifications. It is also possible, after 10th grade, to obtain a qualification that allows students to continue their education at the Oberstufe (upper stage) of a gymnasium and obtain the Abitur. There are two types of vocational school in Germany: the Berufsschule, a part-time vocational school that is part of Germany's dual education system, and the Berufsfachschule, a full-time vocational school outside the dual education system. Students who graduate from a vocational school, and students who graduate with a good grade point average from a Realschule, can continue their schooling at another type of German secondary school, the Fachoberschule, a vocational high school. The school-leaving exam of this type of school, the Fachhochschulreife, enables the graduate to start studying at a Fachhochschule (polytechnic) and, in Hesse, also at a university within that state. Students who have graduated from vocational school and have been working in a job for at least three years can go to a Berufsoberschule to obtain either, after one year, a qualification restricted to the subject "branch" (economic, technical or social) they studied, meaning they may go to university but can only study subjects belonging to that branch, or, after two years, the normal qualification, which gives them complete access to universities.

See also
Comparison of US and UK Education
Educational stage
Gymnasium (ancient Greece)
Gymnasium (Germany)
Lyceum
Lyceum (classical)
Realschule

Explanatory notes

Citations

External links

School types Secondary education
Retrofuturism
Retrofuturism (adjective retrofuturistic or retrofuture) is a movement in the creative arts showing the influence of depictions of the future produced in an earlier era. If futurism is sometimes called a "science" bent on anticipating what will come, retrofuturism is the remembering of that anticipation. Characterized by a blend of old-fashioned "retro styles" with futuristic technology, retrofuturism explores the themes of tension between past and future, and between the alienating and empowering effects of technology. Primarily reflected in artistic creations and modified technologies that realize the imagined artifacts of its parallel reality, retrofuturism can be seen as "an animating perspective on the world".

Etymology
The word retrofuturism is formed by adding the prefix "retro", from the Latin for "backwards", to the word "future", which also originates from Latin. According to the Oxford English Dictionary, an early use of the term appears in a Bloomingdale's advertisement in a 1983 issue of The New York Times. The ad talks of jewellery that is "silverized steel and sleek grey linked for a retro-futuristic look". In an example more related to retrofuturism as an exploration of past visions of the future, the term appears in the form of "retro-futurist" in a 1984 review of the film Brazil in The New Yorker, in which critic Pauline Kael writes, "[Terry Gilliam] presents a retro-futurist fantasy."

Historiography
Retrofuturism builds on ideas of futurism, but the latter term functions differently in different contexts. In avant-garde artistic, literary and design circles, futurism is a long-standing and well-established term. But in its more popular form, futurism (sometimes referred to as futurology) is "an early optimism that focused on the past and was rooted in the nineteenth century, an early-twentieth-century 'golden age' that continued long into the 1960s' Space Age".

Retrofuturism is first and foremost based on modern but changing notions of "the future". As Guffey notes, retrofuturism is "a recent neologism", but it "builds on futurists' fevered visions of space colonies with flying cars, robotic servants, and interstellar travel on display there; where futurists took their promise for granted, retro-futurism emerged as a more skeptical reaction to these dreams." It took its current shape in the 1970s, a time when technology was rapidly changing. From the advent of the personal computer to the birth of the first test-tube baby, this period was characterized by intense and rapid technological change. But many in the general public began to question whether applied science would achieve its earlier promise: that life would inevitably improve through technological progress. In the wake of the Vietnam War, environmental degradation, and the energy crisis, many commentators began to question the benefits of applied science. But they also wondered, sometimes in awe, sometimes in confusion, at the scientific positivism evinced by earlier generations. Retrofuturism "seeped into academic and popular culture in the 1960s and 1970s", inflecting George Lucas's Star Wars and the paintings of pop artist Kenny Scharf alike. Surveying the optimistic futurism of the early twentieth century, historians Joe Corn and Brian Horrigan remind us that retrofuturism is "a history of an idea, or a system of ideas—an ideology. The future, of course, does not exist except as an act of belief or imagination."
Characteristics
Retrofuturism incorporates two overlapping trends which may be summarized as the future as seen from the past and the past as seen from the future.

The first trend, retrofuturism proper, is directly inspired by the imagined future which existed in the minds of writers, artists, and filmmakers in the pre-1960 period who attempted to predict the future, either in serious projections of existing technology (e.g. in magazines like Science and Invention) or in science fiction novels and stories. Such futuristic visions are refurbished and updated for the present, and offer a nostalgic, counterfactual image of what the future might have been, but is not.

The second trend is the inverse of the first: futuristic retro. It starts with the retro appeal of old styles of art, clothing, and mores, and then grafts modern or futuristic technologies onto it, creating a mélange of past, present, and future elements. Steampunk, a term applying both to the retrojection of futuristic technology into an alternative Victorian age and to the application of neo-Victorian styles to modern technology, is a highly successful version of this second trend. In the movie Space Station 76 (2014), mankind has reached the stars, but clothes, technology, furniture and above all social taboos are purposely highly reminiscent of the mid-1970s.

In practice, the two trends cannot be sharply distinguished, as they mutually contribute to similar visions. Retrofuturism of the first type is inevitably influenced by the scientific, technological, and social awareness of the present, and modern retrofuturistic creations are never simply copies of their pre-1960 inspirations; rather, they are given a new (often wry or ironic) twist by being seen from a modern perspective. In the same way, futuristic retro owes much of its flavor to early science fiction (e.g. the works of Jules Verne and H. G. Wells), and in a quest for stylistic authenticity may continue to draw on writers and artists of the desired period. Both retrofuturistic trends in themselves refer to no specific time. When a time period is supplied for a story, it might be a counterfactual present with unique technology; a fantastic version of the future; or an alternate past in which the imagined (fictitious or projected) inventions of the past were indeed real.

The import of retrofuturism has, in recent years, come under considerable discussion. Some, like the German architecture critic Niklas Maak, see retrofuturism as "nothing more than an aesthetic feedback loop recalling a lost belief in progress, the old images of the once radically new". Bruce McCall calls retrofuturism a "faux nostalgia", the nostalgia for a future that never happened.

Themes
Although retrofuturism, due to the varying time periods and futuristic visions to which it alludes, does not provide a unified thematic purpose or experience, a common thread is dissatisfaction or discomfort with the present, to which retrofuturism provides a nostalgic contrast. A similar theme is dissatisfaction with the modern world itself. A world of high-speed air transport, computers, and space stations is (by any past standard) "futuristic"; yet the search for alternative and perhaps more promising futures suggests a feeling that the desired or expected future has failed to materialize. Retrofuturism suggests an alternative path, and in addition to pure nostalgia, may act as a reminder of older but now forgotten ideals.
This dissatisfaction also manifests as political commentary in retrofuturistic literature, in which visionary nostalgia is paradoxically linked to a utopian future modelled after conservative values, as seen in the example of Fox News' use of BioShock's aesthetic in a 2014 broadcast.

Retrofuturism also implies a reevaluation of technology. Unlike the total rejection of post-medieval technology found in most fantasy genres, or the embrace of any and all possible technologies found in some science fiction, retrofuturism calls for a human-scale, largely comprehensible technology, amenable to tinkering and less opaque than modern black-box technology.

Retrofuturism stems from one of two main viewpoints: an optimistic one or a pessimistic one. Optimistic retrofuturism tends to imagine a futuristic society whose technological advancement serves exploration for the sake of science. Retrofuturism is not universally optimistic, however, and when its points of reference touch on gloomy periods like World War II, or the paranoia of the Cold War, it may itself become bleak and dystopian. This pessimistic retrofuturism often imagines technological advancement as either the downfall of humanity or its last hope. In such cases, the alternative reality inspires fear, not hope, though it may still be coupled with nostalgia for a world of greater moral as well as mechanical transparency. It has been argued that retrofuturism, by finding hope in the disappointment and dystopia and using that hope to push towards a brighter future, can be optimistic. Similarly, the visions of utopias depicted in retrofuturistic pieces can re-instill that hopefulness in audiences that have lost it.

Genres
Genres of retrofuturism include cyberpunk, steampunk, dieselpunk, atompunk, and Raygun Gothic, each referring to a technology from a specific time period. The first of these to be named and recognized as its own genre was cyberpunk, originating in the early to mid-1980s in literature with the works of Bruce Bethke, William Gibson, Bruce Sterling, and Pat Cadigan. Its setting is almost always a dystopian future, with a strong emphasis either upon outlaws hacking the futuristic world's machinery (often computers and computer networks), or even upon post-apocalyptic settings. The post-apocalyptic variant is the one usually associated with retrofuturism, where characters rely upon a mixture of old and new technologies. Furthermore, synthwave and vaporwave are nostalgic, humorous and often retrofuturistic revivals of the early cyberpunk aesthetic.

Steampunk was among the earliest subgenres to be recognized, emerging in the late 1980s. It presents a generally more optimistic and brighter outlook compared to cyberpunk. Steampunk is typically set in an alternate history closely resembling our own from the late 18th century, particularly the Regency era onwards, up to approximately 1914. However, it diverges from history in that it envisions 20th-century or even futuristic technologies powered by steam. One of the recurring themes in this genre is the fascination with electricity as a mysterious force, often considered a utopian power source of the future. It is occasionally portrayed as having mystical healing properties, akin to how nuclear energy was perceived in the mid-20th century. Steampunk shares similarities with the original scientific romances and utopian novels of authors like H. G. Wells and Jules Verne.
The modern form of steampunk literature can be traced back to works such as Mervyn Peake's "Titus Alone" (1959), Ronald W. Clark's "Queen Victoria's Bomb" (1967), Michael Moorcock's "A Nomad of the Time Streams" series (1971–1981), K. W. Jeter's "Morlock Night" (1979), and William Gibson and Bruce Sterling's "The Difference Engine" (1990). In the realm of cinema, early examples include "The Time Machine" (1960) and "Castle in the Sky" (1986). An early instance of steampunk in comics can be found in the Franco-Belgian graphic novel series "Les Cités obscures", initiated by creators François Schuiten and Benoît Peeters in the early 1980s. On occasion, steampunk blurs the lines with the Weird West genre.

The most recently named and recognized retrofuturistic genre is dieselpunk, also known as decodence (the term dieselpunk is often associated with a more pulpish form, and decodence, named after the contemporary art movement of Art Deco, with a more sophisticated form), set in alternate versions of the period from roughly the 1920s to the 1950s. Early examples include the 1970s concept albums of the German band Kraftwerk, along with their designs and marketing materials (see below), the comic-book character Rocketeer (first appearing in his own series in 1982), the Fallout series of video games, and films such as Brazil (1985), Batman (1989), The Rocketeer (1991), Batman Returns (1992), The Hudsucker Proxy (1994), The City of Lost Children (1995), and Dark City (1998). The lower end of the genre in particular strongly mimics the pulp literature of the era (such as the 2004 film Sky Captain and the World of Tomorrow), and films of the genre often reference the cinematic styles of film noir and German Expressionism. At times, the genre overlaps with the alternate history genre of a different World War II, such as one with an Axis victory.

Design and arts
Although loosely affiliated with early-twentieth-century Futurism, retrofuturism draws from a wider range of sources. To be sure, retrofuturist art and literature often draw from the factories, buildings, cities, and transportation systems of the machine age. But it might be said that 20th-century futuristic vision found its ultimate expression in the development of Googie architecture or Populuxe design. As applied to fiction, this brand of retrofuturistic visual style began to take shape in William Gibson's short story "The Gernsback Continuum". Here and elsewhere it is referred to as Raygun Gothic, a catchall term for a visual style that incorporates various aspects of the Googie, Streamline Moderne, and Art Deco architectural styles when applied to retrofuturistic science fiction environments. Although Raygun Gothic is most similar to the Googie or Populuxe style and sometimes synonymous with it, the name is primarily applied to images of science fiction. The style is also still a popular choice for retro sci-fi in film and video games. Raygun Gothic's primary influences include the set designs of Kenneth Strickfaden and Fritz Lang. The term was coined by William Gibson in his story "The Gernsback Continuum": "Cohen introduced us and explained that Dialta [a noted pop-art historian] was the prime mover behind the latest Barris-Watford project, an illustrated history of what she called 'American Streamlined Modern'. Cohen called it 'raygun Gothic'. Their working title was The Airstream Futuropolis: The Tomorrow That Never Was."
Aspects of this form of retrofuturism can also be associated with the neo-Constructivist revival that emerged in art and design circles in the late 1970s and early 1980s. Designers like David King in the UK and Paula Scher in the US imitated the cool, futuristic look of the Russian avant-garde of the years following the Russian Revolution.

With three of their 1970s albums, the German band Kraftwerk tapped into a larger retrofuturist vision by combining their pioneering futuristic electronic music with nostalgic visuals. Kraftwerk's retro-futurism in their 1970s visual language has been described by German literary critic Uwe Schütte, a reader at Aston University, Birmingham, as "clear retro-style", and in the 2008 three-hour documentary Kraftwerk and the Electronic Revolution, Irish-British music scholar Mark J. Prendergast refers to Kraftwerk's peculiar "nostalgia for the future", clearly referencing "an interwar [progressive] Germany that never was but could've been, and now [due to their influence as a band] hopefully could happen again". Design historian Elizabeth Guffey has written that if Kraftwerk's machine imagery was lifted from Russian design motifs that were once considered futuristic, they also presented a "compelling, if somewhat chilling, vision of the world in which musical ecstasy is rendered cool, mechanical and precise." Kraftwerk's three retrofuturist albums are the following:

Kraftwerk's 1975 album Radio-Activity showed a 1930s radio on the cover, and its inlay (which for the later CD re-release was widely expanded into a booklet illustrated in the same nostalgic style) showed the band photographed in black and white with old-fashioned suits and hairdos. The music, in its instrumentation as well as its ambiguous lyrics, paid homage (besides the other obvious theme of nuclear decay and nuclear power referenced by the album's titular pun) to the "Radio Stars", that is, the pioneers of electronic music of the first half of the 20th century, such as Guglielmo Marconi, Léon Theremin, Pierre Schaeffer, and Karlheinz Stockhausen (due to whom the band referred to themselves as merely the "second generation" of electronic music).

The European version of the band's 1977 album Trans-Europe Express had a similar 1930s-style black-and-white photo of the band members on the cover (the U.S.
version even had a cover with a vintage-style colored photograph in the style of Golden Age Hollywood stars). The style of the sleeve design, as well as the design of promotional material tying in with the album, was influenced by Bauhaus, Art Deco, and Streamline Moderne. The record came with a large, hand-tinted black-and-white poster of the band members in early-1930s-style suits; band member Karl Bartos later said in Kraftwerk and the Electronic Revolution that their intention was to visually resemble "an interwar string orchestra electrified" and that the background was meant to be a pictorial Switzerland where the band was making a resting stop in between two legs of their European tour on the eponymous Trans-Europe Express. The song lyrics referenced the "elegance and decadence" of an urban interwar Europe, and in the promo clip made for the album's title song (purposely shot in black and white) and other promotional material, the eponymous Trans-Europe Express was portrayed by the Schienenzeppelin first employed by the Deutsche Reichsbahn in 1931 (footage of the large original was used in outdoor shots, and a miniature model of it was used for shots where the TEE moved through a futuristic cityscape strongly reminiscent of Fritz Lang's 1927 film Metropolis).

The cover and sleeve design of the 1978 album The Man-Machine exhibits an obvious stylistic nod to the Constructivism of 1920s artists such as El Lissitzky, Alexander Rodchenko, and László Moholy-Nagy (due to which band members have also referred to it as "the Russian album"), and one song references the film Metropolis again. From this album on, Kraftwerk would also use their "show-room dummies", or robot lookalikes, on stage and in promotional material, and would increase the use of slightly campish make-up on band members that also resembled 1920s expressionist make-up, which to a lesser degree had already appeared in the promotional material for their 1977 album Trans-Europe Express.

From their 1981 album Computer World onwards, Kraftwerk have largely abandoned their retro notions and appear mainly futuristic. The only references to their earlier retro style today appear in excerpts from their 1970s promo clips that are projected between more modern segments in their stage shows during the performance of these old songs.

Fashion
Retrofuturistic clothing is a particular imagined vision of the clothing that might be worn in the distant future, typically found in science fiction and science fiction films of the 1940s onwards, but also in journalism and other popular culture. The garments envisioned have most commonly been either one-piece garments, skin-tight garments, or both, typically ending up looking like either overalls or leotards, often worn together with plastic boots. In many cases, there is an assumption that the clothing of the future will be highly uniform. The cliché of futuristic clothing has now become part of the idea of retrofuturism. Futuristic fashion plays on these now-hackneyed stereotypes and recycles them as elements in the creation of real-world clothing fashions. "We've actually seen this look creeping up on the runway as early as 1995, though it hasn't been widely popular or acceptable street wear even through 2008," said Brooke Kelley, fashion editor and Glamour magazine writer. "For the last 20 years, fashion has reviewed the times of past, decade by decade, and what we are seeing now is a combination of different eras into one complete look.
Future fashion is a style beyond anything we've yet dared to wear, and it's going to be a trend setter's paradise."

Architecture
Retrofuturism has appeared in some examples of postmodern architecture. To critics such as Niklas Maak, the term suggests that the "future style" is "a mere quotation of its own iconographic tradition" and retrofuturism is little more than "an aesthetic feedback loop". In one example, the upper portion of a building is not intended to be integrated with the building but rather to appear as a separate object: a huge flying saucer-like spaceship only incidentally attached to a conventional building. This appears intended not to evoke an even remotely possible future, but rather a past imagination of that future, or a re-embracing of the futuristic vision of Googie architecture. The once-futuristic Los Angeles International Airport Theme Building was built in 1961 as an expression of the then new jet and space ages, incorporating what later came to be known as Googie and Populuxe design elements. Plans unveiled in 2008 for LAX's expansion featured retrofuturist flying-saucer/spaceship themes in proposals for new terminals and concourses.

Music
The modern electro style is influenced by Detroit-based artists of the early 1980s (such as Drexciya, Aux 88, and Cybotron). This style blends old analog gear (the Roland TR-808 and synths) and sampling methods from the 1980s with a modern approach to electro. Record labels involved include AMZS Recording, Gosu, Osman, Traffic Records and many others.
Canadian band Alvvays's music video for "Dreams Tonite", which includes archival footage of Montreal's Expo 67, was described by the band as "fetishizing retro-futurism".
English band Electric Light Orchestra released their concept album Time in 1981. The album follows a man who wakes up in the year 2095, his reaction to this sudden change, and his longing to be back in 1981, and it includes multiple descriptions of life and technology in 2095.

Film and television
Director Brad Bird describes his 2004 Pixar film The Incredibles as "looking like what we thought the future would turn out like in the 1960s." British filmmaker Richard Ayoade noted that his 2013 film The Double was designed with the intention of looking like "the future imagined by someone in the past who got it wrong." The 2015 Disney film Tomorrowland, based on Disneyland's attraction of the same name and directed by Brad Bird, has a retrofuturistic aesthetic. The TVA in the 2021 MCU TV show Loki is reminiscent of a 1960s office building, with many futuristic devices scattered throughout. The 2024 Fallout live-action series, based on the video game franchise of the same name, is set in a 1950s-inspired Raygun Gothic and Atompunk retrofuture.

See also
Cyberpunk and cyberpunk derivatives
Hauntology
List of stories set in a future now past
Raygun Gothic

References

Citations

Further reading

External links
Interesting Engineering, 17 February 2021: Fascinating Visions of Our Present From Over 100 Years Ago
The wonder city you may live to see – 1950 as seen in 1925
retro-futurismus.de – A German site showing numerous illustrations

Architectural styles Science fiction themes Retro style Prequels Retro-style automobiles Futurist movements Futurism
Elite theory
In philosophy, political science and sociology, elite theory is a theory of the state that seeks to describe and explain power relationships in society. The theory posits that a small minority, consisting of members of the economic elite and policymaking networks, holds the most power, and that this power is independent of democratic elections. Through positions in corporations and influence over policymaking networks, through the financial support of foundations, or through positions with think tanks or policy-discussion groups, members of the "elite" exert significant power over corporate and government decisions. The basic characteristics of this theory are that power is concentrated, the elites are unified, the non-elites are diverse and powerless, elites' interests are unified due to common backgrounds and positions, and the defining characteristic of power is institutional position.

Elite theory opposes pluralism, a tradition that emphasizes how multiple major social groups and interests have an influence upon, and various forms of representation within, more powerful sets of rulers, contributing to representative political outcomes that reflect the collective needs of society. Even when entire groups are ostensibly completely excluded from the state's traditional networks of power (on the basis of arbitrary criteria such as nobility, race, gender, or religion), elite theory recognizes that "counter-elites" frequently develop within such excluded groups. Negotiations between such disenfranchised groups and the state can be analyzed as negotiations between elites and counter-elites. A major problem, in turn, is the ability of elites to co-opt counter-elites.

Democratic systems function on the premise that voting behavior has a direct, noticeable effect on policy outcomes, and that these outcomes are preferred by the largest portion of voters. A 2014 study that correlated voters' preferences with policy outcomes found that the statistical correlation between the two is heavily dependent on the income brackets of the voting groups. At the lowest income sampled, the correlation coefficient reached zero, whereas the highest income returned a correlation above 0.6. The conclusion was that there is a strong, linear correlation between the income of voters and how often their policy preferences become reality. The causation behind this correlation has not yet been established in subsequent studies, but it remains an area of research.

History

Ancient perspective
Polybius (c. 150 BC) referred to what we today call elite theory simply as "autocracy". He posited with great confidence that the three original forms of political power, rule by one man (monarchy/executive), by few men (aristocracy), and by the many (democracy), would eventually be corrupted into debased forms of themselves if not balanced in a "mixed government". Monarchy would become "tyranny", democracy would become "mob rule", and rule by elites (aristocracy) would become corrupted into what he called "oligarchy". Polybius attributed this to a failure to properly apply checks and balances between the three mentioned forms as well as subsequent political institutions.

Italian school of elitism
Vilfredo Pareto (1848–1923), Gaetano Mosca (1858–1941), and Robert Michels (1876–1936) were cofounders of the Italian school of elitism, which influenced subsequent elite theory in the Western tradition. The outlook of the Italian school of elitism is based on two ideas:
Power lies in positions of authority in key economic and political institutions.
The psychological difference that sets elites apart is that they have personal resources, for instance intelligence and skills, and a vested interest in the government; while the rest are incompetent and do not have the capabilities of governing themselves, the elite are resourceful and strive to make the government work. For, in reality, the elite would have the most to lose in a failed state.

Vilfredo Pareto
Pareto emphasized the psychological and intellectual superiority of elites, believing that they were the highest accomplishers in any field. He discussed the existence of two types of elites:
Governing elites
Non-governing elites
He also extended the idea that a whole elite can be replaced by a new one, and how one can circulate from being elite to non-elite.

Gaetano Mosca
Mosca emphasized the sociological and personal characteristics of elites. He said elites are an organized minority and that the masses are an unorganized majority. The ruling class is composed of the ruling elite and the sub-elites. He divides the world into two groups:
Political class
Non-political class
Mosca asserts that elites have intellectual, moral, and material superiority that is highly esteemed and influential.

Robert Michels
Sociologist Michels developed the iron law of oligarchy, in which he asserts that social and political organizations are run by few individuals, and that social organization and the division of labor are key. He believed that all organizations were elitist and that elites have three basic principles that help in the bureaucratic structure of political organization:
Need for leaders, specialized staff, and facilities
Utilization of facilities by leaders within their organization
The importance of the psychological attributes of the leaders

Contemporary elite theorists

Elmer Eric Schattschneider
Elmer Eric Schattschneider offered a strong critique of the American political theory of pluralism: rather than an essentially democratic system in which the many competing interests of citizens are amply represented, if not advanced, by equally many competing interest groups, Schattschneider argued, the pressure system is biased in favor of "the most educated and highest-income members of society", and showed that "the difference between those who participate in interest group activity and those who stand at the sidelines is much greater than between voters and nonvoters". In The Semisovereign People, Schattschneider argued the scope of the pressure system is really quite small: the "range of organized, identifiable, known groups is amazingly narrow; there is nothing remotely universal about it" and the "business or upper-class bias of the pressure system shows up everywhere". He says the "notion that the pressure system is automatically representative of the whole community is a myth" and, instead, the "system is skewed, loaded and unbalanced in favor of a fraction of a minority".

C. Wright Mills
Mills published his book The Power Elite in 1956, in which he claimed to present a new sociological perspective on systems of power in the United States. He identified a triumvirate of power groups (political, economic and military) which form a distinguishable, although not unified, power-wielding body in the United States. Mills proposed that this group had been generated through a process of rationalization at work in all advanced industrial societies whereby the mechanisms of power became concentrated, funneling overall control into the hands of a limited, somewhat corrupt group.
This reflected a decline in politics as an arena for debate and its relegation to a merely formal level of discourse. This macro-scale analysis sought to point out the degradation of democracy in "advanced" societies and the fact that power generally lies outside the boundaries of elected representatives. A main influence for the study was Franz Leopold Neumann's book Behemoth: The Structure and Practice of National Socialism, 1933–1944, a study of how Nazism came to power in the German democratic state. It provided the tools to analyze the structure of a political system and served as a warning of what could happen in a modern capitalistic democracy.

Floyd Hunter
The elite theory analysis of power was also applied on the micro scale in community power studies such as that by Floyd Hunter (1953). Hunter examined in detail the power relationships evident in his "Regional City", looking for the "real" holders of power rather than those in obvious official positions. He posited a structural-functional approach that mapped hierarchies and webs of interconnection within the city, mapping relationships of power between businessmen, politicians, clergy, etc. The study was promoted to debunk current concepts of any "democracy" present within urban politics and to reaffirm the arguments for a true representative democracy. This type of analysis was also used in later, larger-scale studies such as that carried out by M. Schwartz examining the power structures within the sphere of the corporate elite in the United States.

G. William Domhoff
In his controversial 1967 book Who Rules America?, G. William Domhoff researched local and national decision-making process networks, seeking to illustrate the power structure in the United States. He asserts, much like Hunter, that an elite class that owns and manages large income-producing properties (like banks and corporations) dominates the American power structure politically and economically.

James Burnham
Burnham's early work The Managerial Revolution sought to express the movement of all functional power into the hands of managers rather than politicians or businessmen, separating ownership and control.

Robert D. Putnam
Putnam saw the development of technical and exclusive knowledge among administrators and other specialist groups as a mechanism that strips power from the democratic process and slips it to the advisors and specialists who influence the decision process. "If the dominant figures of the past hundred years have been the entrepreneur, the businessman, and the industrial executive, the 'new men' are the scientists, the mathematicians, the economists, and the engineers of the new intellectual technology."

Thomas R. Dye
Dye, in his book Top Down Policymaking, argues that U.S. public policy does not result from the "demands of the people", but rather from elite consensus found in Washington, D.C.-based non-profit foundations, think tanks, special-interest groups, and prominent lobbying and law firms. Dye's thesis is further expanded upon in his works The Irony of Democracy, Politics in America, Understanding Public Policy, and Who's Running America?.

George A. Gonzalez
In his book Corporate Power and the Environment, George A. Gonzalez writes on the power of U.S. economic elites to shape environmental policy for their own advantage.
In The Politics of Air Pollution: Urban Growth, Ecological Modernization and Symbolic Inclusion, and also in Urban Sprawl, Global Warming, and the Empire of Capital, Gonzalez employs elite theory to explain the interrelationship between environmental policy and urban sprawl in America. His most recent work, Energy and Empire: The Politics of Nuclear and Solar Power in the United States, demonstrates that economic elites tied their advocacy of the nuclear energy option to post-1945 American foreign policy goals, while at the same time these elites opposed government support for other forms of energy, such as solar, that cannot be dominated by one nation.

Ralf Dahrendorf
In his book Reflections on the Revolution in Europe, Ralf Dahrendorf asserts that, due to the advanced level of competence required for political activity, a political party tends to become, in effect, a provider of "political services", that is, the administration of local and governmental public offices. During the electoral campaign, each party tries to convince voters it is the most suitable for managing the state business. The logical consequence would be to acknowledge this character and openly register the parties as service-providing companies. In this way, the ruling class would include the members and associates of legally acknowledged companies, and the "class that is ruled" would select by election the state administration company that best fits its interests.

Martin Gilens and Benjamin I. Page
In their statistical analysis of 1,779 policy issues, professors Martin Gilens and Benjamin Page found that "economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence." Critics cited by Vox.com argued, using the same dataset, that when the rich and middle class disagreed, the rich got their preferred outcome 53 percent of the time and the middle class got what they wanted 47 percent of the time. Some critics disagree with Gilens and Page's headline conclusion, but do believe that the dataset confirms that "the rich and middle (class) are effective at blocking policies that the poor want".

Thomas Ferguson
The political scientist Thomas Ferguson's Investment Theory of Party Competition can be thought of as an elite theory. Set out most extensively in his 1995 book Golden Rule: The Investment Theory of Party Competition and the Logic of Money-driven Political Systems, the theory begins by noting that in modern political systems the cost of acquiring political awareness is so great that no citizen can afford it. As a consequence, these systems tend to be dominated by those who can, most typically elites and corporations. These elites then seek to influence politics by "investing" in the parties or policies they support through political contributions and other means such as endorsements in the media.

Neema Parvini
In his 2022 book The Populist Delusion, Neema Parvini asserts that "the will of the people" does not impact political decisions and that "elite driven change" better explains the realities of political power. In the book, Parvini introduces elite theory by explicating the theories of other elite theorists: Gaetano Mosca, Vilfredo Pareto, Robert Michels, Carl Schmitt, Bertrand de Jouvenel, James Burnham, Samuel T. Francis and Paul Gottfried.
In explaining the thinkers and applying their frameworks to Western political history, Parvini concludes that the true functioning of power is one "where an organized minority elite rule over a disorganized mass". Parvini also discusses and presents elite theory and the arguments made in The Populist Delusion on his YouTube channel, Academic Agent.

See also
Democratic deficit
Elitism
Iron law of oligarchy
Mass society
Positive political theory
The Power Elite
Ruling class
Expressions of dominance
Liberal elite
Invisible Class Empire
Dictatorship of the bourgeoisie

References

Bibliography
Amsden, Alice, with Alisa Di Caprio and James A. Robinson (2012). The Role of Elites in Economic Development. Oxford University Press.
Bottomore, T. (1993). Elites and Society (2nd edition). London: Routledge.
Burnham, J. (1960). The Managerial Revolution. Bloomington: Indiana University Press.
Crockett, Norman L., ed. (1970). The Power Elite in America; excerpts from experts online free.
Domhoff, G. William (1967–2009). Who Rules America? McGraw-Hill; online 5th edition.
Domhoff, G. William (2017). Studying the Power Elite: Fifty Years of Who Rules America? Routledge; new essays by 12 experts.
Downey, Liam, et al. (2020). "Power, hegemony, and world society theory: A critical evaluation." Socius 6; online.
Dye, T. R. (2000). Top Down Policymaking. New York: Chatham House Publishers.
Gonzalez, G. A. (2012). Energy and Empire: The Politics of Nuclear and Solar Power in the United States. Albany: State University of New York Press.
Gonzalez, G. A. (2009). Urban Sprawl, Global Warming, and the Empire of Capital. Albany: State University of New York Press.
Gonzalez, G. A. (2006). The Politics of Air Pollution: Urban Growth, Ecological Modernization, and Symbolic Inclusion. Albany: State University of New York Press.
Gonzalez, G. A. (2001). Corporate Power and the Environment. Rowman & Littlefield Publishers.
Hunter, Floyd (1953). Community Power Structure: A Study of Decision Makers.
Lerner, R., A. K. Nagai and S. Rothman (1996). American Elites. New Haven, CT: Yale University Press.
Milch, Jan (1992). C. Wright Mills och hans sociologiska vision: Om hans syn på makt och metod och vetenskap. Sociologiska institutionen, Göteborgs universitet ("C. Wright Mills and his sociological vision: on his views on power, methodology and science." Department of Sociology, Gothenburg University).
Mills, C. Wright (1956). The Power Elite; online.
Neumann, Franz Leopold (1944). Behemoth: The Structure and Practice of National Socialism, 1933–1944. Harper; online.
Putnam, R. D. (1976). The Comparative Study of Political Elites. New Jersey: Prentice Hall.
Putnam, R. D. (1977). "Elite Transformation in Advanced Industrial Societies: An Empirical Assessment of the Theory of Technocracy." Comparative Political Studies, Vol. 10, No. 3, pp. 383–411.
Schwartz, M. (ed.) (1987). The Structure of Power in America: The Corporate Elite as a Ruling Class. New York: Holmes & Meier.
Volpe, G. (2021). Italian Elitism and the Reshaping of Democracy in the United States. Abingdon, Oxon; New York: Routledge.

Comparative politics Political science theories Sociological theories Social class in the United States Political science Conflict theory Structural functionalism Majority–minority relations
Technological evolution
The term "technological evolution" captures explanations of technological change that draw on mechanisms from evolutionary biology. Evolutionary biology was originally described in On the Origin of Species by Charles Darwin. By analogy, technological evolution can be used to describe the origin of new technologies.

Combinatoric theory of technological change
The combinatoric theory of technological change states that every technology always consists of simpler technologies, and that a new technology is made of already existing technologies. One notion of this theory is that this interaction of technologies creates a network. All the technologies which interact to form a new technology can be thought of as complements, such as a screwdriver and a screw, which by their interaction create the process of screwing a screw. This newly formed process of screwing a screw can be perceived as a technology itself and can therefore be represented by a new node in the network of technologies. The new technology can in turn interact with other technologies to form yet another new technology. As the process of combining existing technologies is repeated again and again, the network of technologies grows. This mechanism of technological change has been termed "combinatorial evolution"; others have called it "technological recursion". Brian Arthur has elaborated how the theory is related to the mechanism of genetic recombination from evolutionary biology and in which aspects it differs.
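The network-growth mechanism described above can be illustrated with a small, hypothetical sketch. The data structure and the random combination rule below are illustrative assumptions rather than part of any published model: existing technologies are nodes, and each new technology is added as a node linked to the existing technologies it combines.

```python
import random

def grow_technology_network(primitives, steps, seed=0):
    """Toy model of combinatorial evolution: each new technology is a
    combination of technologies that already exist in the network."""
    rng = random.Random(seed)
    # Map each technology to the set of existing technologies it was built from;
    # the starting "primitive" technologies have no components.
    network = {tech: set() for tech in primitives}
    for i in range(steps):
        # Pick two or three existing technologies to act as complements.
        components = rng.sample(sorted(network), k=rng.choice([2, 3]))
        new_tech = f"tech_{i}"  # hypothetical name for the new combination
        network[new_tech] = set(components)  # the new node links back to its parts
    return network

# Example: screwdriver + screw can combine into a new "process" node, and the
# network keeps growing as combinations are themselves recombined.
net = grow_technology_network(["screwdriver", "screw", "lever"], steps=5)
for tech, parts in net.items():
    print(tech, "<-", sorted(parts) if parts else "primitive")
```

Each run simply produces a denser web of combinations, which is the point of the combinatoric theory: every new node is assembled from nodes that already exist.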
History of technological evolution
Technological evolution is a theory of radical transformation of society through technological development. The theory originated with the Czech philosopher Radovan Richta (see also Mankind in Transition: A View of the Distant Past, the Present and the Far Future, Masefield Books, 1993). Technology (which Richta defines as "a material entity created by the application of mental and physical effort to nature in order to achieve some value") evolves in three stages: tools, machine, automation. This evolution, he says, follows two trends.

The pre-technological period, in which other animal species remain today (aside from some avian and primate species), was a non-rational period of early prehistoric man. The emergence of technology, made possible by the development of the rational faculty, paved the way for the first stage: the tool. A tool provides a mechanical advantage in accomplishing a physical task; an arrow, plow, or hammer augments physical labor to achieve the objective more efficiently. Later, animal-powered tools such as the plow and the horse increased the productivity of food production about tenfold over the technology of the hunter-gatherers. Tools allow one to do things impossible to accomplish with one's body alone, such as seeing minute visual detail with a microscope, manipulating heavy objects with a pulley and cart, or carrying volumes of water in a bucket.

The second technological stage was the creation of the machine. A machine (a powered machine, to be more precise) is a tool that substitutes part of or all of the element of human physical effort, requiring only the control of its functions. Machines became widespread with the industrial revolution, though windmills, a type of machine, are much older. Examples of this include cars, trains, computers, and lights. Machines allow humans to tremendously exceed the limitations of their bodies. Putting a machine on the farm, a tractor, increased food productivity at least tenfold over the technology of the plow and the horse.

The third and final stage of technological evolution is automation. Automation is a machine that removes the element of human control with an automatic algorithm. Examples of machines that exhibit this characteristic are digital watches, automatic telephone switches, pacemakers, and computer programs.

Each of these three stages outlines the introduction and development of a fundamental type of technology, and all three continue to be widely used today. A spear, a plow, a pen, a knife, a glove, and an optical microscope are all examples of tools.

See also
Self-replicating machines in fiction
Sociocultural evolution

References

External links
The Evolution of Technology, George Basalla, University of Delaware

Technology in society Evolution Technological change
Clash of Civilizations
The "Clash of Civilizations" is a thesis that people's cultural and religious identities will be the primary source of conflict in the post–Cold War world. The American political scientist Samuel P. Huntington argued that future wars would be fought not between countries, but between cultures. The thesis was proposed in a 1992 lecture at the American Enterprise Institute and was then developed in a 1993 Foreign Affairs article titled "The Clash of Civilizations?", in response to his former student Francis Fukuyama's 1992 book The End of History and the Last Man. Huntington later expanded his thesis in a 1996 book, The Clash of Civilizations and the Remaking of World Order.

The phrase itself was earlier used by Albert Camus in 1946, by Girilal Jain in his analysis of the Ayodhya dispute in 1988, by Bernard Lewis in an article in the September 1990 issue of The Atlantic Monthly titled "The Roots of Muslim Rage", and by Mahdi El Mandjra in his book "La première guerre civilisationnelle", published in 1992. Even earlier, the phrase appears in a 1926 book regarding the Middle East by Basil Mathews: Young Islam on Trek: A Study in the Clash of Civilizations (p. 196). This expression derives from "clash of cultures", already used during the colonial period and the Belle Époque.

Huntington began his thinking by surveying the diverse theories about the nature of global politics in the post–Cold War period. Some theorists and writers argued that human rights, liberal democracy, and the capitalist free market economy had become the only remaining ideological alternative for nations in the post–Cold War world. Specifically, Francis Fukuyama argued that the world had reached the 'end of history' in a Hegelian sense. Huntington believed that while the age of ideology had ended, the world had only reverted to a normal state of affairs characterized by cultural conflict. In his thesis, he argued that the primary axis of conflict in the future will be along cultural lines. As an extension, he posits that the concept of different civilizations, as the highest category of cultural identity, will become increasingly useful in analyzing the potential for conflict. At the end of his 1993 Foreign Affairs article, "The Clash of Civilizations?", Huntington writes, "This is not to advocate the desirability of conflicts between civilizations. It is to set forth descriptive hypotheses as to what the future may be like."

In addition, the clash of civilizations, for Huntington, represents a development of history. In the past, world history was mainly about the struggles between monarchs, nations and ideologies, such as those seen within Western civilization. However, after the end of the Cold War, world politics moved into a new phase, in which non-Western civilizations are no longer the exploited recipients of Western civilization but have become additional important actors joining the West to shape and move world history.

Major civilizations according to Huntington
Huntington divided the world into the following "major civilizations" in his thesis:
Western civilization, comprising the United States and Canada, Western and Central Europe, most of the Philippines, Australia, and Oceania. Whether Latin America and the former member states of the Soviet Union are included, or are instead their own separate civilizations, will be an important future consideration for those regions, according to Huntington.
The traditional Western viewpoint identified Western civilization with the Western Christian (Catholic-Protestant) countries and culture.
Latin American civilization, including South America (excluding Guyana, Suriname and French Guiana), Central America, Mexico, Cuba, and the Dominican Republic, may be considered a part of Western civilization. Many people in South America, Central America and Mexico regard themselves as full members of Western civilization.
Orthodox civilization, comprising Bulgaria, Cyprus, Georgia, Greece, Romania, and great parts of the former Soviet Union and Yugoslavia. Countries with a non-Orthodox majority are usually excluded, e.g. Muslim Azerbaijan and Muslim Albania and most of Central Asia, as well as majority-Muslim regions in the Balkans, the Caucasus and central Russian regions such as Tatarstan and Bashkortostan, Roman Catholic Slovenia and Croatia, and the Protestant and Catholic Baltic states. However, Armenia is included, despite its dominant faith, the Armenian Apostolic Church, being a part of Oriental Orthodoxy rather than the Eastern Orthodox Church, and Kazakhstan is also included, despite its dominant faith being Sunni Islam.
The Eastern world is the mix of the Buddhist, Islamic, Chinese, Hindu, and Japonic civilizations.
The Sinic civilization of China, the Koreas, Singapore, Taiwan, and Vietnam. This group also includes the Chinese diaspora, especially in relation to Southeast Asia.
Japan, considered a hybrid of Chinese civilization and older Altaic patterns.
The Buddhist areas of Bhutan, Cambodia, Laos, Mongolia, Myanmar, Sri Lanka and Thailand are identified as separate from other civilizations, but Huntington believes that they do not constitute a major civilization in the sense of international affairs.
Hindu civilization, located chiefly in India and Nepal, and culturally adhered to by the global Indian diaspora.
Islamic civilization, located in the Muslim-majority countries of the Greater Middle East (excluding Armenia, Cyprus, Ethiopia, Georgia, Israel, Malta and South Sudan) and northern West Africa, as well as Albania, the South Asian countries of Pakistan, Afghanistan and Bangladesh, parts of Bosnia and Herzegovina, Brunei, the Comoros, the South East Asian countries of Indonesia and Malaysia, the Maldives and parts of the south-western Philippines.
The civilization of Sub-Saharan Africa, located in southern Africa, Middle Africa (excluding Chad), East Africa (excluding Ethiopia, the Comoros, Mauritius, and the Swahili coast of Kenya and Tanzania), Cape Verde, Ghana, the Ivory Coast, Liberia, and Sierra Leone. It is considered a possible eighth civilization by Huntington.
Instead of belonging to one of the "major" civilizations, Ethiopia and Haiti are labeled as "lone" countries. Israel could be considered a unique state with its own civilization, Huntington writes, but one which is extremely similar to the West. Huntington also believes that the Anglophone Caribbean, the former British colonies in the Caribbean, constitutes a distinct entity. There are also others which are considered "cleft countries" because they contain very large groups of people identifying with separate civilizations.
Examples include Ukraine ("cleft" between its Eastern Rite Catholic-dominated western section and its Orthodox-dominated east), French Guiana (cleft between Latin America and the West), Benin, Chad, Kenya, Nigeria, Tanzania, and Togo (all cleft between Islam and Sub-Saharan Africa), Guyana and Suriname (cleft between Hindu and Sub-Saharan African), Sri Lanka (cleft between Hindu and Buddhist), and the Philippines (cleft between Islam, in the case of south-western Mindanao; Sinic, in the case of Cordillera; and the Westernized Christian majority). Sudan was also included as "cleft" between Islam and Sub-Saharan Africa; this division became a formal split in July 2011 following an overwhelming vote for independence by South Sudan in a January 2011 referendum.

Huntington's thesis of civilizational clash
Huntington argues that the trends of global conflict after the end of the Cold War are increasingly appearing at these civilizational divisions. Wars such as those following the breakup of Yugoslavia, in Chechnya, and between India and Pakistan were cited as evidence of inter-civilizational conflict. He also argues that the widespread Western belief in the universality of the West's values and political systems is naïve and that continued insistence on democratization and such "universal" norms will only further antagonize other civilizations. Huntington sees the West as reluctant to accept this because it built the international system, wrote its laws, and gave it substance in the form of the United Nations.

Huntington identifies a major shift of economic, military, and political power from the West to the other civilizations of the world, most significantly to what he identifies as the two "challenger civilizations", Sinic and Islam. In Huntington's view, East Asian Sinic civilization is culturally asserting itself and its values relative to the West due to its rapid economic growth. Specifically, he believes that China's goals are to reassert itself as the regional hegemon, and that other countries in the region will 'bandwagon' with China due to the history of hierarchical command structures implicit in the Confucian Sinic civilization, as opposed to the individualism and pluralism valued in the West. Regional powers such as the two Koreas and Vietnam will acquiesce to Chinese demands and become more supportive of China rather than attempting to oppose it. Huntington therefore believes that the rise of China poses one of the most significant problems and the most powerful long-term threat to the West, as Chinese cultural assertion clashes with the American desire for the absence of a regional hegemon in East Asia.

Huntington argues that the Islamic civilization has experienced a massive population explosion which is fueling instability both on the borders of Islam and in its interior, where fundamentalist movements are becoming increasingly popular. Manifestations of what he terms the "Islamic Resurgence" include the 1979 Iranian revolution and the first Gulf War. Perhaps the most controversial statement Huntington made in the Foreign Affairs article was that "Islam has bloody borders". Huntington believes this to be a real consequence of several factors, including the previously mentioned Muslim youth bulge and population growth, and Islamic proximity to many civilizations including Sinic, Orthodox, Western, and African.
Huntington sees Islamic civilization as a potential ally to China, both having more revisionist goals and sharing common conflicts with other civilizations, especially the West. Specifically, he identifies common Chinese and Islamic interests in the areas of weapons proliferation, human rights, and democracy that conflict with those of the West, and feels that these are areas in which the two civilizations will cooperate. Russia, Japan, and India are what Huntington terms 'swing civilizations' and may favor either side. Russia, for example, clashes with the many Muslim ethnic groups on its southern border (such as Chechnya) but—according to Huntington—cooperates with Iran to avoid further Muslim-Orthodox violence in Southern Russia, and to help continue the flow of oil. Huntington argues that a "Sino-Islamic connection" is emerging in which China will cooperate more closely with Iran, Pakistan, and other states to augment its international position. Huntington also argues that civilizational conflicts are "particularly prevalent between Muslims and non-Muslims", identifying the "bloody borders" between Islamic and non-Islamic civilizations. This conflict dates back as far as the initial thrust of Islam into Europe, its eventual expulsion in the Iberian reconquest, the attacks of the Ottoman Turks on Eastern Europe and Vienna, and the European imperial division of the Islamic nations in the 1800s and 1900s. Huntington also believes that some of the factors contributing to this conflict are that both Christianity (upon which Western civilization is based) and Islam are missionary religions, seeking the conversion of others; universal, "all-or-nothing" religions, in the sense that each side believes that only its faith is the correct one; and teleological religions, holding that their values and beliefs represent the goals of existence and purpose in human existence. More recent factors contributing to a Western–Islamic clash, Huntington wrote, are the Islamic Resurgence and demographic explosion in Islam, coupled with the values of Western universalism—that is, the view that all civilizations should adopt Western values—which infuriate Islamic fundamentalists. All these historical and modern factors combined, Huntington wrote briefly in his Foreign Affairs article and in much more detail in his 1996 book, would lead to a bloody clash between the Islamic and Western civilizations.

Why civilizations will clash
Huntington offers six explanations for why civilizations will clash: Differences among civilizations are too basic, in that civilizations are differentiated from each other by history, language, culture, tradition, and, most importantly, religion. These fundamental differences are the product of centuries and the foundations of different civilizations, meaning they will not be gone soon. The world is becoming a smaller place. As a result, interactions across the world are increasing, which intensifies "civilization consciousness" and the awareness of differences between civilizations and commonalities within civilizations. Due to economic modernization and social change, people are separated from longstanding local identities. Religion has filled this gap, providing a basis for identity and commitment that transcends national boundaries and unites civilizations. The growth of civilization-consciousness is enhanced by the dual role of the West. On the one hand, the West is at a peak of power. 
At the same time, a return-to-the-roots phenomenon is occurring among non-Western civilizations. A West at the peak of its power confronts non-Western countries that increasingly have the desire, the will and the resources to shape the world in non-Western ways. Cultural characteristics and differences are less mutable and hence less easily compromised and resolved than political and economic ones. Economic regionalism is increasing. Successful economic regionalism will reinforce civilization-consciousness. Economic regionalism may succeed only when it is rooted in a common civilization. The West versus the Rest Huntington suggests that in the future the central axis of world politics tends to be the conflict between Western and non-Western civilizations, in Stuart Hall's phrase, the conflict between "the West and the Rest". He offers three forms of general and fundamental actions that non-Western civilization can take in response to Western countries. Non-Western countries can attempt to achieve isolation in order to preserve their own values and protect themselves from Western invasion. However, Huntington argues that the costs of this action are high and only a few states can pursue it. According to the theory of "band-wagoning", non-Western countries can join and accept Western values. Non-Western countries can make an effort to balance Western power through modernization. They can develop economic/military power and cooperate with other non-Western countries against the West while still preserving their own values and institutions. Huntington believes that the increasing power of non-Western civilizations in international society will make the West begin to develop a better understanding of the cultural fundamentals underlying other civilizations. Therefore, Western civilization will cease to be regarded as "universal" but different civilizations will learn to coexist and join to shape the future world. Core state and fault line conflicts In Huntington's view, intercivilizational conflict manifests itself in two forms: fault line conflicts and core state conflicts. Fault line conflicts are on a local level and occur between adjacent states belonging to different civilizations or within states that are home to populations from different civilizations. Core state conflicts are on a global level between the major states of different civilizations. Core state conflicts can arise out of fault line conflicts when core states become involved. These conflicts may result from a number of causes, such as: relative influence or power (military or economic), discrimination against people from a different civilization, intervention to protect kinsmen in a different civilization, or different values and culture, particularly when one civilization attempts to impose its values on people of a different civilization. Modernization, Westernization, and "torn countries" Japan, China and the Four Asian Tigers have modernized in many respects while maintaining traditional or authoritarian societies which distinguish them from the West. Some of these countries have clashed with the West and some have not. Perhaps the ultimate example of non-Western modernization is Russia, the core state of the Orthodox civilization. Huntington argues that Russia is primarily a non-Western state although he seems to agree that it shares a considerable amount of cultural ancestry with the modern West. 
According to Huntington, the West is distinguished from Orthodox Christian countries by its experience of the Renaissance, Reformation, and the Enlightenment; by overseas colonialism rather than contiguous expansion and colonialism; and by the infusion of Classical culture through ancient Greece rather than through the continuous trajectory of the Byzantine Empire. Huntington refers to countries that are seeking to affiliate with another civilization as "torn countries". Turkey, whose political leadership has systematically tried to Westernize the country since the 1920s, is his chief example. Turkey's history, culture, and traditions are derived from Islamic civilization, but Turkey's elite, beginning with Mustafa Kemal Atatürk, who took power as first President in 1923, imposed Western institutions and dress, embraced the Latin alphabet, joined NATO, and has sought to join the European Union. Mexico and Russia are also considered to be torn by Huntington. He also gives the example of Australia as a country torn between its Western civilizational heritage and its growing economic engagement with Asia. According to Huntington, a torn country must meet three requirements to redefine its civilizational identity. First, its political and economic elite must support the move. Second, the public must be willing to accept the redefinition. Third, the elites of the civilization that the torn country is trying to join must accept the country. The book claims that to date no torn country has successfully redefined its civilizational identity, mostly because the elites of the 'host' civilization have refused to accept the torn country. However, it has been noted that if Turkey gained membership in the European Union, many of its people would support Westernization, as in the following quote by EU Minister Egemen Bağış: "This is what Europe needs to do: they need to say that when Turkey fulfills all requirements, Turkey will become a member of the EU on date X. Then, we will regain the Turkish public opinion support in one day." If this were to happen, Turkey would, according to Huntington, be the first country to redefine its civilizational identity.

Criticism
The book has been criticized by various academic writers, who have empirically, historically, logically, or ideologically challenged its claims. Political scientist Paul Musgrave writes that Clash of Civilizations "enjoys great cachet among the sort of policymaker who enjoys name-dropping Sun Tzu, but few specialists in international relations rely on it or even cite it approvingly. Bluntly, Clash has not proven to be a useful or accurate guide to understanding the world." In an article explicitly referring to Huntington, scholar Amartya Sen (1999) argues that "diversity is a feature of most cultures in the world. Western civilization is no exception. The practice of democracy that has won out in the modern West is largely a result of a consensus that has emerged since the Enlightenment and the Industrial Revolution, and particularly in the last century or so. To read in this a historical commitment of the West—over the millennia—to democracy, and then to contrast it with non-Western traditions (treating each as monolithic) would be a great mistake." In his 2003 book Terror and Liberalism, Paul Berman argues that distinct cultural boundaries do not exist in the present day. 
He argues there is no "Islamic civilization" nor a "Western civilization", and that the evidence for a civilization clash is not convincing, especially when considering relationships such as that between the United States and Saudi Arabia. In addition, he cites the fact that many Islamic extremists spent a significant amount of time living or studying in the Western world. According to Berman, conflict arises because of philosophical beliefs various groups share (or do not share), regardless of cultural or religious identity. Timothy Garton Ash objects to the 'extreme cultural determinism... crude to the point of parody' of Huntington's idea that Catholic and Protestant Europe is headed for democracy, but that Orthodox Christian and Islamic Europe must accept dictatorship. Edward Said issued a response to Huntington's thesis in his 2001 article, "The Clash of Ignorance". Said argues that Huntington's categorization of the world's fixed "civilizations" omits the dynamic interdependency and interaction of culture. A longtime critic of the Huntingtonian paradigm, and an outspoken proponent of Arab issues, Said (2004) also argues that the clash of civilizations thesis is an example of "the purest invidious racism, a sort of parody of Hitlerian science directed today against Arabs and Muslims" (p. 293). Noam Chomsky has criticized the concept of the clash of civilizations as just being a new justification for the United States "for any atrocities that they wanted to carry out", which was required after the Cold War as the Soviet Union was no longer a viable threat. In 21 Lessons for the 21st Century, Yuval Noah Harari called the clash of civilizations a misleading thesis. He wrote that Islamic fundamentalism is more of a threat to a global civilization, rather than a confrontation with the West. He also argued that talking about civilizations using analogies from evolutionary biology is wrong. Nathan J. Robinson criticizes Huntington's thesis as inconsistent. He notes that according to Huntington, "Spanish-speaking Catholic-majority Spain is West, while Spanish-speaking Catholic-majority Mexico is not part of Western civilization, and instead belongs with Brazil as part of Latin American civilization." Robinson concludes, "If you look at the map and think these divisions make sense, which you might, it is because what you are mostly seeing here is a map of prejudices. [Huntington] indeed shows how a lot of people think of the world, especially in America." Intermediate Region Huntington's geopolitical model, especially the structures for North Africa and Eurasia, is largely derived from the "Intermediate Region" geopolitical model first formulated by Dimitri Kitsikis and published in 1978. The Intermediate Region, which spans the Adriatic Sea and the Indus River, is neither Western nor Eastern (at least, with respect to the Far East) but is considered distinct. Concerning this region, Huntington departs from Kitsikis contending that a civilizational fault line exists between the two dominant yet differing religions (Eastern Orthodoxy and Sunni Islam), hence a dynamic of external conflict. However, Kitsikis establishes an integrated civilization comprising these two peoples along with those belonging to the less dominant religions of Shia Islam, Alevism, and Judaism. They have a set of mutual cultural, social, economic and political views and norms which radically differ from those in the West and the Far East. 
In the Intermediate Region, therefore, one cannot speak of a civilizational clash or external conflict, but rather of an internal conflict, not for cultural domination, but for political succession. This has been successfully demonstrated by documenting the rise of Christianity from the Hellenized Roman Empire, the rise of the Islamic caliphates from the Christianized Roman Empire and the rise of Ottoman rule from the Islamic caliphates and the Christianized Roman Empire.

Opposing concepts
In recent years, the theory of Dialogue Among Civilizations, a response to Huntington's Clash of Civilizations, has become the center of some international attention. The concept was originally coined by Austrian philosopher Hans Köchler in an essay on cultural identity (1972). In a letter to UNESCO, Köchler had earlier proposed that the cultural organization of the United Nations should take up the issue of a "dialogue between different civilizations" (dialogue entre les différentes civilisations). In 2001, Iranian president Mohammad Khatami introduced the concept at the global level. At his initiative, the United Nations proclaimed the year 2001 as the "United Nations Year of Dialogue among Civilizations". The Alliance of Civilizations (AOC) initiative was proposed at the 59th General Assembly of the United Nations in 2005 by the Spanish Prime Minister, José Luis Rodríguez Zapatero, and co-sponsored by the Turkish Prime Minister, Recep Tayyip Erdoğan. The initiative is intended to galvanize collective action across diverse societies to combat extremism, to overcome cultural and social barriers between mainly the Western and predominantly Muslim worlds, and to reduce the tensions and polarization between societies which differ in religious and cultural values.

Other civilizational models
Eurasianism, a Russian geopolitical concept based on the civilization of Eurasia
Intermediate Region
Islamo-Christian Civilization
Pan-Turkism

Individuals
Richard Bulliet
Jacob Burckhardt
Niall Ferguson
Dimitri Kitsikis
Feliks Koneczny
Carroll Quigley
Oswald Spengler

See also
Balkanization
Civilizing mission
Cold War II
Criticism of multiculturalism
Cultural relativism
Eastern Party
Fault line war
Global policeman
Inglehart–Welzel cultural map of the world
Opposition to immigration
Occidentalism
Orientalism
Oriental Despotism
Potential superpowers
Protracted social conflict
Religious pluralism
East-West Cultural Debate

References

Bibliography
Barbé, Philippe, L'Anti-Choc des Civilisations: Méditations Méditerranéennes, Editions de l'Aube, 2006
Barber, Benjamin R., Jihad vs. McWorld, Hardcover: Crown, 1995; Paperback: Ballantine Books, 1996
Blankley, Tony, The West's Last Chance: Will We Win the Clash of Civilizations?, Washington, D.C., Regnery Publishing, Inc., 2005
Harris, Lee, Civilization and Its Enemies: The Next Stage of History, New York, The Free Press, 2004
Harrison, Lawrence E. and Samuel P. Huntington (eds.), Culture Matters: How Values Shape Human Progress, New York, Basic Books, 2001
Huntington, Samuel P., "The Clash of Civilizations?", in Foreign Affairs, vol. 72, no. 3, Summer 1993, pp. 22–49
Huntington, Samuel P., The Clash of Civilizations and the Remaking of World Order, New York, Simon & Schuster, 1996
Huntington, Samuel P. (ed.), The Clash of Civilizations?: The Debate, New York, Foreign Affairs, 1996
Kepel, Gilles, Bad Moon Rising: a chronicle of the Middle East today, London, Saqi Books, 2003
Köchler, Hans (ed.), Civilizations: Conflict or Dialogue?, Vienna, International Progress Organization, 1999
Köchler, Hans, After September 11, 2001: Clash of Civilizations or Dialogue?, University of the Philippines, Manila, 2002
Köchler, Hans, The "Clash of Civilizations": Perception and Reality in the Context of Globalization and International Power Politics, Tbilisi (Georgia), 2004
Pera, Marcello and Joseph Ratzinger (Pope Benedict XVI), Senza radici: Europa, Relativismo, Cristianesimo, Islam, Milano, Mondadori, 2004 [transl.: Without Roots: The West, Relativism, Christianity, Islam, Philadelphia, Pennsylvania, Perseus Books Group, 2006]
Peters, Ralph, Fighting for the Future: Will America Triumph?, Mechanicsburg, Pennsylvania, Stackpole Books, 1999
Potter, Robert, "Recalcitrant Interdependence", Thesis, Flinders University, 2011
Sacks, Jonathan, The Dignity of Difference: How to Avoid the Clash of Civilizations, London, Continuum, 2002
Toft, Monica Duffy, The Geography of Ethnic Violence: Identity, Interests, and the Indivisibility of Territory, Princeton, New Jersey, Princeton University Press, 2003
Tusicisny, Andrej, "Civilizational Conflicts: More Frequent, Longer, and Bloodier?", in Journal of Peace Research, vol. 41, no. 4, 2004, pp. 485–498 (available online)
Van Creveld, Martin, The Transformation of War, New York & London, The Free Press, 1991
Venn, Couze, "Clash of Civilisations", in Prem Poddar et al., Historical Companion to Postcolonial Literatures—Continental Europe and its Empires, Edinburgh University Press, 2008

Further reading
Tony Blankley, The West's Last Chance: Will We Win the Clash of Civilizations?
J. Paul Barker, ed., Huntington's Clash of Civilization: Twenty Years On, E-International Relations, Bristol, 2013
Nikolaos A. Denaxas, The clash of civilizations according to Samuel Huntington – Orthodox criticism, 2008 (postgraduate thesis in Greek)
Hale, H., & Laruelle, M. (2020), "Rethinking Civilizational Identity from the Bottom Up: A Case Study of Russia and a Research Agenda", Nationalities Papers
James Kurth, "The Real Clash", The National Interest, 1994
Davide Orsi, ed., The 'Clash of Civilizations' 25 Years On: A Multidisciplinary Appraisal, E-International Relations, Bristol, 2018

External links
"The Clash of Civilizations?" – original essay from Foreign Affairs, 1993
"If Not Civilizations, What? Samuel Huntington Responds to His Critics", Foreign Affairs, 1993
Geology
Geology is a branch of natural science concerned with the Earth and other astronomical objects, the rocks of which they are composed, and the processes by which they change over time. Modern geology significantly overlaps all other Earth sciences, including hydrology. It is integrated with Earth system science and planetary science. Geology describes the structure of the Earth on and beneath its surface and the processes that have shaped that structure. Geologists study the mineralogical composition of rocks in order to get insight into their history of formation. Geology determines the relative ages of rocks found at a given location; geochemistry (a branch of geology) determines their absolute ages. By combining various petrological, crystallographic, and paleontological tools, geologists are able to chronicle the geological history of the Earth as a whole. One aspect is to demonstrate the age of the Earth. Geology provides evidence for plate tectonics, the evolutionary history of life, and the Earth's past climates. Geologists broadly study the properties and processes of Earth and other terrestrial planets. Geologists use a wide variety of methods to understand the Earth's structure and evolution, including fieldwork, rock description, geophysical techniques, chemical analysis, physical experiments, and numerical modelling. In practical terms, geology is important for mineral and hydrocarbon exploration and exploitation, evaluating water resources, understanding natural hazards, remediating environmental problems, and providing insights into past climate change. Geology is a major academic discipline, and it is central to geological engineering and plays an important role in geotechnical engineering.

Geological material
The majority of geological data comes from research on solid Earth materials. Meteorites and other extraterrestrial natural materials are also studied by geological methods.

Minerals
Minerals are naturally occurring elements and compounds with a definite homogeneous chemical composition and an ordered atomic arrangement. Each mineral has distinct physical properties, and there are many tests to determine each of them. Minerals are often identified through these tests. Specimens can be tested for:
Color: minerals are grouped by their color. Color is often diagnostic, but impurities can change a mineral's color.
Streak: performed by scratching the sample on a porcelain plate; the color of the streak can help identify the mineral.
Hardness: the resistance of a mineral to scratching or indentation.
Breakage pattern: a mineral can show either fracture or cleavage, the former being breakage of uneven surfaces and the latter breakage along closely spaced parallel planes.
Luster: the quality of light reflected from the surface of a mineral; examples are metallic, pearly, waxy, and dull.
Specific gravity: the weight of a specific volume of a mineral, i.e. its density relative to water.
Effervescence: dripping hydrochloric acid on the mineral to test for fizzing.
Magnetism: using a magnet to test for magnetism.
Taste: some minerals have a distinctive taste, such as halite (which tastes like table salt).

Rock
A rock is any naturally occurring solid mass or aggregate of minerals or mineraloids. Most research in geology is associated with the study of rocks, as they provide the primary record of the majority of the geological history of the Earth. There are three major types of rock: igneous, sedimentary, and metamorphic. The rock cycle illustrates the relationships among them, which are described in the following paragraph. 
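As a rough illustration of those rock-cycle relationships, the toy Python sketch below represents each transition as an entry in a small mapping. The names and structure are purely illustrative, not a standard geological data model or library.

```python
# Illustrative sketch of the rock cycle as a tiny directed graph.
# Keys are materials; values map a process to the product it yields.
# This is a toy representation for clarity, not a standard model.
ROCK_CYCLE = {
    "magma/lava":       {"solidification (crystallization)": "igneous rock"},
    "igneous rock":     {"weathering, erosion, deposition, lithification": "sedimentary rock",
                         "heat and pressure (metamorphism)": "metamorphic rock",
                         "melting": "magma/lava"},
    "sedimentary rock": {"heat and pressure (metamorphism)": "metamorphic rock",
                         "melting": "magma/lava"},
    "metamorphic rock": {"weathering, erosion, deposition, lithification": "sedimentary rock",
                         "melting": "magma/lava"},
}

def transitions_from(material: str) -> None:
    """Print the one-step transitions available from a given material."""
    for process, product in ROCK_CYCLE.get(material, {}).items():
        print(f"{material} --[{process}]--> {product}")

if __name__ == "__main__":
    for material in ROCK_CYCLE:
        transitions_from(material)
```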
When a rock solidifies or crystallizes from melt (magma or lava), it is an igneous rock. This rock can be weathered and eroded, then redeposited and lithified into a sedimentary rock. Sedimentary rocks are mainly divided into four categories: sandstone, shale, carbonate, and evaporite. This group of classifications focuses partly on the size of sedimentary particles (sandstone and shale), and partly on mineralogy and formation processes (carbonation and evaporation). Igneous and sedimentary rocks can then be turned into metamorphic rocks by heat and pressure that change its mineral content, resulting in a characteristic fabric. All three types may melt again, and when this happens, new magma is formed, from which an igneous rock may once again solidify. Organic matter, such as coal, bitumen, oil, and natural gas, is linked mainly to organic-rich sedimentary rocks. To study all three types of rock, geologists evaluate the minerals of which they are composed and their other physical properties, such as texture and fabric. Unlithified material Geologists also study unlithified materials (referred to as superficial deposits) that lie above the bedrock. This study is often known as Quaternary geology, after the Quaternary period of geologic history, which is the most recent period of geologic time. Magma Magma is the original unlithified source of all igneous rocks. The active flow of molten rock is closely studied in volcanology, and igneous petrology aims to determine the history of igneous rocks from their original molten source to their final crystallization. Whole-Earth structure Plate tectonics In the 1960s, it was discovered that the Earth's lithosphere, which includes the crust and rigid uppermost portion of the upper mantle, is separated into tectonic plates that move across the plastically deforming, solid, upper mantle, which is called the asthenosphere. This theory is supported by several types of observations, including seafloor spreading and the global distribution of mountain terrain and seismicity. There is an intimate coupling between the movement of the plates on the surface and the convection of the mantle (that is, the heat transfer caused by the slow movement of ductile mantle rock). Thus, oceanic parts of plates and the adjoining mantle convection currents always move in the same direction – because the oceanic lithosphere is actually the rigid upper thermal boundary layer of the convecting mantle. This coupling between rigid plates moving on the surface of the Earth and the convecting mantle is called plate tectonics. The development of plate tectonics has provided a physical basis for many observations of the solid Earth. Long linear regions of geological features are explained as plate boundaries: Mid-ocean ridges, high regions on the seafloor where hydrothermal vents and volcanoes exist, are seen as divergent boundaries, where two plates move apart. Arcs of volcanoes and earthquakes are theorized as convergent boundaries, where one plate subducts, or moves, under another. Transform boundaries, such as the San Andreas Fault system, are where plates slide horizontally past each other. Plate tectonics has provided a mechanism for Alfred Wegener's theory of continental drift, in which the continents move across the surface of the Earth over geological time. They also provided a driving force for crustal deformation, and a new setting for the observations of structural geology. 
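To get a feel for the scale of continental drift, a back-of-the-envelope calculation helps. The plate rate used below (about 5 cm per year) is a typical published value assumed here only for illustration, not a figure taken from this article.

```python
# Back-of-the-envelope check: how far does a plate move at a typical rate?
# The assumed rate (~5 cm/yr) is a representative textbook value.
rate_cm_per_year = 5.0
years = 100_000_000  # 100 million years

distance_km = rate_cm_per_year * years / 100 / 1000  # cm -> m -> km
print(f"At {rate_cm_per_year} cm/yr, a plate travels about "
      f"{distance_km:,.0f} km in {years:,} years")
# -> roughly 5,000 km, comparable to the width of an ocean basin.
```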
The power of the theory of plate tectonics lies in its ability to combine all of these observations into a single theory of how the lithosphere moves over the convecting mantle.

Earth structure
Advances in seismology, computer modeling, and mineralogy and crystallography at high temperatures and pressures give insights into the internal composition and structure of the Earth. Seismologists can use the arrival times of seismic waves to image the interior of the Earth. Early advances in this field showed the existence of a liquid outer core (where shear waves were not able to propagate) and a dense solid inner core. These advances led to the development of a layered model of the Earth, with a lithosphere (including crust) on top, the mantle below (separated within itself by seismic discontinuities at 410 and 660 kilometers), and the outer core and inner core below that. More recently, seismologists have been able to create detailed images of wave speeds inside the Earth in the same way a doctor images a body in a CT scan. These images have led to a much more detailed view of the interior of the Earth, and have replaced the simplified layered model with a much more dynamic model. Mineralogists have been able to use the pressure and temperature data from the seismic and modeling studies alongside knowledge of the elemental composition of the Earth to reproduce these conditions in experimental settings and measure changes within the crystal structure. These studies explain the chemical changes associated with the major seismic discontinuities in the mantle and show the crystallographic structures expected in the inner core of the Earth.

Geological time
The geological time scale encompasses the history of the Earth. It is bracketed at the earliest by the dates of the first Solar System material at 4.567 Ga (4.567 billion years ago) and the formation of the Earth at 4.54 Ga (4.54 billion years), which is the beginning of the Hadean eon, a division of geological time. At the later end of the scale, it is marked by the present day (in the Holocene epoch). Analogous timescales have been defined for the Moon and Mars.

Important milestones on Earth
4.567 Ga (gigaannum: billion years ago): Solar System formation
4.54 Ga: accretion, or formation, of Earth
c. 4 Ga: end of Late Heavy Bombardment; the first life
c. 3.5 Ga: start of photosynthesis
c. 2.3 Ga: oxygenated atmosphere; first snowball Earth
730–635 Ma (megaannum: million years ago): second snowball Earth
541 ± 0.3 Ma: Cambrian explosion – vast multiplication of hard-bodied life; first abundant fossils; start of the Paleozoic
c. 380 Ma: first vertebrate land animals
250 Ma: Permian–Triassic extinction – 90% of all land animals die; end of Paleozoic and beginning of Mesozoic
66 Ma: Cretaceous–Paleogene extinction – dinosaurs die out; end of Mesozoic and beginning of Cenozoic
c. 7 Ma: first hominins appear
3.9 Ma: first Australopithecus, direct ancestor to modern Homo sapiens, appear
200 ka (kiloannum: thousand years ago): first modern Homo sapiens appear in East Africa

Dating methods
Relative dating
Methods for relative dating were developed when geology first emerged as a natural science. Geologists still use the following principles today as a means to provide information about geological history and the timing of geological events. The principle of uniformitarianism states that the geological processes observed in operation that modify the Earth's crust at present have worked in much the same way over geological time. 
A fundamental principle of geology advanced by the 18th-century Scottish physician and geologist James Hutton is that "the present is the key to the past." In Hutton's words: "the past history of our globe must be explained by what can be seen to be happening now." The principle of intrusive relationships concerns crosscutting intrusions. In geology, when an igneous intrusion cuts across a formation of sedimentary rock, it can be determined that the igneous intrusion is younger than the sedimentary rock. Different types of intrusions include stocks, laccoliths, batholiths, sills and dikes. The principle of cross-cutting relationships pertains to the formation of faults and the age of the sequences through which they cut. Faults are younger than the rocks they cut; accordingly, if a fault is found that penetrates some formations but not those on top of it, then the formations that were cut are older than the fault, and the ones that are not cut must be younger than the fault. Finding the key bed in these situations may help determine whether the fault is a normal fault or a thrust fault. The principle of inclusions and components states that, with sedimentary rocks, if inclusions (or clasts) are found in a formation, then the inclusions must be older than the formation that contains them. For example, in sedimentary rocks, it is common for gravel from an older formation to be ripped up and included in a newer layer. A similar situation with igneous rocks occurs when xenoliths are found. These foreign bodies are picked up as magma or lava flows, and are incorporated, later to cool in the matrix. As a result, xenoliths are older than the rock that contains them. The principle of original horizontality states that the deposition of sediments occurs as essentially horizontal beds. Observation of modern marine and non-marine sediments in a wide variety of environments supports this generalization (although cross-bedding is inclined, the overall orientation of cross-bedded units is horizontal). The principle of superposition states that a sedimentary rock layer in a tectonically undisturbed sequence is younger than the one beneath it and older than the one above it. Logically a younger layer cannot slip beneath a layer previously deposited. This principle allows sedimentary layers to be viewed as a form of the vertical timeline, a partial or complete record of the time elapsed from deposition of the lowest layer to deposition of the highest bed. The principle of faunal succession is based on the appearance of fossils in sedimentary rocks. As organisms exist during the same period throughout the world, their presence or (sometimes) absence provides a relative age of the formations where they appear. Based on principles that William Smith laid out almost a hundred years before the publication of Charles Darwin's theory of evolution, the principles of succession developed independently of evolutionary thought. The principle becomes quite complex, however, given the uncertainties of fossilization, localization of fossil types due to lateral changes in habitat (facies change in sedimentary strata), and that not all fossils formed globally at the same time. Absolute dating Geologists also use methods to determine the absolute age of rock samples and geological events. These dates are useful on their own and may also be used in conjunction with relative dating methods or to calibrate relative methods. 
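The core arithmetic behind radiometric dating, developed further in the next paragraph, is the exponential-decay relation: if a mineral has retained all of the daughter isotope produced since it closed, the elapsed time is t = ln(1 + D/P)/λ, where D/P is the measured daughter-to-parent ratio and λ = ln 2 divided by the half-life. A minimal Python sketch follows, using an approximate uranium-238 half-life and an invented measured ratio purely for illustration.

```python
import math

def decay_constant(half_life_years: float) -> float:
    """lambda = ln(2) / t_half"""
    return math.log(2) / half_life_years

def radiometric_age(daughter_parent_ratio: float, half_life_years: float) -> float:
    """Age from the standard decay equation t = ln(1 + D/P) / lambda.

    Assumes a closed system with no initial daughter isotope; real
    geochronology corrects for both (for example with isochron methods).
    """
    lam = decay_constant(half_life_years)
    return math.log(1.0 + daughter_parent_ratio) / lam

# Approximate half-life of 238U decaying to 206Pb; the measured ratio below is
# invented purely to illustrate the calculation, not a real analysis.
U238_HALF_LIFE = 4.468e9   # years (approximate)
measured_ratio = 0.5       # hypothetical 206Pb/238U atomic ratio

age = radiometric_age(measured_ratio, U238_HALF_LIFE)
print(f"Apparent age: {age / 1e9:.2f} billion years")  # ~2.6 billion years for this ratio
```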
At the beginning of the 20th century, advancement in geological science was facilitated by the ability to obtain accurate absolute dates to geological events using radioactive isotopes and other methods. This changed the understanding of geological time. Previously, geologists could only use fossils and stratigraphic correlation to date sections of rock relative to one another. With isotopic dates, it became possible to assign absolute ages to rock units, and these absolute dates could be applied to fossil sequences in which there was datable material, converting the old relative ages into new absolute ages. For many geological applications, isotope ratios of radioactive elements are measured in minerals that give the amount of time that has passed since a rock passed through its particular closure temperature, the point at which different radiometric isotopes stop diffusing into and out of the crystal lattice. These are used in geochronologic and thermochronologic studies. Common methods include uranium–lead dating, potassium–argon dating, argon–argon dating and uranium–thorium dating. These methods are used for a variety of applications. Dating of lava and volcanic ash layers found within a stratigraphic sequence can provide absolute age data for sedimentary rock units that do not contain radioactive isotopes and calibrate relative dating techniques. These methods can also be used to determine ages of pluton emplacement. Thermochemical techniques can be used to determine temperature profiles within the crust, the uplift of mountain ranges, and paleo-topography. Fractionation of the lanthanide series elements is used to compute ages since rocks were removed from the mantle. Other methods are used for more recent events. Optically stimulated luminescence and cosmogenic radionuclide dating are used to date surfaces and/or erosion rates. Dendrochronology can also be used for the dating of landscapes. Radiocarbon dating is used for geologically young materials containing organic carbon. Geological development of an area The geology of an area changes through time as rock units are deposited and inserted, and deformational processes alter their shapes and locations. Rock units are first emplaced either by deposition onto the surface or intrusion into the overlying rock. Deposition can occur when sediments settle onto the surface of the Earth and later lithify into sedimentary rock, or when as volcanic material such as volcanic ash or lava flows blanket the surface. Igneous intrusions such as batholiths, laccoliths, dikes, and sills, push upwards into the overlying rock, and crystallize as they intrude. After the initial sequence of rocks has been deposited, the rock units can be deformed and/or metamorphosed. Deformation typically occurs as a result of horizontal shortening, horizontal extension, or side-to-side (strike-slip) motion. These structural regimes broadly relate to convergent boundaries, divergent boundaries, and transform boundaries, respectively, between tectonic plates. When rock units are placed under horizontal compression, they shorten and become thicker. Because rock units, other than muds, do not significantly change in volume, this is accomplished in two primary ways: through faulting and folding. In the shallow crust, where brittle deformation can occur, thrust faults form, which causes the deeper rock to move on top of the shallower rock. 
Because deeper rock is often older, as noted by the principle of superposition, this can result in older rocks moving on top of younger ones. Movement along faults can result in folding, either because the faults are not planar or because rock layers are dragged along, forming drag folds as slip occurs along the fault. Deeper in the Earth, rocks behave plastically and fold instead of faulting. These folds can either be those where the material in the center of the fold buckles upwards, creating "antiforms", or where it buckles downwards, creating "synforms". If the tops of the rock units within the folds remain pointing upwards, they are called anticlines and synclines, respectively. If some of the units in the fold are facing downward, the structure is called an overturned anticline or syncline, and if all of the rock units are overturned or the correct up-direction is unknown, they are simply called by the most general terms, antiforms, and synforms. Even higher pressures and temperatures during horizontal shortening can cause both folding and metamorphism of the rocks. This metamorphism causes changes in the mineral composition of the rocks; creates a foliation, or planar surface, that is related to mineral growth under stress. This can remove signs of the original textures of the rocks, such as bedding in sedimentary rocks, flow features of lavas, and crystal patterns in crystalline rocks. Extension causes the rock units as a whole to become longer and thinner. This is primarily accomplished through normal faulting and through the ductile stretching and thinning. Normal faults drop rock units that are higher below those that are lower. This typically results in younger units ending up below older units. Stretching of units can result in their thinning. In fact, at one location within the Maria Fold and Thrust Belt, the entire sedimentary sequence of the Grand Canyon appears over a length of less than a meter. Rocks at the depth to be ductilely stretched are often also metamorphosed. These stretched rocks can also pinch into lenses, known as boudins, after the French word for "sausage" because of their visual similarity. Where rock units slide past one another, strike-slip faults develop in shallow regions, and become shear zones at deeper depths where the rocks deform ductilely. The addition of new rock units, both depositionally and intrusively, often occurs during deformation. Faulting and other deformational processes result in the creation of topographic gradients, causing material on the rock unit that is increasing in elevation to be eroded by hillslopes and channels. These sediments are deposited on the rock unit that is going down. Continual motion along the fault maintains the topographic gradient in spite of the movement of sediment and continues to create accommodation space for the material to deposit. Deformational events are often also associated with volcanism and igneous activity. Volcanic ashes and lavas accumulate on the surface, and igneous intrusions enter from below. Dikes, long, planar igneous intrusions, enter along cracks, and therefore often form in large numbers in areas that are being actively deformed. This can result in the emplacement of dike swarms, such as those that are observable across the Canadian shield, or rings of dikes around the lava tube of a volcano. All of these processes do not necessarily occur in a single environment and do not necessarily occur in a single order. 
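Reconstructing that order from field relationships is essentially a constraint-ordering problem: each cross-cutting, superposition, or inclusion relationship says that one event is older than another, and the events can then be arranged so that every constraint is satisfied. The sketch below is only an illustration of that idea; the event names and relationships are hypothetical.

```python
from graphlib import TopologicalSorter

# Hypothetical "older-than" relationships, as might be read from an outcrop:
# each key must predate every event listed for it (e.g. a dike is younger than
# the beds it cuts, by the principle of cross-cutting relationships).
older_than = {
    "deposition of sandstone":        ["deposition of shale"],
    "deposition of shale":            ["folding of the sequence"],
    "folding of the sequence":        ["intrusion of basaltic dike"],
    "intrusion of basaltic dike":     ["erosion surface (unconformity)"],
    "erosion surface (unconformity)": [],
}

# TopologicalSorter expects each node mapped to its predecessors, so invert the mapping.
predecessors = {event: set() for event in older_than}
for earlier, laters in older_than.items():
    for later in laters:
        predecessors[later].add(earlier)

print("Inferred order, oldest first:")
for step, event in enumerate(TopologicalSorter(predecessors).static_order(), start=1):
    print(f"{step}. {event}")
```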
The Hawaiian Islands, for example, consist almost entirely of layered basaltic lava flows. The sedimentary sequences of the mid-continental United States and the Grand Canyon in the southwestern United States contain almost-undeformed stacks of sedimentary rocks that have remained in place since Cambrian time. Other areas are much more geologically complex. In the southwestern United States, sedimentary, volcanic, and intrusive rocks have been metamorphosed, faulted, foliated, and folded. Even older rocks, such as the Acasta gneiss of the Slave craton in northwestern Canada, the oldest known rock in the world, have been metamorphosed to the point where their origin is indiscernible without laboratory analysis. In addition, these processes can occur in stages. In many places, the Grand Canyon in the southwestern United States being a very visible example, the lower rock units were metamorphosed and deformed, and then deformation ended and the upper, undeformed units were deposited. Although any amount of rock emplacement and rock deformation can occur, and they can occur any number of times, these concepts provide a guide to understanding the geological history of an area.

Investigative methods
Geologists use a number of field, laboratory, and numerical modeling methods to decipher Earth history and to understand the processes that occur on and inside the Earth. In typical geological investigations, geologists use primary information related to petrology (the study of rocks), stratigraphy (the study of sedimentary layers), and structural geology (the study of positions of rock units and their deformation). In many cases, geologists also study modern soils, rivers, landscapes, and glaciers; investigate past and current life and biogeochemical pathways; and use geophysical methods to investigate the subsurface. Sub-specialities of geology may distinguish endogenous and exogenous geology.

Field methods
Geological field work varies depending on the task at hand. Typical fieldwork could consist of:
Geological mapping
Structural mapping: identifying the locations of major rock units and the faults and folds that led to their placement there
Stratigraphic mapping: pinpointing the locations of sedimentary facies (lithofacies and biofacies) or mapping isopachs of equal thickness of sedimentary rock
Surficial mapping: recording the locations of soils and surficial deposits
Surveying of topographic features: compilation of topographic maps
Work to understand change across landscapes, including patterns of erosion and deposition, river-channel change through migration and avulsion, and hillslope processes
Subsurface mapping through geophysical methods, including shallow seismic surveys, ground-penetrating radar, aeromagnetic surveys and electrical resistivity tomography; these aid in hydrocarbon exploration, finding groundwater and locating buried archaeological artifacts
High-resolution stratigraphy: measuring and describing stratigraphic sections on the surface, and well drilling and logging
Biogeochemistry and geomicrobiology: collecting samples to determine biochemical pathways, identify new species of organisms and identify new chemical compounds, and using these discoveries to understand early life on Earth and how it functioned and metabolized, and to find important compounds for use in pharmaceuticals
Paleontology: excavation of fossil material, for research into past life and evolution and for museums and education
Collection of samples for geochronology and thermochronology
Glaciology: measurement of the characteristics of glaciers and their motion

Petrology
In addition to identifying rocks in the field (lithology), petrologists identify rock samples in the laboratory. Two of the primary methods for identifying rocks in the laboratory are through optical microscopy and by using an electron microprobe. In an optical mineralogy analysis, petrologists analyze thin sections of rock samples using a petrographic microscope, where the minerals can be identified through their different properties in plane-polarized and cross-polarized light, including their birefringence, pleochroism, twinning, and interference properties with a conoscopic lens. In the electron microprobe, individual locations are analyzed for their exact chemical compositions and variation in composition within individual crystals. Stable and radioactive isotope studies provide insight into the geochemical evolution of rock units. Petrologists can also use fluid inclusion data and perform high temperature and pressure physical experiments to understand the temperatures and pressures at which different mineral phases appear, and how they change through igneous and metamorphic processes. This research can be extrapolated to the field to understand metamorphic processes and the conditions of crystallization of igneous rocks. This work can also help to explain processes that occur within the Earth, such as subduction and magma chamber evolution.

Structural geology
Structural geologists use microscopic analysis of oriented thin sections of geological samples to observe the fabric within the rocks, which gives information about strain within the crystalline structure of the rocks. They also plot and combine measurements of geological structures to better understand the orientations of faults and folds to reconstruct the history of rock deformation in the area. In addition, they perform analog and numerical experiments of rock deformation in large and small settings. The analysis of structures is often accomplished by plotting the orientations of various features onto stereonets. 
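A stereonet is described more fully in the next paragraph. As a concrete illustration, the snippet below computes the lower-hemisphere, equal-angle (Wulff) projection of a linear feature from its trend and plunge. The conventions assumed here (angles in degrees, north along +y, unit-radius net) are common choices of this sketch, not requirements stated in the article, and the example values are invented.

```python
import math

def wulff_project(trend_deg: float, plunge_deg: float, radius: float = 1.0):
    """Project a line (trend, plunge) onto a lower-hemisphere equal-angle stereonet.

    Returns (x, y) with north along +y and east along +x, using the standard
    Wulff-net relation r = R * tan(45 deg - plunge / 2).
    """
    r = radius * math.tan(math.radians(45.0 - plunge_deg / 2.0))
    t = math.radians(trend_deg)
    return r * math.sin(t), r * math.cos(t)

# Example: a lineation plunging 30 degrees toward azimuth 120 degrees (illustrative values).
x, y = wulff_project(trend_deg=120.0, plunge_deg=30.0)
print(f"Plots at x={x:.3f}, y={y:.3f} on a unit-radius net")

# Sanity checks: a vertical line (plunge 90) plots at the centre of the net,
# and a horizontal line (plunge 0) plots on the primitive (outer) circle.
print(wulff_project(0.0, 90.0))  # -> (0.0, 0.0)
print(wulff_project(0.0, 0.0))   # -> (0.0, 1.0)
```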
A stereonet is a stereographic projection of a sphere onto a plane, in which planes are projected as lines and lines are projected as points. These can be used to find the locations of fold axes, relationships between faults, and relationships between other geological structures. Among the most well-known experiments in structural geology are those involving orogenic wedges, which are zones in which mountains are built along convergent tectonic plate boundaries. In the analog versions of these experiments, horizontal layers of sand are pulled along a lower surface into a back stop, which results in realistic-looking patterns of faulting and the growth of a critically tapered (all angles remain the same) orogenic wedge. Numerical models work in the same way as these analog models, though they are often more sophisticated and can include patterns of erosion and uplift in the mountain belt. This helps to show the relationship between erosion and the shape of a mountain range. These studies can also give useful information about pathways for metamorphism through pressure, temperature, space, and time. Stratigraphy In the laboratory, stratigraphers analyze samples of stratigraphic sections that can be returned from the field, such as those from drill cores. Stratigraphers also analyze data from geophysical surveys that show the locations of stratigraphic units in the subsurface. Geophysical data and well logs can be combined to produce a better view of the subsurface, and stratigraphers often use computer programs to do this in three dimensions. Stratigraphers can then use these data to reconstruct ancient processes occurring on the surface of the Earth, interpret past environments, and locate areas for water, coal, and hydrocarbon extraction. In the laboratory, biostratigraphers analyze rock samples from outcrop and drill cores for the fossils found in them. These fossils help scientists to date the core and to understand the depositional environment in which the rock units formed. Geochronologists precisely date rocks within the stratigraphic section to provide better absolute bounds on the timing and rates of deposition. Magnetic stratigraphers look for signs of magnetic reversals in igneous rock units within the drill cores. Other scientists perform stable-isotope studies on the rocks to gain information about past climate. Planetary geology With the advent of space exploration in the twentieth century, geologists have begun to look at other planetary bodies in the same ways that have been developed to study the Earth. This new field of study is called planetary geology (sometimes known as astrogeology) and relies on known geological principles to study other bodies of the solar system. This is a major aspect of planetary science, and largely focuses on the terrestrial planets, icy moons, asteroids, comets, and meteorites. However, some planetary geophysicists study the giant planets and exoplanets. Although the Greek-language-origin prefix geo refers to Earth, "geology" is often used in conjunction with the names of other planetary bodies when describing their composition and internal processes: examples are "the geology of Mars" and "Lunar geology". Specialized terms such as selenology (studies of the Moon), areology (of Mars), etc., are also in use. Although planetary geologists are interested in studying all aspects of other planets, a significant focus is to search for evidence of past or present life on other worlds. 
This has led to many missions whose primary or ancillary purpose is to examine planetary bodies for evidence of life. One of these is the Phoenix lander, which analyzed Martian polar soil for water, chemical, and mineralogical constituents related to biological processes.

Applied geology
Economic geology
Economic geology is a branch of geology that deals with aspects of economic minerals that humankind uses to fulfill various needs. Economic minerals are those extracted profitably for various practical uses. Economic geologists help locate and manage the Earth's natural resources, such as petroleum and coal, as well as mineral resources, which include metals such as iron, copper, and uranium.

Mining geology
Mining geology concerns the extraction of mineral and ore resources from the Earth. Some resources of economic interest include gemstones, metals such as gold and copper, and many minerals such as asbestos, magnesite, perlite, mica, phosphates, zeolites, clay, pumice, quartz, and silica, as well as elements such as sulfur, chlorine, and helium.

Petroleum geology
Petroleum geologists study the locations of the subsurface of the Earth that can contain extractable hydrocarbons, especially petroleum and natural gas. Because many of these reservoirs are found in sedimentary basins, they study the formation of these basins, as well as their sedimentary and tectonic evolution and the present-day positions of the rock units.

Engineering geology
Engineering geology is the application of geological principles to engineering practice for the purpose of assuring that the geological factors affecting the location, design, construction, operation, and maintenance of engineering works are properly addressed. Engineering geology is distinct from geological engineering, particularly in North America. In the field of civil engineering, geological principles and analyses are used in order to ascertain the mechanical principles of the material on which structures are built. This allows tunnels to be built without collapsing, bridges and skyscrapers to be built with sturdy foundations, and buildings to be built that will not settle in clay and mud.

Hydrology
Geology and geological principles can be applied to various environmental problems such as stream restoration, the restoration of brownfields, and the understanding of the interaction between natural habitat and the geological environment. Groundwater hydrology, or hydrogeology, is used to locate groundwater, which can often provide a ready supply of uncontaminated water and is especially important in arid regions, and to monitor the spread of contaminants in groundwater wells.

Paleoclimatology
Geologists also obtain data through stratigraphy, boreholes, core samples, and ice cores. Ice cores and sediment cores are used for paleoclimate reconstructions, which tell geologists about past and present temperature, precipitation, and sea level across the globe. These datasets are our primary source of information on global climate change outside of instrumental data.

Natural hazards
Geologists and geophysicists study natural hazards in order to enact safe building codes and warning systems that are used to prevent loss of property and life. Examples of important natural hazards that are pertinent to geology (as opposed to those that are mainly or only pertinent to meteorology) include earthquakes, volcanic eruptions, tsunamis, landslides, avalanches, sinkholes, and ground subsidence.

History
The study of the physical material of the Earth dates back at least to ancient Greece, when Theophrastus (372–287 BCE) wrote the work Peri Lithon (On Stones). 
During the Roman period, Pliny the Elder wrote in detail of the many minerals and metals, then in practical use – even correctly noting the origin of amber. Additionally, in the 4th century BCE Aristotle made critical observations of the slow rate of geological change. He observed the composition of the land and formulated a theory where the Earth changes at a slow rate and that these changes cannot be observed during one person's lifetime. Aristotle developed one of the first evidence-based concepts connected to the geological realm regarding the rate at which the Earth physically changes. Abu al-Rayhan al-Biruni (973–1048 CE) was one of the earliest Persian geologists, whose works included the earliest writings on the geology of India, hypothesizing that the Indian subcontinent was once a sea. Drawing from Greek and Indian scientific literature that were not destroyed by the Muslim conquests, the Persian scholar Ibn Sina (Avicenna, 981–1037) proposed detailed explanations for the formation of mountains, the origin of earthquakes, and other topics central to modern geology, which provided an essential foundation for the later development of the science. In China, the polymath Shen Kuo (1031–1095) formulated a hypothesis for the process of land formation: based on his observation of fossil animal shells in a geological stratum in a mountain hundreds of miles from the ocean, he inferred that the land was formed by the erosion of the mountains and by deposition of silt. Georgius Agricola (1494–1555) published his groundbreaking work De Natura Fossilium in 1546 and is seen as the founder of geology as a scientific discipline. Nicolas Steno (1638–1686) is credited with the law of superposition, the principle of original horizontality, and the principle of lateral continuity: three defining principles of stratigraphy. The word geology was first used by Ulisse Aldrovandi in 1603, then by Jean-André Deluc in 1778 and introduced as a fixed term by Horace-Bénédict de Saussure in 1779. The word is derived from the Greek γῆ, gê, meaning "earth" and λόγος, logos, meaning "speech". But according to another source, the word "geology" comes from a Norwegian, Mikkel Pedersøn Escholt (1600–1669), who was a priest and scholar. Escholt first used the definition in his book titled, Geologia Norvegica (1657). William Smith (1769–1839) drew some of the first geological maps and began the process of ordering rock strata (layers) by examining the fossils contained in them. In 1763, Mikhail Lomonosov published his treatise On the Strata of Earth. His work was the first narrative of modern geology, based on the unity of processes in time and explanation of the Earth's past from the present. James Hutton (1726–1797) is often viewed as the first modern geologist. In 1785 he presented a paper entitled Theory of the Earth to the Royal Society of Edinburgh. In his paper, he explained his theory that the Earth must be much older than had previously been supposed to allow enough time for mountains to be eroded and for sediments to form new rocks at the bottom of the sea, which in turn were raised up to become dry land. Hutton published a two-volume version of his ideas in 1795. Followers of Hutton were known as Plutonists because they believed that some rocks were formed by vulcanism, which is the deposition of lava from volcanoes, as opposed to the Neptunists, led by Abraham Werner, who believed that all rocks had settled out of a large ocean whose level gradually dropped over time. The first geological map of the U.S. 
was produced in 1809 by William Maclure. In 1807, Maclure commenced the self-imposed task of making a geological survey of the United States. Almost every state in the Union was traversed and mapped by him, the Allegheny Mountains being crossed and recrossed some 50 times. The results of his unaided labours were submitted to the American Philosophical Society in a memoir entitled Observations on the Geology of the United States explanatory of a Geological Map, and published in the Society's Transactions, together with the nation's first geological map. This antedates William Smith's geological map of England by six years, although it was constructed using a different classification of rocks. Sir Charles Lyell (1797–1875) first published his famous book, Principles of Geology, in 1830. This book, which influenced the thought of Charles Darwin, successfully promoted the doctrine of uniformitarianism. This theory states that slow geological processes have occurred throughout the Earth's history and are still occurring today. In contrast, catastrophism is the theory that Earth's features formed in single, catastrophic events and remained unchanged thereafter. Though Hutton believed in uniformitarianism, the idea was not widely accepted at the time. Much of 19th-century geology revolved around the question of the Earth's exact age. Estimates varied from a few hundred thousand to billions of years. By the early 20th century, radiometric dating allowed the Earth's age to be estimated at two billion years. The awareness of this vast amount of time opened the door to new theories about the processes that shaped the planet. Some of the most significant advances in 20th-century geology have been the development of the theory of plate tectonics in the 1960s and the refinement of estimates of the planet's age. Plate tectonics theory arose from two separate geological observations: seafloor spreading and continental drift. The theory revolutionized the Earth sciences. Today the Earth is known to be approximately 4.5 billion years old. Fields or related disciplines Earth system science Economic geology Mining geology Petroleum geology Engineering geology Environmental geology Environmental science Geoarchaeology Geochemistry Biogeochemistry Isotope geochemistry Geochronology Geodetics Geography Physical geography Technical geography Geological engineering Geological modelling Geometallurgy Geomicrobiology Geomorphology Geomythology Geophysics Glaciology Historical geology Hydrogeology Meteorology Mineralogy Oceanography Marine geology Paleoclimatology Paleontology Micropaleontology Palynology Petrology Petrophysics Planetary geology Plate tectonics Regional geology Sedimentology Seismology Soil science Pedology (soil study) Speleology Stratigraphy Biostratigraphy Chronostratigraphy Lithostratigraphy Structural geology Systems geology Tectonics Volcanology See also List of individual rocks References External links One Geology: This interactive geological map of the world is an international initiative of the geological surveys around the globe. This groundbreaking project was launched in 2007 and contributed to the 'International Year of Planet Earth', becoming one of their flagship projects. 
Earth Science News, Maps, Dictionary, Articles, Jobs American Geophysical Union American Geosciences Institute European Geosciences Union European Federation of Geologists Geological Society of America Geological Society of London Video-interviews with famous geologists Geology OpenTextbook Chronostratigraphy benchmarks The principles and objects of geology, with special reference to the geology of Egypt (1911), W. F. Hume
0.764279
0.998425
0.763075
The Age of Revolution: Europe 1789–1848
The Age of Revolution: Europe 1789–1848 is a book by British historian Eric Hobsbawm, first published in 1962. It is the first in a trilogy of books about "the long 19th century" (a term coined by Hobsbawm), followed by The Age of Capital: 1848–1875 and The Age of Empire: 1875–1914. Hobsbawm analyzed the early 19th century, and indeed the whole process of modernisation thereafter, using what he calls the "twin revolution thesis". This thesis recognized the dual importance of the French Revolution and the Industrial Revolution as midwives of modern European history, and – through the connections of colonialism and imperialism – world history. Contents Part I. Developments 1. The World in the 1780s Hobsbawm provides a tour d'horizon of what Europe, European society, and relations with non-European societies were like in the world of the 1780s. He stresses that, for all the noticeable progress made in areas such as better roads, faster mail, and the mastery of overseas exploration, navigation, and trade, societies of the 1780s were still very much part of the pre-modern — or early modern — world. In the 1780s, European society was overwhelmingly rural, to such an extent that, without appreciating this fact, one cannot understand how the world worked at the time. Both the peasantry and the nobility were staunchly based in this rural world, in terms of their physical presence, their social outlook, their ways of conceiving of the world, and their relations to each other. While urban settlements of course existed, with only a scattering of major cities across the European continent, the dominant form of urban life was the provincial town, not the big city. And unlike the cities that emerged in the course of the Industrial Revolution, these provincial towns' economies were ultimately heavily based on the countryside, rather than on mass production, large consumer bases, or long-distance networks and markets. The land, above all, shaped the lives and relations of the majority of people in society. 2. The Industrial Revolution 3. The French Revolution 4. War 5. Peace 6. Revolutions 7. Nationalism In this chapter, Hobsbawm traces the emergence of the phenomenon of nationalism. It was truly a phenomenon because, though loose notions of loyalty to one's country, or patriotism, or recognition of an overarching national character existed, the nationalism that emerged in the years between 1789 and 1848 was more novel, more comprehensive, and more 'modern' (for lack of a better word) in its conception. Nationalism emerged initially as a liberal idea, because it entailed the notion of a nation made up of individual citizens whose rights and freedoms were recognized by the nation and who, in turn, owed responsibility to the national good. This was in contrast to the past, when society was made up of subjects loyal to a monarch, or to a local noble or church overlord, and whose rights and privileges were based on the social, collective, or corporative groups to which they belonged, not on the individual. Part II. Results 8. Land 9. Towards an Industrial World 10. The Career Open to Talent 11. The Labouring Poor 12. Ideology: Religion 13. Ideology: Secular 14. The Arts 15. Science 16. 
Conclusion: Towards 1848 See also Revolutionary Spring: Fighting for a New World 1848–1849 by Christopher Clark References External links The Age of Revolution: Europe 1789–1848 full text on Hathi Trust The age of revolution 1789-1848 - Eric Hobsbawm | libcom.org
0.776585
0.982591
0.763066
The Origin of the Family, Private Property and the State
The Origin of the Family, Private Property and the State: in the Light of the Researches of Lewis H. Morgan is an 1884 anthropological treatise by Friedrich Engels. It is partially based on notes by Karl Marx to Lewis H. Morgan's book Ancient Society (1877). The book is an early historical materialist work and is regarded as one of the first major works on family economics. Summary Ancient Society The Origin of the Family, Private Property and the State begins with an extensive discussion of Morgan's Ancient Society, which aims to describe the major stages of human development, and agrees with the work that the first domestic institution in human history was the matrilineal clan. Morgan was a pioneering American anthropologist and business lawyer who championed the land rights of Native Americans. Traditionally, the Iroquois had lived in communal longhouses based on matrilineal descent and matrilocal residence, giving women a great deal of power. Engels stressed the theoretical significance of Morgan's highlighting of the matrilineal clan. Primitive communism, according to both Morgan and Engels, was based in the matrilineal clan, where females lived with their classificatory sisters – applying the principle that "my sister's child is my child". Because they lived and worked together, females in these communal households felt strong bonds of solidarity with one another, enabling them when necessary to take action against uncooperative men. Engels also cites a letter to Morgan written by a missionary who had lived for many years among the Seneca Iroquois. According to Morgan, the rise of alienable property disempowered women by triggering a switch to patrilocal residence and patrilineal descent. Development of human society and the family Engels added political impact to Morgan's studies of women in prehistory, describing the "overthrow of mother right" as "the world-historic defeat of the female sex"; he attributed this defeat to the onset of farming and pastoralism. In reaction, most twentieth-century social anthropologists considered the theory of matrilineal priority untenable, though feminist scholars of the 1970s–1980s (particularly socialist and radical feminists) attempted to revive it with limited success. In recent years, evolutionary biologists, geneticists and palaeoanthropologists have been reassessing the issues, many citing genetic and other evidence that early human kinship may have been matrilineal after all. Engels emphasizes the importance of social relations of power and control over material resources rather than supposed psychological deficiencies of "primitive" people. In the eyes of both Morgan and Engels, terms such as "savagery" and "barbarism" were respectful and honorific, not negative. Engels summarises Morgan's three main stages as follows: Savagery – the period in which man's appropriation of products in their natural state predominates; the products of human art are chiefly instruments which assist this appropriation. Barbarism – the period during which man learns to breed domestic animals and to practice agriculture, and acquires methods of increasing the supply of natural products by human activity. Civilization – the period in which man learns a more advanced application of work to the products of nature, the period of industry proper and of art. In the following chapter on the family, Engels seeks to connect the transition into these stages with a change in the way that the family is defined and the rules by which it is governed. 
Much of this is still taken from Morgan, although Engels begins to intersperse his own ideas on the role of the family into the text. Morgan acknowledges four stages in the family. The consanguine family is the first stage of the family and as such a primary indicator of our superior nature in comparison with animals. In this state marriage groups are separated according to generations. The husband and wife relationship is immediately and communally assumed between the male and female members of one generation. The only taboo is a sexual relationship between two generations (e.g. father and daughter, grandmother and grandson). The punaluan family, the second stage, extends the incest taboo to include sexual intercourse between siblings, including all cousins of the same generation. This prevents most incestuous relationships. The separation of the patriarchal and matriarchal lines divided a family into gentes. Interbreeding was forbidden within a gens, although first cousins from separate gentes could still breed. In the pairing family, the first indications of pairing are found in families where the husband has one primary wife. Inbreeding is practically eradicated by the prevention of marriage between two family members who are even remotely related, while relationships also start to approach monogamy. Property and economics begin to play a larger part in the family, as a pairing family had responsibility for the ownership of specific goods and property. Polygamy is still common amongst men, but no longer amongst women, since their fidelity would ensure the child's legitimacy. Women have a superior role in the family as keepers of the household and guardians of legitimacy. The pairing family is the form characteristic of the lower stages of barbarism. However, at this point, when the man died his inheritance was still given to his gens, rather than to his offspring. Engels refers to this economic advantage for men, coupled with the woman's lack of rights to lay claim to possessions for herself or her children (who became hers after a separation), as the overthrow of mother-right, which was "the world historical defeat of the female sex". For Engels, ownership of property created the first significant division between men and women in which the woman was inferior. The monogamous family is the fourth stage, which Engels discusses critically in connection with property. Family and property Engels' ideas on the role of property in the creation of the modern family, and as such modern civilization, become clearer in the latter part of Chapter 2, as he elaborates on the question of the monogamous relationship and the freedom to enter into (or refuse) such a relationship. Bourgeois law dictates the rules for relationships and inheritances. As such, two partners, even when their marriage is not arranged, will always have the preservation of inheritance in mind and will therefore never be entirely free to choose their partner. Engels argues that a relationship based on property rights and forced monogamy will only lead to the proliferation of sexual immorality and prostitution. The only class, according to Engels, which is free from these restraints of property, and as a result from the danger of moral decay, is the proletariat, as they lack the monetary means that are the basis of (as well as a threat to) the bourgeois marriage. Monogamy is therefore guaranteed by the fact that theirs is a voluntary sex-love relationship. 
The social revolution which Engels believed was about to happen would eliminate class differences, and therefore also the need for prostitution and the enslavement of women. If men needed only to be concerned with sex-love and no longer with property and inheritance, then monogamy would come naturally. Publication history Background Following the death of his friend and co-thinker Karl Marx in 1883, Engels served as his literary executor, organizing his various writings and preparing them for publication. While time-consuming, this activity did not fully occupy Engels's available hours, and he continued to read and write on topics of his own. While Engels' 1883 manuscript Dialectics of Nature was left uncompleted and unpublished, he successfully published The Origin of the Family, Private Property, and the State: in the Light of the Researches of Lewis H. Morgan in Zürich in 1884. The writing of The Origin of the Family began in early April 1884, and was completed on 26 May. Engels began work on the treatise after reading Marx's handwritten synopsis of Lewis H. Morgan's Ancient Society; or, Researches in the Lines of Human Progress from Savagery, Through Barbarism to Civilization, first published in London in 1877. Engels believed that Marx had intended to create a critical book-length treatment of the ideas suggested by Morgan, and aimed to produce such a manuscript to fulfill his late comrade's wishes. Engels acknowledged these motives, noting in the preface to the first edition that "Marx had reserved to himself the privilege of displaying the results of Morgan's investigations in connection with his own materialist conception of history", as the latter had "in a manner discovered anew" in America the theory originated by Marx decades before. Writing process Engels's first inclination was to seek publication in Germany in spite of the passage of the first of the Anti-Socialist Laws by the government of Chancellor Otto von Bismarck. On April 26, 1884, Engels wrote a letter to his close political associate Karl Kautsky, saying he sought to "play a trick on Bismarck" by writing something "that he would be positively unable to ban". However, he came to regard this goal as unrealizable, since Morgan's discussions of the nature of monogamy and of the relationship between private ownership of property and class struggle made the book "absolutely impossible to couch in such a way as to comply with the Anti-Socialist Law". Engels viewed Morgan's findings as providing a "factual basis we have hitherto lacked" for a prehistory of contemporary class struggle. He believed that it would be an important supplement to the theory of historical materialism for Morgan's ideas to be "thoroughly worked on, properly weighed up, and presented as a coherent whole". This was to be the political intent behind his Origin of the Family project. Work on the book was completed—with the exception of revisions to the final chapter—on May 22, 1884, when the manuscript was dispatched to Eduard Bernstein in Zürich. The final decision of whether to print the book in Stuttgart "under a false style", hiding Engels's forbidden name, or immediately without alteration in a Swiss edition, was deferred by Engels to Bernstein. The latter course of action was chosen, with the book finding print early in October. His first objective was to claim that matriarchy was based on promiscuity as proved by Bachofen, who actually said it was based on monogamy. 
Editions The first edition appeared in Zürich in October 1884, with the possibility of German publication forestalled by Bismarck's Anti-Socialist Law. Two subsequent German editions, each following the first Zürich edition exactly, were published in Stuttgart in 1886 and 1889. During the 1880s the book was translated into and published in a number of other European languages, including Polish, Romanian, Italian, Danish, and Serbian. Changes to the text were made by Engels for a fourth German-language edition, published in 1891, with an effort made to incorporate contemporary findings in the fields of anthropology and ethnography into the work. The first English-language edition did not appear until 1902, when Charles H. Kerr commissioned Ernest Untermann to produce a translation for the "Standard Socialist Series" of popularly priced pocket editions produced by his Charles H. Kerr & Co. of Chicago. The work was extensively reprinted throughout the 20th and into the 21st centuries and is regarded as one of Engels' seminal works. References Bibliography External links The Origin of the Family, Private Property and the State. Ernest Untermann, trans. Chicago: Charles H. Kerr & Co., 1909. —Identical to 1st English language edition. The Origin of the Family, Private Property and the State. Alternate translation. New York: International Publishers, n.d. [c. 1933]. German language html version. English language html at Marxist Internet Archive Soviet study booklet
0.768973
0.99229
0.763043
Coming of age
Coming of age is a young person's transition from being a child to being an adult. The specific age at which this transition takes place varies between societies, as does the nature of the change. It can be a simple legal convention or can be part of a ritual or spiritual event. In the past, and in some societies today, such a change is often associated with the age of sexual maturity (puberty), especially menarche and spermarche. In others, it is associated with an age of religious responsibility. Particularly in Western societies, modern legal conventions stipulate points around the end of adolescence and the beginning of early adulthood (most commonly 18 though ranging from 16 to 21) when adolescents are generally no longer considered minors and are granted the full rights and responsibilities of an adult. Many cultures retain ceremonies to confirm the coming of age, and coming-of-age stories are a well-established sub-genre in literature, the film industry, and other forms of media. Cultural Ancient Greek In certain states in Ancient Greece, such as Sparta and Crete, adolescent boys were expected to enter into a mentoring relationship with an adult man, in which they would be taught skills pertaining to adult life, such as hunting, martial arts and fine arts. South Africa Ancient Rome The puberty ritual for the young Roman male involved shaving his beard and taking off his bulla, an amulet worn to mark and protect underage youth, which he then dedicated to his household gods, the Lares. He assumed the toga virilis ("toga of manhood"), was enrolled as a citizen on the census, and soon began his military service. Traditionally, the ceremony was held on the Liberalia, the festival in honor of the god Liber, who embodied both political and sexual liberty, but other dates could be chosen for individual reasons. Rome lacked the elaborate female puberty rituals of ancient Greece, and for girls, the wedding ceremony was in part a rite of passage for the bride. Girls coming of age dedicated their dolls to Artemis, the goddess most concerned with virginity, or to Aphrodite when they were preparing for marriage. All adolescents in ritual preparation to transition to adult status wore the tunica recta, the "upright tunic", but girls wove their own. The garment was called recta because it was woven by tradition on a type of upright loom that had become archaic in later periods. Roman girls were expected to remain virgins until marriage, but boys were often introduced to heterosexual behaviors by a prostitute. The higher the social rank of a girl, the sooner she was likely to become betrothed and married. The general age of betrothal for girls of the upper classes was fourteen, but for patricians as early as twelve. Weddings, however, were often postponed until the girl was considered mature enough. Males typically postponed marriage till they had served in the military for some time and were beginning their political careers, around age 25. Patrician males, however, might marry considerably earlier; Julius Caesar was married for the first time by the age of 18. On the night before the wedding, the bride bound up her hair with a yellow hairnet she had woven. The confining of her hair signifies the harnessing of her sexuality within marriage. Her weaving of the tunica recta and the hairnet demonstrated her skill and her capacity for acting in the traditional matron's role as custos domi, "guardian of the house". 
On her wedding day, she belted her tunic with the cingulum, made from the wool of a ewe to symbolize fertility, and tied with the "knot of Hercules", which was supposed to be hard to untie. The knot symbolized wifely chastity, in that it was to be untied only by her husband, but the cingulum also symbolized that the bridegroom "was belted and bound" to his wife. The bride's hair was ritually styled in "six tresses" (seni crines), and she was veiled until uncovered by her husband at the end of the ceremony, a ritual of surrendering her virginity to him. Anglo-Celtic The legal age of majority is 18 in most Anglo-Celtic cultures (such as Australia, New Zealand, the United Kingdom, and Ireland). One is legally enabled to vote, purchase tobacco and alcohol, marry without parental consent (although one can wed at 16 in Scotland and New Zealand) and sign contracts. But in the early twentieth century, the age of legal majority was 21, although the marriageable age was typically lower. Even though turning 21 now has few, if any, legal effects in most of these countries, its former legal status as the age of majority has caused it to continue to be celebrated. Canada In Canada, a person aged 16 and over can legally drive a car and work, but is only considered an adult at age 18, as in the US. In most provinces, the legal age to purchase alcohol and cigarettes is 19, except in Alberta, Manitoba, and Quebec, where it is 18. India In India, a person aged 18 and over is allowed to own and drive a car, and has attained the right to vote and the age of consent. Inspired by Western culture, sweet sixteen birthday parties are also commonly celebrated across the country, though with little cultural significance beyond marking the transition to young adulthood. The drinking age varies by state from 18 to 21 years old. Humanist In some countries, Humanist or freethinker organisations have arranged courses or camps for non-religious adolescents, in which they can study or work on ethical, social, and personal topics important for adult life, followed by a formal rite of passage comparable to the Christian Confirmation. Some of these ceremonies are even called "civil confirmations". The purpose of these ceremonies is to offer a festive ritual for those youngsters who do not believe in any religion but nevertheless want to mark their transition from childhood to adulthood. Indonesia In Bali, the coming of age ceremony is supposed to take place after a girl's first menstrual period or a boy's voice breaks. However, due to expense, it is often delayed until later. The upper canines are filed down slightly to symbolize the effacing of the individual's "wild" nature. On the island of Nias, a young man must jump over a stone (normally about 1 or 2 meters high) as part of the coming of age ceremony. Japan From 1948, the age of majority in Japan was 20; persons under 20 are not permitted to smoke or drink. Until June 2016, people under 20 were not permitted to vote. The government of Japan has since lowered the age of majority to 18, which came into effect in 2021. Coming-of-age ceremonies, known as seijin shiki, are held on the second Monday of January. At the ceremony, all of the men and women participating are brought to a government building and listen to many speakers, similar to a graduation ceremony. At the conclusion of the ceremony, government officials give speeches, and small presents are handed out to the new adults. 
Korea In Korea, citizens are permitted to marry, vote, drive, drink alcohol, and smoke at age 19. The Monday of the third week of May is "coming-of-age day". There has been a traditional coming of age ceremony since before the Goryeo dynasty, but it has mostly disappeared. In the traditional way, when boys or girls were between the ages of fifteen and twenty, boys wore gat, a Korean traditional hat made of bamboo and horsehair, and girls did their hair in a chignon with a binyeo, a Korean traditional ornamental hairpin. Both of them wore hanbok, which are sometimes worn at the coming of age ceremony in the present day. Latin America In some Latin American countries, when a female reaches the age of 15, her relatives organize a very expensive celebration. It is usually a large party, called a Quinceañera in Spanish speaking countries and Baile de Debutantes (also called Festa de 15 [años], literally: Party of 15 [years]) in Brazil. The legal age of adulthood varies by country. Papua New Guinea Kovave is a ceremony to initiate Papua New Guinea boys into adult society. It involves dressing up in a conical hat which has long strands of leaves hanging from the edge, down to below the waist. The name Kovave is also used to describe the head-dress. Philippines In the Philippines, a popular coming of age celebration for 18-year-old women is the debut. It is normally a formal affair, with a strict dress code such as a coat and tie for the upper-middle and upper classes, and usually has a theme or color scheme that is related to the dress code. The débutante traditionally chooses for her entourage "18 Roses", who are 18 special men or boys in the girl's life such as boyfriends, relatives and brothers, and "18 Candles", who are the Roses' female counterparts. Each presents a rose or candle then delivers a short speech about the debutante. The Roses sometimes dance with the débutante before presenting their flower and speech, with the last being her father or boyfriend. Other variations exist, such as 18 Treasures (of any gender; gives a present instead of a candle or flower) or other types of flowers aside from roses being given, but the significance of "18" is almost always retained. Filipino men, on the other hand, celebrate their debut on their 21st birthday. There is no traditionally set program marking this event, and celebrations differ from family to family. Both men and women may opt not to hold a debut at all. Romani In the Romani culture, males are called Shave when they come of age at 20, and females Sheya. Males are then taught to drive and work in their family's line of trade, while females are taught the women's line of work. Scandinavian and Slavic In Ukraine, Poland, and the Scandinavian Countries, the legal coming of age of a person is celebrated at either 18 or 21. South Africa In South Africa, the Xhosa Ulwaluko and the Sotho Lebollo la banna circumcision and manhood ceremonies are still undertaken by the majority of males. Spain In Spain during the 19th century, there was a civilian coming of age bound to the compulsory military service. The quintos were the boys of the village that reached the age of eligibility for military service (18 years), thus forming the quinta of a year. In rural Spain, the mili was the first and sometimes the only experience of life away from family. In the days before their departure, the quintos knocked on every door to ask for food and drink. 
They held a common festive meal with what they gathered and sometimes painted some graffiti reading "Vivan los quintos del año" as a memorial of their leaving their youth. Years later, the quintos of the same year could still hold yearly meals to remember times past. By the end of the 20th century, the rural exodus, the diffusion of city customs and the loss of prestige of military service changed the relevance of quintos parties. In some places, the party included the village girls of the same age, thus becoming less directly related to military service. In others, the tradition was simply lost. In 2002, conscription was abolished in Spain in favor of an all-professional military. As a result, the quintos disappeared except for a few rural areas where it is kept as a coming of age traditional party without further consequences. United States In the United States, people are allowed to drive at 16 in all states, with the exception of New Jersey, which requires drivers to be 17 and older, and sometimes receive the responsibility of owning their own car. People are allowed to drive at age 15 in Idaho and Montana. At 16, people are also legally allowed to donate blood and work in most establishments. In spite of this, it is not until the age of 18 that a person is legally considered an adult and can vote and join the military (age 17 with parental consent). The legal age for purchasing and consuming alcohol, tobacco, and recreational marijuana (in states where it is legal) is 21. Multiple localities have also raised the minimum purchase age independent of state laws. Vietnam During the feudal period, the coming of age was celebrated at 15 for noblemen. Nowadays, the age is 20 for both genders. Religious Baha'i Turning 15, the "age of maturity", as the Baha'i faith terms it, is a time when a child is considered spiritually mature. Declared Baha'is that have reached the age of maturity are expected to begin observing certain Baha'i laws, such as obligatory prayer and fasting. Buddhism Theravada boys, typically just under the age of 20 years, undergo a Shinbyu ceremony, where they are initiated into the Temple as Novice Monks (Samanera). They will typically stay in the monastery for between 3 days and 3 years, most commonly for one 3-month "rainy season retreat" (vassa), held annually from late July to early October. During this period the boys experience the rigors of an orthodox Buddhist monastic lifestyle – a lifestyle that involves celibacy, formal voluntary poverty, absolute nonviolence, and daily fasting between noon and the following day's sunrise. Depending on how long they stay, the boys will learn various chants and recitations in the canonical language (Pali) – typically the Buddha's more famous discourses (Suttas) and verses (Gathas) – as well as Buddhist ethics and higher monastic discipline (Vinaya). If they stay long enough and conditions permit, they may be tutored in the meditative practices (bhavana, or dhyana) that are at the heart of Buddhism's program for the self-development of alert tranquillity (samadhi), wisdom (prajna), and divine mental states (brahmavihara). After living the novitiate monastic life for some time, the boy, now considered to have "come of age", will either take higher ordination as a fully ordained monk (a bhikkhu) or will (more often) return to lay life. 
In Southeast Asian countries, where most practitioners of Theravada Buddhism reside, women will often refuse to marry a man who has not ordained temporarily as a Samanera in this way at some point in his life. Men who have completed this Samanera ordination and have returned to lay life are considered primed for adult married life and are described in the Thai language and the Khmer language by terms which roughly translate as "cooked", "finished", or "cooled off" in English, as in meal preparation/consumption. Thus, one's monastic training is seen to have prepared one properly for familial, social, and civic duty and/or one's passions and unruliness of the boy are seen to have "cooled down" enough for him to be of use to a woman as a proper man. Christianity In many Western Christian churches (those deriving from Rome after the East-West Schism), a young person is eligible to receive confirmation, which is considered a sacrament in Catholicism, and a rite in Lutheranism, Anglicanism, Methodism, Irvingism, and Reformed Christianity. The Catholic and Methodist denominations teach that in confirmation, the Holy Spirit strengthens a baptized individual for their faith journey. This is usually done by a bishop or an abbot laying their hands upon the foreheads of the young person (usually between the ages of 12 and 15 years), and marking them with the seal of the Holy Spirit. In some Christian denominations, the confirmand (now an adult in the eyes of the Church) takes a Saint's name as a confirmation name. In Christian denominations that practice Believer's Baptism (baptism by voluntary decision, as opposed to baptism in early infancy), it is normatively carried out after the age of accountability has arrived, as with many Anabaptist denominations, such as the Mennonites. Some traditions withhold the rite of Holy Communion from those not yet at the age of accountability, on the grounds that children do not understand what the sacrament means. In the 20th century, Roman Catholic children began to be admitted to communion some years before confirmation, with an annual First Communion service – a practice that was extended to some paedobaptist Protestant groups, such as Lutheranism and Anglicanism–but since the Second Vatican Council, the withholding of confirmation to a later age, e.g. mid-teens in the United States, early teens in Ireland and Britain, has in some areas been abandoned in favour of restoring the traditional order of the three sacraments of initiation. In some denominations, full membership in the Church, if not bestowed at birth, often must wait until the age of accountability and frequently is granted only after a period of preparation known as catechesis. The time of innocence before one has the ability to understand truly the laws of God and that God sees one as innocent is also seen as applying to individuals who suffer from a mental disability which prevents them from ever reaching a time when they are capable of understanding the laws of God. These individuals are thus seen, according to some Christians, as existing in a perpetual state of innocence. Catholicism In 1910, Pope Pius X issued the decree Quam singulari, which changed the age of eligibility for receiving both the sacrament of Penance and the Eucharist to a "time when a child begins to reason, that is about the seventh year, more or less." Previously, local standards had been at least 10 or 12 or even 14 years old. 
Historically, the sacrament of confirmation has been administered to youth who have reached the "age of discretion". The catechism states that confirmation should be received "at the appropriate time", but in danger of death it can be administered to children. Together with the sacraments of baptism and the Eucharist, the sacrament of confirmation completes the sacraments of Christian initiation, "for without Confirmation and Eucharist, Baptism is certainly valid and efficacious, but Christian initiation remains incomplete." In Eastern Catholic Churches, infants receive confirmation and communion immediately after baptism. In Eastern Christianity the baptising priest confirms infants directly after baptism. The Church of Jesus Christ of Latter-Day Saints The Church of Jesus Christ of Latter-day Saints sets the age of accountability and the minimum age for baptism at 8 years of age. All persons younger than 8 are considered innocent and not accountable for their sins. The Church considers mentally challenged individuals whose mental age is under 8 to be in a perpetual state of innocence. While other doctrines teach that no one is 'without sin', both hold that those under a certain age are considered innocent. Confucianism According to the Records of the Grand Historian, the Duke of Zhou wrote the Rites of Zhou about 3,000 years ago, documenting fundamental ceremonies in ancient China, including the coming of age rite. Then Confucius and his students wrote the Book of Rites, which introduced and further explained important ceremonies in Confucianism. When a man turned 20, his parents would hold a Guan Li (also known as the capping ceremony); when a girl turned 15, she would receive a Ji Li (also known as the hairpin ceremony). These rites were considered to represent a person being mature and prepared to marry and start a family; therefore, they were regarded as the beginning of all the moral rites. During this rite of passage, the young person receives his or her style name. Hinduism In Hinduism, coming of age generally signifies that a boy or girl is mature enough to understand their responsibility towards family and society. Some castes in Hinduism also have the sacred thread ceremony, called Upanayana, for Dvija (twice-born) boys, which marks their coming of age to perform religious ceremonies. Another rite of passage for males is the Bhrataman (or Chudakarma), which marks adulthood. Ifá In the traditional Ifá faith of the Yoruba people of West Africa and the many New World religions that it subsequently gave birth to, men and women are often initiated into the service of one of the hundreds of subsidiary spirits that serve the Orisha Olodumare, the group's conception of the Almighty God. The mystic links that are forged by way of these initiations, which typically occur at puberty, are the conduits that adherents use to attempt to achieve what can be seen as the equivalent of the Buddhist enlightenment, by way of a combination of personalized meditations, reincarnations, and spirit possessions. Islam Children are not required to perform any obligatory religious duties prior to reaching the age of puberty, although they are encouraged to begin praying at the age of seven. Once a person begins puberty, they are required to perform salat and the other obligations of Islam. A girl is considered an adult when she begins menstruating, while a boy is considered an adult at twelve to fifteen years old. 
The evidence for this is the narration of Ibn Umar that he said: "Allah's Apostle called me to present myself in front of him on the eve of the battle of Uhud, while I was fourteen years of age at that time and he did not allow me to take part in that battle but he called me in front of him on the eve of the battle of the Trench when I was fifteen years old, and he allowed me to join the battle." (Reported by Bukhari and Muslim). When Umar Ibn Abdul Aziz heard this hadith, he made this age the criterion for differentiating between a mature and an immature person. In some Islamic cultures, circumcision (khitan) can be a ritual associated with coming of age for boys, taking place in late childhood or early adolescence. Judaism In the Jewish faith, boys reach religious maturity at the age of thirteen and become a bar mitzvah ("bar mitzvah" means "son of the commandment" literally, and "subject to commandments" figuratively). Girls mature a year earlier, and become a bat mitzvah ("bat mitzvah" means "daughter of the commandment") at twelve. The new men and women are looked upon as adults and are expected to uphold the Jewish commandments and laws. They are also treated as adults in religious court and are permitted to marry. Nonetheless, in the Talmud (Pirkei Avot 5:25), Rabbi Yehuda ben Teime gives the age of 18 as the appropriate age to get married. At the end of the bar or bat mitzvah, the boy or girl is showered with candies, which act as "sweet blessings". Besides the actual ceremony, there usually is a bar or bat mitzvah party. Chassidim In various Chassidic sects, when boys turn 3 years of age they have an upsherin ceremony (a term typical of these communities' Brooklyn Yiddish, related to the German abscheren, "to cut hair"), at which they receive their first haircut. Until they undergo this esoteric rite, their parents allow their hair to grow long. Little girls also mark turning 3 years of age by lighting Shabbat candles for the first time, after their mothers have done so. Shinto In the Shinto faith, boys were taken to the shrine of their patron deity at approximately 12–14 years old. They were then given adult clothes and a new haircut. This was called Genpuku. Sikhism In Sikhism, when one reaches the age of maturity, men typically partake in a ceremony called Dastar Bandhi. This is the first time the proper Sikh turban is tied on the adolescent. Women who wear the turban may also partake in the ceremony, although it is less common. See also Adolescence Age of consent Age of majority Age of Majority (Catholic Church) Bildungsroman Coming of Age (Unitarian Universalism) Coming-of-age story Manhood Poy Sang Long Quinceañera (age 15) Rite of passage Self-discovery Sweet sixteen (birthday) Coming of Age in Samoa References External links
0.764912
0.997532
0.763024
Acclimatization
Acclimatization or acclimatisation (also called acclimation or acclimatation) is the process in which an individual organism adjusts to a change in its environment (such as a change in altitude, temperature, humidity, photoperiod, or pH), allowing it to maintain fitness across a range of environmental conditions. Acclimatization occurs in a short period of time (hours to weeks), and within the organism's lifetime (compared to adaptation, which is evolution, taking place over many generations). This may be a discrete occurrence (for example, when mountaineers acclimate to high altitude over hours or days) or may instead represent part of a periodic cycle, such as a mammal shedding heavy winter fur in favor of a lighter summer coat. Organisms can adjust their morphological, behavioral, physical, and/or biochemical traits in response to changes in their environment. While the capacity to acclimate to novel environments has been well documented in thousands of species, researchers still know very little about how and why organisms acclimate the way that they do. Names The nouns acclimatization and acclimation (and the corresponding verbs acclimatize and acclimate) are widely regarded as synonymous, both in general vocabulary and in medical vocabulary. The synonym acclimation is less commonly encountered, and fewer dictionaries enter it. Methods Biochemical In order to maintain performance across a range of environmental conditions, there are several strategies organisms use to acclimate. In response to changes in temperature, organisms can change the biochemistry of cell membranes making them more fluid in cold temperatures and less fluid in warm temperatures by increasing the number of membrane proteins. In response to certain stressors, some organisms express so-called heat shock proteins that act as molecular chaperones and reduce denaturation by guiding the folding and refolding of proteins. It has been shown that organisms which are acclimated to high or low temperatures display relatively high resting levels of heat shock proteins so that when they are exposed to even more extreme temperatures the proteins are readily available. Expression of heat shock proteins and regulation of membrane fluidity are just two of many biochemical methods organisms use to acclimate to novel environments. Morphological Organisms are able to change several characteristics relating to their morphology in order to maintain performance in novel environments. For example, birds often increase their organ size to increase their metabolism. This can take the form of an increase in the mass of nutritional organs or heat-producing organs, like the pectorals (with the latter being more consistent across species). The theory While the capacity for acclimatization has been documented in thousands of species, researchers still know very little about how and why organisms acclimate in the way that they do. Since researchers first began to study acclimation, the overwhelming hypothesis has been that all acclimation serves to enhance the performance of the organism. This idea has come to be known as the beneficial acclimation hypothesis. Despite such widespread support for the beneficial acclimation hypothesis, not all studies show that acclimation always serves to enhance performance (See beneficial acclimation hypothesis). One of the major objections to the beneficial acclimation hypothesis is that it assumes that there are no costs associated with acclimation. 
However, there are likely to be costs associated with acclimation. These include the cost of sensing the environmental conditions and regulating responses, producing structures required for plasticity (such as the energetic costs in expressing heat shock proteins), and genetic costs (such as linkage of plasticity-related genes with harmful genes). Given the shortcomings of the beneficial acclimation hypothesis, researchers are continuing to search for a theory that will be supported by empirical data. The degree to which organisms are able to acclimate is dictated by their phenotypic plasticity or the ability of an organism to change certain traits. Recent research in the study of acclimation capacity has focused more heavily on the evolution of phenotypic plasticity rather than acclimation responses. Scientists believe that when they understand more about how organisms evolved the capacity to acclimate, they will better understand acclimation. Examples Plants Many plants, such as maple trees, irises, and tomatoes, can survive freezing temperatures if the temperature gradually drops lower and lower each night over a period of days or weeks. The same drop might kill them if it occurred suddenly. Studies have shown that tomato plants that were acclimated to higher temperature over several days were more efficient at photosynthesis at relatively high temperatures than were plants that were not allowed to acclimate. In the orchid Phalaenopsis, phenylpropanoid enzymes are enhanced in the process of plant acclimatisation at different levels of photosynthetic photon flux. Animals Animals acclimatize in many ways. Sheep grow very thick wool in cold, damp climates. Fish are able to adjust only gradually to changes in water temperature and quality. Tropical fish sold at pet stores are often kept in acclimatization bags until this process is complete. Lowe & Vance (1995) were able to show that lizards acclimated to warm temperatures could maintain a higher running speed at warmer temperatures than lizards that were not acclimated to warm conditions. Fruit flies that develop at relatively cooler or warmer temperatures have increased cold or heat tolerance as adults, respectively (See Developmental plasticity). Humans The salt content of sweat and urine decreases as people acclimatize to hot conditions. Plasma volume, heart rate, and capillary activation are also affected. Acclimatization to high altitude continues for months or even years after initial ascent, and ultimately enables humans to survive in an environment that, without acclimatization, would kill them. Humans who migrate permanently to a higher altitude naturally acclimatize to their new environment by developing an increase in the number of red blood cells to increase the oxygen carrying capacity of the blood, in order to compensate for lower levels of oxygen intake. See also Acclimatisation society Beneficial acclimation hypothesis Heat index Introduced species Phenotypic plasticity Wind chill References Physiology Ecological processes Climate Biology terminology
0.767395
0.994293
0.763015
Hypermodernity
Hypermodernity (supermodernity) is a type, mode, or stage of society that reflects an inversion of modernity. Hypermodernism stipulates a world in which the object has been replaced by its own attributes. The new attribute-driven world is driven by the rise of technology and aspires to a convergence between technology and biology and more importantly information and matter. Hypermodernism finds its validation in emphasis on the value of new technology to overcome natural limitations. It rejects essentialism and instead favours postmodernism. In hypermodernism the function of an object has its reference point in the form of an object rather than function being the reference point for form. In other words, it describes an epoch in which teleological meaning is reversed from the standpoint of functionalism in favor of constructivism. Hypermodernity Hypermodernity emphasizes a hyperbolic separation between past and present due to the fact that: The past oriented attributes and their functions around objects. Objects that do exist in the present are only extant due to some useful attribute in the hypermodern era. Hypermodernity inverts modernity to allow the attributes of an object to provide even more individuality than modernism. Modernity trapped form within the bounds of limited function; hypermodernity posits that function is now evolving so rapidly, it must take its reference point from form itself. Both positive and negative societal changes occur due to hyper-individualism and increased personal choice. Postmodernity rejected the idea of the past as a reference point and curated objects from the past for the sole purpose of freeing form from function. In postmodernism, truth was ephemeral as the focus was to avoid non-falsifiable tenets. Postmodernity described a total collapse of modernity and its faith in progress and improvement in empowering the individual. Supermodernity If distinguished from hypermodernity, supermodernity is a step beyond the ontological emptiness of postmodernism and relies upon plausible heuristic truths. Whereas modernism focused upon the creation of great truths (or what Lyotard called "master narratives" or "metanarratives"), and postmodernity was intent upon their destruction (deconstruction); supermodernity operates extraneously of meta-truth. Instead, attributes are extracted from objects of the past based on their present relevance. Since attributes are both true and false, a truth value is not necessary including falsifiability. Supermodernity curates useful attributes from modern and postmodern objects in order to escape nihilistic postmodern tautology. Related authors are Terry Eagleton After Theory, and Marc Augé Non-Places: Introduction to an Anthropology of Supermodernity. See also Altermodern Hypermodernism Hypermodernism (chess) Metamodernism Bibliography S. Charles and G. Lipovetsky, Hypermodern Times, Polity Press, 2006. S. Charles, Hypermodern Explained to Children, Liber, 2007 (in French). R. Colonna, L'essere contro l'umano. Preludi per una filosofia della surmodernità, Edises, Napoli, 2010 (in Italian). F. Schoumacher, Eidolon: simulacre et hypermodernité, Paris, Balland, 2024. External links Gilles Lypovetsky interviewed by Denis Failly for his book "le bonheur paradoxal" Modernity Criticism of postmodernism
0.78378
0.973492
0.763004
Ethnography
Ethnography is a branch of anthropology and the systematic study of individual cultures. Ethnography explores cultural phenomena from the point of view of the subject of the study. Ethnography is also a type of social research that involves examining the behavior of the participants in a given social situation and understanding the group members' own interpretation of such behavior. As a form of inquiry, ethnography relies heavily on participant observation—on the researcher participating in the setting or with the people being studied, at least in some marginal role, and seeking to document, in detail, patterns of social interaction and the perspectives of participants, and to understand these in their local contexts. It had its origin in social and cultural anthropology in the early twentieth century, but spread to other social science disciplines, notably sociology, during the course of that century. Ethnographers mainly use qualitative methods, though they may also employ quantitative data. The typical ethnography is a holistic study and so includes a brief history, and an analysis of the terrain, the climate, and the habitat. A wide range of groups and organisations have been studied by this method, including traditional communities, youth gangs, religious cults, and organisations of various kinds. While, traditionally, ethnography has relied on the physical presence of the researcher in a setting, there is research using the label that has relied on interviews or documents, sometimes to investigate events in the past such as the NASA Challenger disaster. There is also a considerable amount of 'virtual' or online ethnography, sometimes labelled netnography or cyber-ethnography. Origins The term ethnography is from Greek ( éthnos "folk, people, nation" and gráphō "I write") and encompasses the ways in which ancient authors described and analyzed foreign cultures. Anthony Kaldellis loosely suggests the Odyssey as a starting point for ancient ethnography, while noting that Herodotus' Histories is the usual starting point; while Edith Hall has argued that Homeric poetry lacks "the coherence and vigour of ethnological science". From Herodotus forward, ethnography was a mainstay of ancient historiography. Tacitus has ethnographies in the Agricola, Histories, and Germania. Tacitus' Germania "stands as the sole surviving full-scale monograph by a classical author on an alien people." Ethnography formed a relatively coherent subgenre in Byzantine literature. Development as a science While ethnography ("ethnographic writing") was widely practiced in antiquity, ethnography as a science (cf. ethnology) did not exist in the ancient world. There is no ancient term or concept applicable to ethnography, and those writers probably did not consider the study of other cultures as a distinct mode of inquiry from history. Gerhard Friedrich Müller developed the concept of ethnography as a separate discipline whilst participating in the Second Kamchatka Expedition (1733–43) as a professor of history and geography. Whilst involved in the expedition, he differentiated Völker-Beschreibung as a distinct area of study. This became known as "ethnography", following the introduction of the Greek neologism ethnographia by Johann Friedrich Schöpperlin and the German variant by A. F. Thilo in 1767. August Ludwig von Schlözer and Christoph Wilhelm Jacob Gatterer of the University of Göttingen introduced the term into the academic discourse in an attempt to reform the contemporary understanding of world history. 
Features of ethnographic research According to Dewan (2018), the researcher is not looking to generalize the findings; rather, they consider them in reference to the context of the situation. In this regard, the best way to integrate ethnography into quantitative research would be to use it to discover and uncover relationships and then use the resultant data to test and explain the empirical assumptions. In ethnography, the researcher gathers what is available, what is normal, what it is that people do, what they say, and how they work. Ethnography can also be used in other methodological frameworks, for instance, an action research program of study where one of the goals is to change and improve the situation. Ethnographic research is a fundamental methodology in cultural ecology, development studies, and feminist geography. In addition, it has gained importance in social, political, cultural, and nature-society geography. Ethnography is an effective methodology in qualitative geographic research that focuses on people's perceptions and experiences and their traditionally place-based immersion within a social group. Data collection methods According to John Brewer, a leading social scientist, data collection methods are meant to capture the "social meanings and ordinary activities" of people (informants) in "naturally occurring settings" that are commonly referred to as "the field". The goal is to collect data in such a way that the researcher imposes a minimal amount of personal bias on the data. Multiple methods of data collection may be employed to facilitate a relationship that allows for a more personal and in-depth portrait of the informants and their community. These can include participant observation, field notes, interviews and surveys, as well as various visual methods. Interviews are often taped and later transcribed, allowing the interview to proceed unimpeded by note-taking, but with all information available later for full analysis. Secondary research and document analysis are also used to provide insight into the research topic. In the past, kinship charts were commonly used to "discover logical patterns and social structure in non-Western societies". In the 21st century, anthropology focuses more on the study of people in urban settings, and kinship charts are seldom employed. In order to make the data collection and interpretation transparent, researchers creating ethnographies often attempt to be "reflexive". Reflexivity refers to the researcher's aim "to explore the ways in which [the] researcher's involvement with a particular study influences, acts upon and informs such research" (Marvasti, Amir & Gubrium, Jaber. 2023. Crafting Ethnographic Fieldwork: Sites, Selves & Social Worlds. Routledge). Despite these attempts at reflexivity, no researcher can be totally unbiased. This factor has provided a basis for criticism of ethnography. Traditionally, the ethnographer focuses attention on a community, selecting knowledgeable informants who know the activities of the community well. These informants are typically asked to identify other informants who represent the community, often using snowball or chain sampling. This process is often effective in revealing common cultural denominators connected to the topic being studied. Ethnography relies greatly on up-close, personal experience. Participation, rather than just observation, is one of the keys to this process. Ethnography is very useful in social research. 
An inevitability during ethnographic participation is that the researcher experiences at least some resocialization. In other words, the ethnographer to some extent "becomes" what they are studying. For instance, an ethnographer may become skilled at a work activity that they are studying; they may become a member of a particular religious group they are interested in studying; or they may even inhabit a familial role in a community they are staying with. Robert M. Emerson, Rachel Fretz, and Linda Shaw summarize this idea in their book Writing Ethnographic Field Notes using a common metaphor: "the fieldworker cannot and should not attempt to be a fly on the wall."
Ybema et al. (2010) examine the ontological and epistemological presuppositions underlying ethnography. Ethnographic research can range from a realist perspective, in which behavior is observed, to a constructivist perspective, in which understanding is socially constructed by the researcher and subjects. Research can range from an objectivist account of fixed, observable behaviors to an interpretive narrative describing "the interplay of individual agency and social structure." Critical theory researchers address "issues of power within the researcher-researched relationships and the links between knowledge and power."
Another form of data collection is that of the "image". The image is the projection that an individual puts on an object or abstract idea. An image can be contained within the physical world through a particular individual's perspective, primarily based on that individual's past experiences. One example of an image is how an individual views a novel after completing it. The physical entity that is the novel contains a specific image from the perspective of the interpreting individual, which can only be expressed by the individual in terms of "I can tell you what an image is by telling you what it feels like." The idea of an image relies on the imagination and has been seen to be utilized by children in a very spontaneous and natural manner. Effectively, the idea of the image is a primary tool for ethnographers to collect data. The image presents the perspective, experiences, and influences of an individual as a single entity and, in consequence, the individual will always carry this image into the group under study.
Differences across disciplines
The ethnographic method is used across a range of different disciplines, primarily by anthropologists and ethnologists but also occasionally by sociologists. Cultural studies, occupational therapy, economics, social work, education, design, psychology, computer science, human factors and ergonomics, ethnomusicology, folkloristics, religious studies, geography, history, linguistics, communication studies, performance studies, advertising, accounting research, nursing, urban planning, usability, political science, social movement studies, and criminology are other fields which have made use of ethnography.
Cultural and social anthropology
Cultural anthropology and social anthropology were developed around ethnographic research, and their canonical texts are mostly ethnographies: e.g. Argonauts of the Western Pacific (1922) by Bronisław Malinowski, Ethnologische Excursion in Johore (1875) by Nicholas Miklouho-Maclay, Coming of Age in Samoa (1928) by Margaret Mead, The Nuer (1940) by E. E. Evans-Pritchard, Naven (1936, 1958) by Gregory Bateson, or "The Lele of the Kasai" (1963) by Mary Douglas. Cultural and social anthropologists today place a high value on doing ethnographic research.
The typical ethnography is a document written about a particular people, almost always based at least in part on emic views of where the culture begins and ends. Using language or community boundaries to bound the ethnography is common. Ethnographies are also sometimes called "case studies". Ethnographers study and interpret culture, its universalities, and its variations through ethnographic study based on fieldwork. An ethnography is a specific kind of written observational science which provides an account of a particular culture, society, or community. The fieldwork usually involves spending a year or more in another society, living with the local people and learning about their ways of life.
Ruth Fulton Benedict used ethnography in a series of fieldwork projects, beginning with the Serrano in 1922 and continuing with the Zuni in 1924, the Cochiti in 1925 and the Pima in 1926, all peoples she wished to study for her anthropological data. Benedict's experiences with the Southwest Zuni pueblo are considered the basis of her formative fieldwork. The experience set the idea for her to produce her theory of "culture is personality writ large" (Modell, 1988). By studying the cultures of the different Pueblo and Plains peoples, she discovered the cultural isomorphism that would come to be considered her distinctive personal approach to the study of anthropology using ethnographic techniques.
A typical ethnography attempts to be holistic and typically follows an outline that includes a brief history of the culture in question, an analysis of the physical geography or terrain inhabited by the people under study, including climate, and often what biological anthropologists call habitat. Folk notions of botany and zoology are presented as ethnobotany and ethnozoology alongside references from the formal sciences. Material culture, technology, and means of subsistence are usually treated next, as they are typically bound up in physical geography and include descriptions of infrastructure. Kinship and social structure (including age grading, peer groups, gender, voluntary associations, clans, moieties, and so forth, if they exist) are typically included. Languages spoken, dialects, and the history of language change are another group of standard topics. Practices of child rearing, acculturation, and emic views on personality and values usually follow after sections on social structure. Rites, rituals, and other evidence of religion have long been an interest and are sometimes central to ethnographies, especially when conducted in public where visiting anthropologists can see them.
As ethnography developed, anthropologists grew more interested in less tangible aspects of culture, such as values, worldview and what Clifford Geertz termed the "ethos" of the culture. In his fieldwork, Geertz used elements of a phenomenological approach, tracing not just the doings of people, but the cultural elements themselves. For example, if within a group of people winking was a communicative gesture, he sought first to determine what kinds of things a wink might mean (it might mean several things). Then, he sought to determine in what contexts winks were used, and whether, as one moved about a region, winks remained meaningful in the same way. In this way, cultural boundaries of communication could be explored, as opposed to using linguistic boundaries or notions about residence.
Geertz, while still following something of a traditional ethnographic outline, moved outside that outline to talk about "webs" instead of "outlines" of culture.
Within cultural anthropology, there are several subgenres of ethnography. Beginning in the 1950s and early 1960s, anthropologists began writing "bio-confessional" ethnographies that intentionally exposed the nature of ethnographic research. Famous examples include Tristes Tropiques (1955) by Lévi-Strauss, The High Valley by Kenneth Read, and The Savage and the Innocent by David Maybury-Lewis, as well as the mildly fictionalized Return to Laughter by Elenore Smith Bowen (Laura Bohannan). Later "reflexive" ethnographies refined the technique to translate cultural differences by representing their effects on the ethnographer. Famous examples include Deep Play: Notes on a Balinese Cockfight by Clifford Geertz, Reflections on Fieldwork in Morocco by Paul Rabinow, The Headman and I by Jean-Paul Dumont, and Tuhami by Vincent Crapanzano. In the 1980s, the rhetoric of ethnography was subjected to intense scrutiny within the discipline, under the general influence of literary theory and post-colonial/post-structuralist thought. "Experimental" ethnographies that reveal the ferment of the discipline include Shamanism, Colonialism, and the Wild Man by Michael Taussig, Debating Muslims by Michael M. J. Fischer and Mehdi Abedi, A Space on the Side of the Road by Kathleen Stewart, and Advocacy after Bhopal by Kim Fortun.
This critical turn in sociocultural anthropology during the mid-1980s can be traced to the influence of the now classic (and often contested) text Writing Culture: The Poetics and Politics of Ethnography (1986), edited by James Clifford and George Marcus. Writing Culture helped bring changes to both anthropology and ethnography often described as 'postmodern,' 'reflexive,' 'literary,' 'deconstructive,' or 'poststructural' in nature, in that the text helped to highlight the various epistemic and political predicaments that many practitioners saw as plaguing ethnographic representations and practices. Where Geertz's and Turner's interpretive anthropology recognized subjects as creative actors who constructed their sociocultural worlds out of symbols, postmodernists attempted to draw attention to the privileged status of the ethnographers themselves. That is, the ethnographer cannot escape their personal viewpoint in creating an ethnographic account, which makes any claim of objective neutrality highly problematic, if not altogether impossible. In regard to this last point, Writing Culture became a focal point for examining how ethnographers could describe different cultures and societies without denying the subjectivity of the individuals and groups being studied, while simultaneously refraining from laying claim to absolute knowledge and objective authority. Along with the development of experimental forms such as 'dialogic anthropology,' 'narrative ethnography,' and 'literary ethnography,' Writing Culture helped to encourage the development of 'collaborative ethnography.' This exploration of the relationship between writer, audience, and subject has become a central tenet of contemporary anthropological and ethnographic practice. In certain instances, active collaboration between the researcher(s) and subject(s) has helped blend the practice of collaboration in ethnographic fieldwork with the process of creating the ethnographic product resulting from the research.
Sociology
Sociology is another field which prominently features ethnographies. Urban sociology, Atlanta University (now Clark-Atlanta University), and the Chicago School, in particular, are associated with ethnographic research, with some well-known early examples being The Philadelphia Negro (1899) by W. E. B. Du Bois, Street Corner Society by William Foote Whyte and Black Metropolis by St. Clair Drake and Horace R. Cayton, Jr. Also well known is Jaber F. Gubrium's pioneering ethnography of the experiences of a nursing home, Living and Dying at Murray Manor. Major influences on this development were anthropologist Lloyd Warner, on the Chicago sociology faculty, and Robert Park's experience as a journalist. Symbolic interactionism developed from the same tradition and yielded such sociological ethnographies as Shared Fantasy by Gary Alan Fine, which documents the early history of fantasy role-playing games. Other important ethnographies in sociology include Pierre Bourdieu's work in Algeria and France.
Jaber F. Gubrium's series of organizational ethnographies, focused on the everyday practices of illness, care, and recovery, is notable. It includes Living and Dying at Murray Manor, which describes the social worlds of a nursing home; Describing Care: Image and Practice in Rehabilitation, which documents the social organization of patient subjectivity in a physical rehabilitation hospital; Caretakers: Treating Emotionally Disturbed Children, which features the social construction of behavioral disorders in children; and Oldtimers and Alzheimer's: The Descriptive Organization of Senility, which describes how the Alzheimer's disease movement constructed a new subjectivity of senile dementia and how that is organized in a geriatric hospital. Another approach to ethnography in sociology comes in the form of institutional ethnography, developed by Dorothy E. Smith for studying the social relations which structure people's everyday lives.
Other notable ethnographies include Paul Willis's Learning to Labour, on working-class youth; the work of Elijah Anderson, Mitchell Duneier, and Loïc Wacquant on black America; and Lai Olurode's Glimpses of Madrasa From Africa. But even though many sub-fields and theoretical perspectives within sociology use ethnographic methods, ethnography is not the sine qua non of the discipline, as it is in cultural anthropology.
Communication studies
Beginning in the 1960s and 1970s, ethnographic research methods began to be widely used by communication scholars. As the purpose of ethnography is to describe and interpret the shared and learned patterns of values, behaviors, beliefs, and language of a culture-sharing group, Harris (1968) and Agar (1980) note that ethnography is both a process and an outcome of the research. Studies such as Gerry Philipsen's analysis of cultural communication strategies in a blue-collar, working-class neighborhood on the south side of Chicago, Speaking 'Like a Man' in Teamsterville, paved the way for the expansion of ethnographic research in the study of communication. Scholars of communication studies use ethnographic research methods to analyze communicative behaviors and phenomena. This is often characterized in the writing as attempts to understand taken-for-granted routines by which working definitions are socially produced. Ethnography as a method is a storied, careful, and systematic examination of the reality-generating mechanisms of everyday life (Coulon, 1995).
Ethnographic work in communication studies seeks to explain "how" ordinary methods, practices, and performances construct the ordinary actions used by ordinary people in the accomplishment of their identities. This often gives the impression of trying to answer the "why" and "how come" questions of human communication. Often this type of research results in a case study or field study, such as an analysis of speech patterns at a protest rally, or the way firemen communicate during "down time" at a fire station. Like anthropology scholars, communication scholars often immerse themselves in, participate in, and/or directly observe the particular social group being studied.
Other fields
The American anthropologist George Spindler was a pioneer in applying the ethnographic methodology to the classroom.
Anthropologists such as Daniel Miller and Mary Douglas have used ethnographic data to answer academic questions about consumers and consumption. In this sense, Tony Salvador, Genevieve Bell, and Ken Anderson describe design ethnography as being "a way of understanding the particulars of daily life in such a way as to increase the success probability of a new product or service or, more appropriately, to reduce the probability of failure specifically due to a lack of understanding of the basic behaviors and frameworks of consumers." Sociologist Sam Ladner argues in her book that understanding consumers and their desires requires a shift in "standpoint", one that only ethnography provides. The results are products and services that respond to consumers' unmet needs.
Businesses, too, have found ethnographers helpful for understanding how people use products and services. By assessing user experience in a "natural" setting, ethnography yields insights into the practical applications of a product or service. It is one of the best ways to identify areas of friction and improve overall user experience. Companies make increasing use of ethnographic methods to understand consumers and consumption, or for new product development (such as video ethnography). The Ethnographic Praxis in Industry (EPIC) conference is evidence of this. Ethnographers' systematic and holistic approach to real-life experience is valued by product developers, who use the method to understand unstated desires or cultural practices that surround products. Where focus groups fail to inform marketers about what people really do, ethnography links what people say to what they do, avoiding the pitfalls that come from relying only on self-reported, focus-group data.
Evaluating ethnography
Ethnographic methodology is not usually evaluated in terms of philosophical standpoint (such as positivism or emotionalism); nonetheless, ethnographic studies need to be evaluated in some manner. No consensus has been developed on evaluation standards, but Richardson (2000, p. 254) provides five criteria that ethnographers might find helpful. Jaber F. Gubrium and James A. Holstein's (1997) monograph, The New Language of Qualitative Method, discusses forms of ethnography in terms of their "methods talk". Richardson's criteria are:
Substantive contribution: "Does the piece contribute to our understanding of social life?"
Aesthetic merit: "Does this piece succeed aesthetically?"
Reflexivity: "How did the author come to write this text...Is there adequate self-awareness and self-exposure for the reader to make judgments about the point of view?"
Impact: "Does this affect me? Emotionally? Intellectually?" Does it move me?
Expresses a reality: "Does it seem 'true'—a credible account of a cultural, social, individual, or communal sense of the 'real'?"
Ethics
Gary Alan Fine argues that the nature of ethnographic inquiry demands that researchers deviate from formal and idealistic rules or ethics that have come to be widely accepted in qualitative and quantitative approaches to research. Many of these ethical assumptions are rooted in positivist and post-positivist epistemologies that have adapted over time, but they are apparent in all research paradigms and must be accounted for. These ethical dilemmas are evident throughout the entire process of conducting ethnographies, including the design, implementation, and reporting of an ethnographic study. Essentially, Fine maintains that researchers are typically not as ethical as they claim or assume to be, and that "each job includes ways of doing things that would be inappropriate for others to know". See also Jaber F. Gubrium's concept of "site-specificity", discussed in his book co-edited with Amir Marvasti, Crafting Ethnographic Fieldwork (Routledge, 2023).
Fine is not necessarily casting blame at ethnographic researchers but tries to show that researchers often make idealized ethical claims and standards which are inherently based on partial truths and self-deceptions. Fine also acknowledges that many of these partial truths and self-deceptions are unavoidable. He maintains that "illusions" are essential to maintain an occupational reputation and avoid potentially more caustic consequences. He claims, "Ethnographers cannot help but lie, but in lying, we reveal truths that escape those who are not so bold". Based on these assertions, Fine establishes three conceptual clusters in which ethnographic ethical dilemmas can be situated: "Classic Virtues", "Technical Skills", and "Ethnographic Self".
Much debate surrounding the issue of ethics arose following revelations about how the ethnographer Napoleon Chagnon conducted his ethnographic fieldwork with the Yanomami people of South America.
While there is no international standard on ethnographic ethics, many Western anthropologists look to the American Anthropological Association for guidance when conducting ethnographic work. In 2009, the Association adopted a code of ethics stating that anthropologists have "moral obligations as members of other groups, such as the family, religion, and community, as well as the profession". The code of ethics notes that anthropologists are part of a wider scholarly and political network, as well as of a human and natural environment, which needs to be reported on respectfully. It recognizes that very close and personal relationships can sometimes develop from doing ethnographic work. The Association acknowledges that the code is limited in scope; ethnographic work can sometimes be multidisciplinary, and anthropologists need to be familiar with the ethics and perspectives of other disciplines as well. The eight-page code of ethics outlines ethical considerations for those conducting research, teaching, application and dissemination of results, which are briefly outlined below.
"Conducting Research" – When conducting research, anthropologists need to be aware of the potential impacts of the research on the people and animals they study. According to the code of ethics, if seeking new knowledge would negatively impact the people and animals being studied, the study may not be undertaken.
"Teaching" – When teaching the discipline of anthropology, instructors are required to inform students of the ethical dilemmas of conducting ethnographies and fieldwork.
"Application" – When conducting an ethnography, anthropologists must be "open with funders, colleagues, persons studied or providing information, and relevant parties affected by the work about the purpose(s), potential impacts, and source(s) of support for the work."
"Dissemination of Results" – When disseminating results of an ethnography, "[a]nthropologists have an ethical obligation to consider the potential impact of both their research and the communication or dissemination of the results of their research on all directly or indirectly involved." Research results of ethnographies should not be withheld from the participants in the research if that research is being observed by other people.
Classic virtues
"The kindly ethnographer" – Most ethnographers present themselves as being more sympathetic than they are, which aids in the research process but is also deceptive. The identity that we present to subjects is different from who we are in other circumstances.
"The friendly ethnographer" – Ethnographers operate under the assumption that they should not dislike anyone. When ethnographers find they intensely dislike individuals encountered in the research, they may crop them out of the findings.
"The honest ethnographer" – If research participants know the research goals, their responses will likely be skewed. Therefore, ethnographers often conceal what they know in order to increase the likelihood of acceptance by participants.
Technical skills
"The Precise Ethnographer" – Ethnographers often create the illusion that field notes are data and reflect what "really" happened. They engage in the opposite of plagiarism, giving undeserved credit through loose interpretations and paraphrasing. Researchers take near-fictions and turn them into claims of fact. The closest ethnographers can ever really get to reality is an approximate truth.
"The Observant Ethnographer" – Readers of ethnography are often led to assume the report of a scene is complete – that little of importance was missed. In reality, an ethnographer will always miss some aspect because of lacking omniscience. Everything is open to multiple interpretations and misunderstandings. As ethnographers' skills in observation and collection of data vary by individual, what is depicted in ethnography can never be the whole picture.
"The Unobtrusive Ethnographer" – As a "participant" in the scene, the researcher will always have an effect on the communication that occurs within the research site. The degree to which one is an "active member" affects the extent to which sympathetic understanding is possible.
Ethnographic self
The following are common misconceptions about ethnographers:
"The Candid Ethnographer" – Where the researcher personally situates within the ethnography is ethically problematic. There is an illusion that everything reported was observed by the researcher.
"The Chaste Ethnographer" – When ethnographers participate within the field, they invariably develop relationships with research subjects/participants. These relationships are sometimes not accounted for within the reporting of the ethnography, although they may influence the research findings.
"The Fair Ethnographer" – Fine claims that objectivity is an illusion and that everything in ethnography is known from a perspective.
Therefore, it is unethical for a researcher to report fairness in findings.
"The Literary Ethnographer" – Representation is a balancing act of determining what to "show" through poetic/prosaic language and style, versus what to "tell" via straightforward, 'factual' reporting. The individual skills of an ethnographer influence what appears to be the value of the research.
According to Norman K. Denzin, ethnographers should consider the following seven principles when observing, recording, and sampling data:
The groups should combine symbolic meanings with patterns of interaction.
Observe the world from the point of view of the subject, while maintaining the distinction between everyday and scientific perceptions of reality.
Link the group's symbols and their meanings with the social relationships.
Record all behavior.
The methodology should highlight phases of process, change, and stability.
The act should be a type of symbolic interactionism.
Use concepts that would avoid causal explanations.
Forms
Autoethnography
Autoethnography is a form of ethnographic research in which a researcher connects personal experiences to wider cultural, political, and social meanings and understandings. According to Adams et al., autoethnography:
uses a researcher's personal experience to describe and critique cultural beliefs, practices, and experiences;
acknowledges and values a researcher's relationships with others;
uses deep and careful self-reflection, typically referred to as "reflexivity", to name and interrogate the intersections between self and society, the particular and the general, and the personal and the political;
shows people in the process of figuring out what to do, how to live, and the meaning of their struggles;
balances intellectual and methodological rigor, emotion, and creativity; and
strives for social justice and to make life better.
Bochner and Ellis have also defined autoethnography as "an autobiographical genre of writing and research that displays multiple layers of consciousness, connecting the personal to the cultural." They further indicate that autoethnography is typically written in the first person and can "appear in a variety of forms," such as "short stories, poetry, fiction, novels, photographic essays, personal essays, journals, fragmented and layered writing, and social science prose."
Genealogical method
The genealogical method investigates links of kinship determined by marriage and descent. The method owes its origin to the British ethnographer W. H. R. Rivers's book Kinship and Social Organisation (1911). Genealogy or kinship commonly plays a crucial role in the structure of non-industrial societies, determining both social relations and group relationship to the past. Marriage, for example, is frequently pivotal in determining military alliances between villages, clans or ethnic groups. In the field of epistemology the term is used to characterize the philosophical method employed by such writers as Friedrich Nietzsche and Michel Foucault.
Digital ethnography
Digital ethnography is also referred to as virtual ethnography. This type of ethnography differs from traditional ethnography recorded with pen and paper, and it allows for many more opportunities to look at different cultures and societies. Traditional ethnography may use videos or images, but digital ethnography goes more in-depth. For example, digital ethnographers would use social media platforms such as Twitter or blogs so that people's interactions and behaviors can be studied.
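As a purely illustrative example of the kind of data handling this can involve, the following Python sketch counts who replies to whom in an exported archive of forum posts. The file name and the "author"/"reply_to" field names are assumptions made for the example, not the export format or API of any particular platform.

    # A minimal sketch: tally author -> replied-to-author interactions in an exported
    # archive of posts (a JSON list of dictionaries), as one might do when mapping
    # online interaction patterns. All field names here are hypothetical.
    import json
    from collections import Counter

    def interaction_counts(path):
        """Count how often each (author, replied-to author) pair occurs."""
        with open(path, encoding="utf-8") as f:
            posts = json.load(f)                  # a list of post dictionaries
        pairs = Counter()
        for post in posts:
            author, target = post.get("author"), post.get("reply_to")
            if author and target:
                pairs[(author, target)] += 1      # one observed interaction
        return pairs

    if __name__ == "__main__":
        # Print the ten most frequent interaction pairs in an exported archive.
        for (author, target), n in interaction_counts("forum_posts.json").most_common(10):
            print(f"{author} -> {target}: {n}")

Counts like these are only a starting point; in a digital ethnography they would be read alongside the content of the exchanges and the researcher's own participation in the online setting.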
Modern developments in computing power and AI have made ethnographic data collection more efficient, combining multimedia with computational analysis and using machine learning to corroborate many data sources and produce a refined output for various purposes. A modern example of this technology in application is the use of audio captured by smart devices, transcribed and reconciled with other metadata to issue targeted adverts or to inform product development for designers. Digital ethnography comes with its own set of ethical questions, and the Association of Internet Researchers' ethical guidelines are frequently used. Gabriele de Seta's paper "Three Lies of Digital Ethnography" explores some of the methodological questions more central to a specifically ethnographical approach to internet studies, drawing upon Fine's classic text.
Multispecies ethnography
Multispecies ethnography focuses on both nonhuman and human participants within a group or culture, as opposed to just the human participants of traditional ethnography. A multispecies ethnography, in comparison to other forms of ethnography, studies species that are connected to people and our social lives. Species affect and are affected by culture, economics, and politics. The field's roots go back to the general anthropology of animals. One of the earliest well-known studies was Lewis Henry Morgan's The American Beaver and His Works (1868). His study closely observed a group of beavers in Northern Michigan. Morgan's main objective was to highlight that the daily individual tasks that the beavers performed were complex communicative acts that had been passed down for generations. In the early 2000s multispecies ethnography grew considerably in popularity. The annual meetings of the American Anthropological Association began to host the Multispecies Salon, a collection of discussions, showcases, and other events for anthropologists. The event provided a space for anthropologists and artists to come together and showcase vast knowledge of different organisms and their intertwined systems. Multispecies ethnography highlights many of the negative effects of these shared environments and systems. Not only does multispecies ethnography observe the physical relationships between organisms, it also takes note of the emotional and psychological relationships built between species.
Relational ethnography
Most ethnographies take place in specific places where the observer can observe specific instances that relate to the topic involved. Relational ethnography, by contrast, studies fields rather than places, and processes rather than processed people. This means that relational ethnography takes as its object neither a bounded group defined by its members' shared social features nor a specific location delimited by the boundaries of a particular area, but rather the processes involving configurations of relations among different agents or institutions.
Notable ethnographers
Manuel Ancízar Basterra (1812–1882)
Franz Boas (1858–1942)
Gregory Bateson (1904–1980)
Adriaen Cornelissen van der Donck
Mary Douglas (1921–2007)
Raymond Firth (1901–2002)
Leo Frobenius (1873–1938)
Thor Heyerdahl (1914–2002)
Zora Neale Hurston (1891–1960)
Diamond Jenness (1886–1969)
Mary Kingsley (1862–1900)
Carobeth Laird (1895–1983)
Ruth Landes (1908–1991)
Edmund Leach (1910–1989)
José Leite de Vasconcelos (1858–1941)
Claude Lévi-Strauss (1908–2009)
Bronisław Malinowski (1884–1942)
David Maybury-Lewis (1929–2007)
Margaret Mead (1901–1978)
Nicholas Miklouho-Maclay (1846–1888)
Gerhard Friedrich Müller (1705–1783)
Nikolai Nadezhdin (1804–1856)
Lubor Niederle (1865–1944)
Dositej Obradović (1739–1811)
Alexey Okladnikov (1908–1981)
Sergey Oldenburg (1863–1934)
Edward Sapir (1884–1939)
August Ludwig von Schlözer (1735–1809)
James Spradley (1933–1982)
Jean Briggs (1929–2016)
Cora Du Bois (1903–1991)
Lila Abu-Lughod
Elijah Anderson (born 1943)
Ruth Behar
Zuzana Beňušková (born 1960)
Zalpa Bersanova
Napoleon Chagnon (1938–2019)
Veena Das (born 1945)
Mitchell Duneier
Kristen R. Ghodsee (born 1970)
Alice Goffman (born 1982)
Jaber F. Gubrium (born 1945)
Katrina Karkazis
Jovan Cvijić
Richard Price (born 1941)
Marilyn Strathern (born 1941)
Carolyn Ellis
Barrie Thorne
Sudhir Venkatesh
Susan Visvanathan
Paul Willis
Mikhail Nikolaevich Smirnov
James H. McAlexander (1958–2022) (consumer culture ethnography)
See also
Area studies
Autoethnography
Critical ethnography
Ethnoarchaeology
Ethnography of communication
Ethnographic Museum
Ethnology
Ethnosemiotics
Folklore
Immersion journalism
Living lab
Online ethnography
Ontology
Participant observation
Qualitative research
Realist ethnography
Video ethnography
Visual ethnography
Bibliography
Agar, Michael (1996) The Professional Stranger: An Informal Introduction to Ethnography. Academic Press.
Burns, Janet M.C. (1992) Caught in the Riptide: Female Researcher in a Patricentric Setting. pp. 171–182 in Fragile Truths: 25 Years of Sociology and Anthropology in Canada. D. Harrison, W.K. Carroll, L. Christiansen-Ruffman and Raymond Currie (eds.). Ottawa, Ontario, Canada: Carleton University Press.
Clifford, James & George E. Marcus (eds.) (1986) Writing Culture: The Poetics and Politics of Ethnography. Berkeley: University of California Press.
Douglas, Mary and Baron Isherwood (1996) The World of Goods: Toward an Anthropology of Consumption. Routledge, London.
Dubinsky, Itamar (2017) Global and local methodological and ethical questions in researching football academies in Ghana. Children's Geographies, 15(4), 385–398. https://doi.org/10.1080/14733285.2016.1249823
Erickson, Ken C. and Donald D. Stull (1997) Doing Team Ethnography: Warnings and Advice. Sage, Beverly Hills.
Fetterman, D. (2009) Ethnography: Step by Step, Third edition, Thousand Oaks CA, Sage.
Geertz, Clifford (1973) The Interpretation of Cultures. New York: Basic Books.
Ghodsee, Kristen (2013) Anthropology News.
Groh, Arnold A. (2018) Research Methods in Indigenous Contexts. New York: Springer.
Gubrium, Jaber F. (1988) Analyzing Field Reality. Thousand Oaks, CA: Sage.
Gubrium, Jaber F. and James A. Holstein (1997) The New Language of Qualitative Method. New York: Oxford University Press.
Gubrium, Jaber F. and James A. Holstein (2009) Analyzing Narrative Reality. Thousand Oaks, CA: Sage.
Hammersley, Martyn (2018) What's Wrong With Ethnography?, London, Routledge.
Hammersley, Martyn and Atkinson, Paul (2019) Ethnography: Principles in Practice, Fourth edition, London, Routledge.
Heath, Shirley Brice & Brian Street, with Molly Mills. On Ethnography.
Hymes, Dell (1974) Foundations in Sociolinguistics: An Ethnographic Approach. Philadelphia: University of Pennsylvania Press.
Kottak, Conrad Phillip (2005) Window on Humanity: A Concise Introduction to General Anthropology (pages 2–3, 16–17, 34–44). McGraw Hill, New York.
Mannik, L., & McGarry, K. (2017) Practicing Ethnography: A Student Guide to Method and Methodology. University of Toronto Press.
Marcus, George E. & Michael M. J. Fischer (1986) Anthropology as Cultural Critique: An Experimental Moment in the Human Sciences. Chicago: University of Chicago Press.
Miller, Daniel (1987) Material Culture and Mass Consumption. Blackwell, London.
Spradley, James P. (1979) The Ethnographic Interview. Wadsworth Group/Thomson Learning.
Salvador, Tony; Genevieve Bell; and Ken Anderson (1999) Design Ethnography. Design Management Journal.
Van Maanen, John (1988) Tales of the Field: On Writing Ethnography. Chicago: University of Chicago Press.
Westbrook, David A. (2008) Navigators of the Contemporary: Why Ethnography Matters. Chicago: University of Chicago Press.
European integration
European integration is the process of industrial, economic, political, legal, social, and cultural integration of states wholly or partially in Europe, or nearby. European integration has primarily but not exclusively come about through the European Union and its policies.
The history of European integration is marked by the Roman Empire's consolidation of European and Mediterranean territories, which set a precedent for the notion of a unified Europe. The idea was echoed in later attempts at unity, such as the Holy Roman Empire, the Hanseatic League, and the Napoleonic Empire. The devastation of World War I reignited the concept of a unified Europe, leading to the establishment of international organizations aimed at political coordination across Europe. The interwar period saw politicians such as Richard von Coudenhove-Kalergi and Aristide Briand advocating for European unity, albeit with differing visions. Post-World War II Europe saw a significant push towards integration, with Winston Churchill's call for a "United States of Europe" in 1946 being a notable example. This period also saw the formation of theories of European integration, which can be categorized into proto-integration, explaining integration, analyzing governance, and constructing the EU, reflecting a shift from viewing European integration as a unique process to incorporating broader international relations and comparative politics theories. Citizens' organizations have played a role in advocating further European integration, exemplified by the Union of European Federalists and the European Movement International. Various agreements and memberships demonstrate the web of relations and commitments between European countries, showing the multi-layered nature of integration.
History
In antiquity, the Roman Empire brought about the integration of multiple European and Mediterranean territories. The numerous subsequent claims of succession to the Roman Empire, and even iterations of the Classical Empire and its ancient peoples, have occasionally been reinterpreted in the light of post-1950 European integration as providing inspiration and historical precedents. Important examples include the Holy Roman Empire, the Hanseatic League, the Peace of Westphalia, the Napoleonic Empire, and the unification of Germany, Italy, and the Balkans, as well as the Latin Monetary Union.
Following the catastrophe of the First World War of 1914–1918, thinkers and visionaries from a range of political traditions again began to float the idea of a politically unified Europe. In the early 1920s a range of international organisations were founded (or re-founded) to help like-minded political parties to coordinate their activities. These ranged from the Comintern (1919), to the Labour and Socialist International (1921), to the Radical and Democratic Entente of centre-left progressive parties (1924), to the Green International of farmers' parties (1923), to the centre-right International Secretariat of Democratic Parties inspired by Christianity (1925). While the remit of these international bodies was global, the predominance of political parties from Europe meant that they facilitated interaction between the adherents of a given ideology across European borders. Within each political tradition, voices emerged advocating not merely the cooperation of various national parties, but the pursuit of political institutions at the European level.
One of the first to articulate this view was Richard von Coudenhove-Kalergi, who outlined a conservative vision of European unity in his Pan-Europa manifesto (1923). The First Paneuropean Congress took place in Vienna in 1926, and the association possessed 8,000 members by the time of the 1929 Wall Street Crash. Its adherents envisaged a specifically Christian, and by implication Catholic, Europe. The British civil servant and future Conservative minister Arthur Salter published a book advocating The United States of Europe in 1933.
In contrast, the Soviet commissar (minister) Leon Trotsky raised the slogan "For a Soviet United States of Europe" in 1923, advocating a Europe united along communist principles.
Among liberal-democratic parties, the French centre-left undertook several initiatives to group like-minded parties from the European states. In 1927, the French mathematician and politician Émile Borel, a leader of the centre-left Radical Party and the founder of the Radical International, set up a French Committee for European Cooperation, and a further twenty countries set up equivalent committees. However, it remained an élite venture: the largest committee, the French one, possessed fewer than six hundred members, two-thirds of them parliamentarians. Two centre-left French prime ministers went further. In 1929 Aristide Briand gave a speech before the Assembly of the League of Nations in which he proposed the idea of a federation of European nations based on solidarity and in the pursuit of economic prosperity and political and social co-operation. In 1930, at the League's request, Briand presented a Memorandum on the organisation of a system of European Federal Union. The next year the future French prime minister Édouard Herriot published his book The United States of Europe. Indeed, a template for such a system already existed, in the form of the 1921 Belgian and Luxembourgish customs and monetary union.
Support for the proposals of the French centre-left came from a range of prestigious figures. Many eminent economists, aware that the economic race to the bottom between states was creating ever-greater instability, supported the view; these included John Maynard Keynes. The French political scientist and economist Bertrand de Jouvenel remembered a widespread mood after 1924 calling for a "harmonisation of national interests along the lines of European union, for the purpose of common prosperity". The Spanish philosopher and politician José Ortega y Gasset expressed a position shared by many within Republican Spain: "European unity is no fantasy, but reality itself; and the fantasy is precisely the opposite: the belief that France, Germany, Italy or Spain are substantive & independent realities." Eleftherios Venizelos, Prime Minister of Greece, outlined his government's support in a 1929 speech by saying that "the United States of Europe will represent, even without Russia, a power strong enough to advance, up to a satisfactory point, the prosperity of the other continents as well".
Between the two world wars, the Polish statesman Józef Piłsudski (1867–1935) envisaged the idea of a European federation that he called Międzymorze ("Intersea" or "Between-seas"), known in English as Intermarium, which was a Polish-oriented version of Mitteleuropa.
The Great Depression, the rise of fascism and communism, and subsequently World War II prevented the inter-war movements from gaining further support: between 1933 and 1936 most of Europe's remaining democracies became dictatorships, and Ortega's Spain and Venizelos's Greece had both plunged into civil war. But although the social-democratic, liberal or Christian-democratic supporters of European unity were out of power during the 1930s and unable to put their ideas into practice, many would find themselves in power in the 1940s and 1950s, and better placed to put into effect their earlier remedies against economic and political crisis.
During World War II (1939–1945) Nazi Germany came to dominate, directly or indirectly, much of Europe at various times. The plans for a German-oriented political, social, and economic integration of Europe, such as the New Order and the Greater Germanic Reich, did not survive the war.
At the end of World War II, the continental political climate favoured unity in democratic European countries, seen by many as an escape from the extreme forms of nationalism which had devastated the continent. In a speech delivered on 19 September 1946 at the University of Zürich in Switzerland, Winston Churchill postulated a United States of Europe. The same speech, however, contains less often quoted remarks which make it clear that Churchill did not initially see Britain as being part of this United States of Europe.
Theories of integration
European integration scholars Thomas Diez and Antje Wiener identify general tendencies in the development of European integration theory and suggest dividing theories of integration into three broad phases, preceded by a normative proto-integration theory period. There is a gradual shift from theories studying European integration as sui generis towards new approaches that incorporate theories of international relations and comparative politics.
Proto-integration period
The question of how to avoid wars between the nation-states was essential for the first theories. Federalism and functionalism proposed the containment of the nation-state, while transactionalism sought to theorise the conditions for the stabilisation of the nation-state system. Early federalism was more a political movement than a theory, with various political actors calling for a European federation: Altiero Spinelli, for example, called for a federal Europe in his Ventotene Manifesto, and Paul Valéry invoked a common European civilization as a basis for unity. State sovereignty was a central problem for federalists, who hoped that political organization at a higher, regional level would resolve it. A representative scholar of functionalism was David Mitrany, who also saw states and their sovereignty as a core problem and believed that one should restrain states to prevent future wars. However, Mitrany disagreed with regional integration, as he viewed it as mere replication of the state model. Transactionalism, on the other hand, sees increased cross-border exchanges as promoting regional integration and thereby reducing the risk of war.
First phase: explaining integration, 1960s onwards
European integration theory initially focused on explaining the integration process of supranational institution-building. One of the most influential theories of European integration is neofunctionalism, influenced by functionalist ideas, developed by Ernst B. Haas (1958) and further investigated by Leon Lindberg (1963).
This theory focuses on spillovers of integration, whereby well-integrated and interdependent areas lead to more integration. Neofunctionalism captures well the spillover from the European Coal and Steel Community to the European Economic Community established by the 1957 Treaties of Rome. A transfer of loyalties from the national level to the supranational level is expected to occur as integration progresses. The other major theory in integration studies is intergovernmentalism, advanced by Stanley Hoffmann after the Empty Chair Crisis provoked by French President Charles de Gaulle in the 1960s. Intergovernmentalism and, later, liberal intergovernmentalism, developed in the 1980s by Andrew Moravcsik, focus on governmental actors, whose influence is enhanced by supranational institutions but not restrained by them. The debate between neofunctionalism and (liberal) intergovernmentalism remains central to understanding the development and setbacks of European integration.
Second phase: analyzing governance, 1980s onwards
As the empirical world has changed, so have the theories and thus the understanding of European integration. The second generation of integration theorists focused on the importance of institutions and their impact on both the integration process and the development of European governance. The second phase brought in perspectives from comparative politics in addition to traditional international relations references. Studies attempted to understand what kind of polity the EU is and how it operates. For example, a new theory, multi-level governance (MLG), was developed to understand the workings and development of the EU.
Third phase: constructing the EU, 1990s onwards
The third phase of integration theory marked a return of international relations theory with the rise of critical and constructivist approaches in the 1990s. Perspectives from social constructivism, post-structuralism, critical theory, and feminist theory are incorporated into integration theories to conceptualize the European integration process of widening and deepening.
Citizens' organisations calling for further integration
Various federalist organisations have been created over time supporting the idea of a federal Europe. These include the Union of European Federalists, the Young European Federalists, the European Movement International, the European Federalist Party, and Volt Europa. The Union of European Federalists (UEF) is a European non-governmental organisation campaigning for a federal Europe. It consists of 20 constituent organisations and has been active at the European, national and local levels for more than 50 years. The European Movement International is a lobbying association that coordinates the efforts of associations and national councils with the goal of promoting European integration and disseminating information about it. The European Federalist Party is a pro-European, pan-European and federalist political party which advocates further integration of the EU and the establishment of a federal Europe. Its aim is to gather all Europeans to promote European federalism and to participate in all elections all over Europe. It has national sections in 15 countries. Volt Europa is a pan-European and European federalist political movement that also serves as the pan-European structure for subsidiary parties in EU member states. It is present in 29 countries and participates in elections all over the EU at the local, national and European level.
Overlap of membership in various agreements
There are various agreements with overlapping membership. Several countries take part in a larger number of agreements than others.
Common membership of member states of the European Union
All member states of the European Union (EU) are members of the:
Organization for Security and Co-operation in Europe (OSCE), Secretariat: Vienna, Austria
European Political Community (EPC)
Council of Europe (CoE), HQ: Strasbourg, France
European Civil Aviation Conference (ECAC), HQ: Neuilly-sur-Seine/Paris, France
European Organisation for the Safety of Air Navigation (Eurocontrol), HQ: Brussels, Belgium
European Committee for Standardization (CEN), HQ: Brussels, Belgium
European Telecommunications Standards Institute (ETSI), HQ: Sophia Antipolis, France
European Committee for Electrotechnical Standardization (CENELEC), HQ: Brussels, Belgium
European Union Customs Union (EUCU)
European Olympic Committees (EOC), HQ: Rome, Italy
European Patent Convention (EPC)/European Patent Organisation (EPOrg)
European Atomic Energy Community (EAEC, Euratom)
Single Euro Payments Area (SEPA)
European Common Aviation Area (ECAA)
European Higher Education Area (EHEA) – Belgium as Flemish Community and French Community, i.e. the German-speaking Community of Belgium is not included.
All EU member states also have organizations that are members of the:
European Broadcasting Union (EBU), HQ: Geneva, Switzerland
Union of European Football Associations (UEFA), HQ: Nyon, Switzerland
European Network of Transmission System Operators for Electricity, HQ: Brussels, Belgium
In addition, all EU member states have organisations that are members, associated partners or observers of the European Network of Transmission System Operators for Gas (HQ: Brussels, Belgium), and all are located in the European Broadcasting Area (EBA).
Most integrated countries
21 states are part of the Eurozone or in ERM II without a euro opt-out. These are Austria, Belgium, Bulgaria, Croatia, Cyprus, Estonia, Finland, France, Germany, Greece, Italy, Ireland, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Portugal, Slovakia, Slovenia and Spain. They are all members of, or take part in:
the European Union
the European Defence Agency (EDA)
Geographic scope
Beyond geographic Europe
Some agreements that are mostly related to countries of the European continent are also valid in territories outside the continent. Not listed below are agreements whose scope extends beyond geographic Europe only because the agreement includes:
territories of transcontinental countries: Russia, Kazakhstan, Turkey, Cyprus, Armenia, Azerbaijan and Georgia contain some territory in Europe and some in Asia (the EU uses bilateral Enhanced Partnership and Cooperation Agreements as an integration tool);
special territories of European countries, e.g. the special territories of member states of the European Union;
Cyprus, which is a member of the Council of Europe and several other agreements.
List:
NATO contains the USA and Canada, but has a European focus; Article 10 of the North Atlantic Treaty describes how non-member states may join: "The Parties may [...]
invite any other European State in a position to further the principles of this Treaty"
Organization for Security and Co-operation in Europe (OSCE) contains the United States, Canada, Kyrgyzstan, Tajikistan, Turkmenistan, Uzbekistan, and Mongolia
European Broadcasting Union (EBU) contains North African and Middle East countries
European Olympic Committees (EOC) contains Israel
Limited to regions within geographic Europe
Several regional integration efforts have effectively promoted intergovernmental cooperation and reduced the possibility of regional armed conflict. Other initiatives have removed barriers to free trade in European regions, and increased the free movement of people, labour, goods, and capital across national borders.
Nordic countries
Since the end of the Second World War, the following organisations have been established in the Nordic region:
The Nordic Council and the Nordic Council of Ministers is a co-operation forum for the parliaments and governments of the Nordic countries created in February 1953. It includes the states of Denmark, Finland, Iceland, Norway and Sweden, and their autonomous territories (Greenland, the Faroe Islands and Åland).
The Nordic Passport Union, created in 1954 but implemented on 1 May 1958, establishes free movement across borders without passports for the countries' citizens. It comprises Denmark, Sweden and Norway as foundational states; it has further included Finland and Iceland since 24 September 1965, and the Danish autonomous territory of the Faroe Islands since 1 January 1966.
Baltic Sea region
The following political and/or economic organisations have been established in the Baltic Sea region in the post-modern era:
The Baltic Assembly aims to promote co-operation between the parliaments of the Baltic states, namely the Republics of Estonia, Latvia and Lithuania. The organisation was planned in Vilnius on 1 December 1990, and the three nations agreed to its structure and rules on 13 June 1994.
The Baltic Free Trade Area (BAFTA) was a trade agreement between Estonia, Lithuania and Latvia. It was signed on 13 September 1993 and came into force on 1 April 1994. The agreement was later extended to apply also to agricultural products, effective from 1 January 1997. BAFTA ceased to exist when its members joined the EU on 1 May 2004.
The Council of the Baltic Sea States (CBSS) was founded in 1992 to promote intergovernmental cooperation among Baltic Sea countries in questions concerning the economy, civil society development, human rights issues, and nuclear and radiation safety. It has 12 members, including Denmark, Estonia, Finland, Germany, Iceland (since 1995), Latvia, Lithuania, Norway, Poland, Russia, Sweden and the European Commission.
In 2009 the European Council approved the EU Strategy for the Baltic Sea Region (EUSBSR), following a communication from the European Commission. The EUSBSR was the first macro-regional strategy in Europe. The Strategy aims to reinforce cooperation within the Baltic Sea Region, to address challenges together, and to promote balanced development in the Region. The Strategy contributes to major EU policies, including Europe 2020, and reinforces integration within the Region.
Nordic-Baltic Eight
Low Countries region (Benelux)
Since the end of the First World War the following unions have been set up in the Low Countries region:
The Benelux is an economic and political union between Belgium, the Netherlands, and Luxembourg. On 5 September 1944, a treaty establishing the Benelux Customs Union was signed.
It entered into force in 1948, and ceased to exist on 1 November 1960, when it was replaced by the Benelux Economic Union, after a treaty signed in The Hague on 3 February 1958. A Benelux Parliament was created in 1955.
The Belgium-Luxembourg Economic Union (BLEU) can be seen as a forerunner of the Benelux. BLEU was created by the treaty signed on 25 July 1921. It established a single market between both countries, while setting the Belgian franc and Luxembourgian franc at a fixed parity.
Black Sea region
Several regional organisations have been founded in the Black Sea region since the fall of the Soviet Union, such as:
The Organization of the Black Sea Economic Cooperation (BSEC) aims to ensure peace, stability and prosperity by encouraging friendly and good-neighbourly relations among its 12 member states, located mainly in the Black Sea region. It was created on 25 June 1992 in Istanbul, and entered into force on 1 May 1999. The 11 founding members were Albania, Armenia, Azerbaijan, Bulgaria, Georgia, Greece, Moldova, Romania, Russia, Turkey, and Ukraine. Serbia (then Serbia and Montenegro) joined in April 2004.
The GUAM Organization for Democracy and Economic Development is a regional organisation of four post-Soviet states, which aims to promote cooperation and democratic values, ensure stable development, enhance international and regional security, and step up European integration. Current members are the four founding states, namely Georgia, Ukraine, Azerbaijan, and Moldova. Uzbekistan joined in 1999 and left in 2005.
United Kingdom and Ireland
Since the end of the First World War, the following agreements have been signed in the United Kingdom and Ireland region:
The British–Irish Council was created by the Good Friday Agreement in 1998 to "promote the harmonious and mutually beneficial development of the totality of relationships among the peoples of these islands". It was formally established on 2 December 1999. Its membership comprises Ireland, the United Kingdom, three of the countries of the UK (Northern Ireland, Scotland and Wales), and three British Crown dependencies (Guernsey, the Isle of Man and Jersey). Because England does not have a devolved parliament, it is not represented on the Council as a separate entity.
The Common Travel Area is a passport-free zone established in 1922 that comprises Ireland, the United Kingdom, the Isle of Man, Jersey and Guernsey. Under Irish law, all British citizens are exempt from immigration control and immune from deportation. They are entitled to live in Ireland without any restrictions or conditions. Under British law, Irish citizens are entitled to enter and live in the United Kingdom without any restrictions or conditions. They also have the right to vote, work, study and access welfare and healthcare services.
In January 2020, the United Kingdom left the EU, reversing most aspects of its more than 40 years of participation in EU integration. Ireland remains an enthusiastic member of the Union and participates in some elements of the Schengen Agreement other than the common visa policy (a position likely to remain for as long as Northern Ireland remains part of the United Kingdom). The Common Travel Area continues to operate, though other aspects of the relationship are encountering difficulties.
Central Europe The following cooperation agreements have been signed in Central Europe: The Visegrád Group is a Central European alliance for cooperation and European integration, based on a historic strategic alliance of core Central European countries. The Group originated in a summit meeting of Czechoslovakia, Hungary and Poland held in the Hungarian castle town of Visegrád on 15 February 1991. The Czech Republic and Slovakia became members after the dissolution of Czechoslovakia in 1993. In 1989, the Central European Initiative, a forum of regional cooperation in Central and Eastern Europe with 18 member states, was formed in Budapest. The CEI headquarters have been in Trieste, Italy, since 1996. The Central European Free Trade Agreement (CEFTA) is a trade agreement between countries in Central Europe and the Balkans, which works as a preparation for full European Union membership. It currently has seven members: North Macedonia, Albania, Bosnia and Herzegovina, Moldova, Montenegro, Serbia and UNMIK (on behalf of Kosovo). It was established in 1992 by Czechoslovakia, Hungary and Poland, but came into force only in 1994; Czechoslovakia had in the meantime split into the Czech Republic and Slovakia. Slovenia joined in 1996, Romania in 1997, Bulgaria in 1999, and Croatia in 2003. In 2004, the Czech Republic, Slovakia, Hungary, Poland, and Slovenia left the CEFTA to join the EU; Romania and Bulgaria left in 2007 for the same reason. Subsequently, North Macedonia joined in 2006, and Albania, Bosnia and Herzegovina, Moldova, Montenegro, Serbia and UNMIK (on behalf of Kosovo) in 2007. In 2013, Croatia left the CEFTA to join the EU. Switzerland and Liechtenstein have participated in a customs union since 1924, and both use the Swiss franc as their national currency. Eastern Europe The effects of the EU integration process on the countries of the former Eastern bloc are still debated, and the relationship between immigration levels and public support for the EU remains uncertain. Through integration, the countries of Eastern Europe have experienced economic growth and benefited from free-trade agreements and the free movement of labour within the EU. However, empirical socioeconomic analyses suggest that in Spain, France, Ireland and the Netherlands, immigration from Central and Eastern Europe (CEE) had negative effects on support for European integration in the host societies, and that such immigration may undermine the long-term effects of integration. Views on programmes of social development range from the expectation that extended contact with immigrants from Eastern Europe might help forge a common European identity to the concern that it could lead to national isolation, driven by a tightening of support mechanisms for labour immigration. Other research implies that internal migration within the EU is necessary for the successful development of its economic union. Danube region The EU Strategy for the Danube Region was endorsed by the European Council in 2011 and is the second macro-regional strategy in Europe. The Strategy provides a basis for improved cooperation among 14 countries along the Danube River. It aims to improve the effectiveness of regional integration efforts and leverage the impact of policies at the EU, national and local levels. 
Balkans The Craiova Group, Craiova Four, or C4 is a cooperation project of four European states – Romania, Bulgaria, Greece and Serbia – for the purposes of furthering their European integration as well as economic, transport and energy cooperation with one another. Council of Europe Against the background of the devastation and human suffering during the Second World War as well as the need for reconciliation after the war, the idea of European integration led to the creation of the Council of Europe in Strasbourg in 1949. The most important achievement of the Council of Europe is the European Convention on Human Rights of 1950 with its European Court of Human Rights in Strasbourg, which serves as a de facto supreme court for human rights and fundamental freedoms throughout Europe. Human rights are also protected by the Council of Europe's Committee for the Prevention of Torture and the European Social Charter. Most conventions of the Council of Europe pursue the aim of greater legal integration, such as the conventions on legal assistance, against corruption, against money laundering, against doping in sport, or internet crime. Cultural co-operation is based on the Cultural Convention of 1954 and subsequent conventions on the recognition of university studies and diplomas as well as on the protection of minority languages. After the fall of the Berlin Wall, former communist European countries were able to accede to the Council of Europe, which now comprises 46 states in Europe. Therefore, European integration has practically succeeded at the level of the Council of Europe, encompassing almost the whole European continent, with the exception of Belarus, Kazakhstan, Kosovo, Russia, and the Vatican City. European integration at the level of the Council of Europe functions through the accession of member states to its conventions, and through political coordination at the level of ministerial conferences and inter-parliamentary sessions. In accordance with its Statute of 1949, the Council of Europe works to achieve greater unity among its members based on common values, such as human rights and democracy. European Political Community The European Political Community (EPC) is an intergovernmental forum for political and strategic discussions about the future of Europe. The inaugural summit was held on 6 October 2022 in Prague, with participants from 44 European countries, as well as the Presidents of the European Council and the European Commission. Organization for Security and Co-operation in Europe The Organization for Security and Co-operation in Europe (OSCE) is a trans-Atlantic intergovernmental organisation whose aim is to secure stability in Europe. It was established as the Conference on Security and Co-operation in Europe (CSCE) in July 1973, and was subsequently transformed into its current form in January 1995. The OSCE has 56 member states, covering most of the northern hemisphere. The OSCE develops three lines of activities, namely the Politico-Military Dimension, the Economic and Environmental Dimension and the Human Dimension. These respectively promote (i) mechanisms for conflict prevention and resolution; (ii) the monitoring, alerting and assistance in case of economic and environmental threats; and (iii) full respect for human rights and fundamental freedoms. European Free Trade Association The European Free Trade Association (EFTA) is a European trade bloc which was established on 3 May 1960 as an alternative for European states who did not join the EEC. 
EFTA currently has four member states: Iceland, Norway, Switzerland, and Liechtenstein; only Norway and Switzerland are founding members. The EFTA Convention was signed on 4 January 1960 in Stockholm by seven states: Austria, Denmark, Norway, Portugal, Sweden, Switzerland and the United Kingdom. Finland became an associate member in 1961 and a full member in 1986; Iceland joined in 1970 and Liechtenstein did the same in 1991. A revised Convention, the Vaduz Convention, was signed on 21 June 2001 and entered into force on 1 June 2002. The United Kingdom and Denmark left in 1973, when they joined the European Community (EC). Portugal left EFTA in 1986, when it also joined the EC. Austria, Finland and Sweden ceased to be EFTA members in 1995 by joining the European Union, which superseded the EC in 1993. European Broadcasting Union The European Broadcasting Union (EBU) is an alliance of public service media entities, established on 12 February 1950. The organisation comprises 112 active members in 54 countries, and 30 associate members from a further 19 countries. Most EU states are part of this organisation, and therefore the EBU has been subject to supranational legislation and regulation. It also hosted debates between candidates for the European Commission presidency for the 2014 parliamentary elections, but is unrelated to the EU itself. European Patent Convention The European Patent Convention (EPC), also known as the Convention on the Grant of European Patents of 5 October 1973, is a multilateral treaty instituting the European Patent Organisation and providing an autonomous legal system according to which European patents are granted. As of 2022, there are 39 parties to the European Patent Convention. European Communities In 1951, Belgium, France, Italy, Luxembourg, the Netherlands and West Germany agreed to confer powers over their steel and coal production to the European Coal and Steel Community (ECSC) in the Treaty of Paris, which came into force on 23 July 1952. Coal and steel production was essential for the reconstruction of countries in Europe after the Second World War, and this sector of the national economy had been important for warfare in the First and Second World Wars. Therefore, France had originally maintained its occupation of the Saarland with its steel companies after the founding of the Federal Republic of Germany (West Germany) in 1949. By transferring national powers over coal and steel production to a newly created ECSC Commission, the member states of the ECSC were able to provide for greater transparency and trust among themselves. This transfer of national powers to a "Community", to be exercised by its Commission, was paralleled under the 1957 Treaty of Rome establishing the European Atomic Energy Community (Euratom) and the European Economic Community (EEC) in Brussels. In 1967, the Merger Treaty (or Brussels Treaty) combined the institutions of the ECSC and Euratom into those of the EEC; they already shared a Parliamentary Assembly and Courts. Collectively they were known as the European Communities. In 1987, the Single European Act (SEA) became the first major revision of the Treaty of Rome, formally establishing the single European market and the European Political Cooperation. The Communities originally had independent personalities, although they were increasingly integrated, and over the years were transformed into what is now called the European Union. 
The six states that founded the three Communities were known as the "inner six" (the "outer seven" were those countries that formed the European Free Trade Association). These were Belgium, France, Italy, Luxembourg, the Netherlands, and West Germany. The first enlargement was in 1973, with the accession of Denmark, Ireland and the United Kingdom. Greece joined in 1981, and Portugal and Spain in 1986. On 3 October 1990 East Germany and West Germany were reunified; hence East Germany became part of the Community in the newly reunified Germany (without increasing the number of states). A key person in the Community creation process was Jean Monnet, regarded as the "founding father" of the European Union, which is seen as the dominant force in European integration. European Union The European Union (EU) is an association of 27 sovereign member states that by treaty have delegated certain of their competences to common institutions, in order to coordinate their policies in a number of areas, without however constituting a new state on top of the member states. It was officially established by the Treaty of Maastricht in 1993 upon the foundations of the pre-existing European Economic Community. Thus, 12 states are founding members, namely Belgium, Denmark, France, Germany, Greece, Ireland, Italy, Luxembourg, the Netherlands, Portugal, Spain, and the United Kingdom. In 1995, Austria, Finland and Sweden entered the EU. Cyprus, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Malta, Poland, Slovakia, and Slovenia joined in 2004. Bulgaria and Romania joined in 2007, and Croatia acceded in 2013. The United Kingdom withdrew in 2020 after 47 years of membership. Official candidate states include Albania, Bosnia and Herzegovina, Georgia, North Macedonia, Moldova, Montenegro, Serbia, Turkey and Ukraine. Morocco's application was rejected by the EEC, Iceland and Switzerland have withdrawn their respective applications, and Norway rejected membership in two referendums. The membership negotiations between the EU and Turkey, which started on 3 October 2005, have been suspended since 2019. The institutions of the European Union, its parliamentarians, judges, commissioners and secretariat, the governments of its member states, as well as their people, all play a role in European integration. Nevertheless, the question of who plays the key role is disputed, as there are different theories on European integration focusing on different actors and agency. The European Union has a number of relationships with nations that are not formally part of the Union. According to the European Union's official site, and a statement by Commissioner Günter Verheugen, the aim is to have a ring of countries sharing the EU's democratic ideals and joining them in further integration without necessarily becoming full member states. Competences Whilst most responsibilities ('competences') are retained by the member states, some competences are conferred exclusively on the Union for collective decision, some are shared pending Union action, and some receive Union support. Economic integration The European Union operates a single economic market across the territory of all its members, and uses a single currency between the Eurozone members. Further, the EU has a number of economic relationships with nations that are not formally part of the Union through the European Economic Area and customs union agreements. 
Free trade area The creation of the EEC eliminated tariffs, quotas and preferences on goods among member states, which are the requisites to define a free trade area (FTA). The United Kingdom remains part of the FTA during the transition period of the Brexit withdrawal agreement. Numerous countries have signed a European Union Association Agreement (AA) with FTA provisions. These mainly include Mediterranean countries (Algeria in 2005, Egypt in 2004, Israel in 2000, Jordan in 2002, Lebanon in 2006, Morocco in 2000, Palestinian National Authority in 1997, and Tunisia in 1998), albeit some countries from other trade blocs have also signed one (such as Chile in 2003, Mexico in 2000, and South Africa in 2000). Further, many Balkan states have signed a Stabilisation and Association Agreement (SAA) with FTA provisions, such as Albania (signed 2006), Montenegro (2007), North Macedonia (2004), Bosnia and Herzegovina and Serbia (both 2008, entry-into-force pending). In 2008, Poland and Sweden proposed the Eastern Partnership which would include setting a FTA between the EU and Armenia, Azerbaijan, Belarus, Georgia, Moldova and Ukraine. Customs union The European Union Customs Union defines an area where no customs are levied on goods travelling within it. It includes all member states of the European Union. The abolition of internal tariff barriers between EEC member states was achieved in 1968. Andorra and San Marino belong to the EU customs unions with third states. Turkey is linked by the European Union–Turkey Customs Union. European Single Market A prominent goal of the EU since its creation by the Maastricht Treaty in 1992 is establishing and maintaining a single market. This seeks to guarantee the four basic freedoms, which are related to ensure the free movement of goods, services, capital and people around the EU's internal market. The United Kingdom remained part of the single market during the transition period of the Brexit withdrawal agreement. The European Economic Area (EEA) agreement allows Norway, Iceland and Liechtenstein to participate in the European Single Market without joining the EU. The four basic freedoms apply. However, some restrictions on fisheries and agriculture take place. Switzerland is linked to the European Union by Swiss-EU bilateral agreements, with a different content from that of the EEA agreement. Eurozone The Eurozone refers to the European Union member states that have adopted the euro currency union as the third stage of the European Economic and Monetary Union (EMU). Further, certain states outside the EU have adopted the euro as their currency, despite not belonging to the EMU. Thus, a total of 26 states, including 20 European Union states and six non-EU members, currently use the euro. The Eurozone came into existence with the official launch of the euro on 1 January 1999. Physical coins and banknotes were introduced on 1 January 2002. The original members were Austria, Belgium, Finland, France, Germany, Ireland, Italy, Luxembourg, the Netherlands, Portugal, and Spain. Greece adopted the euro on 1 January 2001. Slovenia joined on 1 January 2007, Cyprus and Malta were admitted on 1 January 2008, Slovakia joined on 1 January 2009, Estonia on 1 January 2011, Latvia on 1 January 2014, Lithuania on 1 January 2015 and Croatia on 1 January 2023. Outside the EU, agreements have been concluded with Andorra, Monaco, San Marino, and Vatican City for formal adoption, including the right to issue their own coins. 
Montenegro and Kosovo unilaterally adopted the euro when it was launched. Fiscal union There has long been speculation about the possibility of the European Union eventually becoming a fiscal union. In the wake of the European debt crisis that began in 2009, calls for closer fiscal ties, possibly leading to some sort of fiscal union, have increased; though it is generally regarded as implausible in the short term, some analysts regard fiscal union as a long-term necessity. While stressing the need for coordination, governments have rejected talk of fiscal union or harmonisation in this regard. Aviation There are three main aviation-related institutions present in Europe: European Civil Aviation Conference (ECAC) Eurocontrol European Common Aviation Area (ECAA) Energy The transnational energy-related structures present in Europe are: Energy Community European Atomic Energy Community European Network of Transmission System Operators for Electricity European Network of Transmission System Operators for Gas INOGATE Energy Charter Treaty Standardisation The transnational standardisation organisations present in Europe are: European Telecommunications Standards Institute (ETSI) European Committee for Standardization (CEN) European Committee for Electrotechnical Standardization (CENELEC) Institute for Reference Materials and Measurements (IRMM) Social and political integration Education The ERASMUS programme (European Region Action Scheme for the Mobility of University Students) seeks to encourage and support free movement of the academic community. It was established in 1987. A total of 33 states (including all European Union states, Iceland, Liechtenstein, Norway, Switzerland and Turkey) are involved. The European Higher Education Area (EHEA) aims to integrate education systems in Europe, so that degrees and study periods are recognised mutually. This is done by following the Bologna process, and under the Lisbon Recognition Convention of the Council of Europe. The Bologna declaration was signed in 1999 by 29 countries, all EU members or candidates at the time (except Cyprus, which joined later) and three of the four EFTA countries: Austria, Belgium, Bulgaria, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Norway, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden, Switzerland, and the United Kingdom. Croatia, Cyprus, Liechtenstein, and Turkey joined in 2001. In 2003, Albania, Andorra, Bosnia and Herzegovina, the Holy See (a Council of Europe permanent observer), North Macedonia, Russia, and Serbia signed the convention. Armenia, Azerbaijan, Georgia, Moldova and Ukraine followed in 2005. Montenegro joined in 2007. Finally, Kazakhstan (not a member of the Council of Europe) joined in 2010. This makes a total of 47 member states. Monaco and San Marino are the only members of the Council of Europe which have not adopted the convention. The other European nation that is eligible to join, but has not, is Belarus. Research There are a number of multinational research institutions based in Europe. 
In the EIROforum collaboration: European Space Agency European Molecular Biology Laboratory European Fusion Development Agreement European Southern Observatory Particle physics: CERN European Synchrotron Radiation Facility Institut Laue–Langevin European XFEL Meteorology: EUMETSAT European Centre for Medium-Range Weather Forecasts EUMETNET Health The European Health Insurance Card (or EHIC) is issued free of charge and allows anyone who is insured by or covered by a statutory social security scheme of the EEA countries and Switzerland to receive medical treatment in another member state for free or at a reduced cost, if that treatment becomes necessary during their visit (for example, due to illness or an accident), or if they have a chronic pre-existing condition which requires care such as kidney dialysis. The epSOS project, also known as Smart Open Services for European Patients, aims to promote the free movement of patients. It will allow health professionals to electronically access the data of patients from another country, to electronically process prescriptions in all involved countries, and to provide treatment in another EU state to a patient on a waiting list. The project was launched by the EU and 47 member institutions from 23 EU member states and 3 non-EU members. They include national health ministries, national competence centres, social insurance institutions and scientific institutions, as well as technical and administrative management entities. Charter of Fundamental Rights The Charter of Fundamental Rights of the European Union is a document enshrining certain fundamental rights. The wording of the document has been agreed at ministerial level and has been incorporated into the Treaty of Lisbon. Poland has negotiated an opt-out from this Charter, as had the United Kingdom before the latter's withdrawal from the European Union. Right to vote The European integration process has extended the right of foreigners to vote. Thus, European Union citizens were given voting rights in local elections by the 1992 Maastricht Treaty. Several member states (Belgium, Luxembourg, Lithuania, and Slovenia) have since extended the right to vote to all foreign residents. This was already the case in Denmark, Finland, the Netherlands and Sweden. Further, voting and eligibility rights are granted among citizens of the Nordic Passport Union, and between numerous countries through bilateral treaties (e.g. between Norway and Spain, or between Portugal and Brazil, Cape Verde, Iceland, Norway, Uruguay, Venezuela, Chile and Argentina), or without them (e.g. Ireland and the United Kingdom). Finally, within the EEA, Iceland and Norway also grant the right to vote to all foreign residents. Schengen Area The main purpose of the establishment of the Schengen Agreement is the abolition of physical borders among European countries. A total of 30 states, including 26 European Union states (all except Ireland, which is part of the Common Travel Area with the United Kingdom) and four non-EU members (Iceland, Liechtenstein, Norway, and Switzerland), are subject to the Schengen rules. Its provisions have already been implemented by 29 states, leaving just Cyprus to do so among signatory states. Further, Monaco, San Marino and Vatican City are de facto members. Visa policy in EU The European Union has visa-free regime agreements with some European countries outside the EU and is discussing such agreements with others, including Armenia, Russia, Ukraine, and Moldova. Matters concerning Turkey have also been debated. 
Ireland maintains an independent visa policy in the EU. Defence There are a number of multi-national military and peacekeeping forces which are ultimately under the command of the EU, and therefore can be seen as the core for a future European Union army. These corps include forces from 26 EU states (all except Malta, which currently does not participate in any battlegroup), Norway and Turkey. Denmark used to have an opt-out clause in its accession treaty and was not obliged to participate in the common defence policy, but in 2022 decided to abandon its stance. Further, the Western European Union (WEU) capabilities and functions have been transferred to the European Union, under its developing Common Foreign and Security Policy (CFSP) and European Security and Defence Policy (ESDP). The EU also has close ties with the North Atlantic Treaty Organization (NATO), according to the Berlin Plus agreement. This is a comprehensive package of agreements made between NATO and the EU on 16 December 2002. With this agreement the EU is given the possibility to use NATO assets in case it wanted to act independently in an international crisis, on the condition that NATO does not want to act itself – the so-called "right of first refusal". In fact, many EU member states are among the 32 NATO members. The Treaty of Brussels is considered the precursor to NATO. The North Atlantic Treaty was signed in Washington, D.C., in 1949. It included the five Treaty of Brussels states, as well as the United States, Canada, Portugal, Italy, Norway, Denmark and Iceland. Greece and Turkey joined the alliance in 1952, and West Germany did the same in 1955. Spain entered in 1982. In 1999, Hungary, the Czech Republic, and Poland became NATO members. Finally, Bulgaria, Estonia, Latvia, Lithuania, Romania, Slovenia, and Slovakia joined in 2004. In 2009, Albania and Croatia joined. In 2008, Ukraine and Georgia were told that they will also eventually become members. Montenegro and North Macedonia joined in 2017 and 2020 respectively. In 2023 and 2024, Finland and Sweden joined. Thus, 23 out of 32 NATO states are among the 27 EU members, another two are members of the EEA, and one more is an EU candidate and also a member of the European Union Customs Union. Space On 22 May 2007, the member states of the European Union have agreed to create a common political framework for space activities in Europe by unifying the approach of the European Space Agency (ESA) with those of the individual European Union member states. However, ESA is an intergovernmental organisation with no formal organic link to the EU; indeed the two institutions have different member states and are governed by different rules and procedures. ESA was created in 1975 by the merger of the two pre-existing European organisations engaged in space activities, ELDO and ESRO. The 10 founding members were Belgium, Denmark, France, Germany, Italy, the Netherlands, Spain, Sweden, Switzerland and the United Kingdom. Ireland joined on 31 December 1975. In 1987, Austria and Norway became member states. Finland joined in 1995, Portugal in 2000, Greece and Luxembourg in 2005, the Czech Republic in 2008, and Romania in 2011. Currently, it has 20 member states: all the EU member states before 2004, plus Czech Republic, Norway, Poland, Romania, and Switzerland. In addition, Canada has had the special status of a Cooperating State under a series of cooperation agreements dating since 1979. 
In 2007 the political perspective of the European Union was to make ESA an agency of the EU by 2014. ESA is likely to expand in the coming years with the countries which joined the EU in both 2004 and 2007. Currently, almost all EU member states are in different stages of affiliation with ESA. Poland has joined on 19 November 2012. Hungary and Estonia have signed ESA Convention. Latvia and Slovenia have started to implement a Plan for European Cooperating State (PECS) Charter. Slovakia, Lithuania and Bulgaria have signed a European Cooperating State (ECS) Agreement. Cyprus, Malta and Croatia have signed Cooperation Agreements with ESA. Membership in European Union agreements A small group of EU member states have joined all European treaties, instead of opting out on some. They drive the development of a federal model for the European integration. This is linked to the concept of Multi-speed Europe where some countries would create a core union; and goes back to the Inner Six references to the founding member states of the European Communities. At present, the formation of a formal Core Europe Federation ("a federation within the confederation") has been held off at every occasion where such a federation treaty had been discussed. Instead, supranational institutions are created that govern more areas in "Inner Europe" than existing European integration provides for. Among the 27 EU state members, 18 states have signed all integration agreements: Austria, Belgium, Croatia, Finland, Estonia, France, Germany, Greece, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Portugal, Slovakia, Slovenia and Spain. The agreements considered include the fifth stage of economic integration or EMU, the Schengen agreement, and the Area of freedom, security and justice (AFSJ). Thus, among the 27 EU countries, 20 have joined the Eurozone, 25 have joined Schengen, and also 25 have no opt-outs under AFSJ. Further, some countries which do not belong to the EU have joined several of these initiatives, albeit sometimes at a lower stage such as the Customs Union, the Common Market (EEA), or even unilaterally adopting the euro, and by taking part in Schengen, either as a signatory state, or de facto. Thus, 6 non-EU countries have adopted the euro (4 through an agreement with the EU and 2 unilaterally), and 4 non-EU states have joined the Schengen agreement officially. The following table shows the status of each state membership to the different agreements promoted by the EU. It lists 47 countries, including the 27 EU member states, 9 candidate states, 3 members of the EEA and Switzerland, Kosovo which has applied for membership, 4 microstates, and the United Kingdom and Armenia as special cases. Hence, this table summarises some components of EU laws applied in most European states. Some territories of EU member states also have a special status in regard to EU laws applied. Some territories of EFTA member states also have a special status in regard to EU laws applied as is the case with some European microstates. For member states that do not have special-status territories the EU law applies fully with the exception of the opt-outs in the European Union and states under a safeguard clause or alternatively some states participate in enhanced co-operation between a subset of the EU members. 
Additionally, there are various examples of non-participation by some EU members and non-EU states participation in particular Agencies of the European Union, the programmes for European Higher Education Area, European Research Area and Erasmus Mundus. Notes: Future of European integration There is no fixed end to the process of integration. The discussion on the possible final political shape or configuration of the European Union is sometimes referred to as the debate on the finalité politique (French for "political purpose"). Integration and enlargement of the European Union are major issues in the politics of Europe, each at European, national and local level. Integration may conflict with national sovereignty and cultural identity, and is opposed by eurosceptics. To the east of the European Union, the countries of Belarus, Kazakhstan and Russia launched the creation of the Eurasian Economic Union in the year 2015, which was subsequently joined by Armenia and Kyrgyzstan. Other states in the region, such as Moldova and Tajikistan may also join. Meanwhile, the post-Soviet disputed states of Abkhazia, Artsakh, South Ossetia, and Transnistria have created the Community for Democracy and Rights of Nations to closer integrate among each other. Some Eastern European countries such as Armenia have opted to cooperate with both the EU and the Eurasian Union. On 24 February 2017 Tigran Sargsyan, the Chairman of the Eurasian Economic Commission stated that Armenia's stance was to cooperate and work with both the European Union and the Eurasian Economic Union. Sargsyan added that although Armenia is part of the Eurasian Union, a new European Union Association Agreement between Armenia and the EU would be finalized shortly. Several countries in Eastern Europe have engaged the EU with the aim to grow economic and political ties. The Euronest Parliamentary Assembly, established in 2003, is the inter-parliamentary forum in which members of the European Parliament and the national parliaments of Ukraine, Moldova, Belarus, Armenia, Azerbaijan and Georgia participate and forge closer political and economic ties with the European Union. All of these States participate in the EU's Eastern Partnership program. The Organization of the Black Sea Economic Cooperation and the Community of Democratic Choice are other organizations established to promote European integration, stability, and democracy. On 12 January 2002, the European Parliament noted that Armenia and Georgia may enter the EU in the future. On 12 March 2024, the European Parliament passed a resolution confirming Armenia meets Maastricht Treaty Article 49 requirements and may apply for EU membership. Currently, Georgia is the only country in the Caucasus actively seeking EU membership. European Security Treaty In 2008, Russian President Dmitry Medvedev announced a new concept for Russian foreign politics and called for the creation of a common space in Euro-Atlantic and Eurasia area "from Vancouver to Vladivostok". On 5 June 2009 in Berlin he proposed a new all-European pact for security that would include all European, CIS countries and the United States. On 29 November 2009 a draft version of the European Security Treaty appeared. French president Sarkozy spoke positively about Medvedev's ideas and called for closer security and economic relation between Europe and Russia. 
Common space from Lisbon to Vladivostok In 2010, writing in a German newspaper, Russian Prime Minister Vladimir Putin called for a common economic space, a free-trade area or a more advanced form of economic integration, stretching from Lisbon to Vladivostok. He also said it was quite possible that Russia could join the eurozone one day. French president Nicolas Sarkozy said in 2010 that he believed that in 10 or 15 years there would be a common economic space between the EU and Russia, with a visa-free regime and a general concept of security. Instead, Russia has chosen an economic policy of self-sufficiency and autarky. Russia has been unable to compete with the EU economy, so integration might come at the cost of its own political and socio-economic stability. Concept of a single legal space for the CIS and Europe Russian legal scholar Oleg Kutafin and economist Alexander Zakharov produced a Concept of a Single Legal Space for the CIS and Europe in 2002. This idea was fully incorporated in the resolution of the 2003 Moscow Legal Forum. The Forum gathered representatives of more than 20 countries, including 10 CIS countries. In 2007 both the International Union of Jurists of the CIS and the International Union (Commonwealth) of Advocates passed resolutions that strongly support the Concept of a Single Legal Space for Europe and post-Soviet Countries. The concept said: "Obviously, to improve its legislation Russia and other countries of the CIS should be oriented toward the continental legal family of European law. The civil law system is much closer to the legal systems of Russia and the other CIS countries and will be instrumental in harmonising the legislation of CIS countries and the European Community, but all values of common law should also be investigated on the subject of possible implementation in some laws and norms." It is suggested that the introduction of the concept of a single legal space and a single Rule of Law space for Europe and the CIS be implemented in four steps: Development of plans at the national level regarding adoption of selected EC legal standards in the legislation of CIS countries; Promotion of measures for harmonisation of law with the goal of developing a single legal space for Europe and CIS countries in the area of commercial and corporate law; Making the harmonisation of judicial practice of CIS countries compatible with Rule of Law principles and coordination of the basic requirements of the Rule of Law in CIS countries with EU legal standards; Development of the ideas of the Roerich Pact (the International Treaty on the Protection of Artistic and Scientific Institutions and Historic Monuments, initiated by the Russian thinker Nicholas Roerich and signed in Washington, D.C. in 1935 by 40% of sovereign states) in the law of CIS countries and European law. Beyond Europe Euro-Mediterranean Partnership The Euro-Mediterranean Partnership or Barcelona Process was organised by the European Union to strengthen its relations with the countries in the Mashriq and Maghreb regions. It started in 1995 with the Barcelona Euro-Mediterranean Conference, and it has been developed in successive annual meetings. The European Union enlargement of 2004 brought two more Mediterranean countries (Cyprus and Malta) into the Union, while adding a total of 10 to the number of Member States. 
The Euro-Mediterranean Partnership today comprises 43 members: 27 European Union member states, and 15 partner countries (Albania, Algeria, Bosnia and Herzegovina, Egypt, Israel, Jordan, Lebanon, Libya, Mauritania, Monaco, Montenegro, Morocco, Syria and Tunisia, as well as the Palestinian Territories). Libya has had observer status since 1999. The Euro-Mediterranean Free Trade Area (EU-MEFTA) is based on the Barcelona Process and European Neighbourhood Policy (ENP). It will cover the EU, the EFTA, the EU customs unions with third states (Andorra, San Marino, and Turkey), the EU candidate states, and the partners of the Barcelona Process. The Union for the Mediterranean is a community of countries, mostly bordering the Mediterranean Sea, established in July 2008. Ties with partners Morocco already has a number of close ties with the EU, including an Association Agreement with FTA provisions, air transport integration, or the participation in military operations such as ALTHEA in Bosnia. Further, it will be the first partner to go beyond association by enhancing political and economic ties, entering the Single Market, and participating in some EU agencies. Commonwealth of Independent States The Commonwealth of Independent States (CIS) is a loose organisation in which most former Soviet republics participate. A visa-free regime operates among members and a free-trade area is planned. Ukraine is not an official member, but has participated in the organisation. Some members are more integrated than others, for example Russia and Belarus form a Union State. In 2010, Belarus, Russia and Kazakhstan formed a customs union and a single market (Common Economic Space) commenced on 1 January 2012. The Presidents of Belarus, Russia and Kazakhstan established the Eurasian Economic Union with a Eurasian Commission in 2015, subsequently joined by Armenia and Kyrgyzstan. A common currency is also planned, potentially to be named "evraz". Some other countries in the region, such as Moldova are potential members of these organisations. Community for Democracy and Rights of Nations The post-Soviet disputed states of Abkhazia, South Ossetia, and Transnistria are all members of the Community for Democracy and Rights of Nations which aims to forge closer integration. EU and other regions and countries in the world The European Union cooperates with some other countries and regions via loose organisations and regular meetings. The ASEM forum, consisting of the EU and some Asian countries, has been held every two years since 1996. The EU and African, Caribbean and Pacific Group of States form the ACP–EU Joint Parliamentary Assembly, promoting ACP–EU development cooperation, democracy and human rights. The EU and Latin American countries form the Euro-Latin American Parliamentary Assembly. TAFTA is a proposed free-trade area between EU and United States. ASEM – Asia–Europe Meeting ACP – African, Caribbean and Pacific Group of States (Economic Partnership Agreements) EuroLat – Euro-Latin American Parliamentary Assembly TAFTA – Transatlantic Free Trade Area Other organisations in world European countries like the United Kingdom, France, Spain, Portugal have made organisations with other countries in the world with which they have strong cultural and historical links. European languages in the world English is considered to be the global lingua franca. 
European languages like English, French, Spanish, Portuguese, Italian, Russian and German are official, co-official or widely in use in many countries with a colonial past or with a European diaspora. World integration See also Assembly of European Regions Centre Virtuel de la Connaissance sur l'Europe - Virtual Centre for Knowledge on Europe Differentiated integration European Federation European Policy Centre Europe Day EuroVoc Mechanism for Cooperation and Verification North American integration Pan-European identity Paneuropean Union Pulse of Europe Regions of Europe TRACECA Further reading Carrasco, C. A., & Peinado, P. (2014). On the origin of European imbalances in the context of European integration, Working papers wpaper71, Financialisation, Economy, Society & Sustainable Development (FESSUD) Project. Glencross, A. (2014). The Politics of European Integration: Political Union or a House Divided.
The Age of Empire: 1875–1914
The Age of Empire: 1875–1914 is a book by the British historian Eric Hobsbawm, published in 1987. It is the third in a trilogy of books about "the long 19th century" (coined by Hobsbawm), preceded by The Age of Revolution: Europe 1789–1848 and The Age of Capital: 1848–1875. A fourth book, The Age of Extremes: The Short Twentieth Century, 1914–1991, acts as a sequel to the trilogy. Themes The period of less than fifty years described by Hobsbawm began with an economic depression (see Long Depression), but the capitalist world economy quickly recovered, although the dominant British economy was being undermined by the German economy and American economy. Rising productivity resulted in increasing flow of goods, and rising living standards. Despite that, inequality was growing, both on the national and international levels. In the cultural sphere, it was the period of the Belle Époque, the swan-song of aristocracy, increasingly marginalized by the growing affluence of the upper middle class (bourgeois), which can be seen as the class most benefiting from changes of that period. As part of the Belle Époque it was also a period of peace, with Europe and the Western world involved in only a few minor conflicts. This led to a popular belief that no significant wars would happen in the future, an era of widespread optimism. At the same time, the military-industrial complex in all countries were busily stocking supplies for the conflict to come. In the background, the belief in progress and science was clashing with the old forces of religion. The West, dominating the world through its colonial system, was also increasingly interested in foreign cultures. It was such internal contradictions and tensions that for Hobsbawm defined this era, and spelled its inevitable end. The ending of the Hobsbawn trilogy sees the end of the era that began with the dual revolution (the French Revolution and the Industrial Revolution). Inspired by Vladimir Lenin, Hobsbawm, a writer widely recognized as a Marxist, traces the development of capitalism, linking it with the development of imperialism that resulted in the First World War. Unlike Lenin, who predicted that this will lead to capitalism's downfall, and with the benefit of almost a century more of a hindsight, Hobsbawm acknowledges that capitalism survived, although in a form different from that which it began with in the late 18th century. Facing the dangers of a competing ideology, that of communism, and another revolution (the Russian Revolution), capitalism, according to Hobsbawm, survived by appeasing the masses and accepting some socialist demands, such as that of the welfare state. Contents Overture The Centenarian Revolution An Economy Changes Gear The Age of Empire The Politics of Democracy Workers of the World Waving Flags: Nations and Nationalism Who's Who or the Uncertainties of the Bourgeoisie The New Woman The Arts Transformed Certainties Undermined: The Sciences Reason and Society Towards Revolution From Peace to War New Capitalism Epilogue See also World-systems theory References External links The Age of Empire 1987 non-fiction books Books about the West History books about international relations History books about politics 20th-century history books Books by Eric Hobsbawm History books about the 19th century History books about the 20th century Books about imperialism Weidenfeld & Nicolson books Belle Époque
Indology
Indology, also known as South Asian studies, is the academic study of the history and cultures, languages, and literature of the Indian subcontinent, and as such is a subset of Asian studies. The term Indology (in German, Indologie) is often associated with German scholarship, and is used more commonly in departmental titles in German and continental European universities than in the anglophone academy. In the Netherlands, the term Indologie was used to designate the study of Indian history and culture in preparation for colonial service in the Dutch East Indies. Classical Indology majorly includes the linguistic studies of Sanskrit literature, Pāli and Tamil literature, as well as study of Dharmic religions (like Hinduism, Buddhism, Sikhism, etc.). Some of the regional specializations under South Asian studies include: Bengali studies – study of culture and languages of Bengal Dravidology – study of Dravidian languages of Southern India Tamil studies Pakistan studies Sindhology – the study of the historical Sindh region Some scholars distinguish Classical Indology from Modern Indology, the former more focussed on Sanskrit, Tamil and other ancient language sources, the latter on contemporary India, its politics and sociology. History Precursors The beginnings of the study of India by travellers from outside the subcontinent date back at least to Megasthenes (–290 BC), a Greek ambassador of the Seleucids to the court of Chandragupta (ruled 322-298 BC), founder of the Mauryan Empire. Based on his life in India Megasthenes composed a four-volume Indica, fragments of which still exist, and which influenced the classical geographers Arrian, Diodor and Strabo. Islamic Golden Age scholar Muḥammad ibn Aḥmad Al-Biruni (973–1048) in Tarikh Al-Hind (Researches on India) recorded the political and military history of India and covered India's cultural, scientific, social and religious history in detail. He studied the anthropology of India, engaging in extensive participant observation with various Indian groups, learning their languages and studying their primary texts, and presenting his findings with objectivity and neutrality using cross-cultural comparisons. Academic discipline Indology as generally understood by its practitioners began in the later Early Modern period and incorporates essential features of modernity, including critical self-reflexivity, disembedding mechanisms and globalization, and the reflexive appropriation of knowledge. An important feature of Indology since its beginnings in the late eighteenth century has been the development of networks of academic communication and trust through the creation of learned societies like the Asiatic Society of Bengal, and the creation of learned journals like the Journal of the Royal Asiatic Society and Annals of the Bhandarkar Oriental Research Institute. One of the defining features of Indology is the application of scholarly methodologies developed in European Classical Studies or "Classics" to the languages, literatures and cultures of South Asia. In the wake of eighteenth century pioneers like William Jones, Henry Thomas Colebrooke, Gerasim Lebedev or August Wilhelm Schlegel, Indology as an academic subject emerged in the nineteenth century, in the context of British India, together with Asian studies in general affected by the romantic Orientalism of the time. 
The Asiatic Society was founded in Calcutta in 1784, Société Asiatique founded in 1822, the Royal Asiatic Society in 1824, the American Oriental Society in 1842, and the German Oriental Society (Deutsche Morgenländische Gesellschaft) in 1845, the Japanese Association of Indian and Buddhist Studies in 1949. Sanskrit literature included many pre-modern dictionaries, especially the Nāmaliṅgānuśāsana of Amarasiṃha, but a milestone in the Indological study of Sanskrit literature was publication of the St. Petersburg Sanskrit-Wörterbuch during the 1850s to 1870s. Translations of major Hindu texts in the Sacred Books of the East began in 1879. Otto von Böhtlingk's edition of Pāṇini's grammar appeared in 1887. Max Müller's edition of the Rigveda appeared in 1849–1875. Albrecht Weber commenced publishing his pathbreaking journal Indologische Studien in 1849, and in 1897 Sergey Oldenburg launched a systematic edition of key Sanskrit texts, "Bibliotheca Buddhica". Professional literature and associations Indologists typically attend conferences such as the American Association of Asian Studies, the American Oriental Society annual conference, the World Sanskrit Conference, and national-level meetings in the UK, Germany, India, Japan, France and elsewhere. They may routinely read and write in journals such as Indo-Iranian Journal, Journal of the Royal Asiatic Society, Journal of the American Oriental Society, Journal asiatique, the Journal of the German Oriental Society (ZDMG), Wiener Zeitschrift für die Kunde Südasiens, Journal of Indian Philosophy, Bhandarkar Oriental Research Institute, Journal of Indian and Buddhist Studies (Indogaku Bukkyogaku Kenkyu), Bulletin de l'École française d'Extrême Orient, and others. They may be members of such professional bodies as the American Oriental Society, the Royal Asiatic Society of Great Britain and Ireland, the Société Asiatique, the Deutsche Morgenlāndische Gesellschaft and others. List of indologists The following is a list of prominent academically qualified Indologists. Historical scholars Megasthenes (350–290 BC) Al-Biruni (973–1050) Gaston-Laurent Cœurdoux (1691–1779) Anquetil Duperron (1731–1805) William Jones (1746–1794) Charles Wilkins (1749–1836) Colin Mackenzie (1753–1821) Dimitrios Galanos (1760–1833) Henry Thomas Colebrooke (1765–1837) Jean-Antoine Dubois (1765–1848) August Wilhelm Schlegel (1767–1845) James Mill (1773–1836) Horace Hayman Wilson (1786–1860) Franz Bopp (1791–1867) Duncan Forbes (linguist) (1798–1868) James Prinsep (1799–1840) Hermann Grassmann (1809–1877) John Muir (indologist) (1810–1882) Edward Balfour (1813–1889) Robert Caldwell (1814–1891) Alexander Cunningham (1814–1893) Hermann Gundert (1814–1893) Otto von Bohtlingk (1815–1904) Monier Monier-Williams (1819–1899) Henry Yule (1820–1889) Rudolf Roth (1821–1893) Theodor Aufrecht (1822–1907) Max Müller (1823–1900) Albrecht Weber (1825–1901) Ralph T. H. 
Griffith (1826–1906) William Dwight Whitney (1827–1894) Ferdinand Kittel (1832–1903) Edwin Arnold (1832–1904) Johan Hendrik Caspar Kern (1833–1917) Gustav Solomon Oppert (1836–1908) Georg Bühler (1837–1898) Chintaman Vinayak Vaidya (1861–1938) Ramakrishna Gopal Bhandarkar (1837–1925) Arthur Coke Burnell (1840–1882) Julius Eggeling (1842–1918) Paul Deussen (1845–1919) Vincent Arthur Smith (1848–1920) James Darmesteter (1849–1894) Hermann Jacobi (1850–1937) Kashinath Trimbak Telang (1850–1893) Alois Anton Führer (1853–1930) Jacob Wackernagel (1853–1938) Arthur Anthony Macdonell (1854–1930) Hermann Oldenberg (1854–1920) Maurice Bloomfield (1855–1928) E. Hultzsch (1857–1927) Mark Aurel Stein (1862–1943) P. T. Srinivasa Iyengar(1863–1931) Moriz Winternitz (1863–1937) Fyodor Shcherbatskoy (1866–1942) F.W. Thomas (1867–1956) Jadunath Sarkar (1870–1958) S. Krishnaswami Aiyangar (1871–1947) Percy Brown (1872–1955) John Hubert Marshall (1876–1958) Arthur Berriedale Keith (1879–1944) Pandurang Vaman Kane (1880–1972) Pierre Johanns (1882–1955) Andrzej Gawronski (1885–1927) Willibald Kirfel (1885–1964) Johannes Nobel (1887–1960) Betty Heimann (1888–1961) Alice Boner (1889–1981) Heinrich Zimmer (1890–1943) Ervin Baktay (1890–1963) Mortimer Wheeler (1890–1976) B. R. Ambedkar (1891–1956) K. A. Nilakanta Sastri (1892–1975) Mahapandit Rahul Sankrityayan (1893–1963) Vasudev Vishnu Mirashi (1893–1985) V. R. Ramachandra Dikshitar (1896–1953) Dasharatha Sharma (1903–1976) Shakti M. Gupta (1927–) S. Srikanta Sastri (1904–1974) Joseph Campbell (1904–1987) Murray Barnson Emeneau (1904–2005) Jan Gonda (1905–1991) Paul Thieme (1905–2001) Jean Filliozat (1906–1982) Alain Danielou (1907–1994) F B J Kuiper (1907–2003) Thomas Burrow (1909–1986) Jagdish Chandra Jain (1909–1993) Ramchandra Narayan Dandekar (1909–2001) Arthur Llewellyn Basham (1914–1986) Richard De Smet (1916–1997) P. N. Pushp (1917–1998) Ahmad Hasan Dani (1920–2009) Frank-Richard Hamm (1920–1973) Madeleine Biardeau (1922–2010) Awadh K. (AK) Narain (1925–2013) V. S. Pathak (1926–2003) Kamil Zvelebil (1927–2009) J. A. B. van Buitenen (1928–1979) Tatyana Elizarenkova (1929–2007) Bettina Baumer (1940–) Anncharlott Eschmann (1941–1977) William Dalrymple (1965–present) Arvind Sharma (1940–present) Harilal Dhruv (1856–1896) Ram Swarup (1920–1998) Mikhail Konstantinovich Kudryavtsev (1911–1992) Daniel H. H. Ingalls, Sr. (1916–1999), Wales Professor of Sanskrit, Harvard University Sita Ram Goel (1921–2003) Natalya Romanovna Guseva (1914–2010) Ram Sharan Sharma (1919–2011), Founding Chairperson of Indian Council of Historical Research; Professor Emeritus, Patna University Bhadriraju Krishnamurti (1928–2012), Osmania University Fida Hassnain (1924–2016) Sri Pratap College, Srinagar Heinrich von Stietencron (1933–2018), University of Tübingen, Germany Iravatham Mahadevan (1930–2018)- Indian Council of Historical Research Stanley Wolpert (1927–2019)- University of California, Los Angeles (emeritus) Karel Werner (1925–2019) Dietmar Rothermund (1933–2020), Professor of the history of South Asia at the Ruprecht-Karls University in Heidelberg Bannanje Govindacharya (1936–2020), scholar in Tatva-vada school of philosophy and Vedic tradition Stanley Insler (1937–2019), Edward E. 
Salisbury Professor of Sanskrit and Comparative Philology, Yale University Gérard Fussman (1940–2022) Collège de France Contemporary scholars with university posts Romila Thapar (1931–present), Professor of Ancient History, emerita, at the Jawaharlal Nehru University Hermann Kulke (1938–present), Professor of South and Southeast Asian history at the Department of History, Kiel University Asko Parpola (1941–present), professor emeritus of Indology and South Asian Studies at the University of Helsinki Patrick Olivelle (1942–present) Professor Emeritus of Asian Studies at the University of Texas at Austin Michael Witzel (1943–present)- Wales Professor of Sanskrit at Harvard University Ronald Inden- Professor Emeritus of History, South Asian Languages and Civilizations at the University of Chicago George L. Hart (1945–present)- Professor Emeritus of Tamil at the University of California, Berkeley Stephanie Jamison (1948–present), Distinguished Professor of Asian Languages and Cultures and of Indo-European Studies at the University of California, Los Angeles Alexis Sanderson (1948–present) Emeritus Fellow and former Spalding Professor of Eastern Religion and Ethics at All Souls College, Oxford Michael D. Willis (The British Museum) Wendy Doniger (1940–present) University of Chicago Divinity School, as Mircea Eliade Distinguished Service Professor of the History of Religions Thomas Trautmann (1940–present), former Head of the Center for South Asian Studies, University of Michigan Kapil Kapoor (1940–present), well known scholar of English Literature, Linguistics, Paninan Grammar, Sanskrit Arts and Aesthetics, Director of Indian Institute of Advanced Studies, Shimla Shrivatsa Goswami (1950–present), Indian scholar of Hindu philosophy and art at (Banaras Hindu University), as well as Gaudiya Vaishnava religious leader. Edwin Bryant (1957–present) Rutgers University, New Jersey Other indologists Michel Danino, French-Indian author and historical negationist Georg Feuerstein Hans T. Bakker Indology organisations Faculty of Sanskrit Vidya Dharma Vigyan, Banaras Hindu University Adyar Library and Research Centre, Chennai Bhandarkar Oriental Research Institute, Pune Oriental Research Institute Mysore Oriental Research Institute & Manuscripts Library, Thiruvananthapuram Lalbhai Dalpatbhai Institute of Indology along with Lalbhai Dalpatbhai Museum which is adjacent to the institute, Ahmedabad, Gujarat, India American Institute of Indian Studies French Institute of Pondicherry The Oxford Centre For Hindu Studies See also Buddhism in the West History of India Greater India Bibliography of India Sanskrit Sanskrit studies Roja Muthiah Research Library Area studies Dreaming of Words References Further reading Balagangadhara, S. N. (1994). "The Heathen in his Blindness..." Asia, the West, and the Dynamic of Religion. Leiden, New York: E. J. Brill. Balagangadhara, S. N. (2012). Reconceptualizing India studies. New Delhi: Oxford University Press. Vishwa Adluri, Joydeep Bagchee: The Nay Science: A History of German Indology. Oxford University Press, New York 2014, (Introduction, p. 1–29). Joydeep Bagchee, Vishwa Adluri: "The passion of Paul Hacker: Indology, orientalism, and evangelism." In: Joanne Miyang Cho, Eric Kurlander, Douglas T McGetchin (Eds.), Transcultural Encounters Between Germany and India: Kindred Spirits in the Nineteenth Century. Routledge, New York 2013, p. 215–229. Joydeep Bagchee: "German Indology." In: Alf Hiltebeitel (Ed.), Oxford Bibliographies Online: Hinduism. 
Chinese culture
Chinese culture is one of the world's oldest cultures, originating thousands of years ago. The culture prevails across a large geographical region in East Asia, encompassing the broader Sinosphere, and is extremely diverse, with customs and traditions varying greatly between provinces, cities, towns and counties. Both the term 'China' and the geographical extent of the landmass it refers to have shifted across the centuries, before the name 'China' became commonplace in modernity. Chinese civilization is historically considered a dominant culture of East Asia. As one of the earliest ancient civilizations, China has exerted a profound influence on the philosophy, virtue, etiquette, and traditions of Asia. Chinese characters, ceramics, architecture, music, dance, literature, martial arts, cuisine, arts, philosophy, etiquette, religion, politics, and history have had global influence, while Chinese traditions and festivals are celebrated, instilled, and practiced by people around the world.
Identity
As early as the Zhou dynasty, the Chinese government divided Chinese people into four classes: gentry, farmer, craftsman, and merchant. Gentry and farmers constituted the two major classes, while merchants and craftsmen made up the two minor ones. Theoretically, except for the position of the Emperor, nothing was hereditary. China's majority ethnic group, the Han Chinese, are an East Asian ethnic group and nation. They constitute approximately 92% of the population of China, 95% of Taiwan (Han Taiwanese), 76% of Singapore, 23% of Malaysia, and about 17% of the global population, making them the world's largest ethnic group, numbering over 1.3 billion people. In modern China, there are 56 officially labelled ethnic groups. Throughout Chinese history, many non-Han peoples, such as the Indo-Iranians, became Han Chinese through assimilation, while other groups retained their distinct ethnic identities or faded away. At the same time, the Han Chinese majority has maintained distinct linguistic and regional cultural traditions throughout the ages. The term Zhonghua minzu has been used to describe the notion of Chinese nationalism in general. Much of the traditional identity within the community has to do with distinguishing the family name.
The characteristics of Chinese culture
Studies of contemporary Chinese culture have examined social structure, sociocultural change, and the relationship of these factors to the current state of mental health of the Chinese people, with particular attention to issues of mind, body, and behavior. The cultural framework is of central concern to Chinese participants in this work, whether they are social scientists, humanists, or clinical psychiatrists. Chinese culture appears to affect the state of body and health, parent–child interaction, social relationships, individual and group aspirations, models of health care services, and the patterns of disorders and methods of coping under the impact of migration, industrialization, and urbanization. Particular importance is attached to the impact of cultural tradition upon perception, behavioral orientation, pathology, coping, and help-seeking. The mental health concerns most relevant to the population of mainland China are related to the dramatic socialist revolution of the twentieth century and particularly to the ten-year period of the Cultural Revolution.
Chinese civilization is the only one among the world's four "cradles of ancient civilization" to have preserved its historical continuity. In the long process of its evolution, the Chinese people, in the spirit of "continuous self-renewal," "self-discipline and social commitment," "inclusiveness to diversity," and "realism and adaptation to changes," created cultural traditions of rich content, sophisticated structure, and varied form. These traditions have since nourished, nurtured, and shaped the Chinese people and have become internalized in the blood and soul of the Chinese nation.
Regional
During the 361 years of civil war after the Han dynasty (202 BC – 220 AD), there was a partial restoration of feudalism when wealthy and powerful families emerged with large amounts of land and huge numbers of semi-serfs. They dominated important civilian and military positions of the government, making the positions available to members of their own families and clans. The Tang dynasty extended the imperial examination system as an attempt to eradicate this feudalism. Traditional Chinese culture covers large geographical territories, where each region is usually divided into distinct sub-cultures. Each region is often represented by three ancestral items. For example, Guangdong is represented by chenpi, aged ginger and hay. The ancient city of Lin'an (Hangzhou) is represented by tea leaf, bamboo shoot trunk, and hickory nut. Such distinctions give rise to the old Chinese proverb 十里不同風，百里不同俗 (十里不同风，百里不同俗): "practices vary within ten li, customs vary within a hundred li". The 31 provincial-level divisions of the People's Republic of China are grouped by their former administrative areas from 1949 to 1980, and are now known as traditional regions.
Social structure
Since the Three Sovereigns and Five Emperors period, some form of Chinese monarch has been the main ruler above all. Different periods of history have different names for the various positions within society. Conceptually each imperial or feudal period is similar, with the government and military officials ranking high in the hierarchy, and the rest of the population under regular Chinese law. From the late Zhou dynasty (1046–256 BCE) onwards, traditional Chinese society was organized into a hierarchic system of socio-economic classes known as the four occupations. However, this system did not cover all social groups, and the distinctions between the groups became blurred after the commercialization of Chinese culture in the Song dynasty (960–1279 CE). Ancient Chinese education also has a long history; ever since the Sui dynasty (581–618 CE), educated candidates prepared for the imperial examinations, and exam graduates were drafted into government as scholar-bureaucrats. This led to the creation of a meritocracy, although success was available only to males who could afford test preparation. Imperial examinations required applicants to write essays and demonstrate mastery of the Confucian classics. Those who passed the highest level of the exam became elite scholar-officials known as jinshi, a highly esteemed socio-economic position. A substantial body of mythology developed around the imperial exams. Trades and crafts were usually taught by a shifu. The female historian Ban Zhao wrote the Lessons for Women in the Han dynasty and outlined the four virtues women must abide by, while scholars such as Zhu Xi and Cheng Yi would expand on this.
Chinese marriage and Taoist sexual practices are some of the rituals and customs found in society. With the rise of European economic and military power beginning in the mid-19th century, non-Chinese systems of social and political organization gained adherents in China. Some of these would-be reformers totally rejected China's cultural legacy, while others sought to combine the strengths of Chinese and European cultures. In essence, the history of 20th-century China is one of experimentation with new systems of social, political, and economic organization that would allow for the reintegration of the nation in the wake of dynastic collapse. Spiritual values Most spiritual practices are derived from Chinese Buddhism, Taoism and Confucianism. The relative influence of each school of practice is a subject of debate and other practices, such as Neo-Confucianism, Buddhism and others, have been introduced. Reincarnation and other rebirth concepts are a reminder of the connection between real-life and the after-life. In Chinese business culture, the concept of guanxi, indicating the primacy of relations over rules, has been well documented. While many deities are part of the tradition, some of the most recognized holy figures include Guan Yin, the Jade Emperor and Buddha. Chinese Buddhism has shaped some Chinese art, literature and philosophy. The translation of a large body of foreign Buddhist scriptures into Chinese and the inclusion of these translations, together with works composed in China, into a printed canon had far-reaching implications for the dissemination of Buddhism throughout China. Chinese Buddhism is also marked by the interaction between Indian religions, Chinese folk religion, and Taoism. Religion During the Xia and Shang dynasties, Chinese religion was oriented to worshipping the supreme god Shang Di, with the king and diviners acting as priests and using oracle bones. The Zhou dynasty oriented religion to worshipping the broader concept of heaven. A large part of Chinese culture is based in the belief in a spiritual world. Countless methods of divination have helped answer questions, even serving as an alternative to medicine. Folklores have helped fill the gap between things that cannot be explained. There is often a blurred line between myth, religion and unexplained phenomenon. Many of the stories have since evolved into traditional Chinese holidays. Other concepts have extended beyond mythology into spiritual symbols such as Door god and the Imperial guardian lions. Along with the belief in the divine beings, there is belief in evil beings. Practices such as Taoist exorcism fighting mogwai and jiangshi with peachwood swords are just some of the concepts passed down from generations. A few Chinese fortune telling rituals are still in use today after thousands of years of refinement. Taoism, a religious or philosophical tradition of Chinese origin, emphasizes living in harmony with the Tao (, literally "Way", also romanized as Dao). The Tao is a fundamental idea in most Chinese philosophical schools; in Taoism, however, it denotes the principle that is the source, pattern and substance of everything that exists. Taoism differs from Confucianism by not emphasizing rigid rituals and social order. Taoist ethics vary depending on the particular school, but in general tend to emphasize wu wei (effortless action), "naturalness", simplicity, spontaneity, and the Three Treasures: 慈 "compassion", 儉/俭 "frugality", and 谦 "humility". 
The roots of Taoism can be traced back to at least the 4th century BCE. Early Taoism drew its cosmological notions from the School of Yinyang (Naturalists), and was deeply influenced by one of China's oldest texts, the Yijing, which expounds a philosophical system of human behavior in accordance with the alternating cycles of nature. The "Legalist" Shen Buhai may also have been a major influence, expounding a realpolitik of wu wei. The Tao Te Ching, a compact book containing teachings attributed to Laozi, is widely considered the keystone work of the Taoist tradition, together with the later writings of Zhuangzi. Philosophy and legalism Confucianism, also known as Ruism, was the official philosophy throughout most of Imperial China's history, and mastery of Confucian texts was the primary criterion for entry into the imperial bureaucracy. A number of more authoritarian strains of thought have also been influential, such as Legalism.There was often conflict between the philosophies, e.g. the Song dynasty Neo-Confucians believed Legalism departed from the original spirit of Confucianism. Examinations and a culture of merit remain greatly valued in China today. In recent years, a number of New Confucians (not to be confused with Neo-Confucianism) have advocated that democratic ideals and human rights are quite compatible with traditional Confucian "Asian values". Confucianism is described as tradition, a philosophy, a religion, a humanistic or rationalistic religion, a way of governing, or simply a way of life. Confucianism developed from what was later called the Hundred Schools of Thought from the teachings of the Chinese philosopher Confucius (551–479 BCE), who considered himself a retransmitter of the values of the Zhou dynasty golden age of several centuries before. In the Han dynasty (206 BCE – 220 CE), Confucian approaches edged out the "proto-Taoist" Huang-Lao, as the official ideology, while the emperors mixed both with the realist techniques of Legalism. Hundred Schools of Thought The Hundred Schools of Thought were philosophies and schools that flourished from the 6th century to 221 BC, during the Spring and Autumn period and the Warring States period of ancient China. While this period was fraught with chaos and bloody battles, it was an era of great cultural and intellectual expansion in China. It came to be known as the Golden Age of Chinese philosophy because a broad range of thoughts and ideas were developed and could be freely discussed. This phenomenon has been called the Contention of a Hundred Schools of Thought (百家爭鳴/百家争鸣; bǎijiā zhēngmíng; pai-chia cheng-ming; "hundred schools contend"). The thoughts and ideas discussed and refined during this period have profoundly influenced lifestyles and social consciousness up to the present day in China and across East Asia. The intellectual society of this era was characterized by itinerant scholars, who were often employed by various state rulers as advisers on the methods of government, war, and diplomacy. This period ended with the rise of the imperial Qin dynasty and the subsequent purge of dissent. A traditional source for this period is the Shiji, or Records of the Grand Historian by Sima Qian. The autobiographical section of the Shiji, the "Taishigong Zixu" (太史公自序), refers to the schools of thought described below. Mohism was an ancient Chinese philosophy of logic, rational thought and science developed by the academic scholars who studied under the ancient Chinese philosopher Mozi (–). 
The philosophy is embodied in an eponymous book: the Mozi. Another group is the School of the Military (兵家; Bingjia) that studied warfare and strategy; Sunzi and Sun Bin were influential leaders. The School of Naturalists was a Warring States era philosophy that synthesized the concepts of yin-yang and the Five Elements; Zou Yan is considered the founder of this school. His theory attempted to explain the universe in terms of basic forces in nature: the complementary agents of yin (dark, cold, female, negative) and yang (light, hot, male, positive) and the Five Elements or Five Phases (water, fire, wood, metal, and earth). Language The ancient written standard was Classical Chinese. It was used for thousands of years, but was mostly used by scholars and intellectuals in the upper class society called "shi da fu (士大夫)". It was difficult, but possible, for ordinary people to enter this class by passing written exams. Calligraphy later became commercialized, and works by famous artists became prized possessions. Chinese literature has a long past; the earliest classic work in Chinese, the I Ching or "Book of Changes", dates to around 1000 BC. A flourishing of philosophy during the Warring States period produced such noteworthy works as Confucius's Analects and Laozi's Tao Te Ching. (See also: Chinese classics) Dynastic histories were often written, beginning with Sima Qian's seminal Records of the Grand Historian, written from 109 BC to 91 BC. The Tang dynasty witnessed a poetic flowering, while the Four Great Classical Novels of Chinese literature were written during the Ming and Qing dynasties. Printmaking in the form of movable type was developed during the Song dynasty. Academies of scholars sponsored by the empire were formed to comment on the classics in both printed and handwritten form. Members of royalty frequently participated in these discussions. Chinese philosophers, writers and poets were highly respected and played key roles in preserving and promoting the culture of the empire. Some classical scholars, however, were noted for their daring depictions of the lives of the common people, often to the displeasure of authorities. Varieties of dialect and writing system At the start of the 20th century, most of the population were still illiterate, and the many languages spoken (Mandarin, Wu, Yue (Cantonese), Min Nan (Ban-lam-gu), Jin, Xiang, Hakka, Gan, Hui, Ping etc.) in different regions prevented spoken communication with people from other areas. However, the written language made communication possible, such as passing on official orders and documents throughout the entire region of China. Reformers set out to establish a national language, settling on the Beijing-based Mandarin as the spoken form. After the May 4th Movement, Classical Chinese was quickly replaced by written vernacular Chinese, modeled after the vocabulary and grammar of the standard spoken language. Calligraphy Chinese calligraphy is a form of writing (calligraphy), or, the artistic expression of human language in a tangible form. There are some general standardizations of the various styles of calligraphy in this tradition. Chinese calligraphy and ink and wash painting are closely related: they are accomplished using similar tools and techniques, and have a long history of shared artistry. Distinguishing features of Chinese painting and calligraphy include an emphasis on motion charged with dynamic life. 
According to Stanley-Baker, "Calligraphy is sheer life experienced through energy in motion that is registered as traces on silk or paper, with time and rhythm in shifting space its main ingredients." Calligraphy has also led to the development of many forms of art in China, including seal carving, ornate paperweights, and inkstones. In China, calligraphy is referred to as Shūfǎ (書法/书法), literally "the way/method/law of writing". In Japan it is referred to as Shodō, literally "the way/principle of writing"; and in Korea as Seoye (서예; 書藝) literally "the skill/criterion of writing". Chinese calligraphy is normally regarded as one of the "arts" (Chinese 藝術/艺术 ) in the countries where it is practised. Chinese calligraphy focuses not only on methods of writing but also on cultivating one's character (人品) and taught as a discipline (書法; , "the rules of writing Han characters"). Literature The Zhou dynasty is often regarded as the touchstone of Chinese cultural development. Concepts covered in the Chinese classic texts include poetry, astrology, astronomy, the calendar, and constellations. Some of the most important early texts include the I Ching and the Shujing within the Four Books and Five Classics. Many Chinese concepts such as Yin and Yang, Qi, Four Pillars of Destiny in relation to heaven and earth were theorized in the pre-imperial periods. By the end of the Qing dynasty, Chinese culture had embarked on a new era with written vernacular Chinese for the common citizens. Hu Shih and Lu Xun were considered pioneers in modern literature at that time. After the founding of the People's Republic of China, the study of Chinese modern literature gradually increased. Modern-era literature has influenced modern interpretations of nationhood and the creation of a sense of national spirit. Poetry in the Tang dynasty Tang poetry refers to poetry written in or around the time of, or in the characteristic style of, China's Tang dynasty (18 June 618 – 4 June 907, including the 690–705 reign of Wu Zetian) or that follows a certain style, often considered the Golden Age of Chinese poetry. During the Tang dynasty, poetry continued to be an important part of social life at all levels of society. Scholars were required to master poetry for the civil service exams, but the art was theoretically available to everyone. This led to a large record of poetry and poets, a partial record of which survives today. Two of the most famous poets of the period were Li Bai and Du Fu. Tang poetry has had an ongoing influence on world literature and modern and quasi-modern poetry. The Quantangshi ("Complete Tang Poems") anthology compiled in the early eighteenth century includes over 48,900 poems written by over 2,200 authors. The Quantangwen (全唐文, "Complete Tang Prose"), despite its name, contains more than 1,500 fu and is another widely consulted source for Tang poetry. Despite their names, these sources are not comprehensive, and the manuscripts discovered at Dunhuang in the twentieth century included many shi and some fu, as well as variant readings of poems that were also included in the later anthologies. There are also collections of individual poets' work, which generally can be dated earlier than the Qing anthologies, although few earlier than the eleventh century. Only about a hundred Tang poets have such collected editions extant. Another important source is anthologies of poetry compiled during the Tang dynasty, although only thirteen such anthologies survive in full or in part. 
Many records of poetry, as well as other writings, were lost when the Tang capital of Changan was damaged by war in the eighth and ninth centuries, so that while more than 50,000 Tang poems survive (more than from any earlier period in Chinese history), this still likely represents only a small portion of the poetry that was actually produced during the period. Many seventh-century poets are reported by the 721 imperial library catalog as having left behind massive volumes of poetry, of which only a tiny portion survives, and there are notable gaps in the poetic œuvres of even Li Bai and Du Fu, the two most celebrated Tang poets.
Ci in Song dynasty
Ci (詞/词) are a poetic form, a type of lyric poetry, done in the tradition of Classical Chinese poetry. Ci use a set of poetic meters derived from a base set of certain patterns, in fixed-rhythm, fixed-tone, and variable line-length formal types, or model examples: the rhythmic and tonal patterns of the ci are based upon certain definitive musical song tunes. They are also known as Changduanju (長短句/长短句, "lines of irregular lengths") and Shiyu (詩餘/诗馀, "that which is beside poetry"). Typically the number of characters in each line and the arrangement of tones were determined by one of around 800 set patterns, each associated with a particular title, called cípái (詞牌/词牌). Originally they were written to be sung to a tune of that title, with set rhythm, rhyme, and tempo. The Song dynasty was also a period of great scientific literature, and saw the creation of works such as Su Song's Xin Yixiang Fayao and Shen Kuo's Dream Pool Essays. There were also enormous works of historiography and large encyclopedias, such as Sima Guang's Zizhi Tongjian of 1084 or the Four Great Books of Song, fully compiled and edited by the 11th century. Notable Confucians, Taoists and scholars of all classes made significant contributions, from documenting history to authoring concepts that seem hundreds of years ahead of their time. Although the oldest surviving textual examples of ci are from 8th-century CE Dunhuang manuscripts, beginning in the poetry of the Liang dynasty, the ci followed the tradition of the Shi Jing and the yuefu: they were lyrics which developed from anonymous popular songs into a sophisticated literary genre, although in the case of the ci form some of its fixed-rhythm patterns have an origin in Central Asia. The form was further developed in the Tang dynasty. Although the contributions of Li Bo (also known as Li Po, 701–762) are fraught with historical doubt, the Tang poet Wen Tingyun (812–870) was certainly a great master of the ci, writing it in its distinct and mature form. One of the more notable practitioners and developers of this form was Li Yu of the Southern Tang dynasty during the Five Dynasties and Ten Kingdoms period. However, the ci form of Classical Chinese poetry is especially associated with the poetry of the Song dynasty, during which it was indeed a popular poetic form. A revival of the ci form occurred at the end of the Ming dynasty and the beginning of the Qing dynasty, characterized by an exploration of the emotions connected with romantic love together with its valorization, often in the context of a brief poetic story narrative within a ci poem or a linked group of ci poems, in an application of the chuanqi form of short story tales to poetry.
Qu in Yuan dynasty
The Qu form of poetry is a type of Classical Chinese poetry, consisting of words written in one of a number of certain set tone patterns, based upon the tunes of various songs. Thus Qu poems are lyrics with lines of varying lengths, set according to the specific, fixed patterns of rhyme and tone of the conventional musical pieces upon which they are based and after which these matched variations in lyrics (or individual Qu poems) generally take their name. The fixed-tone types of verse such as the Qu and the ci, together with the shi and fu forms of poetry, comprise the three main forms of Classical Chinese poetry. In Chinese literature, the Qu form of poetry from the Yuan dynasty may be called Yuanqu (元曲, P: Yuánqǔ, W: Yüan-chü). Qu may be derived from Chinese opera, such as the Zaju (雜劇/杂剧), in which case these Qu may be referred to as sanqu (散曲). The San in Sanqu refers to the detached status of the Qu lyrics of this verse form: in other words, rather than being embedded as part of an opera performance, the lyrics stand separately on their own. Because the Qu became popular during the late Southern Song dynasty and reached a special height of popularity in the poetry of the Yuan dynasty, it is often called Yuanqu (元曲), specifying the type of Qu found in Chinese opera typical of the Yuan dynasty era. Both Sanqu and Ci are lyrics written to fit different melodies, but Sanqu differs from Ci in that it is more colloquial and is allowed to contain Chenzi (襯字/衬字, "filler words": additional words that make the meaning more complete). Sanqu can be further divided into Xiaoling (小令) and Santao (散套), with the latter containing more than one melody.
The novels in the Ming and Qing dynasties
The Four Great Classical or Classic Novels of Chinese literature are the four novels commonly regarded by Chinese literary criticism to be the greatest and most influential of pre-modern Chinese fiction. Dating from the Ming and Qing dynasties, they are well known to most Chinese either directly or through their many adaptations to Chinese opera and other forms of popular culture. They are among the world's longest and oldest novels and are considered to be the pinnacle of China's literary achievement in classic novels, influencing the creation of many stories, plays, movies, games, and other forms of entertainment across other parts of East Asia. Chinese fiction, rooted in narrative classics such as Shishuo Xinyu, Sou Shen Ji, Wenyuan Yinghua, Da Tang Xiyu Ji, Youyang Zazu, Taiping Guangji, and official histories, developed into the novel as early as the Song dynasty. The novel as an extended prose narrative which realistically creates a believable world of its own evolved in China and in Europe from the 14th to 18th centuries, though a little earlier in China. Chinese audiences were more interested in history and were more historically minded. They appreciated relative optimism, moral humanism, and relative emphasis on collective behavior and the welfare of society. The rise of a money economy and urbanization beginning in the Song era led to a professionalization of entertainment, which was further encouraged by the spread of printing, the rise of literacy, and education. In both China and Western Europe, the novel gradually became more autobiographical and serious in exploration of social, moral, and philosophical problems. Chinese fiction of the late Ming dynasty and early Qing dynasty was varied, self-conscious, and experimental.
In China, however, there was no counterpart to the 19th-century European explosion of novels. The novels of the Ming and early Qing dynasties represented a pinnacle of classic Chinese fiction. The scholar and literary critic Andrew H. Plaks argues that Romance of the Three Kingdoms, Water Margin, Journey to the West, and The Golden Lotus collectively constituted a technical breakthrough reflecting new cultural values and intellectual concerns. Their educated editors, authors, and commentators used the narrative conventions developed from earlier story-tellers, such as the episodic structure, interspersed songs and folk sayings, or speaking directly to the reader, but they fashioned self-consciously ironic narratives whose seeming familiarity camouflaged a Neo-Confucian moral critique of late Ming decadence. Plaks explores the textual history of the novels (all published after their author's deaths, usually anonymously) and how the ironic and satiric devices of these novels paved the way for the great novels of the 18th century. Plaks further shows these Ming novels share formal characteristics. Fashion and clothing China's fashion history covers hundreds of years with some of the most colorful and diverse arrangements. Different social classes in different eras boast different fashion trends, the color yellow was usually reserved for the emperor during China's Imperial era. Pre-Qing From the beginning of its history, Han clothing (especially in elite circles) was inseparable from silk, supposedly discovered by the Yellow Emperor's consort, Leizu. The dynasty to follow the Shang, the Western Zhou dynasty, established a strict hierarchical society that used clothing as a status meridian, and inevitably, the height of one's rank influenced the ornateness of a costume. Such markers included the length of a skirt, the wideness of a sleeve and the degree of ornamentation. In addition to these class-oriented developments, Han Chinese clothing became looser, with the introduction of wide sleeves and jade decorations hung from the sash which served to keep the yi closed. The yi was essentially wrapped over, in a style known as jiaoling youren, or wrapping the right side over before the left, because of the initially greater challenge to the right-handed wearer (people of Zhongyuan discouraged left-handedness like many other historical cultures, considering it unnatural, barbarian, uncivilized, and unfortunate). The Shang dynasty ( – 1046 BC), developed the rudiments of Chinese clothing; it consisted of a yi, a narrow-cuffed, knee-length tunic tied with a sash, and a narrow, ankle-length skirt, called chang, worn with a bixi, a length of fabric that reached the knees. Vivid primary colors and green were used, due to the degree of technology at the time. Qipao During the Qing dynasty, China's last imperial dynasty, a dramatic shift of clothing occurred, examples of which include the cheongsam (or qipao in Mandarin). The clothing of the era before the Qing dynasty is referred to as Hanfu or traditional Han Chinese clothing. Many symbols such as phoenix have been used for decorative as well as economic purposes. Among them were the Banners (qí), mostly Manchu, who as a group were called Banner People (旗人 pinyin: qí rén). Manchu women typically wore a one-piece dress that retrospectively came to be known as the qípáo (旗袍, Manchu: sijigiyan or banner gown). The generic term for both the male and the female forms of Manchu dress, essentially similar garments, was chángpáo (長袍/长袍). 
The qipao fitted loosely and hung straight down the body, or flared slightly in an A-line. Under the dynastic laws after 1636, all Han Chinese in the banner system were forced to adopt the Manchu male hairstyle of wearing a queue as did all Manchu men and dress in Manchu qipao. However, the order for ordinary non-Banner Han civilians to wear Manchu clothing was lifted and only Han who served as officials were required to wear Manchu clothing, with the rest of the civilian Han population dressing however they wanted. Qipao covered most of the woman's body, revealing only the head, hands, and the tips of the toes. The baggy nature of the clothing also served to conceal the figure of the wearer regardless of age. With time, though, the qipao were tailored to become more form fitting and revealing. The modern version, which is now recognized popularly in China as the "standard" qipao, was first developed in Shanghai in the 1920s, partly under the influence of Beijing styles. People eagerly sought a more modernized style of dress and transformed the old qipao to suit their tastes. Slender and form fitting with a high cut, it had great differences from the traditional qipao. It was high-class courtesans and celebrities in the city that would make these redesigned tight fitting qipao popular at that time. In Shanghai it was first known as zansae or "long dress" (長衫—Mandarin Chinese: chángshān; Shanghainese: zansae; Cantonese: chèuhngsāam), and it is this name that survives in English as the "cheongsam". Most Han civilian men eventually voluntarily adopted Manchu clothing while Han women continued wearing Han clothing. Until 1911, the changpao was required clothing for Chinese men of a certain class, but Han Chinese women continued to wear loose jacket and trousers, with an overskirt for formal occasions. The qipao was a new fashion item for Han Chinese women when they started wearing it around 1925.The original qipao was wide and loose. As hosiery in turn declined in later decades, cheongsams nowadays have come to be most commonly worn with bare legs. Arts Chinese art is visual art that, whether ancient or modern, originated in or is practiced in China or by Chinese artists. The Chinese art in the Republic of China (Taiwan) and that of overseas Chinese can also be considered part of Chinese art where it is based in or draws on Chinese heritage and Chinese culture. Early "Stone Age art" dates back to 10,000 BC, mostly consisting of simple pottery and sculptures. After this early period Chinese art, like Chinese history, is typically classified by the succession of ruling dynasties of Chinese emperors, most of which lasted several hundred years. Chinese art has arguably the oldest continuous tradition in the world, and is marked by an unusual degree of continuity within, and consciousness of, that tradition, lacking an equivalent to the Western collapse and gradual recovery of classical styles. The media that have usually been classified in the West since the Renaissance as the decorative arts are extremely important in Chinese art, and much of the finest work was produced in large workshops or factories by essentially unknown artists, especially in Chinese ceramics. Different forms of art have swayed under the influence of great philosophers, teachers, religious figures and even political figures. Chinese art encompasses all facets of fine art, folk art and performance art. Porcelain pottery was one of the first forms of art in the Palaeolithic period. 
Early Chinese music and poetry were influenced by the Book of Songs and the Chinese poet and statesman Qu Yuan. Chinese painting became a highly appreciated art in court circles, encompassing a wide variety of Shan shui with specialized styles such as Ming dynasty painting. Early Chinese music was based on percussion instruments, which later gave way to stringed and reed instruments. By the Han dynasty, papercutting had become a new art form following the invention of paper. Chinese opera was also introduced and branched into regional forms, alongside other performance formats such as variety arts.
Chinese lantern
The Chinese paper lantern (紙燈籠, 纸灯笼) is a lantern made of thin, brightly colored paper. Paper lanterns come in various shapes and sizes, as well as various methods of construction. In their simplest form, they are simply a paper bag with a candle placed inside, although more complicated lanterns consist of a collapsible bamboo or metal frame of hoops covered with tough paper. Other lanterns can be made out of colored silk (usually red) or vinyl. Silk lanterns are also collapsible with a metal expander and are decorated with Chinese characters and/or designs. The vinyl lanterns are more durable; they can resist rain, sunlight, and wind. Paper lanterns do not last very long and soon break, while silk lanterns last longer. The gold paper on them will soon fade away to a pale white, and the red silk will become a mix between pink and red. Often associated with festivals, paper lanterns are common in China, Japan, Korea, Taiwan, and similarly in Chinatowns with large communities of Overseas Chinese, where they are often hung outside of businesses to attract attention. In Japan, the traditional styles include bonbori and chōchin, and there is a special style of lettering called chōchin moji used to write on them. Airborne paper lanterns are called sky lanterns, and are often released into the night sky for aesthetic effect at lantern festivals. The Chinese sky lantern (天燈, 天灯), also known as the Kongming lantern, is a small hot air balloon made of paper, with an opening at the bottom where a small fire is suspended. In Asia and elsewhere around the world, sky lanterns have been traditionally made for centuries, to be launched for play or as part of long-established festivities. The name "sky lantern" is a translation of the Chinese name, but they have also been referred to as sky candles or fire balloons. The general design is a thin paper shell, which may be from about 30 cm to a couple of metres across, with an opening at the bottom. The opening is usually about 10 to 30 cm wide (even for the largest shells), and is surrounded by a stiff collar that serves to suspend the flame source and to keep it away from the walls. When lit, the flame heats the air inside the lantern, thus lowering its density and causing the lantern to rise into the air. The sky lantern is only airborne for as long as the flame stays alight, after which the lantern sinks back to the ground.
Chinese hand fan
The oldest existing Chinese fans are a pair of woven bamboo, wood or paper side-mounted fans from the 2nd century BCE. The Chinese character for "fan" (扇) is etymologically derived from a picture of feathers under a roof. A particular status and gender would be associated with a specific type of fan. During the Song dynasty, famous artists were often commissioned to paint fans. The Chinese dancing fan was developed in the 7th century.
The Chinese form of the hand fan was a row of feathers mounted in the end of a handle. In the later centuries, Chinese poems and four-word idioms were used to decorate the fans by using Chinese calligraphy pens. In ancient China, fans came in various shapes and forms (such as in a leaf, oval or a half-moon shape), and were made in different materials such as silk, bamboo, feathers, etc. Carved lacquer Carved lacquer or Qīdiāo is a distinctive Chinese form of decorated lacquerware. While lacquer has been used in China for at least 3,000 years, the technique of carving into very thick coatings of it appears to have been developed in the 12th century CE. It is extremely time-consuming to produce, and has always been a luxury product, essentially restricted to China, though imitated in Japanese lacquer in somewhat different styles. The producing process is called Diāoqī (/彫漆, carving lacquer).Though most surviving examples are from the Ming and Qing dynasties, the main types of subject matter for the carvings were all begun under the Song dynasty, and the development of both these and the technique of carving were essentially over by the early Ming. These types were the abstract guri or Sword-Pommel pattern, figures in a landscape, and birds and plants. To these some designs with religious symbols, animals, auspicious characters (right) and imperial dragons can be added. The objects made in the technique are a wide range of small types, but are mostly practical vessels or containers such as boxes, plates and trays. Some screens and pieces of Chinese furniture were made. Carved lacquer is only rarely combined with painting in lacquer and other lacquer techniques. Later Chinese writers dated the introduction of carved lacquer to the Tang dynasty (618–906), and many modern writers have pointed to some late Tang pieces of armour found on the Silk Road by Aurel Stein and now in the British Museum. These are red and black lacquer on camel hide, but the lacquer is very thin, "less than one millimeter in thickness", and the effect very different, with simple abstract shapes on a plain field and almost no impression of relief. The style of carving into thick lacquer used later is first seen in the Southern Song (1127–1279), following the development of techniques for making very thick lacquer. There is some evidence from literary sources that it had existed in the late Tang. At first the style of decoration used is known as guri (/曲仑) from the Japanese word for the ring-pommel of a sword, where the same motifs were used in metal, and is often called the "Sword-Pommel pattern" in English. This style uses a family of repeated two-branched scrolling shapes cut with a rounded profile at the surface, but below that a "V" section through layers of lacquer in different colours (black, red and yellow, and later green), giving a "marbled" effect from the contrasted colours; this technique is called tìxī (/剃犀) in Chinese. This style continued to be used up to the Ming dynasty, especially on small boxes and jars with covers, though after the Song only red was often used, and the motifs were often carved with wider flat spaces at the bottom level to be exposed. Folding screen A folding screen is a type of free-standing furniture. It consists of several frames or panels, which are often connected by hinges or by other means. It can be made in a variety of designs and with different kinds of materials. Folding screens have many practical and decorative uses. 
It originated in ancient China, eventually spreading to the rest of East Asia, Europe, and other parts of the world. Screens in China date back to the Eastern Zhou period (771–256 BCE). These were initially one-panel screens, in contrast to folding screens. Folding screens were invented during the Han dynasty (206 BCE – 220 CE). Depictions of those folding screens have been found in Han-era tombs, such as one in Zhucheng, Shandong Province. Folding screens were originally made from wooden panels and painted on lacquered surfaces; eventually, folding screens made from paper or silk became popular too. Even though folding screens were known to have been used since antiquity, they became widely popular during the Tang dynasty (618–907). During the Tang dynasty, folding screens were considered ideal ornaments for many painters to display their paintings and calligraphy on. Many artists painted on paper or silk and applied it onto the folding screen. Historical literature of the era mentions two distinct types of artistic folding screens. It was not uncommon for people to commission folding screens from artists, such as the Tang-era painter Cao Ba or the Song-era painter Guo Xi. Landscape painting on folding screens reached its height during the Song dynasty (960–1279). The lacquer technique used for the Coromandel screens, known as "incised colors", emerged during the late Ming dynasty (1368–1644) and was applied to folding screens to create dark screens incised, painted, and inlaid with mother-of-pearl, ivory, or other materials.
Chinese jade
Chinese jade (玉) refers to the jade mined or carved in China from the Neolithic onward. It is the primary hardstone of Chinese sculpture. Although deep and bright green jadeite is better known in Europe, for most of China's history jade has come in a variety of colors, and white "mutton-fat" nephrite was the most highly praised and prized. Native sources in Henan and along the Yangtze have been exploited since prehistoric times and have largely been exhausted; most Chinese jade today is extracted from the northwestern province of Xinjiang. Jade was prized for its hardness, durability, musical qualities, and beauty. In particular, its subtle, translucent colors and protective qualities caused it to become associated with Chinese conceptions of the soul and immortality. The most prominent early use was the crafting of the Six Ritual Jades, found since the 3rd-millennium BC Liangzhu culture: the bi, the cong, the huang, the hu, the gui, and the zhang. Although these items are so ancient that their original meaning is uncertain, by the time of the composition of the Rites of Zhou they were thought to represent the sky, the earth, and the four directions. By the Han dynasty, the royal family and prominent lords were buried entirely ensheathed in jade burial suits sewn with gold thread, in the belief that this would preserve the body and the souls attached to it. Jade was also thought to combat fatigue in the living. The Han also greatly improved on prior artistic treatment of jade. These uses gave way after the Three Kingdoms period to Buddhist practices and new developments in Taoism such as alchemy. Nonetheless, jade remained part of traditional Chinese medicine and an important artistic medium. Although its use never became widespread in Japan, jade became important to the art of Korea and Southeast Asia.
Mythological beings
Loong
Loongs, also known as Chinese dragons, are legendary creatures in Chinese mythology, Chinese folklore, and East Asian culture. Chinese dragons have many animal-like forms such as turtles and fish, but are most commonly depicted as snake-like with four legs. They traditionally symbolize potent and auspicious powers, particularly control over water, rainfall, typhoons, and floods. The dragon is also a symbol of power, strength, and good luck for people who are worthy of it. During the days of Imperial China, the Emperor of China usually used the dragon as a symbol of his imperial power and strength. The dragon was also the symbol and representative of the Son of Heaven, the Mandate of Heaven, the Celestial Empire and the Chinese tributary system throughout the history of China.
Fenghuang
Fenghuang (鳳凰) are mythological birds found in Chinese and East Asian mythology that reign over all other birds. The males were originally called feng and the females huang, but such a distinction of gender is often no longer made and they are blurred into a single feminine entity so that the bird can be paired with the Chinese dragon, which is traditionally deemed male. The fenghuang is also called the "August Rooster" since it sometimes takes the place of the Rooster in the Chinese zodiac. In the Western world, it is commonly called the Chinese phoenix or simply the Phoenix, although mythological similarities with the Western phoenix are superficial.
Qilin
The Qilin, or Kirin in Japanese, is a mythical hooved chimerical creature in Chinese culture, said to appear with the imminent arrival or passing of a sage or illustrious ruler. The Qilin is a specific type of the lin mythological family of one-horned beasts. The earliest references to the qilin are in the 5th-century BC Zuo Zhuan. The qilin made appearances in a variety of subsequent Chinese works of history and fiction, such as Feng Shen Bang. Emperor Wu of Han apparently captured a live qilin in 122 BC, although Sima Qian was skeptical of this.
Xuanwu
Xuanwu (Chinese: 玄武) is one of the Four Symbols of the Chinese constellations. Despite its English name, the Black Tortoise, it is usually depicted as a turtle entwined with a snake. It is known as Genbu in Japanese and Hyeonmu in Korean. It represents the north and the winter season. In Japan, it is one of the four guardian spirits that protect Kyoto, and it is said to protect the city on the north; it is represented by the Kenkun Shrine, located on top of Mount Funaoka in Kyoto. The creature's name is identical to that of the important Taoist god Xuanwu, who is sometimes (as in Journey to the West) portrayed in the company of a turtle and snake.
Music, instruments and dancing
Music and dance were closely associated in the very early periods of China. The music of China dates back to the dawn of Chinese civilization, with documents and artifacts providing evidence of a well-developed musical culture as early as the Zhou dynasty (1122 BCE – 256 BCE). The earliest music of the Zhou dynasty recorded in ancient Chinese texts includes the ritual music called yayue, and each piece may be associated with a dance. Some of the oldest written music dates back to Confucius's time. The first major well-documented flowering of Chinese music was exemplified through the popularization of the qin (a plucked instrument with seven strings) during the Tang dynasty, although the instrument is known to have played a major role before the Han dynasty.
There are many musical instruments that are integral to Chinese culture, such as the Xun (Ocarina-type instrument that is also integral in Native American cultures), Guzheng (zither with movable bridges), guqin (bridgeless zither), sheng and xiao (vertical flute), the erhu (alto fiddle or bowed lute), pipa (pear-shaped plucked lute), and many others. Dance in China is a highly varied art form, consisting of many modern and traditional dance genres. The dances cover a wide range, from folk dances to performances in opera and ballet, and may be used in public celebrations, rituals and ceremonies. There are also 56 officially recognized ethnic groups in China, and each ethnic minority group in China also has its own folk dances. The best known Chinese dances today are the Dragon dance and the Lion Dance. Architecture Chinese architecture is a style of architecture that has taken shape through the ages and influenced the architecture of East Asia for many centuries. The structural principles of Chinese architecture have remained largely unchanged, the main changes being only the decorative details. Since the Tang dynasty, Chinese architecture has had a major influence on the architectural styles of East Asia such as Japan and Korea. Chinese architecture, examples for which can be found from more than 2,000 years ago, is almost as old as Chinese civilization and has long been an important hallmark of Chinese culture. There are certain features common to Chinese architecture, regardless of specific regions, different provinces or use. The most important is symmetry, which connotes a sense of grandeur as it applies to everything from palaces to farmhouses. One notable exception is in the design of gardens, which tends to be as asymmetrical as possible. Like Chinese scroll paintings, the principle underlying the garden's composition is to create enduring flow, to let the patron wander and enjoy the garden without prescription, as in nature herself. Feng shui has played a very important part in structural development. The Chinese garden is a landscape garden style which has evolved over three thousand years. It includes both the vast gardens of the Chinese emperors and members of the imperial family, built for pleasure and to impress, and the more intimate gardens created by scholars, poets, former government officials, soldiers and merchants, made for reflection and escape from the outside world. They create an idealized miniature landscape, which is meant to express the harmony that should exist between man and nature. A typical Chinese garden is enclosed by walls and includes one or more ponds, rock works, trees and flowers, and an assortment of halls and pavilions within the garden, connected by winding paths and zig-zag galleries. By moving from structure to structure, visitors can view a series of carefully composed scenes, unrolling like a scroll of landscape paintings. Chinese palace The Chinese palace is an imperial complex where the royal court and the civil government resided. Its structures are considerable and elaborate. The Chinese character gong (宮; meaning "palace") represents two connected rooms (呂) under a roof (宀). Originally the character applied to any residence or mansion, but it was used in reference to solely the imperial residence since the Qin dynasty (3rd century BC). A Chinese palace is composed of many buildings. It has large areas surrounded by walls and moats. 
It contains large halls (殿) for ceremonies and official business, as well as smaller buildings, temples, towers, residences, galleries, courtyards, gardens, and outbuildings. Apart from the main imperial palace, Chinese dynasties also had several other imperial palaces in the capital city where the empress, crown prince, or other members of the imperial family dwelled. There also existed palaces outside of the capital city called "away palaces" (離宮/离宫) where the emperors resided when traveling. Empress dowager Cixi (慈禧太后) built the Summer Palace or Yiheyuan (頤和園/颐和园 – "The Garden of Nurtured Harmony") near the Old Summer Palace, but on a much smaller scale than the Old Summer Palace. Paifang Paifang, also known as a Pailou, is a traditional style of Chinese architectural arch or gateway structure that is related to the Indian Torana from which it is derived. The word paifang was originally a collective term for the top two levels of administrative division and subdivisions of ancient Chinese cities. The largest division within a city in ancient China was a fang, equivalent to a current day ward. Each fang was enclosed by walls or fences, and the gates of these enclosures were shut and guarded every night. Each fang was further divided into several pai, which is equivalent to a current day (unincorporated) community. Each pai, in turn, contained an area including several hutongs (alleyways). This system of urban administrative division and subdivision reached an elaborate level during the Tang dynasty, and continued in the following dynasties. For example, during the Ming dynasty, Beijing was divided into a total of 36 fangs. Originally, the word paifang referred to the gate of a fang and the marker for an entrance of a building complex or a town; but by the Song dynasty, a paifang had evolved into a purely decorative monument. Chinese garden The Chinese garden is a landscape garden style which has evolved over the years. It includes both the vast gardens of the Chinese emperors and members of the imperial family, built for pleasure and to impress, and the more intimate gardens created by scholars, poets, former government officials, soldiers and merchants, made for reflection and escape from the outside world. They create an idealized miniature landscape, which is meant to express the harmony that should exist between man and nature. A typical Chinese garden is enclosed by walls and includes one or more ponds, rock works, trees and flowers, and an assortment of halls and pavilions within the garden, connected by winding paths and zig-zag galleries. By moving from structure to structure, visitors can view a series of carefully composed scenes, unrolling like a scroll of landscape paintings. The earliest recorded Chinese gardens were created in the valley of the Yellow River, during the Shang dynasty (1600–1046 BC). These gardens were large enclosed parks where the kings and nobles hunted game, or where fruit and vegetables were grown. Early inscriptions from this period, carved on tortoise shells, have three Chinese characters for garden, you, pu and yuan. You was a royal garden where birds and animals were kept, while pu was a garden for plants. During the Qin dynasty (221–206 BC), yuan became the character for all gardens. The old character for yuan is a small picture of a garden; it is enclosed in a square which can represent a wall, and has symbols which can represent the plan of a structure, a small square which can represent a pond, and a symbol for a plantation or a pomegranate tree. 
According to the Shiji, one of the most famous features of this garden was the Wine Pool and Meat Forest (酒池肉林). A large pool, big enough for several small boats, was constructed on the palace grounds, with inner linings of polished oval-shaped stones from the sea shores. The pool was then filled with wine. A small island was constructed in the middle of the pool, where trees were planted, which had skewers of roasted meat hanging from their branches. King Zhou and his friends and concubines drifted in their boats, drinking the wine with their hands and eating the roasted meat from the trees. Later Chinese philosophers and historians cited this garden as an example of decadence and bad taste. During the Spring and Autumn period (722–481 BC), in 535 BC, the Terrace of Shanghua, with lavishly decorated palaces, was built by King Jing of the Zhou dynasty. In 505 BC, an even more elaborate garden, the Terrace of Gusu, was begun. It was located on the side of a mountain, and included a series of terraces connected by galleries, along with a lake where boats in the form of blue dragons navigated. From the highest terrace, a view extended as far as Lake Tai, the Great Lake. Martial arts China is one of the main birthplaces of Eastern martial arts. Chinese martial arts, often named under the umbrella terms kung fu and wushu, are the several hundred fighting styles that have developed over the centuries in China. These fighting styles are often classified according to common traits, identified as "families" (家), "sects" (派) or "schools" (门/門) of martial arts. Examples of such traits include Shaolinquan physical exercises involving Five Animals mimicry, or training methods inspired by old Chinese philosophies, religions and legends. Styles that focus on qi manipulation are called "internal" (內家拳/内家拳), while others that concentrate on improving muscle and cardiovascular fitness are called "external" (外家拳). Geographical association, as in "northern" (北拳) and "southern" (南拳), is another popular classification method. Chinese martial arts are collectively given the name Kung Fu (from gong, "achievement" or "merit", and fu, "man", thus "human achievement") or, previously and in some modern contexts, Wushu ("martial arts" or "military arts"). China is also home to the well-respected Shaolin Monastery and Wudang Mountains. The first generation of these arts developed more for survival and warfare than for art. Over time, some art forms have branched off, while others have retained a distinct Chinese flavor. Regardless, China has produced some of the most renowned martial artists, including Wong Fei Hung and many others. The arts have also co-existed with a variety of weapons, including the standard eighteen arms. Legendary and controversial moves like Dim Mak are also praised and talked about within the culture. Martial arts schools also teach the art of lion dance, which has evolved from a pugilistic display of kung fu into an entertaining dance performance. Leisure A number of games and pastimes are popular within Chinese culture. The most common game is Mahjong. The same pieces are used for other games such as Shanghai Solitaire. Others include pai gow, pai gow poker and other bone domino games. Weiqi and xiangqi are also popular. Ethnic games like the Chinese yo-yo are also part of the culture, and are performed during social events. Qigong is the practice of spiritual, physical, and medical techniques.
It is practiced as a form of exercise, and although it is commonly associated with the elderly, anyone of any age can practice it during their free time. Cuisine Chinese cuisine is a very important part of Chinese culture, which includes cuisine originating from the diverse regions of China, as well as from Chinese people in other parts of the world. Because of the Chinese diaspora and the historical power of the country, Chinese cuisine has influenced many other cuisines in Asia, with modifications made to cater to local palates. The seasoning and cooking techniques of Chinese provinces depend on differences in historical background and ethnic groups. Geographic features including mountains, rivers, forests and deserts also have a strong effect on the locally available ingredients, considering that the climate of China varies from tropical in the south to subarctic in the northeast. Imperial, royal and noble preference also played a role in the changes in Chinese cuisine. Because of imperial expansion and trading, ingredients and cooking techniques from other cultures were integrated into Chinese cuisines over time. The most praised "Four Major Cuisines" are Chuan, Lu, Yue and Huaiyang, representing West, North, South and East China cuisine respectively. The modern "Eight Cuisines" of China are Anhui, Cantonese, Fujian, Hunan, Jiangsu, Shandong, Sichuan, and Zhejiang cuisines. Color, smell and taste are the three traditional aspects used to describe Chinese food, as well as the meaning, appearance and nutrition of the food. Cooking should be appraised with respect to the ingredients used, knife work, cooking time and seasoning. It is considered inappropriate to use knives on the dining table. Chopsticks are the main eating utensils for Chinese food, and can be used to cut and pick up food. Tea culture The practice of drinking tea has a long history in China, having originated there. The history of tea in China is long and complex, for the Chinese have enjoyed tea for millennia. Scholars hailed the brew as a cure for a variety of ailments; the nobility considered the consumption of good tea as a mark of their status, and the common people simply enjoyed its flavour. In 2016, the discovery of the earliest known physical evidence of tea, from the mausoleum of Emperor Jing of Han in Xi'an, was announced, indicating that tea from the genus Camellia was drunk by Han dynasty emperors as early as the 2nd century BC. Tea then became a popular drink in the Tang (618–907) and Song (960–1279) dynasties. Although tea originated in China, Chinese tea generally refers to tea leaves that have been processed using methods inherited from ancient China. According to popular legend, tea was discovered by the Chinese Emperor Shen Nong in 2737 BCE when a leaf from a nearby shrub fell into water the emperor was boiling. Tea is deeply woven into the history and culture of China. The beverage is considered one of the seven necessities of Chinese life, along with firewood, rice, oil, salt, soy sauce and vinegar. During the Spring and Autumn period, Chinese tea was used for medicinal purposes, and it was in this period that the Chinese people first enjoyed the juice extracted from the tea leaves that they chewed. Chinese tea culture refers to how tea is prepared as well as the occasions when people consume tea in China. Tea culture in China differs from that in European countries such as Britain and in other Asian countries like Japan in preparation, taste, and the occasions when people consume tea.
Even today, tea is consumed regularly, both at casual and formal occasions. In addition to being a popular beverage, tea is used in traditional Chinese medicine, as well as in Chinese cuisine. Green tea is one of the main teas originating in China. Food culture The overwhelmingly large variety of Chinese cuisine comes mainly from the practice of the dynastic periods, when emperors would host banquets with over 100 dishes per meal. A countless number of imperial kitchen staff and concubines were involved in the food preparation process. Over time, many dishes became part of the everyday citizen's cuisine. Some of the highest-quality restaurants with recipes close to those of the dynastic periods include the Fangshan restaurant in Beihai Park, Beijing, and the Oriole Pavilion. Arguably, all branches of Hong Kong's eastern style of cooking are in some way rooted in the original dynastic cuisines. Manhan Quanxi, literally the Manchu Han Imperial Feast, was one of the grandest meals ever documented in Chinese cuisine. It consisted of at least 108 unique dishes from Manchu and Han Chinese culture during the Qing dynasty, and it was reserved exclusively for the emperors. The meal was held over three whole days, across six banquets. It showcased cooking methods from all over Imperial China. When the Manchus conquered China and founded the Qing dynasty, the Manchu and Han Chinese peoples struggled for power. The Kangxi Emperor wanted to resolve the disputes, so he held a banquet during his 66th birthday celebrations. The banquet consisted of Manchu and Han dishes, with officials from both ethnic groups attending the banquet together. After the Wuchang Uprising, common people learned about the imperial banquet. The original meal was served in the Forbidden City in Beijing. Major subcultures Chinese culture consists of many subcultures. In China, the cultural difference between adjacent provinces (and, in some cases, adjacent counties within the same province) can often be as great as that between adjacent European nations. Thus, the concept of Han Chinese subgroups (漢族民系/汉族民系, literally "Han ethnic lineage") was born, used for classifying these subgroups within the greater Han ethnicity. These subgroups are, as a general rule, classified based on linguistic differences.
Using this linguistic classification, some of the well-known subcultures within China include:

North: Hui culture, Culture of Beijing (京), Culture of Shandong (魯/鲁), Culture of Gansu, Dongbei culture (東北/东北), Shaanxi culture, Jin culture (晉/晋), Zhongyuan culture (豫).

South: Haipai culture (海), Hakka culture (客), Hokkien culture (閩), Hong Kong culture (港), Hubei culture (楚), Huizhou culture (徽), Hunanese culture (湘), Jiangxi culture (贛), Jiangnan culture, Lingnan culture (粵/粤), Macanese culture, Sichuanese culture (蜀), Taiwanese culture (台), Teochew culture (潮), Wenzhou culture (欧), Wuyue culture (吳/吴).

See also: Bian Lian, Chinese animation, Chinese ancestral veneration, Chinese art, Chinese Buddhism, Chinese cinema, Chinese clothes, Chinese food, Chinese dance, Chinese dragon, Chinese drama, Chinese festival, Chinese folklore, Chinese folk religion, Chinese garden, Chinese instrument, Chinese jade, Chinese literature, Chinese marriage, Chinese name, Chinese opera, Chinese orchestra, Chinese paper cutting, Chinese sphere of influence, Chinese studies, Color in Chinese culture, Customs and etiquette in Chinese dining, Go, I Ching's influence, Numbers in Chinese culture, Peking opera, Science and technology in China, Chinese astronomy, Chinese calendar, Chinese mathematics, Chinese medicine, Chinese units of measurement, Taoism, Tian-tsui, Xiangqi.
Law of three stages
The law of three stages is an idea developed by Auguste Comte in his work The Course in Positive Philosophy. It states that society as a whole, and each particular science, develops through three mentally conceived stages: (1) the theological stage, (2) the metaphysical stage, and (3) the positive stage. The progression of the three stages of sociology (1) The Theological stage refers to the appeal to personified deities. During the earlier stages, people believed that all the phenomena of nature were the creation of the divine or supernatural. Adults and children failed to discover the natural causes of various phenomena and hence attributed them to a supernatural or divine power. Comte broke this stage into 3 sub-stages: 1A. Fetishism – Fetishism was the primary stage of the theological stage of thinking. Throughout this stage, primitive people believe that inanimate objects have living spirits in them, also known as animism. People worship inanimate objects like trees, stones, pieces of wood, volcanic eruptions, etc. Through this practice, people believe that all things stem from a supernatural source. 1B. Polytheism – At one point, fetishism began to bring about doubt in the minds of its believers. As a result, people turned towards polytheism: the explanation of things through the use of many gods. Primitive people believe that all natural forces are controlled by different gods; a few examples would be the god of water, god of rain, god of fire, god of air, god of earth, etc. 1C. Monotheism – Monotheism means believing in one God, attributing everything to a single, supreme deity. Primitive people believe a single theistic entity is responsible for the existence of the universe. (2) The Metaphysical stage is an extension of the theological stage. It refers to explanation by impersonal abstract concepts. People often try to characterize God as an abstract being. They believe that an abstract power or force guides and determines events in the world. Metaphysical thinking discards belief in a concrete God. For example, in classical Hindu Indian society, the principle of the transmigration of the soul and the conception of rebirth were largely governed by metaphysical thinking. (3) The Positive stage, also known as the scientific stage, refers to scientific explanation based on observation, experiment, and comparison. Positive explanations rely upon a distinct method, the scientific method, for their justification. In this stage, people attempt to establish cause-and-effect relationships. Positivism is a purely intellectual way of looking at the world; it also emphasizes the observation and classification of data and facts. This is, according to Comte, the highest and most evolved stage of thought. Comte, however, was conscious of the fact that the three stages of thinking may or do coexist in the same society or the same mind and may not always be successive. Comte proposed a hierarchy of the sciences based on historical sequence, with areas of knowledge passing through these stages in order of complexity. The simplest and most remote areas of knowledge, mechanical or physical, are the first to become scientific. These are followed by the more complex sciences, those considered closest to us. The sciences, then, according to Comte's "law", developed in this order: Mathematics; Astronomy; Physics; Chemistry; Biology; Sociology. A science of society is thus the "Queen science" in Comte's hierarchy, as it would be the most fundamentally complex.
Since Comte saw social science as an observation of human behavior and knowledge, his definition of sociology included observing humanity's development of science itself. Because of this, Comte presented this introspective field of study as the science above all others. Sociology would both complete the body of positive sciences by discussing humanity as the last unstudied scientific field and would link the fields of science together in human history, showing the "intimate interrelation of scientific and social development". To Comte, the law of three stages made the development of sociology inevitable and necessary. Comte saw the formation of his law as an active use of sociology, but this formation was dependent on other sciences reaching the positive stage; Comte's three-stage law would not have evidence for a positive stage without the observed progression of other sciences through these three stages. Thus, sociology and its first law of three stages would be developed after other sciences had developed out of the metaphysical stage, with the observation of these developed sciences becoming the scientific evidence used in a positive stage of sociology. This special dependence on other sciences contributed to Comte's view of sociology as the most complex. It also explains sociology being the last science to be developed. Comte saw the results of his three-stage law and sociology as not only inevitable but good. In Comte's eyes, the positive stage was not only the most evolved but also the stage best for mankind. Through the continuous development of positive sciences, Comte hoped that humans would perfect their knowledge of the world and make real progress to improve the welfare of humanity. He acclaimed the positive stage as the "highest accomplishment of the human mind" and as having "natural superiority" over the other, more primitive stages. Overall, Comte saw his law of three stages as the start of the scientific field of sociology as a positive science. He believed this development was the key to completing positive philosophy and would finally allow humans to study every observable aspect of the universe. For Comte, sociology's human-centered studies would relate the fields of science to each other as progressions in human history and make positive philosophy one coherent body of knowledge. Comte presented the positive stage as the final state of all sciences, which would allow human knowledge to be perfected, leading to human progress. Critiques of the law Historian William Whewell wrote "Mr. Comte's arrangement of the progress of science as successively metaphysical and positive, is contrary to history in fact, and contrary to sound philosophy in principle." The historian of science H. Floris Cohen has made a significant effort to draw the modern eye towards this first debate on the foundations of positivism. In contrast, in an entry dated early October 1838, Charles Darwin wrote in one of his then private notebooks that "M. Comte's idea of a theological state of science [is a] grand idea." See also: Antipositivism, Religion of Humanity, Sociological positivism.
Early European Farmers
Early European Farmers (EEF) were a group of the Anatolian Neolithic Farmers (ANF) who brought agriculture to Europe and Northwest Africa. The Anatolian Neolithic Farmers were an ancestral component, first identified in farmers from Anatolia (also known as Asia Minor) in the Neolithic; beyond Europe and Northwest Africa, this ancestry was also present in the Iranian Plateau, the South Caucasus, Mesopotamia and the Levant. Although the spread of agriculture from the Middle East to Europe has long been recognised through archaeology, it is only recent advances in archaeogenetics that have confirmed that this spread was strongly correlated with a migration of these farmers, and was not just a cultural exchange. The earliest farmers in Anatolia derived most (80–90%) of their ancestry from the region's local hunter-gatherers, with minor Levantine and Caucasus-related ancestry. The Early European Farmers moved into Europe from Anatolia through Southeast Europe from around 7,000 BC, gradually spread north and westwards, and reached Northwest Africa via the Iberian Peninsula. Genetic studies have confirmed that the later farmers of Europe generally also have a minor contribution from Western Hunter-Gatherers (WHGs), with significant regional variation. European farmer and hunter-gatherer populations coexisted and traded in some locales, although evidence suggests that the relationship was not always peaceful. Over the course of the next 4,000 years or so, Europe was transformed into a continent of agricultural communities, with WHGs being effectively replaced across Europe. During the Chalcolithic and early Bronze Age, people who had Western Steppe Herder (WSH) ancestry moved into Europe and mingled with the EEF population; these WSH, originating from the Yamnaya culture of the Pontic steppe of Eastern Europe, probably spoke Indo-European languages. EEF ancestry is common in modern European and Northwest African populations, and is highest in Southern Europeans, especially Sardinians and Basque people. A distinct group of the Anatolian Neolithic Farmers spread into the east of Anatolia, and left a considerable genetic legacy in the Iranian Plateau, the South Caucasus, the Levant (during the Pre-Pottery Neolithic B) and Mesopotamia. They also played a minor role in the ethnogenesis of the WSHs of the Yamnaya culture. ANF ancestry is found at substantial levels in contemporary European, West Asian and North African populations, and at lower levels in Central and South Asian populations (through the Bactria–Margiana Archaeological Complex and the Corded Ware culture). Overview Populations of the Anatolian Neolithic derived most of their ancestry from the Anatolian hunter-gatherers (AHG), with minor gene flow from Iranian/Caucasus-related and Levantine-related sources, suggesting that agriculture was adopted in situ by these hunter-gatherers and not spread by demic diffusion into the region. Ancestors of AHGs and EEFs are believed to have split off from Western Hunter-Gatherers (WHGs) between 45 kya and 26 kya, during the Last Glacial Maximum, and to have split from Caucasus Hunter-Gatherers (CHGs) between 25 kya and 14 kya. Genetic studies demonstrate that the introduction of farming to Europe in the 7th millennium BC was associated with a mass migration of people from Northwest Anatolia to Southeast Europe, which resulted in the replacement of almost all (c. 98%) of the local Balkan hunter-gatherer gene pool with ancestry from Anatolian farmers.
In the Balkans, the EEFs appear to have divided into two wings, which expanded further west into Europe along the Danube (Linear Pottery culture) or along the western Mediterranean (Cardial Ware). Large parts of Northern Europe and Eastern Europe nevertheless remained unsettled by EEFs. During the Middle Neolithic there was a largely male-driven resurgence of WHG ancestry among many EEF-derived communities, leading to increasing frequencies of the hunter-gatherer paternal haplogroups among them. Around 7,500 years ago, EEFs originating from the Iberian Peninsula migrated into Northwest Africa, bringing farming to the region. They were a key component in the neolithization process of the Maghreb, and intermixed with the local forager communities. The most common paternal haplogroup among EEFs was haplogroup G2a, while haplogroups E1b1 and R1b have also been found. Their maternal haplogroups consisted mainly of West Eurasian lineages, including haplogroups H2, I, and T2; however, significant numbers of central European farmers belonged to the East Asian maternal lineage N9a, which is almost non-existent in modern Europeans but common in East Asia. During the Chalcolithic and early Bronze Age, the EEF-derived cultures of Europe were overwhelmed by successive migrations of Western Steppe Herders (WSHs) from the Pontic–Caspian steppe, who carried roughly equal amounts of Eastern Hunter-Gatherer (EHG) and Caucasus Hunter-Gatherer (CHG) ancestries. These migrations led to EEF paternal DNA lineages in Europe being almost entirely replaced with WSH-derived paternal DNA (mainly subclades of EHG-derived R1b and R1a). EEF maternal DNA (mainly haplogroup N) was also substantially replaced, being supplanted by steppe lineages, suggesting the migrations involved both males and females from the steppe. A 2017 study found that Bronze Age Europeans with steppe ancestry had elevated EEF ancestry on the X chromosome, suggesting a sex bias in which steppe ancestry was inherited more often through male than female ancestors. However, this study's results could not be replicated in a follow-up study by Iosif Lazaridis and David Reich, suggesting that the authors had mis-measured the admixture proportions of their sample. EEF ancestry remains widespread throughout Europe, ranging from about 60% near the Mediterranean Sea (with a peak of 65% on the island of Sardinia) and diminishing northwards to about 10% in northern Scandinavia. According to more recent studies, however, the highest EEF ancestry found in modern Europeans ranges from 67% to over 80% in modern Sardinians, Italians, and Iberians, while the lowest EEF ancestry found in modern Europeans is around 35–40% in modern Finns, Lithuanians and Latvians. EEF ancestry is also prominent in living Northwest Africans like Moroccans and Algerians. Physical appearance and allele frequency European hunter-gatherers were much taller than EEFs, and the replacement of European hunter-gatherers by EEFs resulted in a dramatic decrease in genetic height throughout Europe. During the later phases of the Neolithic, height increased among European farmers, probably due to increasing admixture with hunter-gatherers. During the Late Neolithic and Bronze Age, further reductions of EEF ancestry in Europe due to migrations of peoples with steppe-related ancestry are associated with further increases in height.
High frequencies of EEF ancestry in Southern Europe might partly explain the shorter stature of Southern Europeans as compared to Northern Europeans, who carry increased levels of steppe-related ancestry. The Early European Farmers are believed to have been mostly dark-haired and dark-eyed, and light-skinned, with the derived SLC24A5 allele being fixed in the Anatolian Neolithic, although some farmers were slightly darker than most modern Europeans. A study of different EEF remains throughout Europe concluded that they mostly had an "intermediate to light skin complexion". A 2024 paper found that risk alleles for mood-related phenotypes are enriched in the ancestry of Neolithic farmers. Subsistence EEFs and their Anatolian forebears kept taurine cattle, pigs, sheep, and goats as livestock, and planted cereal crops like wheat. Social organisation Genetic analysis of individuals found in Neolithic tombs suggests that at least some EEF peoples were patrilineal (tracing descent through the male line), with the tombs' occupants mostly consisting of the male descendants of a single male common ancestor and their children, as well as their wives, who were genetically unrelated to their husbands, suggesting female exogamy. See also: Neolithic Europe, Neolithic decline, Anatolian hunter-gatherers.
Human evolution
Human evolution is the evolutionary process within the history of primates that led to the emergence of Homo sapiens as a distinct species of the hominid family, which includes all the great apes. This process involved the gradual development of traits such as human bipedalism, dexterity, and complex language, as well as interbreeding with other hominins (a tribe of the African hominid subfamily), indicating that human evolution was not linear but weblike. The study of the origins of humans involves several scientific disciplines, including physical and evolutionary anthropology, paleontology, and genetics; the field is also known by the terms anthropogeny, anthropogenesis, and anthropogony. (The latter two terms are sometimes used to refer to the related subject of hominization.) Primates diverged from other mammals during the Late Cretaceous period, with their earliest fossils appearing over 55 million years ago (mya), during the Paleocene. Primates produced successive clades leading to the ape superfamily, which gave rise to the hominid and the gibbon families; these diverged some 15–20 mya. African and Asian hominids (including orangutans) diverged about 14 mya. Hominins (including the Australopithecine and Panina subtribes) parted from the Gorillini tribe between 8 and 9 mya; the Australopithecines (including the extinct biped ancestors of humans) separated from the Pan genus (containing chimpanzees and bonobos) 4–7 mya. The Homo genus is evidenced by the appearance of H. habilis over 2 mya, while anatomically modern humans emerged in Africa approximately 300,000 years ago. Before Homo Early evolution of primates The evolutionary history of primates can be traced back 65 million years. One of the oldest known primate-like mammal species, Plesiadapis, came from North America; another, Archicebus, came from China. Other similar basal primates were widespread in Eurasia and Africa during the tropical conditions of the Paleocene and Eocene. David R. Begun concluded that early primates flourished in Eurasia and that a lineage leading to the African apes and humans, including Dryopithecus, migrated south from Europe or Western Asia into Africa. The surviving tropical population of primates, which is seen most completely in the Upper Eocene and lowermost Oligocene fossil beds of the Faiyum depression southwest of Cairo, gave rise to all extant primate species, including the lemurs of Madagascar, lorises of Southeast Asia, galagos or "bush babies" of Africa, and the anthropoids, which are the Platyrrhines or New World monkeys, the Catarrhines or Old World monkeys, and the great apes, including humans and other hominids. The earliest known catarrhine is Kamoyapithecus from the uppermost Oligocene at Eragaleit in the northern Great Rift Valley in Kenya, dated to 24 million years ago. Its ancestry is thought to lie among species related to Aegyptopithecus, Propliopithecus, and Parapithecus from the Faiyum, at around 35 mya. In 2010, Saadanius was described as a close relative of the last common ancestor of the crown catarrhines, and tentatively dated to 29–28 mya, helping to fill an 11-million-year gap in the fossil record. In the Early Miocene, about 22 million years ago, the many kinds of arboreally adapted (tree-dwelling) primitive catarrhines from East Africa suggest a long history of prior diversification. Fossils from 20 million years ago include fragments attributed to Victoriapithecus, the earliest Old World monkey.
Among the genera thought to be in the ape lineage leading up to 13 million years ago are Proconsul, Rangwapithecus, Dendropithecus, Limnopithecus, Nacholapithecus, Equatorius, Nyanzapithecus, Afropithecus, Heliopithecus, and Kenyapithecus, all from East Africa. The presence of other generalized non-cercopithecids of Middle Miocene from sites far distant, such as Otavipithecus from cave deposits in Namibia, and Pierolapithecus and Dryopithecus from France, Spain and Austria, is evidence of a wide diversity of forms across Africa and the Mediterranean basin during the relatively warm and equable climatic regimes of the Early and Middle Miocene. The youngest of the Miocene hominoids, Oreopithecus, is from coal beds in Italy that have been dated to 9 million years ago. Molecular evidence indicates that the lineage of gibbons diverged from the line of great apes some 18–12 mya, and that of orangutans (subfamily Ponginae) diverged from the other great apes at about 12 million years; there are no fossils that clearly document the ancestry of gibbons, which may have originated in a so-far-unknown Southeast Asian hominoid population, but fossil proto-orangutans may be represented by Sivapithecus from India and Griphopithecus from Turkey, dated to around 10 mya. Hominidae subfamily Homininae (African hominids) diverged from Ponginae (orangutans) about 14 mya. Hominins (including humans and the Australopithecine and Panina subtribes) parted from the Gorillini tribe (gorillas) between 8 and 9 mya; Australopithecine (including the extinct biped ancestors of humans) separated from the Pan genus (containing chimpanzees and bonobos) 4–7 mya. The Homo genus is evidenced by the appearance of H. habilis over 2 mya, while anatomically modern humans emerged in Africa approximately 300,000 years ago. Divergence of the human clade from other great apes Species close to the last common ancestor of gorillas, chimpanzees and humans may be represented by Nakalipithecus fossils found in Kenya and Ouranopithecus found in Greece. Molecular evidence suggests that between 8 and 4 million years ago, first the gorillas, and then the chimpanzees (genus Pan) split off from the line leading to the humans. Human DNA is approximately 98.4% identical to that of chimpanzees when comparing single nucleotide polymorphisms (see human evolutionary genetics). The fossil record, however, of gorillas and chimpanzees is limited; both poor preservation – rain forest soils tend to be acidic and dissolve bone – and sampling bias probably contribute to this problem. Other hominins probably adapted to the drier environments outside the equatorial belt; and there they encountered antelope, hyenas, dogs, pigs, elephants, horses, and others. The equatorial belt contracted after about 8 million years ago, and there is very little fossil evidence for the split—thought to have occurred around that time—of the hominin lineage from the lineages of gorillas and chimpanzees. The earliest fossils argued by some to belong to the human lineage are Sahelanthropus tchadensis (7 Ma) and Orrorin tugenensis (6 Ma), followed by Ardipithecus (5.5–4.4 Ma), with species Ar. kadabba and Ar. ramidus. It has been argued in a study of the life history of Ar. ramidus that the species provides evidence for a suite of anatomical and behavioral adaptations in very early hominins unlike any species of extant great ape. This study demonstrated affinities between the skull morphology of Ar. 
ramidus and that of infant and juvenile chimpanzees, suggesting the species evolved a juvenilised or paedomorphic craniofacial morphology via heterochronic dissociation of growth trajectories. It was also argued that the species provides support for the notion that very early hominins, akin to bonobos (Pan paniscus), the less aggressive species of the genus Pan, may have evolved via the process of self-domestication. Consequently, arguing against the so-called "chimpanzee referential model", the authors suggest it is no longer tenable to use chimpanzee (Pan troglodytes) social and mating behaviors in models of early hominin social evolution. The authors also comment on the absence of aggressive canine morphology in Ar. ramidus and the implications this has for the evolution of hominin social psychology. They argue that many of the basic human adaptations evolved in the ancient forest and woodland ecosystems of late Miocene and early Pliocene Africa. Consequently, they argue that humans may not represent evolution from a chimpanzee-like ancestor as has traditionally been supposed. This suggests many modern human adaptations represent phylogenetically deep traits and that the behavior and morphology of chimpanzees may have evolved subsequent to the split with the common ancestor they share with humans. Genus Australopithecus The genus Australopithecus evolved in eastern Africa around 4 million years ago before spreading throughout the continent and eventually becoming extinct 2 million years ago. During this time period various forms of australopiths existed, including Australopithecus anamensis, A. afarensis, A. sediba, and A. africanus. There is still some debate among academics whether certain African hominid species of this time, such as A. robustus and A. boisei, constitute members of the same genus; if so, they would be considered to be "robust australopiths" while the others would be considered "gracile australopiths". However, if these species do indeed constitute their own genus, then they may be given their own name, Paranthropus. Australopithecus (4–1.8 Ma), with species A. anamensis, A. afarensis, A. africanus, A. bahrelghazali, A. garhi, and A. sediba; Kenyanthropus (3–2.7 Ma), with species K. platyops; Paranthropus (3–1.2 Ma), with species P. aethiopicus, P. boisei, and P. robustus. A newly proposed species, Australopithecus deyiremeda, is claimed to have been discovered living at the same time as A. afarensis. There is debate whether A. deyiremeda is a new species or is A. afarensis. Australopithecus prometheus, otherwise known as Little Foot, has recently been dated at 3.67 million years old through a new dating technique, making it roughly as old as A. afarensis. Given the opposable big toe found on Little Foot, it seems that the specimen was a good climber. It is thought that, given the nocturnal predators of the region, it built nesting platforms in the trees at night, in a similar fashion to chimpanzees and gorillas. Evolution of genus Homo The earliest documented representative of the genus Homo is Homo habilis, which evolved around 2.8 million years ago and is arguably the earliest species for which there is positive evidence of the use of stone tools. The brains of these early hominins were about the same size as that of a chimpanzee, although it has been suggested that this was the time in which the human SRGAP2 gene doubled, producing a more rapid wiring of the frontal cortex.
During the next million years a process of rapid encephalization occurred, and with the arrival of Homo erectus and Homo ergaster in the fossil record, cranial capacity had doubled to 850 cm3. (Such an increase in human brain size is equivalent to each generation having 125,000 more neurons than their parents.) It is believed that H. erectus and H. ergaster were the first to use fire and complex tools, and were the first of the hominin line to leave Africa, spreading throughout Africa, Asia, and Europe. According to the recent African origin theory, modern humans evolved in Africa possibly from H. heidelbergensis, H. rhodesiensis or H. antecessor and migrated out of the continent some 50,000 to 100,000 years ago, gradually replacing local populations of H. erectus, Denisova hominins, H. floresiensis, H. luzonensis and H. neanderthalensis, whose ancestors had left Africa in earlier migrations. Archaic Homo sapiens, the forerunner of anatomically modern humans, evolved in the Middle Paleolithic between 400,000 and 250,000 years ago. Recent DNA evidence suggests that several haplotypes of Neanderthal origin are present among all non-African populations, and Neanderthals and other hominins, such as Denisovans, may have contributed up to 6% of their genome to present-day humans, suggestive of a limited interbreeding between these species. According to some anthropologists, the transition to behavioral modernity with the development of symbolic culture, language, and specialized lithic technology happened around 50,000 years ago (beginning of the Upper Paleolithic), although others point to evidence of a gradual change over a longer time span during the Middle Paleolithic. Homo sapiens is the only extant species of its genus, Homo. While some (extinct) Homo species might have been ancestors of Homo sapiens, many, perhaps most, were likely "cousins", having speciated away from the ancestral hominin line. There is yet no consensus as to which of these groups should be considered a separate species and which should be subspecies; this may be due to the dearth of fossils or to the slight differences used to classify species in the genus Homo. The Sahara pump theory (describing an occasionally passable "wet" Sahara desert) provides one possible explanation of the intermittent migration and speciation in the genus Homo. Based on archaeological and paleontological evidence, it has been possible to infer, to some extent, the ancient dietary practices of various Homo species and to study the role of diet in physical and behavioral evolution within Homo. Some anthropologists and archaeologists subscribe to the Toba catastrophe theory, which posits that the supereruption of Lake Toba on the island of Sumatra in Indonesia some 70,000 years ago caused global starvation, killing the majority of humans and creating a population bottleneck that affected the genetic inheritance of all humans today. The genetic and archaeological evidence for this remains in question, however. A 2023 genetic study suggests that a similar human population bottleneck of between 1,000 and 100,000 survivors occurred "around 930,000 and 813,000 years ago ... lasted for about 117,000 years and brought human ancestors close to extinction." H. habilis and H. gautengensis Homo habilis lived from about 2.8 to 1.4 Ma. The species evolved in South and East Africa in the Late Pliocene or Early Pleistocene, 2.5–2 Ma, when it diverged from the australopithecines with the development of smaller molars and larger brains.
One of the first known hominins, it made tools from stone and perhaps animal bones, leading to its name Homo habilis (Latin for 'handy man'), bestowed by discoverer Louis Leakey. Some scientists have proposed moving this species from Homo into Australopithecus, due to the morphology of its skeleton being more adapted to living in trees than to walking on two legs like later hominins. In May 2010, a new species, Homo gautengensis, was discovered in South Africa. H. rudolfensis and H. georgicus These are proposed species names for fossils from about 1.9–1.6 Ma, whose relation to Homo habilis is not yet clear. Homo rudolfensis refers to a single, incomplete skull from Kenya. Scientists have suggested that this was a specimen of Homo habilis, but this has not been confirmed. Homo georgicus, from Georgia, may be an intermediate form between Homo habilis and Homo erectus, or a subspecies of Homo erectus. H. ergaster and H. erectus The first fossils of Homo erectus were discovered by Dutch physician Eugene Dubois in 1891 on the Indonesian island of Java. He originally named the material Anthropopithecus erectus (1892–1893, considered at this point as a chimpanzee-like fossil primate) and Pithecanthropus erectus (1893–1894, changing his mind on the basis of its morphology, which he considered to be intermediate between that of humans and apes). Years later, in the 20th century, the German physician and paleoanthropologist Franz Weidenreich (1873–1948) compared in detail the characters of Dubois' Java Man, then named Pithecanthropus erectus, with the characters of the Peking Man, then named Sinanthropus pekinensis. Weidenreich concluded in 1940 that because of their anatomical similarity with modern humans it was necessary to gather all these specimens of Java and China in a single species of the genus Homo, the species H. erectus. Homo erectus lived from about 1.8 Ma to about 70,000 years ago – which would indicate that they were probably wiped out by the Toba catastrophe; however, nearby H. floresiensis survived it. The early phase of H. erectus, from 1.8 to 1.25 Ma, is considered by some to be a separate species, H. ergaster, or as H. erectus ergaster, a subspecies of H. erectus. Many paleoanthropologists now use the term Homo ergaster for the non-Asian forms of this group, and reserve H. erectus only for those fossils that are found in Asia and meet certain skeletal and dental requirements which differ slightly from H. ergaster. In Africa in the Early Pleistocene, 1.5–1 Ma, some populations of Homo habilis are thought to have evolved larger brains and to have made more elaborate stone tools; these differences and others are sufficient for anthropologists to classify them as a new species, Homo erectus, in Africa. The evolution of locking knees and the movement of the foramen magnum are thought to be likely drivers of the larger population changes. This species also may have used fire to cook meat. Richard Wrangham notes that Homo seems to have been ground dwelling, with reduced intestinal length, smaller dentition, and "brains [swollen] to their current, horrendously fuel-inefficient size", and hypothesizes that control of fire and cooking, which released increased nutritional value, was the key adaptation that separated Homo from tree-sleeping Australopithecines. H. cepranensis and H. antecessor These are proposed as species intermediate between H. erectus and H. heidelbergensis. H. antecessor is known from fossils from Spain and England that are dated 1.2 Ma–500 ka. H.
cepranensis refers to a single skull cap from Italy, estimated to be about 800,000 years old. H. heidelbergensis H. heidelbergensis ("Heidelberg Man") lived from about 800,000 to about 300,000 years ago. Also proposed as Homo sapiens heidelbergensis or Homo sapiens paleohungaricus. H. rhodesiensis, and the Gawis cranium H. rhodesiensis, estimated to be 300,000–125,000 years old. Most current researchers place Rhodesian Man within the group of Homo heidelbergensis, though other designations such as archaic Homo sapiens and Homo sapiens rhodesiensis have been proposed. In February 2006 a fossil, the Gawis cranium, was found which might possibly be a species intermediate between H. erectus and H. sapiens or one of many evolutionary dead ends. The skull from Gawis, Ethiopia, is believed to be 500,000–250,000 years old. Only summary details are known, and the finders have not yet released a peer-reviewed study. Gawis man's facial features suggest that it is either an intermediate species or an example of a "Bodo man" female. Neanderthal and Denisovan Homo neanderthalensis, alternatively designated as Homo sapiens neanderthalensis, lived in Europe and Asia from 400,000 to about 28,000 years ago. There are a number of clear anatomical differences between anatomically modern humans (AMH) and Neanderthal specimens, many relating to the superior Neanderthal adaptation to cold environments. Neanderthal surface to volume ratio was even lower than that among modern Inuit populations, indicating superior retention of body heat. Neanderthals also had significantly larger brains, as shown from brain endocasts, casting doubt on their intellectual inferiority to modern humans. However, the higher body mass of Neanderthals may have required larger brain mass for body control. Also, recent research by Pearce, Stringer, and Dunbar has shown important differences in brain architecture. The larger size of the Neanderthal orbital chamber and occipital lobe suggests that they had a better visual acuity than modern humans, useful in the dimmer light of glacial Europe. Neanderthals may have had less brain capacity available for social functions. Inferring social group size from endocranial volume (minus occipital lobe size) suggests that Neanderthal groups may have been limited to 120 individuals, compared to 144 possible relationships for modern humans. Larger social groups could imply that modern humans had less risk of inbreeding within their clan, trade over larger areas (confirmed in the distribution of stone tools), and faster spread of social and technological innovations. All these may have all contributed to modern Homo sapiens replacing Neanderthal populations by 28,000 BP. Earlier evidence from sequencing mitochondrial DNA suggested that no significant gene flow occurred between H. neanderthalensis and H. sapiens, and that the two were separate species that shared a common ancestor about 660,000 years ago. However, a sequencing of the Neanderthal genome in 2010 indicated that Neanderthals did indeed interbreed with anatomically modern humans c. 45,000-80,000 years ago, around the time modern humans migrated out from Africa, but before they dispersed throughout Europe, Asia and elsewhere. The genetic sequencing of a 40,000-year-old human skeleton from Romania showed that 11% of its genome was Neanderthal, implying the individual had a Neanderthal ancestor 4–6 generations previously, in addition to a contribution from earlier interbreeding in the Middle East. 
Though this interbred Romanian population seems not to have been ancestral to modern humans, the finding indicates that interbreeding happened repeatedly. All modern non-African humans have about 1% to 4% (or 1.5% to 2.6% by more recent data) of their DNA derived from Neanderthals. This finding is consistent with recent studies indicating that the divergence of some human alleles dates to one Ma, although this interpretation has been questioned. Neanderthals and AMH Homo sapiens could have co-existed in Europe for as long as 10,000 years, during which AMH populations exploded, vastly outnumbering Neanderthals, possibly outcompeting them by sheer numbers. In 2008, archaeologists working at the site of Denisova Cave in the Altai Mountains of Siberia uncovered a small bone fragment from the fifth finger of a juvenile member of another human species, the Denisovans. Artifacts, including a bracelet, excavated in the cave at the same level were carbon dated to around 40,000 BP. As DNA had survived in the fossil fragment due to the cool climate of the Denisova Cave, both mtDNA and nuclear DNA were sequenced. While the divergence point of the mtDNA was unexpectedly deep in time, the full genomic sequence suggested the Denisovans belonged to the same lineage as Neanderthals, with the two diverging shortly after their line split from the lineage that gave rise to modern humans. Modern humans are known to have overlapped with Neanderthals in Europe and the Near East for possibly more than 40,000 years, and the discovery raises the possibility that Neanderthals, Denisovans, and modern humans may have co-existed and interbred. The existence of this distant branch creates a much more complex picture of humankind during the Late Pleistocene than previously thought. Evidence has also been found that as much as 6% of the DNA of some modern Melanesians derive from Denisovans, indicating limited interbreeding in Southeast Asia. Alleles thought to have originated in Neanderthals and Denisovans have been identified at several genetic loci in the genomes of modern humans outside Africa. HLA haplotypes from Denisovans and Neanderthal represent more than half the HLA alleles of modern Eurasians, indicating strong positive selection for these introgressed alleles. Corinne Simoneti at Vanderbilt University, in Nashville and her team have found from medical records of 28,000 people of European descent that the presence of Neanderthal DNA segments may be associated with a higher rate of depression. The flow of genes from Neanderthal populations to modern humans was not all one way. Sergi Castellano of the Max Planck Institute for Evolutionary Anthropology reported in 2016 that while Denisovan and Neanderthal genomes are more related to each other than they are to us, Siberian Neanderthal genomes show more similarity to modern human genes than do European Neanderthal populations. This suggests Neanderthal populations interbred with modern humans around 100,000 years ago, probably somewhere in the Near East. Studies of a Neanderthal child at Gibraltar show from brain development and tooth eruption that Neanderthal children may have matured more rapidly than Homo sapiens. H. floresiensis H. floresiensis, which lived from approximately 190,000 to 50,000 years before present (BP), has been nicknamed the hobbit for its small size, possibly a result of insular dwarfism. H. 
floresiensis is intriguing both for its size and its age, being an example of a recent species of the genus Homo that exhibits derived traits not shared with modern humans. In other words, H. floresiensis shares a common ancestor with modern humans, but split from the modern human lineage and followed a distinct evolutionary path. The main find was a skeleton believed to be a woman of about 30 years of age. Found in 2003, it has been dated to approximately 18,000 years old. The living woman was estimated to be one meter in height, with a brain volume of just 380 cm3 (considered small for a chimpanzee and less than a third of the H. sapiens average of 1400 cm3). However, there is an ongoing debate over whether H. floresiensis is indeed a separate species. Some scientists hold that H. floresiensis was a modern H. sapiens with pathological dwarfism. This hypothesis is supported in part by the fact that some modern humans who live on Flores, the Indonesian island where the skeleton was found, are pygmies. This, coupled with pathological dwarfism, could have resulted in a significantly diminutive human. The other major challenge to H. floresiensis as a separate species is that it was found with tools associated only with H. sapiens. The hypothesis of pathological dwarfism, however, fails to explain additional anatomical features that are unlike those of modern humans (diseased or not) but much like those of ancient members of our genus. Aside from cranial features, these features include the form of bones in the wrist, forearm, shoulder, knees, and feet. Additionally, this hypothesis fails to explain the find of multiple examples of individuals with these same characteristics, indicating they were common to a large population, and not limited to one individual. In 2016, fossil teeth and a partial jaw from hominins assumed to be ancestral to H. floresiensis were discovered at Mata Menge, another site on Flores, at some distance from Liang Bua. They date to about 700,000 years ago and are noted by Australian archaeologist Gerrit van den Bergh for being even smaller than the later fossils. H. luzonensis A small number of specimens from the island of Luzon, dated 50,000 to 67,000 years ago, have recently been assigned by their discoverers, based on dental characteristics, to a novel human species, H. luzonensis. H. sapiens H. sapiens (the adjective sapiens is Latin for "wise" or "intelligent") emerged in Africa around 300,000 years ago, likely derived from H. heidelbergensis or a related lineage. In September 2019, scientists reported the computerized determination, based on 260 CT scans, of a virtual skull shape of the last common human ancestor of modern humans (H. sapiens), representative of the earliest modern humans, and suggested that modern humans arose between 260,000 and 350,000 years ago through a merging of populations in East and South Africa. Between 400,000 years ago and the second interglacial period in the Middle Pleistocene, around 250,000 years ago, the trend in intra-cranial volume expansion and the elaboration of stone tool technologies developed, providing evidence for a transition from H. erectus to H. sapiens. The direct evidence suggests there was a migration of H. erectus out of Africa, then a further speciation of H. sapiens from H. erectus in Africa. A subsequent migration (both within and out of Africa) eventually replaced the earlier dispersed H. erectus. This migration and origin theory is usually referred to as the "recent single-origin hypothesis" or "out of Africa" theory. H.
sapiens interbred with archaic humans both in Africa and in Eurasia, in Eurasia notably with Neanderthals and Denisovans. The Toba catastrophe theory, which postulates a population bottleneck for H. sapiens about 70,000 years ago, was controversial from its first proposal in the 1990s and by the 2010s had very little support. Distinctive human genetic variability has arisen as the result of the founder effect, by archaic admixture and by recent evolutionary pressures. Anatomical changes Since Homo sapiens separated from its last common ancestor shared with chimpanzees, human evolution is characterized by a number of morphological, developmental, physiological, behavioral, and environmental changes. Environmental (cultural) evolution discovered much later during the Pleistocene played a significant role in human evolution observed via human transitions between subsistence systems. The most significant of these adaptations are bipedalism, increased brain size, lengthened ontogeny (gestation and infancy), and decreased sexual dimorphism. The relationship between these changes is the subject of ongoing debate. Other significant morphological changes included the evolution of a power and precision grip, a change first occurring in H. erectus. Bipedalism Bipedalism, (walking on two legs), is the basic adaptation of the hominid and is considered the main cause behind a suite of skeletal changes shared by all bipedal hominids. The earliest hominin, of presumably primitive bipedalism, is considered to be either Sahelanthropus or Orrorin, both of which arose some 6 to 7 million years ago. The non-bipedal knuckle-walkers, the gorillas and chimpanzees, diverged from the hominin line over a period covering the same time, so either Sahelanthropus or Orrorin may be our last shared ancestor. Ardipithecus, a full biped, arose approximately 5.6 million years ago. The early bipeds eventually evolved into the australopithecines and still later into the genus Homo. There are several theories of the adaptation value of bipedalism. It is possible that bipedalism was favored because it freed the hands for reaching and carrying food, saved energy during locomotion, enabled long-distance running and hunting, provided an enhanced field of vision, and helped avoid hyperthermia by reducing the surface area exposed to direct sun; features all advantageous for thriving in the new savanna and woodland environment created as a result of the East African Rift Valley uplift versus the previous closed forest habitat. A 2007 study provides support for the hypothesis that bipedalism evolved because it used less energy than quadrupedal knuckle-walking. However, recent studies suggest that bipedality without the ability to use fire would not have allowed global dispersal. This change in gait saw a lengthening of the legs proportionately when compared to the length of the arms, which were shortened through the removal of the need for brachiation. Another change is the shape of the big toe. Recent studies suggest that australopithecines still lived part of the time in trees as a result of maintaining a grasping big toe. This was progressively lost in habilines. Anatomically, the evolution of bipedalism has been accompanied by a large number of skeletal changes, not just to the legs and pelvis, but also to the vertebral column, feet and ankles, and skull. The femur evolved into a slightly more angular position to move the center of gravity toward the geometric center of the body. 
The knee and ankle joints became increasingly robust to better support increased weight. To support the increased weight on each vertebra in the upright position, the human vertebral column became S-shaped and the lumbar vertebrae became shorter and wider. In the feet the big toe moved into alignment with the other toes to help in forward locomotion. The arms and forearms shortened relative to the legs, making it easier to run. The foramen magnum migrated under the skull and more anteriorly. The most significant changes occurred in the pelvic region, where the long downward-facing iliac blade was shortened and widened as a requirement for keeping the center of gravity stable while walking; bipedal hominids have a shorter but broader, bowl-like pelvis due to this. A drawback is that the birth canal of bipedal apes is smaller than in knuckle-walking apes, though in modern humans it has widened in comparison to that of australopithecines, permitting the passage of newborns despite the increase in cranial size. This widening is limited to the upper portion of the canal, since further increase can hinder normal bipedal movement. The shortening of the pelvis and smaller birth canal evolved as a requirement for bipedalism and had significant effects on the process of human birth, which is much more difficult in modern humans than in other primates. During human birth, because of the variation in size of the pelvic region, the fetal head must be in a transverse position (compared to the mother) during entry into the birth canal and rotate about 90 degrees upon exit. The smaller birth canal became a limiting factor to brain size increases in early humans and prompted a shorter gestation period, leading to the relative immaturity of human offspring, who are unable to walk much before 12 months and have greater neoteny compared to other primates, who are mobile at a much earlier age. The increased brain growth after birth and the increased dependency of children on mothers had a major effect upon the female reproductive cycle, and led to the more frequent appearance of alloparenting in humans when compared with other hominids. Delayed human sexual maturity also led to the evolution of menopause, with one explanation, the grandmother hypothesis, proposing that elderly women could better pass on their genes by taking care of their daughters' offspring, as compared to having more children of their own. Encephalization The human species eventually developed a much larger brain than that of other primates, typically nearly three times the size of a chimpanzee or gorilla brain in modern humans. After a period of stasis with Australopithecus anamensis and Ardipithecus, species which had smaller brains as a result of their bipedal locomotion, the pattern of encephalization started with Homo habilis, whose brain was slightly larger than that of chimpanzees. This evolution continued in Homo erectus, with a brain volume of roughly 850–1100 cm3, and reached a maximum in Neanderthals, with roughly 1200–1900 cm3, larger even than in modern Homo sapiens. This brain increase manifested during postnatal brain growth, far exceeding that of other apes (heterochrony). It also allowed for extended periods of social learning and language acquisition in juvenile humans, beginning as much as 2 million years ago. Encephalization may be due to a dependency on calorie-dense, difficult-to-acquire food. Furthermore, the changes in the structure of human brains may be even more significant than the increase in size.
Fossilized skulls show that the brain size of early humans fell within the range of modern humans by 300,000 years ago, but that the brain only attained its present-day shape between 100,000 and 35,000 years ago. The temporal lobes, which contain centers for language processing, have increased disproportionately, as has the prefrontal cortex, which has been related to complex decision-making and moderating social behavior. Encephalization has been tied to increased starches and meat in the diet; however, a 2022 meta-study called into question the role of meat. Another proposed factor is the development of cooking, and it has also been proposed that intelligence increased as a response to an increased necessity for solving social problems as human society became more complex. Changes in skull morphology, such as smaller mandibles and mandible muscle attachments, allowed more room for the brain to grow. The increase in volume of the neocortex also included a rapid increase in size of the cerebellum. Its function has traditionally been associated with balance and fine motor control, but more recently with speech and cognition. The great apes, including hominids, have a more pronounced cerebellum relative to the neocortex than other primates. It has been suggested that because of its function of sensory-motor control and learning complex muscular actions, the cerebellum may have underpinned human technological adaptations, including the preconditions of speech. The immediate survival advantage of encephalization is difficult to discern, as the major brain changes from Homo erectus to Homo heidelbergensis were not accompanied by major changes in technology. It has been suggested that the changes were mainly social and behavioural, including increased empathic abilities, increases in size of social groups, and increased behavioral plasticity. Humans are unique in the ability to acquire information through social transmission and adapt that information. The emerging field of cultural evolution studies human sociocultural change from an evolutionary perspective. Sexual dimorphism The reduced degree of sexual dimorphism in humans is visible primarily in the reduction of the male canine tooth relative to other ape species (except gibbons) and reduced brow ridges and general robustness of males. Another important physiological change related to sexuality in humans was the evolution of hidden estrus. Humans are the only hominoids in which the female is fertile year round and in which no special signals of fertility are produced by the body (such as genital swelling or overt changes in proceptivity during estrus). Nonetheless, humans retain a degree of sexual dimorphism in the distribution of body hair and subcutaneous fat, and in overall size, males being around 15% larger than females. These changes taken together have been interpreted as a result of an increased emphasis on pair bonding as a possible solution to the requirement for increased parental investment due to the prolonged infancy of offspring. Ulnar opposition The ulnar opposition—the contact between the thumb and the tip of the little finger of the same hand—is unique to the genus Homo, including Neanderthals, the Sima de los Huesos hominins and anatomically modern humans. In other primates, the thumb is short and unable to touch the little finger. The ulnar opposition facilitates the precision grip and power grip of the human hand, underlying all of its skilled manipulations.
Other changes A number of other changes have also characterized the evolution of humans, among them an increased reliance on vision rather than smell (highly reduced olfactory bulb); a longer juvenile developmental period and higher infant dependency; a smaller gut and small, misaligned teeth; faster basal metabolism; loss of body hair; an increase in eccrine sweat gland density to about ten times that of other catarrhine primates, even though humans use 30% to 50% less water per day than chimpanzees and gorillas; more REM sleep but less sleep in total; a change in the shape of the dental arcade from U-shaped to parabolic; development of a chin (found in Homo sapiens alone); styloid processes; and a descended larynx. As the human hand and arms adapted to the making of tools and were used less for climbing, the shoulder blades changed too. As a side effect, this change allowed human ancestors to throw objects with greater force, speed and accuracy. Use of tools The use of tools has been interpreted as a sign of intelligence, and it has been theorized that tool use may have stimulated certain aspects of human evolution, especially the continued expansion of the human brain. Paleontology has yet to explain the expansion of this organ over millions of years despite its being extremely demanding in terms of energy consumption. The brain of a modern human consumes, on average, about 13 watts (260 kilocalories per day), a fifth of the body's resting power consumption. Increased tool use would allow hunting for energy-rich meat products, and would enable processing more energy-rich plant products. Researchers have suggested that early hominins were thus under evolutionary pressure to increase their capacity to create and use tools. Precisely when early humans started to use tools is difficult to determine, because the more primitive these tools are (for example, sharp-edged stones) the more difficult it is to decide whether they are natural objects or human artifacts. There is some evidence that the australopithecines (4 Ma) may have used broken bones as tools, but this is debated. Many species make and use tools, but it is the human genus that dominates the areas of making and using more complex tools. The oldest known tools are flakes from West Turkana, Kenya, which date to 3.3 million years ago. The next oldest stone tools are from Gona, Ethiopia, and are considered the beginning of the Oldowan technology. These tools date to about 2.6 million years ago. A Homo fossil was found near some Oldowan tools, and its age was noted at 2.3 million years old, suggesting that the Homo species may indeed have created and used these tools. It is a possibility but does not yet represent solid evidence. The third metacarpal styloid process enables the hand bone to lock into the wrist bones, allowing for greater amounts of pressure to be applied to the wrist and hand from a grasping thumb and fingers. It allows humans the dexterity and strength to make and use complex tools. This unique anatomical feature separates humans from apes and other nonhuman primates, and is not seen in human fossils older than 1.8 million years. Bernard Wood noted that Paranthropus co-existed with the early Homo species in the area of the "Oldowan Industrial Complex" over roughly the same span of time. Although there is no direct evidence which identifies Paranthropus as the tool makers, their anatomy lends indirect evidence of their capabilities in this area.
Most paleoanthropologists agree that the early Homo species were indeed responsible for most of the Oldowan tools found. They argue that when most of the Oldowan tools were found in association with human fossils, Homo was always present, but Paranthropus was not. In 1994, Randall Susman used the anatomy of opposable thumbs as the basis for his argument that both the Homo and Paranthropus species were toolmakers. He compared bones and muscles of human and chimpanzee thumbs, finding that humans have 3 muscles which are lacking in chimpanzees. Humans also have thicker metacarpals with broader heads, allowing more precise grasping than the chimpanzee hand can perform. Susman posited that modern anatomy of the human opposable thumb is an evolutionary response to the requirements associated with making and handling tools and that both species were indeed toolmakers. Transition to behavioral modernity Anthropologists describe modern human behavior to include cultural and behavioral traits such as specialization of tools, use of jewellery and images (such as cave drawings), organization of living space, rituals (such as grave gifts), specialized hunting techniques, exploration of less hospitable geographical areas, and barter trade networks, as well as more general traits such as language and complex symbolic thinking. Debate continues as to whether a "revolution" led to modern humans ("big bang of human consciousness"), or whether the evolution was more gradual. Until about 50,000–40,000 years ago, the use of stone tools seems to have progressed stepwise. Each phase (H. habilis, H. ergaster, H. neanderthalensis) marked a new technology, followed by very slow development until the next phase. Currently paleoanthropologists are debating whether these Homo species possessed some or many modern human behaviors. They seem to have been culturally conservative, maintaining the same technologies and foraging patterns over very long periods. Around 50,000 BP, human culture started to evolve more rapidly. The transition to behavioral modernity has been characterized by some as a "Great Leap Forward", or as the "Upper Palaeolithic Revolution", due to the sudden appearance in the archaeological record of distinctive signs of modern behavior and big game hunting. Evidence of behavioral modernity significantly earlier also exists from Africa, with older evidence of abstract imagery, widened subsistence strategies, more sophisticated tools and weapons, and other "modern" behaviors, and many scholars have recently argued that the transition to modernity occurred sooner than previously believed. Other scholars consider the transition to have been more gradual, noting that some features had already appeared among archaic African Homo sapiens 300,000–200,000 years ago. Recent evidence suggests that the Australian Aboriginal population separated from the African population 75,000 years ago, and that they made a sea journey 60,000 years ago, which may diminish the significance of the Upper Paleolithic Revolution. Modern humans started burying their dead, making clothing from animal hides, hunting with more sophisticated techniques (such as using pit traps or driving animals off cliffs), and cave painting. As human culture advanced, different populations innovated existing technologies: artifacts such as fish hooks, buttons, and bone needles show signs of cultural variation, which had not been seen prior to 50,000 BP. Typically, the older H. 
neanderthalensis populations did not vary in their technologies, although the Chatelperronian assemblages have been found to be Neanderthal imitations of H. sapiens Aurignacian technologies. Recent and ongoing human evolution Anatomically modern human populations continue to evolve, as they are affected by both natural selection and genetic drift. Although selection pressure on some traits, such as resistance to smallpox, has decreased in the modern age, humans are still undergoing natural selection for many other traits. Some of these are due to specific environmental pressures, while others are related to lifestyle changes since the development of agriculture (10,000 years ago), urbanization (5,000), and industrialization (250 years ago). It has been argued that human evolution has accelerated since the development of agriculture 10,000 years ago and civilization some 5,000 years ago, resulting, it is claimed, in substantial genetic differences between different current human populations, and more recent research indicates that for some traits, the developments and innovations of human culture have driven a new form of selection that coexists with, and in some cases has largely replaced, natural selection. Particularly conspicuous is variation in superficial characteristics, such as Afro-textured hair, or the recent evolution of light skin and blond hair in some populations, which are attributed to differences in climate. Particularly strong selective pressures have resulted in high-altitude adaptation in humans, with different ones in different isolated populations. Studies of the genetic basis show that some developed very recently, with Tibetans evolving over 3,000 years to have high proportions of an allele of EPAS1 that is adaptive to high altitudes. Other evolution is related to endemic diseases: the presence of malaria selects for sickle cell trait (the heterozygous form of sickle cell gene), while in the absence of malaria, the health effects of sickle-cell anemia select against this trait. For another example, the population at risk of the severe debilitating disease kuru has significant over-representation of an immune variant of the prion protein gene G127V versus non-immune alleles. The frequency of this genetic variant is due to the survival of immune persons. Some reported trends remain unexplained and the subject of ongoing research in the novel field of evolutionary medicine: polycystic ovary syndrome (PCOS) reduces fertility and thus is expected to be subject to extremely strong negative selection, but its relative commonality in human populations suggests a counteracting selection pressure. The identity of that pressure remains the subject of some debate. Recent human evolution related to agriculture includes genetic resistance to infectious disease that has appeared in human populations by crossing the species barrier from domesticated animals, as well as changes in metabolism due to changes in diet, such as lactase persistence. Culturally-driven evolution can defy the expectations of natural selection: while human populations experience some pressure that drives a selection for producing children at younger ages, the advent of effective contraception, higher education, and changing social norms have driven the observed selection in the opposite direction. 
However, culturally-driven selection need not necessarily work counter or in opposition to natural selection: some proposals to explain the high rate of recent human brain expansion indicate a kind of feedback whereby the brain's increased social learning efficiency encourages cultural developments that in turn encourage more efficiency, which drives more complex cultural developments that demand still-greater efficiency, and so forth. Culturally-driven evolution has an advantage in that, in addition to the genetic effects, it can be observed also in the archaeological record: the development of stone tools across the Palaeolithic period connects to culturally-driven cognitive development in the form of skill acquisition supported by the culture and the development of increasingly complex technologies and the cognitive ability to elaborate them. In contemporary times, since industrialization, some trends have been observed: for instance, menopause is evolving to occur later. Other reported trends appear to include lengthening of the human reproductive period and reduction in cholesterol levels, blood glucose and blood pressure in some populations. History of study Before Darwin Homo, the name of the biological genus to which humans belong, is Latin for 'human'. It was chosen originally by Carl Linnaeus in his classification system. The English word human is from the Latin humanus, the adjectival form of homo. The Latin homo derives from an Indo-European root meaning 'earth'. Linnaeus and other scientists of his time also considered the great apes to be the closest relatives of humans based on morphological and anatomical similarities. Darwin The possibility of linking humans with earlier apes by descent became clear only after 1859 with the publication of Charles Darwin's On the Origin of Species, in which he argued for the idea of the evolution of new species from earlier ones. Darwin's book did not address the question of human evolution, saying only that "Light will be thrown on the origin of man and his history." The first debates about the nature of human evolution arose between Thomas Henry Huxley and Richard Owen. Huxley argued for human evolution from apes by illustrating many of the similarities and differences between humans and other apes, and did so particularly in his 1863 book Evidence as to Man's Place in Nature. Many of Darwin's early supporters (such as Alfred Russel Wallace and Charles Lyell) did not initially agree that the origin of the mental capacities and the moral sensibilities of humans could be explained by natural selection, though this later changed. Darwin applied the theory of evolution and sexual selection to humans in his 1871 book The Descent of Man, and Selection in Relation to Sex. First fossils A major problem in the 19th century was the lack of fossil intermediaries. Neanderthal remains were discovered in a limestone quarry in 1856, three years before the publication of On the Origin of Species, and Neanderthal fossils had been discovered in Gibraltar even earlier, but it was originally claimed that these were the remains of a modern human who had suffered some kind of illness. Despite the 1891 discovery by Eugène Dubois of what is now called Homo erectus at Trinil, Java, it was only in the 1920s, when such fossils were discovered in Africa, that intermediate species began to accumulate. In 1925, Raymond Dart described Australopithecus africanus. The type specimen was the Taung Child, an australopithecine infant which was discovered in a cave.
The child's remains were a remarkably well-preserved tiny skull and an endocast of the brain. Although the brain was small (410 cm3), its shape was rounded, unlike that of chimpanzees and gorillas, and more like a modern human brain. Also, the specimen showed short canine teeth, and the position of the foramen magnum (the hole in the skull where the spine enters) was evidence of bipedal locomotion. All of these traits convinced Dart that the Taung Child was a bipedal human ancestor, a transitional form between apes and humans. The East African fossils During the 1960s and 1970s, hundreds of fossils were found in East Africa in the regions of the Olduvai Gorge and Lake Turkana. These searches were carried out by the Leakey family, with Louis Leakey and his wife Mary Leakey, and later their son Richard and daughter-in-law Meave, fossil hunters and paleoanthropologists. From the fossil beds of Olduvai and Lake Turkana they amassed specimens of the early hominins: the australopithecines and Homo species, and even H. erectus. These finds cemented Africa as the cradle of humankind. In the late 1970s and the 1980s, Ethiopia emerged as the new hot spot of paleoanthropology after "Lucy", the most complete fossil member of the species Australopithecus afarensis, was found in 1974 by Donald Johanson near Hadar in the desert-like Afar Triangle region of northern Ethiopia. Although the specimen had a small brain, the pelvis and leg bones were almost identical in function to those of modern humans, showing with certainty that these hominins had walked erect. Lucy was classified as a new species, Australopithecus afarensis, which is thought to be more closely related to the genus Homo as a direct ancestor, or as a close relative of an unknown ancestor, than any other known hominid or hominin from this early time range. (The specimen was nicknamed "Lucy" after the Beatles' song "Lucy in the Sky with Diamonds", which was played loudly and repeatedly in the camp during the excavations.) The Afar Triangle area would later yield discovery of many more hominin fossils, particularly those uncovered or described by teams headed by Tim D. White in the 1990s, including Ardipithecus ramidus and A. kadabba. In 2013, fossil skeletons of Homo naledi, an extinct species of hominin assigned (provisionally) to the genus Homo, were found in the Rising Star Cave system, a site in South Africa's Cradle of Humankind region in Gauteng province near Johannesburg. To date, fossils of at least fifteen individuals, amounting to 1,550 specimens, have been excavated from the cave. The species is characterized by a body mass and stature similar to small-bodied human populations, a smaller endocranial volume similar to Australopithecus, and a cranial morphology (skull shape) similar to early Homo species. The skeletal anatomy combines primitive features known from australopithecines with features known from early hominins. The individuals show signs of having been deliberately disposed of within the cave near the time of death. The fossils were dated close to 250,000 years ago, and thus are not ancestral to but contemporary with the first appearance of larger-brained anatomically modern humans. The genetic revolution The genetic revolution in studies of human evolution started when Vincent Sarich and Allan Wilson measured the strength of immunological cross-reactions of blood serum albumin between pairs of creatures, including humans and African apes (chimpanzees and gorillas).
The strength of the reaction could be expressed numerically as an immunological distance, which was in turn proportional to the number of amino acid differences between homologous proteins in different species. By constructing a calibration curve of the immunological distance of species pairs with known divergence times in the fossil record, the data could be used as a molecular clock to estimate the times of divergence of pairs with poorer or unknown fossil records. In their seminal 1967 paper in Science, Sarich and Wilson estimated the divergence time of humans and apes as four to five million years ago, at a time when standard interpretations of the fossil record gave this divergence as at least 10 to as much as 30 million years. Subsequent fossil discoveries, notably "Lucy", and reinterpretation of older fossil materials, notably Ramapithecus, showed the younger estimates to be correct and validated the albumin method. Progress in DNA sequencing, specifically mitochondrial DNA (mtDNA) and then Y-chromosome DNA (Y-DNA), advanced the understanding of human origins. Application of the molecular clock principle revolutionized the study of molecular evolution. On the basis of a separation from the orangutan between 10 and 20 million years ago, earlier studies of the molecular clock suggested that there were about 76 new mutations per generation, arising in human children but not inherited from their parents; this evidence supported the divergence time between hominins and chimpanzees noted above. However, a 2012 study in Iceland of 78 children and their parents suggests a mutation rate of only 36 mutations per generation; this datum extends the separation between humans and chimpanzees to an earlier period, greater than 7 million years ago (Ma). Additional research with 226 offspring of wild chimpanzee populations in eight locations suggests that chimpanzees reproduce at age 26.5 years on average, which suggests the human divergence from chimpanzees occurred between 7 and 13 mya. These data also suggest that Ardipithecus (4.5 Ma), Orrorin (6 Ma) and Sahelanthropus (7 Ma) all may be on the hominid lineage, and even that the separation may have occurred outside the East African Rift region. Furthermore, analysis of the two species' genes in 2006 provides evidence that after human ancestors had started to diverge from chimpanzees, interspecies mating between "proto-humans" and "proto-chimpanzees" nonetheless occurred regularly enough to change certain genes in the new gene pool: A new comparison of the human and chimpanzee genomes suggests that after the two lineages separated, they may have begun interbreeding... A principal finding is that the X chromosomes of humans and chimpanzees appear to have diverged about 1.2 million years more recently than the other chromosomes. The research suggests: There were in fact two splits between the human and chimpanzee lineages, with the first being followed by interbreeding between the two populations and then a second split. The suggestion of a hybridization has startled paleoanthropologists, who nonetheless are treating the new genetic data seriously. The quest for the earliest hominin In the 1990s, several teams of paleoanthropologists were working throughout Africa looking for evidence of the earliest divergence of the hominin lineage from the great apes. In 1994, Meave Leakey discovered Australopithecus anamensis. The find was overshadowed by Tim D. White's 1995 discovery of Ardipithecus ramidus, which pushed back the fossil record to about 4.4 million years ago.
In 2000, Martin Pickford and Brigitte Senut discovered, in the Tugen Hills of Kenya, a 6-million-year-old bipedal hominin which they named Orrorin tugenensis. And in 2001, a team led by Michel Brunet discovered the skull of Sahelanthropus tchadensis, which was dated to around 7 million years ago, and which Brunet argued was bipedal, and therefore a hominid—that is, a hominin (cf. Hominidae; see the terms "hominid" and "hominin"). Human dispersal Anthropologists in the 1980s were divided regarding some details of reproductive barriers and migratory dispersals of the genus Homo. Subsequently, genetics has been used to investigate and resolve these issues. According to the Sahara pump theory, evidence suggests that the genus Homo has migrated out of Africa at least three and possibly four times (e.g. Homo erectus, Homo heidelbergensis and two or three times for Homo sapiens). Recent evidence suggests these dispersals are closely related to fluctuating periods of climate change. Recent evidence also suggests that humans may have left Africa half a million years earlier than previously thought. A joint Franco-Indian team has found human artifacts in the Siwalik Hills north of New Delhi dating back at least 2.6 million years. This is earlier than the previous earliest finding of genus Homo at Dmanisi, in Georgia, dating to 1.85 million years. Although controversial, tools found at a Chinese cave strengthen the case that humans used tools as far back as 2.48 million years ago. This suggests that the Asian "Chopper" tool tradition, found in Java and northern China, may have left Africa before the appearance of the Acheulian hand axe. Dispersal of modern Homo sapiens Up until the genetic evidence became available, there were two dominant models for the dispersal of modern humans. The multiregional hypothesis proposed that the genus Homo contained only a single interconnected population as it does today (not separate species), and that its evolution took place worldwide continuously over the last couple of million years. This model was proposed in 1988 by Milford H. Wolpoff. In contrast, the "out of Africa" model proposed that modern H. sapiens speciated in Africa recently (that is, approximately 200,000 years ago) and the subsequent migration through Eurasia resulted in the nearly complete replacement of other Homo species. This model has been developed by Chris Stringer and Peter Andrews. Sequencing mtDNA and Y-DNA sampled from a wide range of indigenous populations revealed ancestral information relating to both male and female genetic heritage, and strengthened the "out of Africa" theory while weakening the views of multiregional evolutionism. Differences in the aligned genetic trees were interpreted as supporting a recent single origin. "Out of Africa" has thus gained much support from research using female mitochondrial DNA and the male Y chromosome. After analysing genealogy trees constructed using 133 types of mtDNA, researchers concluded that all were descended from a female African progenitor, dubbed Mitochondrial Eve. "Out of Africa" is also supported by the fact that mitochondrial genetic diversity is highest among African populations. A broad study of African genetic diversity, headed by Sarah Tishkoff, found the San people had the greatest genetic diversity among the 113 distinct populations sampled, making them one of 14 "ancestral population clusters". The research also located a possible origin of modern human migration in southwestern Africa, near the coastal border of Namibia and Angola.
The fossil evidence was insufficient for archaeologist Richard Leakey to resolve the debate about exactly where in Africa modern humans first appeared. Studies of haplogroups in Y-chromosomal DNA and mitochondrial DNA have largely supported a recent African origin. All the evidence from autosomal DNA also predominantly supports a Recent African origin. However, evidence for archaic admixture in modern humans, both in Africa and later, throughout Eurasia has recently been suggested by a number of studies. Recent sequencing of Neanderthal and Denisovan genomes shows that some admixture with these populations has occurred. All modern human groups outside Africa have 1–4% or (according to more recent research) about 1.5–2.6% Neanderthal alleles in their genome, and some Melanesians have an additional 4–6% of Denisovan alleles. These new results do not contradict the "out of Africa" model, except in its strictest interpretation, although they make the situation more complex. After recovery from a genetic bottleneck that some researchers speculate might be linked to the Toba supervolcano catastrophe, a fairly small group left Africa and interbred with Neanderthals, probably in the Middle East, on the Eurasian steppe or even in North Africa before their departure. Their still predominantly African descendants spread to populate the world. A fraction in turn interbred with Denisovans, probably in southeastern Asia, before populating Melanesia. HLA haplotypes of Neanderthal and Denisova origin have been identified in modern Eurasian and Oceanian populations. The Denisovan EPAS1 gene has also been found in Tibetan populations. Studies of the human genome using machine learning have identified additional genetic contributions in Eurasians from an "unknown" ancestral population potentially related to the Neanderthal-Denisovan lineage. There are still differing theories on whether there was a single exodus from Africa or several. A multiple dispersal model involves the Southern Dispersal theory, which has gained support in recent years from genetic, linguistic and archaeological evidence. In this theory, there was a coastal dispersal of modern humans from the Horn of Africa crossing the Bab el Mandib to Yemen at a lower sea level around 70,000 years ago. This group helped to populate Southeast Asia and Oceania, explaining the discovery of early human sites in these areas much earlier than those in the Levant. This group seems to have been dependent upon marine resources for their survival. Stephen Oppenheimer has proposed a second wave of humans may have later dispersed through the Persian Gulf oases, and the Zagros mountains into the Middle East. Alternatively it may have come across the Sinai Peninsula into Asia, from shortly after 50,000 yrs BP, resulting in the bulk of the human populations of Eurasia. It has been suggested that this second group possibly possessed a more sophisticated "big game hunting" tool technology and was less dependent on coastal food sources than the original group. Much of the evidence for the first group's expansion would have been destroyed by the rising sea levels at the end of each glacial maximum. The multiple dispersal model is contradicted by studies indicating that the populations of Eurasia and the populations of Southeast Asia and Oceania are all descended from the same mitochondrial DNA L3 lineages, which support a single migration out of Africa that gave rise to all non-African populations. 
On the basis of the early date of Badoshan Iranian Aurignacian, Oppenheimer suggests that this second dispersal may have occurred with a pluvial period about 50,000 years before the present, with modern human big-game hunting cultures spreading up the Zagros Mountains, carrying modern human genomes from Oman, throughout the Persian Gulf, northward into Armenia and Anatolia, with a variant travelling south into Israel and to Cyrenaica. Recent genetic evidence suggests that all modern non-African populations, including those of Eurasia and Oceania, are descended from a single wave that left Africa between 65,000 and 50,000 years ago. Evidence The evidence on which scientific accounts of human evolution are based comes from many fields of natural science. The main source of knowledge about the evolutionary process has traditionally been the fossil record, but since the development of genetics beginning in the 1970s, DNA analysis has come to occupy a place of comparable importance. The studies of ontogeny, phylogeny and especially evolutionary developmental biology of both vertebrates and invertebrates offer considerable insight into the evolution of all life, including how humans evolved. The specific study of the origin and life of humans is anthropology, particularly paleoanthropology, which focuses on the study of human prehistory. Evidence from genetics The closest living relatives of humans are bonobos and chimpanzees (both genus Pan) and gorillas (genus Gorilla). With the sequencing of both the human and chimpanzee genome, estimates of the similarity between their DNA sequences range between 95% and 99%. It is also noteworthy that mice share around 97.5% of their working DNA with humans. By using the technique called the molecular clock, which estimates the time required for a given number of divergent mutations to accumulate between two lineages, the approximate date for the split between lineages can be calculated. The gibbons (family Hylobatidae) and then the orangutans (genus Pongo) were the first groups to split from the line leading to the hominins, including humans—followed by gorillas (genus Gorilla), and, ultimately, by the chimpanzees (genus Pan). The splitting date between hominin and chimpanzee lineages is placed by some between 4 and 8 million years ago, that is, during the Late Miocene. Speciation, however, appears to have been unusually drawn out. Initial divergence occurred sometime between 7 and 13 million years ago, but ongoing hybridization blurred the separation and delayed complete separation during several millions of years. Patterson (2006) dated the final divergence at no more than 6.3 million years ago. Genetic evidence has also been employed to compare species within the genus Homo, investigating gene flow between early modern humans and Neanderthals, and to enhance the understanding of the early human migration patterns and splitting dates. By comparing the parts of the genome that are not under natural selection and which therefore accumulate mutations at a fairly steady rate, it is possible to reconstruct a genetic tree incorporating the entire human species since the last shared ancestor. Each time a certain mutation (single-nucleotide polymorphism) appears in an individual and is passed on to his or her descendants, a haplogroup is formed, including all of the descendants of the individual who will also carry that mutation.
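The molecular-clock reasoning described above can be made concrete with a small calculation. The Python sketch below is only illustrative: the per-generation mutation count and the chimpanzee generation time echo figures quoted earlier in this article, while the genome size, the assumed 1.0% neutral divergence, and all function and variable names are assumptions introduced here for the example rather than values from any particular study.

# Minimal molecular-clock sketch (illustrative assumptions, not a published method).
# Idea: neutral mutations accumulate at a roughly constant rate, so the fraction of
# neutral sites that differ between two genomes is proportional to the time since
# their lineages split.

MUTATIONS_PER_GENERATION = 36     # de novo mutations per child (figure quoted above)
GENOME_SIZE_BP = 3.2e9            # assumed haploid genome size in base pairs
GENERATION_TIME_YEARS = 26.5      # average chimpanzee reproductive age quoted above

# Per-site, per-year mutation rate along a single lineage.
mu = MUTATIONS_PER_GENERATION / GENOME_SIZE_BP / GENERATION_TIME_YEARS

def divergence_time_years(neutral_divergence: float) -> float:
    """Estimate the time since two lineages split.

    neutral_divergence is the fraction of neutral sites that differ between the
    two genomes. Differences accumulate along both branches, hence the factor of 2.
    """
    return neutral_divergence / (2.0 * mu)

# With an assumed 1.0% neutral divergence, the estimate lands near 12 million
# years, within the 7 to 13 million year range discussed above.
print(f"{divergence_time_years(0.010) / 1e6:.1f} million years")

Changing the assumed generation time or divergence fraction shifts the answer by several million years, which is exactly the sensitivity that the mutation-rate revisions discussed above introduced into published split-date estimates.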
By comparing mitochondrial DNA, which is inherited only from the mother, geneticists have concluded that the last female common ancestor whose genetic marker is found in all modern humans, the so-called mitochondrial Eve, must have lived around 200,000 years ago. Human evolutionary genetics studies how human genomes differ among individuals, the evolutionary past that gave rise to them, and their current effects. Differences between genomes have anthropological, medical and forensic implications and applications. Genetic data can provide important insight into human evolution. In May 2023, scientists reported a more complicated pathway of human evolution than previously understood. According to the studies, humans evolved from different places and times in Africa, instead of from a single location and period of time. Evidence from the fossil record There is little fossil evidence for the divergence of the gorilla, chimpanzee and hominin lineages. The earliest fossils that have been proposed as members of the hominin lineage are Sahelanthropus tchadensis, dating from about 7 million years ago, Orrorin tugenensis, dating from about 6 million years ago, and Ardipithecus kadabba, dating to about 5.6 million years ago. Each of these has been argued to be a bipedal ancestor of later hominins but, in each case, the claims have been contested. It is also possible that one or more of these species are ancestors of another branch of African apes, or that they represent a shared ancestor between hominins and other apes. The question of the relationship between these early fossil species and the hominin lineage is still to be resolved. From these early species, the australopithecines arose around 4 million years ago and diverged into robust (also called Paranthropus) and gracile branches, one of which (possibly A. garhi) probably went on to become ancestors of the genus Homo. The australopithecine species that is best represented in the fossil record is Australopithecus afarensis, with more than 100 fossil individuals represented, found from Northern Ethiopia (such as the famous "Lucy"), to Kenya, and South Africa. Fossils of robust australopithecines such as A. robustus (or alternatively Paranthropus robustus) and A./P. boisei are particularly abundant in South Africa at sites such as Kromdraai and Swartkrans, and around Lake Turkana in Kenya. The earliest member of the genus Homo is Homo habilis, which evolved around 2.8 million years ago. H. habilis is the first species for which we have positive evidence of the use of stone tools. They developed the Oldowan lithic technology, named after the Olduvai Gorge in which the first specimens were found. Some scientists consider Homo rudolfensis, a larger bodied group of fossils with similar morphology to the original H. habilis fossils, to be a separate species, while others consider them to be part of H. habilis—simply representing intraspecies variation, or perhaps even sexual dimorphism. The brains of these early hominins were about the same size as that of a chimpanzee, and their main adaptation was bipedalism, an adaptation to terrestrial living. During the next million years, a process of encephalization began and, by the arrival of H. erectus in the fossil record about 1.9 million years ago, cranial capacity had doubled. H. erectus were the first of the hominins to emigrate from Africa, and, from about 1.8 million years ago, this species spread through Africa, Asia, and Europe. One population of H. erectus, also sometimes classified as a separate species, H. ergaster, remained in Africa and evolved into H. sapiens. It is believed that H. erectus and H. ergaster were the first to use fire and complex tools.
In Eurasia, H. erectus evolved into species such as H. antecessor, H. heidelbergensis and H. neanderthalensis. The earliest fossils of anatomically modern humans are from the Middle Paleolithic, about 300,000–200,000 years ago, such as the Herto and Omo remains of Ethiopia, the Jebel Irhoud remains of Morocco, and the Florisbad remains of South Africa; later fossils from the Skhul Cave in Israel and from Southern Europe begin around 90,000 years ago. As modern humans spread out from Africa, they encountered other hominins such as H. neanderthalensis and the Denisovans, who may have evolved from populations of H. erectus that had left Africa around 2 million years ago. The nature of interaction between early humans and these sister species has been a long-standing source of controversy, the question being whether humans replaced these earlier species or whether they were in fact similar enough to interbreed, in which case these earlier populations may have contributed genetic material to modern humans. This migration out of Africa is estimated to have begun about 70,000–50,000 years BP and modern humans subsequently spread globally, replacing earlier hominins either through competition or hybridization. They inhabited Eurasia and Oceania by 40,000 years BP, and the Americas by at least 14,500 years BP. Inter-species breeding The hypothesis of interbreeding, also known as hybridization, admixture or hybrid-origin theory, has been discussed ever since the discovery of Neanderthal remains in the 19th century. The linear view of human evolution began to be abandoned in the 1970s as different species of humans were discovered that made the linear concept increasingly unlikely. In the 21st century, with the advent of molecular biology techniques and computerization, whole-genome sequencing of Neanderthals and modern humans was performed, confirming recent admixture between different human species. In 2010, evidence based on molecular biology was published, revealing unambiguous examples of interbreeding between archaic and modern humans during the Middle Paleolithic and early Upper Paleolithic. It has been demonstrated that interbreeding happened in several independent events that included Neanderthals and Denisovans, as well as several unidentified hominins. Today, approximately 2% of DNA from all non-African populations (including Europeans, Asians, and Oceanians) is Neanderthal, with traces of Denisovan heritage. Also, 4–6% of modern Melanesian genetics are Denisovan. Comparisons of the human genome to the genomes of Neandertals, Denisovans and apes can help identify features that set modern humans apart from other hominin species. In a 2016 comparative genomics study, a Harvard Medical School/UCLA research team made a world map of the distribution of Denisovan and Neanderthal ancestry and made some predictions about where Denisovan and Neanderthal genes may be impacting modern human biology. For example, comparative studies in the mid-2010s found several traits related to neurological, immunological, developmental, and metabolic phenotypes that had developed in archaic humans adapted to European and Asian environments and were inherited by modern humans through admixture with local hominins. Although the narratives of human evolution are often contentious, several discoveries since 2010 show that human evolution should not be seen as a simple linear or branched progression, but a mix of related species. In fact, genomic research has shown that hybridization between substantially diverged lineages is the rule, not the exception, in human evolution.
Furthermore, it is argued that hybridization was an essential creative force in the emergence of modern humans. Stone tools Stone tools are first attested around 2.6 million years ago, when hominins in Eastern Africa used so-called core tools, choppers made out of round cores that had been split by simple strikes. This marks the beginning of the Paleolithic, or Old Stone Age; its end is taken to be the end of the last Ice Age, around 10,000 years ago. The Paleolithic is subdivided into the Lower Paleolithic (Early Stone Age), ending around 350,000–300,000 years ago, the Middle Paleolithic (Middle Stone Age), until 50,000–30,000 years ago, and the Upper Paleolithic (Late Stone Age), 50,000–10,000 years ago. Archaeologists working in the Great Rift Valley in Kenya have discovered the oldest known stone tools in the world. Dated to around 3.3 million years ago, the implements are some 700,000 years older than stone tools from Ethiopia that previously held this distinction. The period from 700,000 to 300,000 years ago is also known as the Acheulean, when H. ergaster (or erectus) made large stone hand axes out of flint and quartzite, at first quite rough (Early Acheulian), later "retouched" by additional, more-subtle strikes at the sides of the flakes. After 350,000 BP the more refined so-called Levallois technique was developed, a series of consecutive strikes, by which scrapers, slicers ("racloirs"), needles, and flattened needles were made. Finally, after about 50,000 BP, ever more refined and specialized flint tools were made by the Neanderthals and the immigrant Cro-Magnons (knives, blades, skimmers). Bone tools were also made by H. sapiens in Africa by 90,000–70,000 years ago and are also known from early H. sapiens sites in Eurasia by about 50,000 years ago. Species list This list is in chronological order across the table by genus. Some species/subspecies names are well-established, and some are less established – especially in genus Homo. Please see articles for more information. See also Adaptive evolution in the human genome Amity–enmity complex Archaeogenetics Dual inheritance theory Evolution of human intelligence Evolution of morality Evolutionary medicine Evolutionary neuroscience Evolutionary origin of religion Evolutionary psychology Human behavioral ecology Human origins Human vestigiality List of human evolution fossils Molecular paleontology Obstetrical dilemma Origin of language Origin of speech Prehistory of nakedness and clothing Sexual selection in humans Transgenerational trauma Notes References External links "Race, Evolution and the Science of Human Origins" by Allison Hopper, Scientific American (July 5, 2021).
– Illustrations from the book Evolution (2007) "Human Trace" video (2015) Normandy University UNIHAVRE, CNRS, IDEES, E.Laboratory on Human Trace Unitwin Complex System Digital Campus UNESCO. Shaping Humanity Video 2013 Yale University Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016). Human Evolution, BBC Radio 4 discussion with Steve Jones, Fred Spoor & Margaret Clegg (In Our Time, February 16, 2006) Evolutionary Timeline of Homo Sapiens − Smithsonian (February 2021) History of Human Evolution in the United States – Salon (August 24, 2021) Anthropology
0.763046
0.999699
0.762817
Cenozoic
The Cenozoic is Earth's current geological era, representing the last 66 million years of Earth's history. It is characterized by the dominance of mammals, birds, conifers, and angiosperms (flowering plants). It is the latest of three geological eras of the Phanerozoic Eon, preceded by the Mesozoic and Paleozoic. The Cenozoic started with the Cretaceous–Paleogene extinction event, when many species, including the non-avian dinosaurs, became extinct in an event attributed by most experts to the impact of a large asteroid or other celestial body, the Chicxulub impactor. The Cenozoic is also known as the Age of Mammals because the terrestrial animals that dominated both hemispheres were mammals: the eutherians (placentals) in the northern hemisphere and the metatherians (marsupials, now mainly restricted to Australia and to some extent South America) in the southern hemisphere. The extinction of many groups allowed mammals and birds to greatly diversify so that large mammals and birds dominated life on Earth. The continents also moved into their current positions during this era. The climate during the early Cenozoic was warmer than today, particularly during the Paleocene–Eocene Thermal Maximum. However, the Eocene to Oligocene transition and the Quaternary glaciation dried and cooled Earth. Nomenclature Cenozoic derives from the Greek words kainós ('new') and zōḗ ('life'). The name was proposed in 1840 by the British geologist John Phillips (1800–1874), who originally spelled it Kainozoic. The era is also known as the Cænozoic, Caenozoic, or Cainozoic. In name, the Cenozoic is comparable to the preceding Mesozoic ('middle life') and Paleozoic ('old life') Eras, as well as to the Proterozoic ('earlier life') Eon. Divisions The Cenozoic is divided into three periods: the Paleogene, Neogene, and Quaternary; and seven epochs: the Paleocene, Eocene, Oligocene, Miocene, Pliocene, Pleistocene, and Holocene. The Quaternary Period was officially recognised by the International Commission on Stratigraphy in June 2009. In 2004, the Tertiary Period was officially replaced by the Paleogene and Neogene Periods. The common use of epochs during the Cenozoic helps palaeontologists better organise and group the many significant events that occurred during this comparatively short interval of time. Knowledge of this era is more detailed than that of any other era because of the relatively young, well-preserved rocks associated with it. Paleogene The Paleogene spans from the extinction of non-avian dinosaurs, 66 million years ago, to the dawn of the Neogene, 23.03 million years ago. It features three epochs: the Paleocene, Eocene and Oligocene. The Paleocene Epoch lasted from 66 million to 56 million years ago. Modern placental mammals originated during this time. The devastation of the K–Pg extinction event included the extinction of large herbivores, which permitted the spread of dense but usually species-poor forests. The Early Paleocene saw the recovery of Earth. The continents began to take their modern shape, but all the continents and the subcontinent of India were separated from each other. Afro-Eurasia was separated by the Tethys Sea, and the Americas were separated by the strait of Panama, as the isthmus had not yet formed. This epoch featured a general warming trend, with jungles eventually reaching the poles. The oceans were dominated by sharks, as the large reptiles that had once predominated were extinct. Archaic mammals, such as creodonts (extinct carnivores, unrelated to existing Carnivora), filled the world.
The Eocene Epoch ranged from 56 million years to 33.9 million years ago. In the Early Eocene, species living in dense forest were unable to evolve into larger forms, as in the Paleocene. Among them were early primates, whales and horses, along with many other early forms of mammals. At the top of the food chains were huge birds, such as Paracrax. Carbon dioxide levels were approximately 1,400 ppm. The temperature was around 30 degrees Celsius, with little temperature gradient from pole to pole. In the Mid-Eocene, the Antarctic Circumpolar Current between Australia and Antarctica formed. This disrupted ocean currents worldwide and as a result caused a global cooling effect, shrinking the jungles. This allowed mammals to grow to mammoth proportions, such as whales which, by that time, had become almost fully aquatic. Mammals like Andrewsarchus were at the top of the food chain. The Late Eocene saw the rebirth of seasons, which caused the expansion of savanna-like areas, along with the evolution of grasses. The end of the Eocene was marked by the Eocene–Oligocene extinction event, the European face of which is known as the Grande Coupure. The Oligocene Epoch spans from 33.9 million to 23.03 million years ago. The Oligocene featured the expansion of grasslands, which led many new species to evolve, including the first elephants, cats, dogs, marsupials and many other species still prevalent today. Many other species of plants evolved in this period too. A cooling period featuring seasonal rains was still in effect. Mammals continued to grow larger and larger. Neogene The Neogene spans from 23.03 million to 2.58 million years ago. It features two epochs: the Miocene and the Pliocene. The Miocene Epoch spans from 23.03 to 5.333 million years ago and is a period in which grasses spread further, dominating a large portion of the world at the expense of forests. Kelp forests evolved, encouraging the evolution of new species, such as sea otters. During this time, perissodactyls thrived and evolved into many different varieties. Apes evolved into 30 species. The Tethys Sea finally closed with the creation of the Arabian Peninsula, leaving only remnants as the Black, Red, Mediterranean and Caspian Seas. This increased aridity. Many new plants evolved: 95% of modern seed plant families were present by the end of the Miocene. The Pliocene Epoch lasted from 5.333 to 2.58 million years ago. The Pliocene featured dramatic climatic changes, which ultimately led to modern species of flora and fauna. The Mediterranean Sea dried up for several million years (because the ice ages reduced sea levels, disconnecting the Atlantic from the Mediterranean, and evaporation rates exceeded inflow from rivers). Australopithecus evolved in Africa, beginning the human branch. The isthmus of Panama formed, and animals migrated between North and South America during the Great American Interchange, wreaking havoc on local ecologies. Climatic changes brought: savannas that are still continuing to spread across the world; Indian monsoons; deserts in central Asia; and the beginnings of the Sahara desert. The world map has not changed much since, save for changes brought about by the glaciations of the Quaternary, such as the Great Lakes, Hudson Bay, and the Baltic Sea. Quaternary The Quaternary spans from 2.58 million years ago to the present day, and is the shortest geological period in the Phanerozoic Eon. It features modern animals, and dramatic changes in the climate.
It is divided into two epochs: the Pleistocene and the Holocene. The Pleistocene lasted from 2.58 million to 11,700 years ago. This epoch was marked by ice ages as a result of the cooling trend that started in the Mid-Eocene. There were at least four separate glaciation periods marked by the advance of ice caps as far south as 40° N in mountainous areas. Meanwhile, Africa experienced a trend of desiccation which resulted in the creation of the Sahara, Namib, and Kalahari deserts. Many animals evolved, including mammoths, giant ground sloths, dire wolves, sabre-toothed cats, and Homo sapiens. The end of one of the worst droughts in Africa, around 100,000 years ago, led to the expansion of primitive humans. As the Pleistocene drew to a close, a major extinction wiped out much of the world's megafauna, including some of the hominid species, such as Neanderthals. All the continents were affected, but Africa to a lesser extent; it still retains many large animals, such as hippos. The Holocene began 11,700 years ago and lasts to the present day. All recorded human history lies within the boundaries of the Holocene Epoch. Human activity is blamed for a mass extinction that began roughly 10,000 years ago, though the species becoming extinct have only been recorded since the Industrial Revolution. This is sometimes referred to as the "Sixth Extinction". It is often cited that over 322 recorded species have become extinct due to human activity since the Industrial Revolution, but the rate may be as high as 500 vertebrate species alone, the majority of which have occurred after 1900. Tectonics Geologically, the Cenozoic is the era when the continents moved into their current positions. Australia-New Guinea, having split from Pangea during the early Cretaceous, drifted north and, eventually, collided with Southeast Asia; Antarctica moved into its current position over the South Pole; the Atlantic Ocean widened and, later in the era (2.8 million years ago), South America became attached to North America with the isthmus of Panama. India collided with Asia, creating the Himalayas; Arabia collided with Eurasia, closing the Tethys Ocean and creating the Zagros Mountains. The break-up of Gondwana in Late Cretaceous and Cenozoic times led to a shift in the river courses of various large African rivers including the Congo, Niger, Nile, Orange, Limpopo and Zambezi. Climate In the Cretaceous, the climate was hot and humid, with lush forests at the poles; there was no permanent ice and sea levels were around 300 metres higher than today. This continued for the first 10 million years of the Paleocene, culminating in the Paleocene–Eocene Thermal Maximum about 56 million years ago. Around 50 million years ago, Earth entered a period of long-term cooling. This was mainly due to the collision of India with Eurasia, which caused the rise of the Himalayas: the upraised rocks eroded and reacted with carbon dioxide in the air, causing a long-term reduction in the proportion of this greenhouse gas in the atmosphere. Around 34 million years ago, permanent ice began to build up on Antarctica. The cooling trend continued in the Miocene, with relatively short warmer periods. When South America became attached to North America, creating the Isthmus of Panama around 2.8 million years ago, the Arctic region cooled due to the strengthening of the Humboldt and Gulf Stream currents, eventually leading to the glaciations of the Quaternary ice age, the current interglacial of which is the Holocene Epoch.
Recent analysis of geomagnetic reversal frequency, the oxygen isotope record, and tectonic plate subduction rates, which are indicators of changes in the heat flux at the core–mantle boundary, climate, and plate tectonic activity, shows that all of these records follow similar rhythms on a million-year timescale in the Cenozoic Era, with a common fundamental periodicity of about 13 million years during most of the era. The levels of carbonate ions in the ocean fell over the course of the Cenozoic.

Life
Early in the Cenozoic, following the K-Pg event, the planet was dominated by relatively small fauna, including small mammals, birds, reptiles, and amphibians. From a geological perspective, it did not take long for mammals and birds to greatly diversify in the absence of the dinosaurs that had dominated during the Mesozoic. Some flightless birds grew larger than humans. These species are sometimes referred to as "terror birds", and were formidable predators. Mammals came to occupy almost every available niche (both marine and terrestrial), and some also grew very large, attaining sizes not seen in most of today's terrestrial mammals. The ranges of many Cenozoic bird clades were governed by latitude and temperature and have contracted over the course of this era as the world cooled.

During the Cenozoic, mammals proliferated from a few small, simple, generalised forms into a diverse collection of terrestrial, marine, and flying animals, giving this period its other name, the Age of Mammals. The Cenozoic is just as much the age of savannas, the age of co-dependent flowering plants and insects, and the age of birds. Grasses also played a very important role in this era, shaping the evolution of the birds and mammals that fed on them. Snakes were another group that diversified significantly in the Cenozoic: their variety increased tremendously, producing the many colubrids, following the evolution of their current primary prey source, the rodents.

In the earlier part of the Cenozoic, the world was dominated by the gastornithid birds, terrestrial crocodiles like Pristichampsus, large sharks such as Otodus, and a handful of primitive large mammal groups like uintatheres, mesonychians, and pantodonts. But as the forests began to recede and the climate began to cool, other mammals took over. The Cenozoic is full of mammals both strange and familiar, including chalicotheres, creodonts, whales, primates, entelodonts, sabre-toothed cats, mastodons and mammoths, three-toed horses, giant rhinoceros like Paraceratherium, the rhinoceros-like brontotheres, various bizarre groups of mammals from South America, such as the vaguely elephant-like pyrotheres and the dog-like marsupial relatives called borhyaenids, and the monotremes and marsupials of Australia. Mammal evolution in the Cenozoic was predominantly shaped by climatic and geological processes.

Cenozoic calcareous nannoplankton experienced rapid rates of speciation and reduced species longevity, while suffering prolonged declines in diversity during the Eocene and Neogene. Diatoms, in contrast, experienced major diversification over the Eocene, especially at high latitudes, as the world's oceans cooled. Diatom diversification was particularly concentrated at the Eocene–Oligocene boundary. A second major pulse of diatom diversification occurred over the course of the Middle and Late Miocene.
See also
Cretaceous–Paleogene boundary (K–T boundary)
Geologic time scale
Late Cenozoic Ice Age

External links
Western Australian Museum – The Age of the Mammals
Cenozoic (chronostratigraphy scale)
The Rise and Fall of the Great Powers
The Rise and Fall of the Great Powers: Economic Change and Military Conflict from 1500 to 2000, by Paul Kennedy, first published in 1987, explores the politics and economics of the Great Powers from 1500 to 1980 and the reasons for their decline. It then continues by forecasting the positions of China, Japan, the European Economic Community (EEC), the Soviet Union and the United States through the end of the 20th century.

Summary
Kennedy argues that the strength of a Great Power can be properly measured only relative to other powers, and he provides a straightforward thesis: Great Power ascendancy (over the long term or in specific conflicts) correlates strongly to available resources and economic durability; military overstretch and a concomitant relative decline are the consistent threats facing powers whose ambitions and security requirements are greater than their resource base can provide for. Throughout the book he reiterates his early statement (page 71): "Military and naval endeavors may not always have been the raison d'être of the new nation-states, but it certainly was their most expensive and pressing activity", and it remains such until the power's decline. He concludes that declining countries can experience greater difficulties in balancing their preferences for guns, butter and investments. Kennedy states his theory in the second paragraph of the introduction and elaborates on it on the same page.

Early modern era
The book starts at the dividing line between the Renaissance and early modern history, 1500 (chapter 1). It briefly discusses the Ming (page 4) and Muslim worlds (page 9) of the time and the rise of the western powers relative to them (page 16). The book then proceeds chronologically, looking at each of the power shifts over time and their effect on the other Great Powers and the "Middle Powers". Kennedy uses a number of measures to indicate the real, relative and potential strength of nations throughout the book, and he changes the metric of power based on the point in time. Chapter 2, "The Habsburg Bid for Mastery, 1519–1659", emphasizes the role of the "manpower revolution" in changing the way Europeans fought wars (see military revolution). This chapter also emphasizes the importance of Europe's political boundaries in shaping a political balance of power.

European imperialism
The Habsburg failure segues into the thesis of chapter 3, that financial power reigned between 1660 and 1815, using Britain, France, Prussia, Austria-Hungary, and Russia to contrast powers that could finance their wars (Britain and France) with powers that needed financial patronage to mobilize and maintain a major military force in the field. Kennedy presents a table (page 81, table 2) of British wartime expenditure and revenue between 1688 and 1815, which is especially illustrative in showing that Britain was able to maintain loans at around one-third of British wartime expenditures throughout that period:

Total wartime expenditures, 1688–1815: £2,293,483,437
Total income: £1,622,924,377
Balance raised by loans: £670,559,060
Loans as per cent of expenditure: 33.3%

The chapter also argues that British financial strength was the single most decisive factor in its victories over France during the 18th century. This chapter ends on the Napoleonic Wars and the fusion of British financial strength with a newfound industrial strength.
Industrial Revolution
Kennedy's next two chapters depend greatly upon Bairoch's calculations of industrialization, measuring all nations by an index in which 100 is the British per capita industrialization rate in 1900. The United Kingdom grows from 10 in 1750, to 16 in 1800, 25 in 1830, 64 in 1860, 87 in 1880, and 100 in 1900 (page 149). In contrast, France's per capita industrialization was 9 in 1750, 9 in 1800, 12 in 1830, 20 in 1860, 28 in 1880, and 39 in 1900. Relative shares of world manufacturing output (also first appearing on page 149) are used to estimate the peaks and troughs of power for major states. China, for example, begins with 32.8% of global manufacturing in 1750 and plummets after the First Opium War, Second Opium War and Taiping Rebellion to 19.7% of global manufacturing in 1860 and 12.5% in 1880 (compared to the UK's 1.9% in 1750, growing to 19.9% in 1860 and 22.9% in 1880).

20th century
Measures of strength in the 20th century (pages 199–203) use population size, urbanization rates, Bairoch's per capita levels of industrialization, iron and steel production, energy consumption (measured in millions of metric tons of coal equivalent), and total industrial output of the powers (measured against Britain's 1900 figure of 100) to gauge the strength of the various great powers. Kennedy also emphasizes productivity increases based on systematic interventions, which led to economic growth and prosperity for great powers in the 20th century. He compares the great powers at the close of the 20th century and predicts the decline of the United States and the Soviet Union. Kennedy predicts the rise of Japan, and both struggles and potential for the European Economic Community (EEC). He predicts the rise of China, stating that if its economic development kept up, it would change the country within decades and China would become a great power. He highlights the precedent of the Four Modernizations in Deng Xiaoping's plans for China (agriculture, industry, science and military), which de-emphasized the military while the United States and the Soviet Union were emphasizing it. He predicts that continued deficit spending, especially on military build-up, will be the single most important reason for the decline of any great power.

The United States
From the Civil War to the first half of the 20th century, the United States' economy benefited from high agricultural production, plentiful raw materials, technological advancements and financial inflows. During this time the U.S. did not have to contend with foreign dangers. From 1860 to 1914, U.S. exports increased sevenfold, resulting in huge trade surpluses. By 1945 the U.S. both enjoyed high productivity and was the only major industrialized nation intact after World War II. From the 1960s onward, the U.S. saw a relative decline in its share of world production and trade. By the 1980s, the U.S. experienced declining exports of agricultural and manufactured goods. In the space of a few years, the U.S. went from being the largest creditor to the largest debtor nation. At the same time, the federal debt was growing at an increasing pace. This situation is typical of declining hegemons.

The United States has the typical problems of a great power, which include balancing guns and butter and investments for economic growth. Its growing military commitment to every continent (other than Antarctica) and the growing cost of military hardware severely limit the available options. Kennedy compares the situation of the U.S. to Great Britain's prior to World War I.
He comments that the map of U.S. bases is similar to Great Britain's before World War I. As military expenses grow, this reduces investments in economic growth, which eventually "leads to the downward spiral of slower growth, heavier taxes, deepening domestic splits over spending priorities, and weakening capacity to bear the burdens of defense".

Table of contents
Strategy and Economics in the Preindustrial World
The Rise of the Western World
The Habsburg Bid for Mastery, 1519–1659
Finance, Geography, and the Winning of Wars, 1660–1815
Strategy and Economics in the Industrial Era
Industrialization and the Shifting Global Balances, 1815–1885
The Coming of a Bipolar World and the Crisis of the "Middle Powers": Part One, 1885–1918
The Coming of a Bipolar World and the Crisis of the "Middle Powers": Part Two, 1919–1942
Strategy and Economics Today and Tomorrow
Stability and Change in a Bipolar World, 1943–1980
To the Twenty-first Century

Maps, tables and charts
The book has twelve maps, forty-nine tables and three charts to assist the reader in understanding the text.

Publication data
The Rise and Fall of the Great Powers is the eighth and best-known book by historian Paul Kennedy. It reached number six on the list of best-selling hardcover books for 1988. In 1988 the author was awarded the Wolfson History Prize for this work. It was republished in paperback in January 1989 (704 pages).

See also
Great power
The Rise of the Great Powers
Upward Spiral
List of regions by past GDP (PPP) per capita
Economic history of the United States
Empire
Ideocracy
State collapse

External links
The Rise and Fall of the Great Powers on Internet Archive
Imperial Cycles: Bucks, Bullets and Bust, January 10, 1988, New York Times Book Review (requires registration)
PBS Newshour interview with the author, 2010 – 25-year perspective
Historical geology
Historical geology or palaeogeology is a discipline that uses the principles and methods of geology to reconstruct the geological history of Earth. Historical geology examines the vastness of geologic time, measured in billions of years, and investigates changes in the Earth, gradual and sudden, over this deep time. It focuses on geological processes, such as plate tectonics, that have changed the Earth's surface and subsurface over time and the use of methods including stratigraphy, structural geology, paleontology, and sedimentology to tell the sequence of these events. It also focuses on the evolution of life during different time periods in the geologic time scale.

Historical development
During the 17th century, Nicolas Steno was the first to observe and propose a number of basic principles of historical geology, including three key stratigraphic principles: the law of superposition, the principle of original horizontality, and the principle of lateral continuity. 18th-century geologist James Hutton contributed to an early understanding of the Earth's history by proposing the theory of uniformitarianism, which is now a basic principle in all branches of geology. Uniformitarianism describes an Earth formed by the same natural phenomena that are at work today, the product of slow and continuous geological changes. The theory can be summarized by the phrase "the present is the key to the past." Hutton also described the concept of deep time. The prevailing conceptualization of Earth history in 18th-century Europe, grounded in a literal interpretation of Christian scripture, was that of a young Earth shaped by catastrophic events. Hutton, however, depicted a very old Earth, shaped by slow, continuous change. Charles Lyell further developed the theory of uniformitarianism in the 19th century. Modern geologists have generally acknowledged that Earth's geological history is a product of both sudden, cataclysmic events (such as meteorite impacts and volcanic eruptions) and gradual processes (such as weathering, erosion, and deposition). The discovery of radioactive decay in the late 19th century and the development of radiometric dating techniques in the 20th century provided a means of deriving absolute ages of events in geological history.

Use and importance
Geology is considered a historical science; accordingly, historical geology plays a prominent role in the field. Historical geology covers much of the same subject matter as physical geology, the study of geological processes and the ways in which they shape the Earth's structure and composition. Historical geology extends physical geology into the past. Economic geology, the search for and extraction of fuel and raw materials, is heavily dependent on an understanding of the geological history of an area. Environmental geology, which examines the impacts of natural hazards such as earthquakes and volcanism, must rely on a detailed knowledge of geological history.

Methods

Stratigraphy
Layers of rock, or strata, represent a geologic record of Earth's history. Stratigraphy is the study of strata: their order, position, and age.

Structural geology
Structural geology is concerned with rocks' deformational histories.

Paleontology
Fossils are organic traces of Earth's history. In a historical geology context, paleontological methods can be used to study fossils and their environments, including surrounding rocks, and place them within the geologic time scale.
Sedimentology
Sedimentology is the study of the formation, transport, deposition, and diagenesis of sediments. Sedimentary rocks, including limestone, sandstone, and shale, serve as a record of Earth's history: they contain fossils and are transformed by geological processes, such as weathering, erosion, and deposition, through deep time.

Relative dating
Historical geology makes use of relative dating in order to establish the sequence of geological events in relation to one another, without determining their specific numerical ages or ranges.

Absolute dating
Absolute dating allows geologists to determine a more precise chronology of geological events, based on numerical ages or ranges. Absolute dating includes the use of radiometric dating methods, such as radiocarbon dating, potassium–argon dating, and uranium–lead dating. Luminescence dating, dendrochronology, and amino acid dating are other methods of absolute dating.

Plate tectonics
The theory of plate tectonics explains how the movement of lithospheric plates has structured the Earth throughout its geological history.

Weathering, erosion, and deposition
Weathering, erosion, and deposition are examples of gradual geological processes, taking place over large sections of the geologic time scale. In the rock cycle, rocks are continually broken down, transported, and deposited, cycling through three main rock types: sedimentary, metamorphic, and igneous.

Paleoclimatology
Paleoclimatology is the study of past climates recorded in geological time.

External links
Geology – Earth history | Encyclopedia Britannica
Historical Geology | OpenGeology.org
GEOL 102 Historical Geology | Lecture notes for course at the University of Maryland
Feminist movement
The feminist movement, also known as the women's movement, refers to a series of social movements and political campaigns for radical and liberal reforms on women's issues created by inequality between men and women. Such issues include women's liberation, reproductive rights, domestic violence, maternity leave, equal pay, women's suffrage, sexual harassment, and sexual violence. The movement's priorities have expanded since its beginning in the 1800s, and vary among nations and communities. Priorities range from opposition to female genital mutilation in one country, to opposition to the glass ceiling in another.

Feminism in parts of the Western world has been an ongoing movement since the turn of the century. Since its inception, feminism has gone through a series of four high moments termed waves. First-wave feminism was oriented around the station of middle- or upper-class white women and involved suffrage and political equality, education, the right to property, organizational leadership, and marital freedoms. Second-wave feminism attempted to further combat social and cultural inequalities. Although the first wave of feminism involved mainly middle-class white women, the second wave brought in women of different social classes, women of color, and women from developing nations seeking solidarity. Third-wave feminism continued to address the financial, social, and cultural inequalities of women in business and in their home lives, and included renewed campaigning for greater influence of women in politics and media. In reaction to political activism, feminists have also had to maintain focus on women's reproductive rights, such as the right to abortion. Fourth-wave feminism examines the interlocking systems of power that contribute to the social stratification of traditionally marginalized groups, as well as the world around them.

History
The base of the women's movement, since its inception, has been grounded in the injustice of inequality between men and women. Throughout history, the relationship between men and women has been that of a patriarchal society, citing the law of nature as the justification, which was interpreted to mean women are inferior to men. Allan Johnson, a sociologist who studies masculinity, wrote of patriarchy: "Patriarchy encourages men to seek security, status, and other rewards through control; to fear other men's ability to control and harm them; and to identify being in control as both their best defense against loss and humiliation and the surest route to what they need and desire" (Johnson 26). During the pre-feminist era, women were expected to be proper, delicate, and emotional nurturers of the household. They were raised in a manner in which gaining a husband to take care of them and raising a family was their ultimate priority. Author Mary Wollstonecraft wrote of the lesser sex in her treatises A Vindication of the Rights of Woman and A Vindication of the Rights of Men: "...for, like the flowers which are planted in too rich a soil, strength and usefulness are sacrificed to beauty; and the flaunting leaves, after having pleased a fastidious eye, fade, disregarded on the stalk, long before the season when they ought to have arrived at maturity" (Wollstonecraft 9). Early ideas and activism of pro-feminism beliefs before the existence of the feminist movement are described as protofeminist. Protofeminists in the United States organized before the Seneca Falls convention as part of the suffrage, abolition, and other movements.
Gender equality movements were practiced within the Haudenosaunee (Iroquois) nations long before America was colonized (Wagner, Steinem 45). Some have come to recognize the beginning of the feminist movement in 1832, with the American Anti-Slavery Society (AASS) and the Connecticut Female Anti-Slavery Society forming as early as 1833 (Wagner, Steinem 48). By the year 1837, 139 AASS societies had formed across the nation (Wagner, Steinem 47). The first national AASS convention was held in New York City in 1837 (Wagner, Steinem 48). During the first convention, it was debated whether black women could participate (48). By the second and third conventions, demands were heard which saw to it that conventions were open to African American leadership and membership participation. On the evening of the second convention, held in Philadelphia Hall, after the meeting adjourned and the attendees left, a violent mob burned down the hall (49). The issues discussed included the vote, oppression, and slavery, and laid the basis for future movements.

On November 15, 1895, Elizabeth Cady Stanton wrote an address describing how, in her perspective, the Seneca Falls Convention "... was the first woman's rights convention ever held in the world ... a declaration was read and signed by most of those present, and a series of radical resolutions adopted" (356–7). Stanton's recollection prompted historians since the 1950s to identify the Seneca Falls Convention of 1848 (at which the women's suffrage movement began in the United States) as the earliest North American feminist movement. The convention met annually for fifteen years thereafter. Attendees drafted the Seneca Falls Declaration of Sentiments, outlining the new movement's ideology and political strategies.

Among the earliest North American and European international women's organizations was the International Council of Women, established in 1888 in Washington, DC, US. The term "feminist movement" was coined in the late nineteenth century to distinguish it from the women's movement, allowing for the inclusion of male feminists. The new movement thus prompted male feminists such as George Lansbury of the British Labour Party to run for political candidacy on the feminist ticket in 1906. As awareness of feminist movements evolved, transnational feminism and nationalist feminist movements established themselves worldwide. Priorities and ideas vary based on the political or cultural positions of the women in the area where each movement originates. General topics of feminist coalition politics include lack of legal rights, poverty, medical vulnerability, and labor. These political issues are often organized around divisions of class, caste, ethnicity, religion, sexuality, nationality, and age. Early Russian nationalist feminist activists founded the All-Russian Union for Women's Equality in 1905, which campaigned for women's suffrage and co-education. In 1931, the All-Asian Women's Conference was held in Lahore in what was then British India. This meeting is one example of the time period which "demonstrated the networking of women across various divides". The spirit of the conference can be understood as international or global feminist.

Pre-feminism society
The feminist movement has been an ongoing force throughout history. There is no way to determine the exact date when the feminist movement was first thought up, because women have been writing on the topic for thousands of years.
For instance, the female poet from Ancient Greece, Sappho, born in roughly 615 BC, made waves as an acclaimed poet during a time when the written word was dominated by men. She wrote poetry about, among other things, sexuality. There have been four main waves of feminism since the beginning of the feminist movement in Western society, each with its own fight for women's rights. The first wave was in the 1840s. It was based on education, the right to property, organizational leadership, the right to vote, and marital freedoms. The second wave was in the 1960s. It was based on gender issues, women's sexual liberation, reproductive rights, job opportunities for women, violence against women, and changes in custody and divorce laws. The third wave was in the 1990s. It was based on individualism and diversity, redefined what it meant to be a feminist, and embraced intersectionality, sex positivity, transfeminism, and postmodern feminism. Lastly, the fourth wave began in the 2000s and is still in progress. It has been based around female empowerment, body shaming, sexual harassment, spiritual concerns, human rights, and concerns for the planet. The feminist movement continued during the periods between waves, just not to the extent of the four large motions.

The first documented gathering of women to form a movement with a common goal was on 5 October 1789, during the French Revolution. The event was later referred to as the Women's March on Versailles. The gathering was based on a lack of food, high market prices, and the fear of another famine occurring across France. On that day, women, along with revolutionaries, had planned to gather in the market. Once gathered, the crowd stormed the Hôtel de Ville (the City Hall of Paris), where weapons were being stored. The armed crowd then marched to the Palace of Versailles to draw King Louis XVI's attention to the high prices and food shortages. For King Louis XVI's remaining time on the throne, he stopped fighting the revolutionaries. The march signaled a change of power, showing that there is power in the people, and diminished the perception that the monarch was invincible.

The French Revolution began with the inequality felt by French citizens and came as a reaction to the "Declaration of the Rights of Man and of the Citizen", which was signed in August 1789. The declaration gave rights to men who were termed active citizens. Active citizenship was given to French men who were twenty-five years or older, worked, paid taxes, and could not be titled a servant. The declaration dismissed the rest of the population, who were women, foreigners, children, and servants, as passive citizens. Passive citizens, French women in particular, focused their fight on gaining citizenship and equal rights. One of the first women to speak out on women's rights and inequality was French playwright Olympe de Gouges, who wrote the "Declaration of the Rights of Woman" in 1791, in contrast to the "Declaration of the Rights of Man and of the Citizen". She famously stated, "Women are born free and are man's equal in law. Social distinctions can be founded solely on common utility" (De Gouges 1791). De Gouges used her words to urge women to speak up and take control of their rights. She demonstrated the similarity between men's and women's duties as citizens and the cohesion that would ensue if both genders were considered equal.
British philosopher and writer Mary Wollstonecraft published in 1792 what has been seen as the first feminist treatise on the human rights of women, "Vindication of the Rights of Woman." She pressed the issue of equality between men and women, stating: "No society can be either virtuous or moral while half of the population are being subjugated by the other half" (Wollstonecraft 2009, p. 59). She went on to write about the law of nature and the desire for women to present more as themselves, and to demand respect and equality from their male counterparts: "...men endeavor to sink us still lower, merely to render us alluring objects for a moment; and women, intoxicated by the adoration which men, under the influence of their senses, pay them, do not seek to obtain a durable interest in their hearts, or to become the friends of the fellow-creatures who find amusement in their society" (Wollstonecraft 2008, p. 10).

During the mid-nineteenth century, the women's movement developed as a result of women striving to improve their status and usefulness in society. Nancy Cott, historian and professor, wrote about the objectives of the feminist movement: "to initiate measures of charitable benevolence, temperance, and social welfare and to initiate struggles for civil rights, social freedoms, higher education, remunerative occupations, and the ballot" (Cott 1987, p. 3). The setting of these goals resulted from women's rising awareness of the precariousness of their situation in the patriarchal society of the 1800s. The developing movement promoted a series of new images for women: "True Womanhood, Real Womanhood, Public Womanhood, and New Womanhood" (Cruea 2005, p. 2). True Womanhood was the ideal that women were meant to be pure and moral; a true woman was raised learning manners and submission to males in order to be a good wife and mother. Real Womanhood came to be with the Civil War, when women were forced to work in place of men who were at war; real women learned how to support themselves and took that knowledge with them into marriage and education. Public Womanhood came with women being allowed to work in domestic-type jobs such as nursing, teaching, and secretarial work, which were jobs previously performed by men but for which employers could pay women much less. New Womanhood was based on eliminating the traditional conformity of women's roles and inferiority to men, and on living a more fulfilled life. "The four overlapping phases of the Women's Movement advanced women from domestic prisoners to significant members of their communities within less than a century" (Cruea 2005, p. 17).

In the 1820s the women's movement, then called the temperance movement, expanded from Europe and moved into the United States. Women began speaking out on the effects the consumption of alcohol had on the morals of their husbands and blamed it for the problems within their households. They called for a moral reform limiting or prohibiting the sale and consumption of alcohol, beginning the fight toward Prohibition, which did not take effect until 1920. The women fighting for the temperance movement came to the realization that, without the ability to vote on the issues they were fighting for, nothing would ever change.

Feminist movement in Western society
Feminism in the United States, Canada, and a number of countries in Western Europe has been divided by scholars into three waves: first, second and third-wave feminism. Recent (early 2010s) research suggests there may be a fourth wave characterized, in part, by new media platforms.
The feminist movement's agenda includes acting as a counterpart to the putatively patriarchal strands in the dominant masculine culture. While differing during the progression of "waves", it is a movement that has sought to challenge the political structure, power holders, and cultural beliefs or practices. Although antecedents to feminism may be found far back before the 18th century, the seeds of the modern feminist movement were planted during the late part of that century. Christine de Pizan, a late medieval writer, was possibly the earliest feminist in the Western tradition. She is believed to be the first woman to make a living out of writing. Feminist thought began to take a more substantial shape during the Enlightenment, with such thinkers as Lady Mary Wortley Montagu and the Marquis de Condorcet championing women's education. The first scientific society for women was founded in Middelburg, a city in the south of the Dutch Republic, in 1785.

First Wave Feminism
Though the feminist movement had already begun in America with the temperance movement, the First Wave of Feminism, known as the Suffragette Movement, began on 19–20 July 1848 during the first Women's Rights Convention in Seneca Falls, New York. The convention drew over 300 people, who were predominantly white, middle-class women. Sixty-eight women and thirty-two men signed the "Declaration of Sentiments", which called for equal rights for women and men on the basis of education, the right to property, organizational leadership, the right to vote, and marital freedoms.

For the suffragettes' first major display, they held a parade on 3 March 1913 in Washington, DC. The first suffragette parade, which was also the first civil rights march on Washington, was coordinated by Alice Paul and the National American Woman Suffrage Association. The parade drew over five thousand participants, who were led by Inez Milholland. The parade was strategically scheduled for the day before the inauguration of President Woodrow Wilson, which drew many people to Washington. The women gathered in front of the US Capitol and then traveled fourteen blocks to the Treasury Department. The parade proceeded through a crowd of angry spectators who became verbally and physically abusive toward the women. By the end of the demonstration, at least one hundred people were reported to have been taken to the hospital due to injuries.

In 1918 Crystal Eastman wrote an article published in the Birth Control Review in which she contended that birth control is a fundamental right for women and must be available as an alternative if they are to participate fully in the modern world. "In short, if feminism, conscious and bold and intelligent, leads the demand, it will be supported by the secret eagerness of all women to control the size of their families, and a suffrage state should make short work of repealing these old laws that stand in the way of birth control." She stated, "I don't believe there is one woman within the confines of this state who does not believe in birth control!" (Eastman 1918)

The women who made the first efforts towards women's suffrage came from more stable and privileged backgrounds, and were able to dedicate time and energy to making change. Initial developments for women, therefore, mainly benefited white women in the middle and upper classes. During the second wave, the feminist movement became more inclusive of women of color and women of different cultures.

Second Wave Feminism
The 1960s second wave of feminism was termed the women's liberation movement.
It was the largest and broadest social movement in US history. The second wave was based around a sociopolitical-cultural movement. Activists fought for gender issues, women's sexual liberation, reproductive rights, job opportunities for women, violence against women, and changes in custody and divorce laws. It is believed the feminist movement gained attention in 1963, when Betty Friedan published her book The Feminine Mystique. Friedan wrote of "the problem that has no name" (Friedan 1963) as a way to describe the depression women felt about their limited choices in life. Reading The Feminine Mystique, women found they related to what Friedan wrote. Women were forced to look at themselves in a way they had not before; they saw within themselves all the things they had given up in the name of conformity.

The women's movement became more popular in May 1968 when women began to read again, more widely, the book The Second Sex, written in 1949 by the defender of women's rights Simone de Beauvoir (translated into English for the first time in 1953, with a later translation in 2009). De Beauvoir's writing explained why it was difficult for talented women to become successful. The obstacles de Beauvoir enumerates include women's inability to make as much money as men do in the same profession, women's domestic responsibilities, society's lack of support towards talented women, and women's fear that success will lead to an annoyed husband or prevent them from even finding a husband at all. De Beauvoir also argues that women lack ambition because of how they are raised, noting that girls are told to follow the duties of their mothers, whereas boys are told to exceed the accomplishments of their fathers. Along with other influences, such as Betty Friedan, Simone de Beauvoir's work helped the feminist movement to solidify the second wave. Contributors to the women's liberation movement include Simone de Beauvoir, Christiane Rochefort, Christine Delphy and Anne Tristan.

The defining moment of the 1960s was the demonstration held to protest the Miss America pageant in Atlantic City on 7 September 1968, at which the pageant was derided as a "cattle parade". The purpose of the protest was to call attention to beauty standards and the objectification of women. Through this era, women gained equal rights such as the right to an education, the right to work, and the right to contraception and abortion. One of the most important issues that the women's liberation movement faced was the banning of abortion and contraception, which the group saw as a violation of women's rights. Thus, they made a declaration known as the Manifeste des 343, which held signatures from 343 women admitting to having had an illegal abortion. The declaration was published in two French newspapers, Le Nouvel Observateur and Le Monde, on 5 April 1971. The group gained support upon its publication. Women received the right to abort with the passing of the Veil Law in 1975.

Third Wave Feminism
The 1980s and 1990s drew a different perspective in the feminist movement, termed grrl feminism or riot grrrl feminism. The ideas of this era took root with the popularization of the riot grrrl feminist punk subculture in Olympia, Washington, in the early 1990s. The feminists of this era strived to redefine what it meant to be a feminist. They embraced individualism and diversity, and pushed to eliminate conformity. The late twentieth-century woman had the mindset of wanting to have it all: a professional career as well as being a wife and mother.
Harriet Kimble Wrye PhD, ABPP, FIPA wrote of her research on the psychoanalytic perspectives of being a feminist in the twentieth century: "So many of us look back, and recognizing the pressures under which we struggled, wonder how we did what we did and at what price" (Wrye 2009).

On 11 October 1991, televised hearings brought workplace sexual harassment to national attention for the first time. Anita Hill, a law professor at the time, accused Supreme Court nominee Clarence Thomas of persistent sexual harassment and recounted the details of her experience before an all-male Senate panel. Despite four corroborating witnesses, Thomas was confirmed to the Supreme Court. The hearings nevertheless encouraged other women to speak out about their own experiences, which led to Congress passing the Civil Rights Act of 1991, strengthening legal remedies against workplace sexual harassment.

The United Nations Human Development Report 2004 estimated that when both paid employment and unpaid household tasks are accounted for, on average women work more than men. In rural areas of selected developing countries women performed an average of 20% more work than men, or 120% of men's total work, an additional 102 minutes per day. In the OECD countries surveyed, on average women performed 5% more work than men, or 105% of men's total work, an additional 20 minutes per day. However, men did up to 19 minutes more work per day than women in five out of the eighteen OECD countries surveyed: Canada, Denmark, Hungary, Israel, and the Netherlands.

During the course of the women's movement in Western society, effective changes have taken place, including women's suffrage, the right to initiate divorce proceedings and "no fault" divorce, the right of women to make individual decisions regarding pregnancy (including access to contraceptives and abortion), and the right to own property. It has also led to broad employment for women at more equitable wages, and access to university education.

Feminist movement in Eastern society

Feminism in China
Prior to the 20th century, women in China were considered essentially different from men. Feminism in China started in the 20th century with the Chinese Revolution in 1911. In China, feminism has a strong association with socialism and class issues. Some commentators believe that this close association is damaging to Chinese feminism and argue that the interests of the party are placed before those of women. In the patriarchal society, the struggle for women's emancipation meant enacting laws that guaranteed women's full equality of race, sex, and property, and freedom of marriage. To further eliminate the legacy of the patriarchal class society (the drowning of infant girls, corseting, foot binding, etc.) and the traditional prejudices and habitual forces that discriminate against, objectify, and mutilate women, it was held that, as productive forces developed, gender equality should gradually be achieved in political, economic, social and family life. Before the Westernization Movement and the Reform Movement, women had already set off a wave of their own strength during the Taiping Heavenly Kingdom (1851–1864). However, most of the women in the Taiping Heavenly Kingdom came from the lowest social strata, and it was difficult for them to escape the fate of being used. It was not until the end of the Qing Dynasty that better-educated women took the initiative in the fight for women's rights, and that is where Chinese feminism essentially began.
The ideas behind the term 'feminism', first articulated by Olympe de Gouges in 1791 to promote women's liberation, were later transmitted to China. The feminist movement in China was mainly kickstarted and driven by male feminists before female feminists took it up. Key male feminists in China in the 19th to 20th century included Liang Qichao, Ma Junwu and Jin Tianhe. In 1897, Liang Qichao proposed the banning of foot-binding and encouraged women to engage in the workforce, the political environment and education. The custom of foot-binding had long been established in China; it displayed the beauty and social status of women by binding their feet into extremely small, richly decorated and ornamented shoes. Liang Qichao proposed the abolishment of this practice out of concern for women's health and their roles as supportive wives and caring mothers. He also proposed reducing the number of female dependents in the family and encouraged women to receive the right to education and to enter the workforce, becoming economically independent from men and finally helping the nation reach greater wealth and prosperity. The feminists Ma Junwu and Jin Tianhe both supported equality between husbands and wives, women's enjoyment of legitimate and equal rights, and women's right to enter the political sphere. A key assertion from Jin Tianhe was that women are the mothers of the nation. These views from male feminists in early Chinese feminism represented the image of the ideal woman in the imagination of men.

Key female feminists in China in the 19th to 20th century included Lin Zongsu, He Zhen, Chen Xiefen and Qiu Jin. The female feminists in early China focused more on the methods or ways that women should behave and liberate themselves to achieve equal and deserved rights and independence. He Zhen expressed her opinion that women's liberation should not be tied to the interests of the nation, and she analysed three motives behind the male feminists: following the Western trend, alleviating their own financial burdens, and improving the quality of reproduction. Lin Zongsu, for her part, proposed that women should strive for their legitimate rights in broader terms than the male feminists envisioned: claiming their rights in relation to men, to the Qing court, and internationally.

In the Qing Dynasty, the discussion of feminism had two dimensions: one was the difference between men and women, both the sexual (such as the maternal role and duties of women) and the social; the other was the aim of women's liberation. The views of the feminists were diverse: some believed feminism benefited the nation, and some believed feminism was associated with the individual development of women in improving their rights and welfare. In the 1970s, Marxist philosophy about women and feminism was transmitted to China and became the guiding principle of the feminist movement there, introducing class struggle theories to address gender equality. In the 1990s, more female scholars engaged with feminism in Western countries, and they promoted feminism and equal rights for women by publishing, translating and carrying out research on global feminism, making feminism in China one part of their studies to raise more concern and awareness for gender equality issues.

An important means of improving women's status in China was through legislation. After the PRC's founding in 1949, women were granted the same rights that men were entitled to by law, largely because women's liberation was presented as part of the Chinese nation's liberation.
Language
Feminists are sometimes, though not exclusively, proponents of using non-sexist language, such as using "Ms" to refer to both married and unmarried women. Feminists are also often proponents of using gender-inclusive language, such as "humanity" instead of "mankind", or "they" in place of "he" where the gender is unknown. Gender-neutral language is language usage which is aimed at minimizing assumptions regarding the gender of human referents. The advocacy of gender-neutral language reflects at least two different agendas: one aims to clarify the inclusion of both sexes or genders (gender-inclusive language); the other proposes that gender, as a category, is rarely worth marking in language (gender-neutral language). Gender-neutral language is sometimes described as non-sexist language by advocates and politically correct language by opponents.

The movement has sought not only to make language gender-neutral but also to draw attention to how people use language. Emily Martin describes how metaphors are gendered and ingrained into everyday life. Metaphors are used in everyday language and have become a way that people describe the world. Martin explains that these metaphors structure how people think, and in regard to science can shape what questions are being asked. If the right questions are not being asked, then the answers are not going to be right either. For example, the aggressive sperm and the passive egg is a metaphor that felt "natural" to people throughout history, but as scientists have reexamined this phenomenon they have come up with a new answer: "The sperm tries to pull its getaway act even on the egg itself, but is held down against its struggles by molecules on the surface of the egg that hook together with counterparts on the sperm's surface, fastening the sperm until the egg can absorb it." Identifying these gendered metaphors and bringing them to the public's attention is a goal of feminism; looking at things from a new perspective can produce new information.

Heterosexual relationships
The increased entry of women into the workplace beginning in the 20th century has affected gender roles and the division of labor within households. Sociologist Arlie Russell Hochschild, in The Second Shift and The Time Bind, presents evidence that in two-career couples, men and women, on average, spend about equal amounts of time working, but women still spend more time on housework. Feminist writer Cathy Young responds to Hochschild's assertions by arguing that, in some cases, women may prevent the equal participation of men in housework and parenting. Economists Mark Aguiar and Erik Hurst calculate that the amount of time spent on housework by women has dropped considerably since the 1960s. Leisure for both men and women has risen significantly, and by about the same amount for both sexes. Jeremy Greenwood, Ananth Seshadri and Mehmet Yorukoglu argue that the introduction of modern appliances into the home has allowed women to enter the work force.

Feminist criticisms of men's contributions to child care and domestic labor in the Western middle class are typically centered around the idea that it is unfair for women to be expected to perform more than half of a household's domestic work and child care when both members of the relationship perform an equal share of work outside the home. Several studies provide statistical evidence that the financial income of married men does not affect their rate of attending to household duties.
In Dubious Conceptions, Kristin Luker discusses the effect of feminism on teenage girls' choices to bear children, both in and out of wedlock. She says that as childbearing out of wedlock has become more socially acceptable, young women, especially poor young women, while not bearing children at a higher rate than in the 1950s, now see less of a reason to get married before having a child. Her explanation is that the economic prospects for poor men are slim, so poor women have a low chance of finding a husband able to provide reliable financial support, owing to the rise in unemployment as the labor market expanded from just men to both women and men.

Some studies have suggested that both men and women perceive feminism as being incompatible with romance. However, a recent survey of U.S. undergraduates and older adults found that feminism actually has a positive impact on relationship health for women and sexual satisfaction for men, and found no support for negative stereotypes of feminists. Virginia Satir said the need for relationship education emerged from shifting gender roles as women gained greater rights and freedoms during the 20th century.

Women's health
Historically, the health and well-being of women has been under-studied, and there is a need to fill that gap. Londa Schiebinger suggests that the common biomedical model is no longer adequate and that a broader model is needed to ensure that all aspects of a woman are being cared for. Schiebinger describes six contributions that must occur for success: political movements, academic women's studies, affirmative action, the Women's Health Equity Act, geopolitical forces, and professional women not being afraid to talk openly about women's issues. Political movements come from the streets and reflect what the people as a whole want to see changed. Academic women's studies provide support from universities to teach a subject that most people have never encountered. Affirmative action is a legal change to acknowledge and remedy the neglect to which people were subjected. The Women's Health Equity Act legally enforces the idea that medicine must be tested to suitable standards, such as including women in research studies, and also allocates a set amount of money to research diseases that are specific to women. Research has shown a persistent lack of research into autoimmune disease, which mainly affects women: "Despite their prevalence and morbidity, little progress has been made toward a better understanding of those conditions, identifying risk factors, or developing a cure", underscoring the progress that still needs to be made. Geopolitical forces can improve health: when a country is not under threat of war, more funding and resources can be devoted to other needs, such as women's health. Lastly, professional women who are not afraid to talk openly about women's issues move beyond merely entering these professions and acting like men, and instead embrace their concern for the health of women. These six factors need to be included for there to be change in women's health.

Religion
Feminist theology is a movement that reconsiders the traditions, practices, scriptures, and theologies of religions from a feminist perspective.
Some of the goals of feminist theology include increasing the role of women among the clergy and religious authorities, reinterpreting male-dominated imagery and language about God, determining the place of women in relation to career and motherhood, and studying images of women in the religion's sacred texts. The feminist movement has affected religion and theology in profound ways. In liberal branches of Protestant Christianity, women are now allowed to be ordained as clergy, and in Reform, Conservative and Reconstructionist Judaism, women are now allowed to be ordained as rabbis and cantors. In some of these groups, some women are gradually obtaining positions of power that were formerly held only by men, and their perspectives are now sought out in developing new statements of belief. These trends, however, have been resisted within most sects of Islam, Roman Catholicism, and Orthodox Christianity. Within Roman Catholicism, most women understand through the dogma of the faith that they are to hold a place of love and focus within the family; this is understood not as making a woman less than her husband, who is called to be the patriarch of the family and to provide it with love and guidance, but as making her his equal.

Christian feminism is a branch of feminist theology which seeks to reinterpret and understand Christianity in light of the equality of women and men. While there is no standard set of beliefs among Christian feminists, most agree that God does not discriminate on the basis of biologically determined characteristics such as sex. Early feminists such as Elizabeth Cady Stanton concentrated almost solely on "making women equal to men." However, the Christian feminist movement chose to concentrate on the language of religion because they viewed the historic gendering of God as male as a result of the pervasive influence of patriarchy. Rosemary Radford Ruether provided a systematic critique of Christian theology from a feminist and theist point of view. Stanton was a freethinker, and Ruether is an agnostic who was born to Catholic parents but no longer practices the faith.

Islamic feminism is concerned with the role of women in Islam and aims for the full equality of all Muslims, regardless of gender, in public and private life. Although rooted in Islam, the movement's pioneers have also used secular and Western feminist discourses. Advocates of the movement seek to highlight the deeply rooted teachings of equality in the Quran and encourage a questioning of the patriarchal interpretation of Islamic teaching through the Quran, hadith (sayings of Muhammad), and sharia (law) towards the creation of a more equal and just society.

Jewish feminism seeks to improve the religious, legal, and social status of women within Judaism and to open up new opportunities for religious experience and leadership for Jewish women. In its modern form, the movement can be traced to the early 1970s in the United States. According to Judith Plaskow, who has focused on feminism in Reform Judaism, the main issues for early Jewish feminists in these movements were the exclusion from the all-male prayer group or minyan, the exemption from positive time-bound mitzvot, and women's inability to function as witnesses and to initiate divorce. Since the 1970s, the Goddess movement has been embraced by some feminists as well.
Businesses
Feminist activists have established a range of feminist businesses, including women's bookstores, feminist credit unions, feminist presses, feminist mail-order catalogs, and feminist restaurants. These businesses flourished as part of the second and third waves of feminism in the 1970s, 1980s, and 1990s. Although the range of feminist businesses has increased significantly, one study noted that women-owned businesses are frequently described as underperforming, in that they remain small and marginal. Women still face higher barriers to becoming entrepreneurs than men do.

See also

Subjects or international organizations
Comprehensive sex education
Equity feminism
Individualist feminism
Jewish feminism
Material feminism
Marxist feminism
New Thought
Radical feminism
Relationship education
Sexual revolution
Third-wave feminism
Timeline of women's rights (other than voting)
Timeline of women's suffrage
Women, Culture, and Society
Women's International League for Peace and Freedom
Women's liberation movement
List of feminists
List of suffragists and suffragettes
List of women's rights activists

By continent
Feminism in Africa
Feminism in Asia
Feminism in Europe
Feminism in North America
Feminism in Oceania
Feminism in South America

Country or region specific articles
Feminism in 1950s Britain
Feminist movements in the United States
Jam'iyat-e Nesvan-e Vatankhah (Iran)

External links
The M and S Collection at the Library of Congress contains materials on the Women's Movement.
Four Waves of Feminism from Pacific University Oregon.
Arabization
Arabization or Arabicization is a sociological process of cultural change in which a non-Arab society becomes Arab, meaning it either directly adopts or becomes strongly influenced by the Arabic language, culture, literature, art, music, and ethnic identity as well as other socio-cultural factors. It is a specific form of cultural assimilation that often includes a language shift. The term applies not only to cultures, but also to individuals, as they acclimate to Arab culture and become "Arabized". Arabization took place after the Muslim conquest of the Middle East and North Africa, as well as during the more recent Arab nationalist policies toward non-Arab minorities in modern Arab states, such as Algeria, Iraq, Syria, Egypt, Bahrain, and Sudan.

After the rise of Islam in the Hejaz and the subsequent Muslim conquests, Arab culture and language spread outside the Arabian Peninsula through trade and intermarriage between members of the non-Arab local population and the peninsular Arabs. The Arabic language began to serve as a lingua franca in these areas and various dialects were formed. This process was accelerated by the migration of various Arab tribes outside of Arabia, such as the Arab migrations to the Maghreb and the Levant. The influence of Arabic has been profound in many other countries whose cultures have been influenced by Islam. Arabic was a major source of vocabulary for various languages. This process reached its zenith between the 10th and 14th centuries, widely considered to be the high point of Arab culture, during the Islamic Golden Age.

Early Arab expansion in the Near East
After Alexander the Great, the Nabataean Kingdom emerged and ruled a region extending from north of Arabia to the south of Syria. The Nabataeans originated from the Arabian Peninsula and came under the influence of the earlier Aramaic culture, the neighbouring Hebrew culture of the Hasmonean kingdom, and the Hellenistic cultures in the region (especially after the Christianization of the Nabataeans in the 3rd and 4th centuries). The script of pre-modern Arabic was created by the Nabataeans, who developed the Nabataean alphabet, which became the basis of the modern Arabic script. The Nabataean language, under heavy Arab influence, amalgamated into the Arabic language.

The Arab Ghassanids undertook the last major non-Islamic Semitic migration northward out of Yemen in the late classical era. They were Greek Orthodox Christians and clients of the Byzantine Empire. They arrived in Byzantine Syria, which had a largely Aramean population. They initially settled in the Hauran region, eventually spreading to the entire Levant (modern Lebanon, Israel, Palestine and Jordan), briefly securing governorship of parts of Syria and Transjordan away from the Nabataeans.

The Arab Lakhmid Kingdom was founded by the Lakhum tribe, which emigrated from Yemen in the 2nd century, and was ruled by the Banu Lakhm, hence its name. Its rulers adopted the religion of the Church of the East, founded in Assyria/Asōristān, in opposition to the Ghassanids' Greek Orthodox Christianity, and were clients of the Sasanian Empire. The Byzantines and Sasanians used the Ghassanids and Lakhmids to fight proxy wars in Arabia against each other.

History of Arabization
Arabization during the early Caliphate
The most significant wave of "Arabization" in history followed the early Muslim conquests of Muhammad and the subsequent Rashidun and Umayyad Caliphates.
These Arab empires were the first to grow well beyond the Arabian Peninsula, eventually reaching as far as Iberia in the west and Central Asia in the east, covering one of the largest imperial expanses in history.

Southern Arabia
South Arabia is a historical region that consists of the southern region of the Arabian Peninsula, mainly centered in what is now the Republic of Yemen, yet it also included Najran, Jizan, and 'Asir, which are presently in Saudi Arabia, and the Dhofar of present-day Oman. Old South Arabian was driven to extinction by the Islamic expansion, being replaced by Classical Arabic, which is written with the Arabic script. The South Arabian alphabet which was used to write it also fell out of use. A separate branch of South Semitic, the Modern South Arabian languages, still survives today in spoken form in the south of present-day Saudi Arabia, in Yemen, and in Dhofar in present-day Oman. Although Yemen is traditionally held to be the homeland of the Qahtanite Arabs, who according to Arab tradition are pure Arabs, most of the sedentary Yemeni population did not speak Old Arabic prior to the spread of Islam and spoke the now extinct Old South Arabian languages instead.

Eastern and Northern Arabia
Before the 7th century CE, the population of Eastern Arabia consisted of Christian Arabs, Zoroastrian Arabs, Jews, and Aramaic-speaking agriculturalists. Some sedentary dialects of Eastern Arabia exhibit Akkadian, Aramaic and Syriac features. The sedentary people of ancient Bahrain were Aramaic speakers and to some degree Persian speakers, while Syriac functioned as a liturgical language. Even within Northern Arabia, Arabization occurred among non-Arab populations such as the Hutaym in northwestern Arabia and the Solluba in the Syrian Desert and the region of Mosul.

The Levant
On the eve of the Rashidun Caliphate's conquest of the Levant in 634 AD, Syria's population mainly spoke Aramaic; Greek was the official language of administration. Arabization and Islamization of Syria began in the 7th century, and it took several centuries for Islam, Arab identity, and the Arabic language to spread; the Arabs of the caliphate did not attempt to spread their language or religion in the early periods of the conquest, and formed an isolated aristocracy. The Arabs of the caliphate accommodated many new tribes in isolated areas to avoid conflict with the locals; caliph Uthman ordered his governor, Muawiyah I, to settle the new tribes away from the original population. Syrians who belonged to Monophysitic denominations welcomed the peninsular Arabs as liberators. The Abbasids in the eighth and ninth century sought to integrate the peoples under their authority, and the Arabization of the administration was one of the tools. Arabization gained momentum with the increasing numbers of Muslim converts; the ascendancy of Arabic as the formal language of the state prompted the cultural and linguistic assimilation of Syrian converts. Those who remained Christian also became Arabized; it was probably during the Abbasid period in the ninth century that Christians adopted Arabic as their first language; the first translation of the gospels into Arabic took place in this century. Many historians, such as Claude Cahen and Bernard Hamilton, proposed that the Arabization of Christians was completed before the First Crusade. By the thirteenth century, the Arabic language had achieved dominance in the region, and its speakers came to identify as Arabs.
Egypt
Prior to the Islamic conquests, Arabs had been inhabiting the Sinai Peninsula, the Eastern Desert and the eastern Delta for centuries. These regions of Egypt were collectively known as "Arabia" to the contemporary historians and writers documenting them. Several pre-Islamic Arab kingdoms, such as the Qedarite Kingdom, extended into these regions. Inscriptions and other archeological remains, such as bowls bearing inscriptions identifying Qedarite kings and Nabatean Arabic inscriptions, affirm the Arab presence in the region. Egypt was conquered from the Romans by the Rashidun Caliphate in the 7th century CE. The Coptic language, which was written using the Coptic variation of the Greek alphabet, was spoken in most of Egypt prior to the Islamic conquest. Arabic, however, was already being spoken in the eastern fringes of Egypt for centuries prior to the arrival of Islam. By the Mameluke era, the Arabization of the Egyptian populace, alongside a shift in the majority religion from Christianity to Islam, had taken place.

The Maghreb
Neither North Africa nor the Iberian Peninsula were strangers to Semitic culture: the Phoenicians and later the Carthaginians dominated parts of the North African and Iberian shores for more than eight centuries, until they were suppressed by the Romans and by the following Vandal and Visigothic invasions and the Berber incursions. From the Muslim conquest of the Maghreb in the 7th century, Arabs began to migrate to the Maghreb in several waves. Arab migrants settled in all parts of the Maghreb, coming as peaceful newcomers who were welcomed everywhere, establishing large Arab settlements in many areas. In addition to changing the population's demographics, the early migration of Arab tribes resulted in the Arabization of the native Berber population. This initial wave contributed to the Berber adoption of Arab culture. Furthermore, the Arabic language spread during this period and drove local Latin (African Romance) into extinction in the cities. Arabization took place around Arab centres through the influence of Arabs in the cities and in the surrounding rural areas. Arab political entities in the Maghreb, such as the Aghlabids, Idrisids, Salihids and Fatimids, were influential in encouraging Arabization by attracting Arab migrants and by promoting Arab culture. In addition, disturbances and political unrest in the Mashriq compelled Arabs to migrate to the Maghreb in search of security and stability.

After establishing Cairo in 969, the Fatimids left rule over Tunisia and eastern Algeria to the local Zirid dynasty (972–1148). In response to the Zirids later declaring independence from the Fatimids, the Fatimids dispatched large Bedouin Arab tribes, mainly the Banu Hilal and Banu Sulaym, to defeat the Zirids and settle in the Maghreb. The invasion of Ifriqiya by the Banu Hilal, a warlike Arab Bedouin tribe, sent the region's urban and economic life into further decline. The Arab historian Ibn Khaldun wrote that the lands ravaged by Banu Hilal invaders had become completely arid desert. The Fatimid caliph instructed the Bedouin tribes to rule the Maghreb instead of the Zirid emir Al-Mu'izz, telling them: "I have given you the Maghrib and the rule of al-Mu'izz ibn Balkīn as-Sanhājī the runaway slave. You will want for nothing." To Al-Mu'izz he said: "I have sent you horses and put brave men on them so that God might accomplish a matter already enacted."
Sources estimate that the total number of Arab nomads who migrated to the Maghreb in the 11th century was around one million. There were later Arab migrations to the Maghreb by the Maqil and Beni Hassan in the 13th–15th centuries and by Andalusi refugees in the 15th–17th centuries. The migration of the Banu Hilal and Banu Sulaym in the 11th century had a much greater influence on the process of Arabization of the population than did the earlier migrations. It played a major role in spreading Bedouin Arabic to rural areas such as the countryside and steppes, and as far as the southern areas near the Sahara. It also heavily transformed the culture of the Maghreb into Arab culture, and spread nomadism in areas where agriculture was previously dominant.

Al-Andalus
After the Umayyad conquest of Hispania, Iberia (al-Andalus) under Arab Muslim rule incorporated elements of Arabic language and culture. The Mozarabs were Iberian Christians who lived under Arab Islamic rule in al-Andalus. Their descendants remained unconverted to Islam but adopted elements of Arabic language, culture, and dress. They were mostly Roman Catholics of the Visigothic or Mozarabic Rite. Most of the Mozarabs were descendants of Hispano-Gothic Christians and were primarily speakers of the Mozarabic language under Islamic rule. Many were also what the Arabist Mikel de Epalza calls "Neo-Mozarabs", that is, Northern Europeans who had come to the Iberian Peninsula and picked up Arabic, thereby entering the Mozarabic community.

Besides the Mozarabs, another group of people in Iberia eventually came to surpass them both in terms of population and Arabization. These were the Muladi or Muwalladun, most of whom were descendants of local Hispano-Basques and Visigoths who converted to Islam and adopted Arabic culture, dress, and language. By the 11th century, most of the population of al-Andalus was Muladi, with large minorities of other Muslims, Mozarabs, and Sephardic Jews. It was the Muladi, together with the Berber, Arab, and other (Saqaliba and Zanj) Muslims, who became collectively termed in Christian Europe as "Moors". Andalusian Arabic was spoken in Iberia during Islamic rule.

Sicily, Malta, and Crete
A similar process of Arabization and Islamization occurred in the Emirate of Sicily (as-Siqilliyyah), the Emirate of Crete (al-Iqritish), and Malta (al-Malta). During this period, some segments of the populations of these islands converted to Islam and began to adopt elements of Arabic culture, traditions, and customs. The Arabization process also resulted in the development of the now extinct Siculo-Arabic language, from which the modern Maltese language derives. By contrast, the present-day Sicilian language, which is an Italo-Dalmatian Romance language, retains very little Siculo-Arabic, with its influence being limited to some 300 words.

Sudan
Contacts between Nubians and Arabs long predated the coming of Islam, but the Arabization of the Nile Valley was a gradual process that occurred over a period of nearly one thousand years. Arab nomads continually wandered into the region in search of fresh pasturage, and Arab seafarers and merchants traded at Red Sea ports for spices and slaves. Intermarriage and assimilation also facilitated Arabization. Traditional genealogies trace the ancestry of the mixed population of Sudan's Nile Valley to Arab tribes that migrated into the region during this period. Even many non-Arabic-speaking groups claim descent from Arab forebears.
The two most important Arabic-speaking groups to emerge in Nubia were the Ja'alin and the Juhaynah. In the 12th century, the Arab Ja'alin tribe migrated into Nubia and Sudan and gradually occupied the regions on both banks of the Nile from Khartoum to Abu Hamad. They trace their lineage to Abbas, uncle of the Islamic prophet Muhammad. They are of Arab origin but are now of mixed ancestry, mostly with Northern Sudanese and Nubians. In the 16th and 17th centuries, new Islamic kingdoms were established – the Funj Sultanate and the Sultanate of Darfur – starting a long period of gradual Islamization and Arabization in Sudan. These sultanates and their societies existed until Sudan was conquered by the Ottoman Egyptian invasion in 1820, and in the case of Darfur, even until 1916. In 1846, the Arab Rashaida, who speak Hejazi Arabic, migrated from the Hejaz in present-day Saudi Arabia into what is now Eritrea and north-east Sudan after tribal warfare had broken out in their homeland. The Rashaida of Sudan live in close proximity with the Beja people, who speak Bedawiye dialects in eastern Sudan.

The Sahel
In medieval times, the Baggara Arabs, a grouping of Arab ethnic groups who speak Shuwa Arabic (one of the regional varieties of Arabic in Africa), migrated into Africa, mainly between Lake Chad and southern Kordofan. Currently, they live in a belt which stretches across Sudan, Chad, Niger, Nigeria, Cameroon, the Central African Republic and South Sudan, and they number over six million people. Like other Arabic-speaking tribes in the Sahara and the Sahel, the Baggara tribes trace their ancestry to the Juhaynah Arab tribes, who migrated directly from the Arabian Peninsula or from other parts of North Africa. Arabic is an official language of Chad and Sudan as well as a national language in Niger, Mali, Senegal, and South Sudan. In addition, Arabic dialects are spoken by minorities in Nigeria, Cameroon, and the Central African Republic.

Arabization in modern times
In the modern era, Arabization occurred due to the Arab nationalist policies toward non-Arab minorities in modern Arab states, including Algeria, Iraq, Syria, Egypt, Bahrain, Kuwait, and Sudan. Modern Arabization also occurred to reverse the consequences of European colonialism. Arab governments often imposed policies that sought to promote the use of Modern Standard Arabic and eliminate the languages of former colonizers, such as the changing of street signs from French to Arabic names in Algeria.

Arabization in Algeria
Modern Arabization in Algeria sought to develop and promote Arabic in the nation's education system, government, and media in order to replace French, the language imposed through colonization. Algeria had been conquered by France and even made part of its metropolitan core for 132 years, a significantly longer timespan compared to Morocco and Tunisia, and it was also more influenced by Europe due to its contiguity with French settlers in Algeria: both Algerian and French nationals used to live in the same towns, resulting in the cohabitation of the two populations. While trying to build an independent and unified nation-state after the Evian Accords, the Algerian government under Ahmed Ben Bella's rule began a policy of Arabization. Indeed, due to the lasting and deep colonization, French was the major administrative and academic language in Algeria, even more so than in neighboring countries.
Since independence, Algerian nationalism has been heavily influenced by Arab socialism, Islamism and Arab nationalism. The pursuit of a single, unified Algerian identity was grounded in Arab identity, the Arabic language and religion. Ben Bella composed the Algerian constitution in October 1963, which asserted that Islam was the state religion, Arabic was the sole national and official language of the state, Algeria was an integral part of the Arab world, and that Arabization was the first priority of the country in order to reverse French colonization. According to Abdelhamid Mehri, the decision to make Arabic the official language was the natural choice for Algerians, even though Algeria is a plurilingual nation with a substantial Berber minority, and even though the local variety of Arabic used in everyday life, Algerian Arabic, is distinct from the official language, Modern Standard Arabic. However, the process of Arabization was meant not only to promote Islam, but also to bridge the gap and reduce conflicts between the different Algerian ethnic groups and to promote equality through monolingualism. In 1964 the first practical measure was the Arabization of primary education and the introduction of religious education, the state relying on Egyptian teachers – belonging to the Muslim Brotherhood and therefore particularly religious – due to its lack of speakers of literary Arabic. In 1968, during the Houari Boumediene regime, Arabization was extended, and a law tried to enforce the use of Arabic for civil servants, but again, the major role played by French was merely diminished, not eliminated. The whole policy was ultimately not as effective as anticipated: French has kept its importance and Berber opposition kept growing, contributing to the 1988 October Riots. Some Berber groups, like the Kabyles, felt that their ancestral culture and language were threatened and that the Arab identity was given more focus at the expense of their own. After the Algerian Civil War, the government tried to enforce the use of Arabic even more strongly, but the limited effect of this policy after 1998 (the deadline fixed for complete Arabization) forced the heads of state to make concessions towards Berber, recognizing it in 2002 as another national language to be promoted. However, because of literary Arabic's symbolic advantage, as well as its being a single language as opposed to the fragmented Berber languages, Arabization is still a goal for the state, for example with laws on civil and administrative procedures.

Arabization in Oman
Despite being a nation of the Arabian Peninsula, Oman has been home to several native languages other than Arabic. Among them, Kumzari, the only native Indo-European language in the Arabian Peninsula, has been classified by UNESCO as highly endangered and at risk of dying out within 50 years. Before Qaboos took over as sultan, the inhabitants of Kumzar spoke Arabic only outside the village, in mosques or with strangers. Since the introduction of Arabic-only schools in 1984, however, Arabic has come to be spoken in both school and village: it is mandatory at school, and television and radio are also in Arabic, meaning that virtually all media the people of Kumzar are exposed to is in Arabic. There has also been an internalization of outsiders' negative attitudes toward the Kumzari language, to the point where some Kumzari families have begun to speak Arabic to their children at home. The Modern South Arabian languages have also come under threat in Oman.
Hobyot is considered a critically endangered language. The actual number of speakers is unknown, but it is estimated to be only a few hundred. Most of those who maintain the language are elderly, which adds to the likelihood that language extinction is near. Ethnologue categorizes it as a moribund language (EGIDS 8a). The only fluent speakers that are left are older than child-bearing age, which ultimately makes integration of the language into subsequent generations highly improbable. Mechanisms of transmission would have to be created from outside the community in order to preserve it.

The Harsusi language is also critically endangered. As most Harsusi children now attend Arabic-language schools and are literate in Arabic, Harsusi is spoken less in the home, meaning that it is not being passed down to future generations. The discovery of oil in the area and the reintroduction of the Arabian oryx, which provided job opportunities for Harsusi men, have led them to use primarily Arabic or Mehri when communicating with their co-workers. These factors have also caused many Harasis to speak Arabic and Mehri in addition to, or in place of, Harsusi. These pressures led one researcher to conclude in 1981 that "within a few generations Harsusi will be replaced by Arabic, more specifically by the Omani Arabic standard dialect", though this has not yet materialized. UNESCO has categorised Harsusi as a language that is "definitely endangered".

The Shehri language has also come under threat in recent years. Prior to the Arabization of Oman, Shehri was spoken from Yemen's Hadhramaut region to Ras Al Hadd in eastern Oman. Until as little as around forty years ago, Shehri was spoken as the common language by all of the inhabitants of Dhofar, including the native Arabic speakers in Salalah, who spoke it fluently; the remainder of Dhofar's inhabitants all spoke Shehri as their mother tongue. Today, however, Arabic has taken over as the language of mutual communication in Dhofar, and Shehri is now spoken only by those for whom it is the native tongue. A number of the older generation of Shehri speakers, particularly those who live in the mountains, do not speak Arabic at all, and it was only around fifty years ago that most of Dhofar's Shehri-speaking population began to learn it. The fact that Arabic, unlike Shehri, has a written form has also greatly contributed to Shehri's decline. Another language, Bathari, is the most at risk of dying out: as of 2019 it had anywhere from 12 to 17 fluent elderly speakers, while some middle-aged speakers remain but mix their ancestral tongue with Arabic. The tribe itself appears to be dying out, and the language is also under threat from modern education conducted solely in Arabic. The Bathari language is nearly extinct, with estimates putting the total number of remaining speakers at under 100.

Arabization in Morocco
Following 44 years of colonization by France, Morocco began promoting the use of Modern Standard Arabic to create a united Moroccan national identity and to increase literacy throughout the nation, moving away from any single predominant language within the administration and the educational system. Unlike Algeria, Morocco did not encounter the French presence as strongly, since the Moroccan population was scattered throughout the nation and its major cities, which resulted in less French influence compared to the neighboring nations.
Educational policy was, first and foremost, the main focus of the process; debates surfaced between officials who preferred a "modern and westernized" education with enforced bilingualism, while others fought for a traditional route with a focus on "Arabo-Islamic culture". Once the Istiqlal Party took power, it focused on putting in place a language policy siding with the traditional ideas of supporting and focusing on Arabic and Islam. The party implemented the policy rapidly, and by the second year after independence the first year of primary education was completely Arabized, while a bilingual policy was put in place for the remaining primary years, decreasing the hours of French taught in a staggered manner. Arabization in schools proved more time-consuming and difficult than expected because, for the first 20 years following independence, politicians (most of whom had been educated in France or in French private schools in Morocco) were indecisive as to whether Arabization was best for the country and its political and economic ties with European nations. Regardless, complete Arabization could only be achieved if Morocco became completely independent from France in all aspects: politically, economically, and socially. Around 1960, Hajj Omar Abdeljalil, the education minister at the time, reversed all the efforts made to Arabize the public schools and reverted to pre-independence policies favoring French and westernized learning. Another factor reflecting support for reversing the Arabization process in Morocco was the role of King Hassan II, who supported the Arabization process in principle but in practice increased political and economic dependence on France. Because Morocco remained dependent on France and wanted to keep strong ties with the Western world, French was supported by the elites more than Arabic for the development of Morocco.

Arabization in Tunisia
The Arabization process in Tunisia should, in theory, have been the easiest within the North African region, because less than 1% of its population was Berber and practically 100% of the population natively spoke vernacular Tunisian Arabic. However, it was the least successful, owing to the country's dependence on European nations and its leaders' belief in Westernizing the nation for the future development of the people and the country. Much as in Morocco, the debate among Tunisian leaders pitted traditionalists against modernists: traditionalists claimed that Arabic (specifically Classical Arabic) and Islam are the core of Tunisia and its national identity, while modernists believed that Westernized development, distant from "Pan-Arabist ideas", was crucial for Tunisia's progress. Modernists had the upper hand, since the elites supported their ideals, and their position was strengthened after the first wave of graduates who had passed their high school examinations in Arabic were unable to find jobs or attend university: owing to the preference for French, they did not qualify for any upper-level university programme or career outside the Arabic and Religious Studies department. Legitimate efforts to Arabize the nation were made from the 1970s until 1982, but they then came to an end, the progress of Arabization began to be reversed, and the implementation of French in schooling took effect. The Arabization process was criticized and linked with Islamic extremism, resulting in a process of "Francophonie", that is, promoting French ideals, values, and language throughout the nation and placing its importance above Arabic.
Although Tunisia gained its independence, the elites continued to support French values above Arabic, believing that the answer to developing an educated and modern nation lay in Westernization. The constitution stated that Arabic was the official language of Tunisia, but nowhere did it require that Arabic be utilized in the administration or in everyday life, which resulted in an increase in the use of French beyond science and technology courses. Further, major media channels were in French, and government administrations were divided: some operated in Arabic while others operated in French.

Arabization in Sudan
Sudan is an ethnically mixed country that is economically and politically dominated by the society of central northern Sudan, where many identify as Arabs and Muslims. The population in South Sudan consists mostly of Christian and Animist Nilotic people, who have been regarded for centuries as non-Arab, African people. Apart from Modern Standard Arabic, taught in schools and higher education, and the colloquial spoken forms of Sudanese Arabic, several other languages are spoken by diverse ethnic groups. Since independence in 1956, Sudan has been a multilingual country, with Sudanese Arabic as the major first or second language. In the 2005 constitution of the Republic of Sudan, and following the Comprehensive Peace Agreement, the official languages of Sudan were declared to be Modern Standard Arabic (MSA) and English. Before the independence of South Sudan in 2011, people in the southern parts of the country, who mainly speak Nilo-Saharan languages or Juba Arabic, were subjected to the official policy of Arabization by the central government. The constitution declared, however, that "all indigenous languages of the Sudan are national languages and shall be respected, developed, and promoted," and it allowed any legislative body below the national level to adopt any other national language(s) as additional official working language(s) within that body's jurisdiction. MSA is also the language used in Sudan's central government, the press, as well as in official programmes of Sudan television and Radio Omdurman. Several lingua francas have emerged, and many people have become genuinely multilingual, fluent in a native language spoken at home, a lingua franca, and perhaps other languages.

Arabization in Mauritania
Mauritania is an ethnically mixed country that is economically and politically dominated by those who identify as Arabs and/or Arabic-speaking Berbers. About 30% of the population is considered "Black African", and the other 40% are Arabized Blacks, both groups suffering high levels of discrimination. Recent Black Mauritanian protesters have complained of "comprehensive Arabization" of the country.

Arabization in Iraq
Saddam Hussein's Ba'ath Party pursued aggressive Arabization policies that involved driving out many pre-Arab and non-Arab peoples – mainly Kurds, Assyrians, Yezidis, Shabaks, Armenians, Turcomans, Kawliya, Circassians, and Mandeans – and replacing them with Arab families. In the 1970s, Saddam Hussein exiled between 350,000 and 650,000 Shia Iraqis of Iranian ancestry (Ajam). Most of them went to Iran. Those who could prove Iranian/Persian ancestry before Iran's courts received Iranian citizenship (400,000 people), and some of them returned to Iraq after Saddam's fall.
During the Iran–Iraq War, the Anfal campaign destroyed many Kurdish, Assyrian and other ethnic minority villages and enclaves in northern Iraq, and their inhabitants were often forcibly relocated to large cities in the hope that they would be Arabized. This policy drove out 500,000 people in the years 1991–2003. The Baathists also pressured many of these ethnic groups to identify as Arabs, and restrictions were imposed upon their languages, cultural expression and right to self-identification.

Arabization in Syria
Since the independence of Syria in 1946, the ethnically diverse Rojava region in northern Syria has suffered grave human rights violations, because all governments pursued a brutal policy of Arabization. While all non-Arab ethnic groups within Syria, such as Assyrians, Armenians, Turcomans, and Mhallami, have faced pressure from Arab nationalist policies to identify as Arabs, the harshest of these policies was directed against the Kurds. In his report for the 12th session of the UN Human Rights Council, titled Persecution and Discrimination against Kurdish Citizens in Syria, the United Nations High Commissioner for Human Rights held: "Successive Syrian governments continued to adopt a policy of ethnic discrimination and national persecution against Kurds, completely depriving them of their national, democratic and human rights — an integral part of human existence. The government imposed ethnically-based programs, regulations and exclusionary measures on various aspects of Kurds' lives — political, economic, social and cultural."

The Kurdish language was not officially recognized and had no place in public schools. A decree from 1989 prohibited the use of Kurdish at the workplace as well as in marriages and other celebrations. In September 1992, another government decree forbade children from being registered with Kurdish names. Businesses could also not be given Kurdish names. Books, music, videos and other material could not be published in the Kurdish language. Expressions of Kurdish identity like songs and folk dances were outlawed and frequently prosecuted under a purpose-built criminal law against "weakening national sentiment". Celebrating the Nowruz holiday was often constrained.

In 1973, the Syrian authorities confiscated 750 square kilometers of fertile agricultural land in Al-Hasakah Governorate, which were owned and cultivated by tens of thousands of Kurdish citizens, and gave it to Arab families brought in from other provinces. Describing the settlement policies pursued by the regime as part of the "Arab Belt" programme, a Kurdish engineer in the region stated: "The government built them homes for free, gave them weapons, seeds and fertilizer, and created agricultural banks that provided loans. From 1973 to 1975, forty-one villages were created in this strip, beginning ten kilometers west of Ras al-'Ayn. The idea was to separate Turkish and Syrian Kurds, and to force Kurds in the area to move away to the cities. Any Arab could settle in Hasakeh, but no Kurd was permitted to move and settle there." In 2007, in another such scheme in Al-Hasakah Governorate, 6,000 square kilometers around Al-Malikiyah were granted to Arab families, while tens of thousands of Kurdish inhabitants of the villages concerned were evicted. These and other expropriations of ethnic Kurdish citizens followed a deliberate master plan, called the "Arab Belt" initiative, which attempted to depopulate the resource-rich Jazeera of its ethnic Kurdish inhabitants and settle ethnic Arabs there.
After Turkish-led forces captured the Afrin District in early 2018, they began to implement a resettlement policy, moving Turkish-backed Free Syrian Army fighters and Sunni Arab refugees from southern Syria into the empty homes that belonged to displaced locals. The previous owners, most of them Kurds or Yazidis, were often prevented from returning to Afrin. Refugees from Eastern Ghouta, Damascus, said that they were part of "an organised demographic change" which was supposed to replace the Kurdish population of Afrin with an Arab majority.

De-Arabization
In the modern era, de-Arabization can refer to government policies which aim to reverse Arabization, such as the reversal of the Arabization of Kurds in northern Iraq and of Mizrahi Jews in Israel.

Historic reversions of Arabization
Norman conquest of southern Italy (999-1139)
The Muslim conquest of Sicily lasted from 827 until 902, when the Emirate of Sicily was established. The island was marked by an Arab–Byzantine culture. Sicily was in turn subjected to the Norman conquest of southern Italy from 999 to 1139. The Arab identity of Sicily came to an end by the mid-13th century at the latest.

Reconquista (1212-1492)
The Reconquista in the Iberian Peninsula is the most notable example of a historic reversion of Arabization. The process of Arabization and Islamization was reversed as the mostly Christian kingdoms in the north of the peninsula defeated the Almohads at Las Navas de Tolosa in 1212 and conquered Cordoba in 1236. Granada, the last remaining emirate on the peninsula, was conquered in January 1492. The re-conquered territories were Hispanicized and Christianized, although the culture, languages and religious traditions imposed differed from those of the previous Visigothic kingdom.

Reversions in modern times
In modern times, there have been various political developments to reverse the process of Arabization. Notable among these are:
The 1948 establishment of the State of Israel as a Jewish polity, the Hebraization of Palestinian place names, the use of Hebrew as an official language (with Arabic remaining co-official) and the de-Arabization of the Arabic-speaking Sephardim and Mizrahi Jews who arrived in Israel from the Arab world.
The 1992 establishment of a Kurdish-dominated polity in Mesopotamia as Iraqi Kurdistan.
The 2012 establishment of a multi-ethnic Democratic Federation of Northern Syria.
Berberism, a Berber political-cultural movement of ethnic, geographic, or cultural nationalism present in Algeria, Morocco and broader North Africa including Mali. The Berberist movement is in opposition to cultural Arabization and the pan-Arabist political ideology, and is also associated with secularism.
South Sudan's secession from Arab-led Sudan in 2011 after a bloody civil war, which decreased Sudan's territory by almost half. Sudan is a member of the Arab League, while South Sudan did not enter membership. Arabic is also not an official language of South Sudan.
The Arabization of Malays, which was criticized by Sultan Ibrahim Ismail of Johor. He urged the retention of Malay culture instead of introducing Arab culture. He called on people not to mind unveiled women or mixed-sex handshaking, and urged against using Arabic words in place of Malay words. He suggested Saudi Arabia as a destination for those who wanted Arab culture, and said that he was going to adhere to Malay culture himself. Abdul Aziz Bari said that Islam and Arab culture are intertwined and criticized the Johor Sultan for what he said.
Datuk Haris Kasim, who leads the Selangor Islamic Religious Department, also criticized the Sultan for his remarks. In 2018, the Chinese government launched a campaign, described as "de-Arabization" and "de-Saudization", to remove Arab-style domes and minarets from mosques.

See also
Arab nationalism
Pan-Arabism
Cultural assimilation
History of the Arabs
Islamism
Spread of Islam
Notes
References
External links
Genetic Evidence for the Expansion of Arabian Tribes into the Southern Levant and North Africa
Bossut, Camille Alexandra. Arabization in Algeria: language ideology in elite discourse, 1962-1991 (Abstract) - PhD thesis, University of Texas at Austin, May 2016.
The
The is a grammatical article in English, denoting persons or things that are already or about to be mentioned, under discussion, implied or otherwise presumed familiar to listeners, readers, or speakers. It is the definite article in English. The is the most frequently used word in the English language; studies and analyses of texts have found it to account for seven percent of all printed English-language words. It is derived from gendered articles in Old English which combined in Middle English and now has a single form used with nouns of any gender. The word can be used with both singular and plural nouns, and with a noun that starts with any letter. This is different from many other languages, which have different forms of the definite article for different genders or numbers.

Pronunciation
In most dialects, "the" is pronounced as /ðə/ (with the voiced dental fricative followed by a schwa) when followed by a consonant sound, and as /ðiː/ (a homophone of the archaic pronoun thee) when followed by a vowel sound or used as an emphatic form. Modern American and New Zealand English have an increasing tendency to limit usage of the /ðiː/ pronunciation and use /ðə/, even before a vowel. Sometimes the word "the" is pronounced /ðiː/, with stress, to emphasise that something is unique: "he is the expert", not just "an" expert in a field.

Adverbial
Definite article principles in English are described under "Use of articles". The, as in phrases like "the more the better", has a distinct origin and etymology and by chance has evolved to be identical to the definite article.

Article
The and that are common developments from the same Old English system. Old English had a definite article se (in the masculine gender), sēo (feminine), and þæt (neuter). In Middle English, these had all merged into þe, the ancestor of the Modern English word the.

Geographic usage
An area in which the use or non-use of the is sometimes problematic is with geographic names:
Notable natural landmarks – rivers, seas, mountain ranges, deserts, island groups (archipelagoes), etc. – are generally used with a "the" definite article (the Rhine, the North Sea, the Alps, the Sahara, the Hebrides).
Continents, individual islands, administrative units, and settlements mostly do not take a "the" article (Europe, Jura, Austria (but the Republic of Austria), Scandinavia, Yorkshire (but the County of York), Madrid).
Names beginning with a common noun followed by of may take the article, as in the Isle of Wight or the Isle of Portland (compare Christmas Island); the same applies to names of institutions: Cambridge University, but the University of Cambridge.
Some place names include an article, such as the Bronx, The Oaks, The Rock, The Birches, The Harrow, The Rower, The Swan, The Valley, The Farrington, The Quarter, The Plains, The Dalles, The Forks, The Village, The Village (NJ), The Village (OK), The Villages, The Village at Castle Pines, The Woodlands, The Pas, Wells-next-the-Sea, the Vatican, the Tiergarten, The Hyde, the West End, the East End, The Hague, or the City of London (but London). Some did so formerly, e.g. Bath, Devizes or White Plains.
Singular names that are descriptive in form, such as the North Island (New Zealand) or the West Country (England), generally take an article.
Countries and territorial regions are notably mixed: most exclude "the", but some adhere to secondary rules:
Derivations from collective common nouns such as "kingdom", "republic", "union", etc.: the Central African Republic, the Dominican Republic, the United States, the United Kingdom, the Soviet Union, the United Arab Emirates, including most country full names: the Czech Republic (but Czechia), the Russian Federation (but Russia), the Principality of Monaco (but Monaco), the State of Israel (but Israel) and the Commonwealth of Australia (but Australia).
Countries whose names are plural nouns: the Netherlands, the Falkland Islands, the Faroe Islands, the Cayman Islands, the Philippines, the Comoros, the Maldives, the Seychelles, Saint Vincent and the Grenadines, and the Bahamas.
Singular derivations from "island" or "land" that hold administrative rights – Greenland, England, Christmas Island and Norfolk Island – do not take a "the" definite article.
Derivations from mountain ranges, rivers, deserts, etc., are sometimes used with an article, even in the singular (the Lebanon, the Sudan, the Yukon, the Congo). This usage is in decline; The Gambia remains recommended, whereas use of the Argentine for Argentina is considered old-fashioned. Ukraine is occasionally referred to as the Ukraine, a usage that was common during the 20th century and during Soviet rule, but this is considered incorrect and possibly offensive in modern usage. Sudan (but the Republic of the Sudan) and South Sudan (but the Republic of South Sudan) are written nowadays without the article.

Ye form
In Middle English, the (þe) was frequently abbreviated as a þ with a small e above it, similar to the abbreviation for that, which was a þ with a small t above it. During the later Middle English and Early Modern English periods, the letter thorn (þ) in its common script, or cursive, form came to resemble a y shape. With the arrival of movable type printing, the substitution of y for þ became ubiquitous, leading to the common "ye", as in 'Ye Olde Curiositie Shoppe'. One major reason for this was that y existed in the printer's types that William Caxton and his contemporaries imported from Belgium and the Netherlands, while þ did not. As a result, the use of a y with an e above it as an abbreviation became common. It can still be seen in reprints of the 1611 edition of the King James Version of the Bible in places such as Romans 15:29 or in the Mayflower Compact. Historically, the article was never pronounced with a y sound even when it was so written.

Trademark
Ohio State University registered a trademark allowing the university to use "THE" on casual and athletic clothing. The university, often referred to as "The Ohio State University", had used "THE" on clothing since 2005, but took steps to register the trademark in August 2019 after the Marc Jacobs company attempted to do the same. In August 2021 Ohio State and Marc Jacobs agreed the high-end fashion retailer could use "THE" on its merchandise, which was different from what the university would sell. Still, the university took almost an additional year to convince the United States Patent and Trademark Office that the use of "the" was "more than ... ornamental".

Abbreviations
Since "the" is one of the most frequently used words in English, at various times short abbreviations for it have been found:
Barred thorn: the earliest abbreviation, it is used in manuscripts in the Old English language.
It is the letter þ with a bold horizontal stroke through the ascender, and it represents the word þæt, meaning "the" or "that" (neuter nom. / acc.).
þͤ and þͭ (þ with a superscript e or t) appear in Middle English manuscripts for "þe" and "þat" respectively.
yͤ and yͭ are developed from þͤ and þͭ and appear in Early Modern manuscripts and in print (see Ye form).
Occasional proposals have been made by individuals for an abbreviation. In 1916, Legros & Grant included in their classic printers' handbook Typographical Printing-Surfaces a proposal for a letter similar to Ħ to represent "Th", thus abbreviating "the" to ħe.
The word "The" itself, capitalised, is used as an abbreviation in Commonwealth countries for the honorific title "The Right Honourable", as in e.g. "The Earl Mountbatten of Burma", short for "The Right Honourable Earl Mountbatten of Burma", or "The Prince Charles".
Notes
References
External links
Classical mythology
Classical mythology, also known as Greco-Roman mythology or Greek and Roman mythology, is the collective body and study of myths from the ancient Greeks and ancient Romans. Mythology, along with philosophy and political thought, is one of the major survivals of classical antiquity throughout later Western culture. The Greek word mythos refers to the spoken word or speech, but it also denotes a tale, story or narrative.

As late as the Roman conquest of Greece during the last two centuries before the Common Era, and for centuries afterwards, the Romans, who already had gods of their own, adopted many mythic narratives directly from the Greeks while preserving their own Roman (Latin) names for the gods. As a result, the actions of many Roman and Greek deities became equivalent in storytelling and literature. For example, the Roman sky god Jupiter or Jove became equated with his Greek counterpart Zeus; the Roman fertility goddess Venus with the Greek goddess Aphrodite; and the Roman sea god Neptune with the Greek god Poseidon.

Latin remained the dominant language in Europe during the Middle Ages and Renaissance, largely due to the widespread influence of the Roman Empire. During this period, mythological names almost always appeared in their Latin form. However, in the 19th century, there was a shift towards the use of either the Greek or Roman names. For example, "Zeus" and "Jupiter" both became widely used in that century as the name of the supreme god of the classical pantheon.

Classical myth
The stories and characters found in Greco-Roman mythology are not considered real in the same way that historical or scientific facts are; they are not factual accounts of events that occurred. Instead, Greco-Roman mythology is a collection of ancient stories, legends, and beliefs that were created by the people of ancient Greece and Rome to explain aspects of the world around them, express cultural values, and provide a framework for understanding their existence. These myths often involve gods, goddesses, heroes, and other supernatural beings, and they were an integral part of the religious and cultural practices of the time. While these myths are not considered historically accurate, they hold cultural and literary significance.

Greek myths were narratives related to ancient Greek religion, often concerned with the actions of gods and other supernatural beings and of heroes who transcend human bounds. Major sources for Greek myths include the Homeric epics, that is, the Iliad and the Odyssey, and the tragedies of Aeschylus, Sophocles, and Euripides. Known versions are mostly preserved in sophisticated literary works shaped by the artistry of individuals and by the conventions of genre, or in vase painting and other forms of visual art. In these forms, mythological narratives often serve purposes that are not primarily religious, such as entertainment and even comedy (The Frogs), or the exploration of social issues (Antigone).

Roman myths are traditional stories pertaining to ancient Rome's legendary origins, religious institutions, and moral models, with a focus on human actors and only occasional intervention from deities but a pervasive sense of divinely ordered destiny. Roman myths have a dynamic relation to Roman historiography, as in the early books of Livy's Ab urbe condita.
The most famous Roman myth may be the birth of Romulus and Remus and the founding of the city, in which fratricide can be taken as expressing the long history of political division in the Roman Republic.

As late as the Hellenistic period of Greek influence, and primarily through the Roman conquest of Greece, the Romans identified their own gods with those of the Greeks, keeping their own Roman names but adopting the Greek stories told about them (see interpretatio graeca) and importing other myths for which they had no counterpart. For instance, while the Greek god Ares and the Italic god Mars are both war deities, the role of each in his society and its religious practices often differed strikingly; but in literature and Roman art, the Romans reinterpreted stories about Ares under the name of Mars. The literary collection of Greco-Roman myths with the greatest influence on later Western culture was the Metamorphoses of the Augustan poet Ovid. Syncretized versions form the classical tradition of mythography, and by the time of the influential Renaissance mythographer Natalis Comes (16th century), few if any distinctions were made between Greek and Roman myths. The myths as they appear in popular culture of the 20th and 21st centuries often have only a tangential relation to the stories as told in ancient Greek and Latin literature. For people living in the Renaissance era, who primarily studied Christian teachings, classical mythology became accessible again through freshly rediscovered ancient sources, which authors and playwrights drew on to retell these myths in plays and stories. Professor John Th. Honti stated that "many myths of Graeco-Roman antiquity" show "a nucleus" that appears in "some later common European folk-tale".

Mythology was not the only borrowing that the Romans made from Greek culture. Rome took over and adapted many categories of Greek culture: philosophy, rhetoric, history, epic, tragedy and their forms of art. In these areas, and more, Rome took over and developed the Greek originals for its own needs. Some scholars argue that the reason for this "borrowing" is largely, among many other things, the chronology of the two cultures: Professor Elizabeth Vandiver notes that Greece was the first of the two cultures to flourish in the Mediterranean, with Rome coming second.

See also
Related topics
Chariot clock
Classical tradition
Classics
Greco-Roman world
Greek mythology in western art and literature
LGBT themes in classical mythology
List of films based on classical mythology
List of films based on Greek drama
Matter of Rome
Natale Conti, influential Renaissance mythographer
Proto-Indo-European religion
Vatican Mythographers
Classical mythology categories
Classical mythology in popular culture
Ancient Greece in art and culture
Works based on classical mythology
On individual myths or figures
Ares in popular culture
Prometheus in popular culture
References
Hanseatic League
The Hanseatic League was a medieval commercial and defensive network of merchant guilds and market towns in Central and Northern Europe. Growing from a few North German towns in the late 12th century, the League expanded between the 13th and 15th centuries and ultimately encompassed nearly 200 settlements across eight modern-day countries, ranging from Estonia in the north and east to the Netherlands in the west, and extending inland as far as Cologne, the Prussian regions and Kraków, Poland.

The League began as a collection of loosely associated groups of German traders and towns aiming to expand their commercial interests, including protection against robbery. Over time, these arrangements evolved into the League, offering traders toll privileges and protection on affiliated territory and trade routes. Economic interdependence and familial connections among merchant families led to deeper political integration and the reduction of trade barriers. This gradual process involved standardizing trade regulations among Hanseatic Cities.

During its time, the Hanseatic League dominated maritime trade in the North and Baltic Seas. It established a network of trading posts in numerous towns and cities, notably the Kontors in London (known as the Steelyard), Bruges, Bergen, and Novgorod, which became extraterritorial entities that enjoyed considerable legal autonomy. Hanseatic merchants, commonly referred to as Hansards, operated private companies, were known for their access to commodities, and enjoyed privileges and protections abroad. The League's economic power enabled it to impose blockades and even wage war against kingdoms and principalities.

Even at its peak, the Hanseatic League remained a loosely aligned confederation of city-states. It lacked a permanent administrative body, a treasury, and a standing military force. In the 14th century, the Hanseatic League instituted an irregular negotiating diet that operated based on deliberation and consensus. By the mid-16th century, these weak connections left the Hanseatic League vulnerable, and it gradually unraveled as members merged into other realms or departed, ultimately disintegrating in 1669.

The League used a variety of vessel types for shipping across the seas and navigating rivers. The most emblematic type was the cog. Built in a diversity of forms, it was depicted on Hanseatic seals and coats of arms. By the end of the Middle Ages, the cog was replaced by types like the hulk, which later gave way to larger carvel ships.

Etymology
Hansa is the Old High German word for a band or troop. This word was applied to bands of merchants traveling between the Hanseatic cities. In Middle Low German, hanse came to mean a society of merchants or a trader guild. Claims that it originally meant An-See, or "on the sea", are incorrect.

History
Exploratory trading ventures, raids, and piracy occurred throughout the Baltic Sea. The sailors of Gotland sailed up rivers as far away as Novgorod, which was a major Rus trade centre. Scandinavians led the Baltic trade before the League, establishing major trading hubs at Birka, Haithabu, and Schleswig by the 9th century CE. The later Hanseatic ports between Mecklenburg and Königsberg (present-day Kaliningrad) originally formed part of the Scandinavian-led Baltic trade system. The Hanseatic League was never formally founded, so it lacks a date of foundation.
Historians traditionally traced its origins to the rebuilding of the north German town of Lübeck in 1159 by the powerful Henry the Lion, Duke of Saxony and Bavaria, after he had captured the area from Adolf II, Count of Schauenburg and Holstein. More recent scholarship has deemphasized Lübeck, viewing it as one of several regional trading centers, and presenting the League as the combination of a north German trading system oriented on the Baltic and a Rhinelandic trading system targeting England and Flanders. German cities speedily dominated trade in the Baltic during the 13th century, and Lübeck became a central node in the seaborne trade that linked the areas around the North and Baltic seas. Lübeck's hegemony peaked during the 15th century. Foundation and early development Well before the term Hanse appeared in a document in 1267, merchants in different cities began to form guilds, or hansas, with the intention of trading with overseas towns, especially in the economically less-developed eastern Baltic. This area could supply timber, wax, amber, resins, and furs, along with rye and wheat brought on barges from the hinterland to port markets. Merchant guilds formed in hometowns and destination ports as medieval corporations (universitates mercatorum), and despite competition increasingly cooperated to coalesce into the Hanseatic network of merchant guilds. The dominant language of trade was Middle Low German, which had a significant impact on the languages spoken in the area, particularly the larger Scandinavian languages, Estonian, and Latvian. Visby, on the island of Gotland, functioned as the leading center in the Baltic before the Hansa. Sailing east, Visby merchants established a trading post at Novgorod called Gutagard (also known as Gotenhof) in 1080. In 1120, Gotland gained autonomy from Sweden and admitted traders from its southern and western regions. Thereafter, under a treaty with the Visby Hansa, northern German merchants made regular stops at Gotland. In the first half of the 13th century, they established their own trading station or Kontor in Novgorod, known as the Peterhof, up the river Volkhov. Lübeck soon became a base for merchants from Saxony and Westphalia trading eastward and northward; for them, because of its shorter and easier access route and better legal protections, it was more attractive than Schleswig. It became a transshipment port for trade between the North Sea and the Baltic. Lübeck also granted extensive trade privileges to Russian and Scandinavian traders. It was the main supply port for the Northern Crusades, improving its standing with various Popes. Lübeck, which had been under Valdemar II of Denmark during the Danish dominion, gained imperial privileges to become a free imperial city in 1226, as Hamburg had in 1189. Also in this period Wismar, Rostock, Stralsund, and Danzig received city charters. Hansa societies worked to remove trade restrictions for their members. The earliest documentary mention (although without a name) of a specific German commercial federation dates between 1173 and 1175 (commonly misdated to 1157) in London. At that time, the merchants of the Hansa in Cologne convinced King Henry II of England to exempt them from all tolls in London and to grant protection to merchants and goods throughout England. 
German colonists in the 12th and 13th centuries settled in numerous cities on and near the east Baltic coast, such as Elbing (Elbląg), Thorn (Toruń), Reval (Tallinn), Riga, and Dorpat (Tartu), all of which joined the League, and some of which retain Hansa buildings and bear the style of their Hanseatic days. Most adopted Lübeck law, named after the league's most prominent town. The law provided that they appeal in all legal matters to Lübeck's city council. Others, like Danzig from 1295 onwards, had Magdeburg law or its derivative, Culm law. Later, the Livonian Confederation, formed in 1435, incorporated modern-day Estonia and parts of Latvia; all of its major towns were members of the League. Over the 13th century, older and wealthier long-distance traders increasingly chose to settle in their hometowns as trade leaders, transitioning from their previous roles as landowners. The growing number of settled merchants afforded long-distance traders greater influence over town policies. Coupled with an increased presence in the ministerial class, this elevated the status of merchants and enabled them to expand to and assert dominance over more cities. This decentralized arrangement was fostered by slow travel speeds: moving from Reval to Lübeck took between 4 weeks and, in winter, 4 months. In 1241, Lübeck, which had access to the Baltic and North seas' fishing grounds, formed an alliance—a precursor to the League—with the trade city of Hamburg, which controlled access to the salt-trade routes from Lüneburg. These cities gained control over most of the salt-fish trade, especially the Scania Market; Cologne joined them in the Diet of 1260. The towns raised their own armies, with each guild required to provide levies when needed. The Hanseatic cities aided one another, and commercial ships often served to carry soldiers and their arms. The network of alliances grew to include a flexible roster of 70 to 170 cities. In the West, cities of the Rhineland such as Cologne enjoyed trading privileges in Flanders and England. In 1266, King Henry III of England granted the Lübeck and Hamburg Hansa a charter for operations in England, initially causing competition with the Westphalians. But the Cologne Hansa and the Wendish Hansa joined in 1282 to form the Hanseatic colony in London, although they did not completely merge until the 15th century. Novgorod was blockaded in 1268 and 1277/1278. Nonetheless, Westphalian traders continued to dominate trade in London and also Ipswich and Colchester, while Baltic and Wendish traders concentrated between King's Lynn and Newcastle upon Tyne. Much of the drive for cooperation came from the fragmented nature of existing territorial governments, which did not provide security for trade. Over the next 50 years, the merchant Hansa solidified with formal agreements for co-operation covering the west and east trade routes. Cities from the east of the modern-day Low Countries, but also Utrecht, Holland, Zealand, Brabant, Namur, and modern Limburg, joined in participation over the thirteenth century. This network of Hanseatic trading guilds became known as the Kaufmannshanse in historiography. Commercial expansion The League succeeded in establishing additional Kontors in Bruges (Flanders), Bryggen in Bergen (Norway), and London (England) besides the Peterhof in Novgorod. These trading posts were institutionalised by the first half of the 14th century (for Bergen and Bruges) and, except for the Kontor of Bruges, became significant enclaves. 
The London Kontor, the Steelyard, stood west of London Bridge near Upper Thames Street, on the site later occupied by Cannon Street station. It grew into a walled community with its own warehouses, weigh house, church, offices, and homes. In addition to the major Kontors, individual ports with Hanseatic trading outposts or factories had a representative merchant and warehouse. Often they were not permanently manned. In Scania, Denmark, around 30 Hanseatic seasonal factories produced salted herring; these were called vitten and were granted legal autonomy to the extent that Burkhardt argues that they resembled a fifth kontor and would be seen as such if not for their early decline. In England, there were factories in Boston (the outpost was also called Stalhof), Bristol, Bishop's Lynn (later King's Lynn, which featured the sole remaining Hanseatic warehouse in England), Hull, Ipswich, Newcastle upon Tyne, Norwich, Scarborough, Yarmouth (now Great Yarmouth), and York, many of which were important for the Baltic trade and became centers of the textile industry in the late 14th century. Hansards and textile manufacturers coordinated to make fabrics meet local demand and fashion in the traders' hometowns. Outposts in Lisbon, Bordeaux, Bourgneuf, La Rochelle and Nantes offered the cheaper Bay salt. Ships that plied this trade sailed in the salt fleet. Trading posts operated in Flanders, Denmark-Norway, the Baltic interior, Upper Germany, Iceland, and Venice. Hanseatic trade was not exclusively maritime, or even over water. Most Hanseatic towns did not have immediate access to the sea and many were linked to partners by river trade or even land trade. These formed an integrated network, while many smaller Hanseatic towns had their main trading activity in subregional trade. Internal Hanseatic trade was the Hanse's quantitatively largest and most important business. Trade over rivers and land was not tied to specific Hanseatic privileges, but seaports such as Bremen, Hamburg and Riga dominated trade on their rivers. This was not possible for the Rhine, where trade retained an open character. Digging canals for trade was uncommon, although the Stecknitz Canal was built between Lübeck and Lauenburg from 1391 to 1398. Major trade goods Starting with trade in coarse woolen fabrics, the Hanseatic League increased both commerce and industry in northern Germany. As trade increased, finer woolen and linen fabrics, and even silks, were manufactured in northern Germany. The same refinement of products out of the cottage industry occurred in other fields, e.g. etching, wood carving, armor production, engraving of metals, and wood-turning. The league primarily traded beeswax, furs, timber, resin (or tar), flax, honey, wheat, and rye from the east to Flanders and England, with cloth, in particular broadcloth (and, increasingly, manufactured goods), going in the other direction. Metal ore (principally copper and iron) and herring came south from Sweden, while the Carpathians were another important source of copper and iron, often sold in Thorn. Lübeck had a vital role in the salt trade; salt was acquired in Lüneburg or shipped from France and Portugal and sold on Central European markets, taken to Scania to salt herring, or exported to Russia. Stockfish was traded from Bergen in exchange for grain; Hanseatic grain inflows allowed more permanent settlements further north in Norway. 
The league also traded beer, with beer from Hanseatic towns the most valued, and Wendish cities like Lübeck, Hamburg, Wismar, and Rostock developed export breweries for hopped beer. Economic power and defense The Hanseatic League, at first the merchant hansas and eventually its cities, relied on power to secure protection and to gain and preserve privileges. Bandits and pirates were persistent problems; during wars, these could be joined by privateers. Traders could be arrested abroad and their goods could be confiscated. The league sought to codify protection; internal treaties established mutual defense and external treaties codified privileges. Many locals, merchant and noble alike, envied the League's power and tried to diminish it. For example, in London, local merchants exerted continuing pressure for the revocation of privileges. Most foreign cities confined Hanseatic traders to specific trading areas and their trading posts. The refusal of the Hansa to offer reciprocal arrangements to their counterparts exacerbated the tension. League merchants used their economic power to pressure cities and rulers. They called embargoes, redirected trade away from towns, and boycotted entire countries. Blockades were erected against Novgorod in 1268 and 1277/1278. Bruges was pressured by temporarily moving the Hanseatic emporium to Aardenburg from 1280 to 1282, from 1307 or 1308 to 1310 and in 1350, to Dordt in 1358 and 1388, and to Antwerp in 1436. Boycotts against Norway in 1284 and Flanders in 1358 nearly caused famines. They sometimes resorted to military action. Several Hanseatic cities maintained their own warships and, in times of need, repurposed merchant ships. Military action against political powers often involved an ad hoc coalition of stakeholders, called an alliance (tohopesate). As an essential part of protecting their investments, League members trained pilots and erected lighthouses, including Kõpu Lighthouse. In 1202, Lübeck erected at Falsterbo what may be northern Europe's first proper lighthouse. By 1600 at least 15 lighthouses had been erected along the German and Scandinavian coasts, making it the best-lighted coast in the world, largely thanks to the Hansa. Zenith The weakening of imperial power and imperial protection under the late Hohenstaufen dynasty forced the League to institutionalize a cooperating network of cities with a fluid structure, called the Städtehanse, but it never became a formal organization and the Kaufmannshanse continued to exist. This development was delayed by the conquest of Wendish cities by the Danish king Eric VI Menved or by their feudal overlords between 1306 and 1319 and by the restriction of their autonomy. Assemblies of the Hanse towns met irregularly in Lübeck for a Tagfahrt (Hanseatic Diet) – starting either around 1300, or possibly 1356. Many towns chose not to attend nor to send representatives, and decisions were not binding on individual cities if their delegates were not included in the recesses; representatives would sometimes leave the Diet prematurely to give their towns an excuse not to ratify decisions. Only a few Hanseatic cities were free imperial cities or enjoyed comparable autonomy and liberties, but many temporarily escaped domination by local nobility. Between 1361 and 1370, League members fought against Denmark in the Danish-Hanseatic War. 
Though initially unsuccessful with a Wendish offensive, towns from Prussia and the Netherlands, eventually joined by Wendish towns, allied in the Confederation of Cologne in 1368, sacked Copenhagen and Helsingborg, and forced Valdemar IV, King of Denmark, and his son-in-law Haakon VI, King of Norway, to grant tax exemptions and influence over Øresund fortresses for 15 years in the peace treaty of Stralsund in 1370. It extended privileges in Scania to the League, including Holland and Zeeland. The treaty marked the height of Hanseatic influence; for this period the League was called a "Northern European great power". The Confederation lasted until 1385, and the Øresund fortresses were returned to Denmark that year. After Valdemar's heir Olav died, a succession dispute erupted over Denmark and Norway between Albert of Mecklenburg, King of Sweden, and Margaret I, Queen of Denmark. This was further complicated when Swedish nobles rebelled against Albert and invited Margaret. Albert was taken prisoner in 1389, but in 1392 hired privateers, the so-called Victual Brothers, who took Bornholm and Visby in his name. They and their descendants threatened maritime trade between 1392 and the 1430s. Under the 1395 release agreement for Albert, Stockholm was ruled from 1395 to 1398 by a consortium of 7 Hanseatic cities, and enjoyed full Hanseatic trading privileges. It went to Margaret in 1398. The Victual Brothers controlled Gotland in 1398; it was conquered by the Teutonic Order with support from the Prussian towns, and its privileges were restored. The grandmaster of the Teutonic Order was often seen as the head of the Hanse (caput Hansae), both abroad and by some League members. Rise of rival powers Over the 15th century, the League became further institutionalized. This was in part a response to challenges in governance and competition with rivals, but also reflected changes in trade. A slow shift occurred from loose participation to formal recognition and revocation of membership. Another general trend was Hanseatic cities' increased legislation of their kontors abroad. Only the Bergen kontor grew more independent in this period. In Novgorod, after extended conflict since the 1380s, the League regained its trade privileges in 1392, agreeing to Russian trade privileges for Livonia and Gotland. In 1424, all German traders of the Peterhof kontor in Novgorod were imprisoned and 36 of them died. Although rare, arrests and seizures in Novgorod were particularly violent. In response, and due to the ongoing war between Novgorod and the Livonian Order, the League blockaded Novgorod and abandoned the Peterhof from 1443 to 1448. After extended conflicts with the League from the 1370s, English traders gained trade privileges in the Prussian region via the treaties of Marienburg (the first in 1388, the last in 1409). Their influence increased, while the importance of Hanseatic trade in England decreased over the 15th century. Over the 15th century, tensions between the Prussian region and the "Wendish" cities (Lübeck and its eastern neighbours) increased. Lübeck was dependent on its role as center of the Hansa; Prussia's main interest, on the other hand, was the export of bulk products such as grain and timber to England, the Low Countries and later on Spain and Italy. Frederick II, Elector of Brandenburg, tried to assert authority over the Hanseatic towns Berlin and Cölln in 1442 and blocked all Brandenburg towns from participating in Hanseatic diets. For some Brandenburg towns, this ended their Hanseatic involvement. 
In 1488, John Cicero, Elector of Brandenburg, did the same to Stendal and Salzwedel in the Altmark. Until 1394, Holland and Zeeland actively participated in the Hansa, but in 1395, their feudal obligations to Albert I, Duke of Bavaria, prevented further cooperation. Consequently, their Hanseatic ties weakened, and their economic focus shifted. Between 1417 and 1432, this economic reorientation became even more pronounced as Holland and Zeeland gradually became part of the Burgundian State. The city of Lübeck faced financial troubles in 1403, leading dissenting craftsmen to establish a supervising committee in 1405. This triggered a governmental crisis in 1408 when the committee rebelled and established a new town council. Similar revolts broke out in Wismar and Rostock, with new town councils established in 1410. The crisis was ended in 1418 by a compromise. Eric of Pomerania succeeded Margaret in 1412, sought to expand into Schleswig and Holstein, and levied tolls at the Øresund. Hanseatic cities were divided initially; Lübeck tried to appease Eric while Hamburg supported the Schauenburg counts against him. This led to the Danish-Hanseatic War (1426-1435) and the Bombardment of Copenhagen (1428). The Treaty of Vordingborg renewed the League's commercial privileges in 1435, but the Øresund tolls continued. Eric of Pomerania was subsequently deposed and in 1438 Lübeck took control of the Øresund toll, which caused tensions with Holland and Zeeland. The Sound tolls, and a later attempt by Lübeck to exclude English and Dutch merchants from Scania, harmed the Scanian herring trade when the excluded regions began to develop their own herring industries. In the Dutch–Hanseatic War (1438–1441), a privateer war mostly waged by Wendish towns, the merchants of Amsterdam sought and eventually won free access to the Baltic. Although the blockade of the grain trade hurt Holland and Zeeland more than Hanseatic cities, it was against Prussian interest to maintain it. In 1454, the year of the marriage of Elisabeth of Austria to King-Grand Duke Casimir IV Jagiellon of Poland-Lithuania, the towns of the Prussian Confederation rose up against the dominance of the Teutonic Order and asked Casimir IV for help. By the Second Peace of Thorn, Gdańsk (Danzig), Thorn and Elbing became part of the Kingdom of Poland, in the region referred to from 1466 to 1569 as Royal Prussia. Poland in turn was heavily supported by the Holy Roman Empire through family connections and by military assistance under the Habsburgs. Kraków, then the Polish capital, had a loose association with the Hansa. The lack of customs borders on the River Vistula after 1466 helped Polish grain exports, transported down the Vistula, to increase steadily from the late 15th century into the 17th century. The Hansa-dominated maritime grain trade made Poland one of the main areas of its activity, helping Danzig to become the Hansa's largest city. Polish kings soon began to reduce the towns' political freedoms. Beginning in the mid-15th century, the Griffin dukes of Pomerania were in constant conflict over control of the Pomeranian Hanseatic towns. While not successful at first, Bogislav X eventually subjugated Stettin and Köslin, curtailing the region's economy and independence. A major Hansa economic advantage was its control of the shipbuilding market, mainly in Lübeck and Danzig. The League sold ships throughout Europe. The economic crises of the late 15th century did not spare the Hansa. 
Nevertheless, its eventual rivals emerged in the form of territorial states. New vehicles of credit were imported from Italy. When Flanders and Holland became part of the Duchy of Burgundy, Burgundian Dutch and Prussian cities increasingly excluded Lübeck from their grain trade in the 15th and 16th century. Burgundian Dutch demand for Prussian and Livonian grain grew in the late 15th century. These trade interests differed from Wendish interests, threatening political unity, but also showed a trade where the Hanseatic system was impractical. Hollandish freight costs were much lower than the Hansa's, and the Hansa were excluded as middlemen. After naval wars between Burgundy and the Hanseatic fleets, Amsterdam gained the position of leading port for Polish and Baltic grain from the late 15th century onwards. Nuremberg in Franconia developed an overland route to sell formerly Hansa-monopolised products from Frankfurt via Nuremberg and Leipzig to Poland and Russia, trading Flemish cloth and French wine in exchange for grain and furs from the east. The Hansa profited from the Nuremberg trade by allowing Nurembergers to settle in Hanseatic towns, which the Franconians exploited by taking over trade with Sweden as well. The Nuremberger merchant Albrecht Moldenhauer was influential in developing the trade with Sweden and Norway, and his sons Wolf and Burghard Moldenhauer established themselves in Bergen and Stockholm, becoming leaders of the local Hanseatic activities. King Edward IV of England reconfirmed the league's privileges in the Treaty of Utrecht despite the latent hostility, in part thanks to the significant financial contribution the League made to the Yorkist side during the Wars of the Roses of 1455–1487. Tsar Ivan III of Russia closed the Hanseatic Kontor at Novgorod in 1494 and deported its merchants to Moscow, in an attempt to reduce Hanseatic influence on Russian trade. At the time, only 49 traders were at the Peterhof. The fur trade was redirected to Leipzig, cutting out the Hansards, while the Hanseatic trade with Russia moved to Riga, Reval, and Pleskau. When the Peterhof reopened in 1514, Novgorod was no longer a trade hub. In the same period, the burghers of Bergen tried to develop an independent intermediate trade with the northern population, against the Hansards' obstruction. The League's mere existence and its privileges and monopolies created economic and social tensions that often spilled over into rivalries between League members. End of the Hansa The development of transatlantic trade after the discovery of the Americas caused the remaining Kontore to decline, especially that in Bruges, as trade came to center on other ports. It also changed business practice to short-term contracts and rendered obsolete the Hanseatic model of privileged guaranteed trade. The trends of local feudal lords asserting control over towns and suppressing their autonomy, and of foreign rulers repressing Hanseatic traders, continued in the next century. In the Swedish War of Liberation (1521–1523), the Hanseatic League prevailed in an economic conflict over the trade, mining, and metal industry in Bergslagen (the main mining area of Sweden in the 16th century) with Jakob Fugger, an industrialist in the mining and metal industry, and his unfriendly business takeover attempt. Fugger allied with Pope Leo X, who was financially dependent on him, with Maximilian I, Holy Roman Emperor, and with Christian II of Denmark/Norway. Both sides made costly investments in support of mercenaries to win the war. 
After the war, Gustav Vasa's Sweden and Frederick I's Denmark pursued independent policies and did not support Lübeck's effort against Dutch trade. However, Lübeck under Jürgen Wullenwever overextended itself in the Count's Feud in Scania and Denmark and lost influence in 1536 after Christian III's victory. Lübeck's attempts at forcing competitors out of the Sound eventually alienated even Gustav Vasa. Its influence in the Nordic countries began to decline. The Hanseatic towns of Guelders were obstructed in the 1530s by Charles II, Duke of Guelders. Charles, a strict Catholic, objected to the Lutheranism, in his words the "Lutheran heresy", of Lübeck and other north German cities. This frustrated but did not end the towns' Hanseatic trade, and a small resurgence came later. Later in the 16th century, Denmark-Norway took control of much of the Baltic. Sweden had regained control over its own trade, the Kontor in Novgorod had closed, and the Kontor in Bruges had become effectively moribund because the Zwin inlet was closing up. Finally, the growing political authority of the German princes constrained the independence of Hanse towns. The league attempted to deal with some of these issues: it created the post of syndic in 1556 and elected Heinrich Sudermann to the position; he worked to protect and extend the diplomatic agreements of the member towns. In 1557 and 1579, revised agreements spelled out the duties of towns and some progress was made. The Bruges Kontor moved to Antwerp in 1520 and the Hansa attempted to pioneer new routes. However, the league proved unable to prevent the growing mercantile competition. In 1567, a Hanseatic League agreement reconfirmed previous obligations and rights of league members, such as common protection and defense against enemies. The Prussian Quartier cities of Thorn, Elbing, Königsberg, Riga and Dorpat also signed. When pressed by the King of Poland–Lithuania, Danzig remained neutral and would not allow ships running for Poland into its territory. They had to anchor somewhere else, such as at Pautzke (Puck). The Antwerp Kontor, moribund after the fall of the city, closed in 1593. In 1597 Queen Elizabeth I of England expelled the League from London, and the Steelyard was closed and sequestered in 1598. The Kontor returned in 1606 under her successor, James I, but it could not recover. The Bergen Kontor continued until 1754; of all the Kontore, only its buildings, the Bryggen, survive. Not all states tried to suppress their cities' former Hanseatic links; the Dutch Republic encouraged its eastern former members to maintain ties with the remaining Hanseatic League. The States-General relied on those cities in diplomacy at the time of the Kalmar War. The Thirty Years' War was destructive for the Hanseatic League, and its members suffered heavily from the imperial forces, the Danes and the Swedes alike. In the beginning, Saxon and Wendish cities faced attacks because of the desire of Christian IV of Denmark to control the Elbe and Weser. Pomerania had a major population decline. Sweden took Bremen-Verden (excluding the city of Bremen), Swedish Pomerania (including Stralsund, Greifswald, Rostock) and Swedish Wismar, preventing their cities from participating in the League, and controlled the Oder, Weser, and Elbe, allowing it to levy tolls on their traffic. The league became increasingly irrelevant despite its inclusion in the Peace of Westphalia. In 1666, the Steelyard burned in the Great Fire of London. 
The Kontor-manager sent a letter to Lübeck appealing for immediate financial assistance for a reconstruction. Hamburg, Bremen, and Lübeck called for a Hanseatic Day in 1669. Only a few cities participated, and those who came were reluctant to contribute financially to the reconstruction. It was the last formal meeting, unbeknownst to any of the parties. This date is often taken in retrospect as the effective end date of the Hansa, but the League never formally disbanded. It silently disintegrated. Aftermath The Hanseatic League, however, lived on in the public mind. Leopold I even requested Lübeck to call a Tagfahrt to rally support for him against the Turks. Lübeck, Hamburg, and Bremen continued to attempt common diplomacy, although interests had diverged by the Peace of Ryswick. Nonetheless, the Hanseatic Republics were able to jointly perform some diplomacy, such as a joint delegation to the United States in 1827, led by Vincent Rumpff; later the United States established a consulate to the Hanseatic and Free Cities from 1857 to 1862. Britain maintained diplomats to the Hanseatic Cities until the unification of Germany in 1871. The three cities also had a common "Hanseatic" representation in Berlin until 1920. Three kontors remained as Hanseatic property, often unused, after the League's demise, as the Peterhof had closed in the 16th century. Bryggen was sold to Norwegian owners in 1754. The Steelyard in London and the Oostershuis in Antwerp were long impossible to sell. The Steelyard was finally sold in 1852 and the Oostershuis, closed in 1593, was sold in 1862. Hamburg, Bremen, and Lübeck remained as the only members until the League's formal end in 1862, on the eve of the 1867 founding of the North German Confederation and the 1871 founding of the German Empire under Kaiser Wilhelm I. Despite the League's collapse, the three cities cherished the link to the Hanseatic League. Until German reunification, these three cities were the only ones that retained the words "Hanseatic City" in their official German names. Hamburg and Bremen continue to style themselves officially as "free Hanseatic cities", with Lübeck named "Hanseatic City". For Lübeck in particular, this anachronistic tie to a glorious past remained important in the 20th century. In 1937, the Nazi Party revoked Lübeck's imperial immediacy through the Greater Hamburg Act. Since 1990, 24 other German cities have adopted this title. Organization The Hanseatic League was a complex, loose-jointed constellation of protagonists pursuing their interests, which coincided in a shared program of economic domination in the Baltic region, and by no means a monolithic organization or a 'state within a state'. It gradually grew from a network of merchant guilds into a more formal association of cities, but never formed into a legal person. League members were Low German-speaking, except for Dinant. Not all towns with Low German merchant communities were members (e.g., Emden, Memel (today Klaipėda), Viborg (today Vyborg), and Narva). However, Hansards also came from settlements without German town law—the premise for league membership was birth to German parents, subjection to German law, and commercial education. The league served to advance and defend its members' common interests: commercial ambitions such as enhancement of trade, and political ambitions such as ensuring maximum independence from the territorial rulers. League decisions and actions were taken via a consensus-based procedure. 
If an issue arose, members were invited to participate in a central meeting, the Tagfahrt (Hanseatic Diet, "meeting ride", sometimes also referred to as Hansetag), which may have begun around 1300 and was formalized from 1358 (or possibly 1356). The member communities then chose envoys (Ratssendeboten) to represent their local consensus on the issue at the Diet. Not every community sent an envoy; delegates were often entitled to represent multiple communities. Consensus-building on local and Tagfahrt levels followed the Low Saxon tradition of Einung, where consensus was defined as the absence of protest: after a discussion, the proposals that gained sufficient support were dictated to the scribe and passed as binding Rezess if the attendees did not object; those favoring alternative proposals unlikely to get sufficient support remained silent during this procedure. If consensus could not be established on a certain issue, it was found instead in the appointment of league members empowered to work out a compromise. The League was characterised by legal pluralism, and the diets could not issue laws. But the cities cooperated to achieve limited trade regulation, such as measures against fraud, or worked together on a regional level. Attempts to harmonize maritime law yielded a series of ordinances in the 15th and 16th centuries. The most extensive maritime ordinance was the Ship Ordinance and Sea Law of 1614, but it may not have been enforced. Kontors A Hanseatic Kontor was a settlement of Hansards organized in the mid-14th century as a private corporation that had its own treasury, court, legislation, and seal. They operated like an early stock exchange. Kontors were first established to provide security, but also served to secure privileges and engage in diplomacy. The quality of goods was also examined at Kontors, increasing trade efficiency, and they served as bases to develop connections with local rulers and as sources of economic and political information. Most Kontore were also physical locations containing buildings that were integrated into and segregated from city life to different degrees. The kontor of Bruges was an exception in this regard; it acquired buildings only from the 15th century. Like the guilds, the Kontore were usually led by Ältermänner ("eldermen", or English aldermen). The Stalhof, a special case, had a Hanseatic and an English alderman. In Novgorod the aldermen were replaced by a hofknecht in the 15th century. The Kontore's statutes were read aloud to the merchants present once a year. In 1347 the Kontor of Bruges modified its statute to ensure an equal representation of League members. To that end, member communities from different regions were pooled into three circles (Drittel, "third [part]"): the Wendish and Saxon Drittel, the Westphalian and Prussian Drittel, and the Gotlandian, Livonian and Swedish Drittel. Merchants from each Drittel chose two aldermen and six members of the Eighteen Men Council (Achtzehnmännerrat) to administer the Kontor for a set period. In 1356, during a Hanseatic meeting in preparation for the first (or one of the first) Tagfahrt, the League confirmed this statute. All trader settlements including the Kontors were subordinated to the Diet's decisions around this time, and their envoys received the right to attend and speak at Diets, albeit without voting power. Drittel The league gradually divided itself into three constituent parts called Drittel (German for "thirds"). 
The Hansetag was the only central institution of the Hanseatic League. However, with the division into Drittel, the members of the respective subdivisions frequently held Dritteltage ("Drittel meetings") to work out common positions which could then be presented at a Hansetag. On a more local level, league members also met, and while such regional meetings never crystalized into a Hanseatic institution, they gradually gained importance in the process of preparing and implementing the Diet's decisions. Quarters From 1554, the division into Drittel was modified to reduce the circles' heterogeneity, to enhance the collaboration of the members on a regional level and thus to make the League's decision-making process more efficient. The number of circles rose to four, and they were accordingly called Quartiere (quarters). This division was, however, not adopted by the Kontore, which, for their purposes (like Ältermänner elections), grouped League members in different ways (e.g., the division adopted by the Stalhof in London in 1554 grouped members into Dritteln, whereby Lübeck merchants represented the Wendish, Pomeranian, Saxon and several Westphalian towns, Cologne merchants represented the Cleves, Mark, Berg and Dutch towns, while Danzig merchants represented the Prussian and Livonian towns). Hanseatic ships Various types of ships were used. Cog The most used type, and the most emblematic, was the cog. The cog was a multi-purpose clinker-built ship with a carvel bottom, a stern rudder, and a square-rigged mast. Most cogs were privately owned and were also used as warships. Cogs were built in various sizes and specifications and were used on both the seas and rivers. They could be outfitted with castles starting from the thirteenth century. The cog was depicted on many seals and several coats of arms of Hanseatic cities, like Stralsund, Elbląg and Wismar. Several shipwrecks have been found. The most notable wreck is the Bremen cog. It could carry a cargo of about 125 tons. Hulk The hulk began to replace the cog by 1400, and cogs lost their dominance to it around 1450. The hulk was a bulkier ship that could carry larger cargo; Elbl estimates they could carry up to 500 tons by the 15th century. It could be clinker- or carvel-built. No archeological evidence of a hulk has been found. Carvel In 1464, Danzig acquired a French carvel ship through a legal dispute and renamed it the Peter von Danzig. At 40 m long and with three masts, it was one of the largest ships of its time. Danzig adopted carvel construction around 1470. Other cities shifted to carvels starting from this time. An example is the Jesus of Lübeck, later sold to England for use as a warship and slave ship. The galleon-like carvel warship Adler von Lübeck was constructed by Lübeck for military use against Sweden during the Northern Seven Years' War (1563–70). Launched in 1566, it was never put to military use after the Treaty of Stettin. It was the biggest ship of its day at 78 m long and had four masts, including a bonaventure mizzen. It served as a merchant ship until it was damaged in 1581 on a return voyage from Lisbon and broken up in 1588. 
Hanseatic cities Hansa Proper In the table below, the names listed in the column labeled Quarter have been summarised as follows: "Wendish": Wendish and Pomeranian (or just Wendish) quarter "Saxon": Saxon, Thuringian and Brandenburg (or just Saxon) quarter "Baltic": Prussian, Livonian and Swedish (or East Baltic) quarter "Westphalian": Rhine-Westphalian and Netherlands (including Flanders) (or Rhineland) quarter The remaining column headings are as follows: City – the city's name, with any variants Territory – the jurisdiction to which the city was subject at the time of the League Now – the modern nation-state in which the city is located From and Until – the dates at which the city joined and/or left the league Notes – additional details about the city Ref. – one or more references to works about the city Kontore The kontore were the major foreign trading posts of the League, not cities that were Hanseatic members, and are listed in the hidden table below. Vitten The vitten were significant foreign trading posts of the League in Scania, not cities that were Hanseatic members; some argue that they were similar in status to the kontors. They are listed in the hidden table below. Ports with Hansa trading posts Åhus Berwick-upon-Tweed Bishop's Lynn (King's Lynn) Boston Bordeaux Bourgneuf Bristol Copenhagen Damme Frankfurt Ghent Hull (Kingston upon Hull) Ipswich Kalundborg Kaunas Landskrona La Rochelle Leith Lisbon Nantes Narva Næstved Newcastle Norwich Nuremberg Oslo Pleskau (Pskov) Polotsk Rønne Scarborough Yarmouth (Great Yarmouth) Sluis Smolensk Tønsberg Venice Vilnius Vitebsk York Ystad Other cities with a Hansa community Aberdeen Åbo (Turku) Avaldsnes Brae Grindavík Grundarfjörður Gunnister Hafnarfjörður Harlingen Haroldswick Hildesheim Hindeloopen (Hylpen) Kalmar Krambatangi Kumbaravogur Leghorn Lunna Wick Messina Naples Nordhausen Nyborg Nyköping Scalloway Stockholm Tórshavn Trondheim Tver Walk (Valka) Weißenstein (Paide) Wesenberg (Rakvere) Legacy Historiography Academic historiography of the Hanseatic League is considered to begin with Georg Sartorius, who started writing his first work in 1795 and founded the liberal historiographical tradition about the League. The German conservative nationalist historiographical tradition began with F.W. Barthold's Geschichte der Deutschen Hansa of 1853/1854. The conservative view was associated with Little German ideology and came to predominate from the 1850s until the end of the First World War. Hanseatic history was used to justify a stronger German navy, and conservative historians drew a link between the League and the rise of Prussia as the leading German state. This climate deeply influenced the historiography of the Baltic trade. Issues of social, cultural and economic history became more important in German research after the First World War. But the leading historian Fritz Rörig also promoted a National Socialist perspective. After the Second World War the conservative nationalist view was discarded, allowing exchanges between German, Swedish and Norwegian historians on the Hanseatic League's role in Sweden and Norway. Views on the League were strongly negative in the Scandinavian countries, especially Denmark, because of associations with German privilege and supremacy. Philippe Dollinger's book The German Hansa became the standard work in the 1960s. At that time, the dominant perspective became Ahasver von Brandt's view of a loosely aligned trading network. 
Marxist historians in the GDR were split on whether the League was a "late feudal" or "proto-capitalist" phenomenon. Two museums in Europe are dedicated to the history of the Hanseatic League: the European Hansemuseum in Lübeck and the Hanseatic Museum and Schøtstuene in Bergen. Popular views From the 19th century, Hanseatic history was often used to promote a national cause in Germany. German liberals built a fictional literature around Jürgen Wullenwever, expressing fierce anti-Danish sentiment. Hanseatic subjects were used to propagate nation building, colonialism, fleet building and warfare, and the League was presented as a bringer of culture and pioneer of German expansion. The preoccupation with a strong navy motivated German painters in the 19th century to paint supposedly Hanseatic ships. They used the traditions of maritime paintings and, not wanting Hanseatic ships to look unimpressive, ignored historical evidence to fictionalise cogs into tall two- or three-masted ships. The depictions were widely reproduced, such as on plates of Norddeutscher Lloyd. This misleading artistic tradition influenced public perception throughout the 20th century. In the late 19th century, a social-critical view developed, in which opponents of the League like the likedeelers were presented as heroes and liberators from economic oppression. This was popular from the end of the First World War into the 1930s, and survives in the Störtebeker Festival on Rügen, founded as the Rügenfestspiele by the GDR. From the late 1970s, the Europeanness and cooperation of the Hanseatic League came to prominence in popular culture. In economic circles the League is associated with innovation, entrepreneurship and an international outlook, and in this way it is often used for tourism, city branding and commercial marketing. The League's unique governance structure has been identified as a precursor to the supranational model of the European Union. Modern transnational organisations named after the Hanseatic League Union of Cities THE HANSA In 1979, Zwolle invited over 40 cities from West Germany, the Netherlands, Sweden and Norway with historic links to the Hanseatic League to sign the recesses of 1669 at the 750th anniversary of Zwolle's city rights in August of the next year. In 1980, those cities established a "new Hanse" in Zwolle, named Städtebund Die Hanse (Union of Cities THE HANSA) in German, and reinstituted the Hanseatic diets. This league is open to all former Hanseatic League members and cities that share a Hanseatic heritage. In 2012, the city league had 187 members. This included twelve Russian cities, most notably Novgorod, and 21 Polish cities. No Danish cities have joined the Union, although several qualify. The "new Hanse" fosters business links, tourism and cultural exchange. The headquarters of the New Hansa is in Lübeck, Germany. Dutch cities including Groningen, Deventer, Kampen, Zutphen and Zwolle, and a number of German cities including Bremen, Buxtehude, Demmin, Greifswald, Hamburg, Lübeck, Lüneburg, Rostock, Salzwedel, Stade, Stendal, Stralsund, Uelzen and Wismar now call themselves Hanse cities (the German cities' car license plates are prefixed H, e.g. –HB– for "Hansestadt Bremen"). Each year one of the member cities of the New Hansa hosts the Hanseatic Days of New Time international festival. In 2006, King's Lynn became the first English member of the union of cities. It was joined by Hull in 2012 and Boston in 2016. 
New Hanseatic League The New Hanseatic League was established in February 2018 by finance ministers from Denmark, Estonia, Finland, Ireland, Latvia, Lithuania, the Netherlands and Sweden through the signing of a foundational document which set out the countries' "shared views and values in the discussion on the architecture of the EMU". Others The legacy of the Hansa is reflected in several names: the German airline Lufthansa (lit. "Air Hansa"); F.C. Hansa Rostock, nicknamed the Kogge or Hansa-Kogge; Hansa-Park, one of the biggest theme parks in Germany; Hanze University of Applied Sciences in Groningen, Netherlands; Hanze oil production platform, Netherlands; the Hansa Brewery in Bergen and the Hanse Sail in Rostock; Hanseatic Trade Center in Hamburg; DDG Hansa, which was a major German shipping company from 1881 until its bankruptcy and takeover by Hapag-Lloyd in 1980; the district of New Hanza City in Riga, Latvia; and Hansabank in Estonia, which was rebranded as Swedbank. In popular culture In the Patrician series of trading simulation video games, the player assumes the role of a merchant in any of several cities of the Hanseatic League. In the Saga of Seven Suns series of space opera novels by American writer Kevin J. Anderson, the human race has colonized multiple planets in the Spiral Arm, most of which are governed by the powerful Terran Hanseatic League (Hansa). Hansa Teutonica is a German board game designed by Andreas Steding and published by Argentum Verlag in 2009. In the Metro franchise of post-apocalyptic novels and video games, a trading alliance of stations called The Commonwealth of the Stations of the Ring Line is also known as the Hanseatic League, usually shortened to Hansa or Hanza. See also Baltic maritime trade (c. 1400–1800) Bay Fleet Brick Gothic Company of Merchant Adventurers of London Dithmarschen Hanseatic Cross Hanseatic Days of New Time Hanseatic flags Hanseatic Museum and Schøtstuene Hildebrand Veckinchusen History of Bremen (City) Maritime republics Peasants' Republic Schiffskinder Thalassocracy Explanatory footnotes References Sources Further reading Halliday, Stephen. "The First Common Market?" History Today 59 (2009): 31–37. Historiography Cowan, Alexander. "Hanseatic League: Oxford Bibliographies Online Research Guide" (Oxford University Press, 2010) online Harrison, Gordon. "The Hanseatic League in Historical Interpretation." The Historian 33 (1971): 385–97. Szepesi, Istvan. "Reflecting the Nation: The Historiography of Hanseatic Institutions." Waterloo Historical Review 7 (2015). 
online External links 29th International Hansa Days in Novgorod 30th International Hansa Days 2010 in Parnu-Estonia Chronology of the Hanseatic League Hanseatic Cities in the Netherlands Hanseatic League Historical Re-enactors Hanseatic Towns Network Hanseatic League related sources in the German Wikisource Colchester: a Hanseatic port – Gresham The Lost Port of Sutton: Maritime trade Northern Europe Former monopolies Trade monopolies Early modern history of the Holy Roman Empire Former confederations Early modern history of Germany Early modern history of the Netherlands Economy of the Holy Roman Empire Economic history of the Netherlands History of international trade Hanseatic League International trade organizations Baltic Sea Brandenburg-Prussia Gotland Guilds Northern Renaissance History of Prussia History of Scandinavia History of the Øresund Region History of the Baltic states Medieval history of Poland 1862 disestablishments in Europe 14th century in Europe 15th century in Europe 16th century in Europe Medieval history of Germany 12th-century establishments in Europe
Path dependence
Path dependence is a concept in the social sciences, referring to processes where past events or decisions constrain later events or decisions. It can be used to refer to outcomes at a single point in time or to long-run equilibria of a process. Path dependence has been used to describe institutions, technical standards, patterns of economic or social development, organizational behavior, and more. In common usage, the phrase can imply two types of claims. The first is the broad concept that "history matters," often articulated to challenge explanations that pay insufficient attention to historical factors. This claim can be formulated simply as "the future development of an economic system is affected by the path it has traced out in the past" or "particular events in the past can have crucial effects in the future." The second is a more specific claim about how past events or decisions affect future events or decisions in significant or disproportionate ways, through increasing returns, positive feedback effects, or other mechanisms. Commercial examples Videocassette recording systems The videotape format war is a key example of path dependence. Three mechanisms independent of product quality could explain how VHS achieved dominance over Betamax from a negligible early adoption lead: A network effect: videocassette rental stores observed more VHS rentals and stocked up on VHS tapes, leading renters to buy VHS players and rent more VHS tapes, until there was complete vendor lock-in. A VCR manufacturer bandwagon effect of switching to VHS production because they expected it to win the standards battle. Sony, the original developer of Betamax, did not let pornography companies license its technology for mass production, which meant that nearly all pornographic motion pictures released on video used the VHS format. An alternative analysis is that VHS was better-adapted to market demands (e.g. having a longer recording time). In this interpretation, path dependence had little to do with VHS's success, which would have occurred even if Betamax had established an early lead. QWERTY keyboard The QWERTY keyboard is a prominent example of path dependence because of its widespread emergence and persistence. QWERTY has persisted over time despite the development of arguably more efficient keyboard arrangements, such as the Dvorak layout. However, there is still debate about whether this is a true example of path dependence. Railway track gauges The standard gauge of railway tracks is another example of path dependence, illustrating how a seemingly insignificant event or circumstance can shape the choice of technology over the long run even when contemporary know-how shows such a choice to be inefficient. More than half the world's railway gauges are 1,435 mm, known as standard gauge, despite the consensus among engineers being that wider gauges offer increased performance and speed. The path to the adoption of the standard gauge began in the late 1820s when George Stephenson, a British engineer, began work on the Liverpool and Manchester Railway. His experience with primitive coal tramways resulted in this gauge width being copied by the Liverpool and Manchester Railway, then the rest of Great Britain, and finally by railroads in Europe and North America. 
There are tradeoffs involved in the choice of rail gauge between the cost of constructing a line (which rises with wider gauges) and various performance metrics, including maximum speed and a low center of gravity (desirable, especially in double-stack rail transport). While the attempts with the Brunel gauge, a significantly broader gauge, failed, the widespread use of the Iberian gauge, Russian gauge and Indian gauge, all of which are broader than Stephenson's choice, shows that there is nothing inherent to the 1435 mm gauge that led to its global success. Economics Path dependence theory was originally developed by economists to explain technology adoption processes and industry evolution. The theoretical ideas have had a strong influence on evolutionary economics. A common expression of the concept is the claim that predictable amplifications of small differences are a disproportionate cause of later circumstances and, in the "strong" form, that this historical hangover is inefficient. There are many models and empirical cases where economic processes do not progress steadily toward some pre-determined and unique equilibrium, but rather the nature of any equilibrium achieved depends partly on the process of getting there. Therefore, the outcome of a path-dependent process will often not converge towards a unique equilibrium, but will instead reach one of several equilibria (sometimes known as absorbing states). This dynamic vision of economic evolution is very different from the tradition of neo-classical economics, which in its simplest form assumed that only a single outcome could possibly be reached, regardless of initial conditions or transitory events. With path dependence, both the starting point and 'accidental' events (noise) can have significant effects on the ultimate outcome. In each of the following examples it is possible to identify some random events that disrupted the ongoing course, with irreversible consequences. Economic development In economic development, it is said (initially by Paul David in 1985) that a standard that is first-to-market can become entrenched (like the QWERTY layout in typewriters still used in computer keyboards). He called this "path dependence", and said that inferior standards can persist simply because of the legacy they have built up. That QWERTY vs. Dvorak is an example of this phenomenon has been re-asserted, questioned, and continues to be argued. Economic debate continues on the significance of path dependence in determining how standards form. Economists from Alfred Marshall to Paul Krugman have noted that similar businesses tend to congregate geographically ("agglomerate"); opening near similar companies attracts workers with skills in that business, which draws in more businesses seeking experienced employees. There may have been no reason to prefer one place to another before the industry developed, but as it concentrates geographically, participants elsewhere are at a disadvantage, and will tend to move into the hub, further increasing its relative efficiency. This network effect follows a statistical power law in the idealized case, though negative feedback can occur (through rising local costs). Buyers often cluster around sellers, and related businesses frequently form business clusters, so a concentration of producers (initially formed by accident and agglomeration) can trigger the emergence of many dependent businesses in the same region. 
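The lock-in dynamic behind the VHS and agglomeration examples can be made concrete with a small simulation in the spirit of increasing-returns adoption models. This is only an illustrative sketch, not a model taken from the literature cited here: the function name, the uniform intrinsic preferences, and the network-benefit coefficient are all assumptions chosen for clarity. Each arriving adopter picks technology A or B according to a random intrinsic preference plus a benefit proportional to that technology's current installed base; runs with identical parameters end up locked in to different technologies depending on early chance events.

```python
import random

def simulate_adoption(n_adopters=10_000, network_benefit=0.01, seed=None):
    """Toy increasing-returns adoption model (illustrative assumptions only).

    Each adopter draws a random intrinsic preference for A and for B, then adds
    a payoff proportional to the technology's installed base. Returns the final
    market share of technology A.
    """
    rng = random.Random(seed)
    installed = {"A": 0, "B": 0}
    for _ in range(n_adopters):
        # History-independent taste, drawn at random for this adopter.
        pref_a = rng.random()
        pref_b = rng.random()
        # History-dependent network benefit: grows with the installed base.
        payoff_a = pref_a + network_benefit * installed["A"]
        payoff_b = pref_b + network_benefit * installed["B"]
        installed["A" if payoff_a >= payoff_b else "B"] += 1
    return installed["A"] / n_adopters

if __name__ == "__main__":
    # Identical parameters, different random histories -> different outcomes.
    print([round(simulate_adoption(seed=s), 3) for s in range(5)])
```

In most runs the final share ends far from an even split: once one technology's installed base is large enough that its network term outweighs the whole range of intrinsic preferences, every later adopter follows the crowd, which is the lock-in property the examples above describe.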
In the 1980s, the US dollar exchange rate appreciated, lowering the world price of tradable goods below the cost of production in many (previously successful) U.S. manufacturers. Some of the factories that closed as a result could later have been operated at a (cash-flow) profit after dollar depreciation, but reopening would have been too expensive. This is an example of hysteresis, switching barriers, and irreversibility. If the economy follows adaptive expectations, future inflation is partly determined by past experience with inflation, since experience determines expected inflation and this is a major determinant of realized inflation. A transitory high rate of unemployment during a recession can lead to a permanently higher unemployment rate because of the skills loss (or skill obsolescence) by the unemployed, along with a deterioration of work attitudes. In other words, cyclical unemployment may generate structural unemployment. This structural hysteresis model of the labour market differs from the prediction of a "natural" unemployment rate or NAIRU, around which 'cyclical' unemployment is said to move without influencing the "natural" rate itself. Types of path dependence Liebowitz and Margolis distinguish types of path dependence; some do not imply inefficiencies and do not challenge the policy implications of neoclassical economics. Only "third-degree" path dependence—where switching gains are high, but transition is impractical—involves such a challenge. They argue that such situations should be rare for theoretical reasons, and that no real-world cases of private locked-in inefficiencies exist. Vergne and Durand qualify this critique by specifying the conditions under which path dependence theory can be tested empirically. Technically, a path-dependent stochastic process has an asymptotic distribution that "evolves as a consequence (function of) the process's own history". This is also known as a non-ergodic stochastic process. In The Theory of the Growth of the Firm (1959), Edith Penrose analyzed how the growth of a firm, both organically and through acquisition, is strongly influenced by the experience of its managers and the history of the firm's development. Conditions which give rise to path dependence Path dependence may arise or be hindered by a number of important factors; these may include the durability of capital equipment, technical interrelatedness, increasing returns, and dynamic increasing returns to adoption. Social sciences Institutions Recent methodological work in comparative politics and sociology has adapted the concept of path dependence into analyses of political and social phenomena. Path dependence has primarily been used in comparative-historical analyses of the development and persistence of institutions, whether they be social, political, or cultural. There are arguably two types of path-dependent processes: One is the critical juncture framework, most notably utilized by Ruth and David Collier in political science. In the critical juncture, antecedent conditions allow contingent choices that set a specific trajectory of institutional development and consolidation that is difficult to reverse. As in economics, the generic drivers are: lock-in, positive feedback, increasing returns (the more a choice is made, the bigger its benefits), and self-reinforcement (which creates forces sustaining the decision). 
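The self-reinforcement and increasing returns just listed, and the non-ergodicity noted above under "Types of path dependence", are often illustrated with a Pólya urn, a standard textbook model of a path-dependent stochastic process. The urn is not discussed in this article itself, and the starting composition and step count below are arbitrary assumptions: start with one red and one blue ball, repeatedly draw a ball at random, and return it together with one more ball of the same colour. Within any single run the share of red balls settles down, but the limit it settles to is random and depends on the early draws.

```python
import random

def polya_urn(steps=50_000, seed=None):
    """Pólya urn: draw a ball, return it plus one more of the same colour.

    The red share converges in every run, but to a random limit that is a
    function of the process's own early history (a non-ergodic process).
    """
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(steps):
        if rng.random() < red / (red + blue):
            red += 1  # drawing red reinforces red
        else:
            blue += 1  # drawing blue reinforces blue
    return red / (red + blue)

if __name__ == "__main__":
    # Same process and parameters, yet each run converges to a different share.
    print([round(polya_urn(seed=s), 3) for s in range(5)])
```

Because each run converges to its own limit, a time average from one history says little about the ensemble of possible outcomes, which is precisely why path-dependent processes resist analysis in terms of a single predetermined equilibrium.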
The other type of path-dependent process involves reactive sequences, in which a primary event sets off a temporally linked and causally tight, nearly deterministic chain of events that is almost uninterruptible. Such reactive sequences have been used to link the assassination of Martin Luther King Jr. with welfare expansion, and the Industrial Revolution in England with the development of the steam engine. The critical juncture framework has been used to explain the development and persistence of welfare states, labor incorporation in Latin America, and the variations in economic development between countries, among other things. Scholars such as Kathleen Thelen caution that the historical determinism in path-dependent frameworks is subject to constant disruption from institutional evolution. Thelen has also criticized the application of QWERTY-keyboard-style mechanisms to politics, arguing that such applications are at once too contingent and too deterministic: too contingent in the sense that the initial choice is open and flukey, and too deterministic in the sense that once the initial choice is made, a path is assumed to form from which there is no return.

Organizations
Paul Pierson's influential attempt to rigorously formalize path dependence within political science draws partly on ideas from economics. Herman Schwartz has questioned those efforts, arguing that forces analogous to those identified in the economic literature are not pervasive in the political realm, where the strategic exercise of power gives rise to, and transforms, institutions. Especially in sociology and organizational theory, a distinct yet closely related concept is imprinting, which captures how initial environmental conditions leave a persistent mark (or imprint) on organizations and organizational collectives (such as industries and communities), thus continuing to shape organizational behaviours and outcomes in the long run, even as external environmental conditions change.

Individuals and groups
The path dependence of emergent strategy has been observed in behavioral experiments with individuals and groups.

Other examples
A general type of path dependence is the typographical vestige: in typography, some customs persist although the reason for their existence no longer applies, for example the placement of the period inside a quotation in U.S. spelling. In metal type, pieces of terminal punctuation such as the comma and period are comparatively small and delicate (as they must be x-height for proper kerning). Placing the full-height quotation mark on the outside protected the smaller cast-metal sort from damage if the word needed to be moved around within or between lines. This would be done even if the period did not belong to the text being quoted. Evolution is considered by some to be path-dependent and historically contingent: mutations occurring in the past have had long-term effects on current life forms, some of which may no longer be adaptive to current conditions. For instance, there is a controversy about whether the panda's thumb is a leftover trait or not. In the computer and software markets, legacy systems indicate path dependence: customers' needs in the present market often include the ability to read data or run programs from past generations of products. Thus, for instance, a customer may need not merely the best available word processor, but rather the best available word processor that can read Microsoft Word files.
Such limitations in compatibility contribute to lock-in and, more subtly, to design compromises for independently developed products if they attempt to be compatible (see also embrace, extend and extinguish). In socioeconomic systems, commercial fisheries' harvest rates and conservation consequences have been found to be path dependent, as predicted by the interaction between slow institutional adaptation, fast ecological dynamics, and diminishing returns. In physics and mathematics, a non-holonomic system is a physical system in which the states depend on the physical paths taken.

See also
Critical juncture theory
Imprinting (organizational theory)
Innovation butterfly
Historicism
Network effect
Opportunity cost
Ratchet effect
Tyranny of small decisions

Notes

References
Arrow, Kenneth J. (1963), Social Choice and Individual Values, 2nd ed., Yale University Press, New Haven, pp. 119–120 (constitutional transitivity as alternative to path dependence on the status quo).
Arthur, W. Brian (1994), Increasing Returns and Path Dependence in the Economy, University of Michigan Press.
In P. Garrouste and S. Ioannides (eds), Evolution and Path Dependence in Economic Ideas: Past and Present, Edward Elgar Publishing, Cheltenham, England.
Hargreaves Heap, Shawn (1980), "Choosing the Wrong 'Natural' Rate: Accelerating Inflation or Decelerating Employment and Growth?", Economic Journal 90(359) (Sept): 611–20 (ISSN 0013-0133).
Margolis, Stephen E. and S. J. Liebowitz (2000), "Path Dependence, Lock-In, and History".
Nelson, R. and S. Winter (1982), An Evolutionary Theory of Economic Change, Harvard University Press.
Penrose, E. T. (1959), The Theory of the Growth of the Firm, New York: Wiley.
Pierson, Paul (2000), "Increasing Returns, Path Dependence, and the Study of Politics", American Political Science Review, June.
Pierson, Paul (2004), Politics in Time: History, Institutions, and Social Analysis, Princeton University Press.
Puffert, Douglas J. (1999), "Path Dependence in Economic History" (based on the entry "Pfadabhängigkeit in der Wirtschaftsgeschichte" in the Handbuch zur evolutorischen Ökonomik).
Puffert, Douglas J. (2001), "Path Dependence in Spatial Networks: The Standardization of Railway Track Gauge".
Puffert, Douglas J. (2009), Tracks across Continents, Paths through History: The Economic Dynamics of Standardization in Railway Gauge, University of Chicago Press.
Schwartz, Herman, "Down the Wrong Path: Path Dependence, Increasing Returns, and Historical Institutionalism", undated mimeo.
Shalizi, Cosma (2001), "QWERTY, Lock-in, and Path Dependence", unpublished website, with extensive references.
Vergne, J. P. and R. Durand (2010), "The missing link between the theory and empirics of path dependence", Journal of Management Studies, 47(4): 736–59, with extensive references.
Bioarchaeology
Bioarchaeology (osteoarchaeology, osteology or palaeo-osteology) in Europe describes the study of biological remains from archaeological sites; in the United States it is the scientific study of human remains from archaeological sites. The term was coined by British archaeologist Grahame Clark who, in 1972, defined it as the study of animal and human bones from archaeological sites. Jane Buikstra formulated the current US definition in 1977. Human remains can inform about the health, lifestyle, diet, mortality and physique of past people. Although Clark used the term to describe just human and animal remains, archaeologists increasingly include botanical remains as well. Bioarchaeology was largely born from the practices of New Archaeology, which developed in the United States in the 1970s as a reaction to a mainly cultural-historical approach to understanding the past. Proponents of New Archaeology advocated testing hypotheses about the interaction between culture and biology, a biocultural approach. Some archaeologists advocate a more holistic approach that incorporates critical theory.

Paleodemography
Paleodemography studies the demographic characteristics of past populations. Bioarchaeologists use paleodemography to create life tables, a type of cohort analysis, to understand demographic characteristics (such as risk of death or sex ratio) of a given age cohort within a population. It is often necessary to estimate the age and sex of individuals based on specific morphological characteristics of the skeleton.

Age
Age estimation attempts to determine the skeletal/biological age-at-death. The primary assumption is that an individual's skeletal age is closely associated with their chronological age. Age estimation can be based on patterns of growth and development or on degenerative changes in the skeleton, and a variety of methods to assess these types of changes have been developed. In children, for instance, age is typically estimated by assessing dental development, ossification and fusion of specific skeletal elements, or long bone length. Because different teeth erupt from the gums in a predictable sequence, dental eruption is among the most reliable indicators of a child's age; fully developed teeth, however, are less indicative. In adults, degenerative changes to the pubic symphysis, the auricular surface of the ilium, the sternal end of the 4th rib, and dental attrition are commonly used to estimate skeletal age. Until the age of about 30, human bones keep growing, and different bones fuse at different points of growth; this development can vary across individuals. Wear and tear on bones further complicates age estimates, so estimates are often limited to 'young' (20–35 years), 'middle' (35–50 years), or 'old' (50+ years).

Sex
Differences in male and female skeletal anatomy are used by bioarchaeologists to determine the biological sex of human skeletons. Humans are sexually dimorphic, although overlap in body shape and sexual characteristics is possible. Not all skeletons can be assigned a sex, and some may be wrongly identified. Biological males and biological females differ most in the skull and pelvis; bioarchaeologists focus on these body parts, although other parts of the skeleton can be used. The female pelvis is generally broader than the male pelvis, and the angle between the two inferior pubic rami (the sub-pubic angle) is wider and more U-shaped, while the sub-pubic angle of the male is more V-shaped and less than 90 degrees.
In general, the male skeleton is more robust than the female skeleton because of males' greater muscle mass. Male skeletons generally have more pronounced brow ridges, nuchal crests, and mastoid processes. Because skeletal size and robustness are also influenced by nutrition and activity levels, pelvic and cranial features are considered to be more reliable indicators of biological sex. Sexing skeletons of young people who have not completed puberty is more difficult and problematic, because the body has not fully developed. Bioarchaeological sexing of skeletons is not error-proof; recording errors and the re-arranging of human remains may play a part in misidentification. Direct testing of bioarchaeological methods for sexing skeletons, by comparing gendered names on coffin plates from the crypt at Christ Church, Spitalfields, London, to the associated remains, achieved a 98 percent success rate. Gendered work patterns may leave marks on bones and be identifiable in the archaeological record. One study found extremely arthritic big toes, a collapse of the last dorsal vertebrae, and muscular arms and legs among female skeletons at Abu Hureyra, interpreting this as indicative of gendered work patterns: such skeletal changes could have resulted from women spending long periods kneeling while grinding grain with the toes curled forward. Investigation of gender from mortuary remains is of growing interest to archaeologists.

Non-specific stress indicators

Dental non-specific stress indicators

Enamel hypoplasia
Enamel hypoplasia refers to transverse furrows or pits that form in the enamel surface of teeth when the normal process of tooth growth stops, leaving a deficit. Enamel hypoplasias generally form due to disease and/or poor nutrition. Linear furrows are commonly referred to as linear enamel hypoplasias (LEHs); LEHs can range in size from microscopic to visible to the naked eye. By examining the spacing of perikymata grooves (horizontal growth lines), the duration of the stressor can be estimated, although Mays argued that the width of the hypoplasia bears only an indirect relationship to the duration of the stressor. Studies of dental enamel hypoplasia are used to study child health. Unlike bone, teeth are not remodeled, so intact enamel can provide a more reliable indicator of past health events. Dental hypoplasias provide an indicator of health status during the time in childhood when the enamel of the tooth crown is forming. Not all enamel layers are visible on the tooth surface, because enamel layers formed early in crown development are buried by later layers, and hypoplasias on this buried enamel do not show on the tooth surface. Because of this buried enamel, the tooth surface only records stressors beginning some months after crown formation starts. The proportion of enamel crown formation time represented by this buried enamel varies from up to 50 percent in molars to 15–20 percent in anterior teeth. Surface hypoplasias record stressors occurring from about one to seven years of age, or up to 13 years if the third molar is included.

Skeletal non-specific stress indicators

Porotic hyperostosis/cribra orbitalia
It was long assumed that iron deficiency anemia has marked effects on the flat bones of the cranium of infants and young children: as the body attempts to compensate for low iron levels by increasing red blood cell production in the young, sieve-like lesions develop in the cranial vaults (termed porotic hyperostosis) and/or the orbits (termed cribra orbitalia); the affected bone is spongy and soft.
It is, however, unlikely that iron deficiency anemia is a cause of either porotic hyperostosis or cribra orbitalia; these are more likely the result of vascular activity in these areas and may not be pathological. The development of cribra orbitalia and porotic hyperostosis could also be attributed to other causes besides a dietary iron deficiency, such as nutrients lost to intestinal parasites; however, dietary deficiencies are the most probable cause. Anemia incidence may be a result of inequalities within society, and/or indicative of different work patterns and activities among different groups within society. A study of iron deficiency among early Mongolian nomads showed that although overall rates of cribra orbitalia declined from 28.7 percent (27.8 percent of the total female population, 28.4 percent of the total male population, 75 percent of the total juvenile population) during the Bronze and Iron Ages to 15.5 percent during the Hunnu (2209–1907 BP) period, the rate of females with cribra orbitalia remained roughly the same, while incidence among males and children declined (29.4 percent of the total female population, 5.3 percent of the total male population, and 25 percent of the juvenile population had cribra orbitalia). This study hypothesized that adults may have lower rates of cribra orbitalia than juveniles because lesions either heal with age or lead to death, and that higher rates of cribra orbitalia among females may indicate poorer health status, or greater survival of young females with cribra orbitalia into adulthood.

Harris lines
Harris lines form before adulthood, when bone growth is temporarily halted or slowed down due to some sort of stress (typically disease or malnutrition). During this time, bone mineralization continues, but growth does not, or does so at reduced levels. If and when the stressor is overcome, bone growth resumes, resulting in a line of increased mineral density visible in a radiograph; if the stressor is not removed, no line forms. Deficiencies in protein and vitamins in particular, which lead to delayed longitudinal bone growth, can result in the formation of Harris lines. During the process of endochondral bone growth, the cessation of osteoblastic activity results in the deposition of a thin layer of bone beneath the cartilage cap, potentially forming Harris lines. Subsequent recovery, necessary for the restoration of osteoblastic activity, is also implicated in Harris line formation: when matured cartilage cells reactivate, bone growth resumes, thickening the bony stratum. Complete recovery from periods of chronic illness or malnutrition therefore manifests as transverse lines on radiographs, and the lines tend to be thicker with prolonged and severe malnutrition. Harris line formation typically peaks in long bones around 2–3 years after birth and becomes rare after the age of 5 until adulthood. Harris lines occur more frequently in boys than in girls.

Hair
The stress hormone cortisol is deposited in hair as it grows. This has been used successfully to detect fluctuating levels of stress in the later lifespan of mummies.

Mechanical stress and activity indicators
Examining the effects that activities have upon the skeleton allows the archaeologist to examine who was doing what kinds of labor and how activities were structured within society. Labor within the household may be divided according to gender and age, or be based on other social structures. Human remains can allow archaeologists to uncover these patterns.
Living bones are subject to Wolff's law, which states that bones are physically affected and remodeled by physical activity or inactivity. Increases in mechanical stress tend to produce thicker and stronger bones, while disruptions in homeostasis caused by nutritional deficiency, disease, or profound inactivity/disuse/disability can lead to bone loss. While the acquisition of bipedal locomotion and body mass appear to determine the size and shape of children's bones, activity during the adolescent growth period seems to exert a greater influence on the size and shape of adult bones than exercise later in life. Muscle attachment sites (entheses) were thought to be affected in the same way, causing entheseal changes. These changes were widely used to study activity patterns, but research has shown that processes associated with aging have a greater impact than occupational stresses. It has also been shown that geometric changes to bone structure (described above) and entheseal changes differ in their underlying cause, with the latter little affected by occupation. Joint changes, including osteoarthritis, have also been used to infer occupations, but in general these too are manifestations of the aging process. Markers of occupational stress, which include morphological changes to the skeleton and dentition as well as joint changes at specific locations, have been widely used to infer specific (rather than general) activities, but such markers are often based on single cases described in late nineteenth-century clinical literature. One such marker has been found to be a reliable indicator of lifestyle: external auditory exostosis, also called surfer's ear, a small bony protuberance in the ear canal that occurs in those working in proximity to cold water.

One example of how these changes have been used to study activities is the African Burial Ground in New York City. It provides evidence of the brutal working conditions under which the enslaved labored; osteoarthritis of the vertebrae was common even among the young. The pattern of osteoarthritis, combined with the early age of onset, provides evidence of labor that resulted in mechanical strain to the neck. One male skeleton shows stress lesions at 37 percent of 33 muscle or ligament attachments, showing he experienced significant musculoskeletal stress. Overall, the interred show signs of significant musculoskeletal stress and heavy workloads, although workload and activities varied by individual: some show high levels of stress while others do not, reflecting the variety of types of labor performed (e.g., domestic work versus carrying heavy loads).

Injury and workload
Fractures made to bones during or after excavation appear relatively fresh, with broken surfaces appearing white and unweathered. Distinguishing between fractures that occurred around the time of death and post-depositional fractures is difficult, as both types show signs of weathering; unless evidence of bone healing or other factors is present, researchers may choose to regard all weathered fractures as post-depositional. Evidence of perimortem fractures (fractures inflicted on a fresh corpse) can be distinguished in unhealed metal-blade injuries to the bones: living or freshly dead bones are somewhat resilient, so metal-blade injuries generate a linear cut with relatively clean edges rather than irregular shattering.
Archaeologists have attempted to use the microscopic parallel scratch marks on cut bones in order to estimate the trajectory of the blade that caused the injury.

Diet and dental health
Dental caries are caused by localized destruction of tooth enamel by acids produced by bacteria feeding upon and fermenting carbohydrates in the mouth. Agriculture is strongly associated with a higher rate of caries than foraging, because of the higher levels of carbohydrates in agricultural diets. For example, bioarchaeologists have used caries in skeletons to correlate a diet of rice with disease. Women may be more vulnerable to caries than men due to having lower saliva flow, the positive correlation of estrogen with increased caries rates, and pregnancy-associated physiological changes, such as suppression of the immune system and a possible concomitant decrease in antimicrobial activity in the oral cavity.

Stable isotope analysis
Stable isotope biogeochemistry uses variations in isotopic signatures and relates them to biogeochemical processes. The science is based on the preferential fractionation of lighter or heavier isotopes, which results in enriched and depleted isotopic signatures compared to a standard value. Essential elements for life such as carbon, nitrogen, oxygen, and sulfur are the primary stable isotope systems used to interrogate archaeological discoveries. Isotopic signatures from multiple systems are typically used in tandem to create a comprehensive understanding of the analyzed material. These systems are most commonly used to trace the geographic origin of archaeological remains and to investigate the diets, mobility, and cultural practices of ancient humans.

Applications

Carbon
Stable isotope analysis of carbon in human bone collagen allows bioarchaeologists to carry out dietary reconstruction and to make nutritional inferences. These chemical signatures reflect long-term dietary patterns, rather than a single meal or feast. Isotope ratios in food, especially plant food, are directly and predictably reflected in bone chemistry, allowing researchers to partially reconstruct recent diet using stable isotopes as tracers. Stable isotope analysis monitors the ratio of carbon-13 to carbon-12 (13C/12C), which is expressed in parts per thousand using delta notation (δ13C). The 13C/12C ratio is either depleted (more negative) or enriched (more positive) relative to a standard; 12C and 13C occur in a ratio of approximately 98.9 to 1.1. The ratio of carbon isotopes in humans varies according to the types of plants consumed and their photosynthesis pathways. The three photosynthesis pathways are C3 carbon fixation, C4 carbon fixation and Crassulacean acid metabolism. C4 plants are mainly grasses from tropical and subtropical regions, and are adapted to higher levels of radiation than C3 plants. Corn, millet and sugar cane are some well-known C4 crops, while trees and shrubs use the C3 pathway. C4 carbon fixation is more efficient when temperatures are high and atmospheric CO2 concentrations are low. C3 plants are more common and numerous than C4 plants, as C3 carbon fixation is efficient across a wider range of temperatures and atmospheric CO2 concentrations. The different photosynthesis pathways used by C3 and C4 plants cause them to discriminate differently against 13C, leading to distinctly different ranges of δ13C: C4 plants range between -9 and -16‰, and C3 plants range between -22 and -34‰.
The isotopic signature of consumer collagen is close to the δ13C of dietary plants, while apatite, a mineral component of bones and teeth, has an offset of roughly 14‰ from dietary plants due to fractionation associated with mineral formation. Stable carbon isotopes have been used as tracers of C4 plants in paleodiets. For example, the rapid and dramatic increase in 13C in human collagen after the adoption of maize agriculture in North America documents the transition from a C3 to a C4 (native plants to corn) diet by 1300 CE. Skeletons excavated from the Cobern Street Burial Ground (1750 to 1827 CE) in Cape Town, South Africa, were analyzed using stable isotope data in order to determine geographical histories and life histories. The people buried in this cemetery were assumed to be slaves and members of the underclass based on the informal nature of the cemetery; biomechanical stress analysis and stable isotope analysis, combined with other archaeological data, seem to support this supposition. Based on stable isotope levels, one study reported that eight Cobern Street Burial Ground individuals consumed a diet based on C4 (tropical) plants in childhood, then consumed more C3 plants, which were more common locally, later in their lives. Six of these individuals had dental modifications similar to those carried out by peoples inhabiting tropical areas known to be targeted by slavers who brought enslaved individuals from other parts of Africa to the colony. Based on this evidence, it was argued that these individuals represent enslaved persons from areas of Africa where C4 plants were consumed and who were brought to the Cape as laborers. These individuals were not assigned to a specific ethnicity, but similar dental modifications are carried out by the Makua, Yao, and Marav peoples. Four individuals were buried with no grave goods, in accordance with Muslim tradition, facing Signal Hill, a point of significance for local Muslims. Their isotopic signatures indicate that they grew up in a temperate environment consuming mostly C3 plants, but with some C4 plants as well. The study argued that these individuals came from the Indian Ocean area and suggested that they were Muslims. It argued that stable isotopic analysis of burials, combined with historical and archaeological data, is an effective way of investigating the migrations forced by the African slave trade, as well as the emergence of the underclass and working class in the Old World.

Nitrogen
The nitrogen stable isotope system is based on the relative enrichment or depletion of 15N in comparison to 14N, expressed as δ15N. Carbon and nitrogen stable isotope analyses are complementary in paleodiet studies. Nitrogen isotopes in bone collagen are ultimately derived from dietary protein, while carbon can be contributed by protein, carbohydrate, or fat. δ13C values help distinguish between dietary protein and plant sources, while the systematic increase in δ15N values with trophic level helps determine the position of protein sources in the food web: δ15N increases by about 3–4‰ with each trophic step upward. It has been suggested that the relative difference between human δ15N values and animal protein values scales with the proportion of that animal protein in the diet, though this interpretation has been questioned due to contradictory views on how nitrogen intake through protein consumption and nitrogen loss through waste release affect 15N enrichment in the body. Variations in nitrogen values within the same trophic level are also considered.
Nitrogen variations in plants, for example, can be caused by plant-specific reliance on nitrogen gas, which causes the plant to mirror atmospheric values, while enriched (higher) δ15N values can occur in plants that grew in soil fertilized by animal waste. Nitrogen isotopes have been used to estimate the relative contributions of legumes versus non-legumes, as well as terrestrial versus marine resources. While other plants have δ15N values that range from 2 to 6‰, legumes have lower 15N/14N ratios (δ15N close to 0‰, i.e. that of atmospheric N2) because they can fix molecular nitrogen rather than having to rely on soil nitrates and nitrites. Therefore, one potential explanation for lower δ15N values in human remains is an increased consumption of legumes or of animals that eat them. In general, δ15N values increase with meat consumption and decrease with legume consumption, so the ratio could be used to gauge the contribution of meat and legumes to the diet.

Oxygen
The oxygen stable isotope system is based on the 18O/16O ratio (δ18O) in a given material, which is enriched or depleted relative to a standard. The field typically normalizes to both Vienna Standard Mean Ocean Water (VSMOW) and Standard Light Antarctic Precipitation (SLAP). This system is best known for its use in paleoclimatic studies, but it is also a prominent source of information in bioarchaeology. Variations in δ18O values in skeletal remains are directly related to the isotopic composition of the consumer's body water, which in mammals is primarily controlled by consumed water. δ18O values of freshwater drinking sources vary due to mass fractionations related to mechanisms of the global water cycle: evaporated water vapor is enriched in 16O (isotopically lighter; more negative delta value) compared to the remaining water, which is depleted in 16O (isotopically heavier; more positive delta value). An accepted first-order approximation for the isotopic composition of animal drinking water is local precipitation, though this is complicated to varying degrees by confounding water sources such as natural springs or lakes, and the baseline δ18O used in archaeological studies is adjusted depending on the relevant environmental and historical context. δ18O values of bioapatite in human skeletal remains are assumed to have formed in equilibrium with body water, thus providing a species-specific relationship to the oxygen isotopic composition of body water. The same cannot be said for human bone collagen, as δ18O values in collagen seem to be affected by drinking water, food water, and a combination of metabolic and physiological processes. δ18O values from bone mineral are essentially an averaged isotopic signature over the entire life of the individual. While carbon and nitrogen are used primarily to investigate the diets of ancient humans, oxygen isotopes offer insight into body water at different life stages; δ18O values are used to understand drinking behaviors and animal husbandry and to track mobility. Ninety-seven burials from the ancient Maya citadel of Tikal were studied using oxygen isotopes. Results from tooth enamel identified individuals with statistically distinct values, interpreted as migrants from the Maya lowlands, Guatemala, and potentially Mexico. Historical context combined with the isotopic data from the burials was used to argue that migrant individuals belonged to both lower and higher social classes within Tikal, and that female migrants who arrived in Tikal during the Early Classic period could have been the brides of Maya elites.
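The carbon and nitrogen values discussed above are often combined in simple back-of-the-envelope calculations: a two-endmember mixing model for the proportion of C4-derived carbon in the diet, and a trophic-level estimate from δ15N. The sketch below only illustrates that arithmetic; the endmember values, the diet-to-collagen offset, and the per-step trophic enrichment are rounded assumptions drawn from the ranges quoted in this section, not values from any particular study.

```python
def fraction_c4(d13c_collagen, c3_end=-26.5, c4_end=-12.5, diet_to_collagen_offset=5.0):
    """Two-endmember mixing estimate of the C4 fraction of dietary carbon.

    d13c_collagen: measured collagen value (per mil).
    c3_end, c4_end: assumed mean plant endmembers (per mil).
    diet_to_collagen_offset: assumed enrichment of collagen over diet (per mil).
    Returns a fraction clamped to [0, 1].
    """
    d13c_diet = d13c_collagen - diet_to_collagen_offset
    frac = (d13c_diet - c3_end) / (c4_end - c3_end)
    return max(0.0, min(1.0, frac))

def trophic_level(d15n_consumer, d15n_baseline, enrichment_per_step=3.5):
    """Rough trophic position above a baseline (e.g. local plants),
    assuming a constant per-step enrichment of roughly 3-4 per mil."""
    return (d15n_consumer - d15n_baseline) / enrichment_per_step

# Example with a hypothetical individual: collagen d13C of -10 per mil and
# d15N of 10 per mil against an assumed plant baseline of 3 per mil.
print(f"approx. C4 fraction of dietary carbon: {fraction_c4(-10.0):.2f}")
print(f"approx. trophic steps above plants: {trophic_level(10.0, 3.0):.1f}")
```

In practice, bioarchaeologists use calibrated endmembers and offsets appropriate to the region and tissue analyzed, and interpret the results alongside the caveats on protein routing and nitrogen balance noted above.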
Sulfur
The sulfur stable isotope system is based on small, mass-dependent fractionations of sulfur isotopes. These fractionations are reported relative to Vienna Canyon Diablo Troilite (V-CDT), the agreed-upon standard. The ratio of the most abundant sulfur isotope, 32S, to rarer isotopes such as 33S, 34S, and 36S is used to characterize biological signatures and geological reservoirs. The fractionation of 34S (δ34S) is particularly useful since 34S is the most abundant of the rare isotopes. This system is less commonly used on its own and typically complements studies of carbon and nitrogen. In bioarchaeology, the sulfur system has been used to investigate paleodiets and spatial behaviors through the analysis of hair and bone collagen. Dietary proteins incorporated into living organisms tend to determine the stable isotope values of their organic tissues. Methionine and cysteine are the canonical sulfur-containing amino acids; of the two, δ34S values of methionine are considered to better reflect the isotopic composition of dietary sulfur, since cysteine values are affected by both diet and internal cycling. While other stable isotope systems show significant trophic shifts, sulfur shows only a small shift (~0.5‰), so consumers yield isotopic signatures that reflect the sulfur reservoir(s) of the dietary protein source. Animal proteins sourced from marine ecosystems tend to have δ34S values between +16 and +17‰, terrestrial plants range from -7‰ to +8‰, and proteins from freshwater and terrestrial ecosystems are highly variable. The sulfate content of the modern ocean is well mixed, with a δ34S of approximately +21‰, while riverine water is heavily influenced by sulfur-bearing minerals in the surrounding bedrock and terrestrial plants are influenced by the sulfur content of local soils. Estuarine ecosystems show increased complexity due to mixed seawater and river inputs. The extreme range of δ34S values for freshwater ecosystems often interferes with terrestrial signals, making it difficult to use the sulfur system as the sole tool in paleodiet studies. Various studies have analyzed the isotopic ratios of sulfur in mummified hair; hair is a good candidate for sulfur studies, as it typically contains at least 5 percent sulfur. One study incorporated sulfur isotope ratios into a paleodietary investigation of four mummified child victims of Incan sacrificial practices: δ34S values helped the researchers conclude that the children had not been eating marine protein before their death, and historical insight coupled with consistent sulfur signatures for three of the children suggests that they were living in the same location six months prior to the sacrifice. Studies have also measured δ34S values of bone collagen, though the interpretation of these values was not reliable until quality criteria were published in 2009. Although bone collagen is abundant in skeletal remains, less than 1 percent of the tissue is made of sulfur, making it imperative that such studies carefully assess the meaning of bone collagen δ34S values.

DNA
DNA analysis of past populations is used to genetically determine sex, establish genetic relatedness, understand marriage patterns, and investigate prehistoric migration. In 2012, archaeologists found the skeletal remains of an adult male buried under a car park in England; DNA evidence allowed them to confirm that the remains belonged to Richard III, the former king of England who died in the Battle of Bosworth.
In 2021, Canadian researchers analyzed skeletal remains found on King William Island, identifying them as belonging to Warrant Officer John Gregory, an engineer serving aboard HMS Erebus in the ill-fated 1845 Franklin Expedition. He was the first expedition member to be identified by DNA analysis.

Biocultural bioarchaeology
The study of human remains can illuminate the relationship between physical bodies and socio-cultural conditions and practices via a biocultural bioarchaeology model. Bioarchaeology is typically regarded as a positivist, science-based discipline, while the social sciences are regarded as constructivist, and bioarchaeology has been criticized for having little to no concern for culture or history. One scholar argued that scientific/forensic scholarship ignores cultural and historical factors, and proposed that a biocultural version of bioarchaeology offers a more meaningful, nuanced, and relevant picture, especially for descendant populations. Biocultural bioarchaeology combines standard forensic techniques with investigations of demography and epidemiology in order to assess the socioeconomic conditions experienced by human communities; incorporating the analysis of grave goods, for example, can further the understanding of daily activities. Some bioarchaeologists view the discipline as a crucial interface between the sciences and the humanities, as the human body is made and re-made by both biological and cultural factors. Another type of bioarchaeology focuses on quality of life, lifestyle, behavior, biological relatedness, and population history; it does not closely link skeletal remains to their archaeological context, and may best be viewed as a "skeletal biology of the past". Inequalities exist in all human societies. Bioarchaeology has helped to dispel the idea that life for foragers of the past was "nasty, brutish and short": bioarchaeological studies have reported that foragers of the past were often healthy, while agricultural societies tended to have an increased incidence of malnutrition and disease. One study compared foragers from Oakhurst to agriculturalists from K2 and Mapungubwe and reported that the agriculturalists were not subject to the lower nutritional levels expected. Danforth argues that more "complex" state-level societies display greater health differences between elites and the rest of society, with elites having the advantage, and that this disparity increases as societies become more unequal. Some status differences in society do not necessarily mean radically different nutritional levels; Powell did not find evidence of great nutritional differences between elites and commoners, but did find lower rates of anemia among elites in Moundville. An area of increasing interest for understanding inequality is the study of violence. Researchers analyzing traumatic injuries on human remains have shown that social status and gender can have a significant impact on exposure to violence. Numerous researchers study violence in human remains, exploring violent behavior including intimate partner violence, child abuse, institutional abuse, torture, warfare, human sacrifice, and structural violence.

Ethics
Ethical issues in bioarchaeology revolve around the treatment of and respect for the dead. Large-scale skeletal collections were first amassed in the US in the 19th century, largely from the remains of Native Americans, and no permission for study and display was granted by surviving relatives.
Federal laws such as the 1990 NAGPRA (Native American Graves Protection and Repatriation Act) allowed Native Americans to regain control over their ancestors' remains and associated artifacts. Many archaeologists had not appreciated that many people perceive archaeologists as unproductive or as grave robbers. Concerns about the mistreatment of remains are not unfounded: in a 1971 Minnesota excavation, White and Native American remains were treated differently, with the White remains reburied while the Native American remains were moved to a natural history museum. African American bioarchaeology grew after NAGPRA and its effect of ending the study of Native American remains. Bioarchaeology in Europe was not as disrupted by repatriation issues. However, because much of European archaeology has focused on classical roots, artifacts and art have been emphasized, and Roman and post-Roman skeletal remains were nearly completely neglected until the 1980s. In prehistoric European archaeology, biological remains began to be analyzed earlier than in classical archaeology.

See also
Ancient DNA
Biocultural anthropology
Odontometrics
Osteoarchaeology
Paleopathology
Zooarchaeology

References

Further reading
J. Buikstra, 1977. "Biocultural dimensions of archaeological study: a regional perspective". In: Biocultural Adaptation in Prehistoric America, pp. 67–84. University of Georgia Press.
J. Buikstra and L. Beck, eds., 2006. Bioarchaeology: The Contextual Study of Human Remains. Elsevier.
M. Katzenberg and S. Saunders, eds., 2000. Biological Anthropology of the Human Skeleton. Wiley.
K. Killgrove, 2014. "Bioarchaeology". In: Oxford Annotated Bibliographies Online. Oxford.
C.S. Larsen, 1997. Bioarchaeology: Interpreting Behavior from the Human Skeleton. Cambridge University Press.
S. Mays, 1998. The Archaeology of Human Bones. Routledge.
Samuel J. Redman, 2016. Bone Rooms: From Scientific Racism to Human Prehistory in Museums. Harvard University Press.
M. Parker Pearson, 2001. The Archaeology of Death and Burial. Texas A&M University Press.
D. Ubelaker, 1989. Human Skeletal Remains: Excavation, Analysis, Interpretation. Taraxacum.
T. White, 1991. Human Osteology. Academic Press.

External links
Organizations: American Association of Physical Anthropologists; Biological Anthropology Section of the American Anthropological Association; British Association of Biological Anthropologists and Osteoarchaeologists; Canadian Association for Physical Anthropology
Journals: American Journal of Physical Anthropology; International Journal of Osteoarchaeology; HOMO: Journal of Comparative Human Biology; International Journal of Paleopathology; Bioarchaeology of the Near East
Other: Paleopathology; The African Burial Ground; Bioarchaeology and the Center for Bioarchaeological Research; National NAGPRA homepage; Bones Don't Lie (blog); Powered by Osteons (blog); Kristina Killgrove's Bioarchaeology Blog at Forbes
Glacial period
A glacial period (alternatively glacial or glaciation) is an interval of time (thousands of years) within an ice age that is marked by colder temperatures and glacier advances. Interglacials, on the other hand, are periods of warmer climate between glacial periods. The Last Glacial Period ended about 15,000 years ago. The Holocene is the current interglacial. A time with no glaciers on Earth is considered a greenhouse climate state.

Quaternary Period
Within the Quaternary, which started about 2.6 million years before present, there have been a number of glacials and interglacials. At least eight glacial cycles have occurred in the last 740,000 years alone.

Penultimate Glacial Period
The Penultimate Glacial Period (PGP) is the glacial period that occurred before the Last Glacial Period. It began about 194,000 years ago and ended 135,000 years ago, with the beginning of the Eemian interglacial.

Last Glacial Period
The Last Glacial Period was the most recent glacial period within the Quaternary glaciation, at the end of the Pleistocene; it began about 110,000 years ago and ended about 11,700 years ago. The glaciations that occurred during this period covered many areas of the Northern Hemisphere and have different names depending on their geographic distributions: Wisconsin (in North America), Devensian (in Great Britain), Midlandian (in Ireland), Würm (in the Alps), Weichsel (in northern Central Europe), Dali (in East China), Beiye (in North China), Taibai (in Shaanxi), Luoji Shan (in southwest Sichuan), Zagunao (in northwest Sichuan), Tianchi (in the Tian Shan), Jomolungma (in the Himalayas), and Llanquihue (in Chile). The glacial advance reached the Last Glacial Maximum about 26,500 BP. In Europe, the ice sheet reached northern Germany. Over the last 650,000 years, there have been on average seven cycles of glacial advance and retreat.

Next glacial period
Since orbital variations are predictable, computer models that relate orbital variations to climate can predict future climate possibilities. Work by Berger and Loutre suggests that the current warm climate may last another 50,000 years. The amount of heat-trapping (greenhouse) gases being emitted into the Earth's oceans and atmosphere may delay the next glacial period by an additional 50,000 years.
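The "predictable orbital variations" referred to above are the Milankovitch cycles, whose dominant periods (roughly 100,000 years for eccentricity, about 41,000 years for obliquity, and about 23,000 years for precession) are well established. The sketch below is only a toy illustration of how such quasi-periodic forcing can be composed and projected forward; the amplitudes and phases are arbitrary assumptions, and real models such as Berger and Loutre's compute insolation from full orbital solutions rather than from simple sinusoids.

```python
import math

# Approximate dominant Milankovitch periods in years (well-established values);
# the amplitudes here are arbitrary and for illustration only.
CYCLES = [
    ("eccentricity", 100_000, 1.0),
    ("obliquity", 41_000, 0.6),
    ("precession", 23_000, 0.4),
]

def toy_forcing(years_from_present):
    """Sum of sinusoids standing in for orbitally driven insolation changes.

    Positive years are in the future, negative in the past. The output is in
    arbitrary units and is not a physical insolation value.
    """
    return sum(
        amplitude * math.cos(2.0 * math.pi * years_from_present / period)
        for _, period, amplitude in CYCLES
    )

if __name__ == "__main__":
    # Crude look at the next 100,000 years in 10,000-year steps.
    for t in range(0, 100_001, 10_000):
        print(f"{t:>7} yr: forcing = {toy_forcing(t):+.2f}")
```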
History of Earth
The natural history of Earth concerns the development of planet Earth from its formation to the present day. Nearly all branches of natural science have contributed to understanding of the main events of Earth's past, characterized by constant geological change and biological evolution. The geological time scale (GTS), as defined by international convention, depicts the large spans of time from the beginning of the Earth to the present, and its divisions chronicle some definitive events of Earth history. Earth formed around 4.54 billion years ago, approximately one-third the age of the universe, by accretion from the solar nebula. Volcanic outgassing probably created the primordial atmosphere and then the ocean, but the early atmosphere contained almost no oxygen. Much of the Earth was molten because of frequent collisions with other bodies which led to extreme volcanism. While the Earth was in its earliest stage (Early Earth), a giant impact collision with a planet-sized body named Theia is thought to have formed the Moon. Over time, the Earth cooled, causing the formation of a solid crust, and allowing liquid water on the surface. The Hadean eon represents the time before a reliable (fossil) record of life; it began with the formation of the planet and ended 4.0 billion years ago. The following Archean and Proterozoic eons produced the beginnings of life on Earth and its earliest evolution. The succeeding eon is the Phanerozoic, divided into three eras: the Palaeozoic, an era of arthropods, fishes, and the first life on land; the Mesozoic, which spanned the rise, reign, and climactic extinction of the non-avian dinosaurs; and the Cenozoic, which saw the rise of mammals. Recognizable humans emerged at most 2 million years ago, a vanishingly small period on the geological scale. The earliest undisputed evidence of life on Earth dates at least from 3.5 billion years ago, during the Eoarchean Era, after a geological crust started to solidify following the earlier molten Hadean eon. There are microbial mat fossils such as stromatolites found in 3.48 billion-year-old sandstone discovered in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in southwestern Greenland as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. According to one of the researchers, "If life arose relatively quickly on Earth … then it could be common in the universe." Photosynthetic organisms appeared between 3.2 and 2.4 billion years ago and began enriching the atmosphere with oxygen. Life remained mostly small and microscopic until about 580 million years ago, when complex multicellular life arose, developed over time, and culminated in the Cambrian Explosion about 538.8 million years ago. This sudden diversification of life forms produced most of the major phyla known today, and divided the Proterozoic Eon from the Cambrian Period of the Paleozoic Era. It is estimated that 99 percent of all species that ever lived on Earth, over five billion, have gone extinct. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.2 million are documented, but over 86 percent have not been described. The Earth's crust has constantly changed since its formation, as has life since its first appearance. Species continue to evolve, taking on new forms, splitting into daughter species, or going extinct in the face of ever-changing physical environments. 
The process of plate tectonics continues to shape the Earth's continents and oceans and the life they harbor. Eons In geochronology, time is generally measured in mya (million years ago), each unit representing the period of approximately 1,000,000 years in the past. The history of Earth is divided into four great eons, starting 4,540 mya with the formation of the planet. Each eon saw the most significant changes in Earth's composition, climate and life. Each eon is subsequently divided into eras, which in turn are divided into periods, which are further divided into epochs. Geologic time scale The history of the Earth can be organized chronologically according to the geologic time scale, which is split into intervals based on stratigraphic analysis. Solar System formation The standard model for the formation of the Solar System (including the Earth) is the solar nebula hypothesis. In this model, the Solar System formed from a large, rotating cloud of interstellar dust and gas called the solar nebula. It was composed of hydrogen and helium created shortly after the Big Bang 13.8 Ga (billion years ago) and heavier elements ejected by supernovae. About 4.5 Ga, the nebula began a contraction that may have been triggered by the shock wave from a nearby supernova. A shock wave would have also made the nebula rotate. As the cloud began to accelerate, its angular momentum, gravity, and inertia flattened it into a protoplanetary disk perpendicular to its axis of rotation. Small perturbations due to collisions and the angular momentum of other large debris created the means by which kilometer-sized protoplanets began to form, orbiting the nebular center. The center of the nebula, not having much angular momentum, collapsed rapidly, the compression heating it until nuclear fusion of hydrogen into helium began. After more contraction, a T Tauri star ignited and evolved into the Sun. Meanwhile, in the outer part of the nebula gravity caused matter to condense around density perturbations and dust particles, and the rest of the protoplanetary disk began separating into rings. In a process known as runaway accretion, successively larger fragments of dust and debris clumped together to form planets. Earth formed in this manner about 4.54 billion years ago (with an uncertainty of 1%) and was largely completed within 10–20 million years. In June 2023, scientists reported evidence that the planet Earth may have formed in just three million years, much faster than the 10−100 million years thought earlier. Nonetheless, the solar wind of the newly formed T Tauri star cleared out most of the material in the disk that had not already condensed into larger bodies. The same process is expected to produce accretion disks around virtually all newly forming stars in the universe, some of which yield planets. The proto-Earth grew by accretion until its interior was hot enough to melt the heavy, siderophile metals. Having higher densities than the silicates, these metals sank. This so-called iron catastrophe resulted in the separation of a primitive mantle and a (metallic) core only 10 million years after the Earth began to form, producing the layered structure of Earth and setting up the formation of Earth's magnetic field. J.A. Jacobs was the first to suggest that Earth's inner core—a solid center distinct from the liquid outer core—is freezing and growing out of the liquid outer core due to the gradual cooling of Earth's interior (about 100 degrees Celsius per billion years). 
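The absolute ages quoted in this article for the Earth, the Moon, and the oldest rocks come from radiometric dating. As a rough illustration of the underlying arithmetic rather than of any specific study, the general age equation t = ln(1 + D/P) / λ gives the elapsed time from the measured ratio of daughter (D) to parent (P) atoms and the parent's decay constant. The sketch below uses the well-known decay constant of uranium-238; the D/P ratio shown is an invented number chosen only to produce an age of roughly the right order.

```python
import math

# Decay constant of U-238 (per year), corresponding to a half-life of ~4.47 Gyr.
LAMBDA_U238 = 1.55125e-10

def radiometric_age(daughter_to_parent_ratio, decay_constant=LAMBDA_U238):
    """General radiometric age equation: t = ln(1 + D/P) / lambda.

    Assumes a closed system and no initial daughter isotope; real
    geochronology corrects for both (e.g. with isochron methods).
    """
    return math.log(1.0 + daughter_to_parent_ratio) / decay_constant

if __name__ == "__main__":
    # Hypothetical measured Pb-206/U-238 ratio, chosen for illustration only.
    ratio = 1.0
    age_gyr = radiometric_age(ratio) / 1e9
    print(f"D/P = {ratio} -> age of about {age_gyr:.2f} billion years")
```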
Hadean and Archean Eons The first eon in Earth's history, the Hadean, begins with the Earth's formation and is followed by the Archean eon at 3.8 Ga. The oldest rocks found on Earth date to about 4.0 Ga, and the oldest detrital zircon crystals in rocks to about 4.4 Ga, soon after the formation of the Earth's crust and the Earth itself. The giant impact hypothesis for the Moon's formation states that shortly after formation of an initial crust, the proto-Earth was impacted by a smaller protoplanet, which ejected part of the mantle and crust into space and created the Moon. From crater counts on other celestial bodies, it is inferred that a period of intense meteorite impacts, called the Late Heavy Bombardment, began about 4.1 Ga, and concluded around 3.8 Ga, at the end of the Hadean. In addition, volcanism was severe due to the large heat flow and geothermal gradient. Nevertheless, detrital zircon crystals dated to 4.4 Ga show evidence of having undergone contact with liquid water, suggesting that the Earth already had oceans or seas at that time. By the beginning of the Archean, the Earth had cooled significantly. Present life forms could not have survived at Earth's surface, because the Archean atmosphere lacked oxygen hence had no ozone layer to block ultraviolet light. Nevertheless, it is believed that primordial life began to evolve by the early Archean, with candidate fossils dated to around 3.5 Ga. Some scientists even speculate that life could have begun during the early Hadean, as far back as 4.4 Ga, surviving the possible Late Heavy Bombardment period in hydrothermal vents below the Earth's surface. Formation of the Moon Earth's only natural satellite, the Moon, is larger relative to its planet than any other satellite in the Solar System. During the Apollo program, rocks from the Moon's surface were brought to Earth. Radiometric dating of these rocks shows that the Moon is 4.53 ± 0.01 billion years old, formed at least 30 million years after the Solar System. New evidence suggests the Moon formed even later, 4.48 ± 0.02 Ga, or 70–110 million years after the start of the Solar System. Theories for the formation of the Moon must explain its late formation as well as the following facts. First, the Moon has a low density (3.3 times that of water, compared to 5.5 for the Earth) and a small metallic core. Second, the Earth and Moon have the same oxygen isotopic signature (relative abundance of the oxygen isotopes). Of the theories proposed to account for these phenomena, one is widely accepted: The giant impact hypothesis proposes that the Moon originated after a body the size of Mars (sometimes named Theia) struck the proto-Earth a glancing blow. The collision released about 100 million times more energy than the more recent Chicxulub impact that is believed to have caused the extinction of the non-avian dinosaurs. It was enough to vaporize some of the Earth's outer layers and melt both bodies. A portion of the mantle material was ejected into orbit around the Earth. The giant impact hypothesis predicts that the Moon was depleted of metallic material, explaining its abnormal composition. The ejecta in orbit around the Earth could have condensed into a single body within a couple of weeks. Under the influence of its own gravity, the ejected material became a more spherical body: the Moon. First continents Mantle convection, the process that drives plate tectonics, is a result of heat flow from the Earth's interior to the Earth's surface. 
It involves the creation of rigid tectonic plates at mid-oceanic ridges. These plates are destroyed by subduction into the mantle at subduction zones. During the early Archean (about 3.0 Ga) the mantle was much hotter than today, so convection in the mantle was faster. Although a process similar to present-day plate tectonics did occur, this would have gone faster too. It is likely that during the Hadean and Archean, subduction zones were more common, and therefore tectonic plates were smaller. The initial crust, which formed when the Earth's surface first solidified, totally disappeared from a combination of this fast Hadean plate tectonics and the intense impacts of the Late Heavy Bombardment. However, it is thought to have been basaltic in composition, like today's oceanic crust, because little crustal differentiation had yet taken place. The first larger pieces of continental crust, which is a product of differentiation of lighter elements during partial melting in the lower crust, appeared at the end of the Hadean, about 4.0 Ga. The remnants of these first small continents are called cratons. These pieces of late Hadean and early Archean crust form the cores around which today's continents grew. The oldest rocks on Earth are found in the North American craton of Canada. They are tonalites from about 4.0 Ga. They show traces of metamorphism by high temperature, but also sedimentary grains that have been rounded by erosion during transport by water, showing that rivers and seas existed then. Cratons consist primarily of two alternating types of terranes. The first are so-called greenstone belts, consisting of low-grade metamorphosed sedimentary rocks. These "greenstones" are similar to the sediments found today in oceanic trenches, above subduction zones. For this reason, greenstones are sometimes seen as evidence for subduction during the Archean. The second type is a complex of felsic magmatic rocks. These rocks are mostly tonalite, trondhjemite or granodiorite, types of rock similar in composition to granite (hence such terranes are called TTG-terranes). TTG-complexes are seen as the relicts of the first continental crust, formed by partial melting of basalt.

Oceans and atmosphere
Earth is often described as having had three atmospheres. The first atmosphere, captured from the solar nebula, was composed of light (atmophile) elements, mostly hydrogen and helium. A combination of the solar wind and Earth's heat would have driven off this atmosphere, as a result of which the atmosphere is now depleted of these elements compared to cosmic abundances. After the impact which created the Moon, the molten Earth released volatile gases, and later more gases were released by volcanoes, completing a second atmosphere rich in greenhouse gases but poor in oxygen. Finally, the third atmosphere, rich in oxygen, emerged when bacteria began to produce oxygen about 2.8 Ga. In early models for the formation of the atmosphere and ocean, the second atmosphere was formed by outgassing of volatiles from the Earth's interior. It is now considered likely that many of the volatiles were delivered during accretion by a process known as impact degassing, in which incoming bodies vaporize on impact. The ocean and atmosphere would therefore have started to form even as the Earth formed. The new atmosphere probably contained water vapor, carbon dioxide, nitrogen, and smaller amounts of other gases.
Planetesimals at a distance of 1 astronomical unit (AU), the distance of the Earth from the Sun, probably did not contribute any water to the Earth because the solar nebula was too hot for ice to form and the hydration of rocks by water vapor would have taken too long. The water must have been supplied by meteorites from the outer asteroid belt and some large planetary embryos from beyond 2.5 AU. Comets may also have contributed. Though most comets are today in orbits farther away from the Sun than Neptune, computer simulations show that they were originally far more common in the inner parts of the Solar System. As the Earth cooled, clouds formed. Rain created the oceans. Recent evidence suggests the oceans may have begun forming as early as 4.4 Ga. By the start of the Archean eon, they already covered much of the Earth. This early formation has been difficult to explain because of a problem known as the faint young Sun paradox. Stars are known to get brighter as they age, and the Sun has become 30% brighter since its formation 4.5 billion years ago. Many models indicate that the early Earth should have been covered in ice. A likely solution is that there was enough carbon dioxide and methane to produce a greenhouse effect. The carbon dioxide would have been produced by volcanoes and the methane by early microbes. It is hypothesized that there also existed an organic haze created from the products of methane photolysis that caused an anti-greenhouse effect as well. Another greenhouse gas, ammonia, would have been ejected by volcanos but quickly destroyed by ultraviolet radiation. Origin of life One of the reasons for interest in the early atmosphere and ocean is that they form the conditions under which life first arose. There are many models, but little consensus, on how life emerged from non-living chemicals; chemical systems created in the laboratory fall well short of the minimum complexity for a living organism. The first step in the emergence of life may have been chemical reactions that produced many of the simpler organic compounds, including nucleobases and amino acids, that are the building blocks of life. An experiment in 1952 by Stanley Miller and Harold Urey showed that such molecules could form in an atmosphere of water, methane, ammonia and hydrogen with the aid of sparks to mimic the effect of lightning. Although atmospheric composition was probably different from that used by Miller and Urey, later experiments with more realistic compositions also managed to synthesize organic molecules. Computer simulations show that extraterrestrial organic molecules could have formed in the protoplanetary disk before the formation of the Earth. Additional complexity could have been reached from at least three possible starting points: self-replication, an organism's ability to produce offspring that are similar to itself; metabolism, its ability to feed and repair itself; and external cell membranes, which allow food to enter and waste products to leave, but exclude unwanted substances. Replication first: RNA world Even the simplest members of the three modern domains of life use DNA to record their "recipes" and a complex array of RNA and protein molecules to "read" these instructions and use them for growth, maintenance, and self-replication. The discovery that a kind of RNA molecule called a ribozyme can catalyze both its own replication and the construction of proteins led to the hypothesis that earlier life-forms were based entirely on RNA. 
They could have formed an RNA world in which there were individuals but no species, as mutations and horizontal gene transfers would have meant that the offspring in each generation were quite likely to have different genomes from those that their parents started with. RNA would later have been replaced by DNA, which is more stable and therefore can build longer genomes, expanding the range of capabilities a single organism can have. Ribozymes remain as the main components of ribosomes, the "protein factories" of modern cells. Although short, self-replicating RNA molecules have been artificially produced in laboratories, doubts have been raised about whether natural non-biological synthesis of RNA is possible. The earliest ribozymes may have been formed of simpler nucleic acids such as PNA, TNA or GNA, which would have been replaced later by RNA. Other pre-RNA replicators have been posited, including crystals and even quantum systems. In 2003 it was proposed that porous metal sulfide precipitates would assist RNA synthesis at elevated temperatures and at ocean-bottom pressures near hydrothermal vents. In this hypothesis, the proto-cells would be confined in the pores of the metal substrate until the later development of lipid membranes. Metabolism first: iron–sulfur world Another long-standing hypothesis is that the first life was composed of protein molecules. Amino acids, the building blocks of proteins, are easily synthesized in plausible prebiotic conditions, as are small peptides (polymers of amino acids) that make good catalysts. A series of experiments starting in 1997 showed that amino acids and peptides could form in the presence of carbon monoxide and hydrogen sulfide with iron sulfide and nickel sulfide as catalysts. Most of the steps in their assembly required only moderate temperatures and pressures, although one stage required higher temperatures and a pressure equivalent to that found under several kilometers of rock. Hence, self-sustaining synthesis of proteins could have occurred near hydrothermal vents. A difficulty with the metabolism-first scenario is finding a way for organisms to evolve. Without the ability to replicate as individuals, aggregates of molecules would have "compositional genomes" (counts of molecular species in the aggregate) as the target of natural selection. However, a recent model shows that such a system is unable to evolve in response to natural selection. Membranes first: Lipid world It has been suggested that double-walled "bubbles" of lipids like those that form the external membranes of cells may have been an essential first step. Experiments that simulated the conditions of the early Earth have reported the formation of lipids, and these can spontaneously form liposomes, double-walled "bubbles", and then reproduce themselves. Although they are not intrinsically information-carriers as nucleic acids are, they would be subject to natural selection for longevity and reproduction. Nucleic acids such as RNA might then have formed more easily within the liposomes than they would have outside. The clay theory Some clays, notably montmorillonite, have properties that make them plausible accelerators for the emergence of an RNA world: they grow by self-replication of their crystalline pattern, are subject to an analog of natural selection (as the clay "species" that grows fastest in a particular environment rapidly becomes dominant), and can catalyze the formation of RNA molecules. Although this idea has not become the scientific consensus, it still has active supporters.
Research in 2003 reported that montmorillonite could also accelerate the conversion of fatty acids into "bubbles", and that the bubbles could encapsulate RNA attached to the clay. Bubbles can then grow by absorbing additional lipids and dividing. The formation of the earliest cells may have been aided by similar processes. A similar hypothesis presents self-replicating iron-rich clays as the progenitors of nucleotides, lipids and amino acids. Last universal common ancestor It is believed that of this multiplicity of protocells, only one line survived. Current phylogenetic evidence suggests that the last universal ancestor (LUA) lived during the early Archean eon, perhaps 3.5 Ga or earlier. This LUA cell is the ancestor of all life on Earth today. It was probably a prokaryote, possessing a cell membrane and probably ribosomes, but lacking a nucleus or membrane-bound organelles such as mitochondria or chloroplasts. Like modern cells, it used DNA as its genetic code, RNA for information transfer and protein synthesis, and enzymes to catalyze reactions. Some scientists believe that instead of a single organism being the last universal common ancestor, there were populations of organisms exchanging genes by lateral gene transfer. Proterozoic Eon The Proterozoic eon lasted from 2.5 Ga to 538.8 Ma (million years) ago. In this time span, cratons grew into continents with modern sizes. The change to an oxygen-rich atmosphere was a crucial development. Life developed from prokaryotes into eukaryotes and multicellular forms. The Proterozoic saw a couple of severe ice ages called Snowball Earths. After the last Snowball Earth about 600 Ma, the evolution of life on Earth accelerated. About 580 Ma, the Ediacaran biota formed the prelude for the Cambrian Explosion. Oxygen revolution The earliest cells absorbed energy and food from the surrounding environment. They used fermentation, the breakdown of more complex compounds into less complex compounds with less energy, and used the energy so liberated to grow and reproduce. Fermentation can only occur in an anaerobic (oxygen-free) environment. The evolution of photosynthesis made it possible for cells to derive energy from the Sun. Most of the life that covers the surface of the Earth depends directly or indirectly on photosynthesis. The most common form, oxygenic photosynthesis, turns carbon dioxide, water, and sunlight into food. It captures the energy of sunlight in energy-rich molecules such as ATP, which then provide the energy to make sugars. To supply the electrons in the circuit, hydrogen is stripped from water, leaving oxygen as a waste product. Some organisms, including purple bacteria and green sulfur bacteria, use an anoxygenic form of photosynthesis that uses alternatives to hydrogen stripped from water as electron donors; examples are hydrogen sulfide, sulfur and iron. Such extremophile organisms are restricted to otherwise inhospitable environments such as hot springs and hydrothermal vents. The simpler anoxygenic form arose about 3.8 Ga, not long after the appearance of life. The timing of oxygenic photosynthesis is more controversial; it had certainly appeared by about 2.4 Ga, but some researchers put it back as far as 3.2 Ga. The latter "probably increased global productivity by at least two or three orders of magnitude". Among the oldest remnants of oxygen-producing lifeforms are fossil stromatolites. At first, the released oxygen was bound up with limestone, iron, and other minerals. 
The oxidized iron appears as red layers in geological strata called banded iron formations that formed in abundance during the Siderian period (between 2500 Ma and 2300 Ma). When most of the exposed readily reacting minerals were oxidized, oxygen finally began to accumulate in the atmosphere. Though each cell only produced a minute amount of oxygen, the combined metabolism of many cells over a vast time transformed Earth's atmosphere to its current state. This was Earth's third atmosphere. Some oxygen was stimulated by solar ultraviolet radiation to form ozone, which collected in a layer near the upper part of the atmosphere. The ozone layer absorbed, and still absorbs, a significant amount of the ultraviolet radiation that once had passed through the atmosphere. It allowed cells to colonize the surface of the ocean and eventually the land: without the ozone layer, ultraviolet radiation bombarding land and sea would have caused unsustainable levels of mutation in exposed cells. Photosynthesis had another major impact. Oxygen was toxic; much life on Earth probably died out as its levels rose in what is known as the oxygen catastrophe. Resistant forms survived and thrived, and some developed the ability to use oxygen to increase their metabolism and obtain more energy from the same food. Snowball Earth The natural evolution of the Sun made it progressively more luminous during the Archean and Proterozoic eons; the Sun's luminosity increases 6% every billion years. As a result, the Earth began to receive more heat from the Sun in the Proterozoic eon. However, the Earth did not get warmer. Instead, the geological record suggests it cooled dramatically during the early Proterozoic. Glacial deposits found in South Africa date back to 2.2 Ga, at which time, based on paleomagnetic evidence, they must have been located near the equator. Thus, this glaciation, known as the Huronian glaciation, may have been global. Some scientists suggest this was so severe that the Earth was frozen over from the poles to the equator, a hypothesis called Snowball Earth. The Huronian ice age might have been caused by the increased oxygen concentration in the atmosphere, which caused the decrease of methane (CH4) in the atmosphere. Methane is a strong greenhouse gas, but with oxygen it reacts to form CO2, a less effective greenhouse gas. When free oxygen became available in the atmosphere, the concentration of methane could have decreased dramatically, enough to counter the effect of the increasing heat flow from the Sun. However, the term Snowball Earth is more commonly used to describe later extreme ice ages during the Cryogenian period. There were four periods, each lasting about 10 million years, between 750 and 580 million years ago, when the Earth is thought to have been covered with ice apart from the highest mountains, and average temperatures were well below freezing. The snowball may have been partly due to the location of the supercontinent Rodinia straddling the Equator. Carbon dioxide combines with rain to weather rocks to form carbonic acid, which is then washed out to sea, thus extracting the greenhouse gas from the atmosphere. When the continents are near the poles, the advance of ice covers the rocks, slowing the reduction in carbon dioxide, but in the Cryogenian the weathering of Rodinia was able to continue unchecked until the ice advanced to the tropics. The process may have finally been reversed by the emission of carbon dioxide from volcanoes or the destabilization of methane gas hydrates.
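The weathering feedback just described is often summarized with a simplified, idealized set of reactions from textbook geochemistry; they are not taken from this article, and wollastonite (CaSiO3) is used here only as a stand-in for silicate rock in general:

CO2 + H2O → H2CO3 (carbon dioxide dissolves in rainwater as carbonic acid)
H2CO3 + CaSiO3 → CaCO3 + SiO2 + H2O (the acid weathers silicate rock, and the carbon is eventually deposited on the sea floor as carbonate)
Net: CO2 + CaSiO3 → CaCO3 + SiO2

Burial of the carbonate keeps the carbon out of the atmosphere until volcanism returns it as CO2, which is the reversal mentioned above.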
According to the alternative Slushball Earth theory, even at the height of the ice ages there was still open water at the Equator. Emergence of eukaryotes Modern taxonomy classifies life into three domains. The time of their origin is uncertain. The Bacteria domain probably first split off from the other forms of life (sometimes called Neomura), but this supposition is controversial. Soon after this, by 2 Ga, the Neomura split into the Archaea and the Eukaryota. Eukaryotic cells (Eukaryota) are larger and more complex than prokaryotic cells (Bacteria and Archaea), and the origin of that complexity is only now becoming known. The earliest fossils possessing features typical of fungi date to the Paleoproterozoic era, some 2.4 Ga ago; these multicellular benthic organisms had filamentous structures capable of anastomosis. Around this time, the first proto-mitochondrion was formed. A bacterial cell related to today's Rickettsia, which had evolved to metabolize oxygen, entered a larger prokaryotic cell, which lacked that capability. Perhaps the large cell attempted to digest the smaller one but failed (possibly due to the evolution of prey defenses). The smaller cell may have tried to parasitize the larger one. In any case, the smaller cell survived inside the larger cell. Using oxygen, it metabolized the larger cell's waste products and derived more energy. Part of this excess energy was returned to the host. The smaller cell replicated inside the larger one. Soon, a stable symbiosis developed between the large cell and the smaller cells inside it. Over time, the host cell acquired some genes from the smaller cells, and the two kinds became dependent on each other: the larger cell could not survive without the energy produced by the smaller ones, and these, in turn, could not survive without the raw materials provided by the larger cell. The whole cell is now considered a single organism, and the smaller cells are classified as organelles called mitochondria. A similar event occurred with photosynthetic cyanobacteria entering large heterotrophic cells and becoming chloroplasts. Probably as a result of these changes, a line of cells capable of photosynthesis split off from the other eukaryotes more than 1 billion years ago. There were probably several such inclusion events. Besides the well-established endosymbiotic theory of the cellular origin of mitochondria and chloroplasts, there are theories that cells led to peroxisomes, spirochetes led to cilia and flagella, and that perhaps a DNA virus led to the cell nucleus, though none of them are widely accepted. Archaeans, bacteria, and eukaryotes continued to diversify and to become more complex and better adapted to their environments. Each domain repeatedly split into multiple lineages. Around 1.1 Ga, the plant, animal, and fungi lines had split, though they still existed as solitary cells. Some of these lived in colonies, and gradually a division of labor began to take place; for instance, cells on the periphery might have started to assume different roles from those in the interior. Although the division between a colony with specialized cells and a multicellular organism is not always clear, around 1 billion years ago, the first multicellular plants emerged, probably green algae. Possibly by around 900 Ma true multicellularity had also evolved in animals. At first, it probably resembled today's sponges, which have totipotent cells that allow a disrupted organism to reassemble itself. 
As the division of labor was completed in the different lineages of multicellular organisms, cells became more specialized and more dependent on each other. Supercontinents in the Proterozoic Reconstructions of tectonic plate movement in the past 250 million years (the Cenozoic and Mesozoic eras) can be made reliably using fitting of continental margins, ocean floor magnetic anomalies and paleomagnetic poles. No ocean crust dates back further than that, so earlier reconstructions are more difficult. Paleomagnetic poles are supplemented by geologic evidence such as orogenic belts, which mark the edges of ancient plates, and past distributions of flora and fauna. The further back in time, the scarcer and harder to interpret the data get and the more uncertain the reconstructions. Throughout the history of the Earth, there have been times when continents collided and formed a supercontinent, which later broke up into new continents. About 1000 to 830 Ma, most continental mass was united in the supercontinent Rodinia. Rodinia may have been preceded by Early-Middle Proterozoic continents called Nuna and Columbia. After the break-up of Rodinia about 800 Ma, the continents may have formed another short-lived supercontinent around 550 Ma. The hypothetical supercontinent is sometimes referred to as Pannotia or Vendia. The evidence for it is a phase of continental collision known as the Pan-African orogeny, which joined the continental masses of current-day Africa, South America, Antarctica and Australia. The existence of Pannotia depends on the timing of the rifting between Gondwana (which included most of the landmass now in the Southern Hemisphere, as well as the Arabian Peninsula and the Indian subcontinent) and Laurentia (roughly equivalent to current-day North America). It is at least certain that by the end of the Proterozoic eon, most of the continental mass lay united in a position around the south pole. Late Proterozoic climate and life The end of the Proterozoic saw at least two Snowball Earths, so severe that the surface of the oceans may have been completely frozen. This happened about 716.5 and 635 Ma, in the Cryogenian period. The intensity and mechanism of both glaciations are still under investigation and harder to explain than the early Proterozoic Snowball Earth. Most paleoclimatologists think the cold episodes were linked to the formation of the supercontinent Rodinia. Because Rodinia was centered on the equator, rates of chemical weathering increased and carbon dioxide (CO2) was taken from the atmosphere. Because CO2 is an important greenhouse gas, climates cooled globally. In the same way, during the Snowball Earths most of the continental surface was covered with permafrost, which decreased chemical weathering again, leading to the end of the glaciations. An alternative hypothesis is that enough carbon dioxide escaped through volcanic outgassing that the resulting greenhouse effect raised global temperatures. Increased volcanic activity resulted from the break-up of Rodinia at about the same time. The Cryogenian period was followed by the Ediacaran period, which was characterized by a rapid development of new multicellular lifeforms. Whether there is a connection between the end of the severe ice ages and the increase in diversity of life is not clear, but it does not seem coincidental. The new forms of life, called Ediacara biota, were larger and more diverse than ever. Though the taxonomy of most Ediacaran life forms is unclear, some were ancestors of groups of modern life. 
Important developments were the origin of muscular and neural cells. None of the Ediacaran fossils had hard body parts like skeletons. These first appear after the boundary between the Proterozoic and Phanerozoic eons or Ediacaran and Cambrian periods. Phanerozoic Eon The Phanerozoic is the current eon on Earth, which started approximately 538.8 million years ago. It consists of three eras: the Paleozoic, Mesozoic, and Cenozoic, and is the time when multi-cellular life greatly diversified into almost all the organisms known today. The Paleozoic ("old life") era was the first and longest era of the Phanerozoic eon, lasting from 538.8 to 251.9 Ma. During the Paleozoic, many modern groups of life came into existence. Life colonized the land, first plants, then animals. Two significant extinctions occurred. The continents formed by the break-up of Pannotia and Rodinia at the end of the Proterozoic slowly moved together again, forming the supercontinent Pangaea in the late Paleozoic. The Mesozoic ("middle life") era lasted from 251.9 Ma to 66 Ma. It is subdivided into the Triassic, Jurassic, and Cretaceous periods. The era began with the Permian–Triassic extinction event, the most severe extinction event in the fossil record; 95% of the species on Earth died out. It ended with the Cretaceous–Paleogene extinction event that wiped out the dinosaurs. The Cenozoic ("new life") era began at 66 Ma, and is subdivided into the Paleogene, Neogene, and Quaternary periods. These three periods are further split into seven epochs: the Paleogene comprises the Paleocene, Eocene, and Oligocene; the Neogene the Miocene and Pliocene; and the Quaternary the Pleistocene and Holocene. Mammals, birds, amphibians, crocodilians, turtles, and lepidosaurs survived the Cretaceous–Paleogene extinction event that killed off the non-avian dinosaurs and many other forms of life, and this is the era during which they diversified into their modern forms. Tectonics, paleogeography and climate At the end of the Proterozoic, the supercontinent Pannotia had broken apart into the smaller continents Laurentia, Baltica, Siberia and Gondwana. During periods when continents move apart, more oceanic crust is formed by volcanic activity. Because the young volcanic crust is relatively hotter and less dense than the old oceanic crust, the ocean floors rise during such periods. This causes the sea level to rise. Therefore, in the first half of the Paleozoic, large areas of the continents were below sea level. Early Paleozoic climates were warmer than today, but the end of the Ordovician saw a short ice age during which glaciers covered the south pole, where the huge continent Gondwana was situated. Traces of glaciation from this period are only found on former Gondwana. During the Late Ordovician ice age, a few mass extinctions took place, in which many brachiopods, trilobites, Bryozoa and corals disappeared. These marine species could probably not contend with the decreasing temperature of the sea water. The continents Laurentia and Baltica collided between 450 and 400 Ma, during the Caledonian Orogeny, to form Laurussia (also known as Euramerica). Traces of the mountain belt this collision caused can be found in Scandinavia, Scotland, and the northern Appalachians. In the Devonian period (416–359 Ma) Gondwana and Siberia began to move towards Laurussia.
The collision of Siberia with Laurussia caused the Uralian Orogeny, while the collision of Gondwana with Laurussia is called the Variscan or Hercynian Orogeny in Europe or the Alleghenian Orogeny in North America. The latter phase took place during the Carboniferous period (359–299 Ma) and resulted in the formation of the last supercontinent, Pangaea. By 180 Ma, Pangaea broke up into Laurasia and Gondwana. Cambrian explosion The rate of the evolution of life as recorded by fossils accelerated in the Cambrian period (542–488 Ma). The sudden emergence of many new species, phyla, and forms in this period is called the Cambrian Explosion. It was a form of adaptive radiation, where vacant niches left by the extinct Ediacaran biota were filled up by the emergence of new phyla. This burst of biological diversification was unmatched before or since. Whereas the Ediacaran life forms still appear primitive and are not easy to place in any modern group, at the end of the Cambrian, most modern phyla were already present. The development of hard body parts such as shells, skeletons or exoskeletons in animals like molluscs, echinoderms, crinoids and arthropods (a well-known group of arthropods from the lower Paleozoic are the trilobites) made the preservation and fossilization of such life forms easier than those of their Proterozoic ancestors. For this reason, much more is known about life in and after the Cambrian period than about life in older periods. Some of these Cambrian groups appear complex but are seemingly quite different from modern life; examples are Anomalocaris and Haikouichthys. More recently, however, these seem to have found a place in modern classification. During the Cambrian, the first vertebrate animals, among them the first fishes, had appeared. A creature that could have been the ancestor of the fishes, or was probably closely related to them, was Pikaia. It had a primitive notochord, a structure that could have developed into a vertebral column later. The first fishes with jaws (Gnathostomata) appeared during the next geological period, the Ordovician. The colonisation of new niches resulted in massive body sizes. In this way, fishes with increasing sizes evolved during the early Paleozoic, such as the titanic placoderm Dunkleosteus, which could grow several meters long. The diversity of life forms did not increase significantly because of a series of mass extinctions that define widespread biostratigraphic units called biomeres. After each extinction pulse, the continental shelf regions were repopulated by similar life forms that may have been evolving slowly elsewhere. By the late Cambrian, the trilobites had reached their greatest diversity and dominated nearly all fossil assemblages. Colonization of land Oxygen accumulation from photosynthesis resulted in the formation of an ozone layer that absorbed much of the Sun's ultraviolet radiation, meaning unicellular organisms that reached land were less likely to die, and prokaryotes began to multiply and become better adapted to survival out of the water. Prokaryote lineages had probably colonized the land as early as 3 Ga even before the origin of the eukaryotes. For a long time, the land remained barren of multicellular organisms. The supercontinent Pannotia formed around 600 Ma and then broke apart a short 50 million years later. Fish, the earliest vertebrates, evolved in the oceans around 530 Ma. A major extinction event occurred near the end of the Cambrian period, which ended 488 Ma.
Several hundred million years ago, plants (probably resembling algae) and fungi started growing at the edges of the water and then out of it. The oldest fossils of land fungi and plants date to 480–460 Ma, though molecular evidence suggests the fungi may have colonized the land as early as 1000 Ma and the plants 700 Ma. These organisms initially remained close to the water's edge, but mutations and variations resulted in further colonization of this new environment. The timing of the first animals to leave the oceans is not precisely known: the oldest clear evidence is of arthropods on land around 450 Ma, perhaps thriving and becoming better adapted due to the vast food source provided by the terrestrial plants. There is also unconfirmed evidence that arthropods may have appeared on land as early as 530 Ma. Evolution of tetrapods At the end of the Ordovician period, 443 Ma, additional extinction events occurred, perhaps due to a concurrent ice age. Around 380 to 375 Ma, the first tetrapods evolved from fish. Fins evolved to become limbs that the first tetrapods used to lift their heads out of the water to breathe air. This would let them live in oxygen-poor water, or pursue small prey in shallow water. They may have later ventured on land for brief periods. Eventually, some of them became so well adapted to terrestrial life that they spent their adult lives on land, although they hatched in the water and returned to lay their eggs. This was the origin of the amphibians. About 365 Ma, another period of extinction occurred, perhaps as a result of global cooling. Plants evolved seeds, which dramatically accelerated their spread on land, around this time (by approximately 360 Ma). About 20 million years later (340 Ma), the amniotic egg evolved, which could be laid on land, giving a survival advantage to tetrapod embryos. This resulted in the divergence of amniotes from amphibians. Another 30 million years (310 Ma) saw the divergence of the synapsids (including mammals) from the sauropsids (including birds and reptiles). Other groups of organisms continued to evolve, and lines diverged—in fish, insects, bacteria, and so on—but less is known of the details. After yet another, the most severe extinction of the period (about 251–250 Ma), around 230 Ma, dinosaurs split off from their reptilian ancestors. The Triassic–Jurassic extinction event at 200 Ma spared many of the dinosaurs, and they soon became dominant among the vertebrates. Though some mammalian lines began to separate during this period, existing mammals were probably small animals resembling shrews. The boundary between avian and non-avian dinosaurs is unclear, but Archaeopteryx, traditionally considered one of the first birds, lived around 150 Ma. The earliest evidence for the angiosperms evolving flowers is during the Cretaceous period, some 20 million years later (132 Ma). Extinctions The first of five great mass extinctions was the Ordovician-Silurian extinction. Its possible cause was the intense glaciation of Gondwana, which eventually led to a Snowball Earth. 60% of marine invertebrates became extinct, and 25% of all families. The second mass extinction was the Late Devonian extinction, probably caused by the evolution of trees, which could have led to the depletion of greenhouse gases (like CO2) or the eutrophication of water. 70% of all species became extinct. The third mass extinction was the Permian-Triassic, or the Great Dying, event.
The event was possibly caused by some combination of the Siberian Traps volcanic event, an asteroid impact, methane hydrate gasification, sea level fluctuations, and a major anoxic event. Either the proposed Wilkes Land crater in Antarctica or Bedout structure off the northwest coast of Australia may indicate an impact connection with the Permian-Triassic extinction. But it remains uncertain whether these or other proposed Permian-Triassic boundary craters are real impact craters or even contemporary with the Permian-Triassic extinction event. This was by far the deadliest extinction ever, with about 57% of all families and 83% of all genera killed. The fourth mass extinction was the Triassic-Jurassic extinction event in which almost all synapsids and archosaurs became extinct, probably due to new competition from dinosaurs. The fifth and most recent mass extinction was the Cretaceous-Paleogene extinction event. About 66 Ma, an asteroid roughly ten kilometers across struck Earth just off the Yucatán Peninsula—in what was then the southwestern tip of Laurasia—where the Chicxulub crater is today. This ejected vast quantities of particulate matter and vapor into the air that occluded sunlight, inhibiting photosynthesis. 75% of all life, including the non-avian dinosaurs, became extinct, marking the end of the Cretaceous period and Mesozoic era. Diversification of mammals The first true mammals evolved in the shadows of dinosaurs and other large archosaurs that filled the world by the late Triassic. The first mammals were very small, and were probably nocturnal to escape predation. Mammal diversification truly began only after the Cretaceous-Paleogene extinction event. By the early Paleocene the Earth recovered from the extinction, and mammalian diversity increased. Creatures like Ambulocetus took to the oceans to eventually evolve into whales, whereas some creatures, like primates, took to the trees. This all changed during the mid to late Eocene when the circum-Antarctic current formed between Antarctica and Australia, which disrupted weather patterns on a global scale. Grassless savanna began to predominate much of the landscape, and mammals such as Andrewsarchus, possibly the largest known terrestrial predatory mammal ever, rose to prominence, while early whales like Basilosaurus took control of the seas. The evolution of grasses brought a remarkable change to the Earth's landscape, and the new open spaces created pushed mammals to get bigger and bigger. Grass started to expand in the Miocene, and the Miocene is where many modern-day mammals first appeared. Giant ungulates like Paraceratherium and Deinotherium evolved to rule the grasslands. The evolution of grass also brought primates down from the trees, and started human evolution. The first big cats evolved during this time as well. The Tethys Sea was closed off by the collision of Africa and Europe. The formation of Panama was perhaps the most important geological event to occur in the last 60 million years. Atlantic and Pacific currents were closed off from each other, which caused the formation of the Gulf Stream, which made Europe warmer. The land bridge allowed the isolated creatures of South America to migrate over to North America and vice versa. Various species migrated south, leading to the presence in South America of llamas, the spectacled bear, kinkajous and jaguars. Three million years ago saw the start of the Pleistocene epoch, which featured dramatic climatic changes due to the ice ages. The ice ages led to the evolution and expansion of modern humans in Africa.
The dominant mega-fauna fed on grasslands that, by now, had taken over much of the subtropical world. The large amounts of water locked up in the ice caused various bodies of water, such as the North Sea and the Bering Strait, to shrink and sometimes disappear. It is believed by many that a huge migration took place along Beringia, which is why, today, there are camels (which evolved and became extinct in North America), horses (which evolved and became extinct in North America), and Native Americans. The end of the last ice age coincided with the expansion of man and a massive die-off of ice age mega-fauna. This extinction is nicknamed "the Sixth Extinction". Human evolution A small African ape living around 6 Ma was the last animal whose descendants would include both modern humans and their closest relatives, the chimpanzees. Only two branches of its family tree have surviving descendants. Very soon after the split, for reasons that are still unclear, apes in one branch developed the ability to walk upright. Brain size increased rapidly, and by 2 Ma, the first animals classified in the genus Homo had appeared. Around the same time, the other branch split into the ancestors of the common chimpanzee and the ancestors of the bonobo as evolution continued simultaneously in all life forms. The ability to control fire probably began in Homo erectus (or Homo ergaster), probably at least 790,000 years ago but perhaps as early as 1.5 Ma. The use and discovery of controlled fire may even predate Homo erectus. Fire was possibly used by the early Lower Paleolithic (Oldowan) hominid Homo habilis or strong australopithecines such as Paranthropus. It is more difficult to establish the origin of language; it is unclear whether Homo erectus could speak or if that capability had not begun until Homo sapiens. As brain size increased, babies were born earlier, before their heads grew too large to pass through the pelvis. As a result, they exhibited more plasticity, thus possessing an increased capacity to learn and requiring a longer period of dependence. Social skills became more complex, language became more sophisticated, and tools became more elaborate. This contributed to further cooperation and intellectual development. Modern humans (Homo sapiens) are believed to have originated around 200,000 years ago or earlier in Africa; the oldest fossils date back to around 160,000 years ago. The first humans to show signs of spirituality were the Neanderthals (usually classified as a separate species with no surviving descendants); they buried their dead, often with no sign of food or tools. However, evidence of more sophisticated beliefs, such as the early Cro-Magnon cave paintings (probably with magical or religious significance), did not appear until 32,000 years ago. Cro-Magnons also left behind stone figurines such as the Venus of Willendorf, probably also signifying religious belief. By 11,000 years ago, Homo sapiens had reached the southern tip of South America, the last of the uninhabited continents (except for Antarctica, which remained undiscovered until 1820 AD). Tool use and communication continued to improve, and interpersonal relationships became more intricate. Human history Throughout more than 90% of its history, Homo sapiens lived in small bands as nomadic hunter-gatherers. As language became more complex, the ability to remember and communicate information resulted in a new replicator: the meme. Ideas could be exchanged quickly and passed down the generations.
Cultural evolution quickly outpaced biological evolution, and history proper began. Between 8500 and 7000 BC, humans in the Fertile Crescent in the Middle East began the systematic husbandry of plants and animals: agriculture. This spread to neighboring regions and developed independently elsewhere until most Homo sapiens lived sedentary lives in permanent settlements as farmers. Not all societies abandoned nomadism, especially those in isolated areas of the globe poor in domesticable plant species, such as Australia. However, among those civilizations that did adopt agriculture, the relative stability and increased productivity provided by farming allowed the population to expand. Agriculture had a major impact; humans began to affect the environment as never before. Surplus food allowed a priestly or governing class to arise, followed by increasing division of labor. This led to Earth's first civilization at Sumer in the Middle East, between 4000 and 3000 BC. Additional civilizations quickly arose in ancient Egypt, at the Indus River valley and in China. The invention of writing enabled complex societies to arise: record-keeping and libraries served as a storehouse of knowledge and increased the cultural transmission of information. Humans no longer had to spend all their time working for survival, enabling the first specialized occupations (e.g. craftsmen, merchants, priests, etc.). Curiosity and education drove the pursuit of knowledge and wisdom, and various disciplines, including science (in a primitive form), arose. This in turn led to the emergence of increasingly larger and more complex civilizations, such as the first empires, which at times traded with one another, or fought for territory and resources. By around 500 BC, there were advanced civilizations in the Middle East, Iran, India, China, and Greece, at times expanding, at times entering into decline. In 221 BC, China became a single polity that would grow to spread its culture throughout East Asia, and it has remained the most populous nation in the world. During this period, the famous Hindu texts known as the Vedas came into existence in the Indus Valley civilization, which made advances in warfare, the arts, science, mathematics and architecture. The fundamentals of Western civilization were largely shaped in Ancient Greece, with the world's first democratic government and major advances in philosophy and science, and in Ancient Rome with advances in law, government, and engineering. The Roman Empire was Christianized by Emperor Constantine in the early 4th century and declined by the end of the 5th. From the 7th century onward, the Christianization of Europe proceeded, and since at least the 4th century Christianity has played a prominent role in the shaping of Western civilization. In 610, Islam was founded and quickly became the dominant religion in Western Asia. The House of Wisdom was established in Abbasid-era Baghdad, Iraq. It is considered to have been a major intellectual center during the Islamic Golden Age, where Muslim scholars in Baghdad and Cairo flourished from the ninth to the thirteenth centuries until the Mongol sack of Baghdad in 1258 AD. In 1054 AD the Great Schism between the Roman Catholic Church and the Eastern Orthodox Church led to prominent cultural differences between Western and Eastern Europe. In the 14th century, the Renaissance began in Italy with advances in religion, art, and science. At that time the Christian Church as a political entity lost much of its power.
In 1492, Christopher Columbus reached the Americas, initiating great changes in the New World. European civilization began to change from around 1500, leading to the scientific and industrial revolutions. That continent began to exert political and cultural dominance over human societies around the world, a time known as the Colonial era (also see Age of Discovery). In the 18th century a cultural movement known as the Age of Enlightenment further shaped the mentality of Europe and contributed to its secularization. See also Notes References Further reading Melosh, H. J.; Vickery, A. M. & Tonks, W. B. (1993). Impacts and the early environment and evolution of the terrestrial planets, in Levy, H. J. & Lunine, Jonathan I. (eds.): Protostars and Planets III, University of Arizona Press, Tucson, pp. 1339–1370. External links Davies, Paul. "Quantum leap of life". The Guardian. 2005 December 20. – discusses speculation on the role of quantum systems in the origin of life Evolution timeline (uses Flash Player). Animated story of life shows everything from the big bang to the formation of the Earth and the development of bacteria and other organisms to the ascent of man. 25 biggest turning points in Earth History BBC Evolution of the Earth. Timeline of the most important events in the evolution of the Earth. Ageing the Earth, BBC Radio 4 discussion with Richard Corfield, Hazel Rymer & Henry Gee (In Our Time, Nov. 20, 2003) Earth Geochronology Geology theories World history
Evony
Evony (formerly known as Civony) is a multiplayer online game by American developer Evony LLC, set in the European medieval period. Two browser-based versions (Age 1 and Age 2) and a mobile version (The King's Return) exist. The game became notorious for its original ad campaign, which featured scantily clad women (including models from pornographic film covers) who had nothing to do with the game itself. Gameplay Evony is set in a persistent world during the medieval period. The player assumes the role of a lord or lady of a city or alliance. New players are granted "beginner's protection," which prevents other players from attacking their cities for seven days or until they upgrade the town hall to level five or higher. This lets new players accumulate resources and troops and accustom themselves to the game before other players can attack them. The player sets tax rate, production, and construction. Resources include gold, food, lumber, stone, iron, and the city's idle population. As with similar games, the player first must increase the city's population and hourly resource production rates and construct certain buildings in the city, and then start building resource fields and an army. An army can include siege machines, such as ballistas, catapults, and battering rams, and foot troops, such as archers, warriors and swordsmen. All items must be acquired with gems, which can be purchased with real money through the in-game item shop or won at the wheel. Some items accelerate the player's progress through the game. Winning items in battle is the primary way to acquire resources and cities. Interaction The game features player-versus-player gameplay, rendering it almost impossible for players who have not formed or joined alliances to survive. The game allows the player to control up to ten cities through gain of titles. To gain a title, a certain rank is necessary. Both titles and ranks require medals, which are gained by using in-game coins to purchase medal boxes, by attacking valleys, or by winning medal boxes from spinning the wheel. The game has two monetary systems. The in-game monetary system revolves around gold. Gold can be obtained by completing quests, by taxing the city's population, or by attacking NPCs. One can sell resources for gold on the marketplace or trade resources with others within one's alliance for gold. One can also use real money to buy game cents with which to purchase items and resources from the in-game shop. Reception In a three-star review for Stuff, Joel Lauterbach wrote, "Evony has done an amazing job at making the game look and feel appealing to all gamers, however once a player scratches the surface and sees the investment-heavy time-killing game mechanics, many are likely to be put off." Controversy The Guardian noted that Evony's 2009 ad campaign featured women, increasingly unclothed, who had no connection to the game. In 2009, Gavin Mannion wrote that Evony's "latest ad is seriously pushing boundaries of what is acceptable to publish on Google". Other ads used stock photographs from pornographic DVD covers and promoted the game via "millions of spam comments". The company denied responsibility. That same year, Evony's lawyers sent a cease-and-desist letter to blogger Bruce Everiss after he alleged deceptive marketing, but withdrew their claims two days into the case.
References External links 2009 video games Browser games Browser-based multiplayer online games Flash games Massively multiplayer online real-time strategy games Obscenity controversies Science fiction video games Video games developed in the United States Video games set in Europe Video games set in the Middle Ages Video games with isometric graphics Virtual economies
Praxis (process)
Praxis is the process by which a theory, lesson, or skill is enacted, embodied, realized, applied, or put into practice. "Praxis" may also refer to the act of engaging, applying, exercising, realizing, or practising ideas. This has been a recurrent topic in the field of philosophy, discussed in the writings of Plato, Aristotle, St. Augustine, Francis Bacon, Immanuel Kant, Søren Kierkegaard, Ludwig von Mises, Karl Marx, Antonio Gramsci, Martin Heidegger, Hannah Arendt, Jean-Paul Sartre, Paulo Freire, Murray Rothbard, and many others. It has meaning in the political, educational, spiritual and medical realms. Origins The word praxis comes from Ancient Greek, where praxis (πρᾶξις) referred to activity engaged in by free people. The philosopher Aristotle held that there were three basic activities of humans: theoria (thinking), poiesis (making), and praxis (doing). Corresponding to these activities were three types of knowledge: theoretical, the end goal being truth; poietical, the end goal being production; and practical, the end goal being action. Aristotle further divided the knowledge derived from praxis into ethics, economics, and politics. He also distinguished between eupraxia (εὐπραξία, "good praxis") and dyspraxia (δυσπραξία, "bad praxis, misfortune"). Marxism Young Hegelian August Cieszkowski was one of the earliest philosophers to use the term praxis to mean "action oriented towards changing society" in his 1838 work Prolegomena zur Historiosophie (Prolegomena to a Historiosophy). Cieszkowski argued that while absolute truth had been achieved in the speculative philosophy of Hegel, the deep divisions and contradictions in man's consciousness could only be resolved through concrete practical activity that directly influences social life. Although there is no evidence that Karl Marx himself read this book, it may have had an indirect influence on his thought through the writings of his friend Moses Hess. Marx uses the term "praxis" to refer to the free, universal, creative and self-creative activity through which man creates and changes his historical world and himself. Praxis is an activity unique to man, which distinguishes him from all other beings. The concept appears in two of Marx's early works: the Economic and Philosophical Manuscripts of 1844 and the Theses on Feuerbach (1845). In the former work, Marx contrasts the free, conscious productive activity of human beings with the unconscious, compulsive production of animals. He also affirms the primacy of praxis over theory, claiming that theoretical contradictions can only be resolved through practical activity. In the latter work, revolutionary practice is a central theme: Marx here criticizes the materialist philosophy of Ludwig Feuerbach for envisaging objects in a contemplative way. Marx argues that perception is itself a component of man's practical relationship to the world. To understand the world does not mean considering it from the outside, judging it morally or explaining it scientifically. Society cannot be changed by reformers who understand its needs, only by the revolutionary praxis of the mass whose interest coincides with that of society as a whole—the proletariat. This will be an act of society understanding itself, in which the subject changes the object by the very fact of understanding it. Seemingly inspired by the Theses, the nineteenth-century socialist Antonio Labriola called Marxism the "philosophy of praxis".
This description of Marxism would appear again in Antonio Gramsci's Prison Notebooks and the writings of the members of the Frankfurt School. Praxis is also an important theme for Marxist thinkers such as Georg Lukacs, Karl Korsch, Karel Kosik and Henri Lefebvre, and was seen as the central concept of Marx's thought by Yugoslavia's Praxis School, which established a journal of that name in 1964. Jean-Paul Sartre In the Critique of Dialectical Reason, Jean-Paul Sartre posits a view of individual praxis as the basis of human history. In his view, praxis is an attempt to negate human need. In a revision of Marxism and his earlier existentialism, Sartre argues that the fundamental relation of human history is scarcity. Conditions of scarcity generate competition for resources, exploitation of one over another and division of labor, which in its turn creates struggle between classes. Each individual experiences the other as a threat to his or her own survival and praxis; it is always a possibility that one's individual freedom limits another's. Sartre recognizes both natural and man-made constraints on freedom: he calls the non-unified practical activity of humans the "practico-inert". Sartre opposes to individual praxis a "group praxis" that fuses each individual to be accountable to each other in a common purpose. Sartre sees a mass movement in a successful revolution as the best exemplar of such a fused group. Hannah Arendt In The Human Condition, Hannah Arendt argues that Western philosophy too often has focused on the contemplative life (vita contemplativa) and has neglected the active life (vita activa). This has led humanity to frequently miss much of the everyday relevance of philosophical ideas to real life. For Arendt, praxis is the highest and most important level of the active life. Thus, she argues that more philosophers need to engage in everyday political action or praxis, which she sees as the true realization of human freedom. According to Arendt, our capacity to analyze ideas, wrestle with them, and engage in active praxis is what makes us uniquely human. In Maurizio Passerin d'Etreves's estimation, "Arendt's theory of action and her revival of the ancient notion of praxis represent one of the most original contributions to twentieth century political thought. ... Moreover, by viewing action as a mode of human togetherness, Arendt is able to develop a conception of participatory democracy which stands in direct contrast to the bureaucratized and elitist forms of politics so characteristic of the modern epoch." Education Praxis is used by educators to describe a recurring passage through a cyclical process of experiential learning, such as the cycle described and popularised by David A. Kolb. Paulo Freire defines praxis in Pedagogy of the Oppressed as "reflection and action directed at the structures to be transformed." Through praxis, oppressed people can acquire a critical awareness of their own condition, and, with teacher-students and students-teachers, struggle for liberation. In the British Channel 4 television documentary New Order: Play at Home, Factory Records owner Tony Wilson describes praxis as "doing something, and then only afterwards, finding out why you did it". Praxis may be described as a form of critical thinking and comprises the combination of reflection and action. 
Praxis can be viewed as a progression of cognitive and physical actions: taking the action; considering the impacts of the action; analysing the results of the action by reflecting upon it; altering and revising conceptions and planning following reflection; and implementing these plans in further actions. This creates a cycle which can be viewed in terms of educational settings, learners and educational facilitators. Scott and Marshall (2009) refer to praxis as "a philosophical term referring to human action on the natural and social world". Furthermore, Gramsci (1999) emphasises the power of praxis in Selections from the Prison Notebooks by stating that "The philosophy of praxis does not tend to leave the simple in their primitive philosophy of common sense but rather to lead them to a higher conception of life". To reveal the inadequacies of religion, folklore, intellectualism and other such 'one-sided' forms of reasoning, Gramsci appeals directly in his later work to Marx's 'philosophy of praxis', describing it as a 'concrete' mode of reasoning. This principally involves the juxtaposition of a dialectical and scientific audit of reality against all existing normative, ideological, and therefore counterfeit accounts. Essentially a 'philosophy' based on 'a practice', Marx's philosophy is correspondingly described as the only 'philosophy' that is at the same time a 'history in action' or a 'life' itself (Gramsci, Hoare and Nowell-Smith, 1972, p. 332). Spirituality Praxis is also key in meditation and spirituality, where emphasis is placed on gaining first-hand experience of concepts and certain areas, such as union with the Divine, which can only be explored through praxis due to the inability of the finite mind (and its tool, language) to comprehend or express the infinite. In an interview for YES! Magazine, Matthew Fox discussed praxis in similar terms. According to Strong's Concordance, the Hebrew word ta‛am is, properly, a taste. This is, figuratively, perception and, by implication, intelligence; transitively, a mandate: advice, behaviour, decree, discretion, judgment, reason, taste, understanding. Medicine Praxis is the ability to perform voluntary skilled movements. The partial or complete inability to do so in the absence of primary sensory or motor impairments is known as apraxia. See also Apraxia Christian theological praxis Hexis Lex artis Orthopraxy Praxeology Praxis Discussion Series Praxis (disambiguation) Praxis intervention Praxis school Practice (social theory) Theses on Feuerbach References Further reading Paulo Freire (1970), Pedagogy of the Oppressed, Continuum International Publishing Group. External links Entry for "praxis" at the Encyclopaedia of Informal Education Der Begriff Praxis Concepts in the philosophy of mind Marxism
Everyday life
Everyday life, daily life or routine life comprises the ways in which people typically act, think, and feel on a daily basis. Everyday life may be described as mundane, routine, natural, habitual, or normal. Human diurnality means most people sleep at least part of the night and are active in daytime. Most eat two or three meals in a day. Working time (apart from shift work) mostly involves a daily schedule, beginning in the morning. This produces the daily rush hours experienced by many millions, and the drive time focused on by radio broadcasters. Evening is often leisure time. Bathing every day is a custom for many. Beyond these broad similarities, lifestyles vary and different people spend their days differently. For example, nomadic life differs from sedentism, and among the sedentary, urban people live differently from rural folk. Differences in the lives of the rich and the poor, or between laborers and intellectuals, may go beyond their working hours. Children and adults also vary in what they do each day. Sociological perspectives Everyday life is a key concept in cultural studies and is a specialized subject in the field of sociology. Some argue that, motivated by capitalism and industrialism's degrading effects on human existence and perception, writers and artists of the 19th century turned more towards self-reflection and the portrayal of everyday life represented in their writings and art to a noticeably greater degree than in past works, for example Renaissance literature's interest in hagiography and politics. Other theorists dispute this argument based on a long history of writings about daily life which can be seen in works from Ancient Greece, medieval Christianity and the Age of Enlightenment. In the study of everyday life, gender has been an important factor in its conceptions. Some theorists regard women as the quintessential representatives and victims of everyday life. The connotation of everyday life is often negative, and is distinctively separated from exceptional moments by its lack of distinction and differentiation. Ultimately this is defined as the essential, taken-for-granted continuum of mundane activity that outlines forays into more esoteric experiences. It is the non-negotiable reality that exists amongst all social groupings without discrimination and is an unavoidable basis for which all human endeavor exists. Much of everyday life is automatic in that it is driven by current environmental features as mediated by automatic cognitive processing of those features, and without any mediation by conscious choice, according to social psychologist John A. Bargh. Daily life is also studied by sociologists to investigate how it is organised and given meaning. A sociological journal called the Journal of Mundane Behavior, published from 2000 to 2004, studied these everyday actions. Leisure Daily entertainment once consisted mainly of telling stories in the evening. This custom developed into the theatre of ancient Greece and other professional entertainments. Reading later became less a mysterious specialty of scholars, and more a common pleasure for people who could afford books. During the 20th century mass media became prevalent in rich countries, creating among other things a daily prime time to consume fiction and other professionally produced works. 
Different media forms serve different purposes in different individuals' everyday lives—which gives people the opportunities to make choices about what media form(s)—watching television, using the Internet, listening to the radio, or reading newspapers or magazines—most effectively help them to accomplish their tasks. Many people have steadily increased their daily use of the Internet, over all other media forms. Language People's everyday lives are shaped through language and communication. They choose what to do with their time based on opinions and ideals formed through the discourse they are exposed to. Much of the dialogue people are subject to comes from the mass media, which is an important factor in what shapes human experience. The media uses language to make an impact on one's everyday life, whether that be as small as helping to decide where to eat or as big as choosing a representative in government. To improve people's everyday life, Phaedra Pezzullo, professor in the Department of Communication and Culture at Indiana University Bloomington, says people should seek to understand the rhetoric that so often and unnoticeably changes their lives. She writes that "...rhetoric enables us to make connections... It's about understanding how we engage with the world". Activities of daily living Activities of daily living (ADL) is a term used in healthcare to refer to daily self care activities within an individual's place of residence, in outdoor environments, or both. Health professionals routinely refer to the ability or inability to perform ADLs as a measurement of the functional status of a person, particularly in regard to people with disabilities and the elderly. ADLs are defined as "the things we normally do...such as feeding ourselves, bathing, dressing, grooming, work, homemaking, and leisure". The ability and the extent to which the elderly can perform these activities is at the focus of gerontology and understandings of later life. See also Being in the World Existentiell Genre art Genre painting Homelessness Lifestyle (sociology) Lifeworld Personal life Realism (arts) Shibui Simple living Technics and Time, 1 Technology and the Character of Contemporary Life The Practice of Everyday Life The Revolution of Everyday Life References Bibliography Wyer, Robert S.; Bargh, John A. (1997). The Automaticity of Everyday life. Lawrence Erlbaum Associates. . Further reading Sigmund Freud (1901), The Psychopathology of Everyday Life, Henri Lefebvre (1947), Critique of Everyday Life Raoul Vaneigem (1967), The Revolution of Everyday Life Ágnes Heller (1970), Everyday Life . Jack D. Douglas (ed.), Understanding Everyday Life (Chicago, 1970) Richard Gombin, ‘La critique de la vie quotidienne,’ in: Les origines du gauchisme (Paris, 1971) Michel de Certeau (1974), The Practice of Everyday Life Shotter, John (1993), Cultural politics of everyday life: Social constructionism, rhetoric and knowing of the third kind. The Everyday Life Reader (2001) edited by Ben Highmore. Erving Goffman (2002), The Presentation of Self in Everyday Life, in CONTEMPORARY SOCIOLOGICAL THEORY. Kristine Hughes, The Writer's Guide to Everyday Life in Regency and Victorian England from 1811-1901 Candy Moulton, Everyday Life Among the American Indians 1800 to 1900. Everyday life Personal life Philosophy of life Self-care Sociology of culture Social psychology he:שגרה
Eugenics
Eugenics ( ; ) is a set of beliefs and practices that aim to improve the genetic quality of a human population. Historically, eugenicists have altered various human gene frequencies by inhibiting the fertility of people and groups they considered inferior, or promoting that of those considered superior. The contemporary history of eugenics began in the late 19th century, when a popular eugenics movement emerged in the United Kingdom, and then spread to many countries, including the United States, Canada, Australia, and most European countries (e.g. Sweden and Germany). In this period, people from across the political spectrum espoused eugenic ideas. Consequently, many countries adopted eugenic policies, intended to improve the quality of their populations' genetic stock. Although it originated as a progressive social movement in the 19th century, in contemporary usage in the 21st century, the term is closely associated with scientific racism. Historically, the idea of eugenics has been used to argue for a broad array of practices ranging from prenatal care for mothers deemed genetically desirable to the forced sterilization and murder of those deemed unfit. To population geneticists, the term has included the avoidance of inbreeding without altering allele frequencies; for example, British-Indian scientist J. B. S. Haldane wrote in 1940 that "the motor bus, by breaking up inbred village communities, was a powerful eugenic agent." Debate as to what exactly counts as eugenics continues today. Early eugenicists were mostly concerned with factors of measured intelligence that often correlated strongly with social class. Common distinctions Eugenic programs included both positive measures, such as encouraging individuals deemed particularly "fit" to reproduce, and negative measures, such as marriage prohibitions and forced sterilization of people deemed unfit for reproduction. In other words, positive eugenics is aimed at encouraging reproduction among the genetically advantaged, for example, the eminently intelligent, the healthy, and the successful. Possible approaches include financial and political stimuli, targeted demographic analyses, in vitro fertilization, egg transplants, and cloning. Negative eugenics aimed to eliminate, through sterilization or segregation, those deemed physically, mentally, or morally "undesirable". This includes abortions, sterilization, and other methods of family planning. Both positive and negative eugenics can be coercive; in Nazi Germany, for example, abortion was illegal for women deemed by the state to be fit. As opposed to "euthenics" Historical eugenics Ancient and medieval origins Academic origins The term eugenics and its modern field of study were first formulated by Francis Galton in 1883, directly drawing on the recent work delineating natural selection by his half-cousin Charles Darwin. He published his observations and conclusions chiefly in his influential book Inquiries into Human Faculty and Its Development. Galton himself defined it as "the study of all agencies under human control which can improve or impair the racial quality of future generations". The first to systematically apply Darwinism theory to human relations, Galton believed that various desirable human qualities were also hereditary ones, although Darwin strongly disagreed with this elaboration of his theory. And it should also be noted that many of the early geneticists were not themselves Darwinians. 
Eugenics became an academic discipline at many colleges and universities and received funding from various sources. Organizations were formed to win public support for and to sway opinion towards responsible eugenic values in parenthood, including the British Eugenics Education Society of 1907 and the American Eugenics Society of 1921. Both sought support from leading clergymen and modified their message to meet religious ideals. In 1909, the Anglican clergymen William Inge and James Peile both wrote for the Eugenics Education Society. Inge was an invited speaker at the 1921 International Eugenics Conference, which was also endorsed by the Roman Catholic Archbishop of New York Patrick Joseph Hayes. Three International Eugenics Conferences presented a global venue for eugenicists, with meetings in 1912 in London, and in 1921 and 1932 in New York City. Eugenic policies in the United States were first implemented by state-level legislators in the early 1900s. Eugenic policies also took root in France, Germany, and Great Britain. Later, in the 1920s and 1930s, the eugenic policy of sterilizing certain mental patients was implemented in other countries including Belgium, Brazil, Canada, Japan and Sweden. Frederick Osborn's 1937 journal article "Development of a Eugenic Philosophy" framed eugenics as a social philosophy—a philosophy with implications for social order. That definition is not universally accepted. Osborn advocated for higher rates of sexual reproduction among people with desired traits ("positive eugenics") or reduced rates of sexual reproduction or sterilization of people with less-desired or undesired traits ("negative eugenics"). In addition to being practiced in a number of countries, eugenics was internationally organized through the International Federation of Eugenics Organizations. Its scientific aspects were carried on through research bodies such as the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics, the Cold Spring Harbor Carnegie Institution for Experimental Evolution, and the Eugenics Record Office. Politically, the movement advocated measures such as sterilization laws. In its moral dimension, eugenics rejected the doctrine that all human beings are born equal and redefined moral worth purely in terms of genetic fitness. Its racist elements included pursuit of a pure "Nordic race" or "Aryan" genetic pool and the eventual elimination of "unfit" races. Many leading British politicians subscribed to the theories of eugenics. Winston Churchill supported the British Eugenics Society and was an honorary vice president for the organization. Churchill believed that eugenics could solve "race deterioration" and reduce crime and poverty. As a social movement, eugenics reached its greatest popularity in the early decades of the 20th century, when it was practiced around the world and promoted by governments, institutions, and influential individuals. Many countries enacted various eugenics policies, including: genetic screenings, birth control, promoting differential birth rates, marriage restrictions, segregation (both racial segregation and sequestering the mentally ill), compulsory sterilization, forced abortions or forced pregnancies, ultimately culminating in genocide. 
By 2014, gene selection (rather than "people selection") was made possible through advances in genome editing, leading to what is sometimes called new eugenics, also known as "neo-eugenics", "consumer eugenics", or "liberal eugenics", which focuses on individual freedom and allegedly pulls away from racism, sexism or a focus on intelligence. Early opposition Early critics of the philosophy of eugenics included the American sociologist Lester Frank Ward, the English writer G. K. Chesterton, and the Scottish tuberculosis pioneer and author Halliday Sutherland. Ward's 1913 article "Eugenics, Euthenics, and Eudemics", Chesterton's 1917 book Eugenics and Other Evils, and Franz Boas' 1916 article "Eugenics" (published in The Scientific Monthly) were all harshly critical of the rapidly growing movement. Several biologists were also antagonistic to the eugenics movement, including Lancelot Hogben. Other biologists who were themselves eugenicists, such as J. B. S. Haldane and R. A. Fisher, nonetheless expressed skepticism that sterilization of "defectives" (i.e. a purely negative eugenics) would lead to the disappearance of undesirable genetic traits. Among institutions, the Catholic Church was an opponent of state-enforced sterilizations, but accepted isolating people with hereditary diseases so as not to let them reproduce. Attempts by the Eugenics Education Society to persuade the British government to legalize voluntary sterilization were opposed by Catholics and by the Labour Party. The American Eugenics Society initially gained some Catholic supporters, but Catholic support declined following the 1930 papal encyclical Casti connubii. In this, Pope Pius XI explicitly condemned sterilization laws: "Public magistrates have no direct power over the bodies of their subjects; therefore, where no crime has taken place and there is no cause present for grave punishment, they can never directly harm, or tamper with the integrity of the body, either for the reasons of eugenics or for any other reason." In fact, more generally, "[m]uch of the opposition to eugenics during that era, at least in Europe, came from the right." The eugenicists' political successes in Germany and Scandinavia were not at all matched in such countries as Poland and Czechoslovakia, even though measures had been proposed there, largely because of the Catholic Church's moderating influence. Concerns over human devolution The Lamarckian backdrop Dysgenics Compulsory sterilization Eugenic feminism North American eugenics Eugenics in Mexico Nazism and the decline of eugenics The scientific reputation of eugenics started to decline in the 1930s, a time when Ernst Rüdin used eugenics as a justification for the racial policies of Nazi Germany. Adolf Hitler had praised and incorporated eugenic ideas in Mein Kampf in 1925 and, once he took power, emulated the eugenic sterilization legislation for "defectives" that had been pioneered in the United States. Common early 20th-century eugenics methods involved identifying and classifying individuals and their families, including the poor, mentally ill, blind, deaf, developmentally disabled, promiscuous women, homosexuals, and racial groups (such as the Roma and Jews in Nazi Germany), as "degenerate" or "unfit"; such classification led in turn to segregation, institutionalization, sterilization, and even mass murder. 
The Nazi policy of identifying German citizens deemed mentally or physically unfit and then systematically killing them with poison gas, referred to as the Aktion T4 campaign, is understood by historians to have paved the way for the Holocaust. By the end of World War II, many eugenics laws were abandoned, having become associated with Nazi Germany. H. G. Wells, who had called for "the sterilization of failures" in 1904, stated in his 1940 book The Rights of Man: Or What Are We Fighting For? that among the human rights, which he believed should be available to all people, was "a prohibition on mutilation, sterilization, torture, and any bodily punishment". After World War II, the practice of "imposing measures intended to prevent births within [a national, ethnical, racial or religious] group" fell within the definition of the new international crime of genocide, set out in the Convention on the Prevention and Punishment of the Crime of Genocide. The Charter of Fundamental Rights of the European Union also proclaims "the prohibition of eugenic practices, in particular those aiming at selection of persons". Modern eugenics Developments in genetic, genomic, and reproductive technologies at the beginning of the 21st century have raised numerous questions regarding the ethical status of eugenics, effectively creating a resurgence of interest in the subject. Some, such as UC Berkeley sociologist Troy Duster, have argued that modern genetics is a back door to eugenics. This view was shared by then-White House Assistant Director for Forensic Sciences, Tania Simoncelli, who stated in a 2003 publication by the Population and Development Program at Hampshire College that advances in pre-implantation genetic diagnosis (PGD) are moving society to a "new era of eugenics", and that, unlike the Nazi eugenics, modern eugenics is consumer driven and market based, "where children are increasingly regarded as made-to-order consumer products". In a similar spirit, the United Nations' International Bioethics Committee wrote that the ethical problems of human genetic engineering should not be confused with the ethical problems of the 20th century eugenics movements. However, it is still problematic because it challenges the idea of human equality and opens up new forms of discrimination and stigmatization for those who do not want, or cannot afford, the technology. Before any of these technological breakthroughs, however, prenatal screening has long been called by some a contemporary and highly prevalent form of eugenics because it may lead to selective abortions of fetuses with undesirable traits. In Singapore Lee Kuan Yew, the founding father of Singapore, actively promoted eugenics as late as 1983. In 1984, Singapore began providing financial incentives to highly educated women to encourage them to have more children. For this purpose was introduced the "Graduate Mother Scheme" that incentivized graduate women to get married as much as the rest of their populace. The incentives were extremely unpopular and regarded as eugenic, and were seen as discriminatory towards Singapore's non-Chinese ethnic population. In 1985, the incentives were partly abandoned as ineffective, while the government matchmaking agency, the Social Development Network, remains active. 
Contested scientific status One general concern is that the reduced genetic diversity that some argue would be a likely feature of long-term, species-wide eugenics programs could eventually result in inbreeding depression, increased spread of infectious disease, and decreased resilience to changes in the environment. Arguments for scientific validity In his original lecture "Darwinism, Medical Progress and Eugenics", Karl Pearson claimed that everything concerning eugenics fell into the field of medicine. In a similarly apologetic vein, the Czech-American Aleš Hrdlička, head of the American Anthropological Association from 1925 to 1926 and "perhaps the leading physical anthropologist in the country at the time", posited that its ultimate aim "is that it may, on the basis of accumulated knowledge and together with other branches of research, show the tendencies of the actual and future evolution of man, and aid in its possible regulation or improvement. The growing science of eugenics will essentially become applied anthropology." More recently, the prominent evolutionary biologist Richard Dawkins stated of the matter: The spectre of Hitler has led some scientists to stray from "ought" to "is" and deny that breeding for human qualities is even possible. But if you can breed cattle for milk yield, horses for running speed, and dogs for herding skill, why on Earth should it be impossible to breed humans for mathematical, musical or athletic ability? Objections such as "these are not one-dimensional abilities" apply equally to cows, horses and dogs and never stopped anybody in practice. I wonder whether, some 60 years after Hitler's death, we might at least venture to ask what the moral difference is between breeding for musical ability and forcing a child to take music lessons. Scientifically possible and already well established, heterozygote carrier testing is used in the prevention of autosomal recessive disorders, allowing couples to determine whether they are at risk of passing various hereditary defects on to a future child. There are various examples of eugenic acts that managed to lower the prevalence of recessive diseases without negatively affecting the heterozygote carriers of those diseases. The elevated prevalence of various genetically transmitted diseases among Ashkenazi Jewish populations (e.g. Tay–Sachs disease, cystic fibrosis, Canavan disease, and Gaucher disease) has been markedly decreased in more recent cohorts by the widespread adoption of genetic screening (cf. also Dor Yeshorim). Objections to scientific validity Amanda Caleb, Professor of Medical Humanities at Geisinger Commonwealth School of Medicine, says "Eugenic laws and policies are now understood as part of a specious devotion to a pseudoscience that actively dehumanizes to support political agendas and not true science or medicine." The first major challenge to conventional eugenics based on genetic inheritance was made in 1915 by Thomas Hunt Morgan. With his discovery that a white-eyed fruit fly (Drosophila melanogaster) could hatch from a red-eyed family, he demonstrated that major genetic changes can arise by mutation rather than by inheritance alone. Additionally, Morgan criticized the view that traits such as intelligence and criminality were hereditary, on the grounds that such traits were subjectively defined. 
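One standard way to make the skepticism attributed above to geneticists such as Haldane and Fisher concrete is a Hardy-Weinberg calculation: when a harmful allele is recessive and rare, nearly all copies of it sit in unaffected heterozygous carriers, so even complete selection against affected homozygotes lowers the allele frequency only very slowly. The sketch below is an illustrative aside rather than material from any source cited in this article; the 1% starting allele frequency and the generation counts are hypothetical, chosen only to display the shape of the classical result.

```python
# Illustrative sketch (not from any source cited in this article): how slowly a
# rare recessive allele declines when every affected homozygote (aa) is
# prevented from reproducing. Assumes random mating, no mutation, no migration,
# and no genetic drift.

def next_q(q: float) -> float:
    """One generation of complete selection against aa homozygotes.

    Under Hardy-Weinberg proportions, removing all aa individuals before
    reproduction gives the classical recurrence q' = q / (1 + q).
    """
    return q / (1.0 + q)

q = 0.01  # hypothetical starting frequency of the recessive allele (1%)
print(f"generation   0: q = {q:.4%}, affected births q^2 = {q * q:.6%}")
for gen in range(1, 101):
    q = next_q(q)
    if gen in (10, 50, 100):
        print(f"generation {gen:3d}: q = {q:.4%}, affected births q^2 = {q * q:.6%}")

# After 100 generations of complete "negative" selection, the allele frequency
# has only halved (from 1% to about 0.5%), because most copies remain hidden in
# unaffected carriers; this matches the closed form q_n = q_0 / (1 + n * q_0).
```

At roughly 25 years per human generation, 100 generations corresponds to some 2,500 years, which is why early population geneticists doubted that sterilization programs could eliminate rare recessive traits; using the closed-form recurrence rather than an individual-based simulation keeps the sketch deterministic and easy to verify by hand.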
Pleiotropy occurs when one gene influences multiple, seemingly unrelated phenotypic traits; an example is phenylketonuria, a human disease that affects multiple systems but is caused by a single gene defect. Andrzej Pękalski, of the University of Wrocław, argues that eugenics can cause a harmful loss of genetic diversity if a eugenics program selects against a pleiotropic gene that is also associated with a positive trait. Pękalski uses the example of a coercive government eugenics program that prohibits people with myopia from breeding but has the unintended consequence of also selecting against high intelligence, since the two traits tend to co-occur. While the science of genetics has increasingly provided means by which certain characteristics and conditions can be identified and understood, given the complexity of human genetics, culture, and psychology, there is at this point no agreed objective means of determining which traits might be ultimately desirable or undesirable. Some conditions, such as sickle-cell disease and cystic fibrosis, respectively confer resistance to malaria and to cholera when a single copy of the recessive allele is present in an individual's genotype, so eliminating these genes is undesirable in places where such diseases are common. In such cases, furthermore, even the considerable suffering or death of the roughly 25 percent of homozygotes that a Mendelian pattern of inheritance makes ineliminable by natural selection may be regarded as offset by the greater heterozygote advantage that carriers enjoy. Edwin Black, journalist, historian, and author of War Against the Weak, argues that eugenics is often deemed a pseudoscience because what is defined as a genetic improvement of a desired trait is a cultural choice rather than a matter that can be determined through objective scientific inquiry. Indeed, the most disputed aspect of eugenics has been the definition of "improvement" of the human gene pool, such as what is a beneficial characteristic and what is a defect. Historically, this aspect of eugenics is often considered to be tainted with scientific racism and pseudoscience. Regarding this lasting controversy, the historian of science Aaron Gillette, citing recent scholarship, notes that: Others take a more nuanced view. They recognize that there was a wide variety of eugenic theories, some of which were much less race- or class-based than others. Eugenicists might also give greater or lesser acknowledgment to the role that environment played in shaping human behavior. In some cases, eugenics was almost imperceptibly intertwined with health care, child care, birth control, and sex education issues. In this sense, eugenics has been called "a 'modern' way of talking about social problems in biologizing terms". Indeed, granting that the historical phenomenon of eugenics was a pseudoscience, Gillette further notes that this derived chiefly from its being "an epiphenomenon of a number of sciences, which all intersected at the claim that it was possible to consciously guide human evolution." Contested ethical status Contemporary ethical opposition In a book directly addressed to the socialist eugenicist J.B.S. 
Haldane and his once-influential Daedalus, Bertrand Russell had one serious objection of his own: eugenic policies might simply end up being used to reproduce existing power relations “rather than to make men happy.” The environmental ethicist Bill McKibben argued against germinal choice technology and other advanced biotechnological strategies for human enhancement. He writes that it would be morally wrong for humans to tamper with fundamental aspects of themselves (or their children) in an attempt to overcome universal human limitations, such as vulnerability to aging, maximum life span and biological constraints on physical and cognitive ability. In his view, attempts at such "improvement" through manipulation would remove limitations that provide a necessary context for the experience of meaningful human choice. He claims that human lives would no longer seem meaningful in a world where such limitations could be overcome with technology. Even the goal of using germinal choice technology for clearly therapeutic purposes should be relinquished, he argues, since it would inevitably produce temptations to tamper with such things as cognitive capacities. He argues that it is possible for societies to benefit from renouncing particular technologies, using Ming China, Tokugawa Japan and the contemporary Amish as examples. The threat of perfection Contemporary ethical advocacy Some, for example Nathaniel C. Comfort of Johns Hopkins University, claim that the change from state-led reproductive-genetic decision-making to individual choice has moderated the worst abuses of eugenics by transferring the decision-making process from the state to patients and their families. Comfort suggests that "the eugenic impulse drives us to eliminate disease, live longer and healthier, with greater intelligence, and a better adjustment to the conditions of society; and the health benefits, the intellectual thrill and the profits of genetic bio-medicine are too great for us to do otherwise." Others, such as the bioethicist Stephen Wilkinson of Keele University and Honorary Research Fellow Eve Garrard at the University of Manchester, claim that some aspects of modern genetics can be classified as eugenics, but that this classification does not inherently make modern genetics immoral. In their book From Chance to Choice: Genetics and Justice (2000), the bioethicists Allen Buchanan, Dan Brock, Norman Daniels and Daniel Wikler argued that liberal societies have an obligation to encourage as wide an adoption of eugenic enhancement technologies as possible (so long as such policies do not infringe on individuals' reproductive rights or exert undue pressures on prospective parents to use these technologies) in order to maximize public health and minimize the inequalities that may result from both natural genetic endowments and unequal access to genetic enhancements. In his book A Theory of Justice (1971), the American philosopher John Rawls argued that "[o]ver time a society is to take steps to preserve the general level of natural abilities and to prevent the diffusion of serious defects". The original position, a hypothetical situation developed by Rawls, has been used as an argument for negative eugenics. Accordingly, some defend germline editing on moral grounds precisely because of its capacity to (re)distribute such Rawlsian primary goods. 
Status quo bias and the reversal test The utilitarian perspective of Procreative Beneficence Transhuman perspectives Problematizing the therapy-enhancement distinction In science fiction Brave New World (1931), by the English author Aldous Huxley, is a dystopian social science fiction novel set in a futuristic World State, whose citizens are environmentally engineered into an intelligence-based social hierarchy. Various works by the author Robert A. Heinlein mention the Howard Foundation, a group which attempts to improve human longevity through selective breeding. Frank Herbert's Dune series, starting with the eponymous 1965 novel, describes selective breeding by a powerful sisterhood, the Bene Gesserit, to produce a supernormal male being, the Kwisatz Haderach. The Star Trek franchise features a race of genetically engineered humans known as "Augments", the most notable of whom is Khan Noonien Singh. These "supermen" were the cause of the Eugenics Wars, a dark period in Earth's fictional history, before they were deposed and exiled. They appear in many of the franchise's story arcs, most frequently as villains. The film Gattaca (1997) provides a fictional example of a dystopian society that uses eugenics to decide what people are capable of and their place in the world. The title alludes to the letters G, A, T and C, the four nucleobases of DNA, and the film depicts the possible consequences of genetic discrimination in the present societal framework. Relegated to the role of a cleaner owing to his genetically projected death at age 32 from a heart condition (and told, "The only way you'll see the inside of a spaceship is if you were cleaning it"), the protagonist observes enhanced astronauts as they demonstrate their superhuman athleticism. Against the impression that mere uniformity is the film's key theme, however, it also features a highly esteemed concert pianist with twelve fingers. Though not a box office success, the film was critically acclaimed and is said to have crystallized the debate over human genetic engineering in the public consciousness. As to its accuracy, its production company, Sony Pictures, consulted the gene therapy researcher W. French Anderson, a prominent critic of eugenics who had stated that "[w]e should not step over the line that delineates treatment from enhancement", to ensure that the portrayal of science was realistic. Disputing their success in this mission, Philip Yam of Scientific American called the film "science bashing" and Nature's Kevin Davies called it a "surprisingly pedestrian affair", while the molecular biologist Lee Silver described its extreme determinism as "a straw man". In an even more pointed critique, in his 2018 book Blueprint, the behavioral geneticist Robert Plomin writes that while Gattaca warned of the dangers of genetic information being used by a totalitarian state, genetic testing could also favor a better meritocracy in democratic societies, which already administer a variety of standardized tests to select people for education and employment. He suggests that polygenic scores might supplement testing in a manner that is essentially free of biases. 
Along similar lines, in the 2004 book Citizen Cyborg, democratic transhumanist James Hughes had already argued against what he considers to be "professional fearmongers", stating of the movie's premises: Astronaut training programs are entirely justified in attempting to screen out people with heart problems for safety reasons; In the United States, people are already being screened by insurance companies on the basis of their propensities to disease, for actuarial purposes; Rather than banning genetic testing or genetic enhancement, society should simply develop genetic information privacy laws, such as the U.S. Genetic Information Nondiscrimination Act, that allow justified forms of genetic testing and data aggregation, but forbid those that are judged to result in genetic discrimination. Enforcing these would not be very hard once a system for reporting and penalties is in place. See also References Notes Further reading Anomaly, Jonathan (2018). "Defending eugenics: From cryptic choice to conscious selection." Monash Bioethics Review 35 (1–4):24-35. doi:10.1007/s40592-018-0081-2 Anomaly, Jonathan (2024). Creating Future People The Science and Ethics of Genetic Enhancement. Routledge, 2nd Edition. , Paul, Diane B.; Spencer, Hamish G. (1998). "Did Eugenics Rest on an Elementary Mistake?" (PDF). In: The Politics of Heredity: Essays on Eugenics, Biomedicine, and the Nature-Nurture Debate, SUNY Press (pp. 102–118) Gantsho, Luvuyo (2022). "The principle of procreative beneficence and its implications for genetic engineering." Theoretical Medicine and Bioethics 43 (5):307-328. doi:10.1007/s11017-022-09585-0 Harris, John (2009). "Enhancements are a Moral Obligation." In J. Savulescu & N. Bostrom (Eds.), Human Enhancement, Oxford University Press, pp. 131–154 Kamm, Frances (2010). "What Is And Is Not Wrong With Enhancement?" In Julian Savulescu & Nick Bostrom (eds.), Human Enhancement. Oxford University Press. Kamm, Frances (2005). "Is There a Problem with Enhancement?", The American Journal of Bioethics, 5(3), 5–14. PMID 16006376 doi:10.1080/15265160590945101 Ranisch, Robert (2022). "Procreative Beneficence and Genome Editing", The American Journal of Bioethics, 22(9), 20–22. doi:10.1080/15265161.2022.2105435 Robertson, John (2021). Children of Choice: Freedom and the New Reproductive Technologies. Princeton University Press, doi:10.2307/j.ctv1h9dhsh. Saunders, Ben (2015). "Why Procreative Preferences May be Moral – And Why it May not Matter if They Aren't." Bioethics, 29(7), 499–506. doi:10.1111/bioe.12147 Savulescu, Julian (2001). Procreative beneficence: why we should select the best children. Bioethics. 15(5–6): pp. 413–26 Singer, Peter (2010). "Parental Choice and Human Improvement." In Julian Savulescu & Nick Bostrom (eds.), Human Enhancement. Oxford University Press. Wikler, Daniel (1999). "Can we learn from eugenics?" (PDF). J Med Ethics. 25(2):183-94. doi: 10.1136/jme.25.2.183. PMID 10226926; PMCID: PMC479205. 
External links Embryo Editing for Intelligence: A cost-benefit analysis of CRISPR-based editing for intelligence with 2015-2016 state-of-the-art Embryo Selection For Intelligence: A cost-benefit analysis of the marginal cost of IVF-based embryo selection for intelligence and other traits with 2016-2017 state-of-the-art Eugenics: Its Origin and Development (1883–Present) by the National Human Genome Research Institute (30 November 2021) Eugenics and Scientific Racism Fact Sheet by the National Human Genome Research Institute (3 November 2021) Ableism Applied genetics Bioethics Nazism Pseudo-scholarship Pseudoscience Racism Technological utopianism White supremacy
Ethnogenesis
Ethnogenesis (; ) is the formation and development of an ethnic group. This can originate by group self-identification or by outside identification. The term ethnogenesis was originally a mid-19th-century neologism that was later introduced into 20th-century academic anthropology. In that context, it refers to the observable phenomenon of the emergence of new social groups that are identified as having a cohesive identity, i.e. an "ethnic group" in anthropological terms. Relevant social sciences not only observe this phenomenon but also search for explanations for its causes. The term ethnogeny is also used as a variant of ethnogenesis. Passive or active ethnogenesis Ethnogenesis can occur passively or actively. A passive ethnogenesis is an unintended outcome, which involves the spontaneous emergence of various markers of group identity through processes such as the group's interaction with unique elements of their physical environment, cultural divisions (such as dialect and religious denomination), migrations and other processes. A founding myth of some kind may emerge as part of this process. Active ethnogenesis is the deliberate, direct planning and engineering of a separate identity. This is a controversial topic because of the difficulty involved in creating a new ethnic identity. However, it is clear that active ethnogenesis may augment passive ethnogenesis. Active ethnogenesis is usually inspired by emergent political issues, such as a perceived, long-term, structural economic imbalance between regions or a perceived discrimination against elements of local culture (e.g., as a result of the promotion of a single dialect as a standard language at the national level). With regard to the latter, since the late 18th century, such attempts have often been related to the promotion (or demotion) of a particular dialect; nascent nationalists have often attempted to establish a particular dialect (or group of dialects) as a separate language, encompassing a "national literature", out of which a founding myth may be extracted and promoted. In the 19th and 20th centuries, societies challenged by the obsolescence of those narratives that previously afforded them coherence have fallen back on ethnic or racial narratives to maintain or reaffirm their collective identity or polis. Language revival Language has been a critical asset for authenticating ethnic identities. The process of reviving an antique ethnic identity often poses an immediate language challenge, as obsolescent languages lack expressions for contemporary experiences. In the 1990s, proponents of ethnic revivals in Europe included those from the Celtic fringes in Wales and nationalists in the Basque Country. Activists' attempts since the 1970s to revive the Occitan language in Southern France are a similar example. Similarly, in the 19th century, the Fennoman movement in the Grand Duchy of Finland aimed to raise the Finnish language from peasant status to an official national language, which had been solely Swedish for some time. The Fennomans also founded the Finnish Party to pursue their nationalist aims. The publication in 1835 of the Finnish national epic, Kalevala, was a founding stone of Finnish nationalism and ethnogenesis. Finnish was recognized as the official language of Finland only in 1892. Fennomans were opposed by the Svecomans, headed by Axel Olof Freudenthal (1836–1911). 
He supported continuing the use of Swedish as the official language; it had been a minority language used by the educated elite in government and administration. In line with contemporary scientific racism theories, Freudenthal believed that Finland had two races, one speaking Swedish and the other Finnish. The Svecomans claimed that the Swedish Germanic race was superior to the majority Finnish people. In the late 19th and early 20th centuries, Hebrew underwent a revival from a liturgical language to a vernacular language with native speakers. This process began with Eliezer Ben-Yehuda and the creation of the Ben-Yehuda Dictionary and was later facilitated by Jewish immigration to Ottoman Palestine during the waves of migration known as the First and Second Aliyot. Modern Hebrew was made one of three official languages in Mandatory Palestine, and later one of two official languages in Israel along with Arabic. In addition to the modernization of the language, many Jewish immigrants changed their names to ones that originate from Hebrew or align with Hebrew phonology, a process known as Hebraization. This indicates that the Hebrew revival was both an ethnogenic and a linguistic phenomenon. In Ireland, the revival of the Irish language and the creation of an Irish national literature were part of the reclamation of an Irish identity beginning at the end of the 19th century. Since its independence from the Netherlands in 1830, language has been an important but divisive political force in Belgium between the Dutch-speaking, Germanic Flemings and the French-speaking, Franco-Celtic Walloons. Switzerland has four national languages--German, French, Italian, and Romansh--each concentrated in a different region of the country. The Alemannic German-speaking (die Deutschschweizer) region is in the north and east, the French-speaking (Romandie) region in the west, the Italian/Lombard (la Svizzera italiana) region in the south, and the small Romansh-speaking population in the south-eastern corner of the country in the Canton of Graubünden. Specific cases Ancient Greeks Anthony D. Smith notes that, in general, a lack of evidence hampers the assessment of whether nations or nationalisms existed in antiquity. The two cases where more evidence exists are those of ancient Greece and Israel. In Ancient Greece, cultural rather than political unity is observed. Yet, there were ethnic divisions within the wider Hellenic ethnic community, mainly between Ionians, Aeolians, Boeotians, and Dorians. These groups were further divided into city-states. Smith postulates that there is no more than a semblance of nationalism in ancient Greece. Jonathan M. Hall's work “Ethnic Identity in Greek Antiquity” (1997) was acclaimed as the first full-length modern study of Ancient Greek ethnicity. According to Hall, Ancient Greek ethnic identity was largely based on kinship, descent and genealogy, which was reflected in elaborate genealogical myths. In his view, genealogy is the most fundamental way any population defines itself as an ethnic group. There was a change in the way Greeks constructed their ethnic identity in the period of the Persian Wars (first half of the 5th century BC). Before that, in the Archaic period, Greeks tended to connect themselves to one another through genealogical assimilation. After the Persian invasion, they began defining themselves against the enemy they perceived as the barbarian “other”. 
An indication of this disposition is the Athenians' speech to their allies in 480 BC, mentioning that all Hellenes are bound by "the same blood", "the same language", and common religious practices. Hall believes that Hellenic identity was envisaged in the 6th century BC as being ethnic in character. Cultural forms of identification emerged in the 5th century, and there is evidence that by the 4th century this identity was conceived more in cultural terms. American Americans in the United States In the 2015 American Community Survey of the United States Census Bureau, 7.2% of the population identified as having American ancestry, mainly people whose ancestors migrated from Europe after the 1400s to the southeastern United States. Larger percentages from similarly long-established families identified as German Americans, English Americans, or Irish Americans, leaving the distinction between "American" and a specific European ethnicity largely a matter of personal preference. African Americans in the United States The ethnogenesis of African Americans begins with slavery, specifically in the United States of America. Between 1492 and 1880, 2 to 5.5 million Native Americans were enslaved in the Americas in addition to 12.5 million African slaves. The concept of race began to emerge during the mid-17th century as a justification for the enslavement of Africans in colonial America. Later, scientists developed theories to uphold the system of forced labor. Native Americans of darker skin tones were included in this construct with the arrival of African slaves. Some Native Americans of lighter complexions, however, owned slaves, participating in race-based slavery alongside Europeans. American society evolved into a two-caste system, with two broad classes: white and non-white, citizen and non-citizen (or semi-citizen). Non-white non- or semi-citizens were referred to by the general terms "Black" or "Negro", regardless of their known ethnic or cultural background. The lives and identities of African Americans have been shaped by systems of race and slavery, resulting in a unique culture and experience. Cultural aspects like music, food, literature, inventions, dances, and other concepts prominently stem from the combined experience of enslaved African Americans and free African Americans who were still subject to racist laws in the United States. Ethnicity is not solely based on race. However, due to the race-based history, system, and lifestyle of American society, African Americans tend to prefer to identify racially rather than ethnically. This racialized identity has created the common misconception that African Americans are virtually a mono-racial African-descendant ethnic group in the United States. Still, the genetic architecture of African Americans is distinct from that of non-American Africans. This is consistent with the history of Africans, Europeans, and Native Americans intermixing during the transatlantic slave trade and with race being a social construct created in the United States. But ethnogenesis is an ongoing process: as the 2.1 million African immigrants residing in America in 2019 show, it is not accurate to say that the ethnogenesis of Black Americans ended with slavery, because Black people from other islands and from the African continent continue to migrate to America. Although often perceived as a mono-racial group, African Americans typically carry segments of DNA contributed by Indigenous American, European, and African peoples, so their genetic ancestry can span several continents. 
Within the African American population, there are no mono-ethnic backgrounds from outside of the U.S., and mono-racial backgrounds are in the minority. Through forced enslavement and admixing, the African American ethnicity, race, lineage, culture, and identity are indigenous to the United States of America. Goths Herwig Wolfram offers "a radically new explanation of the circumstances under which the Goths were settled in Gaul, Spain and Italy". Since "they dissolved at their downfall into a myth accessible to everyone" at the head of a long history of attempts to lay claim to a "Gothic" tradition, the ethnogenesis by which disparate bands came to self-identify as "Goths" is of wide interest and application. The problem is in extracting a historical ethnography from sources that are resolutely Latin and Roman-oriented. Indigenous peoples of southwestern North America Clayton Anderson observed that with the arrival of the Spanish in southwestern North America, the Native Americans of the Jumano cultural sphere underwent social changes partly as a reaction, which spurred their ethnogenesis. Ethnogenesis in the Texas Plains and along the coast occurred in two forms. One way involved a disadvantaged group being assimilated into a more dominant group that they identified with, while the other way involved the modification and reinvention of cultural institutions. Nancy Hickerson argued that the disintegration of 17th-century Jumano, caused in part by the widespread deaths from introduced diseases, was followed by their reintegration as Kiowa. External stresses that produced ethnogenetic shifts preceded the arrival of the Spanish and their horse culture. Drought cycles had previously forced non-kin groups to either band together or disband and mobilize. Intertribal hostilities forced weaker groups to align with stronger ones. Indigenous peoples of southeastern North America From 1539 to 1543, a Spanish expedition led by Hernando de Soto departed Cuba for Florida and the American Southeast. Although asked to practice restraint, Soto led 600 men on a violent rampage through present-day Florida, Georgia, South Carolina, North Carolina, Tennessee, Alabama, Mississippi, Arkansas, and East Texas. Frustrated with not finding gold or silver in the areas suspected to contain such valuable materials, they destroyed villages and decimated native populations. Despite his death in 1542, Soto's men continued their expedition until 1543 when about half of their original force reached Mexico. Their actions introduced European diseases that further weakened native populations. The population collapse forced natives to relocate from their cities into the countryside, where smaller villages and new political structures developed, replacing the older chiefdom models of tribal governance. By 1700, the major tribal settlements Soto and his men had encountered were no more. Smaller tribes began to form loose confederations of smaller, more autonomous villages. From that blending of many tribes, ethnogenesis led to the emergence of new ethnic groups and identities for the consolidated natives who had managed to survive the incursion of European people, animals, and diseases. After 1700, most North American Indian "tribes" were relatively new composite groups formed by these remnant peoples who were trying to cope with epidemic illnesses brought by and clashes with the Europeans who were exploring the area. 
Indigenous peoples on the Canadian prairies European encroachment caused significant demographic shifts in the size and geographic distribution of the indigenous communities, leading to a rise in mortality rates due to conflict and disease. Some Aboriginal groups were destroyed, while new groups emerged from the cultural interface of pre-existing groups. One example of this ethnogenesis is the Métis people. Italian During the Middle Ages in Italy, the Italo-Dalmatian languages differentiated from Latin, leading to the distinction of Italians from neighboring ethnic groups within the former Roman Empire. Over time, ethnological and linguistic differences between regional groups also developed, from the Lombardians of the North to the Sicilians of the South. Mountainous terrain allowed the development of relatively isolated communities and numerous dialects and languages before Italian unification in the 19th century. Jewish In classical antiquity, Jewish, Greek, and Roman authors frequently referred to the Jewish people as an ethnos, one of the numerous ethne that lived in the Greco-Roman world. Van Maaren demonstrates why ancient Jews may be regarded as an ethnic group in current terms by using the six characteristics that co-ethnics share as established by Hutchinson and Smith: (1) the usage of several ethnonyms to refer to the Jewish ethnos, including "Jews", "Israel" and "Hebrews"; (2) Jews believed they shared a common ancestor as descendants of patriarch Jacob/Israel, and the Hasmonean dynasty (which controlled Judea between 140 and 37 BCE) employed the perceived common descent from Abraham to broaden definitions of Jewishness in their era; (3) historical events and heroes narrated in the Hebrew Bible and later scriptures served as a fundamental collection of shared memories of the past, and their community reading at synagogues helped instill the collective Jewish identity; (4) a shared culture including the religion of Judaism, worship of the God of Israel, Sabbath observance, kashrut, and the symbolic significance of the Hebrew language, even for Jews who did not speak it at the time; (5) a connection to the Land of Israel, Judaea or Palaestina, as their homeland to both local Jews and those residing abroad; (6) a sense of solidarity between Jews, on the part of at least some sections of the ethnic group as shown, for example, during the Jewish-Roman wars. Moldovan The separate Moldovan ethnic identification was promoted under Soviet rule when the Soviet Union established an autonomous Moldavian Autonomous Soviet Socialist Republic in 1924. This republic was situated between the Dniester and Southern Bug rivers (Transnistria), distinct from the Ukrainian SSR. Scholar Charles King concluded that this action was partly support for Soviet propaganda and help for a potential communist revolution in Romania. Initially, people of Moldovan ethnicity supported territorial claims to the regions of Bessarabia and Northern Bukovina, which were part of Romania at the time. The claims were based on the fact that the territory of eastern Bessarabia with Chisinau had belonged to the Russian Empire between 1812 and 1918. After having been part of the Romanian Principality of Moldova for 500 years, Russia was awarded the East of Moldova as compensation for its losses during the Napoleonic Wars. This marked the beginning of the 100 years of Russian history in East Moldova. 
After the Soviet occupation of the two territories in 1940, potential reunification claims were offset by the Moldavian Soviet Socialist Republic. When the Moldavian ASSR was established, Chișinău was named its capital, a role it continued to play even after the formation of the Moldavian SSR in 1940. The recognition of Moldovans as a separate ethnicity, distinct from Romanians, remains today a controversial subject. On one hand, the Moldovan Parliament adopted "The Concept on National Policy of the Republic of Moldova" in 2003. This document states that Moldovans and Romanians are two distinct peoples and speak two different languages. It also acknowledges that Romanians form an ethnic minority in Moldova, and it asserts that the Republic of Moldova is the legitimate successor to the Principality of Moldavia. On the other hand, Moldovans are only recognized as a distinct ethnic group by former Soviet states. Moreover, in Romania, people from Wallachia and Transylvania call the Romanians inhabiting western Moldavia, now part of Romania, as Moldovans. People in Romanian Moldova call themselves Moldovans, as subethnic denomination, and Romanians, as ethnic denomination (like Kentish and English for English people living in Kent). Romanians from Romania call the Romanians of the Republic of Moldova Bessarabians, as identification inside the subethnic group, Moldovans as subethnic group and Romanians as ethnic group. The subethnic groups referred to here are historically connected to independent Principalities. The Principality of Moldavia/Moldova founded in 1349 had various extensions between 1349 and 1859 and comprised Bucovina and Bessarabia as regional subdivisions. That way, Romanians of southern Bukovina (today part of Romania and formerly part of the historical Moldova) are called Bukovinians, Moldovans and Romanians. In the 2004 Moldovan Census, of the 3,383,332 people living in Moldova, 16.5% (558,508) chose Romanian as their mother tongue, and 60% chose Moldovan. While 40% of all urban Romanian/Moldovan speakers indicated Romanian as their mother tongue, in the countryside, barely one out of seven Romanian/Moldovan speakers indicated Romanian as their mother tongue. Palestinian Prior to the dissolution of the Ottoman Empire, the term "Palestinian" referred to any resident of the region of Palestine, regardless of their ethnic, cultural, linguistic, or religious affiliation. Similarly, during the League of Nations Mandate of Palestine, the term referred to a citizen as defined in the 1925 Citizenship Order. Starting in the late 19th century, Arabic-speaking people of Palestine began referring to themselves as "Arab" or by the endonym "Palestinian Arab" when referencing their specific subgroup. Following the foundation of the State of Israel, the Jews of the former Mandatory Palestine and the Arabs who received Israeli citizenship developed a distinct national identity. Consequently, the meaning of the word shifted to a demonym referring to the Arabs who did not receive citizenship in Israel, Jordan (West Bank residents), or Egypt (Gaza residents). Singaporean In Singapore, most of its country's policies have been focused on the cohesion of its citizens into a united Singaporean national identity. Singapore's cultural norms, psyche, and traditions have led to the classification of "Singaporean" as a unique ethnocultural and socio-ethnic group that is distinct from its neighboring countries. 
In 2013, Singapore's Prime Minister Lee Hsien Loong stated that "apart from numbers, that a strong Singapore core is also about the spirit of Singapore, who we are, what ideals we believe in and what ties bind us together as one people." According to a 2017 survey by the Institute of Policy Studies, 49% of Singaporeans identify with both Singaporeans and their ethnic identity equally, while 35% would exclusively identify as "Singaporeans." Historical scholarship Within the historical profession, the term "ethnogenesis" has been borrowed as a neologism to explain the origins and evolution of so-called barbarian ethnic cultures, stripped of its metaphoric connotations drawn from biology, of "natural" birth and growth. That view is closely associated with the Austrian historian Herwig Wolfram and his followers, who argued that such ethnicity was not a matter of genuine genetic descent ("tribes"). Rather, using Reinhard Wenskus' term Traditionskerne ("nuclei of tradition"), ethnogenesis arose from small groups of aristocratic warriors carrying ethnic traditions from place to place and generation to generation. Followers would coalesce or disband around these nuclei of tradition; ethnicities were available to those who wanted to participate in them with no requirement of being born into a "tribe". Thus, questions of race and place of origin became secondary. Proponents of ethnogenesis may claim it is the only alternative to the sort of ethnocentric and nationalist scholarship that is commonly seen in disputes over the origins of many ancient peoples such as the Franks, Goths, and Huns. It has also been used as an alternative to the Near East's "race history" that had supported Phoenicianism and claims to the antiquity of the variously called Assyrian/Chaldean/Syriac peoples. See also Lev Gumilev (1912–1992), founder of the passionarity theory of ethnogenesis Historiography and nationalism Nation-building Y-DNA haplogroups by ethnic group The Decline of the West – Spengler's account (1918–1923) of the rise and fall of civilisations Notes Kinship and descent Race (human categorization) Ethnicity Ethnology National identity Origin hypotheses of ethnic groups
Mercantilism
Mercantilism is a nationalist economic policy that is designed to maximize the exports and minimize the imports for an economy. In other words, it seeks to maximize the accumulation of resources within the country and use those resources for one-sided trade. The policy aims to reduce a possible current account deficit or reach a current account surplus, and it includes measures aimed at accumulating monetary reserves by a positive balance of trade, especially of finished goods. Historically, such policies might have contributed to war and motivated colonial expansion. Mercantilist theory varies in sophistication from one writer to another and has evolved over time. Mercantilism promotes government regulation of a nation's economy for the purpose of augmenting and bolstering state power at the expense of rival national powers. High tariffs, especially on manufactured goods, were almost universally a feature of mercantilist policy. Before it fell into decline, mercantilism was dominant in modernized parts of Europe and some areas in Africa from the 16th to the 19th centuries, a period of proto-industrialization. Some commentators argue that it is still practised in the economies of industrializing countries, in the form of economic interventionism. With the efforts of supranational organizations such as the World Trade Organization to reduce tariffs globally, non-tariff barriers to trade have assumed a greater importance in neomercantilism. History Mercantilism became the dominant school of economic thought in Europe throughout the late Renaissance and the early modern period (from the 15th to the 18th centuries). Evidence of mercantilistic practices appeared in early modern Venice, Genoa, and Pisa regarding control of the Mediterranean trade in bullion. However, the empiricism of the Renaissance, which first began to quantify large-scale trade accurately, marked mercantilism's birth as a codified school of economic theories. The Italian economist and mercantilist Antonio Serra is considered to have written one of the first treatises on political economy with his 1613 work, A Short Treatise on the Wealth and Poverty of Nations. Mercantilism in its simplest form is bullionism, yet mercantilist writers emphasize the circulation of money and reject hoarding. Their emphasis on monetary metals accords with current ideas regarding the money supply, such as the stimulative effect of a growing money-supply. Fiat money and floating exchange rates have since rendered specie concerns irrelevant. In time, industrial policy supplanted the heavy emphasis on money, accompanied by a shift in focus from the capacity to carry on wars to promoting general prosperity. England began the first large-scale and integrative approach to mercantilism during the Elizabethan Era (1558–1603). An early statement on national balance of trade appeared in Discourse of the Common Wealth of this Realm of England, 1549: "We must always take heed that we buy no more from strangers than we sell them, for so should we impoverish ourselves and enrich them." The period featured various but often disjointed efforts by the court of Queen Elizabeth (r. 1558–1603) to develop a naval and merchant fleet capable of challenging the Spanish stranglehold on trade and of expanding the growth of bullion at home. Queen Elizabeth promoted the Trade and Navigation Acts in Parliament and issued orders to her navy for the protection and promotion of English shipping. 
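The 1549 maxim just quoted is an informal statement of the balance of trade. As a point of clarification added here (the notation does not appear in the source), the quantity mercantilist policy sought to keep positive can be written in LaTeX as

\[
\text{Balance of trade} = X - M > 0,
\]

where $X$ is the value of a country's exports and $M$ the value of its imports, with the surplus settled in bullion flowing into the realm.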
Authors noted most for establishing the English mercantilist system include Gerard de Malynes ( 1585–1641) and Thomas Mun (1571–1641), who first articulated the Elizabethan system (England's Treasure by Foreign Trade or the Balance of Foreign Trade is the Rule of Our Treasure), which Josiah Child (–1699) then developed further. Numerous French authors helped cement French policy around statist mercantilism in the 17th century, as King Louis XIV (reigned 1643–1715) followed the guidance of Jean Baptiste Colbert, his Controller-General of Finances from 1665 to 1683 who revised the tariff system and expanded industrial policy. Colbertism was based on the principle that the state should rule in the economic realm as it did in the diplomatic, and that the interests of the state as identified by the king were superior to those of merchants and of everyone else. Mercantilist economic policies aimed to build up the state, especially in an age of incessant warfare, and theorists charged the state with looking for ways to strengthen the economy and to weaken foreign adversaries. In Europe, academic belief in mercantilism began to fade in the late-18th century after the East India Company annexed the Mughal Bengal, a major trading nation, and the establishment of the British India through the activities of the East India Company, in light of the arguments of Adam Smith (1723–1790) and of the classical economists. French economic policy liberalized greatly under Napoleon (in power from 1799 to 1814/1815). The British Parliament's repeal of the Corn Laws under Robert Peel in 1846 symbolized the emergence of free trade as an alternative system. Theory Most of the European economists who wrote between 1500 and 1750 are today generally described as mercantilists; this term was initially used solely by critics, such as Mirabeau and Smith, but historians proved quick to adopt it. Originally the standard English term was "mercantile system". The word "mercantilism" came into English from German in the early-19th century. The bulk of what is commonly called "mercantilist literature" appeared in the 1620s in Great Britain. Smith saw the English merchant Thomas Mun (1571–1641) as a major creator of the mercantile system, especially in his posthumously published Treasure by Foreign Trade (1664), which Smith considered the archetype or manifesto of the movement. Perhaps the last major mercantilist work was James Steuart's Principles of Political Economy, published in 1767. Mercantilist literature also extended beyond England. Italy and France produced noted writers of mercantilist themes, including Italy's Giovanni Botero (1544–1617) and Antonio Serra (1580–?) and, in France, Jean Bodin and Colbert. Themes also existed in writers from the German historical school from List, as well as followers of the American and British systems of free-trade, thus stretching the system into the 19th century. However, many British writers, including Mun and Misselden, were merchants, while many of the writers from other countries were public officials. Beyond mercantilism as a way of understanding the wealth and power of nations, Mun and Misselden are noted for their viewpoints on a wide range of economic matters. 
The Austrian lawyer and scholar Philipp Wilhelm von Hornick, one of the pioneers of Cameralism, detailed a nine-point program of what he deemed effective national economy in his Austria Over All, If She Only Will of 1684, which comprehensively sums up the tenets of mercantilism:

That every little bit of a country's soil be utilized for agriculture, mining or manufacturing.
That all raw materials found in a country be used in domestic manufacture, since finished goods have a higher value than raw materials.
That a large, working population be encouraged.
That all exports of gold and silver be prohibited and all domestic money be kept in circulation.
That all imports of foreign goods be discouraged as much as possible.
That where certain imports are indispensable they be obtained at first hand, in exchange for other domestic goods instead of gold and silver.
That as much as possible, imports be confined to raw materials that can be finished [in the home country].
That opportunities be constantly sought for selling a country's surplus manufactures to foreigners, so far as necessary, for gold and silver.
That no importation be allowed if such goods are sufficiently and suitably supplied at home.

Other than Von Hornick, there were no mercantilist writers presenting an overarching scheme for the ideal economy, as Adam Smith would later do for classical economics. Rather, each mercantilist writer tended to focus on a single area of the economy. Only later did non-mercantilist scholars integrate these "diverse" ideas into what they called mercantilism. Some scholars thus reject the idea of mercantilism completely, arguing that it gives "a false unity to disparate events". Smith saw the mercantile system as an enormous conspiracy by manufacturers and merchants against consumers, a view that has led some authors, especially Robert E. Ekelund and Robert D. Tollison, to call mercantilism "a rent-seeking society".

To a certain extent, mercantilist doctrine itself made a general theory of economics impossible. Mercantilists viewed the economic system as a zero-sum game, in which any gain by one party required a loss by another. Thus, any system of policies that benefited one group would by definition harm the other, and there was no possibility of economics being used to maximize the commonwealth, or common good. Mercantilists' writings were also generally created to rationalize particular practices rather than as investigations into the best policies.

Mercantilist domestic policy was more fragmented than its trade policy. While Adam Smith portrayed mercantilism as supportive of strict controls over the economy, many mercantilists disagreed. The early modern era was one of letters patent and government-imposed monopolies; some mercantilists supported these, but others acknowledged the corruption and inefficiency of such systems. Many mercantilists also realized that the inevitable results of quotas and price ceilings were black markets.

One notion that mercantilists widely agreed upon was the need for economic oppression of the working population; laborers and farmers were to live at the "margins of subsistence". The goal was to maximize production, with no concern for consumption. Extra money, free time, and education for the lower classes were seen to inevitably lead to vice and laziness, and would result in harm to the economy. The mercantilists saw a large population as a form of wealth that made possible the development of bigger markets and armies.
Opposite to mercantilism was the doctrine of physiocracy, which predicted that mankind would outgrow its resources. The idea of mercantilism was to protect the markets as well as maintain agriculture and those who were dependent upon it. Policies Mercantilist ideas were the dominant economic ideology of all of Europe in the early modern period, and most states embraced it to a certain degree. Mercantilism was centred on England and France, and it was in these states that mercantilist policies were most often enacted. The policies have included: High tariffs, especially on manufactured goods. Forbidding colonies to trade with other nations. Monopolizing markets with staple ports. Banning the export of gold and silver, even for payments. Forbidding trade to be carried in foreign ships, as per, for example, the Navigation Acts. Subsidies on exports. Promoting manufacturing and industry through research or direct subsidies. Limiting wages. Maximizing the use of domestic resources. Restricting domestic consumption through non-tariff barriers to trade. France Mercantilism arose in France in the early 16th century soon after the monarchy had become the dominant force in French politics. In 1539, an important decree banned the import of woolen goods from Spain and some parts of Flanders. The next year, a number of restrictions were imposed on the export of bullion. Over the rest of the 16th century, further protectionist measures were introduced. The height of French mercantilism is closely associated with Jean-Baptiste Colbert, finance minister for 22 years in the 17th century, to the extent that French mercantilism is sometimes called Colbertism. Under Colbert, the French government became deeply involved in the economy in order to increase exports. Protectionist policies were enacted that limited imports and favored exports. Industries were organized into guilds and monopolies, and production was regulated by the state through a series of more than one thousand directives outlining how different products should be produced. To encourage industry, foreign artisans and craftsmen were imported. Colbert also worked to decrease internal barriers to trade, reducing internal tariffs and building an extensive network of roads and canals. Colbert's policies were quite successful, and France's industrial output and the economy grew considerably during this period, as France became the dominant European power. He was less successful in turning France into a major trading power, and Britain and the Dutch Republic remained supreme in this field. New France France imposed its mercantilist philosophy on its colonies in North America, especially New France. It sought to derive the maximum material benefit from the colony, for the homeland, with a minimum of colonial investment in the colony itself. The ideology was embodied in New France through the establishment under Royal Charter of a number of corporate trading monopolies including La Compagnie des Marchands, which operated from 1613 to 1621, and the Compagnie de Montmorency, from that date until 1627. It was in turn replaced by La Compagnie des Cent-Associés, created in 1627 by King Louis XIII, and the Communauté des habitants in 1643. These were the first corporations to operate in what is now Canada. United Kingdom In England, mercantilism reached its peak during the Long Parliament government (1640–60). Mercantilist policies were also embraced throughout much of the Tudor and Stuart periods, with Robert Walpole being another major proponent. 
In Britain, government control over the domestic economy was far less extensive than on the Continent, limited by common law and the steadily increasing power of Parliament. Government-controlled monopolies were common, especially before the English Civil War, but were often controversial.

With respect to its colonies, British mercantilism meant that the government and the merchants became partners with the goal of increasing political power and private wealth, to the exclusion of other European powers. The government protected its merchants, and kept foreign ones out, through trade barriers, regulations, and subsidies to domestic industries in order to maximize exports from and minimize imports to the realm. The government had to fight smuggling, which became a favourite American technique in the 18th century to circumvent the restrictions on trading with the French, Spanish, or Dutch. The goal of mercantilism was to run trade surpluses to benefit the government. The government took its share through duties and taxes, with the remainder going to merchants in Britain. The government spent much of its revenue on the Royal Navy, which both protected the colonies of Britain and was vital in capturing the colonies of other European powers.

British mercantilist writers were themselves divided on whether domestic controls were necessary. British mercantilism thus mainly took the form of efforts to control trade. A wide array of regulations was put in place to encourage exports and discourage imports. Tariffs were placed on imports and bounties given for exports, and the export of some raw materials was banned completely. The Navigation Acts excluded foreign merchants from England's domestic trade. British policies in their American colonies led to friction with the inhabitants of the Thirteen Colonies, and mercantilist policies (such as forbidding trade with other European powers and enforcing bans on smuggling) were a major irritant leading to the American Revolution.

Mercantilism taught that trade was a zero-sum game, with one country's gain equivalent to a loss sustained by the trading partner. Overall, however, mercantilist policies had a positive impact on Britain, helping to transform the nation into the world's dominant trading power and a global hegemon. One domestic policy that had a lasting impact was the conversion of "wastelands" to agricultural use. Mercantilists believed that to maximize a nation's power, all land and resources had to be used to their highest and best use, and this era thus saw projects like the draining of The Fens.

Other countries

The other nations of Europe also embraced mercantilism to varying degrees. The Netherlands, which had become the financial centre of Europe by being its most efficient trader, had little interest in seeing trade restricted and adopted few mercantilist policies. Mercantilism became prominent in Central Europe and Scandinavia after the Thirty Years' War (1618–48), with Christina of Sweden, Jacob Kettler of Courland, and Christian IV of Denmark being notable proponents. The Habsburg Holy Roman Emperors had long been interested in mercantilist policies, but the vast and decentralized nature of their empire made implementing such notions difficult. Some constituent states of the empire did embrace mercantilism, most notably Prussia, which under Frederick the Great had perhaps the most rigidly controlled economy in Europe.
Spain benefited from mercantilism early on as it brought a large amount of precious metals such as gold and silver into its treasury by way of the New World. In the long run, Spain's economy collapsed as it was unable to adjust to the inflation that came with the large influx of bullion. Heavy intervention by the crown imposed crippling laws for the protection of Spanish goods and services. Mercantilist protectionist policy in Spain caused the long-run failure of the Castilian textile industry, as its efficiency dropped off severely with each passing year because production was held at a fixed level. Spain's heavily protected industries led to famines, as much of its agricultural land was required to be used for sheep instead of grain. Much of its grain was imported from the Baltic region of Europe, which caused food shortages in the inner regions of Spain. Spain's limits on the trade of its colonies were one of the causes that led to the separation of the Dutch from the Spanish Empire. The culmination of all of these policies led to Spain defaulting in 1557, 1575, and 1596. During the economic collapse of the 17th century, Spain had little coherent economic policy, but French mercantilist policies were imported by Philip V with some success.

Ottoman Grand Vizier Kemankeş Kara Mustafa Pasha also followed some mercantilist financial policies during the reign of Ibrahim I. Russia under Peter I (Peter the Great) attempted to pursue mercantilism, but had little success because of Russia's lack of a large merchant class or an industrial base.

Wars and imperialism

Mercantilism was the economic counterpart of warfare: it used economics as a tool of conflict by other means, backed up by the state apparatus, and it was well suited to an era of recurrent military warfare. Since the level of world trade was viewed as fixed, it followed that the only way to increase a nation's trade was to take it from another. A number of wars, most notably the Anglo-Dutch Wars and the Franco-Dutch Wars, can be linked directly to mercantilist theories. Most wars had other causes, but they reinforced mercantilism by clearly defining the enemy, and justified damage to the enemy's economy.

Mercantilism fueled the imperialism of this era, as many nations expended significant effort to conquer new colonies that would be sources of gold (as in Mexico) or sugar (as in the West Indies), as well as becoming exclusive markets. European power spread around the globe, often under the aegis of companies with government-guaranteed monopolies in certain defined geographical regions, such as the Dutch East India Company or the Hudson's Bay Company (operating in present-day Canada). With the establishment of overseas colonies by European powers early in the 17th century, mercantile theory gained a new and wider significance, in which its aim and ideal became both national and imperialistic.

The connection between colonialism and mercantilism has been explored by the Marxist economist and sociologist Giovanni Arrighi, who analyzed mercantilism as having three components: "settler colonialism, capitalist slavery, and economic nationalism," and further noted that slavery was "partly a condition and partly a result of the success of settler colonialism."

In France, the triangular trade method was integral to the continuation of mercantilism throughout the 17th and 18th centuries. In order to maximize exports and minimize imports, France worked a strict Atlantic route: from France to Africa, to the Americas, and then back to France.
By transporting enslaved Africans to labor in the New World, France increased the value of that labor and capitalized upon the market resources produced by slave labor.

Mercantilism has continued to be used as a weapon by nations into the 21st century by way of modern tariffs, since such measures put smaller economies in a position of having to conform to larger economies' goals or risk economic ruin through an imbalance in trade. Trade wars often depend on such tariffs and restrictions hurting the opposing economy.

Origins

The term "mercantile system" was used by its foremost critic, Adam Smith, but Mirabeau (1715–1789) had used "mercantilism" earlier. Mercantilism functioned as the economic counterpart of the older version of political power: divine right of kings and absolute monarchy.

Scholars debate over why mercantilism dominated economic ideology for 250 years. One group, represented by Jacob Viner, sees mercantilism as simply a straightforward, common-sense system whose logical fallacies remained opaque to people at the time, as they simply lacked the required analytical tools. The second school, supported by scholars such as Robert B. Ekelund, portrays mercantilism not as a mistake, but rather as the best possible system for those who developed it. This school argues that rent-seeking merchants and governments developed and enforced mercantilist policies. Merchants benefited greatly from the enforced monopolies, bans on foreign competition, and poverty of the workers. Governments benefited from the high tariffs and payments from the merchants. Whereas later economic ideas were often developed by academics and philosophers, almost all mercantilist writers were merchants or government officials. Monetarism offers a third explanation for mercantilism. European trade exported bullion to pay for goods from Asia, thus reducing the money supply and putting downward pressure on prices and economic activity. The evidence for this hypothesis is the lack of inflation in the British economy until the Revolutionary and Napoleonic Wars, when paper money came into vogue. A fourth explanation lies in the increasing professionalisation and technification of the wars of the era, which turned the maintenance of adequate reserve funds (in the prospect of war) into a more and more expensive and eventually competitive business.

Mercantilism developed at a time of transition for the European economy. Isolated feudal estates were being replaced by centralized nation-states as the focus of power. Technological changes in shipping and the growth of urban centers led to a rapid increase in international trade. Mercantilism focused on how this trade could best aid the states. Another important change was the introduction of double-entry bookkeeping and modern accounting. This accounting made extremely clear the inflow and outflow of trade, contributing to the close scrutiny given to the balance of trade. New markets and new mines propelled foreign trade to previously inconceivable volumes, resulting in "the great upward movement in prices" and an increase in "the volume of merchant activity itself".

Prior to mercantilism, the most important economic work done in Europe was by the medieval scholastic theorists. The goal of these thinkers was to find an economic system compatible with Christian doctrines of piety and justice. They focused mainly on microeconomics and on local exchanges between individuals. Mercantilism was closely aligned with the other theories and ideas that began to replace the medieval worldview.
This period saw the adoption of the very Machiavellian realpolitik and the primacy of the raison d'état in international relations. The mercantilist idea of all trade as a zero-sum game, in which each side was trying to best the other in a ruthless competition, was integrated into the works of Thomas Hobbes. This dark view of human nature also fit well with the Puritan view of the world, and some of the most stridently mercantilist legislation, such as the Navigation Ordinance of 1651, was enacted by the government of Oliver Cromwell. Jean-Baptiste Colbert's work in 17th-century France came to exemplify classical mercantilism. In the English-speaking world, its ideas were criticized by Adam Smith with the publication of The Wealth of Nations in 1776 and later by David Ricardo with his explanation of comparative advantage. Mercantilism was rejected by Britain and France by the mid-19th century. The British Empire embraced free trade and used its power as the financial center of the world to promote the same. The Guyanese historian Walter Rodney describes mercantilism as the period of the worldwide development of European commerce, which began in the 15th century with the voyages of Portuguese and Spanish explorers to Africa, Asia, and the New World. End of mercantilism Adam Smith, David Hume, Edward Gibbon, Voltaire and Jean-Jacques Rousseau were the founding fathers of anti-mercantilist thought. A number of scholars found important flaws with mercantilism long before Smith developed an ideology that could fully replace it. Critics like Hume, Dudley North and John Locke undermined much of mercantilism and it steadily lost favor during the 18th century. In 1690, Locke argued that prices vary in proportion to the quantity of money. Locke's Second Treatise also points towards the heart of the anti-mercantilist critique: that the wealth of the world is not fixed, but is created by human labor (represented embryonically by Locke's labor theory of value). Mercantilists failed to understand the notions of absolute advantage and comparative advantage (although this idea was only fully fleshed out in 1817 by David Ricardo) and the benefits of trade. Hume famously noted the impossibility of the mercantilists' goal of a constant positive balance of trade. As bullion flowed into one country, the supply would increase, and the value of bullion in that state would steadily decline relative to other goods. Conversely, in the state exporting bullion, its value would slowly rise. Eventually, it would no longer be cost-effective to export goods from the high-price country to the low-price country, and the balance of trade would reverse. Mercantilists fundamentally misunderstood this, long arguing that an increase in the money supply simply meant that everyone gets richer. The importance placed on bullion was also a central target, even if many mercantilists had themselves begun to de-emphasize the importance of gold and silver. Adam Smith noted that at the core of the mercantile system was the "popular folly of confusing wealth with money", that bullion was just the same as any other commodity, and that there was no reason to give it special treatment. More recently, scholars have discounted the accuracy of this critique. 
They believe Mun and Misselden were not making this mistake in the 1620s, and point to their followers Josiah Child and Charles Davenant, who in 1699 wrote, "Gold and Silver are indeed the Measures of Trade, but that the Spring and Original of it, in all nations is the Natural or Artificial Product of the Country; that is to say, what this Land or what this Labour and Industry Produces." The critique that mercantilism was a form of rent seeking has also seen criticism, as scholars such as Jacob Viner in the 1930s pointed out that merchant mercantilists such as Mun understood that they would not gain by higher prices for English wares abroad. The first school to completely reject mercantilism was the physiocrats, who developed their theories in France. Their theories also had several important problems, and the replacement of mercantilism did not come until Adam Smith published The Wealth of Nations in 1776. This book outlines the basics of what is today known as classical economics. Smith spent a considerable portion of the book rebutting the arguments of the mercantilists, though often these are simplified or exaggerated versions of mercantilist thought. Scholars are also divided over the cause of mercantilism's end. Those who believe the theory was simply an error hold that its replacement was inevitable as soon as Smith's more accurate ideas were unveiled. Those who feel that mercantilism amounted to rent-seeking hold that it ended only when major power shifts occurred. In Britain, mercantilism faded as the Parliament gained the monarch's power to grant monopolies. While the wealthy capitalists who controlled the House of Commons benefited from these monopolies, Parliament found it difficult to implement them because of the high cost of group decision making. Mercantilist regulations were steadily removed over the course of the 18th century in Britain, and during the 19th century, the British government fully embraced free trade and Smith's laissez-faire economics. On the continent, the process was somewhat different. In France, economic control remained in the hands of the royal family, and mercantilism continued until the French Revolution. In Germany, mercantilism remained an important ideology in the 19th and early 20th centuries, when the historical school of economics was paramount. Legacy Adam Smith criticized the mercantile doctrine that prioritized production in the economy; he maintained that consumption was of prime significance. Additionally, the mercantile system was well liked by the traders as it was what is now referred to as rent seeking. John Maynard Keynes affirmed that motivating the production process was as significant as encouraging consumption, which benefited the new mercantilism. Keynes also affirmed that in the post-classical period the primary focus on gold and silver supplies (bullion) was rational. During the era before paper money, an increase in gold and silver was one of the ways of mercantilism increasing an economy's reserve or the supply of money. Keynes reiterated that the doctrines advocated for by mercantilism aided the improvement of both the domestic and foreign outlay—domestic because the policies lowered the domestic rate of interest, and investment by foreigners by tending to create a favorable balance of trade. Keynes and other economists of the 20th century also realized that the balance of payments is an important concern. Keynes also supported government intervention in the economy as necessary, as did mercantilism. 
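Hume's specie-flow objection and Keynes's point about bullion and the money supply, both discussed above, can be summarized with the quantity-theory relation. This is an illustrative sketch in modern notation, not drawn from the mercantilist literature itself; the symbols $M_s$, $V$, $P$, $Y$, $X$ and $M$ are introduced here and do not appear in the source:

\[
M_s V = P Y, \qquad X - M > 0 \;\Rightarrow\; M_s \uparrow \;\Rightarrow\; P \uparrow \;\Rightarrow\; X \downarrow,\ M \uparrow .
\]

Inflowing bullion raises the money supply $M_s$ and hence the domestic price level $P$, making exports dearer and imports cheaper until the surplus erodes, which is the mechanism behind Hume's claim that a permanently positive balance of trade is unattainable.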
The word "mercantilism" remains a pejorative term, often used to attack various forms of protectionism. The similarities between Keynesianism (and its successor ideas) and mercantilism have sometimes led critics to call them neomercantilism. Paul Samuelson, writing within a Keynesian framework, wrote of mercantilism, "With employment less than full and Net National Product suboptimal, all the debunked mercantilist arguments turn out to be valid."

Some other systems that copy several mercantilist policies, such as Japan's economic system, are also sometimes called neo-mercantilist. In an essay appearing in the May 14, 2007 issue of Newsweek, business columnist Robert J. Samuelson wrote that China was pursuing an essentially neo-mercantilist trade policy that threatened to undermine the post–World War II international economic structure.

Murray Rothbard, representing the Austrian School of economics, viewed mercantilism not as a coherent economic theory but rather as a series of post-hoc rationalizations for various economic policies advanced by interested parties.

In specific instances, protectionist mercantilist policies also had an important and positive impact on the state that enacted them. Adam Smith, for instance, praised the Navigation Acts, as they greatly expanded the British merchant fleet and played a central role in turning Britain into the world's naval and economic superpower from the 18th century onward. Some economists thus feel that protecting infant industries, while causing short-term harm, can be beneficial in the long term.

See also
Autarky
British Empire
Money-free market
Neorealism (international relations)
Crony capitalism

Notes

References

Further reading
Heckscher, Eli F. (1936). "Revisions in Economic History: V. Mercantilism." Economic History Review 7#1, pp. 44–54.
Rees, J. F. (1939). "Mercantilism." History 24#94, pp. 129–135.

External links
Thomas Mun's Englands Treasure by Forraign Trade
Adam Smith's Wealth of Nations
Economic history of the world
The economic history of the world encompasses the development of human economic activity throughout time. It has been estimated that throughout prehistory, the world average GDP per capita was about $158 per annum (adjusted to 2013 dollars), and did not rise much until the Industrial Revolution. Cattle were probably the first object or physical thing specifically used in a way similar enough to the modern definition of money, that is, as a medium for exchange. By the 3rd millennium BC, Ancient Egypt was home to almost half of the global population. The city states of Sumer developed a trade and market economy based originally on the commodity money of the shekel which was a certain weight measure of barley, while the Babylonians and their city state neighbors later developed the earliest system of prices using a measure of various commodities that was fixed in a legal code. The early law codes from Sumer could be considered the first (written) financial law, and had many attributes still in use in the current price system today. Temples are history's first documented creditors at interest, beginning in Sumer in the third millennium. Later, in their embassy functions, they legitimized profit‑seeking trade, as well as by being a major beneficiary. According to Herodotus, and most modern scholars, the Lydians were the first people to introduce the use of gold and silver coin around 650–600 BC. The first economist (at least from within opinion generated by the evidence of extant writings) is considered to be Hesiod, by the fact of his having written on the fundamental subject of the scarcity of resources, in Works and Days. Eventually, Indian subcontinent and China accounted for more than half the size of the world economy for the next 1,500 years. In the Middle Ages, the world economy slowly expanded with the increase of population and trade. During the early period of the Middle Ages, Europe was an economic backwater. However, by the later Medieval period, rich trading cities in Italy emerged, creating the first modern accounting and finance systems. During the Industrial Revolution, economic growth in the modern sense first occurred during the Industrial Revolution in Britain and then in the rest of Europe due to high amounts of energy conversion. Economic growth spread to all regions of the world during the twentieth century, when world GDP per capita quintupled. The highest growth occurred in the 1960s during post-war reconstruction. In particular, shipping containers revolutionized trade in the second half of the century, by making it cheaper to transport goods, especially internationally. These gains have not been uniform across the globe; there are still many countries where people, especially young children, die from what are now preventable diseases, such as rotavirus and polio. The Great Recession happened from 2007 to 2009. Since 2020, economies have suffered from the COVID-19 recession. Paleolithic Throughout the Paleolithic Era, which was between 500,000 and 10,000 BC, the primary socio-economic unit was the band (small kin group). Communication between bands occurred for the purposes of trading ideas, stories, tools, foods, animal skins, mates, and other commodities. Economic resources were constrained by typical ecosystem factors: density and replacement rates of edible flora and fauna, competition from other consumers (organisms) and climate. 
Throughout the Upper Paleolithic, humans both dispersed and adapted to a greater variety of environments, and also developed their technologies and behaviors to increase productivity in existing environments taking the global population to between 1 and 15 million. It has been estimated that throughout prehistory, the world average GDP per capita was about $158 per annum (adjusted to 2013 dollars), and did not rise much until the Industrial Revolution. Mesolithic This period began with the end of the last glacial period over 10,000 years ago involving the gradual domestication of plants and animals and the formation of settled communities at various times and places. Neolithic Within each tribe the activity of individuals was differentiated to specific activities, and the characteristic of some of these activities were limited by the resources naturally present and available from within each tribe's territory, creating specializations of skill. Cattle were probably the first object or physical thing specifically used in a way similar enough to the modern definition of money, that is, as a medium for exchange. Trading in red ochre is attested in Swaziland, shell jewellery in the form of strung beads also dates back to this period, and had the basic attributes needed of commodity money. To organize production and to distribute goods and services among their populations, before market economies existed, people relied on tradition, top-down command, or community cooperation. Agriculture emerged in the fertile crescent, and soon after and apparently independently, in South and East Asia, and the Americas. Cultivation provided complementary carbohydrates in diets, and could potentially produce a surplus to feed off-farm workers enabling the development of diversified and stratified societies (including a standing military and 'leisured class'). Soon after livestock became domesticated particularly in the middle east (goats, sheep, cattle), enabling pastoral societies to develop, to exploit lower productivity grasslands unsuited to agriculture. Early antiquity: Bronze and Iron ages Early developments in formal money and finance Ancient Egypt was home to almost half of the global population by 30th century BC. The city states of Sumer developed a trade and market economy based originally on the commodity money of the shekel which was a certain weight measure of barley, while the Babylonians and their city state neighbors later developed the earliest system of prices using a measure of various commodities that was fixed in a legal code. The early law codes from Sumer could be considered the first (written) financial law, and had many attributes still in use in the current price system today; such as codified quantities of money for business deals (interest rates), fines for 'wrongdoing', inheritance rules, laws concerning how private property is to be taxed or divided, within etc. For a summary of the laws, see Babylonian law. Temples are history's first documented creditors at interest, beginning in Sumer in the third millennium. By charging interest and ground rent on their own assets and property, temples helped legitimize the idea of interest‑bearing debt and profit seeking in general. Later, while the temples no longer included the handicraft workshops which characterized third‑millennium Mesopotamia, in their embassy functions they legitimized profit‑seeking trade, as well as by being a major beneficiary. 
Classical and late antiquity

The Achaemenid Empire was the only civilization in all of history to connect over 40% of the global population, accounting for approximately 49.4 million of the world's 112.4 million people in around 480 BC. Later, the Roman Empire expanded to become one of the largest empires in the ancient world, with an estimated 50 to 90 million inhabitants (roughly 20% of the world's population at the time) and covering 5.0 million square kilometres at its height in AD 117. Eventually, India and China accounted for more than half the size of the world economy for the next 1,500 years. Despite their high total GDP, these nations, being major population centers, did not have significantly higher GDP per capita.

Expedition and long-distance commerce

The two major changes in commercial activity brought about by expedition that are known from historical accounts are those led by Alexander the Great, which facilitated multi-national trade, and the Roman conquest of Gaul and invasions of Britain led by Julius Caesar.

External trade with the Roman Empire

During the period of the Occident's trade with Rome, Egypt was the wealthiest of all places within the Roman Empire. The merchants of Rome acquired produce from Persia through Egypt, by way of the port of Berenice, and subsequently the Nile.

Introduction of coinage

According to Herodotus, and most modern scholars, the Lydians were the first people to introduce the use of gold and silver coin. It is thought that these first stamped coins were minted around 650–600 BC. The principal denomination was the stater, which was complemented by fractional coins, such as the third and the sixth of a stater, and so forth in lower denominations.

Developments in economic awareness and thought

The first economist (at least from within opinion generated by the evidence of extant writings) is considered to be Hesiod, by the fact of his having written on the fundamental subject of the scarcity of resources, in Works and Days. The Arthashastra, an Indian work that includes sections on political economy, was composed between the 2nd and 3rd centuries BCE and is often credited to the Indian thinker Chanakya. Greek and Roman thinkers made various economic observations, especially Aristotle and Xenophon. Many other Greek writings show understanding of sophisticated economic concepts. For instance, a form of Gresham's law is presented in Aristophanes' Frogs. Bryson of Heraclea was a neo-Platonist who is cited as having heavily influenced early Muslim economic scholarship.

Middle Ages

In the Middle Ages the world economy slowly expanded with the increase of population and trade. The Silk Road was used for trading between Europe, Central Asia and China. During the early period of the Middle Ages, Europe was an economic backwater; however, by the later medieval period, rich trading cities emerged in Italy, creating the first modern accounting and finance systems. The field of Islamic economics was also introduced. The first banknotes were used in Tang dynasty China in the ninth century (with expanded use during the Song dynasty).

Early Modern Era

The early modern era was a time of mercantilism, colonialism, nationalism, and international trade. The waning of feudalism saw new national economic frameworks begin to be strengthened. After the voyages of Christopher Columbus et al. opened up new opportunities for trade with the New World and Asia, newly powerful monarchies wanted a more powerful military state to boost their status.
Mercantilism was a political movement and an economic theory that advocated the use of the state's military power to ensure that local markets and supply sources were protected. The first banknote in Europe was issued by Stockholms Banco in 1661.

Proto-industrialization

Mughal India, which accounted for a quarter of world GDP in the 17th and early 18th centuries, and especially its largest and economically most developed province, Bengal Subah, which made up some 40% of that share, experienced an unprecedented rise in the rate of population growth, ultimately leading to proto-industrialization.

Industrial Revolution

Economic growth in the modern sense first occurred during the Industrial Revolution in Britain and then in the rest of Europe, due to the high amounts of energy conversion taking place. Global nominal income expanded to $100 billion by 1880. After 1860, the enormous expansion of wheat production in the United States flooded the world market, lowering prices by 40%, and (along with the expansion of potato growing) made a major contribution to the nutritional welfare of the poor.

Twentieth century

Economic growth spread to all regions of the world during the twentieth century, when world GDP per capita quintupled. The highest growth occurred in the 1960s during post-war reconstruction. Global nominal income expanded to $1 trillion by 1960 and $10 trillion by 1980. Some of the increase in the volume of international trade is due to the reclassification of within-country trade as international trade, owing to the increasing number of countries and the resulting changes in national boundaries; however, the effect is small. In particular, shipping containers revolutionized trade in the second half of the century by making it cheaper to transport goods domestically and internationally.

The economic boom of the 1950s and 1960s ended in the 1970s with the 1973 oil crisis and the 1979 oil crisis. The former began in October 1973 when members of the Organization of Arab Petroleum Exporting Countries (OAPEC), led by King Faisal of Saudi Arabia, proclaimed an oil embargo against countries that supported Israel during the Yom Kippur War, causing the price of oil to rise by nearly 300%, from US$3 per barrel ($19/m^3) to nearly US$12 per barrel ($75/m^3) globally (the arithmetic behind these figures is illustrated below). The latter happened in the wake of the Iranian Revolution and the Iran–Iraq War, when oil production in Iran and Iraq decreased dramatically, roughly doubling the price of oil to $39.50 per barrel ($248/m^3). Oil prices did not return to pre-crisis levels until the mid-1980s.

Twenty-first century onwards

Despite setbacks related to the global economic crisis, or "Great Recession", which was largely rooted in housing and in an increased use of leverage by both banks and households, the late twentieth and early twenty-first centuries have seen great increases in global GDP. Much of this increase is due to technological innovations, such as high-speed internet, smartphones, and numerous other technological advances that have changed the way much of the population lives, unlike in any other economic period in history. These gains have not been uniform across the globe, and there are still many countries where people, especially young children, die from what are now preventable diseases, such as rotavirus and polio. The Great Recession happened from 2007 to 2009. Global nominal income expanded to $100 trillion by 2020. Since 2020, economies have suffered from the COVID-19 recession.
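As a quick check on the oil-price figures quoted above, the percentage change can be worked out directly; this is an illustrative calculation derived from the numbers in the text, not an additional sourced figure:

\[
\frac{\$12 - \$3}{\$3} \times 100\% = 300\%
\]

for the 1973–74 shock, while the reported doubling to $39.50 per barrel in 1979–80 implies a pre-crisis price of roughly $39.50 / 2 ≈ $19.75 per barrel, an inference from the text rather than a quoted value.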
See also Notes References Further reading Berend, Ivan T. An Economic History of Nineteenth-Century Europe: Diversity and Industrialization (Cambridge University Press. 2012) Berend, Ivan T. An Economic History of Twentieth-Century Europe: Economic Regimes from Laissez-Faire to Globalization (Cambridge University Press, 2006) Bernstein, William J. A Splendid Exchange: How Trade Shaped the World (Atlantic Monthly Press, 2008) Birmingham, David. Trade and Empire in the Atlantic, 1400–1600 (Routledge, 2000). Bowden, Bradley. "Management history in the modern world: an overview." in The Palgrave Handbook of Management History (2020): 3-22. Cipolla, Carlo M. The economic history of world population (1978). online Coggan, Philip. More: A History of the World Economy from the Iron Age to the Information Age (Hachette UK, 2020). Day, Clive. A History of Commerce. New York [etc.]: Longmans, Green, and Co, 1921. online DeLong, J. Bradford. Slouching Towards Utopia: An Economic History of the Twentieth Century (2022) global history with stress on USA. Harreld, Donald J. An Economic History of the World Since 1400 (2016) online 48 university lectures Liss, Peggy K. Atlantic Empires: The Network of Trade and Revolution, 1713–1826 (Johns Hopkins University Press, 1983). Neal, Larry, and Rondo Cameron. A Concise Economic History of the World: From Paleolithic Times to the Present (5th ed. 2015) 3003 edition online North, Douglass C., and Robert Paul Thomas. The rise of the western world: A new economic history (Cambridge University Press, 1973). online Northrup, Cynthia Clark, ed. Encyclopedia of World Trade. Volumes 1-4: From Ancient Times to the Present (Routledge, 2004). 1200pp online Persson, Karl Gunnar, and Paul Sharp. An economic history of Europe (Cambridge University Press, 2015). Pomeranz, Kenneth. The World That Trade Created: Society, Culture, And the World Economy, 1400 to the Present (3rd ed. 2012) Vaidya, Ashish, ed. Globalization: Encyclopedia of Trade, Labor, and Politics (2 vol 2005) Economic history World history
History of sociology
Sociology as a scholarly discipline emerged, primarily out of Enlightenment thought, as a positivist science of society shortly after the French Revolution. Its genesis owed to various key movements in the philosophy of science and the philosophy of knowledge, arising in reaction to such issues as modernity, capitalism, urbanization, rationalization, secularization, colonization and imperialism. During its nascent stages, within the late 19th century, sociological deliberations took particular interest in the emergence of the modern nation state, including its constituent institutions, units of socialization, and its means of surveillance. As such, an emphasis on the concept of modernity, rather than the Enlightenment, often distinguishes sociological discourse from that of classical political philosophy. Likewise, social analysis in a broader sense has origins in the common stock of philosophy, therefore pre-dating the sociological field.

Various quantitative social research techniques have become common tools for governments, businesses, and organizations, and have also found use in the other social sciences. Divorced from theoretical explanations of social dynamics, this has given social research a degree of autonomy from the discipline of sociology. Similarly, "social science" has come to be appropriated as an umbrella term to refer to various disciplines which study humans, interaction, society or culture.

As a discipline, sociology encompasses a varying scope of conception based on each sociologist's understanding of the nature and scope of society and its constituents. A merely linear definition of the science would not do justice to the aims and efforts of sociological study arising from different academic backgrounds.

Antecedent history

Scope of being "sociological"

The codification of sociology as a word, concept, and popular terminology is identified with Emmanuel Joseph Sieyès (see 18th century section) and succeeding figures from that point onward. It is important to be mindful of presentism, that is, of introducing ideas of the present into the past, when discussing sociology. The figures discussed below developed strong methods and critiques that reflect what we now know sociology to be, which situates them as important figures in the development of knowledge around the discipline. However, the term "sociology" did not exist in this period, requiring careful language to incorporate these earlier efforts into the wider history of sociology. A more apt term might be proto-sociology, which conveys that the rough ingredients of sociology were present but had no defined shape or label by which to understand them as the sociology we conceptualize today.

Ancient Greeks

Sociological reasoning may be traced back at least as far as the ancient Greeks, whose characteristic trends in sociological thought can be traced back to their social environment. Given the rarity of extensive or highly centralized political organization within states, the tribal spirit of localism and provincialism formed the natural setting for deliberations on social phenomena, and this spirit would thus pervade much of Greek thought. Proto-sociological observations can be seen in the founding texts of Western philosophy (e.g. Herodotus, Thucydides, Plato, Polybius, etc.). Similarly, the methodological survey can trace its origins back to the Domesday Book, ordered by William the Conqueror, King of England, in 1086.
13th century: studying social patterns

East Asia

Sociological perspectives can also be found in non-European thought, in figures such as Confucius.

Ma Duanlin

In the 13th century, Ma Duanlin, a Chinese historian, first recognized patterns of social dynamics as an underlying component of historical development in his seminal encyclopedia.

14th century: early studies of social conflict and change

North Africa

Ibn Khaldun

There is evidence of early Muslim sociology from the 14th century. In particular, some consider the Islamic scholar Ibn Khaldun, a 14th-century Arab from Tunisia, to have been the first sociologist and, thus, the father of sociology. His Muqaddimah (later translated as Prolegomena in Latin), serving as an introduction to a seven-volume analysis of universal history, would perhaps be the first work to advance social-scientific reasoning and social philosophy in formulating theories of social cohesion and social conflict.

Concerning the discipline of sociology, Khaldun conceived a dynamic theory of history that involved conceptualizations of social conflict and social change. He developed the dichotomy of sedentary life versus nomadic life, as well as the concept of generation, and the inevitable loss of power that occurs when desert warriors conquer a city. According to the 20th-century Syrian scholar Sati' al-Husri, the Muqaddimah may be read as a sociological work; six books of general sociology, to be specific. Topics dealt with in this work include politics, urban life, economics, and knowledge. The work is based around Khaldun's central concept of asabiyyah, meaning "social cohesion", "group solidarity", or "tribalism". Khaldun suggests such cohesion arises spontaneously amongst tribes and other small kinship groups, which can then be intensified and enlarged through religious ideology. Khaldun's analysis observes how this cohesion carries groups to power while simultaneously containing within itself the psychological, sociological, economic, and political seeds of the group's downfall, to be replaced by a new group, dynasty, or empire bound by an even stronger (or at least younger and more vigorous) cohesion.

18th century: European modern origins of sociology

The term "sociologie" was first coined by the French essayist Emmanuel Joseph Sieyès (1748–1836), derived from the Latin socius ("companion") joined with the suffix -ology ("the study of"), itself from the Greek lógos.

19th century: defining sociology

In 1838, the French scholar Auguste Comte ultimately gave sociology the definition that it holds today. Comte had earlier expressed his work as "social physics"; however, that term would be appropriated by others, such as the Belgian statistician Adolphe Quetelet.

European sociology: The Enlightenment and positivism

Henri de Saint-Simon

Henri de Saint-Simon published Physiologie sociale in 1813, devoting much of his time to the prospect that human society could be steered toward progress if scientists would form an international assembly to influence its course. He argued that scientists could distract groups from war and strife by focusing their attention on generally improving their societies' living conditions. In turn, this would bring multiple cultures and societies together and prevent conflict. Saint-Simon took the belief in science that the Enlightenment had encouraged in everyone and gave it a more practical, hands-on application for society. Saint-Simon's main idea was that industrialism would mark a new departure in history.
He saw that people had come to treat progress as something belonging to science alone, but he wanted them to see it as an approach to all aspects of life. Society was undergoing a crucial change at the time as it grew out of a declining feudalism, and this new path could provide the basis for solving the old problems society had previously encountered. He was more concerned with whether people participated in the workforce than with which kind of work they chose. His slogan became "All men must work", to which communism would later add its own slogan, "Each according to his capacity."

Auguste Comte and followers
Writing after the original Enlightenment political philosophers of the social contract, and influenced by the work of Saint-Simon, Auguste Comte hoped to unify all studies of humankind through the scientific understanding of the social realm. His own sociological scheme was typical of the 19th-century humanists; he believed all human life passed through distinct historical stages and that, if one could grasp this progress, one could prescribe the remedies for social ills. Sociology was to be the "queen science" in Comte's schema; all the basic physical sciences had to arrive first, leading up to the most fundamentally difficult science: that of human society itself. Comte has thus come to be viewed as the "Father of Sociology". Comte delineated his broader philosophy of science in the Course of Positive Philosophy (c. 1830–1842), whereas his A General View of Positivism (1848) emphasized the particular goals of sociology. Comte was so impressed with his theory of positivism that he referred to it as "the great discovery of the year 1822."

Comte's system rests on his law of three stages of knowledge. The law asserts that any kind of knowledge always begins in the theological form, in which it is explained by a superior supernatural power such as animism, spirits, or gods. It then passes to the metaphysical form, where knowledge is explained by abstract philosophical speculation. Finally, knowledge becomes positive once it is explained scientifically through observation, experimentation, and comparison. The stages follow one another in order of increasing difficulty. Comte's description of the development of society parallels Karl Marx's theory of historical development from capitalism to communism. Both were influenced by various utopian-socialist thinkers of the day, and both agreed that some form of communism would be the climax of societal development.

In later life, Comte developed a "religion of humanity" to give positivist societies the unity and cohesiveness that traditional worship had once provided. In this new "religion", Comte referred to society as the "Great Being" and promoted a universal love and harmony taught through the teachings of his industrial system theory. For his close associate John Stuart Mill, it was possible to distinguish between a "good Comte" (the author of the Course in Positive Philosophy) and a "bad Comte" (the author of the secular-religious system). The system itself was unsuccessful but, together with the publication of Darwin's On the Origin of Species, it influenced the proliferation of various secular humanist organizations in the 19th century, especially through the work of secularists such as George Holyoake and Richard Congreve.
Harriet Martineau undertook an English translation of the Cours de philosophie positive that was published in two volumes in 1853 as The Positive Philosophy of Auguste Comte (freely translated and condensed by Harriet Martineau). Comte recommended her volumes to his students instead of his own. Some writers regard Martineau as the first female sociologist. Her introduction of Comte to the English-speaking world, and the elements of sociological perspective in her original writings, support her credit as a sociologist.

Marx and historical materialism
Both Comte and Marx intended to develop a new scientific ideology in the wake of European secularization. Marx, in the tradition of Hegelianism, rejected the positivist method and was in turn rejected by the self-proclaimed sociologists of his day. However, in attempting to develop a comprehensive science of society, Marx nevertheless became recognized as a founder of sociology by the mid-20th century. Isaiah Berlin described Marx as the "true father" of modern sociology, "in so far as anyone can claim the title."

In the 1830s, Karl Marx was part of the Young Hegelians in Berlin, who discussed and wrote about the legacy of the philosopher Georg W. F. Hegel (whose seminal tome, the Science of Logic, was published in 1816). Although at first sympathetic to the group's strategy of attacking Christianity to undermine the Prussian establishment, Marx later formed divergent ideas and broke with the Young Hegelians, attacking their views in works such as The German Ideology. Witnessing the struggles of the laborers during the Industrial Revolution, Marx concluded that it is not religion (the "ideal") that forms the basis of the establishment's power, but rather the ownership of capital (the "material"): the processes that employ technologies, land, money and, especially, human labor-power to create surplus-value lie at the heart of the establishment's power. This "stood Hegel on his head", as Marx theorized that, at its core, the engine of history and the structure of society was fundamentally material rather than ideal. He theorized that the realms of cultural production and political power both created ideologies that perpetuated the oppression of the working class and the concentration of wealth within the capitalist class: the owners of the means of production. Marx predicted that the capitalist class would feel compelled to reduce wages or replace laborers with technology, which would ultimately increase wealth among the capitalists. However, as the workers were also the primary consumers of the goods produced, reducing their wages would result in an inevitable collapse of capitalism as a mode of economic production. Marx also co-operated with Friedrich Engels, who accused the capitalist class of "social murder" for condemning workers to a "life of toil and wretchedness" while taking "no further trouble in the matter". This, Engels argued, gives the capitalists power over the workers' health and income, "which can decree his life or death". His book The Condition of the Working Class in England (1844) studied the life of the proletariat in Manchester, London, Dublin, and Edinburgh.

Durkheim and French sociology
Émile Durkheim's work is important because he was concerned with how societies could maintain their integrity and coherence in modernity, an era in which traditional social and religious ties can no longer be assumed and in which new social institutions have come into being. Durkheim's first major sociological work was The Division of Labour in Society (1893).
In 1895, he published The Rules of Sociological Method and set up the first European department of sociology at the University of Bordeaux (constituted as such in 1896), where he taught from 1887 to 1902, becoming France's first professor of sociology. In 1898, he established the journal L'Année Sociologique. Durkheim's seminal monograph, Suicide (1897), a study of suicide rates in Catholic and Protestant populations, pioneered modern social research and served to distinguish social science from psychology and political philosophy. The Elementary Forms of the Religious Life (1912) presented a theory of religion, comparing the social and cultural lives of aboriginal and modern societies. Durkheim was deeply preoccupied with the acceptance of sociology as a legitimate science. He refined the positivism originally set forth by Auguste Comte, promoting what could be considered a form of epistemological realism, as well as the use of the hypothetico-deductive model in social science. For him, sociology was the science of institutions, if this term is understood in its broader meaning as "beliefs and modes of behaviour instituted by the collectivity", and its aim was to discover structural social facts. Durkheim was a major proponent of structural functionalism, a foundational perspective in both sociology and anthropology. In his view, social science should be purely holistic; that is, sociology should study phenomena attributed to society at large, rather than being limited to the specific actions of individuals. He remained a dominant force in French intellectual life until his death in 1917, presenting numerous lectures and published works on a variety of topics, including the sociology of knowledge, morality, social stratification, religion, law, education, and deviance. Durkheimian terms such as "collective consciousness" have since entered the popular lexicon.

German sociology: Tönnies, the Webers, Simmel
Ferdinand Tönnies argued that Gemeinschaft and Gesellschaft were the two normal types of human association. The former was the traditional kind of community, with strong social bonds and shared beliefs, while the latter was modern society, in which individualism and rationality had become more dominant. He also drew a sharp line between the realm of conceptuality and the reality of social action: the first must be treated axiomatically and in a deductive way ('pure' sociology), whereas the second empirically and in an inductive way ('applied' sociology). His ideas were further developed by Max Weber, another early German sociologist. Weber argued for the study of social action through interpretive (rather than purely empiricist) means, based on understanding the purpose and meaning that individuals attach to their own actions. Unlike Durkheim, he did not believe in monocausal explanations, proposing instead that for any outcome there can be multiple causes. Weber's main intellectual concern was understanding the processes of rationalisation, secularisation, and "disenchantment", which he associated with the rise of capitalism and modernity. Weber is also known for his thesis combining economic sociology and the sociology of religion, elaborated in his book The Protestant Ethic and the Spirit of Capitalism, in which he proposed that ascetic Protestantism was one of the major "elective affinities" associated with the rise in the Western world of market-driven capitalism and the rational-legal nation-state. He argued that the basic tenets of Protestantism served to boost capitalism.
Thus, it can be said that the spirit of capitalism is inherent to Protestant religious values. Against Marx's historical materialism, Weber emphasised the importance of cultural influences embedded in religion as a means for understanding the genesis of capitalism. The Protestant Ethic formed the earliest part of Weber's broader investigations into world religion. In another major work, "Politics as a Vocation", Weber defined the state as an entity that successfully claims a "monopoly of the legitimate use of physical force within a given territory". He was also the first to categorise social authority into distinct forms, which he labelled as charismatic, traditional, and rational-legal. His analysis of bureaucracy emphasised that modern state institutions are increasingly based on rational-legal authority.

Weber's wife, Marianne Weber, also became a sociologist in her own right, writing about women's issues. She wrote Wife and Mother in the Development of Law, which was devoted to the analysis of the institution of marriage. Her conclusion was that marriage is "a complex and ongoing negotiation over power and intimacy, in which money, women's work, and sexuality are key issues". Another theme in her work was that women's work could be used to "map and explain the construction and reproduction of the social person and the social world". Human work creates cultural products ranging from small, daily values such as cleanliness and honesty to larger, more abstract phenomena like philosophy and language.

Georg Simmel was one of the first generation of German sociologists: his neo-Kantian approach laid the foundations for sociological antipositivism, asking 'What is society?' in a direct allusion to Kant's question 'What is nature?', and presenting pioneering analyses of social individuality and fragmentation. For Simmel, culture referred to "the cultivation of individuals through the agency of external forms which have been objectified in the course of history". Simmel discussed social and cultural phenomena in terms of "forms" and "contents" with a transient relationship, form becoming content and vice versa, depending on the context. In this sense he was a forerunner to structuralist styles of reasoning in the social sciences. With his work on the metropolis, Simmel was a precursor of urban sociology, symbolic interactionism and social network analysis. Simmel's most famous works today are The Problems of the Philosophy of History (1892), The Philosophy of Money (1900), The Metropolis and Mental Life (1903), Soziologie (1908, including The Stranger, The Social Boundary, The Sociology of the Senses, The Sociology of Space, and On The Spatial Projections of Social Forms), and Fundamental Questions of Sociology (1917).

Herbert Spencer
Herbert Spencer (1820–1903), the English philosopher, was one of the most popular and influential 19th-century sociologists, although his work has largely fallen out of favor in contemporary sociology. The early sociology of Spencer came about broadly as a reaction to Comte and Marx; writing before and after the Darwinian revolution in biology, Spencer attempted to reformulate the discipline in socially Darwinistic terms. In fact, his early writings show a coherent theory of general evolution several years before Darwin published anything on the subject. Encouraged by his friend and follower Edward L. Youmans, Spencer published The Study of Sociology in 1874, which was the first book with the term "sociology" in the title.
It is estimated that he sold one million books in his lifetime, far more than any other sociologist at the time. So strong was his influence that many other 19th-century thinkers, including Émile Durkheim, defined their ideas in relation to his. Durkheim's Division of Labour in Society is to a large extent an extended debate with Spencer, from whose sociology Durkheim borrowed extensively. Also a notable biologist, Spencer coined the term "survival of the fittest" as a basic mechanism by which more effective socio-cultural forms progressed. On his reading, this meant that the stronger members of society should not be made to help the weaker: the rich would remain rich and the poor would remain poor, since the rich would not aid them. In the 20th century, Spencer's work became less influential in sociology because of his social Darwinist views on race, which are widely considered a form of scientific racism. For example, in his Social Statics (1850), he argued that imperialism had served civilization by clearing the inferior races off the earth: "The forces which are working out the great scheme of perfect happiness, taking no account of incidental suffering, exterminate such sections of mankind as stand in their way. … Be he human or be he brute – the hindrance must be got rid of." Largely because of his work on race, Spencer is now described in the academy as "of all the great Victorian thinkers... [the one] whose reputation has fallen the farthest."

North American sociology

Lester Frank Ward
A contemporary of Spencer, Lester Frank Ward is often described as a father of American sociology and served as the first president of the American Sociological Association from 1905 until 1907. He published Dynamic Sociology in 1883, Outlines of Sociology in 1898, Pure Sociology in 1903, and Applied Sociology in 1906. Also in 1906, at the age of 65, he was appointed professor of sociology at Brown University.

W. E. B. Du Bois
W. E. B. Du Bois produced his first major work, The Philadelphia Negro (1899), a detailed and comprehensive sociological study of the African-American people of Philadelphia, based on the field work he did in 1896–1897. The work was a breakthrough in scholarship because it was the first scientific study of African Americans and a significant contribution to early scientific sociology in the U.S. In the study, Du Bois coined the phrase "the submerged tenth" to describe the black underclass. Later, in 1903, he popularized the term "Talented Tenth", applied to society's elite class. Du Bois's terminology reflected his opinion that the elite of a nation, both black and white, were critical to achievements in culture and progress. To portray the genius and humanity of the black race, Du Bois published The Souls of Black Folk (1903), a collection of 14 essays. The introduction famously proclaimed that "the problem of the Twentieth Century is the problem of the color line." A major theme of the work was the double consciousness faced by African Americans: being both American and black. This was a unique identity which, according to Du Bois, had been a handicap in the past but could be a strength in the future: "Henceforth, the destiny of the race could be conceived as leading neither to assimilation nor separatism but to proud, enduring hyphenation."
Other precursors
Many other philosophers and academics were influential in the development of sociology, not least the Enlightenment theorists of the social contract and historians such as Adam Ferguson (1723–1816). For his theory on social interaction, Ferguson has himself been described as "the father" of modern sociology. Ferguson argued that capitalism was diminishing the social bonds that traditionally held communities together. Other early works to appropriate the term 'sociology' included A Treatise on Sociology, Theoretical and Practical by the North American lawyer Henry Hughes and Sociology for the South, or the Failure of Free Society by the American lawyer George Fitzhugh. Both books were published in 1854, in the context of the debate over slavery in the antebellum US. Harriet Martineau, a Whig social theorist and the English translator of many of Comte's works, has been cited as the first female sociologist. Writing a study of the United States, she noted how the theoretical ideal of equality apparent in the Declaration of Independence was not reflected in the social reality of the country, which marginalised women and practiced slavery. Various other early social historians and economists have gained recognition as classical sociologists, including Robert Michels (1876–1936), Alexis de Tocqueville (1805–1859), Vilfredo Pareto (1848–1923) and Thorstein Veblen (1857–1926). The classical sociological texts broadly differ from political philosophy in the attempt to remain scientific, systematic, structural, or dialectical, rather than purely moral, normative or subjective. The new class relations associated with the development of capitalism are also key, further distinguishing sociological texts from the political philosophy of the Renaissance and Enlightenment eras.

19th century: institutionalization

Rise as an academic discipline

Europe
Formal institutionalization of sociology as an academic discipline began when Émile Durkheim founded the first French department of sociology at the University of Bordeaux in 1895. In 1896, he established the journal L'Année Sociologique. A course entitled "sociology" was taught for the first time in the United States in 1875 by William Graham Sumner, drawing upon the thought of Comte and Herbert Spencer rather than the work of Durkheim. In 1890, the oldest continuing sociology course in the United States began at the University of Kansas, lectured by Frank Blackmar. The Department of History and Sociology at the University of Kansas was established in 1891, and the first full-fledged independent university department of sociology was established in 1892 at the University of Chicago by Albion W. Small (1854–1926), who in 1895 founded the American Journal of Sociology. American sociology arose on a broadly independent trajectory to European sociology. George Herbert Mead and Charles H. Cooley were influential in the development of symbolic interactionism and social psychology at the University of Chicago, while Lester Ward emphasized the central importance of the scientific method with the publication of Dynamic Sociology in 1883. The first sociology department in the United Kingdom was founded at the London School of Economics in 1904. In 1919 a sociology department was established in Germany at the Ludwig Maximilian University of Munich by Max Weber, who had established a new antipositivist sociology.
The "Institute for Social Research" at the University of Frankfurt (later to become the "Frankfurt School" of critical theory) was founded in 1923.[29] Critical theory would take on something of a life of its own after WW2, influencing literary theory and the "Birmingham School" of cultural studies. The University of Frankfurt's advances along with the close proximity to the research institute for sociology made Germany a powerful force in leading sociology at that time. In 1918, Frankfurt received the funding to create sociology's first department chair. The Germany's groundbreaking work influenced its government to add the position of Minister of Culture to advance the country as a whole. The remarkable collection of men who were contributing to the sociology department at Frankfurt were soon getting worldwide attention and began being referred to as the "Frankfurt school." Here they studied new perspectives on Marx' theories, and went into depth in the works of Weber and Freud. Most of these men would soon be forced out of Germany by the Nazis, moving to America. In the United States they had a significant influence on social research. This forced relocation of sociologists enabled sociology in America to rise up to the standards of European studies of sociology by planting some of Europe's greatest sociologists in America. Felix Weil was one of the students who received their doctorate on the concept of socialization from the University of Frankfurt. He, along with Max Horkheimer and Kurt Albert Gerlach, developed the Institute of Social Research after it was established in 1923. Kurt Albert Gerlach would serve as the institute's first director. Their goal in creating the institute was to produce a place that people could discover and be informed of social life as a whole. Weil, Horkheimer, and Gerlach wanted to focus on interactions between economics, politics, legal matters, as well as scholarly interactions in the community and society. The main research that got the institute known was its revival of scientific Marxism. Many benefactors contributed money, supplies, and buildings to keep this area of research going. When Gerlach became ill and had to step down as director, Max Horkheimer took his place. He encouraged the students of the institute to question everything they studied. If the students studied a theory, he not only wanted them to discover its truth themselves, but also to discover how, and why it is true and the theories relation to society. The National Socialist regime exiled many of the members of the Institute of Social Research. The regime also forced many students and staff from the entire Frankfurt University, and most fled to America. The war meant that the institute lost too many people and was forced to close. In 1950, the institute was reopened as a private establishment. From this point on the Institute of Social Research would have a close connection to sociology studies in the United States. North America In 1905 the American Sociological Association, the world's largest association of professional sociologists, was founded, and Lester F. Ward was selected to serve as the first President of the new society. The University of Chicago developed the major sociologists at the time. It brought them together, and even gave them a hub and a network to link all the leading sociologists. In 1925, a third of all sociology graduate students attended the University of Chicago. Chicago was very good at not isolating their students from other schools. 
The department encouraged its students to mix with sociologists elsewhere, and not to spend more time in the classroom than out studying the society around them, which taught them the real-life application of classroom teachings. The first teachings at the University of Chicago focused on the social problems the world had been dealt. At the time, academia was not concerned with theory to anything like the degree it is today, and many people were still hesitant about sociology, especially given the recent controversial theories of Weber and Marx. The University of Chicago went in an entirely different direction: its sociology department directed its attention to the individual and promoted equal rights. Its concentration was small groups and the individual's relationship to society. The program combined with other departments to offer students well-rounded studies, requiring courses in hegemony, economics, psychology, multiple social sciences, and political science. Albion Small was the head of the sociology program at the University of Chicago and played a key role in bringing German sociological advancements directly into American academic sociology; Small also created the American Journal of Sociology. Robert Park and Ernest Burgess refined the program's methods, guidelines, and checkpoints, which made its findings more standardized, concise and easier to comprehend. The pair also wrote the sociology program's textbook to serve as a common reference and get all students onto the same page more effectively. Many remarkable sociologists, such as George Herbert Mead, W. E. B. Du Bois, Robert Park, Charles S. Johnson, William Ogburn, Herbert Blumer and many others, have significant ties to the University of Chicago.

William I. Thomas was an early graduate of the Sociology Department of the University of Chicago; he built upon his education, and his work changed sociology in many ways. In 1918, William I. Thomas and Florian Znaniecki published The Polish Peasant in Europe and America. The publication combined sociological theory with in-depth empirical research, thus launching methodical sociological research as a whole. It changed sociologists' methods, enabled them to see new patterns and connect new theories, and gave them a new way to ground their research and substantiate it; their research became more solid and harder for society to ignore. In 1920, Znaniecki (1882–1958) set up a sociology department in Poland to expand research and teaching there. The lack of sociological theory taught at the University of Chicago, paired with the still-new foundations of statistical methods, left students poorly placed to make any real predictions, and this was a major factor in the decline of the Chicago school.

International
International cooperation in sociology began in 1893, when René Worms (1869–1926) founded the small Institut International de Sociologie, which was eclipsed by the much larger International Sociological Association from 1949.

Canonization of Durkheim, Marx and Weber
Durkheim, Marx, and Weber are typically cited as the three principal architects of modern social science. The sociological "canon of classics", with Durkheim and Weber at the top, owes in part to Talcott Parsons, who is largely credited with introducing both to American audiences.
Parsons' Structure of Social Action (1937) consolidated the American sociological tradition and set the agenda for American sociology at the point of its fastest disciplinary growth. In Parsons' canon, however, Vilfredo Pareto holds greater significance than either Marx or Simmel. His canon was guided by a desire to "unify the divergent theoretical traditions in sociology behind a single theoretical scheme, one that could in fact be justified by purely scientific developments in the discipline during the previous half century." While the secondary role Marx plays in early American sociology may be attributed to Parsons, as well as to broader political trends, the dominance of Marxism in European sociological thought had long since secured the rank of Marx alongside Durkheim and Weber as one of the three "classical" sociologists.

19th century: From positivism to anti-positivism
The methodological approach of early theorists was to treat sociology in broadly the same manner as natural science. An emphasis on empiricism and the scientific method sought to provide an incontestable foundation for any sociological claims or findings, and to distinguish sociology from less empirical fields such as philosophy. This perspective, termed positivism, was first developed by the theorist Auguste Comte. Positivism was founded on the theory that the only true, factual knowledge is scientific knowledge. Comte set rigorous guidelines for a theory to be considered positivist: he thought that such authentic knowledge can only be derived from the positive confirmation of theories through strict, continuously tested methods that are not only scientific but also quantitative. Émile Durkheim was a major proponent of theoretically grounded empirical research, seeking correlations to reveal structural laws, or "social facts". Durkheim argued that concepts that had been attributed to the individual were actually socially determined; such phenomena include suicide, crime, moral outrage, personality, time, space, and God. He brought to light that society influences all aspects of a person, far more than had previously been believed. For him, sociology could be described as the "science of institutions, their genesis and their functioning". Durkheim endeavoured to apply sociological findings in the pursuit of political reform and social solidarity. Today, scholarly accounts of Durkheim's positivism may be vulnerable to exaggeration and oversimplification: Comte was the only major sociological thinker to postulate that the social realm may be subject to scientific analysis in the same way as natural science, whereas Durkheim acknowledged in greater detail the fundamental epistemological limitations. Reactions against positivism began when the German philosopher Georg Wilhelm Friedrich Hegel (1770–1831) voiced opposition to both empiricism, which he rejected as uncritical, and determinism, which he viewed as overly mechanistic. Karl Marx's methodology borrowed from Hegelian dialectics but also rejected positivism in favour of critical analysis, seeking to supplement the empirical acquisition of "facts" with the elimination of illusions. He maintained that appearances need to be critiqued rather than simply documented. Marx nonetheless endeavoured to produce a science of society grounded in the economic determinism of historical materialism.
Other philosophers, including Wilhelm Dilthey (1833–1911) and Heinrich Rickert (1863–1936), argued that the natural world differs from the social world because of those unique aspects of human society (meanings, signs, and so on) which inform human cultures. In Italy, speculative knowledge long prevailed over positivistic sociological science: the appeal of the social sciences was vitiated by the self-reformism of morality and the self-assertion of science, a situation that lasted until the 1950s. After that there was a revival, and sociological science gradually asserted itself as an academic discipline (see Guglielmo Rinzivillo, Science and the Object: Self-criticism of Strategic Knowledge, Milan, Franco Angeli, 2010, p. 52 ff., ISBN 9788856824872). At the turn of the 20th century the first generation of German sociologists formally introduced methodological antipositivism, proposing that research should concentrate on human cultural norms, values, symbols, and social processes viewed from a subjective perspective. Max Weber argued that sociology may be loosely described as a 'science' insofar as it is able to identify causal relationships, especially among ideal types, or hypothetical simplifications of complex social phenomena. As a nonpositivist, however, he sought relationships that are not as "ahistorical, invariant, or generalizable" as those pursued by natural scientists. Both Weber and Georg Simmel pioneered the Verstehen (or 'interpretative') approach toward social science: a systematic process in which an outside observer attempts to relate to a particular cultural group, or indigenous people, on their own terms and from their own point of view. Through the work of Simmel, in particular, sociology acquired a possible character beyond positivist data-collection or grand, deterministic systems of structural law. Relatively isolated from the sociological academy throughout his lifetime, Simmel presented idiosyncratic analyses of modernity more reminiscent of the phenomenological and existential writers than of Comte or Durkheim, paying particular attention to the forms of, and possibilities for, social individuality. His sociology engaged in a neo-Kantian critique of the limits of perception, asking 'What is society?' in a direct allusion to Kant's question 'What is nature?'

20th century: functionalism, structuralism, critical theory and globalization

Early 20th century
In the early 20th century, sociology expanded in the U.S., including developments in both macrosociology, concerned with the evolution of societies, and microsociology, concerned with everyday human social interactions. Based on the pragmatic social psychology of George Herbert Mead (1863–1931), Herbert Blumer (1900–1987) and, later, the Chicago school, sociologists developed symbolic interactionism. In the 1920s, György Lukács released History and Class Consciousness (1923), while a number of works by Durkheim and Weber were published posthumously. During the same period, members of the Frankfurt school, such as Theodor W. Adorno (1903–1969) and Max Horkheimer (1895–1973), developed critical theory, integrating the historical materialist elements of Marxism with the insights of Weber, Freud and Gramsci (in theory, if not always in name), often characterizing capitalist modernity as a move away from the central tenets of the Enlightenment. In the 1930s, Talcott Parsons (1902–1979) sought to bring together the various strands of sociology, with the aim of developing a universal methodology.
He developed action theory and functionalism, integrating the study of social order with the structural and voluntaristic aspects of macro and micro factors, while placing the discussion within a higher explanatory context of systems theory and cybernetics. Parsons had also suggested starting from the 'bottom up' rather than the 'top down' when researching social order. One of his students, Harold Garfinkel, followed in this direction, developing ethnomethodology. In Austria and later the U.S., Alfred Schütz (1899–1959) developed social phenomenology, which would later inform social constructionism.

Mid 20th century
In some countries, sociology was undermined by totalitarian governments for reasons of ostensible political control. After the Russian Revolution, sociology was gradually "politicized, Bolshevisized and eventually, Stalinized" until it virtually ceased to exist in the Soviet Union. In China, the discipline was banned in 1952, along with semiotics, comparative linguistics and cybernetics, as "bourgeois pseudoscience", not to return until 1979. During the same period, however, sociology was also undermined by conservative universities in the West. This was due, in part, to perceptions of the subject as possessing an inherent tendency, through its own aims and remit, toward liberal or left-wing thought. Given that the subject was founded by structural functionalists, concerned with organic cohesion and social solidarity, this view was somewhat groundless (though it was Parsons who had introduced Durkheim to American audiences, and his interpretation has been criticized for a latent conservatism).

In the mid-20th century, Robert K. Merton released his Social Theory and Social Structure (1949). Around the same time, C. Wright Mills continued Weber's work of understanding how modernity was undermining tradition, with a critique of the dehumanizing impact this had on people. Also using the Weberian notion of class, he found that the United States was at the time ruled by a power elite composed of military, political, economic and union leaders. His The Sociological Imagination (1959) argued that the problem lay in people seeing their problems as individual issues rather than as products of social processes. Also in 1959, Erving Goffman published The Presentation of Self in Everyday Life and introduced the theory of dramaturgical analysis, which asserts that all individuals aim to create a specific impression of themselves in the minds of other people. Wright Mills' ideas were influential on the New Left of the 1960s, for which he had also coined the name. Herbert Marcuse was subsequently involved in the movement. Following the counterculture of the decade, new thinkers emerged, especially in France, such as Michel Foucault. While power had earlier been viewed either in political or economic terms, Foucault argued that "power is everywhere, and comes from everywhere", seeing it as a type of relation present on every level of society and a key component of social order. Examples of such relations included discourse and power-knowledge. Foucault also studied human sexuality in his The History of Sexuality (1976). Influenced by him, Judith Butler subsequently pioneered queer theory. Raewyn Connell in turn identified the stigmatization of homosexuality as a product of hegemonic masculinity.

In the 1960s, sociologists also developed new types of quantitative and qualitative research methods. Paul Lazarsfeld founded Columbia University's Bureau of Applied Social Research, where he exerted a tremendous influence over the techniques and the organization of social research. His many contributions to sociological method have earned him the title of the "founder of modern empirical sociology". Lazarsfeld made great strides in statistical survey analysis, panel methods, latent structure analysis, and contextual analysis. He is also considered a co-founder of mathematical sociology. Many of his ideas have been so influential as to now be considered self-evident.
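As a rough illustration of Lazarsfeld-style panel analysis, the sketch below cross-tabulates two waves of invented survey responses into a simple "turnover table". The data, the variable names, and the use of the Python pandas library are assumptions chosen for the example, not material from any study mentioned here.

```python
# Illustrative sketch (invented data): a two-wave "turnover table" of the
# kind popularized by Lazarsfeld's panel method, built with pandas.
import pandas as pd

# Hypothetical vote intentions of the same ten respondents,
# interviewed in two successive survey waves.
panel = pd.DataFrame({
    "wave1": ["A", "A", "B", "B", "A", "B", "A", "B", "A", "B"],
    "wave2": ["A", "B", "B", "B", "A", "A", "A", "B", "A", "B"],
})

# Cross-tabulate wave 1 against wave 2: off-diagonal cells show
# respondents who changed their intention between interviews.
turnover = pd.crosstab(panel["wave1"], panel["wave2"], margins=True)
print(turnover)

# Share of respondents who switched preference between the two waves.
switched = (panel["wave1"] != panel["wave2"]).mean()
print(f"Proportion switching between waves: {switched:.0%}")
```

The off-diagonal cells of such a table are what panel methods exploit: they identify who changed between interviews, something a single cross-sectional survey cannot reveal.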
In the 1970s, Peter Townsend redefined poverty: where it had previously been understood as 'total earnings being too little to obtain the minimum necessities of physical life', his definition also took into account the relative deprivation involved, meaning that lacking access to the typical standard of living was also a form of poverty. During the same decade, Pierre Bourdieu, advancing the concept of habitus, argued that class was not defined solely by economic means, but also by the socially acquired taste which one shared with the rest of the class. Beyond economic capital, he also identified cultural, social, scholastic, linguistic, and political capital, all of which contributed towards symbolic capital. Richard Sennett in turn found that working-class people could find themselves in crisis following rising social status, as it conflicted with the values of their background.

Structuralism
Structuralism is "the belief that phenomena of human life are not intelligible except through their interrelations. These relations constitute a structure, and behind local variations in the surface phenomena there are constant laws of abstract structure". Structuralism in Europe developed in the early 1900s, mainly in France and the Russian Empire, in the structural linguistics of Ferdinand de Saussure and the subsequent Prague, Moscow and Copenhagen schools of linguistics. In the late 1950s and early 1960s, when structural linguistics was facing serious challenges from the likes of Noam Chomsky and thus fading in importance, an array of scholars in the humanities borrowed Saussure's concepts for use in their respective fields of study. The French anthropologist Claude Lévi-Strauss was arguably the first such scholar, sparking a widespread interest in structuralism.

Modernization theory
Modernization theory is used to explain the process of modernization within societies. Modernization refers to a model of a progressive transition from a 'pre-modern' or 'traditional' to a 'modern' society. Modernization theory originated from the ideas of the German sociologist Max Weber (1864–1920), which provided the basis for the modernization paradigm developed by the Harvard sociologist Talcott Parsons (1902–1979). The theory looks at the internal factors of a country while assuming that, with assistance, "traditional" countries can be brought to development in the same manner more developed countries have been. Modernization theory was a dominant paradigm in the social sciences in the 1950s and 1960s, then went into a deep eclipse. It made a comeback after 1991 but remains a controversial model. The political sociologist Seymour Martin Lipset wrote extensively about the conditions for democracy in comparative perspective, becoming influential in modernization theory and in political science.
Dependency theory
In Latin America, dependency theory, a structuralist theory, emerged, arguing that poor states are impoverished and rich ones enriched by the way poor states are integrated into the "world system". The theory was formally developed in the late 1960s, in the decades following World War II, as scholars searched for the root cause of the lack of development in Latin America. It was popular in the 1960s and 1970s as a criticism of modernization theory, which was falling increasingly out of favor because of continued widespread poverty in much of the world; at that time the assumptions of liberal theories of development were under attack. Dependency theory was used to explain the causes of overurbanization, the observation that urbanization rates outpaced industrial growth in several developing countries. Influenced by dependency theory, world-systems theory emerged as a macro-scale approach to world history and social change which emphasizes the world-system (and not nation states) as the primary (but not exclusive) unit of social analysis. Immanuel Wallerstein developed the best-known version of world-systems analysis, beginning in the 1970s. Wallerstein traces the rise of the capitalist world-economy from the "long" 16th century (c. 1450–1640). The rise of capitalism, in his view, was an accidental outcome of the protracted crisis of feudalism (c. 1290–1450). Europe (the West) used its advantages to gain control over most of the world economy, presiding over the development and spread of industrialization and the capitalist economy and indirectly resulting in unequal development.

Systems theory
Niklas Luhmann described modern capitalism as dividing society into different systems – economic, educational, scientific, legal, political and other systems – which together form the system of systems that is society itself. This system is in turn formed by communication, which is defined as the "synthesis of information, utterance, and understanding" emerging from verbal and non-verbal activities. A social system is similar to a biological organism in that it reproduces itself, through communications that develop out of previous communications. A system is anything with a 'distinction' from its environment, which is itself formed by other systems. These systems are connected by 'structural couplings', which translate communications from one system to another (including from humans to systems); the lack of such couplings is a problem for modern capitalism.

Post-structuralism and postmodernism
In the 1960s and 1970s, post-structuralist and postmodernist theory, drawing upon structuralism and phenomenology as much as classical social science, made a considerable impact on frames of sociological enquiry. Often understood simply as a cultural style 'after Modernism' marked by intertextuality, pastiche and irony, sociological analyses of postmodernity have presented it as a distinct era relating to (1) the dissolution of metanarratives (particularly in the work of Lyotard), and (2) commodity fetishism and the 'mirroring' of identity with consumption in late capitalist society (Debord; Baudrillard; Jameson). Postmodernism has also been associated with the rejection of Enlightenment conceptions of the human subject by thinkers such as Michel Foucault and Claude Lévi-Strauss and, to a lesser extent, with Louis Althusser's attempt to reconcile Marxism with anti-humanism. Most theorists associated with the movement actively refused the label, preferring to accept postmodernity as a historical phenomenon rather than a method of analysis, if at all.
Nevertheless, self-consciously postmodern pieces continue to emerge within the social and political sciences in general.

Late 20th century sociology

Intersectionality
In the 1980s, bell hooks argued that white and non-white women faced different obstacles in society. Kimberlé Crenshaw subsequently developed the concept of intersectionality in 1989 to describe the way different identities intersected to create differing forms of discrimination. In 1990, Sylvia Walby argued that six intersecting structures upheld patriarchy: the family household, paid work, the state, male violence, sexuality, and cultural institutions. Later, the sociologist Helma Lutz described 14 'lines of difference' which could form the basis of unequal power relations.

Globalization
Elsewhere in the 1980s, theorists often focused on globalization, communication, and reflexivity in terms of a 'second' phase of modernity, rather than a distinct new era per se. Jürgen Habermas established communicative action as a reaction to postmodern challenges to the discourse of modernity, informed both by critical theory and by American pragmatism. Fellow German sociologist Ulrich Beck presented Risk Society (1992) as an account of the manner in which the modern nation state has become organized. In Britain, Anthony Giddens set out to reconcile recurrent theoretical dichotomies through structuration theory. During the 1990s, Giddens developed work on the challenges of "high modernity", as well as a new 'third way' politics that would greatly influence New Labour in the UK and the Clinton administration in the US. The leading Polish sociologist Zygmunt Bauman wrote extensively on the concepts of modernity and postmodernity, particularly with regard to the Holocaust and consumerism as historical phenomena. While Pierre Bourdieu gained significant critical acclaim for his continued work on cultural capital, certain French sociologists, particularly Jean Baudrillard and Michel Maffesoli, were criticised for perceived obfuscation and relativism. Functionalist systems theorists such as Niklas Luhmann remained dominant forces in sociology up to the end of the century. In 1994, Robert K. Merton won the National Medal of Science for his contributions to the sociology of science. The positivist tradition is popular to this day, particularly in the United States. The discipline's two most widely cited American journals, the American Journal of Sociology and the American Sociological Review, primarily publish research in the positivist tradition, with ASR exhibiting greater diversity (the British Journal of Sociology, on the other hand, publishes primarily non-positivist articles). The twentieth century saw improvements to the quantitative methodologies employed in sociology. The development of longitudinal studies that follow the same population over the course of years or decades enabled researchers to study long-term phenomena and increased their ability to infer causality.

21st century sociology
The increase in the size of data sets produced by the new survey methods was followed by the invention of new statistical techniques for analyzing this data. Analysis of this sort is usually performed with statistical software packages such as R, SAS, Stata, or SPSS. Social network analysis is an example of a new paradigm in the positivist tradition. The influence of social network analysis is pervasive in many sociological subfields such as economic sociology (see the work of J. Clyde Mitchell, Harrison White, or Mark Granovetter, for example), organizational behavior, historical sociology, political sociology, and the sociology of education.
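To make the network-analytic approach concrete, the following is a minimal sketch of the kind of measurement it involves, assuming the Python networkx library and an invented friendship network; the names, the ties, and the choice of centrality measures are illustrative assumptions rather than material drawn from the studies cited above.

```python
# Minimal illustration of social network analysis on an invented
# friendship network, using the networkx library.
import networkx as nx

# Hypothetical ties among seven people (who reports a friendship with whom).
friendships = [
    ("Ana", "Ben"), ("Ana", "Cai"), ("Ben", "Cai"),
    ("Cai", "Dee"),  # Cai bridges the two clusters
    ("Dee", "Eli"), ("Dee", "Fay"), ("Eli", "Fay"), ("Fay", "Gus"),
]
G = nx.Graph(friendships)

# Degree centrality: how many direct ties each person has (normalized).
degree = nx.degree_centrality(G)

# Betweenness centrality: how often a person lies on the shortest paths
# between others, a common operationalization of "brokerage".
betweenness = nx.betweenness_centrality(G)

for person in sorted(G.nodes):
    print(f"{person}: degree={degree[person]:.2f}, "
          f"betweenness={betweenness[person]:.2f}")
```

In this toy network, the bridging node stands out through high betweenness despite having few direct ties, which is the kind of structural fact that attribute-based survey analysis alone would miss.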
There is also a minor revival of a more independent, empirical sociology in the spirit of C. Wright Mills and his studies of the Power Elite in the United States of America, according to Stanley Aronowitz.

Critical realism is a philosophical approach to understanding science developed by Roy Bhaskar (1944–2014). It combines a general philosophy of science (transcendental realism) with a philosophy of social science (critical naturalism). It specifically opposes forms of empiricism and positivism by viewing science as concerned with identifying causal mechanisms. In the context of social science, it also argues that scientific investigation can lead directly to critique of social arrangements and institutions, in a manner similar to the work of Karl Marx. In the last decades of the twentieth century it also stood against various forms of 'postmodernism'. It is one of a range of types of philosophical realism, alongside forms of realism advocated within social science such as analytic realism and subtle realism.

See also
Bibliography of sociology
List of sociologists
Outline of sociology
Subfields of sociology
Timeline of sociology
Philosophy of social science

Further reading
Coser, Lewis A. (1976). "Sociological Theory From the Chicago Dominance to 1965". Annual Review of Sociology 2: 145–160.
Lenski, Gerhard (1982). Human Societies: An Introduction to Macrosociology. McGraw-Hill.
Nash, Kate (2010). Contemporary Political Sociology: Globalization, Politics, and Power. Wiley-Blackwell.
Bloom, Samuel William (2002). The Word as Scalpel: A History of Medical Sociology. Oxford University Press.
Boudon, Raymond (1989). A Critical Dictionary of Sociology. Chicago: University of Chicago Press.
Calhoun, Craig, ed. (2007). Sociology in America: The ASA Centennial History. Chicago: University of Chicago Press.
Deegan, Mary Jo, ed. (1991). Women in Sociology: A Bio-Bibliographical Sourcebook. New York: Greenwood Press.
Halsey, A. H. (2004). A History of Sociology in Britain: Science, Literature, and Society. Oxford University Press.
Martindale, Don (1976). "American Sociology Before World War II". Annual Review of Sociology 2: 121–143.
Laslett, Barbara; Thorne, Barrie, eds. (1997). Feminist Sociology: Life Histories of a Movement. Rutgers University Press.
Moebius, Stephan (2021). Sociology in Germany: A History. Palgrave Macmillan (open access). ISBN 978-3-030-71866-4.
Madan, T. N. (1994). Pathways: Approaches to the Study of Society in India. New Delhi: Oxford University Press.
Sorokin, Pitirim (1928). Contemporary Sociological Theories.
Sorokin, Pitirim; Zimmerman, Carle C. (1927). Principles of Rural-Urban Sociology. 3 vols.
Rinzivillo, Guglielmo (2019). A Modern History of Sociology in Italy and the Various Patterns of its Epistemological Development. New York: Nova Science Publishers.
Steinmetz, George (2009). "Neo-Bourdieusian Theory and the Question of Scientific Autonomy: German Sociologists and Empire, 1890s–1940s". Political Power and Social Theory 20: 71–131.
Consociationalism
Consociationalism is a form of democratic power sharing. Political scientists define a consociational state as one which has major internal divisions along ethnic, religious, or linguistic lines, but which remains stable due to consultation among the elites of these groups. Consociational states are often contrasted with states with majoritarian electoral systems. The goals of consociationalism are governmental stability, the survival of the power-sharing arrangements, the survival of democracy, and the avoidance of violence. When consociationalism is organised along religious confessional lines, as in Lebanon, it is known as confessionalism. Consociationalism is sometimes seen as analogous to corporatism. Some scholars consider consociationalism a form of corporatism, while others claim that economic corporatism was designed to regulate class conflict, whereas consociationalism developed on the basis of reconciling societal fragmentation along ethnic and religious lines. Concurrent majority can be a precursor to consociationalism. A consociational democracy differs from a consensus democracy (e.g. Switzerland) in that consociational democracy represents a consensus of representatives with a minority veto, while consensus democracy requires consensus across the electorate. The idea has received significant criticism regarding its applicability to democratic political systems, especially with regard to power-sharing.

Origins
Consociation was first discussed in the 17th-century New England Confederation. It described the inter-association and cooperation of the participating self-governing Congregational churches of the various colonial townships of the Massachusetts Bay Colony, which were empowered in the civil legislature and magistracy. It was debated at length in the Boston Synod of 1662, at the time when the Episcopalian Act of Uniformity 1662 was being introduced in England. Consociationalism was originally discussed in academic terms by the political scientist Arend Lijphart. However, Lijphart has stated that he "merely discovered what political practitioners had repeatedly – and independently of both academic experts and one another – invented years earlier". Theoretically, consociationalism was derived inductively from Lijphart's observations of political accommodation in the Netherlands, after which Lijphart argued for a generalizable consociational approach to ethnic conflict regulation. The Netherlands, as a consociational state, was between 1857 and 1967 divided into four non-territorial pillars: Calvinist, Catholic, socialist, and general, although until 1917 there was a plurality ("first past the post") electoral system rather than a proportional one. In their heyday, each comprised tightly organised groups, schools, universities, hospitals and newspapers, all divided along a Balkanised social structure. The theory, according to Lijphart, focuses on the role of social elites, their agreement and co-operation, as the key to a stable democracy. Based on this initial study of consociational democracy, John McGarry and Brendan O'Leary trace consociationalism back to 1917, when it was first employed in the Netherlands, while Gerhard Lehmbruch suggests 'precursors' of consociationalism as early as the 1555 Peace of Augsburg.

State-building
While Lijphart's initial theory drew primarily from Western European democracies in its formulation of consociationalism, it has gained immense traction in post-conflict state-building contexts in the past decades.
This development has also been reflected in the literature, where the list of favourable conditions has been expanded to include external factors. Rather than being internally constructed by state elites, these recent examples have been characterised by external facilitation, and at times imposition, through international actors. In the process, consociational arrangements have frequently been used to transform immediate violent conflict and solidify peace settlements in extremely fragile contexts of deeply divided societies. The volatile environments in which these recent examples have been implemented have exhibited the need for external interference not only for their initial implementation but also for their continued existence. As such, a range of international actors have assumed mediating and supporting roles to preserve power-sharing agreements in targeted states. Most prominently in Bosnia-Herzegovina, this has involved an "international regulating body" in the form of a High Representative, who in one period frequently intervened in the domestic political affairs of the state to implement legislation on which domestic elites were reluctant to come to an agreement. While the results of consociational arrangements implemented in post-conflict state-building endeavours have so far been mixed, scholars such as O'Leary and McGarry maintain that they have often proven to be the most practical approach to ending immediate conflict and creating the necessary stability for peace-building to take place. Their utility has been seen in their transformative aspect, flexibility, and "realist" approach to existing identity formations that are difficult to incorporate in a majoritarian system.

Characteristics
Lijphart identifies four key characteristics of consociational democracies: a grand coalition that brings together the political leaders of all significant segments, a mutual (minority) veto, proportionality in representation and in public appointments, and a high degree of segmental autonomy. Consociational polities often have these characteristics:
Coalition cabinets, where executive power is shared between parties, not concentrated in one. Many of these cabinets are oversized, meaning they include parties not necessary for a parliamentary majority;
Balance of power between the executive and the legislature;
Decentralized and federal government, where (regional) minorities have considerable independence;
Incongruent bicameralism, where it is very difficult for one party to gain a majority in both houses. Normally one chamber represents regional interests and the other national interests;
Proportional representation, to allow (small) minorities to gain representation too;
Organized and corporatist interest groups, which represent minorities;
A rigid constitution, which prevents the government from changing the constitution without the consent of minorities;
Judicial review, which allows minorities to go to the courts to seek redress against laws that they see as unjust;
Elements of direct democracy, which allow minorities to enact or prevent legislation;
Proportional employment in the public sector;
A neutral head of state, either a monarch with only ceremonial duties or an indirectly elected president who gives up his or her party affiliation after being elected;
Referendums that are used only to allow minorities to block legislation, which means that they must be a citizens' initiative and that there is no compulsory voting;
Equality between ministers in cabinet, with the prime minister only primus inter pares;
An independent central bank, where experts rather than politicians set monetary policy.

Favourable conditions
Lijphart also identified a number of "favourable conditions" under which consociationalism is likely to be successful.
He has changed the specification of these conditions somewhat over time. Michael Kerr summarised Lijphart's most prominent favourable factors as: segmental isolation of ethnic communities; a multiple balance of power; the presence of external threats common to all communities; overarching loyalties to the state; a tradition of elite accommodation; socioeconomic equality; a small population size, reducing the policy load; and a moderate multi-party system with segmental parties. Lijphart stresses that these conditions are neither indispensable nor sufficient to account for the success of consociationalism. This has led Rinus van Schendelen to conclude that "the conditions may be present and absent, necessary and unnecessary, in short conditions or no conditions at all". John McGarry and Brendan O'Leary argue that three conditions are key to the establishment of democratic consociational power-sharing: elites have to be motivated to engage in conflict regulation; elites must lead deferential segments; and there must be a multiple balance of power, but more importantly the subcultures must be stable. Michael Kerr, in his study of the role of external actors in power-sharing arrangements in Northern Ireland and Lebanon, adds to McGarry and O'Leary's list "the existence of positive external regulating pressures, from state to non-state actors, which provide the internal elites with sufficient incentives and motives for their acceptance of, and support for, consociation". Arguments in favor In a consociational state, all groups, including minorities, are represented on the political and economic stages. Supporters of consociationalism argue that it is a more realistic option in deeply divided societies than integrationist approaches to conflict management. Criticisms Many criticisms have been levelled against the deployment of consociationalism in state-building. It has been criticised as institutionalising and deepening existing divisions, being severely dependent on external support for survival, and temporarily freezing conflicts but not resolving them. Given the apparent necessity for external regulation of these agreements, many scholars have characterised these state-building projects as deeply invasive. A recurring concern therein is the erosion of the governing elite's accountability towards its population and the fostering of clientele politics. These dynamics have been pointed to as obstacles to the resolution of the deep divisions consociations are meant to alleviate. Further critiques have pointed out that consociations have at times encouraged conditions of "fragile states", which state-building is meant to prevent. Brian Barry Brian Barry has questioned the nature of the divisions that exist in the countries that Lijphart considers to be "classic cases" of consociational democracies. For example, he makes the case that in the Swiss example, "political parties cross-cut cleavages in the society and provide a picture of remarkable consensus rather than highly structured conflict of goals". In the case of the Netherlands, he argues that "the whole cause of the disagreement was the feeling of some Dutchman ... that it mattered what all the inhabitants of the country believed. Demands for policies aimed at producing religious or secular uniformity presuppose a concern ... for the state of grace of one's fellow citizens". He contrasts this to the case of a society marked by conflict, in this case Northern Ireland, where he argues that "the inhabitants ... 
have never shown much worry about the prospects of the adherents of the other religion going to hell". Barry concludes that in the Dutch case, consociationalism is tautological and argues that "the relevance of the 'consociational' model for other divided societies is much more doubtful than is commonly supposed". Rinus van Schendelen Rinus van Schendelen has argued that Lijphart uses evidence selectively. He argued that pillarisation was "seriously weakening" even in the 1950s: cross-denominational co-operation was increasing, and formerly coherent political sub-cultures were dissolving. He argued that elites in the Netherlands were not motivated by preferences derived from the general interest, but rather by self-interest. They formed coalitions not to forge consociational negotiation between segments but to improve their parties' respective power. He argued that the Netherlands was "stable" in that it had few protests or riots, but that it was so before consociationalism, and that it was not stable from the standpoint of government turnover. He questioned the extent to which the Netherlands, or indeed any country labelled a consociational system, could be called a democracy, and whether calling a consociational country a democracy is not somehow ruled out by definition. He believed that Lijphart suffered severe problems of rigor when identifying whether particular divisions were cleavages, whether particular cleavages were segmental, and whether particular cleavages were cross-cutting. Lustick on hegemonic control Ian Lustick has argued that academics lack an alternative "control" approach for explaining stability in deeply divided societies and that this has resulted in the empirical overextension of consociational models. Lustick argues that Lijphart has "an impressionistic methodological posture, flexible rules for coding data, and an indefatigable, rhetorically seductive commitment to promoting consociationalism as a widely applicable principle of political engineering", which results in him applying consociational theory to case studies that it does not fit. Furthermore, Lustick states that "Lijphart's definition of 'accommodation' ... includes the elaborately specified claim that issues dividing polarized blocs are settled by leaders convinced of the need for settlement". Horowitz and centripetal criticism of consociationalism Consociationalism focuses on diverging identities such as ethnicity instead of integrating identities such as class, institutionalizing and entrenching the former. Furthermore, it relies on co-operation between rivals, which is inherently unstable. It focuses on intrastate relations and neglects relations with other states. Donald L. Horowitz argues that consociationalism can lead to the reification of ethnic divisions, since "grand coalitions are unlikely, because of the dynamics of intraethnic competition. The very act of forming a multiethnic coalition generates intraethnic competition – flanking – if it does not already exist". Consistent with Horowitz's claims, Dawn Brancati finds that federalism/territorial autonomy, an element of consociationalism, strengthens ethnic divisions if it is designed in a way that strengthens regional parties, which in turn encourage ethnic conflict. James Anderson also supports Horowitz's contention that consociational power-sharing built around diverging identities can entrench and sharpen these divisions. 
Citing the example of Northern Ireland, Anderson argues such approaches tend to "prioritise the same general type of territorial identity as the ethno-nationalists". Nonetheless, Anderson concedes that the difficulty lies in the fact that such identities cannot simply be wished away, as he argues is attempted when the focus is placed only on individual rights at the expense of group rights. As an alternative to consociationalism, Horowitz suggested another model – centripetalism. Centripetalism aims to depoliticize ethnicity and to encourage multi-ethnic parties instead of reinforcing ethnic divides through political institutions. Other criticisms In 2022, Yascha Mounk argued that the case for consociationalism and power-sharing had weakened significantly since it was first proposed, based on experiments and real-life observations. He argues that in some cases it can bring short-term peace, but that it is always temporary and is likely to worsen tensions in the long run. Critics point out that consociationalism is dangerous in a system of differing antagonistic ideologies, generally conservatism and communism. They state that specific conditions must exist for three or more groups to develop a multi-party system with strong leaders. This approach is dominated by elites, while the masses are sidelined and have less to lose if war breaks out. Consociationalism cannot be applied universally. For example, it does not effectively apply to Austria. Critics also point to the failure of this line of reasoning in Lebanon, a country that reverted to civil war. It only truly applies in Switzerland, Belgium and the Netherlands, and not in more deeply divided societies. If one of three groups gets half plus one of the vote, then the other groups are in perpetual opposition, which is largely incompatible with consociationalism. Consociationalism assumes that each group is cohesive and has strong leadership. Although the minority can block decisions, this means decision-making requires 100 per cent agreement. Rights are given to communities rather than individuals, leading to over-representation of some individuals in society and under-representation of others. Grand coalitions are unlikely to happen due to the dynamics of ethnic competition. Each group seeks more power for itself. Consociationalists are criticized for focusing too much on the set-up of institutions and not enough on transitional issues which go beyond such institutions. Finally, it is claimed that consociational institutions promote sectarianism and entrench existing identities. Examples The political systems of a number of countries operate or used to operate on a consociational basis, including Belgium, Italy, Cyprus (effective 1960–1963), the First Czechoslovak Republic, Israel, Lebanon, the Netherlands (1917–1967), Northern Ireland, Switzerland (consultation mostly across ideological lines), Ethiopia, Zimbabwe-Rhodesia, and South Africa. Some academics have also argued that the European Union resembles a consociational democracy, with consultation across ideological lines. Additionally, a number of peace agreements are consociational, including: The Dayton Agreement that ended the 1992–1995 war in Bosnia and Herzegovina, which is described as a "classic example of consociational settlement" by Sumantra Bose and "an ideal-typical consociational democracy" by Roberto Belloni. The Good Friday Agreement of 1998 in Northern Ireland (and its subsequent reinforcement with 2006's St Andrews Agreement), which Brendan O'Leary describes as "power-sharing plus". 
The Ohrid Agreement of 2001, setting the constitutional framework for power-sharing in North Macedonia. The Islamic Republic of Afghanistan's political system was also described as consociational, although it lacked ethnic quotas. In addition to the two-state solution to the Arab–Israeli conflict, some have argued for a one-state solution under a consociational democracy in the state of Israel, but this solution is not very popular, nor has it been discussed seriously at peace negotiations. During the 1980s the South African government attempted to reform apartheid into a consociational democracy. The South African Constitution of 1983 applied Lijphart's power-sharing ideas by establishing a Tricameral Parliament. During the 1990s negotiations to end apartheid, the National Party (NP) and the Inkatha Freedom Party (IFP) proposed a settlement based upon consociationalism. The African National Congress (ANC) opposed consociationalism and proposed instead a settlement based upon majoritarian democracy. The NP abandoned consociationalism when the U.S. Department of State came out in favor of the majoritarian democracy model in 1992. See also Conflict management Consensus democracy Corporative federalism Directorial system Horizontalidad Minority groups Minority rights Negarchy Pillarisation Plural society Polycentric law Sui iuris References Further reading O'Leary, Brendan. 2020. "Consociation in the Present." Swiss Political Science Review. Bogaards, Matthijs; Helms, Ludger; Lijphart, Arend. 2020. "The Importance of Consociationalism for Twenty‐First Century Politics and Political Science." Swiss Political Science Review. Selway, Joel and K. Templeman. 2012. "The Myth of Consociationalism." Comparative Political Studies 45: 1542–1571.
Neo-feudalism
Neo-feudalism or new feudalism is a theorized contemporary rebirth of policies of governance, economy, and public life, reminiscent of those which were present in many feudal societies. Such aspects include, but are not limited to: unequal rights and legal protections for common people and for nobility, dominance of societies by a small and powerful elite, a lack of social mobility, and relations of lordship and serfdom between the elite and the people, where the former are rich and the latter poor. Use and etymology Generally, the term neo-feudalism refers to 21st century forms of feudalism which in some respects resemble the societal models of Medieval western Europe. In its early use, the term was deployed as a criticism of both the political Left and the Right. Separately, Jürgen Habermas used the term Refeudalisierung ("refeudalisation") in his 1962 The Structural Transformation of the Public Sphere to criticise the privatisation of the forms of communication that he believed had produced an Enlightenment-era public sphere. Although Habermas was not writing about "neo-feudalism" as such, later commentators have noted that his ideas are similar to the idea of neo-feudalism. Correspondingly, in 1992 Immanuel Wallerstein expressed views on global development, listing neo-feudalism alongside three other variants. By neo-feudalism, Wallerstein referred to autarkic regions with a localised hierarchy and high-tech goods available only to the elite. Description The concept of neo-feudalism may focus on economics, though it is not limited to it. Among the issues claimed to be associated with the idea of neo-feudalism in contemporary society are class stratification, globalization, neoconservative foreign policy, multinational corporations, and "neo-corporatism". According to Les Johnston, Clifford Shearing's theoretical approach to neo-feudalism has been influential. Shearing "use[s] this term in a limited sense to draw attention to the emergence of domains of mass private property that are 'gated' in a variety of ways". Lucia Zedner responds that this use of neo-feudalism is too limited in scope; Shearing's comparison does not draw parallels with earlier governance explicitly enough. Zedner prefers more definitive endorsements. Neo-feudalism entails an order defined by commercial interests and administered in large areas, according to Bruce Baker, who argues that this does not fully describe the extent of cooperation between state and non-state policing. The significance of the comparison to feudalism, for Randy Lippert and Daniel O'Connor, is that corporations have power similar to states' governance powers. Similarly, Sighard Neckel has argued that the rise of financial-market-based capitalism in the later twentieth century has represented a 'refeudalisation' of the economy. Substackers such as Neoliberal Feudalism argue that trends toward neo-feudalism are both intentionally planned by international financial elites and naturally occurring due to long-term trends toward increased centralization and control at the cost of individualism and privacy. The widening of the wealth gap, as poor and marginalized people are excluded from the state's provision of security, can result in neo-feudalism, argues Marina Caparini, who says this has already happened in South Africa. Neo-feudalism is made possible by the commodification of policing, and signifies the end of shared citizenship, says Ian Loader. 
A primary characteristic of neo-feudalism is that individuals' public lives are increasingly governed by business corporations, as Martha K. Huggins finds. John Braithwaite notes that neo-feudalism brings a different approach to governance since business corporations, in particular, have this specialized need for loss reduction. Author Jonathan Bluestein has written about neo-feudalism as a feature of social power: economic, political and martial alike. He defines the neo-feudal sovereigns as those who, while not directly referred to as lords, aristocrats, kings or emperors, still hold an equivalent power in a modern sense. That is, people who are not subject to everyday laws, can create their own laws to an extent, dominate large markets, employ immense swathes of individuals, have the means to hold a private military force, wield the economic might equivalent of entire nations, and own assets, especially real-estate, on a massive scale. In his books, Bluestein both criticizes this phenomenon, and proposes social and economic solutions for it. Being the first to coin the term: techno-capitalist-feudalism, or TCF for short, political economist, Michel Luc Bellemare, released a seminal tome on the subject, titled 'Techno-Capitalist-Feudalism', in early September 2020. Described as the political economy of Scientific Anarchist-Communism, or structural-anarchism, TCF is a compilation of 15 years of economic research by the author, which began in the mid 2000s. According to Bellemare, in the book, "the epoch of techno-capitalist-feudalism is the epoch of totalitarian-capitalism, whereby the logic of capitalism attains totalitarian dimensions and authoritarian supremacy". One of the primary characteristics of the age of techno-capitalist-feudalism, according to Bellemare, is "the degeneration of the old modern class-system into a post-modern micro-caste-system, wherein an insurmountable divide and stratum now exists in-between the "1 percent" and the "99 percent", or more specifically, the state-finance-corporate-aristocracy and the workforce/population. Moreover, according to Bellemare, in the dark age of techno-capitalist-feudalism, "the determination of values, prices, and wages are no longer based upon the old Marxist notion of socially necessary labor-time, but rather upon the arbitrary use of force and influence, namely, through an underlying set of ruling capitalist power-relations and/or ideologies, which impose by force and influence, numeric values, prices, and wage-sums upon goods, services, and people, devoid of any considerations pertaining to labor-time". Ultimately, in the dark age of techno-capitalist-feudalism, "whatever a capitalist entity or a set of entities can get away with in the sphere of production and/or in the marketplace is deemed valid, legitimate, and normal, regardless of labour-time expenditures". As well, contra Marx, Bellemare's book argues that, in the dark age of TCF, "workers can be paid below subsistence levels", wherefore, they must now work a multiplicity of jobs and more hours in order to make ends meet, which, in many instances, they cannot do without social assistance. In turn, according to Bellemare, "in the dark age of TCF, most machine-technologies are capitalist in origin, meaning, these technologies are congealed power-relations and/or ideologies that are impregnated and programmed with capitalist biases". 
That is, a set of specific biases that maintain, reproduce, and expand, the power of the ruling capitalist relations and ideologies, undergirding the overall system. Thereby, according to Bellemare's book, in the dystopian age of TCF, "most capitalist machine-technologies are used to maintain, reproduce, and expand, the divisions in-between the '1 percent' and the '99 percent', by keeping the '99 percent' predominantly bolted-down upon the lower-stratums of the system, all the while, keeping the '1 percent' perched atop the upper-stratums of the system, indefinitely. In sum, in the dark age of TCF, the new aristocracy, that is, the capitalist oligarchy or the 1 percent, concerns itself first and foremost with the accumulation of power, control, and capital, as well as, reproducing hierarchical-stasis by any means necessary". As a result, for Bellemare, in the dark age of TCF, "the capitalist aristocracy does not seek to steal units of unpaid labor-time from workers, but rather, it seeks to influence and control all aspects of the workers' everyday lives". Thus, the accumulation of power, control, and capital, orchestrated by the 1 percent, their corporations, and the State, "is always at the expense of the workforce/population, which itself, is gradually impoverished, disempowered, and continually relegated to the margins of the system, namely, the margins of the techno-capitalist-feudal-edifice, as lowly wage-serfs and/or debt-serfs". During the course of the years 2020-2021, Yanis Varoufakis has written and lectured much about his theory concerning neo-feudalism. He posits that traditional capitalism has evolved into a new feudal-like structure of economies and societies, which he refers to as 'techno-feudalism'. Varoufakis explains that unlike in capitalism, feudal economies have the quality of being dominated by very small groups of people, and predetermine the behaviour of markets as they see fit. Taking the example of massive online enterprises such as Facebook, Amazon and others, Varoufakis noted that such venues are primarily governed by the whims of single individuals and small teams, and thus are not truly capitalist markets of free trade, but rather feudal markets of stringent control. Others, such as Jeremy Pitt, have raised similar opinions and concerns, also noting that techno-feudalism threatens freedom of information over the Internet. In early September, 2022, Bellemare, has offered a short and direct critique of 'techno-feudalism', on the grounds that 'techno-feudalism' skews the facts and daily realities of workers, toiling under the jackboot of totalitarian-capitalism, or more accurately, the jackboot of "techno-capitalist-feudalism". According to Bellemare's article, using the term 'techno-feudalism', instead of “techno-capitalist-feudalism” is a disservice to workers. To drop the term 'capitalist' from techno-capitalist-feudalism, "only muddies the clear blue waters of the terminal stage of capitalist development", namely, the new dawning epoch of totalitarian-capitalism, that is, the new dystopian dark age of techno-capitalist-feudalism, run-amok. 
As Bellemare states in the article, "just because the old capitalist bourgeoisie has embraced digital algorithms and invasive surveillance technologies as its own, and abstracted itself at a higher-level of socio-economic existence, away from the workforce/population, whereby, it now appears invisible and increasingly distant from the everyday lives of workers, does not mean the old capitalist bourgeoisie has vanished into thin air, or has been usurped by a strictly technological aristocracy". According to Bellemare's article, "what has happened is that the old capitalist bourgeoisie has become a techno-capitalist-feudal-aristocracy, since, the logic of capitalism, capitalist profit, and capitalist technological innovations continue to inform and motivate this authoritarian capitalist aristocracy and all its overlapping networks of large-scale ruling power-blocs". Thereby, the specter of capitalism haunts 'techno-feudalism', in the sense that 'techno-feudalism', or more accurately, 'techno-capitalist-feudalism', is the result of "the capital/labor relationship at its most lopsided, oppressive, and technologically dominating. The capital/labor relationship continues to hold, since, the logic of capitalism continues to be the foundation and the fundamental under-girder of this new economic system". Therefore, within the evolutionary whimper of 'techno-feudalism', "the logic of capitalism is thriving, laughing all the way to the bank, as the term 'techno-feudalism' only empowers capitalist supremacy at the expense of workers' liberation and self-management". In popular culture and literature After the financial crisis of 2007–2008, American technology billionaire Nick Hanauer stated that "our country [i.e. the United States] is rapidly becoming less a capitalist society and more a feudal society". His views were echoed by, amongst others, the Icelandic billionaire Björgólfur Thor Björgólfsson. The idea that the early 21st century boom and bust in Iceland saw the country returning to feudal structures of power was also expressed by a range of Icelandic novelists, among them Sigrún Davíðsdóttir in Samhengi hlutanna, Bjarni Bjarnason in Mannorð, Bjarni Harðarson in Sigurðar saga fóts, Böðvar Guðmundsson in Töfrahöllin, and Steinar Bragi in Hálendið: Skáldsaga. Similar ideas are found in some Anglophone fiction. For example, Frank Herbert's Dune series of novels is set in the distant future with a neo-feudalistic galactic empire known as the Imperium. In these novels, after a series of wars known as the Butlerian Jihad, humanity has come to prohibit all kinds of "thinking machine technology", even its simpler forms. Subsequently, the political balance of power in the Dune universe gradually came to be dominated by a myriad of royal houses, each controlling one or several planets. Although they operate in the distant future, these royal houses exhibit social and political dynamics similar in many respects to those seen in medieval times. In David Brin's near-future science fiction novel Existence, American politicians campaign on legally transitioning the United States into a neo-feudalist society. In 2020, Klaus Schwab, head of the World Economic Forum, published a book titled COVID-19: The Great Reset. The book argues that the COVID-19 pandemic is an opportunity for politicians and governments to change the world's economies, societies and structures of government by introducing a system of "Stakeholder Capitalism", following the guidelines of a plan known as 'The Great Reset'. 
Schwab also refers to his goals as "The Fourth Industrial Revolution". Other authors have criticized The Great Reset as a form of neo-feudalism. See also Anarcho-capitalism Ascribed status Dark Enlightenment Neo-medievalism Neotribalism Organic crisis Refeudalization Songbun Jim Crow economy Bullshit Jobs ("managerial feudalism") References External links Mutation of Medieval Feudalism Into Modern Corporate Capitalism: The Rise of Neofeudalism in Corporate Governance Неофеодализм в истории и футурологии (Neo-feudalism in history and futurology)
Structuralism
Structuralism is an intellectual current and methodological approach, primarily in the social sciences, that interprets elements of human culture by way of their relationship to a broader system. It works to uncover the structural patterns that underlie all the things that humans do, think, perceive, and feel. Alternatively, as summarized by philosopher Simon Blackburn, structuralism is:"The belief that phenomena of human life are not intelligible except through their interrelations. These relations constitute a structure, and behind local variations in the surface phenomena there are constant laws of abstract structure."The structuralist mode of reasoning has since been applied in a range of fields, including anthropology, sociology, psychology, literary criticism, economics, and architecture. Along with Claude Lévi-Strauss, the most prominent thinkers associated with structuralism include linguist Roman Jakobson and psychoanalyst Jacques Lacan. History and background The term structuralism is ambiguous, referring to different schools of thought in different contexts. As such, the movement in humanities and social sciences called structuralism relates to sociology. Emile Durkheim based his sociological concept on 'structure' and 'function', and from his work emerged the sociological approach of structural functionalism. Apart from Durkheim's use of the term structure, the semiological concept of Ferdinand de Saussure became fundamental for structuralism. Saussure conceived language and society as a system of relations. His linguistic approach was also a refutation of evolutionary linguistics. Structuralism in Europe developed in the early 20th century, mainly in France and the Russian Empire, in the structural linguistics of Ferdinand de Saussure and the subsequent Prague, Moscow, and Copenhagen schools of linguistics. As an intellectual movement, structuralism became the heir to existentialism. After World War II, an array of scholars in the humanities borrowed Saussure's concepts for use in their respective fields. French anthropologist Claude Lévi-Strauss was arguably the first such scholar, sparking a widespread interest in structuralism. Throughout the 1940s and 1950s, existentialism, such as that propounded by Jean-Paul Sartre, was the dominant European intellectual movement. Structuralism rose to prominence in France in the wake of existentialism, particularly in the 1960s. The initial popularity of structuralism in France led to its spread across the globe. By the early 1960s, structuralism as a movement was coming into its own and some believed that it offered a single unified approach to human life that would embrace all disciplines. By the late 1960s, many of structuralism's basic tenets came under attack from a new wave of predominantly French intellectuals/philosophers such as historian Michel Foucault, Jacques Derrida, Marxist philosopher Louis Althusser, and literary critic Roland Barthes. Though elements of their work necessarily relate to structuralism and are informed by it, these theorists eventually came to be referred to as post-structuralists. Many proponents of structuralism, such as Lacan, continue to influence continental philosophy and many of the fundamental assumptions of some of structuralism's post-structuralist critics are a continuation of structuralist thinking. Russian functional linguist Roman Jakobson was a pivotal figure in the adaptation of structural analysis to disciplines beyond linguistics, including philosophy, anthropology, and literary theory. 
Jakobson was a decisive influence on anthropologist Claude Lévi-Strauss, through whose work the term structuralism first came to be applied to the social sciences. Lévi-Strauss' work in turn gave rise to the structuralist movement in France, also called French structuralism, influencing the thinking of other writers, most of whom disavowed being part of this movement. This included such writers as Louis Althusser and psychoanalyst Jacques Lacan, as well as the structural Marxism of Nicos Poulantzas. Roland Barthes and Jacques Derrida focused on how structuralism could be applied to literature. Accordingly, the so-called "Gang of Four" of structuralism is considered to be Lévi-Strauss, Lacan, Barthes, and Michel Foucault. Ferdinand de Saussure The origins of structuralism are connected with the work of Ferdinand de Saussure on linguistics along with the linguistics of the Prague and Moscow schools. In brief, Saussure's structural linguistics propounded three related concepts. Saussure argued for a distinction between langue (an idealized abstraction of language) and parole (language as actually used in daily life). He argued that a "sign" is composed of a "signified" (signifié, i.e. an abstract concept or idea) and a "signifier" (signifiant, i.e. the perceived sound/visual image). Because different languages have different words to refer to the same objects or concepts, there is no intrinsic reason why a specific signifier is used to express a given concept or idea. It is thus "arbitrary." Signs gain their meaning from their relationships and contrasts with other signs. As he wrote, "in language, there are only differences without positive terms". Lévi-Strauss Structuralism rejected the concept of human freedom and choice, focusing instead on the way that human experience and behaviour are determined by various structures. The most important initial work on this score was Lévi-Strauss's 1949 volume The Elementary Structures of Kinship. Lévi-Strauss had known Roman Jakobson during their time together at the New School in New York during WWII and was influenced both by Jakobson's structuralism and by the American anthropological tradition. In Elementary Structures, he examined kinship systems from a structural point of view and demonstrated how apparently different social organizations were different permutations of a few basic kinship structures. In 1958, he published Structural Anthropology, a collection of essays outlining his program for structuralism. Lacan and Piaget Blending Freud and Saussure, French (post)structuralist Jacques Lacan applied structuralism to psychoanalysis. Similarly, Jean Piaget applied structuralism to the study of psychology, though in a different way. Piaget, who would more accurately be described as a constructivist, considered structuralism as "a method and not a doctrine," because, for him, "there exists no structure without a construction, abstract or genetic." 'Third order' Proponents of structuralism argue that a specific domain of culture may be understood by means of a structure that is modelled on language and is distinct both from the organizations of reality and those of ideas, or the imagination—the "third order." 
In Lacan's psychoanalytic theory, for example, the structural order of "the Symbolic" is distinguished both from "the Real" and "the Imaginary;" similarly, in Althusser's Marxist theory, the structural order of the capitalist mode of production is distinct both from the actual, real agents involved in its relations and from the ideological forms in which those relations are understood. Althusser Although French theorist Louis Althusser is often associated with structural social analysis, which helped give rise to "structural Marxism," such association was contested by Althusser himself in the Italian foreword to the second edition of Reading Capital. In this foreword Althusser states the following: Despite the precautions we took to distinguish ourselves from the 'structuralist' ideology…, despite the decisive intervention of categories foreign to 'structuralism'…, the terminology we employed was too close in many respects to the 'structuralist' terminology not to give rise to an ambiguity. With a very few exceptions…our interpretation of Marx has generally been recognized and judged, in homage to the current fashion, as 'structuralist'.… We believe that despite the terminological ambiguity, the profound tendency of our texts was not attached to the 'structuralist' ideology. Assiter In a later development, feminist theorist Alison Assiter enumerated four ideas common to the various forms of structuralism: a structure determines the position of each element of a whole; every system has a structure; structural laws deal with co-existence rather than change; and structures are the "real things" that lie beneath the surface or the appearance of meaning. In linguistics In Ferdinand de Saussure's Course in General Linguistics, the analysis focuses not on the use of language (parole, 'speech'), but rather on the underlying system of language (langue). This approach examines how the elements of language relate to each other in the present, synchronically rather than diachronically. Saussure argued that linguistic signs were composed of two parts: a signifiant ('signifier'): the "sound pattern" of a word, either in mental projection—as when one silently recites lines from a poem or the text of a sign to oneself—or in actual, physical realization as part of a speech act. a signifié ('signified'): the concept or meaning of the word. This differed from previous approaches that focused on the relationship between words and the things in the world that they designate. Although not fully developed by Saussure, other key notions in structural linguistics can be found in structural "idealism." A structural idealism is a class of linguistic units (lexemes, morphemes, or even constructions) that are possible in a certain position in a given syntagm, or linguistic environment (such as a given sentence). The different functional role of each of these members of the paradigm is called 'value' (French: valeur). Prague School In France, Antoine Meillet and Émile Benveniste continued Saussure's project, and members of the Prague school of linguistics such as Roman Jakobson and Nikolai Trubetzkoy conducted influential research. The clearest and most important example of Prague school structuralism lies in phonemics. Rather than simply compiling a list of which sounds occur in a language, the Prague school examined how they were related. They determined that the inventory of sounds in a language could be analysed as a series of contrasts. 
Thus, in English, the sounds /p/ and /b/ represent distinct phonemes because there are cases (minimal pairs) where the contrast between the two is the only difference between two distinct words (e.g. 'pat' and 'bat'). Analyzing sounds in terms of contrastive features also opens up comparative scope—for instance, it makes clear that the difficulty Japanese speakers have in differentiating /r/ and /l/ in English and other languages arises because these sounds are not contrastive in Japanese. Phonology would become the paradigmatic basis for structuralism in a number of different fields. Based on the Prague school concept, André Martinet in France, J. R. Firth in the UK and Louis Hjelmslev in Denmark developed their own versions of structural and functional linguistics. In anthropology According to structural theory in anthropology and social anthropology, meaning is produced and reproduced within a culture through various practices, phenomena, and activities that serve as systems of signification. A structuralist approach may study activities as diverse as food-preparation and serving rituals, religious rites, games, literary and non-literary texts, and other forms of entertainment to discover the deep structures by which meaning is produced and reproduced within the culture. For example, in the 1950s Lévi-Strauss analysed cultural phenomena including mythology, kinship (the alliance theory and the incest taboo), and food preparation. In addition to these studies, he produced more linguistically-focused writings in which he applied Saussure's distinction between langue and parole in his search for the fundamental structures of the human mind, arguing that the structures that form the "deep grammar" of society originate in the mind and operate in people unconsciously. Lévi-Strauss took inspiration from mathematics. Another concept used in structural anthropology came from the Prague school of linguistics, where Roman Jakobson and others analysed sounds based on the presence or absence of certain features (e.g., voiceless vs. voiced). Lévi-Strauss included this in his conceptualization of the universal structures of the mind, which he held to operate based on pairs of binary oppositions such as hot-cold, male-female, culture-nature, cooked-raw, or marriageable vs. tabooed women. A third influence came from Marcel Mauss (1872–1950), who had written on gift-exchange systems. Building on Mauss, for instance, Lévi-Strauss argued for an alliance theory—the view that kinship systems are based on the exchange of women between groups—as opposed to the 'descent'-based theory described by Edward Evans-Pritchard and Meyer Fortes. While Lévi-Strauss occupied Mauss's former chair at the École Pratique des Hautes Études, his writings became widely popular in the 1960s and 1970s and gave rise to the term "structuralism" itself. In Britain, authors such as Rodney Needham and Edmund Leach were highly influenced by structuralism. Authors such as Maurice Godelier and Emmanuel Terray combined Marxism with structural anthropology in France. In the United States, authors such as Marshall Sahlins and James Boon built on structuralism to provide their own analysis of human society. Structural anthropology fell out of favour in the early 1980s for a number of reasons. D'Andrade suggests that this was because it made unverifiable assumptions about the universal structures of the human mind. Authors such as Eric Wolf argued that political economy and colonialism should be at the forefront of anthropology. 
More generally, criticisms of structuralism by Pierre Bourdieu led to a concern with how cultural and social structures were changed by human agency and practice, a trend which Sherry Ortner has referred to as 'practice theory'. One example is Douglas E. Foley's Learning Capitalist Culture (2010), in which he applied a mixture of structural and Marxist theories to his ethnographic fieldwork among high school students in Texas. Foley analyzed how they reached a shared goal through the lens of social solidarity when he observed "Mexicanos" and "Anglo-Americans" come together on the same football team to defeat the school's rivals. However, he also continually applies a Marxist lens, stating that he "wanted to wow peers with a new cultural Marxist theory of schooling." Some anthropological theorists, however, while finding considerable fault with Lévi-Strauss's version of structuralism, did not turn away from a fundamental structural basis for human culture. The Biogenetic Structuralism group for instance argued that some kind of structural foundation for culture must exist because all humans inherit the same system of brain structures. They proposed a kind of neuroanthropology which would lay the foundations for a more complete scientific account of cultural similarity and variation by requiring an integration of cultural anthropology and neuroscience—a program that theorists such as Victor Turner also embraced. In literary criticism and theory In literary theory, structuralist criticism relates literary texts to a larger structure, which may be a particular genre, a range of intertextual connections, a model of a universal narrative structure, or a system of recurrent patterns or motifs. The field of structuralist semiotics argues that there must be a structure in every text, which explains why it is easier for experienced readers than for non-experienced readers to interpret a text. Everything that is written seems to be governed by rules, or a "grammar of literature", that one learns in educational institutions and that are to be unmasked. A potential problem for a structuralist interpretation is that it can be highly reductive; as scholar Catherine Belsey puts it: "the structuralist danger of collapsing all difference." An example of such a reading might be a student concluding that the authors of West Side Story did not write anything "really" new, because their work has the same structure as Shakespeare's Romeo and Juliet. In both texts a girl and a boy fall in love (a "formula" with a symbolic operator between them would be "Boy + Girl") despite the fact that they belong to two groups that hate each other ("Boy's Group - Girl's Group" or "Opposing forces") and conflict is resolved by their deaths. Structuralist readings focus on how the structures of the single text resolve inherent narrative tensions. If a structuralist reading focuses on multiple texts, there must be some way in which those texts unify themselves into a coherent system. The versatility of structuralism is such that a literary critic could make the same claim about a story of two friendly families ("Boy's Family + Girl's Family") that arrange a marriage between their children despite the fact that the children hate each other ("Boy - Girl") and then the children commit suicide to escape the arranged marriage; the justification is that the second story's structure is an 'inversion' of the first story's structure: the relationship between the values of love and the two pairs of parties involved has been reversed. 
Structuralist literary criticism argues that the literary value of a text can lie only in new structure, rather than in the specifics of character development and voice in which that structure is expressed. Literary structuralism often follows the lead of Vladimir Propp, Algirdas Julien Greimas, and Claude Lévi-Strauss in seeking out basic deep elements in stories, myths, and more recently, anecdotes, which are combined in various ways to produce the many versions of the ur-story or ur-myth. There is considerable similarity between structural literary theory and Northrop Frye's archetypal criticism, which is also indebted to the anthropological study of myths. Some critics have also tried to apply the theory to individual works, but the effort to find unique structures in individual literary works runs counter to the structuralist program and has an affinity with New Criticism. In economics Justin Yifu Lin criticizes early structuralist economic systems and theories, discussing their failures. He writes: "The structuralism believes that the failure to develop advanced capital-intensive industries spontaneously in a developing country is due to market failures caused by various structural rigidities..." "According to neoliberalism, the main reason for the failure of developing countries to catch up with developed countries was too much state intervention in the market, causing misallocation of resources, rent seeking and so forth." Rather, Lin argues, these failures stem from the unlikelihood of such advanced industries developing quickly within developing countries. New Structural Economics (NSE) New structural economics is an economic development strategy developed by World Bank Chief Economist Justin Yifu Lin. The strategy combines ideas from both neoclassical economics and structural economics. NSE studies two parts: the base and the superstructure. A base is a combination of forces and relations of production, consisting of, but not limited to, industry and technology, while the superstructure consists of hard infrastructure and institutions. This results in an explanation of how the base impacts the superstructure, which in turn determines transaction costs. Interpretations and general criticisms Structuralism is less popular today than other approaches, such as post-structuralism and deconstruction. Structuralism has often been criticized for being ahistorical and for favouring deterministic structural forces over the ability of people to act. As the political turbulence of the 1960s and 1970s (particularly the student uprisings of May 1968) began affecting academia, issues of power and political struggle moved to the center of public attention. In the 1980s, deconstruction—and its emphasis on the fundamental ambiguity of language rather than its logical structure—became popular. By the end of the century, structuralism was seen as a historically important school of thought, but the movements that it spawned, rather than structuralism itself, commanded attention. Several social theorists and academics have strongly criticized structuralism or even dismissed it. French hermeneutic philosopher Paul Ricœur (1969) criticized Lévi-Strauss for overstepping the limits of validity of the structuralist approach, ending up in what Ricœur described as "a Kantianism without a transcendental subject." 
Anthropologist Adam Kuper (1973) argued that: 'Structuralism' came to have something of the momentum of a millennial movement and some of its adherents felt that they formed a secret society of the seeing in a world of the blind. Conversion was not just a matter of accepting a new paradigm. It was, almost, a question of salvation. Philip Noel Pettit (1975) called for an abandoning of "the positivist dream which Lévi-Strauss dreamed for semiology," arguing that semiology is not to be placed among the natural sciences. Cornelius Castoriadis (1975) criticized structuralism as failing to explain symbolic mediation in the social world; he viewed structuralism as a variation on the "logicist" theme, arguing that, contrary to what structuralists advocate, language—and symbolic systems in general—cannot be reduced to logical organizations on the basis of the binary logic of oppositions. Critical theorist Jürgen Habermas (1985) accused structuralists like Foucault of being positivists; Foucault, while not an ordinary positivist per se, paradoxically uses the tools of science to criticize science, according to Habermas. (See Performative contradiction and Foucault–Habermas debate.) Sociologist Anthony Giddens (1993) is another notable critic; while Giddens draws on a range of structuralist themes in his theorizing, he dismisses the structuralist view that the reproduction of social systems is merely "a mechanical outcome." See also Antihumanism Engaged theory Genetic structuralism Holism Isomorphism Post-structuralism Russian formalism Structuralist film theory Structuration theory Émile Durkheim Structural functionalism Structuralism (philosophy of science) Structuralism (philosophy of mathematics) Structuralism (psychology) Structural change Structuralist economics References Further reading Angermuller, Johannes. 2015. Why There Is No Poststructuralism in France: The Making of an Intellectual Generation. London: Bloomsbury. Roudinesco, Élisabeth. 2008. Philosophy in Turbulent Times: Canguilhem, Sartre, Foucault, Althusser, Deleuze, Derrida. New York: Columbia University Press. Primary sources Althusser, Louis. Reading Capital. Barthes, Roland. S/Z. Deleuze, Gilles. 1973. "À quoi reconnaît-on le structuralisme?" Pp. 299–335 in Histoire de la philosophie, Idées, Doctrines. Vol. 8: Le XXe siècle, edited by F. Châtelet. Paris: Hachette. de Saussure, Ferdinand. 1916. Course in General Linguistics. Foucault, Michel. The Order of Things. Jakobson, Roman. Essais de linguistique générale. Lacan, Jacques. The Seminars of Jacques Lacan. Lévi-Strauss, Claude. The Elementary Structures of Kinship. —— 1958. Structural Anthropology [Anthropologie structurale] —— 1964–1971. Mythologiques Wilcken, Patrick, ed. Claude Levi-Strauss: The Father of Modern Anthropology.
History of homosexuality
Societal attitudes towards same-sex relationships have varied over time and place. Attitudes to male homosexuality have varied from requiring males to engage in same-sex relationships to casual integration, through acceptance, to seeing the practice as a minor sin, repressing it through law enforcement and judicial mechanisms, and to proscribing it under penalty of death. In addition, it has varied as to whether any negative attitudes towards men who have sex with men have extended to all participants, as has been common in Abrahamic religions, or only to passive (penetrated) participants, as was common in Ancient Greece and Ancient Rome. Female homosexuality has historically been given less acknowledgment, explicit acceptance, and opposition. Homosexuality was generally accepted in many ancient and medieval eastern cultures such as those influenced by Buddhism, Hinduism, and Taoism. Homophobia in the eastern world is often discussed in the context of being an import from the western world, with some contending that definitions of "progress" on homosexuality (e.g. LGBT rights) are Western-centric. It is thought that ancient Assyria (2nd to 1st millennium BC) viewed homosexuality negatively and treated it as criminal, with the religious codes of Zoroastrianism forbidding homosexuality, and the rise of Judaism, Christianity and Islam leading to homophobia in much of the western world; the majority of the ancient sources prior to the onset of the Abrahamic religions present homosexuality in the form of male domination or rape. Abrahamic religions played a key role in the spread of homophobia further into Asia, with Islam carried through the Mongol Empire (where homosexuality was banned) to parts of Central Asia, Southern Asia and the Sinosphere, and Christianity spread through the numerous colonial ventures of European nations. European Enlightenment ideas contributed to the French revolutionaries indirectly decriminalising gay sex in 1791 as part of the separation of secular and religious laws, though homophobia remained rampant in both secular and religious governments in an attempt to uphold the "highest moral standards". The late 19th century saw the first homosexual movement emerge in Germany; it grew particularly in the aftermath of World War I. The modern LGBTQ rights movement emerged in the 20th century with the 1969 Stonewall riots in New York. Many male historical figures, including Socrates, Lord Byron, Edward II, and Hadrian, have had terms such as gay or bisexual applied to them; some scholars, such as Michel Foucault, have regarded this as risking the anachronistic introduction of a contemporary social construct of sexuality foreign to their times, though others challenge this. A common thread of constructionist argument is that no one in antiquity or the Middle Ages experienced homosexuality as an exclusive, permanent, or defining mode of sexuality. John Boswell has countered this argument by citing ancient Greek writings by Plato, which describe individuals exhibiting exclusive homosexuality. The Americas Pre-colonization Indigenous societies Among Indigenous peoples of the Americas prior to European colonization, a number of Nations had respected ceremonial and social roles for homosexual, bisexual, and gender-nonconforming individuals in their communities; in many contemporary Native American and First Nations communities, these roles still exist. While each Indigenous culture has its own names for these individuals, a modern, pan-Indian term that was adopted in 1990 is "Two-Spirit". 
This new term has not been universally accepted, having been criticized by traditional communities who already have their own terms for the people being grouped under this "urban neologism", and by those who reject what they call the "western" binary implications, such as implying that Natives believe these individuals are "both male and female". However, it has generally met with more acceptance than the anthropological term it replaced. Homosexual and gender-variant individuals were also common among other pre-conquest civilizations in Latin America, such as the Aztecs, Mayans, Quechuas, Moches, Zapotecs, and the Tupinambá of Brazil. The Spanish conquerors were horrified to discover sodomy openly practiced among native peoples, and attempted to stamp it out by subjecting the berdaches (as the Spanish called them) under their rule to severe penalties, including public execution, burning and being torn to pieces by dogs. Post-colonization East Asia In East Asia, same-sex love has been documented since the earliest recorded history. China Homosexuality is widely documented in ancient China and attitudes towards it varied across time, location, and social class. Chinese literature recorded multiple anecdotes of men engaging in homosexual relationships. In the story of the leftover peach (余桃), set during the Spring and Autumn era, the philosopher Han Fei recorded an anecdote about the relationship between Mizi Xia (彌子瑕) and Duke Ling of Wei (衛靈公), in which Mizi Xia shared an especially delicious peach with his lover. The story of the cut sleeve (断袖) recorded Emperor Ai of Han sharing a bed with his lover, Dong Xian (董賢); when Emperor Ai woke up later, he carefully cut off his sleeve so as not to wake Dong Xian, who had fallen asleep on top of it. Scholar Pan Guangdan (潘光旦) came to the conclusion that many emperors in the Han dynasty had one or more male sex partners. However, except in unusual cases, such as Emperor Ai, the men named for their homosexual relationships in the official histories appear to have had active heterosexual lives as well. With the rise of the Tang dynasty, China became increasingly influenced by the sexual mores of foreigners from Western and Central Asia, and female companions began to replace male companions in terms of power and familial standing. The following Song dynasty was the last dynasty to include a chapter on male companions of the emperors in official documents. During these dynasties, the general attitude toward homosexuality was still tolerant, but male lovers started to be seen as less legitimate compared to wives, and men were usually expected to marry and continue the family line. During the Ming dynasty, it is said that the Zhengde Emperor had a homosexual relationship with a Muslim leader named Sayyid Husain. In the later Ming dynasty, homosexuality began to be referred to as the "southern custom" because Fujian was the site of a unique system of male marriages, attested to by the scholar-bureaucrat Shen Defu and the writer Li Yu, and mythologized in the folk tale The Leveret Spirit. The Qing dynasty instituted the first law against consensual, non-monetized homosexuality in China. However, the punishment designated, which included a month in prison and 100 heavy blows, was actually the lightest punishment which existed in the Qing legal system. Tolerance of homosexuality in China began to erode during the Self-Strengthening Movement, when homophobia was imported along with Western science and philosophy. 
Japan Homosexuality in Japan, variously known as shudo or nanshoku, has been documented for over one thousand years and had some connections to the Buddhist monastic life and the samurai tradition. This same-sex love culture gave rise to strong traditions of painting and literature documenting and celebrating such relationships. Siam Similarly, in Thailand, kathoey, or "ladyboys," have been a feature of Thai society for many centuries, and Thai kings had male as well as female lovers. While kathoey may encompass simple effeminacy or transvestism, it most commonly is treated in Thai culture as a third gender. They are generally accepted by society. Europe Antiquity The earliest Western documents (in the form of literary works, art objects, and mythographic materials) concerning same-sex relationships are derived from ancient Greece. The formal practice, an erotic yet often restrained relationship between a free-born (i.e. not a slave or freedman) adult male and a free-born adolescent, was valued for its pedagogic benefits and as a means of population control, though occasionally blamed for causing societal disorder. Plato praised its benefits in his early writings but in his late works proposed its prohibition. In the Symposium (182B-D), Plato equates acceptance of homosexuality with democracy, and its suppression with despotism, saying that homosexuality "is shameful to barbarians because of their despotic governments, just as philosophy and athletics are, since it is apparently not in best interests of such rulers to have great ideas engendered in their subjects, or powerful friendships or physical unions, all of which love is particularly apt to produce". Aristotle, in his Politics, dismissed Plato's ideas about abolishing homosexuality (2.4); he explains that barbarians like the Celts accorded it a special honour (2.6.6), while the Cretans used it to regulate the population (2.7.5). Little is known of female homosexuality in antiquity. Sappho, born on the island of Lesbos, was included by later classical Greek people in the canonical list of nine lyric poets. The adjectives deriving from her name and place of birth (sapphic and lesbian) came to be applied to female homosexuality beginning in the 19th century. Sappho's poetry centers on passion and love for various personages and both genders. The narrators of many of her poems speak of infatuations and love (sometimes requited, sometimes not) for various women, but descriptions of physical acts between women are few and subject to debate. In ancient Rome, the young male body remained a focus of male sexual attention, but relationships were between older free men and slaves or freed youths who took the receptive role in sex. The Hellenophile emperor Hadrian is renowned for his relationship with Antinous. However, after the transition to Christianity, by 390 A.D., Emperor Theodosius I made homosexuality a legally punishable offense for the passive partner: "All persons who have the shameful custom of condemning a man's body, acting the part of a woman's to the sufferance of alien sex (for they appear not to be different from women), shall expiate a crime of this kind in avenging flames in the sight of the people." In 558, toward the end of his reign, Justinian expanded the proscription to the active partner as well, warning that such conduct can lead to the destruction of cities through the "wrath of God". 
Notwithstanding these regulations, taxes on brothels of boys available for homosexual sex continued to be collected until the end of the reign of Anastasius I in 518. The Middle Ages Throughout the medieval period in Europe, homosexuality was generally condemned, a condemnation widely understood to be the moral of the story of Sodom and Gomorrah. Historians debate whether there were any prominent homosexuals and bisexuals at this time, but it has been argued that figures such as Edward II, Richard the Lionheart, Philip II Augustus, and William Rufus engaged in same-sex relationships. Also during the medieval period, there were legal arrangements called adelphopoiesis ("brother-making") in the Eastern Mediterranean or affrèrement ("embrotherment") in France that allowed two men to share living quarters and pool their resources, sharing "one bread, one wine, one purse." Historians such as John Boswell and Allan A. Tulchin have argued that these arrangements amounted to an early form of same-sex marriage. This interpretation remains controversial. The Renaissance During the Renaissance, wealthy cities in northern Italy—Florence and Venice in particular—were renowned for their widespread practice of same-sex love, engaged in by a considerable part of the male population and constructed along the lines of the classical pattern of Greece and Rome. But even as much of the male population was engaging in same-sex relationships, the authorities, under the aegis of the Officers of the Night, were prosecuting, fining, and imprisoning a good portion of that population. Many of the prominent artists who defined the Renaissance, such as Michelangelo and Leonardo da Vinci, are believed to have had relationships with men. The decline of this period of relative artistic and erotic freedom was precipitated by the rise to power of the moralizing monk Girolamo Savonarola. In England, Geoffrey Chaucer's "The Pardoner's Tale" centers on an enigmatic and deceptive character who is at one point described as "a gelding or a mare", suggesting that the narrator thought the Pardoner to be either a eunuch ("gelding") or a homosexual. Modernity Early Modernity The relationships of socially prominent figures, such as King James I and the Duke of Buckingham, served to highlight the issue, including in anonymously authored street pamphlets: "The world is chang'd I know not how, For men Kiss Men, not Women now;...Of J. the First and Buckingham: He, true it is, his Wives Embraces fled, To slabber his lov'd Ganimede" The anonymous Love Letters Between a Certain Late Nobleman and the Famous Mr. Wilson was published in 1723 in England and was presumed by some modern scholars to be a novel. The 1749 edition of John Cleland's popular novel Fanny Hill includes a homosexual scene, but this was removed in its 1750 edition. Also in 1749, the earliest extended and serious defense of homosexuality in English, Ancient and Modern Pederasty Investigated and Exemplified, written by Thomas Cannon, was published, but was suppressed almost immediately. It includes the passage: "Unnatural Desire is a Contradiction in Terms; downright Nonsense. Desire is an amatory Impulse of the inmost human Parts." Around 1785 Jeremy Bentham wrote another defense, but this was not published until 1978. Executions for sodomy continued in the Netherlands until 1803 and in England until 1835. Late Modernity Between 1864 and 1880 Karl Heinrich Ulrichs published a series of twelve tracts, which he collectively titled Research on the Riddle of Man-Manly Love.
In 1867 he became the first self-proclaimed homosexual person to speak out publicly in defense of homosexuality when he pleaded at the Congress of German Jurists in Munich for a resolution urging the repeal of anti-homosexual laws. Sexual Inversion by Havelock Ellis, published in 1896, challenged theories that homosexuality was abnormal, as well as stereotypes, and insisted on the ubiquity of homosexuality and its association with intellectual and artistic achievement. Although medical texts like these (written partly in Latin to obscure the sexual details) were not widely read by the general public, they did lead to the rise of Magnus Hirschfeld's Scientific Humanitarian Committee, which campaigned from 1897 to 1933 against anti-sodomy laws in Germany, as well as a much more informal, unpublicized movement among British intellectuals and writers, led by such figures as Edward Carpenter and John Addington Symonds. Beginning in 1894 with Homogenic Love, socialist activist and poet Edward Carpenter wrote a string of pro-homosexual articles and pamphlets, and "came out" in 1916 in his book My Days and Dreams. In 1900, Elisar von Kupffer published an anthology of homosexual literature from antiquity to his own time, Lieblingminne und Freundesliebe in der Weltliteratur. His aim was to broaden the public perspective of homosexuality beyond its being viewed simply as a medical or biological issue to its being seen as an ethical and cultural one as well. Sigmund Freud, among others, argued that neither predominantly different-sex nor same-sex sexuality was the norm, and instead that what is called "bisexuality" is the normal human condition, thwarted by society. These developments suffered several setbacks, both coincidental and deliberate. For example, in 1895, famed playwright Oscar Wilde was convicted of "gross indecency" in the United Kingdom, and lurid details from the trials (especially those involving young male sex workers) led to increased scrutiny of all facets of relationships between men. The most destructive backlash occurred when the Third Reich specifically targeted LGBT people in the Holocaust. Middle East There are a handful of accounts by Arab travelers to Europe during the mid-1800s. Two of these travelers, Rifa'ah al-Tahtawi and Muhammad al-Saffar, show their surprise that the French sometimes deliberately mistranslated love poetry about a young boy so that it referred instead to a young woman, in order to maintain their social norms and morals. Among modern Middle Eastern countries, same-sex intercourse officially carries the death penalty in several nations, including Saudi Arabia and Iran. Today, governments in the Middle East often ignore, deny the existence of, or criminalize homosexuality. Iranian President Mahmoud Ahmadinejad, during his 2007 speech at Columbia University, asserted that there were no gay people in Iran. Gay people do live in Iran, but they are forced to keep their sexuality hidden from society, under pressure from government legislation and traditional norms. Mesopotamia Some ancient religious Assyrian texts may have contained prayers for divine blessings on homosexual relationships, though the same source acknowledges that homosexuality was also regarded as reprehensible, and even criminal. Freely depicted art of anal intercourse, practiced as part of a religious ritual, dates from the 3rd millennium BC onwards.
Homosexual relationships with royal attendants, between soldiers, and those where a social superior was submissive or penetrated were treated as rape or seen as bad omens, and punishments were applied. South Asia South Asia has a recorded and verifiable history of homosexuality going back to at least 1200 BC. Hindu medical texts written in India from this period document homosexual acts and attempt to explain their causes in a neutral, scientific manner. Numerous artworks and literary works from this period also describe homosexuality. The Pali Canon, written in Sri Lanka between 600 BC and 100 BC, states that sexual relations, whether of a homosexual or heterosexual nature, are forbidden in the monastic code, and that acts of soft homosexual sex (such as masturbation and interfemoral sex) do not entail a punishment but must be confessed to the monastery. These codes apply only to monks and not to the general population. The Kama Sutra, written in India around 200 AD, also described numerous homosexual sex acts positively. The Laws of Manu, the foundational work of Hindu law, mentions a "third sex", members of which may engage in nontraditional gender expression and homosexual activities. The Kama Sutra likewise describes techniques by which homosexuals perform fellatio. Further, such homosexual men were also known to marry, according to the Kama Sutra: "There are also third-sex citizens, sometimes greatly attached to one another and with complete faith in one another, who get married together." (KS 2.9.36). South Pacific In many societies of Melanesia, especially in Papua New Guinea, same-sex relationships were an integral part of the culture until the middle of the last century. The Etoro and Marind-anim, for example, even viewed heterosexuality as sinful and celebrated homosexuality instead. In many traditional Melanesian cultures a prepubertal boy would be paired with an older adolescent who would become his mentor and who would "inseminate" him (orally, anally, or topically, depending on the tribe) over a number of years in order for the younger boy to reach puberty. Many Melanesian societies, however, have become hostile towards same-sex relationships since the introduction of Christianity by European missionaries. Africa Egypt Homosexuality in ancient Egypt is a passionately disputed subject within Egyptology: historians and Egyptologists alike debate what kind of view Ancient Egyptian society fostered about homosexuality. Only a handful of direct hints have survived to this day, and many possible indications are only vague and offer plenty of room for speculation. The best-known case of possible homosexuality in Ancient Egypt is that of the two high officials Nyankh-Khnum and Khnum-hotep. Both men lived and served under pharaoh Niuserre during the 5th Dynasty (c. 2494–2345 BC). Nyankh-Khnum and Khnum-hotep each had families of their own with children and wives, but when they died their families apparently decided to bury them together in one and the same mastaba tomb. In this mastaba, several paintings depict both men embracing each other and touching their faces nose-on-nose. These depictions leave plenty of room for speculation, because in Ancient Egypt the nose-on-nose touching normally represented a kiss. Egyptologists and historians disagree about how to interpret the paintings of Nyankh-khnum and Khnum-hotep.
Some scholars believe that the paintings reflect an example of homosexuality between two married men and prove that the Ancient Egyptians accepted same-sex relationships. Other scholars disagree and interpret the scenes as evidence that Nyankh-khnum and Khnum-hotep were twins, even possibly conjoined twins. Whichever interpretation is correct, the paintings show at the very least that Nyankh-khnum and Khnum-hotep must have been very close to each other in life as in death. It remains unclear what exact view the Ancient Egyptians fostered about homosexuality. Documents and literature that contain sexually oriented stories never name the nature of the sexual deeds, instead using stilted and flowery paraphrases. While the stories about Seth and his sexual behavior may reveal rather negative thoughts and views, the tomb inscription of Nyankh-khnum and Khnum-hotep may instead suggest that homosexuality was likewise accepted. Ancient Egyptian documents never clearly state that same-sex relationships were seen as reprehensible or despicable, and no Ancient Egyptian document mentions that homosexual acts were subject to penalty. Thus, a definitive evaluation remains problematic. Uganda In the 19th century, Mwanga II (1868–1903), the Kabaka of Buganda, regularly had sex with his male page. Post–World War II The Western world After World War II, the history of homosexuality in Western societies progressed on very similar and often intertwined paths. In 1948, American biologist Alfred Kinsey published Sexual Behavior in the Human Male, popularly known as the Kinsey Reports. In 1957, the Wolfenden report, commissioned by the UK government to review the country's anti-sodomy laws, advised decriminalizing consensual homosexual conduct, though the laws were not actually changed for another ten years. Homosexuality was deemed to be a psychiatric disorder for many years, although the studies this theory was based on were later determined to be flawed. In 1973 homosexuality was declassified as a mental illness in the United States. In 1986 all references to homosexuality as a psychiatric disorder were removed from the Diagnostic and Statistical Manual of Mental Disorders (DSM) of the American Psychiatric Association. LGBT rights movements During the Sexual Revolution, the different-sex sexual ideal became completely separated from procreation, yet at the same time was distanced from same-sex sexuality. Many people viewed this freeing of different-sex sexuality as leading to more freedom for same-sex sexuality. The Stonewall riots were a series of violent conflicts between New York City police officers and the patrons of the Stonewall Inn, a gay hangout in Greenwich Village. The riot began on the night of Friday, June 27, 1969, during a routine police raid, when trans women and men, gay men, lesbians, street queens, and other street people fought back in the spirit of the civil rights movements of the era. The riot ended on the morning of June 28, but smaller demonstrations occurred in the neighborhood throughout the remainder of the week. In the aftermath of the riots, many gay rights organizations formed, such as the Gay Liberation Front (GLF). A year later, the first gay pride march was held to mark the anniversary of the uprising. Historiographic considerations The terms homosexual and heterosexual were coined by Karl-Maria Kertbeny in an 1868 letter to Karl Heinrich Ulrichs and then published in two pamphlets in 1869.
These became the standard terms when Richard von Krafft-Ebing used them in his Psychopathia Sexualis (1886). The term bisexuality was invented in the 20th century as sexual identities became defined by the predominant sex to which people are attracted, and thus a label was needed for those who are not predominantly attracted to one sex. This points to the fact that the history of sexuality is not solely the history of different-sex sexuality plus the history of same-sex sexuality, but a broader field that views historical events in light of our modern concept, or concepts, of sexuality taken at their broadest and most literal definitions. Historical personalities are often described using modern sexual identity terms such as straight, bisexual, gay or queer. Those who favour the practice say that this can highlight such issues as discriminatory historiography by, for example, putting into relief the extent to which same-sex sexual experiences are excluded from biographies of noted figures, or to which sensibilities resulting from same-sex attraction are excluded from literary and artistic consideration of important works, and so on. The opposite situation is also possible in modern society: some LGBT-supportive researchers cling to homosexual readings of such figures, excluding other possibilities. However, many, especially in the academic world, regard the use of modern labels as problematic, owing to differences in the ways that different societies constructed sexual orientation identities and to the connotations of modern words like queer. Other academics acknowledge that, for example, even in the modern day not all men who have sex with men identify with any of the modern related terms, and that terms for other modern constructed or medicalized identities (such as nationality or disability) are routinely used in anachronistic contexts as mere descriptors or for ease of modern understanding; thus they have no qualms about doing the same for sexual orientation. Academic works usually specify which words will be used and in which context. Readers are cautioned to avoid making assumptions about the identity of historical figures based on the use of the terms mentioned above. Ancient Greece Greek men had great latitude in their sexual expression, but their wives were severely restricted; a wife could hardly move about the town unsupervised unless she was old enough that people would ask whose mother she was, rather than whose wife she was. Men could also seek adolescent boys as partners, as shown by some of the earliest documents concerning same-sex pederastic relationships, which come from Ancient Greece. Though slave boys could be bought, free boys had to be courted, and ancient materials suggest that the father also had to consent to the relationship. Such relationships did not replace marriage between man and woman, but occurred before and during the marriage. A mature man would not usually have a mature male mate (though there were exceptions, Alexander the Great among them); he would be the erastes (lover) to a young eromenos (loved one). Dover suggests that it was considered improper for the eromenos to feel desire, as that would not be masculine. Driven by desire and admiration, the erastes would devote himself unselfishly by providing all the education his eromenos required to thrive in society. In recent times, Dover's theory has been questioned in light of the massive evidence of ancient art and love poetry, which suggests a more emotional connection than earlier researchers liked to acknowledge.
Ancient Rome The "conquest mentality" of the ancient Romans shaped Roman homosexual practices. In the Roman Republic, a citizen's political liberty was defined in part by the right to preserve his body from physical compulsion or use by others; for the male citizen to submit his body to the giving of pleasure was considered servile. As long as a man played the penetrative role, it was socially acceptable and considered natural for him to have same-sex relations, without a perceived loss of his masculinity or social standing. Sex between male citizens of equal status, including soldiers, was disparaged, and in some circumstances penalized harshly. The bodies of citizen youths were strictly off-limits, and the Lex Scantinia imposed penalties on those who committed a sex crime (stuprum) against a freeborn male minor. Male slaves, prostitutes, and entertainers or others considered infames (of no social standing) were acceptable sex partners for the dominant male citizen to penetrate. "Homosexual" and "heterosexual" were thus not categories of Roman sexuality, and no words exist in Latin that would precisely translate these concepts. A male citizen who willingly performed oral sex or received anal sex was disparaged. In courtroom and political rhetoric, charges of effeminacy and passive sexual behaviors were directed particularly at "democratic" politicians (populares) such as Julius Caesar and Mark Antony. Until the Roman Empire came under Christian rule, there is only limited evidence of legal penalties against men who were presumably "homosexual" in the modern sense.
Historism
Historism is a philosophical and historiographical theory, founded in 19th-century Germany (as Historismus) and especially influential in 19th- and 20th-century Europe. At that time there was scarcely a natural, humanistic or philosophical science that did not reflect, in one way or another, the historical type of thought (cf. comparative historical linguistics). It emphasizes the historicity of humanity and its binding to tradition. Historist historiography rejects historical teleology and bases its explanations of historical phenomena on sympathy for and understanding of the events, acting persons, and historical periods (see Hermeneutics). The historist approach takes to its extreme limits the common observation that human institutions (language, art, religion, law, the state) are subject to perpetual change. Historism is not to be confused with historicism, although English usage of the two words is very similar. (The term historism is sometimes reserved to identify the specific current called Historismus in the tradition of German philosophy and historiography.) Notable exponents Notable exponents of historism were primarily the German 19th-century historians Leopold von Ranke and Johann Gustav Droysen, the 20th-century historian Friedrich Meinecke, and the philosopher Wilhelm Dilthey. Dilthey was influenced by Ranke. The jurists Friedrich Carl von Savigny and Karl Friedrich Eichhorn were strongly influenced by the ideas of historism and founded the German Historical School of Law. The Italian philosopher, anti-fascist and historian Benedetto Croce and his British colleague Robin George Collingwood were important European exponents of historism in the late 19th and early 20th century. Collingwood was influenced by Dilthey. Ranke's arguments can be viewed as an antidote to the lawlike and quantitative approaches common in sociology and most other social sciences. In Marxism, the principle of historism is likewise held to have universal methodological significance. Contemporary thought 20th-century German historians who promoted some aspects of historism include Ulrich Muhlack, Thomas Nipperdey and Jörn Rüsen. The Spanish philosopher José Ortega y Gasset was influenced by historism. Criticism Because of the hold of logical positivism on the social sciences, historism (like historicism) has become unpopular there. Georg G. Iggers is one of the most important critical authors on historism. His book The German Conception of History: The National Tradition of Historical Thought from Herder to the Present, first published in 1968 by Wesleyan University Press (Middletown, Ct.), is a "classic" among critiques of historism. Another critique is presented by the German philosopher Friedrich Nietzsche, whose essay On the Use and Abuse of History for Life (1874; see The Untimely Meditations) denounces "a malignant historical fever". Nietzsche contends that the historians of his time, the historists, damaged the powers of human life by relegating it to the past instead of opening it to the future. For this reason, he calls for a return, beyond historism, to humanism. Karl Popper was one of the most distinguished critics of historicism.
He differentiated between the two phenomena as follows: the term historicism is used in his influential books The Poverty of Historicism and The Open Society and Its Enemies to describe "an approach to the social sciences which assumes that historical prediction is their primary aim, and which assumes that this aim is attainable by discovering the 'rhythms' or the 'patterns', the 'laws' or the 'trends' that underlie the evolution of history". Popper wrote with reference to Hegel's theory of history, which he criticized extensively. By historism, on the contrary, he means the tendency to regard every argument or idea as completely accounted for by its historical context, as opposed to assessing it on its merits. Historism does not aim at the 'laws' of history, but takes as its premise the individuality of each historical situation. On the basis of Popper's definitions, the historian Stefan Berger has proposed maintaining this distinction in word usage. See also Heinrich Rickert Historical school of economics References Georg G. Iggers, The German Conception of History: The National Tradition of Historical Thought from Herder to the Present, 2nd rev. edn., Wesleyan University Press, Middletown, Ct., 1983. Stefan Berger, "Stefan Berger responds to Ulrich Muhlack", Bulletin of the German Historical Institute London, Volume XXIII, No. 1, May 2001, pp. 21–33 (a contemporary debate between a historism-critic and a historism-supporting historian). Frederick C. Beiser, The German Historicist Tradition, Oxford University Press, 2011. Frederick C. Beiser, After Hegel: German Philosophy, 1840–1900, Princeton University Press, 2014. Edwin R. Wallace and John Gach (eds.), History of Psychiatry and Medical Psychology: With an Epilogue on Psychiatry and the Mind-Body Relation, Springer, 2008. Peter Koslowski (ed.), The Discovery of Historicity in German Idealism and Historism, Springer, 2006.
Epidemiological transition
In demography and medical geography, epidemiological transition is a theory which "describes changing population patterns in terms of fertility, life expectancy, mortality, and leading causes of death." For example, a phase of development marked by a sudden increase in population growth rates brought by improved food security and innovations in public health and medicine, can be followed by a re-leveling of population growth due to subsequent declines in fertility rates. Such a transition can account for the replacement of infectious diseases by chronic diseases over time due to increased life span as a result of improved health care and disease prevention. This theory was originally posited by Abdel Omran in 1971. Theory Omran divided the epidemiological transition of mortality into three phases, in the last of which chronic diseases replace infection as the primary cause of death. These phases are: The Age of Pestilence and Famine: Mortality is high and fluctuating, precluding sustained population growth, with low and variable life expectancy vacillating between 20 and 40 years. It is characterized by an increase in infectious diseases, malnutrition and famine, common during the Neolithic age. Before the first transition, the hominid ancestors were hunter-gatherers and foragers, a lifestyle partly enabled by a small and dispersed population. However, unreliable and seasonal food sources put communities at risk for periods of malnutrition. The Age of Receding Pandemics: Mortality progressively declines, with the rate of decline accelerating as epidemic peaks decrease in frequency. Average life expectancy increases steadily from about 30 to 50 years. Population growth is sustained and begins to be exponential. The Age of Degenerative and Man-Made Diseases: Mortality continues to decline and eventually approaches stability at a relatively low level. Mortality is increasingly related to degenerative diseases, cardiovascular disease (CVD), cancer, violence, accidents, and substance abuse, some of these due primarily to human behavior patterns. The average life expectancy at birth rises gradually until it exceeds 50 years. It is during this stage that fertility becomes the crucial factor in population growth. In 1998 Barrett et al. proposed two additional phases in which cardiovascular diseases diminish as a cause of mortality due to changes in culture, lifestyle and diet, and diseases associated with aging increase in prevalence. In the final phase, disease is largely controlled for those with access to education and health care, but inequalities persist. The Age of Declining CVD Mortality, Aging and Emerging Diseases: Technological advances in medicine stabilize mortality and the birth rate levels off. Emerging diseases become increasingly lethal due to antibiotic resistance, new pathogens like Ebola or Zika, and mutations that allow old pathogens to overcome human immunity. The Age of Aspired Quality of Life with Persistent Inequalities: The birth rate declines as lifespan is extended, leading to an age-balanced population. Socioeconomic, ethnic, and gender inequalities continue to manifest differences in mortality and fertility. The epidemiological transition occurs when a country undergoes the process of transitioning from developing nation to developed nation status. 
The developments of modern healthcare and medicine, such as antibiotics, drastically reduce infant mortality rates and extend average life expectancy which, coupled with subsequent declines in fertility rates, reflects a transition to chronic and degenerative diseases as more important causes of death. The theory of epidemiological transition uses patterns of health and disease as well as their forms of demographic, economical and sociological determinants and outcomes. History In general human history, Omran's first phase occurs when human population sustains cyclic, low-growth, and mostly linear, up-and-down patterns associated with wars, famine, epidemic outbreaks, as well as small golden ages, and localized periods of "prosperity". In early pre-agricultural history, infant mortality rates were high and average life expectancy low. Today, life expectancy in developing countries remains relatively low, as in many Sub-Saharan African nations where it typically doesn't exceed 60 years of age. The second phase involves improved nutrition as a result of stable food production along with advances in medicine and the development of health care systems. Mortality in Western Europe and North America was halved during the 19th century due to closed sewage systems and clean water provided by public utilities, with a particular benefit for children of both sexes and to females in the adolescent and reproductive age periods, probably because the susceptibility of these groups to infectious and deficiency diseases is relatively high. An overall reduction in malnutrition enabled populations to better resist infectious disease. Treatment breakthroughs of importance included the initiation of vaccination during the early nineteenth century, and the discovery of penicillin in the mid 20th century, which led respectively to a widespread and dramatic decline in death rates from previously serious diseases such as smallpox and sepsis. Population growth rates surged in the 1950s, 1960's and 1970's to 1.8% per year and higher, with the world gaining 2 billion people between 1950 and the 1980s. A decline in mortality without a corresponding decline in fertility leads to a population pyramid assuming the shape of a bullet or a barrel, as young and middle-age groups comprise equivalent percentages of the population. Omran's third phase occurs when human birth rates drastically decline from highly positive replacement rates to stable replacement numbers. In several European nations replacement rates have even become negative. This transition generally represents the net effect of individual choices on family size and the ability to implement those choices. Omran gives three possible factors tending to encourage reduced fertility rates: Bio-physiologic factors, associated with reduced infant mortality and the expectation of longer life in parents; Socioeconomic factors, associated with childhood survival and the economic challenges of large family size; and Psychological or emotional factors, where society as a whole changes its rationale and opinion on family size and parental energies are redirected to qualitative aspects of child-raising. Impact on fertility Improvements in female and childhood survival that occur with the shift in health and disease patterns discussed above have distinct and seemingly contradictory effects on fertility. 
While better health and greater longevity enjoyed by females of reproductive age tend to enhance fertility, the reduced risks to infants and young children that occurs in the later stages of the transition tends to have the opposite effect: prolonged breastfeeding associated with reduced mortality among infants and toddlers, together with parental recognition of improved childhood survival, tend to lengthen birth intervals and depress overall reproductive rates. Economic impact The transition may also be associated with demographic movements to urban areas, and a shift from agriculture and labor-based production output to technological and service-sector-based economies. This shift in demographic and disease profiles is currently under way in most developing nations, however every country is unique and transition speed is based on numerous geographical and sociopolitical factors. Whether the transition is due to socioeconomic improvements (as in developed countries) or by modern public health programs (as has been the case in many developing countries), the lowering of mortality and of infectious disease tends to increase economic productivity through better functioning of adult members of the labor force and through an increase in the proportion of children who survive and mature into productive members of society. Models of transition Omran developed three models to explain the epidemiological transition. Classical/Western model: (England, Wales, and Sweden) Countries in Western Europe typically experienced a transition that began in the late eighteenth century and lasted over 150 years to the post-World War II era. The lengthy transition allowed fertility to decline at virtually the same rate that mortality also declined. Germany might be considered another example of this model. Accelerated model: (Japan) Japan experienced a rapid transition as a result of a few decades of intensive war-driven industrialization followed by postwar occupation. The accelerated transition follows a pattern similar to the Classical/Western Model except that it occurs within a much shorter time span. China might be considered another example of this model. Contemporary/Delayed model: (Chile, Ceylon) Due to slow economic development, Chile and Ceylon (Sri Lanka) experienced delayed transitions that have lasted into the 21st century. Medical and public health improvements have reduced mortality, while the birth rate remains high. Cultural traditions combined with political and economic instability and food insecurity mean that mortality for women and children fluctuates more than for men. Mauritius might be considered another example of this model. Determinants of disease Ecobiological: changing patterns of immunity, vectors (such as the black rat partially responsible for spreading bubonic plague in Europe), and the movement of pathogenic organisms. These alter the frequency of epidemic infectious diseases as well as chronic infections and other illnesses that affect fertility and infant mortality. Socioeconomic: political and cultural determinants, including standards of living, health habits, hygiene and nutrition. Hygiene and nutrition are included here, rather than under medical determinants, because their improvement in western countries was largely a byproduct of social change rather than a result of medical design. Medical/Public health: specific preventive and curative measures used to combat disease, including improved public sanitation, immunization and the development of decisive therapies. 
Medical and public health factors came into play late in the western transition, but have an influence early in certain accelerated and contemporary transitions. Other perspectives McMichael, Preston, and Murray offer a more nuanced view of the epidemiological transition, highlighting macro trends and emphasizing that there is a change from infectious to non-communicable disease, but arguing that it happens differently in different contexts. One of the first to refine the idea of the epidemiological transition was Preston, who in 1976 proposed the first comprehensive statistical model relating mortality and cause-specific mortality. Preston used life tables from 43 national populations, including both developed countries such as United States and England and developing countries such as Chile, Colombia, Costa Rica, Guatemala, México, Panama, Taiwan, Trinidad and Tobago, and Venezuela. He used multiple linear regression to analyze the cause-specific-age-standardized death rates by sex. The estimated slopes represented the proportional contribution of each cause to a unit change in the total mortality rate. With the exception of neoplasms in both sexes and cardiovascular disease in males, all of the estimated slopes were positive and statistically significant. This demonstrated that the mortality rates from each specific cause were expected to decline as total mortality declined. The major causes accounting for the decline were all infectious and parasitic diseases. McMichael et al. argue (2004) that the epidemiological transition has not taken place homogeneously in all countries. Countries have varied in the speed with which they go through the transition as well as what stage of the transition they are in. The global burden of disease website provides visual comparisons of the disease burdens of countries and the changes over time. The epidemiological transition correlates with changes in life expectancy. Worldwide, mortality rates have decreased as both technological and medical advancements have led to a tremendous decrease in infectious diseases. With fewer people dying from infectious diseases, there is a rising prevalence of chronic and/or degenerative diseases in the older surviving population. McMichael et al. describe life expectancy trends as grouped into three categories, as suggested by Casselli et al.: Rapid gains among countries such as Chile, Mexico and Tunisia that have strong economic and technical relationships with developed countries Slower plateauing gains mostly among developed countries with slower increases in life expectancy (for example, France) Frank reversals occurring mostly in developing countries where the HIV epidemic led to a significant decline in life expectancy, and countries in the former Soviet Union, afflicted by social upheavals, heavy alcohol consumption and institutional inadequacy (for example, Zimbabwe and Botswana) Murray and Lopez (1996) offered one of the most important cause-of-death models as part of the 1990 Global Burden of Disease Study. Their "cause of death" patterns sought to describe the fraction of deaths attributed to a set of mutually exclusive and collectively exhaustive causes. They divided diseases into three cause groups and made several important observations: Group 1 - communicable, maternal, perinatal, and nutritional: These causes of death decline much faster than overall mortality and comprise a small fraction of deaths in wealthier countries. 
Group 2 - non-communicable diseases: These causes of death are a major challenge for countries that have completed or nearly completed the epidemiological transition. Group 3 - injuries: This cause of death is the most variable within and across different countries and is less predictive of all-cause mortality. The regression approach underlying the Global Burden of Disease received some critique in light of real-world violations of the model's "mutually exclusive and collectively exhaustive" cause attribution. Building on the existing body of evidence, Salomon and Murray (2002) further add nuances to the traditional theory of epidemiological transition by disaggregating it by disease categories and different age-sex groups, positing that the epidemiological transition entails a real transition in the cause composition of age-specific mortality, as opposed to just a transition in the age structure. Using Global Burden of Disease data from 1990, they disaggregate the transition across three cause groups: communicable diseases, non-communicable diseases and injuries, seeking to explain the variation in all-cause mortality as a function of cause-specific mortality in 58 countries from 1950 to 1998. This analysis validates the underlying premise of the classic epidemiological transition theory: as total mortality declines and income rises, communicable diseases cause less and less mortality compared to non-communicable diseases and injuries. Decomposing this overall impact by age-sex groups, they find that for males, when overall mortality decreases, the importance of non-communicable diseases (NCDs) increases relative to the other causes with an age-specific impact on the role of injuries, whereas for women, both NCDs and injuries gain a more significant share as mortality decreases. For children over one year, they find that there is a gradual transition from communicable to non-communicable diseases, with injuries remaining significant in males. For young adults, the epidemiological transition is particularly different: for males, there is a shift from injuries to NCDs in lower-income settings, and the opposite in higher-income settings; for females, rising income also signifies a shift from NCDs to injuries, but the role of injuries becomes more significant over time compared to males. Finally, for both males and females over 50, there is no epidemiological transition impact on the cause composition of mortality. Current evidence The majority of the literature on the epidemiological transition published since these seminal papers confirms the context-specific nature of the epidemiological transition: while there is an overall all-cause mortality decline, the nature of cause-specific mortality declines differs across contexts. Increasing obesity rates in high-income countries further confirm the epidemiological transition theory as the epidemic leads to an increase in NCDs. The picture is more nuanced in low- and middle-income countries, where there are signs of a protracted transition with the double burden of communicable and noncommunicable disease. A recent review of cause-specific mortality rates from 12 low- and middle-income countries in Asia and sub-Saharan Africa by Santosa and Byass (2016) shows that, broadly, low- and middle-income countries are rapidly transitioning to lower total mortality and lower infectious disease mortality.
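The regression framing that runs through the Preston and Global Burden of Disease work described above can be written schematically as follows; this is a reconstruction for illustration only, not the authors' exact specification, and the symbols M, M_i, \alpha_i, \beta_i and \varepsilon_i are introduced here rather than taken from the sources. Each cause-specific, age-standardized death rate M_i is regressed on the all-cause rate M:

\[ M_i = \alpha_i + \beta_i\, M + \varepsilon_i, \qquad \sum_i \beta_i = 1, \]

so each estimated slope \beta_i can be read as the proportional contribution of cause i to a unit change in total mortality. The slopes sum to one only when the cause list is mutually exclusive and collectively exhaustive, which is why the violations of that assumption noted above bear directly on the regression approach.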
A more macro-level analysis of the Global Burden of Disease data conducted by Murray and others (2015) finds that while there is a global trend towards decreasing mortality and increasing NCD prevalence, this global trend is being driven by country-specific effects as opposed to a broader transition; further, there are varying patterns within and between countries, which makes it difficult to have a single unified theory of epidemiological transition. A theory of epidemiological transition aimed at explaining, not just describing, changes in population disease and mortality profiles would need to encompass the role that infectious diseases contracted over the life course play in different morbid conditions. The concept of a linear transition from infectious diseases to other conditions, referred to as degenerative or non-communicable, was based on a false dichotomy, as common microorganisms have now been confirmed as causal agents in several conditions recorded as the underlying cause of many deaths. A revised transition model might focus more on disease aetiology and the determinants of cause-specific mortality change, while encompassing the possibility that infectious causation may be established for other morbid conditions through the vast amount of ongoing research into associations with infectious diseases. See also Demographic transition Medical anthropology Medical sociology Nutrition transition
Historical inheritance systems
Historical inheritance systems are different systems of inheritance among various people. Detailed anthropological and sociological studies have been made about customs of patrilineal inheritance, where only male children can inherit. Some cultures also employ matrilineal succession, where property can only pass along the female line, most commonly going to the sister's sons of the decedent; but also, in some societies, from the mother to her daughters. Some ancient societies and most modern states employ egalitarian inheritance, without discrimination based on gender and/or birth order. Cross cultural research about systems of inheritance Land inheritance Land inheritance customs greatly vary across cultures. The Ethnographic Atlas gives the following data regarding land distribution: primogeniture predominates in 247 societies, while ultimogeniture prevails in 16. In 19 societies land is exclusively or predominantly given to the one adjudged best qualified, while equality predominates in 301 societies. Regarding land inheritance rules, in 340 societies sons inherit, in 90 other patrilineal heirs (such as brothers), in 31 sister's sons, in 60 other matrilineal heirs (such as daughters or brothers), and in 98 all children. In 43 societies land is given to all children, but daughters receive less. In 472 societies, the distribution of inherited land follows no clear rules or information is missing, while in 436 societies inheritance rules for real property do not exist or data is missing; this is partly because there are many societies where there is little or no land to inherit, such as in hunter-gatherer or pastoral societies. Patrilineal primogeniture, where the eldest son inherits, was customary among many cultures around the world. Patrilineal ultimogeniture, where the youngest son inherits, was customary among a number of cultures including: Fur, Fali, Sami (also called Lapp), Bashkir, Chuvash, Gagauz, Vep, Tatar, Achang, Ayi, Atayal, Kachi, Biate, Chinantec, Hmar, Mro, Kom, Purum and Lushei or Lushai (sometimes mistakenly taken for the whole Mizo people, especially in the past). Among English peasants there was no clearly prevalent inheritance pattern, while Spanish Basques gave their land to the one considered best qualified, though they had a preference for sons. Giving more or less equal shares of land to sons, but excluded daughters was also common in many populations, as was giving relatively equal shares to both sons and daughters or slightly less to daughters. The same system prevails in contemporary Egypt and most Arab groups (see Sharia). Most non-Arab Muslims, with some exceptions (Caucasians, Iranians), historically followed their own inheritance customs, not those of the Sharia. In Ancient Egypt the eldest son inherited twice as much as other sons, and in earlier times he was the sole heir. Among the Lao, the Aceh, the Guanches, and the Minangkabau, all daughters inherited equal shares of land. The Cham, the Jaintia, the Garo, and the Khasi practiced female ultimogeniture. Primogeniture, regardless of the sex of the child, was customary among the Paiwan, the Ifugao, the Chugach, and the French Basques. While ultimogeniture, regardless of the sex of the child, was customary among the Chuvash and the Mari. Bilateral primogeniture is a rarer custom of inheritance where the eldest son inherits from the father and the eldest daughter inherits from the mother. 
This practice was common among the Classic Mayas, who transmitted the family's household furnishings from mother to eldest daughter, and the family's land, houses and agricultural tools from father to eldest son. It was also seen on the Greek island of Karpathos, where the family's house was transmitted from mother to eldest daughter, and the family's land was transmitted from father to eldest son. Among the Igorot, the father's land is inherited by his eldest son and the mother's land is inherited by her eldest daughter. A review of numerous studies found that the pattern of land inheritance traditionally prevalent among English, Dutch and New Englander peasants was partible inheritance. The pattern of land inheritance traditionally prevalent among Russian peasants was found to be close to patrilineal primogeniture, "as oldest sons may well inherit more". The conclusions of this review contradict previous reports that Russians practiced equal inheritance of land by all sons and that the English, Dutch and New Englanders had no definite inheritance pattern. In easternmost Europe, patrilineal ultimogeniture prevailed among most Turkic peoples. Equal inheritance of property by all sons prevailed among most Finno-Ugric peoples, and patrilineal primogeniture prevailed among Estonians and Balts. Inheritance customs are sometimes considered a culturally distinctive aspect of a society. Although it is often thought that the Mizos employ ultimogeniture, this is because the customs of the Lushais or Lusheis are confused with those of all Mizos; Mizo and Lushai have occasionally been used interchangeably. Among most non-Lushai Mizos, primogeniture predominates, just as among Kukis. In general there is great confusion about the ethnic identity of the many northeastern Indian tribes. Some regard the generic term Zomi as most appropriate. Inheritance of movable property The same disparity is seen regarding inheritance of movable property. Most nomadic peoples from Asia, for example the Khalka Mongols, give a more or less equal share of the herd to each son as he marries. Typically the youngest remains behind, caring for the parents and inheriting his father's tent after their death, in addition to his own share of the herd. However, others, such as the Yukaghir and the Yakuts, leave most of the herd to one son (in the above examples the youngest and the eldest, respectively). Some pastoral peoples from other geographical areas also practice unequal wealth transfers, although customs of equal male inheritance are more common among them than among agriculturalists. Patrilineal primogeniture with regard to both livestock and land was practiced by the Tswana people, whose main source of wealth was livestock, although they also practiced agriculture. This practice was also seen among other southern Bantu peoples, such as the Tsonga or the Venda. Among the Venda, however, while livestock was inherited by the eldest son, land was not inherited within families but was given to each son by village authorities as he married. Among the Tsonga, most of the land was used only for stockbreeding. Patrilineal primogeniture also prevailed among the neighboring Khoi peoples, of whom only the Nama remain. Many other African peoples also practiced patrilineal primogeniture with regard to livestock.
These included: The Ngoni, the Gogo, the Mangbetu, the Rendille, the Sapo, the Boran, the Gabra, the Plains Pokot, the Hema, the Beti-Pahuin, the Buduma, the Dogon, the Duala, the Djafun and the Kassena. According to the Ethnographic Atlas, the Fulbe or Fulani, the largest pastoral people in Africa, divided their livestock equally between all sons. However, according to some other sources they practiced male primogeniture. Chukchi, Koryak and Ket peoples practiced male ultimogeniture. It has been stated that the rest of Siberian peoples, such as Voguls, Samoyeds or Khantys, practiced patrilineal primogeniture, though there isn't much reliable information about the traditional customs of Siberian peoples. It is said that Gilyaks divided their cattle equally between all sons. Patrilineal primogeniture was also traditionally prevalent among pastoral peoples from Australia, such as the Aranda, as well as among Himalayan pastoralists like the Changpa. Patrilineal primogeniture was traditionally prevalent among some pastoral peoples from Greenland and northern Canada. The neighboring indigenous peoples of the Pacific Northwest Coast were organized in societies where elder sons and their lines of descent had higher status than younger sons and their lines of descent (a "conical clan"), although a rule of patrilineal primogeniture couldn't develop among most of them since they were mostly hunter-gatherers. However, rule of patrilineal primogeniture did develop among some Canadian indigenous peoples who practiced agriculture, such as the Montagnais, the Kutchin, the Pikangikum, the Ojibwa, the Klallam and the Atsugewi. Canadian indigenous peoples were influenced by the ancient Thule culture, of which little is known with certainty. Other sources Intergenerational wealth transmission among agriculturalists tends to be rather unequal. Only slightly more than half of the societies studied practice equal division of real property; customs to preserve land relatively intact (most commonly primogeniture) are very common. Wealth transfers are more egalitarian among pastoralists, but unequal inheritance customs also prevail in some of these societies, and they are strongly patrilineal. A study of 39 non-Western societies found many customs that distinguished between children according to their sex and birth order. First sons, in comparison to other sons, "are likely to inherit or otherwise gain control of more family land, livestock, or other wealth." First sons inherited more than the other sons among 11 societies studied. Among the Todas, both first and last sons inherited more than the other sons. Last sons inherited more than the other sons among the Lolo and the Yukaghir, and inherited less among the Luo. The people found to have the greatest number of customs favourable to first sons in the study were the Tswana, followed closely by the Azande. The people with the greatest number of customs favorable to last sons in their study were the Lolo. This study confirmed ethnographers' claims that customs favorable to first sons were common in South Asia, Austronesia and Sub-Saharan Africa, while customs favorable to last sons were common among the ethnic minorities of Southwest China. The only custom that distinguished between sons among the Dagor Mongols was that first sons received more respect from his siblings and last sons received less respect from their siblings. This contradicts those theories that maintain that peoples of the Asian steppe had strong customs favorable to first or last sons. 
In fact, the indigenous American peoples had significantly more customs favorable to first sons than the Dagor Mongols. Among Arab peoples, such as the Egyptian Fellahin, all sons inherited the same and had the same wealth. This was also seen among Alaska Native peoples such as the Eyak. Jack Goody was an influential anthropologist during the twentieth century, though his theories have been mostly rejected during the last decades. He made a distinction between a complete and a preferential form of primogeniture and ultimogeniture. In the complete form of both customs, the rest of the children are excluded from the inheritance. However, in the preferential form of primogeniture, the eldest son acts as custodian of the father's rights on behalf of his brothers. In the preferential form of ultimogeniture, the youngest son inherits the residue of his father's property after elder sons have received their shares during the father's lifetime. Goody called ultimogeniture "Borough English" and primogeniture "Borough French" because in England ultimogeniture was a native custom, while primogeniture was a custom brought by the Norman invaders. According to Goody, in Late Medieval England, patrilineal primogeniture predominated in feudal tenures and among the peasantry of large parts of the Midlands. Patrilineal ultimogeniture ("Borough English") prevailed elsewhere in the champion country. Partible inheritance (gavelkind) prevailed in Kent, East Anglia and the Celtic areas. Both preferential primogeniture and preferential ultimogeniture were practiced in pre-revolutionary Russia, where the eldest son succeeded as family head and inherited more than the other sons. "The youngest son, if he remained with the father, inherited the house and also at times other property" (minorat). However, the share of land and moveables of the other sons was only slightly smaller than that of the eldest and the youngest son. Only in the southern part of the country was the house inherited by the youngest son; in the north it was inherited by the eldest son. The Russian family of around 1900 considered property such as the house, agricultural implements, livestock and produce as belonging collectively to all family members. When the father died, his role as head of the family (known as Khozain, or Bolshak) was passed to the oldest person in the house. In some areas this was the oldest son. In others it was the oldest brother of the deceased so long as he lived in the same house. There were some areas where a new head would be elected by the family members. If all surviving members of the family were under age, a relation would become a co-proprietor. If property was divided after a death, each adult male in the house got an equal share. Sons who had left home did not have a right of succession. Females remained within the family and received a share of the inheritance when they married. In the north of Russia, the oldest son inherited the house. In the south, the eldest son would have set up a separate house while the father was still alive; therefore the youngest inherited the father's house upon his death.
Systems of inheritance among various people
Throughout history, creative inheritance systems have been devised to fit the needs of various peoples according to their unique environments and challenges.
Inheritance customs as a cultural dimension
Inheritance customs do not follow clear ethnic, linguistic or geographical patterns.
Equality between all sons and a subordinate position of women, with the exclusion of daughters from inheriting, are prominent aspects of Hungarian, Albanian, Romanian, Armenian, and most Slavic or Latin American cultures. Many studies, by contrast, show the privileged position that the eldest son traditionally enjoyed in Slovene, Finnish or Tibetan culture. The Jaintia, the Garo and the Khasi, on the other hand, traditionally privileged the youngest daughter. Some peoples, like the Dinka, the Arakanese, the Chins of Myanmar, or the Karen, frequently show a compromise between primogeniture and ultimogeniture in their inheritance patterns. Among many Chins of Myanmar, however, the advantage that the eldest and the youngest son have over other sons is very small, so it is not correct to speak of a true pattern of mixed primogeniture and ultimogeniture. The advantage of the eldest and the youngest son is somewhat larger among the Dinka and the Arakanese. The compromise between primogeniture and ultimogeniture was also found among the Kachin and the Dilling, as well as among the Sherpa to some degree. This pattern of inheritance is also reported for many Fulbe villages in the Republic of Guinea, though it seems that in past times the eldest son inherited everything in Guinea. Sometimes inheritance customs do not entirely reflect social traditions. Romans valued sons more than daughters, and Thais and Shan showed the reverse pattern, though all practiced equal land inheritance between all children. The Shan people, who live mostly in northern Thailand and northeastern Myanmar, are markedly matrilocal. In Han Chinese tradition, the eldest son was of special importance. The law punished offences by a younger brother against an elder brother more harshly than vice versa. The eldest son received the family headship in cases where the family held together as a single unit, and the largest share in cases of family division, since he also inherited the cult to family ancestors. This is still practiced in Taiwan today, though Chinese peasants have practiced partible inheritance since the time of the Qin and Han dynasties, when the previous system of male primogeniture was abolished. In some cases, the eldest son of the eldest son, rather than the eldest son, was favored. Ritual primogeniture was emphasized in the lineage organizations of North China. During the Longshan culture period and the period of the Three Dynasties (Xia, Shang and Zhou), patrilineal primogeniture predominated. Among Mongols it has usually been stated that the youngest son had a special position because he cared for his parents in their old age. On their death he inherited the parental tent, which was connected with the religious cult in Mongol traditions, though all sons received more or less equal shares of livestock as they married. However, in contrast to this popularly held notion, more rigorous and substantiated anthropological studies of kinship and family in central Asian peoples strongly indicate that in these societies elder sons and their lines of descent had higher status than younger sons and their lines of descent. In central Asia, all members of a lineage were terminologically distinguished by generation and age, with senior superior to junior.
The lineage structure of central Asia had three different modes: genealogical distance, or the proximity of individuals to one another on a graph of kinship; generational distance, or the rank of generation in relation to a common ancestor; and birth order, the rank of brothers in relation to one another. The paternal descent lines were collaterally ranked according to the birth of their founders, and were thus considered senior and junior to each other. Of the various collateral patrilines, the senior in order of descent from the founding ancestor, the line of eldest sons, was the most noble. In the steppe, no one had his exact equal; everyone found his place in a system of collaterally ranked lines of descent from a common ancestor. It was according to this idiom of superiority and inferiority of lineages derived from birth order that legal claims to superior rank were couched. Furthermore, at least among Mongols, the elder son inherited more than the younger son, and this was mandated by law codes such as the Yassa, created by Genghis Khan. Among Arab peoples, it is sometimes argued that the expansion of Islam brought an end to the sharp distinction between the firstborn and other sons so characteristic of ancient Semitic peoples. However, many peoples who have partially or completely embraced Islam have also established inequality between sons, such as the Oromo of east Africa, who had patrilineal primogeniture in inheritance, in spite of the fact that some of them were Muslim. Other Muslim peoples, like the Minangkabau and the Javanese of Indonesia, the Turks, or the Fur in Sudan, also have inheritance practices that contradict their Islamic beliefs. Most non-Arab Muslims historically followed their own inheritance customs, not those of the Sharia. In India, inheritance customs were (and still are) very diverse. Patrilineal primogeniture predominated in ancient times. The Laws of Manu state that the oldest son inherits all of the father's estate. Since the Middle Ages, patrilineal equal inheritance has prevailed in perhaps a majority of groups, although the eldest son often received an extra share. Under this system, the estate would be shared between all sons, but these would often remain together with their respective families under the headship of the karta or family head, who was usually the eldest son of the previous family head. However, among some South Asian peoples, such as the Western Punjabi, male primogeniture continued to prevail.
Fertility and marriage strategies across diverse societies
Cross-cultural comparisons
The practice of widow inheritance by younger brothers has been observed in many parts of Africa and the Asian steppe, as well as small zones of South Asia. This practice forces younger brothers to marry older women. Eastern European cultures, on the other hand, are characterized by early, universal and equal access to marriage and reproduction, due to their systems of equal inheritance of land and movable property by all sons. Research on pre-industrial Russian Karelia, however, suggests that younger brothers frequently remained unmarried, and that the joint-family household characterized by the equal inheritance of land and moveable property by all sons and patriarchal power relations wasn't universal in Russia. The patrilineal joint-family systems and more or less equal inheritance for all sons in India and China meant that there was no difference in marriage and reproduction due to birth order.
In the stem-family systems of Northwest Europe, however, access to marriage and reproduction wasn't equal for all sons, since only one of them would inherit most or all of the land. The survival and well-being of children in India and China is positively influenced by the number of older siblings of the opposite sex and negatively influenced by the number of older siblings of the same sex. However, definitive celibacy was historically relatively uncommon in India and China, but relatively common in many European societies where inheritance was impartible. Han Chinese first sons historically married earlier, had lower rates of definitive celibacy and had more children (especially males) than their younger brothers. However, they suffered higher mortality rates. This has been attributed to the fact that eldest sons needed to have more children to succeed them as heads and were willing to take more risks and suffer a higher drain of resources to achieve this. The Chinese joint family system had strong inegalitarian traits that made it demographically more akin to a stem family system. According to Emmanuel Todd and others, it may be reminiscent of the system of patrilineal primogeniture prevalent during the Longshan culture period and the period of the Three Dynasties.
Variations by class and context
There is a strong relationship between fertility and inheritance in "Malthusian" contexts of resource scarcity. In contexts where resources are plentiful, the relationship between inheritance and social outcomes can be different. In the Midwest and Northeast United States during the period from 1775 to 1875, where resources were plentiful, being the first son was positively correlated with wealth and fertility. As in other western cultures, but unlike European societies where resources were scarce, this has a complex relationship with inheritance. Inheritance practices and seniority of patriline, as well as the importance of inheritance itself, have varied over time among the Lisu. This was mostly in response to changes in resource availability and poppy cultivation. In the United States, daughters currently inherit on average more than sons. In the past, however, the eldest son was favored in matters of land inheritance. During the Colonial Period, the eldest son inherited twice as much as the other sons in the northern colonies (these inheritance laws were modelled on Mosaic Law), and in the southern colonies there was a rule of male primogeniture. In northern Ghana, a region where male primogeniture predominates, rich households favoured sons over daughters. It is likely that firstborn sons would have been preferred, as they would inherit the wealth and therefore have higher reproductive prospects.
Cultural patterns of child-preference
In recent times inheritance in the western world has generally been egalitarian, despite parents showing favoritism towards daughters and later-born sons. In parent-son relationships, mothers usually show favouritism towards the first son and fathers towards later-born sons; however, these tendencies have lost much of their importance with regard to inheritance. Customs of ultimogeniture among farmers have been explained as a consequence of postponing retirement so that they do not feel "dethroned" early by their eldest son. This line of thinking has been linked to the preeminence of lastborn siblings in popular myth and folklore around the world.
As a consequence, in some cultures that practice male primogeniture there are ambiguous, contradictory feelings towards last-born sons. Among the Hausa of West Africa, who practice primogeniture, the mother and oldest son engage in mutual avoidance behavior to mark the woman's positive change in status upon producing an heir. The father may also avoid the son if he is the mother's first male child, speaking to him through intermediaries rather than directly. Among the Mossi of central Burkina Faso in West Africa, the eldest son would be sent to relatives shortly after circumcision and return to the parental household shortly after puberty; after the death of his father he would inherit his property. A study of the people of the Pacific island Tikopia in 1929 found that the eldest son must marry and receive more land, while the younger sons remain bachelors, emigrate or even die. However, by 1952 many of these customs were being abandoned and marriage was beginning to become universal, though in the succession to chieftainship the traditional custom of male primogeniture continued. In some societies in Sub-Saharan Africa where male primogeniture was practiced, tensions between parents and their inheriting eldest son were resolved through rituals of avoidance. This was most extreme among the Tallensi. Among East Asian peoples, on the other hand, co-residence between parents and their eldest son was thought of as normal and desirable in systems of impartible inheritance, and in some countries such as Japan, Vietnam and South Korea it is widely practiced even today. Historically in Japan, marriage and reproduction by the eldest son was facilitated by his status as heir. In Japan, Korea and Vietnam, as well as in some of those European regions where male primogeniture was practiced, parents didn't transfer their property to the inheriting son at the point of his marriage, as was done among Germans. Instead, the first son remained under his father's authority even after he had married and had had children, and the father remained the nominal head of the family until his death, relinquishing his actual authority slowly and gradually. In Japan, only the inheriting son stayed in the parental household. He could become head of the family at any time between his marriage and the death of his predecessor; the timing of this was normally dictated by familial or local traditions. The Catalan and Occitan stem families in Europe closely resembled the model seen in Japan. In rural China, property and landholdings are usually divided up when the eldest son marries. Normally the youngest son continues to live with the parents and inherits their remaining share of the property. Prior to the revolution in 1949, most families in rural areas of China stayed together for many years after the oldest son married, sometimes until the youngest son married. However, there is some evidence that the practice of co-residing with the eldest son continues. In Israel, co-residence between parents and their eldest son prevails in the context of the Moshav movement, which prohibited breaking up family plots; thus the eldest son inherits the family farm. In South Korea, modern businesses (chaebol) are handed down according to male primogeniture in most cases. A study of family firms in the UK, France, Germany and the US found that male primogeniture was the inheritance rule in more than half of family firms in France and the UK, but in less than a third of those in the US and only a quarter of those in Germany.
Social approaches to inheritance customs
Employing differing forms of succession can affect many areas of society. Gender roles are profoundly affected by inheritance laws and traditions. Impartible inheritance has the effect of keeping large estates united and thus perpetuating an elite. With partible inheritance large estates are slowly divided among many descendants and great wealth is thus diluted. Inheritance customs can even affect gender differences in cognitive abilities. Among the Karbis, who employ male primogeniture, men perform significantly better than women in tasks of spatial abilities. There are no significant differences in the performance of men and women among the Khasis, who employ female ultimogeniture. The degree of acceptance that a society may show towards an inheritance rule can also vary. In South Africa, for example, the influence of more modern, western social ideas has caused strong opposition, both civil and official, to the customary law of patrilineal primogeniture traditionally prevalent among black peoples, and inheritance customs are gradually changing. Among the indigenous tribes of South Africa, the oldest son inherits after the death of the father. If the oldest son is also dead, the oldest surviving grandson inherits; if the eldest son has no sons, the inheritance is passed to the father's second son or his sons, and so on through all the sons and their male children if necessary. In polygynous families, which were formed of multiple units, the inheritance rules were changed slightly. Each marriage formed a new unit, independent from the others, with separate property which was inherited by the heir of each unit. Polygynous families practised either simple or complex inheritance. In the simple system the heir is the eldest son of the first wife or, if he is dead, the eldest grandson. If the first wife had no sons, the inheritance went to the oldest surviving male descendant of the second wife, and so on through all the wives if necessary. Complex inheritance happened when the homestead was separated into two or three units, depending on the number of wives, and the eldest son of each wife became heir of his unit. If there was no heir in one of the units, the heir of the other inherited both. This form of inheritance was seen among the Xhosa people of south-eastern South Africa. In Lesotho and southern Ethiopia, most people still follow the custom of male primogeniture. However, in Zambia, Namibia and Cameroon, the prevalent customary law of patrilineal primogeniture is beginning to be challenged in court. In the eastern Democratic Republic of Congo, the predominant custom of male primogeniture is also beginning to be considered unfair by some women and younger sons. The custom of patrilineal primogeniture predominant in South Sudan, Uganda, Tanzania, Burundi, Equatorial Guinea, Zimbabwe and Gambia has not caused much opposition. In Ghana, the diverse inheritance customs across ethnic groups, such as male primogeniture among the Ewe and the Krobos, or matrilineal inheritance among the Akan, contribute to the occurrence of children living in the streets. In Sierra Leone, the inheritance customs prevalent in the country, whereby either the eldest son or the eldest brother inherits the property, create insecurities for widows. In South Korea, favouring the eldest son has been predominant almost up to recent times, despite laws of equal inheritance for all children.
In 2005, in more than half (52.6 per cent) of inheritance cases the eldest son inherited most or all of his parents' property; in more than 30 per cent of cases the eldest son inherited all of his parents' property. In the past North Korea had the same pattern of inheritance as the South; however, no details about current inheritance practices have been available since the country's establishment in 1948. Social transformations can also modify inheritance customs to a great extent. For example, the Samburu of north-central Kenya are pastoralists who have traditionally practiced an attenuated form of patrilineal primogeniture, with the eldest son receiving the largest share of the family herd and each succeeding son receiving a considerably smaller share than any of his seniors. Now that many of them have become agriculturalists, some argue that land inheritance should follow patrilineal primogeniture, while others argue for equal division of the land. The Bhil people of central India, who were hunter-gatherers in the past, adopted a system of attenuated patrilineal primogeniture identical to that of the pastoral Samburu when they became agriculturalists. The same custom also prevails among some other peoples, like the Elgeyo and Maasai in Kenya, or the Nupe of Nigeria and Niger. Most of the Amhara in Ethiopia divide their property between all sons; however, male primogeniture is practised in some regions. Favoring the eldest son is also common among the Dinka in South Sudan. Among the Shona of Zimbabwe and Mozambique, the oldest son is the first to inherit and gets the best piece of the land. The oldest accounts of the Shona mention patrilineal primogeniture as their inheritance custom, with the oldest son of any of the deceased's wives becoming the main heir. The widow was inherited by her husband's brother but could choose not to be.
Systems of social stratification
Detailed anthropological and sociological studies have been made about customs of patrilineal inheritance, where only male children can inherit. Some cultures also employ matrilineal succession, where property can only pass along the female line, most commonly going to the sister's sons of the decedent; but also, in some societies, from the mother to her daughters. Some ancient societies and most modern states employ egalitarian inheritance, without discrimination based on gender and/or birth order.
The evolution of inheritance practices in Europe
The right of patrilineal primogeniture, though widespread during medieval and modern times in Europe, doesn't seem to have prevailed so extensively in ancient times. In Athens, according to Demosthenes and the Laws of Solon, the eldest son inherited the house and with it the cult to family ancestors. Aristotle spoke about patrilineal primogeniture during his time in Thebes and Corinth. He also spoke about the revolts that put an end to it in Massalia, Istros, Heraclea and Cnidus. While Aristotle was opposed to this right, Plato wanted it to become more widespread. However, the nature of inheritance practices in Ancient Sparta is hotly debated among scholars. Ancient Greeks also considered the eldest son the avenger of wrongs done to parents—"The Erinyes are always at the command of the first-born". Roman law didn't recognise primogeniture, but in practice Romans favored the eldest son. In Ancient Persia, succession to the family headship was determined by patrilineal primogeniture.
Among Celtic and Germanic peoples, the predominant custom during ancient times seems to have been to divide the land in equal parts for each of the sons. However, the house could be left to only one of them. Evidence of actual practices and law codes such as the Sachsenspiegel indicate that Germans left the house to the youngest son. This was possibly connected to the cult to family ancestors, which was also inherited by the youngest son. Celts from Ireland and northern France left the house to the eldest son. Both Germans and Irish divided the land into equal shares until the early Modern Age, when impartible inheritance gradually took hold among both peoples. However, according to Tacitus the German tribe of the Tencteri employed patrilineal primogeniture. There is also evidence that in Schleswig-Holstein, leaving the estate to the eldest son and giving only monetary compensation to his siblings was the prevailing practice since around the year 100. Patrilineal primogeniture also prevailed among the Vikings. In the Scottish Lowlands, certain types of property descended exclusively to the eldest son even before the Norman conquest of 1066. Patrilineal primogeniture with regard to all types of immovable property became the legal rule in all of Scotland during the reign of William I (1165–1214). Until 1868, all immovable property, also called in Scottish law "heritable property" (buildings, lands, etc.), was inherited exclusively by the eldest son and couldn't be included in a will. After 1868, it could be included in a will or testament, but if a person died intestate, it was still inherited exclusively by the eldest son. In 1964, this rule of male primogeniture in cases of intestacy was finally abolished. According to Bede, the custom in Northumbria reserved a substantial birthright for the eldest son even before the Norman conquest, and other local customs of inheritance also gave certain additional benefits to the eldest son. After the Norman conquest, male primogeniture became widespread throughout England, becoming the common law with the signing of Magna Carta in 1215, only slightly later than in Scotland. After 1540, a testator could dispose of his immovable property as he saw fit with the use of a testament, but until 1925 it was still inherited solely by the eldest son if he died intestate. However, although the gentry and the nobility in England practiced a relatively strict form of male primogeniture, there was no clearly prevalent inheritance pattern among peasants, giving rise to a sort of "proto-capitalist" rural economy and the "absolute nuclear" family. During late medieval times male ultimogeniture ("Borough-English") was the predominant custom in England, as it was the customary rule of inheritance among unfree peasants, and this social class comprised most of the population according to the Domesday Book. In Scotland, by contrast, a strict form of male primogeniture prevailed (and still prevails) even among peasants. The Scottish clan of the feudal era, which survived in the Highlands until 1747, was, according to Fustel de Coulanges, one of the only known examples of a conical clan in Europe, along with the Roman gens. As Gartmore says in a paper written in 1747, "The property of these Highlands belongs to a great many different persons, who are more or less considerable in proportion to the extent of their estates, and to the command of men that live upon them, or follow them on account of their clanship, out of the estates of others.
These lands are set by the landlord during pleasure, or a short tack, to people whom they call good-men, and who are of a superior station to the commonality. These are generally the sons, brothers, cousins, or nearest relations of the landlord. The younger sons of families are not bred to any business or employments, but are sent to the French or Spanish armies, or marry as soon as they are of age. Those are left to their own good fortune and conduct abroad, and these are preferred to some advantageous farm at home. This, by the means of a small portion, and the liberality of their relations, they are able to stock, and which they, their children, and grandchildren, possess at an easy rent, till a nearer descendant be again preferred to it. As the propinquity removes, they become less considered, till at last they degenerate to be of the common people; unless some accidental acquisition of wealth supports them above their station. As this hath been an ancient custom, most of the farmers and cottars are of the name and clan of the proprietor; and, if they are not really so, the proprietor either obliges them to assume it, or they are glaid to do so, to procure his protection and favour." Prior to the advent of feudalism during late medieval times and the creation of the system explained above, no trace of male primogeniture or a similar custom existed in Scotland or elsewhere in the Celtic world. The successor to the office of chief was selected among the wider kin of the previous chief (tanistry), and the land, among common families, was divided between all sons. Among many ancient Germanic tribes, on the other hand, male primogeniture determined succession to political office, the eldest son of a chief customarily succeeding his father. The common rule of land inheritance was partible inheritance, as in the Celtic world. The British custom of male primogeniture also became prevalent in some British colonies, most strongly in Australia. The contrary development occurred in South Africa, where the Afrikaner colonizers, who practiced partible inheritance, were always opposed to the custom of male primogeniture prevalent among indigenous black peoples. In New Zealand, European colonizers chose any son to succeed to the family farm, without regard to his fraternal birth order, while patrilineal primogeniture prevailed among the indigenous Maori people. In parts of northern France, giving a slightly larger share to the eldest son was common among peasants even before the 10th century; after that century, patrilineal primogeniture developed among the nobility (impartible inheritance never prevailed among peasants in most of northern France). Flanders was probably the first country where patrilineal primogeniture became predominant among aristocrats. By the time of the French Revolution it had become almost universal in this social class in western, central and northern Europe, but inheritance customs among peasants varied widely across regions. Strabo also speaks about customs of male primogeniture among Iberian peoples (most of the Iberian peninsula was populated by then by Celtic or half-Celtic peoples, not Iberians proper). He mentions that among the Cantabrii, however, the eldest child regardless of sex inherited the family property.
By the term "Cantabrii" he was most probably referring not to the actual Cantabrians but to the Basques (who were not an Iberian people); among the Basques of France, this usage survived until the French Revolution, long after it had been replaced by male primogeniture or free selection of an heir among the Basques of Spain. In Catalonia, in northeastern Spain, the custom of male primogeniture survived in an exceptionally vigorous form among peasants until very recent times (in northeastern Catalonia, for example, peasants rigorously respected the right of male primogeniture until very recent times. In the province of Lleida, too, even as late as the mid-twentieth century, only 7.11 percent of the sons who became single-heirs were not the first son. In central and southern Catalonia, male primogeniture was also predominant). However, in other past Iberian regions which were subject to greater Muslim influence, such as Valencia, this custom only survived in some areas. Welsh laws of inheritance The ancient Welsh laws of inheritance inform us about the evolution of inheritance practices in Great Britain. The Venedotian Code establishes that land must be partitioned between all sons and that the youngest has a preferential claim to the buildings: "If there be buildings, the youngest brother but one is to divide the tyddyns,* for in that case he is the meter; and the youngest to have his choice of the tyddyns, and after that he is to divide all the patrimony. And by seniority they are to choose unto the youngest; and that division is to continue during the lives of the brothers." "If there be no buildings on the land, the youngest son is to divide all the patrimony, and the eldest is to choose; and each, in seniority, choose unto the youngest." "Land of a hamlet is not to be shared as tyddyns, but as gardens; and if there be buildings thereon, the youngest son is not more entitled to them than the eldest, but they are to be shared as chambers." "When brothers share their patrimony between them, the younger is to have the principal tenement, and all the buildings, of his father, and eight einvs of land; his boiler, his hatchet, and his coulter, because a father cannot give these three to any one but the youngest son, and though they should be pledged they never become forfeited. Then let every brother take an homestead with eight erws of land; and the youngest son is to divide, and they are to choose in succession from the eldest to the youngest." This was later replaced by a preference for the eldest son, and the Dimetian Code provides: Canon law-dictated patrilineal primogeniture: During the Modern Age, many Welsh peasants in upland areas lived in stem families where the eldest son took over the farm when his father became old. Perhaps most intriguingly, in the inner, lowland areas of Wales, where English culture was stronger and absolute nuclear families on the English model prevailed, male ultimogeniture predominated. The fideicommissum Inheritance can be organized in a way that its use is restricted by the desires of someone (usually of the decedent). An inheritance may have been organized as a fideicommissum, which usually cannot be sold or diminished, only its profits are disposable. A fideicommissum's succession can also be ordered in a way that determines it long (or eternally) also with regard to persons born long after the original descendant. 
Royal succession has typically been more or less a fideicommissum, the realm not (easily) to be sold and the rules of succession not to be (easily) altered by a holder (a monarch). The fideicommissum, which in fact had little resemblance to the Roman institution of the same name, was almost the standard method of property transfer among the European nobility; Austria, Germany, Switzerland, Bohemia, Sweden and Italy were some of the countries where it became very popular among wealthy landowners, beginning in most cases around the early Modern Age. It was almost always organized around principles of male primogeniture. The Spanish mayorazgo and the Portuguese morgado also resembled the Continental fideicommissum more than the noble customs of Great Britain and most French regions; noble customs of primogeniture in these countries were more ancient and thus took different legal forms. Inheritance of noble titles also distinguished Great Britain from Continental Europe, since in most European countries most noble titles (though not estates) were inherited by all sons, sometimes even all children.
0.776074
0.982351
0.762377
Cultural lag
The gap between material culture and non-material culture is known as cultural lag. The term cultural lag refers to the notion that culture takes time to catch up with technological innovations, and to the resulting social problems that are caused by this lag. In other words, cultural lag occurs whenever there is an unequal rate of change between different parts of culture, causing a gap between material and non-material culture. Cultural lag does not apply to this idea only, but also relates to theory and explanation: it helps by identifying and explaining social problems and by predicting future problems in society. The term was first coined in William F. Ogburn's 1922 work Social Change with Respect to Culture and Original Nature. As explained by James W. Woodward, when the material conditions change, changes are occasioned in the adaptive culture, but these changes in the adaptive culture do not synchronize exactly with the change in the material culture; this delay is the cultural lag. If people fail to adjust to rapid environmental and technological changes, a lag or gap will open between the cultures. This resonates with ideas of technological determinism, which holds that technology determines the development of cultural values and social structure; that is, it can presuppose that technology has independent effects on society at large. However, cultural lag does not necessarily assign causality to technology; rather, it focuses examination on the period of adjustment to new technologies. According to sociologist William F. Ogburn, cultural lag is a common societal phenomenon due to the tendency of material culture to evolve and change rapidly and voluminously while non-material culture tends to resist change and remain fixed for a far longer period of time. This is because ideals and values are much harder to change than physical things are. Due to the opposing nature of these two aspects of culture, adaptation of new technology becomes rather difficult. This can cause a disconnect between people and their society or culture. This distinction between material and non-material culture is also a contribution of Ogburn's 1922 work on social change. Ogburn's classic example of cultural lag was the period of adaptation when automobiles became faster and more efficient. It took some time for society to start building infrastructure that would cater mainly to the new, more efficient vehicles. This is because people are not comfortable with change and it takes them a little time to adapt; hence the term cultural lag.
Social Change With Respect to Culture and Original Nature (1922)
Social Change with Respect to Culture and Original Nature is a 1922 work by Ogburn. This work was crucial in drawing attention to issues with social changes and responses. In this work he first coined the term 'cultural lag' to describe a lag between material and non-material cultures. Ogburn states that there is a gap between traditional cultural values and the technical realities in the world. This work was innovative at the time of its release and brought light to the issues of 'cultural lag' and the possible solutions that could fix them. This was not the first time these issues had been examined, but it was the first time that real solutions were presented. Ogburn's theory was not widely accepted at first because people interpreted the work in different ways.
In the book he also details the four factors of technical development, which are: invention, accumulation, diffusion, and adjustment. In the work he suggests that the primary engine of change and progress is technology, but that it is tempered by social responses. The book had a mixed response because many readers interpreted his findings in different ways.
Works on Cultural Lag
Social Change With Respect to Culture and Original Nature (1922)
By: William F. Ogburn
In Social Change with Respect to Culture and Original Nature, renowned sociologist William F. Ogburn coins the term 'cultural lag'. Ogburn states his thesis of cultural lag in this work. He says that the source of most modern social change is material culture. His theory of cultural lag suggests that a period of maladjustment occurs when the non-material culture is struggling to adapt to new material conditions. The rapid changes in material culture force other parts of culture to change, but the rate of change in these other parts of culture is much slower. He states that people live in a state of 'maladjustment' because of this. Ogburn claims that he played a considerable role in solving the issue of social evolution. He goes on to say that the four solving factors of social evolution are: invention, exponential accumulation, diffusion, and adjustment. This work was unique and innovative at the time of its publication.
On Culture and Change (1964)
By: William F. Ogburn
On Culture and Change is a work by William F. Ogburn which is a collection of 25 works from the years 1912–1961. It is an examination of social change and culture from the perspective of a sociologist. The 25 works discussed are separated into four topics: social evolution, social trends, short-run changes, and the subjective in the social sciences. This collection of works examines culture and social change in the world. The findings and information in On Culture and Change continue to be influential and useful to this day.
Future Shock (1970)
By: Alvin Toffler
In Future Shock, Alvin Toffler outlines the shattering stress and disorientation that people feel when they are subjected to too much change in too short a time. Toffler says that society is undergoing a transformation from an industrial society to a "super-industrial" society. He states that this accelerating rate of change is causing people to feel disconnected from the culture. Toffler argues that balance is needed between the accelerated rate of change in society and the limited pace of human response. Toffler says that it is not impossible to slow or even control rapid change, yet the future can arrive before society is ready for it. Toffler says that the only way to keep equilibrium would be to create new social and personal regulators. Strategies need to be put in place so that rapid cultural change can be shaped and controlled.
Cultural Lag: Conception & Theory (1997)
By: Richard L. Brinkman, June E. Brinkman
In Cultural Lag: Conception & Theory, Richard and June Brinkman examine what the theory and concept of cultural lag actually is. They go into detail about the points supporting and the points disputing the concept of cultural lag. They evaluate Ogburn's claims about cultural lag and make them more understandable. The work evaluates the existence of cultural lag and its ability to possibly predict and describe cultural change in society.
The work also goes into the relevance of the concept of cultural lag to socioeconomic policies in the world.
Material and non-material culture
Material and non-material culture are both a big part of the theory of cultural lag. The theory states that material culture evolves and changes much quicker than non-material culture. Material culture consists of physical things, such as technology and infrastructure, while non-material culture consists of non-physical things, such as religion, ideals, and rules. Non-material culture lags behind material culture because the pace of human response is much slower than the pace of material change. New inventions and physical things that make people's lives easier are developed every single day; things such as religions and ideals are not. This is why there is cultural lag: if an invention is created that goes against people's ideals, it will take some time for them to accept and use it.
Material culture
Material culture is a term used by sociologists that refers to all physical objects that humans create that give meaning to or define a culture. These are physical things that can be touched, felt, tasted, or observed with a sense. The term can include things like houses, churches, machines, furniture, or anything else for which a person may have some sentiment. The term can also include some things that cannot be seen but can be used. Things like the internet and television are also covered under the material culture definition. Material culture changes rapidly and varies depending on where in the world somebody is. The environment may present different challenges in different parts of the world, which is why material culture is so different everywhere. For example, houses in the heart of Tokyo are going to be smaller than the houses in Austin, Texas.
Non-material culture
Non-material culture is a term used by sociologists that refers to non-physical things such as ideas, values, beliefs, and rules that shape a culture. There are different belief systems everywhere in the world, different religions, myths, and legends that people may believe in. These non-physical things can be information passed down from past generations or new ideas thought up by somebody in today's world. Non-material culture tends to lag behind material culture because it is easier to create a physical object that people will use than it is to create a system of beliefs or ideals that people will use and follow. Non-material culture also tends to be very different depending on where in the world someone is. This is because people from different backgrounds and areas of the world were raised on different ideals and beliefs that help shape society and culture.
Problems with cultural lag
Cultural lag creates problems for a society in a multitude of ways. The issue of cultural lag tends to permeate any discussion in which the implementation of some new technology is a topic. For example, the advent of stem cell research has given rise to many new, potentially beneficial medical technologies; however, these new technologies have also raised serious ethical questions about the use of stem cells in medicine. In this example, the cultural lag is people's reluctance to adopt new, possibly beneficial medical practices because of ethical concerns. This shows that there really is a disconnect between material culture (stem cell research) and non-material culture (ethical concerns).
Cultural lag is seen as an issue because failure to develop broad social consensus on appropriate applications of modern technology may lead to breakdowns in social solidarity and the rise of social conflict. Another issue that cultural lag causes is the rise of social conflict. Sometimes, people realize that they are disconnected from what is going on in society and they try to do everything they can to get back into the loop. This may result in a race to eliminate the cultural lag. For example, in the 1980s the arms race was in full effect. This is partly because one country discovered how to use nuclear power, widely thought to be unsafe, efficiently and safely. Once the United States was able to successfully harness nuclear energy in a weapon, many other countries realized that perhaps nuclear energy wasn't so bad and started to build weapons of mass destruction of their own. Issues can also arise when an aspect of culture changes so rapidly that society is unable to prepare or adjust to it. This is seen in the example of cars overtaking other modes of transportation in the past. Since the production and ownership of cars increased so rapidly, society was unable to keep up. Broader roads, traffic rules, and separate lanes for horses did not come until some time after automobiles became a part of mainstream culture. This caused dangerous situations for pedestrians and the people driving these new automobiles. Sometimes society is not ready for the future, and this can create dangerous situations for certain people or groups of people.
See also
Behavioural change theories
Disruptive innovation
I-Change Model
Not invented here
Pace of innovation
Progress trap
Transtheoretical model
Value network
Zeitgeist
0.771329
0.988391
0.762375
History of democracy
A democracy is a political system, or a system of decision-making within an institution, organization, or state, in which members have a share of power. Modern democracies are characterized by two capabilities of their citizens that differentiate them fundamentally from earlier forms of government: to intervene in society and to have their sovereign (e.g., their representatives) held accountable to the international laws of other governments of their kind. Democratic government is commonly juxtaposed with oligarchic and monarchic systems, which are ruled by a minority and a sole monarch respectively. Democracy is generally associated with the efforts of the ancient Greeks, whom 18th-century intellectuals considered the founders of Western civilization. These individuals attempted to leverage these early democratic experiments into a new template for post-monarchical political organization. The extent to which these 18th-century democratic revivalists succeeded in turning the democratic ideals of the ancient Greeks into the dominant political institution of the next 300 years is hardly debatable, even if the moral justifications they often employed might be. Nevertheless, the critical historical juncture catalyzed by the resurrection of democratic ideals and institutions fundamentally transformed the ensuing centuries and has dominated the international landscape since the dismantling of the final vestiges of empire following the end of the Second World War. Modern representative democracies attempt to bridge the gap between Rousseau's depiction of the state of nature and Hobbes's depiction of society as inevitably authoritarian through 'social contracts' that enshrine the rights of the citizens, curtail the power of the state, and grant agency through the right to vote.
Antiquity
Prehistoric origins
Anthropologists have identified forms of proto-democracy that date back to small bands of hunter-gatherers that predate the establishment of agrarian, sedentary societies and still exist virtually unchanged in isolated indigenous groups today. In these groups of generally 50–100 individuals, often tied closely by familial bonds, decisions are reached by consensus or majority and many times without the designation of any specific chief. These types of democracy are commonly identified as tribalism, or primitive democracy. In this sense, a primitive democracy usually takes shape in small communities or villages when there are face-to-face discussions in a village council or with a leader who has the backing of village elders or other cooperative forms of government. This becomes more complex on a larger scale, such as when the village and city are examined more broadly as political communities. All other forms of rule – including monarchy, tyranny, aristocracy, and oligarchy – have flourished in more urban centers, often those with concentrated populations. David Graeber and David Wengrow, in The Dawn of Everything, argue in contrast that cities and early settlements were more varied and unpredictable in terms of how their political systems alternated and evolved from more to less democratic. The concepts (and name) of democracy and constitution as a form of government originated in ancient Athens circa 508 BCE. In ancient Greece, where there were many city-states with different forms of government, democracy ("rule by the demos", i.e. the citizen body) was contrasted with governance by elites (aristocracy, literally "rule by the best"), by one person (monarchy), by tyrants (tyranny), etc.
Potential proto-democratic societies
Although fifth-century BCE Athens is widely considered to have been the first state to develop a sophisticated system of rule that we today call democracy, in recent decades scholars have explored the possibility that advancements toward democratic government occurred independently in the Near East, the Indian subcontinent, and elsewhere before this.
Mesopotamia
Studying pre-Babylonian Mesopotamia, Thorkild Jacobsen used Sumerian epic, myth, and historical records to identify what he has called primitive democracy. By this, Jacobsen means a government in which ultimate power rests with the mass of free (non-slave) male citizens, although "the various functions of government are as yet little specialised [and] the power structure is loose". In early Sumer, kings like Gilgamesh did not hold the autocratic power that later Mesopotamian rulers wielded. Rather, major city-states functioned with councils of elders and "young men" (likely free men bearing arms) that possessed the final political authority, and had to be consulted on all major issues such as war. The work has gained little outright acceptance. Scholars criticize the use of the word "democracy" in this context since the same evidence can also be interpreted to demonstrate a power struggle between primitive monarchy and noble classes, a struggle in which the common people function more like pawns than any kind of sovereign authority. Jacobsen conceded that the vagueness of the evidence prohibits distinguishing Mesopotamian primitive democracy from a primitive oligarchy.
Phoenicia
The practice of "governing by assembly" was at least part of how ancient Phoenicians made important decisions. One source is the story of Wen-Amon, an Egyptian trader who travelled north to the Phoenician city of Byblos around 1100 BCE to trade for Phoenician lumber. After he had loaded his lumber, a group of pirates surrounded Wen-Amon and his cargo ship. The Phoenician prince of Byblos was called in to fix the problem, whereupon he summoned his mw-'dwt, an old Semitic word meaning assembly, to reach a decision. This shows that Byblos was ruled in part by a popular assembly (from what subpopulation it was drawn and exactly what power it held is not known).
Indian subcontinent
Another claim for early democratic institutions comes from the independent "republics" of India, the sanghas and ganas, which existed as early as the 6th century BCE and persisted in some areas until the 4th century. In addition, Diodorus—a Greek historian who wrote two centuries after the time of Alexander the Great's invasion of India—mentions that independent and democratic states existed in India. Key characteristics of the gana seem to include a monarch, usually known by the name raja, and a deliberative assembly. The assembly met regularly. It discussed all major state decisions. At least in some states, attendance was open to all free men. This body also had full financial, administrative, and judicial authority. Other officers, who rarely receive any mention, obeyed the decisions of the assembly. Elected by the gana, the monarch apparently always belonged to a family of the noble class of Kshatriya Varna. The monarch coordinated his activities with the assembly; in some states, he did so with a council of other nobles. The Licchavis had a primary governing body of 7,077 rajas, presumably the heads of the most important families. In contrast, the Shakyas, during the period around Gautama Buddha, had the assembly open to all men, rich and poor.
Early "republics" or , such as Mallakas, centered in the city of Kusinagara, and the Vajji (or Vṛji) League, centered in the city of Vaishali, existed as early as the 6th century BCE and persisted in some areas until the 4th century CE. The most famous clan amongst the ruling confederate tribes of the Vajji Mahajanapada were the Licchavis. The Magadha kingdom included republican communities such as the community of Rajakumara. Villages had their own assemblies under their local chiefs called Gramakas. Their administrations were divided into executive, judicial, and military functions. Scholars differ over how best to describe these governments, and the vague, sporadic quality of the evidence allows for wide disagreements. Some emphasize the central role of the assemblies and thus tout them as democracies; other scholars focus on the upper-class domination of the leadership and possible control of the assembly and see an oligarchy or an aristocracy. Despite the assembly's obvious power, it has not yet been established whether the composition and participation were truly popular. The first main obstacle is the lack of evidence describing the popular power of the assembly. This is reflected in the Arthashastra, an ancient handbook for monarchs on how to rule efficiently. It contains a chapter on how to deal with the sangas, which includes injunctions on manipulating the noble leaders, yet it does not mention how to influence the mass of the citizens—a surprising omission if democratic bodies, not the aristocratic families, actively controlled the republican governments. Another issue is the persistence of the four-tiered Varna class system. The duties and privileges on the members of each particular caste—rigid enough to prohibit someone sharing a meal with those of another order—might have affected the roles members were expected to play in the state, regardless of the formality of the institutions. A central tenet of democracy is the notion of shared decision-making power. The absence of any concrete notion of citizen equality across these caste system boundaries leads many scholars to claim that the true nature of s and s is not comparable to truly democratic institutions. Sparta Ancient Greece, in its early period, was a loose collection of independent city states called poleis. Many of these poleis were oligarchies. The most prominent Greek oligarchy, and the state with which democratic Athens is most often and most fruitfully compared, was Sparta. Yet Sparta, in its rejection of private wealth as a primary social differentiator, was a peculiar kind of oligarchy and some scholars note its resemblance to democracy. In Spartan government, the political power was divided between four bodies: two Spartan kings (diarchy), (Council of Gerontes (elders), including the two kings), the ephors (representatives of the citizens who oversaw the kings), and the (assembly of Spartans). The two kings served as the head of the government. They ruled simultaneously, but they came from two separate lines. The dual kingship diluted the effective power of the executive office. The kings shared their judicial functions with other members of the . The members of the had to be over the age of 60 and were elected for life. In theory, any Spartan over that age could stand for election. However, in practice, they were selected from wealthy, aristocratic families. The gerousia possessed the crucial power of legislative initiative. 
The apella, the most democratic element, was the assembly where Spartans above the age of 30 elected the members of the gerousia and the ephors, and accepted or rejected the gerousia's proposals. Finally, the five ephors were Spartans chosen in the apella to oversee the actions of the kings and other public officials and, if necessary, depose them. They served for one year and could not be re-elected for a second term. Over the years, the ephors held great influence over the formation of foreign policy and acted as the main executive body of the state. Additionally, they had full responsibility for the Spartan educational system, which was essential for maintaining the high standards of the Spartan army. As Aristotle noted, the ephors were the most important key institution of the state, but because they were often appointed from the whole social body, very poor men sometimes held office, with the ensuing possibility that they could easily be bribed.
The creator of the Spartan system of rule was the legendary lawgiver Lycurgus. He is associated with the drastic reforms that were instituted in Sparta after the revolt of the helots in the second half of the 7th century BCE. In order to prevent another helot revolt, Lycurgus devised the highly militarized communal system that made Sparta unique among the city-states of Greece. All his reforms were directed towards the three Spartan virtues: equality (among citizens), military fitness, and austerity. It is also probable that Lycurgus delineated the powers of the two traditional organs of the Spartan government, the gerousia and the apella. The reforms of Lycurgus were written as a list of rules and laws called the Great Rhetra, making it the world's first written constitution. In the following centuries, Sparta became a military superpower, and its system of rule was admired throughout the Greek world for its political stability. In particular, the concept of equality played an important role in Spartan society. The Spartans referred to themselves as homoioi (men of equal status). This was also reflected in the Spartan public educational system, the agoge, in which all citizens, irrespective of wealth or status, had the same education. This was admired almost universally by contemporaries, from historians such as Herodotus and Xenophon to philosophers such as Plato and Aristotle. In addition, Spartan women, unlike women elsewhere, enjoyed "every kind of luxury and intemperance", including rights such as the right to inheritance, property ownership, and public education.
Overall, the Spartans were relatively free to criticize their kings, and they were able to depose and exile them. However, despite these 'democratic' elements in the Spartan constitution, there are two cardinal criticisms that classify Sparta as an oligarchy. First, individual freedom was restricted, since, as Plutarch writes, "no man was allowed to live as he wished", but, as in a "military camp", all were engaged in the public service of their polis. And second, the gerousia effectively maintained the biggest share of power among the various governmental bodies. The political stability of Sparta also meant that no significant changes in the constitution were made. The oligarchic elements of Sparta became even stronger, especially after the influx of gold and silver from the victories in the Persian Wars. In addition, Athens, after the Persian Wars, was becoming the hegemonic power in the Greek world, and disagreements between Sparta and Athens over supremacy emerged.
These led to a series of armed conflicts known as the Peloponnesian War, with Sparta prevailing in the end. However, the war exhausted both city-states, and Sparta was in turn humbled by Thebes at the Battle of Leuctra in 371 BCE. It was all brought to an end a few years later, when Philip II of Macedon crushed what remained of the power of the factional city-states to his south.
Athens
Athens is often regarded by western scholars as the birthplace of democracy and remains an important reference point for democracy, as evidenced by the etymological origins of the word "democracy" in English and many other languages being traced back to the Greek words demos ('(common) people') and kratos ('force/might'). Literature about the Athenian democracy spans centuries, with the earliest works being The Republic of Plato and the Politics of Aristotle, continuing in the 16th century with the Discourses of Niccolò Machiavelli.
Athens emerged in the 7th century BCE, like many other poleis, with a dominating, powerful aristocracy. However, this domination led to exploitation, creating significant economic, political, and social problems. These problems were exacerbated early in the 6th century BCE; and, as "the many were enslaved to few, the people rose against the notables". At the same time, a number of popular revolutions disrupted traditional aristocracies. This included Sparta in the second half of the 7th century BCE. The constitutional reforms implemented by Lycurgus in Sparta introduced a hoplite state that showed, in turn, how inherited governments can be changed and lead to military victory. After a period of unrest between the rich and the poor, Athenians of all classes turned to Solon to act as a mediator between rival factions, and he reached a generally satisfactory solution to their problems.
Solon and the foundations of democracy
Solon, an Athenian (Greek) of noble descent but moderate means, was a lyric poet and later a lawmaker; Plutarch ranked him as one of the Seven Sages of the ancient world. Solon attempted to satisfy all sides by alleviating the suffering of the poor majority without removing all the privileges of the rich minority. Solon divided the Athenians into four property classes, with different rights and duties for each. As the Rhetra did in Lycurgian Sparta, Solon formalized the composition and functions of the governmental bodies. All citizens gained the right to attend the Ecclesia (Assembly) and to vote. The Ecclesia became, in principle, the sovereign body, entitled to pass laws and decrees, elect officials, and hear appeals from the most important decisions of the courts. All but those in the poorest group might serve, a year at a time, on a new Boule of 400, which was to prepare the agenda for the Ecclesia. The higher governmental posts, those of the archons (magistrates), were reserved for citizens of the top two income groups. The retired archons became members of the Areopagus (Council of the Hill of Ares), which, like the Gerousia in Sparta, was able to check improper actions of the newly powerful Ecclesia. Solon created a mixed timocratic and democratic system of institutions. Overall, Solon devised his reforms of 594 BCE to avert the political, economic, and moral decline in archaic Athens, and he gave Athens its first comprehensive code of law. The constitutional reforms eliminated enslavement of Athenians by Athenians, established rules for legal redress against over-reaching aristocratic archons, and assigned political privileges on the basis of productive wealth rather than noble birth.
Some of Solon's reforms failed in the short term, yet he is often credited with having laid the foundations for Athenian democracy.
Democracy under Cleisthenes and Pericles
Even though the Solonian reorganization of the constitution improved the economic position of the Athenian lower classes, it did not eliminate the bitter aristocratic contentions for control of the archonship, the chief executive post. Peisistratos became tyrant of Athens three times from 561 BCE and remained in power until his death in 527 BCE. His sons Hippias and Hipparchus succeeded him. After the fall of the tyranny (510 BCE), and before the year 508–507 BCE was over, Cleisthenes proposed a complete reform of the system of government, which was later approved by the popular assembly. Cleisthenes reorganized the population of citizens into ten tribes, with the aim of changing the basis of political organization from family loyalties to political ones and of improving the army's organization. He also introduced the principle of equality of rights for all male citizens, isonomia, by expanding access to power to more citizens. During this period, the Athenians first used the word "democracy" (demokratia, "rule by the people") to define their new system of government. In the next generation, Athens entered its Golden Age, becoming a great center of literature and art. Greek victories in the Persian Wars (499–449 BCE) encouraged the poorest Athenians (who participated in the military campaigns) to demand a greater say in the running of their city. In the late 460s BCE, Ephialtes and Pericles presided over a radicalization of power that shifted the balance decisively to the poorest sections of society, by passing laws which severely limited the powers of the Council of the Areopagus and allowed thetes (Athenians without wealth) to occupy public office. Pericles became distinguished as the Athenians' greatest democratic leader, even though he has been accused of running a political machine. In his funeral oration, as recorded by Thucydides, Pericles described the Athenian system of rule.
The Athenian democracy of Cleisthenes and Pericles was based on freedom of citizens (through the reforms of Solon) and on equality of citizens, introduced by Cleisthenes and later expanded by Ephialtes and Pericles. To preserve these principles, the Athenians used lot for selecting officials. Casting lots aimed to ensure that all citizens were "equally" qualified for office, and, to avoid corruption, allotment machines were used. Moreover, in most positions chosen by lot, Athenian citizens could not be selected more than once; this rotation in office meant that no one could build up a power base through staying in a particular position. The courts formed another important political institution in Athens; they were composed of a large number of jurors with no judges, and they were selected by lot on a daily basis from an annual pool, also chosen by lot. The courts had unlimited power to control the other bodies of the government and its political leaders. Participation by the citizens selected was mandatory, and a modest financial compensation was given to citizens whose livelihood was affected by being "drafted" to office. The only officials chosen by elections, one from each tribe, were the strategoi (generals), for whom military knowledge was required, and the treasurers, who had to be wealthy, since any funds revealed to have been embezzled were recovered from a treasurer's private fortune.
Debate was open to all present, and decisions in all matters of policy were taken by majority vote in the Ecclesia (compare direct democracy), in which all male citizens could participate (in some cases with a quorum of 6000). The decisions taken in the Ecclesia were executed by the Boule of 500, which had already approved the agenda for the Ecclesia. The Athenian Boule was elected by lot every year, and no citizen could serve more than twice. Overall, the Athenian democracy was not only direct in the sense that decisions were made by the assembled people, but also the most direct in the sense that the people, through the assembly, boule, and courts of law, controlled the entire political process, and a large proportion of citizens were involved constantly in the public business. And even though the rights of the individual (probably) were not secured by the Athenian constitution in the modern sense, the Athenians enjoyed their liberties not in opposition to the government but by living in a city that was not subject to another power and by not being subjects themselves to the rule of another person.
The birth of political philosophy
Within the Athenian democratic environment, many philosophers from all over the Greek world gathered to develop their theories. Socrates (470–399 BCE) was the first to raise the question, further expanded by his pupil Plato (died 348/347 BCE), of the relation and position of an individual within a community. Aristotle (384–322 BCE) continued the work of his teacher, Plato, and laid the foundations of political philosophy. The political philosophy developed in Athens was, in the words of Peter Hall, "in a form so complete that hardly anyone of moment added to it for over a millennium". Aristotle systematically analyzed the different systems of rule that the numerous Greek city-states had and divided them into three categories based on how many ruled: the many (democracy/polity), the few (oligarchy/aristocracy), or a single person (tyranny, or today autocracy/monarchy). Aristotle set out what he saw as the underlying principles of democracy in his work Politics.
Decline, revival, and criticisms
The Athenian democracy, in its two centuries of lifetime, twice voted against its democratic constitution (both times during the crisis at the end of the Peloponnesian War of 431 to 404 BCE), establishing first the Four Hundred (in 411 BCE) and second Sparta's puppet régime of the Thirty Tyrants (in 404 BCE). Both votes took place under manipulation and pressure, but democracy was recovered in less than a year in both cases. Reforms following the restoration of democracy after the overthrow of the Thirty Tyrants removed most law-making authority from the Assembly and placed it in randomly selected law-making juries known as nomothetai. Athens restored its democratic constitution again after King Philip II of Macedon (reigned 359–336 BCE) and later Alexander the Great (reigned 336–323 BCE) unified Greece, but it was politically overshadowed by the Hellenistic empires. Finally, after the Roman conquest of Greece in 146 BCE, Athens was restricted to matters of local administration. However, democracy in Athens declined not only due to external powers but also due to its own citizens, such as Plato and his student Aristotle. Because of their influential works, after the rediscovery of the classics during the Renaissance, Sparta's political stability was praised, while the Periclean democracy was described as a system of rule in which either the less well-born, the mob (as a collective tyrant), or the poorer classes held power.
Only centuries afterwards, after the publication of A History of Greece by George Grote from 1846 onwards, did modern political thinkers start to view the Athenian democracy of Pericles positively. In the late 20th century, scholars re-examined the Athenian system of rule as a model of empowering citizens and as a "post-modern" example for communities and organizations alike.
Rome
Rome's history has helped preserve the concept of democracy over the centuries. The Romans invented the concept of the classics, and many works from Ancient Greece were preserved. Additionally, the Roman model of governance inspired many political thinkers over the centuries, and today's modern (representative) democracies imitate the Roman more than the Greek models.
The Roman Republic
Rome was a city-state in Italy next to powerful neighbors: the Etruscans had built city-states throughout central Italy since the 13th century BCE, and in the south were Greek colonies. Similar to other city-states, Rome was ruled by a king elected by the Assemblies. However, social unrest and the pressure of external threats led to the deposition of the last king in 510 BCE by a group of aristocrats led by Lucius Junius Brutus. A new constitution was crafted, but the conflict between the ruling families (the patricians) and the rest of the population (the plebeians) continued. The plebs were demanding definite, written, and secular laws. The patrician priests, who were the recorders and interpreters of the statutes, kept their records secret and used their monopoly against social change. After a long resistance to the new demands, the Senate in 454 BCE sent a commission of three patricians to Greece to study and report on the legislation of Solon and other lawmakers. When they returned, the Assembly in 451 BCE chose ten men, a decemvirate, to formulate a new code, and gave them supreme governmental power in Rome for two years. This commission, under the supervision of a resolute reactionary, Appius Claudius, transformed the old customary law of Rome into the Twelve Tables and submitted them to the Assembly (which passed them with some changes), and they were displayed in the Forum for all who would and could read. The Twelve Tables recognised certain rights, and by the 4th century BCE the plebs were given the right to stand for the consulship and other major offices of the state. The political structure as outlined in the Roman constitution resembled a mixed constitution, and its constituent parts were comparable to those of the Spartan constitution: two consuls, embodying the monarchic form; the Senate, embodying the aristocratic form; and the people, through the assemblies. The consul was the highest-ranking ordinary magistrate. Consuls had power in both civil and military matters. While in the city of Rome, the consuls were the head of the Roman government, and they would preside over the Senate and the assemblies. While abroad, each consul would command an army. The Senate passed decrees, which were called senatus consulta and were official advice to a magistrate. However, in practice, it was difficult for a magistrate to ignore the Senate's advice. The focus of the Roman Senate was directed towards foreign policy. Though it technically had no official role in the management of military conflict, the Senate ultimately was the force that oversaw such affairs. It also managed Rome's civil administration.
The requirements for becoming a senator included having at least 100,000 denarii worth of land, being born of the patrician (noble aristocratic) class, and having held public office at least once before. New senators had to be approved by the sitting members. The people of Rome, through the assemblies, had the final say regarding the election of magistrates, the enactment of new laws, the carrying out of capital punishment, the declaration of war and peace, and the creation (or dissolution) of alliances. Despite the obvious power the assemblies had, in practice they were the least powerful of the bodies of government. An assembly was legal only if summoned by a magistrate, and it was restricted from any legislative initiative or the ability to debate. Even in the election of candidates for public office, as Livy writes, "levels were designed so that no one appeared to be excluded from an election and yet all of the clout resided with the leading men". Moreover, the unequal weight of votes meant that asking the lowest classes for their votes was a rare practice.
Roman stability, in Polybius' assessment, was owing to the checks each element put on the superiority of any other: a consul at war, for example, required the cooperation of the Senate and the people if he hoped to secure victory and glory, and could not be indifferent to their wishes. This was not to say that the balance was in every way even: Polybius observes that the superiority of the Roman to the Carthaginian constitution (another mixed constitution) at the time of the Hannibalic War was an effect of the latter's greater inclination toward democracy than toward aristocracy. Moreover, recent attempts to posit for Rome personal freedom in the Greek sense of living as one likes have fallen on stony ground, since such freedom (an ideology and way of life in democratic Athens) was anathema in Roman eyes. Rome's core values included order, hierarchy, discipline, and obedience. These values were enforced with laws regulating the private life of an individual. The laws were applied in particular to the upper classes, since the upper classes were the source of Roman moral examples.
Rome became the ruler of a great Mediterranean empire. The new provinces brought wealth to Italy, and fortunes were made through mineral concessions and enormous slave-run estates. Slaves were imported to Italy, and wealthy landowners soon began to buy up and displace the original peasant farmers. By the late 2nd century BCE, this led to renewed conflict between the rich and poor and to demands from the latter for reform of the constitution. The background of social unease and the inability of the traditional republican constitutions to adapt to the needs of the growing empire led to the rise of a series of over-mighty generals, championing the cause of either the rich or the poor, in the last century BCE.
Transition to empire
Over the next few hundred years, various generals would bypass or overthrow the Senate for various reasons, mostly to address perceived injustices, either against themselves or against poorer citizens or soldiers. One of those generals was Julius Caesar, who marched on Rome and took supreme power over the republic. Caesar's career was cut short by his assassination at Rome in 44 BCE by a group of senators including Marcus Junius Brutus. In the power vacuum that followed Caesar's assassination, his friend and chief lieutenant, Marcus Antonius, and Caesar's grandnephew Octavian, who was also Caesar's adopted son, rose to prominence.
Their combined strength gave the triumvirs absolute power. However, in 31 BCE war between the two broke out. The final confrontation occurred on 2 September 31 BCE at the naval Battle of Actium, where the fleet of Octavian, under the command of Agrippa, routed Antony's fleet. Thereafter, there was no one left in the Roman Republic who wanted to, or could, stand against Octavian, and the adopted son of Caesar moved to take absolute control. Octavian left the majority of Republican institutions intact, though he influenced everything using personal authority and ultimately controlled the final decisions, having the military might to back up his rule if necessary. By 27 BCE the transition, though subtle, disguised, and relying on personal power over the power of offices, was complete. In that year, Octavian offered back all his powers to the Senate, and, in a carefully staged way, the Senate refused and titled Octavian Augustus ("the revered one"). He was always careful to avoid the title of rex ("king"), and instead took on the titles of princeps ("first citizen") and imperator, a title given by Roman troops to their victorious commanders, completing the transition from the Roman Republic to the Roman Empire.
Institutions in the medieval era
Early institutions included:
The continuations of the early Germanic thing from the Viking Age:
The Witenagemot (folkmoot) of Early Medieval England, councils of advisors to the kings of the petty kingdoms and then that of a unified England before the Norman Conquest.
The Frankish custom of the Camp of Mars.
In the Iberian Peninsula, in Portuguese, Leonese, Castilian, Aragonese, Catalan and Valencian customs, Cortes (or Corts) were periodically convened to debate the state of the Realms. The Corts of Catalonia were the first parliament of Europe that officially obtained the power to pass legislation.
Tynwald, on the Isle of Man, claims to be one of the oldest continuous parliaments in the world, with roots back to the late 9th or 10th century.
The Althing, the parliament of the Icelandic Commonwealth, founded in 930. It consisted of the 39, later 55, goðar; each was the owner of a goðorð, and each hereditary goði kept a tight hold on his membership, which could in principle be lent or sold. Thus, for example, when Burnt Njal's stepson wanted to enter it, Njal had to persuade the Althing to enlarge itself so a seat would become available. But as each independent farmer in the country could choose what goði represented him, the system could be claimed as an early form of democracy. The Althing has run nearly continuously to the present day. It was preceded by less elaborate "things" (assemblies) all over Northern Europe.
The Sicilian Parliament of the Kingdom of Sicily, from 1097, one of the oldest parliaments in the world and the first legislature in the modern sense.
The Thing of all Swedes, which took place annually at Uppsala at the end of February or in early March. As in Iceland, the lawspeaker presided over the assemblies, but the Swedish king functioned as a judge. A famous incident took place circa 1018, when King Olof Skötkonung wanted to pursue the war against Norway against the will of the people. Þorgnýr the Lawspeaker reminded the king in a long speech that the power resided with the Swedish people and not with the king. When the king heard the din of swords beating the shields in support of Þorgnýr's speech, he gave in. Adam of Bremen wrote that the people used to obey the king only when they thought his suggestions seemed better, although in war his power was absolute.
The Swiss Landsgemeinde.
In Norway:
The election of Gopala in the Pala Empire (8th century).
The tuath system in early medieval Ireland. Landowners and the masters of a profession or craft were members of a local assembly, known as a tuath. Each tuath met in annual assembly which approved all common policies, declared war or peace on other tuatha, and accepted the election of a new "king", normally during the old king's lifetime, as a tanist. The new king had to be descended within four generations from a previous king, so this usually became, in practice, a hereditary kingship, although some kingships alternated between lines of cousins. About 80 to 100 tuatha coexisted at any time throughout Ireland. Each controlled a more or less compact area of land which it could pretty much defend from cattle-raids, and this was divided among its members.
The Ibadites of Oman, a minority sect distinct from both Sunni and Shia Muslims, have traditionally chosen their leaders via community-wide elections of qualified candidates, starting in the 8th century. They were distinguished early on in the region by their belief that the ruler needed the consent of the ruled. The leader exercised both religious and secular rule.
The guilds, of economic, social and religious natures, which in the later Middle Ages elected officers for yearly terms.
The city-states (republics) of medieval Italy, such as Venice and Florence, and similar city-states in Switzerland, Flanders and the Hanseatic League did not have a modern democratic system but a guild-based democratic system. The Italian cities of the middle medieval period had "lobbies war" democracies without institutional guarantee systems (a fully developed balance of powers). During the late medieval and Renaissance periods, Venice became an oligarchy and others became signorie ("lordships"). They were, in any case, in late medieval times not nearly as democratic as the Athenian-influenced city-states of Ancient Greece (discussed above), but they served as focal points for early modern democracy.
The veche and wiec, popular assemblies in Slavic countries. In Poland, the wiec developed in 1182 into the Sejm, the Polish parliament. The veche was the highest legislature and judicial authority in the republics of Novgorod until 1478 and Pskov until 1510.
The system of the Basque Country in which farmholders of a rural area connected to a particular church would meet to reach decisions on issues affecting the community and to elect representatives to the provincial assemblies.
The rise of democratic parliaments in England and Scotland: Magna Carta (1215) limiting the authority of the king; the first representative parliament (1265). The version of Magna Carta signed by King John implicitly supported what became the English writ of habeas corpus, safeguarding individual freedom against unlawful imprisonment with the right to appeal. The emergence of petitioning in the 13th century is some of the earliest evidence of this parliament being used as a forum to address the general grievances of ordinary people.
Indigenous peoples of the Americas
Professor of anthropology Jack Weatherford has argued that the ideas leading to the United States Constitution and democracy derived from various indigenous peoples of the Americas, including the Iroquois. Weatherford speculated that this democracy was founded between the years 1000 and 1450, that it lasted several hundred years, and that the U.S. democratic system was continually changed and improved by the influence of Native Americans throughout North America.
Elizabeth Tooker, a professor of anthropology at Temple University and an authority on the culture and history of the Northern Iroquois, has reviewed Weatherford's claims and concluded they are myth rather than fact. The idea that North American Indians had a democratic culture is several decades old, but not usually expressed within historical literature. The relationship between the Iroquois League and the Constitution is based on a portion of a letter written by Benjamin Franklin and a speech given by the Iroquois chief Canassatego in 1744. Tooker concluded that the documents only indicate that some groups of Iroquois and white settlers realized the advantages of a confederation, and that ultimately there is little evidence to support the idea that eighteenth-century colonists were knowledgeable regarding the Iroquois system of governance. What little evidence there is regarding this system indicates that chiefs of different tribes were permitted representation in the Iroquois League council, and that this ability to represent the tribe was hereditary. The council itself did not practice representative government, and there were no elections; deceased chiefs' successors were selected by the most senior woman within the hereditary lineage in consultation with other women in the clan. Decision making occurred through lengthy discussion, and decisions were unanimous, with topics discussed being introduced by a single tribe. Tooker concludes that "...there is virtually no evidence that the framers borrowed from the Iroquois" and that the myth is largely based on a claim made by the Iroquois linguist and ethnographer J.N.B. Hewitt which was exaggerated and misinterpreted after his death in 1937.
The Aztecs also practiced elections, but the elected officials elected a supreme speaker, not a ruler. However, a contemporaneous civilisation, Tlaxcallan, along with other Mesoamerican city-states, is likely to have practiced collective rule.
Rise of democracy in modern national governments
Early Modern Era milestones
Golden Liberty or the Nobles' Democracy (Rzeczpospolita Szlachecka) arose in the Kingdom of Poland and the Polish–Lithuanian Commonwealth. This foreshadowed a democracy of about ten percent of the population of the Commonwealth, consisting of the nobility, who formed the electorate for the office of the King. They observed the Nihil novi act of 1505, the Pacta conventa and King Henry's Articles (1573). See also: Szlachta history and political privileges, Sejm of the Kingdom of Poland and the Polish–Lithuanian Commonwealth, Organisation and politics of the Polish–Lithuanian Commonwealth.
1588: The Justificatie of Deductie in the Dutch Republic argued that the sovereignty over the Netherlands was not in the hands of the monarch, but in those of the States-General, an assembly consisting of nobles and representatives of cities from all over the Netherlands. Furthermore, it decided that the sovereignty of the states was better guaranteed by the assembly than by a singular autocrat.
1610: The Case of Proclamations in England decided that "the King by his proclamation or other ways cannot change any part of the common law, or statute law, or the customs of the realm" and that "the King hath no prerogative, but that which the law of the land allows him."
1610: Dr. Bonham's Case decided that "in many cases, the common law will control Acts of Parliament".
1619: The Virginia House of Burgesses, the first representative legislative body in the New World, is established.
1620: The Mayflower Compact, an agreement among the Pilgrims and fellow voyagers on forming a government among themselves, based on majority rule, is signed.
1628: During a period of renewed interest in Magna Carta, the Petition of Right was passed by the Parliament of England. It established, among other things, the illegality of taxation without parliamentary consent and of arbitrary imprisonment.
1642–1651: The idea of the political party with factions took form in Britain around the time of the English Civil War. Soldiers from the Parliamentarian New Model Army and a faction of Levellers freely debated rights to political representation during the Putney Debates of 1647. The Levellers published a newspaper (The Moderate) and pioneered political petitions, pamphleteering and party colours. Later, the pre-war Royalist (then Cavalier) and opposing Parliamentarian groupings became the Tory party and the Whigs in Parliament.
1679: The English Act of Habeas Corpus, safeguarding individual freedom against unlawful imprisonment with the right to appeal; one of the documents integral to the constitution of the United Kingdom and the history of the Parliament of the United Kingdom.
1682: William Penn wrote his Frame of Government of Pennsylvania. The document gave the colony a representative legislature and granted liberal freedoms to the colony's citizens.
1689: The Bill of Rights 1689, enacted by Parliament, set out the requirement for regular parliaments, free elections, and rules for freedom of speech in Parliament, and limited the power of the monarch. It ensured (with the Glorious Revolution of 1688) that, unlike much of the rest of Europe, royal absolutism would not prevail.
1689: John Locke published the Two Treatises of Government, attacking monarchical absolutism and promoting social contract theory and the consent of the governed.
Eighteenth and nineteenth century milestones
1707: The first Parliament of Great Britain is established after the merger of the Kingdom of England and the Kingdom of Scotland under the Acts of Union 1707, succeeding the English Parliament. From around 1721 to 1742, Robert Walpole, regarded as the first prime minister of Great Britain, chaired cabinet meetings, appointed all other ministers, and developed the doctrine of cabinet solidarity.
1755: The Corsican Republic, led by Pasquale Paoli, with the Corsican Constitution.
From the late 1770s: new constitutions and bills explicitly describing and limiting the authority of powerholders, many based on the English Bill of Rights (1689). Historian Norman Davies calls the Polish–Lithuanian Commonwealth Constitution of May 3, 1791 "the first constitution of its kind in Europe".
The United States: the Founding Fathers rejected limited 'democracy' run by traditionally defined aristocrats; the creation of a legally defined "Title of Nobility" is forbidden by the Constitution. The Americans, like the British, took their cue from the Roman republic model: only the patrician classes were involved in government.
1776: The Virginia Declaration of Rights is published; the American Declaration of Independence proclaims that "All men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness."
The United States Constitution, ratified in 1788, created a bicameral legislature with members of the House of Representatives elected "by the People of the several states" and members of the Senate elected by the state legislatures.
The Constitution did not originally define who was eligible to vote, leaving that to the constituent states, which mostly enfranchised only adult white males who owned land.
1791: The United States Bill of Rights is ratified.
1790s: The First Party System in the U.S. involves the invention of locally rooted political parties in the United States; networks of party newspapers; new canvassing techniques; use of the caucus to select candidates; fixed party names; party loyalty; party platforms (Jefferson 1799); and, in 1800, a peaceful transition between parties.
1780s: Development of social movements identifying themselves with the term 'democracy': political clashes between 'aristocrats' and 'democrats' in the Benelux countries changed the semi-negative meaning of the word 'democracy' in Europe, which until then had been regarded as synonymous with anarchy, into a much more positive opposite of 'aristocracy'.
1789–1799: The French Revolution. The Declaration of the Rights of Man and of the Citizen, based on the U.S. Declaration of Independence, is adopted on 26 August 1789; it declares that "Men are born and remain free and equal in rights" and proclaims the universal character of human rights. Universal male suffrage is established for the election of the National Convention in September 1792, but revoked by the Directory in 1795. Slavery is abolished in the French colonies by the National Convention on 4 February 1794, with Black people made equal to White people ("All men, without distinction of color, residing in the colonies are French citizens and will enjoy all the rights assured by the Constitution"). Slavery was re-established by Napoleon in 1802.
1791: The Haitian Revolution, a successful slave revolution, established a free republic.
1792: Local elections instituted in the Freetown colony in December 1792, in which Nova Scotian immigrants could elect tythingmen and hundredors.
The United Kingdom:
1807: The Slave Trade Act banned the trade across the British Empire, after which the U.K. established the Blockade of Africa and enacted international treaties to combat foreign slave traders.
1832: The passing of the Great Reform Act, which gave representation to previously underrepresented urban areas in the U.K. and extended the voting franchise to a wider population. It was followed later in the 19th and 20th centuries by several further Reform Acts.
1833: The Slavery Abolition Act was passed, which took effect across the British Empire from 1 August 1834.
1810: 24 September: Opening session of the Cortes of Cádiz, with representatives of all Spanish provinces, including those in America.
1820: First Cortes Gerais in Portugal under a Constitutional Charter.
1835: Serbia's first modern constitution.
1837: 3 February: Local election in South Africa (a British colony) in the city of Beaufort West, the first city to organize the election of a municipal council after the Cape Town Ordinance of 1836 (Order 9 of 1836).
1844: The Greek Constitution of 1844 created a bicameral parliament consisting of an Assembly (Vouli) and a Senate (Gerousia). Power then passed into the hands of a group of Greek politicians, most of whom had been commanders in the Greek War of Independence against the Ottomans.
1848: Universal male suffrage was re-established in France in March of that year, in the wake of the French Revolution of 1848.
1848: Following the French example, the Revolutions of 1848, although in many instances forcefully put down, did result in democratic constitutions in some other European countries, among them the German states, Denmark and the Netherlands.
1850s: Introduction of the secret ballot in Australia; 1872 in the UK; 1892 in the US.
1853: Black Africans were given the vote for the first time in Southern Africa, in the British-administered Cape Province.
1856: US – property ownership requirements were eliminated in all states, giving suffrage to most adult white males. However, tax-paying requirements remained in five states until 1860 and in two states until the 20th century.
1870: US – the 15th Amendment to the Constitution prohibits voting rights discrimination on the basis of race, colour, or previous condition of slavery.
1878–1880: William Ewart Gladstone's UK Midlothian campaign ushered in the modern political campaign.
1893: New Zealand is the first nation to introduce universal suffrage by awarding the vote to women (universal male suffrage had been in place since 1879).
1894: South Australia is the first place to pass legislation allowing women to stand for election to parliament.
1905: The Persian Constitutional Revolution establishes the first parliamentary system in the Middle East.
1911: The UK Parliament Act restricted the unelected upper house from obstructing legislation from the elected lower house.
The secret ballot
The notion of a secret ballot, in which one is entitled to the privacy of one's vote, is taken for granted by most people today because it is simply considered the norm. However, this practice was highly controversial in the 19th century; it was widely argued that no man would want to keep his vote secret unless he was ashamed of it. The two earliest systems used were the Victorian method and the South Australian method. Both were introduced in 1856 to voters in Victoria and South Australia. The Victorian method involved voters crossing out all the candidates of whom they did not approve. The South Australian method, which is more similar to what most democracies use today, had voters put a mark in the preferred candidate's corresponding box. The Victorian voting system was also not completely secret, as each ballot was traceable by a special number.
Waves of democracy in the 20th century
The end of the First World War was a temporary victory for democracy in Europe, as it was preserved in France and temporarily extended to Germany. Already in 1906, full modern democratic rights, universal suffrage for all citizens, had been implemented constitutionally in Finland, along with proportional representation and an open-list system. Likewise, the February Revolution in Russia in 1917 inaugurated a few months of liberal democracy under Alexander Kerensky until Lenin took over in October. The terrible economic consequences of the Great Depression hurt democratic forces in many countries. The 1930s became a decade of dictators in Europe and Latin America.
In 1918 the United Kingdom granted women over 30 who met a property qualification the right to vote; a second act, passed in 1928, granted women and men equal voting rights. On 18 August 1920, the Nineteenth Amendment (Amendment XIX) to the United States Constitution was adopted, which prohibits the states and the federal government from denying the right to vote to citizens of the United States on the basis of sex. French women got the right to vote in 1944, but did not actually cast their ballots for the first time until April 29, 1945.
The Indian Citizenship Act of 1924 granted full U.S. citizenship to America's indigenous peoples, called "Indians" in this Act. (The Fourteenth Amendment guarantees citizenship to persons born in the U.S., but only if "subject to the jurisdiction thereof"; this latter clause excludes certain indigenous peoples.) The act was signed into law by President Calvin Coolidge on 2 June 1924. It further extended the rights of indigenous peoples resident within the boundaries of the United States.
Post–World War II
World War II was ultimately a victory for democracy in Western Europe, where representative governments were established that reflected the general will of their citizens. However, many countries of Central and Eastern Europe became undemocratic Soviet satellite states. In Southern Europe, a number of right-wing authoritarian dictatorships (most notably in Spain and Portugal) continued to exist. Japan had moved towards democracy during the Taishō period in the 1920s, but it was under effective military rule in the years before and during World War II. The country adopted a new constitution during the postwar Allied occupation, with initial elections in 1946.
Decolonisation and civil rights movements
World War II also planted seeds of democracy outside Europe and Japan, as it weakened, with the exception of the USSR and the United States, all the old colonial powers while strengthening anticolonial sentiment worldwide. Many restive colonies and possessions were promised subsequent independence in exchange for their support for the embattled colonial powers during the war. In 1946, the United States granted independence to the Philippines, which preserved a democratic political system as a presidential republic until the presidency of Ferdinand Marcos. The aftermath of World War II also resulted in the United Nations' decision to partition the British Mandate of Palestine into two states, one Jewish and one Arab. On 14 May 1948 the state of Israel declared independence, and thus was born the first full democracy in the Middle East. Israel is a representative democracy with a parliamentary system and universal suffrage. India became a Democratic Republic in 1950 after achieving independence from Great Britain in 1947. After holding its first national elections in 1952, India achieved the status of the world's largest liberal democracy with universal suffrage, a status it continues to hold today. Most of the former British and French colonies were independent by 1965 and at least initially democratic; those that had formerly been part of the British Empire often adopted the Westminster parliamentary system. The process of decolonisation created much political upheaval in Africa and parts of Asia, with some countries experiencing often rapid changes to and from democratic and other forms of government.
In the United States of America, the Voting Rights Act of 1965 and the Civil Rights Act enforced the 15th Amendment. The 24th Amendment ended poll taxation by removing all taxes placed upon voting, a technique commonly used to restrict the African American vote. The Voting Rights Act also granted voting rights to all Native Americans, irrespective of their home state. The minimum voting age was reduced to 18 by the 26th Amendment in 1971.
Late Cold War and post-Soviet democratisation
New waves of democracy swept across Southern Europe in the 1970s, as a number of right-wing nationalist dictatorships fell from power.
Later, in Central and Eastern Europe in the late 1980s, the communist states in the USSR's sphere of influence were also replaced with liberal democracies. Much of Eastern Europe, Latin America, East and Southeast Asia, and several Arab, Central Asian and African states, as well as the not-yet-state Palestinian Authority, moved towards greater liberal democracy in the 1990s and 2000s. By the end of the century, the world had changed from having, in 1900, not a single liberal democracy with universal suffrage to 120 of the world's 192 nations, or 62%, being such democracies. In 1900, 25 nations, or 13% of the world's nations, had "restricted democratic practices"; in 2000, 16 nations, or 8%, were such restricted democracies. Other nations had, and have, various forms of non-democratic rule. The numbers are indicative of the expansion of democracy during the twentieth century, though the specifics may be open to debate (for example, New Zealand enacted universal suffrage in 1893, but this is discounted due to a lack of complete sovereignty and limitations on the Māori vote).
Democracy in the 21st century
By region
The 2003 US-led invasion of Iraq led to the toppling of President Saddam Hussein and a new constitution with free and open elections. Later, around 2011, the Arab Spring led to much upheaval, as well as to the establishment of a democracy in Tunisia and some increased democratic rights in Morocco. Egypt saw a temporary democracy before the re-establishment of military rule. The Palestinian Authority also took action to address democratic rights.
In Africa, out of 55 countries, democratization seems almost stalled since 2005 because of the resistance of some 20 non-democratic regimes, most of which originated in the 1980s. As an exception, in 2016, after losing an election, the president of the Gambia attempted to cling to power, but a threatened regional military intervention forced him to leave. In 2019 the dictatorships in Sudan and Algeria fell; it remains unclear what type of regimes will emerge in these two countries.
In Asia, in Myanmar (also known as Burma), the ruling military junta in 2011 made changes to allow certain voting rights and released a prominent figure in the National League for Democracy, Aung San Suu Kyi, from house arrest. Myanmar did not allow Suu Kyi to run for election. However, conditions partially changed with the election of Suu Kyi's National League for Democracy party and her appointment as the de facto leader of Myanmar (Burma) with the title of "state counsellor", as she is still not allowed to become president and therefore leads through a figurehead, Htin Kyaw. Human rights, however, have not improved. In Bhutan, in December 2005, the 4th King, Jigme Singye Wangchuck, announced that the first general elections would take place in 2008 and that he would abdicate the throne in favor of his eldest son. Bhutan is currently undergoing further changes to allow for a constitutional monarchy. In the Maldives, protests and political pressure led to a government reform which allowed democratic rights and presidential elections in 2008. These were, however, undone by a coup in 2018. Meanwhile, in Thailand the military twice overthrew democratically elected governments (in 2006 and 2014) and in 2014 changed the constitution in order to increase its own power. The authoritarian regime of Hun Sen in Cambodia dissolved the main opposition party (the Cambodia National Rescue Party) in 2017 and effectively implemented a one-man dictatorship.
In Europe, Ukraine saw several protest movements leading to a switch from effective oligarchy to more democracy; since the Maidan revolution of February 2014, Ukraine has seen two presidential elections and the peaceful transfer of power. Not all movement has promoted democracy, however. In Poland and Hungary, so-called "illiberal democracies" have taken hold, with the ruling parties in both countries considered by the EU and by civil society to be working to undermine democratic governance. Within English-speaking Western democracies, "protection-based" attitudes combining cultural conservatism and leftist economic attitudes were the strongest predictor of support for authoritarian modes of governance.
Overall
Although the number of democratic states has continued to grow since 2006, the share of weaker electoral democracies has grown significantly, and this weakness is the strongest causal factor behind fragile democracies. As of 2020, authoritarianism and populism are on the rise around the world, with the number of people living in democracies lower than at the end of the Cold War. "Democratic backsliding" in the 2010s was attributed to economic inequality and social discontent, personalism, poor management of the COVID-19 pandemic, as well as other factors such as government manipulation of civil society, "toxic polarization", foreign disinformation campaigns, racism and nativism, excessive executive power, and the decreased power of the opposition. Large parts of the world, such as China, Russia, Central and South East Asia, the Middle East and much of Africa, have consolidated authoritarian rule rather than seeing it weaken. Determining the continuity and age of independent democracies depends on the criteria applied, but generally the United States is identified as the oldest democracy, while the country with the longest history of universal suffrage is New Zealand.
Contemporary innovations
Under the influence of the theory of deliberative democracy, there have been several experiments in which citizens and their representatives assemble to exchange reasons. The use of random selection to form a representative deliberative body is most commonly known as a citizens' assembly. Citizens' assemblies have been used in Canada (2004, 2006) and the Netherlands (2006) to debate electoral reform, and in Iceland (2009 and 2010) for broader constitutional change.
Natural history of disease
The natural history of disease is the course a disease takes in individual people from its pathological onset ("inception") until its resolution (either through complete recovery or eventual death). The inception of a disease is not a firmly defined concept. The natural history of a disease is sometimes said to start at the moment of exposure to causal agents. Knowledge of the natural history of disease ranks alongside causal understanding in importance for disease prevention and control. Natural history of disease is one of the major elements of descriptive epidemiology.
As an example, the cartilage of the knee, trapeziometacarpal and other joints deteriorates with age in most humans (osteoarthritis). There are no disease-modifying treatments for osteoarthritis: no way to slow, arrest, or reverse this pathophysiological process. There are only palliative or symptomatic treatments, such as analgesics and exercises. In contrast, consider rheumatoid arthritis, a systemic inflammatory disease that damages articular cartilage throughout the body. There are now treatments (immune-modulating drugs) that can modify that autoimmune inflammatory process and slow the progression of the disease. Because these medications can alter the natural history of disease, they are referred to as disease-modifying antirheumatic drugs.
The subclinical (pre-symptomatic) and clinical (symptomatic) evolution of disease is the natural progression of a disease without any medical intervention. It constitutes the course of biological events that occurs from the origin of the disease (its etiology) to its outcome, whether that be recovery, chronicity, or death. With regard to the natural history of disease, the goal of the medical field is to discover all of the different phases and components of each pathological process in order to intervene as early as possible and change the course of the disease before it leads to the deterioration of the patient's health.
There are two complementary perspectives for characterizing the natural history of disease. The first is that of the family doctor, who, by means of detailed clinical histories of each patient, can determine the presence and characteristics of any new health problems. In contrast to this individualized view, the second perspective is that of the epidemiologist, who, through a combination of health records and biostatistical data, can discover new diseases and their respective evolutions; this is more of a population view.
Phases of disease
Pre-pathogenic period
In the pre-pathogenic period, the disease originates, but the patient does not yet present clinical symptoms or changes in his or her cells, tissues, or organs. This phase is defined by the host conditions, the disease agent (such as microorganisms and other pathogens), and the environment.
Pathogenic period
The pathogenic period is the phase in which there are changes in the patient's cells, tissues, or organs, but the patient still does not notice any symptoms or signs of disease. This is a subclinical phase that can be subdivided into two further phases:
Incubation period vs. latency period
In transmissible diseases (like the flu), this phase is referred to as the incubation period, because it is the time in which microorganisms are multiplying and producing toxins. It is fast-evolving and can last hours to days.
However, in degenerative and chronic diseases (like osteoarthritis and dementia), this phase is referred to as the latency period, because it has a very slow evolution that can last months to years.
Clinical period
The clinical period is when the patient finally presents clinical signs and symptoms: that is, when the disease is clinically expressed and those affected seek health care. During this phase, if the pathological process keeps evolving spontaneously without medical intervention, it will end in one of three ways: recovery, disability, or death. Additionally, this phase can be broken down into three different periods:
Prodromal: the first signs or symptoms appear, which indicates the clinical start of the disease.
Clinical: specific signs and symptoms appear, which allows the doctor not only to identify the disease but also to determine the appropriate treatment, in hopes of curing the patient or at least preventing long-term damage.
Resolution: the final phase, in which the disease disappears, becomes chronic, or leads to death.
Types of prevention
The medical field has developed many different interventions to diagnose, prevent, treat, and rehabilitate the natural course of disease. In artificially changing this evolution of disease, doctors hope to prevent the death of their patients by either curing them or reducing the disease's long-term effects.
Primary prevention
Primary prevention is a group of sanitary activities that are carried out by the community, government, and healthcare personnel before a particular disease appears. This includes:
Promotion of health, which is the encouragement and defense of the population's health through actions directed at individuals of the community, for example anti-tobacco campaigns for preventing lung cancer and other illnesses associated with tobacco.
Specific protection of health, including environmental safety and food safety. While vaccinations are carried out by medical and nursing personnel, health promotion and protection activities that influence the environment are carried out by other public health professionals.
Chemoprophylaxis, which consists of drug administration to prevent diseases. One example of this is the administration of estrogen in menopausal women to prevent osteoporosis.
According to the WHO, one of the instruments of health promotion and prevention is health education, which deals with the transmission of information and with fostering the personal skills and self-esteem necessary to adopt measures intended to improve health. Health education involves the spreading of information related not only to the underlying social, economic, and environmental conditions that influence health but also to the factors and behaviors that put patients at risk. In addition, communication about the use of the healthcare system is becoming increasingly important to primary prevention.
Secondary prevention
Secondary prevention, also called early diagnosis or early screening, is an early detection program. More specifically, it is an epidemiological program of universal application that is used to detect serious illnesses in particular asymptomatic populations while the disease is still subclinical. This form of prevention can be associated with an effective or curative treatment, and its goal is to reduce the mortality rate.
Secondary prevention is based on population screenings and, in order to justify these screenings, the following predetermined conditions, defined by Frame and Carlson in 1975, must be met:
That the disease represents an important health problem that produces noticeable effects on the quality and duration of one's life.
That the disease has a prolonged initial, asymptomatic phase and that its natural history is known.
That an effective treatment is available and accepted by the population in case the disease is found in the initial phase.
That a rapid, reliable, and easily conducted screening test is available, is well accepted by doctors and patients, and has high sensitivity, specificity, and validity.
That the screening test is cost-effective.
That the early detection of the disease and its treatment during the asymptomatic period reduces global morbidity and/or mortality.
Tertiary prevention
Tertiary prevention is the patient's recovery once the disease has appeared. A treatment is administered in an attempt to cure or palliate the disease or some of its specific symptoms. The recovery and treatment of the patient is carried out both in primary care and in hospital care. Tertiary prevention also occurs when a patient avoids a new contagion as a result of knowledge gained from having a different illness in the past.
Quaternary prevention
Quaternary prevention is the group of health activities that mitigates or entirely avoids the consequences of the health system's unnecessary or excessive interventions. They are "the actions that are taken to identify patients at risk of overtreatment, to protect them from new medical interventions, and to suggest ethically acceptable alternatives." This concept was coined by the Belgian general practitioner Marc Jamoulle and is included in WONCA's Dictionary of General/Family Practice.
Example: Musculoskeletal diseases of senescence
Pre-pathogenic period
Musculoskeletal pathologies such as osteoarthritis of the knee or shoulder (rotator cuff) tendinopathy are aspects of normal human aging. Most humans eventually have evidence of these diseases on imaging; in other words, they are diseases of senescence. In a sense, all humans are in the "pre-pathogenic period" for these diseases.
Pathogenic period
Latency period
Osteoarthritis and tendinopathy can remain unnoticed (asymptomatic) for years or even decades. For instance, when one shoulder with tendinopathy develops painful movement, imaging of the opposite, symptom-free shoulder tends to identify comparable pathology.
Clinical period
Prodromal: The first time a person notices pain or stiffness associated with osteoarthritis or tendinopathy, it may be misperceived as a new pathology or even as an injury.
Clinical: There comes a time when the disease is symptomatic on most days and there may be deformity or stiffness (reduced motion). The person is now aware of the changes in their body. This may be a time of seeking medical advice or treatment.
Resolution: As with all diseases of senescence, there is an accommodation phase in which a person redefines their sense of self and no longer perceives the disease as needing active care. Another example is presbyopia, the need for reading glasses: once a person understands that they need glasses to read, they adjust, and this is no longer a medical problem.
Types of prevention
The concept of prevention does not apply to musculoskeletal diseases of senescence, because there are no disease-modifying treatments and the pathology appears relatively independent of environmental exposures such as activity level.
Fascism and ideology
The history of fascist ideology is long and it draws on many sources. Fascists took inspiration from sources as ancient as the Spartans for their focus on racial purity and their emphasis on rule by an elite minority. Fascism has also been connected to the ideals of Plato, though there are key differences between the two. Fascism styled itself as the ideological successor to Rome, particularly the Roman Empire. Georg Wilhelm Friedrich Hegel's view of the absolute authority of the state also strongly influenced fascist thinking. The French Revolution was a major influence insofar as the Nazis saw themselves as fighting back against many of the ideas which it brought to prominence, especially liberalism, liberal democracy and racial equality, whereas on the other hand, fascism drew heavily on the revolutionary ideal of nationalism. The prejudice of a "high and noble" Aryan culture as opposed to a "parasitic" Semitic culture was core to Nazi racial views, while other early forms of fascism concerned themselves with non-racialized conceptions of the nation.
Common themes among fascist movements include: authoritarianism, nationalism (including racial nationalism and religious nationalism), hierarchy and elitism, and militarism. Other aspects of fascism, such as its perception of decadence, its anti-egalitarianism and its totalitarianism, can be seen to originate from these ideas. Roger Griffin has proposed that fascism is a synthesis of totalitarianism and ultranationalism sacralized through a myth of national rebirth and regeneration, which he terms "palingenetic ultranationalism".
Fascism's relationship with other ideologies of its day has been complex. It frequently considered those ideologies its adversaries, but at the same time it was also focused on co-opting their more popular aspects. Fascism supported private property rights – except for the groups which it persecuted – and the profit motive of capitalism, but it sought to eliminate the autonomy of large-scale capitalism from the state. Fascists shared many of the goals of the conservatives of their day and they often allied themselves with them by drawing recruits from disaffected conservative ranks, but they presented themselves as holding a more modern ideology, with less focus on things like traditional religion, and sought to radically reshape society through revolutionary action rather than preserve the status quo. Fascism opposed class conflict and the egalitarian and international character of socialism. It strongly opposed liberalism, communism, anarchism, and democratic socialism.
Ideological origins
Early influences (495 BCE–1880 CE)
Early influences that shaped the ideology of fascism have been dated back to Ancient Greece. The political culture of ancient Greece, and specifically the ancient Greek city-state of Sparta under Lycurgus with its emphasis on militarism and racial purity, was admired by the Nazis. Nazi Führer Adolf Hitler emphasized that Germany should adhere to Hellenic values and culture – particularly that of ancient Sparta. He rebuked potential criticism of Hellenic values being non-German by emphasizing the common Aryan race connection with ancient Greeks, saying in Mein Kampf: "One must not allow the differences of the individual races to tear up the greater racial community".
In fact, drawing racial ties to ancient Greek culture was seen as necessary to the national narrative, as Hitler was unimpressed with the cultural works of Germanic tribes at the time, saying, "if anyone asks us about our ancestors, we should continually allude to the ancient Greeks." Hitler went on to say in Mein Kampf: "The struggle that rages today involves very great aims: a culture fights for its existence, which combines millenniums and embraces Hellenism and Germanity together". The Spartans were emulated by the quasi-fascist regime of Ioannis Metaxas who called for Greeks to wholly commit themselves to the nation with self-control as the Spartans had done. Supporters of the 4th of August Regime in the 1930s to 1940s justified the dictatorship of Metaxas on the basis that the "First Greek Civilization" involved an Athenian dictatorship led by Pericles who had brought ancient Greece to greatness. The Greek philosopher Plato supported many similar political positions to fascism. In The Republic (c. 380 BC), Plato emphasizes the need for a philosopher king in an ideal state. Plato believed the ideal state would be ruled by an elite class of rulers known as "Guardians" and rejected the idea of social equality. Plato believed in an authoritarian state. Plato held Athenian democracy in contempt by saying: "The laws of democracy remain a dead letter, its freedom is anarchy, its equality the equality of unequals". Like fascism, Plato emphasized that individuals must adhere to laws and perform duties while declining to grant individuals rights to limit or reject state interference in their lives. Like fascism, Plato also claimed that an ideal state would have state-run education that was designed to promote able rulers and warriors. Like many fascist ideologues, Plato advocated for a state-sponsored eugenics program to be carried out in order to improve the Guardian class in his Republic through selective breeding. Italian Fascist Il Duce Benito Mussolini had a strong attachment to the works of Plato. However, there are significant differences between Plato's ideals and fascism. Unlike fascism, Plato never promoted expansionism and he was opposed to offensive war. Italian Fascists identified their ideology as being connected to the legacy of ancient Rome and particularly the Roman Empire: they idolized Julius Caesar and Augustus. Italian Fascism viewed the modern state of Italy as the heir of the Roman Empire and emphasized the need for Italian culture to "return to Roman values". Italian Fascists identified the Roman Empire as being an ideal organic and stable society in contrast to contemporary individualist liberal society that they saw as being chaotic in comparison. Julius Caesar was considered a role model by fascists because he led a revolution that overthrew an old order to establish a new order based on a dictatorship in which he wielded absolute power. Mussolini emphasized the need for dictatorship, activist leadership style and a leader cult like that of Julius Caesar that involved "the will to fix a unifying and balanced centre and a common will to action". Italian Fascists also idolized Augustus as the champion who built the Roman Empire. The fasces – a symbol of Roman authority – was the symbol of the Italian Fascists and was additionally adopted by many other national fascist movements formed in emulation of Italian Fascism. 
While a number of Nazis rejected Roman civilization because they saw it as incompatible with Aryan Germanic culture and they also believed that Aryan Germanic culture was outside Roman culture, Adolf Hitler personally admired ancient Rome. Hitler focused on ancient Rome during its rise to dominance and at the height of its power as a model to follow, and he deeply admired the Roman Empire for its ability to forge a strong and unified civilization. In private conversations, Hitler blamed the fall of the Roman Empire on the Roman adoption of Christianity because he claimed that Christianity authorized the racial intermixing that weakened Rome and led to its destruction.
There were a number of influences on fascism from the Renaissance era in Europe. Niccolò Machiavelli is known to have influenced Italian Fascism, particularly through his promotion of the absolute authority of the state. Machiavelli rejected all existing traditional and metaphysical assumptions of the time, especially those associated with the Middle Ages, and asserted as an Italian patriot that Italy needed a strong and all-powerful state led by a vigorous and ruthless leader who would conquer and unify Italy. Mussolini saw himself as a modern-day Machiavellian and wrote an introduction to his honorary doctoral thesis for the University of Bologna, "Prelude to Machiavelli". Mussolini professed that Machiavelli's "pessimism about human nature was eternal in its acuity. Individuals simply could not be relied on voluntarily to 'obey the law, pay their taxes and serve in war'. No well-ordered society could want the people to be sovereign". Most dictators of the 20th century mimicked Mussolini's admiration for Machiavelli and "Stalin... saw himself as the embodiment of Machiavellian virtù".
English political theorist Thomas Hobbes in his work Leviathan (1651) created the ideology of absolutism that advocated an all-powerful absolute monarchy to maintain order within a state. Absolutism was an influence on fascism. Absolutism based its legitimacy on the precedents of Roman law, including the centralized Roman state and the manifestation of Roman law in the Catholic Church. Though fascism supported the absolute power of the state, it opposed the idea of absolute power being in the hands of a monarch and opposed the feudalism that was associated with absolute monarchies.
During the Enlightenment, a number of ideological influences arose that would shape the development of fascism. The development of the study of universal histories by Johann Gottfried Herder resulted in Herder's analysis of the development of nations. Herder developed the term Nationalismus ("nationalism") to describe this cultural phenomenon. At this time nationalism did not refer to the political ideology of nationalism that was later developed during the French Revolution. Herder also developed the theory that Europeans are the descendants of Indo-Aryan people based on language studies. Herder argued that the Germanic peoples held close racial connections with the ancient Indians and ancient Persians, who he claimed were advanced peoples possessing a great capacity for wisdom, nobility, restraint and science. Contemporaries of Herder used the concept of the Aryan race to draw a distinction between what they deemed "high and noble" Aryan culture versus that of "parasitic" Semitic culture, and this anti-Semitic variant view of Europeans' Aryan roots formed the basis of Nazi racial views.
Another major influence on fascism came from the political theories of Georg Wilhelm Friedrich Hegel. Hegel promoted the absolute authority of the state and said "nothing short of the state is the actualization of freedom" and that the "state is the march of God on earth".
The French Revolution and its political legacy had a major influence upon the development of fascism. Fascists view the French Revolution as a largely negative event that resulted in the entrenchment of liberal ideas such as liberal democracy, anticlericalism and rationalism. Opponents of the French Revolution initially were conservatives and reactionaries, but the Revolution was also later criticized by Marxists for its bourgeois character, and by racist nationalists who opposed its universalist principles. Racist nationalists in particular condemned the French Revolution for granting social equality to "inferior races" such as Jews. Mussolini condemned the French Revolution for developing liberalism, scientific socialism and liberal democracy, but also acknowledged that fascism extracted and used all the elements that had preserved those ideologies' vitality and that fascism had no desire to restore the conditions that precipitated the French Revolution. Though fascism opposed core parts of the Revolution, fascists supported other aspects of it: Mussolini declared his support for the Revolution's demolition of remnants of the Middle Ages such as tolls and compulsory labour upon citizens, and he noted that the French Revolution did have benefits in that it had been a cause of the whole French nation and not merely a political party. Most importantly, the French Revolution was responsible for the entrenchment of nationalism as a political ideology – both in its development in France as French nationalism and in the creation of nationalist movements, particularly in Germany with the development of German nationalism by Johann Gottlieb Fichte as a political response to the development of French nationalism. The Nazis accused the French Revolution of being dominated by Jews and Freemasons and were deeply disturbed by the Revolution's intention to completely break France away from its history, in what the Nazis claimed was a repudiation of history that they asserted to be a trait of the Enlightenment. Though the Nazis were highly critical of the Revolution, Hitler in Mein Kampf said that the French Revolution was a model for how to achieve change, which he claimed was caused by the rhetorical strength of demagogues. Furthermore, the Nazis idealized the levée en masse (mass mobilization of soldiers) that was developed by French Revolutionary armies, and the Nazis sought to use the system for their paramilitary movement.
Fin de siècle era and the fusion of nationalism with Sorelianism (1880–1914)
The ideological roots of fascism have been traced to the 1880s and in particular the fin de siècle theme of that time. The theme was based on revolt against materialism, rationalism, positivism, bourgeois society and liberal democracy. The fin-de-siècle generation supported emotionalism, irrationalism, subjectivism and vitalism. The fin-de-siècle mindset saw civilization as being in a crisis that required a massive and total solution.
The fin-de-siècle intellectual school of the 1890s – including Gabriele d'Annunzio and Enrico Corradini in Italy; Maurice Barrès, Edouard Drumont and Georges Sorel in France; and Paul de Lagarde, Julius Langbehn and Arthur Moeller van den Bruck in Germany – saw social and political collectivity as more important than individualism and rationalism. They considered the individual as only one part of the larger collectivity, which should not be viewed as an atomized numerical sum of individuals. They condemned the rationalistic individualism of liberal society and the dissolution of social links in bourgeois society. They saw modern society as one of mediocrity, materialism, instability, and corruption. They denounced big-city urban society as being merely based on instinct and animality and without heroism.
The fin-de-siècle outlook was influenced by various intellectual developments, including Darwinian biology; Wagnerian aesthetics; Arthur de Gobineau's racialism; Gustave Le Bon's psychology; and the philosophies of Friedrich Nietzsche, Fyodor Dostoyevsky and Henri Bergson. Social Darwinism, which gained widespread acceptance, made no distinction between physical and social life and viewed the human condition as being an unceasing struggle to achieve the survival of the fittest. Social Darwinism challenged positivism's claim of deliberate and rational choice as the determining behaviour of humans, with social Darwinism focusing on heredity, race and environment. Social Darwinism's emphasis on biogroup identity and the role of organic relations within societies fostered legitimacy and appeal for nationalism. New theories of social and political psychology also rejected the notion of human behaviour being governed by rational choice, and instead claimed that emotion was more influential in political issues than reason. Nietzsche's argument that "God is dead", his attack on the "herd mentality" of Christianity, democracy and modern collectivism, his concept of the Übermensch, and his advocacy of the will to power as a primordial instinct were major influences upon many of the fin-de-siècle generation. Bergson's claim of the existence of an "élan vital", or vital instinct, centred upon free choice and rejected the processes of materialism and determinism, thus challenging Marxism.
With the advent of the Darwinian theory of evolution came claims of evolution possibly leading to decadence. Proponents of decadence theories claimed that contemporary Western society's decadence was the result of modern life, including urbanization, sedentary lifestyle, the survival of the least fit and modern culture's emphasis on egalitarianism, individualistic anomie, and nonconformity. The main work that gave rise to decadence theories was Degeneration (1892) by Max Nordau, which was popular in Europe. The ideas of decadence helped the cause of nationalists, who presented nationalism as a cure for decadence.
Gaetano Mosca in his work The Ruling Class (1896) developed the theory that in all societies, an "organized minority" will dominate and rule over the "disorganized majority". Mosca claims that there are only two classes in society, "the governing" (the organized minority) and "the governed" (the disorganized majority). He claims that the organized nature of the organized minority makes it irresistible to any individual of the disorganized majority.
Mosca developed this theory in 1896, arguing that the problem of the supremacy of civilian power in society is solved in part by the presence and social structural design of militaries. He claimed that the social structure of the military is ideal because it includes diverse social elements that balance each other out and, more importantly, because it includes an officer class as a "power elite". Mosca presented the social structure and methods of governance by the military as a valid model of development for civil society. Mosca's theories are known to have significantly influenced Mussolini's notion of the political process and fascism.
Related to Mosca's theory of domination of society by an organized minority over a disorganized majority was Robert Michels' theory of the iron law of oligarchy, created in 1911, which was a major attack on the basis of contemporary democracy. Michels argued that oligarchy is inevitable as an "iron law" within any organization as part of the "tactical and technical necessities" of organization, and on the topic of democracy Michels stated: "It is organization which gives birth to the dominion of the elected over the electors, of the mandataries over the mandators, of the delegates over the delegators. Who says organization, says oligarchy". He claimed: "Historical evolution mocks all the prophylactic measures that have been adopted for the prevention of oligarchy". He stated that the official goal of contemporary democracy, the elimination of elite rule, was impossible, that democracy is a façade which legitimizes the rule of a particular elite and that elite rule, which he referred to as oligarchy, is inevitable. Michels had previously been a social democrat, but became drawn to the ideas of Georges Sorel, Édouard Berth, Arturo Labriola and Enrico Leone and came to strongly oppose the parliamentarian, legalistic and bureaucratic socialism of social democracy. As early as 1904, he began to advocate in favor of patriotism and national interests. Later he began to support activist, voluntarist, and anti-parliamentarian concepts, and in 1911 he took a position in favor of the Italian war effort in Libya and started moving towards Italian nationalism. Michels eventually became a supporter of fascism upon Mussolini's rise to power in 1922, viewing fascism's goal to destroy liberal democracy in a sympathetic manner.
Maurice Barrès, a French politician of the late 19th and early 20th centuries who influenced the later fascist movement, claimed that true democracy was authoritarian democracy while rejecting liberal democracy as a fraud. Barrès claimed that authoritarian democracy involved a spiritual connection between a leader of a nation and the nation's people, and that true freedom did not arise from individual rights or parliamentary restraints, but through "heroic leadership" and "national power". He emphasized the need for hero worship and charismatic leadership in national society. Barrès was a founding member of the League for the French Fatherland in 1889, and later coined the term "socialist nationalism" to describe his views during an electoral campaign in 1898. He emphasized class collaboration, the role of intuition and emotion in politics alongside racial antisemitism, and "he tried to combine the search for energy and a vital style of life with national rootedness and a sort of Darwinian racism."
Later in life he returned to cultural traditionalism and parliamentary conservatism, but his ideas contributed to the development of an extremist form of nationalism in pre-1914 France. Other French nationalist intellectuals of the early 20th century also wished to "obliterate the class struggle in ideological terms," ending the threat of communism by persuading working people to identify with their nation rather than their class.
The rise of support for anarchism in this period of time was important in influencing the politics of fascism. The anarchist Mikhail Bakunin's concept of propaganda of the deed, which stressed the importance of direct action as the primary means of politics, including revolutionary violence, became popular amongst fascists who admired the concept and adopted it as a part of fascism.
One of the key persons who greatly influenced fascism was the French intellectual Georges Sorel, who "must be considered one of the least classifiable political thinkers of the twentieth century" and supported a variety of different ideologies throughout his life, including conservatism, socialism, revolutionary syndicalism and nationalism. Sorel also contributed to the fusion of anarchism and syndicalism into anarcho-syndicalism. He promoted the legitimacy of political violence in his work Reflections on Violence (1908), during a period in his life when he advocated radical syndicalist action to achieve a revolution which would overthrow capitalism and the bourgeoisie through a general strike. In Reflections on Violence, Sorel emphasized the need for a revolutionary political religion. Also in his work The Illusions of Progress, Sorel denounced democracy as reactionary, saying "nothing is more aristocratic than democracy". By 1909, after the failure of a syndicalist general strike in France, Sorel and his supporters abandoned the radical left and went to the radical right, where they sought to merge militant Catholicism and French patriotism with their views – advocating anti-republican Christian French patriots as ideal revolutionaries. In the early 1900s Sorel had officially been a revisionist of Marxism, but by 1910 he announced his abandonment of socialism, and in 1914 he claimed – following an aphorism of Benedetto Croce – that "socialism is dead" due to the "decomposition of Marxism". Sorel became a supporter of reactionary Maurrassian integral nationalism beginning in 1909, and this greatly influenced his works. Sorel's political allegiances were constantly shifting, influencing a variety of people across the political spectrum from Benito Mussolini to Benedetto Croce to Georg Lukács, and both sympathizers and critics of Sorel considered his political thought to be a collection of separate ideas with no coherence and no common thread linking them. In this, Sorelianism is considered to be a precursor to fascism, as fascist thought also drew from disparate sources and did not form a single coherent ideological system. Sorel described himself as "a self-taught man exhibiting to other people the notebooks which have served for my own instruction", and stated that his goal was to be original in all of his writings and that his apparent lack of coherence was due to an unwillingness to write down anything that had already been said elsewhere by someone else. The academic intellectual establishment did not take him seriously, but Mussolini applauded Sorel by declaring: "What I am, I owe to Sorel".
Charles Maurras was a French right-wing monarchist and nationalist who held interest in merging his nationalist ideals with Sorelian syndicalism as a means to confront liberal democracy. This fusion of nationalism from the political right with Sorelian syndicalism from the left took place around the outbreak of World War I. Sorelian syndicalism, unlike other ideologies on the left, held an elitist view that the morality of the working class needed to be raised. The Sorelian concept of the positive nature of social war and its insistence on a moral revolution led some syndicalists to believe that war was the ultimate manifestation of social change and moral revolution.
The fusion of Maurrassian nationalism and Sorelian syndicalism influenced radical Italian nationalist Enrico Corradini. Corradini spoke of the need for a nationalist-syndicalist movement, led by elitist aristocrats and anti-democrats who shared a revolutionary syndicalist commitment to direct action and a willingness to fight. Corradini spoke of Italy as being a "proletarian nation" that needed to pursue imperialism to challenge the "plutocratic" French and British. Corradini's views were part of a wider set of perceptions within the right-wing Italian Nationalist Association (ANI), which claimed that Italy's economic backwardness was caused by corruption in its political class, liberalism, and division caused by "ignoble socialism". The ANI held ties and influence among conservatives, Catholics, and the business community. Italian national syndicalists held a common set of principles: the rejection of bourgeois values, democracy, liberalism, Marxism, internationalism and pacifism, and the promotion of heroism, vitalism and violence. Radical nationalism in Italy (support for expansionism and cultural revolution to create a "New Man" and a "New State") began to grow in 1912 during the Italian conquest of Libya and was supported by Italian Futurists and members of the ANI. Futurism was both an artistic-cultural movement and initially a political movement in Italy led by Filippo Tommaso Marinetti, the author of the Futurist Manifesto (1908), that championed the causes of modernism, action and political violence as necessary elements of politics while denouncing liberalism and parliamentary politics. Marinetti rejected conventional democracy for being based on majority rule and egalitarianism, while promoting a new form of democracy that he described in his work "The Futurist Conception of Democracy" as the following: "We are therefore able to give the directions to create and to dismantle to numbers, to quantity, to the mass, for with us number, quantity and mass will never be, as they are in Germany and Russia, the number, quantity and mass of mediocre men, incapable and indecisive". The ANI claimed that liberal democracy was no longer compatible with the modern world and advocated a strong state and imperialism, claiming that humans are naturally predatory and that nations were in a constant struggle, in which only the strongest nations could survive. Until 1914, Italian nationalists and revolutionary syndicalists with nationalist leanings remained apart. Such syndicalists opposed the Italo-Turkish War of 1911 as an affair of financial interests and not the nation, but World War I was seen by both Italian nationalists and syndicalists as a national affair.
World War I and aftermath (1914–1922)
At the outbreak of World War I in August 1914, the Italian political left became severely split over its position on the war.
The Italian Socialist Party opposed the war on the grounds of proletarian internationalism, but a number of Italian revolutionary syndicalists supported intervention in the war on the grounds that it could serve to mobilize the masses against the status quo and that the national question had to be resolved before the social one. Corradini presented the need for Italy as a "proletarian nation" to defeat a reactionary Germany from a nationalist perspective. Angelo Oliviero Olivetti formed the Revolutionary Fascio for International Action in October 1914 to support Italy's entry into the war. At the same time, Benito Mussolini joined the interventionist cause. At first, these interventionist groups were composed of disaffected syndicalists who had concluded that their attempts to promote social change through a general strike had been a failure, and became interested in the transformative potential of militarism and war. They would help to form the Fascist movement several years later. This early interventionist movement was very small, and did not have an integrated set of policies. Its attempts to hold mass meetings were ineffective and it was regularly harassed by government authorities and socialists. Antagonism between interventionists and socialists resulted in violence. Attacks on interventionists were so violent that even democratic socialists who opposed the war, such as Anna Kuliscioff, said that the Italian Socialist Party had gone too far in its campaign to silence supporters of the war.
Benito Mussolini became prominent within the early pro-war movement thanks to his newspaper, Il Popolo d'Italia, which he founded in November 1914 to support the interventionist cause. The newspaper received funding from the governments of Allied powers that wanted Italy to join them in the war, particularly France and Britain. It was also funded in part by Italian industrialists who hoped to gain financially from the war, including Fiat, other arms manufacturers, and agrarian interests. Mussolini did not have any clear agenda in the beginning other than support for Italy's entry into the war, and sought to appeal to diverse groups of readers. These ranged from dissident socialists who opposed the Socialist Party's anti-war stance, to democratic idealists who believed the war would overthrow autocratic monarchies across Europe, to Italian patriots who wanted to recover ethnic Italian territories from Austria, to imperialists who dreamed of a new Roman Empire. By early 1915, Mussolini had moved towards the nationalist position. He began arguing that Italy should conquer Trieste and Fiume, and expand its northeastern border to the Alps, following the ideals of Mazzini, who called for a patriotic war to "secure Italy's natural frontiers of language and race". Mussolini also advocated waging a war of conquest in the Balkans and the Middle East, and his supporters began to call themselves Mussoliniani. He also started advocating for a "positive attitude" towards capitalism and capitalists, as part of his transition towards supporting class collaboration and an "Italy first" position. Italy finally entered the war on the Allied side in May 1915. Mussolini later took credit for having allegedly forced the government to declare war on Austria, although his influence on events was minimal. He enrolled in the Royal Italian Army in September 1915 and fought in the war until 1917, when he was wounded during a training exercise and discharged.
Italy's use of daredevil elite shock troops known as the Arditi, beginning in 1917, was an important influence on the early Fascist movement. The Arditi were soldiers who were specifically trained for a life of violence and wore unique blackshirt uniforms and fezzes. The Arditi formed a national organization in November 1918 which by mid-1919 had about twenty thousand young men within it. Mussolini appealed to the Arditi, and the Fascist movement that developed after the war was based upon them.
A major event that greatly influenced the development of fascism was the October Revolution of 1917, in which Bolshevik communists led by Vladimir Lenin seized power in Russia. The revolution in Russia gave rise to a fear of communism among the elites and among society at large in several European countries, and fascist movements gained support by presenting themselves as a radical anti-communist political force. Anti-communism was also an expression of fascist anti-universalism, as communism insisted on international working class unity while fascism insisted on national interests. In addition, fascist anti-communism was linked to anti-Semitism and even anti-capitalism, because many fascists believed that communism and capitalism were both Jewish creations meant to undermine nation-states. The Nazis advocated the conspiracy theory that Jewish communists were working together with Jewish finance capital against Germany. After World War I, fascists commonly campaigned on anti-Marxist agendas. Mussolini's immediate reaction to the Russian Revolution was contradictory. He admired Lenin's boldness in seizing power by force and was envious of the success of the Bolsheviks, while at the same time attacking them in his paper for restricting free speech and creating "a tyranny worse than that of the tsars." At this time, between 1917 and 1919, Mussolini and the early Fascist movement presented themselves as opponents of censorship and champions of free thought and speech, calling these "among the highest expressions of human civilization." Mussolini wrote that "we are libertarians above all" and claimed that the Fascists were committed to "loving liberty for everyone, even for our enemies."
Mussolini consolidated control over the Fascist movement in 1919 with the founding of the Fasci Italiani di Combattimento in Milan. For a brief time in 1919, this early fascist movement tried to position itself as a radical populist alternative to the socialists, offering its own version of a revolutionary transformation of society. In a speech delivered in Milan's Piazza San Sepolcro in March 1919, Mussolini set forward the proposals of the new movement, combining ideas from nationalism, Sorelian syndicalism, the idealism of the French philosopher Henri Bergson, and the theories of Gaetano Mosca and Vilfredo Pareto. Mussolini declared his opposition to Bolshevism because "Bolshevism has ruined the economic life of Russia" and because he claimed that Bolshevism was incompatible with Western civilization; he said that "we declare war against socialism, not because it is socialism, but because it has opposed nationalism", that "we intend to be an active minority, to attract the proletariat away from the official Socialist party" and that "we go halfway toward meeting the workers"; and he declared that "we favor national syndicalism and reject state intervention whenever it aims at throttling the creation of wealth."
In these early post-war years, the Italian Fascist movement tried to become a broad political umbrella that could include all people of all classes and political positions, united only by a desire to save Italy from the Marxist threat and to ensure the expansion of Italian territories in the post-war peace settlements. Il Popolo d'Italia wrote in March 1919 that "We allow ourselves the luxury of being aristocrats and democrats, conservatives and progressives, reactionaries and revolutionaries, legalists and antilegalists."
Later in 1919, Alceste De Ambris and futurist movement leader Filippo Tommaso Marinetti created The Manifesto of the Italian Fasci of Combat (also known as the Fascist Manifesto). The Manifesto was presented on 6 June 1919 in the Fascist newspaper Il Popolo d'Italia. The Manifesto supported the creation of universal suffrage for both men and women (the latter being realized only partly in late 1925, with all opposition parties banned or disbanded); proportional representation on a regional basis; government representation through a corporatist system of "National Councils" of experts, selected from professionals and tradespeople, elected to represent and hold legislative power over their respective areas, including labour, industry, transportation, public health, communications, etc.; and the abolition of the Italian Senate. The Manifesto supported the creation of an eight-hour work day for all workers, a minimum wage, worker representation in industrial management, equal confidence in labour unions as in industrial executives and public servants, reorganization of the transportation sector, revision of the draft law on invalidity insurance, reduction of the retirement age from 65 to 55, a strong progressive tax on capital, confiscation of the property of religious institutions and abolishment of bishoprics, and revision of military contracts to allow the government to seize 85% of war profits made by the armaments industry. It also called for the creation of a short-service national militia to serve defensive duties, nationalization of the armaments industry and a foreign policy designed to be peaceful but also competitive. Nevertheless, Mussolini also demanded the expansion of Italian territories, particularly by annexing Dalmatia (which he claimed could be accomplished by peaceful means), and insisted that "the state must confine itself to directing the civil and political life of the nation," which meant taking the government out of business and transferring large segments of the economy from public to private control. The intention was to appeal to a working class electorate while also maintaining the support of business interests, even if this meant making contradictory promises.
With this manifesto, the Fasci Italiani di Combattimento campaigned in the Italian elections of November 1919, mostly attempting to take votes away from the socialists. The results were disastrous. The fascists received fewer than 5,000 votes in their political heartland of Milan, compared to 190,000 for the socialists, and not a single fascist candidate was elected to any office. Mussolini's political career seemed to be over. This crippling electoral defeat was largely due to fascism's lack of ideological credibility, as the fascist movement was a mixture of many different ideas and tendencies. It contained monarchists, republicans, syndicalists and conservatives, and some candidates supported the Vatican while others wanted to expel the Pope from Italy.
In response to the failure of his electoral strategy, Mussolini shifted his political movement to the right, seeking to form an alliance with the conservatives. Soon, agrarian conflicts in the region of Emilia and in the Po Valley provided an opportunity to launch a series of violent attacks against the socialists, and thus to win credibility with the conservatives and establish fascism as a paramilitary movement rather than an electoral one. With the antagonism between anti-interventionist Marxists and pro-interventionist Fascists complete by the end of the war, the two sides became irreconcilable. The Fascists presented themselves as anti-Marxists and as opposed to the Marxists. Mussolini tried to build his popular support especially among war veterans and patriots by enthusiastically supporting Gabriele D'Annunzio, the leader of the annexationist faction in post-war Italy, who demanded the annexation of large territories as part of the peace settlement in the aftermath of the war. For D'Annunzio and other nationalists, the city of Fiume in Dalmatia (present-day Croatia) had "suddenly become the symbol of everything sacred." Fiume was a city with an ethnic Italian majority, while the countryside around it was largely ethnic Croatian. Italy demanded the annexation of Fiume and the region around it as a reward for its contribution to the Allied war effort, but the Allies – and US president Woodrow Wilson in particular – intended to give the region to the newly formed Kingdom of Serbs, Croats and Slovenes (later renamed Yugoslavia). As such, the next events that influenced the Fascists were the raid of Fiume by Italian nationalist Gabriele D'Annunzio and the founding of the Charter of Carnaro in 1920. D'Annunzio and De Ambris designed the Charter, which advocated national-syndicalist corporatist productionism alongside D'Annunzio's political views. Many Fascists saw the Charter of Carnaro as an ideal constitution for a Fascist Italy. This behaviour of aggression towards Yugoslavia and South Slavs was pursued by Italian Fascists with their persecution of South Slavs – especially Slovenes and Croats. In 1920, militant strike activity by industrial workers reached its peak in Italy, where 1919 and 1920 were known as the "Red Years". Mussolini first supported the strikes, but when this did not help him to gain any additional supporters, he abruptly reversed his position and began to oppose them, seeking financial support from big business and landowners. The donations he received from industrial and agrarian interest groups were unusually large, as they were very concerned about working class unrest and eager to support any political force that stood against it. Together with many smaller donations that he received from the public as part of a fund drive to support D'Annunzio, this helped to build up the Fascist movement and transform it from a small group based around Milan to a national political force. Mussolini organized his own militia, known as the "blackshirts," which started a campaign of violence against Communists, Socialists, trade unions and co-operatives under the pretense of "saving the country from bolshevism" and preserving order and internal peace in Italy. Some of the blackshirts also engaged in armed attacks against the Church, "where several priests were assassinated and churches burned by the Fascists". At the same time, Mussolini continued to present himself as the champion of Italian national interests and territorial expansion in the Balkans. 
In the autumn of 1920, Fascist blackshirts in the Italian city of Trieste (located not far from Fiume, and inhabited by Italians as well as Slavs) engaged in street violence and vandalism against Slavs. Mussolini visited the city to support them and was greeted by an enthusiastic crowd – the first time in his political career that he achieved such broad popular support. He also focused his rhetoric on attacks against the liberal government of Giovanni Giolitti, who had withdrawn Italian troops from Albania and did not press the Allies to allow Italy to annex Dalmatia. This helped to draw disaffected former soldiers into the Fascist ranks. Fascists identified their primary opponents as the socialists on the left who had opposed intervention in World War I. The Fascists and the rest of the Italian political right held common ground: both held Marxism in contempt, discounted class consciousness and believed in the rule of elites. The Fascists assisted the anti-socialist campaign by allying with the other parties and the conservative right in a mutual effort to destroy the Italian Socialist Party and labour organizations committed to class identity above national identity.
In 1921, the radical wing of the Italian Socialist Party broke away to form the Communist Party of Italy. This changed the political landscape, as the remaining Socialist Party – diminished in numbers, but still the largest party in parliament – became more moderate and was therefore seen as a potential coalition partner for Giolitti's government. Such an alliance would have secured a large majority in parliament, ending the political deadlock and making effective government possible. To prevent this from happening, Mussolini offered to ally his Fascists with Giolitti instead, and Giolitti accepted, under the assumption that the small Fascist movement would make fewer demands and would be easier to keep in check than the much larger Socialists. Mussolini and the Fascists thus joined a coalition formed of conservatives, nationalists and liberals, which stood against the left-wing parties (the socialists and the communists) in the Italian general election of 1921. As part of this coalition, the Fascists – who had previously claimed to be neither left nor right – identified themselves for the first time as the "extreme right", and presented themselves as the most radical right-wing members of the coalition. Mussolini talked about "imperialism" and "national expansion" as his main goals, and called for Italian domination of the Mediterranean Sea basin.
The elections of that year were characterized by Fascist street violence and intimidation, which they used to suppress the socialists and communists and to prevent their supporters from voting, while the police and courts (under the control of Giolitti's government) turned a blind eye and allowed the violence to continue without legal consequences. About a hundred people were killed, and some areas of Italy came fully under the control of fascist squads, which did not allow known socialist supporters to vote or hold meetings. In spite of this, the Socialist Party still won the largest share of the vote and 122 seats in parliament, followed by the Catholic Italian People's Party (Partito Popolare Italiano) with 107 seats. The Fascists only picked up 7 percent of the vote and 35 seats in parliament, but this was a large improvement compared to their results only two years earlier, when they had won no seats at all.
Mussolini took these electoral gains as an indication that his right-wing strategy had paid off, and decided that the Fascists would sit on the extreme right side of the amphitheatre where parliament met. He also used his first speech in parliament to take a "reactionary" stance, arguing against collectivization and nationalization, and calling for the post office and the railways to be given to private enterprise. Prior to Fascism's accommodation of the political right, Fascism was a small, urban, northern Italian movement that had about a thousand members. After Fascism's accommodation of the political right, the Fascist movement's membership soared to approximately 250,000 by 1921.
The other lesson drawn by Mussolini from the events of 1921 was about the effectiveness of open violence and paramilitary groups. The Fascists used violence even in parliament, for example by directly assaulting the communist deputy Misiano and throwing him out of the building on the pretext that he had been a deserter during the war. They also openly threatened socialists with their guns in the chamber. They were able to do this with impunity, while the government took no action against them, hoping not to offend Fascist voters. Across the country, local branches of the National Fascist Party embraced the principle of squadrismo and organized paramilitary "squads" modeled after the Arditi from the war. Mussolini claimed that he had "400,000 armed and disciplined men at his command" and did not hide his intentions of seizing power by force.
Rise to power and initial international spread of fascism (1922–1929)
Beginning in 1922, Fascist paramilitaries escalated their strategy by switching from attacks on socialist offices and the homes of socialist leadership figures to the violent occupation of cities. The Fascists met little serious resistance from authorities and proceeded to take over several cities, including Bologna, Bolzano, Cremona, Ferrara, Fiume and Trent. The Fascists attacked the headquarters of socialist and Catholic unions in Cremona and imposed forced Italianization upon the German-speaking population of Trent and Bolzano. After seizing these cities, the Fascists made plans to take Rome. On 24 October 1922, the Fascist Party held its annual congress in Naples, where Mussolini ordered Blackshirts to take control of public buildings and trains and to converge on three points around Rome. The march would be led by four prominent Fascist leaders representing its different factions: Italo Balbo, a Blackshirt leader; General Emilio De Bono; Michele Bianchi, an ex-syndicalist; and Cesare Maria De Vecchi, a monarchist Fascist. Mussolini himself remained in Milan to await the results of the actions. The Fascists managed to seize control of several post offices and trains in northern Italy while the Italian government, led by a left-wing coalition, was internally divided and unable to respond to the Fascist advances. The Italian government had been in a steady state of turmoil, with many governments being created and then being defeated. The Italian government initially took action to prevent the Fascists from entering Rome, but King Victor Emmanuel III of Italy perceived the risk of bloodshed in Rome from attempting to disperse the Fascists to be too high. Some political organizations, such as the conservative Italian Nationalist Association, "assured King Victor Emmanuel that their own Sempre Pronti militia was ready to fight the Blackshirts" if they entered Rome, but their offer was never accepted.
Victor Emmanuel III decided to appoint Mussolini as Prime Minister of Italy and Mussolini arrived in Rome on 30 October to accept the appointment. Fascist propaganda aggrandized this event, known as "March on Rome", as a "seizure" of power due to Fascists' heroic exploits. Upon being appointed Prime Minister of Italy, Mussolini had to form a coalition government because the Fascists did not have control over the Italian parliament. The coalition government included a cabinet led by Mussolini and thirteen other ministers, only three of whom were Fascists, while others included representatives from the army and the navy, two Catholic Popolari members, two democratic liberals, one conservative liberal, one social democrat, one Nationalist member and the philosopher Giovanni Gentile. Mussolini's coalition government initially pursued economically liberal policies under the direction of liberal finance minister Alberto De Stefani from the Center Party, including balancing the budget through deep cuts to the civil service. Initially little drastic change in government policy occurred, and repressive police actions against communists and d'Annunzian rebels were limited. At the same time, Mussolini consolidated his control over the National Fascist Party by creating a governing executive for the party, the Grand Council of Fascism, whose agenda he controlled. In addition, the squadristi blackshirt militia was transformed into the state-run MVSN, led by regular army officers. Militant squadristi were initially highly dissatisfied with Mussolini's government and demanded a "Fascist revolution". In this period, to appease the King of Italy, Mussolini formed a close political alliance between the Italian Fascists and Italy's conservative faction in Parliament, which was led by Luigi Federzoni, a conservative monarchist and nationalist who was a member of the Italian Nationalist Association (ANI). The ANI joined the National Fascist Party in 1923. Because of the merger of the Nationalists with the Fascists, tensions existed between the conservative nationalist and revolutionary syndicalist factions of the movement. The conservative and syndicalist factions of the Fascist movement sought to reconcile their differences, secure unity and promote fascism by taking on the views of each other. Conservative nationalist Fascists promoted fascism as a revolutionary movement to appease the revolutionary syndicalists, while to appease conservative nationalists, the revolutionary syndicalists declared they wanted to secure social stability and ensure economic productivity. This sentiment included most syndicalist Fascists, particularly Edmondo Rossoni, who as secretary-general of the General Confederation of Fascist Syndical Corporations sought "labor's autonomy and class consciousness". The Fascists began their attempt to entrench Fascism in Italy with the Acerbo Law, which guaranteed a plurality of the seats in parliament to any party or coalition list in an election that received 25% or more of the vote. The Acerbo Law was passed in spite of numerous abstentions from the vote. In the 1924 election, the Fascists, along with moderates and conservatives, formed a coalition candidate list, and through considerable Fascist violence and intimidation, the list won with 66% of the vote, allowing it to receive 403 seats, most of which went to the Fascists. In the aftermath of the election, a crisis and political scandal erupted after Socialist Party deputy Giacomo Matteotti was kidnapped and murdered by a Fascist. 
The liberals and the leftist minority in parliament walked out in protest in what became known as the Aventine Secession. On 3 January 1925, Mussolini addressed the Fascist-dominated Italian parliament and declared that he was personally responsible for what happened, but he insisted that he had done nothing wrong and proclaimed himself dictator of Italy, assuming full responsibility for the government and announcing the dismissal of parliament. From 1925 to 1929, Fascism steadily became entrenched in power: opposition deputies were denied access to parliament, censorship was introduced and a December 1925 decree made Mussolini solely responsible to the King. Efforts to increase Fascist influence over Italian society accelerated beginning in 1926, with Fascists taking positions in local administration and 30% of all prefects being Fascist appointees by 1929. In 1929, the Fascist regime gained the political support and blessing of the Roman Catholic Church after the regime signed a concordat with the Church, known as the Lateran Treaty, which gave the papacy recognition as a sovereign state (Vatican City) and financial compensation for the seizure of Church lands by the liberal state in the 19th century. Though Fascist propaganda had begun to speak of the new regime as an all-encompassing "totalitarian" state beginning in 1925, the Fascist Party and regime never gained total control over Italy's institutions. King Victor Emmanuel III remained head of state, the armed forces and the judicial system retained considerable autonomy from the Fascist state, Fascist militias were under military control and initially, the economy had relative autonomy as well.
Between 1922 and 1925, Fascism sought to accommodate the Italian Liberal Party, conservatives, and nationalists under Italy's coalition government, where major alterations to its political agenda were made, such as abandoning its previous populism, republicanism, and anticlericalism, and adopting policies of economic liberalism under Alberto De Stefani, a Center Party member who was Italy's Minister of Finance until dismissed by Mussolini after the imposition of a single-party dictatorship in 1925. The Fascist regime also accepted the Roman Catholic Church and the monarchy as institutions in Italy. To appeal to Italian conservatives, Fascism adopted policies such as promoting family values, including the promotion of policies designed to reduce the number of women in the workforce, limiting women's role to that of a mother. In an effort to expand Italy's population to facilitate Mussolini's future plans to control the Mediterranean region, the Fascists banned literature on birth control and increased penalties for abortion in 1926, declaring both crimes against the state. Though Fascism adopted a number of positions designed to appeal to reactionaries, the Fascists also sought to maintain Fascism's revolutionary character, with Angelo Oliviero Olivetti saying that "Fascism would like to be conservative, but it will [be] by being revolutionary". The Fascists supported revolutionary action and committed to securing law and order to appeal to both conservatives and syndicalists. The Fascist regime began to create a corporatist economic system in 1925 with the creation of the Palazzo Vidioni Pact, in which the Italian employers' association Confindustria and Fascist trade unions agreed to recognize each other as the sole representatives of Italy's employers and employees, excluding non-Fascist trade unions.
The Fascist regime created a Ministry of Corporations that organized the Italian economy into 22 sectoral corporations, banned all independent trade unions, banned workers' strikes and lock-outs, and in 1927 issued the Charter of Labour, which established workers' rights and duties and created labor tribunals to arbitrate employer-employee disputes. In practice, the sectoral corporations exercised little independence and were largely controlled by the regime, while employee organizations were rarely led by employees themselves, but instead by appointed Fascist party members. In the 1920s, Fascist Italy pursued an aggressive foreign policy that included an attack on the Greek island of Corfu, aims to expand Italian territory in the Balkans, plans to wage war against Turkey and Yugoslavia, attempts to bring Yugoslavia into civil war by supporting Croat and Macedonian separatists to legitimize Italian intervention, and making Albania a de facto protectorate of Italy (which was achieved through diplomatic means by 1927). In response to revolt in the Italian colony of Libya, Fascist Italy abandoned the previous liberal-era colonial policy of cooperation with local leaders. Instead, claiming that Italians were a superior race to African races and thereby had the right to colonize the "inferior" Africans, it sought to settle 10 to 15 million Italians in Libya. This resulted in an aggressive military campaign against the Libyans, including mass killings, the use of concentration camps, and the forced starvation of thousands of people. Italian authorities committed ethnic cleansing by forcibly expelling 100,000 Bedouin Cyrenaicans, half the population of Cyrenaica in Libya, from land that was slated to be given to Italian settlers. The March on Rome brought Fascism international attention. One early admirer of the Italian Fascists was Adolf Hitler, who less than a month after the March had begun to model himself and the Nazi Party upon Mussolini and the Fascists. The Nazis, led by Hitler and the German war hero Erich Ludendorff, attempted a "March on Berlin" modeled upon the March on Rome, which resulted in the failed Beer Hall Putsch in Munich in November 1923, where the Nazis briefly captured Bavarian State Commissioner Gustav Ritter von Kahr and announced the creation of a new German government to be led by a triumvirate of von Kahr, Hitler, and Ludendorff. The Beer Hall Putsch was crushed by Bavarian police, and Hitler and other leading Nazis were arrested and detained until 1925. Another early admirer of Italian Fascism was Gyula Gömbös, leader of the Hungarian National Defence Association (known by its acronym MOVE), one of several groups that were known in Hungary as the "right radicals." Gömbös described himself as a "national socialist" and championed radical land reform and "Christian capital" in opposition to "Jewish capital." He also advocated a revanchist foreign policy and in 1923 stated the need for a "march on Budapest". Yugoslavia briefly had a significant fascist movement, the ORJUNA, which supported Yugoslavism, advocated the creation of a corporatist economy, opposed democracy and took part in violent attacks on communists, though it was opposed to the Italian government due to Yugoslav border disputes with Italy. The ORJUNA was dissolved in 1929 when the King of Yugoslavia banned political parties and created a royal dictatorship, though it supported the King's decision.
Amid a political crisis in Spain involving increased strike activity and rising support for anarchism, Spanish army commander Miguel Primo de Rivera staged a successful coup against the Spanish government in 1923 and installed himself as dictator at the head of a conservative military junta that dismantled the established party system of government. Upon achieving power, Primo de Rivera sought to resolve the economic crisis by presenting himself as a compromise arbitrator figure between workers and bosses, and his regime created a corporatist economic system based on the Italian Fascist model. In Lithuania in 1926, Antanas Smetona rose to power and founded a parafascist regime under his Lithuanian Nationalist Union. International surge of fascism and World War II (1929–1945) The events of the Great Depression resulted in an international surge of fascism and the creation of several fascist regimes and regimes that adopted fascist policies. What would become the most prominent example of the new fascist regimes was Nazi Germany, under the leadership of Adolf Hitler. With the rise of Hitler and the Nazis to power in 1933, liberal democracy was dissolved in Germany and the Nazis mobilized the country for war, with expansionist territorial aims against several countries. In the 1930s, the Nazis implemented racial laws that deliberately discriminated against, disenfranchised, and persecuted Jews and other racial minority groups. Hungarian fascist Gyula Gömbös rose to power as Prime Minister of Hungary in 1932 and visited Fascist Italy and Nazi Germany to consolidate good relations with the two regimes. He attempted to entrench his Party of National Unity throughout the country, created a youth organization and a political militia with sixty thousand members, promoted social reforms such as a 48-hour workweek in industry, and pursued irredentist claims on Hungary's neighbors. The fascist Iron Guard movement in Romania soared in political support after 1933, gaining representation in the Romanian government, and an Iron Guard member assassinated Prime Minister Ion Duca. The Iron Guard had little in the way of a concrete program and placed more emphasis on ideas of religious and spiritual revival. During the 6 February 1934 crisis, France faced the greatest domestic political turmoil since the Dreyfus Affair when the fascist Francist Movement and multiple far-right movements rioted en masse in Paris against the French government, resulting in major political violence. A variety of para-fascist governments that borrowed elements from fascism were also formed during the Great Depression, including in Greece, Lithuania, Poland and Yugoslavia. Fascism also expanded its influence outside Europe, especially in East Asia, the Middle East and South America. In China, Wang Jingwei's Kai-Tsu p'ai (Reorganization) faction of the Kuomintang (Nationalist Party of China) supported Nazism in the late 1930s. In Japan, a Nazi movement called the Tōhōkai was formed by Seigō Nakano. The Al-Muthanna Club of Iraq was a pan-Arab movement that supported Nazism and exercised its influence in the Iraqi government through cabinet minister Saib Shawkat, who formed a paramilitary youth movement. Another ultra-nationalist movement that arose in the Arab World during the 1930s was the irredentist Syrian Social Nationalist Party (SSNP) led by Antoun Sa'adeh, which advocated the formation of "Greater Syria".
Inspired by the models of both Italian Fascism and German Nazism, Sa'adeh believed that Syrians were a "distinct and naturally superior race". The SSNP engaged in violent activities to assert control over Syria, organize the country along militaristic lines and then impose its ideological project on the Greater Syrian region. During the Second World War, Sa'adeh developed close ties with officials of Fascist Italy and Nazi Germany. Although the SSNP had managed to become the closest cognate of European fascism in the Arab World, the party failed to make any social impact and was eventually banned for terrorist activities during the 1950s. In South America, several mostly short-lived fascist governments and prominent fascist movements were formed during this period. Argentine President General José Félix Uriburu proposed that Argentina be reorganized along corporatist and fascist lines. Peruvian president Luis Miguel Sánchez Cerro founded the Revolutionary Union in 1931 as the state party for his dictatorship. Later, the Revolutionary Union was taken over by Raúl Ferrero Rebagliati, who sought to mobilize mass support for the group's nationalism in a manner akin to fascism and even started a paramilitary Blackshirts arm as a copy of the Italian group, but the Union lost heavily in the 1936 elections and faded into obscurity. In Paraguay in 1940, Paraguayan President General Higinio Morínigo began his rule as a dictator with the support of pro-fascist military officers, appealed to the masses, exiled opposition leaders and only abandoned his pro-fascist policies after the end of World War II. The Brazilian Integralists, led by Plínio Salgado, claimed as many as 200,000 members, but following coup attempts they faced a crackdown from the Estado Novo government of Getúlio Vargas in 1937. In the 1930s, the National Socialist Movement of Chile gained seats in Chile's parliament and attempted a coup d'état that resulted in the Seguro Obrero massacre of 1938. Fascist Italy and Nazi Germany pursued territorial expansionist and interventionist foreign policy agendas from the 1930s through the 1940s, culminating in World War II. Mussolini supported irredentist Italian claims over neighboring territories, the establishment of Italian domination of the Mediterranean Sea, the securing of Italian access to the Atlantic Ocean, and the creation of Italian spazio vitale ("vital space") in the Mediterranean and Red Sea regions. Hitler supported irredentist German claims over all territories inhabited by ethnic Germans, along with the creation of German Lebensraum ("living space") in Eastern Europe, including territories held by the Soviet Union, that would be colonized by Germans. From 1935 to 1939, Germany and Italy escalated their demands for territorial gains and greater influence in world affairs. Italy invaded Ethiopia in 1935, resulting in condemnation by the League of Nations and widespread diplomatic isolation. In 1936, Germany remilitarized the industrial Rhineland, a region that had been ordered demilitarized by the Treaty of Versailles. In 1938, Germany annexed Austria and the Sudetenland region of Czechoslovakia. The next year, the remainder of Czechoslovakia was partitioned, with Germany occupying the Czech lands and Slovakia becoming a German client state. At the same time, from 1938 to 1939, Italy was demanding territorial and colonial concessions from France and Britain in the Mediterranean. In 1939, Germany prepared for war with Poland, but also attempted to gain territorial concessions from Poland through diplomatic means.
Germany demanded that Poland accept the annexation of the Free City of Danzig to Germany and authorize the construction of automobile highways from Germany through the Polish Corridor into Danzig and East Prussia, promising a twenty-five-year non-aggression pact in exchange. The Polish government did not trust Hitler's promises and refused to accept German demands. Following a strategic alliance between Germany and the Soviet Union in August 1939, the two powers invaded Poland in September of that year. In response, the United Kingdom, France, and their allies declared war against Germany, resulting in the outbreak of World War II. Germany and the Soviet Union partitioned Poland between them in late 1939, followed by the successful German offensive in Scandinavia and continental Western Europe in 1940. On 10 June 1940, Mussolini led Italy into World War II on the side of the Axis. Mussolini was aware that Italy did not have the military capacity to carry out a long war with France or Britain and waited until France was on the verge of imminent collapse before declaring war, on the assumption that the war would be short-lived. Mussolini believed that Italy could gain some territorial concessions from France and then concentrate its forces on a major offensive in Egypt. Plans by Germany to invade the United Kingdom in 1940 failed after Germany lost the aerial warfare campaign in the Battle of Britain. The war became prolonged contrary to Mussolini's plans, resulting in Italy losing battles on multiple fronts and requiring German assistance. In 1941, the Axis campaign spread to the Soviet Union after Hitler launched Operation Barbarossa. Axis forces at the height of their power controlled almost all of continental Europe, including the occupation of large portions of the Soviet Union. By 1942, Fascist Italy had occupied and annexed Dalmatia from Yugoslavia and Corsica and Nice from France, and controlled other territories. During World War II, the Axis Powers in Europe, led by Nazi Germany, participated in the extermination of millions of Jews and others in the genocide known as the Holocaust. After 1942, Axis forces began to lose their early upper hand. By 1943, after Italy had suffered multiple military failures, become completely reliant on and subordinate to Germany, and faced an Allied invasion, Mussolini was removed as head of government and arrested on the order of King Victor Emmanuel III. The king proceeded to dismantle the Fascist state and joined the Allies. Mussolini was rescued from captivity by German forces and led the German client state, the Italian Social Republic, from 1943 to 1945. Nazi Germany faced multiple losses and steady Soviet and Western Allied offensives from 1943 to 1945. On 28 April 1945, Mussolini was captured and executed by Italian communist partisans. On 30 April 1945, Hitler committed suicide during the Battle of Berlin between collapsing German forces and Soviet armed forces. Shortly afterward, Germany surrendered; the Nazi regime was dismantled, and key Nazi members were arrested to stand trial for crimes against humanity, including those of the Holocaust. Yugoslavia, Greece and Ethiopia requested the extradition of 1,200 Italian war criminals, but these people never saw anything like the Nuremberg trials, since the British government, with the beginning of the Cold War, saw in Pietro Badoglio a guarantee of an anti-communist post-war Italy.
This repression of memory led to historical revisionism in Italy, and in 2003 the Italian media published Silvio Berlusconi's statement that Benito Mussolini only "used to send people on vacation", denying the existence of Italian concentration camps such as the Rab concentration camp. Fascism, neofascism and postfascism after World War II (1945–2008) In the aftermath of World War II, the victory of the Allies over the Axis powers led to the collapse of multiple fascist regimes in Europe. The Nuremberg Trials convicted multiple Nazi leaders of crimes against humanity, including the Holocaust. However, there remained multiple ideologies and governments that were ideologically related to fascism. Francisco Franco's quasi-fascist Falangist one-party state in Spain was officially neutral during World War II and survived the collapse of the Axis Powers. Franco's rise to power had been directly assisted by the militaries of Fascist Italy and Nazi Germany during the Spanish Civil War, and he later sent volunteers to fight on the side of Nazi Germany against the Soviet Union during World War II. After World War II and a period of international isolation, Franco's regime normalized relations with Western powers during the early years of the Cold War until Franco's death in 1975 and the transformation of Spain into a liberal democracy. Peronism, which is associated with the regime of Juan Perón in Argentina from 1946 to 1955 and 1973 to 1974, was strongly influenced by fascism. Prior to rising to power, from 1939 to 1941 Perón had developed a deep admiration of Italian Fascism and modelled his economic policies on those of the Italian Fascist regime. The South African government of Afrikaner nationalist and white supremacist Daniel François Malan was closely associated with pro-fascist and pro-Nazi politics. In 1937, Malan's Purified National Party, the South African Fascists and the Blackshirts agreed to form a coalition for the South African election. Malan had fiercely opposed South Africa's participation on the Allied side in World War II. Malan's government founded apartheid, the system of racial segregation of whites and non-whites in South Africa. The most extreme Afrikaner fascist movement is the neo-Nazi white supremacist Afrikaner Resistance Movement (AWB), which in 1991 was recorded as having 50,000 supporters and rising support. The AWB grew in support in response to efforts to dismantle apartheid in the 1980s and early 1990s, and its paramilitary wing, the Storm Falcons, threatened violence against people it considered "trouble makers". Another ideology strongly influenced by fascism is Ba'athism. Ba'athism is a revolutionary Arab nationalist ideology that seeks the unification of all claimed Arab lands into a single Arab state. Zaki al-Arsuzi, one of the principal founders of Ba'athism, was strongly influenced by and supportive of Fascism and Nazism. Several close associates of Ba'athism's key ideologist Michel Aflaq have admitted that Aflaq had been directly inspired by certain fascist and Nazi theorists. Ba'athist regimes in power in Iraq and Syria have held strong similarities to fascism: both have been radical, authoritarian, nationalist one-party states. Due to Ba'athism's anti-Western stances, it preferred the Soviet Union during the Cold War and admired and adopted certain Soviet organizational structures for its governments, but Ba'athist regimes have nonetheless persecuted communists. Like fascist regimes, Ba'athism became heavily militarized in power.
Ba'athist movements governed Iraq in 1963 and again from 1968 to 2003, and have governed Syria from 1963 to the present. Ba'athist heads of state such as Syrian President Hafez al-Assad and Iraqi President Saddam Hussein created personality cults around themselves, portraying themselves as the nationalist saviours of the Arab world. Ba'athist Iraq under Saddam Hussein pursued the ethnic cleansing or liquidation of minorities, pursued expansionist wars against Iran and Kuwait and gradually replaced pan-Arabism with an Iraqi nationalism that emphasized Iraq's connection to the glories of ancient Mesopotamian empires, including Babylonia. Historian of fascism Stanley Payne has said of Saddam Hussein's regime: "There will probably never again be a reproduction of the Third Reich, but Saddam Hussein has come closer than any other dictator since 1945". Ba'athist Syria under the Assad dynasty granted asylum, protection and funding to the internationally wanted Nazi war criminal Alois Brunner for decades. An SS officer under the command of Adolf Eichmann, Brunner directly oversaw the abduction and deportation of hundreds of thousands of Jews to Nazi extermination camps during the Holocaust. For decades, Brunner provided extensive training to the Syrian Mukhabarat in Nazi torture practices and re-organized the Ba'athist secret police on the model of the SS and Gestapo. Extreme anti-semitic sentiments have been normalized in Syrian society through the pervasive Ba'athist propaganda system. The Assad regime was also the only government in the world to grant asylum to Abu Daoud, the mastermind of the 1972 Munich Olympics massacre. In his notorious book Matzo of Zion, Syrian Minister of Defense Mustafa Tlass promoted the blood libel against Jews and accused them of harbouring "black hatred against all humankind and religions". Anti-semitic canards and conspiracy theories have also been a regular feature of state television programming during the rule of Bashar al-Assad. Since the start of the Syrian civil war in 2011, some neo-Nazi and neo-fascist groups have supported the Assad regime, including CasaPound, Golden Dawn, Black Lily, the British National Party, National Rebirth of Poland, and Forza Nuova. The affinity shown by some neo-Nazis to the leftist-oriented Syrian Ba'ath party is commonly explained as part of the former's far-right worldview, rooted in Islamophobia, admiration for totalitarian states and the perception that the Ba'athist government is against Jews. British-Syrian activist Leila al-Shamy states that this could also be due to doctrinal similarities: "the ideological roots of Baathism, which definitely incorporates elements of fascism... took inspiration from European fascism, particularly how to build a totalitarian state." In the 1990s, Stanley Payne claimed that the Hindu nationalist movement Rashtriya Swayamsevak Sangh (RSS) bears strong resemblances to fascism, including its use of paramilitaries and its irredentist claims calling for the creation of a Greater India. Cyprian Blamires in World Fascism: A Historical Encyclopedia describes the ideology of the RSS as "fascism with Sanskrit characters" – a unique Indian variant of fascism. Blamires notes that there is evidence that the RSS held direct contact with Italy's Fascist regime and admired European fascism, a view with some support from A. James Gregor. However, these views have met wide criticism, especially from academics specializing in Indian politics.
Paul Brass, an expert on Hindu-Muslim violence, notes that there are many problems with accepting this point of view and identifies four reasons why it is difficult to define the Sangh as fascist. Firstly, most scholars of the field do not subscribe to the view that the RSS is fascist, notably among them Christophe Jaffrelot, A. James Gregor and Chetan Bhatt. The other reasons include an absence of charismatic leadership, a desire on the part of the RSS to differentiate itself from European fascism, major cultural differences between the RSS and European fascists and factionalism within the Sangh Parivar. Stanley Payne claims that it also has substantial differences with fascism, such as its emphasis on traditional religion as the basis of identity. Contemporary fascism (2008–present) Since the Great Recession of 2008, fascism has seen an international surge in popularity, alongside closely associated phenomena like xenophobia, antisemitism, authoritarianism and euroskepticism. The alt-right, a loosely connected coalition of individuals and organizations ranging from neoreactionaries to white nationalists which advocates a wide range of far-right ideas, is often included under the umbrella term neo-fascism, because alt-right individuals and organizations advocate a radical form of authoritarian ultranationalism. Alt-right neo-fascists often campaign in indirect ways linked to conspiracy theories like "white genocide", Pizzagate and QAnon, and seek to question the legitimacy of elections. Groups which are identified as neo-fascist in the United States generally include neo-Nazi organizations and movements such as the Proud Boys, the National Alliance, and the American Nazi Party. The Institute for Historical Review publishes negationist articles of an anti-semitic nature. Since 2016, and increasingly over the course of the presidency of Donald Trump, scholars have debated whether Trumpism should be considered a form of fascism. Fascism's relationship with other political and economic ideologies Mussolini saw fascism as opposing socialism and other left-wing ideologies, writing in The Doctrine of Fascism: "If it is admitted that the nineteenth century has been the century of Socialism, Liberalism and Democracy, it does not follow that the twentieth must also be the century of Liberalism, Socialism and Democracy. Political doctrines pass; peoples remain. It is to be expected that this century may be that of authority, a century of the 'Right,' a Fascist century." Capitalism Fascism had a complex relationship with capitalism, both supporting and opposing different aspects of it at different times and in different countries. In general, fascists held an instrumental view of capitalism, regarding it as a tool that may be useful or not, depending on circumstances. Fascists aimed to promote what they considered the national interests of their countries; they supported the right to own private property and the profit motive because they believed that they were beneficial to the economic development of a nation, but they commonly sought to eliminate the autonomy of large-scale business interests from the state. There were both pro-capitalist and anti-capitalist elements in fascist thought. Fascist opposition to capitalism was based on the perceived decadence, hedonism, and cosmopolitanism of the wealthy, in contrast to the idealized discipline, patriotism and moral virtue of the members of the middle classes.
Fascist support for capitalism was based on the idea that economic competition was good for the nation, as well as social Darwinist beliefs that the economic success of the wealthy proved their superiority and the idea that interfering with natural selection in the economy would burden the nation by preserving weak individuals. These two ways of thinking about capitalism – viewing it as a positive force which promotes economic efficiency and is necessary for the prosperity of the nation but also viewing it as a negative force which promotes decadence and disloyalty to the nation – remained in uneasy coexistence within most fascist movements. The economic policies of fascist governments, meanwhile, were generally not based on ideological commitments one way or the other, instead being dictated by pragmatic concerns with building a strong national economy, promoting autarky, and the need to prepare for and to wage war. In Italian Fascism The earliest version of a fascist movement, which consisted of the small political groups led by Benito Mussolini in the Kingdom of Italy from 1914 to 1922 (Fascio d'Azione Rivoluzionaria and Fasci Italiani di Combattimento, respectively), formed a radical pro-war interventionist movement which focused on Italian territorial expansion and aimed to unite people from across the political spectrum in service to this goal. As such, this movement did not take a clear stance either for or against capitalism, as that would have divided its supporters. Many of its leaders, including Mussolini himself, had come from the anti-capitalist revolutionary syndicalist tradition, and were known for their anti-capitalist rhetoric. However, a significant part of the movement's funding came from pro-war business interests and major landowners. Mussolini at this stage tried to maintain a balance, by still claiming to be a social revolutionary while also cultivating a "positive attitude" towards capitalism and capitalists. The small fascist movement that was led by Mussolini in Milan in 1919 bore almost no resemblance with the Italian Fascism of ten years later, as it put forward an ambitious anti-capitalist program calling for redistributing land to the peasants, a progressive tax on capital, greater inheritance taxes and the confiscation of excessive war profits, while also proclaiming its opposition to "any kind of dictatorship or arbitrary power" and demanding an independent judiciary, universal suffrage, and complete freedom of speech. Yet Mussolini at the same time promised to eliminate state intervention in business and to transfer large segments of the economy from public to private control, and the fascists met in a hall provided by Milanese businessmen. These contradictions were regarded by Mussolini as a virtue of the fascist movement, which, at this early stage, intended to appeal to everyone. Starting in 1921, Italian Fascism shifted from presenting itself as a broad-based expansionist movement, to claiming to represent the extreme right of Italian politics. This was accompanied by a shift in its attitude towards capitalism. Whereas in the beginning it had accommodated both anti-capitalist and pro-capitalist stances, it now took on a strongly pro-free-enterprise policy. After being elected to the Italian parliament for the first time, the Fascists took a stand against economic collectivization and nationalization, and advocated for the privatization of postal and railway services. 
Mussolini appealed to conservative liberals to support a future fascist seizure of power by arguing that "capitalism would flourish best if Italy discarded democracy and accepted dictatorship as necessary in order to crush socialism and make government effective." He also promised that the fascists would reduce taxes and balance the budget, repudiated his socialist past and affirmed his faith in economic liberalism. In 1922, following the March on Rome, the National Fascist Party came to power and Mussolini became prime minister of Italy. From that time until the advent of the Great Depression in 1929, the Italian Fascists pursued a generally free-market and pro-capitalist economic policy, in collaboration with traditional Italian business elites. Near the beginning of his tenure as prime minister, in 1923, Mussolini declared that "the [Fascist] government will accord full freedom to private enterprise and will abandon all intervention in private economy." Mussolini's government privatized former government monopolies (such as the telephone system), repealed previous legislation that had been introduced by the Socialists (such as the inheritance tax), and balanced the budget. Alfredo Rocco, the Fascist Minister of Justice at the time, wrote in similar terms in 1926. Mussolini attracted the wealthy in the 1920s by praising free enterprise, by talking about reducing the bureaucracy and abolishing unemployment relief, and by supporting increased inequality in society. He advocated economic liberalization, asserted that the state should keep out of the economy and even said that government intervention in general was "absolutely ruinous to the development of the economy." At the same time, however, he also tried to maintain some of fascism's early appeal to people of all classes by insisting that he was not against the workers, and sometimes by outright contradicting himself and saying different things to different audiences. Many of the wealthy Italian industrialists and landlords backed Mussolini because he provided stability (especially compared to the Giolitti era), and because under Mussolini's government there were "few strikes, plenty of tax concessions for the well-to-do, an end to rent controls and generally high profits for business." The Italian Fascist outlook towards capitalism changed after 1929, with the onset of the Great Depression which dealt a heavy blow to the Italian economy. Prices fell, production slowed, and unemployment more than tripled in the first four years of the Depression. In response, the Fascist government abandoned economic liberalism and turned to state intervention in the economy. Mussolini developed a theory which held that capitalism had degenerated over time, and that the capitalism of his era was facing a crisis because it had departed too far from its original roots. According to Mussolini, the original form was heroic capitalism or dynamic capitalism (1830–1870), which gave way to static capitalism (1870–1914), which then transformed into decadent capitalism or "supercapitalism", starting in 1914. Mussolini denounced this supercapitalism as a failure due to its alleged decadence, support for unlimited consumerism and intention to create the "standardization of humankind". He claimed that supercapitalism had resulted in the collapse of the capitalist system in the Great Depression, but that the industrial developments of earlier types of capitalism were valuable and that private property should be supported as long as it was productive.
Fascists also argued that, without intervention, supercapitalism "would ultimately decay and open the way for a Marxist revolution as labour-capital relations broke down". They presented their new economic program as a way to avoid this result. The idea of corporatism, which had already been part of Fascist rhetoric for some time, rose to prominence as a solution that would preserve private enterprise and property while allowing the state to intervene in the economy when private enterprise failed. Corporatism was promoted as reconciling the interests of capital and labour. Mussolini argued that this fascist corporatism would preserve those elements of capitalism that were deemed beneficial, such as private enterprise, and combine them with state supervision. At this time he also said that he rejected the typical capitalist elements of economic individualism and laissez-faire. Mussolini claimed that in supercapitalism "a capitalist enterprise, when difficulties arise, throws itself like a dead weight into the state's arms. It is then that state intervention begins and becomes more necessary. It is then that those who once ignored the state now seek it out anxiously". For Mussolini, the inability of businesses to operate properly when facing economic difficulties proved that state intervention in the economy was necessary to stabilize it. Statements from Italian Fascist leaders in the 1930s tended to be critical of economic liberalism and laissez-faire, while promoting corporatism as the basis for a new economic model. Mussolini said in an interview in October 1933 that he "want[ed] to establish the corporative regime," and he reaffirmed this aim in a speech on 14 November 1933. A year later, in 1934, Italian Agriculture Minister Giacomo Acerbo claimed that Fascist corporatism was the best way to defend private property in the context of the Great Depression. In the late 1930s, Fascist Italy tried to achieve autarky (national economic self-sufficiency), and for this purpose the government promoted manufacturing cartels and introduced significant tariff barriers, currency restrictions and regulations of the economy to attempt to balance payments with Italy's trade partners. The attempt to achieve effective economic autonomy was not successful, but minimizing international trade remained an official goal of Italian Fascism. In German Nazism German Nazism, like Italian Fascism, also incorporated both pro-capitalist and anti-capitalist views. The main difference was that Nazism interpreted everything through a racial lens. Thus, Nazi views on capitalism were shaped by the question of which race the capitalists belonged to. Jewish capitalists (especially bankers) were considered to be mortal enemies of Germany and part of a global conspiracy that also included Jewish communists. On the other hand, ethnic German capitalists were regarded as potential allies by the Nazis. From the beginning of the Nazi movement, and especially from the late 1920s onward, the Nazi Party took the stance that it was not opposed to private property or capitalism as such, but only to its excesses and the domination of the German economy by "foreign" capitalists (including German Jews).
There were a range of economic views within the early Nazi Party, ranging from the Strasserite wing which championed extensive state intervention, to the Völkisch conservatives who promoted a program of conservative corporatism, to the economic right-wing within Nazism, who hoped to avoid corporatism because it was viewed as too restrictive for big business. In the end, the approach that prevailed after the Nazis came to power was a pragmatic one, in which there would be no new economic system, but rather a continuation of "the long German tradition of authoritarian statist economics, which dated well back into the nineteenth century." Like Fascist Italy, Nazi Germany similarly pursued an economic agenda with the aims of autarky and rearmament and imposed protectionist policies, including forcing the German steel industry to use lower-quality German iron ore rather than superior-quality imported iron. The Nazis were economic nationalists who "favoured protective tariffs, foreign debt reduction, and import substitution to remove what they regarded as debilitating dependence on the world economy." The purpose of the economy, according to the Nazi worldview, was to "provide the material springboard for military conquest." As such, the Nazis aimed to place the focus of the German economy on a drive for empire and conquest, and they found and promoted businessmen who were willing to cooperate with their goals. They opposed free-market economics and instead promoted a state-driven economy that would guarantee high profits to friendly private companies in exchange for their support, which was a model adopted by many other political movements and governments in the 1930s, including the governments of Britain and France. Private capitalism was not directly challenged, but it was subordinated to the military and foreign policy goals of the state, in a way that reduced the decision-making power of industrial managers but did not interfere with the pursuit of private profit. Leading German business interests supported the goals of the Nazi government and its war effort in exchange for advantageous contracts, subsidies, and the suppression of the trade union movement. Avraham Barkai concludes that, because "the individual firm still operated according to the principle of maximum profit," the Nazi German economy was therefore "a capitalist economy in which capitalists, like all other citizens, were not free even though they enjoyed a privileged status, had a limited measure of freedom in their activities, and were able to accumulate huge profits as long as they accepted the primacy of politics." In other fascist movements Other fascist movements mirrored the general outlook of the Italian Fascists and German Nazis. The Spanish Falange called for respect for private property and was founded with support from Spanish landowners and industrialists. However, the Falange distinguished between "private property", which it supported, and "capitalism", which it opposed. The Falangist program of 1937 recognized "private property as a legitimate means for achieving individual, family and social goals," but Falangist leader José Antonio Primo de Rivera said in 1935: "We reject the capitalist system, which disregards the needs of the people, dehumanizes private property and transforms the workers into shapeless masses prone to misery and despair." 
After his death and the rise of Francisco Franco, the rhetoric changed, and Falangist leader Raimundo Fernández-Cuesta declared the movement's ideology to be compatible with capitalism. In Hungary, the Arrow Cross Party held anti-feudal, anti-capitalist and anti-socialist beliefs, supporting land reform and militarism and drawing most of its support from the ranks of the army. The Romanian Iron Guard espoused anti-capitalist, anti-banking and anti-bourgeois rhetoric, combined with anti-communism and a religious form of anti-Semitism. The Iron Guard saw both capitalism and communism as being Jewish creations that served to divide the nation, and accused Jews of being "the enemies of the Christian nation." Conservatism In principle, there were significant differences between conservatives and fascists. However, both conservatives and fascists in Europe have held similar positions on many issues, including anti-communism and support of national pride. Conservatives and fascists both reject the liberal and Marxist emphasis on linear progressive evolution in history. Fascism's emphasis on order, discipline, hierarchy, military virtues and preservation of private property appealed to conservatives. The fascist promotion of "healthy", "uncontaminated" elements of national tradition such as chivalric culture and glorifying a nation's historical golden age has similarities with conservative aims. Fascists also made pragmatic tactical alliances with traditional conservative forces to achieve and maintain power. Even at the height of their influence and popularity, fascist movements were never able to seize power entirely by themselves, and relied on alliances with conservative parties to come to power. However, while conservatives made alliances with fascists in countries where the conservatives felt themselves under threat and therefore in need of such an alliance, this did not happen in places where the conservatives were securely in power. Several authoritarian conservative regimes across Europe suppressed fascist parties in the 1930s and 40s. Many of fascism's recruits were disaffected right-wing conservatives who were dissatisfied with the traditional right's inability to achieve national unity and its inability to respond to socialism, feminism, economic crisis and international difficulties. With traditional conservative parties in Europe severely weakened in the aftermath of World War I, there was a political vacuum on the right which fascism filled. Fascists gathered support from landlords, business owners, army officers, and other conservative individuals and groups, by successfully presenting themselves as the last line of defense against land reform, social welfare measures, demilitarization, higher wages, and the socialization of the means of production. According to John Weiss, "Any study of fascism which centers too narrowly on the fascists and Nazis alone may miss the true significance of right-wing extremism." However, unlike conservatism, fascism specifically presents itself as a modern ideology that is willing to break free from the moral and political constraints of traditional society. The conservative authoritarian right is distinguished from fascism in that such conservatives tended to use traditional religion as the basis for their philosophical views, while fascists based their views on vitalism, nonrationalism, or secular neo-idealism. Fascists often drew upon religious imagery, but used it as a symbol for the nation and replaced spirituality with secular nationalism. 
Even in the most religious of the fascist movements, the Romanian Iron Guard, "Christ was stripped of genuine otherworldly mystery and was reduced to a metaphor for national redemption." Fascists claimed to support the traditional religions of their countries, but did not regard religion as a source of important moral principles, seeing it only as an aspect of national culture and a source of national identity and pride. Furthermore, while conservatives in interwar Europe generally wished to return to the pre-1914 status quo, fascists did not. Fascism combined an idealization of the past with an enthusiasm for modern technology. Nazi Germany "celebrated Aryan values and the glories of the Germanic knights while also taking pride in its newly created motorway system." Fascists looked to the spirit of the past to inspire a new era of national greatness and set out to "forge a mythic link between the present generation and a glorious stage in the past", but they did not seek to directly copy or restore past societies. Another difference with traditional conservatism lies in the fact that fascism had radical aspirations for reshaping society. Arthur M. Schlesinger Jr. wrote that "Fascists were not conservative in any very meaningful sense... The Fascists, in a meaningful sense, were revolutionaries". Fascists sought to destroy existing elites through revolutionary action to replace them with a new elite selected on the principle of the survival of the fittest, and thus they "rejected existing aristocracies in favor of their own new aristocracy." Yet at the same time, some fascist leaders claimed to be counter-revolutionary, and fascism saw itself as being opposed to all previous revolutions from the French Revolution onward, blaming them for liberalism, socialism, and decadence. In his book Fascism (1997), Mark Neocleous sums up these paradoxical tendencies by referring to fascism as "a prime example of reactionary modernism" as well as "the culmination of the conservative revolutionary tradition." Liberalism Fascism is strongly opposed to the individualism found in classical liberalism. Fascists accuse liberalism of de-spiritualizing human beings and transforming them into materialistic beings whose highest ideal is moneymaking. In particular, fascism opposes liberalism for its materialism, rationalism, individualism and utilitarianism. Fascists believe that the liberal emphasis on individual freedom produces national divisiveness. Mussolini criticized classical liberalism for its individualistic nature, writing: "Against individualism, the Fascist conception is for the State; ... It is opposed to classical Liberalism ... Liberalism denied the State in the interests of the particular individual; Fascism reaffirms the State as the true reality of the individual." However, Fascists and Nazis support a type of hierarchical individualism in the form of Social Darwinism because they believe it promotes "superior individuals" and weeds out "the weak". They also accuse both Marxism and democracy, with their emphasis on equality, of destroying individuality in favor of the "dead weight" of the masses. One issue where Fascism is in accord with liberalism is in its support of private property rights and the existence of a market economy. Although Fascism sought to "destroy the existing political order", it had tentatively adopted the economic elements of liberalism, but "completely denied its philosophical principles and the intellectual and moral heritage of modernity". 
Fascism espoused antimaterialism, which meant that it rejected the "rationalistic, individualistic and utilitarian heritage" that defined the liberal-centric Age of Enlightenment. Nevertheless, between the two pillars of fascist economic policy – national syndicalism and productionism – it was the latter that was given more importance, so the goal of creating a less materialist society was generally not accomplished. Fascists saw contemporary politics as a life or death struggle of their nations against Marxism, and they believed that liberalism weakened their nations in this struggle and left them defenseless. While the socialist left was seen by the fascists as their main enemy, liberals were seen as the enemy's accomplices, "incompetent guardians of the nation against the class warfare waged by the socialists." Social welfare and public works Fascists opposed social welfare for those they regarded as weak and decadent, but supported state assistance for those they regarded as strong and pure. As such, fascist movements criticized the welfare policies of the democratic governments they opposed, but eventually adopted welfare policies of their own to gain popular support. The Nazis condemned indiscriminate social welfare and charity, whether run by the state or by private entities, because they saw it as "supporting many people who were racially inferior." After coming to power, they adopted a type of selective welfare system that would only help those they deemed to be biologically and racially valuable. Italian Fascists had changing attitudes towards welfare. They took a stance against unemployment benefits upon coming to power in 1922, but later argued that improving the well-being of the labor force could serve the national interest by increasing productive potential, and adopted welfare measures on this basis. From 1925 to 1939, the Italian Fascist government "embarked upon an elaborate program" of social welfare provision, supplemented by private charity from wealthy industrialists "in the spirit of Fascist class collaboration." This program included food supplementary assistance, infant care, maternity assistance, family allowances per child to encourage higher birth rates, paid vacations, public housing, and insurance for unemployment, occupational diseases, old age and disability. Many of these were continuations of programs already begun under the parliamentary system that fascism had replaced, and they were similar to programs instituted by democratic governments across Europe and North America in the same time period. Social welfare under democratic governments was sometimes more generous, but given that Italy was a poorer country, its efforts were more ambitious, and its legislation "compared favorably with the more advanced European nations and in some respects was more progressive." Out of a "determination to make Italy the powerful, modern state of his imagination," Mussolini also began a broad campaign of public works after 1925, such that "bridges, canals, and roads were built, hospitals and schools, railway stations and orphanages; swamps were drained and land reclaimed, forests were planted and universities were endowed". The Mussolini administration "devoted 400 million lire of public monies" for school construction between 1922 and 1942 (an average of 20 million lire per year); for comparison, a total of only 60 million lire had been spent on school construction between 1862 and 1922 (an average of 1 million lire per year). 
Extensive archaeological works were also financed, with the intention of highlighting the legacy of the Roman Empire and clearing ancient monuments of "everything that has grown up round them during the centuries of decadence." In Germany, the Nazi Party condemned both the public welfare system of the Weimar Republic and private charity and philanthropy as being "evils that had to be eliminated if the German race was to be strengthened and its weakest elements weeded out in the process of natural selection." Once in power, the Nazis drew sharp distinctions between those undeserving and those deserving of assistance, and strove to direct all public and private aid towards the latter. They argued that this approach represented "racial self-help" and not indiscriminate charity or universal social welfare. An organization called National Socialist People's Welfare (Nationalsozialistische Volkswohlfahrt, NSV) was given the task of taking over the functions of social welfare institutions and "coordinating" the private charities, which had previously been run mainly by the churches and by the labour movement. Hitler instructed NSV chairman Erich Hilgenfeldt to "see to the disbanding of all private welfare institutions," in an effort to direct who was to receive social benefits. Welfare benefits were abruptly withdrawn from Jews, Communists, many Social Democrats, Jehovah's Witnesses, and others that were considered enemies of the Nazi regime, at first without any legal justification. The NSV officially defined its mandate very broadly. For instance, one of the divisions of the NSV, the Office of Institutional and Special Welfare, was responsible "for travellers' aid at railway stations; relief for ex-convicts; 'support' for re-migrants from abroad; assistance for the physically disabled, hard-of-hearing, deaf, mute, and blind; relief for the elderly, homeless and alcoholics; and the fight against illicit drugs and epidemics". But the NSV also explicitly stated that all such benefits would only be available to "racially superior" persons. NSV administrators were able to mount an effort towards the "cleansing of their cities of 'asocials'," who were deemed unworthy of receiving assistance for various reasons. The NSV limited its assistance to those who were "racially sound, capable of and willing to work, politically reliable, and willing and able to reproduce," and excluded non-Aryans, the "work-shy", "asocials" and the "hereditarily ill." The agency successfully "projected a powerful image of caring and support" for "those who were judged to have got into difficulties through no fault of their own," as over 17 million Germans had obtained assistance from the NSV by 1939. However, the organization also resorted to intrusive questioning and monitoring to judge who was worthy of support, and for this reason it was "feared and disliked among society's poorest." Socialism and communism Fascism is historically strongly opposed to socialism and communism, due to their support of class revolution as well as "decadent" values, including internationalism, egalitarianism, horizontal collectivism, materialism and cosmopolitanism. Fascists have thus commonly campaigned with anti-communist agendas. Fascists saw themselves as building a new aristocracy, a "warrior race or nation", based on purity of blood, heroism and virility. 
They strongly opposed ideas of universal human equality and advocated hierarchy in its place, adhering to "the Aristotelian conviction, amplified by the modern elite theorists, that the human race is divided by nature into sheep and shepherds." Fascists believed in the survival of the fittest, and argued that society should be led by an elite of "the fittest, the strongest, the most heroic, the most productive, and, even more than that, those most fervently possessed with the national idea." Marxism and fascism oppose each other primarily because Marxism "called on the workers of the world to unite across national borders in a global battle against their oppressors, treating nation-states and national pride as tools in the arsenal of bourgeois propaganda", while fascism, on the contrary, exalted the interests of the nation or race as the highest good, and rejected all ideas of universal human interests standing above the nation or race. Within the nation, Marxism calls for class struggle by the working class against the ruling class, while fascism calls for collaboration between the classes to achieve national rejuvenation. Fascism proposes a type of society in which different classes continue to exist, where the rich and the poor both serve the national interest and do not oppose each other. Following the Bolshevik revolution of 1917 and the creation of the Soviet Union, fear of and opposition to communism became a major aspect of European politics in the 1920s and 1930s. Fascists were able to take advantage of this and presented themselves as the political force most capable of defeating communism. This was a major factor in enabling fascists to make alliances with the old establishment and to come to power in Italy and Germany, in spite of fascism's own radical agenda, because of the shared anti-Marxism of fascists and conservatives. The Nazis in particular came to power "on the back of a powerfully anticommunist program and in an atmosphere of widespread fear of a Bolshevik revolution at home," and their first concentration camps in 1933 were meant for holding socialist and communist political prisoners. Both Fascist Italy and Nazi Germany also suppressed independent working-class organizations. The Bolshevik revolutionary Leon Trotsky formulated a theory, based on a dialectical interpretation of events, to explain the rise of fascism, analyzing the development of Italian fascism and the early emergence of Nazism in Germany from 1930 to 1933. He was an early observer of the rise of Nazism during his final years in exile, and he advocated the tactic of a united front to oppose fascism. Trotsky was also a strong critic of the shifting Comintern policy position under Stalin which directed German Communists to treat social democrats as "social fascists". Historian Bertrand Patenaude believed that the Comintern policy following the "Great Break" facilitated the rise of Hitler's party. Fascism regarded mainstream socialism as a bitter enemy. In opposing the latter's internationalist aspect, it sometimes defined itself as a new, alternative, nationalist form of socialism. Hitler at times attempted to redefine the word socialism, such as saying: "Socialism! That is an unfortunate word altogether... What does socialism really mean? If people have something to eat and their pleasures, then they have their socialism". In 1930, Hitler said: "Our adopted term 'Socialist' has nothing to do with Marxist Socialism. Marxism is anti-property; true Socialism is not".
The name that Hitler later wished he had used to describe his political party was "social revolutionary". Mainstream socialists have typically rejected and opposed fascism in turn. Many communists regarded fascism as a tool of the ruling class to destroy the working class, describing it as "the open but indirect dictatorship of capital." Nikita Khrushchev sardonically remarked: "In modern times the word Socialism has become very fashionable, and it has also been used very loosely. Even Hitler used to babble about Socialism, and he worked the word into the name of his Nazi [National Socialist] party. The whole world knows what sort of Socialism Hitler had in mind". However, some communist writers, such as Antonio Gramsci, Palmiro Togliatti and Otto Bauer, recognised the agency and genuine beliefs of fascists, instead regarding fascism as a genuine mass movement that arose as a consequence of the specific socio-economic conditions of the societies in which it appeared. Despite the mutual antagonism that would later develop between the two, the attitude of communists towards early fascism was more ambivalent than it might appear from the writings of individual communist theorists. In the early days, Fascism was sometimes perceived as less of a mortal rival to revolutionary Marxism than as a heresy from it. Mussolini's government was one of the first in Western Europe to diplomatically recognise the USSR, doing so in 1924. On 20 June 1923, Karl Radek gave a speech before the Comintern in which he proposed a common front with the Nazis in Germany. However, the two radicalisms were mutually exclusive and they later became profound enemies. While fascism is opposed to Bolshevism, both Bolshevism and fascism promote the one-party state and the use of political party militias. Fascists and communists also agree on the need for violent revolution to forge a new era, and they hold common positions in their opposition to liberalism, capitalism, individualism and parliamentarism. Fascists and Soviet communists both created totalitarian systems after coming to power and both used violence and terror when it was advantageous to do so. However, unlike communists, fascists were more supportive of capitalism and defended economic elites. Fascism denounces democratic socialism as a failure. Fascists see themselves as supporting a moral and spiritual renewal based on a warlike spirit of violence and heroism, and they condemn democratic socialism for advocating "humanistic lachrimosity" such as natural rights, justice, and equality. Fascists also oppose democratic socialism for its support of reformism and the parliamentary system that fascism typically rejects. Italian Fascism had ideological connections with revolutionary syndicalism, and in particular Sorelian syndicalism. Benito Mussolini mentioned the revolutionary syndicalist Georges Sorel, along with Hubert Lagardelle and his journal Le Mouvement socialiste (which advocated a technocratic vision of society), as major influences on fascism. According to Zeev Sternhell, World War I caused Italian revolutionary syndicalism to develop into a national syndicalism, reuniting all social classes, which later transitioned into Italian Fascism, such that "most syndicalist leaders were among the founders of the Fascist movement" and "many even held key posts" in the Italian Fascist regime by the mid-1920s.
The Sorelian emphasis on the need for a revolution based upon action and intuition, a cult of energy and vitality, activism, heroism and the use of myth was used by fascists. Many prominent fascist figures were formerly associated with revolutionary syndicalism, including Mussolini, Arturo Labriola, Robert Michels and Paolo Orano. See also List of fascist movements Fascist (insult) Clerical fascism Definitions of fascism Hindutva Violence against Christians in India Violence against Muslims in independent India "The Doctrine of Fascism" Ecofascism Economics of fascism Fascio Fascist architecture Fascist socialization Fascist symbolism Fascist Syndicalism Ideology of the Committee of Union and Progress Producerism Yellow socialism References General bibliography Bibliography on fascist ideology Bibliography on international fascism (Contains chapters on fascist movements in different countries.) Further reading Seldes, George. 1935. Sawdust Caesar: The Untold History of Mussolini and Fascism. New York and London: Harper and Brothers. Reich, Wilhelm. 1970. The Mass Psychology of Fascism. New York: Farrar, Straus & Giroux. Black, Edwin. 2001. IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America's Most Powerful Corporation. Crown. External links The Doctrine of Fascism signed by Benito Mussolini (complete text) Authorized translation of Mussolini's "The Political and Social Doctrine of Fascism" (1933) The Political Economy of Fascism – From Dave Renton's anti-fascist website Fascism and Zionism – From The Hagshama Department – World Zionist Organization Fascism Part I – Understanding Fascism and Anti-Semitism Eternal Fascism: Fourteen Ways of Looking at a Blackshirt – Umberto Eco's list of 14 characteristics of Fascism, originally published 1995. Site of an Italian fascist party Italian and German languages Site dedicated to the period of fascism in Greece (1936–1941) Text of the papal encyclical Quadragesimo Anno. Profits über Alles! American Corporations and Hitler by Jacques R. Pauwels Fascism Ideologies
Comparative literature
Comparative literature studies is an academic field dealing with the study of literature and cultural expression across linguistic, national, geographic, and disciplinary boundaries. Comparative literature "performs a role similar to that of the study of international relations but works with languages and artistic traditions, so as to understand cultures 'from the inside'". While most frequently practised with works of different languages, comparative literature may also be performed on works of the same language if the works originate from different nations or cultures in which that language is spoken. The characteristically intercultural and transnational field of comparative literature concerns itself with the relation between literature, broadly defined, and other spheres of human activity, including history, politics, philosophy, art, and science. Unlike other forms of literary study, comparative literature places its emphasis on the interdisciplinary analysis of social and cultural production within the "economy, political dynamics, cultural movements, historical shifts, religious differences, the urban environment, international relations, public policy, and the sciences". Overview Students and instructors in the field, usually called "comparatists", have traditionally been proficient in several languages and acquainted with the literary traditions, literary criticism, and major literary texts of those languages. Many of the newer sub-fields, however, are more influenced by critical theory and literary theory, stressing theoretical acumen and the ability to consider different types of art concurrently over proficiency in multiple languages. The interdisciplinary nature of the field means that comparatists typically exhibit acquaintance with sociology, history, anthropology, translation studies, critical theory, cultural studies, and religious studies. As a result, comparative literature programs within universities may be designed by scholars drawn from several such departments. This eclecticism has led critics (from within and without) to charge that comparative literature is insufficiently well-defined or that comparatists too easily fall into dilettantism because the scope of their work is, of necessity, broad. Some question whether this breadth affects the ability of PhDs to find employment in the highly specialized environment of academia and the career market at large, although such concerns do not seem to be borne out by placement data, which shows comparative literature graduates to be hired at similar or higher rates than English literature graduates. The terms "comparative literature" and "world literature" are often used to designate a similar course of study and scholarship. Comparative literature is the more widely used term in the United States, with many universities having comparative literature departments or comparative literature programs. Comparative literature is an interdisciplinary field whose practitioners study literature across national borders, time periods, languages, genres, boundaries between literature and the other arts (music, painting, dance, film, etc.), and across disciplines (literature and psychology, philosophy, science, history, architecture, sociology, politics, etc.). Defined most broadly, comparative literature is the study of "literature without borders". 
Scholarship in comparative literature includes, for example, studying literacy and social status in the Americas, medieval epic and romance, the links of literature to folklore and mythology, colonial and postcolonial writings in different parts of the world, and asking fundamental questions about the definition of literature itself. What scholars in comparative literature share is a desire to study literature beyond national boundaries and an interest in languages so that they can read foreign texts in their original form. Many comparatists also share the desire to integrate literary experience with other cultural phenomena such as historical change, philosophical concepts, and social movements. The discipline of comparative literature has scholarly associations such as the International Comparative Literature Association (ICLA) and comparative literature associations in many countries. There are many learned journals that publish scholarship in comparative literature: see "Selected Comparative Literature and Comparative Humanities Journals" and for a list of books in comparative literature see "Bibliography of (Text)Books in Comparative Literature". Early work Works considered foundational to the discipline of comparative literature include the writings of the Spanish humanist Juan Andrés, the scholarship of the Transylvanian Hungarian Hugo Meltzl de Lomnitz, founding editor of the journal Acta Comparationis Litterarum Universarum (1877), and Irish scholar H.M. Posnett's Comparative Literature (1886). However, antecedents can be found in the ideas of Johann Wolfgang von Goethe and his vision of "world literature" (Weltliteratur), and the Russian Formalists credited Alexander Veselovsky with laying the groundwork for the discipline. Viktor Zhirmunsky, for instance, referred to Veselovsky as "the most remarkable representative of comparative literary study in Russian and European scholarship of the nineteenth century" (Zhirmunsky qtd. in Rachel Polonsky, English Literature and the Russian Aesthetic Renaissance [Cambridge UP, 1998. 17]; see also David Damrosch). During the late 19th century, comparatists such as Fyodor Buslaev were chiefly concerned with deducing the purported Zeitgeist or "spirit of the times", which they assumed to be embodied in the literary output of each nation. Although many comparative works from this period would be judged chauvinistic, Eurocentric, or even racist by present-day standards, the intention of most scholars during this period was to increase the understanding of other cultures, not to assert superiority over them (although politicians and others from outside the field sometimes used their works for this purpose). French School From the early part of the 20th century until the Second World War, the field was characterised by a notably empiricist and positivist approach, termed the "French School", in which scholars like Paul Van Tieghem examined works forensically, looking for evidence of "origins" and "influences" between works from different nations, a practice often termed "rapport des faits". Thus a scholar might attempt to trace how a particular literary idea or motif traveled between nations over time. In the French School of Comparative Literature, the study of influences and mentalities dominates. Today, the French School practices the nation-state approach of the discipline although it also promotes the approach of a "European Comparative Literature". The publications from this school include La Littérature Comparée (1967) by C. Pichois and A.M. 
Rousseau, La Critique Littéraire (1969) by J.-C. Carloni and Jean Filloux and La Littérature Comparée (1989) by Yves Cheverel, translated into English as Comparative Literature Today: Methods & Perspectives (1995). German School Like the French School, German Comparative Literature has its origins in the late 19th century. After World War II, the discipline developed to a large extent owing to one scholar in particular, Peter Szondi (1929–1971), a Hungarian who taught at the Free University Berlin. Szondi's work in Allgemeine und Vergleichende Literaturwissenschaft (German for "General and Comparative Literary Studies") included the genre of drama, lyric (in particular hermetic) poetry, and hermeneutics: "Szondi's vision of Allgemeine und Vergleichende Literaturwissenschaft became evident in both his policy of inviting international guest speakers to Berlin and his introductions to their talks. Szondi welcomed, among others, Jacques Derrida (before he attained worldwide recognition), Pierre Bourdieu and Lucien Goldman from France, Paul de Man from Zürich, Gershom Sholem from Jerusalem, Theodor W. Adorno from Frankfurt, Hans Robert Jauss from the then young University of Konstanz, and from the US René Wellek, Geoffrey Hartman and Peter Demetz (all at Yale), along with the liberal publicist Lionel Trilling. The names of these visiting scholars, who form a programmatic network and a methodological canon, epitomize Szondi's conception of comparative literature. However, German comparatists working in East Germany were not invited, nor were recognized colleagues from France or the Netherlands. Yet while he was oriented towards the West and the new allies of West Germany and paid little attention to comparatists in Eastern Europe, his conception of a transnational (and transatlantic) comparative literature was very much influenced by East European literary theorists of the Russian and Prague schools of structuralism, from whose works René Wellek, too, derived many of his concepts. These concepts continue to have profound implications for comparative literary theory today" ... A manual published by the department of comparative literature at the LMU Munich lists 31 German departments which offer a diploma in comparative literature in Germany, albeit some only as a 'minor'. These are: Augsburg, Bayreuth, Free University Berlin, Technische Universität Berlin, Bochum, Bonn, Chemnitz-Zwickau, Erfurt, Erlangen-Nürnberg, Essen, Frankfurt am Main, Frankfurt an der Oder, Gießen, Göttingen, Jena, Karlsruhe, Kassel, Konstanz, Leipzig, Mainz, München, Münster, Osnabrück, Paderborn, Potsdam, Rostock, Saarbrücken, Siegen, Stuttgart, Tübingen, Wuppertal. (Der kleine Komparatist [2003]). This situation is undergoing rapid change, however, since many universities are adapting to the new requirements of the recently introduced Bachelor and Master of Arts. German comparative literature is being squeezed by the traditional philologies on the one hand and more vocational programmes of study on the other which seek to offer students the practical knowledge they need for the working world (e.g., 'Applied Literature'). With German universities no longer educating their students primarily for an academic market, the necessity of a more vocational approach is becoming ever more evident". 
American (US) School Reacting to the French School, postwar scholars, collectively termed the "American School", sought to return the field to matters more directly concerned with literary criticism, de-emphasising the detective work and detailed historical research that the French School had demanded. The American School was more closely aligned with the original internationalist visions of Goethe and Posnett (arguably reflecting the postwar desire for international cooperation), looking for examples of universal human truths based on the literary archetypes that appeared throughout literatures from all times and places. Prior to the advent of the American School, the scope of comparative literature in the West was typically limited to the literatures of Western Europe and Anglo-America, predominantly literature in English, German and French literature, with occasional forays into Italian literature (primarily for Dante) and Spanish literature (primarily for Miguel de Cervantes). One monument to the approach of this period is Erich Auerbach's book Mimesis: The Representation of Reality in Western Literature, a survey of techniques of realism in texts whose origins span several continents and three thousand years. The approach of the American School would be familiar to current practitioners of cultural studies and is even claimed by some to be the forerunner of the Cultural Studies boom in universities during the 1970s and 1980s. The field today is highly diverse: for example, comparatists routinely study Chinese literature, Arabic literature and the literatures of most other major world languages and regions as well as English and continental European literatures. Current developments There is a movement among comparativists in the United States and elsewhere to re-focus the discipline away from the nation-based approach with which it has previously been associated towards a cross-cultural approach that pays no heed to national borders. Works of this nature include Alamgir Hashmi's The Commonwealth, Comparative Literature and the World, Gayatri Chakravorty Spivak's Death of a Discipline, David Damrosch's What is World Literature?, Steven Tötösy de Zepetnek's concept of "comparative cultural studies", and Pascale Casanova's The World Republic of Letters. It remains to be seen whether this approach will prove successful given that comparative literature had its roots in nation-based thinking and much of the literature under study still concerns issues of the nation-state. Given developments in the studies of globalization and interculturalism, comparative literature, already representing a wider study than the single-language nation-state approach, may be well suited to move away from the paradigm of the nation-state. Joseph Hankinson's stress on comparison's 'affiliative' potential is one recent effort in this direction. While in the West comparative literature is experiencing institutional constriction, there are signs that in many parts of the world the discipline is thriving, especially in Asia, Latin America, the Caribbean, and the Mediterranean. Current trends in Transnational studies also reflect the growing importance of post-colonial literary figures such as J. M. Coetzee, Maryse Condé, Earl Lovelace, V. S. Naipaul, Michael Ondaatje, Wole Soyinka, Derek Walcott, and Lasana M. Sekou. For recent post-colonial studies in North America see George Elliott Clarke. Directions Home: Approaches to African-Canadian Literature. (University of Toronto Press, 2011), Joseph Pivato. 
Echo: Essays in Other Literatures. (Guernica Editions, 2003), and "The Sherbrooke School of Comparative Canadian Literature". (Inquire, 2011). In the area of comparative studies of literature and the other arts see Linda Hutcheon's work on Opera and her A Theory of Adaptation. 2nd. ed. (Routledge, 2012). Canadian scholar Joseph Pivato is carrying on a campaign to revitalize comparative study with his book, Comparative Literature for the New Century eds. Giulia De Gasperi & Joseph Pivato (2018). In response to Pivato Canadian comparatists Susan Ingram and Irene Sywenky co-edited Comparative Literature in Canada: Contemporary Scholarship, Pedagogy, and Publishing in Review (2019), an initiative of the Canadian Comparative Literature Association. Interliterary study See also Comparative linguistics Literary criticism Literary translation Translation criticism References Citations General sources Tötösy de Zepetnek, Steven. "Multilingual Bibliography of (Text)Books in Comparative Literature, World Literature(s), and Comparative Cultural Studies". CLCWeb: Comparative Literature and Culture (Library) (1999–). CLCWeb: Comparative Literature and Culture. Companion to Comparative Literature, World Literatures, and Comparative Cultural Studies. Ed. Steven Tötösy de Zepetnek and Tutun Mukherjee. New Delhi: Cambridge University Press India, 2013. "New Work in Comparative Literature in Europe". Marina Grishakova, Lucia Boldrini, and Matthew Arnolds (eds.). Special Issue CLCWeb: Comparative Literature and Culture 15.7 (2013). Comparative Literature for the New Century. Giulia De Gasperi & Joseph Pivato (eds.). Montreal: McGill-Queen's U.P., 2018. External links A list of comparative literature departments and programs in the US, Canada, and UK AILC/ICLA: Association internationale de littérature comparée / International Comparative Literature Association REELC/ENCLS: Réseau européen d'études littéraires comparées/European Network for Comparative Literary Studies Critical theory
Anagenesis
Anagenesis is the gradual evolution of a species that continues to exist as an interbreeding population. This contrasts with cladogenesis, which occurs when there is branching or splitting, leading to two or more lineages and resulting in separate species. Anagenesis does not always lead to the formation of a new species from an ancestral species. When speciation does occur as different lineages branch off and cease to interbreed, a core group may continue to be defined as the original species. The evolution of this group, without extinction or species selection, is anagenesis. Hypotheses One hypothesis is that during the speciation event in anagenetic evolution, the original populations will increase quickly, and then accumulate genetic variation over long periods of time by mutation and recombination in a stable environment. Other factors such as selection or genetic drift will have such a significant effect on genetic material and physical traits that a species can be recognized as different from its predecessor. Development An alternative definition of anagenesis involves the ancestor-descendant relationships between designated taxa along a single branch of the evolutionary tree. The taxa must belong to the same species or genus, and such relationships help identify possible ancestors. When looking at evolutionary descent, there are two mechanisms at play. The first process is change in genetic information. Over time, enough differences accumulate in genomes, and in the way that species' genes interact with each other during development, that anagenesis can be viewed as the combined effect of sexual selection, natural selection, and genetic drift on an evolving species over time. The second process, speciation, is closely associated with cladogenesis. Speciation includes the actual separation of lineages, into two or more new species, from one specified species of origin. Cladogenesis can be seen as a similar hypothesis to anagenesis, with the addition of speciation to its mechanisms. Diversity on a species level is able to be achieved through anagenesis. Anagenesis suggests that evolutionary changes can occur in a species over time to a sufficient degree that later organisms may be considered a different species, especially in the absence of fossils documenting the gradual transition from one to another. This is in contrast to cladogenesis (or speciation in a sense), in which a population is split into two or more reproductively isolated groups and these groups accumulate sufficient differences to become distinct species. The punctuated equilibria hypothesis suggests that anagenesis is rare and that the rate of evolution is most rapid immediately after a split that leads to cladogenesis, but it does not completely rule out anagenesis. Distinguishing between anagenesis and cladogenesis is particularly relevant in the fossil record, where limited fossil preservation in time and space makes it difficult to distinguish between anagenesis, cladogenesis in which one species replaces the other, and simple migration patterns. Recent evolutionary studies are looking at anagenesis and cladogenesis for possible answers in developing the hominin phylogenetic tree to understand morphological diversity and the origins of Australopithecus anamensis, and this case could possibly show anagenesis in the fossil record. 
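The distinction drawn above, between change along a single unbranched lineage and branching into separate lineages, can be made concrete with a small illustration. The following Python sketch is not from the article; the species names and the representation of a lineage as a list of chronospecies are hypothetical conveniences. It simply models anagenesis as appending a new name to the same lineage and cladogenesis as splitting the lineage into two descendants.

```python
# Toy illustration (not from the article): a lineage as a series of chronospecies,
# contrasting anagenesis (change within one unbranched lineage) with cladogenesis
# (a branching event that produces two descendant lineages). Names are hypothetical.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Lineage:
    """A single evolving population, tracked as a list of chronospecies names."""
    chronospecies: List[str]
    children: List["Lineage"] = field(default_factory=list)

    def anagenesis(self, new_name: str) -> None:
        # Anagenesis: the same interbreeding population changes enough that
        # later members receive a new species name; no branching occurs.
        self.chronospecies.append(new_name)

    def cladogenesis(self, name_a: str, name_b: str) -> None:
        # Cladogenesis: the lineage splits into two reproductively isolated
        # descendant lineages, each starting its own chronospecies series.
        self.children = [Lineage([name_a]), Lineage([name_b])]


root = Lineage(["species_A"])
root.anagenesis("species_B")                  # A -> B within one unbranched lineage
root.cladogenesis("species_C", "species_D")   # B then splits into C and D

print("Anagenetic series:", " -> ".join(root.chronospecies))
print("Branches after cladogenesis:", [c.chronospecies for c in root.children])
```

In this toy model the fossil-record ambiguity described above corresponds to not knowing, from a sparse sample, whether "species_B" arose by renaming the same lineage or by a split whose other branch was never preserved.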
When enough mutations have occurred and become stable in a population so that it is significantly differentiated from an ancestral population, a new species name may be assigned. A series of such species is collectively known as an evolutionary lineage. The various species along an evolutionary lineage are chronospecies. If the ancestral population of a chronospecies does not go extinct, then this is cladogenesis, and the ancestral population represents a paraphyletic species or paraspecies, being an evolutionary grade. In humans The modern human origins debate caused researchers to look further for answers. Researchers were curious to know whether present-day humans originated from Africa, or whether they somehow, through anagenesis, were able to evolve from a single archaic species that lived in Afro-Eurasia. Milford H. Wolpoff is a paleoanthropologist whose work, studying human fossil records, explored anagenesis as a hypothesis for hominin evolution. When looking at anagenesis in hominids, Wolpoff describes it in terms of the 'single-species hypothesis', which treats culture as an adaptive system and as an explanation for the conditions in which humans tend to live, based on the environmental conditions, or the ecological niche. When judging the effect that culture has as an adaptive system, scientists must first look at modern Homo sapiens. Wolpoff contended that the ecological niche of past, extinct hominids is distinct within the line of origin. Examining early Pliocene and late Miocene findings helps to determine the relative importance of anagenesis versus cladogenesis during the period in which morphological differences arose. These findings suggest that the human and chimpanzee lineages once diverged from each other. The hominin fossils go back as far as 5 to 7 million years ago (Mya). With the collected data, only one or two early hominins were found to be relatively close to the Plio-Pleistocene range. Once more research was done, specifically with the fossils of A. anamensis and A. afarensis, researchers were able to argue that these two hominin species were ancestrally linked. However, William H. Kimbel and other researchers, reviewing the history of early hominin fossils, concluded that actual macroevolutionary change via anagenesis was scarce. Phylogeny DEM (or Dynamic Evolutionary Map) is a different way to track ancestors and relationships between organisms. The pattern of branching in phylogenetic trees, and how far a branch grows after a species lineage has split and evolved, correlate with anagenesis and cladogenesis. In a DEM, however, dots depict the movement of these different species. Anagenesis is viewed by observing the dot movement across the DEM, whereas cladogenesis is viewed by observing the separation and movement of the dots across the map. Criticism Controversy arises among taxonomists as to when the differences are significant enough to warrant a new species classification. Anagenesis may also be referred to as gradual evolution. The distinction of speciation and lineage evolution as anagenesis or cladogenesis can be controversial, and some academics question the necessity of the terms altogether. The philosopher of science Marc Ereshefsky argues that paraphyletic taxa are the result of anagenesis. 
The lineage leading to birds has diverged significantly from lizards and crocodiles, allowing evolutionary taxonomists to classify birds separately from lizards and crocodiles, which are grouped as reptiles. Applications Regarding social evolution, it has been suggested that social anagenesis/aromorphosis be viewed as a universal or widely diffused social innovation that raises social systems' complexity, adaptability, integrity, and interconnectedness. See also Multigenomic organism References External links Diagram contrasting Anagenesis and Cladogenesis from the University of Newfoundland Evolutionary biology concepts Evolutionary biology terminology Rate of evolution Speciation
Agronomy
Agronomy is the science and technology of producing and using plants by agriculture for food, fuel, fiber, chemicals, recreation, or land conservation. Agronomy has come to include research in plant genetics, plant physiology, meteorology, and soil science. It is the application of a combination of sciences such as biology, chemistry, economics, ecology, earth science, and genetics. Professionals of agronomy are termed agronomists. History Agronomy has a long and rich history dating to the Neolithic Revolution. Some of the earliest practices of agronomy are found in ancient civilizations, including Ancient Egypt, Mesopotamia, China and India. They developed various techniques for the management of soil fertility, irrigation and crop rotation. During the 18th and 19th centuries, advances in science led to the development of modern agronomy. German chemist Justus von Liebig and John Bennett Lawes, an English entrepreneur, contributed to the understanding of plant nutrition and soil chemistry. Their work laid the foundation for modern fertilizers and agricultural practices. Agronomy continued to evolve with the development of new technology and practices in the 20th century. From the 1960s, the Green Revolution saw the introduction of high-yield crop varieties, modern fertilizers and improved agricultural practices. It led to an increase in global food production, helping to reduce hunger and poverty in many parts of the world. Plant breeding This topic of agronomy involves selective breeding of plants to produce the best crops for various conditions. Plant breeding has increased crop yields and has improved the nutritional value of numerous crops, including corn, soybeans, and wheat. It has also resulted in the development of new types of plants. For example, a hybrid grain named triticale was produced by crossbreeding rye and wheat. Triticale contains more usable protein than does either rye or wheat. Agronomy has also been instrumental for fruit and vegetable production research. Furthermore, the application of plant breeding for turfgrass development has resulted in a reduction in the demand for fertilizer and water inputs (requirements), as well as turf-types with higher disease resistance. Biotechnology Agronomists use biotechnology to extend and expedite the development of desired characteristics. Biotechnology is often a laboratory activity requiring field testing of new crop varieties that are developed. In addition to increasing crop yields, agronomic biotechnology is being applied increasingly for novel uses other than food. For example, oilseed is at present used mainly for margarine and other food oils, but it can be modified to produce fatty acids for detergents, substitute fuels and petrochemicals. Soil science Agronomists study sustainable ways to make soils more productive and profitable. They classify soils and analyze them to determine whether they contain nutrients vital for plant growth. Common macronutrients analyzed include compounds of nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. Soil is also assessed for several micronutrients, like zinc and boron. The percentage of organic matter, soil pH, and nutrient holding capacity (cation exchange capacity) are tested in a regional laboratory. Agronomists will interpret these laboratory reports and make recommendations to modify soil nutrients for optimal plant growth. Soil conservation Additionally, agronomists develop methods to preserve soil and decrease the effects of erosion by wind and water. 
For example, a technique known as contour plowing may be used to prevent soil erosion and conserve rainfall. Researchers in agronomy also seek ways to use soil more effectively to solve other problems. Such problems include the disposal of human and animal manure, water pollution, and pesticide accumulation in the soil, as well as preserving the soil for future generations in the face of practices such as the burning of paddocks after crop production. Pasture management techniques include no-till farming, planting of soil-binding grasses along contours on steep slopes, and using contour drains up to 1 metre deep. Agroecology Agroecology is the management of agricultural systems with an emphasis on ecological and environmental applications. This topic is associated closely with work for sustainable agriculture, organic farming, and alternative food systems, and with the development of alternative cropping systems. Theoretical modeling Theoretical production ecology is the quantitative study of the growth of crops. The plant is treated as a kind of biological factory, which processes light, carbon dioxide, water, and nutrients into harvestable products. The main parameters considered are temperature, sunlight, standing crop biomass, plant production distribution, and nutrient and water supply. See also Agricultural engineering Agricultural policy Agroecology Agrology Agrophysics Crop farming Food systems Horticulture Green Revolution Vegetable farming References Bibliography Wendy B. Murphy, The Future World of Agriculture, Watts, 1984. Antonio Saltini, Storia delle scienze agrarie, 4 vols, Bologna 1984–89. External links The American Society of Agronomy (ASA) Crop Science Society of America (CSSA) Soil Science Society of America (SSSA) European Society for Agronomy The National Agricultural Library (NAL) – Comprehensive agricultural library. Information System for Agriculture and Food Research Applied sciences Plant agriculture
Eurasia
Eurasia is the largest continental area on Earth, comprising all of Europe and Asia. According to some geographers, physiographically, Eurasia is a single supercontinent. The concepts of Europe and Asia as distinct continents date back to antiquity, but their borders have historically been subject to change. For example, to the ancient Greeks, Asia originally included Africa, but they classified Europe as a separate land. Eurasia is connected to Africa at the Suez Canal, and the two are sometimes combined to describe the largest contiguous landmass on Earth, Afro-Eurasia. Geography Primarily in the Northern and Eastern Hemispheres, Eurasia spans from Iceland and the Iberian Peninsula in the west to the Russian Far East, and from the Russian Far North to Maritime Southeast Asia in the south, although some definitions of Eurasia place its southern limit at Weber's Line. Eurasia is bordered by Africa to the southwest, the Atlantic Ocean to the west, the Arctic Ocean to the north, the Pacific Ocean to the east, and the Indian Ocean to the south. The division between Europe and Asia as two continents is a historical social construct, as neither fits the usual definition; thus, in some parts of the world, Eurasia is recognized as the largest of the six, five, or four continents on Earth. Eurasia covers around 55 million square kilometres, or around 36.2% of the Earth's total land area. The landmass contains well over 5 billion people, equating to approximately 70% of the human population. Humans first settled in Eurasia from Africa 125,000 years ago. Eurasia contains many peninsulas, including the Arabian Peninsula, Korean Peninsula, Indian subcontinent, Anatolian Peninsula, Kamchatka Peninsula, and Europe, which itself contains peninsulas such as the Italian or Iberian Peninsula. Due to its vast size and differences in latitude, Eurasia exhibits all types of climates under the Köppen classification, including the harshest types of hot and cold temperatures, high and low precipitation, and various types of ecosystems. Eurasia is considered a supercontinent, part of the supercontinent of Afro-Eurasia, or simply a continent in its own right. In plate tectonics, the Eurasian Plate includes Europe and most of Asia but not the Indian subcontinent, the Arabian Peninsula or the area of the Russian Far East east of the Chersky Range. From the point of view of history and culture, Eurasia can be loosely subdivided into Western Eurasia and Eastern Eurasia. Geology In geology, Eurasia is often considered as a single rigid megablock, but this is debated. Eurasia formed between 375 and 325 million years ago with the merging of Siberia, Kazakhstania, and Baltica, which was joined to Laurentia (now North America), to form Euramerica. Mountains All of the 100 highest mountains on Earth are in Eurasia, in the Himalaya, Karakoram, Hindu Kush, Pamir, Hengduan, and Tian Shan mountain ranges, and all peaks above 7,000 metres are in these ranges and the Transhimalaya. Other high ranges include the Kunlun, Hindu Raj, and Caucasus Mountains. The Alpide belt stretches 15,000 km across southern Eurasia, from Java in Maritime Southeast Asia to the Iberian Peninsula in Western Europe, including the ranges of the Himalayas, Karakoram, Hindu Kush, Alborz, Caucasus, and the Alps. Long ranges outside the Alpide Belt include the East Siberian, Altai, Scandinavian, Qinling, Western Ghats, Vindhya, Byrranga, and Annamite Ranges. 
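The area and population shares quoted in the lead of this article can be checked with a short back-of-the-envelope calculation. The sketch below is illustrative only: the totals for Earth's land area and world population are assumed reference values supplied for the arithmetic, not figures taken from the article.

```python
# Rough sanity check of the shares quoted above. The totals used here
# (Earth's land area and world population) are assumed reference values.

EARTH_LAND_AREA_KM2 = 148_900_000     # assumed total land area of Earth
WORLD_POPULATION = 8_000_000_000      # assumed world population (early 2020s)

eurasia_area_km2 = 55_000_000         # approximate area of Eurasia
eurasia_population = 5_400_000_000    # "well over 5 billion people"

area_share = eurasia_area_km2 / EARTH_LAND_AREA_KM2
population_share = eurasia_population / WORLD_POPULATION

print(f"Share of Earth's land area: {area_share:.1%}")       # roughly 37%
print(f"Share of world population:  {population_share:.1%}") # roughly 68%
```

The small differences from the quoted 36.2% and 70% come entirely from the assumed totals and the rounded inputs.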
Islands The largest Eurasian islands by area are Borneo, Sumatra, Honshu, Great Britain, Sulawesi, Java, Luzon, Iceland, Mindanao, Ireland, Hokkaido, Sakhalin, and Sri Lanka. The five most-populated islands in the world are Java, Honshu, Great Britain, Luzon, and Sumatra. Other Eurasian islands with large populations include Mindanao, Taiwan, Salsette, Borneo, Sri Lanka, Sulawesi, Kyushu, and Hainan. The most densely populated islands in Eurasia are Caubian Gamay Island, Ap Lei Chau, and Navotas Island. In the Arctic Ocean, Severny Island, Nordaustlandet, October Revolution Island, and Bolshevik Island are Eurasia's largest uninhabited islands, and Kotelny Island, Alexandra Land, and Spitsbergen are the least-densely populated. History Eurasia has been the host of many ancient civilizations, including those based in Mesopotamia, the Indus Valley and China. In the Axial Age (mid-first millennium BCE), a continuous belt of civilizations stretched through the Eurasian subtropical zone from the Atlantic to the Pacific. This belt became the mainstream of world history for two millennia. Russian geopolitical ideology Originally, "Eurasia" is a geographical notion: in this sense, it is simply the biggest continent, the combined landmass of Europe and Asia. However, geopolitically, the word has several meanings, reflecting specific geopolitical interests. "Eurasia" is one of the most important geopolitical concepts; it figures prominently in the commentaries on the ideas of Halford Mackinder and in the writings of Zbigniew Brzezinski. The Russian "Eurasianism" corresponded initially more or less to the land area of Imperial Russia in 1914, including parts of Eastern Europe. One of Russia's main geopolitical interests lies in ever closer integration with those countries that it considers part of "Eurasia." The term Eurasia also gained geopolitical resonance as the name of one of the three superstates in Nineteen Eighty-Four, George Orwell's novel in which constant surveillance and propaganda are used as instruments of political control. Regional organisations and alliances Across Eurasia, several single markets have emerged, including the Eurasian Economic Space, European Single Market, ASEAN Economic Community, and the Gulf Cooperation Council. There are also several international organizations and initiatives which seek to promote integration throughout Eurasia, including: Asia-Europe Meeting Every two years since 1996 a meeting of most Asian and European countries is organised as the Asia–Europe Meeting (ASEM). Commonwealth of Independent States The Commonwealth of Independent States (CIS) is a political and economic association of 10 post-Soviet republics in Eurasia formed following the dissolution of the Soviet Union. It has an estimated population of 239,796,010. The CIS encourages cooperation in economic, political, and military affairs and has certain powers to coordinate trade, finance, lawmaking and security. In addition, six members of the CIS have joined the Collective Security Treaty Organization, an intergovernmental military alliance that was founded in 1992. Eurasian Economic Union Similar in concept to the European Union, the Eurasian Economic Union is an economic union established in 2015 including Russia, Armenia, Belarus, Kazakhstan, Kyrgyzstan and observer members Moldova, Uzbekistan, and Cuba. It is headquartered in Moscow, Russia and Minsk, Belarus. 
The union promotes economic integration among members and is theoretically open to enlargement to include any country in Europe or Asia. Federation of Euro-Asian Stock Exchanges The Federation of Euro-Asian Stock Exchanges (FEAS) is an international organization headquartered in Yerevan, comprising the main stock exchanges in Eastern Europe, the Middle East and Central Asia. The purpose of the Federation is to contribute to the cooperation, development, support and promotion of capital markets in the Eurasian region. Russia-EU Common Spaces The Russia–EU Four Common Spaces Initiative is a joint European Union and Russian agreement to integrate Russia and the EU more closely, remove barriers to trade and investment, and promote reforms and competitiveness. In 2010, Russian Prime Minister Vladimir Putin called for a common economic space, free-trade area or more advanced economic integration, stretching from Lisbon to Vladivostok. However, no significant progress was made and the project was put on hold after Russia-EU relations deteriorated following the Russo-Ukrainian War in 2014. Shanghai Cooperation Organisation The Shanghai Cooperation Organisation is a Eurasian political, economic and security alliance, the creation of which was announced on 15 June 2001 in Shanghai, China. It is the largest regional organisation in the world in terms of geographical coverage and population, covering three-fifths of the Eurasian continent and nearly half of the human population. Use of term History of the Europe–Asia division In ancient times, the Greeks classified Europe (derived from the mythological Phoenician princess Europa) and Asia (derived from Asia, a woman in Greek mythology), which to the Greeks originally included Africa, as separate "lands". Where to draw the dividing line between the two regions is still a matter of discussion. It is especially disputed whether the Kuma-Manych Depression or the Caucasus Mountains form the southeast boundary, since Mount Elbrus would be part of Europe in the latter case, making it (and not Mont Blanc) Europe's highest mountain. Most accepted is probably the boundary as defined by Philip Johan von Strahlenberg in the 18th century. He defined the dividing line along the Aegean Sea, Dardanelles, Sea of Marmara, Bosporus, Black Sea, Kuma–Manych Depression, Caspian Sea, Ural River, and the Ural Mountains. However, at least part of this definition has been subject to criticism by many modern analytical geographers like Halford Mackinder, who saw little validity in the Ural Mountains as a boundary between continents. Soviet states after decentralization Nineteenth-century Russian philosopher Nikolai Danilevsky defined Eurasia as an entity separate from Europe and Asia, bounded by the Himalayas, the Caucasus, the Alps, the Arctic, the Pacific, the Atlantic, the Mediterranean, the Black Sea and the Caspian Sea, a definition that has been influential in Russia and other parts of the former Soviet Union. Nowadays, partly inspired by this usage, the term Eurasia is sometimes used to refer to the post-Soviet space – in particular Russia, the Central Asian republics, and the Transcaucasus republics – and sometimes also adjacent regions such as Turkey and Mongolia. The word "Eurasia" is often used in Kazakhstan to describe its location. Numerous Kazakh institutions have the term in their names, like the L. N. 
Gumilev Eurasian National University (; ) (Lev Gumilev's Eurasianism ideas having been popularized in Kazakhstan by Olzhas Suleimenov), the Eurasian Media Forum, the Eurasian Cultural Foundation, the Eurasian Development Bank, and the Eurasian Bank. In 2007 Kazakhstan's president, Nursultan Nazarbayev, proposed building a "Eurasia Canal" to connect the Caspian Sea and the Black Sea via Russia's Kuma-Manych Depression to provide Kazakhstan and other Caspian-basin countries with a more efficient path to the ocean than the existing Volga–Don Canal. This usage can also be seen in the names of Eurasianet, The Journal of Eurasian Studies, and the Association for Slavic, East European, and Eurasian Studies, as well as the titles of numerous academic programmes at US universities. This usage is comparable to how Americans use "Western Hemisphere" to describe concepts and organizations dealing with the Americas (e.g., Council on Hemispheric Affairs, Western Hemisphere Institute for Security Cooperation). See also Asia-Europe Foundation Asia–Europe Meeting Borders of the continents Council of Europe Community for Democracy and Rights of Nations Eastern European Group Eastern Partnership Eurasia (Nineteen Eighty-Four) Eurasian (disambiguation) Eurasian Economic Community Eurasia Tunnel Eurasia Canal Eurasian Economic Union European Union Euronest Parliamentary Assembly Federation of Euro-Asian Stock Exchanges Intermediate Region Laurasia – a geological supercontinent joining Eurasia and North America List of Eurasian countries by population Marmaray – railway tunnel links Europe to Asia Mongol Empire Organization of the Black Sea Economic Cooperation Organization for Security and Co-operation in Europe Palearctic Russian Empire Shanghai Cooperation Organisation Silk Road United States of Eurasia Vega expedition – the first voyage to circumnavigate Eurasia Notes References Further reading The Dawn of Eurasia: On the Trail of the New World Order by Bruno Maçães, Publisher: Allen Lane D. Lane, V. Samokhvalov, The Eurasian Project and Europe Regional Discontinuities and Geopolitics, Palgrave: Basingstoke (2015) V. Samokhvalov, The new Eurasia: post-Soviet space between Russia, Europe and China, European Politics and Society, Volume 17, 2016 – Issue sup1: The Eurasian Project in Global Perspective (Journal homepage) External links Eastern Hemisphere Supercontinents
Last Glacial Period
The Last Glacial Period (LGP), also known as the Last glacial cycle, occurred from the end of the Last Interglacial to the beginning of the Holocene, roughly 115,000 to 11,700 years ago, and thus corresponds to most of the timespan of the Late Pleistocene. The LGP is part of a larger sequence of glacial and interglacial periods known as the Quaternary glaciation, which started around 2,588,000 years ago and is ongoing. The glaciation and the current Quaternary Period both began with the formation of the Arctic ice cap. The Antarctic ice sheet began to form earlier, at about 34 Mya, in the mid-Cenozoic (Eocene–Oligocene extinction event), and the term Late Cenozoic Ice Age is used to include this early phase with the current glaciation. The previous ice age within the Quaternary is the Penultimate Glacial Period, which ended about 128,000 years ago and was more severe than the Last Glacial Period in some areas such as Britain, but less severe in others. The last glacial period saw alternating episodes of glacier advance and retreat, with the Last Glacial Maximum occurring between 26,000 and 20,000 years ago. While the general pattern of cooling and glacier advance around the globe was similar, local differences make it difficult to compare the details from continent to continent. The most recent cooling, the Younger Dryas, began around 12,800 years ago and ended around 11,700 years ago, also marking the end of the LGP and the Pleistocene epoch. It was followed by the Holocene, the current geological epoch. Origin and definition The LGP is often colloquially referred to as the "last ice age", though the term ice age is not strictly defined, and on a longer geological perspective, the last few million years could be termed a single ice age given the continual presence of ice sheets near both poles. Glacials are somewhat better defined, as colder phases during which glaciers advance, separated by relatively warm interglacials. The end of the last glacial period, which was about 10,000 years ago, is often called the end of the ice age, although extensive year-round ice persists in Antarctica and Greenland. Over the past few million years, the glacial-interglacial cycles have been "paced" by periodic variations in the Earth's orbit via Milankovitch cycles. The LGP has been intensively studied in North America, northern Eurasia, the Himalayas, and other formerly glaciated regions around the world. The glaciations that occurred during this glacial period covered many areas, mainly in the Northern Hemisphere and to a lesser extent in the Southern Hemisphere. They have different names, historically developed and depending on their geographic distributions: Fraser (in the Pacific Cordillera of North America), Pinedale (in the Central Rocky Mountains), Wisconsinan or Wisconsin (in central North America), Devensian (in the British Isles), Midlandian (in Ireland), Würm (in the Alps), Mérida (in Venezuela), Weichselian or Vistulian (in Northern Europe and northern Central Europe), Valdai in Russia and Zyryanka in Siberia, Llanquihue in Chile, and Otira in New Zealand. The geochronological Late Pleistocene includes the late glacial (Weichselian) and the immediately preceding penultimate interglacial (Eemian) period. Overview Northern Hemisphere Canada was almost completely covered by ice, as was the northern part of the United States, both blanketed by the huge Laurentide Ice Sheet. Alaska remained mostly ice free due to arid climate conditions. 
Local glaciations existed in the Rocky Mountains and the Cordilleran ice sheet and as ice fields and ice caps in the Sierra Nevada in northern California. In northern Eurasia, the Scandinavian ice sheet once again reached the northern parts of the British Isles, Germany, Poland, and Russia, extending as far east as the Taymyr Peninsula in western Siberia. The maximum extent of western Siberian glaciation was reached by about 18,000 to 17,000 BP, later than in Europe (22,000–18,000 BP). Northeastern Siberia was not covered by a continental-scale ice sheet. Instead, large, but restricted, icefield complexes covered mountain ranges within northeast Siberia, including the Kamchatka-Koryak Mountains. The Arctic Ocean between the huge ice sheets of America and Eurasia was not frozen throughout, but like today, probably was covered only by relatively shallow ice, subject to seasonal changes and riddled with icebergs calving from the surrounding ice sheets. According to the sediment composition retrieved from deep-sea cores, even times of seasonally open waters must have occurred. Outside the main ice sheets, widespread glaciation occurred on the highest mountains of the Alpide belt. In contrast to the earlier glacial stages, the Würm glaciation was composed of smaller ice caps and mostly confined to valley glaciers, sending glacial lobes into the Alpine foreland. Local ice fields or small ice sheets could be found capping the highest massifs of the Pyrenees, the Carpathian Mountains, the Balkan mountains, the Caucasus, and the mountains of Turkey and Iran. In the Himalayas and the Tibetan Plateau, there is evidence that glaciers advanced considerably, particularly between 47,000 and 27,000 BP, but the exact ages, as well as the formation of a single contiguous ice sheet on the Tibetan Plateau, is controversial. Other areas of the Northern Hemisphere did not bear extensive ice sheets, but local glaciers were widespread at high altitudes. Parts of Taiwan, for example, were repeatedly glaciated between 44,250 and 10,680 BP as well as the Japanese Alps. In both areas, maximum glacier advance occurred between 60,000 and 30,000 BP. To a still lesser extent, glaciers existed in Africa, for example in the High Atlas, the mountains of Morocco, the Mount Atakor massif in southern Algeria, and several mountains in Ethiopia. Just south of the equator, an ice cap of several hundred square kilometers was present on the east African mountains in the Kilimanjaro massif, Mount Kenya, and the Rwenzori Mountains, which still bear relic glaciers today. Southern Hemisphere Glaciation of the Southern Hemisphere was less extensive. Ice sheets existed in the Andes (Patagonian Ice Sheet), where six glacier advances between 33,500 and 13,900 BP in the Chilean Andes have been reported. Antarctica was entirely glaciated, much like today, but unlike today the ice sheet left no uncovered area. In mainland Australia only a very small area in the vicinity of Mount Kosciuszko was glaciated, whereas in Tasmania glaciation was more widespread. An ice sheet formed in New Zealand, covering all of the Southern Alps, where at least three glacial advances can be distinguished. Local ice caps existed in the highest mountains of the island of New Guinea, where temperatures were 5 to 6 °C colder than at present. The main areas of Papua New Guinea where glaciers developed during the LGP were the Central Cordillera, the Owen Stanley Range, and the Saruwaged Range. 
Mount Giluwe in the Central Cordillera had a "more or less continuous ice cap covering about 188 km2 and extending down to 3200-3500 m". In Western New Guinea, remnants of these glaciers are still preserved atop Puncak Jaya and Ngga Pilimsit. Small glaciers developed in a few favorable places in Southern Africa during the last glacial period. These small glaciers would have been located in the Lesotho Highlands and parts of the Drakensberg. The development of glaciers was likely aided in part due to shade provided by adjacent cliffs. Various moraines and former glacial niches have been identified in the eastern Lesotho Highlands a few kilometres west of the Great Escarpment, at altitudes greater than 3,000 m on south-facing slopes. Studies suggest that the annual average temperature in the mountains of Southern Africa was about 6 °C colder than at present, in line with temperature drops estimated for Tasmania and southern Patagonia during the same time. This resulted in an environment of relatively arid periglaciation without permafrost, but with deep seasonal freezing on south-facing slopes. Periglaciation in the eastern Drakensberg and Lesotho Highlands produced solifluction deposits and blockfields; including blockstreams and stone garlands. Deglaciation Scientists from the Center for Arctic Gas Hydrate, Environment and Climate at the University of Tromsø, published a study in June 2017 describing over a hundred ocean sediment craters, some 3,000 m wide and up to 300 m deep, formed by explosive eruptions of methane from destabilized methane hydrates, following ice-sheet retreat during the LGP, around 12,000 years ago. These areas around the Barents Sea still seep methane today. The study hypothesized that existing bulges containing methane reservoirs could eventually have the same fate. Named local glaciations Antarctica During the last glacial period, Antarctica was blanketed by a massive ice sheet, much as it is today. The ice covered all land areas and extended into the ocean onto the middle and outer continental shelf. Counterintuitively though, according to ice modeling done in 2002, ice over central East Antarctica was generally thinner than it is today. Europe Devensian and Midlandian glaciation (Britain and Ireland) British geologists refer to the LGP as the Devensian. Irish geologists, geographers, and archaeologists refer to the Midlandian glaciation, as its effects in Ireland are largely visible in the Irish Midlands. The name Devensian is derived from the Latin Dēvenses, people living by the Dee (Dēva in Latin), a river on the Welsh border near which deposits from the period are particularly well represented. The effects of this glaciation can be seen in many geological features of England, Wales, Scotland, and Northern Ireland. Its deposits have been found overlying material from the preceding Ipswichian stage and lying beneath those from the following Holocene, which is the current stage. This is sometimes called the Flandrian interglacial in Britain. The latter part of the Devensian includes pollen zones I–IV, the Allerød oscillation and Bølling oscillation, and the Oldest Dryas, Older Dryas, and Younger Dryas cold periods. Weichselian glaciation (Scandinavia and northern Europe) Alternative names include Weichsel glaciation or Vistulian glaciation (referring to the Polish River Vistula or its German name Weichsel). Evidence suggests that the ice sheets were at their maximum size for only a short period, between 25,000 and 13,000 BP. 
Eight interstadials have been recognized in the Weichselian, including the Oerel, Glinde, Moershoofd, Hengelo, and Denekamp. Correlation with isotope stages is still in process. During the glacial maximum in Scandinavia, only the western parts of Jutland were ice-free, and a large part of what is today the North Sea was dry land connecting Jutland with Britain (see Doggerland). The Baltic Sea, with its unique brackish water, is a result of meltwater from the Weichsel glaciation combining with saltwater from the North Sea when the straits between Sweden and Denmark opened. Initially, when the ice began melting about 10,300 BP, seawater filled the isostatically depressed area, a temporary marine incursion that geologists dub the Yoldia Sea. Then, as postglacial isostatic rebound lifted the region about 9500 BP, the deepest basin of the Baltic became a freshwater lake, in palaeological contexts referred to as Ancylus Lake, which is identifiable in the freshwater fauna found in sediment cores. The lake was filled by glacial runoff, but as worldwide sea level continued rising, saltwater again breached the sill about 8000 BP, forming a marine Littorina Sea, which was followed by another freshwater phase before the present brackish marine system was established. "At its present state of development, the marine life of the Baltic Sea is less than about 4000 years old", Drs. Thulin and Andrushaitis remarked when reviewing these sequences in 2003. Overlying ice had exerted pressure on the Earth's surface. As a result of melting ice, the land has continued to rise yearly in Scandinavia, mostly in northern Sweden and Finland, where the land is rising at a rate of as much as 8–9 mm per year, or 1 m in 100 years. This is important for archaeologists, since a site that was coastal in the Nordic Stone Age now is inland and can be dated by its relative distance from the present shore. Würm glaciation (Alps) The term Würm is derived from a river in the Alpine foreland, roughly marking the maximum glacier advance of this particular glacial period. The Alps were where the first systematic scientific research on ice ages was conducted by Louis Agassiz at the beginning of the 19th century. Here, the Würm glaciation of the LGP was intensively studied. Pollen analysis, the statistical analyses of microfossilized plant pollens found in geological deposits, chronicled the dramatic changes in the European environment during the Würm glaciation. During the height of Würm glaciation,  BP, most of western and central Europe and Eurasia was open steppe-tundra, while the Alps presented solid ice fields and montane glaciers. Scandinavia and much of Britain were under ice. During the Würm, the Rhône Glacier covered the whole western Swiss plateau, reaching today's regions of Solothurn and Aargau. In the region of Bern, it merged with the Aar glacier. The Rhine Glacier is currently the subject of the most detailed studies. Glaciers of the Reuss and the Limmat advanced sometimes as far as the Jura. Montane and piedmont glaciers formed the land by grinding away virtually all traces of the older Günz and Mindel glaciation, by depositing base moraines and terminal moraines of different retraction phases and loess deposits, and by the proglacial rivers' shifting and redepositing gravels. Beneath the surface, they had profound and lasting influence on geothermal heat and the patterns of deep groundwater flow. 
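The Scandinavian uplift figures mentioned in the Weichselian section above (up to about 8–9 mm per year, or roughly 1 m per century) lend themselves to a simple order-of-magnitude calculation of the kind archaeologists use when dating formerly coastal sites. The sketch below is illustrative only: it assumes a constant uplift rate and ignores global sea-level change, both of which real shoreline-displacement chronologies take into account, and the site elevation used is a hypothetical example.

```python
# Order-of-magnitude illustration of dating a formerly coastal site from land
# uplift. Assumes a constant uplift rate and no eustatic sea-level change,
# which simplifies real shoreline-displacement curves considerably.

def approximate_age_years(elevation_above_sea_m: float,
                          uplift_rate_mm_per_year: float) -> float:
    """Years since the site was at sea level, given a constant uplift rate."""
    uplift_rate_m_per_year = uplift_rate_mm_per_year / 1000.0
    return elevation_above_sea_m / uplift_rate_m_per_year


# Hypothetical Stone Age shoreline site now 65 m above present sea level,
# in a region rising at 8 mm per year (comparable to northern Sweden/Finland).
age = approximate_age_years(elevation_above_sea_m=65.0,
                            uplift_rate_mm_per_year=8.0)
print(f"Approximate age: {age:,.0f} years")  # about 8,100 years
```

The point is only that the arithmetic of postglacial rebound places such a site well back in the early Holocene; a real dating exercise would use a regionally calibrated shoreline-displacement curve rather than a single constant rate.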
North America Pinedale or Fraser glaciation (Rocky Mountains) The Pinedale (central Rocky Mountains) or Fraser (Cordilleran ice sheet) glaciation was the last of the major glaciations to appear in the Rocky Mountains in the United States. The Pinedale lasted from around 30,000 to 10,000 years ago, and was at its greatest extent between 23,500 and 21,000 years ago. This glaciation was somewhat distinct from the main Wisconsin glaciation, as it was only loosely related to the giant ice sheets and was instead composed of mountain glaciers, merging into the Cordilleran ice sheet. The Cordilleran ice sheet produced features such as glacial Lake Missoula, which broke free from its ice dam, causing the massive Missoula Floods. USGS geologists estimate that the cycle of flooding and reformation of the lake lasted an average of 55 years and that the floods occurred about 40 times over the 2,000-year period starting 15,000 years ago. Glacial lake outburst floods such as these are not uncommon today in Iceland and other places. Wisconsin glaciation The Wisconsin glacial episode was the last major advance of continental glaciers in the North American Laurentide ice sheet. At the height of glaciation, the Bering land bridge potentially permitted migration of mammals, including people, to North America from Siberia. It radically altered the geography of North America north of the Ohio River. At the height of the Wisconsin episode glaciation, ice covered most of Canada, the Upper Midwest, and New England, as well as parts of Montana and Washington. On Kelleys Island in Lake Erie or in New York's Central Park, the grooves left by these glaciers can be easily observed. In southwestern Saskatchewan and southeastern Alberta, a suture zone between the Laurentide and Cordilleran ice sheets formed the Cypress Hills, which is the northernmost point in North America that remained south of the continental ice sheets. The Great Lakes are the result of glacial scour and pooling of meltwater at the rim of the receding ice. When the enormous mass of the continental ice sheet retreated, the Great Lakes began gradually moving south due to isostatic rebound of the north shore. Niagara Falls is also a product of the glaciation, as is the course of the Ohio River, which largely supplanted the prior Teays River. With the assistance of several very broad glacial lakes, it released floods through the gorge of the Upper Mississippi River, which in turn was formed during an earlier glacial period. In its retreat, the Wisconsin episode glaciation left terminal moraines that form Long Island, Block Island, Cape Cod, Nomans Land, Martha's Vineyard, Nantucket, Sable Island, and the Oak Ridges Moraine in south-central Ontario, Canada. In Wisconsin itself, it left the Kettle Moraine. The drumlins and eskers formed at its melting edge are landmarks of the lower Connecticut River Valley. Tahoe, Tenaya, and Tioga, Sierra Nevada In the Sierra Nevada, three stages of glacial maxima, sometimes incorrectly called ice ages, were separated by warmer periods. These glacial maxima are called, from oldest to youngest, Tahoe, Tenaya, and Tioga. The Tahoe reached its maximum extent perhaps about 70,000 years ago. Little is known about the Tenaya. The Tioga was the least severe and last of the Wisconsin episode. It began about 30,000 years ago, reached its greatest advance 21,000 years ago, and ended about 10,000 years ago. Greenland glaciation In northwest Greenland, ice coverage attained a very early maximum in the LGP, around 114,000 years ago. 
After this early maximum, ice coverage was similar to today until the end of the last glacial period. Towards the end, glaciers advanced once more before retreating to their present extent. According to ice core data, the Greenland climate was dry during the LGP, with precipitation reaching perhaps only 20% of today's value. South America Mérida glaciation (Venezuelan Andes) The name Mérida glaciation is proposed to designate the alpine glaciation that affected the central Venezuelan Andes during the Late Pleistocene. Two main moraine levels have been recognized, a lower level and an upper level. The snow line during the last glacial advance was lowered substantially below the present snow line. The glaciated area in the Cordillera de Mérida included these high areas, from southwest to northeast: Páramo de Tamá, Páramo Batallón, Páramo Los Conejos, Páramo Piedras Blancas, and Teta de Niquitao. Part of the total glaciated area was in the Sierra Nevada de Mérida, and of that amount, the largest concentration was in the areas of Pico Bolívar, Pico Humboldt, and Pico Bonpland. Radiocarbon dating indicates that the moraines are older than 10,000 BP, and probably older than 13,000 BP. The lower moraine level probably corresponds to the main Wisconsin glacial advance. The upper level probably represents the last glacial advance (Late Wisconsin). Llanquihue glaciation (Southern Andes) The Llanquihue glaciation takes its name from Llanquihue Lake in southern Chile, which is a fan-shaped piedmont glacial lake. On the lake's western shores, large moraine systems occur, of which the innermost belong to the LGP. Llanquihue Lake's varves are a node point in southern Chile's varve geochronology. During the last glacial maximum, the Patagonian ice sheet extended over the Andes from about 35°S to Tierra del Fuego at 55°S. The western part appears to have been very active, with wet basal conditions, while the eastern part was cold-based. Cryogenic features such as ice wedges, patterned ground, pingos, rock glaciers, palsas, soil cryoturbation, and solifluction deposits developed in unglaciated extra-Andean Patagonia during the last glaciation, but not all these reported features have been verified. The area west of Llanquihue Lake was ice-free during the last glacial maximum, and had sparsely distributed vegetation dominated by Nothofagus. Valdivian temperate rain forest was reduced to scattered remnants on the western side of the Andes. See also Glacial history of Minnesota Glacial lake outburst flood Glacial period Penultimate Glacial Period Pleistocene, which includes: Pleistocene megafauna Plio-Pleistocene Late Pleistocene extinctions Quaternary glaciation Sea level rise Stone Age Timeline of glaciation Valparaiso Moraine Notes References Further reading External links Pielou, E. C. After the Ice Age: The Return of Life to Glaciated North America (University of Chicago Press: 1992) National Atlas of the USA: Wisconsin Glaciation in North America: Present state of knowledge Ice ages Pleistocene Pleistocene events Quaternary events Glaciology of the United States Glaciology
0.763045
0.998945
0.76224
Family trees of the Norse gods
These are family trees of the Norse gods showing kin relations among gods and other beings in Nordic mythology. Each family tree gives an example of relations according principally to Eddic material; however, precise links vary between sources. In addition, some beings are identified with one another by some sources and scholars. Key Æsir are indicated with boldface Vanir are indicated with italics Other beings such as jötnar and humans are indicated with standard font. Æsir Vanir Angrboða and Loki Diversity in belief While the above family trees are based principally on Eddic material, it is widely accepted that the Eddas do not represent the worldview of all Nordic, or more widely Germanic, heathens. Terry Gunnell has similarly challenged the concept of all Germanic pagans throughout the Viking Age believing in a single, universal pantheon of gods that all lived in Asgard and were ruled by Odin. Cultural exchange of both ideas and practices occurred across the soft cultural boundaries with neighbouring peoples from broad cultural groups such as Celts, Sámi, Baltic peoples, and, particularly later on, Christians. Geographical variation in religious practices and beliefs was also seen, which together with external influence made the belief systems dynamic, changing over time from the Nordic Bronze Age into the Viking age. In the Early Medieval period, Odin was principally a god of the warrior elite; however, due to his close association with skalds, whose poetry was preserved in works such as the Prose Edda and Heimskringla, he is highly represented in extant sources on Nordic pre-Christian religion. Snorri Sturluson also seems to have a preference for the aristocratic-centred cosmology as opposed to the views more likely held by the wider population. The rise to prominence of male, war-oriented gods such as Odin, relative to protective female gods with a closer association to fertility and watery sites, has been proposed to have taken place around 500 CE, coinciding with the development of an expansionist aristocratic military class in southern Scandinavia. Very rarely in the Eddic stories are the gods described as forming a large family, instead typically acting individually or in groups of three. Gunnell puts forward the idea that the stories did not originate in the same cultural environment, but instead were collected over a wide geographic area and later compiled. This variation may be the cause of the apparent conflicts between sources, such as over which female god is most closely associated with Odin; Gunnell suggests these traditions never formed a single unified system. He further puts forward the idea that Odinic myths centred on hierarchical assemblies and feasts originated in, and reflected, the halls of the elite, while the rural population would be more familiar with tales regarding Freyr and Thor; these two gods have a significantly more prominent position than Odin in Icelandic and Norwegian place names, sagas and Landnámabók. Gunnell suggests Freyr, whose cult was centred in Uppland in Sweden, as another figure who acts more as an allfather than Odin, based on his diverse roles in farming, ruling and warfare. Gunnell further argues that in stories regarding Thor, he is typically highly independent, requiring little aid from other figures. He notes that Thor would fit well into the role of a chief god, being associated with trees, high-seat pillars and rain, and is called upon for help at sea and against Christian missionaries. 
Some sources, such as the prologue to the Prose Edda, suggest that Thor was viewed by some as the father of Odin, and it has been argued that Thor was known in Northern Europe prior to the arrival of the cult of Odin, and thus would not have been originally viewed there as his son. It has been argued that Odin began to increasingly incorporate elements from subordinated gods and took on a role as the centre of a family that became depicted as living together. This conception, more akin to the Olympian pantheon, may have been facilitated by large things (assemblies) in which a diversity of peoples assembled, each potentially favouring an individual god. See also Anglo-Saxon royal genealogies Horses of the Æsir List of Germanic deities Norse cosmology Notes References Bibliography Primary Secondary Norse mythology Norse deities Norse mythology
0.769229
0.990852
0.762192
Work (human activity)
Work or labor (or labour in British English) is the intentional activity people perform to support the needs and wants of themselves, others, or a wider community. In the context of economics, work can be viewed as the human activity that contributes (along with other factors of production) towards the goods and services within an economy. Work is fundamental to all societies but can vary widely within and between them, from gathering natural resources by hand to operating complex technologies that substitute for physical or even mental effort by many human beings. All but the simplest tasks also require specific skills, equipment or tools, and other resources, such as material for manufacturing goods. Cultures and individuals across history have expressed a wide range of attitudes towards work. Outside of any specific process or industry, humanity has developed a variety of institutions for situating work in society. As humans are diurnal, they work mainly during the day. Besides objective differences, one culture may organize or attach social status to work roles differently from another. Throughout history, work has been intimately connected with other aspects of society and politics, such as power, class, tradition, rights, and privileges. Accordingly, the division of labour is a prominent topic across the social sciences as both an abstract concept and a characteristic of individual cultures. Some people have also engaged in critique of work and expressed a wish to abolish it, e.g. Paul Lafargue in his book The Right to Be Lazy. Related terms include occupation and job; related concepts are job title and profession. Description Work can take many different forms, as varied as the environments, tools, skills, goals, and institutions around a worker. This term refers to the general activity of performing tasks, whether they are paid or unpaid, formal or informal. Work encompasses all types of productive activities, including employment, household chores, volunteering, and creative pursuits. It is a broad term that encompasses any effort or activity directed towards achieving a particular goal. Because sustained effort is a necessary part of many human activities, what qualifies as work is often a matter of context. Specialization is one common feature that distinguishes work from other activities. For example, a sport is a job for a professional athlete who earns their livelihood from it, but a hobby for someone playing for fun in their community. An element of advance planning or expectation is also common, such as when a paramedic provides medical care while on duty and fully equipped rather than performing first aid off-duty as a bystander in an emergency. Self-care and basic habits like personal grooming are also not typically considered work. While a later gift, trade, or payment may retroactively affirm an activity as productive, this can exclude work like volunteering or activities within a family setting, like parenting or housekeeping. In some cases, the distinction between work and other activities is simply a matter of common sense within a community. However, an alternative view is that labeling any activity as work is somewhat subjective, as Mark Twain expressed in the "whitewashed fence" scene of The Adventures of Tom Sawyer. History Humans have varied their work habits and attitudes over time. Hunter-gatherer societies vary their "work" intensity according to the seasonal availability of plants and the periodic migration of prey animals. 
The development of agriculture led to more sustained work practices, but work still changed with the seasons, with intense sustained effort during harvests (for example) alternating with less focused periods such as winters. In the early modern era, Protestantism and proto-capitalism emphasized the moral and personal advantages of hard work. The periodic re-invention of slavery encouraged more consistent work activity in the working class, and capitalist industrialization intensified demands on workers to keep up with the pace of machines. Restrictions on the hours of work and the ages of workers followed, with worker demands for time off increasing, but modern office work retains traces of expectations of sustained, concentrated work, even in affluent societies. Kinds of work There are several ways to categorize and compare different kinds of work. In economics, one popular approach is the three-sector model or variations of it. In this view, an economy can be separated into three broad categories: Primary sector, which extracts food, raw materials, and other resources from the environment Secondary sector, which manufactures physical products, refines materials, and provides utilities Tertiary sector, which provides services and helps administer the economy In complex economies with high specialization, these categories are further subdivided into industries that produce a focused subset of products or services. Some economists also propose additional sectors such as a "knowledge-based" quaternary sector, but this division is neither standardized nor universally accepted. Another common way of contrasting work roles is ranking them according to a criterion, such as the amount of skill, experience, or seniority associated with a role. The progression from apprentice through journeyman to master craftsman in the skilled trades is one example with a long history and analogs in many cultures. Societies also commonly rank different work roles by perceived status, but this is more subjective and goes beyond clear progressions within a single industry. Some industries may be seen as more prestigious than others overall, even if they include roles with similar functions. At the same time, a wide swathe of roles across all industries may be afforded more status (e.g. managerial roles) or less (like manual labor) based on characteristics such as a job being low-paid or dirty, dangerous and demeaning. Other social dynamics, like how labor is compensated, can even exclude meaningful tasks from a society's conception of work. For example, in modern market-economies where wage labor or piece work predominates, unpaid work may be omitted from economic analysis or even cultural ideas of what qualifies as work. At a political level, different roles can fall under separate institutions where workers have qualitatively different power or rights. In the extreme, the least powerful members of society may be stigmatized (as in untouchability) or even violently forced (via slavery) into performing the least desirable work. Complementary to this, elites may have exclusive access to the most prestigious work, largely symbolic sinecures, or even a "life of leisure". Unusual Occupations In the diverse world of work, there exist some truly bizarre and unusual occupations that often defy conventional expectations. These unique jobs showcase the creativity and adaptability of humans in their pursuit of livelihood. Workers Individual workers require sufficient health and resources to succeed in their tasks. 
Physiology As living beings, humans require a baseline of good health, nutrition, rest, and other physical needs in order to reliably exert themselves. This is particularly true of physical labor that places direct demands on the body, but even largely mental work can cause stress from problems like long hours, excessive demands, or a hostile workplace. Particularly intense forms of manual labor often lead workers to develop physical strength necessary for their job. However, this activity does not necessarily improve a worker's overall physical fitness like exercise, due to problems like overwork or a small set of repetitive motions. In these physical jobs, maintaining good posture or movements with proper technique is also a crucial skill for avoiding injury. Ironically, white-collar workers who are sedentary throughout the workday may also suffer from long-term health problems due to a lack of physical activity. Training Learning the necessary skills for work is often a complex process in its own right, requiring intentional training. In traditional societies, know-how for different tasks can be passed to each new generation through oral tradition and working under adult guidance. For work that is more specialized and technically complex, however, a more formal system of education is usually necessary. A complete curriculum ensures that a worker in training has some exposure to all major aspects of their specialty, in both theory and practice. Equipment and technology Tool use has been a central aspect of human evolution and is also an essential feature of work. Even in technologically advanced societies, many workers' toolsets still include a number of smaller hand-tools, designed to be held and operated by a single person, often without supplementary power. This is especially true when tasks can be handled by one or a few workers, do not require significant physical power, and are somewhat self-paced, like in many services or handicraft manufacturing. For other tasks needing large amounts of power, such as in the construction industry, or involving a highly-repetitive set of simple actions, like in mass manufacturing, complex machines can carry out much of the effort. The workers present will focus on more complex tasks, operating controls, or performing maintenance. Over several millennia, invention, scientific discovery, and engineering principles have allowed humans to proceed from creating simple machines that merely redirect or amplify force, through engines for harnessing supplementary power sources, to today's complex, regulated systems that automate many steps within a work process. In the 20th century, the development of electronics and new mathematical insights led to the creation and widespread adoption of fast, general-purpose computers. Just as mechanization can substitute for the physical labor of many human beings, computers allow for the partial automation of mental work previously carried out by human workers, such as calculations, document transcription, and basic customer service requests. Research and development of related technologies like machine learning and robotics continues into the 21st century. Beyond tools and machines used to actively perform tasks, workers benefit when other passive elements of their work and environment are designed properly. This includes everything from personal items like workwear and safety gear to features of the workspace itself like furniture, lighting, air quality, and even the underlying architecture. 
In society Organizations Even if workers are personally ready to perform their jobs, coordination is required for any effort outside of individual subsistence to succeed. At the level of a small team working on a single task, only cooperation and good communication may be necessary. As the complexity of a work process increases though, requiring more planning or more workers focused on specific tasks, a reliable organization becomes more critical. Economic organizations often reflect social thought common to their time and place, such as ideas about human nature or hierarchy. These unique organizations can also be historically significant, even forming major pillars of an economic system. In European history, for instance, the decline of guilds and rise of joint-stock companies goes hand-in-hand with other changes, like the growth of centralized states and capitalism. In industrialized economies, labor unions are another significant organization. In isolation, a worker that is easily replaceable in the labor market has little power to demand better wages or conditions. By banding together and interacting with business owners as a corporate entity, the same workers can claim a larger share of the value created by their labor. While a union does require workers to sacrifice some autonomy in relation to their coworkers, it can grant workers more control over the work process itself in addition to material benefits. Institutions The need for planning and coordination extends beyond individual organizations to society as a whole too. Every successful work project requires effective resource allocation to provide necessities, materials, and investment (such as equipment and facilities). In smaller, traditional societies, these aspects can be mostly regulated through custom, though as societies grow, more extensive methods become necessary. These complex institutions, however, still have roots in common human activities. Even the free markets of modern capitalist societies rely fundamentally on trade, while command economies, such as in many communist states during the 20th century, rely on a highly bureaucratic and hierarchical form of redistribution. Other institutions can affect workers even more directly by delimiting practical day-to-day life or basic legal rights. For example, a caste system may restrict families to a narrow range of jobs, inherited from parent to child. In serfdom, a peasant has more rights than a slave but is attached to a specific piece of land and largely under the power of the landholder, even requiring permission to physically travel outside the land-holding. How institutions play out in individual workers' lives can be complex too; in most societies where wage-labor predominates, workers possess equal rights by law and mobility in theory. Without social support or other resources, however, the necessity of earning a livelihood may force a worker to cede some rights and freedoms in fact. Values Societies and subcultures may value work in general, or specific kinds of it, very differently. When social status or virtue is strongly associated with leisure and opposed to tedium, then work itself can become indicative of low social rank and be devalued. In the opposite case, a society may hold strongly to a work ethic where work itself is seen as virtuous. For example, German sociologist Max Weber hypothesized that European capitalism originated in a Protestant work ethic, which emerged with the Reformation. 
Many Christian theologians appeal to the Old Testament's Book of Genesis in regard to work. According to Genesis 1, human beings were created in the image of God, and according to Genesis 2, Adam was placed in the Garden of Eden to "work it and keep it". Dorothy L. Sayers has argued that "work is the natural exercise and function of man – the creature who is made in the image of his Creator." Likewise, John Paul II said that by his work, man shares in the image of his creator. Christian theologians see the fall of man as profoundly affecting human work. In Genesis 3:17, God said to Adam, "cursed is the ground because of you; in pain you shall eat of it all the days of your life". Leland Ryken pointed out that, because of the fall, "many of the tasks we perform in a fallen world are inherently distasteful and wearisome." Christian theologians hold that, through the fall, work has become toil, but John Paul II says that work is a good thing for man in spite of this toil, and that "perhaps, in a sense, because of it", because work is something that corresponds to man's dignity and through it, he achieves fulfilment as a human being. The fall also means that a work ethic is needed. As a result of the fall, work has become subject to the abuses of idleness on the one hand, and overwork on the other. Drawing on Aristotle, Ryken suggests that the moral ideal is the golden mean between the two extremes of being lazy and being a workaholic. Some Christian theologians also draw on the doctrine of redemption to discuss the concept of work. Oliver O'Donovan said that although work is a gift of creation, it is "ennobled into mutual service in the fellowship of Christ." Pope Francis is critical of the hope that technological progress might eliminate or diminish the need for work: "the goal should not be that technological progress increasingly replace human work, for this would be detrimental to humanity", and McKinsey consultants suggest that work will change, but not end, as a result of automation and the increasing adoption of artificial intelligence. For some, work may hold a spiritual value in addition to any secular notions. Especially in some monastic or mystical strands of several religions, simple manual labor may be held in high regard as a way to maintain the body, cultivate self-discipline and humility, and focus the mind. Current issues The contemporary world economy has brought many changes, overturning some previously widespread labor issues. At the same time, some longstanding issues remain relevant, and other new ones have emerged. One issue that continues despite many improvements is slave labor and human trafficking. Though ideas about universal rights and the economic benefits of free labor have significantly diminished the prevalence of outright slavery, it continues in lawless areas, or in attenuated forms on the margins of many economies. Another difficulty, which has emerged in most societies as a result of urbanization and industrialization, is unemployment. While the shift from a subsistence economy usually increases the overall productivity of society and lifts many out of poverty, it removes a baseline of material security from those who cannot find employment or other support. Governments have tried a range of strategies to mitigate the problem, such as improving the efficiency of job matching, conditionally providing welfare benefits or unemployment insurance, or even directly overriding the labor market through work-relief programs or a job guarantee. 
Since a job forms a major part of many workers' self-identity, unemployment can have severe psychological and social consequences beyond the financial insecurity it causes. One more issue, which may not directly interfere with the functioning of an economy but can have significant indirect effects, is when governments fail to account for work occurring out-of-view from the public sphere. This may be important, uncompensated work occurring every day in private life; or it may be criminal activity that involves clear but furtive economic exchanges. By ignoring or failing to understand these activities, economic policies can have counter-intuitive effects and cause strains on the community and society. Child labour For various reasons, such as cheap labour, the poor economic situation of deprived classes, the weakness of laws and legal supervision, and migration, child labour is widely observed in different parts of the world. According to the World Bank, the global rate of child labour decreased from 25% to 10% between the 1960s and the early years of the 21st century. Nevertheless, given that the world's population also increased, the total number of child labourers remains high, with UNICEF and the ILO acknowledging that an estimated 168 million children aged 5–17 worldwide were involved in some form of child labour in 2013. Some scholars, like Jean-Marie Baland and James A. Robinson, suggest that any labour by children aged 18 years or less is wrong, since this encourages illiteracy, inhumane work and lower investment in human capital. In other words, there are moral and economic reasons that justify a blanket ban on labour from children aged 18 years or less, everywhere in the world. On the other hand, some scholars, like Christiaan Grootaert and Kameel Ahmady, believe that child labour is a symptom of poverty. If laws ban most lawful work that enables the poor to survive, the informal economy, illicit operations and underground businesses will thrive. Workplace See also In modern market-economies: Career Employment Job guarantee Labour economics Profession Trade union Volunteering Wage slavery Workaholic Labor issues: Annual leave Informal economy Job strain Karoshi Labor rights Leave of absence Minimum wage Occupational safety and health Paid time off Sick leave Unemployment Unfree labor Unpaid work Working poor Workplace safety standards Related concepts: Critique of work Effects of overtime Ergonomics Flow (psychology) Helping behavior Occupational burnout Occupational stress Post-work society Problem solving Refusal of work References Employment Labour economics Sociological terminology
0.766709
0.994103
0.762187
Present tense
The present tense is a grammatical tense whose principal function is to locate a situation or event in the present time. The present tense is used for actions which are happening now. In order to explain and understand present tense, it is useful to imagine time as a line on which the past tense, the present and the future tense are positioned. The term present tense is usually used in descriptions of specific languages to refer to a particular grammatical form or set of forms; these may have a variety of uses, not all of which will necessarily refer to present time. For example, in the English sentence "My train leaves tomorrow morning", the verb form leaves is said to be in the present tense, even though in this particular context it refers to an event in future time. Similarly, in the historical present, the present tense is used to narrate events that occurred in the past. There are two common types of present tense form in most Indo-European languages: the present indicative (the combination of present tense and indicative mood) and the present subjunctive (the combination of present tense and subjunctive mood). The present tense is mainly classified into four parts or subtenses. Simple present: The simple present tense is employed in a sentence to represent an action or event that takes place regularly in the present. Present perfect: The present perfect tense is utilized for events that begin in the past and continue to the moment of speaking, or to express the result of a past situation. Present continuous: The present continuous tense is used to describe an action that is happening right now. Present perfect continuous: The present perfect continuous tense is used for an action that began in the past and is still continuing at the moment of speaking. Use The present indicative of most verbs in modern English has the same form as the infinitive, except for the third-person singular form, which takes the ending -[e]s. The verb be has the forms am, is, are. For details, see English verbs. For the present subjunctive, see English subjunctive. A number of multi-word constructions exist to express combinations of the present tense with particular aspects. The basic form of the present tense is called the simple present; there are also constructions known as the present progressive (or present continuous) (e.g. am writing), the present perfect (e.g. have written), and the present perfect progressive (e.g. have been writing). Use of the present tense does not always imply the present time. In particular, the present tense is often used to refer to future events (I am seeing James tomorrow; My train leaves at 3 o'clock this afternoon). This is particularly the case in condition clauses and many other adverbial subordinate clauses: If you see him,...; As soon as they arrive... There is also the historical present, in which the present tense is used to narrate past events. For details of the uses of present tense constructions in English, see Uses of English verb forms. Hellenic languages Modern Greek present indicative tense In Modern Greek, the present tense is used in a similar way to the present tense in English and can represent the present continuous as well. As with some other conjugations in Greek, some verbs in the present tense accept different (but equivalent) forms of use for the same person. What follows are examples of present tense conjugation in Greek for the verbs βλέπω (see), τρώω (eat) and αγαπώ (love). Romance languages The Romance languages are derived from Latin, and in particular western Vulgar Latin. As a result, their usages and forms are similar. 
Latin present indicative tense The Latin present tense can be translated as progressive or simple present. Here are examples of the present indicative tense conjugation in Latin. French present indicative tense In French, the present tense is used similarly to that of English. Below is an example of present tense conjugation in French. The present indicative is commonly used to express the present continuous. For example, Jean mange may be translated as John eats or John is eating. To emphasise the present continuous, expressions such as "en train de" may be used. For example, Jean est en train de manger may be translated as John is eating or John is in the middle of eating. On est en train de chercher un nouvel appartement may be translated as We are looking for a new apartment or We are in the process of finding a new apartment. Italian present indicative tense In Italian, the present tense is used similarly to that of English. What follows is an example of present indicative tense conjugation in Italian. Portuguese and Spanish present indicative tense The present tenses of Portuguese and Spanish are similar in form, and are used in similar ways. What follows are examples of the present indicative conjugation in Portuguese. There follow examples of the corresponding conjugation in Spanish. Slavic languages Bulgarian present indicative tense In Bulgarian, the present indicative tense of imperfective verbs is used in a very similar way to the present indicative in English. It can also be used as a present progressive. Below is an example of present indicative tense conjugation in Bulgarian. *Archaic, no infinitive in the modern language. Macedonian present tense The present tense in Macedonian is expressed using imperfective verbs. The following table shows the conjugation of the verbs write (пишува/pišuva), speak (зборува/zboruva), want (сака/saka) and open (отвaра/otvara). Sinitic languages In Wu Chinese, unlike other Sinitic languages (Varieties of Chinese), some tenses can be marked, including the present tense. For instance, in Suzhounese and Old Shanghainese, a dedicated particle is used. The particle is placed at the end of a clause, and when a tense is referenced, the word order switches to SOV. A sentence of this kind would be analysed as carrying the perfective aspect in Standard Mandarin, whereas it is analysed as the present tense in contemporary Shanghainese, where the particle has undergone lenition. See also Grammatical aspect Tense–aspect–mood Tense confusion References Grammatical tenses
0.768047
0.992367
0.762184
Women in Africa
The culture, evolution, and history of women who were born in, live in, and are from the continent of Africa reflect the evolution and history of the African continent itself. Numerous short studies regarding women's history in African nations have been conducted. Many studies focus on the historic roles and status of women in specific countries and regions, such as Egypt, Ethiopia, Morocco, Nigeria, Lesotho, and sub-Saharan Africa. Recently, scholars have begun to focus on the evolution of women's status throughout the history of Africa using less common sources, such as songs from Malawi, weaving techniques in Sokoto, and historical linguistics. The status of women in Africa is varied across nations and regions. For example, Rwanda is the only country in the world where women hold more than half the seats in parliament (51.9% as of July 2019), while Morocco has only one female minister in its cabinet. Significant efforts towards gender equality have been made through the creation of the African Charter on Human and Peoples' Rights, which encourages member states to end discrimination and violence against women. With the exception of Morocco and Burundi, all African states have adopted this charter. However, despite these strides towards equality, women still face various issues related to gender inequality, such as disproportionate levels of poverty and lower levels of education, poor health and nutrition, lack of political power, limited workforce participation, gender-based violence, female genital mutilation, and child marriage. History of African women The study of African women's history emerged as a field relatively soon after African history became a widely respected academic subject. Historians such as Jan Vansina and Walter Rodney forced Western academia to acknowledge the existence of precolonial African societies and states in the wake of the African independence movements of the 1960s, although they mainly focused on men's history. Ester Boserup, a scholar of historical economics, published her groundbreaking book, Woman's Role in Economic Development, in 1970. This book illustrated the central role women had played in the history of Africa as economic producers and how those systems had been disrupted by colonialism. By the 1980s, scholars had picked up threads of African women's history across the continent, for example, George Brooks' 1976 study of women traders in precolonial Senegal, Margaret Jean Hays' 1976 study of how economic change in colonial Kenya affected Luo women, and Kristin Mann's 1985 study on marriage in Nigeria. Over time, historians have debated the role and status of women in precolonial vs. colonial society, explored how women have dealt with changing forms of oppression, examined how phenomena like domesticity became gendered, unearthed women's roles in national struggles for independence, and even argued that the category of "woman" in some cases cannot be applied in precolonial contexts. Women have been shown to be essential historical, economic and social actors in practically every region of Africa for centuries. Culture In the home From the 1940s until Morocco's declaration of independence from the tutelage of France in 1956, Moroccan women lived in family units that were "enclosed households" or harems. The tradition of the harem lifestyle for women gradually ended upon Morocco's independence from France in 1956. Women in Southern Rhodesia in the 1940s and early 1950s were not educated in Western domestic lifestyles. 
Women's clubs began to emerge, in which women aimed to educate one another on domestic living and hygiene. Helen Mangwende led the movement in Southern Rhodesia and founded the FAWC (Federation of African Women Clubs). This group had over 700 members in 1950. The traditional division of labour in Senegal saw Senegalese women as responsible for household tasks such as cooking, cleaning, and childcare. They were also responsible for a large share of agricultural work, including weeding and harvesting for common crops such as rice. In recent decades, economic change and urbanization have led many young men to migrate to cities like Dakar. Rural women have become increasingly involved in managing village forestry resources and operating millet and rice mills. In society Gender discrimination was solidified across the continent during the colonial era. In the pre-colonial period, women held chieftaincies in their own right, and some tribes even had traditions to pass dynastic rights to exclusively male titles to royal descendants through the matrilineal line (e.g., Asanteman, Balobedu, Ijawland, Wolof kingdoms). Colonialism eroded the power of these chieftaincies and traditions, and thereafter reinforced what was by then an already ascendant patriarchy. This was met with fierce opposition, most famously in the case of the Abeokuta women's revolt in Nigeria. Following independence, sovereign states solidified the gender norms and class structures inherited from their colonial predecessors, as both the first and second generations of African administrations failed to restore women's traditional powers. This led to more opposition, and over the course of the past couple of decades there has been a significant improvement in the situation. Titled females throughout Africa's history include Fatim Beye, Ndoye Demba and Ndate Yalla Mbodj of Senegal, Moremi, Idia, Amina, Orompoto, Nana Asma'u and Efunroye Tinubu of Nigeria, Yaa Asantewaa of Ghana, Yennenga of Burkina Faso, Hangbe of Benin, Makeda, Zawditu and Embet Ilen of Ethiopia and Eritrea, Nandi of South Africa and Hatshepsut of Egypt. All are hailed as inspirations for contemporary African women. Many of Africa's contemporary titled women are members of the African Queens and Women Cultural Leaders Network, a voluntary organization. In literature Notable African writers have focused in their work on issues specifically concerning women in Africa, including Nawal El Saadawi (in books such as Woman at Point Zero and The Hidden Face of Eve), Flora Nwapa (Efuru), Ama Ata Aidoo (Anowa, Changes: A Love Story), and Buchi Emecheta (The Bride Price, The Slave Girl, The Joys of Motherhood). Education Sub-Saharan Africa Although sub-Saharan African countries have made considerable strides in providing equal access to education for boys and girls, 23% of girls do not receive a primary education. Factors such as a girl's social class and her mother's education heavily influence her ability to attain an education. Without easy access to schools, mothers are often the first and perhaps only form of education that a girl may receive. In Côte d'Ivoire, girls are 35 times more likely to attend secondary school if their father graduated from college. With 40% of girls getting married before the age of 18 in sub-Saharan Africa, girls are often forced to drop out of school to start families. 
Early marriage reinforces the cultural belief that educating daughters is a waste of resources because parents will not receive any economic benefit once their daughter is married to another family. This leads to the phenomenon known as son preference, where families choose to send their sons to school rather than their daughters because of the economic benefit that educated sons could afford the family. In addition, girls who do attend school tend to attend schools of lower quality. Low-quality schools are characterized by their lack of course offerings and weak preparation for the workforce. Another issue in education systems is the segregation of school subjects by gender. Girls are more likely to take domestic science and biology courses, whereas boys are more likely to take mathematics, chemistry, engineering, and vocational training. According to the UNESCO Institute for Statistics, 58.8% of women in the region were literate in 2018. However, literacy rates within sub-Saharan Africa vary widely, from a 14% female literacy rate in Chad to 96% in the Seychelles. South Africa According to Rowena Martineau's analysis of the educational disparities between men and women in South Africa, women have been historically overlooked within the education system. Among the barriers women face in receiving an education are that their education is less prioritized than their brothers', that sexual assault is a common fear and a widespread occurrence, and that social pressure to marry and start a family hinders their opportunity to become educated. Furthermore, women choose to study nursing and teaching above any other profession, which excludes them from entering higher-paying jobs in STEM and further contributes to gender inequality. Sierra Leone Since the founding of Sierra Leone in 1787, the women in Sierra Leone have been a major influence in the political and economic development of the nation. They have also played an important role in the education system, founding schools and colleges, with some, such as Hannah Benka-Coker, being honoured with the erection of a statue for her contributions, and Lati Hyde-Forster, the first woman to graduate from Fourah Bay College, being honoured with a doctor of civil laws degree by the University of Sierra Leone. Angola In Angola, groups like the Organization of Angolan Women were founded in order to provide easier access to education and voting. The organization also advocated the passing of anti-discrimination and literacy laws. North Africa The seven countries—Algeria, Egypt, Libya, Morocco, Sudan, Tunisia, and Western Sahara—that make up North Africa have unique educational environments because of their relative wealth and strong Islamic faith. Gender norms and roles are very strictly defined to protect women's honor and modesty, which has inadvertently become a barrier to women receiving an education equal to men's, as women are expected to stay at home and raise a family. These gender expectations devalue women's education and bar girls' access to education. As a result, North African countries such as Egypt and Morocco have higher illiteracy rates for women than other countries with similar GDPs. As in sub-Saharan Africa, women are disproportionately over-represented in the professions of teaching, medicine, and social welfare. Gender stereotypes are further reinforced by the fact that only 20% of women are part of the labor force. 
This creates a negative cycle wherein women are expected to stay at home, barring them from further educational opportunities, and creating barriers for women to gain the education and skills necessary to find gainful employment. Morocco Morocco's female literacy rate is 65%, which is still significantly lower than North Africa's female literacy rate of 73%. Moroccan women live under a strong framework of acceptable gender roles and expectations. Agnaou's study in 2004 found that for 40% of illiterate women, the greatest obstacle for women to become literate were their parents. Due to societal views of "literacy" and "education" as masculine, there is no strong policy push to educate women in Morocco. There have been various literacy campaigns run by the government such as the creation of the Adult Literacy Directorate in 1997 and the National Education and Training Charter. These literacy campaigns have had varying success in reducing illiteracy due to limited funding, lack of human resources, and cultural inertia. Politics North Africa Algeria Algeria is regarded as a relatively liberal nation and the status of women reflects this. Unlike other countries in the region, equality for women is enshrined in Algerian laws and the constitution. They can vote and run for political positions. Libya Since independence, Libyan leaders have been committed to improving the condition of women but within the framework of Arabic and Islamic values. Central to the revolution of 1969 was the empowerment of women and removal of inferior status. Niger In Niger, many of the laws adopted by the government of Niger to protect the rights of Nigerien women are often based on Muslim beliefs. Sahrawi Arab Democratic Republic Women in the Sahrawi Arab Democratic Republic are women who were born in, who live in, or are from the Sahrawi Arab Democratic Republic (SADR) in the region of the Western Sahara. In Sahrawi society, women share responsibilities at every level of its community and social organization. Article 41 of the Constitution of the Sahrawi Arab Democratic Republic ensures that the state will pursue "the promotion of women and [their] political, social and cultural participation, in the construction of society and the country's development". West Africa Benin The state of the rights of women in Benin has improved markedly since the restoration of democracy and the ratification of the Constitution, and the passage of the Personal and Family Code in 2004, both of which overrode various traditional customs that systematically treated women unequally. Still, inequality and discrimination persist. Polygamy and forced marriage are illegal but they still occur. Nigeria The freedom and right for women in Africa to participate in leadership and electoral processes differs by country and even ethnic groups within the same nation. For example, in Nigeria, women in Southern Nigeria had the right to vote as early as 1950 and contested for seats in the 1959 Nigerian elections, whereas women in Northern Nigeria could not vote or contest until 1976. Central Africa Democratic Republic of the Congo Women in the Democratic Republic of the Congo have not attained a position of full equality with men, with their struggle continuing to this day. 
Although the Mobutu regime paid lip service to the important role of women in society, and although women enjoy some legal rights (e.g., the right to own property and the right to participate in the economic and political sectors), custom and legal constraints still limit their opportunities. From 1939 to 1943, over 30% of adult Congolese women in Stanleyville (now Kisangani) were so registered. The taxes they paid constituted the second largest source of tax revenue for Stanleyville. Rwanda Claire Wallace, Christian Haerpfer and Pamela Abbott write that, in spite of Rwanda having the highest representation of women in parliament in the world, there are three major gender issues in Rwandan society: the workloads of women, access to education and gender-based violence. They conclude that the attitudes to women in Rwanda's political institutions has not filtered through to the rest of Rwandan society, and that for men, but not women, there are generational differences when it comes to gender-based attitudes. Other countries In 2023, the European Investment Bank and the European Commission allocated €10 million to female African entrepreneurs and firms that provide services or generate excellent jobs for women in the bioeconomy. This is accomplished through loan lines with Compagnie Financière Africaine COFINA in Côte d'Ivoire (focused on cocoa, cashews, and food crops), Senegal (for cereals and horticulture), and First Capital Bank Limited in Zambia to boost sustainable agricultural output. East Africa Seychelles Women in Seychelles enjoy the same legal, political, economic, and social rights as men. Seychellois society is essentially matriarchal. Mothers tend to be dominant in the household, controlling most current expenditures and looking after the interests of the children. Unwed mothers are the societal norm, and the law requires fathers to support their children. Men are important for their earning ability, but their domestic role is relatively peripheral. Older women can usually count on financial support from family members living at home or contributions from the earnings of grown-up children. South Sudan The women of the Republic of South Sudan had also been active in liberation causes, by providing food and shelters to soldiers, caring for children and caring for wounded heroes and heroines during their political struggle prior to the country's independence. An example was their formation of the Katiba Banat or women's battalion. Sudan Sudan is a developing nation that faces many challenges in regards to gender inequality. Freedom House gave Sudan the lowest possible ranking among repressive regimes during 2012. South Sudan received a slightly higher rating but it was also rated as "not free". In the 2013 report of 2012 data, Sudan ranks 171st out of 186 countries on the Human Development Index (HDI). Sudan also is one of very few countries that are not a signatory on the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW). Despite all of this, there have been positive changes in regards to gender equality in Sudan. As of 2012, women comprise 24.1% of the National Assembly of Sudan. Uganda The roles of Ugandan women are clearly subordinate to those of men, despite the substantial economic and social responsibilities of women in Uganda's many traditional societies. Women are taught to accede to the wishes of their fathers, brothers, husbands, and sometimes other men as well, and to demonstrate their subordination to men in most areas of public life. 
Even in the 1980s, women in rural areas of Buganda were expected to kneel when speaking to a man. At the same time, however, women shouldered the primary responsibilities for childcare and subsistence cultivation, and in the twentieth century, women have made substantial contributions to cash-crop agriculture. Workforce participation Women in Africa are highly active whether that is within the sphere of formal or informal work. However, within the formal sphere, African women hold only 40% of formal jobs which has led to a labor gender gap of 54%. According to Bandara's analysis in 2015, this labor gender gap is equivalent to a US$255 billion loss in economic growth because women cannot fully contribute to economic growth. In addition, women earn on average two-thirds of their male colleague's salaries. Some of the challenges African women face in finding formal work are their general lack of education and technical skills, weak protection against gender discriminatory hiring, and double burden of work with the expectation to continue housekeeping and childbearing. Most of Africa's food is produced by women, but each female farmer produces significantly less food than male farmers because female farmers do not have access to the same land, fertilizers, technology, and credit to achieve maximum efficiency. For example, women in Ethiopia and Ghana produce 26% and 17% less food than their male counterparts as a result of resource inequality. The Senegalese government's rural development agency aims to organize village women and involve them more actively in the development process. Women play a prominent role in village health committees and prenatal and postnatal programs. In urban areas, cultural change has led to women entering the labour market as office and retail clerks, domestic workers and unskilled workers in textile mills and tuna-canning factories. Non-governmental organizations are also active in promoting women's economic opportunities in Senegal. Micro-financing loans for women's businesses have improved the economic situation of many. In May 2011, in Djibouti, Director of Gender for the Department of Women and Family Choukri Djibah launched the project SIHA (Strategic Initiative for the Horn of Africa), which is designed to support and reinforce the economic capacity of women in Djibouti, funded with a grant from the European Union of 28 Million Djibouti francs. Notable women Ellen Johnson Sirleaf of Liberia was Africa's first woman president. Since Sirleaf's election to office, Joyce Banda of Malawi, Ameenah Gurib of Mauritius and Sahle-Work Zewde of Ethiopia have also risen to the presidencies of their respective countries. Some other political leaders (in no particular order) are Sylvie Kinigi of Burundi, Luisa Diogo of Mozambique, Agathe Uwilingiyimana of Rwanda, Maria das Neves of Sao Tome and Principe, Aminata Toure of Senegal and Saara Kuugongelwa of Namibia. Each has held the office of prime minister of her country. In addition to political leaders, African nations boast many female artists, writers, and activists. 
For example: Alda do Espirito Santo, renowned writer and lyricist of Sao Tome and Principe's national anthem; South African singer and anti-apartheid activist Miriam Makeba; Nigerian novelist and speaker Chimamanda Ngozi Adichie; Ethiopian entrepreneur and SoleRebels founder Bethlehem Alemu; Nigerien architect Mariam Kamara; Kenyan environmental activist Wanjira Mathai; Nigerian US-based philanthropist Efe Ukala; and Nigerian architect, creative entrepreneur, public speaker and author Tosin Oshinowo. In Kenya, Wamuyu Gakuru played a role in the Mau Mau rebellion as a fighter for Kenyan independence. Gender-based violence The 2003 Maputo Protocol of the African Union addressed gender-based violence against women, defined as meaning "all acts perpetrated against women which cause or could cause them physical, sexual, psychological, and economic harm, including the threat to take such acts; or to undertake the imposition of arbitrary restrictions on or deprivation of fundamental freedoms in private or public life in peace time and during situations of armed conflicts or of war...". Legal protections for sexual assault In Benin, enforcement of the law against rape, the punishment for which can be up to five years in prison, is hampered by corruption, ineffective police work, and fear of social stigma. Police incompetence results in most sexual offenses being reduced to misdemeanors. Domestic violence is widespread, with penalties of up to three years in prison, but women are reluctant to report cases and authorities are reluctant to intervene in what are generally considered private matters. Female genital mutilation In some African cultures, female genital mutilation is seen as a traditional passage into womanhood and a way to purify a woman's body. There are four types of female genital mutilation: Type 1 involves partial or total removal of the clitoris, Type 2 additionally removes the labia minora, Type 3 (infibulation) narrows the vaginal opening by cutting and stitching the labia to form a covering seal, and Type 4 covers all other harmful procedures performed on the female genitalia for non-medical purposes. The procedure is very painful and often practiced without proper medical equipment and hygiene procedures, leading to a high risk of infection and chronic pain. Female genital mutilation is practiced in Senegal, Mauritania, Mali, Nigeria, Niger, Chad, Egypt, Cameroon, Sudan, Ethiopia, Somalia, Kenya, Uganda, Central African Republic, Ghana, Togo, Benin, Burkina Faso, and Sierra Leone, among others. Femicide Femicide is broadly defined as the "intentional murder of women," which includes honor killings, dowry killings, sexual orientation hate crimes, and female infanticide. According to a 2013 study by Abrahams, South Africa has the fourth-highest rate of female homicide, with 12.9 per 100,000 women murdered by intimate partners annually. At a rate of 7.5 per 100,000, women in South Africa are four times more likely to be murdered with a gun than women in the United States. 
See also History of Africa Women's history Women and agriculture in Sub-Saharan Africa Women in Christianity Women in Islam African Women in Mathematics Association Daughters of Africa North Africa Women in Algeria Women in Egypt Women in Libya Women in Mauritania Women in Morocco Women in Sudan Women in Tunisia West Africa Women in Benin Women in Burkina Faso Women in Cape Verde Women in the Gambia Women in Ghana Women in Guinea Women in Guinea-Bissau Women in the Ivory Coast Women in Liberia Women in Mali Women in Niger Women in Nigeria Women in Saint Helena Women in Senegal Women in Sierra Leone Women in Togo Central Africa Women in Burundi Women in Cameroon Women in the Central African Republic Women in Chad Women in the Republic of the Congo Women in the Democratic Republic of the Congo Women in Equatorial Guinea Women in Gabon Women in Rwanda Women in São Tomé and Príncipe East Africa Women in Comoros Women in Djibouti Women in Eritrea Women in Ethiopia Women in Kenya Women in Mauritius Women in Mayotte Women in Réunion Women in the Republic of Seychelles Women in Somalia Women in Somaliland Women in South Sudan Women in Tanzania Women in Uganda Southern Africa Women in Angola Women in Botswana Women in Eswatini Women in Lesotho Women in Madagascar Women in Malawi Women in Mozambique Women in Namibia Women in South Africa Women in Zambia Women in Zimbabwe References External links UN Women Africa Overall status of women in Africa, United Nations University Women in Society (South Africa) Dimandja, Agnes Loteta, 30 July 2004 Nwoko-Ud, Chichi, "Chebe Stressed the Role of Women in African Society" African women
Delenda Est
"Delenda Est" is a science fiction short story by American writer Poul Anderson, part of his Time Patrol series. It was originally published in The Magazine of Fantasy and Science Fiction of December 1955. It was first reprinted in the first edition of the "Time Patrol" series collection Guardians of Time (Ballantine Books; September 1960). It was also a selection in the alternate history anthology Worlds of Maybe (Thomas Nelson; 1970) edited by Robert Silverberg. The title alludes to the Latin phrase Carthago delenda est ("Carthage must be destroyed") from the Third Punic War. Plot summary Renegade time travelers meddle in the outcome of the Second Punic War, bringing about the premature deaths of Publius Cornelius Scipio and Scipio Africanus at the Battle of Ticinus in 218 BC and so creating a new timeline in which Hannibal destroys Rome in 210 BC. That made Western European civilization come to be based on a Celtic-Carthaginian cultural synthesis (rather than a Greco-Roman, as in actual history). This civilization discovered the Western Hemisphere and created certain inventions (such as the steam engine) long before the corresponding events happened in actual history (partly since there was nothing corresponding to the fall of the Roman Empire), but overall technological progress has been slow since most developments are arrived at through ad hoc tinkering, and there is no scientific methodology of empirically testing rigorous theories. At the time of the story, Britain (Brittys), Ireland, France (Gallia) and Spain (Celtan) are under Celtic control, and the Celts have also colonized North America (Affalon). Italy (Cimmeria) is under Germanic domination, Switzerland and Austria exist within Helvetia, Lithuania (Littorn) controls Scandinavia, northern Germany and much of Eastern Europe, and a Carthaginian successor empire (Carthagalann) dominates much of Northern Africa. The Han (Chinese) Empire controls China and Taiwan and encompasses Korea, Japan and eastern Siberia. Punjab comprises western India, Pakistan and Afghanistan. The major global powers are Hinduraj, which is centered on India but also encompasses Southeast Asia, Indonesia, New Guinea and Australasia, and Huy Braseal, which controls much of South America. Technology is at roughly a 19th-century level, and transport is reliant on the steam engine although rudimentary biplanes exist for the purposes of combat. Christianity, Judaism and Islam do not exist in this polytheistic world. There is greater gender equality in this world, but slavery has also survived though it is not connected with any particular race or ethnicity. Manse Everard, a 20th-century Time Patrol agent, finds himself in the new timeline, in Catavellaunan (approximately New York), facing the moral dilemma. If he returns to the past before the events that led to Carthaginian victory and restores his original timeline by negating the assassinations and military upset that have led to the new alternative timeline, he would wipe out its billions of inhabitants when the course of human history reverts to his own. Similar themes in other works John Barnes's The Timeline Wars series has the same basic assumption: an alternative history timeline starting from Hannibal winning the Second Punic War. However, Anderson assumes that the Carthaginians would not have been able to fill the Roman niche and create something similar to the Roman Empire and that it would have been the Celts who would have become central to the successor culture. 
Conversely, Barnes assumes that the victorious Carthaginians would have succeeded in creating a world empire, an extremely cruel, aggressive and oppressive one, which is the undoubted villain of his books. See also Hannibal's Children and its sequel The Seven Hills References External links 1955 short stories Short stories by Poul Anderson Alternate history short stories Alternate history novels set in ancient Rome Cultural depictions of Hannibal Cultural depictions of Scipio Africanus
Post-Fordism
Post-Fordism is a term used to describe the growth of new production methods defined by flexible production, the individualization of labor relations and fragmentation of markets into distinct segments, after the demise of Fordist production. It was widely advocated by French Marxist economists and American labor economists in the 1970s and 1980s. Definitions of the nature and scope of post-Fordism vary considerably and are a matter of debate among scholars. Fordism was the dominant model of production organization from the 1910s to the 1960s, which led to the massive growth of the American manufacturing sector and the establishment of the US as an industrial powerhouse. It was characterized by the assembly-line model, perfected by Henry Ford. Some post-Fordist theorists argue that the end of the superiority of the US economy is explained by the end of Fordism. Post-Fordist consumption is marked by increased consumer choice and identity. As such, retailers seek to collect consumer data through increased information technology to understand trends and changing demand. Production networks, therefore, demand greater flexibility in their workforce, leading to more varied job roles for employees and more individualized labour relations, and more flexible modes of production to react to changing consumer demand, such as lean manufacturing. Overview Post-Fordism is characterized by the following attributes: Small-batch production Economies of scope Specialized products and jobs New information technologies Emphasis on types of consumers in contrast to the previous emphasis on social class The rise of the service and the white-collar worker The feminisation of the work force Consumption and production Post-Fordist consumption is marked by individualism and consumer choice. Patterns of consumption are oriented toward lifestyle and identity and consumption is a key part of the culture. The consumer has become a 'global dictator' who determines the organization of production, and retailers seek to process consumer data to react to patterns of consumer demand. As such, there is a strong link between post-Fordism and the rise of information technology. Post-Fordist production prioritizes increased flexibility, in particular lean production and just-in-time production methods. This creates an economic geography of greater interaction between suppliers and manufacturers. For labor markets, this has necessitated a shift from a strict division of labor toward workers being more adaptable to different roles in production; however, it has also led to more involvement in and knowledge of the labor process and greater autonomy over work. There is an increase in non-standard forms of employment. Theoretical approaches According to geographer Ash Amin, post-Fordism is commonly divided into three schools of thought: the regulation school, flexible specialization, and neo-Schumpeterianism. Regulation school The regulation approach (also called the neo-Marxist or French Regulation School) was designed to address the paradox of how capitalism has both a tendency towards crisis, change and instability as well as an ability to stabilize institutions, rules, and norms. The theory is based on two key concepts. "Regimes of Accumulation" refer to systems of production and consumption, such as Fordism and post-Fordism. "Modes of Regulation" refer to the written and unwritten laws of society which control the Regime of Accumulation and determine its form. 
According to regulation theory, every Regime of Accumulation will reach a crisis point at which the Mode of Regulation will no longer support it, and society will be forced to find new rules and norms, forming a new Mode of Regulation. This will begin a new Regime of Accumulation, which will eventually reach a crisis, and so forth. Proponents of Regulation theory include Michel Aglietta, Robert Boyer, Bob Jessop, and Alain Lipietz. Flexible specialization Proponents of the flexible specialization approach (also known as the neo-Smithian approach) believe that fundamental changes in the international economy, especially in the early 1970s, forced firms to switch from mass production to a new tactic known as flexible specialization. Instead of producing generic goods, firms now found it more profitable to produce diverse product lines targeted at different groups of consumers, appealing to their sense of taste and fashion. Instead of investing huge amounts of money in the mass production of a single product, firms now needed to build intelligent systems of labor and machines that were flexible and could quickly respond to the whims of the market. The technology initially associated with flexible production was numerical control, which was developed in the United States in the 1950s; however, CNC, developed in Japan, later replaced it. The development of the computer was very important to the technology of flexible specialization. Not only could the computer change the characteristics of the goods being produced, but it could also analyze data to order supplies and produce goods in accordance with current demand. These types of technology made adjustments simple and inexpensive, making smaller specialized production runs economically feasible. Flexibility and skill in labor were also important. The workforce was now divided into a skill-flexible core and a time-flexible periphery. Flexibility and variety in the skills and knowledge of the core workers and the machines used for production allowed for the specialized production of goods. Modern just-in-time manufacturing is one example of a flexible approach to production. Likewise, the production structure began to change on the sector level. Instead of a single firm manning the assembly line from raw materials to finished products, the production process became fragmented as individual firms specialized in their areas of expertise. As evidence for this theory of specialization, proponents claim that Marshallian "industrial districts," or clusters of integrated firms, have developed in places like Silicon Valley, Jutland, Småland, and several parts of Italy. Neo-Schumpeterianism The neo-Schumpeterian approach to post-Fordism is based upon the theory of Kondratiev waves (also known as long waves). The theory holds that a "techno-economic paradigm" (Perez) characterizes each long wave. Fordism was the techno-economic paradigm of the fourth Kondratiev wave, and post-Fordism is thus the techno-economic paradigm of the fifth, which is dominated by information and communication technology. Notable neo-Schumpeterian thinkers include Carlota Perez and Christopher Freeman, as well as Michael Storper and Richard Walker. Post-Fordist theory in Italy In Italy, post-Fordism has been theorised by the long wave of workerism or autonomia. Major thinkers of this tendency include the Swiss-Italian economist Christian Marazzi, Antonio Negri, Paolo Virno, Carlo Vercellone, and Maurizio Lazzarato. 
Marazzi's Capital and Language takes as its starting point the fact that the extreme volatility of financial markets is generally attributed to the discrepancy between the "real economy" (that of material goods produced and sold) and the more speculative monetary-financial economy. But this distinction has long ceased to apply in the post-Fordist New Economy, in which both spheres are structurally affected by language and communication. In Capital and Language Marazzi argues that the changes in financial markets and the transformation of labor into immaterial labor (that is, its reliance on abstract knowledge, general intellect, and social cooperation) are two sides of a new development paradigm: financialization through and thanks to the rise of the new economy. In terms of the development of the "technical and political class-composition", the crisis of the post-Fordist era explains at once the "high points of capitalist development" and how new technological tools (the money form, linguistic conventions, capital and language) develop and work together. Changes from Fordism to post-Fordism Post-Fordism brought on new ways of looking at consumption and production. The saturation of key markets brought on a turn against mass consumption and a pursuit of higher living standards. This shift brought a change in how the market was viewed from a production standpoint. Rather than being viewed as a mass market to be served by mass production, consumers began to be viewed as different groups pursuing different goals who could be better served with small batches of specialized goods. Mass markets became less important while markets for luxury, custom, or positional goods became more significant. Production became less homogeneous and standardized and more diverse and differentiated as organizations and economies of scale were replaced with organizations and economies of scope. The changes in production with the shift from Fordism to post-Fordism were accompanied by changes in the economy, politics, and prominent ideologies. In the economic realm, post-Fordism brought the decline of regulation and production by the nation-state and the rise of global markets and corporations. Mass marketing was replaced by flexible specialization, and organizations began to emphasize communication more than command. The workforce changed with an increase in internal marketing, franchising, and subcontracting and a rise in part-time, temporary, self-employed, and home workers. Politically, class-based political parties declined and social movements based on region, gender, or race increased. Mass unions began to vanish and were instead replaced by localized plant-based bargaining. Cultural and ideological changes included the rise in individualist modes of thought and behavior and a culture of entrepreneurialism. Following the shift in production and acknowledging the need for more knowledge-based workers, education became less standardized and more specialized. Prominent ideologies that arose included fragmentation and pluralism in values, post-modern eclecticism, and populist approaches to culture. Examples Italy One of the primary examples of specialized post-Fordist production took place in a region known as the Third Italy. The First Italy included the areas of large-scale mass production, such as Turin, Milan, and Genoa, and the Second Italy described the undeveloped South. 
The Third Italy, however, was where clusters of small firms and workshops developed in the 1970s and 1980s in the central and northeast regions of the country. Regions of the Third Italy included Tuscany, Umbria, Marche, Emilia-Romagna, Veneto, Friuli, and Trentino-Alto Adige/Südtirol. Each region specialized in a range of loosely related products and each workshop usually had five to fifty workers and often less than ten. The range of products in each region reflected the post-Fordist shift to economies of scope. Additionally, these workshops were known for producing high quality products and employing highly skilled, well-paid workers. The workshops were very design-oriented and multidisciplinary, involving collaboration between entrepreneurs, designers, engineers and workers. Japan There were several post-World War II changes in production in Japan that caused post-Fordist conditions to develop. First, there were changes to company structure, including the replacement of independent trade unions with pro-management, company-based unions; the development of a core of permanent male multi-skilled workers; and the development of a periphery of untrained temporary and part-time employees, who were mostly female. Second, after World War II, Japan was somewhat isolated because of import barriers and foreign investment restrictions, and as a result, Japan began to experiment with production techniques. Third, as imported technologies became more available, Japan began to replicate, absorb, and improve them, with many improvements deriving from modifications for local conditions. Fourth, Japan began to concentrate on the need for small-batch production and quick changeover of product lines to serve the demand for a wide range of products in a relatively small market. Because of informal price-fixing, competition was based not on price but rather on product differentiation. As a result, production became less standardized and more specialized, particularly across different companies. Fifth, Japan began to build long-term supply and subcontracting networks, which contrasted with the vertically integrated, Fordist American corporations. Sixth, because small and medium-size manufacturers produced a wide range of products, there was a need for affordable multipurpose equipment as opposed to the specialized, costly production machinery in Fordist industries in the United States. Technology for flexible production was significant in Japan and particularly necessary for smaller producers. The smaller producers also found it necessary to reduce costs. As a result, Japan became one of the main users of robots and CNC. Over time, these six changes in production in Japan were institutionalized. Criticisms The main criticism of post-Fordism asserts that post-Fordism mistakes the nature of the Fordist revolution and that Fordism was not in crisis, but was simply evolving and will continue to evolve. Other critics believe that post-Fordism does exist, but coexists with Fordism. The automobile industry has combined Fordist and post-Fordist strategies, using both mass production and flexible specialization. Ford introduced flexibility into mass production, so that Fordism could continue to evolve. Those who advocate post-Fordism, however, note that criticism that focuses primarily on flexible specialization ignores post-Fordist changes in other areas of life and that flexible specialization cannot be looked at alone when examining post-Fordism. 
Another criticism is that post-Fordism relies too heavily on the examples of the Third Italy and Japan. Some believe that Japan is neither Fordist nor post-Fordist and that vertical disintegration and mass production go hand in hand. Others argue that the new, smaller firms in Italy did not develop autonomously, but are a product of the vertical disintegration of the large Fordist firms that contracted lower value-added work to smaller enterprises. Other criticisms argue that flexible specialization is not happening on any great scale, and that smaller firms have always existed alongside mass production. Another main criticism is that we are still too much in the midst of the transition to judge whether or not there really is a new system of production. The term "post-Fordism" is gradually giving way in the literature to a series of alternative terms such as the knowledge economy, cognitive capitalism, the cognitive-cultural economy and so on. This change of vocabulary is also associated with a number of important conceptual shifts (see sections above). See also Civil society Social innovation Total quality management Notes References Baca, George (2004), "Legends of Fordism: Between Myth, History, and Foregone Conclusions," Social Analysis, 48(3): 169–178. Gielen, Pascal (2015, 3rd ed.), The Murmuring of the Artistic Multitude: Global Art, Politics and Post-Fordism. Amsterdam: Valiz. Production economics Economic systems Production and manufacturing
Utopian and dystopian fiction
Utopian and dystopian fiction are subgenres of science fiction that explore social and political structures. Utopian fiction portrays a setting that agrees with the author's ethos, having various attributes of another reality intended to appeal to readers. Dystopian fiction offers the opposite: the portrayal of a setting that completely disagrees with the author's ethos. Some novels combine both genres, often as a metaphor for the different directions humanity can take depending on its choices, ending up with one of two possible futures. Both utopias and dystopias are commonly found in science fiction and other types of speculative fiction. More than 400 utopian works in the English language were published prior to the year 1900, with more than a thousand others appearing during the 20th century. This increase is partially associated with the rise in popularity of science fiction and young adult fiction more generally, but also with larger-scale social change that brought awareness of broader societal or global issues, such as technology, climate change, and population growth. Some of these trends have created distinct subgenres such as ecotopian fiction, climate fiction, young adult dystopian novels, and feminist dystopian novels. Subgenres Utopian fiction The word utopia was first used in direct context by Thomas More in his 1516 work Utopia. The word utopia resembles both the Greek words outopos ("no place") and eutopos ("good place"). More's book, written in Latin, sets out a vision of an ideal society. As the title suggests, the work presents an ambiguous and ironic projection of the ideal state. The whimsical nature of the text can be confirmed by the narrator of Utopia's second book, Raphael Hythloday. The Greek root of the name "Hythloday" suggests an 'expert in nonsense'. An earlier example of a Utopian work from classical antiquity is Plato's The Republic, in which he outlines what he sees as the ideal society and its political system. Later, Tommaso Campanella was influenced by Plato's work and wrote The City of the Sun (1623), which describes a modern utopian society built on equality. Other examples include Samuel Johnson's The History of Rasselas, Prince of Abissinia (1759) and Samuel Butler's Erewhon (1872), which uses an anagram of "nowhere" as its title. This, like much of utopian literature, can be seen as satire; Butler inverts illness and crime, with punishment for the former and treatment for the latter. One example of the utopian genre's meaning and purpose is described in Fredric Jameson's Archaeologies of the Future (2005), which addresses many utopian varieties defined by their program or impulse. Dystopian fiction A dystopia is a society characterized by a focus on that which is contrary to the author's ethos, such as mass poverty, public mistrust and suspicion, a police state or oppression. Most authors of dystopian fiction explore at least one reason why things are that way, often as an analogy for similar issues in the real world. Dystopian literature serves to "provide fresh perspectives on problematic social and political practices that might otherwise be taken for granted or considered natural and inevitable". Some dystopias claim to be utopias. Samuel Butler's Erewhon can be seen as a dystopia because of the way sick people are punished as criminals while thieves are "cured" in hospitals, which the inhabitants of Erewhon see as natural and right, i.e., utopian (as mocked in Voltaire's Candide). 
Dystopias usually extrapolate elements of contemporary society, and thus can be read as political warnings. Eschatological literature may portray dystopias. Examples The 1921 novel We by Yevgeny Zamyatin portrays a post-apocalyptic future in which society is entirely based on logic and modeled after mechanical systems. George Orwell was influenced by We when he wrote Nineteen Eighty-Four (published in 1949), a novel about Oceania, a state at perpetual war, its population controlled through propaganda. Big Brother and the daily Two Minutes Hate set the tone for an all-pervasive self-censorship. Aldous Huxley's 1932 novel Brave New World started as a parody of utopian fiction, and projected into the year 2540 industrial and social changes he perceived in 1931, leading to industrial success by a coercively persuaded population divided into five castes. Karin Boye's 1940 novel Kallocain is set in a totalitarian world state where a drug is used to control the individual's thoughts. Anthony Burgess' 1962 novel A Clockwork Orange is set in a future England that has a subculture of extreme youth violence, and details the protagonist's experiences with the state intent on changing his character at their whim. Margaret Atwood's The Handmaid's Tale (1985) describes a future United States governed by a totalitarian theocracy, where women have no rights, and Stephen King's The Long Walk (1979) describes a similar totalitarian scenario, but depicts the participation of teenage boys in a deadly contest. Examples of young-adult dystopian fiction include (notably all published after 2000) The Hunger Games series by Suzanne Collins, the Divergent series by Veronica Roth, The Power of Five series by Anthony Horowitz, The Maze Runner series by James Dashner, and the Uglies series by Scott Westerfeld. Video games often include dystopias as well; notable examples include the Fallout series, BioShock, and the later games of the Half-Life series. History of dystopian fiction The history of dystopian literature can be traced back to the reaction to the French Revolution of 1789 and the prospect that mob rule would produce dictatorship. Until the late 20th century, it was usually anti-collectivist. Dystopian fiction emerged as a response to the utopian. Its early history is traced in Gregory Claeys' Dystopia: A Natural History (Oxford University Press, 2017). The beginning of technological dystopian fiction can be traced back to "The Machine Stops" by E. M. Forster (1879–1970). M. Keith Booker states that "The Machine Stops," We and Brave New World are "the great defining texts of the genre of dystopian fiction, both in [the] vividness of their engagement with real-world social and political issues and in the scope of their critique of the societies on which they focus." Another important figure in dystopian literature is H.G. Wells, whose work The Time Machine (1895) is also widely seen as a prototype of dystopian literature. Wells' work draws on the social structure of the 19th century, providing a critique of the British class structure at the time. After World War II, even more dystopian fiction was produced. These works of fiction were interwoven with political commentary: the end of World War II brought about fears of an impending Third World War and a consequent apocalypse. Modern dystopian fiction draws not only on topics such as totalitarian governments and anarchism, but also pollution, global warming, climate change, health, the economy and technology. 
Modern dystopian themes are common in the young adult (YA) genre of literature. Combinations Many works combine elements of both utopias and dystopias. Typically, an observer from our world will journey to another place or time and see one society the author considers ideal and another representing the worst possible outcome. Usually, the point is that our choices may lead to a better or worse potential future world. Ursula K. Le Guin's Always Coming Home fulfills this model, as does Marge Piercy's Woman on the Edge of Time. In Starhawk's The Fifth Sacred Thing there is no time-travelling observer. However, her ideal society is invaded by a neighbouring power embodying evil repression. In Aldous Huxley's Island, in many ways a counterpoint to his better-known Brave New World, the fusion of the best parts of Buddhist philosophy and Western technology is threatened by the "invasion" of oil companies. As another example, in the "Unwanteds" series by Lisa McMann, a paradox occurs where the outcasts from a complete dystopia are treated to absolute utopia. They believe that those who were privileged in said dystopia were the unlucky ones. In another literary model, the imagined society journeys between elements of utopia and dystopia over the course of the novel or film. At the beginning of The Giver by Lois Lowry, the world is described as a utopia. However, as the book progresses, the world's dystopian aspects are revealed. Jonathan Swift's Gulliver's Travels is also sometimes linked with both utopian and dystopian literatures, because it shares the general preoccupation with ideas of good and bad societies. Of the countries Lemuel Gulliver visits, Brobdingnag and Country of the Houyhnhnms approach a utopia; the others have significant dystopian aspects. Ecotopian fiction In ecotopian fiction, the author posits either a utopian or dystopian world revolving around environmental conservation or destruction. Danny Bloom coined the term "cli-fi" in 2006, with a Twitter boost from Margaret Atwood in 2011, to cover climate change-related fiction, but the theme has existed for decades. Novels dealing with overpopulation, such as Harry Harrison's Make Room! Make Room! (made into the movie Soylent Green), were popular in the 1970s, reflecting the widespread concern with the effects of overpopulation on the environment. The novel Nature's End by Whitley Strieber and James Kunetka (1986) posits a future in which overpopulation, pollution, climate change, and resulting superstorms, have led to a popular mass-suicide political movement. Some other examples of ecological dystopias are depictions of Earth in the films Wall-E and Avatar. While eco-dystopias are more common, a small number of works depicting what might be called eco-utopia, or eco-utopian trends, have also been influential. These include Ernest Callenbach's Ecotopia, an important 20th century example of this genre. Kim Stanley Robinson has written several books dealing with environmental themes, including the Mars trilogy. Most notably, however, his Three Californias Trilogy contrasted an eco-dystopia with an eco-utopia and a sort of middling-future. Robinson has also edited an anthology of short ecotopian fiction, called Future Primitive: The New Ecotopias. Another notable work of Robinson's is New York 2140, which focuses on society in the aftermath of a major flooding event and can be seen through both a utopian and a dystopian lens. There are a few dystopias that have an "anti-ecological" theme. 
These are often characterized by a government that is overprotective of nature or a society that has lost most modern technology and struggles for survival. A fine example of this is the novel Riddley Walker. Feminist utopias Another subgenre is feminist utopias and the overlapping category of feminist science fiction. According to the author Sally Miller Gearhart, "A feminist utopian novel is one which a. contrasts the present with an envisioned idealized society (separated from the present by time or space), b. offers a comprehensive critique of present values/conditions, c. sees men or male institutions as a major cause of present social ills, d. presents women as not only at least the equals of men but also as the sole arbiters of their reproductive functions." Utopias have explored the ramification of gender being either a societal construct or a hard-wired imperative. In Mary Gentle's Golden Witchbreed, gender is not chosen until maturity, and gender has no bearing on social roles. In contrast, Doris Lessing's The Marriages Between Zones Three, Four and Five (1980) suggests that men's and women's values are inherent to the sexes and cannot be changed, making a compromise between them essential. In My Own Utopia (1961) by Elisabeth Mann Borgese, gender exists but is dependent upon age rather than sex: genderless children mature into women, some of whom eventually become men. Marge Piercy's novel Woman on the Edge of Time keeps human biology, but removes pregnancy and childbirth from the gender equation by resorting to assisted reproductive technology while allowing both women and men the nurturing experience of breastfeeding. Utopic single-gender worlds or single-sex societies have long been one of the primary ways to explore implications of gender and gender-differences. One solution to gender oppression or social issues in feminist utopian fiction is to remove men, either showing isolated all-female societies as in Charlotte Perkins Gilman's Herland, or societies where men have died out or been replaced, as in Joanna Russ's A Few Things I Know About Whileaway, where "the poisonous binary gender" has died off. In speculative fiction, female-only worlds have been imagined to come about by the action of disease that wipes out men, along with the development of a technological or mystical method that allows female parthenogenetic reproduction. The resulting society is often shown to be utopian by feminist writers. Many influential feminist utopias of this sort were written in the 1970s (Gaétan Brulotte & John Phillips, Encyclopedia of Erotic Literature, "Science Fiction and Fantasy", p. 1189, CRC Press, 2006); the most often studied examples include Joanna Russ's The Female Man and Suzy McKee Charnas's The Holdfast Chronicles. Such worlds have been portrayed most often by lesbian or feminist authors; their use of female-only worlds allows the exploration of female independence and freedom from patriarchy. The societies may not necessarily be lesbian, or sexual at all; Herland (1915) by Charlotte Perkins Gilman is a famous early example of a sexless society. Charlene Ball writes in Women's Studies Encyclopedia that use of speculative fiction to explore gender roles has been more common in the United States than in Europe and elsewhere. Utopias imagined by male authors have generally included equality between sexes rather than separation. 
Cultural impact Étienne Cabet's work Travels in Icaria caused a group of followers, the Icarians, to leave France in 1848, and travel to the United States to start a series of utopian settlements in Texas, Illinois, Iowa, California, and elsewhere. These groups lived in communal settings and lasted until 1898. During the first decades of the 20th century, utopian science fiction became extremely popular in Russia, less because it was a new and up-and-coming genre than because citizens wanted to fantasize about the future. During the Cold War, however, utopian science fiction became exceptionally prominent among Soviet leaders. Many citizens of Soviet Russia became dependent on this type of literature because it represented an escape from a real world that was far from ideal at the time. Utopian science fiction allowed them to fantasize about how satisfactory it would be to live in a "perfect" world. See also The City of the Sun List of dystopian literature List of dystopian films List of dystopian comics List of utopian literature Social science fiction Utopian language References Bibliography Applebaum, Robert. Literature and Utopian Politics in Seventeenth-Century England. Cambridge, Cambridge University Press, 2002. Bartkowski, Frances. Feminist Utopias. Lincoln, NE, University of Nebraska Press, 1991. Booker, M. Keith. The Dystopian Impulse in Modern Literature. Westport, CT, Greenwood Press, 1994. Booker, M. Keith. Dystopian Literature: A Theory and Research Guide. Westport, CT, Greenwood Press, 1994. Claeys, Gregory. Dystopia: A Natural History. Oxford, Oxford University Press, 2017. Ferns, Chris. Narrating Utopia: Ideology, Gender, Form in Utopian Literature. Liverpool, Liverpool University Press, 1999. Gerber, Richard. Utopian Fantasy. London, Routledge & Kegan Paul, 1955. Gottlieb, Erika. Dystopian Fiction East and West: Universe of Terror and Trial. Montreal, McGill-Queen's Press, 2001. Haschak, Paul G. Utopian/Dystopian Literature. Metuchen, NJ, Scarecrow Press, 1994. Jameson, Fredric. Archaeologies of the Future: The Desire Called Utopia and Other Science Fictions. London, Verso, 2005. Kessler, Carol Farley. Daring to Dream: Utopian Fiction by United States Women Before 1950. Syracuse, NY, Syracuse University Press, 1995. Mohr, Dunja M. Worlds Apart: Dualism and Transgression in Contemporary Female Dystopias. Jefferson, NC, McFarland, 2005. Tod, Ian, and Michael Wheeler. Utopia. London, Orbis, 1978. Szweykowski, Zygmunt. Twórczość Bolesława Prusa [The Art of Bolesław Prus], 2nd ed., Warsaw, Państwowy Instytut Wydawniczy, 1972. External links Dystopias and Utopias, The Encyclopedia of Science Fiction The Society for Utopian Studies Portal for Dystopian related Media Dystopia Tracker Modernist Utopias, BBC Radio 4 discussion with John Carey, Steve Connor & Laura Marcus (In Our Time, Mar. 10, 2005) The Dystopia genre, discusses the current popularity of the dystopian genre. Science fiction genres Science fiction themes Film genres Speculative fiction
Chronological snobbery
Chronological snobbery is an argument that the thinking, art, or science of an earlier time is inherently inferior to that of the present, simply by virtue of its temporal priority, or the belief that since civilization has advanced in certain areas, people of earlier periods were less intelligent. The term was coined by C. S. Lewis and Owen Barfield, and first mentioned by Lewis in his 1955 autobiographical work, Surprised by Joy. Chronological snobbery is a form of appeal to novelty. Explanation As Barfield explains it, it is the belief that "intellectually, humanity languished for countless generations in the most childish errors on all sorts of crucial subjects, until it was redeemed by some simple scientific dictum of the last century." The subject came up between them when Barfield had converted to Anthroposophy and was seeking to get Lewis (an atheist at the time) to join him. One of Lewis's objections was that religion was simply outdated, and in Surprised by Joy (chapter 13, pp. 207–208), he describes how he came to see this view as fallacious. One manifestation of chronological snobbery is the general use of the word "medieval" to mean "backwards". See also Declinism Genetic fallacy Historian's fallacy Myth of progress Presentism (historical analysis) Whig history References External links Chronological Snobbery at Encyclopedia Barfieldiana C. S. Lewis on Chronological Snobbery Chronological Snobbery at Summa Bergania 1950s neologisms C. S. Lewis Relevance fallacies
Historicist interpretations of the Book of Revelation
Historicism is a method of interpretation in Christian eschatology which associates biblical prophecies with actual historical events and identifies symbolic beings with historical persons or societies; it has been applied to the Book of Revelation by many writers. The Historicist view follows a straight line of continuous fulfillment of prophecy which starts in Daniel's time and goes through John of Patmos' writing of the Book of Revelation all the way to the Second Coming of Jesus Christ. One of the most influential aspects of the early Protestant historicist paradigm was the assertion that scriptural identifiers of the Antichrist were matched only by the institution of the Papacy. Of particular significance and concern were the Papal claims of authority over the Church through Apostolic Succession, and over the State through the Divine Right of Kings. When the Papacy aspired to exercise authority beyond its religious realm into civil affairs, on account of the Papal claim to be the Vicar of Christ, the institution was seen as fulfilling the more perilous biblical indicators of the Antichrist. Martin Luther wrote this view into the Smalcald Articles of 1537; the view was not novel and had been leveled at various popes throughout the centuries, even by Roman Catholic saints. It was then widely popularized in the 16th century, via sermons, drama, books, and broadside publication. The alternative methods of prophetic interpretation, Futurism and Preterism, were derived from Jesuit writings, whose Counter-Reformation efforts were aimed at opposing the interpretation of the Antichrist as the Papacy or the power of the Roman Catholic Church. Origins in Judaism and Early Church The interpreters using the historicist approach for the Book of Revelation had their origins in the Jewish apocalyptic writings, such as those in the Book of Daniel, which predicted the time between their writing and the end of the world. Throughout most of history since the predictions of the Book of Daniel, historicism has been widely used. This approach can be found in the works of Josephus, who interpreted the fourth kingdom of Daniel 2 as the Roman Empire, with a future power, the stone "not cut by human hands", that would overthrow the Romans. It is also found in the early church in the works of Irenaeus and Tertullian, who interpreted the fourth kingdom of Daniel as the Roman empire and believed that in the future it was going to be broken up into smaller kingdoms, as the iron mixed with clay, and in the writings of Clement of Alexandria and Jerome, as well as other well-known church historians and scholars of the early church. But it has been associated particularly with Protestantism and the Reformation. It was the standard interpretation of the Lollard movement, which was regarded as the precursor to the Protestant Reformation, and it was known as the Protestant interpretation until modern times. Antichrist Church Fathers The Church Fathers who interpreted Biblical prophecy along historicist lines included: Justin Martyr, who wrote about the Antichrist: "He whom Daniel foretells would have dominion for a time and times and an half, is even now at the door"; Irenaeus, who wrote in Against Heresies about the coming of the Antichrist: "This Antichrist shall ... devastate all things ... But then, the Lord will come from Heaven on the clouds ... for the righteous"; Tertullian, looking to the Antichrist, wrote: "He is to sit in the temple of God, and boast himself as being god. 
In our view, he is Antichrist as taught us in both the ancient and the new prophecies; and especially by the Apostle John, who says that 'already many false-prophets are gone out into the world' as the fore-runners of Antichrist"; Hippolytus of Rome, in his Treatise on Christ and Antichrist, wrote: "As Daniel also says (in the words) 'I considered the Beast, and look! There were ten horns behind it – among which shall rise another (horn), an offshoot, and shall pluck up by the roots the three (that were) before it.' And under this, was signified none other than Antichrist"; Athanasius of Alexandria clearly held to the historicist view in his many writings, writing in The Deposition of Arius: "I addressed the letter to Arius and his fellows, exhorting them to renounce his impiety.... There have gone forth in this diocese at this time certain lawless men – enemies of Christ – teaching an apostasy which one may justly suspect and designate as a forerunner of Antichrist"; Jerome wrote: "Says the apostle [Paul in the Second Epistle to the Thessalonians], 'Unless the Roman Empire should first be desolated, and antichrist proceed, Christ will not come.'" Jerome claimed that the time of the break-up of Rome, as predicted in Daniel 2, had begun even in his time. He also identifies the Little horn of Daniel 7 and 8, which "shall speak words against the Most High, and shall wear out the saints of the Most High, and shall think to change the times and the law", as the Papacy. Protestant view of the Papacy as the Antichrist Protestant Reformers, including John Wycliffe, Martin Luther, John Calvin, Thomas Cranmer, John Thomas, John Knox, Roger Williams, Cotton Mather, Jonathan Edwards, and John Wesley, as well as most Protestants of the 16th–18th centuries, felt that the Early Church had been led into the Great Apostasy by the Papacy and identified the Pope with the Antichrist. The Centuriators of Magdeburg, a group of Lutheran scholars in Magdeburg headed by Matthias Flacius, wrote the 12-volume Magdeburg Centuries to discredit the Catholic Church and lead other Christians to recognize the Pope as the Antichrist. Rather than expecting a single Antichrist to rule the earth during a future Tribulation period, Martin Luther, John Calvin, and other Protestant Reformers saw the Antichrist as a present feature in the world of their time, fulfilled in the Papacy. Like most Protestant theologians of his time, Isaac Newton believed that the Papal Office (and not any one particular Pope) was the fulfillment of the Biblical predictions about Antichrist, whose rule is prophesied to last for 1,260 years. The Protestant Reformers tended to believe that the Antichrist power would be revealed so that everyone would comprehend and recognize that the Pope is the real, true Antichrist and not the vicar of Christ. Doctrinal works of literature published by the Lutherans, the Reformed Churches, the Presbyterians, the Baptists, the Anabaptists, and the Methodists contain references to the Pope as the Antichrist, including the Smalcald Articles, Article 4 (1537), the Treatise on the Power and Primacy of the Pope written by Philip Melanchthon (1537), the Westminster Confession, Article 25.6 (1646), and the 1689 Baptist Confession of Faith, Article 26.4. In 1754, John Wesley published his Explanatory Notes Upon the New Testament, which is currently an official Doctrinal Standard of the United Methodist Church. 
In his notes on the Book of Revelation (chapter 13), Wesley commented: "The whole succession of Popes from Gregory VII are undoubtedly Antichrists. Yet this hinders not, but that the last Pope in this succession will be more eminently the Antichrist, the Man of Sin, adding to that of his predecessors a peculiar degree of wickedness from the bottomless pit." The identification of the Pope with the Antichrist was so ingrained in the Reformation era that Luther himself stated it repeatedly, and John Calvin, John Knox, Thomas Cranmer, John Wesley, and Roger Williams all wrote in similar terms about the Pope as the Antichrist. The identification of the Roman Catholic Church as the apostate power described in the Bible as the Antichrist became evident to many as the Reformation began, among them John Wycliffe, who was well known throughout Europe for his opposition to the doctrine and practices of the Catholic Church, which he believed had clearly deviated from the original teachings of the early Church and were contrary to the Bible. Wycliffe himself tells (Sermones, III. 199) how he concluded that there was a great contrast between what the Church was and what it ought to be, and saw the necessity for reform. Together with John Hus, he helped set in motion the push for ecclesiastical reform of the Catholic Church. When the Swiss Reformer Huldrych Zwingli became the pastor of the Grossmünster in Zurich in 1518, he began to preach ideas on reforming the Catholic Church. Zwingli, who was a Catholic priest before he became a Reformer, often referred to the Pope as the Antichrist. He wrote: "I know that in it works the might and power of the Devil, that is, of the Antichrist". The English Reformer William Tyndale held that while the Roman Catholic realms of that age were the empire of Antichrist, any religious organization that distorted the doctrine of the Old and New Testaments also showed the work of Antichrist. In his treatise The Parable of the Wicked Mammon, he expressly rejected the established Church teaching that looked to the future for an Antichrist to rise up, and he taught that Antichrist is a present spiritual force that will be with us until the end of the age under different religious disguises from time to time. Tyndale's translation of 2 Thessalonians, chapter 2, concerning the "Man of Lawlessness" reflected his understanding, but was significantly amended by later revisers, including the King James Bible committee, which followed the Vulgate more closely. In 1870, the newly formed Kingdom of Italy annexed the remaining Papal States, depriving the Pope of his temporal power. However, the Pope's temporal rule was restored, on a greatly diminished scale, by the Italian Fascist regime in 1929, when he became head of the Vatican City state; under Mussolini's dictatorship, Roman Catholicism became the State religion of Fascist Italy (see also Clerical fascism), and the Racial Laws were enforced to outlaw and persecute both Italian Jews and Protestant Christians, especially Evangelicals and Pentecostals. Thousands of Italian Jews and a small number of Protestants died in the Nazi concentration camps. Today, many Protestant and Restorationist denominations, such as the conservative Lutheran Churches and the Seventh-day Adventists, still officially maintain that the Papacy is the Antichrist. 
In 1988, Ian Paisley, Evangelical minister and founder of the Free Presbyterian Church of Ulster, made headlines with such a statement about Pope John Paul II. The Wisconsin Evangelical Lutheran Synod takes a similar official position regarding the Pope and the Catholic Church. Other views Some Franciscans had considered the Emperor Frederick II a positive Antichrist who would purify the Catholic Church of opulence, riches, and clergy. Some of the debated features of the Reformation's Historicist interpretations reached beyond the Book of Revelation. They included the identification of: the Antichrist (1 and 2 John); the Beast of Revelation 13; the Man of Sin, or Man of Lawlessness, of 2 Thessalonians 2; the "Little horn" of Daniel 7 and 8; the Abomination of Desolation of Daniel 9, 11, and 12; and the Whore of Babylon of Revelation 17. Seven churches The non-separatist Puritan Thomas Brightman was the first to propose a historicist interpretation of the Seven Churches of Revelation 2–3. He outlined how the seven Churches represent the seven ages of the Church of Christ. A typical historicist view of the Church of Christ spans several periods of church history, each similar to the original church, as follows: The age of Ephesus is the apostolic age. The age of Smyrna is the persecution of the Church through AD 313. The age of Pergamus is the compromised Church lasting until AD 500. The age of Thyatira is the rise of the papacy to the Reformation. The age of Sardis is the age of the Reformation. The age of Philadelphia is the age of evangelism. The age of Laodicea represents liberal churches in a "present day" context. The age of Laodicea is typically identified as occurring in the same time period as the expositor. Brightman viewed the age of Laodicea as the England of his day. In the Millerite movement, each church represented a dateable period of ecclesiastical history. Thus, William Miller dated the age of Laodicea from 1798 to 1843, followed by the End of days in 1844. The Roman Catholic priest Fr. E. Berry in his commentary writes: "The seven candlesticks represent the seven churches of Asia. As noted above, seven is the perfect number which denotes universality. Hence by extension the seven candlesticks represent all churches throughout the world for all time. Gold signifies the charity of Christ which pervades and vivifies the Church." Seven seals The traditional historicist view of the Seven Seals spanned the time period from John of Patmos to Early Christendom. Protestant scholars such as Campegius Vitringa, Alexander Keith, and Christopher Wordsworth did not limit the timeframe to the 4th century. Some even extended the opening of the Seals into the early modern period. Seventh-day Adventists view the first six seals as representing events that took place during the Christian era up until 1844. Contemporary-historicists view all of Revelation as relating to John's own time, while allowing for some conjecture about the future. Seven trumpets The classical historicist view identifies the first four trumpets with the pagan invasions of Western Christendom in the 5th century AD (by the Visigoths, Vandals, Huns, and Heruli), while the fifth and sixth trumpets have been identified with the assault on Eastern Christendom by the Saracen armies and Turks during the Middle Ages. The symbolism of Revelation 6:12–13 is said by Adventists to have been fulfilled in the 1755 Lisbon earthquake, the dark day of 19 May 1780, and the Leonids meteor shower of 13 November 1833. 
Vision of Chapter 10 In the classical historicist view, the vision of the angel with the little book in Revelation 10 represents the Protestant Reformation and the printing of Bibles in the common languages. The Adventists take a unique view, applying it to the Millerite movement; the "bitterness" of the book (Rev 10:10) represents the Great Disappointment. Two witnesses The classical historicist view takes a number of different perspectives, including that the two witnesses are symbolic of two insular Christian movements such as the Waldenses or the Reformers, or of the Old Testament and the New Testament. It is usually taught that Revelation 11 corresponds to the events of the French Revolution. Beasts of Revelation Historicist views of Revelation 12–13 identify the first beast of Revelation 13 (from the sea) with pagan Rome and the Papacy, or more exclusively with the latter. In 1798, the French General Louis Alexandre Berthier exiled the Pope and took away all his authority, which was restored in 1813, destroyed again in 1870, and later restored in 1929. Adventists have taken this as fulfillment of the prophecy that the Beast of Revelation would receive a deadly wound but that the wound would be healed. They have attributed the wounding and resurgence of Revelation 13 to the papacy, referring to General Louis Berthier's capture of Pope Pius VI in 1798 and the pope's subsequent death in 1799. Adventists believe that the second beast (from the earth) symbolizes the United States of America. The "image of the beast" represents Protestant churches that form an alliance with the Papacy, and the "mark of the beast" refers to a future universal Sunday law. Both Adventists and classical historicists view the Great whore of Babylon, in Revelation 17–18, as Roman Catholicism. Number of the Beast Adventists have interpreted the number of the beast, 666, as corresponding to the title Vicarius Filii Dei of the Pope. In 1866, Uriah Smith was the first to propose the interpretation to the Seventh-day Adventist Church, setting it out in The United States in the Light of Prophecy. Prominent Adventist scholar J. N. Andrews also adopted this view. Uriah Smith maintained his interpretation in the various editions of Thoughts on Daniel and the Revelation, which was influential in the church. Various documents from the Vatican contain wording such as "Adorandi Dei Filii Vicarius, et Procurator quibus numen aeternum summam Ecclesiae sanctae dedit", which translates to "As the worshipful Son of God's Vicar and Caretaker, to whom the eternal divine will has given the highest rank of the holy Church". Although the New Testament was written in Koine Greek and the Latin phrase "Vicarius Filii Dei" does not appear in it, Adventists calculated the number from the Roman-numeral values of the title's Latin letters. Samuele Bacchiocchi, an Adventist scholar and the only Adventist to be awarded a gold medal by Pope Paul VI for the distinction of summa cum laude (Latin for "with highest praise"), has documented the pope using such a title. However, Bacchiocchi's general conclusion regarding the interpretation of Vicarius Filii Dei is that he, together with many current Adventist scholars, refrains from relying solely on the calculation of papal names for the number 666.
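As traditionally presented by its proponents, the arithmetic simply assigns each letter of the Latin title its Roman-numeral value and sums the results. The snippet below is a minimal illustrative sketch of that letter-value tally, not code drawn from any Adventist source; it assumes the conventional treatment of U as V and counts letters with no numeral value as zero.

```python
# Illustrative sketch of the traditional letter-value tally for "Vicarius Filii Dei".
# Assumptions: U is treated as V (classical Latin orthography) and letters with no
# Roman-numeral value count as zero.
ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def letter_value(letter: str) -> int:
    """Return the Roman-numeral value of a single letter, or 0 if it has none."""
    return ROMAN_VALUES.get(letter.upper().replace("U", "V"), 0)

def title_value(title: str) -> int:
    """Sum the Roman-numeral values of every letter in the title."""
    return sum(letter_value(ch) for ch in title if ch.isalpha())

if __name__ == "__main__":
    for word in ["Vicarius", "Filii", "Dei"]:
        print(word, title_value(word))               # 112, 53, 501
    print("Total:", title_value("Vicarius Filii Dei"))  # 666
```

Run as a script, it reports 112 for "Vicarius", 53 for "Filii", and 501 for "Dei", for a total of 666, which is the sum on which the interpretation rests.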
Anacrisis Apocalypseos (1705), a commentary on The Apocalypse by Campegius Vitringa.
Commentary on the Revelation of St. John (1720), a commentary on The Apocalypse by Charles Daubuz.
The Signs of the Times (1832), a commentary on The Apocalypse by Rev. Dr. Alexander Keith.
Horae Apocalypticae (1837), a commentary on The Apocalypse by Rev. Edward Bishop Elliott.
Vindiciae Horariae (1848), twelve letters to the Rev. Dr. Keith, in reply to his strictures on the "Horae Apocalypticae", by Rev. Edward Bishop Elliott.
Lectures on the Apocalypse (1848), a commentary on The Apocalypse by Christopher Wordsworth.
The Final Prophecy of Jesus (2007), An Historicist Introduction, Analysis, and Commentary on the Book of Revelation by Oral E. Collins, Ph.D.

See also
Abomination of Desolation
Apocalypticism
Book of Daniel
Christian eschatology
Judgment day
Prophecy of Seventy Weeks
Whore of Babylon
2300 day prophecy
Historicist interpretations of the Book of Daniel
Technological revolution
A technological revolution is a period in which one or more technologies are replaced by a novel technology in a short amount of time. It is a time of accelerated technological progress characterized by innovations whose rapid application and diffusion typically cause an abrupt change in society.

Description
A technological revolution may involve material or ideological changes caused by the introduction of a device or system. It may potentially impact business management, education, social interactions, finance and research methodology, and is not limited to technical aspects. It has been shown to increase productivity and efficiency. A technological revolution often significantly changes the material conditions of human existence and has been seen to reshape culture.

A technological revolution can be distinguished from a random collection of technology systems by two features:
1. A strong interconnectedness and interdependence of the participating systems in their technologies and markets.
2. A potential capacity to greatly affect the rest of the economy (and eventually society).

On the other hand, negative consequences have also been attributed to technological revolutions. For example, the use of coal as an energy source has negative environmental impacts, including being a contributing factor to climate change and to the increase of greenhouse gases in the atmosphere, and has caused technological unemployment. Joseph Schumpeter described this contradictory nature of technological revolution as creative destruction.

The concept of technological revolution is based on the idea that technological progress is not linear but undulatory. A technological revolution can be:
A relation revolution (social relations, phones)
Sectoral (technological changes concentrated in one sector, e.g. the Green Revolution and the Commercial Revolution)
Universal (interconnected radical changes in more than one sector; a universal technological revolution can be seen as a complex of several parallel sectoral technological revolutions, e.g. the Second Industrial Revolution and the Renaissance technological revolution)

The concept of universal technological revolutions is a "contributing factor in the Neo-Schumpeterian theory of long economic waves/cycles", according to Carlota Perez, Tessaleno Devezas, Daniel Šmihula and others.

History
Some examples of technological revolutions were the Industrial Revolution in the 19th century, the scientific-technical revolution of about 1950–1960, the Neolithic Revolution, and the Digital Revolution. The distinction between universal technological revolutions and singular revolutions has been debated. One universal technological revolution may be composed of several sectoral technological revolutions (such as in science, industry, or transport).

Several universal technological revolutions have been identified during the modern era in Western culture:
Financial-agricultural revolution (1600–1740)
Industrial Revolution (1760–1840)
Technical Revolution or Second Industrial Revolution (1870–1920)
Scientific-technical revolution (1940–1970)
Information and telecommunications revolution, also known as the Digital Revolution or Third Industrial Revolution (1975–2021)
Fourth Industrial Revolution, also referred to as "the Technological Revolution", which some say is now beginning (2022– )

Comparable periods of well-defined technological revolutions in the pre-modern era are seen as highly speculative.
One such example is an attempt by Daniel Šmihula to suggest a timeline of technological revolutions in pre-modern Europe:
Indo-European technological revolution (1900–1100 BC)
Celtic and Greek technological revolution (700–200 BC)
Germano-Slavic technological revolution (300–700 AD)
Medieval technological revolution (930–1200 AD)
Renaissance technological revolution (1340–1470 AD)

Structure of technological revolution
Each revolution comprises the following engines for growth:
New cheap inputs
New products
New processes

Technological revolutions have historically been seen to focus on cost reduction. For instance, the accessibility of coal at a low cost during the Industrial Revolution allowed for iron steam engines, which led to the production of iron railways, and the growth of the internet was enabled by inexpensive microelectronics for computer development. A combination of low-cost inputs and new infrastructures is at the core of each revolution, underpinning its all-pervasive impact.

Potential future technological revolutions
Since 2000, there has been speculation about a new technological revolution which would focus on the fields of nanotechnologies, alternative fuel and energy systems, biotechnologies, genetic engineering, new materials technologies and so on. The Second Machine Age is the term adopted in a 2014 book by Erik Brynjolfsson and Andrew McAfee. The industrial development plan of Germany began promoting the term Industry 4.0. In 2019, at the World Economic Forum meeting in Davos, Japan promoted another round of advancements called Society 5.0.

The phrase Fourth Industrial Revolution was first introduced by Klaus Schwab, the executive chairman of the World Economic Forum, in a 2015 article in Foreign Affairs. Following the publication of the article, the theme of the World Economic Forum Annual Meeting 2016 in Davos-Klosters, Switzerland was "Mastering the Fourth Industrial Revolution". On October 10, 2016, the Forum announced the opening of its Centre for the Fourth Industrial Revolution in San Francisco. According to Schwab, fourth-era technologies include those that combine hardware, software, and biology (cyber-physical systems), with an emphasis on advances in communication and connectivity. Schwab expects this era to be marked by breakthroughs in emerging technologies in fields such as robotics, artificial intelligence, nanotechnology, quantum computing, biotechnology, the internet of things, the industrial internet of things (IIoT), decentralized consensus, fifth-generation wireless technologies (5G), 3D printing and fully autonomous vehicles. Jeremy Rifkin includes technologies like 5G, autonomous vehicles, the Internet of Things, and renewable energy in the Third Industrial Revolution.

Some economists do not think that technological growth will continue to the same degree it has in the past. Robert J. Gordon holds the view that today's inventions are not as radical as electricity and the internal combustion engine were. He believes that modern technology is not as innovative as others claim, and is far from creating a revolution.

List of intellectual, philosophical and technological revolutions
Pre-Industrialization
The Upper Paleolithic Revolution: the emergence of "high culture", new technologies and regionally distinct cultures (50,000–40,000 years ago).
The Neolithic Revolution (around 13,000 years ago), which formed the basis for human civilization to develop.
The Renaissance technological revolution: the set of inventions during the Renaissance period, roughly the 14th through the 16th century.
The Commercial Revolution: a period of European economic expansion, colonialism and mercantilism which lasted from approximately the 16th century until the early 18th century.
The Price Revolution: a series of economic events from the second half of the 15th century to the first half of the 17th; it refers most specifically to the high rate of inflation that characterized the period across Western Europe.
The Scientific Revolution: a fundamental transformation in scientific ideas around the 16th century.
The British Agricultural Revolution (18th century), which spurred urbanization and consequently helped launch the Industrial Revolution.

Industrialization
The First Industrial Revolution: the shift of technological, socioeconomic and cultural conditions in the late 18th century and early 19th century that began in Britain and spread throughout the world.
The Market Revolution: a change in the manual labour system originating in the Southern United States (and soon moving to the Northern United States) and later spreading to the entire world (about 1800–1900).
The Second Industrial Revolution (1871–1914).
The Green Revolution (1945–1975): the use of industrial fertilizers and new crops largely increased the world's agricultural output.
The Third Industrial Revolution: the changes brought about by computing and communication technology, starting from around 1950 with the creation of the first general-purpose electronic computers.
The Information Revolution: the economic, social and technological changes resulting from the Digital Revolution (after 1960).

See also
Accelerating change
Automation
Electrification
Kondratiev wave
Kranzberg's laws of technology
List of emerging technologies
Mass production
Machine tool
Mechanization
Post-work society
Productivity-improving technologies
Innovation
Technological change
Technological unemployment
The War on Normal People
The Future of Work and Death
Phanerozoic
The Phanerozoic is the current and the latest of the four geologic eons in the Earth's geologic time scale, covering the time period from 538.8 million years ago to the present. It is the eon during which abundant animal and plant life has proliferated, diversified and colonized various niches on the Earth's surface, beginning with the Cambrian period, when animals first developed hard shells that could be clearly preserved in the fossil record. The time before the Phanerozoic, collectively called the Precambrian, is now divided into the Hadean, Archaean and Proterozoic eons.

The time span of the Phanerozoic starts with the sudden appearance of fossilised evidence of a number of animal phyla; the evolution of those phyla into diverse forms; the evolution of plants; the evolution of fish, arthropods and molluscs; the terrestrial colonization and evolution of insects, chelicerates, myriapods and tetrapods; and the development of modern flora dominated by vascular plants. During this time span, tectonic forces which move the continents had collected them into a single landmass known as Pangaea (the most recent supercontinent), which then separated into the current continental landmasses.

Etymology
The term "Phanerozoic" was coined in 1930 by the American geologist George Halcott Chadwick (1876–1953), deriving from the Ancient Greek words φανερός (phanerós), meaning "visible", and ζωή (zōḗ), meaning "life". This is because it was once believed that life began in the Cambrian, the first period of this eon, owing to the lack of any Precambrian fossil record known at the time. However, trace fossils of booming complex life from the Ediacaran period (Avalon explosion) of the preceding Proterozoic eon have since been discovered, and the scientific consensus now agrees that complex life (in the form of placozoans and primitive sponges such as Otavia) has existed at least since the Tonian period, and that the earliest known life forms (simple prokaryotic microbial mats) appeared on the ocean floor during the earlier Archean eon.

Proterozoic–Phanerozoic boundary
The Proterozoic–Phanerozoic boundary is at 538.8 million years ago. In the 19th century, the boundary was set at the time of appearance of the first abundant animal (metazoan) fossils, but trace fossils of several hundred groups (taxa) of complex soft-bodied metazoa from the preceding Ediacaran period of the Proterozoic eon, known as the Avalon Explosion, have been identified since the systematic study of those forms started in the 1950s. The transition from the largely sessile Precambrian biota to the active mobile Cambrian biota occurred early in the Phanerozoic.

Eras of the Phanerozoic
The Phanerozoic is divided into three eras: the Paleozoic, Mesozoic and Cenozoic, which are further subdivided into 12 periods. The Paleozoic features the evolution of the three most prominent animal phyla, arthropods, molluscs and chordates, the last of which includes fish, amphibians and the fully terrestrial amniotes (synapsids and sauropsids). The Mesozoic features the evolution of crocodilians, turtles, dinosaurs (including birds), lepidosaurs (lizards and snakes) and mammals. The Cenozoic begins with the extinction of all non-avian dinosaurs, pterosaurs and marine reptiles, and features the great diversification of birds and mammals. Humans appeared and evolved during the most recent part of the Cenozoic.
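For reference, the subdivision described above and detailed in the sections that follow can be collected into a simple data structure. This is only an illustrative sketch: the variable names are invented here, and the boundary ages (in millions of years ago, Ma) are rounded to the values quoted in this article.

```python
# Phanerozoic eras and their periods, with approximate boundary ages in Ma,
# as quoted in the surrounding text. The text gives both 251 and 252 Ma for
# the Permian-Triassic boundary; 252 is used here.
PHANEROZOIC = {
    "Paleozoic": [
        ("Cambrian", 539, 485),
        ("Ordovician", 485, 444),
        ("Silurian", 444, 419),
        ("Devonian", 419, 359),
        ("Carboniferous", 359, 299),
        ("Permian", 299, 252),
    ],
    "Mesozoic": [
        ("Triassic", 252, 201),
        ("Jurassic", 201, 145),
        ("Cretaceous", 145, 66),
    ],
    "Cenozoic": [
        ("Paleogene", 66, 23),
        ("Neogene", 23, 2.58),
        ("Quaternary", 2.58, 0),
    ],
}

# Example: print the total span of each era in millions of years.
for era, periods in PHANEROZOIC.items():
    start = periods[0][1]   # oldest boundary of the era
    end = periods[-1][2]    # youngest boundary of the era
    print(f"{era}: {start}-{end} Ma, spanning about {start - end} million years")
```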
Paleozoic Era
The Paleozoic is a time in Earth's history when active complex life forms evolved, took their first foothold on dry land, and when the forerunners of all multicellular life on Earth began to diversify. There are six periods in the Paleozoic era: the Cambrian, Ordovician, Silurian, Devonian, Carboniferous and Permian.

Cambrian Period
The Cambrian is the first period of the Paleozoic Era and ran from 539 million to 485 million years ago. The Cambrian sparked a rapid expansion in the diversity of animals, in an event known as the Cambrian explosion, during which the greatest number of animal body plans evolved in a single period in the history of Earth. Complex algae evolved, and the fauna was dominated by armoured arthropods (such as trilobites and radiodontids) and, to a lesser extent, shelled cephalopods (such as orthocones). Almost all phyla of marine animals evolved in this period. During this time, the super-continent Pannotia began to break up, most of which later recombined into the super-continent Gondwana.

Ordovician Period
The Ordovician spans from 485 million to 444 million years ago. The Ordovician was a time in Earth's history in which many groups still prevalent today evolved or diversified, such as primitive nautiloids, vertebrates (then only jawless fish) and corals. This process is known as the Great Ordovician Biodiversification Event, or GOBE. Trilobites began to be replaced by articulate brachiopods, and crinoids also became an increasingly important part of the fauna. The first arthropods crept ashore to colonise Gondwana, a continent empty of animal life. A group of freshwater green algae, the streptophytes, also survived being washed ashore and began to colonize the flood plains and riparian zones, giving rise to primitive land plants. By the end of the Ordovician, Gondwana had moved from the equator to the South Pole, and Laurentia had collided with Baltica, closing the Iapetus Ocean. The glaciation of Gondwana resulted in a major drop in sea level, killing off all life that had established along its coast. Glaciation caused an icehouse Earth, leading to the Ordovician–Silurian extinction, during which 60% of marine invertebrates and 25% of families became extinct. Though one of the deadliest mass extinctions in Earth's history, the O–S extinction did not cause profound ecological changes between the periods.

Silurian Period
The Silurian spans from 444 million to 419 million years ago and saw a warming from an icehouse Earth. This period saw the mass diversification of fish, as jawless fish became more numerous and early jawed fish and freshwater species appeared in the fossil record. Arthropods remained abundant, and some groups, such as eurypterids, became apex predators in the ocean. Fully terrestrial life established itself on land, including early fungi, arachnids, hexapods and myriapods. The evolution of vascular plants (mainly spore-producing ferns such as Cooksonia) allowed land plants to gain a foothold further inland as well. During this time, there were four continents: Gondwana (Africa, South America, Australia, Antarctica, India), Laurentia (North America with parts of Europe), Baltica (the rest of Europe), and Siberia (Northern Asia).

Devonian Period
The Devonian spans from 419 million to 359 million years ago. Also informally known as the "Age of the Fish", the Devonian features a huge diversification of fish, such as the jawless conodonts and ostracoderms, as well as jawed fish such as the armored placoderms (e.g.
Dunkleosteus), the spiny acanthodians and early bony fish. The Devonian also saw the first appearance of modern fish groups such as the chondrichthyans (cartilaginous fish) and osteichthyans (bony fish), the latter of which include two clades: the actinopterygians (ray-finned fish) and the sarcopterygians (lobe-finned fish). One lineage of sarcopterygians, Rhipidistia, evolved the first four-limbed vertebrates, which would eventually become tetrapods. On land, plant groups diversified after the Silurian-Devonian Terrestrial Revolution; the first woody ferns and the earliest seed plants evolved during this period. By the Middle Devonian, shrub-like forests of lycophytes, horsetails and progymnosperms existed. This greening event also allowed the diversification of arthropods as they took advantage of the new habitat. Near the end of the Devonian, 70% of all species became extinct in a sequence of mass extinction events, collectively known as the Late Devonian extinction.

Carboniferous Period
The Carboniferous spans from 359 million to 299 million years ago. Tropical swamps dominated the Earth, and the large numbers of trees sequestered much of the carbon that became coal deposits (hence the name Carboniferous and the term "coal forest"). About 90% of all coal beds were deposited in the Carboniferous and Permian periods, which represent just 2% of the Earth's geologic history. The high oxygen levels produced by these wetland rainforests allowed arthropods, normally limited in size by their respiratory systems, to proliferate and increase in size. Tetrapods also diversified during the Carboniferous as semiaquatic amphibians such as the temnospondyls, and one lineage developed extraembryonic membranes that allowed their eggs to survive outside of the water. These tetrapods, the amniotes, included the first sauropsids (which gave rise to the reptiles, dinosaurs and birds) and synapsids (the ancestors of mammals). Throughout the Carboniferous there was a cooling pattern, which eventually led to the glaciation of Gondwana, as much of it was situated around the South Pole. This event was known as the Permo-Carboniferous Glaciation and resulted in a major loss of coal forests, known as the Carboniferous rainforest collapse.

Permian Period
The Permian spans from 299 million to 251 million years ago and was the last period of the Paleozoic era. At its beginning, all landmasses came together to form the supercontinent Pangaea, surrounded by one expansive ocean called Panthalassa. The Earth was relatively dry compared to the Carboniferous, with harsh seasons, as the climate of the interior of Pangaea was not moderated by large bodies of water. Amniotes still flourished and diversified in the new dry climate, particularly synapsids such as Dimetrodon, Edaphosaurus and the therapsids, which gave rise to the ancestors of modern mammals. The first conifers evolved during this period and went on to dominate the terrestrial landscape. The Permian ended with at least one mass extinction, an event sometimes known as "the Great Dying", caused by large floods of lava (the Siberian Traps in Russia and the Emeishan Traps in China). This extinction was the largest in Earth's history and led to the loss of 95% of all species of life.

Mesozoic Era
The Mesozoic ranges from 252 million to 66 million years ago.
Also referred to as the Age of Reptiles, the Age of Dinosaurs or the Age of Conifers, the Mesozoic featured the sauropsids' first rise to ecological dominance over the synapsids, as well as the diversification of many modern ray-finned fish, insects, molluscs (particularly the coleoids), tetrapods and plants. The Mesozoic is subdivided into three periods: the Triassic, Jurassic and Cretaceous.

Triassic Period
The Triassic ranges from 252 million to 201 million years ago. The Triassic is mostly a transitional recovery period between the desolate aftermath of the Permian extinction and the lush Jurassic Period. It has three major epochs: the Early Triassic, Middle Triassic and Late Triassic.

The Early Triassic lasted from 252 million to 247 million years ago and was a hot and arid epoch in the aftermath of the Permian extinction. Many tetrapods during this epoch represented a disaster fauna, a group of survivor animals with low diversity and cosmopolitanism (wide geographic ranges). Temnospondyli recovered first and evolved into large aquatic predators during the Triassic. Other reptiles also diversified rapidly, with aquatic reptiles such as ichthyosaurs and sauropterygians proliferating in the seas. On land, the first true archosaurs appeared, including pseudosuchians (crocodile relatives) and avemetatarsalians (bird/dinosaur relatives).

The Middle Triassic spans from 247 million to 237 million years ago. The Middle Triassic featured the beginnings of the break-up of Pangaea as rifting commenced in northern Pangaea. The northern part of the Tethys Ocean, the Paleotethys Ocean, had become a passive basin, but a spreading center was active in the southern part of the Tethys Ocean, the Neotethys Ocean. Phytoplankton, corals, crustaceans and many other marine invertebrates recovered from the Permian extinction by the end of the Middle Triassic. Meanwhile, on land, reptiles continued to diversify, conifer forests flourished, and the first flies appeared.

The Late Triassic spans from 237 million to 201 million years ago. Following the bloom of the Middle Triassic, the Late Triassic was initially warm and arid, with a strong monsoon climate and with most precipitation limited to coastal regions and high latitudes. This changed late in the Carnian age with a wet interval lasting about 2 million years, which transformed the arid continental interior into lush alluvial forests. The first true dinosaurs appeared early in the Late Triassic, and pterosaurs evolved a bit later. Other large reptilian competitors to the dinosaurs were wiped out by the Triassic–Jurassic extinction event, in which most archosaurs (excluding crocodylomorphs, pterosaurs and dinosaurs), most therapsids (except cynodonts) and almost all large amphibians became extinct, as did 34% of marine life, in the fourth mass extinction event. The cause of the extinction is debated, but likely resulted from eruptions of the CAMP (Central Atlantic Magmatic Province) large igneous province.

Jurassic Period
The Jurassic ranges from 201 million to 145 million years ago and features three major epochs: the Early Jurassic, Middle Jurassic and Late Jurassic.

The Early Jurassic epoch spans from 201 million to 174 million years ago. The climate was much more humid than during the Triassic, and as a result the world was warm and partially tropical, though possibly with short colder intervals. Plesiosaurs, ichthyosaurs and ammonites dominated the seas, while dinosaurs, pterosaurs and other reptiles dominated the land, with species such as Dilophosaurus at the apex.
Crocodylomorphs evolved into aquatic forms, pushing the remaining large amphibians to near extinction. True mammals were present during the Jurassic but remained small, with low average body masses, until the end of the Cretaceous.

The Middle and Late Jurassic epochs span from 174 million to 145 million years ago. Conifer savannahs made up a large portion of the world's forests. In the oceans, plesiosaurs were quite common, and ichthyosaurs were flourishing. The Late Jurassic epoch spans from 163 million to 145 million years ago. The Late Jurassic featured a severe extinction of sauropods in the northern continents, alongside many ichthyosaurs. However, the Jurassic-Cretaceous boundary did not strongly impact most forms of life.

Cretaceous Period
The Cretaceous is the Phanerozoic's longest period and the last period of the Mesozoic. It spans from 145 million to 66 million years ago and is divided into two epochs: the Early Cretaceous and the Late Cretaceous.

The Early Cretaceous epoch spans from 145 million to 100 million years ago. Dinosaurs continued to be abundant, with groups such as tyrannosauroids, avialans (birds), marginocephalians and ornithopods seeing early glimpses of their later success. Other tetrapods, such as stegosaurs and ichthyosaurs, declined significantly, and sauropods were restricted to the southern continents.

The Late Cretaceous epoch spans from 100 million to 66 million years ago. The Late Cretaceous featured a cooling trend that would continue into the Cenozoic Era. Eventually, the tropical climate was restricted to the equator, and areas beyond the tropic lines featured more seasonal climates. Dinosaurs still thrived, as new species such as Tyrannosaurus, Ankylosaurus, Triceratops and the hadrosaurs dominated the food web. Whether or not pterosaurs went into decline as birds radiated is debated; however, many families survived until the end of the Cretaceous, alongside new forms such as the gigantic Quetzalcoatlus. Mammals diversified despite their small sizes, with metatherians (marsupials and kin) and eutherians (placentals and kin) coming into their own. In the oceans, mosasaurs diversified to fill the role of the now-extinct ichthyosaurs, alongside huge plesiosaurs such as Elasmosaurus. The first flowering plants also evolved during the Cretaceous. At the end of the Cretaceous, the Deccan Traps and other volcanic eruptions were poisoning the atmosphere. As this continued, a large meteorite is thought to have struck the Earth, creating the Chicxulub Crater and causing the event known as the K–Pg extinction, the fifth and most recent mass extinction event, during which 75% of life on Earth became extinct, including all non-avian dinosaurs. Every living thing with a body mass over 10 kilograms became extinct, and the Age of Dinosaurs came to an end.

Cenozoic Era
The Cenozoic featured the rise of mammals and birds as the dominant classes of animals, as the end of the Age of Dinosaurs left significant open niches. There are three divisions of the Cenozoic: the Paleogene, Neogene and Quaternary.

Paleogene Period
The Paleogene spans from the extinction of the non-avian dinosaurs, some 66 million years ago, to the dawn of the Neogene 23 million years ago. It features three epochs: the Paleocene, Eocene and Oligocene.

The Paleocene epoch began with the K–Pg extinction event, and the early part of the Paleocene saw the recovery of the Earth from that event.
The continents began to take their modern shapes, but most continents (and India) remained separated from each other: Africa and Eurasia were separated by the Tethys Sea, and the Americas were separated by the Panamanic Seaway (as the Isthmus of Panama had not yet formed). This epoch featured a general warming trend that peaked at the Paleocene-Eocene Thermal Maximum, and the earliest modern jungles expanded, eventually reaching the poles. The oceans were dominated by sharks, as the large reptiles that had once ruled had become extinct. Mammals diversified rapidly, but most remained small. The largest tetrapod carnivores during the Paleocene were reptiles, including crocodyliforms, choristoderans and snakes. Titanoboa, the largest known snake, lived in South America during the Paleocene.

The Eocene epoch ranged from 56 million to 34 million years ago. In the early Eocene, most land mammals were small and lived in cramped jungles, much as in the Paleocene. Among them were early primates, whales and horses, along with many other early forms of mammals. The climate was warm and humid, with little temperature gradient from pole to pole. In the Middle Eocene, the Antarctic Circumpolar Current formed when South America and Australia both separated from Antarctica, opening the Drake Passage and the Tasmanian Passage; this disrupted ocean currents worldwide, resulted in global cooling and caused the jungles to shrink. More modern forms of mammals continued to diversify with the cooling climate, even as more archaic forms died out. By the end of the Eocene, whales such as Basilosaurus had become fully aquatic. The late Eocene saw the return of pronounced seasons, which caused the expansion of savanna-like areas with the earliest substantial grasslands. At the transition between the Eocene and Oligocene epochs there was a significant extinction event, the cause of which is debated.

The Oligocene epoch spans from 34 million to 23 million years ago. The Oligocene was an important transitional period between the tropical world of the Eocene and more modern ecosystems. This period featured a global expansion of grasses, which led to many new species taking advantage of them, including the first elephants, felines, canines, marsupials and many other species still prevalent today. Many other species of plants also evolved during this epoch, such as the evergreen trees. The long-term cooling continued and seasonal rain patterns became established. Mammals continued to grow larger. Paraceratherium, one of the largest land mammals ever to live, evolved during this epoch, along with many other perissodactyls.

Neogene Period
The Neogene spans from 23.03 million to 2.58 million years ago. It features two epochs: the Miocene and the Pliocene.

The Miocene spans from 23.03 million to 5.333 million years ago and was a period in which grasses spread further, effectively dominating a large portion of the world and diminishing forests in the process. Kelp forests evolved, leading to the evolution of new species such as sea otters. During this time, perissodactyls thrived and evolved into many different varieties. Alongside them were the apes, which evolved into 30 species. Overall, arid and mountainous land dominated most of the world, as did grazers. The Tethys Sea finally closed with the creation of the Arabian Peninsula, leaving in its wake the Black, Red, Mediterranean and Caspian seas. This only increased aridity. Many new plants evolved, and 95% of modern seed plants evolved in the mid-Miocene.
The Pliocene lasted from 5.333 million to 2.58 million years ago. The Pliocene featured dramatic climatic changes, which ultimately led to modern species and plants. The Mediterranean Sea dried up for hundreds of thousands of years during the Messinian salinity crisis. Along with these major geological events, Africa saw the appearance of Australopithecus, the ancestor of Homo. The Isthmus of Panama formed, and animals migrated between North and South America, wreaking havoc on the local ecology. Climatic changes brought savannas that continue to spread across the world, Indian monsoons, deserts in East Asia, and the beginnings of the Sahara Desert. The Earth's continents and seas moved into their present shapes. The world map has not changed much since, save for changes brought about by the Quaternary glaciation, such as Lake Agassiz (a precursor of the Great Lakes).

Quaternary Period
The Quaternary spans from 2.58 million years ago to the present day and is the shortest geological period in the Phanerozoic Eon. It features modern animals and dramatic changes in the climate. It is divided into two epochs: the Pleistocene and the Holocene.

The Pleistocene lasted from 2.58 million to 11,700 years ago. This epoch was marked by a series of glacial periods (ice ages) as a result of the cooling trend that started in the mid-Eocene. There were numerous separate glaciation periods, marked by the advance of ice caps as far south as 40 degrees N latitude in mountainous areas. Meanwhile, Africa experienced a trend of desiccation which resulted in the creation of the Sahara, Namib and Kalahari deserts. Mammoths, giant ground sloths, dire wolves, sabre-toothed cats and archaic humans such as Homo erectus were common and widespread during the Pleistocene. A more anatomically modern human, Homo sapiens, began migrating out of East Africa in at least two waves, the first as early as 270,000 years ago. After a supervolcano eruption in Sumatra 74,000 years ago caused a global population bottleneck in humans, a second wave of Homo sapiens migration successfully repopulated every continent except Antarctica. As the Pleistocene drew to a close, a major extinction wiped out much of the world's megafauna, including human species other than Homo sapiens, such as Homo neanderthalensis and Homo floresiensis. All the continents were affected, but Africa was impacted to a lesser extent and retained many large animals such as elephants, rhinoceroses and hippopotamuses. The extent to which Homo sapiens was involved in this megafaunal extinction is debated.

The Holocene began 11,700 years ago, at the end of the Younger Dryas, and lasts until the present day. All recorded history and so-called "human history" lies within the boundaries of the Holocene epoch. Human activity is blamed for an ongoing mass extinction that began roughly 10,000 years ago, though the species becoming extinct have only been recorded since the Industrial Revolution. This is sometimes referred to as the "Sixth Extinction", with hundreds of species having gone extinct due to human activities such as overhunting, habitat destruction and the introduction of invasive species.

Biodiversity
It has been demonstrated that changes in biodiversity through the Phanerozoic correlate much better with the hyperbolic model (widely used in demography and macrosociology) than with exponential and logistic models (traditionally used in population biology and extensively applied to fossil biodiversity as well).
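For orientation, the three models can be written explicitly. This is a generic sketch rather than the form used in any particular study cited here; the symbols N (diversity or population size), r, K, k and the critical time t_c are illustrative.

```latex
\begin{aligned}
&\text{Exponential (first-order feedback):} && \frac{dN}{dt} = rN
  \;\Rightarrow\; N(t) = N_0 e^{rt} \\
&\text{Logistic (feedback limited by carrying capacity } K\text{):} && \frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right) \\
&\text{Hyperbolic (second-order, quadratic feedback):} && \frac{dN}{dt} = kN^{2}
  \;\Rightarrow\; N(t) = \frac{1}{k\,(t_c - t)}, \quad t_c = \frac{1}{k N_0}
\end{aligned}
```

Because the hyperbolic solution diverges as t approaches the finite time t_c, it rises far more steeply late in the interval than either the exponential or the logistic curve, which is the qualitative behaviour reported for both Phanerozoic diversity and human population growth, as discussed next.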
The exponential and logistic models imply that changes in diversity are guided by a first-order positive feedback (more ancestors, more descendants), by a negative feedback that arises from resource limitation, or by both. The hyperbolic model implies a second-order positive feedback. The hyperbolic pattern of human population growth arises from a quadratic positive feedback caused by the interaction of population size and the rate of technological growth. The character of biodiversity growth in the Phanerozoic Eon can be similarly accounted for by a feedback between diversity and community structure complexity. It has been suggested that the similarity between the curves of biodiversity and human population probably comes from the fact that both are derived from the superposition of cyclical and random dynamics on a hyperbolic trend.

Climate
Across the Phanerozoic, the dominant driver of long-term climatic change was the concentration of carbon dioxide in the atmosphere, though some studies have suggested a decoupling of carbon dioxide and palaeotemperature, particularly during cold intervals of the Phanerozoic. Phanerozoic carbon dioxide concentrations have been governed partially by a 26-million-year oceanic crustal cycle. Since the Devonian, large swings in carbon dioxide of 2,000 ppm or more were uncommon over short timescales. Variations in global temperature were limited by negative feedbacks in the phosphorus cycle, wherein increased phosphorus input into the ocean would increase surficial biological productivity, which would in turn enhance iron redox cycling and thus remove phosphorus from seawater; this maintained a relatively stable rate of removal of carbon from the atmosphere and ocean via organic carbon burial. The climate also controlled the availability of phosphate through its regulation of the rates of continental and seafloor weathering. Major global temperature variations of >7 °C during the Phanerozoic were strongly associated with mass extinctions.

External links
Phanerozoic (chronostratigraphy scale)