23391548
https://en.wikipedia.org/wiki/David%20Baszucki
David Baszucki
David Baszucki (born January 20, 1963), also known by his Roblox username builderman, is a Canadian-born American entrepreneur and engineer. He is best known as the co-founder and CEO of Roblox Corporation. He previously co-founded and served as the CEO of Knowledge Revolution, which was acquired by MSC Software in December 1998.

Early life and education

Baszucki was born on January 20, 1963, in Canada. He attended Eden Prairie High School in Eden Prairie, Minnesota, where he was the captain of his high school TV quiz team. He later hosted his own talk radio show on KSCO Radio Santa Cruz from February to July 2003. Baszucki studied engineering and computer science at Stanford University, graduating in 1985 as a General Motors Scholar in electrical engineering.

Career

Knowledge Revolution

In the late 1980s, Baszucki and his brother Greg Baszucki developed a simulation called "Interactive Physics", designed as an educational supplement that allowed the creation of 2D physics experiments. In 1989 the brothers founded Knowledge Revolution, a company built around the distribution of Interactive Physics. Originally released for Macintosh computers, Interactive Physics went on to win multiple awards. As a follow-up, Knowledge Revolution launched the mechanical design software Working Model in the early 1990s.

MSC Software and investing

In December 1998, Knowledge Revolution was acquired by MSC Software, a simulation software company based in Newport Beach, California, for $20 million. Baszucki served as vice president and general manager of MSC Software from 2000 to 2002, then left to establish Baszucki & Associates, an angel investment firm, which he led from 2003 to 2004. As an investor, he provided seed funding to Friendster, a social networking service.
Roblox

In 2004, Baszucki, along with Erik Cassel – who had worked as Baszucki's VP of engineering for Interactive Physics – began working on an early prototype of Roblox under the working title DynaBlocks. It was renamed Roblox, a portmanteau of "robots" and "blocks", in 2005, and the website officially launched in 2006. In a June 2016 interview with Forbes, Baszucki said the idea for Roblox was inspired by the success of his Interactive Physics and Working Model software, especially among young students. In a December 2016 interview with VentureBeat, Baszucki said, “We believe we’re starting to see a network effect. Retention is getting higher as more people come to play with their friends and have a better chance of finding their friends.” Baszucki believes that Roblox is ushering in a new “human co-experience” category that will become larger than gaming. In a September 2018 interview with Forbes, he said, "Right when we started, we imagined a new category of people doing things together. A category that involved friends, like social networking; a category that involved immersive 3-D, like gaming; a category that involved cool content, like a media company; and finally a category that had unlimited creation, like a building toy.” Baszucki owns a roughly 13% stake in Roblox Corporation, the company that owns Roblox, estimated to be worth roughly $4.2 billion. He intends to donate any net proceeds he earns from Roblox's listing on the New York Stock Exchange to philanthropic causes. In December 2021, a New York Times investigation alleged that he and his relatives used a tax break intended for small-business investors to legally avoid tens of millions of dollars in capital gains taxes.

Other activities

Baszucki spoke at the 2018 Disrupt San Francisco conference. In 2020, he funded a study on the effect of hydroxychloroquine on COVID-19, which found no evidence that the drug prevents or treats the disease.
In March 2021, after Roblox's listing on the New York Stock Exchange, Baszucki and his wife launched the Baszucki Group, a philanthropic organization. One of the organization's first projects was the Baszucki Brain Research Fund, which partnered with the Milken Institute to launch a program providing $2 million in research grants for work on bipolar disorder treatment options. In December 2021, the University of California, San Francisco launched the Baszucki Lymphoma Therapeutics Initiative, with $6 million in donations from Baszucki over five years, to increase the effectiveness and availability of chimeric antigen receptor T-cell therapy for lymphoma patients.

Awards and recognition

Baszucki has received the following awards and honors: Goldman Sachs 100 Most Intriguing Entrepreneurs (2017, 2018); Comparably's Best CEOs for Diversity (2018, 2019); Bloomberg Businessweek's list of the top 50 people and ideas that defined global business in 2021.

Personal life

Baszucki lives in the San Francisco Bay Area with his wife, Jan Ellison, and their four children. In a 2020 blog post after the murder of George Floyd, Baszucki expressed his support for the Black Lives Matter movement and dismay at the extent of racial inequality in the United States.

References

1963 births Living people American computer programmers Businesspeople from San Francisco Canadian expatriates in the United States Roblox Stanford University people
422575
https://en.wikipedia.org/wiki/XScreenSaver
XScreenSaver
XScreenSaver is a free and open-source collection of 240+ screensavers for Unix, macOS, iOS and Android operating systems. It was created by Jamie Zawinski in 1992 and is still maintained by him, with new releases coming out several times a year.

Platforms

The free software and open-source Unix-like operating systems running the X Window System (such as Linux and FreeBSD) use XScreenSaver almost exclusively. On those systems, there are several packages: one for the screen-saving and locking framework, and two or more for the display modes, divided somewhat arbitrarily. On Macintosh systems, XScreenSaver works with the built-in macOS screen saver. On iOS systems, XScreenSaver is a stand-alone app that can run any of the hacks full-screen. On Android systems, the XScreenSaver display modes work either as normal screen savers (which Android sometimes refers to as "Daydreams") or as live wallpapers. There is no official version for Microsoft Windows, and the developer discourages anyone from porting it. The author considers Microsoft to be "a company with vicious, predatory, anti-competitive business practices" and says that, as one of the original authors of Netscape Navigator, he holds a "personal grudge" against Microsoft because of its behavior during the First Browser War.

Software Architecture

The XScreenSaver daemon is responsible for detecting idleness, blanking and locking the screen, and launching the display modes. The display modes (termed "hacks", from the historical usage "display hack") are each stand-alone programs. This is an important security feature: the display modes are sandboxed into a separate process from the screen-locking framework, so a programming error in one of the graphical display modes cannot compromise the screen locker itself (e.g., a crash in a display mode will not unlock the screen).
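The process-isolation idea can be illustrated with a minimal, hypothetical Python sketch (XScreenSaver itself is written in C): the controlling process launches a display mode as a child process and merely observes its exit status, so a crash in the child cannot bring down the locker.

```python
import subprocess
import sys

def run_display_mode(argv, timeout=30):
    """Run a display-mode program in a child process.

    A failure in the child (non-zero exit, crash, hang) is contained:
    the caller only observes an exit status and keeps running.
    """
    try:
        result = subprocess.run(argv, timeout=timeout)
        return result.returncode
    except (OSError, subprocess.TimeoutExpired):
        return -1  # treat unlaunchable or hung hacks as failed

# A "hack" that crashes immediately; the parent process survives.
status = run_display_mode([sys.executable, "-c", "raise SystemExit(1)"])
print("display mode exited with status", status)
```

The same boundary is what lets third-party hacks be written in any language: the daemon only needs to start a program and watch it exit.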
It also means that a third-party screen saver can be written in any language or with any graphics library, so long as it is capable of rendering onto an externally provided window. For historical and portability reasons, the included hacks are all written in ANSI C. About half of them use the X11 API, and about half use the OpenGL 1.3 API. Rather than forking the code base and rewriting the hacks to target different platforms, XScreenSaver contains a number of compatibility layers. To allow the X11-based hacks to run natively on macOS and iOS, XScreenSaver contains a complete implementation of the X11 API built on top of Cocoa ("jwxyz"). To allow the OpenGL 1.3-based hacks to run natively on iOS and Android systems, which only support OpenGL ES, XScreenSaver contains an implementation of the OpenGL 1.3 API built on top of OpenGL ES 1.0 ("jwzgles"). And to allow the X11-based hacks to run natively on iOS and Android, XScreenSaver also contains an implementation of the X11 API in terms of OpenGL ES 1.0.

Security

In addition to sandboxing the display modes, the XScreenSaver daemon links with as few libraries as possible. In particular, it does not link against GUI frameworks like GTK or KDE, but uses only raw Xlib for rendering the unlock dialog box. In recent years, some Linux distributions have begun using the gnome-screensaver or kscreensaver screen-blanking frameworks by default instead of the framework included with XScreenSaver. In 2011, gnome-screensaver was forked as both mate-screensaver and cinnamon-screensaver. Earlier versions of these frameworks still depended upon the XScreenSaver collection of screen savers, which makes up over 90% of the package. However, in 2011, gnome-screensaver version 3 dropped support for screensavers completely, supporting only simple screen blanking, and as of 2018, Linux Mint's cinnamon-screensaver 4.0.8 no longer supports the XScreenSaver hacks.
Those Linux distributions that have replaced XScreenSaver with other screen-locking frameworks have suffered notable security problems. Those other frameworks have a history of security bugs that allow the screen to be unlocked without a password, e.g., by simply holding a key down until the locker crashes. In 2004, Zawinski had written about the architectural decisions made in XScreenSaver with the goal of avoiding this very class of bug, leading him to quip in 2015, "If you are not running XScreenSaver on Linux, then it is safe to assume that your screen does not lock."

Display Modes

The included hacks are highly varied, ranging from simple 2D psychedelia, to 3D demonstrations of complex mathematical principles, to simulations of other computer systems, to re-creations of artifacts and effects from movies. Though many of the newer hacks take full advantage of the power of modern computers, the age of the project means that some of the older hacks may look dated to modern eyes, as they were originally written for much less powerful computers. Examples of hacks include:

Atlantis – an OpenGL animation showing whales and dolphins.
BSOD – shows fake fatal screen-of-death variants from many computer systems, including the Microsoft Windows Blue Screen of Death, a Linux kernel panic, a Darwin crash, an Amiga "Guru Meditation" error, a sad Mac, and more.
Apple2 – simulates an Apple II computer, showing a user entering a simple BASIC program and running it. When run from the command line, it is a fully functional terminal emulator (as is Phosphor).
Barcode – a number of coloured barcodes scroll across the screen.
Flow – a 3D display of strange attractors.
Flying toasters – 3D toasters fly around, inspired by the classic After Dark screensaver.
Gears – an OpenGL animation of inter-meshing gears and planetary gears.
GLMatrix – an OpenGL animation similar to the "digital rain" title sequence seen in the Matrix trilogy.
Molecule – an OpenGL animation showing space-filling or ball-and-stick models of a series of common drugs and other molecules, thirty-eight of which are built in. It can also read PDB (Protein Data Bank) files, or files placed in a directory, as input.
Penrose – tiles the screen aperiodically with coloured Penrose tiles.
Spotlight – puts a moving spotlight across the desktop in the style of the James Bond film opening sequences.
Sproingies – an animation in the style of the video game Q*bert.
Webcollage – creates collages out of random images found on the Web.
XAnalogTV – simulates an analog cathode ray tube television set, including visual artifacts and reception issues.
XPlanet – draws planets and other celestial bodies that update in real time.
XMatrix – animations similar to the "digital rain" sequence seen in the Matrix trilogy.

Some of the included hacks are very similar to demo effects created by the demoscene:

Boing – based on the 1984 program regarded as the first Amiga demo ever, showing the bouncing red and white ball.
Bumps – an implementation of full-screen 2D bump mapping.
Metaballs – another common demo effect.
Moire2 – moving interference circles similar to those common in older Amiga demos.
ShadeBobs – another effect common in older Amiga demos.
XFlame – the filter-based fire effect, also known as the flame effect.

XScreenSaver was featured in Sleep Mode: The Art of the Screensaver, a gallery exhibition curated by Rafaël Rozendaal at Rotterdam's Het Nieuwe Instituut in 2017.

See also

References

External links

Screensavers Utilities for macOS X Window programs
13824925
https://en.wikipedia.org/wiki/Serial%20computer
Serial computer
A serial computer is a computer typified by a bit-serial architecture, i.e., one that internally operates on one bit or digit per clock cycle. Machines with serial main storage devices such as acoustic or magnetostrictive delay lines and rotating magnetic devices were usually serial computers. Serial computers require much less hardware than their parallel counterparts, but are much slower. There are modern variants of the serial computer available as soft microprocessors, which can serve niche purposes where the size of the CPU is the main constraint. The first computer that was not serial (the first parallel computer) was the Whirlwind in 1951.

A serial computer is not necessarily the same as a computer with a 1-bit architecture, which is a subset of the serial computer class. 1-bit computer instructions operate on data consisting of single bits, whereas a serial computer can operate on N-bit data widths, but does so a single bit at a time.

Serial machines

EDVAC 1949
BINAC 1949
SEAC 1950
UNIVAC I 1951
Elliott Brothers Elliott 153 1954
Bendix G-15 1956
LGP-30 1956
Elliott Brothers Elliott 803 1958
ZEBRA 1958
D-17B guidance computer 1962
PDP-8/S 1966
General Electric GE-PAC 4040 process control computer
Datapoint 2200 1971
F14 CADC 1970: transferred all data serially, but internally operated on many bits in parallel
HP-35 1972

Massively parallel

Most of the early massively parallel processing machines were built out of individual serial processors, including:

ICL Distributed Array Processor 1979
Goodyear MPP 1983
Connection Machine CM-1 1985
Connection Machine CM-2 1987
MasPar MP-1 1990 (32-bit architecture, internally processed 4 bits at a time)
VIRAM1 computational RAM 2003

See also

1-bit computing

References

Classes of computers Serial computers
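The "N-bit data, one bit per clock" idea can be sketched in Python: a hypothetical bit-serial adder that processes one bit position per step, least significant bit first, passing a single carry bit between steps the way a 1-bit ALU would.

```python
def bit_serial_add(a, b, width=8):
    """Add two width-bit integers one bit per 'clock', LSB first.

    Each step runs a 1-bit full adder: it combines one bit of each
    operand with the carry saved from the previous step. An N-bit add
    therefore takes N steps -- the defining trade-off of a serial ALU.
    """
    carry = 0
    result = 0
    for i in range(width):
        x = (a >> i) & 1
        y = (b >> i) & 1
        s = x ^ y ^ carry                    # sum bit for this position
        carry = (x & y) | (carry & (x ^ y))  # carry into the next step
        result |= s << i
    return result & ((1 << width) - 1)       # result wraps at width bits

print(bit_serial_add(100, 55))   # 155
print(bit_serial_add(200, 100))  # 44 (300 mod 256: carry out is dropped)
```

A parallel adder computes all bit positions in one cycle with N times the hardware; the serial version reuses one full adder N times.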
10809677
https://en.wikipedia.org/wiki/Arthur%20Samuel
Arthur Samuel
Arthur Lee Samuel (December 5, 1901 – July 29, 1990) was an American pioneer in the field of computer gaming and artificial intelligence. He popularized the term "machine learning" in 1959. The Samuel Checkers-playing Program was among the world's first successful self-learning programs, and as such a very early demonstration of the fundamental concept of artificial intelligence (AI). He was also a senior member of the TeX community who devoted much time to the needs of users and wrote an early TeX manual in 1983.

Biography

Samuel was born on December 5, 1901, in Emporia, Kansas, and graduated from the College of Emporia in Kansas in 1923. He received a master's degree in electrical engineering from MIT in 1926, and taught there for two years as an instructor. In 1928, he joined Bell Laboratories, where he worked mostly on vacuum tubes, including improvements to radar during World War II. He developed a gas-discharge transmit-receive switch (TR tube) that allowed a single antenna to be used for both transmitting and receiving. After the war he moved to the University of Illinois at Urbana–Champaign, where he initiated the ILLIAC project, but left before its first computer was complete. Samuel went to IBM in Poughkeepsie, New York, in 1949, where he would conceive and carry out his most successful work. He is credited with one of the first software hash tables, and with influencing early research at IBM into using transistors for computers. At IBM he made the first checkers program on IBM's first commercial computer, the IBM 701. The program was a sensational demonstration of the advances in both hardware and skilled programming, and caused IBM's stock to increase 15 points overnight. His pioneering non-numerical programming helped shape the instruction sets of processors, as he was one of the first to work with computers on projects other than computation. He was known for writing articles that made complex subjects easy to understand.
He was chosen to write an introduction to one of the earliest journals devoted to computing in 1953. In 1966, Samuel retired from IBM and became a professor at Stanford University, where he worked for the remainder of his life. He worked with Donald Knuth on the TeX project, including writing some of the documentation. He continued to write software past his 88th birthday. He was given the Computer Pioneer Award by the IEEE Computer Society in 1987. He died of complications from Parkinson's disease on July 29, 1990.

Computer checkers (draughts) development

Samuel is best known within the AI community for his groundbreaking work on computer checkers in 1959, and for seminal research on machine learning beginning in 1949. He graduated from MIT and taught at MIT and UIUC from 1946 to 1949. He believed that teaching computers to play games was very fruitful for developing tactics appropriate to general problems, and he chose checkers because it is relatively simple yet has a depth of strategy. The main driver of the program was a search tree of the board positions reachable from the current state. Since he had only a very limited amount of computer memory available, Samuel implemented what is now called alpha-beta pruning. Instead of searching each path until it came to the game's conclusion, Samuel developed a scoring function based on the position of the board at any given time. This function tried to measure the chance of winning for each side at the given position, taking into account such things as the number of pieces on each side, the number of kings, and the proximity of pieces to being "kinged". The program chose its move based on a minimax strategy: it made the move that optimized the value of this function, assuming that the opponent was trying to optimize the value of the same function from its own point of view. Samuel also designed various mechanisms by which his program could become better.
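Minimax search with alpha-beta pruning, as described above, can be sketched in Python. This is a generic toy over an abstract game tree, not Samuel's checkers code; `children` and `score` stand in for move generation and his board-scoring function.

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, score):
    """Minimax with alpha-beta pruning.

    `children(node)` yields successor positions; `score(node)` is a
    static evaluation of a position, akin to Samuel's scoring function.
    Branches that cannot affect the final choice are cut off early,
    which is what made deep search feasible in very little memory.
    """
    succ = children(node)
    if depth == 0 or not succ:
        return score(node)
    if maximizing:
        value = float("-inf")
        for child in succ:
            value = max(value, alphabeta(child, depth - 1,
                                         alpha, beta, False, children, score))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the minimizing opponent avoids this line
        return value
    else:
        value = float("inf")
        for child in succ:
            value = min(value, alphabeta(child, depth - 1,
                                         alpha, beta, True, children, score))
            beta = min(beta, value)
            if alpha >= beta:
                break  # prune symmetrically for the minimizer
        return value

# Toy two-ply tree: inner nodes are strings, leaves are their own scores.
tree = {"root": ["a", "b"], "a": [3, 5], "b": [2, 9]}
kids = lambda n: tree.get(n, [])
val = lambda n: n if isinstance(n, int) else 0
print(alphabeta("root", 2, float("-inf"), float("inf"), True, kids, val))  # 3
```

In the toy tree the maximizer gets min(3, 5) = 3 from branch "a" and min(2, 9) = 2 from branch "b"; once the 2 is seen under "b", the leaf 9 is pruned.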
In what he called rote learning, the program remembered every position it had already seen, along with the terminal value of the reward function. This technique effectively extended the search depth at each of these positions. Samuel's later programs reevaluated the reward function based on input from professional games. He also had the program play thousands of games against itself as another way of learning. With all of this work, Samuel's program reached a respectable amateur status, and was the first to play any board game at this high a level. He continued to work on checkers until the mid-1970s, at which point his program achieved sufficient skill to challenge a respectable amateur.

Awards

1987: Computer Pioneer Award, for adaptive non-numeric processing.

Selected works

1953. "Computing bit by bit, or Digital computers made easy". Proceedings of the Institute of Radio Engineers 41, 1223–1230. Reprinted with an additional annotated game in Computers and Thought, edited by Edward Feigenbaum and Julian Feldman (New York: McGraw-Hill, 1963), 71–105.
1983. First Grade TeX: A Beginner's TeX Manual. Stanford Computer Science Report STAN-CS-83-985 (November 1983).

References

1901 births 1990 deaths American computer scientists Artificial intelligence researchers Game artificial intelligence History of artificial intelligence College of Emporia alumni IBM Research computer scientists IBM employees Stanford University Department of Computer Science faculty People from Stanford, California
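The rote-learning idea described above amounts to caching evaluated positions so that earlier work is reused. A hypothetical Python sketch (the position encoding and scorer are illustrative, not Samuel's):

```python
evaluated = {}  # position -> stored value: the "rote memory"

def evaluate(position, static_score):
    """Return a remembered value when the position was seen before;
    otherwise compute the static score, store it, and return it.
    A cached position carries the result of earlier search work,
    effectively extending the search depth whenever it reappears."""
    if position in evaluated:
        return evaluated[position]
    value = static_score(position)
    evaluated[position] = value
    return value

calls = []
def scorer(pos):
    calls.append(pos)          # record how often we actually re-score
    return len(pos)            # stand-in for a real board evaluation

evaluate("WB-W--B", scorer)    # computed and stored
evaluate("WB-W--B", scorer)    # served from rote memory
print(len(calls))              # 1 -- the second lookup did no re-scoring
```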
60783728
https://en.wikipedia.org/wiki/TENEX%20%28operating%20system%29
TENEX (operating system)
TENEX was an operating system developed in 1969 by BBN for the PDP-10, which later formed the basis for Digital Equipment Corporation's TOPS-20 operating system.

Background

In the 1960s, BBN was involved in a number of LISP-based artificial intelligence projects for DARPA, many of which had very large (for the era) memory requirements. One solution to this problem was to add paging software to the LISP language, allowing it to write out unused portions of memory to disk for later recall if needed. One such system had been developed for the PDP-1 at MIT by Daniel Murphy before he joined BBN. Early DEC machines were based on an 18-bit word, allowing addresses to span a 256-kiloword memory. The machines were based on expensive core memory and included nowhere near the required amount. The pager used the most significant bits of the address to index a table of blocks on a magnetic drum that acted as the pager's backing store. The software would fetch pages as needed, and then resolve the address to the proper area of RAM.

In 1964 DEC announced the PDP-6. DEC was still heavily involved with MIT's AI Lab, and many feature requests from the LISP hackers were incorporated into this machine. 36-bit computing was especially useful for LISP programming because, with an 18-bit address space, a word of storage on these systems contained two addresses, a perfect match for the common LISP CAR and CDR operations. BBN became interested in buying one for their AI work when they became available, but wanted DEC to add a hardware version of Murphy's pager directly into the system. With such an addition, every program on the system would have paging support invisibly, making it much easier to do any sort of programming on the machine. DEC was initially interested, but soon (1966) announced they were in fact dropping the PDP-6 and concentrating solely on their smaller 18-bit and new 16-bit lines. The PDP-6 was expensive and complex, and had not sold well for these reasons.
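The drum-backed demand paging described above can be sketched as a toy Python model (hypothetical, not BBN's implementation; the 10-bit page size is illustrative): the high bits of an 18-bit address index a table of pages, and a page absent from RAM is fetched from the drum first.

```python
PAGE_BITS = 10                  # low bits select a word within a page
PAGE_SIZE = 1 << PAGE_BITS

drum = {}   # backing store: page number -> list of words
ram = {}    # resident pages:  page number -> list of words

def read_word(address):
    """Resolve an address: high bits pick the page, low bits the word.

    A page not yet in RAM is fetched from the drum on demand (a
    'page-in'), after which the access proceeds from memory -- the
    software role Murphy's pager played on the PDP-1."""
    page = address >> PAGE_BITS
    offset = address & (PAGE_SIZE - 1)
    if page not in ram:
        ram[page] = drum.get(page, [0] * PAGE_SIZE)  # demand fetch
    return ram[page][offset]

drum[3] = [0] * PAGE_SIZE
drum[3][5] = 42
print(read_word((3 << PAGE_BITS) | 5))  # 42
```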
It was not long until it became clear that DEC was once again entering the 36-bit business with what would become the PDP-10. BBN started talks with DEC to get a paging subsystem in the new machine, then known by its CPU name, the KA-10. DEC was not terribly interested. However, one development of these talks was support for a second virtual memory segment, allowing half of the user address space to be mapped to a separate (potentially read-only) region of physical memory. Additionally, DEC was firm on keeping the cost of the machine as low as possible, for instance supporting bare-bones systems with a minimum of 16K words of core and omitting the fast semiconductor register option (substituting core), at the cost of a considerable decrease in performance.

BBN and PDP-10s

BBN nevertheless went ahead with its purchase of several PDP-10s, and decided to build its own hardware pager. During this period a debate began on what operating system to run on the new machines. Strong arguments were made for the continued use of TOPS-10, in order to keep their existing software running with minimum effort. This would require a rewrite of TOPS to support the paging system, and this seemed like a major problem. At the same time, TOPS did not support a number of features the developers wanted. In the end they decided to make a new system, but include an emulation library that would allow it to run existing TOPS-10 software with minor effort. The developer team—among them Daniel Murphy and Daniel G. Bobrow—chose the name TENEX (TEN-EXtended) for the new system. It included a full virtual memory system—that is, not only could programs access a full 18-bit address space of 262144 words of virtual memory, every program could do so at the same time. The pager system would handle the mapping as it always had, copying data to and from the backing store as needed.
The only change needed was for the pager to be able to hold several sets of mappings between RAM and store, one for each program using the system. The pager also held access-time information in order to tune performance. The resulting pager was fairly complex, filling a full-height 19" rackmount chassis.

One notable feature of TENEX was its user-oriented command-line interpreter. Unlike typical systems of the era, TENEX deliberately used long command names and even included non-significant noise words to further expand the commands for clarity. For instance, Unix uses ls to print a list of files in a directory, whereas TENEX used DIRECTORY (OF FILES). "DIRECTORY" was the command word; "(OF FILES)" was noise added to make the purpose of the command clearer. To relieve users of the need to type these long commands, TENEX used a command-completion system that understood unambiguously abbreviated command words and expanded partial command words into complete words or phrases. For instance, the user could type DIR and the escape key, at which point TENEX would replace DIR with the full command. The completion feature also worked with file names, which took some effort on the part of the interpreter, and the system allowed for long file names with human-readable descriptions. TENEX also included a command-recognition help system: typing a question mark (?) printed a list of possible matching commands and then returned the user to the command line with the question mark removed. Command-line completion and help live on in current CLIs like tcsh.

From TENEX to TOPS-20

TENEX became fairly popular in the small PDP-10 market, and the external pager hardware developed into a small business of its own. In early 1970 DEC started work on an upgrade to the PDP-10 processor, the KI-10. BBN once again attempted to get DEC to support a complex pager with indirect page tables, but instead DEC decided on a much simpler single-level page-mapping system.
This compromise impacted system sales; by this point TENEX was the most popular customer-written PDP-10 operating system, but it would not run on the new, faster KI-10s. To correct this problem, the DEC PDP-10 sales manager purchased the rights to TENEX from BBN and set up a project to port it to the new machine. At around this time Murphy moved from BBN to DEC as well, helping on the porting project. Most of the work centered on emulating the BBN pager hardware in a combination of software and the KI-10's simpler hardware. The speed of the KI-10 compared to the PDP-6 made this possible. Additionally, the porting effort required a number of new device drivers to support the newer backing-store devices being used.

Just as the new TENEX was shipping, DEC started work on the KL-10, intended to be a low-cost version of the KI-10. While this was going on, Stanford University AI programmers, many of them MIT alumni, were working on their own project to build a PDP-10 that was ten times faster than the original KA-10. The project evolved into the Foonly line of computers. DEC visited them and many of their ideas were then folded into the KL-10 project. The same year IBM also announced their own machine with virtual memory, making it a standard requirement for any computer. In the end the KL-10 integrated a number of major changes to the system, but did not end up being any lower in cost. From the start, the new DECSYSTEM-20 would run a version of TENEX as its default operating system.

Functional upgrades for the KL-10 processor architecture were limited. The most significant new feature (called extended addressing) was modified pager microcode running on a Model B hardware revision to enlarge the user virtual address space. Some effective-address calculations by instructions located beyond the original 18-bit address space were performed to 30 significant bits, although only a 23-bit virtual address space was supported.
Program code located in the original 18-bit address space had unchanged semantics, for backward compatibility.

The first in-house code name for the operating system was VIROS (VIRtual memory Operating System); when customers started asking questions, the name was changed to SNARK so that DEC could truthfully deny that there was any project called VIROS. When the name SNARK became known, the name was briefly reversed to become KRANS; this was quickly abandoned when someone objected that "krans" meant "funeral wreath" in Swedish (though it simply means "wreath"; this part of the story may be apocryphal). Ultimately DEC picked TOPS-20 as the name of the operating system, and it was as TOPS-20 that it was marketed. The hacker community, mindful of its origins, quickly dubbed it TWENEX (a portmanteau of "twenty TENEX"), even though by this point very little of the original TENEX code remained (analogously to the differences between AT&T V7 Unix and BSD). DEC people cringed when they heard "TWENEX", but the term caught on nevertheless (the written abbreviation "20x" was also used). TWENEX was successful and very popular; in fact, there was a period in the early 1980s when it commanded as fervent a culture of partisans as Unix or ITS—but DEC's decision to scrap all the internal rivals to the VAX architecture and its VMS operating system killed the DEC-20 and put an end to TWENEX's brief period of popularity. DEC attempted to convince TOPS-20 users to convert to VMS, but instead, by the late 1980s, most of the TOPS-20 users had migrated to Unix. A loyal group of TOPS-20 enthusiasts kept working on various projects to preserve and extend TOPS-20, notably Mark Crispin and the Panda TOPS-20 distribution.

See also

Time-sharing system evolution

References

Some text in this article was taken from The Jargon File entry on "TWENEX", which is in the public domain.

Further reading

Daniel G. Bobrow, Jerry D. Burchfiel, Daniel L. Murphy, Raymond S. Tomlinson, "TENEX, A Paged Time Sharing System for the PDP-10", Communications of the ACM, Vol. 15, pages 135–143, March 1972.

Discontinued operating systems Time-sharing operating systems 1969 software
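TENEX's recognition of unambiguous abbreviations, described earlier, can be sketched in Python (a hypothetical toy; the command list is illustrative, and real TENEX rang the bell on ambiguity rather than leaving the input unchanged).

```python
COMMANDS = ["DIRECTORY", "DELETE", "DISMOUNT", "LOGOUT"]

def complete(prefix):
    """Expand an abbreviation the way TENEX's escape-key recognition
    did: a prefix matching exactly one command word expands to the
    full word; an ambiguous or unknown prefix is returned unchanged."""
    matches = [c for c in COMMANDS if c.startswith(prefix.upper())]
    return matches[0] if len(matches) == 1 else prefix

print(complete("DIR"))   # DIRECTORY
print(complete("LOG"))   # LOGOUT
print(complete("D"))     # D (ambiguous: three commands start with D)
```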
16761786
https://en.wikipedia.org/wiki/Type%20%28Unix%29
Type (Unix)
In Unix and Unix-like operating systems, type is a command that describes how its arguments would be interpreted if used as command names.

Function

Where applicable, type will display the command name's path. Possible command types are: shell built-in, function, alias, hashed command, and keyword. The command returns a non-zero exit status if a command name cannot be found.

Examples

$ type test
test is a shell builtin
$ type cp
cp is /bin/cp
$ type unknown
unknown not found
$ type type
type is a shell builtin

History

The type command was a shell builtin for the Bourne shell that was introduced in AT&T's System V Release 2 (SVR2) in 1984, and it continues to be included in many other POSIX-compatible shells such as Bash. However, type is not part of the POSIX standard. With a POSIX shell, similar behavior is obtained with:

command -V name

In the KornShell, the command whence provides similar functionality. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.

See also

List of Unix commands
which (command)
hash (Unix)

References

Standard Unix programs Unix SUS2008 utilities
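The lookup order behind type's output can be sketched in Python: check a builtin table first, then search PATH. This is a hypothetical simplification (the builtin set is illustrative, and real shells also check aliases, functions, and keywords).

```python
import shutil

BUILTINS = {"cd", "type", "test", "exit"}  # illustrative subset

def type_of(name):
    """Report how `name` would be interpreted: shell builtins take
    precedence over executables found by searching PATH, mirroring
    the precedence type's output reflects."""
    if name in BUILTINS:
        return f"{name} is a shell builtin"
    path = shutil.which(name)  # PATH search, like the shell's own
    if path:
        return f"{name} is {path}"
    return f"{name} not found"

print(type_of("type"))
print(type_of("no-such-command-xyz"))
```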
1297539
https://en.wikipedia.org/wiki/Free%20particle
Free particle
In physics, a free particle is a particle that, in some sense, is not bound by an external force, or equivalently not in a region where its potential energy varies. In classical physics, this means the particle is present in a "field-free" space. In quantum mechanics, it means the particle is in a region of uniform potential, usually set to zero in the region of interest, since the potential can be arbitrarily set to zero at any point in space.

Classical free particle

The classical free particle is characterized by a fixed velocity v. The momentum is given by p = mv and the kinetic energy (equal to the total energy) by E = ½mv², where m is the mass of the particle and v is the vector velocity of the particle.

Quantum free particle

Mathematical description

A free particle with mass m in non-relativistic quantum mechanics is described by the free Schrödinger equation:

iħ ∂ψ/∂t = −(ħ²/2m)∇²ψ

where ψ is the wavefunction of the particle at position r and time t. The solution for a particle with momentum p or wave vector k, at angular frequency ω or energy E, is given by the complex plane wave

ψ(r, t) = A e^{i(k·r − ωt)}

with amplitude A, subject to the restriction:

if the particle has mass m: ω = ħk²/2m (or equivalently E = p²/2m);
if the particle is massless: ω = ck.

The eigenvalue spectrum is infinitely degenerate, since for each eigenvalue E > 0 there corresponds an infinite number of eigenfunctions corresponding to different directions of k. The De Broglie relations p = ħk and E = ħω apply. Since the potential energy is (stated to be) zero, the total energy E is equal to the kinetic energy, which has the same form as in classical physics: E = p²/2m. As for all quantum particles, free or bound, the Heisenberg uncertainty principle applies. Since the plane wave has definite momentum (definite energy), the probability of finding the particle is uniform over all of space. In other words, the wave function is not normalizable in Euclidean space, so these stationary states cannot correspond to physically realizable states.
Measurement and calculations The integral of the probability density function ρ(r, t) = ψ*(r, t)ψ(r, t) = |ψ(r, t)|², where * denotes complex conjugate, over all space is the probability of finding the particle in all space, which must be unity if the particle exists: ∫ |ψ(r, t)|² d³r = 1. This is the normalization condition for the wave function. The wavefunction is not normalizable for a plane wave, but is for a wave packet. Fourier decomposition The free particle wave function may be represented by a superposition of momentum eigenfunctions, with coefficients given by the Fourier transform of the initial wavefunction: ψ(r, t) = (2π)^{−3/2} ∫ ψ̂₀(k) e^{i(k·r − ωt)} d³k, where the integral is over all k-space and ω = ω(k) = ħk²/(2m) (to ensure that the wave packet is a solution of the free particle Schrödinger equation). Here ψ(r, 0) is the value of the wave function at time 0 and ψ̂₀(k) is its Fourier transform. (The Fourier transform ψ̂₀(k) is essentially the momentum wave function of the position wave function ψ(r, 0), but written as a function of k rather than p = ħk.) The expectation value of the momentum p for the complex plane wave is ⟨p⟩ = ħk, and for the general wave packet it is ⟨p⟩ = ∫ ħk |ψ̂₀(k)|² d³k. The expectation value of the energy E is ⟨E⟩ = ∫ (ħ²k²/2m) |ψ̂₀(k)|² d³k. Group velocity and phase velocity The phase velocity is defined to be the speed at which a plane wave solution propagates, namely v_p = ω/k = ħk/(2m) = p/(2m). Note that v_p is not the speed of a classical particle with momentum p; rather, it is half of the classical velocity. Meanwhile, suppose that the initial wave function is a wave packet whose Fourier transform ψ̂₀(k) is concentrated near a particular wave vector k. Then the group velocity of the plane wave is defined as v_g = dω/dk = ħk/m = p/m, which agrees with the formula for the classical velocity of the particle. The group velocity is the (approximate) speed at which the whole wave packet propagates, while the phase velocity is the speed at which the individual peaks in the wave packet move. The figure illustrates this phenomenon, with the individual peaks within the wave packet propagating at half the speed of the overall packet. 
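The relation between phase and group velocity, and the packet motion it predicts, can be demonstrated numerically. In the sketch below (an illustration only; natural units ħ = m = 1 and all grid sizes are assumptions), a Gaussian wave packet is evolved exactly in k-space, its centre is observed to move at the group velocity, and its width matches the known Gaussian result Δx(t) = √(σ² + (t/2σ)²):

```python
import numpy as np

# Natural units hbar = m = 1; dispersion omega(k) = k^2/2.
k0 = 3.0
v_phase = (k0**2 / 2) / k0   # omega/k   = k0/2
v_group = k0                 # domega/dk = k0, twice the phase velocity

# Spatial grid and matching FFT wave numbers.
N, L = 4096, 400.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

sigma = 1.0                  # initial position uncertainty
psi0 = ((2 * np.pi * sigma**2) ** -0.25
        * np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x))

def evolve(t):
    # Free evolution is exact in k-space: multiply by exp(-i k^2 t / 2).
    return np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * k**2 * t / 2))

def stats(psi):
    rho = np.abs(psi) ** 2
    rho /= rho.sum()
    mean = (x * rho).sum()
    width = np.sqrt(((x - mean) ** 2 * rho).sum())
    return mean, width

t = 10.0
mean, width = stats(evolve(t))
print(mean / t)   # centre speed: close to v_group = 3.0
print(width)      # close to sqrt(sigma^2 + (t/(2*sigma))^2)
```

The same run also illustrates the spreading discussed in the next section: the width grows from σ = 1 to roughly t/(2σ) for large t.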
Spread of the wave packet The notion of group velocity is based on a linear approximation to the dispersion relation ω(k) near a particular value of k. In this approximation, the amplitude of the wave packet moves at a velocity equal to the group velocity without changing shape. This result is an approximation that fails to capture certain interesting aspects of the evolution of a free quantum particle. Notably, the width of the wave packet, as measured by the uncertainty in the position, grows linearly in time for large times. This phenomenon is called the spread of the wave packet for a free particle. Specifically, it is not difficult to compute an exact formula for the uncertainty Δ_{ψ(t)}X as a function of time, where X is the position operator. Working in one spatial dimension for simplicity, we have: (Δ_{ψ(t)}X)² = (t²/m²)(Δ_{ψ₀}P)² + (2t/m)(⟨(XP + PX)/2⟩_{ψ₀} − ⟨X⟩_{ψ₀}⟨P⟩_{ψ₀}) + (Δ_{ψ₀}X)², where ψ₀ is the time-zero wave function. The expression in parentheses in the second term on the right-hand side is the quantum covariance of X and P. Thus, for large positive times, the uncertainty in X grows linearly, with the coefficient of t equal to (Δ_{ψ₀}P)/m. If the momentum of the initial wave function is highly localized, the wave packet will spread slowly and the group-velocity approximation will remain good for a long time. Intuitively, this result says that if the initial wave function has a very sharply defined momentum, then the particle has a sharply defined velocity and will (to good approximation) propagate at this velocity for a long time. Relativistic quantum free particle There are a number of equations describing relativistic particles: see relativistic wave equations. See also Wave packet Group velocity Particle in a box Finite square well Delta potential References Quantum Mechanics, E. Abers, Pearson Ed., Addison Wesley, Prentice Hall Inc, 2004, Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (2nd Edition), R. Eisberg, R. Resnick, John Wiley & Sons, 1985, Stationary States, A. 
Holden, College Physics Monographs (USA), Oxford University Press, 1971, Quantum Mechanics Demystified, D. McMahon, Mc Graw Hill (USA), 2006, Elementary Quantum Mechanics, N.F. Mott, Wykeham Science, Wykeham Press (Taylor & Francis Group), 1972, Quantum mechanics, E. Zaarur, Y. Peleg, R. Pnini, Schaum's Outlines, Mc Graw Hill (USA), 1998, Specific Further reading The New Quantum Universe, T.Hey, P.Walters, Cambridge University Press, 2009, . Quantum Field Theory, D. McMahon, Mc Graw Hill (USA), 2008, Quantum mechanics, E. Zaarur, Y. Peleg, R. Pnini, Schaum's Easy Outlines Crash Course, Mc Graw Hill (USA), 2006, Concepts in physics Classical mechanics Quantum mechanics Quantum models
43622046
https://en.wikipedia.org/wiki/OpenWireless.org
OpenWireless.org
The Open Wireless Movement, hosted at OpenWireless.org, is an Internet activism project which seeks to increase Internet access by encouraging people and organizations to configure or install software on their own wireless routers to offer a separate public guest network or to make a single public wireless access point. If many people did this, a ubiquitous global public wireless network would be created, greatly expanding Internet access. History The project was initiated in November 2012 by a coalition of ten advocacy groups including the Electronic Frontier Foundation (EFF), Fight for the Future, Free Press, Internet Archive, NYCwireless, Open Garden, OpenITP, the Open Spectrum Alliance, the Open Technology Institute, and the Personal Telco Project. EFF representative Adi Kamdar commented, "We envision a world where sharing one's Internet connection is the norm. A world of open wireless would encourage privacy, promote innovation, and largely benefit the public good. And everyone—users, businesses, developers, and Internet service providers—can get involved." As of September 2016, seventeen groups have joined the project, adding Engine, Mozilla, Noisebridge, the Open Rights Group, OpenMedia International, Sudo Room, and the Center for Media Justice. The project uses various strategies to encourage and assist people to make their Internet connections available for public use. It explains the benefits and drawbacks of the effects on society and on the owners of routers, answers questions regarding safety and legality, guides novice users in configuring their routers, and provides firmware for novices to install on their routers. Router The EFF created a router firmware called OpenWireless, a fork of CeroWRT, which is in turn a branch of the OpenWrt firmware; anyone may volunteer to install it on their router to participate in the OpenWireless.org project. 
This firmware was first shared at the 2014 Hackers on Planet Earth conference. Its developers set out to achieve simple installation on a wide range of hardware routers but struggled with the diversity of closed, proprietary devices. Development of the OpenWireless firmware ended in April 2015; its work was merged into the Linux kernel and OpenWrt, and openwireless.org now redirects to eff.org. "In particular, once we obtained our first field data on router prevalence, we saw that none of the router models we expected to be able to support well have market shares above around 0.1%. Though we anticipated a fragmented market, that extreme degree of router diversity means that we would need to support dozens of different hardware platforms in order to be available to any significant number of users, and that does not seem to be an efficient path to pursue. Without a good path to direct deployment, EFF is deprioritizing our work on the freestanding router firmware project." See also Legality of piggybacking Municipal wireless network Open-source hardware Piggybacking (Internet access) Wireless community network References External links redirects to https://github.com/EFForg/OpenWireless Internet activism Internet access Wireless networking
54529458
https://en.wikipedia.org/wiki/Twocanoes
Twocanoes
Twocanoes Software, Inc. is a small software company founded in 2012 and headquartered in Naperville, Illinois. The company develops and sells software and hardware, including Winclone, Smart Card Utility, Boot Runner, and MDS. References External links Software companies based in Illinois Mobile software IOS software Windows software MacOS software Software companies of the United States
1497138
https://en.wikipedia.org/wiki/The%20Palace%20%28computer%20program%29
The Palace (computer program)
The Palace is a computer program to access graphical chat room servers, called palaces, in which users may interact with one another using graphical avatars overlaid on a graphical backdrop. The software concept was originally created by Jim Bumgardner and produced by Time Warner in 1994, and was first opened to the public in November 1995. While there is no longer any official support for the original program, a new client has been developed and is actively maintained by Jameson Heesen. Many chat servers are still operating and can be found on the Palace Portal Live Directory. Palace clients and servers are available for Mac OS 9, Mac OS X, Linux, and Microsoft Windows. Concept and design Palaces Each room in a palace is represented by a large image that serves as a backdrop for users. By clicking on certain areas in a room called "doors", users can travel either to different rooms in the same palace, another palace server, or an address leading to a different service, such as websites and email. In some rooms, users are allowed to paint on the backdrop using a simple suite of drawing tools. User messages appear as chat bubbles above their avatars, similar to those in comic books, and are stored in a chat log. Avatars The Palace has an avatar system that allows users to combine small, partially transparent images. Once a member has created an avatar, the member can pick up various pieces of clothing or other accessories. By default, users are represented by spherical smiley face emoticons, but can also wear up to nine separate bitmap images known as "props." In Q3 1997, several users began using doll-inspired images as avatars with a customizable appearance. The avatars were known as "Little People" before later being collectively named Dollz. A fanzine credited the creation of Dollz to Rainman, who based his "Sk8er" doll on his comic strip. 
Other sources claimed that Melicia Greenwood created the first Dollz, basing her avatar on Barbie while catering to counter-culture audiences of preps, goths, and skaters. Other popular Dollz used on The Palace were Wonderkins, Silents, and Divas (based on Diva Starz). Dollz became popular with the users of The Palace, particularly teenagers, with several rooms dedicated to unofficial Dollz editing contests. Teenagers also used Dollz as avatars as a sign of rebellion against The Palace's older users. The popularity of Dollz has inspired several personal websites dedicated to creating and customizing Dollz outside of The Palace community. The majority of Dollz creators were female. History The Palace was originally created by Jim Bumgardner and produced by Time Warner Interactive in 1994, with its official website launching to the public in November 1995. Bumgardner incorporated many features of Idaho, an in-house authoring tool he had previously developed for making multimedia CD-ROMs. One of the features of Idaho was IPTSCRAE, a Forth-like programming language. The name is a play on the word "script" in Pig Latin. One of the unique features of the Palace for its time was that the server software was given away for free and ran on consumer PCs, rather than being housed in a central location. Two of the original beta testers, Ben LaCascia (now Bethany O'Brien) and Justice LeClaire, are still active (as of February 2020). From around 1997, artists began to use the Palace as a site for experimental live performance. Notably, the group Desktop Theatre staged interventions and performances in their own and public Palaces from 1997 until 2002. In 1997 they presented "waitingforgodot.com" at the Third Annual Digital Storytelling Festival, which took an interesting turn when another Palatian changed their name to Godot and arrived in the performance. Other artists working in The Palace include Avatar Body Collision (2002-2007). 
The Palace's popularity peaked around 1999–2000, when nu metal band Korn had their own palace chat room that fans could download from their official website. Palace's popularity at this time could also be attributed to a palace focused on the cartoon South Park, as well as the Sci Fi Channel's Mothership palace. There was even a link to the South Park palace on the Comedy Central website at the time. The Palace was the subject of a number of sales between companies until 2001, when Open Text Corporation purchased the rights to the Palace software and technology as part of a bankruptcy settlement. The software is currently unsupported by Open Text or any of its previous owners, and many members of the community now consider the software abandonware and provide support for existing versions on unofficial web sites. The original thepalace.com domain was bought by a long-time Palace user, and is now used as a directory for other sites. Official Palace software development ceased when Communities.com declared bankruptcy, but at least four groups are working on Palace protocol compatible clients. One of the biggest contributions came from Ruben Pizarro, known as oORubenOo and only 13 years old at the time, who successfully reverse engineered the most important protocol packets, talk (Windows) and xtlk (Unix), needed for proper communication between the client and server. All of these new clients support improved high-color avatars, larger room backgrounds (also in high-color), and modern sound formats (such as MP3), and are designed for modern operating systems. However, there are some drawbacks to the new clients, such as not being fully compatible with older clients (because of the latter's limitations), and many users have chosen to remain with older alternatives. One of the first comprehensive psychological studies of avatar communities, conducted by John Suler, took place at the Palace. 
This collection of essays, entitled Life at the Palace, consists of an analysis of Palace history, social relationships, "addiction," and deviance. Suler's work focused on the unique aspects of interacting via avatars and in a graphical space. Privacy Signing into The Palace does not require any registration or personal information. To begin chatting, users download the client, set their user handle and log into a server. A child filter is enabled on the client by default, which filters out chat servers with an Adult ranking and inappropriate language used in chat rooms. Other Clients PalaceChat, created by Jameson Heesen (known in the community as PaVVn), supports all original features of The Palace, as well as high-quality backgrounds and avatars, larger rooms and videos. This is the primary client in use. Linpal, an open source Linux client using GTK+. Phalanx, by Brainhouse Laboratories. Incompatible Palace-like Clients The Manor, written by a former Palace lead developer. The Manor includes embedded Python for user and room scripting with an encrypted data stream. Supports importing Palace avatars. Both new incarnations of The Palace support larger room sizes and 32-bit color avatars. Worlize, an online virtual world utilizing user-generated content OpenVerse, an open-source visual chat program written in TCL/Tk. See also Active Worlds Second Life CyberTown References 1995 software Virtual world communities
48191940
https://en.wikipedia.org/wiki/Automotive%20hacking
Automotive hacking
Automotive hacking is the exploitation of vulnerabilities within the software, hardware, and communication systems of automobiles. Overview Modern automobiles contain hundreds of on-board computers processing everything from vehicle controls to the infotainment system. These computers, called Electronic control units (ECU), communicate with each other through multiple networks and communication protocols including the Controller Area Network (CAN) for vehicle component communication such as connections between engine and brake control; Local Interconnect Network (LIN) for cheaper vehicle component communication such as between door locks and interior lights; Media Oriented Systems Transport (MOST) for infotainment systems such as modern touchscreen and telematics connections; and FlexRay for high-speed vehicle component communications such as active suspension and active cruise control data synchronization. Additional consumer communication systems are also integrated into automobile architectures including Bluetooth for wireless device connections, 4G Internet hotspots, and vehicle Wi-Fi. The integration of these various communications and software systems leaves automobiles vulnerable to attack. Security researchers have begun demonstrating the multitude of potential attack vectors in modern vehicles, and some real-world exploits have resulted in manufacturers issuing vehicle recalls and software updates to mobile applications. Manufacturers, such as John Deere, have used computer systems and Digital Rights Management to prevent repairs by the vehicle owners, or by third parties, or the use of aftermarket parts. Such limitations have prompted efforts to circumvent these systems, and increased interest in measures such as Motor Vehicle Owners' Right to Repair Act. Research In 2010, security researchers demonstrated how they could create physical effects and undermine system controls by hacking the ECU. 
The researchers needed physical access to the ECU and were able to gain full control over any safety or automotive system, including disabling the brakes and stopping the engine. In a follow-up research paper published in 2011, researchers demonstrated that physical access is not even necessary. The researchers showed that "remote exploitation is feasible via...mechanics tools, CD players, Bluetooth, cellular radio...and wireless communication channels allow long distance vehicle control, location tracking, in-cabin audio exfiltration and theft". This means that a hacker could gain access to a vehicle's vital control systems through almost anything that interfaces with the automobile's systems. Recent exploits 2015 Fiat Chrysler UConnect Hack UConnect is Fiat Chrysler's Internet-connected feature which gives owners the ability to control the vehicle's infotainment/navigation system, sync media, and make phone calls. It even integrates with the optional on-board WiFi. However, susceptibilities in Fiat Chrysler's UConnect system, available on over 1.4 million cars, allow hackers to scan for cars with the system, connect and embed malicious code, and ultimately commandeer vital vehicle controls like steering and brakes. 2015 Tesla Model S Hack In 2015, at the DEF CON hacking conference, Marc Rogers and Kevin Mahaffey demonstrated how a chain of exploits could be used to take complete control of the Model S. They identified several remote and local vulnerabilities that could be used as entry points, and demonstrated that after exploitation the vehicle could be remotely controlled with an iPhone. Finally, they also demonstrated that it was possible to install a backdoor that allowed persistent access and control of the vehicle, in a similar fashion to exploit techniques more usually associated with traditional computer systems. Rogers and Mahaffey worked with Tesla, Inc. to resolve the issues before disclosure. 
It was announced before the presentation that the entire global fleet of Model S cars had been patched overnight, the first proactive mass over-the-air (OTA) security update of vulnerable vehicles. General Motors OnStar RemoteLink App The OnStar RemoteLink app gives users the ability to use OnStar capabilities from their Android or iOS smartphones. The RemoteLink app can locate, lock and unlock, and even start the vehicle. The flaw in General Motors' OnStar RemoteLink app, while not as extreme as the UConnect vulnerability, allows hackers to impersonate the victim in the eyes of the RemoteLink app. This means that hackers can access all of the features of the RemoteLink app available to the victim, including locating, locking and unlocking, and starting the engine. Keyless entry The security researcher Samy Kamkar has demonstrated a device that intercepts signals from keyless-entry fobs and would allow an attacker to unlock doors and start a car's engine. References Hacking (computer security) Terrorism by method
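The overview above names the Controller Area Network (CAN) as the bus carrying vehicle control traffic, and CAN sniffing is the usual starting point for the research it describes. As a minimal illustration (not a depiction of any specific exploit), the following sketch decodes the 16-byte `can_frame` structure used by Linux's SocketCAN interface; the layout assumed here (32-bit identifier with flag bits, a length byte, three padding bytes, eight data bytes) follows the kernel's definition:

```python
import struct

# Flag bits carried in the 32-bit can_id field of a SocketCAN can_frame.
CAN_EFF_FLAG = 0x80000000  # extended (29-bit) frame format
CAN_RTR_FLAG = 0x40000000  # remote transmission request
CAN_SFF_MASK = 0x000007FF  # mask for standard 11-bit identifiers
CAN_EFF_MASK = 0x1FFFFFFF  # mask for extended 29-bit identifiers

def parse_can_frame(raw: bytes) -> dict:
    """Decode a 16-byte Linux SocketCAN can_frame."""
    # native byte order: u32 can_id, u8 dlc, 3 pad bytes, 8 data bytes
    can_id, dlc, data = struct.unpack("=IB3x8s", raw)
    extended = bool(can_id & CAN_EFF_FLAG)
    arb_id = can_id & (CAN_EFF_MASK if extended else CAN_SFF_MASK)
    return {"id": arb_id,
            "extended": extended,
            "rtr": bool(can_id & CAN_RTR_FLAG),
            "data": data[:dlc]}

# Example: a standard-ID frame 0x123 carrying two data bytes.
raw = struct.pack("=IB3x8s", 0x123, 2, bytes([0xDE, 0xAD]))
print(parse_can_frame(raw))
```

In practice such frames would be read from a raw CAN socket or a tool like candump; the parsing logic is the same.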
14776792
https://en.wikipedia.org/wiki/Donald%20P.%20Greenberg
Donald P. Greenberg
Donald Peter Greenberg (born 1934) is the Jacob Gould Schurman Professor of Computer Graphics at Cornell University. Early life Greenberg earned his undergraduate and Ph.D. degrees from Cornell University, where he played on the tennis and soccer teams and was a member of Tau Delta Phi and the Quill and Dagger society. Career In the late 1960s, Greenberg constructed the so-called "flying diaper" sculpture, which currently stands at the entrance of the Cornell Botanic Gardens. Greenberg joined the Cornell faculty in 1968 with a joint appointment in the College of Engineering and Department of Architecture. In 1971, Greenberg produced an early sophisticated computer graphics movie, Cornell in Perspective, using the General Electric Visual Simulation Laboratory with the assistance of its director, Quill and Dagger classmate Rodney S. Rougelot. Greenberg also co-authored a series of papers on the Cornell Box. An internationally recognized pioneer in computer graphics, Greenberg has authored hundreds of articles and served as a teacher and mentor to many prominent computer graphic artists and animators. Five former students have won Academy Awards for Scientific or Technical Achievements, five have won the SIGGRAPH Achievement Award, and many now work for Pixar Animation Studios. Greenberg was the founding director of the National Science Foundation Science and Technology Center for Computer Graphics and Scientific Visualization when it was created in 1991. His former students include Robert L. Cook, Marc Levoy, and Wayne Lytle. He has been the Director of the Program of Computer Graphics for thirty-two years and was the originator and former Director of the Computer Aided Design Instructional Facility at Cornell University. Greenberg received the Steven Anson Coons Award in 1987, the most prestigious award in the field of computer graphics. 
Prior to teaching at Cornell, Greenberg was a consulting engineer with Severud Associates, working on famous structures like the St. Louis Arch and Madison Square Garden. Greenberg has served as a visiting professor at ETH Zurich and Yale University. He is on the board of directors of the Interactive Data Corporation and Chyron Corporation. He holds membership in the National Academy of Engineering, American Association for the Advancement of Science, Association for Computing Machinery, Institute of Electrical and Electronics Engineers, SIGGRAPH, and Eurographics. He was named a fellow of the ACM in 1995. He currently teaches a virtual reality course cross-listed under four departments at Cornell: Architecture, Art, Computer Science, and Engineering. References Greenberg Faculty Biography Greenberg Vita True Big Red: Professor Don Greenberg '55 video "Videos on computer graphics pioneer Don Greenberg '55, architect Jill Lerner '75 highlight reunion," Cornell University News Service, June 1, 2005 Hyperbolic Paraboloid American soccer players Computer graphics professionals Cornell University College of Engineering alumni Cornell University faculty Johnson School faculty Cornell Big Red men's soccer players Fellows of the Association for Computing Machinery 1934 births Living people Members of the United States National Academy of Engineering Association footballers not categorized by position
32784561
https://en.wikipedia.org/wiki/Bluebeam%20Software%2C%20Inc.
Bluebeam Software, Inc.
Bluebeam, Inc. is an American software company founded in 2002 and headquartered in Pasadena, California, United States, with additional offices in Chicago, Illinois; San Diego, California; and Manchester, New Hampshire. The company specializes in designing tools for creating, editing, marking up, collaborating on, and sharing PDF documents. Their main product is Bluebeam Revu. In October 2014, Bluebeam was acquired by Nemetschek for $100 million. The company's software has more than 1.6 million users. Products Bluebeam Software's products include: Bluebeam Revu for PDF creation, markup and editing. Bluebeam Vu for PDF viewing on Windows and the iPad. Vu for the Windows platform includes Bluebeam Studio for online collaboration. Bluebeam Vu iPad, a complimentary iPad app for PDF viewing. Studio Server is a server-based program for firms that want to use Bluebeam Studio to collaborate and share information in a locally hosted environment. Previous Products Pushbutton PDF References Software companies established in 2002 Software companies based in California 2002 establishments in California Software companies of the United States
306799
https://en.wikipedia.org/wiki/TACACS
TACACS
Terminal Access Controller Access-Control System (TACACS) refers to a family of related protocols handling remote authentication and related services for networked access control through a centralized server. The original TACACS protocol, which dates back to 1984, was used for communicating with an authentication server, common in older UNIX networks; it spawned related protocols: Extended TACACS (XTACACS) is a proprietary extension to TACACS introduced by Cisco Systems in 1990 without backwards compatibility to the original protocol. TACACS and XTACACS both allow a remote access server to communicate with an authentication server in order to determine if the user has access to the network. Terminal Access Controller Access-Control System Plus (TACACS+) is a protocol developed by Cisco and released as an open standard beginning in 1993. Although derived from TACACS, TACACS+ is a separate protocol that handles authentication, authorization, and accounting (AAA) services. TACACS+ has largely replaced its predecessors. History TACACS was originally developed in 1984 by BBN Technologies for administering MILNET, which ran unclassified network traffic for DARPA at the time and would later evolve into the U.S. Department of Defense's NIPRNet. Originally designed as a means to automate authentication – allowing someone who was already logged into one host in the network to connect to another on the same network without needing to re-authenticate – it was first formally described by BBN's Brian Anderson in December 1984 in IETF RFC 927. Cisco Systems began supporting TACACS in its networking products in the late 1980s, eventually adding several extensions to the protocol. In 1990, Cisco's extensions on top of TACACS became a proprietary protocol called Extended TACACS (XTACACS). 
Although TACACS and XTACACS are not open standards, Craig Finseth of the University of Minnesota, with Cisco's assistance, published a description of the protocols in 1993 in IETF RFC 1492 for informational purposes. Technical descriptions TACACS TACACS is defined in RFC 1492 (with TACACS+ later specified in RFC 8907) and uses either TCP or UDP port 49 by default. TACACS allows a client to accept a username and password and send a query to a TACACS authentication server, sometimes called a TACACS daemon or simply TACACSD. It would determine whether to accept or deny the authentication request and send a response back. The TIP (routing node accepting dial-up line connections, which the user would normally want to log into) would then allow access or not, based upon the response. In this way, the process of making the decision is "opened up" and the algorithms and data used to make the decision are under the complete control of whoever is running the TACACS daemon. XTACACS XTACACS, which stands for Extended TACACS, provides additional functionality for the TACACS protocol. It also separates the authentication, authorization, and accounting (AAA) functions out into separate processes, even allowing them to be handled by separate servers and technologies. TACACS+ TACACS+ and RADIUS have generally replaced TACACS and XTACACS in more recently built or updated networks. TACACS+ is an entirely new protocol and is not compatible with its predecessors, TACACS and XTACACS. TACACS+ uses TCP (while RADIUS operates over UDP). Since TCP is a connection-oriented protocol, TACACS+ does not have to implement transmission control. RADIUS, however, has to detect and correct transmission errors like packet loss and timeouts, since it rides on UDP, which is connectionless. RADIUS encrypts only the user's password as it travels from the RADIUS client to the RADIUS server. All other information, such as the username, authorization, and accounting, is transmitted in clear text. 
Therefore, it is vulnerable to different types of attacks. TACACS+ encrypts all of the information mentioned above and therefore does not have the vulnerabilities present in the RADIUS protocol. TACACS+ is a Cisco-designed extension to TACACS that encrypts the full content of each packet. Moreover, it provides granular control (command-by-command authorization). Implementations TACACS+ client and PAM module tacacs+ VM, an implementation of tac_plus and webadmin in a VM TACACS.net, a free implementation of TACACS+ for Windows TAC_plus from Shrubbery TAC_plus from Pro-Bono-Publico FreeRADIUS TACACS+ module available from v4.0 See also List of authentication protocols RADIUS Kerberos Diameter References External links Overview of AAA Technology An Analysis of the TACACS+ Protocol and its Implementations from a security standpoint, by Openwall TACACS+ Benefits and Best Practices RFC – TACACS User Identification Telnet Option – An Access Control Protocol, Sometimes Called TACACS - The Terminal Access Controller Access-Control System Plus (TACACS+) Protocol Cisco protocols Computer access control protocols Computer network security
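Every TACACS+ packet described above begins with a common 12-byte header carrying the version, packet type, sequence number, flags, session ID, and body length, in network byte order, as specified in RFC 8907. A minimal parsing sketch (field names in the returned dictionary are this illustration's own choices):

```python
import struct

TAC_PLUS_MAJOR = 0xC                    # major version per RFC 8907
HEADER_FMT = "!BBBBII"                  # network byte order, 12 bytes total
PACKET_TYPES = {1: "AUTHEN", 2: "AUTHOR", 3: "ACCT"}
TAC_PLUS_UNENCRYPTED_FLAG = 0x01        # body sent without obfuscation

def parse_tacacs_plus_header(raw: bytes) -> dict:
    """Decode the fixed 12-byte TACACS+ packet header."""
    version, ptype, seq_no, flags, session_id, length = struct.unpack(
        HEADER_FMT, raw[:12])
    return {
        "major": version >> 4,          # high nibble of the version byte
        "minor": version & 0x0F,        # low nibble
        "type": PACKET_TYPES.get(ptype, "UNKNOWN"),
        "seq_no": seq_no,
        "unencrypted": bool(flags & TAC_PLUS_UNENCRYPTED_FLAG),
        "session_id": session_id,
        "body_length": length,
    }

# Example: first packet of an authentication session with a 40-byte body.
hdr = struct.pack(HEADER_FMT, (TAC_PLUS_MAJOR << 4) | 0, 1, 1, 0,
                  0xDEADBEEF, 40)
print(parse_tacacs_plus_header(hdr))
```

The `body_length` field tells a server how many further bytes to read from the TCP stream, and the cleared unencrypted flag indicates the body is obfuscated with the shared-secret keystream described in the RFC.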
398191
https://en.wikipedia.org/wiki/List%20of%20aircraft%20engines
List of aircraft engines
This is an alphabetical list of aircraft engines by manufacturer. 0–9 2si 2si 215 2si 230 2si 430 2si 460 2si 500 2si 540 2si 690 3W 3WSource: RMV 3W 106iB2 3W-110 3W-112 3W-170 3W-210 3W-220 A Abadal (Francisco Serramalera Abadal) Abadal Y-12 350/400 hp ABC Source: Lumsden. ABC 8 hp ABC 30hp V-4 ABC 45hp V-6 ABC 60hp V-8 ABC 85hp V-6 ABC 100hp V-8 ABC 115 hp ABC 170hp V-12 ABC 225hp V-16 ABC Dragonfly ABC Gadfly ABC Gnat ABC Hornet ABC Mosquito ABC Scorpion ABC Wasp ABC type 10 APU ABC type 11 APU ABECO Source: RMV ABECO GEM Aberg Source: RMV Type Sklenar ABLE Source: RMV, Able Experimental Aircraft Engine Co. (Able Experimental Aircraft Engine Co., Altimizer, Hoverhawk (US)) ABLE 2275 ABLE 2500 ABLE VW x 2 Geared Drive Accurate Automation Corp Accurate Automation AT-1500 Accurate Automation AT-1700 Ace (Ace American Engr Corp, Horace Keane Aeroplane Co, North Beach, Long Island NY.) Ace 1919 40hp ACE (American Cirrus Engine Inc) Source: RMV ACE Cirrus ACE LA-1 19?? (ATC 31) = 140 hp 7RA. Evolved into Jacobs LA-1. ACE Mk III 1929 (ATC 30, 44) = 90 hp 310ci 4LAI; (44) for 110 hp supercharged model. ACE Mk III Hi-Drive ACE Ensign ACT (Aircraft Cylinder and Turbine Co) Source: RMV ACT Super 600 Adams Source: RMV Adams (UK) 4 Cylinder in-line of 140 HP Adams (UK) 8 V Adams-Dorman Source: RMV Adams-Dorman 60/80 HP Adams-Farwell The Adams Company, Dubuque, Iowa / F.O. Farwell, engines for gyrocopters Adams-Farwell 36 hp 5-cyl rotary engine Adams-Farwell 50 HP Adams-Farwell 55hp 5-cyl rotary Adams-Farwell 63hp 5-cyl rotary Adams-Farwell 72hp 5-cyl rotary Adams-Farwell 280hp 6cyl double rotary Adams-Farwell 6-cyl double rotary Adams-Farwell 10-cyl double rotary Adams-Farwell 14-cyl double rotary Adams-Farwell 18-cyl double rotary Adams-Farwell KM 11 ADC ADC (from "Aircraft Disposal Company") bought 35,000 war-surplus engines in 1920. Initially produced engines from Renault 70 hp spares. 
ADC Airdisco ADC Cirrus ADC Nimbus, development of Siddeley Puma ADC Airsix, air-cooled version of Nimbus. Not put into use ADC BR2 ADC Viper ADC Airdisco-Renault Adept-Airmotive Source: RMV Adept 280 N Adept 300 R Adept 320 T Ader Source: RMV Ader Eole engine (Vapour) Ader Avion engine (Vapour) Ader 2V Ader 4V Adler Source: RMV Adler 50hp 4-cyl in-line Adler 100hp 6-cyl in-line Adler 222hp V-8 Adorjan & Dedics Source: RMV Adorjan & Dedics 2V Advance Engines Source: RMV Advance 4V, 20/25 HP Advanced Engine Design Source: RMV Advanced Engine Design Spitfire 1 Cylinder Advanced Engine Design Spitfire 2 Cylinder Advanced Engine Design Spitfire 3 Cylinder Advanced Engine Design Spitfire 4 Cylinder Advanced Engine Design K2-1000 Advanced Engine Design 110 HP (BMW Conversion) Advanced Engine Design 220 LC Advanced Engine Design 440 LC Advanced Engine Design 660 LC Advanced Engine Design 880 LC Advanced Engine Design 530 (Kawasaki Conversion) AEADC (Aircraft Engine & Accessory Development Corporation) Source: RMV AEADC Gryphon M AEADC Gryphon N AEADC O-510 (Gryphon M) AEADC O-810 (Gryphon N) AEC Source: RMV AEC Keane Aeolus Flugmotor Source: RMV Aerien CC Source: RMV Aerien 20/25 HP Aerien 30 HP Aermacchi Source: RMV Aermacchi MB-2 Aero & Marine Aero & Marine 50 HP Aero Adventure Source: RMV Aero Adventure GFL-2000 AeroConversions AeroConversions AeroVee 2180 Aero Development Source: RMV (See SPEER) Aero Engines Ltd. (formerly William Douglas (Bristol) Ltd.) Aero Engines Dryad Aero Engines Pixie Aero Engines Sprite Aero Engines inverted V-4 Aero Engines inverted V-6 Douglas 750cc Aero Motion Source: RMV Aero Motion 0-100 Aero Motion 0-101 Aero Motors Source: RMV Aero Motors Aerobat 150 HP Aero Pixie Source: RMV Aero Pixie 153 cc, 2T Aero Prag Source: RMV Aeroprag KT-422 Aeroprag AP-45 Aeroprag TP-422 Aero Products (Aero Products Aeronautical Products Corp, Naugatuck CT.) 
Source: RMV Aero Products Scorpion 100 HP Aero Sled Source: RMV Aero Sled Twin Flat, 20 HP Aero Sport International Source: RMV Aero Sport International Wade Aero (WANKEL) 2 Types AeroTwin Motors Corporation AeroTwin AT972T Aerojet Aerojet produced rocket engines for missiles. It merged with Pratt & Whitney Rocketdyne to form Aerojet Rocketdyne. Aerojet LR1 (Aerojet 25AL-1000) Aerojet LR3 (Aerojet 25ALD-1000) Aerojet LR5 (Aerojet X40ALD-3000) Aerojet LR7 (Aerojet ZCALT-6000) Aerojet LR9 (Aerojet X4AL-1000) Aerojet LR13 (Aerojet X60ALD-4000 / Aerojet 4.104a / Aerojet 4.103a) Aerojet LR15 (Aerojet XCNLT-1500) Aerojet LR45 (Aerojet AJ24-1) Aerojet LR49 Aerojet LR51 Aerojet LR53 Aerojet LR59 (CIM-99 Bomarc booster engine) Aerojet LR87 Aerojet LR91 Aerojet-General SR19 (Aerojet Minuteman 2nd stage motor) Aerojet 1KS-2800A Aerojet 2KS-11000 (X102C1) Aerojet 2KS-33000A Aerojet 2.2KS-33000 Aerojet 2.5KS-18000 (X103C1) Aerojet 5KS-4500 Aerojet 12AS-250 Junior Aerojet 14AS-1000 (D-5) - RATO unit Aerojet 15KS-1000 RATO unit Aerojet 15NS-250 Aerojet 30AS-1000C - RATO unit Aerojet 2.2KS-11000 Aerojet AJ10 Aerojet AJ-260 - largest solid rocket motor ever built. Aerojet M-1 Aerojet Hawk motor (for Hawk SAM) Aerojet Polaris motor Aerojet Senior Aeromarine Company Source: RMV Aeromarine Company D5-1 (Pulse-Jet) Aeromarine Aeromarine AL Aeromarine NAL Aeromarine S Aeromarine S-12 Aeromarine AR-3 Aeromarine AR-3-40 Aeromarine AR-5 Aeromarine AR-7 Aeromarine AL-24 Aeromarine B-9 Aeromarine B-45 Aeromarine B-90 Aeromarine D-12 150 hp Aeromarine K-6 Aeromarine L-6 130 hp Aeromarine L-6-D (direct drive) Aeromarine L-6-G (geared) Aeromarine L-8 192 hp Aeromarine RAD Aeromarine T-6 Aeromarine U-6 Aeromarine U-6-D Aeromarine U-8 Aeromarine U-8-873 Aeromarine U-8D Aeromarine 85hp 1914 Aeromarine 90hp Aeromarine 100hp V-8 Aeromax Source: RMV Aeromax 100 I-F-B Aeromax 100 L-D Aeromotion See: AMI Aeromotor (Detroit Aeromotor. Const.
Co) Source: RMV Aeromotor 30hp 4-cyl in-line Aeromotor 75hp 6-cyl in-line Aeronamic Source: RMV Aeronamic ATS Aeronautical Engineering Co. Source: RMV Aeronautical Engineering 9-cyl radial 200 HP Aeronca Aeronca E-107 (O-107) Aeronca E-113 (O-113) Aeroplane Motors Company (Aeroplane Motors) Source: RMV Aeroplane 59hp V-8 Aeroprotech Source: RMV Aeroprotech VW 2.3 Aerosila Source: RMV Aerosila TA-4 FE Aerosila 6 A/U Aerosila 8 N/K Aerosila 12 Aerosila 12-60 Aerosila 14 (-032,-130,-35) Aerosila 18-100 (-200) GTTP-300 Aerosport Aerosport-Rockwell LB600 Aerostar Source: RMV Aerostar M14P Aerostar M14V-26 Aerotech engines Source: RMV Aerotech 2 Cylinder 2T Aerotech-PL Source: RMV Aerotech-PL EA81, Subaru conversion Aerotech-PL VW conversion Aerotech-PL BMW conversion Aerotech-PL Suzuki conversion Aerotech-PL Guzzi conversion Aerotechnik Source: RMV Aerotechnik Tatra-100 Aerotechnik Tatra-102 Aerotechnik Hirth (Lic) Aerotechnik Mikron (Lic) Aerotechnik Tatra-714 (VW) Aerotek Source: RMV Aerotek Mazda RX-7 (conversion) AES (See Rev-Air) Affordable Turbine Power Source: RMV Affordable Turbine Power Model 6.5 AFR Source: RMV AFR BMW Conversion AFR R 100 70/80 hp AFR R 1100D 90/100 hp AFR R 1100S 98 hp AFR R 1150RT 95 hp AFR R 1200GS 100 hp Agilis (Agilis Engines) Sources: RMV Agilis TF-800 Agilis TF-1000 Agilis TF-1200 Agilis TF-1400 Agilis TF-1500 Agilis TJ-60 (MT-60) Agilis TJ-75 Agilis TJ-80 Agilis TJ-400 Agusta Agusta GA.40 Agusta GA.70 Agusta GA.140 Agusta A.270 Turbomeca-Agusta TA.230 Ahrbecker Son and Hankers Source: RMV Ahrbecker Son and Hankers 10 HP Ahrbecker Son and Hankers 20 HP Ahrbecker Son and Hankers 1 Cylinder – vapor AIC (Aviation Ind. China. See Catic and Carec) Aichi Source:Gunston 1989 except where noted. 
Aichi AC-1 Aichi Atsuta (Atsuta 31) - Licence-built Daimler-Benz DB 601A for IJN Aichi AE1A (Atsuta 21) Aichi AE1P (Atsuta 32) Aichi Ha-70 (Coupled Atsuta 30s) AICTA (AICTA Design Work, Prague, Czech Republic) AICTA LMD 416-00R Aile Volante Aile Volante C.C.2 Aile Volante C.C.4 Air Repair Incorporated Source: RMV (Jacobs Licence) Air Repair Incorporated L-4 Air Repair Incorporated L-5 Air Repair Incorporated L-6 (Jacobs-Page Licence) Air Repair Incorporated R755 Air Ryder Source: RMV Air Ryder Subaru EA-81 (Conversion) Air Technical Arsenal Source: RMV Air Technical Arsenal TSU-11 Air Technical Arsenal TR-30 Air-Craft Engine Corp Source: RMV Air-Craft Engine Corp LA-1 Aircat (Detroit Aircraft Eng. Corp.) Source: RMV Aircat Radial 5 cylinders Aircooled Motors See: Franklin Aircraft Engine Co (Aircraft Engine Co Inc, Oakland, CA) Aircraft 1911 80hp Aircraft & Ind. Motor Corp (See Schubert) AiResearch See: Garrett, Allied Signal and Honeywell Airex Airex Rx2 Airex Rx10 Airmotive-Perito See: Adept-Airmotive Airship Aircraft Engine Company Airship A-Tech 100 Diesel Airtrike (AirTrike GmbH i.L., Berlin, Germany) Airtrike 850ti AISA Source: RMV Ramjet on rotor Aixro Source: RMV Aixro XF-40 Aixro XH-40 Aixro XP-40 Aixro XR-30 Aixro XR-40 Aixro XR-50 Ajax Source: RMV Ajax 7-cyl rotary Ajax 6-cyl radial (2 rows of 3 cyls.), 80 HP Akkerman Akkerman Model 235 30 HP, special fuel Akron Funk E200 Funk E4L Albatross (Albatross Co Detroit) Albatross 50hp 6-cyl radial Albatross 100hp 6-cyl radial Aldasoro Aldasoro aero engine Alexander Alexander 4-cyl Alexander radial 5-cyl Alfa Romeo Societa per Azioni Alfa Romeo Romeo 600hp V-12 Alfa Romeo V-6 diesel Alfa Romeo V-12 diesel Alfa Romeo D2 Alfa Romeo 100 or RA.1100 Alfa Romeo 101 or RA.1101 Alfa Romeo 110/111 Alfa Romeo 115/116 Alfa Romeo 121 Alfa Romeo 122 Alfa Romeo 125/126/127/128/129/131 Alfa Romeo 135/136 Alfa Romeo 138 R.C.23/65 RA.1000 Monsone - licensed Daimler-Benz DB 601 Alfa Romeo RA.1050 Alfa Romeo RA.1100 or AR.100 Alfa
Romeo RA.1101 or AR.101 Alfa Romeo AR.318 Alfa Romeo Dux Alfa Romeo Jupiter - licensed Bristol Jupiter Alfa Romeo Lynx/Lince - licensed Armstrong Siddeley Lynx Alfa Romeo Mercury Alfa Romeo Pegasus Alfaro Alfaro baby engine Alfaro 155 hp 4-cyl barrel engine Allen Allen O-675 Alliance (Aubrey W. Hess/Alliance Aircraft Corporation) Hess Warrior Allied Allied Monsoon Licensed manufacturer of French Règnier 4L AlliedSignal AlliedSignal TPE-331 Garrett TPF351 AlliedSignal LTS101 AlliedSignal ALF502/LF507 Allis-Chalmers Source: Gunston Allis-Chalmers J36 Allison Allison V-1410 - Liberty L-12 Allison V-1650 - Liberty L-12 Allison V-1710 Allison V-3420 Allison X-4520 Allison 250 (T63)(T703) Allison 252 Allison 504 Allison 545 Allison 550 Pratt & Whitney/Allison 578-DX Allison J33 (Allison 400) Allison J35 (Allison 450) Allison J56 Allison J71 Allison J89 Allison J102 Allison T38 Allison T39 Allison T40 (Allison 500, 503) Allison T44 Allison T54 Allison T56 (501-D) Allison T61 Allison T63 Allison T71 Allison T78 Allison T80 Allison T406 (AE1107) Allison T701 (Allison 501-M62) Allison T703 (Allison 250) Allison TF32 Allison TF41 (development of Rolls-Royce Spey) Allison GMA 200 Allison GMA 500 Allison AE3010 Allison AE3012 Allison PD-37 Pyrodyne Almen Almen A-4 Alvaston Alvaston 20hp 2-cyl opposed Alvaston 30hp 2-cyl opposed Alvaston 50hp 4-cyl opposed Alvis Alvis Alcides Alvis Alcides Major Alvis Leonides Alvis Leonides Major Alvis Maeonides Major Alvis Pelides Alvis Pelides Major American Cirrus Engine See: ACE American Engineering Corporation Source: RMV ACE Keane American Helicopter American Helicopter PJ49 Pulsejet American Helicopter XPJ49-AH-3 American Motor & Aviation Co American 1911 rotary American S-5 radial AMCEL (AMCEL Propulsion Company) AMCEL controllable solid fuel rocket AMI (AeroMotion Inc.) 
AeroMotion Twin AeroMotion O-100 Twin AeroMotion O-101 Twin AMT (Aviation Microjet Technology) AMT-450 AMT Olympus AMT Titan A.M.U.A.L (Établissement A.M.U.A.L) A.M.U.A.L M.J.5 65° V-8 350 hp A.M.U.A.L M.J.6 90° V-8 400 hp A.M.U.A.L M.J.7 90° V-8 600 hp Angle Angle 100hp Radial Ansaldo Ansaldo San Giorgio 4E-145 6I 300 hp Ansaldo San Giorgio 4E-150 6I 300 hp Ansaldo San Giorgio 4E-284 V-12 450 hp Ansaldo San Giorgio 4E-290 V-12 550 hp Antoinette Source:Gunston Antoinette 32hp V-8 Antoinette 46hp? Antoinette 64hp V-16 Antoinette 67hp V-8 Antoinette 165hp V-16 Antoinette 134hp V-8 Antoinette 55hp V-8 Antoinette V-32 Anzani For British Anzani products see: British Anzani Source: Air-cooled Anzani engines Anzani V-2 Anzani 3-cylinder fan engines Anzani 14hp Anzani 15hp Anzani 24.5hp Anzani 31.6hp Anzani 42.3hp Anzani 10-12hp Anzani 12-15hp Anzani 25-30hp Anzani 30-35hp Anzani 40-45hp Anzani 45-50hp Anzani 30hp 3-cyl radial Anzani 45hp 5-cyl radial Anzani 60hp 5-cyl radial Anzani 6-cylinder Anzani 40-45hp radial Anzani 50-60hp radial Anzani 70hp radial Anzani 80hp radial Anzani 95hp 7-cyl radial Anzani 10-cylinder Anzani 60-70hp radial Anzani 100-110hp radial Anzani 95-100hp radial Anzani 125hp radial Anzani 125hp radial Anzani 200hp radial Anzani 100hp 14-cyl radial Anzani 150-160hp 14-cyl radial Anzani 20 200hp 20-cyl radial Water-cooled Anzani engines Anzani 30-32hp V-4 Anzani 56-70hp V-4 Anzani 600-700hp 20-cyl radial In-line radial 10 banks of 2 cylinders Anzani W-6 Anzani 6A3 (6-cyl radial 60 hp) ARDEM (Avions Roger Druine Engines M) Ardem 4 CO2 Ares (Ares ltd., Finland) Ares diesel Cirrus Argus Motoren Source:Gunston except where noted Argus Type I ("50hp") - 4-cyl. 50-70 hp Argus Type II (4-cyl. 100 hp) Argus Type III (aka Argus 110 hp) - 6-cyl Argus Type IV (aka 140/150hp) - 4-cyl. 140 hp Argus Type V (6-cyl. 140 hp) Argus Type VI (6-cyl. 140 hp) Argus Type VII (6-cyl. 115-130 hp) Argus Type VIII (6-cyl.
190 hp) Argus As I 4-cylinder, 100-hp, year 1913 Argus As II, 6-cylinder, 120-hp, year 1914 Argus As III 6-cylinder upright inline Argus As 5 24-cylinder in-line radial (6 banks of four cylinders) Argus As VI 700 hp V-12 Argus As VIA Argus As 7 9R 700 hp Argus As 8 4-cylinder inverted inline Argus As 10 8-cylinder inverted V Argus As 12 16H 550 hp Argus As 16 4-cylinder horizontally-opposed 35 hp Argus As 17 Argus As 014 (aka "Argus 109-014") - pulse jet engine for V-1 flying bomb and Tornado boat Argus As 044 Argus As 16 4-cylinder inverted inline 40 hp Argus As 17 6-cylinder inverted inline 225 hp / 285 hp Argus As 401 - development and renumbering of the As 10 Argus As 402 Argus As 410 12-cylinder inverted V Argus As 411 12-cylinder inverted V Argus As 412 24-cylinder H-block, prototyped Argus As 413 - similar to 412, never built Argus 109-044 Argus 115 hp 6-cylinder upright inline Argus 130 hp 6-cylinder upright inline Argus 145 hp 6-cylinder upright inline Argus 190 hp 6-cylinder upright inline Argylls - a 120-130hp sleeve-valve 6-cylinder exhibited at Olympia in 1914 Armstrong Siddeley Armstrong Siddeley was formed by purchase of Siddeley-Deasy in 1919.
Piston Engines Armstrong Siddeley Terrier Armstrong Siddeley Mastiff Armstrong Siddeley Boarhound Armstrong Siddeley Cheetah Armstrong Siddeley Civet Armstrong Siddeley Cougar Armstrong Siddeley Deerhound Armstrong Siddeley Genet Armstrong Siddeley Genet Major Armstrong Siddeley Hyena Armstrong Siddeley Jaguar Armstrong Siddeley Leopard Armstrong Siddeley Lynx Armstrong Siddeley Mongoose Armstrong Siddeley Ounce Armstrong Siddeley Panther Armstrong Siddeley Puma - originally the Siddeley Puma Armstrong Siddeley Serval Armstrong Siddeley Tiger Armstrong Siddeley Wolfhound - paper project of developed Deerhound Gas turbines Armstrong Siddeley Adder Armstrong Siddeley ASX Armstrong Siddeley Double Mamba Armstrong Siddeley Mamba Armstrong Siddeley Python Armstrong Siddeley Sapphire Armstrong Siddeley Viper Rocket engines Armstrong Siddeley Alpha Armstrong Siddeley Beta Armstrong Siddeley Delta Armstrong Siddeley Gamma Armstrong Siddeley Screamer Armstrong Siddeley Snarler Armstrong Siddeley Spartan Armstrong Siddeley Stentor Armstrong Whitworth Armstrong Whitworth 1918 30° V-12 Arrow SNC Arrow 250 Arrow 270 AC Arrow 500 Arrow 1000 Arsenal Source:Gunston Arsenal 213 Arsenal 12H Arsenal 12H-Tandem Arsenal 12K Arsenal 24H Arsenal 24H-Tandem Asahina Asahina 9-cyl 100hp rotary Ashmusen (Ashmusen Manufacturing Company) Ashmusen 1908 60hp 8HOA Ashmusen 1908 105hp 12HOA Aspin (F.M. Aspin & Company) Aspin Flat-Four Aster Aster 51hp 4-cylinder-line Astrodyne (Astrodyne Inc.) Astrodyne 16NS-1000 Astrodyne XM-34 (ZELL booster) ATAR ( Atelier Technique Aéronautique de Rickenbach - pre SNECMA take-over) ATAR 101 ATAR 103 ATAR 104 ( Vulcain) ATAR 201 ATAR 202 ATAR 203 Atwood (Atwood Aeronautic Company, Williamsport, PA / Harry N. 
Atwood) Atwood 120-180hp V-12 Atwood M-1 (1916) Atwood M-2 (1916) Atwood Twin Six Aubier & Dunne Data from:Italian Civil & Military Aircraft 1930–1945 Aubier & Dunne 2-cyl 17hp Aubier & Dunne 3-cyl Aubier-Dunne V.2D Austin Austin V-12 Austin rotary engine Austro-Daimler Source:Gunston Austro-Daimler 35-40hp 4-cyl. (35-40 hp) Austro-Daimler 65-70hp 4-cyl. (65-70 hp) Austro-Daimler 90hp 6-cyl. (90 hp) Austro-Daimler 120hp 6-cyl. (120 hp) Austro-Daimler 160hp 6-cyl. Austro-Daimler 185hp 6-cyl. Austro-Daimler 200hp 6-cyl. (200 hp) Austro-Daimler 210hp 6-cyl. Austro-Daimler 225hp 6-cyl. Austro-Daimler 300hp V-12 Austro-Daimler 360hp 6-cyl (360 hp) Austro-Daimler 400hp V-12 (400 hp) Austro-Daimler D-35 (400 hp) Austro Engine Austro Engine E4 (AE 300) Austro Engine AE50R Austro Engine AE75R Austro Engine AE80R Austro Engine AE500 Austro Engine GIAE110R Auto Diesels Auto Diesels STAD A250 Auto Diesels STAD A260 Auto Diesels LPI Mk.12A/L Auto Diesels LPI Mk.12A/T Auto Diesels LPI Mk.12A/D Auto Diesels GT15 Auto Diesels 7660.001.020 Ava (L'Agence General des Moteurs Ava) Ava 4A Avco Lycoming See:Lycoming Avia Aviadvigatel Aviadvigatel PD-14 Aviadvigatel PS-90 Aviatik Argus engines sold in France under the brand name 'Aviatik' by Automobil und Aviatik AG Aviatik 70hp 4-cyl in-line Aviatik 100hp 4-cyl in-line Aviatik 150hp 4-cyl in-line A.V. Roe A.V. Roe 20hp 2-cyl. Avro Avro Alpha Avro Canada Avro Chinook Avro Iroquois Avro Orenda Avro P.35 Waconda Axelson Axelson A-7-R 115hp Axelson-Floco B 150hp Axial Vector Engine Corporation Dyna-Cam Aztatl Aztatl 3-cyl radial Aztatl 6-cyl 80hp radial Aztatl 10-cyl radial B Bailey Bailey C-7-R "Bull's Eye" 1927 = 140hp 7RA. Bailey Aviation Bailey B200 Bailey Hornet Bailey V5 engine Baradat–Esteve (Claudio Baradat Guillé & Carlos Esteve) Baradat toroidal engine Basse und Selve Basse und Selve BuS.
120hp (120-130 hp) Basse und Selve BuS.III 150 hp Basse und Selve BuS.IV (260 hp / 270 hp) Basse und Selve BuS.IVa 300 hp Bates Data from: Bates 29hp V-4 Bayerische (Bayerische Motoren Gesellschaft) Bayerische 7-cyl 50hp rotary Beardmore Source: Lumsden Beardmore 90 hp Beardmore 120 hp Beardmore 160 hp Beardmore Pacific Beardmore Simoon Beardmore Cyclone Beardmore Tornado Beardmore 12-cyl opposed diesel Beardmore Typhoon Galloway Adriatic Galloway Atlantic Béarn Construction Mécanique du Béarn/Société de Construction et d'Exploitation de Matériels et de Moteurs Béarn 6 Béarn 12A Béarn 12B Beatty Beatty 40hp 4-cyl. Beatty 50hp 4-cyl. Beatty 60hp 4-cyl. (geared 0.66:1) Beatty 80hp 8-cyl. V-8 Beck Beck 1910 toroidal engine Beck 35hp 4cyl toroidal engine Beck 50hp 4cyl toroidal engine Beck 75hp 4cyl toroidal engine Beecher (B.L. Beecher Company, New Haven, Connecticut) Beecher 8HOA Bell Aerosystems Company Bell Model 117 Bell Model 8001 Bell Model 8048 Bell Model 8081 Bell Model 8096 Bell Model 8096-39 Bell Model 8096A Bell Model 8096B Bell Model 8096L Bell Model 8247 Bell Model 8533 Bell LR67 Bell XLR-81 Bell XLR-81-BA-3 Bell XLR-81-BA-5 Bell XLR-81-BA-7 Bell XLR-81-BA-11 Bell XLR-81-BA-13 Bell Hustler Bell Nike-Ajax engine Bentley W. O. Bentley Bentley BR1 Bentley BR2 Benz Source:Gunston Benz 195hp Benz FX Benz Bz.I (Type FB) Benz Bz.II (Type FD) Benz Bz.III (Type FF) Benz Bz.IIIa Benz Bz.IIIb Benz Bz.IV Benz Bz.IVa Benz Bz.V Benz Bz.Vb Benz Bz.VI Benz Bz.VIv Berliner Berliner 6hp rotary helicopter engine Bertin Bertin 50hp X-4 Bertin 100hp X-8 Besler See: Doble-Besler Beaussier (Moteurs Beaussier) Beaussier 4-cyl Bessonov (A. A. Bessonov) Bessonov MM-1 Better Half Better Half VW Beardmore Halford Pullinger (B.H.P.)
Atlantic 230 hp - built by Galloway and Siddeley-Deasy developed into Siddeley Puma Binetti Binetti B-300 Blackburn Includes engines of Cirrus Engine Division of Blackburn Source: Lumsden Blackburn Cirrus - originally ADC Cirrus, Blackburn Cirrus Midget Blackburn Cirrus Minor Blackburn Cirrus Major Blackburn Cirrus Bombardier Blackburn Cirrus Grenadier Blackburn Cirrus Musketeer Blackburn Nimbus Blackburn Artouste - licence built Turbomeca Artouste Blackburn Turbomeca Palouste - Turbomeca Palouste Blackburn Turbomeca Palas - Turbomeca Palas Blackburn Turbomeca Turmo - Turbomeca Turmo Blackburn A.129 Blackburne Blackburne Tomtit Blackburne Thrush Bliss (E.W. Bliss Company) Bliss Jupiter Bliss Neptune Bliss Titan Bloch Bloch 4B-1 Bloch 6B-1 BMW Source: Gunston except where noted BMW Sytlphe 5-cyl rotary BMW III BMW IIIa BMW IV BMW V BMW Va BMW VI BMW VIIa BMW VIII BMW IX BMW X BMW XI BMW 003 axial-flow turbojet BMW 112 12-cylinder, (prototype) BMW 114 BMW 116 BMW 117 BMW 132 BMW 139 BMW 801 BMW 802 BMW 803 BMW 804 BMW 805 BMW 109-002 (Bramo 109-002) BMW 109-003 BMW 109-018 BMW 109-028 BMW 109-510 BMW 109-511 BMW 109-528 BMW 109-548 BMW 109-558 BMW 109-708 BMW 109-718 BMW P-3306 BMW P-3307 BMW MTU 6011 BMW 6002 BMW 6011 BMW 6012 (MTU 6012) BMW 8025 BMW 8026 BMW GO-480-B1A6 BMW-Lanova 114 V-4 9-cyl. radial diesel engine BMW M2 B15 - 2 cyl. air-cooled boxer Boeing Source:Pelletier except where noted Boeing T50 Boeing T60 Boeing 500 Boeing 502 Boeing 514 Boeing 520 Boeing 540 gas turbine engine (turboprop) Boeing 550 Boeing 551 gas turbine engine (turboprop) Boeing 553 gas turbine engine (turboprop) Boitel Boitel soleil Boland Boland V-8 Bonner (Aero Bonner Ltd.) 
Bonner Super Sapphire Borzecki (Jozef Borzecki) Borzecki 2RB Borzecki JB 2X250 Botali Botali Diesel – eight-cylinder air-cooled 118 hp Bramo Source:Gunston except where noted Bramo Sh.14A Bramo 301 Bramo 314 Bramo 322 Bramo 323 Fafnir Bramo 325 Bramo 328 Bramo 329 Twin Fafnir Bramo 109-002 Bramo 109-003 Brandner Brandner E-300 Breda Breda 320hp V-8 Breguet-Bugatti Breguet-Bugatti U.16 Breguet-Bugatti U.24 Breguet-Bugatti U.24bis Breguet-Bugatti Quadrimotor Type A Breguet-Bugatti Quadrimotor Type B Breguet-Bugatti H-32B Breitfeld & Danek Breitfeld & Danek Perun I 6-cylinder 170 hp Breitfeld & Danek Perun II 6-cylinder 276 hp Breitfeld & Danek BD-500 500 hp Breitfeld & Daněk Hiero IV Breitfeld & Daněk Hiero L Breitfeld & Daněk Hiero N Breese Breese 40hp 3-cyl radial Breuer (Breuer Werke G.m.b.H.) Breuer 9-091 Breuer 9-094 Brewer (Captain R.W.A. Brewer) Brewer Type M Gryphon O-8 Brewer 250hp O-12 Brewer 500hp X-16 Briggs & Stratton Briggs & Stratton Vanguard Big Block V-Twin Bristol Engine Company (Bristol) Division of Bristol Aeroplane Company formed when Cosmos Engineering was taken over in 1920. Became Bristol Aero Engines in 1956. Merged with Armstrong Siddeley in 1958 to form Bristol Siddeley. Sources: Piston engines, Lumsden, gas turbine and rocket engines, Gunston. 
Bristol Aquila Bristol Centaurus Bristol Coupled Centaurus Bristol Cherub Bristol Draco - fuel injected Pegasus radial Bristol Hercules Bristol Hydra Bristol Jupiter - originally Cosmos Jupiter Bristol Lucifer Bristol Mercury Bristol Neptune Bristol Olympus Bristol Orion - Jupiter variant Bristol Orion sleeve-valve Bristol Orion (BE.25) turbo-prop/shaft Bristol Orpheus Bristol Pegasus (radial engine) Bristol BE53 Pegasus (later BS53, the Harrier engine) Bristol Perseus Bristol Phoebus Bristol Phoenix diesel radial Bristol Proteus - turboprop Bristol Taurus Bristol Theseus - turboprop Bristol Thor - ramjet Bristol Titan - 5-cylinder radial Ramjets Bristol BE.25 Bristol BRJ.1 6in ramjet. Initial development model using Boeing combustor. Bristol BRJ.2 16in ramjet. Scaled up BRJ1 with Boeing combustor. Bristol BRJ.2/5 16in M2 ramjet. Used on early Red Duster. Known to the MoS as BT.1 Thor Bristol BRJ.3 16in M2 ramjet. Fitted with NGTE combustor and used on XRD. Rated at M3. Bristol BRJ.4/1 16in M2 ramjet. Used on early Red Duster and Bloodhound I. Known to the MoS as BT.2 Thor Bristol BRJ.5/1 16in M2 ramjet. Used on Bloodhound II. Became BT.3 Thor Bristol BRJ.601 16in M3 ramjet. Tested on Bobbin. Bristol BRJ.701 23in M3 ramjet project study. Bristol BRJ.801 Bristol BRJ.801 18in M3 ramjet. Initial M3 ramjet developed for Stage 1¾ Blue Envoy. Bristol BRJ.811 18in M3 ramjet. M3 ramjet developed for Stage 1¾ Blue Envoy. Bristol BRJ.824 18in M3 ramjet. Cancelled with Blue Steel Mk2. Bristol Siddeley Bristol Siddeley was formed by Bristol taking over Armstrong Siddeley, rebranding several of the engines. It took over de Havilland engines and, in turn, became a division of Rolls-Royce Limited.
Bristol Siddeley BE.58 Bristol Siddeley Pegasus (BE.53) Bristol Siddeley BS.59 Bristol Siddeley BS.100 Bristol Siddeley BS.143 Bristol Siddeley BS.347 Bristol Siddeley BS.358 Bristol Siddeley BS.360 - ex de Havilland, finalised as Rolls-Royce Gem Bristol Siddeley BS.605 Bristol Siddeley BS.1001 Bristol Siddeley M2.4 - 4.2 ramjet. Bristol Siddeley BS.1002 Bristol Siddeley M4.5 ramjet. Bristol Siddeley BS.1003 Odin Bristol Siddeley M3.5 ramjet, Odin. Bristol Siddeley BS.1004 Bristol Siddeley M2.3 ramjet. Bristol Siddeley BS.1005 Bristol Siddeley BS.1006 Bristol Siddeley M4 research ramjet. Became R.2 research engine. Bristol Siddeley BS.1007 Bristol Siddeley BS.1008 Bristol Siddeley M1.2 ramjet. Bristol Siddeley BS.1009 Bristol Siddeley M3 ramjet. Modified BT.3 Thor intended for proposed Bloodhound III. Modified nozzle, intake and diffuser. Bristol Siddeley BS.1010 Bristol Siddeley BS.1011 Rated at 40,000 lb (177.9 kN). Bristol Siddeley BS.1012 Bristol Siddeley combination powerplant for APD 1019 and P.42. Used Olympus or BS.100 turbomachinery, bypass duct burning and ramjets. Bristol Siddeley BS.1013 Bristol Siddeley ramjet study for stand-off missile. Possibly for Pandora. Bristol Siddeley/SNECMA M45G Bristol Siddeley/SNECMA M45H Bristol Siddeley Gamma (for Black Knight) Bristol Siddeley Gnome - ex de Havilland Bristol Siddeley Gyron Junior - ex de Havilland Bristol Siddeley Stentor - ex Armstrong Siddeley Bristol Siddeley Double Spectre - two stacked de Havilland Spectres Bristol Siddeley PR.23 Bristol Siddeley PR.37 Bristol Siddeley Artouste - licence-built Turbomeca Artouste Bristol Siddeley Cumulus Bristol Siddeley Nimbus Bristol Siddeley Orpheus Bristol Siddeley Palouste - licence-built Turbomeca Palouste Bristol Siddeley Sapphire - ex Armstrong Siddeley Bristol Siddeley Spartan I Bristol Siddeley T64 (T64-BS-6) Bristol Siddeley Viper Bristol Siddeley BSRJ.801 Bristol Siddeley BSRJ.824 Bristol Siddeley NRJ.1 Bristol Siddeley R.1 Bristol Siddeley research ramjet.
Bristol Siddeley R.2 Bristol Siddeley research ramjet. British Anzani For French Anzani engines see: Anzani British Anzani 35hp 2-cyl. British Anzani 45hp 6-cyl. British Anzani 60hp 6-cyl. British Anzani 100hp 10-cyl. British Salmson British Salmson AD.3 British Salmson AC.7 British Salmson AC.9 British Salmson AD.9 British Salmson AD.9R srs III British Salmson AD.9NG British Rotary British Rotary 100hp 10-cyl. rotary Brooke (Brooke, Chicago) Brooke 85hp 10-cyl. rotary Brooke 24hp 6-cyl. rotary Brooke Multi-X Brott (A. Brott, Denver, Colorado) Brott 35hp V-4 air-cooled Brott 45hp V-4 water-cooled Brott 60hp V-8 air-cooled Brouhot Brouhot 60hp V-8 Brownback (Brownback Motor Laboratories Inc.) Brownback C-400 (Tiger 100) Bucherer Bucherer 2-cyl rotary Buchet Buchet 6 in-line Buchet 8-12hp 3-cyl inline Buchet 24hp 6-cyl radial Bücker Bücker M 700 Budworth (David Budworth Limited) Budworth Puffin Budworth Brill Budworth Buzzard Bugatti Bugatti 8 Bugatti U-16 Bugatti Type 14 Bugatti Type 34 U-16 Bugatti Type 50B Bugatti Type 60 Burgess-White (W. Starling Burgess, Rollin H. White / Burgess Company of Marblehead, MA and White Company of Cleveland, OH) Burgess-White X-16 Burlat (Société des Moteurs Rotatifs Burlat) Burlat 8cyl. 35hp rotary - at 1800 rpm, 6,500F Burlat 8cyl. 60hp rotary - at 1800 rpm, 11,000F Burlat 8cyl. 75hp rotary - at 1800 rpm, 11,000F Burlat 16cyl. 120hp rotary - at 1750 rpm, 22,000F Burnelli Burnelli AR-3 Burt (Peter Burt) Burt 180hp V-12 C CAC CAC R-975 Cicada CAC R-1340 CAC R-1830 CAC Merlin CAE See:Teledyne CAE Caffort (Anciens Etablissements Caffort Frères) Caffort 12Aa Cal-Aero (Cal-Aero Institute, California) Cal-Aero XLC-1 Call (Henry L. Call) Call E-1 2OW Call E-2 4OW CAM (Canadian Airmotive Inc.) CAM TURBO 90 Canton-Unné Canton-Unné X-9 Cameron (Cameron Aero Engine Division / Everett S.
Cameron) Cameron C4-I-E1 Cameron C6 Cameron C12 Campini Source:Gunston Secondo Campini thermojet CANSA (Fabbrica Italiana Automobili Torino – Costruzioni Aeronautiche Novaresi S.A.) CANSA C.80 Carden Aero Engines Source:Ord-Hume. Carden-Ford 31hp 4-cyl. Carden-Ford S.P.1 CAREC (China National Aero-Engine Corporation) CAREC WP-11 Casanova (Ramon Casanova) Casanova pulse-jet Cato Cato 35hp 2-cyl 2OA Cato 60hp 4-cyl 4IL Cato C-2 75 hp 2OA Caunter Caunter B Caunter C Caunter D Centrum Centrum 150hp 6-cyl radial Ceskoslovenska Zbrojovka Data from: Ceskoslovenska Zbrojovka ZOD 260-B 2-stroke radial diesel engine – 260 hp CFM International CFM International CFM56 CFM International LEAP Chaise (Societe Anonyme Omnium Metallurgique et Industriel / Etablissements Chaise et Cie) Chaise 12hp V-2 Chaise 30hp V-4 Chaise 4A 101 hp Chaise 4B 120 hp (14° inverted V-4) Chaise 4Ba Chaise AV.2 Chamoy (M. Fernand Chamoy) Chamoy 5-cyl radial Chamberlin Chamberlin L-236 Chamberlin L-267 Changzhou (Changzhou Lan Xiang Machinery Works) Changzhou WZ-6 Charomskiy Source:Gunston Charomskiy AN.1 Charomskiy ACh-30 Charomskiy ACh-31 Charomskiy ACh-32 Charomskiy ACh-39 Charomskiy M-40 Chelomey Chelomey D-3 Pulse-jet Chelomey D-5 Pulse-jet Chelomey D-6 Pulse-jet Chelomey D-7 Pulse-jet Chenu Chenu 50-65hp 4-cyl DD Chenu 75hp 6-cyl in-line Chenu 90hp 4-cyl GD Chenu 80-90hp 6-cyl DD Chenu 80-90hp 6-cyl GD Chenu 200-250hp 6-cyl DD (for dirigibles) Chengdu Chengdu WS-18 Chevrolair (The Arthur Chevrolet Aviation Motors Corporation) Chevrolair 1923 Water-cooled in-line 4 upright Chevrolair D-4 Chevrolair D-6 Chevrolair 1923 Air-cooled in-line 4 upright and inverted Chevrolet Chevrolet Turbo-Air 6 engine Chinese aero-engines Chotia Chotia 460 Christoffersen (Christoffersen Aircraft Company) Christoffersen 120hp 6-cyl in-line Christoffersen 120hp V-12 Chrysler Chrysler IV-2220 Chrysler T36D Church (Jim Church) Church J-3 Marathon Church V-248 V-8 Cicaré Cicaré 4C2T Cirrus Cirrus I Cirrus II Cirrus III 
Cirrus Hermes Cirrus Major Cirrus Minor Cisco Motors Cisco Snap 100 Citroën Citroën 2cyl Citroën 2CV – 18 hp Citroën 4cyl Citroën GS 1.2 – 65 hp at 5,700 rpm Clapp's Cars Clapp's Cars Spyder Standard Clément-Bayard Data from: Clément-Bayard 30hp 2-cyl HOW Clément-Bayard 29hp 4-cyl in-line Clément-Bayard 40hp 4-cyl in-line Clément-Bayard 100hp 4-cyl in-line Clément-Bayard 118.5hp 4-cyl in-line Clément-Bayard 117.5hp 6-cyl in-line Clément-Bayard 250hp 6-cyl in-line (for dirigibles) Clément-Bayard 50hp 7-cyl Radial Clément-Bayard 300hp 8-cyl in-line (for airships) Clément-Bayard V-16 (for airships) Cleone Cleone 1930 25hp 2-cyl hor opp 2 stroke Clerget (Société Clerget-Blin et Cie / Pierre Clerget) Source:Lumsden except where noted Clerget 50hp 7-cyl water-cooled radial (1907) Clerget 50hp 4-cyl Clerget 100hp 4-cyl Clerget 200hp V-8 Clerget 2K 16 hp Clerget 4V 40 hp 4-cyl in-line water-cooled (1908) Clerget 4W 40 hp 4-cyl in-line water-cooled (1910) Clerget 7Y 60 hp Clerget 7Z Clerget 9A (Diesel radial engine) Clerget 9B Clerget 9Bf British version of 9B 140 hp Clerget 9C Clerget 9F Clerget 9J 100 hp Clerget 9Z 110 hp Clerget 11A 200 hp variable compression Clerget 11Eb Clerget 11G 250 hp 5.7:1 compression Clerget 14D Clerget 14E Clerget 14F (Diesel radial engine) Clerget 14Fcs Clerget 14F1 Clerget 14F2 Clerget 14U Clerget 16H diesel V-16 (180x200=81.43L) Clerget 16SS diesel Clerget 16X Clerget 18 rotary 300 hp Clerget 32 diesel Clerget Type Transatlantique (H type) Clerget monocylinder powdered coal test engine Clerget monocylinder 2x variable compression Clerget monocylinder 4x variable compression Clerget 180-2T V-8 2x variable compression Clerget 180-4T V-8 4x variable compression Clerget 100hp diesel 1928 9-cyl. radial Clerget 200hp diesel 1929 9-cyl. radial Clerget 250hp diesel 9-cyl. radial Clerget 300hp diesel 9-cyl. radial Cleveland (Walter C.
Willard / Cleveland Aero Engines) Cleveland 150hp 6-cyl axial engine 6x Cleveland (Cleveland Engineering Laboratories Company) Cleveland Weger 400hp 6-cyl 2-stroke radial C.L.M. (Compagnie Lilloise de Moteurs S.A) Lille 6As 6-cyl opposed piston 2-stroke diesel (Junkers Jumo 205 licence built) Lille 6Brs (600 hp) CMB (Construction Mécanique du Béarn) See: Béarn CNA CNA C.II CNA C.VI I.R.C.43 CNA C.7 CNA D.4 CNA D.VIII Coatalen Source:Brew Coatalen 12Vrs-2 diesel Colombo Colombo C.160 Colombo D.110 Colombo E.150 Colombo S.53 Colombo S.63 Combi Combi 150hp 6-cyl Comet (Comet Engine Corp, Madison WI.) Comet 130hp Comet 5 Comet 7-D 1928 (ATC 9) = 150 hp 612ci 7RA. Comet 7-E 1929 (ATC 47) = 165 hp 612ci 7RA. Comet 7-RA 1928 (ATC 9) = 130 hp 7RA. Compagnie Lilloise de Moteurs See:C.L.M. Conrad (Deutsche Motorenbau G.m.b.H. / Robert Conrad) Conrad C.III – (licence built by N.A.G. as the C.III N.A.G.) Continental Continental 140 Continental 141 Continental 142 Continental 160 Continental 210 Continental 217 Continental 219 Continental 220 Continental 227 Continental 320 Continental 324 Continental TS-325 Continental 327 Continental 352 Continental 354 Continental 356 Continental 420 Continental 500 Continental TP-500 Continental A40 Continental A50 Continental A65 Continental A70 Continental A75 Continental A80 Continental A90 Continental A100 Continental C75 Continental C85 Continental C90 Continental C115 Continental C125 Continental C140 Continental C145 Continental C175 Continental CD175 Thielert Centurion diesel engines 2010s Continental CD300 Thielert Centurion diesel engines 2010s Continental E165 Continental E185 Continental E225 Continental E260 Continental GR9-A Continental GR18 Continental GR36 Continental Tiara 4-180 Continental Tiara 6-260 Continental Tiara 6-285 Continental Tiara 6-320 Continental Tiara 8-380 Continental Tiara 8-450 Continental Voyager 200 Continental Voyager 300 Continental Voyager 370 Continental Voyager 550 Continental O-110 Continental 
O-170 Continental O-190 Continental O-200 Continental O-240 Continental O-255 Continental O-270 (Tiara) Continental O-280 Continental O-300 Continental O-315 Continental IO-346 Continental O-360 Continental O-368 (4cyl. O-550) Continental O-405 (Tiara) Continental O-470 Continental O-520 Continental O-526 Continental O-540 (Tiara) Continental O-550 Continental OL-200 Continental OL-370 Continental-Honda OL-370 Continental OL-550 Continental OL-1430 Continental V-1650 (Merlin) Continental V-1430 Continental IV-1430 Continental I-1430 Continental XH-2860 Continental R-545 Continental R-670 Continental R-975 Continental W670 Continental TD-300 Continental Model R-20 Continental J69 Continental J87 Continental J100 Continental RJ35 Ramjet Continental RJ45 Ramjet Continental RJ49 Ramjet Continental T51 Continental T65 Continental T67 Continental T69 Continental T72 Continental Titan X340 Continental Titan X320 Continental Titan X370 Cors-Air (Cors-Air srl, Barco di Bibbiano, Italy) Cors-Air M19 Black Magic Cors-Air M21Y Cors-Air M25Y Black Devil Corvair (conversions and derivatives of the Chevrolet Turbo-Air 6 engine) AeroMax Aviation AeroMax 100 Clapp's Cars Spyder Standard Magsam/Wynne (Del Magsam / William Wynne) Cosmos Engineering Cosmos Jupiter Cosmos Lucifer Cosmos Mercury Cosmos Hercules 1,000 hp - 18x Coventry Victor Coventry Victor Neptune Crankless Engines Company (Anthony Michell) Michell XB-4070 C.R.M.A. (Société de construction et de Reparationde Materiel Aéronautique) C.R.M.A. Type 102 Curtiss Curtiss 250hp V-12 1649 cu in AB? 
Curtiss 25-30hp Curtiss A-2 (9 hp V-2) Curtiss A-4 Curtiss A-8 Curtiss B-4 Curtiss AB Curtiss B-8 Curtiss C-1 Curtiss C-2 Curtiss C-4 Curtiss C-6 Curtiss C-12 Curtiss CD-12 Curtiss Crusader Curtiss D-12 Curtiss E-4 Curtiss E-8 100 hp V-8 Curtiss H Curtiss K Curtiss H-1640 Chieftain Curtiss K-6 Curtiss K-12 Curtiss S Curtiss L Curtiss O Curtiss OX-2 Curtiss OX-5 Curtiss OXX-2 Curtiss OXX-3 Curtiss OXX-5 Curtiss OXX-6 Curtiss R-600 Challenger Curtiss R-1454 Curtiss V V-8 Curtiss V-2 V-8 Curtiss V-3 V-8-8 Curtiss V-4 V-12 Curtiss V-1400 Curtiss V-1460 Curtiss V-1550 Curtiss V-1570 Conqueror Curtiss VX Curtiss-Kirkham Curtiss-Kirkham K-12 Curtiss-Wright Curtiss-Wright LR25 Curtiss-Wright RJ41 Ramjet Curtiss-Wright RJ47 Ramjet Curtiss-Wright RJ51 Ramjet Curtiss-Wright RJ55 Ramjet Curtiss-Wright RC2-60 Wankel engine Curtiss-Wright R-600 Challenger Curtiss-Wright TJ-32 (Olympus from Bristol, modified by CW) Curtiss-Wright TJ-38 Zephyr (Americanised Olympus 551) Cuyuna See:2si D D-Motor D-Motor LF26 D-Motor LF39 D'Hennian D'Hennian 10-12hp rotary D'Hennian 50hp 7-cyl rotary Daiichi Kosho Company Daiichi Kosho DK 472 Daimler-Benz Source:Gunston except where noted Daimler P 12hp 1896 airship engine Daimler N 28hp 1899 airship engine Daimler 1900 flugmotor Daimler 1910 4-cyl. 55hp Daimler H4L 160hp airship engine Daimler J4 210hp airship engine Daimler J4L 230hp airship engine Daimler J4F 360hp airship engine Daimler J8L 480hp airship engine Daimler-Benz 1926 2-cyl. 
Daimler-Benz F.2 Daimler-Benz 750hp V-12 diesel Mercedes-Benz LOF.6 airship diesel engine Daimler NL.1 - Zeppelin motor Daimler-Benz OF 2 4-stroke V-12 diesel Daimler-Benz DB 600 Daimler-Benz DB 601 Daimler-Benz DB 602 V-16 diesel Daimler-Benz DB 603 Daimler-Benz DB 604 (X-24) Daimler-Benz DB 605 Daimler-Benz DB 606 (Coupled DB 601) Daimler-Benz DB 607 (Diesel) Daimler-Benz DB 609 (IV-16) Daimler-Benz DB 610 (Coupled DB 605) Daimler-Benz DB 612 Daimler-Benz DB 613 (Coupled DB 603G) Daimler-Benz DB 614 Daimler-Benz DB 615 (Coupled DB 614) Daimler-Benz DB 616 Daimler-Benz DB 617 Daimler-Benz DB 618 (Coupled DB 617) Daimler-Benz DB 619 (Coupled DB 609) Daimler-Benz DB 620 (Coupled DB 628) Daimler-Benz DB 621 Daimler-Benz DB 622 Daimler-Benz DB 623 Daimler-Benz DB 624 Daimler-Benz DB 625 Daimler-Benz DB 626 Daimler-Benz DB 627 Daimler-Benz DB 628 Daimler-Benz DB 629 Daimler-Benz DB 630 W-36(Coupled W-18) Daimler-Benz DB 631 Daimler-Benz DB 632 Daimler-Benz DB 670 Daimler-Benz DB 720 (PTL 6) Daimler-Benz DB 721 (PTL 10) Daimler-Benz DB 730 (ZTL 6) Daimler-Benz 109-007 (Turbofan) Daimler-Benz 109-016 (Turbojet) Daimler-Benz 109-021 (Turbojet) Daimler-Benz PTL 6 Daimler-Benz PTL 10 Daimler-Benz ZTL 6 Daimler-Benz ZTL 6000 Daimler-Benz ZTL 6001 Daimler-Benz ZTL 109-007 Daimler F7502 Daimler-Versuchmotor F7506 Daimler D.IIIb - (Not related to Mercedes D.III) Mercedes 50hp 4-cyl in-line Mercedes 60hp 4-cyl in-line Mercedes 70hp 4-cyl in-line inverted Mercedes 80hp 6-cyl in-line Mercedes 90hp 4-cyl in-line Mercedes 120hp 4-cyl in-line (airship engine) Mercedes 160hp 6-cyl in-line Mercedes 180hp 6-cyl in-line Mercedes 240hp 8-cyl in-line Mercedes 240hp V-8 (airship engine) Mercedes 260hp 6-cyl in-line Mercedes 650hp V-12 Mercedes Typ E4F 70 hp Mercedes Typ E6F 100 hp Mercedes Typ J4L 120 hp Mercedes Typ J8L 240 hp V-8 Mercedes W-18 Mercedes Fh 1256 Mercedes D.I Mercedes D.II Mercedes D.III Mercedes D.IIIa Mercedes D.IIIaü Mercedes D.IIIav Mercedes D.IV Mercedes D.IVa 
Damblanc-Mutti Damblanc-Mutti 165hp Damblanc-Mutti 11-cyl. rotary 220 hp Danek (Českomoravská-Kolben-Daněk & Co.) Danek Praga 500 hp V-12 Daniel (Daniel Engine Company) Daniel 7-cyl rotary Dansette-Gillet Dansette-Gillet Type A 45hp 4-cyl in-line Dansette-Gillet Type C 32hp 4-cyl in-line Dansette-Gillet Type D 70hp 4-cyl in-line Dansette-Gillet 100hp 6-cyl in-line Dansette-Gillet 120hp V-8 Dansette-Gillet 200hp 6-cyl in-line Darracq Data from: Darracq 25hp O-2 Darracq 50hp O-4 Darracq 43hp 4-cyl in-line Darracq 84hp 4-cyl in-line Darracq 12Da 420 hp V-12 Dassault Dassault MD.30 Viper Dassault R.7 Farandole Day (Charles Day) Day 25hp 5-cyl Dayton (Dayton Airplane Engine Co.) Dayton Bear de Dietrich de Dietrich 4-cyl in-line De Dion-Bouton De Dion-Bouton 80 hp V-8 De Dion-Bouton 100 hp V-8 De Dion-Bouton 130 hp 12B V-12 De Dion-Bouton 150 hp V-8 De Dion-Bouton 800 hp X-16 de Havilland Sources: Piston engines, Lumsden; gas turbine and rocket engines, Gunston. Piston engines de Havilland Iris de Havilland Ghost (V8) de Havilland Gipsy de Havilland Gipsy Twelve - known as "Gipsy King" in military service de Havilland Gipsy Major - also known as Gipsy IIIA de Havilland Gipsy Minor de Havilland Gipsy Queen de Havilland Gipsy Six Gas turbines Halford H.1 de Havilland Ghost de Havilland Gnome de Havilland Goblin de Havilland Gyron de Havilland Gyron Junior Rockets de Havilland Spectre de Havilland Double Spectre - two Spectre engines mounted together de Havilland Sprite - for rocket-assisted take off de Havilland Super Sprite - development of Sprite de Laval de Laval T42 Deicke (Arthur Deicke) Deicke ADM-7 Delafontaine Delafontaine Diesel – seven-cylinder air-cooled Delage Delage 12C.E.D.irs Delage Gvis DeltaHawk DeltaHawk DH160 DeltaHawk DH180A4 Demont (Messrs Demont, Puteaux, France) Demont 300hp 6-cyl double-acting rotary Deschamps Data from: (D.J. Deschamps designer – Lambert Engine & Machine Co., Illinois manufacturer) Deschamps V-12 inverted 2-stroke diesel Detroit Aero 
Detroit Aero 25-30hp 2OA DGEN DGEN 380 Diamond Engines Diamond Engines GIAE 50R Diamond Engines GIAE 75R Diamond Engines GIAE-110R Diemech Turbine Solutions (DeLand, Florida, United States) Diemech TJ 100 Diemech TP 100 Diesel Air Diesel Air Dair 100 DKW (A.G.-Werk DKW, Zschopau S.a.) DKW FL 600W Doble-Besler Doble-Besler V-2 steam engine Dobrotvorskiy Dobrotvorskiy MB-100 Dobrotvorskiy MB-102 Dobrynin Source:Gunston Dobrynin VD-4K Dobrynin VD-7 Dongan (also known as Harbin Engine Factory) Dongan HS-7 Dongan HS-8 Dongan WJ-5 Dongan WZ-5 Dongan WZ-6 Dodge Dodge 125hp 6-cyl rotary Victory Dorman (W. H. Dorman and Co., Ltd) Dorman 60-80hp V-8 Douglas Mostly developed from Douglas motorcycle engines Douglas 350cc Douglas 500cc Douglas Dot Douglas 736cc (some sources 737cc) Douglas 750cc Douglas Digit 22 hp at 3,000rpm Douglas Dryad Douglas/Aero Engines Sprite/ Aero Engines 1500cc Douseler Douseler 40hp 4-cyl in-line Dreher (Dreher Engineering Company) Dreher TJD-76 Baby Mamba Duesenberg Duesenberg Special A Duesenberg Special A3 Duesenberg H 850 hp V-16 Duesenberg 100hp 4-cyl. direct drive in-line Duesenberg 125hp 4-cyl. 
geared in-line Duesenberg 300hp V-12 Duesenberg A-44 70 hp 4-cyl Dufaux Dufaux 5-cyl tandem double-acting in-line engine Dushkin Dushkin D-1-A-1100 Dushkin RD-A-150 Dushkin RD-A-300 Dushkin S-155 Dushkin RD-2M Dutheil et Chalmers Data from: (some sources erroneously as Duthiel-Chambers) Dutheil et Chalmers 20hp O-2 Dutheil et Chalmers 25hp O-2 Dutheil et Chalmers 37.25hp O-2 Dutheil et Chalmers 40hp O-4 Dutheil et Chalmers 50hp O-4 Dutheil et Chalmers 60hp O-6 Dutheil et Chalmers 72.5hp O-6 Dutheil et Chalmers 76hp O-4 Dutheil et Chalmers 38hp OP-2 Dutheil et Chalmers 56.5hp O-3 Dutheil et Chalmers 75hp O-4 Dutheil et Chalmers 97hp O-4 Dutheil et Chalmers 100hp O-4 Dutheil et Chalmers 72.5hp O-6 Dux Dux Hypocycle Dyna-Cam Dyna-Cam E Easton Data from: Easton 50hp V-8 Easton 75hp V-8 ECi ECi O-320 ECi Titan X320 ECi Titan X340 ECi Titan X370 Ecofly (Ecofly GmbH, Böhl-Iggelheim, Germany) Ecofly M160 Edelweiss Edelweiss 75hp 6-cyl fixed piston radial Edelweiss 125hp 6-cyl fixed piston radial Eggenfellner Aircraft Eggenfellner E6 E.J.C. E.J.C. 60hp 6-cyl rotary E.J.C. 10-cyl rotary Elbridge (Elbridge Engine Company) Elbridge A 2IW 6-10 hp Elbridge C 3IW 18-30 hp Elbridge 4-cyl 4IW Elbridge Featherweight 3-cyl 3IW 30-40 hp Elbridge Featherweight 4-cyl 4IW 40-60 hp Elbridge Featherweight 6-cyl 6IW 60-90 hp Elbridge Aero Special 4IW 50-60 hp Electravia Electravia GMPE 102 Electravia GMPE 104 Electravia GMPE 205 Electric Aircraft Corporation Electric Aircraft Corporation Electra 1 Elektromechanische Werke Elektromechanische Werke Taifun rocket motor Elektromechanische Werke Wasserfall rocket motor Elizalde Source:Gunston Elizalde A Elizalde A6? Elizalde Dragon Elizalde D V Elizalde D VII Elizalde D IX B. Elizalde D IX M.R. Elizalde D IX C.R. Elizalde Super Dragon Elizalde S.D.M Elizalde S.D.M.R. Elizalde S.D.C Elizalde S.D.C.R. 
Elizalde Sirio Elizalde Tigre IV Elizalde Tigre VI Elizalde Tigre VIII Elizalde Tigre XII Ellehammer Ellehammer 3-cyl radial Ellehammer 5-cyl radial Ellehammer rotary engine Emerson Emerson 100hp 6-cyl EMG (EMG Engineering Company / Eugene M. Gluhareff) Gluhareff G8-2-20 Gluhareff G8-2-80 Gluhareff G8-2-130 Gluhareff G8-2-250 Emrax Emrax 2 Emrax 207 Emrax 228 Emrax 268 Endicott Endicott 60hp 3-cyl 2-stroke Engine Alliance Engine Alliance GP7000 Engineered Propulsion Systems Engineered Propulsion Systems Graflight V-8 Engineering Division Engineering Division W-1 750 hp W-18 Engineering Division W-1A-18 Engineering Division W-2779 Engineering Division W-2 1000 hp W-18 Engineering Division 350hp 9-cyl radial ENMA (Empresa Nacional de Motores de Aviación S.A.) ENMA Alcion ENMA Beta ENMA Flecha ENMA Sirio ENMA Tigre ENMA A-1 Alcion ENMA F-IV Flecha ENMA Flecha F.1 ENMA Sirio S2 ENMA Sirio S3 ENMA S-VII Sirio ENMA 4.(2L)-00-93 ENMA 7.E-CR.15-275 ENMA 7.E-C20-500 ENMA 7.E-CR20-600 ENMA 7.E-CR.15-275 ENMA 9.E-C.29-775 E.N.V. E.N.V. Type A E.N.V. Type C E.N.V. Type D E.N.V. Type F/FA E.N.V. Type H E.N.V. 40hp V-8 E.N.V. 62hp V-8 E.N.V. 75hp V-8 E.N.V. 100hp V-8 E.N.V. 1914 100hp V-8 E.N.V. 1909 25/30hp O-4 E.N.V. 1910 30hp O-4 ERCO ERCO IL-116 Esselbé Esselbé 65hp 7-cyl rotary Etoile Etoile 400hp EuroJet Eurojet EJ200 Europrop Europrop TP400 F F&S F&S K 8 B Fahlin Fahlin Plymouth conversion Fairchild For Ranger and Fairchild Ranger engines see: Ranger Source:Gunston except where noted Fairchild Caminez 4-cylinder Fairchild Caminez 8 cylinder Fairchild J44 Fairchild J63 Fairchild J83 Fairchild T46 Fairdiesel Fairdiesel barrel engine Fairey None of Fairey Aviation Company's own engine designs made it to production. 
Felix - imported Curtiss D-12 engines P.12 Prince - V-12 P.16 Prince - H-16 P.24 Monarch also known as Prince 4 Falconer (Ryan Falconer Racing Engines) Falconer L-6 Falconer V-12 Farcot Farcot 8-10hp V-2 Farcot Fan-6 Farcot 100-110hp V-8 Farcot 30 hp 8-cyl radial Farcot 65 hp 8-cyl radial Farcot 100 hp 8-cyl radial Farina (S.A. Stabilimenti Farina) Farina Algol Farina Aligoth Farina T.58 Farman Source:Liron Note: Farman engine designations differ from other French manufacturers in using the attributes as the basis of the designation, thus: Farman 7E (7-cyl radial, E - Etoile / Star / Radial) or Farman 12We (W-12, fifth type - the e is not a variant or sub-variant, it is the type designator). As usual there are exceptions, such as the 12Gvi, 12B, 12C and 18T. Farman 7E Farman 7Ea Farman 7Ear Farman 7Ears Farman 7Ec Farman 7Ed Farman 7Edrs Farman 8V 200 hp Farman 8Va Farman 8VI Farman 9E Farman 9Ea Farman 9Ears Farman 9Eb Farman 9Ebr Farman 9Ecr Farman 9Fbr Farman 12B Farman 12Bfs Farman 12Brs Farman 12C Farman 12Crs Farman 12Crvi Farman 12D Farman 12Drs Farman 12G inverted V-12 350 hp Farman 12Goi Farman 12Gvi Farman 12V Farman 12Va Farman 12W Farman 12Wa 40° W-12 1919 Farman 12Wb Farman 12Wc Farman 12Wd Farman 12We Farman 12Wers Farman 12Wh Farman 12Wiars Farman 12Wirs Farman 12Wkrs Farman 12Wkrsc Farman 12WI Farman 18T Farman 18W Farman 18Wa Farman 18Wd Farman 18We Farman 18Wi Farman 18Wirs Fasey Fasey 200hp V-12 Fatava Source: Fatava 45hp 4IL Fatava 90hp V-8 Fatava 180hp X-16 Faure and Crayssac Faure and Crayssac 80hp rotary Faure and Crayssac 350hp 6-cyl. 2-stroke barrel engine Fedden Designed post-war by Roy Fedden, formerly of Cosmos Engineering and Bristol. Roy Fedden Ltd went into liquidation in 1947. Fedden Cotswold - design only. Fedden 6A1D-325 (185 hp 6HO) Fedden G6A1D-325 (geared) 6AID-325? Fiat Data from: Italian Civil & Military Aircraft 1930–1945 Fiat twin Airship engine Fiat V-12 400hp ca. 
1919 Fiat SA8/75 (50 hp V-8 air-cooled) 1908 Fiat S.54 Fiat S.55 (V-8 water-cooled 1912) Fiat S.56A Fiat S.76A Fiat A.10 Fiat A.12 Fiat A.14 Fiat A.15 Fiat A.16 Fiat A.18 Fiat A.20 Fiat A.22 Fiat A.24 Fiat A.25 Fiat A.30 Fiat A.33 Fiat A.33 R.C.35 Fiat A.38 R.C.15/45 Fiat A.50 Fiat A.52 Fiat A.53 Fiat A.54 Fiat A.55 Fiat A.58 Fiat A.58 C. Fiat A.58 R.C. Fiat A.59 Fiat A.60 Fiat A.70 Fiat A.70 S. Fiat A.74 Fiat A.75 R.C.53 Fiat A.76 Fiat A.76 R.C.18S Fiat A.76 R.C.40 Fiat A.78 Fiat A.80 Fiat A.82 Fiat A.83 Fiat A.83 R.C.24/52 Fiat AS.2 Schneider Trophy 1926 Fiat AS.3 Fiat AS.5 Schneider Trophy 1929 Fiat AS.6 Schneider Trophy 1931 Fiat AS.8 Fiat RA.1000 Monsone Fiat RA.1050 Tifone Fiat ANA Diesel – six in-line, water-cooled – 220 hp Fiat AN.1 Diesel Fiat AN.2 Diesel Fiat 4001 Fiat 4002 Fiat 4004 Fiat 4023 Fiat 4024 Fiat 4032 Fiat 4301 Fiat 4700 Fiat D.16 Firewall Forward Aero Engines Firewall Forward CAM 100 Firewall Forward CAM 125 FKFS FKFS Gruppen-Flugmotor A FKFS Gruppen-Flugmotor B? FKFS Gruppen-Flugmotor C FKFS Gruppen-Flugmotor D FKFS Gruppen-Flugmotor 37.6 l 48-cyl Flader Source:Geen and Cross Flader J55 Type 124 Lieutenant Flader T33 Type 125? Brigadier Fletcher Fletcher 5hp Fletcher 9hp Fletcher Empress 50 hp rotary FNM FNM R-760 FNM R-975 Ford Ford O-145 4 Cylinder X engine 8 Cylinder X engine Ford PJ31 Pulsejet, see Republic-Ford JB-2 Ford V-1650 Liberty V-12 Fox (Dean Manufacturing Company, Newport, Kentucky) Fox 45hp 3-cyl in-line 2-stroke Fox 36hp 4-cyl in-line 2-stroke Fox 60hp 4-cyl in-line 2-stroke Fox 90hp 6-cyl in-line 2-stroke Fox 200hp 8-cyl in-line 2-stroke Fox De-luxe 50hp 4-cyl in-line 2-stroke Franklin Source:Gunston. 
Franklin 2A4-45 Franklin 2A4-49 Franklin 2A-110 Franklin 2A-120 Franklin 2AL-112 Franklin 4A-225 Franklin 4A-235 Franklin 4A4-100 Franklin 4A4-75 Franklin 4A4-85 Franklin 4A4-95 Franklin 4AC-150 Franklin 4AC-171 Franklin 4AC-176 Franklin 4AC-199 Franklin 4AC Franklin 4ACG-176 Franklin 4ACG-199 Franklin 4AL-225 Franklin 6A-335 Franklin 6A-350 Franklin 6A3 Franklin 6A4 Franklin 6A4-125 Franklin 6A4-130 Franklin 6A4-135 Franklin 6A4-140 Franklin 6A4-145 Franklin 6A4-150 Franklin 6A4-165 Franklin 6A4-200 Franklin 6A8-215 Franklin 6A8-225-B8 Franklin 6AC-264 Franklin 6AC-298 Franklin 6AC-403 Franklin 6ACG-264 Franklin 6ACG-298 Franklin 6ACGA-403 Franklin 6ACGSA-403 Franklin 6ACSA-403 Franklin 6ACT-298 Franklin 6ACTS-298 Franklin 6ACV-245 Franklin 6ACV-298 Franklin 6ACV-403 (O-405? most likely company designation) Franklin 6AG-335 Franklin 6AG4-185 Franklin 6AG6-245 Franklin 6AGS-335 Franklin 6AGS6-245 Franklin 6AL-315 Franklin 6AL-335 Franklin 6AL-500 Franklin 6ALG-315 Franklin 6ALV-335 Franklin 6AS-335 Franklin 6AS-350 Franklin 6V-335-A Franklin 6V-335-A1A Franklin 6V-335-A1B Franklin 6V-335-B Franklin 6V-335 Franklin 6V-350 Franklin 6V4 Franklin 6V4-165 Franklin 6V4-178 Franklin 6V4-200 Franklin 6V4-335 Franklin 6V6-245-B16F Franklin 6V6-245 Franklin 6V6-300-D16FT Franklin 6V6-300 Franklin 6VS-335 Franklin 8AC-398 Franklin 8ACG-398 Franklin 8ACG-538 Franklin 8ACGSA-538 Franklin 8ACSA-538 Franklin 12AC-596 Franklin 12AC-806 Franklin 12ACG-596 Franklin 12ACG-806 Franklin 12ACGSA-806 Franklin O-150 Franklin O-170 Franklin O-175 Franklin O-180 (Franklin 4AC-176-F3) Franklin O-200 Franklin O-300 Franklin O-335 Franklin O-405 Franklin O-425-13 Franklin O-425-2 Franklin O-425-9 Franklin O-425 Franklin O-540 Franklin O-595 Franklin O-805 Franklin XO-805-1 Franklin XO-805-3 Franklin XO-805-312 Franklin Sport 4 Fredrickson (World's Motor Company, Bloomington, Illinois) Fredrickson Model 5a Fredrickson Model 10a Frontier (Frontier Iron Works, Buffalo, New York) Frontier 35hp 
4-cyl in-line Frontier 55hp V-8 Fuji Fuji JO-1 (Nippon JO-1) Fuji J3-1 (Nippon J3-1) Fuscaldo Fuscaldo 90hp Funk (Akron Aircraft Company / Funk Aircraft Company) Funk Model E G Gaggenau Gaggenau 4-cyl in-line Gajęcki Gajęcki XL-Gad Galloway (Galloway Engineering Company ltd.) Galloway Adriatic 6IL Galloway Atlantic (master rod) Garrett Source:Gunston except where noted Now under Honeywell management/design/production AiResearch GTC 43-44 AiResearch GTC 85 Gas generator for McDonnell 120 AiResearch GTP 30 AiResearch GTP 70 AiResearch GTP 331 AiResearch GTPU 7C AiResearch GTG series AiResearch GTU series AiResearch GTCP 36 AiResearch GTCP 85 AiResearch GTCP 95 AiResearch GTCP 105 AiResearch GTCP 165 AiResearch GTCP 660 AiResearch TPE-331 AiResearch TSE-331 AiResearch TSE-231 AiResearch ETJ-131 AiResearch ETJ-331 AiResearch TJE-341 AiResearch 600 AiResearch 700 Garrett ATF3 Garrett TFE1042 Garrett TFE1088 Garrett TFE76 Garrett TFE731 Garrett TSE331 Garrett TPE331 Garrett TPF351 Garrett T76 Garrett F104 Garrett F109 Garrett F124 Garrett F125 Garrett JFS 100-13A Garuff Garuff A – aircraft diesel engine GE Honda Aero Engines GE Honda HF120 Geiger Engineering Geiger HDP 10 Geiger HDP 12 Geiger HDP 13.5 Geiger HDP 16 Geiger HDP 25 Geiger HDP 32 Geiger HDP 50 GEN Corporation GEN 125 General Aircraft Limited General Aircraft Monarch V-4 General Aircraft Monarch V-6 General Electric General Electric 7E General Electric CF6 General Electric CF34 General Electric CF700 General Electric CFE738 General Electric CJ610 General Electric CJ805 General Electric CJ810 General Electric CT7 General Electric CT58 General Electric CTF39 General Electric GE1 General Electric GE4 General Electric GE1/10 General Electric GE15 General Electric GE27 General Electric GE36 (UDF) General Electric GE37 General Electric GE38 General Electric GE90 General Electric GE9X General Electric GEnx General Electric H75 General Electric H80 General Electric H85 General Electric I-A General Electric I-16 
General Electric I-20 General Electric/Allison I-40 General Electric TG-100 General Electric TG-110 General Electric/Allison TG-180 General Electric TG-190 General Electric X39 General Electric X211 General Electric X24A General Electric X84 General Electric X353-5 General Electric F101 General Electric F103 General Electric F108 General Electric F110 General Electric F118 General Electric F120 General Electric F127 General Electric F128 General Electric F136 General Electric F138 General Electric F400 General Electric F404 General Electric T407 General Electric F412 General Electric F414 General Electric F700 General Electric J31 General Electric J33 General Electric J35 General Electric J39 General Electric J47 General Electric J53 General Electric J73 General Electric J77 General Electric J79 General Electric J85 General Electric J87 General Electric J93 General Electric J97 General Electric J101 (GE15) General Electric JT12A General Electric T31 General Electric T41 General Electric T58 General Electric T64 General Electric T407 General Electric T408 General Electric T700 (GE12) General Electric T708 General Electric TF31 General Electric TF34 General Electric TF35 General Electric TF37 General Electric TF39 General Electric/Rolls-Royce General Electric/Rolls-Royce F136 General Motors Research General Motors Research X-250 General Ordnance (General Ordnance Company, Derby, Conn.) General Ordnance 200hp V-8 Giannini (Pulsejets) Giannini PJ33 Giannini PJ35 Giannini PJ37 Giannini PJ39 Glushenkov Source:Gunston. Glushenkov TVD-10 Glushenkov TVD-20 Glushenkov GTD-3 Gnome et Rhône Source: Gnome et Rhône except where noted. In French engine designations, even sub-series numbers (for example Gnome-Rhône 14N-68) rotated anti-clockwise (LH rotation) and were generally fitted on the starboard side; odd numbers (for example Gnome-Rhône 14N-69) rotated clockwise (RH rotation) and were fitted on the port side. 
Gnome Gnome 1906 25hp rotary – prototype Gnome rotary engine Gnome 34hp 5-cyl rotary Gnome 123hp 14-cyl rotary Gnome 1907 50hp Gnome 7 Gamma 70 hp Gnome 14 Gamma-Gamma Gnome 9 Delta 100 hp Gnome 18 Delta-Delta 200 hp Gnome 7 Lambda 80 hp Gnome 14 Lambda-Lambda 160 hp Gnome 7 Sigma 60 hp Gnome 14 Sigma-Sigma 120 hp Gnome 7 Omega 50 hp Gnome 14 Omega-Omega 100 hp Gnome Monosoupape 7 Type A 80 hp Gnome Monosoupape 9 Type B-2 100 hp Gnome Monosoupape 11 Type C 190 hp Gnome Monosoupape 9 Type N 165/170 hp Gnome Monosoupape 18 Type Double-N 300 hp Gnome 600hp 20-cyl radial Gnome et Rhône Gnome-Rhône 5B - licence built Bristol Titan Gnome-Rhône 5K Titan - licence built Bristol Titan Gnome-Rhône 7K Titan Major - 7-cylinder development of 5K Gnome-Rhône 9A Jupiter - licence built Bristol Jupiter Gnome-Rhône 9K Mistral Gnome-Rhône 14K Mistral Major Gnome-Rhône 14M Mars Gnome-Rhône 14N Gnome-Rhône 14P Gnome-Rhône 14R Gnome-Rhône 18L Gnome-Rhône 18R Gnome-Rhône 28T Gobe Gobe 2-stroke engine Gobrón-Brillié (Gustave Gobrón and Eugène Brillié) Gobrón-Brillié 54hp X-8 (fitted to 1910 Voisin de-Caters) Gobrón-Brillié 102hp X-8 Goebel (Georg Goebel of Darmstadt) / (ver Gandenbergesche Maschinen Fabrik) Goebel 2-cyl. 20/25hp HOA Goebel Type II 100/110 hp 7-cyl. rotary Goebel Type III 200/230 hp 9-cyl. rotary Goebel Type V 50/60 hp 7-cyl. rotary Goebel Type VI 30/40 hp 7-cyl. rotary Goebel 170hp 9-cyl rotary Goebel 170hp 11-cyl rotary Goebel 180hp 11-cyl rotary Grade Grade 16hp V-4 2-stroke Great Plains Aircraft Supply Great Plains Type 1 Front Drive Green Green 32hp 4-cyl in-line Green 60hp 4-cyl in-line Green 82hp V-8 Green C.4 Green D.4 Green E.6 Green 150hp 6-cyl in-line Green 260-275hp V-12 1914 Green 300hp V-12 Green 450hp W-18 1914 Grégoire-Gyp (Pierre Joseph Grégoire / Automobiles Grégoire) Grégoire-Gyp 26hp 4-cyl in-line (3-cyl?) 
Grégoire-Gyp 40hp 4-cyl inverted in-line Grégoire-Gyp 51hp 4-cyl in-line Grégoire-Gyp 70hp Grey Eagle Grey Eagle 40hp 4-cyl in-line - Grey Eagle 60hp 6-cyl in-line - Grey Eagle 50hp 4-cyl in-line - Grizodubov (S.V. Grizodubov) Grizodubov 1910 40hp 4-cyl. Grob Grob 2500 Grob 2500E Guiberson (Guiberson Diesel Engine Company) Source:Gunston except where noted Guiberson A-918 Guiberson A-980 – Guiberson A-1020 – Guiberson T-1020 - (tank engine?) Guiberson T-1400 - (tank engine) Guizhou (Guizhou Liming Aircraft Engine Company) Guizhou WP-13 Guizhou WS-13 ("Taishan") Gyro Data from: Gyro 50hp 7-cyl rotary Old Gyro Gyro Model J 5-cyl 50 hp Duplex Gyro Model K 7-cyl 50 hp Duplex Gyro Model L 9-cyl 50 hp Duplex H Haacke (Haacke Flugmotoren) Source: RMV Haacke HFM 2 - 2-cyl. 25/28 hp Haacke HFM 2a - 2-cyl. 35 hp Haacke HFM 3 - 3-cyl. fan 40 hp Haacke 55/60hp 5-cyl. radial Haacke 60/70hp radial Haacke 90hp 7-cyl. radial Haacke 120hp 10-cyl. radial HAL See:Hindustan Aeronautics Limited Hall-Scott Hall-Scott 60 hp Hall-Scott A-1 Hall-Scott A-2 Hall-Scott A-3 Hall-Scott A-4 Hall-Scott A-5 Hall-Scott A-5a Hall-Scott A-7 Hall-Scott A-7a Hall-Scott A-8 Hall-Scott L-4 Hall-Scott L-6 Hallett (Hallett Aero Motors Corp, Inglewood CA.) Hallett H-526 7-cyl radial 130 hp Hamilton Hamilton DOHC V-8 Hamilton Sundstrand Sundstrand T100 Hansa-Lloyd (Hansa-Lloyd Werke AG) Hansa-Lloyd V-16 Hansen-Snow (W.G. Hansen & L.L. Snow, Pasadena, CA) Hansen-Snow 35hp 4-cyl in-line Hardy-Padmore Hardy-Padmore 100hp 5-cyl radial Harkness (Donald (Don) Harkness, built by Harkness & Hillier Ltd) Harkness Hornet Harriman (Harriman Motors Company, South Glastonbury, Conn.) Harriman 30hp 4-cyl in-line Harriman 60hp 4-cyl in-line Harriman 100hp 4-cyl in-line Harris-Gassner Harris-Gassner 50/60hp V-8 Harroun Harroun 24hp 2-cyl HOA Hart Hart 150hp 9-cyl rotary Hart 156hp 9-cyl radial (?) Hartland Hartland 125hp H.C.G. (Les Établissements Lipton) H.C.G. 
2-cyl HOA Heath (Heath Aircraft Corp) Heath 4-B Heath 4-C Heath B-4 Heath B-12 Heath C-2 Heath C-3 Heath C-6 Heath (Heath Aerial Vehicle Company, Chicago Illinois) Heath 25/30hp 4-cyl in-line Heath-Henderson Heath-Henderson B-4 Heinkel-Hirth Source: Heinkel HeS 1 Heinkel HeS 2 Heinkel HeS 3 Heinkel HeS 6 Heinkel HeS 8 (Heinkel 109-001) Heinkel HeS 9 Heinkel HeS 10 Heinkel HeS 011(Heinkel 109-011) Heinkel HeS 21 Heinkel HeS 30 (Heinkel 109-006) Heinkel HeS 35 Heinkel HeS 36 Heinkel HeS 40 - paper design only Heinkel HeS 50d Heinkel HeS 50z Heinkel HeS 053 Heinkel HeS 60 Heinkel 109-021 Helium From Flight Helium 45hp 3-cyl radial Helium 60hp 3-cyl radial Helium 75hp 5-cyl radial Helium 100hp 5-cyl radial Helium 45hp 3-cyl rotary 2-stroke Helium 60hp 3-cyl rotary 2-stroke Helium 100hp 5-cyl rotary 2-stroke Helium 120hp 6-cyl rotary 2-stroke Helium 200hp 10-cyl rotary 2-stroke Helium 120hp 6-cyl rotary 2-stroke Helium 200hp 10-cyl rotary 2-stroke Hendee Hendee Indian 60/65hp V-8 Hendee Indian 50hp 7-cyl rotary Hendee Indian 60hp 9-cyl rotary Henderson Henderson 6hp 4-cyl in-line Herman Herman 45hp Herman 70hp Hermes Engine Company Hermes Cirrus Hess (Aubrey W. Hess / Alliance Aircraft Corporation) Hess Warrior Hewland Hewland AE75 Hexatron Engineering Hexadyne P60 Hexadyne O-49 Hiero (Otto Hieronimus – designer – several manufacturers) Hiero 50/60hp 4-cyl in-line Hiero 6 – generic title for all the Hiero 6-cyl. 
engines Hiero B Hiero C Hiero D Hiero E Hiero L Hiero N Hiero 85/95hp 4-cyl in-line Hiero 145hp Hiero 185hp Hiero 180/190hp 4-cyl inline Hiero 200hp 6-cyl inline Hiero 230/240hp 6-cyl inline Hiero 240/250hp 6-cyl inline HC Hiero 200/220hp V-8 Hiero 300/320hp 6-cyl inline Hiero 270/280hp 6-cyl inline Hiero 35/40hp 2-cyl HOA Hill Helicopters Hill Helicopters GT50 Hiller Hiller 1910 Hiller 30hp Hiller 60hp Hiller 90hp Hiller Aircraft Hiller 8RJ2B – ramjet for the Hiller YH-32 Hornet Hilz Hilz 45/50hp 4-cyl in-line Hilz 50/55hp 4-cyl in-line Hilz 65hp 4-cyl in-line Hindustan Aeronautics Limited HAL HPE-2 HAL PTAE-7 HAL HJE-2500 HAL HTFE-25 HAL HTSE-1200 HAL HPE-90 HAL P.E.90H HAL HJE-2500 GTRE GTX-35VS Kaveri PTAE-7 GTSU-110 Hiro Hiro Type 14 Hiro Type 61 Hiro Type 90 Hiro Type 91 Hiro Type 94 Hirth Hirth Motoren GmbH was merged with Heinkel to make "Heinkel-Hirth" in 1941. Hirth HM 60 Hirth HM 150 Hirth HM 500 Hirth HM 501 Hirth HM 504 Hirth HM 506 Hirth HM 508 Hirth HM 512 Hirth HM 515 Hirth F-10 Hirth F-23 Hirth F-30 Hirth F-33 Hirth F-36 Hirth F-40 Hirth F-102 Hirth F-263 Hirth O-280 Hirth O-280R Hirth 2702/2703 Hirth 2704/2706 Hirth 3002 Hirth 3202/3203 Hirth 3502/3503 Hirth 3701 Hispano-Suiza Hispano-Suiza 4B? 
75 hp 4 in-line Hispano-Suiza 5Q Hispano-Suiza 6M 250 hp Hispano-Suiza 6Ma 220 hp Hispano-Suiza 6Mb 220 hp Hispano-Suiza 6Mbr 250 hp Hispano-Suiza 6O Hispano-Suiza 6P Hispano-Suiza 6Pa Hispano-Suiza 8A Hispano-Suiza 8B Hispano-Suiza 8F Hispano-Suiza 9Q licensed Wright J-6 / R-975 Whirlwind Hispano-Suiza 9T licensed Clerget 9C, diesel radial Hispano-Suiza 9V licensed Wright R-1820 Cyclone Hispano-Suiza 12B (1945) Hispano-Suiza 12G (W-12) Hispano-Suiza 12Ga (W-12) Hispano-Suiza 12Gb (W-12) Hispano-Suiza 12H Hispano-Suiza 12Ha Hispano-Suiza 12Hb Hispano-Suiza 12Hbr Hispano-Suiza 12J Hispano-Suiza 12Ja 350 hp Hispano-Suiza 12Jb Hispano-Suiza 12K Hispano-Suiza 12Kbrs Hispano-Suiza 12L Hispano-Suiza 12Lb Hispano-Suiza 12Lbr Hispano-Suiza 12Lbrx Hispano-Suiza 12M Hispano-Suiza 12N Hispano-Suiza 12X Hispano-Suiza 12Y Hispano-Suiza 12Z Hispano-Suiza 14AA radial Hispano-Suiza 14AB radial Hispano-Suiza 14H radial Hispano-Suiza 14Ha Hispano-Suiza 14Hbs Hispano-Suiza 14Hbrs 600 hp radial Hispano-Suiza 14U diesel radial Hispano Suiza 18R Hispano-Suiza 18S Hispano-Suiza 24Y Hispano-Suiza 24Z Latécoère-(Hispano-Suiza) 36Y Hispano-Suiza 48H Hispano-Suiza 48Z Hispano-Suiza Nene Hispano-Suiza Tay Hispano-Suiza Verdon Hispano-Suiza R.300 Hispano-Suiza R.800 Hispano-Suiza R.804 Hispano-Suiza J-5 Whirlwind Hispano-Suiza Type 31 Hispano-Suiza Type 34 Hispano-Suiza Type 35 Hispano-Suiza Type 36 Hispano-Suiza Type 38 Hispano-Suiza Type 39 Hispano-Suiza Type 40 Hispano-Suiza Type 41 Hispano-Suiza Type 42 Hispano-Suiza Type 42VS Hispano-Suiza Type 43 Hispano-Suiza Type 44 Hispano-Suiza Type 45 Hispano-Suiza Type 50 Ga W-12 450 hp Hispano-Suiza Type 51 Ha V-12 450 hp Hispano-Suiza Type 52 Ja V-12 350 hp Hispano-Suiza Type 57 Mb V-12 500 hp Hispano-Suiza Type 61 Hispano-Suiza Type 72 Hispano-Suiza Type 73 Hispano-Suiza Type 76 Hispano-Suiza Type 77 Hispano-Suiza Type 79 Hispano-Suiza Type 80 Hispano-Suiza Type 82 Hispano-Suiza Type 89 12Z Hispano-Suiza Type 90 Hispano-Suiza Type 93 Hitachi 
Source:Gunston. Hitachi Ha12 (Army Type 95 150hp Air Cooled Radial) Hitachi Ha13 (Army Type 95 350hp Air Cooled Radial) Hitachi Ha13a (Army Type 98 450hp Air Cooled Radial) Hitachi Ha42 Hitachi Ha47 Hitachi Ha-51 (unified designation) Hitachi GK2 Hitachi GK4 Hitachi GK2 Amakaze Hitachi Kamikaze Hitachi Hatsukaze Hitachi Jimpu Hitachi Tempu Army Type 95 150hp Air Cooled Radial (Ha12 - Hatsudoki system) Army Type 95 350hp Air Cooled Radial (Ha13 - Hatsudoki system) Army Type 98 450hp Air Cooled Radial (Ha13a - Hatsudoki system) Army Type 4 110hp Air Cooled Inline (Ha47 - Hatsudoki system / GK4 - Navy system) HKS HKS 700E HKS 700T Hodge Hodge 320hp 18-cyl radial Hofer (Al Hofer) Hofer 10-12hp 4cyl in-line Holbrook (Holbrook Aero Supply) Holbrook 35hp Holbrook 50hp Honda Honda HFX-01 Honda HFX20 Honda HF118 GE Honda HF120 Honeywell Honeywell ALF502 Honeywell HTF7000 Honeywell LF507 Honeywell LTS101 Honeywell TPE-331 Honeywell TFE731 Honeywell FX5 Hopkins & de Kilduchevsky Hopkins & de Kilduchevsky 30-40hp Hopkins & de Kilduchevsky 60-80hp Howard Howard 120hp 6-cyl in-line Hudson (John W Hudson) Hudson 100hp 10-cyl radial Hummel ( James Morris (Morry) Hummel of Bryan, Ohio) Hummel 28hp 1/2 VW Hummel 32hp 1/2 VW Hummel 45hp 1/2 VW Hummel 50hp VW Hummel 60hp VW Hummel 70hp VW Hummel 85hp VW HuoSai (HuoSai - Piston engine) HuoSai HS-5 HuoSai HS-6 HuoSai HS-7 HuoSai HS-8 Hurricane Hurricane C-450 (8-cyl 2-stroke radial) I IAE IAE V2500 IAE V2500SF SuperFan I.Ae. I.Ae. 16 El Gaucho I.Ae. 
19R El Indio IA IAO-1600-RX/1 IAME (Ital-American Motor Engineering) KFM 104 KFM 105 KFM 107 KFM 112M IAR IAR K7-I 20 IAR K9-I C40 IAR K14 IAR 4-G1 IAR 6-G1 IAR LD 450 IAR DB605 ICP ICP M09 IHI Ishikawajima Tsu-11 Ishikawajima TR-10 Ishikawajima TR-12 Ishikawajima Ne-20 Ishikawajima Ne-20-kai Ishikawajima Ne-30 Turbojet Engine of 850 kg Ishikawajima Ne-130 Ishikawajima Ne-230 Ishikawajima Ne-330 Turbojet of 1,320 hp Ishikawajima-Harima JR100 Ishikawajima-Harima JR200 Ishikawajima-Harima JR220 Ishikawajima-Harima XJ11 Ishikawajima-Harima F3 Ishikawajima-Harima F5 Ishikawajima-Harima F7 Ishikawajima-Harima XF9 Ishikawajima-Harima IGT60 Ishikawajima-Harima J3 Ishikawajima-Harima XF5 Ishikawajima-Harima T64-IHI-10 Ishikawajima-Harima T58-IHI-8B BLC Ishikawajima-Harima J79-17 Ishikawajima-Harima CT58-IHI-110 IL (Instytut Lotnictwa – Aviation Institute) IL SO-1 IL SO-3 IL K-15 ILO ILO F 12/400 Imaer Imaer 1000 Imaer 2000 Imperial (Imperial Airplane Society) Imperial 35-70hp (various 6cyl rotary engines) Imperial 100hp (12cyl rotary) In-Tech (In-Tech International Inc.) In-Tech Merlyn Indian See: Hendee Innodyn (Innodyn L.L.C.) Innodyn TAE165 Innodyn TAE185 Innodyn TAE205 Innodyn TAE255 Innodyn 165 TE Innodyn 185 TE Innodyn 205 TE Innodyn 255 TE International Data from: International 21.5hp 4-cyl rotary International 66hp 6-cyl rotary Ion (Gabriel Ion) Ion airship steam engine Irwin (Irwin Aircraft Co) Irwin 79 Meteormotor (a.k.a. X) Isaacson (Isaacson Engine (Motor Supply Co.) / R.J. Isaacson) Isaacson 45hp 7-cyl. radial Isaacson 50hp Isaacson 60hp Isaacson 6-cyl. radial Isaacson 50hp 7-cyl. radial Isaacson 65hp 7-cyl. radial Isaacson 100hp 14-cyl. radial Isaacson 100hp 9-cyl. rotary Isaacson 200hp 18-cyl. 
rotary Ishikawajima See: IHI Isotov Source:Gunston Isotov GTD-350 Isotov TV-2-117 Isotov TV-3-117 Isotov TVD-850 Isotta Fraschini Isotta Fraschini L.170 Isotta Fraschini L.180 I.R.C.C.15/40 Isotta Fraschini L.180 I.R.C.C.45 Isotta Fraschini Asso 80 Isotta Fraschini Asso 120 R.C.40 Isotta Fraschini Asso 200 Isotta Fraschini Asso 250 Isotta Fraschini Asso 450 Caccia Isotta Fraschini Asso 500 Isotta Fraschini Asso 750 Isotta Fraschini Asso IX Isotta Fraschini Asso 1000 Isotta Fraschini Asso Caccia Isotta Fraschini Asso XI Isotta Fraschini A.120 R.C.40 Isotta Fraschini L.121 R.C.40 Isotta Fraschini Asso XII Isotta Fraschini Asso XII R. Isotta Fraschini Asso (racing) Isotta Fraschini Beta Isotta Fraschini Gamma Isotta Fraschini Delta Isotta Fraschini Zeta Isotta Fraschini Sigma Isotta Fraschini Astro 7 Isotta Fraschini Astro 14 Isotta Fraschini V.4 Isotta Fraschini V.5 Isotta Fraschini V.6 Isotta Fraschini V.7 Isotta Fraschini V.8 Isotta Fraschini V.9 Isotta Fraschini 245hp Isotta Fraschini K.14 - licence built Gnome-Rhône Mistral Major Isotta Fraschini 80T Ivchenko Source:Gunston. Ivchenko AI-4 Ivchenko AI-7 Ivchenko AI-8 Ivchenko AI-9 Ivchenko AI-10 Ivchenko AI-14 Ivchenko AI-20 Progress AI-22 Ivchenko AI-24 Ivchenko AI-25 Ivchenko AI-26 Progress AI-222 Ivchenko-Progress AI-450S Progress D-18T Progress D-27 Lotarev D-36 Lotarev D-136 Progress D-236 Progress D-436 IWL See:Pirna J Jabiru Jabiru 1600 Jabiru 2200 Jabiru 3300 Jabiru 5100 Jack & Heinz Jack & Heinz O-126 Jacobs Source:Gunston except where noted Jacobs 35 hp Jacobs B-1 Jacobs L-3 Jacobs L-4 Jacobs L-5 Jacobs L-6 Jacobs LA-1 Jacobs LA-2 Jacobs O-200 Jacobs O-240A Jacobs O-240L Jacobs O-360A (air-cooled) Jacobs O-360L (liquid-cooled) Jacobs R-755 Jacobs R-830 Jacobs R-915 Jaenson Jaenson 300hp V-8 Jalbert-Loire Jalbert-Loire 4-cyl. 160 hp Jalbert-Loire 6-cyl. 235 hp Jalbert-Loire 16-H – 16-cyl. 600 hp Jameson (Jameson Aero Engines Ltd.) 
Jameson FF-1 - 1940s horizontally opposed, four cylinder (106hp) Janowski (Jaroslaw Janowski) Janowski Saturn 500 J.A.P. Data from: J.A.P. 1909 9hp 2-cyl. J.A.P. 1909 20hp 4-cyl. J.A.P. 38hp V-8 (air-cooled) J.A.P. 45hp V-8 (water-cooled) J.A.P. 1910 40hp V-8 J.A.P. 8-cyl. Aeronca-J.A.P. J-99 Japanese rockets and Pulse-jets Type4 I-Go Model-20 (Rocket) Tokuro-1 Type 2 (Rocket) Javelin Javelin Ford 230hp conversion Jawa Jawa 1000 Jawa M-150 Jendrassik Jendrassik Cs-1 J.E.T (James Engineering Turbines Ltd) J.E.T Cobra JetBeetle JetBeetle Tarantula H90 JetBeetle Locust H150R JetBeetle Mantis H250 Jetcat Jetcat P160 Jetcat P200 Jetcat P400 Johnson Johnson Aero 75hp V-6 Johnson Aero 100hp V-8 Johnson Aero 150hp V-12 JLT Motors (Boos, Seine-Maritime, France) JLT Motors Ecoyota 82 JLT Motors Ecoyota 100 JPX JPX 4TX75 JPX D160 JPX PUL 212 JPX PUL 425 JPX D-320 Junkers Source:Kay Jumo 4 later Jumo 204 Jumo 5 later Jumo 205 Junkers L1 air-cooled in-line 6 4-stroke petrol Junkers L2 Junkers L3 Junkers L4 Junkers L5 Junkers L55 Junkers L7 Junkers L8 Junkers L88 Junkers L10 Junkers Jumo 004 Turbojet Junkers Jumo 204 Junkers Jumo 205 Junkers Jumo 206 Junkers Jumo 207 Junkers Jumo 208 Junkers Jumo 209 Junkers Jumo 210 Junkers Jumo 211 Junkers Jumo 213 Junkers Jumo 218 Junkers Jumo 222 Junkers Jumo 223 Junkers Jumo 224 Junkers Jumo 225 Junkers Jumo 109-004 Junkers Jumo 109-006 (Junkers/Heinkel 109-006) Junkers Jumo 109-012 Junkers Jumo 109-022 Junkers Mo3 diesel opposed-piston aero-engine prototype Junkers Fo2 Petrol opposed-piston 6-cyl/12piston horizontal Junkers Fo3 diesel opposed-piston aero-engine prototype Junkers Fo4 diesel opposed-piston aero-engine prototype Junkers SL1 company designation for Fo4 K Kalep (Fyodor Grigoryevich Kalep) Kalep 1911 4-cyl 2-stroke Kalep-60 Kalep-80 Kalep-100 Kawasaki Source:Gunston except where noted Kawasaki Ha9 – Licence-built BMW VI for IJAAF Kawasaki Ha40 – Licence-built Daimler-Benz DB 601A for IJAAF Kawasaki Ha-60 Kawasaki Ha140 Kawasaki 
Ha201 – twin Ha40s with common gearbox Kawasaki KAE-240 Kawasaki 440 engine. Kawasaki KJ12 Kawasaki KT5311A Kelly Kelly 200hp 2-stroke 4-cyl inline Kemp (a.k.a. Grey Eagle ) Kemp D-4 Kemp E-6 Kemp G-2 Kemp H-6 (55 hp 6IL) Kemp I-4 (35 hp 4IL) Kemp J-8 (80 hp V-8) Kemp K-2 Kemp M-2 Kemp O-101 Kemp-Henderson 27hp Ken Royce LeBlond Aircraft Engine Corporation was sold to Rearwin Airplanes in 1937 and renamed Ken-Royce. Ken-Royce 5E - LeBlond 70-5E Ken-Royce 5G - LeBlond 90-5G Ken-Royce 7F- developed from LeBlond 7DF Ken-Royce 7G Kessler Kessler 200hp Kessler 6C-400 KFM (KFM (Komet Flight Motor) Aircraft Motors Division of Italian American Motor Engineering) KFM 107 KFM 112M Khatchaturov Khatchaturov R-35 KHD Humboldt-Deutz 6 cyl. in-line diesel Klöckner-Humboldt-Deutz diesel 8 cyl. rotary DZ 700? Klöckner-Humboldt-Deutz DZ 700 Klöckner-Humboldt-Deutz DZ 710 16-cylinder horizontally opposed diesel Klöckner-Humboldt-Deutz DZ 720 32-cylinder H-block version of the 710 KHD T112 (APU) KHD T117 KHD T317 Klöckner-Humboldt-Deutz T53-L-13A Kiekhaefer Kiekhaefer O-45 Kiekhaefer V-105 Kimball Kimball Beetle K Kimball Gnat M King (Chas. B. King) King 550hp V-12 King-Bugatti King-Bugatti U-16 Kinner Source:Gunston except where noted Kinner 60 hp Kinner B-5 Kinner B-54 Kinner C-5 Kinner C-7 Kinner SC-7 Kinner K-5 Kinner O-550 Kinner O-552 Kinner R-5 Kinner R-53 Kinner R-55 Kinner R-56 Kinner R-370 Kinner R-440 Kinner R-540 Kinner R-720 Kinner R-1045-2 Kirkham Kirkham 50hp 4IL (C-4?) Kirkham 75-85hp Kirkham 110hp Kirkham 180hp 9-cyl. radial Kirkham B-4 Kirkham B-6 Kirkham B-12 Kirkham BG-6 (geared) Kirkham C-4 Kirkham K-12 Kishi Kishi 70hp V-8 Klimov Source:Gunston Klimov M-100 Klimov M-103 Klimov M-105 Klimov VK-106 Klimov VK-107 Klimov VK-108 Klimov VK-109 Klimov M-120 Klimov RD-33 Klimov RD-45 Klimov RD-500 Klimov VK-1 Klimov VK-2 Klimov VK-3 Klimov VK-5 Klimov VK-2500 Klimov VK-800 Klimov TV2-117 Klimov TV3-117 Klimov TV7-117 Knox (Knox Motors Company, Springfield Mass.) 
Knox 300hp V-12 Knox H-106 Knox R-266 Koerting Koerting 65hp V-8 Koerting 185hp V-8 Koerting 250hp V-12 Kosoku (Kosokudo Kikan KK) Kosoku KO-4 Kolesov Kolesov RD-36-51 Kolesov VD-7 Köller (Dr. Kröber und Sohn GmbH, Treuenbrietzen) Köller M3 König (Compact Radial Engines) König SC 430 König SD 570 Konrad (Oberbayerische Forschungsanstalt Dr. Konrad) Konrad 109-613 Konrad Enzian IV Raketenmotor Konrad Enzian V Raketenmotor Konrad Rheintochter R 3 Raketenmotor Körting Körting Kg IV V-8 Körting 8 SL Kossov Kossov MG-31F Kostovich (O.S. Kostovich) Kostovich 2-cyl airship engine Kostovich 80hp 8-cyl airship engine Krautter (Dipl. Ing. Willi Krautter) Krautter-Leichtflugmotor Kroeber (Doktor Kroeber & Sohn G.m.b.H.) Kroeber M4 Kruk Kruk rotary Kuznetsov Source:Gunston except where noted Kuznetsov Type 022 Kuznetsov NK-2 Kuznetsov NK-4 Kuznetsov NK-6 Kuznetsov NK-8 Kuznetsov NK-12 Kuznetsov NK-22 Kuznetsov NK-25 Kuznetsov NK-32 Kuznetsov NK-86 Kuznetsov NK-87 Kuznetsov NK-88 Kuznetsov NK-89 Kuznetsov NK-144 Kuznetsov TV-2 Kuznetsov 2TV-2F L L'Aile Volante L'Aile Volante C.C.4 Labor Labor 70hp 4-cyl in-line Lambert Engine Division (Monocoupe Corporation – Lambert Engine Division) Lambert M-5 Lambert R-266 Lambert R-270 Lamplough Lamplough 6-cyl 2-stroke rotary Lamplough 6-cyl 2-stroke axial Lancia (Lancia & Company. / Vincenzo Lancia) Lancia Tipo 4 Lancia Tipo 5 Lange Lange EA 42 Laviator Laviator 35hp 3-cyl rotary 2-stroke Laviator 50hp 6-cyl rotary 2-stroke Laviator 65hp 6-cyl rotary 2-stroke Laviator 75hp 9-cyl rotary 2-stroke Laviator 100hp 12-cyl rotary 2-stroke Laviator 80hp 6-cyl 2-stroke water-cooled radial Laviator 120hp 4IL Laviator 110hp 6IL Laviator 250hp 6IL Laviator 80hp V-8 Laviator 120hp V-8 Laviator 200hp V-8 Lawrance Lawrance A-3 Lawrance B 60 hp 3-cyl. Lawrance C-2 Lawrance J-1 Lawrance J-2 Lawrance L-2 65 hp Lawrance L-3 Lawrance L-4 a.k.a.
'Wright Gale' Lawrance L-5 Lawrance L-64 Lawrance N Lawrance N-2 40HP 2OA Lawrance R Lawrance R-1 Lawrance-Moulton A (France) Lawrance-Moulton B (200 hp V-8 USA) Lawrance 140hp 9-cyl radial Lawrance 200hp 9-cyl radial Lawrence Radiation Laboratory Tory IIA (Project Pluto) Tory IIC (Project Pluto) Le Gaucear Le Gaucear 150hp 10-cyl rotary Le Maitre et Gerard Le Maitre et Gerard 700hp V-8 Le Rhône Le Rhône 7A Le Rhône 7B Le Rhône 7B2 Le Rhône 7Z Le Rhône 9C Le Rhône 9J Le Rhône 9R Le Rhône 9Z Le Rhône 11F Le Rhône 14D Le Rhône 18E (1912) Le Rhône 18E (1917) Le Rhône 28E Le Rhône K Le Rhône L Le Rhône M Le Rhône P Le Rhône R LeBlond LeBlond was sold to Rearwin and engines continued under the Ken-Royce name. LeBlond B-4 LeBlond B-8 LeBlond 40-3 LeBlond 60-5D LeBlond 70-5DE LeBlond 75-5 LeBlond 80-5 LeBlond 85-5DF LeBlond 70-5E LeBlond 80-5F (in military use known as R-265) LeBlond 90-5F LeBlond 90-5G LeBlond 90-7 LeBlond 110-7 LeBlond 120-7 LeBlond 7D LeBlond 7DF Lee Lee 80hp Lefèrve (F. Lefèrve) Lefèrve 2-cyl. 33hp Lenape Lenape AR-3 Lenape LM-3 Papoose 3-cyl. Lenape LM-5 Brave 5-cyl. Lenape LM-7 Chief 7-cyl. Lenape LM-125 Brave (suspect should be LM-5-125) Lenape LM-365 Papoose (suspect should be LM-3-65) Lenape LM-375 Papoose (suspect should be LM-3-75) Lessner Lessner 1908 4-cyl airship engine Levavasseur Léon Levavasseur see Antoinette Levi Levi 7-cyl barrel engine Leyland Motors J. G. Parry-Thomas was the chief engineer at Leyland Motors. A single X-8 engine was built in August 1918 but failed during testing, and with the end of WWI development was abandoned. LFW LFW 0 LFW I LFW II LFW III LFW-12 X-1 LHTEC LHTEC T800 Liberty Source:Gunston except where noted Liberty L-4 Liberty L-6 Liberty L-8 Liberty L-12 Liberty L-12 double-crankshaft Liberty X-24 Ligez Ligez 3-cyl rotary Light Light Kitten 20 Light Kitten 30 Light Tiger 100 Light Tiger 125 Light Tiger Junior 50 Lilloise See:C.L.M.
Limbach Limbach L1700 Limbach L2000 Limbach L2400 Limbach L275E Limbach L550E Lincoln Rocket 29hp Lindequist (Konsortiet Överingeniör Sven Lindequists Uppfinningar – Consortium Senior Engineer Sven Lindequist's Inventions) Lindequist 1,000hp Stratospheric engine Les Long Long Harlequin Long Harlequin 933 Lockheed Lockheed XJ37/L-1000 LOM (Letecke Opravny Malesice, Praha) LOM M132 LOM M137 LOM M337 Loravia (Yutz, France) Loravia LOR 75 Lorraine-Dietrich (Société Lorraine des Anciens Établissements de Dietrich) Source:Jane's All the World's Aircraft 1938 except where noted Lorraine 3B licence-built Potez 3B? Lorraine 3D licence-built Potez 3B Lorraine 5P Ecole – 5 cyl radial Lorraine 6A – (AM) 110 hp Lorraine 6Ba - 6 cyl two-row radial 130CV Lorraine 7M Mizar – 7 cyl radial Lorraine 8A – V-8 Lorraine 8Aa Lorraine 8Ab Lorraine 8Aby Lorraine 8B – V-8 Lorraine 8Ba Lorraine 8Bb Lorraine 8Bd Lorraine 8Be Lorraine 8BI (inverted?) Lorraine 9A Lorraine 9N Algol – Type 120 9 cyl radial Lorraine Dietrich 12Cc ? Dc in error? Lorraine 12? Hibis 450 hp Lorraine 12D Lorraine 12 DOO 460 hp O-12 Lorraine 12E Courlis – W-12 450 hp Lorraine 12F Courlis – W-12 600 hp Lorraine 12H Pétrel – V-12 Lorraine 12Q Eider Lorraine 12Qo Eider Lorraine 12R Sterna – V-12 Type 111 700 hp Lorraine 12Rs Sterna – V-12 Type 111 700 hp Lorraine 12Rcr Radium – inverted V-12 with turbochargers 2,000 hp Lorraine 14A Antarès – 14 cylinder radial 500 hp Lorraine 14E – 14 cylinder radial 470 hp Lorraine 18F Sirius - Type 112 Lorraine 18F.0 Sirius Lorraine 18F.00 Sirius Lorraine 18F.100 Sirius Lorraine 18G Orion – W-18 Lorraine 18Ga Orion – W-18 Lorraine 18Gad Orion – W-18 Lorraine 18K – W-18 Lorraine 18Ka Lorraine 18Kd Lorraine 18Kdrs Lorraine 24 – W-24 1,000 hp (3 banks of 8 cylinders) Lorraine 24E Taurus – 24 cyl in-line radial (six banks of 4-inline?)
1,600 hp Lorraine P5 Lorraine AM (moteur d’Aviation Militaire (A.M.)) – derived from German 6-cyl in-line engines Lorraine Algol Junior – 230 hp Lorraine-Latécoère 8B Lorraine Diesel – built in 1932, rated at 200 hp Lorraine DM-400 Lotarev (Vladimir Lotarev) (see also Ivchenko-Progress) Lotarev D-36 Lotarev D-136 Lotarev D-236-T Lotarev DV-2 Lotarev RD-36 (lift turbofan) Loughead Loughead XL-1 LPC LPC Fang 1-KS-40 LPC Sword 3.81-KS-4090 LPC Meteor 33-KS-2800 LPC Mercury 0.765-KS-53,600 LPC Viper I-C 5.6-KS-5,400 LPC Viper II-C 3.77-KS-8,040 LPC Lance I-C 6.65-KS-38,800 LSA-Engines (LSA-Engines GmbH, Berlin, Germany) LSA-Engines LSA850 Lucas Lucas CT 3201 Lutetia (Marcel Echard / Moteurs Lutetia) Lutetia 4.C.02 V-4, 2-stroke, 1267 cc, 40-45 hp at 2800 rpm Lutetia 6-cyl radial 70 hp at 2600 rpm Lycoming Lycoming O-145 Lycoming O-160 Lycoming O-233 Lycoming IO-233 Lycoming O-235 Lycoming O-290 Lycoming O-320 Lycoming O-340 Lycoming O-350 Lycoming O-360 Lycoming IO-390 Lycoming O-435 Lycoming O-480 Lycoming O-530 Lycoming O-540 Lycoming O-541 Lycoming IO-580 Lycoming GSO-580 Lycoming SO-590 Lycoming IO-720 Lycoming O-1230 Lycoming R-500 Lycoming R-530 Lycoming R-645 Lycoming R-680 Lycoming H-2470 Lycoming XR-7755 (36cyl 7,755ci) Lycoming AGT1500 Lycoming AL55 Lycoming ALF101 Lycoming ALF502 Lycoming LF507 Lycoming LTC1 Lycoming LTC4 Lycoming LTP101 Lycoming LTS101 Lycoming PLF1A Lycoming PLF1B Lycoming F102 (ALF502) Lycoming F106 (ALF502) Lycoming F408 (Teledyne CAE 382) Lycoming J402 (Teledyne CAE 370/372/373) Lycoming T702 (PLT27) Lycoming T53 Lycoming T55 Lycoming TF40 Lyulka Source:Gunston.
Lyulka TR-1 Lyulka AL-5 Lyulka AL-7 Lyulka AL-21 Lyulka AL-31 Lyulka AL-34 Lyulka TS-31M LZ Design Front electric sustainer M M&D Flugzeugbau M&D Flugzeugbau TJ-42 MAB 4-cylinder air-cooled "fan" engine 4-cylinder vertical water cooled in-line engine MacClatchie MacClatchie X-2 Panther Macchi Macchi MB.2 – 2-cyl 20 hp at 3,000 rpm Macomber Avis Macomber Rotary Engine Company with Avis Engine Company Macomber Avis 7-cylinder axial engine M.A.N. Maschinenfabrik Augsburg-Nürnberg (MAN) Licence-built Argus As III MAN Mana V (350 hp V-10) V-10 airship engine? MAN Mana III (185 hp 6-cyl in-line) 260 hp 6-cylinder in-line - "quite similar to 160-hp Mercedes design" MAN Turbo MAN Turbo 6012 MAN Turbo 6022 Rolls-Royce/MAN Turbo RB153 Rolls-Royce/MAN Turbo RB193 Manfred Weiss See: Weiss Manly Charles M. Manly redesigned an engine built by Stephen Balzer. Manly–Balzer engine Mantovani Mantovani Citroën 2CV car engine conversion Marchetti (Marchetti Motor Patents) Marchetti A Mark (Stahlwerk Mark Flugzeugbau) Mark F.II (35 hp) Mark M.3 (40 hp) Mark M.5 (70 hp) Mark 55hp Mark 120hp Marcmotor (Macerata, Italy) Marcmotor ROS100 Marcmotor ROS125 Marcmotor ROS200 Marlin-Rockwell Marlin-Rockwell 72hp Marquardt Corporation Marquardt PJ40 pulsejet Marquardt PJ46 pulsejet Marquardt RJ30 C-20 ramjet Marquardt RJ31 C-30 Ramjet Marquardt RJ34 ramjet Marquardt RJ39 ramjet Marquardt RJ43 Marquardt RJ57 ramjet Marquardt RJ59 ramjet Marquardt MA-19 Marquardt MA-20 Marquardt MA-24 Marquardt MA-74 Marquardt MA-196 Marquardt C-20 (2x C-20s fitted to P-51 and 2x Marquardt C20-85D fitted to P-80A 44-85042) Marquardt C-30 (2x Marquardt C30-10B fitted to P-80A 44-85214) Marquardt C-48 Marquardt R-1E Marquardt R-40A Martin Martin 133? typo?
Martin 333 Martin 500 Martin 8200 (190 hp V-8) Martin L-330 Maru Maru Ka10 Masson Masson 50hp 6-cyl in-line Mathis Mathis G.2F Mathis G.4 Mathis G.4F Mathis G.4R Mathis G.7 Mathis G.7R Mathis G.8 Mathis G.8R Mathis G.14R Mathis G.14RS Mathis G.16R Mathis Vega 42 Mathis Vesta 42 Mathis 175H Mathis 2.G.60 Mathis 4.G.60 Mathis 4.GB.60 Mathis 4.GB.62 Mathis BG-20 Mathis 12.GS.DS Mathis 16.GB.21 Mawen (Mawen S.A.) Mawen 150hp rotary Mawen 350hp rotary Mawen 700hp two row rotary Max Ams (Max Ams Machine Company) Max Ams 75hp V-8 Maxim Maxim 87hp 4-cyl in-line Maximotor Makers Maximotor 50hp Maximotor 60-70hp Maximotor 70-80hp Maximotor 80-100hp Maximotor 100hp Maximotor 120hp Maximotor 150hp Maximotor A-4 (50 hp 4ILW) Maximotor A-6 (75 hp 6ILW) Maximotor A-8 (110 hp V-8) Maximotor B-6 (115 6ILW) Maximotor 70hp 4-in-line Maybach Maybach AZ Maybach DW Maybach IR Maybach BY Maybach CX Maybach HS Maybach HS D Maybach HS-Lu Maybach Mb.III Maybach Mb.IV Maybach Mb.IVa Maybach 300hp Maybach VL.I Maybach VL.II Maybach 180hp 6IL Maybach 200hp 6IL Maybach 300hp 6IL Mayo (Mayo Radiator Co) Mayo 1915 (6LW) McCulloch McCulloch MAC-101 McCulloch 104-100 McCulloch O-90 McCulloch O-100 McCulloch O-150 McCulloch 4318A O-100-1 McCulloch 4318B O-100-2 McCulloch 4318C O McCulloch 4318E YO-100-4 McCulloch TSIR-5190 McCulloch 6150 O-150-1 McCulloch 6318 O-150-2 McDonnell McDonnell PJ42 pulsejet McDowell (Geo. McDowell. Brooklyn NY.) McDowell Twin-Piston V-4 2-stroke Mead (Mead Engine Co.) Mead 50hp 4-cyl in-line Mekker Mekker Sport Menasco Sources:Gunston and Jane's. Menasco Pirate/Super Pirate Menasco Buccaneer/Super Buccaneer Menasco M-50 Menasco Unitwin 2-544 Menasco-Salmson B-2 Menasco L-365 - Military designation for Pirate Menasco XIV-2040 Menasco XH-4070 Menasco RJ37 Mengin (Établissements Pierre Mengin) Mengin B Mengin C (later 2A.01), Poinsard design Mengin G.M.H. (Genete, Mengin, and Hochet) Mengin 2A.01 Poinsard design Hochet-Mengin Mercedes See: Daimler-Benz Merkulov (Ivan A.
Merkulov) Merkulov DM-4 ramjet Métallurgique Data from: Métallurgique 32hp 4-cyl in-line Métallurgique 40hp 4-cyl in-line Métallurgique 48hp 4-cyl in-line Métallurgique 60hp 4-cyl in-line Métallurgique 90hp 4-cyl in-line Meteormotor Meteormotor 20-25hp Meteor (Meteor S.p.A. Costruzioni Aeronautiche) Meteor G 80cc Meteor Alfa 1 Meteor Alfa 1AQ Meteor Alfa 2 Meteor Alfa 2AQ Meteor Alfa 2V Meteor Alfa 3 Meteor Alfa 3AQ Meteor Alfa 4 Meteor Alfa 4V Meteor Alfa 5 Metropolitan-Vickers Metrovick F.1 Metrovick F.2 Freda Metrovick F.2/2 Metrovick F.2/3 Metrovick F.2/4 Beryl Metrovick F.3 Metrovick F.5 Metrovick F.9 Sapphire Metz (Metz Company, Waltham, Mass.) Metz 125hp rotary Michel Michel IV-AT3 Michel 4A-14 Michel RAT-3 100 hp Michel A.M. 14 MARK II Michel A.M.7 6L 200 hp Michel A.M.14 Type I 4L 100 hp Michel A.M.14 Type II Michel A.M.14 Type III Michel A.M.16 6L 40 hp Michigan Michigan 2-cyl 2-stroke rotary Michigan Rover Microturbo Microturbo TRB 13 Microturbo SG 18 Microturbo TRS 18 Microturbo TRB 19 Microturbo TRS 25 Microturbo TRI-40 Microturbo TRI 60 Microturbo TFA 66 Microturbo TRI 80 Microturbo TFA 130 Microturbo J403 Microturbo Cougar Microturbo Eclair Microturbo Eclair II Microturbo Lynx Microturbo Noelle (starter) Microturbo Emeraude (starter) Microturbo Espadon (starter) Microturbo Saphir 007 (starter) Mid-west (Mid-West Engines Limited / Diamond engines / Austro Engine) MidWest AE50 MidWest AE100 MidWest AE110 Austro Engine AE50R Austro Engine AE75R Miese Data from: Miese 50-60hp 8-cyl Miese 100hp 8-cyl radial Mikulin Mikulin AM-3M Mikulin AM-13 Mikulin AM-34 Mikulin AM-35 Mikulin AM-37 Mikulin AM-38 Mikulin AM-39 Mikulin AM-42 Mikulin M-85 Mikulin RD-3M Mikulin M-17 Mikulin M-209 Mikulin AM-TKRD-01 Mikulin-Stechkin (A.A. Mikulin & B.S. Stechkin) AMBS-1 Milwaukee Tank Milwaukee Tank V-470 Milwaukee Tank V-502 Miller Miller 22hp radial Miller (Harry A.
Miller Manufacturing Company) Miller 125hp 4-cyl in-line Miller V-12 Minié Data from: (Établissements Minié, Colombes, Seine, France) Minié 4.B0 Horus Minié 4.D Minié 4.E0 Horus Minié 4.E2 Horus Mistral Engines Mistral G-190 Mistral G-200 Mistral G-230-TS Mistral G-300 Mistral G-360-TS Mistral K-200 Mistral K-300 Mitsubishi Mitsubishi Ha-42 Mitsubishi Ha-43 Mitsubishi Kasei Mitsubishi Kinsei Mitsubishi Shinten Mitsubishi TS1/MG5 Mitsubishi Zuisei Modena Avio Engines (Rubiera, Italy) MAE 323 Monaco (Monaco Motor and Engineering Co. Ltd.) Monaco 75hp Monaco 100hp Monnett Data from: Monnett AeroVee Monnett 1600cc E-Vee Monnett 1600cc SuperVee Monnett 1700cc E-Vee Monnett 1700cc SuperVee Monnett 1835cc E-Vee Monnett 2007cc E-Vee Morehouse Morehouse 15hp Morehouse 29hp Morehouse M-42 Morehouse M-80 Mors Data from: Mors 30hp V-4 Mosler (Mosler, Inc. of Hendersonville, North Carolina) Mosler MM CB-35 Mosler MM CB-40 Mosler Red 82X Motor Sich Motor Sich MS-500V Motorav Industria Motorav 2.3 V Motorav 2.6 R Motorav 2.6 V Motorav 2.8 R Motorav 3.1 R Motorlet Motorlet M-701 Motorlet M-601 Motorlet M-602 Motorlet M-20 Motorlet AI-25/Titan/Sirius - see Ivchenko AI-25 Mozhaiskiy Mozhaisky gas fired machine MTH MTH R 422-CG MTR MTR MTR390 MTU Aero Engines MTU DB 720F/PTL6 MTU DB 721/PTL10 MTU DB 730F/PTL6 MTU DB 730H/ZTL6 MTU 6012 MTU 6022 Mudry (Moteurs Mudry-Buchoux) Mudry MB-4-80 Mudry MB-4-90 Mulag Mulag 90/113hp 6-cyl in-line Murray-Willat Murray Ajax Murray Atlas Murray-Willat 35hp 6-cyl 2-stroke rotary Murray-Willat 90hp 6-cyl 2-stroke rotary MWfly (MWfly srl, Passirana di Rho, Italy) MWfly B22 MWfly B25 N N.A.G. Source:Angle. 
NAG 40hp 4-cyl in-line NAG C.III NAG F.1 NAG F.2 NAG F.3 NAG F.4 NAG Model 301 NAG 6-cyl 135hp Nagel Nagel 444 Nagliati Nagliati V.N.V 160 hp Y-12 Nagliati 250hp 8-cyl twin4 Nakajima Nakajima Ha5 Nakajima Ha219 Nakajima Hikari Nakajima Homare Nakajima Kotobuki Nakajima Mamoru Nakajima Sakae NAMI NAMI A.M.B.20 Napier Sources: Piston engines, Lumsden, gas turbine and rocket engines, Gunston. Napier Cub Napier Culverin Napier Cutlass Napier Dagger Napier E.237 – Submission to the NGTE specification TE 10/56 Napier Eland Napier Gazelle Napier Javelin Napier Lion Napier Lioness Napier Naiad Napier Nomad Napier Scorpion Napier Double Scorpion Napier Triple Scorpion Napier Oryx Napier Rapier Napier RJTV (Ramjet Test Vehicle) Napier Sabre Napier Sea Lion (marinised Lions) Napier N.R.E. 17 Napier N.R.E. 19 Napier N.R.J. 1 Narkiewicz (Wiktor N. Narkiewicz - production at C.Z.P.S.K. (National)) Narkiewicz WN-1 Narkiewicz WN-2 Narkiewicz WN-3 Narkiewicz WN-4 Narkiewicz WN-6 Narkiewicz WN-6R Narkiewicz WN-7 Narkiewicz WN-7R Narkiewicz NP-1 Narkiewicz 2-cyl. Naskiewicz (Stanislaw Naskiewicz) Naskiewicz gas turbine National Aerospace Laboratory of Japan MITI/NAL FJR710 National National 35 N.E.C. (New Engine Co.) N.E.C. 1910 2-cyl 2-stroke N.E.C. 1910 60hp 6-cyl 2-stroke N.E.C. 40hp 4-cyl 2-stroke N.E.C. 50hp V-4 2-stroke N.E.C. 90hp 6-cyl 2-stroke N.E.C. 100hp 6-cyl 2-stroke (1912) N.E.C. 69.6hp 4-cyl 2-stroke Nelson Nelson 60hp 4-stroke Nelson 120hp 4-stroke Nelson 150hp 4-stroke Nelson H-44 Nelson H-49 Nelson H-56 Nelson H-59 Nelson H-63 Nelson O-65 Nielsen & Winther Nielsen & Winther M.A.J. Nieuport Nieuport 28hp 2-cyl opposed Nieuport 32/35hp 2-cyl opposed Nihonnainenki Nihonnainenki Semi Nippon (Nippon Jet Engine Company) Nippon J0-1 Nippon J0-3 Nippon J1-1 Nippon J3-1 Nord Nord ST.600 Sirius I Nord ST.600 Sirius II Nord ST.600 Sirius III Nord Véga Normalair Garrett NGL WAM 274 NGL WAM 342 Northrop Source:Gunston.
Northrop Model 4318F Northrop O-100 Northrop Turbodyne XT-37 Norton (Kenneth Norton / Norton-Newby Motorcycle Co.) Norton 2-cyl opposed Novus Novus 70hp 6-cyl rotary Novus 70hp 6-cyl double rotary NPO Saturn AL-31 AL-32 AL-34 AL-55 NPT NPT100 NPT109 NPT151 NPT301 NPT301 LTD NST-Maschinenbau (Niedergoersdorf, Germany) NST BS 650 Nuffield Nuffield 100hp 4HO O Oberursel Oberursel U.0 Oberursel U.I Oberursel U.II Oberursel U.III Oberursel Ur.II Oberursel Ur.III Oberursel 200hp 18-cyl rotary Oberursel 240hp V-8 Oerlikon Oerlikon 50/60hp 4-cyl opposed Oldfield Oldfield 15A Omsk Omsk TVO-100 Opel Opel Argus As III Orenda Engines Orenda Engines was formed by Avro Canada, taking over publicly funded jet engine development by Turbo Research; it later became Orenda Aerospace under Magellan. Avro Canada Chinook Avro Canada Orenda Orenda Iroquois Orenda OE600 licence-built General Electric J79 licence-built General Electric J85 Orion Orion LL-30 Orlo (Orlo Motor Company) Orlo B-4 4IL 50 hp Orlo B-6 6IL 75 hp Orlo B-8 V-8 100 hp Orlogsværftet Orlogsværftet O.V. 160 OKL (Ośrodek Konstrukcji Lotniczych WSK Okęcie) OKL LIS-2 OKL LIS-2A OKL LIS-5 OKL LIT-3 OKL TO-1 OKL NP-1 OKL WN-3 (Wiktor Narkiewicz) OKL WN-6 (Wiktor Narkiewicz) OKL WN-7 (Wiktor Narkiewicz) Otis-Pifre Otis-Pifre 6-cyl in-line Otis-Pifre 500hp V-12 Otto A.G.O. Otto A.G.O. 50 hp Otto A.G.O. 70 hp Otto A.G.O. 80/100 hp Otto A.G.O. 100/130 hp Otto 200hp 8 in-line P Packard Source:Gunston.
Packard 1A-258 1922 single Packard 1A-744 1919 V-8(60) 180 hp Packard 1A-825 1921 V-8(60) Packard 1A-905 225 hp V-12 Packard 1A-1100 1917 V-8(45) - small scale production of Liberty L-8 Packard 1A-1116 1919 V-12(60) 282 hp Packard 1A-1237 1920 V-12(60) 315 hp Packard 2A-1237 1923 V-12(60) Packard 1A-1300 1923 V-12(60) Packard 1A-1464 1924 V-12(60) 1st redesign of 1A-1300 Packard 1A-1500 1924 V-12(60) variants: Packard 2A-1500 1925 V-12(60), Packard 3A-1500 1927 V-12(60) Packard 1M-1551 test engine Packard 1A-1551 1921 IL-6 Packard 1A-1650 1919 Packard's post war Liberty Packard 1A-2025 1920 V-12(60) 540 hp Packard 1A-2200 1923 V-12(60) (made as 6 cyl.) Packard 1A-2500 1924 V-12 variants include 2A-2500, 2A-2540, 3A-2500, 4A-2500, 5A-2500, 3M-2500, 4M-2500, 5M-2500 Packard X-2775 - experimental X-24, three engines built 1A-2775, 2A-2775 (1935) Packard 1A-3000 193? H-24 "H" exp. Packard 1A-5000 1939 X-24(60) exp. Packard 2A-5000 1939 H-24 exp. Packard 3A-5000 1939 X-24(90) exp. sleeve valve Packard 1D-2270 1952 V-16(TD60) Packard DR-980 1928 R-9(D) 1st diesel to fly Packard DR-1340 1932 R-9(D) 2-cycle Packard DR-1520 1932 R-9(D) 2-cycle Packard DR-1655 1932 R-9(D) exp. diesel Packard 299 1916 V-12(60) "299" racer engine Packard 452 1917 IL-6 aero exp. Packard 905-1 1916 V-12(40) Packard 905-2 1917 V-12(40) Packard 905-3 1917 V-12(40) (1A-905) Packard IL-6 (1A-1551) Packard L-8 (1A-1100) - licence-built Liberty L-8 Packard L-12 1917 Liberty L-12 engines Packard L-12E 1918 U-12 Duplex – 2 crankshafts Packard V-1650 - inverted Liberty L-12 Packard V-1650 Merlin - licence-built Rolls-Royce Merlin Packard W-1 1921 W-18(40) Air Service-designed and Packard-built Packard W-1-A 1923 W-18(40) Air Service-designed and Packard-built Packard W-1-B 1923 W-18(40) Air Service-designed and Packard-built Packard W-2 1923 W-18(40) Air Service designed Packard XJ41 1946 Turbo-Jet Experimental turbojet. 7 were contracted Packard XJ49 1948 Turbo-Fan Experimental fan jet.
Highest-thrust jet built up to that time Palmer (Palmer Motor Company) Palmer 80hp Palons & Beuse Palons & Beuse 2-cyl opposed Panhard & Levassor (Société Panhard & Levassor) Inline engines Panhard & Levassor 4M - Dirigible engine with power outputs of 50 to 120 hp (1905-1911) Panhard & Levassor 4I - 35/40 hp (1909) Panhard & Levassor 6I - 55 hp (1910) Panhard & Levassor 6J - 65 hp (1910) V8 engines Panhard & Levassor V8 - 100 hp (1912) V12 engines Panhard & Levassor 12J - 220 hp (1915) Panhard & Levassor 12M - 500 hp (1918) V12 sleeve valve engines Panhard & Levassor VL 12L - 450 hp (1924) Panhard & Levassor VK 12L - 450 hp (1925) W16 engines Panhard & Levassor 16W - 650 hp (1920) Parker (Aero Parker Motor Sales Company) Parker 1912 3 cyl Parker 1912 6 cyl Parma Technik (Luhačovice, Zlín Region, Moravia, Czech Republic) Parma Mikron III UL Parodi (Roland Parodi) Parodi HP 60Z PBS (První Brnenská Strojírna Velká Bíteš, a.s.) PBS TJ-100 PBS Velka Bites TE 50B Pegasus Aviation Pegasus PAL 95 Per Il Volo Per Il Volo Top 80 Peterlot Peterlot 80hp 7-cyl radial Peugeot Peugeot 8A Peugeot L112 V-8 Peugeot Type 16AJ 440 hp double V-8 Peugeot L41 600 hp V-12 Peugeot Type 16X X-16 Peugeot 12L13 Pheasant Aircraft Company Pheasant Flight 4-cyl Phillips (Phillips Aviation Company) Phillips 333 (Martin 333) Phillips 500 Piaggio Data from:Italian Civil & Military Aircraft 1930–1945 and Jane's 1938 Piaggio P.II (Armstrong Siddeley Lynx) Piaggio Stella P.VII Piaggio Stella P.IX Piaggio P.X Piaggio P.XI Piaggio P.XII Piaggio P.XV Piaggio P.XVI Piaggio P.XIX Piaggio P.XXII Piaggio-Jupiter Piaggio Lycoming Pierce (Samuel S Pierce Airplane Company) Pierce B 35 hp 3RA Pieper (Pieper Motorenbau GmbH) Pieper Stamo MS 1500 Pieper Stamo 1000 Pipistrel Pipistrel E-811 Pipe Data from: Pipe 50hp V-8 Pipe 110hp V-8 Pirna Pirna 014 Platzer Platzer MA 12 P/Nissan Pobjoy Pobjoy P Pobjoy R Pobjoy Cataract Pobjoy Cascade Pobjoy Niagara Poinsard Poinsard 25hp 2-cyl Porsche Porsche 678
Porsche 702 Porsche PFM N00 Porsche PFM N01 Porsche PFM N03 Porsche PFM T03 Porsche PFM 3200 Porsche 109-005 Porsche YO-95-6 Potez Potez A-4 50 hp 4IL upright Potez 1C APU Potez 1D APU Potez 1D-3 APU Potez 2D APU Potez 2D-2 APU Potez 2D-5 APU Potez 2C APU Potez 3B Potez 4D Potez 4E Potez 6A Potez 6Aa Potez 6Ab Potez 6Ac Potez 6B Potez 6Ba Potez 6D Potez 6E Potez 6E.30 Potez 8D Potez 9A Potez 9Ab Potez 9Abr Potez 9Ac Potez 9B Potez 9Ba Potez 9Bb Potez 9Bd Potez 9C Potez 9C-01 Potez 9E Potez 9Eo Potez 12As Potez 12D (a.k.a. D.12) Potez 12D-00 Potez 12D-01 Potez 12D-03 Potez 12D-30 Pouit Pouit S-4 PowerJet PowerJet SaM146 Power Jets Power Jets WU Power Jets W.1 Power Jets W.2 Power Jets/Rover B/23 - Rolls-Royce Welland Poyer (Poyer Aircraft Engine Company) Poyer 3-40 Poyer 3-50 Praga Source:Jane's All the World's Aircraft 1938 Praga B Praga B2 Praga D Praga DH Praga DR Praga ER Praga ES Praga ESV Praga ESVKe Praga ESVR Praga FRK Praga M-197 helicopter engine Praga Doris B Praga Doris M-208B Praga E-I Praga BD 500 Pratt & Whitney Pratt & Whitney H-2600 - enlarged X-1800 Pratt & Whitney X-1800 Pratt & Whitney XH-3130 – cancelled Pratt & Whitney XH-3730 – cancelled Pratt & Whitney R-985 Wasp Junior Pratt & Whitney R-1340 Wasp Pratt & Whitney R-1535 Twin Wasp Junior Pratt & Whitney R-1690 Hornet Pratt & Whitney R-1830 Twin Wasp Pratt & Whitney R-1860 Hornet B Pratt & Whitney R-2000 Twin Wasp Pratt & Whitney R-2060 Yellow Jacket Pratt & Whitney R-2180-A Twin Hornet Pratt & Whitney R-2180-E Twin Wasp E Pratt & Whitney R-2270 Pratt & Whitney R-2800 Double Wasp Pratt & Whitney R-4360 Wasp Major Pratt & Whitney JT3 Pratt & Whitney JT3C – company designation for J57 Pratt & Whitney JT3D Pratt & Whitney JT4 – company designation for J75 Pratt & Whitney JT4A - company designation for J75 Pratt & Whitney JT4D Pratt & Whitney JT7 Pratt & Whitney JT8 Pratt & Whitney JT8D Pratt & Whitney JT9D Pratt & Whitney JT10D Pratt & Whitney JT11D Pratt & Whitney JT12A Pratt & Whitney JT18D 
Pratt & Whitney JTF10A - company designation of Pratt & Whitney TF30 Pratt & Whitney JTF16 Pratt & Whitney JTF17 Pratt & Whitney JTF22 - company designation of Pratt & Whitney F100 Pratt & Whitney JFTD12 - company designation of Pratt & Whitney T73 Pratt & Whitney JTN9 Pratt & Whitney PT1 (T32) Pratt & Whitney PT2 - company designation of Pratt & Whitney T34 Pratt & Whitney PT4 Pratt & Whitney PT5 Pratt & Whitney PW1000G Pratt & Whitney PW1120 Pratt & Whitney PW1130 Pratt & Whitney PW2000 Pratt & Whitney PW3000 Pratt & Whitney PW3005 Pratt & Whitney PW4000 Pratt & Whitney PW6000 Pratt & Whitney RL-10 Pratt & Whitney ST9 Pratt & Whitney STF300 Pratt & Whitney LR115 Pratt & Whitney F100 Pratt & Whitney F105 - US military designation of JT9D Pratt & Whitney F117 (PW2037) - military designation of Pratt & Whitney PW2000 Pratt & Whitney F119 (PW5000) Pratt & Whitney F135 Pratt & Whitney F401 - USN designation for F100 Pratt & Whitney J42 (licence built Rolls-Royce Nene) Pratt & Whitney J48 (licence built Rolls-Royce RB.44 Tay) Pratt & Whitney J52 (JT8A) Pratt & Whitney J57 Pratt & Whitney J58 Pratt & Whitney J60 - military designation of JT12 Pratt & Whitney J75 Pratt & Whitney J91 Pratt & Whitney RJ40 Ramjet Pratt & Whitney T32 - US military designation of PT1 Pratt & Whitney T34 Pratt & Whitney T45 Pratt & Whitney T48 Pratt & Whitney T52 Pratt & Whitney XT57 Pratt & Whitney T73 Pratt & Whitney T101 - military designation of Pratt & Whitney Canada PT6-45A Pratt & Whitney T400 - military designation of Pratt & Whitney Canada PT6T Pratt & Whitney TF30 Pratt & Whitney TF33 Pratt & Whitney / SNECMA TF104, TF106, TF306 - variants of Pratt & Whitney TF30 by SNECMA Pratt & Whitney/Allison PW-Allison 578DX Pratt & Whitney Canada Pratt & Whitney Canada PT6 Pratt & Whitney Canada PT6T Pratt & Whitney Canada ST6 Pratt & Whitney Canada JT15D Pratt & Whitney Canada PW100 Pratt & Whitney Canada PW200 Pratt & Whitney Canada PW300 Pratt & Whitney Canada PW500 Pratt & Whitney Canada
PW600 Pratt & Whitney Canada PW800 Pratt & Whitney Canada T74 Pratt & Whitney Canada T101 Pratt & Whitney Canada T400 Pratt & Whitney Rzeszów Pratt & Whitney Rzeszów PZL-10 Preceptor Preceptor 1/2 VW Preceptor 1600cc Preceptor Gold 1835 Preceptor Gold 2074 Preceptor 2180cc Price Induction DGEN Primi-Berthand Primi-Berthand 4-cyl in-line 2-stroke Pulch (Otto Pulch) Pulch 003 Pulch 3-cyl. radial Pulsar Pulsar Aeromaxx 100 PZI (Państwowe Zakłady Inżynieryjne - National Engineering Works) P.Z. Inż. Junior 120 hp P.Z. Inż. Major P.Z. Inż. Minor PZL (PZL Państwowe Zakłady Lotnicze) PZL Rzeszów (PZL Rzeszów) PZL Rzeszów SO-1 PZL Rzeszów SO-3 PZL-Wytwórnia Silników PZL GR.760 PZL GR.1620-A PZL GR.1620-B PZL-3 - Ivchenko AI-26 PZL-10 PZL GTD-350 - Klimov GTD-350 PZL-Kalisz ASz-61R PZL ASz-62 - Shvetsov ASh-62 PZL-F 2A - Franklin 2 series PZL-F 4A - licence built Franklin Engine Company PZL-F 6A - licence built Franklin Engine Company PZL-F 6V - licence built Franklin Engine Company PZL-65KM PZL K-15 Q Quick Air Motors Co (Quick Air Motors, Wichita KS.) Quick Super Rhone - conversion of 80hp Le Rhône 9C rotary engine to radial. 
Quick 180hp R Radne Motor AB Radne Raket 120 Ranger Ranger Engines were a division of Fairchild Aircraft Ranger 6-370 Ranger 6-375 Ranger 6-390 Ranger 6-410 Ranger L-440 (company designation 6-440) Ranger V-770 Ranger V-880 Ranger XV-920 Ranger XH-1850 (not actually an H - a double 150° V - two separate crankshafts linked by a gearbox) Rapp Rapp Motorenwerke became BMW in 1917 Rapp 100 hp Rapp 125/145 hp Rapp Rp III Rapp 200 hp Rasmussen (Hans L Rasmussen) Rasmussen 65hp Rateau Rateau GTS.65 Rateau A.65 gas turbine Rateau SRA-01 Savoie Rateau SRA-101 10-stage axial compressor Rateau SRA-301 16-stage axial compressor Rausenberger Rausenberger A-8 45 hp V-8 Rausenberger B-8 75 hp V-8 Rausenberger C-12 150 hp V-12 Rausenberger D-23 250 hp V-12 Rausenberger E-6 150 hp 6IL Rausenberger 500hp Raven Redrives Raven 1000 UL Raven 1300 SVS Turbo Raven 1600 SV RBVZ RBVZ-6 (V.V. Kireev) MRB-6 (Igor Sikorsky) Reaction Motors Reaction Motors LR2 Reaction Motors LR6 Reaction Motors LR8 Reaction Motors LR10 Reaction Motors LR11 Reaction Motors LR22 Reaction Motors LR26 Reaction Motors LR30 Reaction Motors LR32 Reaction Motors LR33 Reaction Motors LR34 Reaction Motors LR35 Reaction Motors LR39 Reaction Motors LR40 Reaction Motors LR44 Guardian Reaction Motors LR48 Reaction Motors LR99 Reaction Motors 6000C4 Reaction Motors ROR Reaction Motors Patriot Reaction Motors TU205 Rearwin Rearwin 1909 30-45hp Rearwin 1909 40-60hp Rearwin 1910 50-75hp Rearwin 1911 80-90hp Rebus Rebus 50hp 4-cyl Rectimo (Rectimo Aviation SA) / (Rectimo-Savoie Aviation) Rectimo 4 AR 1200 Rectimo 4 AR 1600 RED RED Aircraft GmbH RED A03 - V12 four-stroke diesel engine Redrup Redrup 1910 50hp 10-cyl contra-rotating rotary Redrup 1914 150hp 7-cyl radial Redrup 5-cyl barrel engine Redrup Fury (barrel engine built by Aero Syndicate Ltd.) 
Reggiane Reggiane Re 101 R.C.50 I (sometimes designated Re L 101 R.C.50 I) Reggiane Re 102 R.C.50 I (inverted W-18) Reggiane Re 103 R.C.40 I (inverted W-18) Reggiane Re 103 R.C.50 I (inverted W-18) Reggiane Re 103 R.C.57 I (inverted W-18) Reggiane Re 103 R.C.48 (inverted W-18) Reggiane Re 104 R.C.38 (V-12 derived from the Isotta Fraschini Asso L.121 R.C.40) Reggiane Re 105 R.C.100 I (inverted W-18) Reggiane H-24 Régnier Régnier R1 Régnier 2 Régnier 4B (derived from de Havilland Gipsy) Régnier 4D.2 Régnier 4E.0 Régnier 4F.0 Régnier 4JO Régnier 4KO Régnier 4LO Régnier 4L Régnier 4R Régnier 6B Régnier 6C Régnier 6GO Régnier 6R Régnier 6RS Régnier R161-01 Régnier Martinet Régnier 12Hoo Renard (Société anonyme des avions et moteurs Renard / Alfred Renard, Belgium) Renard Type 7 7RA Renard Type 100 5RA Renard Type 120 5RA Renard Type 200 9RA Renard Type 400 18RA (twin-row type 200) Renard Renard y Krebs Renault Note: some of the early Renaults seem to have oversquare cylinders and may be listed with bore and stroke transposed below. 
Renault 38.5hp 4-cyl in-line Renault 42.5hp 4-cyl in-line Renault 25/30hp 4-cyl in-line Renault 35-40hp V-4 Renault 35hp V-8 Renault 35hp V-8 Renault 45hp V-8 Renault 50hp V-8 Renault 50.5hp V-8 Renault 60hp V-8 Renault 70hp Type WB Renault 70hp Type WC Renault 75hp V-8 Renault 80hp Type WS Renault 90hp V-8 Renault 100hp V-8 Renault 130hp V-8 Renault 90hp V-12 12D Renault 100hp V-12 Renault 120hp V-12 Renault 138hp V-12 Renault 190hp V-12 Renault 200hp V-12 Renault 220hp V-12 12E Renault 265hp V-12 Renault 300hp V-12 12F Renault 320hp V-12 12Fe Renault 38.5hp 4-cyl in-line water-cooled Renault 42.5hp 4-cyl in-line water-cooled airship engine Renault 7A 7 radial Renault 8A V-8 Renault 8Aa V-8 Renault 8Ab V-8 Renault 9A Renault 4B 25 hp V-4 1910 Renault 8B V-8 Renault 8C V-8 Renault 8Ca V8 Renault 9C Renault 9Ca 9 radial Renault 12D Renault 12Da Renault 12Db V12 Renault 12Dc V12 Renault 12Drs V12 Renault 12E V12 Renault 12Eb Renault 12Ec V12 Renault 9F Renault 9Fas 9 radial Renault 12F Renault 12Fa V12 Renault 12Fb V12 Renault 12Fc V12 Renault 12Fe V12 Renault 12Fex V-12 Renault 14Fas 14 radial Renault 8G to V8 Renault 12H Renault 12Ha V12 Renault 12Hd V12 Renault 12He V12 Renault 12Hg V12 Renault 12J Renault 12Ja V12 Renault 12Jb V12 Renault 12Jc V12 Renault 18J Renault 18Jbr W18 Renault 12K (aka 450 hp and 500 hp) Renault 12K1? Renault 12Ka Renault 12Kb V12 Renault 12Kd Renault 12Ke V12 Renault 12Kg V12 Renault 12M V12 Renault 12Ma Renault 12N Renault 12Ncr Renault 12O air-cooled V-12 inverted Renault 4P Renault 6P Renault 9P 9 radial (aka 250 hp air-cooled engine) Renault 9Pa Renault 6Q Renault 12R air-cooled V-12 inverted Renault 12S V-12 inverted Renault 14T Renault 12T V-12 inverted Renault Bengali 4 Renault Bengali 6 Renault Type WB Renault Type WC Renault Type WS Renault Moteur Coupe Deutsch 6 inline (109.75x140), turbocharged Renault 438 (Coupe Deutsch) 180 hp 6 in-line Renault 446 450 hp V-12? 
Renault 454 220 hp 6 in-line Renault 456 300 hp 6 in-line Renault 468 730 hp inverted V-12 Renault 626 800 hp inverted V-8? Renault 8? 200 hp 8 cyl in-line water-cooled R.E.P. R.E.P. 20/24hp 5-cyl. R.E.P. 30/34hp 7-cyl. R.E.P. 95hp 7-cyl. R.E.P. 40/48hp 10-cyl. R.E.P. 60hp 14-cyl. R.E.P. 60hp 5-cyl fan R.E.P. 50hp 5-cyl fan R.E.P. 75hp 6-cyl R.E.P. 60hp 7-cyl R.E.P. 85hp 7-cyl radial Revmaster Revmaster R-800 2cyl 27 hp (Citroën 2CV) Revmaster R-1600D VW Revmaster R-1600S Revmaster R-1831D Revmaster R-1831S Revmaster R-2100D Revmaster R-2100D Turbo 70 hp at 3,200 rpm Revmaster R-2100S 65 hp at 3,200 rpm Revmaster R-2300 Revmaster R-3000D 110 hp at 3,200 rpm Rex (Flugmaschine Rex Gesellschaft G.m.b.H.) Rex rotary engine RFB RFB SG 85 RFB SG 95 Rheem Rheem S-10 axial Rheinische Rheinische 35hp 3-cyl fan Rheinische 50/60hp 5-cyl radial Rheinische 70hp 4-cyl in-line Rheinische 100hp 6-cyl in-line Rheinmetall-Borsig Rheinmetall 109-502 Rheinmetall 109-505 Rheinmetall 109-515 rocket (solid fuel) Rheinmetall Rheintochter R 1 first stage Rheinmetall Rheintochter R 1 second stage Rheinmetall Rheintochter R 3 first stage Rhenania (Rhenania Motorenwerke) Rhenania 11-cyl. rotary engine Ricardo Ricardo-Burt S55/4 Ricardo-Halford-Armstrong R.H.A. Richard & Hering (Rex-Simplex Automobilwerke) Richard & Hering engines Richardson (Archibald and Mervyn, Sydney Australia) Richardson rotary Righter Manufacturing Righter O-15 Righter O-45 Roberts (Roberts Motor Company / E.W. Roberts, Sandusky, Ohio) Roberts 50hp 4-cyl in-line Roberts 75hp 6-cyl in-line Roberts 4-X. Roberts 6-X 100 hp Roberts 6-XX 200 hp Roberts 6-Z Roberts E-12 350 hp Robinson (Grinnell Aeroplane Co. / William C. 
Robinson) Robinson 60hp Robinson 100hp Robinson Robinson R-13 Roché (Jean A Roché) Roché L-267 Rocket Propulsion Establishment RPE Gamma Rocketdyne Rocketdyne 16NS-1,000 Rocketdyne AR1 Rocketdyne AR2 Rocketdyne LR36 (AR1) Rocketdyne LR42 (AR2) Rocketdyne LR64 Rocketdyne LR79 Rocketdyne LR89 Rocketdyne LR101 Rocketdyne LR105 Rocketdyne Aeolus Rocketdyne A-7 Redstone Rocketdyne E-1 Rocketdyne F-1 (RP-1/LOX) Saturn V. Rocketdyne H-1 (RP-1/LOX) Saturn I, Saturn IB, Jupiter, and some Deltas Rocketdyne J-2 (LH2/LOX) Saturn V and Saturn IB. Rocketdyne M-34 Rocketdyne MA-2 Rocketdyne MA-3 Rocketdyne MB-3 Rocketdyne MB-93 Rocketdyne P-4 Rocketdyne RS-25 (LH2/LOX) Used by the Space Shuttle Rocketdyne RS-27A (RP-1/LOX) Used by the Delta II/III and Atlas ICBM Rocketdyne RS-68 (LH2/LOX) Used by the Delta IV Heavy core stage Rocketdyne Kiwi Nuclear rocket engine Rocketdyne Megaboom modular sled rocket Rocketdyne Vernier engine Atlas, some Thor with MA-2 & MB-3 Rocky Mountain Rocky Mountain Pegasus Rollason Rollason Ardem RTW Rollason Ardem 4 CO2 FH mod Rolls-Royce Limited Sources: piston engines, Lumsden; gas turbine and rocket engines, Gunston. Note: For alternative 'RB' gas turbine designations please see the Rolls-Royce aero engine template. 
Rolls-Royce 190hp Rolls-Royce 250hp Rolls-Royce Avon Rolls-Royce Bristol Olympus Rolls-Royce Buzzard Rolls-Royce Clyde Rolls-Royce Condor Rolls-Royce Condor diesel Rolls-Royce Conway Rolls-Royce Crecy Rolls-Royce Dart Rolls-Royce Derwent Rolls-Royce Eagle (H-24) Rolls-Royce Eagle (V-12) Rolls-Royce Eagle (X-16) Rolls-Royce Exe Rolls-Royce Falcon Rolls-Royce Gem Rolls-Royce Gnome Rolls-Royce Goshawk Rolls-Royce Griffon Rolls-Royce Hawk Rolls-Royce Kestrel Rolls-Royce Merlin Rolls-Royce Nene Rolls-Royce Olympus Rolls-Royce Pegasus Rolls-Royce Pennine Rolls-Royce Peregrine Rolls-Royce R Rolls-Royce RB.44 Tay Rolls-Royce RB.50 Trent Rolls-Royce RB.106 Rolls-Royce RB.108 Rolls-Royce RB.141 Medway Rolls-Royce RB.145 Rolls-Royce/MAN Turbo RB153 Rolls-Royce RB.162 Rolls-Royce RB.175 Rolls-Royce RB.181 Rolls-Royce/MAN Turbo RB193 Rolls-Royce RB.203 Trent Rolls-Royce RB.207 Rolls-Royce RB211 Rolls-Royce Soar Rolls-Royce Spey Rolls-Royce Tweed Rolls-Royce Tyne Rolls-Royce Viper Rolls-Royce Vulture Rolls-Royce Welland Rolls-Royce/Continental C90 Rolls-Royce/Continental O-200 Rolls-Royce/Continental O-240 Rolls-Royce/Continental O-300 Rolls-Royce/Continental GIO-470 Rolls-Royce/Continental IO-520 Rolls-Royce RZ.2 Rolls-Royce RZ.12 Rolls-Royce Holdings Note: For alternative 'RB' gas turbine designations please see the Rolls-Royce aero engine template. 
Rolls-Royce Trent Rolls-Royce AE 1107C-Liberty Rolls-Royce AE 2100 Rolls-Royce AE 3007 Rolls-Royce AE 3010 Rolls-Royce AE 3012 Rolls-Royce BR700 Rolls-Royce BR701 Rolls-Royce BR710 Rolls-Royce BR715 Rolls-Royce RB.183 Tay Rolls-Royce RB.200 Rolls-Royce RB.202 Rolls-Royce RB.203 Trent Rolls-Royce RB.207 Rolls-Royce RB.213 Rolls-Royce RB.220 Rolls-Royce RB401 Rolls-Royce 250 - Allison Model 250 Rolls-Royce RR300 Rolls-Royce RR500 Rolls-Royce 501 Rolls-Royce F113 - (Spey Mk.511) Rolls-Royce F126 - (Tay Mk.611 / 661) Rolls-Royce F137 (AE3007H) Rolls-Royce F402 - (Rolls-Royce Pegasus) Rolls-Royce J99 Rolls-Royce XV99-RA-1 Rolls-Royce T56 (T501-D) Rolls-Royce T68 Rolls-Royce T406 Rolls-Royce Turbomeca Rolls-Royce Turbomeca Adour Rolls-Royce Turbomeca RTM322 Rolls-Royce/SNECMA Rolls-Royce/SNECMA Olympus 593 Rolls-Royce/SNECMA M45H Rossel-Peugeot (Frédéric Rossel and the Peugeot brothers) Rossel-Peugeot 100hp 4-cyl in-line Rossel-Peugeot 30hp 7-cyl rotary Rossel-Peugeot 40hp 7-cyl rotary Rossel-Peugeot 50hp 7-cyl rotary Rotax Rotax 185 Rotax 277 Rotax 377 Rotax 447 Rotax 462 Rotax 503 Rotax 508UL Rotax 532 Rotax 535 Rotax 582 Rotax 642 Rotax 618 Rotax 804 Rotax 912 Rotax 914 Rotax 915 iS Rotec Rotec R2800 Rotec R3600 Rotex Electric Rotex Electric REB 20 Rotex Electric REB 30 Rotex Electric REB 50 Rotex Electric REB 90 Rotex Electric REG 20 Rotex Electric REG 30 Rotex Electric RET 30 Rotex Electric RET 60 Rotex Electric REX 30 Rotex Electric REX 50 Rotex Electric REX 90 RotorWay RotorWay RI-162F RotorWay RW-100 RotorWay RW-133 RotorWay RW-145 RotorWay RW-152 Rotron Rotron RT300 Rotron RT600 Rover (Rover Company / Rover Gas Turbines Ltd.) Rover W.2B Rover Marton Rover Moreton Rover Napton Rover Wolston Rover T.P.90 Rover/Lucas TJ125 (CT3201) Rover 1S/60 Rover 2S/150A Rover 748 Rover 801 Rover TJ-125 Royal Aircraft Establishment RAE 21 RAE 22 Royal Aircraft Factory RAF 1 RAF 2 RAF 3 RAF 4 RAF 5 RAF 7 RAF 8 RRJAEL (Rolls-Royce and Japanese Aero-engines Ltd.) 
RRJAEL RJ.500 Rumpler Rumpler Aeolus Ruston-Proctor Ruston-Proctor 200hp 6-stroke rotary (6-cyl 2-stroke?) Ryan-Siemens (Ryan Aeronautical Corp/Siemens-Halske) Ryan-Siemens 5 (Sh-13) Ryan-Siemens 7 (Sh-14) Ryan-Siemens 9 (Sh-12) Ryan-Siemens Sh-14 Rybinsk Motor Factory DN-200 Rybinsk RD-36-35 Rybinsk RD-38 S SACMA (Guy Negre) SACMA 100 SACMA 120 SACMA 150 SACMA 180 SACMA 240 Safran Helicopter Engines Safran Arrano Safran Aneto SAI Ambrosini Ambrosini P-25 – 2-cyl. horizontally opposed Salmson Salmson air-cooled aero-engines Salmson 3A, 3Ad Salmson 5A, 5Ac, 5Ap, 5Aq Salmson 6A, 6Ad, 6Af Salmson 6TE, 6TE.S Salmson 7A, 7AC, 7ACa, 7Aq Salmson 7M Salmson 7O, 7Om Salmson 9AB, 9ABa, 9ABc Salmson 9AC Salmson 9AD Salmson 9AE, 9AEr, 9AErs Salmson 9NA, 9NAs, 9NC, 9ND, 9NE, 9NH Salmson 11B Salmson 12C W-12? Salmson 12V, 12Vars - V-12 Salmson water-cooled aero-engines Salmson A - 2x7-cylinder barrel engine, 1 built Salmson B - 2x7-cylinder barrel engine, 1 built Salmson C - 2x7-cylinder barrel engine, 1 built Salmson E - 2x9-cylinder barrel engine, 1 built Salmson F - 2x9-cylinder barrel engine, 1 built Salmson G - 2x7-cylinder barrel engine, 1 built Salmson K - 2x7-cylinder barrel engine, 1 built Salmson A.7 Salmson A.9 Salmson 2A.9 2-row radial engine Salmson B.9 water-cooled radial engine Salmson C.9 water-cooled radial engine Salmson M.9 water-cooled radial engine Salmson P.9 water-cooled radial engine Salmson R.9 water-cooled radial engine Salmson M.7 water-cooled radial engine Salmson 2M.7 water-cooled 2-row radial engine Salmson 9.Z, 9.Za, 9.Zc, 9.Zm Salmson 18-cylinder in-line radial engines Salmson 18Z (1919) 9-bank water-cooled in-line radial 2 x 9Z on common 2-throw crankshaft Salmson 18AB (1920s) 9-bank air-cooled in-line radial Salmson 18Cm, 18Cma, 18Cmb - (late 20s early 30s) 9-bank water-cooled (air-cooled heads) in-line radial Salmson-Szydlowski SH.18 – 18-cyl 2-stroke radial diesel engine (nine banks of two in-line) Licence-built Argus As 10 - as Salmson 
8As.00, 8As.04 Saroléa Saroléa V-4 Saroléa Albatros 30 hp 2HO Saroléa Aiglon Saroléa Vautour 32 hp 2HO Saroléa Epervier 25 hp 2HO S.A.N.A. S.A.N.A. 700hp Saunders-Roe Saunders-Roe 45 lbf pulse-jet Saunders-Roe 120 lbf pulse-jet Sauer Sauer S 1800 Sauer S 1800 UL Sauer S 1900 UL Sauer S 2100 Sauer S 2100 UL Sauer S 2200 UL Sauer S 2400 UL Sauer S 2500 Sauer S 2500 UL Sauer S 2700 UL Saurer Saurer GT-15 Saurer YS-2 Saurer YS-3 Saurer YS-4 Scania-Vabis Scania-Vabis PD Schliha (Schlüpmannsche Industrie und Handelsgesellschaft) Schliha 36hp 2-cyl Schliha F-1200 Schmidding Schmidding 109-505 rocket (solid fuel) Schmidding 109-513 Schmidding 109-533 Schmidding 109-543 Schmidding 109-553 Schmidding 109-563 Schmidding 109-573 Schmidding 109-593 Schmidding 109-603 Schroeter Schroeter 89hp 6-cyl in-line Schwade (Otto Schwade GmbH, Erfurt, Germany) Schwade Stahlherz engine SCI Aviation R6-80 R6-150 B4-160 Scott Scott A2S Flying Squirrel Scott 40hp 2-stroke Scott 1939 2-stroke Scott 1950 2-stroke V4 Security (Security Aircraft Corporation) Security S-5-120 Sega Sega trunnion radial engine SELA (Société d'Etude pour la Locomotion Aérienne [SELA]) SELA V-8 Seld (Seld-Kompressorbau G.m.b.H.) 
Seld F2 SEPR SEPR 9 SEPR 16 SEPR 24 SEPR 25 SEPR 35 SEPR 44 SEPR 50 SEPR 55 SEPR 57 SEPR 63 SEPR 65 SEPR 66 SEPR 73 SEPR 732 SEPR 734 SEPR 7341 SEPR 737 SEPR 738 SEPR 739 (Stromboli) SEPR 78 SEPR 81A SEPR 167 SEPR 178 SEPR 189 SEPR 192 SEPR 200 (Tramontane) SEPR 201 SEPR 202 SEPR 2020 SEPR 251 SEPR 481 SEPR 504 SEPR 505 SEPR 5051 SEPR 5052 SEPR 50531 SEPR 5054 SEPR 631 SEPR 683 SEPR 684 SEPR 685 SEPR 6854 SEPR 686 SEPR 703 SEPR 705 SEPR 706 SEPR 740 SEPR 841 SEPR 844 SEPR Topaze SEPR Diamante SEPR C2 Sergant Sergant A SERMEL SERMEL TRS 12 SERMEL TRS 18 SERMEL TRS 25 SFFA (Société Française de Fabrication Aéronautique, France) SFFA Type A 100 hp 7-cyl SFFA Type B 45 hp 3-cyl SFECMAS SFECMAS Ars 600 SFECMAS Ars 900 SFECMAS 12H SFECMAS 12K Shenyang Shenyang PF-1 Shenyang Aircraft Development Office PF-1A Shenyang WP-5 Shenyang WP-6 Shenyang WP-7 Shenyang WP-14 ("Kunlun") Shenyang WS-5 Shenyang WS-6 Shenyang WS-8 Shenyang WS-10 Shimadzu Shimadzu 80hp 9-cyl rotary Shimadzu 90hp V-8 Shvetsov Data from: Russian Piston Aero Engines Shvetsov M-11 Shvetsov M-3 Shvetsov M-25 Shvetsov M-62 Shvetsov M-63 Shvetsov M-64 Shvetsov M-65 Shvetsov M-70 Shvetsov M-71 Shvetsov M-72 Shvetsov M-80 Shvetsov M-81 Shvetsov M-82 Shvetsov ASh-2 Shvetsov ASh-3 Shvetsov ASh-4 Shvetsov ASh-21 Shvetsov ASh-62 Shvetsov ASh-72 (M-72?) Shvetsov ASh-73 Shvetsov ASh-82 Shvetsov ASh-83 Shvetsov ASh-84 Shvetsov ASh-90 Shvetsov ASh-93 S.H.K. S.H.K. 70hp 7-cyl rotary S.H.K. 140hp 14-cyl rotary S.H.K. 90hp 7-cyl rotary S.H.K. 
180hp 14-cyl rotary Siddeley-Deasy Siddeley Ounce Siddeley Pacific Siddeley Puma Siddeley Tiger Siemens Siemens SP90G Siemens SP260D Siemens-Halske Siemens-Halske 100PS 9-cyl rotary Siemens VI Siemens-Halske Sh.0 Siemens-Halske Sh.I Siemens-Halske Sh.II Siemens-Halske Sh.III Siemens-Halske Sh 4 Siemens-Halske Sh 5 Siemens-Halske Sh 6 Siemens-Halske Sh 7 Siemens-Halske Sh 10 Siemens-Halske Sh 11 Siemens-Halske Sh 12 Siemens-Halske Sh 13 Siemens-Halske Sh 14 Siemens-Halske Sh 15 Siemens-Bramo Sh 20 Siemens-Bramo Sh 21 Siemens-Bramo Sh 22 Siemens-Bramo Sh 25 Siemens-Bramo Sh 28 Siemens-Bramo Sh 29 Siemens Bramo SAM 22B Siemens Bramo 314 Siemens Bramo 322 Siemens Bramo 323 Fafnir Silnik Silnik M 11 Silnik Sh 14 Simms Simms 51hp V-6 Simonini Racing Simonini 200cc Simonini Mini 2 Evo Simonini Mini 2 Plus Simonini Mini 3 Simonini Mini 4 Simonini Victor 1 Super Simonini Victor 2 Simonini Victor 2 Plus Simonini Victor 2 Super Škoda Skoda G-594 Czarny Piotruś Skoda L Skoda Lr Skoda S.14 Skoda S.20 Skoda Hispano-Suiza W-12 Skymotors Skymotors 70 Skymotors 70A Smallbone (Harry Eales Smallbone) Smallbone 4-cyl wobble-plate axial piston engine Smalley (General Machinery Co) Smalley Aero SMA Engines SMA SR305-230 SMA SR460 Smith Smith Static Smith 300 hp radial SMPMC (South Motive Power and Machinery Complex, prev. Zhuzhou Aeroengine Factory) SMPMC HS-5 - Chinese production of Shvetsov ASh-62 SMPMC HS-6 - Chinese production of Ivchenko AI-14 SMPMC WZ-8 - Chinese production of Turbomeca Arriel SMPMC WZ-9 SMPMC WZ-16 SNCAN SNCAN Ars 600 SNCAN Ars 900 SNCAN Pulse-jet SNECMA (Société nationale d'études et de construction de moteurs d'aviation) formed by nationalisation of Gnome et Rhône in 1945. On French engine designations even sub-series numbers (for example Gnome-Rhône 14N-68) rotated anti-clockwise (LH rotation) and were generally fitted on the starboard side, odd numbers (for example Gnome-Rhône 14N-69) rotated clockwise (RH rotation) and were fitted on the port side. 
SNECMA Régnier 4L SNECMA 12S/12T - post war Argus As 411 production SNECMA-GR 14M - Gnome-Rhône 14M SNECMA-GR 14N - Gnome-Rhône 14N SNECMA 14NC Diesel 1945 1,015 hp SNECMA 14R SNECMA 14U 1948 2,200 hp (14R-1000) SNECMA 14X Super Mars 1949 850 hp SNECMA 14X-02 SNECMA 14X-04 SNECMA 14X-H SNECMA 28T 1945 3,500 hp SNECMA 32HL 1947 4,000 hp SNECMA 36T 1948 4,150 hp SNECMA 42T 1946 5,000 hp SNECMA M26 SNECMA M28 SNECMA M45/Mars Rolls-Royce/SNECMA M45H SNECMA Turbomeca Larzac (M49) SNECMA M53 SNECMA M88 SNECMA Atar 101 SNECMA Atar 8 SNECMA Atar 9 SNECMA Hercules - Bristol Hercules Snecma Silvercrest SNECMA-BMW 132Z SNECMA / Pratt & Whitney TF104 SNECMA / Pratt & Whitney TF106 SNECMA / Pratt & Whitney TF306 SNECMA-Renault 4P SNECMA-Renault 6Q SNECMA Hispano 12B 1950 2,200 hp SNECMA Hispano 12Y 1947 900 hp SNECMA Hispano 12Z SNECMA Super ATAR SNECMA R.104 Vulcain SNECMA R.105 Vesta SNECMA Escopette SNECMA Tromblon SNECMA Ecrevisse Type A SNECMA Ecrevisse Type B SNECMA AS.11 SNECMA S.402 A.3 SNECMA S.407 A.2 SNECMA TA-1000 SNECMA TB-1000 SNCM (Société Nationale de Constructions de Moteurs - Lorraine post 1936) Lorraine Type 120 Algol Lorraine Type 111 Sterna Lorraine Type 112 Sirius SOCEMA (Société de Construction et d'Équipements Mécaniques pour l'Aviation) SOCEMA TGA 1 SOCEMA TG 1008 SOCEMA TGAR 1008 SOCEMA TP.1 SOCEMA TP.2 Sodemo Sodemo V2-1.0 Sodemo V2-1.2 Solar Solar PJ32 pulse-jet Solar T45 (Mars 50 hp gas turbine) Solar T62 Titan Solar T66 free turbine Titan Solar T-150 Solar Centaur 40 Solar Centaur 50 Solar Jupiter (500 hp gas turbine) Solar Mars 90 Solar Mars 100 Solar Mercury 50 Solar Saturn Solar Saturn 10 Solar Saturn 20 Solar Taurus 60 Solar Taurus 65 Solar Taurus 70 Solar Titan 130 Solar Titan 250 Solar A-103B (early detachable afterburner for J34) Solar AAP-80 Solar M-80 Solar MA-1 (Mars) Solar T-41M-1 Solar T-41M-2 Solar T-41M-5 Solar T-41M-6 Solar T-45M-1 (Mars) Solar T-45M-2 Solar T-45M-7 Solar T-300J-2 Solar T-520J Solar T-522J Solo (Solo Kleinmotoren 
GmbH) Solo 560, also known as the Hirth F-10, used in the Scheibe SF-24 Motorspatz Solo 2350, widely used in motor-gliders Solo 2625 01 Solo 2625 02, used in the Glaser-Dirks DG-500, Schempp-Hirth Ventus-2, Sportinė Aviacija LAK-20 etc. Solo 2625 02i, a fuel-injected version used in the Schempp-Hirth Arcus and Schempp-Hirth Quintus self-launching gliders Soloviev Source: Gunston. Soloviev D-15 Soloviev D-20 Soloviev D-25V (TB-2BM) Soloviev D-30 Soloviev D-30K (completely revised) Soloviev D-90A Soloy (Soloy Conversions / Soloy Dual Pak Inc.) Soloy Dual Pac Soloy Turbine Pac Soverini (Soverini Freres et Cie) Soverini-Echard 4D Soverini-Echard 4DR Soviet Union experimental engines AD-1 (diesel engine) AD-3 (diesel engine) AD-5 (diesel engine) FED-8 (diesel engine) MB-100 (A.M. Dobrotvorskiy) MB-102 (A.M. Dobrotvorskiy) MSK (diesel engine) AN-1 (diesel engine) AN-1A (diesel engine) AN-1R (diesel engine) (geared) AN-1RTK (diesel engine) (geared, turbo-supercharged) AN-5 (diesel engine) (N - Neftyanoy - of crude oil type - 24-cyl rhombic opposed piston) AN-20 (diesel engine) (24-cyl rhombic opposed piston) BD-2A (diesel engine) M-1 (aero-engine) (V-12 a.k.a. M-116 - S.D. Kolosov) M-5-400 M-9 (L.I. Starostin - swashplate engine) M-10 (diesel engine) (5-cyl radial) M-16 (aero-engine) (4-cyl horizontally opposed - S.D. Kolosov) M-20 (diesel engine) (48-cyl rhombic opposed piston) M-30 (diesel engine) M-31 (diesel engine) M-35 (diesel engine) M-40 (diesel engine) M-47 (aero-engine) - fitted to Ilyushin Il-20 M-50R (diesel engine) (marine rhombic opposed piston) M-52 (diesel engine) M-87D (diesel engine) M-116 (aero-engine) (V-12 a.k.a. M-1 - S.D. Kolosov) M-127 (X-24 conrod free) M-127K (X-24 conrod free) M-130 (aircraft engine) (H-24) M-224 (diesel engine) M-501 (diesel engine) MB-4 (X-4 MB - O Motor Besshatunniy - con-rod free engine - S.S. Balandin) MB-4b (X-4 MB - O Motor Besshatunniy - con-rod free engine - S.S. 
Balandin) MB-8 (X-8 MB - O Motor Besshatunniy - con-rod free engine - S.S. Balandin) MB-8b (X-8 MB - O Motor Besshatunniy - con-rod free engine - S.S. Balandin) MF-45Sh (M-47) D-11 (diesel engine) (5-cyl radial based on the M-11) N-1 (diesel engine) (N - Neftyanoy - of crude oil type) N-2 (diesel engine) N-3 (diesel engine) N-4 (diesel engine) N-5 (diesel engine) N-6 (diesel engine) N-9 (diesel engine) OMB (OMB - O Motor Besshatunniy - con-rod free engine - S.S. Balandin) OMB-127 (X-12 MB - O Motor Besshatunniy - con-rod free engine - S.S. Balandin) OMB-127RN (X-12 MB - O Motor Besshatunniy - con-rod free engine - S.S. Balandin) Soyuz (AMNTK Soyuz) Soyuz R-79V-300 Soyuz R-79M Soyuz R-179-300 Soyuz VK-21 Soyuz R134-300 SPA SPA 6A Speer Speer S-2-C Sperry (Lawrence Sperry Aircraft Co) Sperry WBB 2-stroke Spyker Spijker 135hp rotary Sport Plane Power (Sport Plane Power Inc.) Sport Plane Power K-100A STAL STAL Skuten STAL Dovern Star (Star Engineering Co. Ltd.) Star 40hp Stark (Stark Flugzeugbau KG) Stark Stamo 1400 Statax (Statax Engine Company Ltd. – prev. Statax-Motor of Zurich) Statax 3cyl 10hp axial Statax 5cyl 40hp axial Statax 7cyl 80hp axial Statax 10cyl 100hp axial Stoewer Stoewer 125hp Stoewer 150hp Stoewer 180hp Stratus 2000 Stratus EJ 22 Straughan (Straughn Aircraft Corp) Straughan AL-1000 (Ford model 1A) Studebaker H-9350 (24cyl 153.2 litres) Studebaker-Waterman Studebaker-Waterman S-1 Sturtevant Sturtevant 1913 40hp Sturtevant 1913 60hp Sturtevant 5 140 hp V-8 Sturtevant 5A 140 hp V-8 Sturtevant 5A-4 Sturtevant 5A-4½ 210 hp V-8 Sturtevant 7 300 hp V-12 Sturtevant D-4 48 hp 4IL Sturtevant D-6 86 hp 6IL Sturtevant E-6 100 hp 6IL Subaru Subaru EJ25 Subaru EA82 Sulzer Sulzer ATAR 09C Sunbeam Source: Lumsden. 
Sunbeam 110 hp Sunbeam 150 hp Sunbeam 200 hp Sunbeam 225 hp Sunbeam Afridi Sunbeam Amazon Sunbeam Arab Sunbeam Bedouin Sunbeam Cossack Sunbeam Crusader Sunbeam Dyak Sunbeam Gurkha Sunbeam Kaffir Sunbeam Malay Sunbeam Maori Sunbeam Manitou Sunbeam Matabele Sunbeam Mohawk Sunbeam Nubian Sunbeam Pathan Sunbeam Saracen Sunbeam Sikh Sunbeam Semi-Sikh Sunbeam Sikh II a.k.a. Semi-Sikh Sunbeam Sikh III Sunbeam Spartan Sunbeam Tartar Sunbeam Viking Sunbeam Zulu Sunbeam 2,000 hp – engine for Kaye Don's Silver Bullet land speed record car Superior Superior Air Parts XP-320 Superior Air Parts XP-360 Superior Air Parts XP-382 Superior Air Parts XP-400 Superior Air Parts Gemini Diesel 100 Superior Air Parts Gemini Diesel 125 Superior Air Parts Vantage Survol-de Coucy Survol-de Coucy Pygmée 40 hp Svenska Svenska Flygmotor P/15-54 IA R-19-SR/1 Indio Svenska Flygmotor RM1 Goblin Svenska Flygmotor RM2 Ghost Svenska Flygmotor RM5 Avon Svenska Flygmotor RM6 Avon Svenska Flygmotor RR2 Svenska RM8 Svenska F-451-A Trollet Svenska Flygmotor VR-3 Szekely Szekely SR-3 O 3-cyl (SR - "Sky Roamer") Szekely SR-3 L Szekely SR-5 5-cyl Szekely 100 7-cyl Szekely O-125 T Take Off Take Off TBM 10 Take Off TBM 11 Take Off TBM 12 Tatra Tatra T100 Tatra T101 TBS (Turbinenbau Schuberth Schwabhausen GmbH) TBS 400N-J40P TEC See: Mosler Technopower (Technopower Inc.) Technopower Twin O-101 TEI TEI PD170 TEI TS1400 Teledyne CAE CAE 210 (XT51-1 - Turbomeca Artouste I) 280 shp CAE 217-5 (XT72 - Turbomeca Astazou) 600shp CAE 217-10 (XT65 - scaled down Astazou) 305 shp CAE 217A (XT67 - coupled Turbomeca Astazou X) CAE 220-2 (XT51-3 - Turbomeca Artouste II) CAE 227 CAE 300 CAE 320 (Turbomeca Palas - 350 lbf thrust) CAE 325 (Continental TS325-1?) 
CAE 324 CAE 382 Continental T51 - (development of Turbomeca Artouste I) 280 shp CAE T72 - (Turbomeca Astazou) 600shp CAE T65 - (scaled down Astazou) 305 shp CAE T67 - (coupled Turbomeca Astazou X) Teledyne CAE 352 Teledyne CAE 354 Teledyne CAE 356 Teledyne CAE 365 Teledyne CAE 370 Teledyne CAE 372 Teledyne CAE 373 Teledyne CAE 382 Teledyne CAE 440 Teledyne CAE 455 Teledyne CAE 472 (see F106) Teledyne CAE 490 Teledyne CAE 555 Teledyne CAE J69 Teledyne CAE LJ95 Teledyne CAE J100 Teledyne CAE J402 Teledyne CAE F106 Teledyne CAE F408 Teledyne CAE CJ69 Teledyne CAE TS120 Thaheld Thaheld O-290 diesel Thermo-Jet (Thermo-Jet Standard Inc.) Thermo-Jet J3-200 Thermo-Jet J5-200 Thermo-Jet J7-300 Thermo-Jet J8-200 Thermo-Jet J10-200 Thermo-Jet J13-202 Thames (Thames Ironworks and Shipbuilding Co. Ltd.) Thames 30hp 4OW Thielert Thielert Centurion 1.7 Thielert Centurion 4.0 Thiokol Data from: Jane's All the World's Aircraft 1962-3 Thiokol LR44 Thiokol LR58 Thiokol LR62 Thiokol LR99 Thiokol M6 (TX-136) Thiokol M10 (TX-10) Thiokol M12 (TX-12) Thiokol M16 (TX-16) Thiokol M18 (TX-18) Thiokol M19 Thiokol M20 (TX-20) Thiokol M30 (TX-30) Thiokol M33 (TX-33) Thiokol M46 Thiokol M51 (TX-131-15) Thiokol M55 Thiokol M58 (TX-58) Thiokol TU-122 Thiokol TX-135 Thiokol TD-174 Guardian Thiokol TE-29 Recruit Thiokol TD-214 Pioneer Thiokol TE-289 Yardbird Thiokol TE-307 Apache Thomas (Thomas Aeromotor Company, United States) Thomas 120hp 4-cyl in-line Thomas 8 135 hp Thomas 88 150 hp Thomas 890 250 hp Thorotzkai (Thorotzkai Péter alt. spelling Thoroczkay) Thorotzkai 12hp Thorotzkai 22hp 3cyl. radial Thorotzkai 35hp opposed twin Thorotzkai typ.7 35hp Thorotzkai 120hp Thorotzkai Gamma-III (35 hp 3cyl. radial) Thulin Thulin A (engine) Thulin D (engine) (Le Rhône 18E?) Thulin E (engine) Thulin G (engine) (Le Rhône 11F?) Thunder (Thunder Engines Inc.) 
Thunder TE495-TC700 Tiger (The Light Manufacturing and Foundry Company) Tiger 100 Tiger 125 Tiger Kitten-20 Tiger Kitten-30 Tiger Junior 50 Tips Tips 480hp 250 hp (18 cyl., 1717.67 ci, air- and water-cooled rotary engine. At rated RPM the crankshaft rotated at 1800 rpm, propeller shaft at 1080 rpm and the engine body at 60 rpm. Cooling was by direct air flow and tubular radiators between the cylinders, with water circulating without hoses or pumps.) Tips & Smith Tips & Smith Super-Rhône Tomonoo (Tomon Naoji) Tomono 90hp 6-cyl in-line Tone Tone 2V9 180 hp TNCA TNCA Aztatl TNCA Trebol Tokyo Gasu Denki/Gasuden Tokyo Gasu Denki Amakaze Tokyo Gasu Denki Hatakaze Tokyo Gasu Denki Jimpu 3 Tokyo Gasu Denki Kamikaze Tokyo Gasu Denki Tempu Gasuden Amakaze Gasuden Hatakaze Gasuden Jimpu 3 Gasuden Kamikaze Gasuden Tempu Torque Master (Valley Engineering) Torque Master 1835cc Torque Master 1915cc Torque Master 2180cc Tosi Tosi 450hp V-12 Total Engine Concepts Total Engine Concepts MM CB-40 Trace Engines Trace turbocharged V-8 Train (Établissements E. Train / Société des Constructions Guinard) Train 2T Train 4A Train 4E Train 4T Train 6C Train 6D Train 6T Trebert Trebert 60hp 6-cyl rotary barrel engine Trebert 100hp V-8 Tumansky Tumansky M-87 Tumansky M-88 Tumansky R-11 Tumansky R-13 Tumansky R-15 Tumansky RU-19 Tumansky R-21 Tumansky R-25 Tumansky R-266 Tumansky R-27 Tumansky R-29 Tumansky RD-9 Turbomeca Source: Gunston except where noted Turbomeca Arbizon Turbomeca Ardiden Turbomeca Arrius Turbomeca Arrius (1950s) Turbomeca Arriel Turbomeca Artouste Turbomeca Aspin Turbomeca Astazou Turbomeca Astafan Turbomeca Aubisque Turbomeca Autan Turbomeca Bastan Turbomeca Bi-Bastan - paired Bastan IV Turbomeca Gabizo Turbomeca Gourdon Turbomeca Makila Turbomeca Marboré Turbomeca Marcadau Turbomeca Orédon (1947) Turbomeca's first gas turbine ca 1948; name reused in 1965 Turbomeca Ossau Turbomeca Palas Turbomeca Palouste Turbomeca Piméné Turbomeca Soular (Soulor?) 
Turbomeca Super Palas Turbomeca Tramontane Turbomeca Turmo I (turboshaft) Turbomeca Turmo II (turboshaft) Turbomeca Turmo III (turboshaft) Turbomeca Turmastazou Turbomeca Double Turmastazou Turbomeca TM251 Turbomeca TM319 Turbomeca TM333 Turbomeca Agusta TAA230 Turbomeca/SNECMA Larzac Rolls-Royce/Turbomeca RTM321 Rolls-Royce/Turbomeca RTM322 Rolls-Royce/Turbomeca Adour Rolls-Royce/Turbomeca Orédon MAN/Rolls-Royce/Turboméca MTR390 MTU/Turbomeca MTM385 Turbo Research Turbo Research was taken over by Avro Canada Turbo Research TR.1 – abandoned design study Turbo Research TR.2 – abandoned design study Turbo Research TR.3 – abandoned design study Turbo Research TR.4 - see Avro Canada Chinook Turbo Research TR.5 - see Avro Canada Orenda Turbo-Union Turbo-Union was a joint venture between Rolls-Royce Ltd, MTU and Aeritalia to produce the engine for the Panavia Tornado Turbo-Union RB199 Twombly Motor Company (Willard Irving Twombly) A 50hp 7-cylinder rotary, 1912. U Ufimtsev (A.G. Ufimtsev) Ufimtsev 1908 20hp 2-cyl 2-stroke rotary Ufimtsev 1910 35-40hp 4-cyl contra-rotating rotary Ufimtsev ADU-4 – 60 hp 6-cyl contra-rotating rotary ULPower ULPower UL260i ULPower UL350i ULPower UL390i ULPower UL520i Union (Union Gas Engine Company, United States) Union 120hp 6-cyl in-line Ursinus (Ursinus Leichtmotorenbau) Ursinus U.1 Ursinus U.2 UTC (United Technology Corporation) UTC P-1 V Valley (Valley Engineering) Valley 1915cc Valley 2276cc Van Blerck (Van Blerck Motor Co., Monroe, Michigan) Van Blerck 124hp V-8 Van Blerck 135hp V-8 Van Blerck 185hp V-12 Vaslin (Henri Vaslin) Vaslin 15hp flat-4 Vaslin 24hp Vaslin 55hp 6 in-line water-cooled Vauxhall (Vauxhall Motors Ltd.) 
Vauxhall 175hp V-12 Vaxell Vaxell 60i Vaxell 80i Vaxell 100i Vedeneyev Vedeneyev M14P Velie Velie M-5 Velie L-9 Verdet Verdet 55hp 7-cyl rotary Vereinigung Volkseigener Betriebe Flugzeugbau See: Pirna Verner Motor Source: RMV, Verner Motor range of engines, Verner Scarlett mini 3 – 3 cyl radial Verner Scarlett mini 5 – 5 cyl radial Verner Scarlett 7H – 7 cyl radial Verner Scarlett 36Hi Verner JCV 360 Verner VM 125 Verner VM 133 Verner VM 144Hi Verner VM 1400 Verner Scarlett 3V Verner Scarlett 5V Verner Scarlett 5Si Verner Scarlett 7U Verner Scarlett 9S Viale Viale 35 hp (1910 35-50 hp 5-cyl. radial) Viale 30hp 3-cyl fan Viale 50hp 5-cyl radial Viale 70hp 7-cyl radial Viale 100hp 10-cyl radial VIJA VIJA J-10Si VIJA J-10Sbi VIJA AG-12Si VIJA AG-12Sbi VIJA J-16Ti Viking (Viking Aircraft Engines) Viking 100 Viking 110 Viking (Detroit Manufacturers Syndicate Inc) Viking 140hp X-16 Villiers-Hay (Villiers-Hay Development Ltd.) Villiers-Hay 4-L-318 Maya I Villiers-Hay 4-L-319 Maya II Vittorazi (Morrovalle, Italy) Vittorazi Easy 100 Plus Vittorazi Fly 100 Evo 2 Vittorazi Moster 185 Vivinus Data from: Vivinus 32.5hp 4-cyl in-line Vivinus 37.5hp 4-cyl in-line Vivinus 39.2hp 4-cyl in-line Vivinus 50hp 4-cyl in-line Vivinus 60hp 4-cyl in-line Vivinus 70hp 4-cyl in-line Volkswagen 1/2 VW Volvo Aero RM1 RM2 - licence built de Havilland Ghost RM3 RM4 RM5, RM6 - licence built Rolls-Royce Avon Volvo RM8 - modified Pratt & Whitney JT8D Volvo RM12 - variant of General Electric F404 von Behren von Behren O-113 Air Horse Voronezh (Voronezh engine factory) Voronezh MV-6 W Wackett Source: RMV Wackett 2-cylinder 20/25hp Wackett 2-cylinder 40hp Wackett Victa 1-cylinder 1924 Walter Aircraft Engines Walter A Walter 108H Walter 110H Walter W.III - licensed BMW IIIa Walter W.IV - licensed BMW IV Walter W.V - licensed Fiat A.20 Walter W.VI - licensed Fiat A.22 Walter W.VII - licensed Fiat A.24 Walter W.VIII - licensed Fiat A.25 Walter H80 Walter NZ 40 Walter NZ 60 Walter NZ 85 Walter NZ 120 Walter M05 
- Rolls-Royce Nene Walter M06 - Klimov VK-1 Walter M701 Walter M202 Walter M208 Walter M332 Walter M337 Walter M436 Walter M462 Walter M466 Walter M601 Walter M602 Walter M701 Walter Junior Walter Mikron Walter Minor 4 Walter Minor 6 Walter Minor 12 I-MR Walter Major 4-1 Walter Major 6-1 Walter Atlas Walter Atom Walter Bora Walter Castor Walter Gemma Walter Jupiter - licensed Bristol Jupiter Walter Merkur - licensed Bristol Mercury Walter Mars- licensed Gnome-Rhône 14M Walter Mars I Walter Mira R - licensed and developed Pobjoy R Walter Mistral K 14 - licensed Gnome-Rhône Mistral Major Walter Pegas - licensed Bristol Pegasus Walter Polaris Walter Pollux Walter Regulus Walter Sagitta Walter Scolar Walter Super Castor Walter Vega Walter Venus Walter (HWK) Walter RI-201 "Cold" Take Off Pack Walter RI-203 "Hot" Take Off Pack Walter RII.203 Walter RII.211 Walter HWK 109-500 Walter HWK 109-501 Walter HWK 109-507 Walter HWK 109-509 Walter HWK 109-559 Walter HWK 109-719 Walter HWK 109-729 (SV-stoff and R-stoff) Walter HWK 109-739 Walter Heimatschützer I Walter Heimatschützer IV Walter Me.109 Climb Assister Wankel Wankel AG LCR - 407 SGti Wankel AG LCR - 814 TGti Warbirds-engines (Cesky znalecky institut sro, Prague, Czech Republic) Warbirds ASz-62 IR Warner Warner Scarab/Super Scarab Warner Scarab Junior Warner R-420 Warner R-500 Warner R-550 Warner 145 Warner 165 Warner 185 WASAG (Westphalisch-Anhaltische Springstoff A.G.)Source: RMVWASAG 109-506 WASAG 109-512 WASAG 109-522 WASAG 109-532 Watson (Gary Watson of Newcastle, Texas) Watson 917cc 1/2 VW Weir Weir 2HOA Weir 40/50hp 4IL Weiss (Weiss Manfréd Repülögép- és Motorgyár Rt – Mannfred Weiss Aircraft company – engine works) Weiss WM Sh 10 – licence built Siemens-Halske Sh 10 Weiss WM Sh 11 – licence built Siemens-Halske Sh 11 Weiss WM Sh 12 – licence built Siemens-Halske Sh 12 Weiss Sport I 100-130 hp air-cooled 4-cylinder inline engines Weiss Sport II 100-130 hp air-cooled 4-cylinder inline engines Weiss Sport III 
100-130 hp air-cooled 4-cylinder inline engines Weiss - Bristol Jupiter VI Weiss MW 9K Mistral (520 hp Gnome-Rhône 9Krsd) Weiss WM-K-14A (870 hp Gnome-Rhône 14K Mistral Major) Weiss WM-K-14B (910 hp Gnome-Rhône 14K Mistral Major) Weiss-Daimler-Benz DB 605B (for Hungarian built Messerschmitt Me 210Ca-1/C-1s). Welch (Welch Aircraft Co) Welch O-2 (O-135) Wells & Adams Wells & Adams 50hp Wells & Adams 135hp V-8 Werner Werner 30hp 4-cyl in-line Werner & Pfleiderer Werner & Pfleiderer 90/95hp 4-cyl inline Werner & Pfleiderer 95hp 4-cyl inverted inline Werner & Pfleiderer 140/150hp 6-cyl inline Werner & Pfleiderer 220hp 8-cyl Wessex a 130 hp 6-cylinder in-line West Engineering West Engineering XJ38 Westermayer (Oskar Westermayer) Westermayer W-5-33 Western (Western Enterprise Engine Co) Western L-7 Westinghouse Westinghouse J30 Westinghouse J32 Westinghouse J34 Westinghouse J40 Westinghouse J43 Westinghouse J45 Westinghouse J46 Westinghouse J50 Westinghouse J54 Westinghouse J74 (none built?) Westinghouse J81 (Rolls-Royce Soar) Westinghouse T30 (25D) Westinghouse T70 Westinghouse 19XB Westinghouse 24C Westinghouse 25D (T30) Westinghouse 40E Westinghouse 9.5A/B Wherry Wherry 4-cyl rotary barrel engine White & PoppeSource: RMVWhite & Poppe 23hp 6-cyl in-line White & Poppe 130hp V-8 WhiteheadSource: RMVWhitehead 1910 40hp Whitehead 1910 75hp Wickner Wickner Wicko F Wiley Post Wiley Post AL-1000 WilkschSource: RMVWilksch WAM100 Wilksch WAM120 Wilksch WAM160 Williams a water-cooled 125hp V-8 Williams InternationalSource: RMVWilliams F107 (WR19) Williams F112 Williams F121 Williams F122 Williams F124 Williams F129 (FJ44) Williams F415 Williams EJ22 Williams FJ22 Williams FJ33 Williams FJ44 Williams FJX-1 Williams FJX-2 Williams J400 (WR24) Williams WJ38-5 Williams WJ119 Williams WR2 Williams WR9 Williams WR19 Williams WRC19 Williams WR24 Williams WR27-1 Williams WR34 Williams WR44 Williams WST117 Williams WTS34 Williams FJX-2 Wills (C. 
Howard Wills) WBB V-4 2-stroke for Sperry aerial torpedo Winterthur (The Swiss Locomotive and machine Works) Winterthur V-8 Winterthur V-12 Wisconsin 140hp 6-cyl in-line 250hp V-12 Woelfe Aixro Woelfe Aixro XF40 Wojcicli (S.Wojcicli) Wojcicli 10kg pulsejet Wojcicli 20kg pulsejet Wojcicli 40kg pulsejet Wojcicli 70kg pulsejet Wojcicli 11kg ramjet Wojcicli 200kg ramjet WolseleySource: Lumsden. Wolseley 30hp 4-cylinder Wolseley 50hp V-8 air-cooled Wolseley 54hp V-8 water-cooled Wolseley 60 hp, also known as Type C - V-8 water-cooled 80 hp "Type B" Wolseley 75hp V-8 air-cooled Wolseley 90hp V-8 air-cooled Wolseley 90hp V-8 water-cooled Wolseley 120/150hp V-8 water-cooled Wolseley 1911 Type A V-8 Wolseley 1911 Type D V-8 Wolseley 160hp - 1912 V-8 Wolseley Aquarius, also known as Wolseley AR7 Wolseley Aries, also known as Wolseley AR9 Wolseley Leo Wolseley Libra Wolseley Scorpio Wolseley Viper - licence built Hispano Suiza HS-8 Wolseley Python Wolseley Adder Wright Wright Model 4 Wright 1903 12hp Wright 32.5hp 4-cylinder in-line 4.25" x 4.33" Wright 30/35hp 4-cyl in-line Wright 50hp 6-cyl in-line Wright 60hp V-8 Wright 1910 50-60hp Wright 6-60 60 hp 6IL Wright R-460 Wright R-540 Whirlwind Wright R-760 Whirlwind Wright R-790 Whirlwind Wright R-975 Whirlwind Wright R-1200 Simoon Wright R-1300 Cyclone 7 Wright R-1454 (R-1) Wright R-1510 Whirlwind 14Wright R-1670 Wright R-1750 Cyclone 9Wright R-1820 Cyclone Wright R-2160 Tornado Wright R-2600 Twin Cyclone Wright R-3350 Duplex-Cyclone Wright R-4090 Cyclone 22 Wright Gale (from Lawrance L-4) Wright V-720 Wright IV-1460 Wright IV-1560 Wright V-1950 Tornado Wright H-2120 12 cylinder liquid cooled radial Wright XH-4240 Wright D-1 Wright F-50 Cyclone Wright F-60 Cyclone Wright G Cyclone Wright G-100 Wright G-200 Wright GTC-1 Wright J-1 Wright J-3 Whirlwind Wright J-4 Whirlwind Wright J-5 Whirlwind Wright J-6 Whirlwind 5 Wright J-6 Whirlwind 7 Wright J-6 Whirlwind 9 Wright K-2 Wright P-1 Wright P-2 Wright R-1 (R-1454) Wright T 
Wright T-1 Wright T-2 Wright T-3 Tornado Wright T-3A Tornado (V-1950) Wright T-4 Wright TJ-6 Wright TJ-7 Wright TJA-1 Wright TJ-38A1 Commercial (Olympus 6) Wright TP-51A2 Wright J51 Wright J59 Wright J61 Wright J65 (Armstrong-Siddeley Sapphire) Wright J67 (Bristol Olympus) Wright T35 (from Lockheed J37) Wright T43 Wright T47 (Olympus turboprop ~10,500shp) Wright T49 (Sapphire turboprop ~6,500–10,380ehp) Wright Company Wright Vertical 4 Wright-Gypsy Wright-Gypsy L-320 Wright-Hisso (Wright-Martin/Wright-Hisso) Wright-Hisso A Wright-Hisso B 4-cyl in-line water-cooled Wright-Hisso C geared A Wright-Hisso D geared A with cannon Wright-Hisso E (HC 'I') Wright-Hisso E-2 (HC 'E') Wright-Hisso E-3 Wright-Hisso E-4 Wright-Hisso F ('D' without cannon) Wright-Hisso H Wright-Hisso H-2 improved 'H' Wright-Hisso I Wright-Hisso K H with 37mm Baldwin cannon Wright-Hisso K-2 Wright-Hisso M experimental 300 hp Wright-Hisso T Wright-Hisso 180hp V-8 direct drive Wright-Hisso 220hp V-8 geared drive Wright-Hisso 300hp V-8 geared drive Wright-Morehouse Wright-Morehouse 2-cyl horizontally opposed 26hp (Lincoln Rocket) Wright-Siemens Wright-Siemens Sh-14 Wright-Tuttle Wright-Tuttle WT-5 Wynne (William Wynne) (The Corvair Authority) Wynne O-164B 100 HP Wynne O-164-BE 110 HP Wynne TSIO-164-BE 145 HP X XCOR Aerospace XCOR XR-4A3 XCOR XR-4K14 Xian Xian WS-9 ("Qinling") Xian WS-15 ("Emei") Y Yamaha Yamaha KT100 York (Jo York) York 4-cyl in-line Yuneec International Yuneec Power Drive 10 Yuneec Power Drive 20 Yuneec Power Drive 40 Yuneec Power Drive 60 Z Zanzottera Zanzottera MZ 34 Zanzottera MZ 100 Zanzottera MZ 201 Zanzottera MZ 202 Zanzottera MZ 301 Zanzottera MZ 313 Z.B. (Ceskoslovenska Zbrojovka A.S. Brno / Zbrojovka Brno) Z.B. 
ZOD-260 Zeitlin (Joseph Zeitlin) Zeitlin 220hp 7-cyl rotary bore, variable stroke Zenoah Zenoah G-25 Zenoah G-50 Zenoah G-72 Zhuzhou (Zhuzhou Aeroengine Factory - ZEF, now South Motive Power and Machinery Complex (SMPMC)) ZEF HS-5 ZEF HS-6 ZEF WZ-8 ZEF WZ-9 ZEF WZ-16 Zlin Source: Zlin Persy Zlin Persy II Zlin Persy III Zlin Toma 4 Zlin Toma 6 Zoche Zoche Z 01 Zoche Z 02 Zoche Z 03 Zoche Z 04 ZOD (Československá zbrojovka Brno - ZOD) ZOD-240 (2-stroke radial) ZOD-260 (2-stroke radial) Zündapp Zündapp 9-090 Zündapp 9-092 See also United States military aircraft engine designations Notes References Further reading External links Zlin Website Lists of aircraft engines
13952620
https://en.wikipedia.org/wiki/Mozilla%20Prism
Mozilla Prism
Mozilla Prism (formerly WebRunner) is a discontinued project which integrated web applications with the desktop, allowing web applications to be launched from the desktop and configured independently of the default web browser. As of November 2010, Prism is listed as an inactive project at the Mozilla Labs website. Prism is based on a concept called a site-specific browser (SSB). An SSB is designed to work exclusively with one web application; it does not have the menus, toolbars, and other accoutrements of a traditional web browser. The software is built upon XULRunner, so it is possible to get some Mozilla Firefox extensions to work in it. The preview announcement of Prism was made in October 2007. On February 1, 2011, Mozilla Labs announced it would no longer maintain Prism, its ideas having been subsumed into a newer project called Chromeless. However, the Mozilla Labs mailing list revealed that Chromeless was not in fact a replacement for Prism: Chromeless is a platform for developers rather than end users, and there is currently no Mozilla replacement for Prism's out-of-the-box site-specific browser functionality. For a while, Prism continued to be maintained under its original name, WebRunner, which was itself discontinued in September 2011. See also Chromium Embedded Framework Site-specific browser Rich Internet application Fluid (web browser) References External links Prism Project at Mozilla Development Center Prism extension for Firefox 3.0 Prism - MozillaWiki prism.mozillalabs.com/ via Internet Archive Free application software Cross-platform free software Free software programmed in C++ Prism Site-specific browsing 2007 software Discontinued software
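The site-specific browser configuration described above can be illustrated with a sketch of a Prism bundle's webapp.ini file. This is reconstructed from memory of the Prism documentation; the exact key set and values are assumptions and should be checked against the archived Prism docs rather than taken as authoritative:

```ini
; webapp.ini — carried inside a Prism .webapp bundle (a renamed ZIP archive).
; Key names below are illustrative, not verified against the final Prism release.
[Parameters]
id=example@prism.app
uri=https://mail.example.com/
; Chrome toggles: hide the normal browser UI so only the web app remains.
status=yes
location=no
sidebar=no
navigation=no
```

In this sketch, the `uri` key pins the SSB to a single web application, and the remaining toggles strip away the browser accoutrements the article mentions (location bar, navigation buttons, sidebar).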
22370080
https://en.wikipedia.org/wiki/3rd%20Command%20and%20Control%20Squadron
3rd Command and Control Squadron
The United States Air Force's 3d Command and Control Squadron (3 CACS) was a command and control unit located at Offutt AFB, Nebraska. Mission The 3d Command and Control Squadron mission was shrouded in secrecy; however, some details from its location, successor unit, and unit emblem shed some light on its operations. History Emblem Significance The device in the center of the disc is a Hydra, a creature which in mythology had several heads. These heads represent the multiple sensor sites of the 3 CACS, which provided continuous, worldwide surveillance. According to mythology, the hydra in combat could detect an attack from any direction even with the loss of a head. Similarly, 3 CACS maintained global coverage even if one of its sensors was put out of operation. This global capability is further amplified by the motto of the unit: "Watching the World". The disc is divided by the colors black and white, which represent night and day; the squadron worked night and day. The line of USAF yellow dividing these colors symbolizes the sun, which divides day and night. The USAF blue in the scrolls suggests the sky, the medium in which the Air Force operates. The USAF yellow in the emblem's borders and in the lettering in the scroll denotes the excellence required of Air Force personnel. The motto describes the mission of 3 CACS and how its men and women continuously carried out that mission. Previous designations Detachment 1, 21st Operations Group (31 Jul 1999–Present?) 3d Command and Control Squadron (1 Mar 1996-31 Jul 1999) Detachment 1, 21st Operations Group (11 Jan 1996-1 Mar 1996) Commanders Major Michael J. Morgan (30 Jun 1998-Aug 1999) Lt Col Diann Latham (11 Mar 1996-???)
Bases stationed Offutt AFB, Nebraska (1 Mar 1996-31 Jul 1999) Equipment Operated Communications System Segment Replacement (CSSR) Survivable Communications Integration System (SCIS) Command and Control Processing and Display System Replacement (CCPDS-R) Decorations Air Force Outstanding Unit Award 1 Jan 1998-31 Dec 1998 1 Oct 1997-30 Sep 1999 1 Oct 1995-30 Sep 1997 References External links Air Force Link Offutt AFB, Nebraska Command and control squadrons of the United States Air Force Military units and formations in New Mexico Space Development
36137956
https://en.wikipedia.org/wiki/PeerJ
PeerJ
PeerJ is an open access peer-reviewed scientific mega journal covering research in the biological and medical sciences. It is published by a company of the same name that was co-founded by CEO Jason Hoyt (formerly at Mendeley) and publisher Peter Binfield (formerly at PLOS ONE), with initial financial backing of US$950,000 from O'Reilly Media's O'Reilly AlphaTech Ventures, and later funding from Sage Publishing. PeerJ officially launched in June 2012, started accepting submissions on December 3, 2012, and published its first articles on February 12, 2013. The company is a member of CrossRef, CLOCKSS, ORCID, and the Open Access Scholarly Publishers Association. The company's offices are in Corte Madera (California, USA), and London (Great Britain). Submitted research is judged solely on scientific and methodological soundness (as at PLoS ONE), with a facility for peer reviews to be published alongside each paper. Business model PeerJ uses a business model that differs from traditional publishers – in that no subscription fees are charged to its readers – and initially differed from the major open-access publishers in that publication fees were not levied per article but per publishing researcher and at a much lower level. PeerJ also offered a preprint service named PeerJ Preprints (launched on April 3, 2013 and discontinued in September 2019). The low costs were said to be in part achieved by using cloud infrastructure: both PeerJ and PeerJ Preprints run on Amazon EC2, with the content stored on Amazon S3. Originally, PeerJ charged a one-time membership fee to authors that allowed them—with some additional requirements, such as commenting upon, or reviewing, at least one paper per year—to publish in the journal for life. Since October 2016, PeerJ has reverted to article processing charges, but still offers the lifetime membership subscription as an alternative option. 
The current charge for non-members publishing a single article in PeerJ is $1,195.00, regardless of the number of authors. Alternatively, a lifetime membership permitting one free paper per year is $399 per author (basic membership), or $499 for five papers per year (premium membership). It may sometimes be cheaper to pay the per-publication charge than to pay membership fees for all authors. Reception The journal is abstracted and indexed in Science Citation Index Expanded, PubMed, PubMed Central, Scopus, Web of Science, Google Scholar, the DOAJ, the American Chemical Society (ACS) databases, EMBASE, CAB Abstracts, Europe PubMed Central, AGORA, ARDI, HINARI, OARE, the ProQuest databases, and OCLC. According to the Journal Citation Reports, its impact factor increased from 2.118 in 2017 to 2.353 in 2018. In April 2013, The Chronicle of Higher Education selected PeerJ CEO and co-founder Jason Hoyt as one of "Ten Top Tech Innovators" for the year. On September 12, 2013, the Association of Learned and Professional Society Publishers awarded PeerJ the "Publishing Innovation of the Year" award. Computer science and chemistry journals On 3 February 2015, PeerJ launched a new journal dedicated to computer science: PeerJ Computer Science. The first article on PeerJ Computer Science was published on 27 May 2015. On 6 November 2018, PeerJ launched five new journals dedicated to chemistry: PeerJ Physical Chemistry, PeerJ Organic Chemistry, PeerJ Inorganic Chemistry, PeerJ Analytical Chemistry, and PeerJ Materials Science. See also arXiv eLife References External links PeerJ PrePrints PeerJ Computer Science Creative Commons Attribution-licensed journals Publications established in 2013 Science and technology in London Science and technology in the San Francisco Bay Area English-language journals Biology journals General medical journals Open access publishers Continuous journals Academic publishing companies O'Reilly Media
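The pricing trade-off noted in the PeerJ article above (a flat per-article charge versus per-author memberships) can be made concrete with a small worked comparison. The dollar amounts are those quoted in the article; the break-even logic and function name below are an illustration, not PeerJ's official guidance:

```python
# Compare PeerJ's two pricing routes for a single paper (figures as quoted above).
APC = 1195              # one-time article processing charge, any number of authors
BASIC_MEMBERSHIP = 399  # per-author lifetime membership, one free paper per year

def cheaper_route(n_authors: int) -> str:
    """Return the cheaper way to publish one paper with n_authors authors."""
    membership_cost = n_authors * BASIC_MEMBERSHIP
    return "memberships" if membership_cost < APC else "APC"

# With these prices, memberships win for one or two authors;
# the flat APC wins from three authors upward (3 * 399 = 1197 > 1195).
print(cheaper_route(2))  # memberships
print(cheaper_route(4))  # APC
```

This is why the article notes that the per-publication charge "may sometimes be cheaper": the membership cost scales with the author count while the APC does not.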
28911287
https://en.wikipedia.org/wiki/Sophalexios
Sophalexios
In Greek mythology, Sophalexios (“skilled defender”) was the son of Jason, leader of the Argonauts, and Creusa, the daughter of Creon, king of Corinth. Mythology As Jason was still married to Medea, daughter of King Aeetes of Colchis, when Sophalexios was born, Sophalexios’ true parentage was kept secret for fear that Medea would kill the infant. Jason later married Creusa, which angered Medea. As revenge, Medea presented Creusa with a cursed dress that burned both Creusa and her father, King Creon of Corinth, to death. After her revenge, Medea fled Corinth without ever knowing of Jason and Creusa’s son. Sophalexios remained in Corinth, where he later learned of his background. Agamemnon, king of Mycenae, never recognised Sophalexios as Creon’s heir and so claimed to be king of Corinth himself. Sophalexios, in turn, never acknowledged Agamemnon as king of Corinth. Sophalexios grew into a fine young soldier, often proving his skill, courage, and leadership in battle. He became the leader of the Ephyrans, an elite force within the Corinthian army. After Paris of Troy had taken Helen of Sparta back to Troy, Agamemnon sent Odysseus, king of Ithaca, and Nestor, king of Pylos, to recruit forces from around the Achaean region to attack Troy. Sophalexios was well known for his disdain for Agamemnon, so it came as no surprise that when Odysseus and Nestor arrived in Corinth to recruit forces, particularly Sophalexios’ Ephyrans, Sophalexios strongly disagreed with their plans. He knew Agamemnon was only using Helen of Sparta as an excuse to attack Troy so he could sack the city and gain control of the Aegean Sea’s trade routes. Sophalexios also knew that opposing Agamemnon in the Trojan War would give him an opportunity to challenge Agamemnon’s claim to the kingdom of Corinth. To keep the peace, Sophalexios made Odysseus and Nestor believe that he would commit his Ephyrans to the Achaean forces and gather them at Aulis.
As soon as Odysseus and Nestor had left Corinth, Sophalexios and his Ephyrans sailed for Troy to warn the Trojans of Agamemnon’s plans and to help defend the city as Trojan allies. Sophalexios and his Ephyrans played an important role in the defence of Troy. The elite force always fought close beside Hector, the prince of Troy and leader of the Trojan forces. Renowned for their superior defensive skills, none more skilful than their leader Sophalexios, the Ephyrans added tremendous strength to the Trojan forces. A few years into the Trojan War, Sophalexios married Lysimache, a daughter of King Priam of Troy, and had a son called Dardanos. After Hector was slain by Achilles, Sophalexios knew Troy’s fall was imminent. Fearing for his family’s safety, Sophalexios helped his wife and son flee the city. Never lacking in courage, Sophalexios stayed behind to help defend Troy. He was killed by Diomedes. Lysimache and Dardanos later returned to Troy after the city had been sacked and rebuilt. Little is known of them after their return. It is believed that they both became part of the restored royal court. References and further reading Characters in Greek mythology
5134
https://en.wikipedia.org/wiki/Chess
Chess
Chess is a board game played between two players. It is sometimes called Western chess or international chess to distinguish it from related games such as xiangqi and shogi. The current form of the game emerged in Southern Europe during the second half of the 15th century after evolving from chaturanga, a similar but much older game of Indian origin. Today, chess is one of the world's most popular games, played by millions of people worldwide. Chess is an abstract strategy game and involves no hidden information. It is played on a square chessboard with 64 squares arranged in an eight-by-eight grid. At the start, each player (one controlling the white pieces, the other controlling the black pieces) controls sixteen pieces: one king, one queen, two rooks, two bishops, two knights, and eight pawns. The object of the game is to checkmate the opponent's king, whereby the king is under immediate attack (in "check") and there is no way for it to escape. There are also several ways a game can end in a draw. Organized chess arose in the 19th century. Chess competition today is governed internationally by FIDE (International Chess Federation). The first universally recognized World Chess Champion, Wilhelm Steinitz, claimed his title in 1886; Magnus Carlsen is the current World Champion. A huge body of chess theory has developed since the game's inception. Aspects of art are found in chess composition; and chess in its turn influenced Western culture and art and has connections with other fields such as mathematics, computer science, and psychology. One of the goals of early computer scientists was to create a chess-playing machine. In 1997, Deep Blue became the first computer to beat the reigning World Champion in a match when it defeated Garry Kasparov. Today's chess engines are significantly stronger than the best human players, and have deeply influenced the development of chess theory. 
Rules The rules of chess are published by FIDE (Fédération Internationale des Échecs), chess's international governing body, in its Handbook. Rules published by national governing bodies, or by unaffiliated chess organizations, commercial publishers, etc., may differ in some details. FIDE's rules were most recently revised in 2018. Setup Chess pieces are divided into two different colored sets. While the sets may not be literally white and black (e.g. the light set may be a yellowish or off-white color, the dark set may be brown or red), they are always referred to as "white" and "black". The players of the sets are referred to as White and Black, respectively. Each set consists of 16 pieces: one king, one queen, two rooks, two bishops, two knights, and eight pawns. Chess sets come in a wide variety of styles; for competition, the Staunton pattern is preferred. The game is played on a square board of eight rows (called ranks) and eight columns (called files). By convention, the 64 squares alternate in color and are referred to as light and dark squares; common colors for chessboards are white and brown, or white and dark green. The pieces are set out as shown in the diagram and photo. Thus, on White's first rank, from left to right, the pieces are placed in the following order: rook, knight, bishop, queen, king, bishop, knight, rook. On the second rank is placed a row of eight pawns. Black's position mirrors White's, with an equivalent piece on the same file. The board is placed with a light square at the right-hand corner nearest to each player. The correct positions of the king and queen may be remembered by the phrase "queen on her own color": the white queen begins on a light square and the black queen on a dark square.
In competitive games, the piece colors are allocated to players by the organizers; in informal games, the colors are usually decided randomly, for example by a coin toss, or by one player concealing a white pawn in one hand and a black pawn in the other, and having the opponent choose. Movement White moves first, after which players alternate turns, moving one piece per turn, except for castling, when two pieces are moved. A piece is moved to either an unoccupied square or one occupied by an opponent's piece, which is captured and removed from play. With the sole exception of en passant, all pieces capture by moving to the square that the opponent's piece occupies. Moving is compulsory; a player may not skip a turn, even when having to move is detrimental. Each piece has its own way of moving. In the diagrams, the dots mark the squares to which the piece can move if there are no intervening piece(s) of either color (except the knight, which leaps over any intervening pieces). All pieces except the pawn can capture an enemy piece if it is located on a square to which they would be able to move if the square was unoccupied. The squares on which pawns can capture enemy pieces are marked in the diagram with black crosses. The king moves one square in any direction. There is also a special move called castling that involves moving the king and a rook. The king is the most valuable piece — attacks on the king must be immediately countered, and if this is impossible, immediate loss of the game ensues (see Check and checkmate below). A rook can move any number of squares along a rank or file, but cannot leap over other pieces. Along with the king, a rook is involved during the king's castling move. A bishop can move any number of squares diagonally, but cannot leap over other pieces. A queen combines the power of a rook and bishop and can move any number of squares along a rank, file, or diagonal, but cannot leap over other pieces. 
A knight moves to any of the closest squares that are not on the same rank, file, or diagonal. (Thus the move forms an "L"-shape: two squares vertically and one square horizontally, or two squares horizontally and one square vertically.) The knight is the only piece that can leap over other pieces. A pawn can move forward to the unoccupied square immediately in front of it on the same file, or on its first move it can advance two squares along the same file, provided both squares are unoccupied (black dots in the diagram). A pawn can capture an opponent's piece on a square diagonally in front of it by moving to that square (black crosses). A pawn has two special moves: the en passant capture and promotion. Check and checkmate When a king is under immediate attack, it is said to be in check. A move in response to a check is legal only if it results in a position where the king is no longer in check. This can involve capturing the checking piece; interposing a piece between the checking piece and the king (which is possible only if the attacking piece is a queen, rook, or bishop and there is a square between it and the king); or moving the king to a square where it is not under attack. Castling is not a permissible response to a check. The object of the game is to checkmate the opponent; this occurs when the opponent's king is in check, and there is no legal way to get it out of check. It is never legal for a player to make a move that puts or leaves the player's own king in check. In casual games, it is common to announce "check" when putting the opponent's king in check, but this is not required by the rules of chess and is not usually done in tournaments. Castling Once per game, each king can make a move known as castling. Castling consists of moving the king two squares toward a rook of the same color on the same rank, and then placing the rook on the square that the king crossed. 
Castling is permissible if the following conditions are met: Neither the king nor the rook has previously moved during the game. There are no pieces between the king and the rook. The king is not in check and does not pass through or land on any square attacked by an enemy piece. Castling is still permitted if the rook is under attack, or if the rook crosses an attacked square. En passant When a pawn makes a two-step advance from its starting position and there is an opponent's pawn on a square next to the destination square on an adjacent file, then the opponent's pawn can capture it en passant ("in passing"), moving to the square the pawn passed over. This can be done only on the turn immediately following the enemy pawn's two-square advance; otherwise, the right to do so is forfeited. For example, in the animated diagram, the black pawn advances two squares from g7 to g5, and the white pawn on f5 can take it en passant on g6 (but only immediately after the black pawn's advance). Promotion When a pawn advances to its eighth rank, as part of the move, it is promoted and must be exchanged for the player's choice of queen, rook, bishop, or knight of the same color. Usually, the pawn is chosen to be promoted to a queen, but in some cases, another piece is chosen; this is called underpromotion. In the animated diagram, the pawn on c7 can be advanced to the eighth rank and be promoted. There is no restriction on the piece promoted to, so it is possible to have more pieces of the same type than at the start of the game (e.g., two or more queens). If the required piece is not available (e.g. a second queen) an inverted rook is sometimes used as a substitute, but this is not recognized in FIDE sanctioned games. End of the game Win A game can be won in the following ways: Checkmate: The king is in check and the player has no legal move. (See check and checkmate above) Resignation: A player may resign, conceding the game to the opponent. 
Most tournament players consider it good etiquette to resign in a hopeless position. Win on time: In games with a time control, a player wins if the opponent runs out of time, even if the opponent has a superior position, as long as the player has a theoretical possibility to checkmate the opponent were the game to continue. Forfeit: A player who cheats, violates the rules, or violates the rules of conduct specified for the particular tournament can be forfeited. Occasionally, both players are forfeited. Draw There are several ways a game can end in a draw: Stalemate: If the player to move has no legal move, but is not in check, the position is a stalemate, and the game is drawn. Dead position: If neither player is able to checkmate the other by any legal sequence of moves, the game is drawn. For example, if only the kings are on the board, all other pieces having been captured, checkmate is impossible, and the game is drawn by this rule. On the other hand, if both players still have a knight, there is a highly unlikely yet theoretical possibility of checkmate, so this rule does not apply. The dead position rule supersedes the previous rule which referred to "insufficient material", extending it to include other positions where checkmate is impossible, such as blocked pawn endings where the pawns cannot be attacked. Draw by agreement: In tournament chess, draws are most commonly reached by mutual agreement between the players. The correct procedure is to verbally offer the draw, make a move, then start the opponent's clock. Traditionally, players have been allowed to agree to a draw at any point in the game, occasionally even without playing a move; in recent years efforts have been made to discourage short draws, for example by forbidding draw offers before move thirty. Threefold repetition: This most commonly occurs when neither side is able to avoid repeating moves without incurring a disadvantage. 
In this situation, either player can claim a draw; this requires the players to keep a valid written record of the game so that the claim can be verified by the arbiter if challenged. The three occurrences of the position need not occur on consecutive moves for a claim to be valid. The addition of the fivefold repetition rule in 2014 requires the arbiter to intervene immediately and declare the game a draw after five occurrences of the same position, consecutive or otherwise, without requiring a claim by either player. FIDE rules make no mention of perpetual check; this is merely a specific type of draw by threefold repetition. Fifty-move rule: If during the previous 50 moves no pawn has been moved and no capture has been made, either player can claim a draw. The addition of the seventy-five-move rule in 2014 requires the arbiter to intervene and immediately declare the game drawn after 75 moves without a pawn move or capture, without requiring a claim by either player. There are several known endgames where it is possible to force a mate but it requires more than 50 moves before a pawn move or capture is made; examples include some endgames with two knights against a pawn and some pawnless endgames such as queen against two bishops. Historically, FIDE has sometimes revised the fifty-move rule to make exceptions for these endgames, but these have since been repealed. Some correspondence chess organizations do not enforce the fifty-move rule. Draw on time: In games with a time control, the game is drawn if a player is out of time and no sequence of legal moves would allow the opponent to checkmate the player. Time control In competition, chess games are played with a time control. If a player's time runs out before the game is completed, the game is automatically lost (provided the opponent has enough pieces left to deliver checkmate). 
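The repetition and move-counting draw rules above are, in implementation terms, simple bookkeeping. A hedged sketch, assuming a `position_key` that already encodes piece placement, side to move, and castling and en passant rights; the class and method names are illustrative:

```python
from collections import Counter

class DrawTracker:
    """Toy bookkeeping for the repetition and 50/75-move draw rules."""

    def __init__(self):
        self.seen = Counter()      # occurrences of each position
        self.halfmove_clock = 0    # half-moves since last capture or pawn move

    def record(self, position_key, was_capture_or_pawn_move):
        self.seen[position_key] += 1
        if was_capture_or_pawn_move:
            self.halfmove_clock = 0
        else:
            self.halfmove_clock += 1

    def claimable_draw(self, position_key):
        # Threefold repetition, or the fifty-move rule (100 half-moves).
        return self.seen[position_key] >= 3 or self.halfmove_clock >= 100

    def automatic_draw(self, position_key):
        # Fivefold repetition, or the seventy-five-move rule (2014 additions).
        return self.seen[position_key] >= 5 or self.halfmove_clock >= 150
```

Note the distinction mirrored from the rules: `claimable_draw` requires a player's claim, while `automatic_draw` corresponds to the arbiter intervening without one.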
The duration of a game ranges from long (or "classical") games, which can take up to seven hours (even longer if adjournments are permitted), to bullet chess (under 3 minutes per player for the entire game). Intermediate between these are rapid chess games, lasting between one and two hours per game, a popular time control in amateur weekend tournaments. Time is controlled using a chess clock that has two displays, one for each player's remaining time. Analog chess clocks have been largely replaced by digital clocks, which allow for time controls with increments. Time controls are also enforced in correspondence chess competitions. A typical time control is 50 days for every 10 moves. Notation Historically, many different notation systems have been used to record chess moves; the standard system today is short-form algebraic notation. In this system, each square is uniquely identified by a set of coordinates, a–h for the files followed by 1–8 for the ranks. The usual format is: initial of the piece moved – file of destination square – rank of destination square The pieces are identified by their initials. In English, these are K (king), Q (queen), R (rook), B (bishop), and N (knight; N is used to avoid confusion with king). For example, Qg5 means "queen moves to the g-file, 5th rank" (that is, to the square g5). Different initials may be used for other languages. In chess literature figurine algebraic notation (FAN) is frequently used to aid understanding independent of language. To resolve ambiguities, an additional letter or number is added to indicate the file or rank from which the piece moved (e.g. Ngf3 means "knight from the g-file moves to the square f3"; R1e2 means "rook on the first rank moves to e2"). For pawns, no letter initial is used; so e4 means "pawn moves to the square e4". If the piece makes a capture, "x" is usually inserted before the destination square. Thus Bxf3 means "bishop captures on f3". 
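The file-and-rank coordinate scheme described above maps directly to code. A small illustrative sketch (the function names are assumptions, not a standard API):

```python
# Sketch: converting between 0-based (file, rank) coordinates and the
# algebraic square names a1-h8.

def square_name(file, rank):
    """(0, 0) -> 'a1', (4, 3) -> 'e4', (7, 7) -> 'h8'."""
    return "abcdefgh"[file] + str(rank + 1)

def parse_square(name):
    """Inverse of square_name: 'e4' -> (4, 3)."""
    return "abcdefgh".index(name[0]), int(name[1]) - 1

# The move Qg5 sends the queen to file g, rank 5 -- square (6, 4).
print(square_name(6, 4))  # g5
```
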
When a pawn makes a capture, the file from which the pawn departed is used to identify the pawn making the capture, for example, exd5 (pawn on the e-file captures the piece on d5). Ranks may be omitted if unambiguous, for example, exd (pawn on the e-file captures a piece somewhere on the d-file). A minority of publications use ":" to indicate a capture, and some omit the capture symbol altogether. In its most abbreviated form, exd5 may be rendered simply as ed. An en passant capture may optionally be marked with the notation "e.p." If a pawn moves to its last rank, achieving promotion, the piece chosen is indicated after the move (for example, e1=Q or e1Q). Castling is indicated by the special notations 0-0 (or O-O) for kingside castling and 0-0-0 (or O-O-O) for queenside castling. A move that places the opponent's king in check usually has the notation "+" added. There are no specific notations for discovered check or double check. Checkmate can be indicated by "#". At the end of the game, "1–0" means White won, "0–1" means Black won, and "½–½" indicates a draw. Chess moves can be annotated with punctuation marks and other symbols. For example: "!" indicates a good move; "!!" an excellent move; "?" a mistake; "??" a blunder; "!?" an interesting move that may not be best; or "?!" a dubious move not easily refuted. For example, one variation of a simple trap known as the Scholar's mate (see animated diagram) can be recorded: 1. e4 e5 2. Qh5 Nc6 3. Bc4 Nf6 4. Qxf7# Variants of algebraic notation include long form algebraic, in which both the departure and destination square are indicated; abbreviated algebraic, in which capture signs, check signs, and ranks of pawn captures may be omitted; and Figurine Algebraic Notation, used in chess publications for universal readability regardless of language. Portable Game Notation (PGN) is a text-based file format for recording chess games, based on short form English algebraic notation with a small amount of markup. 
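As a rough illustration of how little markup PGN movetext adds over algebraic notation, the toy function below recovers the bare SAN moves from a movetext string. It deliberately ignores the tag pairs, comments, and variations that a real PGN parser must handle:

```python
import re

def movetext_to_moves(movetext):
    """Strip move numbers and the game result, keeping the SAN moves."""
    moves = []
    for tok in movetext.split():
        tok = re.sub(r"^\d+\.+", "", tok)   # drop leading "1." / "3..." numbers
        if tok and tok not in ("1-0", "0-1", "1/2-1/2", "*"):
            moves.append(tok)
    return moves

print(movetext_to_moves("1. e4 e5 2. Qh5 Nc6 3. Bc4 Nf6 4. Qxf7# 1-0"))
```
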
PGN files (suffix .pgn) can be processed by most chess software, as well as being easily readable by humans. Until about 1980, the majority of English language chess publications used descriptive notation, in which files are identified by the initial letter of the piece that occupies the first rank at the beginning of the game. In descriptive notation, the common opening move 1.e4 is rendered as "1.P-K4" ("pawn to king four"). Another system is ICCF numeric notation, recognized by the International Correspondence Chess Federation though its use is in decline. In competitive games, players are normally required to keep a score (record of the game). For this purpose, only algebraic notation is recognized in FIDE-sanctioned events; game scores recorded in a different notation system may not be used as evidence in the event of a dispute. Organized competition Tournaments and matches Contemporary chess is an organized sport with structured international and national leagues, tournaments, and congresses. Thousands of chess tournaments, matches, and festivals are held around the world every year catering to players of all levels. Tournaments with a small number of players may use the round-robin format, in which every player plays one game against every other player. For a large number of players, the Swiss system may be used, in which each player is paired against an opponent who has the same (or as similar as possible) score in each round. In either case, a player's score is usually calculated as 1 point for each game won and one-half point for each game drawn. Variations such as "football scoring" (3 points for a win, 1 point for a draw) may be used by tournament organizers, but ratings are always calculated on the basis of standard scoring. There are different ways to denote a player's score in a match or tournament, most commonly: P / G (points scored out of games played, e.g. 5½ / 8); P – A (points for and points against, e.g. 
5½ – 2½); or +W –L =D (W wins, L losses, D draws, e.g. +4 –1 =3). The term "match" refers not to an individual game, but to either a series of games between two players, or a team competition in which each player of one team plays one game against a player of the other team. Governance Chess's international governing body is usually known by its French acronym FIDE (pronounced FEE-day) (French: Fédération internationale des échecs), or International Chess Federation. FIDE's membership consists of the national chess organizations of over 180 countries; there are also several associate members, including various supra-national organizations, the International Braille Chess Association (IBCA), International Committee of Chess for the Deaf (ICCD), and the International Physically Disabled Chess Association (IPCA). FIDE is recognized as a sports governing body by the International Olympic Committee, but chess has never been part of the Olympic Games. FIDE's most visible activity is organizing the World Chess Championship, a role it assumed in 1948. The current World Champion is Magnus Carlsen of Norway. The reigning Women's World Champion is Ju Wenjun from China. Other competitions for individuals include the World Junior Chess Championship, the European Individual Chess Championship, the tournaments for the World Championship qualification cycle, and the various national championships. Invitation-only tournaments regularly attract the world's strongest players. Examples include Spain's Linares event, Monte Carlo's Melody Amber tournament, the Dortmund Sparkassen meeting, Sofia's M-tel Masters, and Wijk aan Zee's Tata Steel tournament. Regular team chess events include the Chess Olympiad and the European Team Chess Championship. The World Chess Solving Championship and World Correspondence Chess Championships include both team and individual events; these are held independently of FIDE. 
Titles and rankings In order to rank players, FIDE, ICCF, and most national chess organizations use the Elo rating system developed by Arpad Elo. An average club player has a rating of about 1500; the highest FIDE rating of all time, 2882, was achieved by Magnus Carlsen on the March 2014 FIDE rating list. Players may be awarded lifetime titles by FIDE: Grandmaster (shortened as GM; sometimes International Grandmaster or IGM is used) is awarded to world-class chess masters. Apart from World Champion, Grandmaster is the highest title a chess player can attain. Before FIDE will confer the title on a player, the player must have an Elo rating of at least 2500 at one time and three results of a prescribed standard (called norms) in tournaments involving other grandmasters, including some from countries other than the applicant's. There are other milestones a player can achieve to attain the title, such as winning the World Junior Championship. International Master (shortened as IM). The conditions are similar to GM, but less demanding. The minimum rating for the IM title is 2400. FIDE Master (shortened as FM). The usual way for a player to qualify for the FIDE Master title is by achieving a FIDE rating of 2300 or more. Candidate Master (shortened as CM). Similar to FM, but with a FIDE rating of at least 2200. The above titles are open to both men and women. There are also separate women-only titles: Woman Grandmaster (WGM), Woman International Master (WIM), Woman FIDE Master (WFM), and Woman Candidate Master (WCM). These require a performance level approximately 200 Elo rating points below the similarly named open titles, and their continued existence has sometimes been controversial. Beginning with Nona Gaprindashvili in 1978, a number of women have earned the open GM title. FIDE also awards titles for arbiters and trainers. 
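The Elo system mentioned above updates a rating by comparing a player's actual score with an expected score derived from the rating difference. A sketch of the standard formulas; the K-factor of 20 is purely illustrative, as federations apply different values depending on title, age, and rating:

```python
# Sketch of the Elo expected-score and update formulas.

def expected_score(rating_a, rating_b):
    """Expected score for player A against player B (between 0 and 1)."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def updated_rating(rating, opponent, score, k=20):
    """New rating after scoring `score` (1 win, 0.5 draw, 0 loss)."""
    return rating + k * (score - expected_score(rating, opponent))

# Equal players expect 0.5 each; a win then gains half the K-factor.
print(updated_rating(1500, 1500, 1))  # 1510.0
```

The 400-point scale means a player rated 400 points above the opponent is expected to score about 91%, which is why upsets move ratings much more than expected results do.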
International titles are also awarded to composers and solvers of chess problems and to correspondence chess players (by the International Correspondence Chess Federation). National chess organizations may also award titles. Theory Chess has an extensive literature. In 1913, the chess historian H.J.R. Murray estimated the total number of books, magazines, and chess columns in newspapers to be about 5,000. B.H. Wood estimated the number, as of 1949, to be about 20,000. David Hooper and Kenneth Whyld write that, "Since then there has been a steady increase year by year of the number of new chess publications. No one knows how many have been printed." Significant public chess libraries include the John G. White Chess and Checkers Collection at Cleveland Public Library, with over 32,000 chess books and over 6,000 bound volumes of chess periodicals; and the Chess & Draughts collection at the National Library of the Netherlands, with about 30,000 books. Chess theory usually divides the game of chess into three phases with different sets of strategies: the opening, typically the first 10 to 20 moves, when players move their pieces to useful positions for the coming battle; the middlegame; and last the endgame, when most of the pieces are gone, kings typically take a more active part in the struggle, and pawn promotion is often decisive. Opening theory is concerned with finding the best moves in the initial phase of the game. There are dozens of different openings, and hundreds of variants. The Oxford Companion to Chess lists 1,327 named openings and variants. Middlegame theory is usually divided into chess tactics and chess strategy. Chess strategy concentrates on setting and achieving long-term positioning advantages during the game – for example, where to place different pieces – while tactics concerns immediate maneuver. 
These two aspects of the gameplay cannot be completely separated, because strategic goals are mostly achieved through tactics, while the tactical opportunities are based on the previous strategy of play. Endgame theory is concerned with positions where there are only a few pieces left. Theoreticians categorise these positions according to the pieces, for example "King and pawn endings" or "Rook versus a minor piece". Opening A chess opening is the group of initial moves of a game (the "opening moves"). Recognized sequences of opening moves are referred to as openings and have been given names such as the Ruy Lopez or Sicilian Defense. They are catalogued in reference works such as the Encyclopaedia of Chess Openings. There are dozens of different openings, varying widely in character from quiet (for example, the Réti Opening) to very aggressive (the Latvian Gambit). In some opening lines, the exact sequence considered best for both sides has been worked out to more than 30 moves. Professional players spend years studying openings and continue doing so throughout their careers, as opening theory continues to evolve. The fundamental strategic aims of most openings are similar: development: This is the technique of placing the pieces (particularly bishops and knights) on useful squares where they will have an optimal impact on the game. control of the center: Control of the central squares allows pieces to be moved to any part of the board relatively easily, and can also have a cramping effect on the opponent. king safety: It is critical to keep the king safe from dangerous possibilities. A correctly timed castling can often enhance this. pawn structure: Players strive to avoid the creation of pawn weaknesses such as isolated, doubled, or backward pawns, and pawn islands – and to force such weaknesses in the opponent's position. Most players and theoreticians consider that White, by virtue of the first move, begins the game with a small advantage. 
This initially gives White the initiative. Black usually strives to neutralize White's advantage and achieve equality, or to develop dynamic counterplay in an unbalanced position. Middlegame The middlegame is the part of the game which starts after the opening. There is no clear line between the opening and the middlegame, but typically the middlegame will start when most pieces have been developed. (Similarly, there is no clear transition from the middlegame to the endgame; see start of the endgame.) Because the opening theory has ended, players have to form plans based on the features of the position, and at the same time take into account the tactical possibilities of the position. The middlegame is the phase in which most combinations occur. Combinations are a series of tactical moves executed to achieve some gain. Middlegame combinations are often connected with an attack against the opponent's king. Some typical patterns have their own names; for example, the Boden's Mate or the Lasker–Bauer combination. Specific plans or strategic themes will often arise from particular groups of openings which result in a specific type of pawn structure. An example is the minority attack, which is the attack of queenside pawns against an opponent who has more pawns on the queenside. The study of openings is therefore connected to the preparation of plans that are typical of the resulting middlegames. Another important strategic question in the middlegame is whether and how to reduce material and transition into an endgame (i.e. simplify). Minor material advantages can generally be transformed into victory only in an endgame, and therefore the stronger side must choose an appropriate way to achieve an ending. 
Not every reduction of material is good for this purpose; for example, if one side keeps a light-squared bishop and the opponent has a dark-squared one, the transformation into a bishops and pawns ending is usually advantageous for the weaker side only, because an endgame with bishops on opposite colors is likely to be a draw, even with an advantage of a pawn, or sometimes even with a two-pawn advantage. Tactics In chess, tactics in general concentrate on short-term actions – so short-term that they can be calculated in advance by a human player or a computer. The possible depth of calculation depends on the player's ability. In positions with many possibilities on both sides, a deep calculation is more difficult and may not be practical, while in positions with a limited number of variations, strong players can calculate long sequences of moves. Theoreticians describe many elementary tactical methods and typical maneuvers, for example: pins, forks, skewers, batteries, discovered attacks (especially discovered checks), zwischenzugs, deflections, decoys, sacrifices, underminings, overloadings, and interferences. Simple one-move or two-move tactical actions – threats, exchanges of material, and double attacks – can be combined into more complicated sequences of tactical maneuvers that are often forced from the point of view of one or both players. A forced variation that involves a sacrifice and usually results in a tangible gain is called a combination. Brilliant combinations – such as those in the Immortal Game – are considered beautiful and are admired by chess lovers. A common type of chess exercise, aimed at developing players' skills, is a position where a decisive combination is available and the challenge is to find it. Strategy Chess strategy is concerned with the evaluation of chess positions and with setting up goals and long-term plans for future play. 
During the evaluation, players must take into account numerous factors such as the value of the pieces on the board, control of the center and centralization, the pawn structure, king safety, and the control of key squares or groups of squares (for example, diagonals, open files, and dark or light squares). The most basic step in evaluating a position is to count the total value of pieces of both sides. The point values used for this purpose are based on experience; usually, pawns are considered worth one point, knights and bishops about three points each, rooks about five points (the value difference between a rook and a bishop or knight being known as the exchange), and queens about nine points. The king is more valuable than all of the other pieces combined, since its checkmate loses the game. But in practical terms, in the endgame, the king as a fighting piece is generally more powerful than a bishop or knight but less powerful than a rook. These basic values are then modified by other factors like position of the piece (e.g. advanced pawns are usually more valuable than those on their initial squares), coordination between pieces (e.g. a pair of bishops usually coordinate better than a bishop and a knight), or the type of position (e.g. knights are generally better in closed positions with many pawns while bishops are more powerful in open positions). Another important factor in the evaluation of chess positions is pawn structure (sometimes known as the pawn skeleton): the configuration of pawns on the chessboard. Since pawns are the least mobile of the pieces, pawn structure is relatively static and largely determines the strategic nature of the position. Weaknesses in pawn structure include isolated, doubled, or backward pawns and holes; once created, they are often permanent. Care must therefore be taken to avoid these weaknesses unless they are compensated by another valuable asset (for example, by the possibility of developing an attack). 
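The material count described above can be sketched in a few lines. The piece-letter encoding below is an assumption made for illustration, not a standard interface:

```python
# Sketch: conventional point-count material evaluation.
# The king is excluded, since it cannot be captured or traded.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_balance(white_pieces, black_pieces):
    """Positive favours White, negative favours Black."""
    white = sum(PIECE_VALUES[p] for p in white_pieces if p in PIECE_VALUES)
    black = sum(PIECE_VALUES[p] for p in black_pieces if p in PIECE_VALUES)
    return white - black

# White is up "the exchange" (rook versus bishop): 5 - 3 = +2.
print(material_balance("KQR", "KQB"))  # 2
```

As the surrounding text stresses, raw point counts are only a first approximation; real evaluations adjust them for piece activity, coordination, and pawn structure.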
Endgame The endgame (also end game or ending) is the stage of the game when there are few pieces left on the board. There are three main strategic differences between earlier stages of the game and the endgame: Pawns become more important. Endgames often revolve around endeavors to promote a pawn by advancing it to the furthest rank. The king, which requires safeguarding from attack during the middlegame, emerges as a strong piece in the endgame. It is often brought to the center of the board where it can protect its own pawns, attack enemy pawns, and hinder moves of the opponent's king. Zugzwang, a situation in which the player who is to move is forced to incur a disadvantage, is often a factor in endgames but rarely in other stages of the game. In the example diagram, either side having the move is in zugzwang: Black to move must play 1...Kb7 allowing White to promote the pawn after 2.Kd7; White to move must permit a draw, either by 1.Kc6 stalemate or by losing the pawn after any other legal move. Endgames can be classified according to the type of pieces remaining on the board. Basic checkmates are positions in which one side has only a king and the other side has one or two pieces and can checkmate the opposing king, with the pieces working together with their king. For example, king and pawn endgames involve only kings and pawns on one or both sides, and the task of the stronger side is to promote one of the pawns. Other more complicated endings are classified according to pieces on the board other than kings, such as "rook and pawn versus rook" endgames. History Predecessors The earliest texts referring to the origins of chess date from the beginning of the 7th century. Three are written in Pahlavi (Middle Persian) and one, the Harshacharita, is in Sanskrit. One of these texts, the Chatrang-namak, represents one of the earliest written accounts of chess. 
The narrator Bozorgmehr explains that Chatrang, the Pahlavi word for chess, was introduced to Persia by 'Dewasarm, a great ruler of India' during the reign of Khosrow I. The oldest known chess manual was in Arabic and dates to about 840, written by al-Adli ar-Rumi (800–870), a renowned Arab chess player, titled Kitab ash-shatranj (The Book of Chess). This is a lost manuscript, but is referenced in later works. Here also, al-Adli attributes the origins of Persian chess to India, along with the eighth-century collection of fables Kalīla wa-Dimna. By the twentieth century, a substantial consensus developed regarding chess's origins in northwest India in the early 7th century. More recently, this consensus has been the subject of further scrutiny. The early forms of chess in India were known as chaturaṅga, literally four divisions [of the military] – infantry, cavalry, elephants, and chariotry – represented by pieces which would later evolve into the modern pawn, knight, bishop, and rook, respectively. Chaturanga was played on an 8×8 uncheckered board, called ashtāpada. Thence it spread eastward and westward along the Silk Road. The earliest evidence of chess is found in nearby Sasanian Persia around 600 A.D., where the game came to be known by the name chatrang. Chatrang was taken up by the Muslim world after the Islamic conquest of Persia (633–51), where it was then named shatranj, with the pieces largely retaining their Persian names. In Spanish, "shatranj" was rendered as ajedrez ("al-shatranj"), in Portuguese as xadrez, and in Greek as ζατρίκιον (zatrikion, which comes directly from the Persian chatrang), but in the rest of Europe it was replaced by versions of the Persian shāh ("king"), from which the English words "check" and "chess" descend. The word "checkmate" is derived from the Persian shāh māt ("the king is dead"). Xiangqi is the form of chess best-known in China. 
The eastern migration of chess, into China and Southeast Asia, has even less documentation than its migration west, making it largely conjectured. The word xiàngqí was used in China to refer to a game from 569 A.D. at the latest, but it has not been proven if this game was or was not directly related to chess. The first reference to Chinese chess appears in a book entitled Xuán guaì lù ("Record of the Mysterious and Strange"), dating to about 800. A minority view holds that western chess arose from xiàngqí or one of its predecessors, although this has been contested. Chess historians Jean-Louis Cazaux and Rick Knowlton contend that xiangqi's intrinsic characteristics make it easier to construct an evolutionary path from China to India/Persia than the opposite direction. The oldest archaeological chess artifacts – ivory pieces – were excavated in ancient Afrasiab, today's Samarkand, in Uzbekistan, Central Asia, and date to about 760, with some of them possibly being older. Remarkably, almost all findings of the oldest pieces come from along the Silk Road, from the former regions of the Tarim Basin (today's Xinjiang in China), Transoxiana, Sogdiana, Bactria, Gandhara, to Iran on one end and to India through Kashmir on the other. The game reached Western Europe and Russia via at least three routes, the earliest being in the 9th century. By the year 1000, it had spread throughout both Muslim Iberia and Latin Europe. A Latin poem called de scachis, dated to the late 10th century, has been preserved at the Einsiedeln Abbey. 1200–1700: Origins of the modern game The game of chess was then played and known in all European countries. A famous 13th-century manuscript covering chess, backgammon, and dice is known as the Libro de los juegos. The rules were fundamentally similar to those of the Arabic shatranj. 
The differences were mostly in the use of a checkered board instead of a plain monochrome board used by Arabs and the habit of allowing some or all pawns to make an initial double step. In some regions, the Queen, which had replaced the Vizier, and/or the King could also make an initial two-square leap under some conditions. Around 1200, the rules of shatranj started to be modified in southern Europe, culminating, several major changes later, in the emergence of modern chess practically as it is known today. The modern piece movement rules began to appear in intellectual circles in Valencia, Spain around 1475 and were then quickly adopted in Italy and Southern France before diffusing into the rest of Europe. Pawns gained the ability to advance two squares on their first move, while bishops and queens acquired their modern movement powers. The queen replaced the earlier vizier chess piece toward the end of the 10th century and by the 15th century had become the most powerful piece; in light of that, modern chess was often referred to at the time as "Queen's Chess" or "Mad Queen Chess". Castling, derived from the "king's leap", usually in combination with a pawn or rook move to bring the king to safety, was introduced. These new rules quickly spread throughout Western Europe. Writings about chess theory began to appear in the 15th century. The Repetición de Amores y Arte de Ajedrez (Repetition of Love and the Art of Playing Chess) by Spanish churchman Luis Ramírez de Lucena was published in Salamanca in 1497. Lucena and later masters like Portuguese Pedro Damiano, Italians Giovanni Leonardo Di Bona, Giulio Cesare Polerio and Gioachino Greco, and Spanish bishop Ruy López de Segura developed elements of opening theory and started to analyze simple endgames. 1700–1873: The Romantic Era in chess In the 18th century, the center of European chess life moved from Southern Europe to mainland France. 
The two most important French masters were François-André Danican Philidor, a musician by profession, who discovered the importance of pawns for chess strategy, and later Louis-Charles Mahé de La Bourdonnais, who won a famous series of matches against Irish master Alexander McDonnell in 1834. Centers of chess activity in this period were coffee houses in major European cities like Café de la Régence in Paris and Simpson's Divan in London. At the same time, the intellectual movement of romanticism had a far-reaching impact on chess, with aesthetics and tactical beauty being held in higher regard than objective soundness and strategic planning. As a result, virtually all games began with the Open Game, and it was considered unsportsmanlike to decline gambits that invited tactical play such as the King's Gambit and Evans Gambit. This chess philosophy is known as Romantic chess, and a sharp, tactical style consistent with the principles of chess romanticism was predominant until the late 19th century. The rules concerning stalemate were finalized in the early 19th century. Also in the 19th century, the convention that White moves first was established (formerly either White or Black could move first). Finally, the rules around castling were standardized – variations in the rules of castling had persisted in Italy until the late 19th century. The resulting standard game is sometimes referred to as Western chess or international chess, particularly in Asia where other games of the chess family such as xiangqi are prevalent. Since the 19th century, the only rule changes, such as the establishment of the correct procedure for claiming a draw by repetition, have been technical in nature. As the 19th century progressed, chess organization developed quickly. Many chess clubs, chess books, and chess journals appeared. There were correspondence matches between cities; for example, the London Chess Club played against the Edinburgh Chess Club in 1824. 
Chess problems became a regular part of 19th-century newspapers; Bernhard Horwitz, Josef Kling, and Samuel Loyd composed some of the most influential problems. In 1843, von der Lasa published his and Bilguer's Handbuch des Schachspiels (Handbook of Chess), the first comprehensive manual of chess theory. The first modern chess tournament was organized by Howard Staunton, a leading English chess player, and was held in London in 1851. It was won by the German Adolf Anderssen, who was hailed as the leading chess master. His brilliant, energetic attacking style was typical for the time. Sparkling games like Anderssen's Immortal Game and Evergreen Game or Morphy's "Opera Game" were regarded as the highest possible summit of the art of chess. Deeper insight into the nature of chess came with the American Paul Morphy, an extraordinary chess prodigy. Morphy won against all important competitors (except Staunton, who refused to play), including Anderssen, during his short chess career between 1857 and 1863. Morphy's success stemmed from a combination of brilliant attacks and sound strategy; he intuitively knew how to prepare attacks. 1873–1945: Birth of a sport Prague-born Wilhelm Steinitz laid the foundations for a scientific approach to the game, the art of breaking a position down into components and preparing correct plans. In addition to his theoretical achievements, Steinitz founded an important tradition: his triumph over the leading German master Johannes Zukertort in 1886 is regarded as the first official World Chess Championship. This win marked a stylistic transition at the highest levels of chess from an attacking, tactical style predominant in the Romantic era to a more positional, strategic style introduced to the chess world by Steinitz. Steinitz lost his crown in 1894 to a much younger player, the German mathematician Emanuel Lasker, who maintained this title for 27 years, the longest tenure of any world champion. 
After the end of the 19th century, the number of master tournaments and matches held annually quickly grew. The first Olympiad was held in Paris in 1924, and FIDE was founded initially for the purpose of organizing that event. In 1927, the Women's World Chess Championship was established; the first to hold the title was Czech-English master Vera Menchik. A prodigy from Cuba, José Raúl Capablanca, known for his skill in endgames, won the World Championship from Lasker in 1921. Capablanca was undefeated in tournament play for eight years, from 1916 to 1924. His successor (1927) was the Russian-French Alexander Alekhine, a strong attacking player who died as the world champion in 1946. Alekhine briefly lost the title to Dutch player Max Euwe in 1935 and regained it two years later. In the interwar period, chess was revolutionized by the new theoretical school of so-called hypermodernists like Aron Nimzowitsch and Richard Réti. They advocated controlling the center of the board with distant pieces rather than with pawns, thus inviting opponents to occupy the center with pawns, which become objects of attack. 1945–1990: Post-World War II era After the death of Alekhine, a new World Champion was sought. FIDE, which has controlled the title since then (except for one interruption), ran a tournament of elite players. The winner of the 1948 tournament was Russian Mikhail Botvinnik. In 1950 FIDE established a system of titles, conferring the titles of Grandmaster and International Master on 27 players. (Some sources state that in 1914 the title of chess Grandmaster was first formally conferred by Tsar Nicholas II of Russia to Lasker, Capablanca, Alekhine, Tarrasch, and Marshall, but this is a disputed claim.) Botvinnik started an era of Soviet dominance in the chess world, which, mainly through the Soviet government's politically inspired efforts to demonstrate intellectual superiority over the West, stood almost uninterrupted for more than a half-century. 
Until the dissolution of the Soviet Union, there was only one non-Soviet champion, American Bobby Fischer (champion 1972–1975). Botvinnik also revolutionized opening theory. Previously, Black strove for equality, attempting to neutralize White's first-move advantage. As Black, Botvinnik strove for the initiative from the beginning. In the previous informal system of World Championships, the current champion decided which challenger he would play for the title and the challenger was forced to seek sponsors for the match. FIDE set up a new system of qualifying tournaments and matches. The world's strongest players were seeded into Interzonal tournaments, where they were joined by players who had qualified from Zonal tournaments. The leading finishers in these Interzonals would go through the "Candidates" stage, which was initially a tournament, and later a series of knockout matches. The winner of the Candidates would then play the reigning champion for the title. A champion defeated in a match had a right to play a rematch a year later. This system operated on a three-year cycle. Botvinnik participated in championship matches over a period of fifteen years. He won the world championship tournament in 1948 and retained the title in tied matches in 1951 and 1954. In 1957, he lost to Vasily Smyslov, but regained the title in a rematch in 1958. In 1960, he lost the title to the 23-year-old Latvian prodigy Mikhail Tal, an accomplished tactician and attacking player who is widely regarded as one of the most creative players ever, hence his nickname the magician from Riga. Botvinnik again regained the title in a rematch in 1961. Following the 1961 event, FIDE abolished the automatic right of a deposed champion to a rematch, and the next champion, Armenian Tigran Petrosian, a player renowned for his defensive and positional skills, held the title for two cycles, 1963–1969. 
His successor, Boris Spassky from Russia (champion 1969–1972), won games in both positional and sharp tactical style. The next championship, the so-called Match of the Century, saw the first non-Soviet challenger since World War II, American Bobby Fischer. Fischer defeated his opponents in the Candidates matches by unheard-of margins, and convincingly defeated Spassky for the world championship. The match was followed closely by news media of the day, leading to a surge in popularity for chess; it also held significant political importance at the height of the Cold War, with the match being seen by both sides as a microcosm of the conflict between East and West. In 1975, however, Fischer refused to defend his title against Soviet Anatoly Karpov when he was unable to reach agreement on conditions with FIDE, and Karpov obtained the title by default. Fischer modernized many aspects of chess, especially by extensively preparing openings. Karpov defended his title twice against Viktor Korchnoi and dominated the 1970s and early 1980s with a string of tournament successes. In the 1984 World Chess Championship, Karpov faced his toughest challenge to date, the young Garry Kasparov from Baku, Soviet Azerbaijan. The match was aborted in controversial circumstances after 5 months and 48 games with Karpov leading by 5 wins to 3, but evidently exhausted; many commentators believed Kasparov, who had won the last two games, would have won the match had it continued. Kasparov won the 1985 rematch. Kasparov and Karpov contested three further closely fought matches in 1986, 1987 and 1990, Kasparov winning them all. Kasparov became the dominant figure of world chess from the mid 1980s until his retirement from competition in 2005. Beginnings of chess technology Chess-playing computer programs (later known as chess engines) began to appear in the 1960s. 
In 1970, the first major computer chess tournament, the North American Computer Chess Championship, was held, followed in 1974 by the first World Computer Chess Championship. In the late 1970s, dedicated home chess computers such as Fidelity Electronics' Chess Challenger became commercially available, as well as software to run on home computers. However, the overall standard of computer chess was low until the 1990s. The first endgame tablebases, which provided perfect play for relatively simple endgames such as king and rook versus king and bishop, appeared in the late 1970s. This set a precedent for the complete six- and seven-piece tablebases that became available in the 2000s and 2010s respectively. The first commercial chess database, a collection of chess games searchable by move and position, was introduced by the German company ChessBase in 1987. Databases containing millions of chess games have since had a profound effect on opening theory and other areas of chess research. Digital chess clocks were invented in 1973, though they did not become commonplace until the 1990s. Digital clocks allow for time controls involving increments and delays. 1990–Present: The rise of computers and online chess Technology The Internet enabled a new medium of playing chess, with chess servers allowing users to play other people from different parts of the world in real time. The first such server, known as Internet Chess Server or ICS, was developed at the University of Utah in 1992. ICS formed the basis for the first commercial chess server, the Internet Chess Club, which was launched in 1995, and for other early chess servers such as FICS (Free Internet Chess Server). Since then, many other platforms have appeared, and online chess began to rival over-the-board chess in popularity. 
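The increment and delay time controls mentioned above amount to simple per-move bookkeeping rules. A minimal sketch in Python (the function names are illustrative, not taken from any real clock firmware):

```python
def fischer(remaining, elapsed, increment):
    """Fischer increment: a fixed bonus is added after every move."""
    return remaining - elapsed + increment

def bronstein(remaining, elapsed, delay):
    """Bronstein delay: the time spent is refunded, up to the delay."""
    return remaining - elapsed + min(elapsed, delay)

def simple_delay(remaining, elapsed, delay):
    """Simple (US) delay: the clock only counts down after `delay` seconds."""
    return remaining - max(0.0, elapsed - delay)

# With 100 s left, a 10 s think and a 5 s increment/delay leaves 95 s
# under all three schemes; a 3 s think loses nothing under the delay rules.
print(fischer(100, 10, 5), bronstein(100, 3, 5), simple_delay(100, 3, 5))
```

Note the design difference: under Fischer rules a fast-moving player can accumulate time, whereas Bronstein and simple delay can at best leave the remaining time unchanged.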
During the 2020 COVID-19 pandemic, the isolation ensuing from quarantines imposed in many places around the world, combined with the success of the popular Netflix show The Queen's Gambit and other factors such as the popularity of online tournaments (notably PogChamps) and chess Twitch streamers, resulted in a surge of popularity not only for online chess, but for the game of chess in general; this phenomenon has been referred to in the media as the 2020 online chess boom. Computer chess has also seen major advances. By the 1990s, chess engines could consistently defeat most amateurs, and in 1997 Deep Blue defeated World Champion Garry Kasparov in a six-game match, starting an era of computer dominance at the highest level of chess. In the 2010s, engines of superhuman strength became accessible for free on a number of PC and mobile platforms, and free engine analysis became a commonplace feature on internet chess servers. An adverse effect of the easy availability of engine analysis on hand-held devices and personal computers has been the rise of computer cheating, which has grown to be a major concern in both over-the-board and online chess. In 2017, AlphaZero, a neural network also capable of playing shogi and go, was introduced. Since then, many chess engines based on neural network evaluation have been written, the best of which have surpassed the traditional "brute-force" engines. AlphaZero also introduced many novel ideas and ways of playing the game, which affected the style of play at the top level. As endgame tablebases developed, they began to provide perfect play in endgame positions in which the game-theoretical outcome was previously unknown, such as positions with king, queen and pawn against king and queen. In 1991, Lewis Stiller published a tablebase for select six-piece endgames, and by 2005, following the publication of Nalimov tablebases, all six-piece endgame positions were solved. 
In 2012, the Lomonosov tablebases were published, solving all seven-piece endgame positions. Use of tablebases enhances the performance of chess engines by providing definitive results in some branches of analysis. Technological progress made in the 1990s and the 21st century has influenced the way that chess is studied at all levels, as well as the state of chess as a spectator sport. Previously, preparation at the professional level required an extensive chess library and several subscriptions to publications such as Chess Informant to keep up with opening developments and study opponents' games. Today, preparation at the professional level involves the use of databases containing millions of games, and engines to analyze different opening variations and prepare novelties. A number of online learning resources are also available for players of all levels, such as online courses, tactics trainers, and video lessons. Since the late 1990s, it has been possible to follow major international chess events online, the players' moves being relayed in real time. Sensory boards have been developed to enable automatic transmission of moves. Chess players will frequently run engines while watching these games, allowing them to quickly identify mistakes by the players and spot tactical opportunities. While in the past the moves were relayed live, today chess organizers will often impose a half-hour delay as an anti-cheating measure. In the mid-to-late 2010s, and especially following the 2020 online boom, it became commonplace for supergrandmasters, such as Hikaru Nakamura and Magnus Carlsen, to livestream chess content on platforms such as Twitch. Also following the boom, online chess started being viewed as an e-sport, with e-sport teams signing chess players for the first time in 2020. Growth Organized chess even for young children has become common. FIDE holds world championships for age levels down to 8 years old. 
The largest tournaments, in number of players, are those held for children. The number of grandmasters and other chess professionals has also grown in the modern era. Kenneth Regan and Guy Haworth conducted research involving comparison of move choices by players of different levels and from different periods with the analysis of strong chess engines; they concluded that the increase in the number of grandmasters and higher Elo ratings of the top players reflect an actual increase in the average standard of play, rather than "rating inflation" or "title inflation". Professional chess In 1993, Garry Kasparov and Nigel Short broke ties with FIDE to organize their own match for the title and formed a competing Professional Chess Association (PCA). From then until 2006, there were two simultaneous World Championships and respective World Champions: the PCA or "classical" champions extending the Steinitzian tradition in which the current champion plays a challenger in a series of games, and the other following FIDE's new format of many players competing in a large knockout tournament to determine the champion. Kasparov lost his PCA title in 2000 to Vladimir Kramnik of Russia. Due to the complicated state of world chess politics and difficulties obtaining commercial sponsorships, Kasparov was never able to challenge for the title again. Despite this, he continued to dominate in top-level tournaments and remained the world's highest-rated player until his retirement from competitive chess in 2005. The World Chess Championship 2006, in which Kramnik beat the FIDE World Champion Veselin Topalov, reunified the titles and made Kramnik the undisputed World Chess Champion. In September 2007, he lost the title to Viswanathan Anand of India, who won the championship tournament in Mexico City. Anand defended his title against Kramnik in the 2008 rematch, and again in 2010 and 2012. Magnus Carlsen of Norway then beat Anand in the 2013 World Chess Championship. 
He defended his title three times since then and is the reigning world champion. Connections Arts and humanities In the Middle Ages and during the Renaissance, chess was a part of noble culture; it was used to teach war strategy and was dubbed the "King's Game". Gentlemen are "to be meanly seene in the play at Chestes", says the overview at the beginning of Baldassare Castiglione's The Book of the Courtier (1528, English 1561 by Sir Thomas Hoby), but chess should not be a gentleman's main passion. Castiglione explains it further: And what say you to the game at chestes? It is truely an honest kynde of enterteynmente and wittie, quoth Syr Friderick. But me think it hath a fault, whiche is, that a man may be to couning at it, for who ever will be excellent in the playe of chestes, I beleave he must beestowe much tyme about it, and applie it with so much study, that a man may assoone learne some noble scyence, or compase any other matter of importaunce, and yet in the ende in beestowing all that laboure, he knoweth no more but a game. Therfore in this I beleave there happeneth a very rare thing, namely, that the meane is more commendable, then the excellency. Some of the elaborate chess sets used by the aristocracy at least partially survive, such as the Lewis chessmen. Chess was often used as a basis of sermons on morality. An example is Liber de moribus hominum et officiis nobilium sive super ludo scacchorum ('Book of the customs of men and the duties of nobles or the Book of Chess'), written by the Italian Dominican monk Jacobus de Cessolis. This book was one of the most popular of the Middle Ages. The work was translated into many other languages (the first printed edition was published at Utrecht in 1473) and was the basis for William Caxton's The Game and Playe of the Chesse (1474), one of the first books printed in English. 
Different chess pieces were used as metaphors for different classes of people, and human duties were derived from the rules of the game or from visual properties of the chess pieces: The knyght ought to be made alle armed upon an hors in suche wyse that he haue an helme on his heed and a spere in his ryght hande/ and coueryd wyth his sheld/ a swerde and a mace on his lyft syde/ Cladd wyth an hawberk and plates to fore his breste/ legge harnoys on his legges/ Spores on his heelis on his handes his gauntelettes/ his hors well broken and taught and apte to bataylle and couerid with his armes/ whan the knyghtes ben maad they ben bayned or bathed/ that is the signe that they shold lede a newe lyf and newe maners/ also they wake alle the nyght in prayers and orysons vnto god that he wylle gyue hem grace that they may gete that thynge that they may not gete by nature/ The kynge or prynce gyrdeth a boute them a swerde in signe/ that they shold abyde and kepe hym of whom they take theyr dispenses and dignyte. Known in the circles of clerics, students, and merchants, chess entered into the popular culture of the Middle Ages. An example is the 209th song of Carmina Burana from the 13th century, which starts with the names of chess pieces, Roch, pedites, regina... The game of chess, at times, has been discouraged by various religious authorities in the Middle Ages: Jewish, Catholic and Orthodox. Some Muslim authorities prohibited it even recently, for example Ruhollah Khomeini in 1979 and Abdul-Aziz ash-Sheikh even later. During the Age of Enlightenment, chess was viewed as a means of self-improvement. 
Benjamin Franklin, in his article "The Morals of Chess" (1750), wrote: The Game of Chess is not merely an idle amusement; several very valuable qualities of the mind, useful in the course of human life, are to be acquired and strengthened by it, so as to become habits ready on all occasions; for life is a kind of Chess, in which we have often points to gain, and competitors or adversaries to contend with, and in which there is a vast variety of good and ill events, that are, in some degree, the effect of prudence, or the want of it. By playing at Chess then, we may learn: I. Foresight, which looks a little into futurity, and considers the consequences that may attend an action [...] II. Circumspection, which surveys the whole Chess-board, or scene of action: – the relation of the several Pieces, and their situations [...] III. Caution, not to make our moves too hastily [...] Chess was occasionally criticized in the 19th century as a waste of time. Chess is taught to children in schools around the world today. Many schools host chess clubs, and there are many scholastic tournaments specifically for children. Tournaments are held regularly in many countries, hosted by organizations such as the United States Chess Federation and the National Scholastic Chess Foundation. Chess is often depicted in the arts; significant works where chess plays a key role range from Thomas Middleton's A Game at Chess to Through the Looking-Glass by Lewis Carroll, to Vladimir Nabokov's The Defense, to The Royal Game by Stefan Zweig. Chess is featured in films like Ingmar Bergman's The Seventh Seal and Satyajit Ray's The Chess Players. Chess is also present in contemporary popular culture. For example, the characters in Star Trek play a futuristic version of the game called "Federation Tri-Dimensional Chess" and "Wizard's Chess" is played in J.K. Rowling's Harry Potter. Mathematics The game structure and nature of chess are related to several branches of mathematics. 
Many combinatorial and topological problems connected to chess, such as the knight's tour and the eight queens puzzle, have been known for hundreds of years. The number of legal positions in chess is estimated to be 4×10^44, with a game-tree complexity of approximately 10^123. The game-tree complexity of chess was first calculated by Claude Shannon as 10^120, a number known as the Shannon number. An average position typically has thirty to forty possible moves, but there may be as few as zero (in the case of checkmate or stalemate) or (in a constructed position) as many as 218. In 1913, Ernst Zermelo used chess as a basis for his theory of game strategies, which is considered one of the predecessors of game theory. Zermelo's theorem states that it is possible to solve chess, i.e. to determine with certainty the outcome of a perfectly played game (either White can force a win, or Black can force a win, or both sides can force at least a draw). However, with 10^43 legal positions in chess, it will take an impossibly long time to compute a perfect strategy with any feasible technology. Psychology There is an extensive scientific literature on chess psychology. Alfred Binet and others showed that knowledge and verbal, rather than visuospatial, ability lies at the core of expertise. In his doctoral thesis, Adriaan de Groot showed that chess masters can rapidly perceive the key features of a position. According to de Groot, this perception, made possible by years of practice and study, is more important than the sheer ability to anticipate moves. De Groot showed that chess masters can memorize positions shown for a few seconds almost perfectly. The ability to memorize does not alone account for chess-playing skill, since masters and novices, when faced with random arrangements of chess pieces, had equivalent recall (about six positions in each case). 
Rather, it is the ability to recognize patterns, which are then memorized, which distinguishes the skilled players from the novices. When the positions of the pieces were taken from an actual game, the masters had almost total positional recall. More recent research has focused on chess as mental training; the respective roles of knowledge and look-ahead search; brain imaging studies of chess masters and novices; blindfold chess; the role of personality and intelligence in chess skill; gender differences; and computational models of chess expertise. The role of practice and talent in the development of chess and other domains of expertise has led to much recent research. Ericsson and colleagues have argued that deliberate practice is sufficient for reaching high levels of expertise in chess. Recent research indicates that factors other than practice are also important. For example, Fernand Gobet and colleagues have shown that stronger players started playing chess at a young age and that experts born in the Northern Hemisphere are more likely to have been born in late winter and early spring. Compared to the general population, chess players are more likely to be non-right-handed, though no correlation between handedness and skill was found. A relationship between chess skill and intelligence has long been discussed in the literature and popular culture. Academic studies of the relationship date back at least to 1927. Academic opinion has long been split on how strong the relationship is, as some studies find no relationship and others find a relatively strong one. Composition Chess composition is the art of creating chess problems (also called chess compositions). The creator is known as a chess composer. There are many types of chess problems; the two most important are: Directmates: White to move first and checkmate Black within a specified number of moves, against any defense. 
These are often referred to as "mate in n" – for example "mate in three" (a three-mover); two- and three-move problems are the most common. These usually involve positions that would be highly unlikely to occur in an actual game, and are intended to illustrate a particular theme, usually requiring a surprising or counter-intuitive move. Themes associated with chess problems occasionally appear in actual games, when they are referred to as "problem-like" moves. Studies: orthodox problems where the stipulation is that White to play must win or draw. The majority of studies are endgame positions. Fairy chess is a branch of chess problem composition involving altered rules, such as the use of unconventional pieces or boards, or unusual stipulations such as reflexmates. Tournaments for composition and solving of chess problems are organized by the World Federation for Chess Composition, which works cooperatively with, but independently of, FIDE. The WFCC awards titles for composing and solving chess problems. Online chess Online chess is chess that is played over the internet, allowing players to play against each other in real time. This is done through the use of Internet chess servers, which pair up individual players based on their rating using an Elo or similar rating system. Online chess saw a spike in growth during the quarantines of the COVID-19 pandemic. This can be attributed to both isolation and the popularity of the Netflix miniseries The Queen's Gambit, which was released in October 2020. Chess app downloads on the App Store and Google Play Store rose by 63% after the show debuted. Chess.com saw more than twice as many account registrations in November as it had in previous months, and the number of games played monthly on Lichess doubled as well. There was also a demographic shift in players, with female registration on Chess.com shifting from 22% to 27% of new players. 
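The rating-based pairing mentioned above rests on the Elo model, in which a rating gap predicts an expected score and each game nudges ratings toward observed results. A minimal sketch (the K-factor of 20 is an illustrative choice; real servers use their own variants, such as Glicko):

```python
def expected_score(rating_a, rating_b):
    """Expected score of A against B under the Elo model (1 = win, 0.5 = draw)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating, expected, actual, k=20):
    """Post-game rating adjustment with K-factor k."""
    return rating + k * (actual - expected)

# Two equally rated players have expected score 0.5 each,
# so the winner gains k/2 = 10 points.
e = expected_score(1500, 1500)   # 0.5
print(update(1500, e, 1.0))      # 1510.0
```

The 400-point scale is calibrated so that a 400-point rating advantage corresponds to roughly a 10-to-1 expected score ratio.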
Grandmaster Maurice Ashley said "A boom is taking place in chess like we have never seen maybe since the Bobby Fischer days," attributing the growth to an increased desire to do something constructive during the pandemic. USCF Women's Program Director Jennifer Shahade stated that chess works well on the internet, since pieces do not need to be reset and matchmaking is virtually instant. Computer chess The idea of creating a chess-playing machine dates to the 18th century; around 1769, the chess-playing automaton called The Turk became famous before being exposed as a hoax. Serious trials based on automata, such as El Ajedrecista, were too complex and limited to be useful. Since the advent of the digital computer in the 1950s, chess enthusiasts, computer engineers, and computer scientists have built, with increasing degrees of seriousness and success, chess-playing machines and computer programs. The groundbreaking paper on computer chess, "Programming a Computer for Playing Chess", was published in 1950 by Claude Shannon. He wrote: The chess machine is an ideal one to start with, since: (1) the problem is sharply defined both in allowed operations (the moves) and in the ultimate goal (checkmate); (2) it is neither so simple as to be trivial nor too difficult for satisfactory solution; (3) chess is generally considered to require "thinking" for skillful play; a solution of this problem will force us either to admit the possibility of a mechanized thinking or to further restrict our concept of "thinking"; (4) the discrete structure of chess fits well into the digital nature of modern computers. The Association for Computing Machinery (ACM) held the first major chess tournament for computers, the North American Computer Chess Championship, in September 1970. CHESS 3.0, a chess program from Northwestern University, won the championship. The first World Computer Chess Championship, held in 1974, was won by the Soviet program Kaissa. 
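Shannon's framing above maps directly onto depth-limited minimax search over a game tree. The following is a generic sketch in the spirit of his "Type A" strategy (search all moves to a fixed depth, then score the leaves with a static evaluation); the callback parameters `evaluate`, `moves`, and `apply_move` are hypothetical interfaces, not the code of any actual engine:

```python
def minimax(position, depth, maximizing, evaluate, moves, apply_move):
    """Depth-limited minimax: the maximizing side picks the highest-scoring
    reply, the minimizing side the lowest, down to a fixed depth."""
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position)
    scores = (minimax(apply_move(position, m), depth - 1, not maximizing,
                      evaluate, moves, apply_move) for m in legal)
    return max(scores) if maximizing else min(scores)

# Toy two-ply game: the maximizing side prefers the branch whose
# worst-case (minimized) leaf value is highest.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf = {"a1": 3, "a2": 9, "b1": 5, "b2": 7}
best = minimax("root", 2, True,
               evaluate=lambda p: leaf.get(p, 0),
               moves=lambda p: tree.get(p, []),
               apply_move=lambda p, m: m)
print(best)  # 5: branch b guarantees at least 5, branch a only 3
```

Real engines add alpha-beta pruning, move ordering, and deeper selective extensions on top of this skeleton, but the minimax backbone is unchanged.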
Chess-playing programs, at first considered only a curiosity, have become extremely strong. In 1997, a computer won a chess match using classical time controls against a reigning World Champion for the first time: IBM's Deep Blue beat Garry Kasparov 3½–2½ (it scored two wins, one loss, and three draws). There was some controversy over the match, and human-computer matches were relatively close over the next few years, until convincing computer victories in 2005 and in 2006. In 2009, a mobile phone won a category 6 tournament with a performance rating of 2898: chess engine Hiarcs 13 running on the mobile phone HTC Touch HD won the Copa Mercosur tournament with nine wins and one draw. The best chess programs are now able to consistently beat the strongest human players, to the extent that human–computer matches no longer attract interest from chess players or the media. While the World Computer Chess Championship still exists, the Top Chess Engine Championship (TCEC) is widely regarded as the unofficial world championship for chess engines. The current champion is Stockfish. With huge databases of past games and high analytical ability, computers can help players to learn chess and prepare for matches. Internet Chess Servers allow people to find and play opponents worldwide. The presence of computers and modern communication tools has raised concerns regarding cheating during games. Variants There are more than two thousand published chess variants, games with similar but different rules. Most of them are of relatively recent origin. They include: direct predecessors of chess, such as chaturanga and shatranj; traditional national or regional games that share common ancestors with Western chess such as xiangqi, shogi, janggi, makruk, sittuyin, and shatar; modern variations employing different rules (e.g. Losing chess or Chess960), different forces (e.g. Dunsany's Chess), non-standard pieces (e.g. Grand Chess), and different board geometries (e.g. 
hexagonal chess or Infinite chess). In the context of chess variants, regular (i.e. FIDE) chess is commonly referred to as Western chess, international chess, orthodox chess, orthochess, and classic chess. See also Glossary of chess List of chess games List of chess players List of strong chess tournaments List of World Chess Championships Women in chess
381123
https://en.wikipedia.org/wiki/Baan%20Corporation
Baan Corporation
Baan was a vendor of enterprise resource planning (ERP) software that is now owned by Infor Global Solutions. Baan, or Baan ERP, was also the name of the ERP product created by this company. History The Baan Corporation was created by Jan Baan in 1978 in Barneveld, Netherlands, to provide financial and administrative consulting services. With the development of his first software package, Jan Baan and his brother Paul Baan entered what was to become the ERP industry. The Baan company focused on the creation of enterprise resource planning (ERP) software. Jan Baan developed his first computer program on a Durango F-85 computer in the programming language BASIC. In the early '80s, the Baan Corporation began to develop applications for Unix computers with C and a self-developed Baan-C language, the syntax of which was very similar to the BASIC language. Baan rose in popularity during the early nineties. Baan software is famous for its Dynamic Enterprise Modeler (DEM), technical architecture, and its 4GL language. Baan 4GL and Tools is still considered to be one of the most efficient and productive database application development platforms. Baan became a real threat to market leader SAP after winning a large Boeing deal in 1994. It held an IPO in 1995 and became a publicly listed company on the Amsterdam and US Nasdaq exchanges. Several large consulting firms around the world partnered to implement Baan IV for multi-national companies. It acquired several other software companies to enrich its product portfolio, including Antalys, Aurum, Berclain, Coda and Caps Logistics. Its sales growth rate was once claimed to reach 91% per year. However, the fall of the Baan Company began in 1998. The management exaggerated company revenue by booking "sales" of software licenses that were actually transferred to a related distributor. The discovery of this revenue manipulation led to a sharp decline of Baan's stock price at the end of 1998. 
In June 2000, facing worsening financial difficulties, lawsuits, seven consecutive quarterly losses and bleak prospects, Baan was sold for US$700 million to Invensys, a UK automation, controls, and process solutions group, becoming a unit of its Software and Services Division. Laurens van der Tang was the president of this unit. With the acquisition of Baan, Invensys's CEO Allen Yurko began to offer "Sensor to Boardroom" solutions to customers. In June 2003, after Allen Yurko stepped down, Invensys sold its Baan unit to SSA Global Technologies for US$135 million. Upon acquiring the Baan software, SSA renamed Baan as SSA ERP Ln. In August 2005, SSA Global released a new version of Baan, named SSA ERP LN 6.1. In May 2006, SSA was acquired by Infor Global Solutions of Atlanta, a major ERP consolidator in the market. Product versions
Triton 1.0 to 2.2d and 3.0 to 3.1bx (the last Triton version), after which the product was renamed Baan
Baan 4.0 (last version of BaanIV is BaanIVc4 SP30) and industry extensions (A&D, ...)
Baan 5.0 (last version of BaanV is Baan5.0c SP26.0)
Baan 5.1 and 5.2 (for specific customers only)
SSA ERP 6.1 / Infor ERP LN 6.1 / Infor10 ERP Enterprise / Infor LN
ERP Ln 6.1 FP6, released in December 2009
ERP Ln 6.1 FP7, released in January 2011
ERP LN 6.1 10.2.1, released in 2012
Infor LN 10.3, released in July 2013
Infor LN 10.4, released in 2015
Infor LN 10.5, released in June 2016
Infor LN 10.6, released in March 2018
Infor LN 10.7, released in January 2020
Infor ERP Ln 6.1 supports Unicode and comes with additional language translations.
Supported platforms and databases (server)
Server platforms: Windows Server, Linux, IBM AIX, Oracle Solaris, HP-UX, OS/400 (obsolete), OS/390 (obsolete)
Databases: Oracle Database, IBM DB2, MS SQL Server, Informix (obsolete since December 2015), MySQL (obsolete since 2010), Bisam (obsolete), Btam (obsolete)
Standard packages
Baan IV packages: Common (tc), Finance (tf), Project (tp), Manufacturing (ti), Distribution (td), Process (ps), Transportation (tr), Service (ts), Enterprise Modeler (tg), Constraint Planning (cp), Tools (tt), Utilities (tu), Baan DEM (tg)
ERP Ln 6.1 packages: PDM BaanIV (ba), Conversion (bc), Enterprise Modeler (tg), Common, Taxation (tc), People (bp), Financials (tf), Project (tp), Enterprise Planning (cp), Order Management (td), Electronic Commerce (ec), Central Invoicing (ci), Manufacturing (ti), Warehouse Management (wh), Freight Management (fm), Service (ts), Quality Management (qm), Object Data Management (dm), Tools (tt), Tools Addons (tl), Development Utilities (du)
Baan virtual machine – bshell
The bshell is the core component of a Baan application server. It is a virtual machine process that runs the Baan 4GL language. The bshell was ported to different server platforms, which made Baan program scripts platform independent. For example, a Baan session developed on Windows could be copied to a Linux platform without recompiling the application code. The bshell is similar to today's Java VM or .NET CLR. Fraud In 1998 a class action lawsuit was filed against Baan in the United States District Court for the District of Columbia for violation of the Securities Exchange Act of 1934, alleging that Baan "...undertook a scheme and course of conduct intended to inflate Baan's results through various financial manipulations". A film was later based on the events surrounding the Baan brothers in 1998.
The end credits indicate that it is not a documentary but fiction, and that the makers did not intend to portray individuals and events accurately; however, the similarity with the Baan debacle is obvious. References External links Infor ERP software companies Barneveld (municipality) Companies based in Hyderabad, India Dutch companies established in 1978
34252965
https://en.wikipedia.org/wiki/List%20of%20USC%20Trojans%20bowl%20games
List of USC Trojans bowl games
The USC Trojans football team competes as part of the National Collegiate Athletic Association (NCAA) Division I Football Bowl Subdivision (FBS), representing the University of Southern California in the South Division of the Pac-12 Conference (Pac-12). Since the establishment of the team in 1888, USC has appeared in 55 bowl games. The Trojans have appeared in 34 Rose Bowls and won 25, both records for that bowl. Bowl games References USC Trojans
147973
https://en.wikipedia.org/wiki/Creator%20code
Creator code
A creator code is a mechanism introduced in the classic Mac OS to link a data file to the application program which created it. The similar type code held the file type, like "TEXT". Together, the type and creator indicated what application should be used to open a file, similar to (but richer than) the file extensions in other operating systems. Creator codes are four-byte OSTypes. They allow applications to launch and open a file whenever any of their associated files is double-clicked. Creator codes could be any four-byte value, but were usually chosen so that their ASCII representation formed a word or acronym. For example, the creator code of the HyperCard application and its associated "stacks" is represented in ASCII as "WILD", from the application's original name of WildCard. Occasionally they represented inside jokes. For instance, the Marathon computer game had a creator code of "26.2" (the approximate length, in miles, of a marathon) and Marathon 2: Durandal had a creator code of "52.4". The bindings are stored inside the resource fork of the application as BNDL and FREF resources. These resources maintained the creator code as well as the association between each type code and its icon. The OS collected this data from files when they were copied between media, thereby building up the list of associations and icons as software was installed onto the machine. Periodically this "desktop database" would become corrupted and had to be fixed by "rebuilding the desktop database". The key difference between extensions and Apple's system is that file type and file ownership bindings are kept distinct. This allows files of the same type (say, TEXT) to be written by different applications. Although any application can open anyone else's TEXT file, by default, opening a file launches the application that originally created it.
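Because an OSType is simply four ASCII bytes read as a 32-bit big-endian integer, the mapping between a code like "TEXT" and its numeric value is straightforward. A minimal illustrative sketch in Python (not tied to any actual Mac OS API; the function names are invented for illustration):

```python
import struct

def ostype_to_int(code: str) -> int:
    """Pack a four-character code such as 'TEXT' into the 32-bit
    big-endian integer that classic Mac OS stored on disk."""
    raw = code.encode("ascii")
    if len(raw) != 4:
        raise ValueError("OSType codes are exactly four bytes")
    return struct.unpack(">I", raw)[0]

def int_to_ostype(value: int) -> str:
    """Unpack a 32-bit OSType value back into its four-character form."""
    return struct.pack(">I", value).decode("ascii")
```

For example, `ostype_to_int("TEXT")` yields `0x54455854`, the byte values of "T", "E", "X", "T" in order.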
With the extensions approach, this distinction is lost: all files with a .txt extension are mapped to a single text-editing application of the user's choosing. A more obvious advantage of this approach is allowing double-click launching of specialized editors for more complex but common file types, like .csv or .html. This can also be a disadvantage: double-clicking four mp3 files created by four different applications would launch and play them in four different music applications instead of queuing them in the user's preferred player. macOS retains creator codes, but supports extensions as well. However, beginning with Mac OS X Snow Leopard, creator codes are ignored by the operating system. Creator codes have been internally superseded by Apple's Uniform Type Identifier scheme, which manages application and file type identification as well as type codes, creator codes and file extensions. To avoid conflicts, Apple maintained a database of creator codes in use. Developers could fill out an online form to register their codes. Apple reserves codes containing all lower-case ASCII characters for its own use. Creator codes are not readily accessible for users to manipulate, although they can be viewed and changed with certain software, most notably the macOS command-line tools GetFileInfo and SetFile, which are installed as part of the developer tools in /Developer/Tools. See also Type code Uniform Type Identifier References External links How application binding policy changed in Snow Leopard Macintosh operating systems Metadata
60914941
https://en.wikipedia.org/wiki/Andromeda%20%28trojan%29
Andromeda (trojan)
Andromeda is a modular trojan which was first spotted in 2011. Notably, the malware can check whether it is being executed or debugged in a virtual environment, using anti-virtual-machine techniques. It downloads other malware from its control servers, often in order to steal information from infected computers. The most affected countries are India (24%), Vietnam (12%) and Iran (7%). Andromeda has been heavily linked to phishing campaigns, spam email attachments, illegal software downloads and various exploit kits as means of distribution. Research into the malware's design has revealed that it shares many similarities with the source code of Zeus (zbot). References Windows trojans
17873905
https://en.wikipedia.org/wiki/TorChat
TorChat
TorChat was a peer-to-peer anonymous instant messenger that used Tor onion services as its underlying network. It provided cryptographically secure text messaging and file transfers. The characteristics of Tor's onion services ensure that all traffic between the clients is encrypted and that it is very difficult to tell who is communicating with whom and where a given client is physically located. TorChat is free software licensed under the terms of the GNU General Public License (GPL). Features In TorChat every user has a unique alphanumeric ID consisting of 16 characters. This ID is randomly created by Tor when the client is started for the first time; it is essentially the .onion address of an onion service. TorChat clients communicate with each other by using Tor to contact the other's onion service (derived from their ID) and exchanging status information, chat messages and other data over this connection. Since onion services can receive incoming connections even if they are behind a router doing network address translation (NAT), TorChat does not need any port forwarding to work. History The first public version of TorChat was released in November 2007 by Bernd Kreuss (prof7bit). It is written in Python and uses the cross-platform widget toolkit wxPython, which made it possible to support a wide range of platforms and operating systems. The older Windows versions of TorChat were built with py2exe (since 0.9.9.292 replaced with PyInstaller) and came bundled with a readily configured copy of Tor, so that it could be run as a portable application right off a USB flash drive without any installation, configuration or account creation. Between 2008 and 2010 there were no updated packages, so the bundled version of Tor became obsolete and unable to connect to the Tor network; this led to the appearance of forks that simply replaced the bundled Tor.exe with a current one.
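As described under Features, a TorChat ID is just a version-2 onion address (16 characters drawn from the base32 alphabet a–z, 2–7) without its ".onion" suffix. A small illustrative format check, not taken from the TorChat codebase:

```python
import re

# v2 onion addresses use 16 characters from the base32 alphabet a-z, 2-7.
_TORCHAT_ID = re.compile(r"^[a-z2-7]{16}$")

def is_valid_torchat_id(candidate: str) -> bool:
    """Return True if the string is shaped like a TorChat ID, i.e. a
    v2 .onion address with the ".onion" suffix stripped."""
    return bool(_TORCHAT_ID.match(candidate))
```

This checks only the shape of an ID; whether the corresponding onion service actually exists can only be determined by contacting it over Tor.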
In December 2010, an official update finally became available that, among some minor bugfixes, again included an up-to-date Tor.exe. After 2014, all development activity stopped and TorChat has not received any further updates. Forks A fork was released for OS X in the summer of 2010 by a French developer. The binary (a Cocoa application) and source code (Objective-C), bundled in an Xcode 7 project, can be downloaded from SourceMac. A rewrite of the TorChat protocol in Java, called jTorChat, was started at the beginning of 2012 on Google Code. Containing the latest Tor.exe, it is meant to emulate all the features of the original TorChat protocol, as well as extending the protocol with jTorChat-specific features. File sharing, while implemented in the original TorChat, is not yet implemented in jTorChat. A new capability in jTorChat is the broadcast mode, which allows a user to send messages to everybody in the network, even if they are not in their buddy list. A buddy request mode is also implemented, which allows a user to ask a random user in the jTorChat network to add them. At this stage jTorChat is designed to work effectively on Windows without any configuration; however, since it is written in Java, it can run on any platform supported by both Tor and Java itself, making it very portable. The project is actively seeking Java contributors, especially to help debug the GUI. In February 2012, developer Prof7bit moved TorChat to GitHub, as a protest against Google selectively censoring access to the TorChat download in certain countries. Prof7bit has since switched to working on torchat2, a rewrite from scratch using Lazarus and Free Pascal. Security In 2015 a security analysis of the TorChat protocol and its Python implementation was conducted. It found that although the design of TorChat is sound, its implementation has several flaws which make TorChat users vulnerable to impersonation, communication confirmation and denial-of-service attacks.
Despite the flaws found, the use of TorChat might still be secure in a scenario where the peer's onion address does not become known to an adversary interested in attacking the person behind the TorChat address. See also Bitmessage Briar (software) Tor (anonymity network) Ricochet (software) Tox References External links TorChat for Mac OS X Tor onion services Free instant messaging clients Free security software Tor (anonymity network) Free software programmed in Pascal Pascal (programming language) software Discontinued software
43601
https://en.wikipedia.org/wiki/Gnuplot
Gnuplot
gnuplot is a command-line and GUI program that can generate two- and three-dimensional plots of functions, data, and data fits. The program runs on all major computers and operating systems (Linux, Unix, Microsoft Windows, macOS, FreeDOS, and many others). It is a program with a fairly long history, dating back to 1986. Despite its name, this software is not part of the GNU Project. Features gnuplot can produce output directly on screen, or in many formats of graphics files, including Portable Network Graphics (PNG), Encapsulated PostScript (EPS), Scalable Vector Graphics (SVG), JPEG and many others. It is also capable of producing LaTeX code that can be included directly in LaTeX documents, making use of LaTeX's fonts and powerful formula notation abilities. The program can be used both interactively and in batch mode using scripts. gnuplot can read data in multiple formats, including data generated on the fly by other programs (piping). It can create multiple plots on one image; produce 2D, 3D and contour plots as well as parametric equations; and supports various linear and non-linear coordinate systems, projections, reading and presentation of geographic and time data, box plots of various forms, histograms, labels, and other custom elements on the plot, including shapes, text and images, which can be set manually, computed by script or derived automatically from input data. gnuplot also provides scripting capabilities, looping, functions, text processing, variables, macros, arbitrary pre-processing of input data (usually across columns), as well as the ability to perform non-linear multi-dimensional multi-set weighted data fitting (see Curve fitting and Levenberg–Marquardt algorithm). The gnuplot core code is programmed in C. Modular subsystems for output via Qt, wxWidgets, and LaTeX/TikZ/ConTeXt are written in C++ and Lua. The code below creates the graph to the right.
set title "Some Math Functions"
set xrange [-10:10]
set yrange [-2:2]
set zeroaxis
plot (x/4)**2, sin(x), 1/x
The name of this program was originally chosen to avoid conflicts with a program called "newplot", and was originally a compromise between "llamaplot" and "nplot". Version 5.4.2 added support for daily and weekly epidemic date formats, a result of the need to plot coronavirus pandemic data. Development As of 2021, version 5.5 is available as the development version. Distribution terms Despite gnuplot's name, it is not named after, part of, or related to the GNU Project, nor does it use the GNU General Public License. It was named as part of a compromise by the original authors, punning on gnu (the animal) and newplot. Official source code to gnuplot is freely redistributable, but modified versions thereof are not. Instead, the gnuplot license allows distribution of patches against official releases, optionally accompanied by officially released source code. Binaries may be distributed along with the unmodified source code and any patches applied thereto. Contact information must be supplied with derived works for technical support for the modified software. Permission to modify the software is granted, but not the right to distribute the complete modified source code. Modifications are to be distributed as patches to the released version. Despite this restriction, gnuplot is accepted and used by many GNU packages and is widely included in Linux distributions, including stricter ones such as Debian and Fedora. The OSI Open Source Definition and the Debian Free Software Guidelines specifically allow for restrictions on distribution of modified source code, given explicit permission to distribute both patches and source code. Newer gnuplot modules (e.g. the Qt, wxWidgets, and cairo drivers) have been contributed under dual-licensing terms, e.g. gnuplot + BSD or gnuplot + GPL.
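Because gnuplot reads commands from standard input, other programs can drive it over a pipe. A minimal Python sketch, assuming a `gnuplot` binary on the PATH (the helper names are illustrative; the script mirrors the example above but renders to a PNG file instead of the screen):

```python
import shutil
import subprocess

def build_script(outfile: str = "plot.png") -> str:
    """Build a gnuplot command script matching the example above,
    targeting a PNG file instead of the interactive terminal."""
    return "\n".join([
        "set terminal png",
        f'set output "{outfile}"',
        'set title "Some Math Functions"',
        "set xrange [-10:10]",
        "set yrange [-2:2]",
        "set zeroaxis",
        "plot (x/4)**2, sin(x), 1/x",
    ])

def render(outfile: str = "plot.png") -> bool:
    """Pipe the script to gnuplot if it is installed; return whether
    a render was attempted."""
    if shutil.which("gnuplot") is None:
        return False
    subprocess.run(["gnuplot"], input=build_script(outfile).encode(),
                   check=True)
    return True
```

This is the same mechanism the language bindings listed below use: they generate command text and feed it to a gnuplot process over a pipe.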
GUIs and programs that use gnuplot
Several third-party programs have graphical user interfaces that can be used to generate graphs using gnuplot as the plotting engine. These include:
gretl, a statistics package for econometrics
JGNUPlot, a Java-based GUI
Kayali, a computer algebra system
xldlas, an old X11 statistics package
gnuplotxyz, an old Windows program
wxPinter, a graphical plot manager for gnuplot
Maxima, a text-based computer algebra system which itself has several third-party GUIs
Other programs that use gnuplot include:
GNU Octave, a mathematical programming language
statist, a terminal-based program
gplot.pl, which provides a simpler command-line interface
feedgnuplot, which provides plotting of stored and real-time data from a pipe
ElchemeaAnalytical, an impedance spectroscopy plotting and fitting program developed by DTU Energy
a Gnuplot add-in for MS Excel
Calc, the GNU Emacs calculator
Programming and application interfaces
gnuplot can be used from various programming languages to graph data, including Perl (via PDL and other CPAN packages), Python (via gnuplotlib, Gnuplot-py and SageMath), R (via Rgnuplot), Julia (via Gaston.jl), Java (via JavaGnuplotHybrid and jgnuplot), Ruby (via Ruby Gnuplot), Ch (via Ch Gnuplot), Haskell (via Haskell gnuplot), Fortran 95, Smalltalk (Squeak and GNU Smalltalk) and Rust (via RustGnuplot). gnuplot also supports piping, which makes it well suited to scripting. For script-driven graphics, gnuplot is one of the most popular programs. Gnuplot output formats
gnuplot can display or store plots in several ways:
on the console (output modes dumb, sixel)
in a desktop window (output modes qt, wxt, x11, aquaterm, win, ...)
embedded in a web page (output modes svg, HTML5, png, jpeg, animated gif, ...)
in file formats designed for document processing (output modes PostScript, PDF, cgm, emf, LaTeX variants, ...)
See also List of graphing software References Further reading and external links Gnuplot 5: an interactive ebook about gnuplot v.5.
gnuplotting: a blog of gnuplot examples and tips spplotters: a blog of gnuplot examples and tips gnuplot surprising: a blog of gnuplot examples and tips Visualize your data with gnuplot: an IBM tutorial Articles containing video clips Computer animation Cross-platform free software Data analysis software Free 3D graphics software Free educational software Free mathematics software Free plotting software Free software programmed in C Plotting software Regression and curve fitting software Software that uses wxWidgets Software that uses Qt
57480095
https://en.wikipedia.org/wiki/Nvidia%20RTX
Nvidia RTX
Nvidia GeForce RTX (Ray Tracing Texel eXtreme) is a high-end professional visual computing platform created by Nvidia, primarily used for designing complex large-scale models in architecture and product design, scientific visualization, energy exploration, games, and film and video production. Nvidia RTX enables real-time ray tracing. Historically, ray tracing had been reserved for non-real-time applications (such as CGI in visual effects for movies and in photorealistic renderings), with video games having to rely on direct lighting and precalculated indirect contributions for their rendering. RTX facilitates a new development in computer graphics: generating interactive images that react to lighting, shadows and reflections. RTX runs on Nvidia Volta-, Turing- and Ampere-based GPUs, specifically utilizing the Tensor cores (and new RT cores on Turing and successors) on the architectures for ray-tracing acceleration. In March 2019, Nvidia announced that selected GTX 10 series (Pascal) and GTX 16 series (Turing) cards would receive support for subsets of RTX technology in upcoming drivers, although functions and performance would be affected by their lack of dedicated hardware cores for ray tracing. In October 2020, Nvidia announced the Nvidia RTX A6000 as the first Ampere-architecture-based graphics card for use in professional workstations in the Nvidia RTX product line. Nvidia worked with Microsoft to integrate RTX support with Microsoft's DirectX Raytracing API (DXR). RTX is currently available through Nvidia OptiX and for DirectX. For the Turing and Ampere architectures, it is also available for Vulkan. Components In addition to ray tracing, RTX includes artificial intelligence integration, common asset formats, rasterization (CUDA) support, and simulation APIs.
The components of RTX are:
AI-accelerated features (NGX)
Asset formats (USD and MDL)
Rasterization, including advanced shaders
Ray tracing via OptiX, Microsoft DXR and Vulkan
Simulation tools: CUDA 10, Flex, PhysX
Ray tracing In computer graphics, ray tracing generates an image by tracing rays cast through the pixels of an image plane and simulating the effects of their encounters with virtual objects. This enables advanced effects that better reflect real-world optical properties, such as softer and more realistic shadows and reflections, compared to traditional rasterization techniques, which prioritize performance over accuracy. Nvidia RTX achieves this through a combination of hardware and software acceleration. On a hardware level, RTX cards feature fixed-function "RT cores" designed to accelerate the mathematical operations needed to simulate rays, such as bounding volume hierarchy traversal. The software implementation is open to individual application developers. As ray tracing is still computationally intensive, many developers choose a hybrid rendering approach in which certain graphical effects, such as shadows and reflections, are performed using ray tracing, while the remaining scene is rendered using the more performant rasterization. Development APIs using RTX Nvidia OptiX Nvidia OptiX is part of Nvidia DesignWorks. OptiX is a high-level, or "to-the-algorithm" API, meaning that it is designed to encapsulate the entire algorithm of which ray tracing is a part, not just the ray tracing itself. This is meant to allow the OptiX engine to execute the larger algorithm without application-side changes. Aside from computer graphics rendering, OptiX also helps in optical and acoustical design, radiation and electromagnetic research, artificial intelligence queries and collision analysis.
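To illustrate the kind of arithmetic that bounding volume hierarchy traversal involves, its core step is a ray/axis-aligned-box intersection ("slab") test, repeated at every node of the hierarchy. An illustrative scalar sketch in Python (real RT cores implement this in fixed-function hardware, not like this):

```python
def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab test: does a ray (origin + t * direction, t >= 0) intersect
    an axis-aligned bounding box? Works per axis, narrowing the
    [t_near, t_far] interval in which the ray is inside every slab."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            # Ray is parallel to this slab: it must start inside it.
            if o < lo or o > hi:
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far:
            return False  # The per-axis intervals no longer overlap.
    return t_far >= 0.0  # Reject boxes entirely behind the ray origin.
```

A traversal visits a node's children only when this test passes, which is why accelerating it in hardware speeds up the whole ray-tracing pipeline.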
References External links Nvidia developer page on RTX 3D graphics software Image processing software 3D computer graphics Computer-aided design software Computer-aided engineering software
37445411
https://en.wikipedia.org/wiki/Public%20copyright%20license
Public copyright license
A public license, or public copyright license, is a license by which a copyright holder, as licensor, can grant additional copyright permissions to any and all persons in the general public as licensees. By applying a public license to a work, copyright holders give permission for others to copy or change their work in ways that would otherwise infringe copyright law, provided that the licensees obey the terms and conditions of the license. Some public licenses, such as the GNU GPL and the CC BY-SA, are also considered free or open copyright licenses. However, other public licenses, like the CC BY-NC, are not open licenses, because they contain restrictions on commercial or other types of use. Public copyright licenses do not limit their licensees: any person can take advantage of the license. The former Creative Commons (CC) Developing Nations License was not a public copyright license, because it limited licensees to those in developing nations. Current Creative Commons licenses are explicitly identified as public licenses. Any person can apply a CC license to their work, and any person can take advantage of the license to use the licensed work according to the terms and conditions of the relevant license. According to the Open Knowledge Foundation, a public copyright license does not limit licensors either. Under this definition, license texts specific to a single licensor (like the UK government's Open Government Licence, which would have to be edited to be used by other licensors) are not considered public copyright licenses, although they may qualify as open licenses. Some organisations approve public copyright licenses that meet certain criteria, in particular being free or open licenses. The Free Software Foundation keeps a list of FSF-approved software licenses and free documentation licenses. The Open Source Initiative keeps a similar list of OSI-approved software licenses.
The Open Knowledge Foundation has a list of OKFN-approved licenses for content and data licensing. Types of copyright license The implied license imposed by the Berne Convention, and the public domain (the CC0 waiver), are the reference points for any other public license. Considering all cultural works, as in the Open Definition, the four freedoms summarize the main differences: "open licenses" preserve the main freedoms of CC0 but add some reasonable restrictions. Labeled by acronym, the main restrictions are:
BY (attribution): a restriction on freedoms 2, 3 or 2.1; copies must credit the author or licensor in the manner they specify.
SA (share-alike): a restriction on freedoms 2 or 3; copies must be distributed under a license identical to the license that governs the original work (see copyleft).
ND (no-derivatives): exclusion of freedom 3.
NC (non-commercial): partial exclusion of freedoms 2 and 3 for commercial purposes.
Other: other, less usual restrictions in "open licenses".
Varieties of public copyright license Free licenses are a popular subset of public copyright licenses. They include free and open source software licenses and free content licenses. To qualify as a libre license, a public copyright license must allow licensees to share and adapt the licensed work for any purpose, including commercial ones. Licenses that purport to release a work into the public domain are a type of libre license. Share-alike licenses require derivatives of the licensed work to be released under the same license as the original. When a libre license has a share-alike term, it is called a copyleft license. Libre licenses without share-alike terms are sometimes called permissive licenses. The Creative Commons public copyright license suite includes licenses with attribution, share-alike, non-commercial and no-derivatives conditions. It also offers a public domain license and the Founders' Copyright license.
Open supplement licenses permit derivatives of the work (specifically material that supplements the original work) but not duplicates. Public-domain-like licenses A subset of public copyright licenses aims for no restrictions at all, like the public domain ("fully permissive"); these are public-domain-like licenses. The WTFPL, released in 2000, is a short public-domain-like software license. CC0, released in 2009, was created as a public domain license for all kinds of content, compatible also with legal systems (e.g. the civil law of continental Europe) where dedicating a work to the public domain is problematic. This is achieved by a public domain waiver statement and a fall-back all-permissive license. The Unlicense, published around 2010, has a focus on an anti-copyright message. The Unlicense offers a public domain waiver text with a fall-back public-domain-like license inspired by permissive licenses but without attribution. See also Anti-copyright notice Copyright Copyright reform movement Free and open-source software Open content/Free content Public-domain-equivalent license References Public copyright licenses
159320
https://en.wikipedia.org/wiki/Tux%20Racer
Tux Racer
Tux Racer is a 2000 open-source winter sports racing video game starring the Linux mascot, Tux the penguin. It was originally developed by Jasmin Patry as a computer graphics project at a Canadian university. Later, Patry and the newly founded Sunspire Studios, composed of several former students of the university, expanded it. In the game, the player controls Tux as he slides down a course of snow and ice collecting herring. Tux Racer was officially downloaded over one million times as of 2001. It was also well received, often being acclaimed for its graphics, fast-paced gameplay, and replayability, and was a fan favorite among Linux users and the free software community. The game's popularity secured the development of a commercial release that included enhanced graphics and multiplayer, and it also became the first GPL-licensed game to receive an arcade adaptation. It is the only product that Sunspire Studios developed and released; the company was subsequently liquidated. The free game's source code has led to forks made by fans continuing its development. Gameplay Tux Racer is a racing game in which the player must steer Tux down a mountainside. Tux can turn left and right, brake, jump, paddle, and flap his flippers. If the player presses the brake and turn buttons together, Tux performs a tight turn. Pressing the paddling buttons on the ground gives Tux some additional speed. Paddling stops adding speed, and instead slows Tux down, once the speedometer turns yellow. Tux can slide off slopes or charge his jumps to temporarily launch into midair, during which he can flap his flippers to fly farther and adjust his direction left or right. The player can also reset the penguin should he get stuck in any part of the course. Courses are composed of various terrain types that affect Tux's performance. Sliding on ice allows speeding at the expense of traction, and snow allows for more maneuverability.
However, rocky patches slow him down, as does crashing into trees. The player gains points by collecting herring scattered along the courses, and the faster the player finishes a course, the higher the score. Players can select cups, in which progression requires completing a series of courses in order by satisfying up to three requirements: collecting sufficient herring, finishing the course below a specified time, and scoring enough points. Failing to meet the criteria or aborting the race costs a life, and should the player lose all four lives, they must reenter the cup and start over. During level selection, the player can choose time-of-day settings and weather conditions, such as wind and fog, that affect the gameplay. Maps are composed of three separately saved raster layers that determine, respectively, a map's elevation, terrain layout, and object placement. Commercial version The commercial version of Tux Racer introduces new content. Besides Tux, players can select one of three other characters to race as: Samuel the seal, Boris the polar bear, and Neva the penguin. Some courses contain jump and speed pads as power-ups, and players can perform tricks in midair to receive points. They can participate in cups in one of two events serving as game modes: the traditional "Solo Challenge" or the new "Race vs Opponents", where a computer opponent is added and must be defeated for the player to advance. Courses are unlocked by completing unfinished cups. In non-campaign sessions, besides practicing, players can also race in the two-player "Head to Head" local multiplayer mode, viewed on a split screen. Development Tux Racer was originally developed by Jasmin Patry, a student attending the University of Waterloo in Ontario, Canada, where he aimed to begin a career in the video game industry by pursuing a computer graphics degree.
Development of the game began in August 1999 as a final computer graphics project at the Computer Graphics Lab, and it was completed in three days to positive class reception. A webpage for the game was then started, and someone suggested he release the game's source code. Patry felt that made sense due to Tux being the mascot of the open-source Linux operating system, and he continued to work on the game before publicly uploading it to SourceForge for Linux under the free GNU General Public License on February 28, 2000, hoping others would join in on developing it. This early version featured very basic gameplay that consisted of Tux sliding down a hill of snow and ice, with rocks and trees to avoid along the way. To write the game, Patry tended to use free premade content, such as textures borrowed from websites, rather than original content made from scratch. In December 1999, Patry, fine arts students Rick Knowles and Mark Riddell, and computer graphics students Patrick Gilhuly, Eric Hall, and Rob Kroeger announced the foundation of the company Sunspire Studios to develop a video game project. Patry stated the game would be massively multiplayer, with a persistent universe combining real-time strategy and first-person shooter components. Since their ideas were limited by the 3D engines of the time, they embarked on creating their own, which according to Patry would make the Quake 3 and Unreal engines look "tame" in comparison. Fine arts undergraduate classmate Roger Fernandez was chosen as the artist. The project was eventually abandoned due to it being a "massive undertaking," and in August 2000, Knowles suggested the company resume working on Tux Racer, which became their first official project. Continued development of the free version was swift; numerous elements such as herring, jumping, and a soundtrack, as well as graphical improvements, were added in just three weeks.
Porting the game from Linux to Windows was easy, as it used cross-platform tools such as OpenGL and Simple DirectMedia Layer. A major update including those improvements, version 0.60, was freely uploaded to SourceForge for both Linux and Windows on October 2, 2000. A minor patch of that release was included in most Linux distributions, and a port for Macintosh was released on November 21, 2000. Ports and remakes On February 5, 2002, Sunspire Studios released at retail a closed-source, commercial expansion of the game titled Tux Racer, with each CD designed to support both the Linux and Windows operating systems. Improvements over the open-source version include a vastly enhanced engine and graphics, the ability to perform tricks, character selection, and competitive multiplayer. The open-source version of Tux Racer, however, remained available to download on SourceForge. Sunspire Studios ceased business in the early-to-mid 2000s. Since its inception, Tux Racer has seen unofficial forks and updates. One of the most popular examples is Extreme Tux Racer, released in September 2007 and itself based on a previous fork, PlanetPenguin Racer. An arcade version of the game was released by Roxor Games, making it the first GPL-licensed video game to receive an arcade adaptation. The game and subsequent forks have also been ported to various other platforms, for instance Android and Ubuntu Touch. Reception Tux Racer was well received, with the latest version seeing over one million downloads by October 2001 since its release that January, according to Sunspire Studios. It was a fan favorite among Linux users, who often ranked it as the best or one of the best free games. In August 2000, Lee Anderson of LinuxWorld.com commended the game's graphics, speed, and the ease of creating tracks. In 2001, TuxRadar said the game provided a "shining light" of what free applications could achieve.
In its 2001 preview, the Brazilian magazine SuperGamePower considered the game's graphics to be its best aspect and described the sound as not innovative, but good. Also in 2001, MacAddict compared the game's fast-paced style to podracing in Star Wars and summed up the Macintosh port as "more fun than words can describe." The commercial version of Tux Racer attracted little attention. Andon Logvinov of Igromania described it as a "pure arcade game" featuring nothing but four selectable characters and a set of courses with fish scattered about. He described the gameplay as calm and addictive and the music as relaxing, and praised the character models and track layout, with his only criticism being the system requirements. Seiji Nakamura of the Japanese website Game Watch described it as cute and humorous and praised the game's graphics and its shadow and reflection effects, but found the game to lack appeal for adults. Even after production ceased, Tux Racer has continued to be generally well received, largely through its forks. Linux Journal gave it an Editors' Choice Award in the "Game or Entertainment Software" category in 2005. Digit applauded the graphics and replayability, as well as the speed of the game and the abundance of courses, but found the music to be monotonous. Daniel Voicu of Softpedia praised the Extreme Tux Racer fork for being relaxing and funny and for the ability to reset Tux, and noted the game's fast pace, but criticized its perceived lack of interactivity and Tux's resemblance to a "plastic puppet." Linux For You also called the fork entertaining, but criticized its bugs and the "plastic" look of Tux.
References External links Tux Racer on SourceForge Official website for commercial Tux Racer Tux Racer Arcade and the new Tux2 Arcade 2000 video games Cross-platform software Formerly free software Linux games MacOS games Multiplayer and single-player video games Open-source video games Racing video games Split-screen multiplayer games Video games developed in Canada Windows games Winter sports video games
55715744
https://en.wikipedia.org/wiki/Charlie%20Thacker
Charlie Thacker
Charles Michael Thacker (born 10 August 1996) is an English rugby union centre who plays for Nottingham in the RFU Championship. He previously played for Leicester Tigers in Premiership Rugby. Early life Thacker was born in Leicester. His elder brother is Bristol Bears hooker Harry Thacker, and his father Troy Thacker also played hooker for Leicester. Career Thacker made his Leicester Tigers debut on 1 November 2014 in a 17-16 win against London Irish in the Anglo-Welsh Cup. He scored his first try for the club on 4 November 2017 against Gloucester again in the Anglo-Welsh Cup and was named as the fans' man of the match for his performance. On 15 May 2019 he was announced as one of the players to leave Leicester following the end of the 2018-19 Premiership Rugby season. References 1996 births Living people English rugby union players Leicester Tigers players Rugby union centres Rugby union players from Leicester
3872262
https://en.wikipedia.org/wiki/SILO%20%28boot%20loader%29
SILO (boot loader)
The SPARC Improved bootLOader (SILO) is the boot loader used by the SPARC port of the Linux operating system; it can also be used for Solaris as a replacement for the standard Solaris boot loader. SILO generally looks similar to the basic version of LILO, giving a "boot:" prompt at which the user can press the Tab key to see the available images to boot. The configuration file format is reasonably similar to LILO's, as are some of the command-line options. However, SILO differs significantly from LILO in that it reads and parses the configuration file at boot time, so it is not necessary to re-run it after every change to the file or to the installed kernel images. SILO is able to access ext2, ext3, ext4, UFS, romfs and ISO 9660 file systems, enabling it to boot arbitrary kernels from them (similar to GRUB). SILO also has support for transparent decompression of gzipped vmlinux images, making the bzImage format unnecessary on SPARC Linux. SILO is loaded from the SPARC PROM and is licensed under the terms of the GNU General Public License (GPL). See also bootman LILO elilo NTLDR BCD References External links Gentoo wiki about SILO Free boot loaders
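As an illustration of the LILO-like format described above, a minimal silo.conf might look like the following. This is a hypothetical example sketched from the description, not taken from the SILO sources; the device names and image path are placeholders, and the exact keywords supported should be checked against the documentation of a given SILO version.

```
# Sample /etc/silo.conf (illustrative)
partition = 1            # partition holding the boot images
root = /dev/sda3         # root filesystem passed to the kernel
timeout = 100            # prompt timeout, in tenths of a second
default = linux

image = /boot/vmlinux.gz # SILO transparently decompresses gzipped vmlinux
        label = linux
```

Because SILO parses this file at boot time, editing it (or replacing the kernel image it points to) takes effect on the next boot without re-running an installer, unlike classic LILO.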
59424437
https://en.wikipedia.org/wiki/Witold%20Lipski
Witold Lipski
Witold Lipski Jr. (July 13, 1949, in Warsaw, Poland – May 30, 1985, in Nantes, France) was a Polish computer scientist (with a habilitation in computer science) and the author of two books: Combinatorics for Programmers (two editions) and (jointly with Wiktor Marek) Combinatorial Analysis. Jointly with his PhD student Tomasz Imieliński, he created the foundations of the theory of incomplete information in relational databases. Life Lipski graduated from the Program of Fundamental Problems of Technology at the Warsaw Technical University. He received his Ph.D. in computer science at the Computational Center (later: Institute for Computer Science) of the Polish Academy of Sciences, under the supervision of Prof. Wiktor Marek. Lipski's dissertation was on the topic of information storage and retrieval systems and was titled "Combinatorial Aspects of Information Retrieval". His habilitation was granted by the Institute of Computer Science of the Polish Academy of Sciences. Lipski spent the academic year 1979/1980 at the University of Illinois at Urbana–Champaign, and the last two years before his death at the University of Paris. Jointly with his doctoral student Tomasz Imieliński, Lipski investigated the foundations of the treatment of incomplete information in relational databases. The results of these investigations were published between 1978 and 1985. This collaboration produced a fundamental concept that later became known as Imieliński-Lipski algebras. Again in collaboration with Imieliński, Lipski studied the semantic issues of relational databases. These investigations were based on the theory of cylindric algebras, a topic studied within universal algebra.
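As a rough illustration of that connection (a standard reading, not a claim about the exact formulation in the 1984 paper): dropping an attribute from a relation by projection behaves like existentially quantifying the corresponding variable, which is precisely what the cylindrification operator of a cylindric algebra models. For a relation $R$ over a set of attributes $\mathcal{A}$ and an attribute $x \in \mathcal{A}$,

\[
t \in \pi_{\mathcal{A}\setminus\{x\}}(R)
\iff
\exists a \,:\, t[x \mapsto a] \in R ,
\]

so the relational projection $\pi$ corresponds to the cylindrification $c_x$, which "smears" $R$ along the $x$-axis while leaving the other coordinates fixed.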
According to Van den Bussche, the first people from the database community to recognize the connection between Codd's relational algebra and Tarski's cylindric algebras were Witold Lipski and Tomasz Imieliński, in a talk given at the very first edition of PODS (the ACM Symposium on Principles of Database Systems) in 1982. Their work, "The relational model of data and cylindric algebras", was later published in 1984. Additionally, Lipski contributed to research in the area of algorithm analysis, specifically by discovering a number of efficient algorithms applicable to the analysis of VLSI devices (in collaboration with Franco P. Preparata), time-sharing in database implementations (in collaboration with Christos Papadimitriou), and computational geometry as applied to shape recognition (again in collaboration with Franco Preparata). Lipski was the author of a book on combinatorial algorithms, Combinatorics for Programmers ("Kombinatoryka dla Programistow", in Polish). This book has had two editions (one of them posthumous) and was also translated into Russian. Additionally, jointly with Wiktor Marek, Lipski published a monograph on combinatorial analysis. Personal Witold Lipski Jr. is survived by two children, Dr. Kasia Lipska, an endocrinologist, and Dr. Witold Lipski, a neuroscientist. The father of Witold Lipski Jr. was the economist and politician Witold Lipski Sr. Lipski died in Nantes, France, after a battle with cancer. He is buried in Powązki Cemetery in Warsaw, Poland (location: C/39 (5/7)). Witold Lipski Prize for Young Computer Scientists in Poland The Witold Lipski Prize is the most prestigious award for young computer scientists in Poland. Many are inspired by the brilliant career of Witold Lipski, whose life was cut short by a terminal illness. The Prize is awarded for achievements in the area of theoretical and applied computer science. It was created on the initiative of a group of Polish computer scientists active both outside of Poland and in Poland.
Submissions for the Prize are limited to applicants with exceptional accomplishments who are younger than 30 (or younger than 32 if the candidate took maternity or paternity leave). The Prize is administered by the (Polish) Foundation for Computer Science Research, in cooperation with the Polish Chapter of the Association for Computing Machinery and the Polish Computer Science Society. See also Null (SQL) Relational algebra Imieliński-Lipski Algebras Cylindric algebra References 1949 births 1985 deaths Polish computer scientists Warsaw University of Technology alumni Scientists from Warsaw
3830007
https://en.wikipedia.org/wiki/S-PLUS
S-PLUS
S-PLUS is a commercial implementation of the S programming language sold by TIBCO Software Inc. It features object-oriented programming capabilities and advanced analytical algorithms. Due to the increasing popularity of the open source S successor R, TIBCO Software released the TIBCO Enterprise Runtime for R (TERR) as an alternative R interpreter. Historical timeline 1988: S-PLUS is first produced by a Seattle-based start-up company called Statistical Sciences, Inc. The founder and sole owner is R. Douglas Martin, professor of statistics at the University of Washington, Seattle. 1993: Statistical Sciences acquires the exclusive license to distribute S and merges with MathSoft, becoming the firm's Data Analysis Products Division (DAPD). 1995: S-PLUS 3.3 for Windows 95/NT. Matrix library, command history, Trellis graphics 1996: S-PLUS 3.4 for UNIX. Trellis graphics, (non-linear mixed effects) library, hexagonal binning, cluster methods. 1997: S-PLUS 4 for Windows. New GUI, integration with Excel, editable graphics. 1998: S-PLUS 4.5 for Windows. Scatterplot brushing, create S-PLUS graphs from within Excel & SPSS. 1998: S-PLUS is available for Linux & Solaris. 1999: S-PLUS 5 for Solaris, Linux, HP-UX, AIX, IRIX, and DEC Alpha. S-PLUS 2000 for Windows. 3.3, quality control charting, new commands for data manipulation. 2000: S-PLUS 6 for Linux/Unix. Java-based GUI, Graphlets, survival5, missing data library, robust library. 2001: MathSoft sells its Cambridge-based Engineering and Education Products Division (EEPD), changes name to Insightful Corporation, and moves headquarters to Seattle. This move is basically an "Undo" of the previous merger between MathSoft and Statistical Sciences, Inc. 2001: S-PLUS Analytic Server 2.0. S-PLUS 6 for Windows (Excel integration, C++ classes/libraries for connectivity, Graphlets, S version 4, missing data library, robust library). 2002: StatServer 6. Student edition of S-PLUS now free. 
2003: S-PLUS 6.2. New reporting, database integration, improved Graphlets, ported to AIX, libraries for correlated data, Bayesian methods, multivariate regressions. 2004: Insightful purchases the S language from Lucent Technologies for $2 million. 2004: S+ArrayAnalyzer 2.0 released. 2005: S-PLUS 7.0 released. BigData library for working with larger-than-memory data sets, S-PLUS Workbench (Eclipse development tool). Insightful Miner 7.0 released. 2007: S-PLUS 8 released. New package system, language extensions for R package compatibility, Workbench debugger. 2008: TIBCO acquires Insightful Corporation for $25 million. See also R programming language References Programming languages Proprietary commercial software for Linux Statistical software
56943758
https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge%20Analytica%20data%20scandal
Facebook–Cambridge Analytica data scandal
In the 2010s, personal data belonging to millions of Facebook users was collected without their consent by British consulting firm Cambridge Analytica, predominantly to be used for political advertising. The data was collected through an app called "This Is Your Digital Life", developed by data scientist Aleksandr Kogan and his company Global Science Research in 2013. The app consisted of a series of questions to build psychological profiles on users, and collected the personal data of the users’ Facebook friends via Facebook's Open Graph platform. The app harvested the data of up to 87 million Facebook profiles. Cambridge Analytica used the data to provide analytical assistance to the 2016 presidential campaigns of Ted Cruz and Donald Trump. Cambridge Analytica was also widely accused of interfering with the Brexit referendum, although the official investigation recognised that the company was not involved "beyond some initial enquiries" and that "no significant breaches" took place. Information about the data misuse was disclosed in 2018 by Christopher Wylie, a former Cambridge Analytica employee, in interviews with The Guardian and The New York Times. In response, Facebook apologized for their role in the data harvesting and their CEO Mark Zuckerberg testified in front of Congress. In July 2019, it was announced that Facebook was to be fined $5 billion by the Federal Trade Commission due to its privacy violations. In October 2019, Facebook agreed to pay a £500,000 fine to the UK Information Commissioner's Office for exposing the data of its users to a "serious risk of harm". In May 2018, Cambridge Analytica filed for Chapter 7 bankruptcy. Other advertising agencies have been implementing various forms of psychological targeting for years and Facebook had patented a similar technology in 2012. 
Nevertheless, Cambridge Analytica’s openness about their methods and the caliber of their clients — including the Trump and the UK’s Vote Leave campaign — brought the challenges of psychological targeting that scholars have been warning against to public awareness. The scandal sparked an increased public interest in privacy and social media's influence on politics. The online movement #DeleteFacebook trended on Twitter. The Russo brothers are producing a film on the scandal, starring Paul Bettany. Overview Aleksandr Kogan, a data scientist at the University of Cambridge, was hired by Cambridge Analytica, an offshoot of SCL Group, to develop an app called "This Is Your Digital Life" (sometimes stylized as "thisisyourdigitallife"). Cambridge Analytica then arranged an informed consent process for research in which several hundred thousand Facebook users would agree to complete a survey for payment that was only for academic use. However, Facebook allowed this app not only to collect personal information from survey respondents but also from respondents’ Facebook friends. In this way, Cambridge Analytica acquired data from millions of Facebook users. The collection of personal data by Cambridge Analytica was first reported in December 2015 by Harry Davies, a journalist for The Guardian. He reported that Cambridge Analytica was working for United States Senator Ted Cruz using data harvested from millions of people's Facebook accounts without their consent. Further reports followed in November 2016 by McKenzie Funk for the New York Times Sunday Review, December 2016 by Hannes Grasseger and Mikael Krogerus for the Swiss publication Das Magazin (later translated and published by Vice), in February 2017 by Carole Cadwalladr for The Guardian (starting in February 2017), and in March 2017 by Mattathias Schwartz for The Intercept. 
According to PolitiFact, in his 2016 presidential campaign, Trump paid Cambridge Analytica in September, October, and November for data on Americans and their political preferences. Information on the data breach came to a head in March 2018 with the emergence of a whistleblower, an ex-Cambridge Analytica employee Christopher Wylie. He had been an anonymous source for an article in 2017 in The Observer by Cadwalladr, headlined "The Great British Brexit Robbery". Cadwalladr worked with Wylie for a year to coax him to come forward as a whistleblower. She later brought in Channel 4 News in the UK and The New York Times due to legal threats against The Guardian and The Observer by Cambridge Analytica. Kogan's name change to Aleksandr Spectre, which resulted in the ominous "Dr. Spectre", added to the intrigue and popular appeal of the story. The Guardian and The New York Times published articles simultaneously on March 17, 2018. More than $100 billion was knocked off Facebook's market capitalization in days and politicians in the US and UK demanded answers from Facebook CEO Mark Zuckerberg. The negative public response to the media coverage eventually led to him agreeing to testify in front of the United States Congress. Meghan McCain drew an equivalence between the use of data by Cambridge Analytica and Barack Obama's 2012 presidential campaign; PolitiFact, however, alleged that this data was not used in an unethical way, since Obama's campaign used this data to “have their supporters contact their most persuadable friends” rather than using this data for highly targeted digital ads on websites such as Facebook. Data characteristics Numbers Wired, The New York Times, and The Observer reported that the data-set had included information on 50 million Facebook users. 
While Cambridge Analytica claimed it had only collected 30 million Facebook user profiles, Facebook later confirmed that it actually had data on potentially over 87 million users, with 70.6 million of those people from the United States. Facebook estimated that California was the most affected U.S. state, with 6.7 million impacted users, followed by Texas, with 5.6 million, and Florida, with 4.3 million. Data was collected on 87 million users while only 270,000 people downloaded the app. Information Facebook sent a message to those users believed to be affected, saying the information likely included one's "public profile, page likes, birthday and current city". Some of the app's users gave the app permission to access their News Feed, timeline, and messages. The data was detailed enough for Cambridge Analytica to create psychographic profiles of the subjects of the data. The data also included the locations of each person. For a given political campaign, each profile's information suggested what type of advertisement would be most effective to persuade a particular person in a particular location for some political event. Data use Ted Cruz campaign In 2016, American senator Ted Cruz hired Cambridge Analytica to aid his presidential campaign. The Federal Election Commission reported that Cruz paid the company $5.8 million in services. Although Cambridge Analytica was not well known at the time, this is when it started to create individual psychographic profiles. This data was then used to create tailored advertisements for each person to sway them into voting for Cruz. Donald Trump campaign Donald Trump’s 2016 presidential campaign used the harvested data to build psychographic profiles, determining users' personality traits based on their Facebook activity. The campaign team used this information as a micro-targeting technique, displaying customized messages about Trump to different US voters on various digital platforms. 
Ads were segmented into different categories, mainly based on whether individuals were Trump supporters or potential swing voters. As described by Cambridge Analytica’s CEO, the key was to identify those who might be enticed to vote for their client or discouraged from voting for their opponent. Supporters of Trump received triumphant visuals of him, as well as information regarding polling stations. Swing voters were instead often shown images of Trump’s more notable supporters and negative graphics or ideas about his opponent, Hillary Clinton. For example, the collected data was specifically used by “Make America Number 1 Super PAC” to attack Clinton through constructed advertisements that accused her of corruption, as a way of propping up Trump as the better candidate for the presidency. However, a former Cambridge Analytica employee claims that the use of the illicitly obtained data by the Trump campaign has not been proven. Brittany Kaiser was asked "Is it absolutely proven that the Trump campaign relied on the data that had been illicitly obtained from Facebook?" She responded: "It has not been proven, because the difficult thing about proving a situation like that is that you need to do a forensic analysis of the database". Potential usage Russia In 2018, the Parliament of the United Kingdom questioned SCL Group director Alexander Nix in a hearing about Cambridge Analytica's connections with the Russian oil company Lukoil. Nix stated he had no connections to the two companies despite concerns that the oil company was interested in how the company's data was used to target American voters. By this point, Cambridge Analytica had become a focus in politics owing to its involvement in Trump's campaign. Democratic officials made it a point of emphasis for improved investigation over concerns of Russian ties with Cambridge Analytica. It was later confirmed by Christopher Wylie that Lukoil was interested in the company's data regarding political targeting.
Brexit Cambridge Analytica was allegedly hired as a consultant company for Leave.EU and the UK Independence Party during 2016 in an effort to convince people to support Brexit. These rumors were the result of leaked internal emails sent between the Cambridge Analytica firm and the British parliament. Brittany Kaiser declared that the datasets that Leave.EU used to create databases were provided by Cambridge Analytica. These datasets, composed of data obtained from Facebook, were said to be work done as an initial job deliverable for them. Although Arron Banks, co-founder of Leave.EU, denied any involvement with the company, he later declared “When we said we’d hired Cambridge Analytica, maybe a better choice of words could have been deployed." The official investigation by the UK Information Commissioner found that Cambridge Analytica was not involved "beyond some initial enquiries" and the regulator did not identify any "significant breaches" of data protection legislation or privacy or marketing regulations "which met the threshold for formal regulatory action". Responses Facebook and other companies Facebook CEO Mark Zuckerberg first apologized for the situation with Cambridge Analytica on CNN, calling it an "issue", a "mistake" and a "breach of trust". He explained that he was responding to the Facebook community's concerns and that the company's initial focus on data portability had shifted to locking down data; he also reminded the platform's users of their right of access to personal data. Other Facebook officials argued against calling it a "data breach", arguing that those who took the personality quiz had originally consented to give away their information. Zuckerberg pledged to make changes and reforms in Facebook policy to prevent similar breaches. On March 25, 2018, Zuckerberg published a personal letter in various newspapers apologizing on behalf of Facebook.
In April, Facebook decided to implement the EU's General Data Protection Regulation in all areas of operation, not just the EU. In April 2018, Facebook established Social Science One as a response to the event. On April 25, 2018, Facebook released its first earnings report since the scandal was reported. Revenue fell from the previous quarter, but this is usual, as that quarter had included the holiday season. The quarter's revenue was the highest for a first quarter, and the second-highest overall. Amazon said that it had suspended Cambridge Analytica from using its Amazon Web Services when it learned in 2015 that the service was collecting personal information. The Italian banking company UniCredit stopped advertising and marketing on Facebook in August 2018. Governmental actions The governments of India and Brazil demanded that Cambridge Analytica report how anyone used data from the breach in political campaigning, and various regional governments in the United States have lawsuits in their court systems from citizens affected by the data breach. In early July 2018, the United Kingdom's Information Commissioner's Office announced it intended to fine Facebook £500,000 ($663,000) over the data breach, this being the maximum fine allowed at the time of the breach, saying Facebook "contravened the law by failing to safeguard people's information". In March 2019, a court filing by the U.S. Attorney General for the District of Columbia alleged that Facebook knew of Cambridge Analytica's "improper data-gathering practices" months before they were first publicly reported in December 2015. In July 2019, the Federal Trade Commission voted 3–2 to approve fining Facebook around $5 billion to finally settle the investigation into the data breach. The record-breaking settlement was one of the largest penalties ever assessed by the U.S. government for any violation. Also in July 2019, Facebook agreed to pay $100 million to settle with the U.S.
Securities and Exchange Commission for "misleading investors about the risks it faced from misuse of user data". The SEC's complaint alleged that Facebook did not correct its existing disclosure for more than two years despite discovering the misuse of its users’ information in 2015. Impact on Facebook users and investors Since April 2018, the first full month after the breaking of the Cambridge Analytica data breach, the number of likes, posts and shares on the site had decreased by almost 20%, and has decreased ever since, with the aforementioned activity only momentarily increasing during the summer and during the 2018 US midterm elections. Despite this, user growth of the site has continued in the period since the increased media coverage, rising by 1.8% during the final quarter of 2018. On March 26, 2018, a little over a week after the story was initially published, Facebook stock fell by about 24%, equivalent to $134 billion. By May 10, Wall Street reported that the company had recovered its losses. #DeleteFacebook movement The public reacted to the data privacy breach by initiating the campaign #DeleteFacebook with the aim of starting a movement to boycott Facebook. The co-founder of WhatsApp, which is owned by Facebook, joined in on the movement by declaring it was time to delete the platform. The hashtag was tweeted almost 400,000 times on Twitter within a 30-day period after news of the data breach. 93% of the mentions of the hashtag appeared on Twitter, making it the main social media platform used to share the hashtag. However, a survey by investment firm Raymond James found that although approximately 84% of Facebook users were concerned about how the app used their data, about 48% of those surveyed claimed they wouldn't actually cut back on their usage of the social media network. Additionally, in 2018, Mark Zuckerberg commented that he didn't think the company had seen "a meaningful number of people act" on deleting Facebook.
An additional campaign and hashtag, #OwnYourData, was coined by Brittany Kaiser. The hashtag was created by Kaiser as a Facebook campaign that pushed for increased transparency on the platform. #OwnYourData was also used in Kaiser's petition for Facebook to alter its policies and give users increased power and control over their data, which she refers to as users’ assets and property. In addition to the hashtag, Kaiser also created the Own Your Data Foundation to promote increased digital intelligence education. The Great Hack The Facebook–Cambridge Analytica data scandal also received media coverage in the form of a 2019 Netflix documentary, The Great Hack. This is the first feature-length media piece that ties together the various elements of the scandal through a narrative. The documentary provides background information on the events related to Cambridge Analytica, Facebook, and the 2016 election that resulted in the overall data scandal. The Great Hack communicates the experiences and personal journeys of multiple individuals who were involved in the event in different ways and through different relationships. These individuals include David Carroll and Brittany Kaiser, among others. David Carroll is a New York professor in the field of media who attempted to navigate the legal system in order to discover what data Cambridge Analytica had in its possession about him. Meanwhile, Brittany Kaiser is a former Cambridge Analytica employee who ultimately became a whistleblower for the data scandal. Witness and expert testimony The United States Senate Judiciary Committee called witnesses to testify about the data breach and general data privacy. They held two hearings, one focusing on Facebook's role in the breach and privacy on social media, and the other on Cambridge Analytica's role and its impact on data privacy.
The former was held on April 10, 2018, where Mark Zuckerberg testified and Senators Chuck Grassley and Dianne Feinstein gave statements. The latter occurred on May 16, 2018, where Professor Eitan Hersh, Dr. Mark Jamison, and Christopher Wylie testified, while Senators Grassley and Feinstein again made statements. Mark Zuckerberg During his testimony before Congress on April 10, 2018, Zuckerberg said it was his personal mistake that he did not do enough to prevent Facebook from being used for harm. "That goes for fake news, foreign interference in elections and hate speech". During the testimony, Mark Zuckerberg publicly apologized for the breach of private data: "It was my mistake, and I'm sorry. I started Facebook, I run it, and I'm responsible for what happens here". Zuckerberg said that in 2013 Aleksandr Kogan had created a personality quiz app, which was installed by 300,000 people. The app was able to retrieve Facebook information, including that of the users' friends, which Kogan obtained. It was not until 2015 that Zuckerberg learned that this user information had been shared by Kogan with Cambridge Analytica. Cambridge Analytica was subsequently asked to remove all the data. It was later discovered by The Guardian, The New York Times and Channel 4 that the data had in fact not been deleted. Eitan Hersh In 2015, Professor Eitan Hersh published Hacking the Electorate: How Campaigns Perceive Voters, which analyzed the databases used for campaigns between 2008 and 2014. On May 16, 2018, Hersh, a professor of political science at Tufts University, testified before Congress as an expert on voter targeting. Hersh claimed that the voter targeting by Cambridge Analytica did not excessively affect the outcome of the 2016 election because the techniques used by Cambridge Analytica were similar to those of presidential campaigns well before 2016. 
Further, he claimed that the correlation between user "likes" and personality traits was weak, and thus the psychological profiling of users was also weak. Mark Jamison Dr. Mark Jamison, the director and Gunter Professor of the Public Utility Research Center at the University of Florida, testified before Congress on May 16, 2018 as an expert. Jamison reiterated that it was not unusual for presidential campaigns to use data like Facebook's to profile voters; Presidents Barack Obama and George W. Bush had also used models to micro-target voters. Jamison criticized Facebook for not being "clear and candid with its users", because users were not aware of the extent to which their data would be used. Jamison finished his testimony by saying that if the federal government were to regulate voter targeting on sites like Facebook, it would harm the users of those sites, because it would be too restrictive of the sites and would make things worse for regulators. Christopher Wylie On May 16, 2018, Christopher Wylie, who is considered the "whistleblower" on Cambridge Analytica and who served as Cambridge Analytica's Director of Research in 2013 and 2014, also testified to the United States Senate Judiciary Committee. He was considered a witness by both British and American authorities, and he claims he decided to whistle-blow to "protect democratic institutions from rogue actors and hostile foreign interference, as well as ensure the safety of Americans online." He claimed that at Cambridge Analytica "anything goes" and that Cambridge Analytica was "a corrupting force in the world." He detailed to Congress how Cambridge Analytica used Facebook's data to categorize people into groups based on political ideology. 
He also claimed that Eitan Hersh contradicted "copious amounts of peer-reviewed literature in top scientific journals, including the Proceedings of the National Academy of Science, Psychological Science, and Journal of Personality and Individual Differences" by saying that Facebook's categorizing of people was weak. Christopher Wylie also testified about Russian contact with Cambridge Analytica and the campaign, voter disengagement, and his thoughts on Facebook's response. Aftermath Following the downfall of Cambridge Analytica, a number of related companies were established by people formerly affiliated with it, including Emerdata Limited and Auspex International. At first, Julian Wheatland, the former CEO of Cambridge Analytica and former director of many SCL-connected firms, stated that they did not plan on reestablishing the two companies. Instead, the directors and owners of Cambridge Analytica and its London-based parent SCL Group strategically positioned themselves to be acquired in the face of bankruptcy procedures and lawsuits. While employees of both companies dispersed to successor firms, Cambridge Analytica and SCL were acquired by Emerdata Limited, a data processing company. Wheatland responded to news of this story by emphasizing that Emerdata would not inherit the SCL companies' existing data or assets, and that this information belonged to the administrators in charge of the SCL companies' bankruptcy. David Carroll, an American professor who sued Cambridge Analytica, stated that Emerdata was aiming to conceal the scandals and minimize further criticism. Carroll's lawyers argued that Cambridge Analytica's court administrators were acting unlawfully by liquidating the company's assets before a full investigation had been performed. While the administrators' handling resulted in SCL Group receiving a criminal conviction and a $26,000 fine, a U.K. court denied Carroll's lawsuit, allowing SCL to dissolve without turning over his data. 
In October 2021, following Facebook employee Sophie Zhang's whistleblowing about Facebook's activities, NPR revisited the Cambridge Analytica data scandal, observing that Facebook never took responsibility for its behavior and that consumers saw no benefit of reform as a result. See also AggregateIQ BeLeave The Great Hack, 2019 documentary film Russian interference in the 2016 Brexit referendum Timeline of investigations into Trump and Russia (2019) References External links BBC Coverage The Guardian Coverage Carole Cadwalladr @TED2019: Facebook's role in Brexit — and the threat to democracy New York Times Coverage The Guardian Article; Revealed Data breaches Big data Cambridge Analytica Facebook criticisms and controversies 2018 scandals Political scandals in the United Kingdom Political scandals in the United States Corporate scandals
29536747
https://en.wikipedia.org/wiki/OnlyOffice
OnlyOffice
OnlyOffice (formerly TeamLab), stylized as ONLYOFFICE, is a free software office suite developed by Ascensio System SIA, a company headquartered in Riga, Latvia. It features online document editors and a platform for document management, corporate communication, mail and project management. OnlyOffice is delivered either as SaaS or as an installation for deployment on a private network. Access to the system is provided through a private online portal. Properties The interface of OnlyOffice is divided into several modules: Documents, CRM, Projects, Mail, Community, Calendar and Talk. They are combined in a bundle called OnlyOffice Groups, which is a part of OnlyOffice Workspace together with OnlyOffice Docs. The Documents module is a document management and sharing system for OnlyOffice files. The integrated audio and video player allows playing media from files stored in OnlyOffice. The Projects module is designed for managing project stages: planning, team management and task delegation, monitoring and reporting. This module also includes Gantt charts for illustrating project stages and dependencies between tasks. The CRM module allows maintaining client databases, including transactions and potential sales, tasks, and client relationship history. This module also provides online billing and sales reports. The Mail module combines a mail server for creating own-domain mailboxes and a mail aggregator for centralized management of multiple mailboxes. The Calendar module allows planning and monitoring of personal and corporate events and of task deadlines in Projects and CRM, as well as sending and receiving invitations to events. The Community module offers corporate social network features: polls, a corporate blog and forums, news, orders and announcements, and a messenger. Technology It is technologically based on three components: Document Server, Community Server and Mail Server. 
The Document server provides the text document, spreadsheet and presentation editors and is written in JavaScript using the HTML5 Canvas element. The Community server hosts all functional modules of OnlyOffice. It is written in ASP.NET for Windows and runs on Mono for Linux distributions. The Mail server is a set of components that allows creating corporate mailboxes using default or custom domain names. Mail Server is based on the iRedMail package, which consists of Postfix, Dovecot, SpamAssassin, ClamAV, OpenDKIM and Fail2ban. Online editors OnlyOffice includes an online editing suite called OnlyOffice Docs. It combines text, spreadsheet and presentation editors with features similar to Microsoft's desktop editors (Word, Excel and PowerPoint). Since version 5.0 of the editors, the interface has featured a tabbed toolbar. The editors allow co-editing, commenting and chatting in real time and provide functions such as Revision History and Mail Merge. The beta version of the OnlyOffice Docs predecessor, Teamlab Document Editor, was introduced at CeBIT 2012 in Hannover. The product was built using Canvas, a part of HTML5 that allows dynamic, scriptable rendering of 2D shapes and bitmap images. The basic formats used in OnlyOffice Docs are OOXML (DOCX, XLSX, PPTX). Other supported formats (ODT, DOC, RTF, EPUB, MHT, HTML, HTM, ODS, XLS, CSV, ODP, PPT, DOTX, XLTX, POTX, OTT, OTS, OTP, and PDF/A) are processed with internal conversion to DOCX, XLSX or PPTX. The functionality of the suite can be extended using plugins (side applications). Users can choose from the existing list of plugins or create their own applications using the provided API. There is a connector to integrate the online editing suite with ownCloud. Desktop editors OnlyOffice Desktop is an offline version of the OnlyOffice editing suite. The desktop application supports collaborative editing features when connected to the portal, Nextcloud or ownCloud. 
It is offered free of charge for both personal and commercial usage. The desktop editors are cross-platform available for Windows 10, 8.1, 8, 7, Vista, and XP (x32 and x64), Debian, Ubuntu and other Linux distributions based on RPM, Mac OS 10.10 and newer. Besides platform-specific versions there is also a portable option. OnlyOffice Desktop Editors are available for installation as a snap package and AppImage. Editors are compatible with MS Office (OOXML) and OpenDocument (ODF) formats and support DOC, DOCX, ODT, RTF, TXT, PDF, HTML, EPUB, XPS, DjVu, XLS, XLSX, ODS, CSV, PPT, PPTX, ODP, DOTX, XLTX, POTX, OTT, OTS, OTP, and PDF-A. Like the online editing suite, the basic toolset of OnlyOffice Desktop can be upgraded using side plugins. The desktop editors are distributed under AGPL-3.0-only license for personal and commercial usage. OnlyOffice editors are also available as mobile application for iOS and Android. The application is called ONLYOFFICE Documents. In early 2019, OnlyOffice announced the launch of a developer preview of end-to-end encryption of documents (files themselves, online editing and collaboration) that involves blockchain technology and is included in the functionality of the desktop suite. History In 2009, a group of software developers, headed by Lev Bannov, launched a project called TeamLab, a platform for internal team collaboration that encompassed several social computing features (e.g. blog, forum, wiki, bookmarks). In March 2012, TeamLab introduced the first HTML5-based document editors at CeBIT. In July 2014, Teamlab Office was officially rebranded to OnlyOffice and the source code of the product was published on SourceForge and GitHub on terms of AGPL-3.0-only. In March 2016, the developers of OnlyOffice released a desktop application – OnlyOffice Desktop Editors, which is positioned as an open source alternative to Microsoft Office. In February 2017, the app for integration with ownCloud/Nextcloud was launched. 
In February 2018, OnlyOffice Desktop Editors became available as a snap package. In January 2019, OnlyOffice announced the release of end-to-end encryption functionality. In August 2019, Document Builder was published on GitHub under the AGPL-3.0-only license. In November 2019, OnlyOffice entered the AWS Marketplace. In January 2020, OnlyOffice launched its App Directory. In September 2020, OnlyOffice rebranded its product portfolio, introducing OnlyOffice Workspace, OnlyOffice Docs, and OnlyOffice Groups. It also released Groups (the collaboration platform) under the Apache license. In October 2020, OnlyOffice announced compliance with HIPAA. See also Collaboration platform Collaboration software List of collaborative software Project management software List of project management software References External links 2009 software Customer relationship management software Document management systems Office suites for Linux Project management software Software using the GNU AGPL license
147332
https://en.wikipedia.org/wiki/IBM%207090
IBM 7090
The IBM 7090 is a second-generation transistorized version of the earlier IBM 709 vacuum tube mainframe computer that was designed for "large-scale scientific and technological applications". The 7090 is the fourth member of the IBM 700/7000 series of scientific computers. The first 7090 installation was in December 1959. In 1960, a typical system sold for $2.9 million or could be rented for $63,500 a month. The 7090 uses a 36-bit word length, with an address space of 32,768 words (15-bit addresses). It operates with a basic memory cycle of 2.18 μs, using the IBM 7302 Core Storage core memory technology from the IBM 7030 (Stretch) project. With a processing speed of around 100 Kflop/s, the 7090 is six times faster than the 709, and could be rented for half the price. An upgraded version, the 7094, was up to twice as fast. It was withdrawn from sale on July 14, 1969, but systems remained in service for more than a decade after. Development and naming Although the 709 was a superior machine to its predecessor, the 704, it was being built and sold at the time that transistor circuitry was supplanting vacuum tube circuits. Hence, IBM redeployed its 709 engineering group to the design of a transistorized successor. That project became called the 709-T (for transistorized), which, because of the sound when spoken, quickly shifted to the nomenclature 7090 (i.e., seven-oh-ninety). Similarly, related machines such as the 7070 and other 7000 series equipment were sometimes called by names of digit-digit-decade (e.g., seven-oh-seventy). IBM 7094 An upgraded version, the IBM 7094, was first installed in September 1962. It has seven index registers, instead of three on the earlier machines. The 7094 console has a distinctive box on top that displays lights for the four new index registers. The 7094 introduced double-precision floating point and additional instructions, but is largely backward compatible with the 7090. 
Although the 7094 has four more index registers than the 709 and 7090, at power-on time it is in multiple tag mode, compatible with the 709 and 7090, and requires a Leave Multiple Tag Mode instruction in order to enter seven index register mode and use all seven index registers. In multiple tag mode, when more than one bit is set in the tag field, the contents of the two or three selected index registers are logically ORed, not added, together before the decrement takes place. In seven index register mode, if the three-bit tag field is not zero, it selects just one of seven index registers; however, the program can return to multiple tag mode with the instruction Enter Multiple Tag Mode, restoring 7090 compatibility. In April 1964, the first 7094 II was installed; it was almost twice as fast as the 7094 due to a faster clock cycle, dual memory banks and improved overlap of instruction execution, an early instance of pipelined design. IBM 7040/7044 In 1963, IBM introduced two new, lower-cost machines called the IBM 7040 and 7044. They have a 36-bit architecture based on the 7090, but with some instructions omitted or optional, and simplified input/output that allows the use of more modern, higher-performance peripherals from the IBM 1400 series. 7094/7044 Direct Coupled System The 7094/7044 Direct Coupled System (DCS) was initially developed by an IBM customer, the Aerospace Corporation, seeking greater cost efficiency and scheduling flexibility than IBM's IBSYS tape operating system provided. DCS used a less expensive IBM 7044 to handle input/output (I/O), with the 7094 performing mostly computation. Aerospace developed the Direct Couple operating system, an extension to IBSYS, which was shared with other IBM customers. IBM later introduced the DCS as a product. Transistors and circuitry The 7090 used more than 50,000 germanium alloy-junction transistors and (faster) germanium diffused-junction drift transistors. 
The 7090 used Standard Modular System (SMS) cards with current-mode logic, some using diffused-junction drift transistors. Instruction and data formats The basic instruction formats were the same as on the IBM 709:
A three-bit opcode (prefix), 15-bit decrement (D), three-bit tag (T), and 15-bit address (Y)
A twelve-bit opcode, two-bit flag (F), four unused bits, three-bit tag (T), and 15-bit address (Y)
Variations of the above with different allocations of bits 12-17 or of bits 18-35
Opcodes were documented in signed octal. The flag field indicated whether or not to use indirect addressing. The decrement field often contained an immediate operand to modify the results of the operation, or was used to further define the instruction type. The tag field might describe an index register to be operated on, or be used as described below. The Y field might contain an address, an immediate operand or an opcode modifier. For instructions where the tag field indicated indexing, the operation was:
T=0: use Y
7090: form the logical OR of the selected index registers and subtract it from Y
7094 in multiple tag mode (the power-on default): same as the 7090
7094 in seven index register mode: subtract the selected index register from Y
If there was no F field, or F was not all one bits, then the above was the effective address. Otherwise it was an indirect effective address; i.e., fetch the word at that location and treat its T and Y fields as described above. The data formats were:
Fixed-point numbers, stored in binary sign/magnitude format.
Single-precision floating-point numbers, with a magnitude sign, an eight-bit excess-128 exponent and a 27-bit magnitude (numbers were binary, rather than the hexadecimal format introduced later for System/360).
Double-precision floating-point numbers, introduced on the 7094, with a magnitude sign, an eight-bit excess-128 exponent, and a 54-bit magnitude. 
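The indexing rules described here can be sketched in Python. This is a modern illustration, not period software; the function name, the register mapping, and the `seven_index_mode` flag are my own labels for the behavior the text describes.

```python
def effective_address(y, tag, index_regs, seven_index_mode=False):
    """Compute an effective address from a 15-bit Y field and a 3-bit tag.

    index_regs maps register number (1..7) to its 15-bit contents.
    In multiple tag mode (7090, and 7094 at power-on), each set bit of
    the tag selects one of registers 1, 2 and 4, and the selected
    registers' contents are ORed, not added, before subtraction from Y.
    In seven index register mode (7094), a nonzero tag names exactly
    one of the seven registers, whose contents are subtracted from Y.
    """
    MASK = 0o77777                       # 15-bit address arithmetic
    if tag == 0:
        return y                         # T=0: use Y unmodified
    if seven_index_mode:
        xr = index_regs[tag]             # tag selects one register directly
    else:
        xr = 0
        for bit in (1, 2, 4):            # one register per tag bit
            if tag & bit:
                xr |= index_regs[bit]    # contents ORed together
    return (y - xr) & MASK
```

With registers 1 and 2 holding 1 and 2, a tag of 3 in multiple tag mode ORs them to 3 and subtracts that from Y, whereas in seven index register mode the same tag selects register 3 alone.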
The double-precision number was stored in memory in an even-odd pair of consecutive words; the sign and exponent in the second word were ignored when the number was used as an operand. Alphanumeric characters were six-bit BCD, packed six to a word. Octal notation was used in documentation and programming; console display lights and switches were grouped into three-bit fields for easy conversion to and from octal. Input/output The 7090 series features a data channel architecture for input and output, a forerunner of modern direct memory access I/O. Up to eight data channels can be attached, with up to ten IBM 729 tape drives attached to each channel. The data channels have their own very limited set of operations, called commands. These are used with tape (and later, disk) storage as well as card units and printers, and offered high performance for the time. Printing and punched card I/O, however, employed the same modified unit record equipment introduced with the 704, and was slow. It became common to use a less expensive IBM 1401 computer to read cards onto magnetic tape for transfer to the 7090/94. Output would be written onto tape and transferred to the 1401 for printing or card punching using its much faster peripherals, notably the IBM 1403 line printer. Later, IBM introduced the 7094/7044 Direct Coupled System; the 7044 handled spooling between its fast 1400-series peripherals and 1301 or 1302 disk files, and used data channel to data channel communication as the 7094's interface to the spooled data, with the 7094 primarily performing computations. There was also a 7090/7040 DCS. Software The 7090 and 7094 machines were quite successful for their time, and had a wide variety of software provided for them by IBM. In addition, there was a very active user community within the user organization, SHARE. 
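The 36-bit word layouts described above, single-precision floating point and six-character BCD packing, can be illustrated with a short decoder. This is a hypothetical modern sketch (the function names are mine, and the BCD helper returns raw 6-bit codes rather than mapping them to the period character set):

```python
def decode_single(word):
    """Decode a 36-bit single-precision float: 1 sign bit,
    8-bit excess-128 exponent, 27-bit binary fraction magnitude."""
    sign = -1.0 if (word >> 35) & 1 else 1.0
    exponent = (word >> 27) & 0xFF               # excess-128
    fraction = (word & ((1 << 27) - 1)) / (1 << 27)  # 0 <= fraction < 1
    return sign * fraction * 2.0 ** (exponent - 128)

def unpack_bcd(word):
    """Split a 36-bit word into its six 6-bit BCD character codes,
    leftmost character first."""
    return [(word >> shift) & 0o77 for shift in range(30, -6, -6)]
```

For example, a word with exponent field 128 and a fraction of binary 0.1 decodes to 0.5, and writing a word as twelve octal digits makes each pair of digits one BCD character, which is why octal grouping was so convenient on this machine.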
IBSYS is a "heavy duty" production operating system with numerous subsystem and language support options, among them FORTRAN, COBOL, SORT/MERGE, the MAP assembler, and others. FMS, the Fortran Monitor System, was a more lightweight but still very effective system optimized for batch FORTRAN and assembler programming. The assembler provided, FAP (FORTRAN Assembly Program), was somewhat less complete than MAP, but provided excellent capabilities for the era. FMS also incorporated a considerably enhanced derivative of the FORTRAN compiler originally written for the 704 by Backus and his team. Notable applications The Compatible Time-Sharing System (CTSS), one of the first time-sharing operating systems, was developed at MIT's Computation Center using a 7090 with an extra bank of memory, among other modifications; it eventually ran on two separate 7094s, one of them at Project MAC. NASA used 7090s, and later 7094s, to control the Mercury and Gemini space flights. Goddard Space Flight Center operated three 7094s. During the early Apollo Program, a 7094 was kept operational to run flight planning software that had not yet been ported to mission control's newer System/360 computers. Caltech/NASA Jet Propulsion Laboratory had three 7094s in the Space Flight Operations Facility (SFOF, building 230), fed via tape using several 1401s, and two 7094/7044 direct-coupled systems (in buildings 125 and 156). An IBM 7090 was installed at LASL, the Los Alamos Scientific Laboratory (now Los Alamos National Laboratory). In 1961, Alexander Hurwitz used a 7090 to discover two Mersenne primes, with 1,281 and 1,332 digits—the largest known prime number at the time. In 1961, Michael Minovitch used UCLA's 7090 to tackle the three-body problem. His research was the scientific foundation of NASA's Planetary Grand Tour project. On February 13, 1961, an IBM 7090 was installed at the Woomera Long Range Weapons Establishment in South Australia. 
In 1962, a pair of 7090s in Briarcliff Manor, New York, were the basis for the original version of the SABRE airline reservation system introduced by American Airlines. The composer Iannis Xenakis wrote his piece "Atrées" using an IBM 7090 at Place Vendôme, Paris. In 1962, Daniel Shanks and John Wrench used an IBM 7090 to compute the first 100,000 digits of π. In 1963, three 7090 systems were imported into and installed in Japan, one each at Mitsubishi Nuclear Power Co. (whose DP division later merged with Mitsubishi Research Institute, Inc.), IBM Japan's data center in Tokyo, and Toshiba in Kawasaki. They were mainly used for scientific computing. In 1964, an early version of TRACE, a high-precision orbit determination and orbit propagation program, was used on an IBM 7090 computer. Operation Match, the first computer dating service in the U.S., begun in 1965, used a 7090 at the Avco service bureau in Wilmington, Massachusetts. In 1967, Roger N. Shepard adapted M.V. Mathews' algorithm using an IBM 7090 to synthesize Shepard tones. The US Air Force retired its last 7090s in service from the Ballistic Missile Early Warning System ("BMEWS") in the 1980s, after almost 30 years of use. 7090 serial number 1 and serial number 3 were installed at Thule Air Base in Greenland for this application. The US Navy continued to use a 7094 at the Pacific Missile Test Center, Point Mugu, California, through much of the 1980s, although a "retirement" ceremony was held in July 1982. Not all of the applications had been ported to its successor, a dual-processor CDC Cyber 175. In the media A 7090/1401 installation is featured in the motion picture Dr. Strangelove, with the 1403 printer playing a pivotal role in the plot. An IBM 7090 is featured in the 2016 American biographical film Hidden Figures. IBM 7094 specs are visible scrolling on a screen in the 1997 film Event Horizon. 
See also 9PAC Early IBM disk storage IBM 701 IBM 704 IBM 709 IBM 711 card reader IBM 716 line printer IBM 729 tape drive SHARE and IBSYS operating systems SQUOZE UNIVAC 1100/2200 series, UNIVAC's 36-bit scientific computing family University of Michigan Executive System References Further reading External links IBM Archives - 7090 IBM 7090 Data Processing System from BRL61 Report IBM 7090/94 Architecture page IBM 7090 Music From Mathematics recorded in 1960 by Bell Labs, using the "Digital to Sound Transducer" to realize several traditional and original compositions; this album contains the original Daisy (Bicycle Built for Two). IBM 7094 singing Daisy (mp3) Bob Supnik's SimH project – Includes a simulator for the 7090/7094 in a user-modifiable package Dave Pitts' IBM 7090 support – Includes a simulator, cross assembler and linker The IBM 7094 and CTSS, Tom Van Vleck 7090 7 7090 Computer-related introductions in 1959 36-bit computers
24132
https://en.wikipedia.org/wiki/Priam
Priam
In Greek mythology, Priam was the legendary king of Troy during the Trojan War. He was the son of Laomedon. His many children included notable characters such as Hector and Paris. Etymology Most scholars take the etymology of the name from the Luwian 𒉺𒊑𒀀𒈬𒀀 (Pa-ri-a-mu-a-, or "exceptionally courageous"), attested as the name of a man from Zazlippa, in Kizzuwatna. A similar form is attested in Greek transcription as Paramoas, near Kaisareia in Cappadocia. A popular folk etymology derives the name from a Greek verb meaning 'to buy'. This in turn gives rise to a story of Priam's sister Hesione ransoming his freedom from Heracles with a golden veil that Aphrodite herself once owned, thereby 'buying' him. This story is attested in the Bibliotheca and in other influential mythographical works dated to the first and second centuries AD. These sources are, however, dated much later than the first attestations of the name Priamos or Pariya-muwas, and are thus more problematic. Life In Book 3 of Homer's Iliad, Priam tells Helen of Troy that he once helped King Mygdon of Phrygia in a battle against the Amazons. When Hector is killed by Achilles, the Greek warrior treats the body with disrespect and refuses to give it back. According to Homer in Book XXIV of the Iliad, Zeus sends the god Hermes to escort King Priam, Hector's father and the ruler of Troy, into the Greek camp. Priam tearfully pleads with Achilles to take pity on a father bereft of his son and return Hector's body. He invokes the memory of Achilles' own father, Peleus. Priam begs Achilles to pity him, saying "I have endured what no one on earth has ever done before – I put my lips to the hands of the man who killed my son." Deeply moved, Achilles relents and returns Hector's corpse to the Trojans. Both sides agree to a temporary truce, and Achilles gives Priam leave to hold a proper funeral for Hector, complete with funeral games. 
He promises that no Greek will engage in combat for at least nine days, but on the twelfth day of peace, the Greeks would all stand once more and the mighty war would continue. Priam is killed during the Sack of Troy by Achilles' son Neoptolemus (also known as Pyrrhus). His death is graphically related in Book II of Virgil's Aeneid. In Virgil's description, Neoptolemus first kills Priam's son Polites in front of his father as he seeks sanctuary on the altar of Zeus. Priam rebukes Neoptolemus, throwing a spear at him, harmlessly hitting his shield. Neoptolemus then drags Priam to the altar and there kills him too. Priam's death is alternatively depicted in some Greek vases. In this version, Neoptolemus clubs Priam to death with the corpse of the latter's baby grandson, Astyanax. It has been suggested by Hittite sources, specifically the Manapa-Tarhunta letter, that there is historical basis for the archetype of King Priam. The letter describes one Piyama-Radu as a troublesome rebel who overthrew a Hittite client king and thereafter established his own rule over the city of Troy (mentioned as Wilusa in Hittite). There is also mention of an Alaksandu, suggested to be Alexander (King Priam's son from the Iliad), a later ruler of the city of Wilusa who established peace between Wilusa and Hatti (see the Alaksandu treaty). Marriage and children See List of children of Priam Priam is said to have fathered fifty sons and many daughters, with his chief wife Hecuba, daughter of the Phrygian king Dymas and many other wives and concubines. These children include famous mythological figures such as Hector, Paris, Helenus, Cassandra, Deiphobus, Troilus, Laodice, Polyxena, Creusa, and Polydorus. Priam was killed when he was around 80 years old by Achilles' son Neoptolemus. 
Family tree Cultural depiction King Priam, a 1962 opera by Michael Tippett See also Priam's Treasure Notes References Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library. Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library. Smith, William; Dictionary of Greek and Roman Biography and Mythology, London (1873). "Priamus" Mythological kings of Troy Kings in Greek mythology Trojans Characters in the Aeneid Characters in the Iliad Characters in Greek mythology Mythology of Heracles
2846262
https://en.wikipedia.org/wiki/Psygnosis
Psygnosis
Psygnosis Limited (known as SCE Studio Liverpool or simply Studio Liverpool from 1999) was a British video game developer and publisher headquartered at Wavertree Technology Park in Liverpool. Founded in 1984 by Ian Hetherington, Jonathan Ellis, and David Lawson, the company initially became known for well-received games on the Atari ST and Commodore Amiga. In 1993, it became a wholly owned subsidiary of Sony Computer Entertainment (SCE) and began developing games for the original PlayStation. It later became a part of SCE Worldwide Studios. The company was the oldest and second largest development house within SCE's European stable of developers, and became best known for franchises such as Lemmings, Wipeout, Formula One, and Colony Wars. Reports of Studio Liverpool's closure surfaced on 22 August 2012, with Edge quoting staff tweets. Staff members were told the news by Michael Denny, vice president of Sony Worldwide Studios Europe. Sony said that the Liverpool site would remain in operation, as it was still home to many Sony departments. At the time of its closure, it employed roughly 100 people comprising two development teams. Mick Hocking oversaw Studio Liverpool's operations as its last Group Studio Director, a position he continued to hold within Evolution Studios. Psygnosis still exists as a legal entity under Sony and continues to make legal filings, but has had no developers since 2012. In December 2021, Sony renewed Psygnosis' logo and trademarks despite not having used the Psygnosis branding since 2000, though this is thought to be standard filing practice, as trademarks last for a decade in the United States and Sony had previously filed renewal applications in 2011 as well. History As Psygnosis Psygnosis was the eventual successor of the defunct 8-bit software house Imagine Software, where Lawson was one of the founders and Hetherington was financial director. 
Finchspeed, a company created by the directors, attempted to acquire the assets of the failing company but this was unsuccessful and the remains of Imagine, including their much-hyped but never completed "megagames", were sold by the receivers. While the name and trademarks were bought by Ocean Software, Sinclair Research paid a rumoured £100,000 for the rights to Bandersnatch and contracted a new company set up by Hetherington and Lawson, Fire Iron, to produce the game for the Sinclair QL for release in early 1985. Sinclair withdrew funding from Fire Iron in early 1985 and Psygnosis, which became a limited company under United Kingdom company law in July 1985, launched their first title Brataccas, which featured many of the concepts originally intended for Bandersnatch, at the 1985 Personal Computer World show in September. The name of another Imagine Megagame (the proposed but never developed Psyclapse) was later used by Psygnosis as an alternative label for some of their releases, such as Ballistix and Captain Fizz Meets The Blaster-Trons. Their box artwork was very distinctive with a black background and fantasy artwork by Roger Dean bordered in red. This style was maintained for the better part of 10 years. For the next few years, Psygnosis' releases contained increasingly improved graphics, but were marred by similarly difficult gameplay and control methods. The original company headquarters were located at the Port of Liverpool Building at the Pier Head in Liverpool, but soon moved to Century Buildings in Liverpool's Brunswick Business Park, and later moved down the road to South Harrington Building by the docks. Although Psygnosis primarily became a game publisher, some games were developed fully or partly in-house. During the early days, artists were employed full-time at the headquarters, offering third-party developers, who were often just single programmers, a high-quality art resource. 
This allowed Psygnosis to maintain high graphical standards across the board. The original artists were Garvan Corbett, Jeff Bramfitt, Colin Rushby and Jim Bowers, with Neil Thompson joining a little later. Obliterator, released in 1988, contained an opening animation by Jim Bowers. This short scene would pave the way for increasingly sophisticated intro animations, starting with 2D hand drawn sequences, and progressing into FMV and 3D rendered movies created with Sculpt 4D on the Amiga. Eventually, Psygnosis would buy Silicon Graphics workstations for the sole purpose of creating these animations. While most game companies of the mid-to-late 1980s (including Psygnosis) were releasing identical games on both the Amiga and Atari ST, Psygnosis started to use the full potential of the Amiga's more powerful hardware to produce technically stunning games, with the landmark title Shadow of the Beast bringing the company its greatest success so far in 1989. Its multi-layered parallax scrolling and music were highly advanced for the time and as such led to the game being used as a showcase demonstration for the Amiga in many computer shops. Psygnosis consolidated its fame after publishing the DMA Design Lemmings game franchise: debuting in 1991 on the Amiga, Lemmings was ported to a plethora of different computer and video game platforms, generating many sequels and variations of its concept through the years. Microcosm, a game that appeared on the FM Towns, Amiga CD32, and 3DO furthered the company's reputation for games with excellent graphics but limited and poorly designed gameplay. Psygnosis also created the "Face-Off" games in the Nickelodeon 1992 television game show, Nick Arcade, such as "Post Haste", "Jet Jocks" and "Battle of the Bands". In 1993, the company was acquired by Sony Electronic Publishing. 
In preparation for the September 1995 introduction of Sony's PlayStation console in Western markets, Psygnosis started creating games using the PlayStation as primary reference hardware. Among the most famous creations of this period were Wipeout, G-Police, and the Colony Wars series, some of which were ported to PC and to other platforms. The PlayStation marked a turning point in Psygnosis's game design, moving away from the prerendered graphics and limited gameplay that the company had become associated with. This was a successful period for the company; in the 1995-96 financial year, Psygnosis games accounted for 40% of all video games sales in Europe. The acquisition was rewarding for Sony in another aspect: development kits for PlayStation consoles. As it had previously published PSY-Q development kits for various consoles by SN Systems, Psygnosis arranged for them to create a development system for the PS based on cheap PC hardware. Sony evaluated the system during CES in January 1994 and decided to adopt it. As Psygnosis expanded after the Sony buyout, another satellite office was opened in Century Building with later offices opening in Stroud, London, Chester, Paris, Germany, and Foster City in California (as the Customer Support & Marketing with software development done in San Francisco), now the home of Sony Computer Entertainment America. The company headquarters has resided at Wavertree Technology Park since 1995. The Stroud studio was opened in November 1993 in order to attract disgruntled MicroProse employees. Staff grew from initially about 50 to about 70 in 1997. Among the titles created at Stroud are Overboard! and G-Police. The Wheelhouse—its publishing name—was closed in 2000 as part of the Sony Computer Entertainment takeover of Psygnosis. Some members joined Bristol-based Rage Software, but faced a similar demise a number of years later. 
Despite being owned by Sony, Psygnosis retained a degree of independence from its parent company during this period and continued to develop and publish titles for other platforms, including the Sega Saturn and the Nintendo 64. This caused friction between Psygnosis and Sony, and in 1996 Sony engaged SBC Warburg's services in finding a buyer for Psygnosis. However, though bids reportedly went as high as $300 million (more than ten times what Sony paid for the company just three years before), after six months Sony rescinded its decision to sell Psygnosis. Relations between the two companies had improved during this time, and Sony became reconciled to Psygnosis releasing games for competing platforms. Shortly after, Psygnosis took over distribution of its own titles, a task that Sony had been handling following the buyout. As Studio Liverpool In 1999, a process to consolidate Psygnosis into Sony Computer Entertainment was underway, resulting in the bulk of Psygnosis' sales, marketing and PR staff being made redundant and the development teams reporting directly into Sony Computer Entertainment Europe's president of software development. To reflect this, in 2000, the Psygnosis brand was dropped in favour of SCE Studio Liverpool. The newly named SCE Studio Liverpool released its first title, Formula One 2001, in 2001. The game was also the studio's first release on the PlayStation 2, and the first entry in the Formula One series after taking over from developer Studio 33. From 2001 to 2007, Studio Liverpool released 8 installments in the series between the PlayStation 2, PlayStation Portable and PlayStation 3. However, Sony Computer Entertainment's exclusive licence with the Formula One Group expired, without renewal, before the 2007 season, marking the end of any further Formula One series installments from the developer. Studio Liverpool also created Wipeout Fusion, the first of two installments of the series on the PlayStation 2, released in 2002. 
Next they developed Wipeout Pure for the PlayStation Portable, which launched alongside the handheld in 2005 to significant acclaim, with many media outlets heralding it as a return to glory for the series. They followed up with the sequel Wipeout Pulse in 2007, which was later ported to the PlayStation 2 and released in Europe. In 2008, they released Wipeout HD, a downloadable title for the PlayStation 3's PlayStation Network service, consisting of various courses taken from both Wipeout Pure and Wipeout Pulse remade in high definition. An expansion pack for Wipeout HD named Wipeout HD Fury is available on the PlayStation Network, including new game modes, new tracks, new music and new ship skins/models. In 2007, a copy of Manhunt 2 was leaked online by an employee from the Sony Europe Liverpool office prior to its release. On 29 January 2010, Sony made a public statement. The closure of Studio Liverpool was announced on 22 August 2012. In a press release, Sony stated that after an assessment of all European studios, it had decided to close Studio Liverpool. Sony said that the Liverpool site would remain in operation, as it is home to a number of Sony World Wide Studios and SCEE departments. Eurogamer was told by an unnamed source that, at the time of its closure, Studio Liverpool was working on two PlayStation 4 launch titles. One was a Wipeout title described as "dramatically different"; the other was a motion capture-based game along the lines of Tom Clancy's Splinter Cell. Spin-off studios In 2013, a number of former Studio Liverpool employees formed two new studios: Firesprite, which worked on the visuals of The Playroom for the PlayStation 4, and Playrise Digital, which had success with its Table Top Racing games. In September 2021, Sony acquired Firesprite. XDev XDev, Sony's external development studio, is responsible for managing the development of titles at developers that are outside of Sony's own developer group. 
It has won 14 British Academy (BAFTA) video game awards and AIAS awards for LittleBigPlanet, 3 BAFTA awards for the Buzz! series and Develop Industry Excellence Awards for MotorStorm and Buzz!. Games Games developed or published as Psygnosis Games developed as SCE Studio Liverpool See also London Studio Guerrilla Cambridge Evolution Studios Bigbig Studios Tim Wright References External links 1984 establishments in England 1993 mergers and acquisitions 2012 disestablishments in England British companies disestablished in 2012 British companies established in 1984 Defunct companies based in Liverpool Defunct video game companies of the United Kingdom Sony Interactive Entertainment game studios Video game companies disestablished in 2012 Video game companies established in 1984 Video game development companies Video game publishers British subsidiaries of foreign companies
56244619
https://en.wikipedia.org/wiki/DataGravity
DataGravity
DataGravity Inc. was a data management company that produced security software. The company was founded in April 2012 by Paula Long and John Joseph. DataGravity announced its first products at VMworld in 2014, where it won Best of Show and New Technology awards. It began shipping its first products in October 2014. The company focused on protection and security of the data stored on the array, and named this new type of storage data-aware storage. It publicly changed its product strategy in February 2016 from data storage appliances to a software solution focused on behavioral data security. This product strategy change resulted in multiple rounds of layoffs. Fate of the Company Multiple reports use conflicting terminology about the final fate of the company. Some reports say HyTrust acquired DataGravity. Other reports, including a press release issued by HyTrust itself, say HyTrust acquired the assets of DataGravity after it was signed over to a liquidator. HyTrust told Fortune that founder and CEO Paula Long left DataGravity a few weeks before the transaction was announced, and that co-founder John Joseph left some time before that. According to some reports, DataGravity ceased day-to-day operations in June 2017, when it cancelled employee benefit plans and signed the company over to liquidator Barry Kallander of the Kallander Group. In one such report, correspondence from DataGravity President Barry Kallander states "The corporation was not sold - the assets of the company were....Unfortunately the common shares are worthless." Conversely, DataGravity CTO David Siles was quoted as saying the company "did not shut down", and that the transaction "wasn't a fire sale. We were acquired because we complete a vision, add value, have customers who love what we do. Together we will offer a very compelling offering to the marketplace solving very pressing needs for many enterprises." 
Approximately 20 former DataGravity employees joined HyTrust to support DataGravity's product integration, led by former DataGravity CTO David Siles. DataGravity's products remain a part of HyTrust's portfolio under its CloudAdvisor suite. References Software companies of the United States Storage software Computer security software companies Security software American companies established in 2012 Software companies established in 2012
28819911
https://en.wikipedia.org/wiki/OpenIndiana
OpenIndiana
OpenIndiana is a free and open-source Unix operating system derived from OpenSolaris and based on illumos. Forked from OpenSolaris after OpenSolaris was discontinued by Oracle Corporation, OpenIndiana takes its name from Project Indiana, the internal codename for OpenSolaris at Sun Microsystems before Oracle’s acquisition of Sun in 2010. Created by a development team led by Alasdair Lumsden, the OpenIndiana project is now stewarded by the illumos Foundation, which develops and maintains the illumos operating system. The project aims to make OpenIndiana “the de facto OpenSolaris distribution installed on production servers where security and bug fixes are provided free of charge.” History Origins Project Indiana was originally conceived by Sun Microsystems, to construct a binary distribution around the OpenSolaris source code base. Project Indiana was led by Ian Murdock, founder of the Debian Linux distribution. OpenIndiana was conceived after negotiations of a takeover of Sun Microsystems by Oracle were proceeding, in order to ensure continued availability and further development of an OpenSolaris-based OS, as it is widely used. Uncertainty among the OpenSolaris development community led some developers to form tentative plans for a fork of the existing codebase. These plans came to fruition following the announcement of discontinuation of support for the OpenSolaris project by Oracle. Initial reaction The formal announcement of the OpenIndiana project was made on September 14, 2010, at the JISC Centre in London. The first release of the operating system was made available publicly at the same time, despite being untested. The reason for the untested release was that the OpenIndiana team set a launch date ahead of Oracle OpenWorld in order to beat the release of Solaris 11 Express. 
The announcement of OpenIndiana was met with a mainly positive response; over 350 people viewed the online announcement, the ISO image was downloaded over 2000 times, the Twitter account obtained over 500 followers, and numerous notable IT press websites wrote about the release. The broadcast bandwidth of the announcement was substantial, noted to top 350 Mbit/second. The network package depot server experienced 20x as much traffic interested in the distribution as the team had originally planned for, resulting in more threads later being provisioned. Not all reporting was positive, though, as some online articles questioned the relevance of Solaris given the market penetration of Linux. One article was critical of the OpenIndiana launch, citing a lack of professionalism with regard to releasing an untested build, and the project's lack of commitment to a release schedule. The initial OpenIndiana release was advertised as experimental and directly based on the latest OpenSolaris development build, preliminary to the OpenSolaris 2010 release. Community building With the OpenSolaris binary distribution moved to SolarisExpress and the real-time feed of OpenSolaris updates discontinued, concerns abounded over what would happen to OpenIndiana if Oracle decided to stop feeding source code back into the community. The OpenIndiana team mitigated these concerns when they announced their intention to move the source code feed to the illumos Foundation. Concerns were raised about possible discontinuation of free access to the Oracle-owned compiler being used to produce OpenIndiana. In response, OpenIndiana was modified to be able to compile under the open-source GNU Compiler Collection. The Hardware Compatibility List (HCL) remains somewhat informal, fragmented and uncentralized, requiring much end-user research for hardware selection. 
The lack of a comprehensive centralized HCL follows from the fact that the OpenSolaris HCL was hosted on Oracle server infrastructure and the server-side code for the Device Driver Utility submission was not made available. In August 2012, founding project lead Alasdair Lumsden stepped down from the project, citing personal reasons and frustration with the lack of progress made on the project. Among the reasons for lack of progress were a lack of developers and resources. In his resignation, Lumsden wrote, "For many of us this was the first open source project we had ever contributed to, myself included. The task at hand was vast, and we were ill equipped to deal with it." Since Lumsden's resignation, the project is developed by a team of volunteers and is a completely horizontal and participative community effort. Media reception A September 2013 DistroWatch review stated that the OpenIndiana project has "seemingly been in steady decline for the last couple of years." The same review concluded that OpenIndiana had not progressed significantly from the state of OpenSolaris five years before: A May 2015 DistroWatch review of OpenIndiana similarly concluded that little major progress had been made to the system over the years. The review stated that the package selection and hardware support seemed to lag behind other systems, while many of the system administration features had either been replicated or ported to Linux and BSD. The review concludes that: Claims about lack of package support may be mitigated by the fact that the 3500+ software packages provided by OpenIndiana Hipster are not split into several packages, which would artificially increase the package count (e.g. like in Linux distributions): the Image Packaging System is a file-based package management system providing incremental updates and package facets, making such splitting an unnecessary burden. 
In the course of the first two years of its existence, the Hipster project has migrated and updated over 1500 packages: it maintains a collection of selected software packages while relying on third-party repositories like SFE for add-ons. For extended selection, the pkgsrc system supported by Joyent readily provides 20000+ packages for illumos systems. Relation to other operating systems OpenIndiana is a fork in the technical sense but it is a continuation of OpenSolaris in spirit. The project intends to deliver a System V family operating system which is binary-compatible with the Oracle products Solaris 11 and Solaris 11 Express. However, rather than being based on the OS/Net consolidation like OpenSolaris was, OpenIndiana is based on illumos. The project does use the same Image Packaging System (IPS) package management system as OpenSolaris. While the OpenIndiana codebase was initially based on the majority of publicly available code from Oracle, this is not the case since the oi_151a Development Builds which are based on illumos from September 2011 onwards. The project has effectively moved away from Oracle-owned tools such as Sun Studio: all builds since 2013, including the active Hipster branch, use the GNU Compiler Collection (GCC) as sole compiler. The illumos project itself is built with GCC since June 15, 2012. Release schedule Experimental Builds The first experimental release of OpenIndiana, Build 147, was released on September 14, 2010; the second experimental release, Build 148, was released on December 17, 2010. Development Builds A first development release, Build 151 was released on September 14, 2011. This is the first release to be based upon illumos. MartUX 151a0 was released as the first SPARC build for OpenIndiana. Build 151a7 for Intel/AMD architectures was released on October 6, 2012. Build 151a8 was released August 10, 2013. 
OpenSXCE 2013.01 SPARC Build 151a, formerly MartUX, was released through OpenIndiana on February 1, 2013, as the second and possibly last OpenIndiana SPARC build, with subsequent releases based upon DilOS. Hipster Since the development model inherited from the OpenSolaris project was unsuitable for a community project, the Hipster initiative was created late 2013 to reboot and modernize OpenIndiana. The Hipster project is a fast development branch of OpenIndiana based on a rolling-release model and a horizontal contribution scheme through the oi-userland build system and the use of continuous integration. Hipster is actively maintained: the repository receives software updates as well as security fixes, and installation images are published twice a year. Every snapshot release is announced via mailing list and Twitter. The first snapshot release was delivered on February 14, 2014, and subsequent snapshots were based on a six-month development cycle. Some notable features of Hipster: MATE as the default desktop environment (since Hipster 2016.10) Update to newer illumos KVM Update of the graphic stack with newer Xorg and DRM support Support for FUSE and NTFS-3G Support for multimedia software Support for third-party SFE repository providing LibreOffice Migration to GCC as default compiler Migration of legacy software consolidations to unified build system The list of features is updated for each development cycle on the Roadmap page of the issue tracker. References External links List of supported hardware OpenIndiana Officially Announced Announcement on OSNews OpenSolaris-derived software distributions Software forks Solaris software X86 operating systems 2010 software
28356751
https://en.wikipedia.org/wiki/PowerUP%20%28accelerator%29
PowerUP (accelerator)
PowerUP boards were dual-processor accelerator boards designed by Phase5 Digital Products for Amiga computers. They had two different processors, a Motorola 68000 series (68k) and a PowerPC, working in parallel, sharing the complete address space of the Amiga computer system. History In 1995, Amiga Technologies GmbH announced they were going to port AmigaOS to PowerPC. As part of their Power Amiga plan, Amiga Technologies was going to launch new Power Amiga models using the PowerPC 604e reduced instruction set computer (RISC) CPU, and in cooperation with Amiga Technologies, Phase5 would release AmigaOS 4-compatible PowerPC accelerator boards for the old Amiga 1200, Amiga 3000 and Amiga 4000 models. However, in 1996 Amiga Technologies' parent company ESCOM ran into deep financial problems and could not support Amiga development. Due to a lack of resources, the PowerPC project at Amiga Technologies stalled and Phase5 had to launch accelerators without a PowerPC-native AmigaOS. As a stopgap solution, a new PowerUP kernel was created, allowing new PPC-native software to run in parallel with the 68k AmigaOS. To complicate things even further, former Commodore International chief engineer Dave Haynie questioned Phase5's plans to develop PowerPC boards without Amiga Technologies: "Their approach on the software front is kind of a hack, and on the hardware front it's just too much like the old Commodore; at best, they'll wind up with interesting, non-standard, and overpriced machines that can't keep up with the rapid changes in the industry." Nevertheless, Phase5 decided to go their own way and develop a PowerPC-based AmigaOS-compatible computer without Amiga Technologies. They also announced plans to write a new AmigaOS-compatible operating system. Wolf Dietrich (managing director of Phase5) earlier commented that "we found that Amiga Technologies offers us no sort of outlook or basis for developing into the future". 
There is no detailed information about how many PowerPC accelerator boards Phase5 (and later DCE) sold. According to Ralph Schmidt in an AmigActive article featuring MorphOS, there were about 10,000 people using Phase5 PowerPC accelerator boards. The unofficial PowerUP support page estimates similar figures. PowerUP software The PowerUP kernel is a multitasking kernel developed by Ralph Schmidt for Phase5 PowerPC accelerator boards. The kernel ran alongside the AmigaOS, where PPC and 68k native software could run in parallel. The PowerUP kernel used the Executable and Linkable Format (ELF) as its executable format and supported runtime linking, relocations and custom sections; it used the GNU Compiler Collection (GCC) as its default compiler. This caused controversy in the Amiga community, as some developers thought that Phase5 was bringing "too Unixish stuff" to the Amiga. It was feared that the PowerUP kernel's introduction of shared objects and dynamic linking would replace the original shared library model; shared objects were indeed later adapted into AmigaOS. Another controversy was caused by the different designs and purposes of the Blizzard PPC and CyberStorm PPC boards. The Blizzard PPC was designed to fit the Amiga 1200 as a standalone device which did not require installing additional software but utilised the Amiga's unique AutoConfig feature. This caused problems for some third-party developers who developed their own PPC kernels for PowerUP cards, since these could not work on the Amiga 1200 without removing the PowerUP kernel first. A few hundred titles were released for PowerUP, including TurboPrint PPC, Amiga datatypes, MP3 and MPEG players, games (the Quake and Doom video games, to mention a few) and various plugins, including a Flash Video plugin for the Voyager web browser. PowerUP hardware Blizzard 2604e On May 12, 1997, Phase5 announced a PowerUP accelerator board for the Amiga 2000 line of computers. The card never got past the prototype stage and hence was never released to the public. 
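The PowerUP kernel's adoption of ELF can be illustrated concretely: every ELF executable begins with a fixed four-byte magic number, which is how a loader can distinguish it from other executable formats. The sketch below (in Python, with a helper name of our own choosing, not part of any PowerUP tooling) checks for it:

```python
def is_elf(path):
    """Return True if the file at `path` starts with the ELF magic number."""
    with open(path, "rb") as f:
        magic = f.read(4)
    # Every ELF file begins with byte 0x7F followed by the letters "ELF".
    return magic == b"\x7fELF"
```

Other executable formats, such as the classic AmigaOS hunk format, begin with different magic values, so loaders can tell the formats apart from the first few bytes alone.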
PowerPC 604e at 150, 180 or 200 MHz
68040 at 25 MHz or 68060 at 50 MHz
Four 72-pin SIMM sockets accepting 128 MB RAM, 64-bit wide
Ultra Wide SCSI controller
Expansion slot for the CyberVision PPC
Blizzard PPC
Also known as Blizzard 603e, this accelerator board was designed for the Amiga 1200 and plugged into the trapdoor slot. It used a low-cost, low-end PowerPC 603e processor designed for portable and embedded use.
PowerPC 603e at 160, 200 or 240 MHz
68040 or 68LC040 at 25 MHz or 68060 at 50 MHz
Two 72-pin SIMM sockets accepting 256 MB RAM, 32-bit wide
SCSI II controller (Blizzard 603e+ models only)
Expansion slot for the BlizzardVision PPC
CyberStorm PPC
This accelerator board was designed for the Amiga 3000 and Amiga 4000. The accelerator board was famous for its high performance due to its 64-bit-wide memory bus and PowerPC 604e processor. According to Phase 5 it could sustain memory transfers up to 68 MB/s on the 68060 and up to 160 MB/s on the 604e.
PowerPC 604e at 150, 180, 200 or 233 MHz
68040 at 25 MHz or 68060 at 50 MHz
Four 72-pin SIMM sockets accepting 128 MB RAM, 64-bit wide
Ultra Wide SCSI controller
Expansion slot for the CyberVision PPC
CyberVision PPC, BlizzardVision PPC
CyberVision PPC and BlizzardVision PPC (BVision PPC) were graphics board add-ons for the CyberStorm PPC and Blizzard PPC accelerator boards. The BlizzardVision PPC could be installed into an Amiga 1200 desktop case. They had a random-access memory (RAM) digital-to-analog converter (DAC, RAMDAC) with a bandwidth of 230 MHz, able to display resolutions with an 80 Hz vertical refresh rate up to 1152×900 pixels at 24 bits, or 1600×1200 pixels at 16 bits.
Permedia 2 GPU
8 MB 64-bit-wide SGRAM
3D LCD shutter glass connector
CyberGraphX V3 drivers
References See also Amiga AmigaOS MorphOS Operating system kernels Microkernel-based operating systems Microkernels
29740553
https://en.wikipedia.org/wiki/Jerrel%20Jernigan
Jerrel Jernigan
Jerrel Marquis Jernigan (born June 14, 1989) is a former American football wide receiver. He was drafted by the New York Giants in the third round of the 2011 NFL Draft and won Super Bowl XLVI with the team against the New England Patriots. He played college football at Troy. Jernigan currently works at Eufaula High School as a wide receiver coach. Professional career Pre-draft He was considered one of the best wide receiver prospects for the 2011 NFL Draft. He was a starter all four years for the Trojans. Jernigan's body frame and playing style were compared to the likes of DeSean Jackson and Steve Smith. New York Giants Jernigan was drafted by the New York Giants in the third round with the 83rd overall pick in the 2011 NFL Draft. Through four seasons with the Giants, Jernigan played in 34 games, catching 38 passes for 391 yards with 2 touchdowns. At the end of the 2011 season, Jernigan and the Giants appeared in Super Bowl XLVI. He had three kick returns for 71 net yards as the Giants defeated the New England Patriots by a score of 21–17. Winnipeg Blue Bombers After not playing professional football in 2015, Jernigan signed with the Winnipeg Blue Bombers of the Canadian Football League on April 12, 2016. References External links Troy Trojans bio 1989 births American football return specialists American football wide receivers Living people New York Giants players People from Barbour County, Alabama People from Midway, Alabama Players of American football from Alabama Troy Trojans football players
53489871
https://en.wikipedia.org/wiki/Software%20testing%20tactics
Software testing tactics
This article discusses a set of tactics useful in software testing. It is intended as a comprehensive list of tactical approaches to software quality assurance (more widely known simply as quality assurance, traditionally abbreviated "QA") and the general application of the test method (usually just called "testing" or sometimes "developer testing"). Installation testing An installation test assures that the system is installed correctly and works on the actual customer's hardware. The box approach Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases. White-box testing White-box testing (also known as clear box testing, glass box testing, transparent box testing and structural testing, by seeing the source code) tests internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT). While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements. 
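White-box path selection, as described above, can be sketched in a few lines of Python. The function under test and its return values are hypothetical; the point is that the tester reads the code and chooses inputs so that every path through it is exercised:

```python
def classify_speed(speed_kmh, limit_kmh=50):
    """Hypothetical unit under test with two control-flow paths."""
    if speed_kmh > limit_kmh:
        return "over limit"
    return "within limit"

# White-box test design: inputs are chosen by inspecting the code,
# so that each path through the function is exercised.
def test_both_paths():
    assert classify_speed(80) == "over limit"    # takes the True branch
    assert classify_speed(30) == "within limit"  # takes the False branch
    assert classify_speed(50) == "within limit"  # boundary: not strictly greater

test_both_paths()
```

Note that the boundary input (50) comes directly from reading the `>` comparison in the source, knowledge a black-box tester would not have.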
Techniques used in white-box testing include: API testing – testing of the application using public and private APIs (application programming interfaces) Code coverage – creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once) Fault injection methods – intentionally introducing faults to gauge the efficacy of testing strategies Mutation testing methods Static testing methods Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested. Code coverage as a software metric can be reported as a percentage for: Function coverage, which reports on functions executed Statement coverage, which reports on the number of lines executed to complete the test Decision coverage, which reports on whether both the True and the False branch of a given decision have been executed 100% decision coverage ensures that all branches (in terms of control flow) are executed at least once; 100% statement coverage alone does not guarantee this. Full coverage is helpful in ensuring correct functionality, but not sufficient, since the same code may process different inputs correctly or incorrectly. 
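The gap between statement coverage and decision coverage can be demonstrated with a hypothetical Python function (the name and discount rule are invented for illustration):

```python
def apply_discount(price_cents, is_member):
    """Hypothetical unit under test: members get a 10% discount."""
    if is_member:
        price_cents -= price_cents // 10  # the only statement inside the decision
    return price_cents

# A single test with is_member=True executes every statement in the
# function, achieving 100% statement coverage ...
assert apply_discount(1000, True) == 900

# ... but only 50% decision coverage: the False branch of the `if` has
# not been taken. A second test is needed for full decision coverage.
assert apply_discount(1000, False) == 1000
```

A coverage tool run in branch mode would report the first test alone as full statement coverage yet flag the untaken `if` branch, which is exactly the distinction drawn in the list above.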
Specification-based testing aims to test the functionality of software according to the applicable requirements. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs to derive test cases. These tests can be functional or non-functional, though usually functional. Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations. One advantage of the black box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight." Because they do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case, or leaves some parts of the program untested. This method of testing can be applied to all levels of software testing: unit, integration, system and acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing. Visual testing The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information she or he requires, and the information is expressed clearly. 
At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing therefore requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-picture webcam and audio commentary from microphones. Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed. Visual testing is particularly well-suited for environments that deploy agile methods in their development of software, since agile methods require greater communication between testers and developers and collaboration within small teams. Ad hoc testing and exploratory testing are important methodologies for checking software integrity, because they require less preparation time to implement, while important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of a test tool to visually record everything that occurs on a system becomes very important in order to document the steps taken to uncover the bug. Visual testing is gaining recognition in customer acceptance and usability testing, because the test can be used by many individuals involved in the development process. 
For the customer, it becomes easy to provide detailed bug reports and feedback, and for program users, visual testing can record user actions on screen, as well as their voice and image, to provide a complete picture at the time of software failure for the developers. Grey-box testing Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing tests, while executing those tests at the user, or black-box, level. The tester is not required to have full access to the software's source code. Manipulating input data and formatting output do not qualify as grey-box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, tests that require modifying a back-end data repository such as a database or a log file do qualify as grey-box, as the user would not normally be able to change the data repository in normal production operations. Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages. By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities such as seeding a database. The tester can observe the state of the product being tested after performing certain actions such as executing SQL statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios, based on limited information. This will particularly apply to data type handling, exception handling, and so on. 
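The grey-box workflow described above — seed an isolated database, drive the unit through its public interface, then query the back-end state directly — can be sketched as follows (the `register_user` function and its schema are hypothetical):

```python
import sqlite3

def register_user(conn, name):
    """Hypothetical unit under test, exercised only via its public interface."""
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

# Grey-box setup: seed an isolated in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('seeded')")
conn.commit()

# Exercise the unit at its public (black-box) surface...
register_user(conn, "alice")

# ...then inspect the back-end state directly (the grey-box part).
rows = [r[0] for r in conn.execute("SELECT name FROM users ORDER BY id")]
assert rows == ["seeded", "alice"]
```

A pure black-box test could only observe the function's return value; the direct SQL query is exactly the kind of privileged observation that makes this grey-box.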
Automated testing Many programming groups are relying more and more on automated testing, especially groups that use test-driven development. There are many frameworks in which to write tests, and continuous integration software will run tests automatically every time code is checked into a version control system. While automation cannot reproduce everything that a human can do (and all the ways they think of doing it), it can be very useful for regression testing. However, it does require a well-developed test suite of testing scripts in order to be truly useful. Automated testing tools Program testing and fault detection can be aided significantly by testing tools and debuggers. Testing/debug tools include features such as:
- Program monitors, permitting full or partial monitoring of program code, including: an instruction set simulator, permitting complete instruction-level monitoring and trace facilities, and a hypervisor, permitting complete control of the execution of program code
- Program animation, permitting step-by-step execution and conditional breakpoints at source level or in machine code
- Code coverage reports
- Formatted dump or symbolic debugging, tools allowing inspection of program variables on error or at chosen points
- Automated functional GUI (graphical user interface) testing tools, used to repeat system-level tests through the GUI
- Benchmarks, allowing run-time performance comparisons to be made
- Performance analysis (or profiling) tools that can help to highlight hot spots and resource usage
Some of these features may be incorporated into a single composite tool or an integrated development environment (IDE). Abstraction of application layers as applied to automated testing There are generally four recognized levels of tests: unit testing, integration testing, component interface testing, and system testing. Tests are frequently grouped by where they are added in the software development process, or by the level of specificity of the test. 
The main levels during the development process as defined by the SWEBOK guide are unit, integration, and system testing, which are distinguished by the test target without implying a specific process model. Other test levels are classified by the testing objective. There are two different levels of tests from the perspective of customers: low-level testing (LLT) and high-level testing (HLT). LLT is a group of tests for different level components of software application or product. HLT is a group of tests for the whole software application or product. Unit testing Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors. These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to ensure that the building blocks of the software work independently from each other. Unit testing is a software development process that involves synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Rather than replacing traditional QA focuses, it augments them. Unit testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process. 
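A minimal unit test, written with Python's standard `unittest` framework, might look like this (the function under test is invented for illustration; note the extra test for the -40° corner case, where the two scales coincide):

```python
import unittest

def celsius_to_fahrenheit(c):
    """Illustrative unit under test."""
    return c * 9 / 5 + 32

class TestConversion(unittest.TestCase):
    def test_freezing_point(self):
        self.assertEqual(celsius_to_fahrenheit(0), 32)

    def test_boiling_point(self):
        self.assertEqual(celsius_to_fahrenheit(100), 212)

    def test_corner_case(self):
        # -40 is the one temperature equal on both scales.
        self.assertEqual(celsius_to_fahrenheit(-40), -40)
```

Such a test file would typically be discovered and run automatically, e.g. with `python -m unittest`, on every check-in.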
Depending on the organization's expectations for software development, unit testing might include static code analysis, data-flow analysis, metrics analysis, peer code reviews, code coverage analysis and other software verification practices. Integration testing Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice since it allows interface issues to be located more quickly and fixed. Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system. Component interface testing The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units. The data being passed can be considered "message packets", whose ranges or data types can be checked for data generated by one unit and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values. Unusual data values in an interface can help explain unexpected performance in the next unit. Component interface testing is a variation of black-box testing, with the focus on the data values beyond just the related actions of a subsystem component. 
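The "message packet" idea above can be sketched as a validity check sitting between two units (every function name and field here is hypothetical):

```python
def producer():
    """Hypothetical upstream unit emitting a 'message packet'."""
    return {"id": 7, "temperature_c": 21.5}

def validate_packet(packet):
    """Interface check: types and ranges of the data crossing the boundary."""
    assert isinstance(packet["id"], int) and packet["id"] > 0
    assert isinstance(packet["temperature_c"], float)
    assert -80.0 <= packet["temperature_c"] <= 60.0  # illustrative sensor range
    return packet

def consumer(packet):
    """Hypothetical downstream unit."""
    return f"sensor {packet['id']}: {packet['temperature_c']} C"

# Normal-path interface test: data is validated before crossing the boundary.
assert consumer(validate_packet(producer())) == "sensor 7: 21.5 C"

# One extreme value at the interface, all other fields normal:
try:
    validate_packet({"id": 7, "temperature_c": 999.0})
    raise RuntimeError("expected the interface check to fail")
except AssertionError:
    pass
```

In practice the validated packets would also be written to a timestamped log, as the text describes, so that days of inter-unit traffic can be analyzed after the fact.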
System testing System testing tests a completely integrated system to verify that the system meets its requirements. For example, a system test might involve testing a logon interface, then creating and editing an entry, plus sending or printing results, followed by summary processing or deletion (or archiving) of entries, then logoff. Operational acceptance testing Operational acceptance testing is used to assess the operational readiness (pre-release) of a product, service or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, and/or to become part of the production environment. Hence, it is also known as operational readiness testing (ORT) or operations readiness and assurance (OR&A) testing. Functional testing within OAT is limited to those tests which are required to verify the non-functional aspects of the system. In addition, software testing should ensure that porting the system, as well as running it as expected, does not damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative. Compatibility testing A common cause of software failure (real or perceived) is a lack of its compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a web application, which must render in a web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. 
This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library. Smoke and sanity testing Sanity testing determines whether it is reasonable to proceed with further testing. Smoke testing consists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used as a build verification test. Regression testing Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, such as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. They can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk. Regression testing is typically the largest test effort in commercial software development, due to checking numerous details in prior software features, and even new software can be developed while using some old test cases to test parts of the new design to ensure prior functionality is still supported. 
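Re-running previous sets of test cases, as described above, amounts to replaying stored input/expected-output pairs after every change. A minimal sketch (the `slugify` function and its cases are invented for illustration):

```python
def slugify(title):
    """Unit whose behavior we want to keep stable across releases."""
    return "-".join(title.lower().split())

# Regression suite: inputs paired with the outputs of the last known-good
# release; re-run after every change to catch re-introduced defects.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  Spaces   everywhere ", "spaces-everywhere"),
    ("Already-slugged", "already-slugged"),
]

def run_regression_suite():
    """Return the list of (input, expected, actual) triples that failed."""
    return [(inp, expected, slugify(inp))
            for inp, expected in REGRESSION_CASES
            if slugify(inp) != expected]

assert run_regression_suite() == []
```

When a previously fixed fault re-emerges, its triple reappears in the failure list, which is exactly the signal regression testing exists to produce.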
Acceptance testing Acceptance testing can mean one of two things: A smoke test is used as an acceptance test prior to introducing a new build to the main testing process, i.e., before integration or regression. Acceptance testing performed by the customer, often in their lab environment on their own hardware, is known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development. Alpha testing Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing. Beta testing Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Beta versions can be made available to the open public to increase the feedback field to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time (perpetual beta). Functional vs non-functional testing Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work." Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. 
Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users. Continuous testing Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate. Continuous testing includes the validation of both functional requirements and non-functional requirements; the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals. Destructive testing Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines. Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing. Software performance testing Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage. Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. 
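A load test can be sketched by simulating many concurrent "users" against a stand-in for the system under test (the handler, the worker count and the time budget below are all illustrative assumptions):

```python
import concurrent.futures
import time

def handle_request(n):
    """Hypothetical request handler standing in for the system under test."""
    time.sleep(0.001)  # simulated per-request work
    return n * n

# Load test sketch: 50 concurrent workers issue 500 requests, then we check
# both correctness under load and an (illustrative) overall time budget.
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(handle_request, range(500)))
elapsed = time.perf_counter() - start

assert results == [n * n for n in range(500)]  # no lost or corrupted responses
assert elapsed < 5.0  # generous bound; real tests track latency percentiles
```

A real load test would ramp the load up until the breaking point mentioned above is found, recording throughput and latency percentiles along the way.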
When performed as a non-functional activity over a long duration, this sort of load testing is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks to see if the software can continuously function well in or above an acceptable period. There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing are often used interchangeably. Real-time software systems have strict timing constraints. To test if timing constraints are met, real-time testing is used. Usability testing Usability testing checks whether the user interface is easy to use and understand. It is concerned mainly with the use of the application. Accessibility testing Accessibility testing may include compliance with standards such as:
- Americans with Disabilities Act of 1990
- Section 508 Amendment to the Rehabilitation Act of 1973
- Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)
Security testing Security testing is essential for software that processes confidential data to prevent system intrusion by hackers. The International Organization for Standardization (ISO) defines this as a "type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them." Internationalization and localization testing The general ability of software to be internationalized and localized can be automatically tested without actual translation, by using pseudolocalization. 
It will verify that the application still works, even after it has been translated into a new language or adapted for a new culture (such as different currencies or time zones). Actual translation to human languages must be tested, too. Possible localization failures include:
- Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string.
- Technical terminology may become inconsistent if the project is translated by several people without proper coordination or if the translator is imprudent.
- Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language.
- Untranslated messages in the original language may be left hard coded in the source code.
- Some messages may be created automatically at run time and the resulting string may be ungrammatical, functionally incorrect, misleading or confusing.
- Software may use a keyboard shortcut which has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language.
- Software may lack support for the character encoding of the target language.
- Fonts and font sizes which are appropriate in the source language may be inappropriate in the target language; for example, CJK characters may become unreadable if the font is too small.
- A string in the target language may be longer than the software can handle. This may make the string partly invisible to the user or cause the software to crash or malfunction.
- Software may lack proper support for reading or writing bi-directional text.
- Software may display images with text that was not localized.
- Localized operating systems may have differently named system configuration files and environment variables and different formats for date and currency.
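The pseudolocalization technique mentioned above can be sketched in a few lines: vowels are swapped for accented look-alikes (exposing missing character-encoding support), the string is padded (exposing length limits), and brackets make truncation visible. The exact transformations are illustrative; real tools vary.

```python
ACCENTED = str.maketrans("aeiouAEIOU", "àéîõüÀÉÎÕÜ")

def pseudolocalize(message):
    """Accent the vowels, pad to mimic longer translations, and bracket
    the string so UI truncation is immediately visible."""
    padded = message.translate(ACCENTED) + "~" * max(1, len(message) // 3)
    return f"[{padded}]"

assert pseudolocalize("Save file") == "[Sàvé fîlé~~~]"
```

Running the whole UI through such a filter requires no translator, yet it surfaces hard-coded strings (they appear un-bracketed), encoding bugs and overflow problems before any real localization begins.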
Development testing "Development testing" is a software development process that involves synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Rather than replacing traditional QA focuses, it augments them. Development testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process. Depending on the organization's expectations for software development, development testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software verification practices. A/B testing A/B testing is essentially a comparison of two outputs, generally when only one variable has changed: run a test, change one thing, run the test again, compare the results. This is most useful in small-scale situations, but also valuable in fine-tuning any program. With more complex projects, multivariate testing can be done. Concurrent testing In concurrent testing, the focus is on the performance while continuously running with normal input and under normal operational conditions, as opposed to stress testing or fuzz testing. Memory leaks, as well as basic faults, are easier to find with this method. Conformance testing or type testing In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers, for instance, are extensively tested to determine whether they meet the recognized standard for that language. References External links
1208775
https://en.wikipedia.org/wiki/EXist
EXist
eXist-db (or eXist for short) is an open source software project for NoSQL databases built on XML technology. It is classified as both a NoSQL document-oriented database system and a native XML database (and it provides support for XML, JSON, HTML and Binary documents). Unlike most relational database management systems (RDBMS) and NoSQL databases, eXist-db provides XQuery and XSLT as its query and application programming languages. eXist-db is released under version 2.1 of the GNU LGPL. History eXist-db was created in 2000 by Wolfgang Meier. Major versions released were 1.0 in October 2006, 2.0 in February 2013, 3.0 in February 2017, 4.0 in February 2018, 5.0.0 in September 2019, and 6.0.0 in January 2022. eXist-db was awarded the best XML database of the year by InfoWorld in 2006. The companies eXist Solutions GmbH in Germany, and Evolved Binary in the UK, promote and provide support for the software. There is an O'Reilly book for eXist-db which is co-authored by Adam Retter and Erik Siegel. Features eXist-db allows software developers to persist XML/JSON/Binary documents without writing extensive middleware. eXist-db follows and extends many W3C XML standards such as XQuery. eXist-db also supports REST interfaces for interfacing with AJAX-type web forms. Applications such as XForms may save their data by using just a few lines of code. The WebDAV interface to eXist-db allows users to "drag and drop" XML files directly into the eXist-db database. eXist-db automatically indexes documents using a keyword indexing system. 
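eXist-db's query languages are XQuery and XPath rather than SQL. As a rough, hedged illustration of the declarative style involved, here is a minimal XPath query evaluated with Python's standard library (this is not eXist-db's own API, which evaluates full XPath/XQuery server-side; the sample document is invented):

```python
import xml.etree.ElementTree as ET

# A small XML document of the sort one might store in an eXist-db collection.
doc = ET.fromstring("""
<library>
  <book year="2015"><title>eXist</title><author>Adam Retter</author></book>
  <book year="2010"><title>XQuery</title><author>Priscilla Walmsley</author></book>
</library>
""")

# ElementTree supports only a limited XPath subset, but the idea is the
# same: select nodes by structure and attribute, not by table joins.
titles = [b.findtext("title") for b in doc.findall("./book[@year='2015']")]
assert titles == ["eXist"]
```

Against a real eXist-db server the equivalent query would be submitted over its REST or XML-RPC interface, or written directly in XQuery.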
Supported standards and technologies eXist-db has support for the following standards and technologies:
- XPath - XML Path Language
- XQuery - XML Query language
- XSLT - Extensible Stylesheet Language Transformations
- XSL-FO - XSL Formatting Objects
- WebDAV - Web Distributed Authoring and Versioning
- REST - representational state transfer (URL encoding)
- RESTXQ - RESTful annotations for XQuery
- XInclude - server-side include file processing (limited support)
- XML-RPC - a remote procedure call protocol
- XProc - an XML pipeline processing language
- XQuery API for Java
See also BaseX - another open-source native XML database CouchDB - a document-oriented database based on JSON References External links Free database management systems XML databases Software using the LGPL license Database-related software for Linux Free software programmed in Java (programming language)
64148984
https://en.wikipedia.org/wiki/MSC%20Adams
MSC Adams
MSC ADAMS (Automated Dynamic Analysis of Mechanical Systems) is a multibody dynamics simulation software system. It is currently owned by MSC Software Corporation. The simulation solver is written mainly in Fortran and, more recently, also in C++. According to the publisher, Adams is the most widely used multibody dynamics simulation software. The software package runs on both Windows and Linux. Capabilities Adams has a full graphical user interface to model the entire mechanical assembly in a single window. Graphical computer-aided design tools are used to insert a model of a mechanical system in three-dimensional space or to import geometry files such as STEP or IGS. Joints can be added between any two bodies to constrain their motion. A variety of inputs such as velocities, forces, and initial conditions can be added to the system. Adams simulates the behavior of the system over time and can animate its motion and compute properties such as accelerations, forces, etc. The system can include further complicated dynamic elements such as springs, friction, flexible bodies, and contact between bodies. The software also provides extra CAE tools such as design exploration and optimization based on selected parameters. The inputs and outputs of the simulation can be interfaced with Simulink for applications such as control. Applications The Adams software package is used both in academic research and in engineering. The most common usage of the software is analysis of vehicle structure and suspension through the Adams/Car and Adams/Tire modules. It is also used to model various other mechanical systems such as wind turbines, powertrains, and robotic systems. References Simulation software
1862923
https://en.wikipedia.org/wiki/PhysX
PhysX
PhysX is an open-source realtime physics engine middleware SDK developed by Nvidia as a part of Nvidia GameWorks software suite. Initially, video games supporting PhysX were meant to be accelerated by PhysX PPU (expansion cards designed by Ageia). However, after Ageia's acquisition by Nvidia, dedicated PhysX cards have been discontinued in favor of the API being run on CUDA-enabled GeForce GPUs. In both cases, hardware acceleration allowed for the offloading of physics calculations from the CPU, allowing it to perform other tasks instead. PhysX and other middleware physics engines are used in a large majority of today's video games because they free game developers from having to write their own code that implements classical mechanics (Newtonian physics) to do, for example, soft body dynamics. History What is known today as PhysX originated as a physics simulation engine called NovodeX. The engine was developed by Swiss company NovodeX AG, an ETH Zurich spin-off. In 2004, Ageia acquired NovodeX AG and began developing a hardware technology that could accelerate physics calculations, aiding the CPU. Ageia called the technology PhysX, the SDK was renamed from NovodeX to PhysX, and the accelerator cards were dubbed PPUs (Physics Processing Units). The first game to use PhysX was Bet On Soldier: Blood Sport (2005). In 2008, Ageia was itself acquired by graphics technology manufacturer Nvidia. Nvidia started enabling PhysX hardware acceleration on its line of GeForce graphics cards and eventually dropped support for Ageia PPUs. PhysX SDK 3.0 was released in May 2011 and represented a significant rewrite of the SDK, bringing improvements such as more efficient multithreading and a unified code base for all supported platforms. At GDC 2015, Nvidia made the source code for PhysX available on GitHub, but required registration at developer.nvidia.com. 
The proprietary SDK was provided to developers for free for both commercial and non-commercial use on Windows, Linux, macOS, iOS and Android platforms. On December 3, 2018, PhysX was made open source under a 3-clause BSD license, but this change applied only to computer and mobile platforms. PhysX 5.0 was announced in December 2019 but had not been released to the public, due to the COVID-19 pandemic. Features The PhysX engine and SDK are available for Microsoft Windows, macOS, Linux, PlayStation 3, PlayStation 4, Xbox 360, Xbox One, Wii, iOS and Android. PhysX is a multi-threaded physics simulation SDK. It supports rigid body dynamics, soft body dynamics (like cloth simulation, including tearing and pressurized cloth), ragdolls and character controllers, vehicle dynamics, particles and volumetric fluid simulation. Hardware acceleration PPU A physics processing unit (PPU) is a processor specially designed to alleviate the calculation burden on the CPU, specifically calculations involving physics. PhysX PPUs were offered to consumers in the form of PCI or PCIe cards by ASUS, BFG Technologies, Dell and ELSA Technology. Beginning with version 2.8.3 of the PhysX SDK, support for PPU cards was dropped, and PPU cards are no longer manufactured. The last incarnation of the standalone PhysX PPU card designed by Ageia had roughly the same PhysX performance as a dedicated GeForce 9800 GTX. GPU After Nvidia's acquisition of Ageia, PhysX development turned away from PPU extension cards and focused instead on the GPGPU capabilities of modern GPUs. Modern GPUs are very efficient at manipulating and displaying computer graphics, and their highly parallel structure makes them more effective than general-purpose CPUs for accelerating physical simulations using PhysX. Any CUDA-ready GeForce graphics card (8-series or later GPU with a minimum of 32 cores and a minimum of 256 MB dedicated graphics memory) can take advantage of PhysX without the need to install a dedicated PhysX card. 
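To give a flavor of what a rigid-body physics step involves, here is a toy per-frame update — integrate forces, then resolve a ground-plane collision with restitution. All constants are illustrative, and this is not PhysX's API:

```python
# Toy rigid-body step in the spirit of what a physics SDK does each frame.
GRAVITY = -9.81
RESTITUTION = 0.5  # fraction of speed kept after a bounce

def step(y, vy, dt=1/60):
    """Advance one particle one frame: integrate, then resolve collision."""
    vy += GRAVITY * dt
    y += vy * dt
    if y < 0.0:            # collision with the ground plane at y = 0
        y = 0.0
        vy = -vy * RESTITUTION
    return y, vy

y, vy = 2.0, 0.0           # drop a particle from 2 m
for _ in range(600):       # ten simulated seconds at 60 Hz
    y, vy = step(y, vy)

assert 0.0 <= y < 2.0      # it fell, bounced, and lost energy
```

An engine like PhysX performs this kind of integration and contact resolution for thousands of interacting bodies per frame, which is why offloading it to a PPU or GPU pays off.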
APEX Nvidia APEX technology is a multi-platform scalable dynamics framework built around the PhysX SDK. It was first introduced in Mafia II in August 2010. Nvidia's APEX comprises the following modules: APEX Destruction, APEX Clothing, APEX Particles, APEX Turbulence, APEX ForceField and, formerly, APEX Vegetation, which was suspended in 2011. As of version 1.4.1, the APEX SDK is deprecated. Nvidia FleX FleX is a particle-based simulation technique for real-time visual effects. Traditionally, visual effects are made using a combination of elements created using specialized solvers for rigid bodies, fluids, clothing, etc. Because FleX uses a unified particle representation for all object types, it enables new effects where different simulated substances can interact with each other seamlessly. Such unified physics solvers are a staple of the offline computer graphics world, where tools such as Autodesk Maya's nCloth and Softimage's Lagoa are widely used. The goal of FleX is to use the power of GPUs to bring the capabilities of these offline applications to real-time computer graphics. Criticism from Real World Technologies On July 5, 2010, Real World Technologies published an analysis of the PhysX architecture. According to this analysis, most of the code used in PhysX applications at the time was based on x87 instructions without any multi-threading optimization. This could cause significant performance drops when running PhysX code on the CPU. The article suggested that a PhysX rewrite using SSE instructions might substantially lessen the performance discrepancy between CPU PhysX and GPU PhysX. In response to the Real World Technologies analysis, Mike Skolones, product manager of PhysX, said that SSE support had been left behind because most games are developed for consoles first and then ported to the PC. As a result, modern computers run these games faster and better than the consoles even with little or no optimization. 
Senior PR manager of Nvidia, Bryan Del Rizzo, explained that multi-threading had already been available with CPU PhysX 2.x and that it had been up to the developer to make use of it. He also stated that automatic multithreading and SSE would be introduced with version 3 of the PhysX SDK. PhysX SDK 3.0 was released in May 2011 and represented a significant rewrite of the SDK, bringing improvements such as more efficient multithreading and a unified code base for all supported platforms. Usage PhysX in video games PhysX technology is used by game engines such as Unreal Engine (version 3 onwards), Unity, Gamebryo, Vision (version 6 onwards), Instinct Engine, Panda3D, Diesel, Torque, HeroEngine and BigWorld. As one of the handful of major physics engines, it is used in many games, such as The Witcher 3: Wild Hunt, Warframe, Killing Floor 2, Fallout 4, Batman: Arkham Knight, Borderlands 2, etc. Most of these games use the CPU to process the physics simulations. Video games with optional support for hardware-accelerated PhysX often include additional effects such as tearable cloth, dynamic smoke or simulated particle debris. 
PhysX in other software Other software with PhysX support includes: Active Worlds (AW), a 3D virtual reality platform with its client running on Windows Amazon Lumberyard, a 3D game development engine developed by Amazon Autodesk 3ds Max, Autodesk Maya and Autodesk Softimage, computer animation suites DarkBASIC Professional (with DarkPHYSICS upgrade), a programming language targeted at game development DX Studio, an integrated development environment for creating interactive 3D graphics Futuremark's 3DMark06 and Vantage benchmarking tools Microsoft Robotics Studio, an environment for robot control and simulation Nvidia's SuperSonic Sled and Raging Rapids Ride, technology demos OGRE (via the NxOgre wrapper), an open source rendering engine The Physics Abstraction Layer, a physical simulation API abstraction system (it provides COLLADA and Scythe Physics Editor support for PhysX) Rayfire, a plug-in for Autodesk 3ds Max that allows fracturing and other physics simulations The Physics Engine Evaluation Lab, a tool designed to evaluate, compare and benchmark physics engines. Unreal Engine game development software by Epic Games. Unreal Engine 4.26 and onwards has officially deprecated PhysX. Unity by Unity ApS. Unity's Data-Oriented Technology Stack does not use PhysX. See also DirectX Bullet (software) Havok (software) Open Dynamics Engine Newton Game Dynamics OpenGL Vortex (software) AGX Multiphysics References External links Official Product Site Techgage: AGEIA PhysX.. First Impressions Techgage: NVIDIA's PhysX: Performance and Status Report Computer physics engines MacOS programming tools Nvidia software PlayStation 3 software PlayStation 4 software Programming tools for Windows Science software for MacOS Science software for Windows Virtual reality Wii software Xbox 360 software Science software for Linux Software using the BSD license Video game development Video game development software for Linux
271832
https://en.wikipedia.org/wiki/ABAP
ABAP
ABAP (Advanced Business Application Programming, originally Allgemeiner Berichts-Aufbereitungs-Prozessor, German for "general report preparation processor") is a high-level programming language created by the German software company SAP SE. It is currently positioned, alongside Java, as the language for programming the SAP NetWeaver Application Server, which is part of the SAP NetWeaver platform for building business applications. Introduction ABAP is one of the many application-specific fourth-generation languages (4GLs) first developed in the 1980s. It was originally the report language for SAP R/2, a platform that enabled large corporations to build mainframe business applications for materials management and financial and management accounting. ABAP used to be an abbreviation of Allgemeiner Berichts-Aufbereitungs-Prozessor, German for "generic report preparation processor", but was later renamed to the English Advanced Business Application Programming. ABAP was one of the first languages to include the concept of Logical Databases (LDBs), which provide a high level of abstraction from the underlying database. The ABAP language was originally used by developers to develop the SAP R/3 platform. It was also intended to be used by SAP customers to enhance SAP applications – customers can develop custom reports and interfaces with ABAP programming. The language was geared towards more technical customers with programming experience. ABAP remains the language for creating programs for the client–server R/3 system, which SAP first released in 1992. As computer hardware evolved through the 1990s, more and more of SAP's applications and systems were written in ABAP. By 2001, all but the most basic functions were written in ABAP. In 1999, SAP released an object-oriented extension to ABAP called ABAP Objects, along with R/3 release 4.6. 
SAP's current development platform NetWeaver supports both ABAP and Java. ABAP provides a layer of abstraction between the business applications and the underlying operating system and database. This ensures that applications do not depend directly upon a specific server or database platform and can easily be ported from one platform to another. SAP NetWeaver currently runs on UNIX (AIX, HP-UX, Solaris, Linux), Microsoft Windows, i5/OS on IBM System i (formerly iSeries, AS/400), and z/OS on IBM System z (formerly zSeries, S/390). Supported databases are HANA, SAP ASE (formerly Sybase), IBM DB2, Informix, MaxDB, Oracle, and Microsoft SQL Server (support for Informix was discontinued in SAP Basis release 7.00). ABAP runtime environment All ABAP programs reside inside the SAP database. They are not stored in separate external files like Java or C++ programs. In the database all ABAP code exists in two forms: source code, which can be viewed and edited with the ABAP Workbench tools; and generated code, a binary representation somewhat comparable with Java bytecode. ABAP programs execute under the control of the runtime system, which is part of the SAP kernel. The runtime system is responsible for processing ABAP statements, controlling the flow logic of screens and responding to events (such as a user clicking on a screen button); in this respect it can be seen as a virtual machine comparable with the Java VM. A key component of the ABAP runtime system is the Database Interface, which turns database-independent ABAP statements ("Open SQL") into statements understood by the underlying DBMS ("Native SQL"). The database interface handles all the communication with the relational database on behalf of ABAP programs; it also provides extra features such as buffering of tables and frequently accessed data in the local memory of the application server. SAP systems and landscapes All SAP data exists and all SAP software runs in the context of a SAP system. 
A system consists of a central relational database and one or more application servers ("instances") accessing the data and programs in this database. A SAP system contains at least one instance but may contain more, mostly for reasons of sizing and performance. In a system with multiple instances, load balancing mechanisms ensure that the load is spread evenly over the available application servers. Installations of the Web Application Server (landscapes) typically consist of three systems: one for development; one for testing and quality assurance; and one for production. The landscape may contain more systems (e.g., separate systems for unit testing and pre-production testing) or it may contain fewer (e.g., only development and production, without separate QA); nevertheless, three is the most common configuration. ABAP programs are created and first tested in the development system. Afterwards they are distributed to the other systems in the landscape. These actions take place under control of the Change and Transport System (CTS), which is responsible for concurrency control (e.g., preventing two developers from changing the same code at the same time), version management, and deployment of programs on the QA and production systems. The Web Application Server consists of three layers: the database layer; the application layer; and the presentation layer. These layers may run on the same or on different physical machines. The database layer contains the relational database and the database software. The application layer contains the instance or instances of the system. All application processes, including the business transactions and the ABAP development, run on the application layer. The presentation layer handles the interaction with users of the system. Online access to ABAP application servers can go via a proprietary graphical interface, which is called "SAP GUI", or via a Web browser. 
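The database-independent Open SQL mentioned above is what application-layer programs use to reach the database; the Database Interface translates it into Native SQL for whichever DBMS underlies the system. A minimal sketch (using the SPFLI flight-schedule table from SAP's standard demo data model; the program name is hypothetical):

```
REPORT zdemo_opensql.

DATA lt_flights TYPE STANDARD TABLE OF spfli.

* Open SQL: translated to Native SQL by the Database Interface,
* so the same code runs unchanged on any supported DBMS
SELECT * FROM spfli
  INTO TABLE lt_flights
  WHERE carrid = 'LH'.

* sy-dbcnt holds the number of rows processed by the last database operation
WRITE: / 'Flights found:', sy-dbcnt.
```

The same statement would be executed against Oracle, DB2 or HANA without change; only the Database Interface differs per platform.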
Software layers ABAP software is deployed in software components. Examples of these are: SAP_BASIS is the technical base layer required in every ABAP system. SAP_ABA contains functionality required by all kinds of business applications, like business partner and address management. SAP_UI provides the functionality to create SAP UI5 applications. BBPCRM is an example of a business application, in this case the CRM application. Transactions A transaction in SAP terminology is the execution of a program. The normal way of executing ABAP code in the SAP system is by entering a transaction code (for instance, VA01 is the transaction code for "Create Sales Order"). Transactions can be called via system-defined or user-specific, role-based menus. They can also be started by entering the transaction code directly into a command field, which is present in every SAP screen. Transactions can also be invoked programmatically by means of the ABAP statements CALL TRANSACTION and LEAVE TO TRANSACTION. The general notion of a transaction is called a Logical Unit of Work (LUW) in SAP terminology; the short form of transaction code is T-code. Types of ABAP programs As in other programming languages, an ABAP program is either an executable unit or a library, which provides reusable code to other programs and is not independently executable. ABAP distinguishes two types of executable programs: Reports Module pools Reports follow a relatively simple programming model whereby a user optionally enters a set of parameters (e.g., a selection over a subset of data) and the program then uses the input parameters to produce a report in the form of an interactive list. The term "report" can be somewhat misleading in that reports can also be designed to modify data; the reason why these programs are called reports is the "list-oriented" nature of the output they produce. 
Module pools define more complex patterns of user interaction using a collection of screens. The term “screen” refers to the actual, physical image that the user sees. Each screen also has a "flow logic", which refers to the ABAP code implicitly invoked by the screens; this code is divided into a "PBO" (Process Before Output) and a "PAI" (Process After Input) section. In SAP documentation the term “dynpro” (dynamic program) refers to the combination of the screen and its flow logic. The non-executable program types are: INCLUDE modules - These get included at generation time into the calling unit; they are often used to subdivide large programs. Subroutine pools - These contain ABAP subroutines (blocks of code enclosed by FORM/ENDFORM statements and invoked with PERFORM). Function groups - These are libraries of self-contained function modules (enclosed by FUNCTION/ENDFUNCTION and invoked with CALL FUNCTION). Object classes - These are similar to Java classes; they define a set of methods and attributes. Interfaces - These are similar to Java interfaces; they contain "empty" method definitions, for which any class implementing the interface must provide explicit code. Type pools - These define collections of data types and constants. ABAP programs are composed of individual sentences (statements). The first word in a statement is called an ABAP keyword. Each statement ends with a period. Words must always be separated by at least one space. Statements can be indented as you wish. With keywords, additions and operands, the ABAP runtime system does not differentiate between upper and lowercase. Statements can extend beyond one line. You can have several statements in a single line (though this is not recommended). Lines that begin with an asterisk * in the first column are recognized as comment lines by the ABAP runtime system and are ignored. Double quotation marks (") indicate that the remainder of a line is a comment. 
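The lexical rules just listed can be seen together in a short fragment (the program name is hypothetical):

```
* A line starting with * in column 1 is a comment and is ignored
REPORT zdemo_syntax.

DATA counter TYPE i.            " declarations, like all statements, end with a period

counter = counter + 1. WRITE counter.  " two statements on one line: legal, not recommended

wRiTe 'done'.                   " keywords and operands are case-insensitive
```

Each period closes one statement, so a single statement may also be spread over several lines without any continuation character.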
Development environment There are two possible ways to develop in ABAP. The availability depends on the release of the ABAP system. ABAP Workbench The ABAP Workbench is part of the ABAP system and is accessed via SAP GUI. It contains different tools for editing programs. The most important of these are (transaction codes are shown in parentheses): ABAP Editor for writing and editing reports, module pools, includes and subroutine pools (SE38) ABAP Dictionary for processing database table definitions and retrieving global types (SE11) Menu Painter for designing the user interface (menu bar, standard toolbar, application toolbar, function key assignment) (SE41) Screen Painter for designing screens and flow logic (SE51) Function Builder for function modules (SE37) Class Builder for ABAP Objects classes and interfaces (SE24) The Object Navigator (transaction SE80) provides a single integrated interface into these various tools. ABAP Development Tools The ABAP Development Tools (ADT), formerly known as "ABAP in Eclipse", is a set of plugins for the Eclipse platform to develop ABAP. In this scenario, the ABAP developer installs the required tools on their computer and works locally, with continuous synchronization to the backend. ABAP Dictionary The ABAP Dictionary contains all metadata about the data in the SAP system. It is closely linked with the ABAP Workbench in that any reference to data (e.g., a table, a view, or a data type) will be obtained from the dictionary. Developers use the ABAP Dictionary transactions (directly or through the SE80 Object Navigator inside the ABAP Workbench) to display and maintain this metadata. When a dictionary object is changed, a program that references the changed object will automatically reference the new version the next time the program runs. Because ABAP is interpreted, it is not necessary to recompile programs that reference changed dictionary objects. 
A brief description of the most important types of dictionary objects follows: Tables are data containers that exist in the underlying relational database. In the majority of cases there is a 1-to-1 relationship between the definition of a table in the ABAP Dictionary and the definition of that same table in the database (same name, same columns). These tables are known as "transparent". There are two types of non-transparent tables: "pooled" tables exist as independent entities in the ABAP Dictionary but they are grouped together in large physical tables ("pools") at the database level. Pooled tables are often small tables holding for example configuration data. "Clustered" tables are physically grouped in "clusters" based on their primary keys; for instance, assume that a clustered table H contains "header" data about sales invoices, whereas another clustered table D holds the invoice line items. Each row of H would then be physically grouped with the related rows from D inside a "cluster table" in the database. This type of clustering, which is designed to improve performance, also exists as native functionality in some, though not all, relational database systems. Indexes provide accelerated access to table data for often used selection conditions. Every SAP table has a "primary index", which is created implicitly along with the table and is used to enforce primary key uniqueness. Additional indexes (unique or non-unique) may be defined; these are called "secondary indexes". Views have the same purpose as in the underlying database: they define subsets of columns (and/or rows) from one or - using a join condition - several tables. Since views are virtual tables (they refer to data in other tables) they do not take a substantial amount of space. Structures are complex data types consisting of multiple fields (comparable to struct in C/C++). Data elements provide the semantic content for a table or structure field. 
For example, dozens of tables and structures might contain a field giving the price (of a finished product, raw material, resource, ...). All these fields could have the same data element "PRICE". Domains define the structural characteristics of a data element. For example, the data element PRICE could have an assigned domain that defines the price as a numeric field with two decimals. Domains can also carry semantic content in providing a list of possible values. For example, a domain "BOOLEAN" could define a field of type "character" with length 1 and case-insensitive, but would also restrict the possible values to "T" (true) or "F" (false). Search helps (successors to the now obsolete "matchcodes") provide advanced search strategies when a user wants to see the possible values for a data field. The ABAP runtime provides implicit assistance (by listing all values for the field, e.g. all existing customer numbers) but search helps can be used to refine this functionality, e.g. by providing customer searches by geographical location, credit rating, etc. Lock objects implement application-level locking when changing data. ABAP syntax This brief description of the ABAP syntax begins with the ubiquitous "Hello world" program. Hello world REPORT TEST. WRITE 'Hello World'. This example contains two statements: REPORT and WRITE. The program displays a list on the screen. In this case, the list consists of the single line "Hello World". The REPORT statement indicates that this program is a report. This program could be a module pool after replacing the REPORT statement with PROGRAM. Chained statements Consecutive statements with an identical first (leftmost) part can be combined into a "chained" statement using the chain operator :. The common part of the statements is written to the left of the colon, the differing parts are written to the right of the colon and separated by commas. 
The colon operator is attached directly to the preceding token, without a space (the same applies to the commas in the token list, as can be seen in the examples below). Chaining is often used in WRITE statements. WRITE accepts just one argument, so if for instance you wanted to display three fields from a structure called FLIGHTINFO, you would have to code: WRITE FLIGHTINFO-CITYFROM. WRITE FLIGHTINFO-CITYTO. WRITE FLIGHTINFO-AIRPTO. Chaining the statements results in a more readable and more intuitive form: WRITE: FLIGHTINFO-CITYFROM, FLIGHTINFO-CITYTO, FLIGHTINFO-AIRPTO. In a chain statement, the first part (before the colon) is not limited to the statement name alone. The entire common part of the consecutive statements can be placed before the colon. Example: REPLACE 'A' WITH 'B' INTO LASTNAME. REPLACE 'A' WITH 'B' INTO FIRSTNAME. REPLACE 'A' WITH 'B' INTO CITYNAME. could be rewritten in chained form as: REPLACE 'A' WITH 'B' INTO: LASTNAME, FIRSTNAME, CITYNAME. Comments ABAP has two ways of defining text as a comment: An asterisk (*) in the leftmost column of a line makes the entire line a comment A double quotation mark (") anywhere on a line makes the rest of that line a comment Example: *************************************** ** Program: BOOKINGS ** ** Author: Joe Byte, 07-Jul-2007 ** *************************************** REPORT BOOKINGS. * Read flight bookings from the database SELECT * FROM FLIGHTINFO WHERE CLASS = 'Y' "Y = economy OR CLASS = 'C'. "C = business (...) Spaces Code in ABAP is whitespace-sensitive. x = a+b(c). assigns to variable x the substring of the variable a, starting at offset b with the length defined by the variable c. x = a + b( c ). assigns to variable x the sum of the variable a and the result of the call to method b with the parameter c. 
ABAP statements In contrast with languages like C/C++ or Java, which define a limited set of language-specific statements and provide most functionality via libraries, ABAP contains an extensive set of built-in statements. These statements traditionally used sentence-like structures and avoided symbols, making ABAP programs relatively verbose. However, in more recent versions of the ABAP language, a terser style is possible. An example of statement-based syntax (whose syntax originates in COBOL) versus expression-based syntax (as in C/Java): ADD TAX TO PRICE. * is equivalent to PRICE = PRICE + TAX . Data types and variables ABAP provides a set of built-in data types. In addition, every structure, table, view or data element defined in the ABAP Dictionary can be used to type a variable. Also, object classes and interfaces can be used as types. Date variables or constants (type D) contain the number of days since January 1, 1 AD. Time variables or constants (type T) contain the number of seconds since midnight. A special characteristic of both types is that they can be accessed both as integers and as character strings (with internal format "YYYYMMDD" for dates and "hhmmss" for times), which can be used for date and time handling. For example, the code snippet below calculates the last day of the previous month (note: SY-DATUM is a system-defined variable containing the current date): DATA LAST_EOM TYPE D. "last end-of-month date * Start from today's date LAST_EOM = SY-DATUM. * Set characters 6 and 7 (0-relative) of the YYYYMMDD string to "01", * giving the first day of the current month LAST_EOM+6(2) = '01'. * Subtract one day LAST_EOM = LAST_EOM - 1. WRITE: 'Last day of previous month was', LAST_EOM. All ABAP variables have to be explicitly declared in order to be used. They can be declared either with individual statements and explicit typing or, since ABAP 7.40, inline with inferred typing. 
Explicitly typed declaration Normally all declarations are placed at the top of the code module (program, subroutine, function) before the first executable statement; this placement is a convention and not an enforced syntax rule. The declaration consists of the name, type, length (where applicable), additional modifiers (e.g. the number of implied decimals for a packed decimal field) and optionally an initial value: * Primitive types: DATA: COUNTER TYPE I, VALIDITY TYPE I VALUE 60, TAXRATE(3) TYPE P DECIMALS 1, LASTNAME(20) TYPE C, DESCRIPTION TYPE STRING. * Dictionary types: DATA: ORIGIN TYPE COUNTRY. * Internal table: DATA: T_FLIGHTS TYPE TABLE OF FLIGHTINFO, T_LOOKUP TYPE HASHED TABLE OF FLT_LOOKUP. * Objects: DATA: BOOKING TYPE REF TO CL_FLT_BOOKING. Notice the use of the colon to chain together consecutive DATA statements. Inline declaration Since ABAP 7.40, variables can be declared inline with the following syntax: DATA(variable_name) = 'VALUE'. For this type of declaration it must be possible to infer the type statically, e.g. by method signature or database table structure. This syntax is also possible in Open SQL statements: SELECT * FROM ekko INTO @DATA(lt_ekko) WHERE ebeln EQ @lv_ebeln. ABAP Objects The ABAP language supports object-oriented programming, through a feature known as "ABAP Objects". This helps to simplify applications and make them more controllable. ABAP Objects is fully compatible with the existing language, so one can use existing statements and modularization units in programs that use ABAP Objects, and can also use ABAP Objects in existing ABAP programs. Syntax checking is stronger in ABAP Objects programs, and some syntactical forms (usually older ones) of certain statements are not permitted. Objects form capsules that combine data with its associated behavior. Objects should enable programmers to map a real problem and its proposed software solution on a one-to-one basis. 
Typical objects in a business environment are, for example, ‘Customer’, ‘Order’, or ‘Invoice’. From Release 3.1 onwards, the Business Object Repository (BOR) of SAP Web Application Server ABAP has contained examples of such objects. The BOR object model will be integrated into ABAP Objects in the next release by migrating the BOR object types to the ABAP class library. A comprehensive introduction to object orientation as a whole would go far beyond the limits of this introduction to ABAP Objects; this section therefore introduces only a selection of terms that are used universally in object orientation and also occur in ABAP Objects. Objects are instances of classes. They contain data and provide services. The data forms the attributes of the object. The services are known as methods (also known as operations or functions). Typically, methods operate on private data (the attributes, or state of the object), which is only visible to the methods of the object. Thus the attributes of an object cannot be changed directly by the user, but only by the methods of the object. This guarantees the internal consistency of the object. Classes describe objects. From a technical point of view, objects are runtime instances of a class. In theory, any number of objects based on a single class may be created. Each instance (object) of a class has a unique identity and its own set of values for its attributes. Object references are unique addresses that may be used to identify and point to objects in a program. Object references allow access to the attributes and methods of an object. In object-oriented programming, objects usually have the following properties: Encapsulation - Objects restrict the visibility of their resources (attributes and methods) to other users. 
Every object has an interface, which determines how other objects can interact with it. The implementation of the object is encapsulated, that is, invisible outside the object itself. Inheritance - An existing class may be used to derive a new class. Derived classes inherit the data and methods of the superclass. However, they can overwrite existing methods, and also add new ones. Polymorphism - Identical (identically-named) methods behave differently in different classes. In ABAP Objects, polymorphism is implemented by redefining methods during inheritance and by using constructs called interfaces. CDS Views The ABAP Core Data Services (ABAP CDS) are the implementation of the general CDS concept for AS ABAP. ABAP CDS makes it possible to define semantic data models on the central database of the application server. On AS ABAP, these models can be defined independently of the database system. The entities of these models provide enhanced access functions when compared with existing database tables and views defined in the ABAP Dictionary, making it possible to optimize Open SQL-based applications. This is particularly clear when an AS ABAP uses a SAP HANA database, since its in-memory characteristics can be exploited in an optimal manner. The data models are defined using the data definition language (DDL) and data control language (DCL) provided by ABAP CDS. The objects defined using these languages are integrated into the ABAP Dictionary and managed there too. CDS source code can only be edited in the Eclipse-based ABAP Development Tools (ADT). The Data Definition Language (DDL) and the Data Control Language (DCL) use different editors. 
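A CDS entity is written as DDL source code in ADT. A minimal sketch of a view over the SPFLI demo flight table, in the classic DDIC-based view style; the view and annotation values shown here are hypothetical examples:

```
@AbapCatalog.sqlViewName: 'ZVFLIGHTCONN'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Flight connections'
define view Z_Flight_Connections
  as select from spfli
{
  key carrid,   -- airline code
  key connid,   -- connection number
      cityfrom, -- departure city
      cityto    -- arrival city
}
```

Once activated, the view is managed in the ABAP Dictionary like any other dictionary object and can be read with Open SQL (e.g. SELECT ... FROM Z_Flight_Connections).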
The main difference with these languages is that ABAP provides a collection of statements to easily access and manipulate the contents of internal tables. Note that ABAP does not support arrays; the only way to define a multi-element data object is to use an internal table. Internal tables are a way to store variable data sets of a fixed structure in the working memory of ABAP, and provide the functionality of dynamic arrays. The data is stored on a row-by-row basis, where each row has the same structure. Internal tables are preferably used to store and format the content of database tables from within a program. Furthermore, internal tables in connection with structures are an important means of defining complex data structures in an ABAP program. The following example defines an internal table with two fields with the format of database table VBRK. * First define structured type TYPES: BEGIN OF t_vbrk, VBELN TYPE VBRK-VBELN, ZUONR TYPE VBRK-ZUONR, END OF t_vbrk. * Now define internal table of our defined type t_vbrk DATA : gt_vbrk TYPE STANDARD TABLE OF t_vbrk, gt_vbrk_2 TYPE STANDARD TABLE OF t_vbrk. "easy to define more tables * If needed, define structure (line of internal table) * Definition with type or with reference to internal table: DATA : gs_vbrk TYPE t_vbrk, gs_vbrk_2 LIKE LINE OF gt_vbrk_2. * You can also define table type if needed TYPES tt_vbrk TYPE STANDARD TABLE OF t_vbrk. History The following list gives only a rough overview of some important milestones in the history of the ABAP language. For more details, see ABAP - Release-Specific Changes. See also ERP software Secure Network Communications SAP Logon Ticket Single sign-on References External links ABAP — Keyword Documentation SAP Help Portal ABAP Development discussions, blogs, documents and videos on the SAP Community Network (SCN) Cross-platform software Fourth-generation programming languages SAP SE
34656582
https://en.wikipedia.org/wiki/Annual%20BCI%20Research%20Award
Annual BCI Research Award
The BCI Award is an annual award for innovative research in the field of brain-computer interfaces. It is organized by the BCI Award Foundation. The prize is $3000 for first, $2000 for second, and $1000 for third place. The prizes are provided by g.tec medical engineering, Cortec, Intheon and IEEE Brain. Christoph Guger and Dean Krusienski are the chairmen of the Foundation. In 2017 the awards were presented during the Graz Brain-Computer Interface Conference at the Institute of Neural Engineering of Graz University of Technology in Graz, Austria.

Past winners

The following list presents the first-place winners of the Annual BCI Research Award:

2010: Cuntai Guan, Kai Keng Ang, Karen Sui Geok Chua and Beng Ti Ang "Motor imagery-based Brain-Computer Interface robotic rehabilitation for stroke"
2011: Moritz Grosse-Wentrup and Bernhard Schölkopf "What are the neuro-physiological causes of performance variations in brain-computer interfacing?"
2012: Surjo R. Soekadar and Niels Birbaumer "Improving Efficacy of Ipsilesional Brain-Computer Interface Training in Neurorehabilitation of Chronic Stroke"
2013: M. C. Dadarlat, J. E. O'Doherty, P. N. Sabes "A learning-based approach to artificial sensory feedback: intracortical microstimulation replaces and augments vision"
2014: Katsuhiko Hamada, Hiromu Mori, Hiroyuki Shinoda, Tomasz M. Rutkowski "Airborne Ultrasonic Tactile Display BCI"
2015: Guy Hotson, David P McMullen, Matthew S. Fifer, Matthew S. Johannes, Kapil D. Katyal, Matthew P. Para, Robert Armiger, William S. Anderson, Nitish V. Thakor, Brock A. Wester, Nathan E. Crone "Individual Finger Control of the Modular Prosthetic Limb using High-Density Electrocorticography in a Human Subject"
2016: Gaurav Sharma, Nick Annetta, Dave Friedenberg, Marcie Bockbrader, Ammar Shaikhouni, W. Mysiw, Chad Bouton, Ali Rezai "An Implanted BCI for Real-Time Cortical Control of Functional Wrist and Finger Movements in a Human with Quadriplegia"
2017: S. Aliakbaryhosseinabadi, E. N. Kamavuako, N. Jiang, D. Farina, N. Mrachacz-Kersting "Online adaptive brain-computer interface with attention variations"
2018: Abidemi Bolu Ajiboye, Francis R. Willett, Daniel R. Young, William D. Memberg, Brian A. Murphy, Jonathan P. Miller, Benjamin L. Walter, Jennifer A. Sweet, Harry A. Hoyen, Michael W. Keith, Paul Hunter Peckham, John D. Simeral, John P. Donoghue, Leigh R. Hochberg, Robert F. Kirsch "Restoring Functional Reach-to-Grasp in a Person with Chronic Tetraplegia using Implanted Functional Electrical Stimulation and Intracortical Brain-Computer Interfaces"
2019: Sergey D. Stavisky, Francis R. Willett, Paymon Rezaii, Leigh R. Hochberg, Krishna V. Shenoy, Jaimie M. Henderson "Decoding speech from intracortical multielectrode arrays in dorsal motor cortex"
2020: Francis R. Willett, Donald T. Avansino, Leigh Hochberg, Jaimie Henderson, Krishna V. Shenoy "A High-Performance Handwriting BCI"
2021: Thomas Oxley, Nicholas Opie "Stentrode, a component of the Synchron brain computer interface (BCI)"

Associated events

There are also some other awards for BCI research. For example, the Berlin BCI group has hosted several Data Analysis Competitions. These competitions provide data from different types of BCIs (such as P300, ERD, or SSVEP), and competitors attempt to develop data analysis algorithms that can most accurately classify new data. The recently announced HCI Challenge instead focuses on improving the human-computer interaction within BCIs, such as through more natural and friendly interfaces. The X-Prize Foundation lists X-Prizes for BCI and Enduring Brain Computer Communication as "Concepts Under Consideration". The Gao group at Tsinghua University in Beijing coordinated an online BCI competition at a conference that they hosted in 2010, and hosted a second competition in 2012. The Annual BCI Research Award is the only general award open to any facet of BCI research.

References

Brain–computer interfacing
Science and technology awards
28084344
https://en.wikipedia.org/wiki/Ken%20Kennedy%20Award
Ken Kennedy Award
The Ken Kennedy Award, established in 2009 by the Association for Computing Machinery and the IEEE Computer Society in memory of Ken Kennedy, is awarded annually and recognizes substantial contributions to programmability and productivity in computing and substantial community service or mentoring contributions. The award includes a $5,000 honorarium, and the recipient is announced at the ACM-IEEE Supercomputing Conference.

Past recipients

Source: IEEE

2020 Vivek Sarkar. "For foundational technical contributions to the area of programmability and productivity in parallel computing, as well as leadership contributions to professional service, mentoring, and teaching."
2019 Geoffrey Charles Fox. "For foundational contributions to parallel computing methodology, algorithms and software, data analysis, and their interface with broad classes of applications, and mentoring students at minority-serving institutions."
2018 Sarita Adve. "For research contributions and leadership in the development of memory consistency models for C++ and Java, for service to numerous computer science organizations, and for exceptional mentoring."
2017 Jesus Labarta. "For his contributions to programming models and performance analysis tools for High Performance Computing."
2016 William Gropp. "For highly influential contributions to the programmability of high performance parallel and distributed computers."
2015 Katherine Yelick. "For advancing the programmability of HPC systems, strategic national leadership, and mentorship in academia and government labs."
2014 Charles E. Leiserson. "For enduring influence on parallel computing systems and their adoption into mainstream use through scholarly research and development and for distinguished mentoring of computer science leaders and students."
2013 Jack Dongarra. "For influential contributions to mathematical software, performance measurement, and parallel programming, and significant leadership and service within the HPC community."
2012 Mary Lou Soffa. "For contributions to compiler technology and software engineering, exemplary service to the profession, and life-long dedication to mentoring and improving diversity in computing."
2011 Susan L. Graham. "For foundational compilation algorithms and programming tools; research and discipline leadership; and exceptional mentoring."
2010 David Kuck. "For his pioneering contributions to compiler technology and parallel computing, the profound impact of his research on industry, and the widespread and long-lasting influence of his teaching and mentoring."
2009 Francine Berman. "For her influential leadership in the design, development and deployment of national-scale cyber infrastructure, her inspiring work as a teacher and mentor, and her exemplary service to the high performance community."

See also

List of computer science awards

References

Nomination Process
IEEE Computer Society Nomination Process

External links

ACM - IEEE CS Ken Kennedy Award

Computer science awards
Computational science
IEEE society and council awards
11996370
https://en.wikipedia.org/wiki/Electronic%20shelf%20label
Electronic shelf label
An electronic shelf label (ESL) system is used by retailers to display product pricing on shelves. The product pricing is automatically updated whenever a price is changed under the control of a central server. Typically, electronic display modules are attached to the front edge of retail shelving.

The ESL market has been expected to grow through 2024, with the global ESL industry forecast to register a CAGR of more than 16%. The majority of end users of ESL belong to the retail industry, ranging from grocery markets, hardware stores, and sports equipment to furniture, consumer appliances, and electronics and gadgets. This forecast growth is due to the increasing adoption of ESL by the retail industry, as ESL have become more accessible to retail chains thanks to the reduction in pricing over time. The inclusion of Internet of things technology in the retail industry is increasing rapidly, with over 79% of retailers in North America alone investing in ESL and people counters. 72% of these retailers in North America plan to reinvent supply chain management through adoption of ESL in their stores, thereby accelerating the market growth of ESL. Further studies show that Europe currently dominates the ESL market in terms of size, with over one-third of the total market share in 2017, due to the strong presence of domestic and multinational retailers in the region. However, the market in APAC is expected to grow at the highest CAGR within the forecast period. The ESL market in the APAC region is segmented into China, Japan, Australia, Singapore, South Korea, and the rest of the region; the only prominent countries with significant market potential are China, Japan, Australia, Singapore, and South Korea. Additionally, the expansion of large-scale retailers in the region is responsible for the expected high growth rate of the market.
A study led by ABI Research said that the global ESL market could reach US$2 billion by 2019.

Technological development of electronic shelf labels

ESL modules use electronic paper (e-paper) or liquid-crystal displays (LCD) to show the current product price to the customer. E-paper is widely used on ESLs as it provides a crisp display and supports full graphic imaging while needing no power to retain an image. A communication network allows the price display to be automatically updated whenever a product price is changed. This communication network is the true differentiation and what really makes ESL a viable solution. The wireless communication must support reasonable range, speed, battery life, and reliability. The means of wireless communication can be based on radio, infrared or even visible light communication. Currently, the ESL market leans heavily towards radio frequency based ESL solutions with additional integrations.

First generation: LCD and infrared communication

Liquid-crystal display ESL tags show values similarly to how a calculator displays numbers. Each digit on an LCD ESL tag is made up of seven bars, or segments, and a numerical value is displayed by activating different combinations of these seven segments. A disadvantage of using a liquid-crystal display tag is the difficulty of displaying certain letters. The transmitter connects to the label through diffused infrared communication: the values on the LCD tags are established by infrared light bouncing off surfaces. However, the speed of transmission is heavily compromised due to the data compression of each data packet from the transmitter. Also, LCDs need power to retain an image.

Second generation: E-paper and infrared or radio communication

Electronic paper (e-paper) is sometimes referred to as electronic ink or e-ink.
It describes a technology that mimics the appearance of ordinary ink on paper. An e-paper display is made up of capsules in a thin film, with the particles within each capsule showing a different color and carrying a different electric charge. Electrodes are placed above and below the capsule film, and when a charge is applied to an individual electrode, the color particles move to either the top or the bottom of a capsule, allowing the ESL to display a certain color. E-paper ESLs generally use infrared or radio communication to link the transmitter to the tags. As for radio, a low-frequency solution is typically used for simple tags, but with the drawback of a low data rate that makes it difficult to show a segmented image.

Third generation: Geo-location and product finder

The current generation of ESL units utilizes e-paper display technology along with wireless radio communication, and is integrated with existing retail technologies such as electronic article surveillance, digital signage, and people counters. Retailers are therefore able to upload a floor plan of the sales area into the label-management software. Once this has been done and all the hardware and software pieces are in place, consumers are automatically tracked (in real time) through the network of people-counting devices, or via their personal Bluetooth devices, in order to determine their position within the store at all times. In this way, the customer may be subjected to highly targeted, hyper-customized marketing initiatives.

General principles

A typical ESL utilizes ultra-low-power CPUs and wireless communication solutions to meet the requirements of low cost and low power, due to the high number of label tags required in an average retail store. An ESL system requires three components to function.
Label management software: Responsible for the configuration of the system, the configuration of the properties on the label itself, and for updating the database with the list of prices. Typically a centralized software that is also responsible for building and maintaining the network for data communication between the label management software and the terminal display.
Communication station: Responsible for the stability and reliability of transmission over a long distance from the label management software to the label.
Terminal display: Functions as a receiver from the communication station to display the price configured in the label management software.

The label management software processes and packs the data of product information and the configured prices into packets. The data packets are then sent to the communication station via wireless network. Once the data packets reach the communication station, they are sent on to the terminal display, which updates the price labels based on the information entered into the label management software. The label then acts on the instructions given in the data packets.

Hardware design

The ESL hardware design generally includes the circuit design of the communication station and the terminal display label. A typical chipset used to perform the basic functional requirements of an ESL is the TI MSP432.
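The packet flow between the three components described under General principles above can be sketched as a small simulation. The Python class and field names here are invented for illustration; they are not taken from any real ESL product.

```python
# Illustrative sketch of the ESL pipeline: management software packs
# price packets, a station relays them, labels apply them.
from dataclasses import dataclass

@dataclass(frozen=True)
class PricePacket:
    label_id: str
    product: str
    price_cents: int

class TerminalDisplay:
    """Receives packets and shows the configured product and price."""
    def __init__(self):
        self.shown = None

    def update(self, packet):
        self.shown = (packet.product, packet.price_cents)

class CommunicationStation:
    """Relays packets from the management software to the right label."""
    def __init__(self, displays):
        self.displays = displays  # label_id -> TerminalDisplay

    def transmit(self, packet):
        self.displays[packet.label_id].update(packet)

class LabelManagementSoftware:
    """Packs product data and prices into packets and sends them out."""
    def __init__(self, station):
        self.station = station

    def set_price(self, label_id, product, price_cents):
        self.station.transmit(PricePacket(label_id, product, price_cents))
```

A price change then touches no shelf hardware directly: `set_price` builds a packet, the station relays it, and the addressed label redraws itself.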
The communication between the communication station and the terminal display label is controlled by an RF module; a common protocol for the RF module uses the CC2500, with a communication distance of up to 30 meters. The terminal display can be based on electronic ink, electronic paper or liquid-crystal display.

Software design

The software module of an ESL is typically divided into three parts.

Application module: Contains the database and manages the information on goods and the users who control the terminal display labels.
Communication module: Contains the network and communication links required for the label management software to transmit the packets of information to the terminal display.
Display module: The display of the ESL tag that utilizes either electronic ink, electronic paper, or liquid-crystal display to output the information entered through the label management software.

The software mainly covers network management, file systems, and transmission of data, whereas the display module receives transmissions from the application module.

Usage of electronic shelf labels

Electronic shelf labels are primarily used by retailers who sell their products in stores. The display modules are usually attached to the front edge of the retail shelves and display the price of the product. Additional information such as stock levels, expiration dates, or product information may also be displayed, depending on the type of ESL.

Benefits

Automated ESL systems reduce pricing management labor costs, improve pricing accuracy and allow dynamic pricing. Dynamic pricing is the practice of varying prices to match demand, online competition, inventory levels, and shelf life of items, and to create promotions.
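As a hedged illustration of the dynamic pricing idea, the following Python sketch adjusts a base price from inventory level, shelf life, and a competitor's price. The rule and the weights are invented for the example; they are not an industry formula.

```python
# Invented example rule for dynamic pricing; the discounts and
# thresholds are illustrative only.
def dynamic_price(base_cents, stock, days_to_expiry, competitor_cents=None):
    price = base_cents
    if days_to_expiry is not None and days_to_expiry <= 2:
        price = int(price * 0.7)              # clear short-dated stock
    if stock > 100:
        price = int(price * 0.9)              # overstocked: discount
    if competitor_cents is not None:
        price = min(price, competitor_cents)  # match online competition
    return price
```

With ESL, the output of such a rule can be pushed to every affected shelf label on demand, rather than waiting for a manual re-labeling pass.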
Some advantages of using electronic shelf labels are:

Accurate pricing: Prices on shelves are updated on time and on demand to match the price files in the label management software, via a link between the in-store point-of-sale processor and the label management software. This increases pricing accuracy and avoids branding issues revolving around price integrity. As a result, it decreases the revenue lost on undervalued items, as consumers generally alert staff to overpriced items, but not the inverse.
Cost savings: With traditional pricing labels, whenever prices are changed, employees must print out labels and manually replace them in the shelf tags. ESL eliminates the need to visit each shelf and make changes, as all changes are made in the label management software and pushed to the labels digitally. This saves retailers the materials and labor of producing and replacing printed tags, and offers the ability to update prices dynamically on demand.
Product finder: Retailers are able to integrate each ESL tag with an external application to offer wayfinding capability for their products. A customer can input the product they are looking for, either through a mobile application developed by the retailer or through an external digital signage, to be directed to the product's location.
In-store heat map: Some ESL providers integrate with Bluetooth Low Energy enabled devices to track the movement of consumers and how long they remain at a particular location. This is done by displaying an image of the store's floor plan in the label management software, with a heat map showing hot spots based on the responses detected from high-traffic areas via Bluetooth.
Regulated stock levels: Inventory management is crucial for retailers. Inventory information may be displayed on an ESL through a connection with the point-of-sale processor.
The ESL can additionally display an expected date for when the stock on the shelf will be refilled. An ESL can also display a quick-response code to allow consumers to easily find the item online, or to let retailers display relevant product information to their consumers.

Disadvantages

While there are benefits to ESL, it is not without its flaws. Some disadvantages of using electronic shelf labels are:

Error propagation: As ESLs are controlled by a label management software that regulates all ESLs within a store or throughout an entire retail chain, any erroneous or undervalued price entered into the label management software will be reflected across the entire chain.
Inability to quantify return on investment: Due to the large volume of ESLs a retailer needs for its stores, the initial investment cost for a store can be considerable. This, along with the inability to quantify whether the consumer shopping experience is improved by the implementation of ESL, makes it difficult to quantify the return on investment of ESL.

References

Retail store elements
Electronic paper technology
Display technology
555304
https://en.wikipedia.org/wiki/Virtual%20private%20server
Virtual private server
A virtual private server (VPS) is a virtual machine sold as a service by an Internet hosting service. The term virtual dedicated server (VDS) has a similar meaning.

A virtual private server runs its own copy of an operating system (OS), and customers may have superuser-level access to that operating system instance, so they can install almost any software that runs on that OS. For many purposes it is functionally equivalent to a dedicated physical server and, being software-defined, can be created and configured much more easily. A virtual server costs much less than an equivalent physical server. However, as virtual servers share the underlying physical hardware with other VPSes, performance may be lower, depending on the workload of any other executing virtual machines.

Virtualization

The force driving server virtualization is similar to that which led to the development of time-sharing and multiprogramming in the past. Although the resources are still shared, as under the time-sharing model, virtualization provides a higher level of security, dependent on the type of virtualization used, as the individual virtual servers are mostly isolated from each other and may run their own full-fledged operating system which can be independently rebooted as a virtual instance.

Partitioning a single server to appear as multiple servers has been increasingly common on microcomputers since the launch of VMware ESX Server in 2001. The physical server typically runs a hypervisor which is tasked with creating, releasing, and managing the resources of "guest" operating systems, or virtual machines. These guest operating systems are allocated a share of resources of the physical server, typically in a manner in which the guest is not aware of any other physical resources save for those allocated to it by the hypervisor.
As a VPS runs its own copy of its operating system, customers have superuser-level access to that operating system instance, and can install almost any software that runs on the OS; however, due to the number of virtualization clients typically running on a single machine, a VPS generally has limited processor time, RAM, and disk space.

Motivation

Virtualization is used to decrease hardware costs by condensing a failover cluster to a single machine, thus decreasing costs dramatically while providing the same services. Server roles and features are generally designed to operate in isolation. For example, Windows Server 2019 requires a certificate authority and a domain controller to exist on independent servers with independent instances of Windows Server. This is because additional roles and features add areas of potential failure as well as visible security risks (placing a certificate authority on a domain controller poses the potential for root access to the root certificate). This directly motivates demand for virtual private servers in order to retain conflicting server roles and features on a single hosting machine. Also, the advent of virtual machine encrypted networks decreases pass-through risks that might have otherwise discouraged VPS usage as a legitimate hosting server.

Hosting

Many companies offer virtual private server hosting or virtual dedicated server hosting as an extension for web hosting services. There are several challenges to consider when licensing proprietary software in multi-tenant virtual environments. With unmanaged or self-managed hosting, the customer is left to administer their own server instance. Unmetered hosting is generally offered with no limit on the amount of data transferred on a fixed bandwidth line. Usually, unmetered hosting is offered with 10 Mbit/s, 100 Mbit/s, or 1000 Mbit/s (with some as high as 10 Gbit/s). This means that the customer is theoretically able to use ~3 TB on 10 Mbit/s or up to ~300 TB on a 1000 Mbit/s line per month, although in practice the values will be significantly less. In a virtual private server, this will be shared bandwidth and a fair usage policy should be involved.
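The ~3 TB and ~300 TB monthly ceilings quoted above are simply the line rate sustained around the clock for a month. A short Python helper makes the arithmetic explicit (decimal terabytes assumed):

```python
def max_monthly_transfer_tb(mbit_per_s, days=30):
    """Theoretical ceiling: the line rate sustained 24/7 for a month,
    in decimal terabytes (10**12 bytes)."""
    bytes_total = mbit_per_s * 1e6 / 8 * days * 86400
    return bytes_total / 1e12

# A 10 Mbit/s line caps out around 3.2 TB per month,
# a 1000 Mbit/s line around 324 TB.
```

Real-world figures fall well below these ceilings, since no customer saturates the link continuously and the bandwidth is shared between VPSes.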
Unlimited hosting is also commonly marketed but generally limited by acceptable usage policies and terms of service. Offers of unlimited disk space and bandwidth are always false due to cost, carrier capacities, and technological boundaries.

See also

Comparison of platform virtualization software
Cloud computing

References

Servers (computing)
Computer network security
Cloud computing
38365867
https://en.wikipedia.org/wiki/How%20to%20Create%20a%20Mind
How to Create a Mind
How to Create a Mind: The Secret of Human Thought Revealed is a non-fiction book about brains, both human and artificial, by the inventor and futurist Ray Kurzweil. First published in hardcover on November 13, 2012, by Viking Press, it became a New York Times Best Seller. It has received attention from The Washington Post, The New York Times and The New Yorker.

Kurzweil describes a series of thought experiments which suggest to him that the brain contains a hierarchy of pattern recognizers. Based on this he introduces his Pattern Recognition Theory of Mind (PRTM). He says the neocortex contains 300 million very general pattern recognition circuits and argues that they are responsible for most aspects of human thought. He also suggests that the brain is a "recursive probabilistic fractal" whose line of code is represented within the 30-100 million bytes of compressed code in the genome.

Kurzweil then explains that a computer version of this design could be used to create an artificial intelligence more capable than the human brain. It would employ techniques such as hidden Markov models and genetic algorithms, strategies Kurzweil used successfully in his years as a commercial developer of speech recognition software. Artificial brains will require massive computational power, so Kurzweil reviews his law of accelerating returns, which explains how the compounding effects of exponential growth will deliver the necessary hardware in only a few decades.

Critics felt the subtitle of the book, The Secret of Human Thought Revealed, overpromises. Some protested that pattern recognition does not explain the "depth and nuance" of mind, including elements like emotion and imagination. Others felt Kurzweil's ideas might be right, but that they are not original, pointing to existing work as far back as the 1980s. Yet critics admire Kurzweil's "impressive track record" and say that his writing is "refreshingly clear", containing "lucid discussions" of computing history.
Background

Kurzweil has written several futurology books including The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999) and The Singularity is Near (2005). In his books he develops the law of accelerating returns. The law is similar to Moore's Law, the persistent doubling in capacity of computer chips, but extended to all "human technological advancement, the billions of years of terrestrial evolution" and even "the entire history of the universe". Due to the exponential growth in computing technologies predicted by the law, Kurzweil says that by "the end of the 2020s" computers will have "intelligence indistinguishable to biological humans". As computational power continues to grow, machine intelligence will represent an ever-larger percentage of total intelligence on the planet. Ultimately it will lead to the Singularity, a merger between biology and technology, which Kurzweil predicts will occur in 2045. He says "There will be no distinction, post-Singularity, between human and machine...".

Kurzweil himself plans to "stick around" for the Singularity. He has written two health and nutrition books aimed at living longer; the subtitle of one is "Live Long Enough to Live Forever". One month after How to Create a Mind was published, Google announced that it had hired Kurzweil to work as Director of Engineering "on new projects involving machine learning and language processing". Kurzweil said his goal at Google is to "create a truly useful AI [artificial intelligence] that will make all of us smarter".

Content

Thought experiments

Kurzweil opens the book by reminding us of the importance of thought experiments in the development of major theories, including evolution and relativity. He sees Darwin as "a good contender" for the leading scientist of the 19th century. He suggests his own thought experiments related to how the brain thinks and remembers things.
For example, he asks the reader to recite the alphabet, but then to recite the alphabet backwards. The difficulty in going backwards suggests "our memories are sequential and in order". Later he asks the reader to visualize someone he has met only once or twice; the difficulty here suggests "there are no images, videos, or sound recordings stored in the brain", only sequences of patterns. Eventually he concludes the brain uses a hierarchy of pattern recognizers.

Pattern Recognition Theory of Mind

Kurzweil states that the neocortex contains about 300 million very general pattern recognizers, arranged in a hierarchy. For example, to recognize a written word there might be several pattern recognizers for each different letter stroke: diagonal, horizontal, vertical or curved. The output of these recognizers would feed into higher-level pattern recognizers, which look for the pattern of strokes which form a letter. Finally a word-level recognizer uses the output of the letter recognizers. All the while signals feed both "forward" and "backward". For example, if a letter is obscured, but the remaining letters strongly indicate a certain word, the word-level recognizer might suggest to the letter-level recognizer which letter to look for, and the letter level would suggest which strokes to look for. Kurzweil also discusses how listening to speech requires similar hierarchical pattern recognizers.

Kurzweil's main thesis is that these hierarchical pattern recognizers are used not just for sensing the world, but for nearly all aspects of thought. For example, Kurzweil says memory recall is based on the same patterns that were used when sensing the world in the first place. Kurzweil says that learning is critical to human intelligence. A computer version of the neocortex would initially be like a newborn baby, unable to do much. Only through repeated exposure to patterns would it eventually self-organize and become functional.
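The stroke-to-letter-to-word hierarchy, with its forward and backward signals, can be caricatured in a few lines of Python. The stroke names and two-word lexicon are invented for the example, and this toy omits the probabilistic, self-organizing machinery of the actual theory.

```python
# Toy caricature of a recognizer hierarchy: strokes -> letters -> words.
STROKES = {"A": {"diag_l", "diag_r", "horiz"}, "T": {"vert", "horiz"}}

def recognize_letter(strokes):
    """Letter level: fires when all of a letter's strokes are present."""
    for letter, pattern in STROKES.items():
        if pattern <= strokes:
            return letter
    return None

def recognize_word(stroke_groups, lexicon=("AT", "TA")):
    """Word level: also acts as a 'backward' signal, filling in a letter
    that was obscured if the rest uniquely matches a lexicon entry."""
    letters = "".join(recognize_letter(g) or "?" for g in stroke_groups)
    for word in lexicon:
        if len(word) == len(letters) and all(
                a == b or a == "?" for a, b in zip(letters, word)):
            return word
    return letters
```

Here an obscured first letter (no recognizable strokes) is still resolved by the word level, mirroring the book's example of a smudged letter being inferred from the surrounding word.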
Kurzweil writes extensively about neuroanatomy, of both the neocortex and "the old brain". He cites recent evidence that interconnections in the neocortex form a grid structure, which suggests to him a common algorithm across "all neocortical functions". Digital brain Kurzweil next writes about creating a digital brain inspired by the biological brain he has been describing. One existing effort he points to is Henry Markram's Blue Brain Project, which is attempting to create a full brain simulation by 2023. Kurzweil says the full molecular modeling they are attempting will be too slow, and that they will have to swap in simplified models to speed up initial self-organization. Kurzweil believes these large-scale simulations are valuable, but says a more explicit "functional algorithmic model" will be required to achieve human levels of intelligence. Kurzweil is unimpressed with neural networks and their potential, but he is bullish on vector quantization, hidden Markov models, and genetic algorithms, since he used all three successfully in his speech-recognition work. Kurzweil equates pattern recognizers in the neocortex with statements in the LISP programming language, which is also hierarchical. He also says his approach is similar to Jeff Hawkins' hierarchical temporal memory, although he feels the hierarchical hidden Markov models have an advantage in pattern detection. Kurzweil touches on some modern applications of advanced AI, including Google's self-driving cars, IBM's Watson, which beat the best human players at the game Jeopardy!, the Siri personal assistant in the Apple iPhone, and its competitor Google Voice Search. He contrasts the hand-coded knowledge of Douglas Lenat's Cyc project with the automated learning of systems like Google Translate, and suggests the best approach is to use a combination of both, which is why IBM's Watson was so effective. 
Kurzweil notes that John Searle has leveled his "Chinese Room" objection at Watson, arguing that Watson only manipulates symbols without meaning. Kurzweil thinks the human brain is "just" doing hierarchical statistical analysis as well. In a section entitled A Strategy for Creating a Mind, Kurzweil summarizes how he would put together a digital mind. He would start with a pattern recognizer and arrange for a hierarchy to self-organize using a hierarchical hidden Markov model. All parameters of the system would be optimized using genetic algorithms. He would add in a "critical thinking module" to scan existing patterns in the background for incompatibilities, to avoid holding inconsistent ideas. Kurzweil says the brain should have access to "open questions in every discipline" and have the ability to "master vast databases", something traditional computers are good at. He feels the final digital brain would be "as capable as biological ones of effecting changes in the world". Philosophy A digital brain with human-level intelligence raises many philosophical questions, the first of which is whether it is conscious. Kurzweil feels that consciousness is "an emergent property of a complex physical system", such that a computer emulating a brain would have the same emergent consciousness as the real brain. This is in contrast to people like John Searle, Stuart Hameroff and Roger Penrose, who believe there is something special about the physical brain that a computer version could not duplicate. Another issue is that of free will, the degree to which people are responsible for their own choices. Free will relates to determinism: if everything is strictly determined by prior states, then some would say that no one can have free will. Kurzweil holds a pragmatic belief in free will because he feels society needs it to function. 
He also suggests that quantum mechanics may provide "a continual source of uncertainty at the most basic level of reality" such that determinism does not exist. Finally Kurzweil addresses identity with futuristic scenarios involving cloning a nonbiological version of someone, or gradually turning that same person into a nonbiological entity one surgery at a time. In the first case it is tempting to say the clone is not the original person, because the original person still exists. Kurzweil instead concludes both versions are equally the same person. He explains that an advantage of nonbiological systems is "the ability to be copied, backed up, and re-created" and this is just something people will have to get used to. Kurzweil believes identity "is preserved through continuity of the pattern of information that makes us" and that humans are not bound to a specific "substrate" like biology. Law of accelerating returns The law of accelerating returns is the basis for all of these speculations about creating a digital brain. It explains why computational capacity will continue to increase unabated even after Moore's Law expires, which Kurzweil predicts will happen around 2020. Integrated circuits, the current method of creating computer chips, will fade from the limelight, while some new more advanced technology will pick up the slack. It is this new technology that will get us to the massive levels of computation needed to create an artificial brain. As exponential progress continues into and beyond the Singularity, Kurzweil says "we will merge with the intelligent technology we are creating". From there intelligence will expand outward rapidly. Kurzweil even wonders whether the speed of light is really a firm limit to civilization's ability to colonize the universe. 
Reception Analysis Simson Garfinkel, an entrepreneur and professor of computer science at the Naval Postgraduate School, says Kurzweil's pattern recognition theory of mind (PRTM) is misnamed because of the word "theory"; he feels it is not a theory, since it cannot be tested. Garfinkel rejects Kurzweil's one-algorithm approach, saying instead that "the brain is likely to have many more secrets and algorithms than the one Kurzweil describes". Garfinkel caricatures Kurzweil's plan for artificial intelligence as "build something that can learn, then give it stuff to learn", which he thinks is hardly the "secret of human thought" promised by the subtitle of the book. Gary Marcus, a research psychologist and professor at New York University, says only the name PRTM is new. He says the basic theory behind PRTM is "in the spirit of" a model of vision known as the neocognitron, introduced in 1980. He also says PRTM even more strongly resembles Hierarchical Temporal Memory, promoted by Jeff Hawkins in recent years. Marcus feels any theory like this needs to be proven with an actual working computer model, and to that end he says that "a whole slew" of machines have been programmed with an approach similar to PRTM, and they have often performed poorly. Colin McGinn, a philosophy professor at the University of Miami, asserted in The New York Review of Books that "pattern recognition pertains to perception specifically, not to all mental activity". While Kurzweil does say "memories are stored as sequences of patterns", McGinn asks about "emotion, imagination, reasoning, willing, intending, calculating, silently talking to oneself, feeling pain and pleasure, itches, and mood", insisting these have nothing to do with pattern recognition. McGinn is also critical of the "homunculus language" Kurzweil uses, the anthropomorphization of anatomical parts like neurons. 
Kurzweil writes that a neuron "shouts" when it "sees" a pattern, where McGinn would prefer he say a neuron "fires" when it receives certain stimuli. In McGinn's mind only conscious entities can "recognize" anything; a bundle of neurons cannot. Finally he objects to Kurzweil's "law" of accelerating change, insisting it is not a law, but just a "fortunate historical fact about the twentieth century". In 2015, Kurzweil's theory was extended to a Pattern Activation/Recognition Theory of Mind with a stochastic model of self-describing neural circuits. Reviews Garfinkel says Kurzweil is at his best with the thought experiments early in the book, but says the "warmth and humanitarianism" evident in Kurzweil's talks is missing. Marcus applauds Kurzweil for "lucid discussion" of Alan Turing and John von Neumann and was impressed by his descriptions of computer algorithms and the detailed histories of Kurzweil's own companies. Matthew Feeney, assistant editor for Reason, was disappointed in how briefly Kurzweil dealt with the philosophical aspects of the mind-body problem and the ethical implications of machines which appear to be conscious. He does say Kurzweil's "optimism about an AI-assisted future is contagious." Drew DeSilver, business reporter at the Seattle Times, says the first half of the book "has all the pizazz and drive of an engineering manual", but says Kurzweil's description of how the Jeopardy! computer champion Watson worked "is eye-opening and refreshingly clear". McGinn says the book is "interesting in places, fairly readable, moderately informative, but wildly overstated." He mocks the book's subtitle by writing "All is revealed!" after paraphrasing Kurzweil's pattern recognition theory of mind. Speaking as a philosopher, McGinn feels that Kurzweil is "way out of his depth" when discussing Wittgenstein. 
Matt Ridley, journalist and author, wrote in The Wall Street Journal that Kurzweil "has a more impressive track record of predicting technological progress than most" and therefore he feels "it would be foolish, not wise, to bet against the emulation of the human brain in silicon within a couple of decades". Translations Spanish: "Cómo crear una mente. El secreto del pensamiento humano" (Lola Books, 2013). German: "Das Geheimnis des menschlichen Denkens. Einblicke in das Reverse Engineering des Gehirns" (Lola Books, 2014). Notes References External links C-SPAN After Words with Ray Kurzweil (video) Science Friday, Is It Possible to Create a Mind? Books by Ray Kurzweil 2012 non-fiction books Futurology books Transhumanist books Neuroscience books Artificial intelligence publications Brain Books about cognition
https://en.wikipedia.org/wiki/Confused%20deputy%20problem
Confused deputy problem
In information security, the confused deputy problem is often cited as an example of why capability-based security is important. A confused deputy is a legitimate, more privileged computer program that is tricked by another program into misusing its authority on the system. It is a specific type of privilege escalation. Capability systems protect against the confused deputy problem, whereas access control list-based systems do not. Example In the original example of a confused deputy, there is a compiler program provided on a commercial timesharing service. Users could run the compiler and optionally specify a filename where it would write debugging output, and the compiler would be able to write to that file if the user had permission to write there. The compiler also collected statistics about language feature usage. Those statistics were stored in a file called "(SYSX)STAT", in the directory "SYSX". To make this possible, the compiler program was given permission to write to files in SYSX. But there were other files in SYSX: in particular, the system's billing information was stored in a file "(SYSX)BILL". A user ran the compiler and named "(SYSX)BILL" as the desired debugging output file. This produced a confused deputy problem. The compiler made a request to the operating system to open (SYSX)BILL. Even though the user did not have access to that file, the compiler did, so the open succeeded. The compiler wrote the compilation output to the file (here "(SYSX)BILL") as normal, overwriting it, and the billing information was destroyed. The confused deputy In this example, the compiler program is the deputy because it is acting at the request of the user. The program is seen as 'confused' because it was tricked into overwriting the system's billing file. Whenever a program tries to access a file, the operating system needs to know two things: which file the program is asking for, and whether the program has permission to access the file. 
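The scenario can be sketched as a short program. The directory and file names mirror the example; the "compiler" itself is a simplified stand-in, and the permission model is simulated (the script owns everything it creates), but the missing check is the same:

```python
import os

# Sketch of the confused deputy: the "compiler" (deputy) may write
# anywhere under SYSX; the requesting user may not, but gets to name
# the debug-output file.  The permission model here is simulated.

SYSX = "SYSX"
os.makedirs(SYSX, exist_ok=True)
with open(os.path.join(SYSX, "BILL"), "w") as f:
    f.write("billing records\n")          # file the user must not touch

def compile_program(source: str, debug_path: str) -> None:
    # The deputy opens debug_path using ITS OWN authority.  It never
    # asks whether the requesting *user* may write there -- this is
    # the missing check at the heart of the problem.
    with open(debug_path, "w") as f:
        f.write(f"debug output for {source}\n")

# A user who cannot write (SYSX)BILL directly simply names it as the
# debug file, and the deputy clobbers it on the user's behalf.
compile_program("prog.src", os.path.join(SYSX, "BILL"))

with open(os.path.join(SYSX, "BILL")) as f:
    print(f.read(), end="")               # the billing data is gone
```

The deputy is not malicious; it simply has no way to distinguish its own authority from the user's once it is handed a bare name.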
In the example, the file is designated by its name, “(SYSX)BILL”. The program receives the file name from the user, but does not know whether the user had permission to write the file. When the program opens the file, the system uses the program's permission, not the user's. When the file name was passed from the user to the program, the permission did not go along with it; the permission was increased by the system silently and automatically. It is not essential to the attack that the billing file be designated by a name represented as a string. The essential points are that: the designator for the file does not carry the full authority needed to access the file; the program's own permission to access the file is used implicitly. Other examples A cross-site request forgery (CSRF) is an example of a confused deputy attack that uses the web browser to perform sensitive actions against a web application. A common form of this attack occurs when a web application uses a cookie to authenticate all requests transmitted by a browser. Using JavaScript, an attacker can force a browser into transmitting authenticated HTTP requests. The Samy computer worm used cross-site scripting (XSS) to turn the browser's authenticated MySpace session into a confused deputy. Using XSS the worm forced the browser into posting an executable copy of the worm as a MySpace message which was then viewed and executed by friends of the infected user. Clickjacking is an attack where the user acts as the confused deputy. In this attack a user thinks they are harmlessly browsing a website (an attacker-controlled website) but they are in fact tricked into performing sensitive actions on another website. An FTP bounce attack can allow an attacker to connect indirectly to TCP ports to which the attacker's machine has no access, using a remote FTP server as the confused deputy. Another example relates to personal firewall software. It can restrict Internet access for specific applications. 
Some applications circumvent this by starting a browser with instructions to access a specific URL. The browser has authority to open a network connection, even though the application does not. Firewall software can attempt to address this by prompting the user in cases where one program starts another which then accesses the network. However, the user frequently does not have sufficient information to determine whether such an access is legitimate—false positives are common, and there is a substantial risk that even sophisticated users will become habituated to clicking "OK" to these prompts. Not every program that misuses authority is a confused deputy. Sometimes misuse of authority is simply a result of a program error. The confused deputy problem occurs when the designation of an object is passed from one program to another, and the associated permission changes unintentionally, without any explicit action by either party. It is insidious because neither party did anything explicit to change the authority. Solutions In some systems it is possible to ask the operating system to open a file using the permissions of another client. This solution has some drawbacks: It requires explicit attention to security by the server. A naive or careless server might not take this extra step. It becomes more difficult to identify the correct permission if the server is in turn the client of another service and wants to pass along access to the file. It requires the client to trust the server to not abuse the borrowed permissions. Note that intersecting the server and client's permissions does not solve the problem either, because the server may then have to be given very wide permissions (all of the time, rather than those needed for a given request) in order to act for arbitrary clients. The simplest way to solve the confused deputy problem is to bundle together the designation of an object and the permission to access that object. This is exactly what a capability is. 
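A minimal sketch of the capability-style fix for the compiler example: instead of a file name, the client hands the deputy an already-open file object (standing in for a file descriptor), so designation and permission travel together. The file names are illustrative:

```python
# Capability-style fix for the compiler example: the deputy receives
# an already-open file object rather than a name.  Designation and
# permission now travel together, so the deputy cannot be confused
# into opening anything on its own.  Names are illustrative.

def compile_program(source: str, debug_file) -> None:
    # The deputy never sees a name and never calls open(); it can
    # only write to the one capability it was handed.
    debug_file.write(f"debug output for {source}\n")

# The user can pass only capabilities they already hold, e.g. a file
# they themselves were able to open for writing:
with open("user_debug.log", "w") as user_file:
    compile_program("prog.src", user_file)

with open("user_debug.log") as f:
    print(f.read(), end="")
```

A user without permission to open the billing file for writing would fail at `open()`, under the user's own authority, before the deputy is ever involved.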
Using capability security in the compiler example, the client would pass to the server a capability to the output file, such as a file descriptor, rather than the name of the file. Since the client lacks a capability to the billing file, it cannot designate that file for output. In the cross-site request forgery example, a URL supplied "cross"-site would include its own authority independent of that of the client of the web browser. See also Setuid executables in Unix Ambient authority References External links Norman Hardy, The Confused Deputy: (or why capabilities might have been invented), ACM SIGOPS Operating Systems Review, Volume 22, Issue 4 (October 1988). ACM published document. Document text on Norm Hardy's website. Document text on University of Pennsylvania's website. Citeseer cross reference. Capability Theory Notes from several sources (collated by Norm Hardy). Everything2: Confused Deputy (some introductory level text). Computer security
https://en.wikipedia.org/wiki/Dag%20Sj%C3%B8berg
Dag Sjøberg
Dag I.K. Sjøberg (born 24 January 1961) is a Norwegian computer scientist, software engineer, and politician. He is a professor of software engineering at the Department of Informatics at the University of Oslo. From 2001 to 2008 he was Research Director at Simula Research Laboratory and headed the Department of Software Engineering. Life Sjøberg took his master's degree in Computer Science (cand.scient.) at the University of Oslo in 1987, and a doctorate (PhD) in Computing Science at the University of Glasgow in 1993. In 1999, Sjøberg established the research group Industrial System Development (ISU) at the Department of Informatics at the University of Oslo. In 2002 Sjøberg was awarded The Simula Researcher of the Year Award by Managing Director Aslak Tveito. Sjøberg has also been Deputy Chair of the Urban Development, Environment and Transport Committee for The Green Party in Nordstrand since 2016. Publications D.I.K. Sjøberg, A. Johnsen and J. Solberg. Quantifying the Effect of Using Kanban versus Scrum: A Case Study. IEEE Software, 29(5):47-53, September/October 2012. D.I.K. Sjøberg. Confronting the Myth of Rapid Obsolescence in Computing Research, Contributed Article, Communications of the ACM 53(9):62-67, 2010. B.C.D. Anda, D.I.K. Sjøberg and A. Mockus. Variability and Reproducibility in Software Engineering: A Study of Four Companies that Developed the Same System, IEEE Transactions on Software Engineering 35(3):407-429, 2009. D.I.K. Sjøberg, T. Dybå and M. Jørgensen. The Future of Empirical Methods in Software Engineering Research, In: Future of Software Engineering (FOSE '07), pages 358-378, IEEE-CS Press, 2007. E. Arisholm, H.E. Gallis, T. Dybå and D.I.K. Sjøberg. Evaluating Pair Programming with Respect to System Complexity and Programmer Expertise, IEEE Transactions on Software Engineering 33(2):65-86, 2007. D.I.K. Sjøberg, J.E. Hannay, O. Hansen, V.B. Kampenes, A. Karahasanovic, N.K. Liborg and A.C. Rekdal. 
A Survey of Controlled Experiments in Software Engineering, IEEE Transactions on Software Engineering 31(9):733-753, 2005. References External links 1961 births Living people University of Oslo faculty Norwegian computer scientists
https://en.wikipedia.org/wiki/Intel
Intel
Intel Corporation, stylized as intel, is an American multinational corporation and technology company headquartered in Santa Clara, California. It is the world's largest semiconductor chip manufacturer by revenue, and is the developer of the x86 series of microprocessors, the processors found in most personal computers (PCs). Incorporated in Delaware, Intel ranked No. 45 in the 2020 Fortune 500 list of the largest United States corporations by total revenue for nearly a decade, from 2007 to 2016 fiscal years. Intel supplies microprocessors for computer system manufacturers such as Acer, Lenovo, HP, and Dell. Intel also manufactures motherboard chipsets, network interface controllers and integrated circuits, flash memory, graphics chips, embedded processors and other devices related to communications and computing. Intel was founded on July 18, 1968, by semiconductor pioneers Gordon Moore (of Moore's law) and Robert Noyce, and is associated with the executive leadership and vision of Andrew Grove. Intel was a key component of the rise of Silicon Valley as a high-tech center. The company's name was conceived as a portmanteau of the words integrated and electronics, with co-founder Noyce having been a key inventor of the integrated circuit (microchip). The fact that "intel" is the term for intelligence information also made the name appropriate. Intel was an early developer of SRAM and DRAM memory chips, which represented the majority of its business until 1981. Although Intel created the world's first commercial microprocessor chip in 1971, it was not until the success of the personal computer (PC) that this became its primary business. During the 1990s, Intel invested heavily in new microprocessor designs, fostering the rapid growth of the computer industry. 
During this period, Intel became the dominant supplier of microprocessors for PCs and was known for aggressive and anti-competitive tactics in defense of its market position, particularly against Advanced Micro Devices (AMD), as well as a struggle with Microsoft for control over the direction of the PC industry. The Open Source Technology Center at Intel hosts PowerTOP and LatencyTOP, and supports other open-source projects such as Wayland, Mesa, Threading Building Blocks (TBB), and Xen. Current operations Operating segments Client Computing Group (51.8% of 2020 revenues): produces PC processors and related components. Data Center Group (33.7% of 2020 revenues): produces hardware components used in server, network, and storage platforms. Non-Volatile Memory Solutions Group (6.9% of 2020 revenues): produces components for solid-state drives: NAND flash memory and 3D XPoint (Optane). Internet of Things Group (5.2% of 2020 revenues): offers platforms designed for retail, transportation, industrial, buildings and home use. Programmable Solutions Group (2.4% of 2020 revenues): manufactures programmable semiconductors (primarily FPGAs). Customers In 2020, Dell accounted for about 17% of Intel's total revenues, Lenovo accounted for 12% of total revenues, and HP Inc. accounted for 10% of total revenues. As of August 2021, the US Department of Defense was another large customer for Intel. Market share According to IDC, while Intel enjoyed the biggest market share in both the overall worldwide PC microprocessor market (73.3%) and the mobile PC microprocessor market (80.4%) in the second quarter of 2011, the numbers decreased by 1.5% and 1.9% compared to the first quarter of 2011. Intel's market share decreased significantly in the enthusiast market as of 2019, and the company faced delays for its 10 nm products. According to former Intel CEO Bob Swan, the delay was caused by the company's overly aggressive strategy for moving to its next node. 
Historical market share In the 1980s, Intel was among the top ten sellers of semiconductors (10th in 1987) in the world. In 1992, Intel became the biggest chip maker by revenue and held the position until 2018, when it was surpassed by Samsung, but Intel returned to its former position the year after. Other top semiconductor companies include TSMC, Advanced Micro Devices, Samsung, Texas Instruments, Toshiba and STMicroelectronics. Major competitors Intel's competitors in PC chipsets included Advanced Micro Devices (AMD), VIA Technologies, Silicon Integrated Systems, and Nvidia. Intel's competitors in networking include NXP Semiconductors, Infineon, Broadcom Limited, Marvell Technology Group and Applied Micro Circuits Corporation, and competitors in flash memory included Spansion, Samsung Electronics, Qimonda, Toshiba, STMicroelectronics, and SK Hynix. The only major competitor in the x86 processor market is AMD, with which Intel has had full cross-licensing agreements since 1976: each partner can use the other's patented technological innovations without charge after a certain time. However, the cross-licensing agreement is canceled in the event of an AMD bankruptcy or takeover. Some smaller competitors such as VIA Technologies produce low-power x86 processors for small-factor computers and portable equipment. However, the advent of such mobile computing devices, in particular smartphones, has in recent years led to a decline in PC sales. Since over 95% of the world's smartphones currently use processors designed by ARM Holdings, ARM has become a major competitor for Intel's processor market. ARM is also planning to make inroads into the PC and server market. Intel has been involved in several disputes regarding violation of antitrust laws, which are noted below. Carbon footprint Intel reported total CO2e emissions (direct and indirect) for the twelve months ending 31 December 2020 at 2,882 kt, an increase of 94 kt (3.4%) year over year. 
Intel plans to reduce carbon emissions 10% by 2030 from a 2020 base year. Corporate history Origins Intel was founded in Mountain View, California, in 1968 by Gordon E. Moore (known for "Moore's law"), a chemist, and Robert Noyce, a physicist and co-inventor of the integrated circuit. Arthur Rock (investor and venture capitalist) helped them find investors, while Max Palevsky was on the board from an early stage. Moore and Noyce had left Fairchild Semiconductor to found Intel. Rock was not an employee, but he was an investor and was chairman of the board. The total initial investment in Intel was $2.5 million in convertible debentures and $10,000 from Rock. Just two years later, Intel became a public company via an initial public offering (IPO), raising $6.8 million ($23.50 per share). Intel's third employee was Andy Grove, a chemical engineer, who later ran the company through much of the 1980s and the high-growth 1990s. In deciding on a name, Moore and Noyce quickly rejected "Moore Noyce", a near homophone for "more noise" – an ill-suited name for an electronics company, since noise in electronics is usually undesirable and typically associated with bad interference. Instead, they founded the company as NM Electronics (or MN Electronics) on July 18, 1968, but by the end of the month had changed the name to Intel, which stood for Integrated Electronics. Since "Intel" was already trademarked by the hotel chain Intelco, they had to buy the rights for the name. Early history At its founding, Intel was distinguished by its ability to make logic circuits using semiconductor devices. The founders' goal was the semiconductor memory market, widely predicted to replace magnetic-core memory. 
Its first product, a quick entry into the small, high-speed memory market in 1969, was the 3101 Schottky TTL bipolar 64-bit static random-access memory (SRAM), which was nearly twice as fast as earlier Schottky diode implementations by Fairchild and the Electrotechnical Laboratory in Tsukuba, Japan. In the same year, Intel also produced the 3301 Schottky bipolar 1024-bit read-only memory (ROM) and the first commercial metal–oxide–semiconductor field-effect transistor (MOSFET) silicon gate SRAM chip, the 256-bit 1101. While the 1101 was a significant advance, its complex static cell structure made it too slow and costly for mainframe memories. The three-transistor cell implemented in the first commercially available dynamic random-access memory (DRAM), the 1103 released in 1970, solved these issues. The 1103 was the bestselling semiconductor memory chip in the world by 1972, as it replaced core memory in many applications. Intel's business grew during the 1970s as it expanded and improved its manufacturing processes and produced a wider range of products, still dominated by various memory devices. Intel created the first commercially available microprocessor (Intel 4004) in 1971. The microprocessor represented a notable advance in the technology of integrated circuitry, as it miniaturized the central processing unit of a computer, which then made it possible for small machines to perform calculations that in the past only very large machines could do. Considerable technological innovation was needed before the microprocessor could actually become the basis of what was first known as a "mini computer" and then known as a "personal computer". Intel also created one of the first microcomputers in 1973. 
Intel opened its first international manufacturing facility in 1972, in Malaysia, which would host multiple Intel operations, before opening assembly facilities and semiconductor plants in Singapore and Jerusalem in the early 1980s, and manufacturing and development centres in China, India and Costa Rica in the 1990s. By the early 1980s, its business was dominated by dynamic random-access memory (DRAM) chips. However, increased competition from Japanese semiconductor manufacturers had, by 1983, dramatically reduced the profitability of this market. The growing success of the IBM personal computer, based on an Intel microprocessor, was among factors that convinced Gordon Moore (CEO since 1975) to shift the company's focus to microprocessors and to change fundamental aspects of that business model. Moore's decision to sole-source Intel's 386 chip played into the company's continuing success. By the end of the 1980s, buoyed by its fortuitous position as microprocessor supplier to IBM and IBM's competitors within the rapidly growing personal computer market, Intel embarked on a 10-year period of unprecedented growth as the primary (and most profitable) hardware supplier to the PC industry, part of the winning 'Wintel' combination. Moore handed over to Andy Grove in 1987. By launching its Intel Inside marketing campaign in 1991, Intel was able to associate brand loyalty with consumer selection, so that by the end of the 1990s, its line of Pentium processors had become a household name. Challenges to dominance (2000s) After 2000, growth in demand for high-end microprocessors slowed. Competitors, notably AMD (Intel's largest competitor in its primary x86 architecture market), garnered significant market share, initially in low-end and mid-range processors but ultimately across the product range, and Intel's dominant position in its core market was greatly reduced, mostly due to controversial NetBurst microarchitecture. 
In the early 2000s, then-CEO Craig Barrett attempted to diversify the company's business beyond semiconductors, but few of these activities were ultimately successful. Litigation Intel had also for a number of years been embroiled in litigation. US law did not initially recognize intellectual property rights related to microprocessor topology (circuit layouts), until the Semiconductor Chip Protection Act of 1984, a law sought by Intel and the Semiconductor Industry Association (SIA). During the late 1980s and 1990s (after this law was passed), Intel also sued companies that tried to develop competitor chips to the 80386 CPU. The lawsuits were noted to significantly burden the competition with legal bills, even if Intel lost the suits. Antitrust allegations had been simmering since the early 1990s and had been the cause of one lawsuit against Intel in 1991. In 2004 and 2005, AMD brought further claims against Intel related to unfair competition. Reorganization and success with Intel Core (2005–2015) In 2005, CEO Paul Otellini reorganized the company to refocus its core processor and chipset business on platforms (enterprise, digital home, digital health, and mobility). On June 6, 2005, Steve Jobs, then CEO of Apple, announced that Apple would be using Intel's x86 processors for its Macintosh computers, switching from the PowerPC architecture developed by the AIM alliance. This was seen as a win for Intel, although an analyst called the move "risky" and "foolish", as Intel's offerings at the time were considered to be behind those of AMD and IBM. In 2006, Intel unveiled its Core microarchitecture to widespread critical acclaim; the product range was perceived as an exceptional leap in processor performance that at a stroke regained much of its leadership of the field. In 2008, Intel had another "tick" when it introduced the Penryn microarchitecture, fabricated using the 45 nm process node. 
Later that year, Intel released a processor with the Nehalem architecture to positive reception. On June 27, 2006, the sale of Intel's XScale assets was announced. Intel agreed to sell the XScale processor business to Marvell Technology Group for an estimated $600 million and the assumption of unspecified liabilities. The move was intended to permit Intel to focus its resources on its core x86 and server businesses, and the acquisition completed on November 9, 2006. In 2008, Intel spun off key assets of a solar startup business effort to form an independent company, SpectraWatt Inc. In 2011, SpectraWatt filed for bankruptcy. In February 2011, Intel began to build a new microprocessor manufacturing facility in Chandler, Arizona, completed in 2013 at a cost of $5 billion. The building is now the 10 nm-certified Fab 42 and is connected to the other Fabs (12, 22, 32) on Ocotillo Campus via an enclosed bridge known as the Link. The company produces three-quarters of its products in the United States, although three-quarters of its revenue comes from overseas. The Alliance for Affordable Internet (A4AI) was launched in October 2013 and Intel is part of the coalition of public and private organisations that also includes Facebook, Google, and Microsoft. Led by Sir Tim Berners-Lee, the A4AI seeks to make Internet access more affordable so that access is broadened in the developing world, where only 31% of people are online. Intel will help to decrease Internet access prices so that they fall below the UN Broadband Commission's worldwide target of 5% of monthly income. Attempts at entering the smartphone market In April 2011, Intel began a pilot project with ZTE Corporation to produce smartphones using the Intel Atom processor for China's domestic market. In December 2011, Intel announced that it reorganized several of its business units into a new mobile and communications group that would be responsible for the company's smartphone, tablet, and wireless efforts.
Intel planned to introduce Medfield – a processor for tablets and smartphones – to the market in 2012, as an effort to compete with ARM. As a 32-nanometer processor, Medfield is designed to be energy-efficient, which is one of the core features in ARM's chips. At the Intel Developers Forum (IDF) 2011 in San Francisco, Intel's partnership with Google was announced. In January 2012, Google announced Android 2.3, supporting Intel's Atom microprocessor. In 2013, Intel's Kirk Skaugen said that Intel's exclusive focus on Microsoft platforms was a thing of the past and that they would now support all "tier-one operating systems" such as Linux, Android, iOS, and Chrome. In 2014, Intel cut thousands of employees in response to "evolving market trends", and offered to subsidize manufacturers for the extra costs involved in using Intel chips in their tablets. In April 2016, Intel cancelled the SoFIA platform and the Broxton Atom SoC for smartphones, effectively leaving the smartphone market. Intel Custom Foundry Finding itself with excess fab capacity after the failure of the Ultrabook to gain market traction and with PC sales declining, in 2013 Intel reached a foundry agreement to produce chips for Altera using a 14 nm process. General Manager of Intel's custom foundry division Sunit Rikhi indicated that Intel would pursue further such deals in the future. This was after poor sales of Windows 8 hardware caused a major retrenchment for most of the major semiconductor manufacturers, except for Qualcomm, which continued to see healthy purchases from its largest customer, Apple. As of July 2013, five companies were using Intel's fabs via the Intel Custom Foundry division: Achronix, Tabula, Netronome, Microsemi, and Panasonic. Most are field-programmable gate array (FPGA) makers, but Netronome designs network processors. Only Achronix began shipping chips made by Intel using the 22 nm Tri-Gate process. Several other customers also exist but were not announced at the time.
The foundry business was closed in 2018 due to Intel's issues with its manufacturing. Security and manufacturing challenges (2016–2021) Intel continued its tick-tock model of a microarchitecture change followed by a die shrink until the 6th generation Core family based on the Skylake microarchitecture. This model was deprecated in 2016, with the release of the seventh generation Core family (codenamed Kaby Lake), ushering in the process–architecture–optimization model. As Intel struggled to shrink their process node from 14 nm to 10 nm, processor development slowed down and the company continued to use the Skylake microarchitecture until 2020, albeit with optimizations. 10 nm process node issues While Intel originally planned to introduce 10 nm products in 2016, it later became apparent that there were manufacturing issues with the node. The first microprocessor under that node, Cannon Lake (marketed as 8th generation Core), was released in small quantities in 2018. The company first delayed the mass production of their 10 nm products to 2017. They later delayed mass production to 2018, and then to 2019. Despite rumors of the process being cancelled, Intel finally introduced mass-produced 10 nm 10th generation Intel Core mobile processors (codenamed "Ice Lake") in September 2019. Intel later acknowledged that their strategy to shrink to 10 nm was too aggressive. While other foundries used up to four steps in 10 nm or 7 nm processes, the company's 10 nm process required up to five or six multi-pattern steps. In addition, Intel's 10 nm process is denser than its counterpart processes from other foundries. Since Intel's microarchitecture and process node development were coupled, processor development stagnated. Security flaws In early January 2018, it was reported that all Intel processors made since 1995, excluding Intel Itanium and pre-2013 Intel Atom processors, have been subject to two security flaws dubbed Meltdown and Spectre. 
It is believed that "hundreds of millions" of systems could be affected by these flaws. More security flaws were disclosed on May 3, 2018, on August 14, 2018, on January 18, 2019, and on March 5, 2020. On March 15, 2018, Intel reported that it would redesign its CPUs to protect against the Spectre security vulnerability, and would release the redesigned processors later in 2018. Both Meltdown and Spectre patches have been reported to slow down performance, especially on older computers. Renewed competition and other developments (2018–present) Due to Intel's issues with its 10 nm process node and the company's slow processor development, the company found itself in a market with intense competition. The company's main competitor, AMD, introduced the Zen microarchitecture and a new chiplet-based design to critical acclaim. Since its introduction, AMD, once unable to compete with Intel in the high-end CPU market, has undergone a resurgence, and Intel's dominance and market share have considerably decreased. In addition, Apple is switching from the x86 architecture and Intel processors to their own Apple silicon for their Macintosh computers from 2020 onwards. The transition is expected to affect Intel minimally; however, it might prompt other PC manufacturers to reevaluate their reliance on Intel and the x86 architecture. 'IDM 2.0' strategy On March 23, 2021, CEO Pat Gelsinger laid out new plans for the company. These include a new strategy, called IDM 2.0, that includes investments in manufacturing facilities, use of both internal and external foundries, and a new foundry business called Intel Foundry Services (IFS), a standalone business unit. Unlike Intel Custom Foundry, IFS will offer a combination of packaging and process technology, and Intel's IP portfolio including x86 cores. Other plans for the company include a partnership with IBM and a new event for developers and engineers, called "Intel ON".
Gelsinger also confirmed that Intel's 7 nm process is on track, and that the first products with 7 nm (now called Intel 4) are Ponte Vecchio and Meteor Lake. In January 2022, Intel reportedly selected New Albany, Ohio, near Columbus, Ohio, as the site for a major new manufacturing facility. The facility will cost at least $20 billion. The company expects the facility to begin producing chips by 2025. Product and market history SRAMs, DRAMs, and the microprocessor Intel's first products were shift register memory and random-access memory integrated circuits, and Intel grew to be a leader in the fiercely competitive DRAM, SRAM, and ROM markets throughout the 1970s. Concurrently, Intel engineers Marcian Hoff, Federico Faggin, Stanley Mazor and Masatoshi Shima invented Intel's first microprocessor. Originally developed for the Japanese company Busicom to replace a number of ASICs in a calculator already produced by Busicom, the Intel 4004 was introduced to the mass market on November 15, 1971, though the microprocessor did not become the core of Intel's business until the mid-1980s. (Note: Intel is usually given credit, along with Texas Instruments, for the almost-simultaneous invention of the microprocessor.) In 1983, at the dawn of the personal computer era, Intel's profits came under increased pressure from Japanese memory-chip manufacturers, and then-president Andy Grove focused the company on microprocessors. Grove described this transition in the book Only the Paranoid Survive. A key element of his plan was the notion, then considered radical, of becoming the single source for successors to the popular 8086 microprocessor. Until then, the manufacture of complex integrated circuits was not reliable enough for customers to depend on a single supplier, but Grove began producing processors in three geographically distinct factories, and ceased licensing the chip designs to competitors such as AMD.
When the PC industry boomed in the late 1980s and 1990s, Intel was one of the primary beneficiaries. Early x86 processors and the IBM PC Despite the ultimate importance of the microprocessor, the 4004 and its successors the 8008 and the 8080 were never major revenue contributors at Intel. When the next processor, the 8086 (and its variant the 8088), was completed in 1978, Intel embarked on a major marketing and sales campaign for that chip nicknamed "Operation Crush", intended to win as many customers for the processor as possible. One design win was the newly created IBM PC division, though the importance of this was not fully realized at the time. IBM introduced its personal computer in 1981, and it was rapidly successful. In 1982, Intel created the 80286 microprocessor, which, two years later, was used in the IBM PC/AT. Compaq, the first IBM PC "clone" manufacturer, produced a desktop system based on the faster 80286 processor in 1985 and in 1986 quickly followed with the first 80386-based system, beating IBM and establishing a competitive market for PC-compatible systems and setting up Intel as a key component supplier. In 1975, the company had started a project to develop a highly advanced 32-bit microprocessor, finally released in 1981 as the Intel iAPX 432. The project was too ambitious and the processor was never able to meet its performance objectives, and it failed in the marketplace. Intel extended the x86 architecture to 32 bits instead. 386 microprocessor During this period Andrew Grove dramatically redirected the company, closing much of its DRAM business and directing resources to the microprocessor business. Of perhaps greater importance was his decision to "single-source" the 386 microprocessor. Prior to this, microprocessor manufacturing was in its infancy, and manufacturing problems frequently reduced or stopped production, interrupting supplies to customers.
To mitigate this risk, these customers typically insisted that multiple manufacturers produce chips they could use to ensure a consistent supply. The 8080 and 8086-series microprocessors were produced by several companies, notably AMD, with which Intel had a technology-sharing contract. Grove made the decision not to license the 386 design to other manufacturers, instead producing it in three geographically distinct factories: Santa Clara, California; Hillsboro, Oregon; and Chandler, a suburb of Phoenix, Arizona. He convinced customers that this would ensure consistent delivery. In doing this, Intel breached its contract with AMD, which sued and was paid millions of dollars in damages but could no longer manufacture new Intel CPU designs. (Instead, AMD started to develop and manufacture its own competing x86 designs.) As the success of Compaq's Deskpro 386 established the 386 as the dominant CPU choice, Intel achieved a position of near-exclusive dominance as its supplier. Profits from this funded rapid development of both higher-performance chip designs and higher-performance manufacturing capabilities, propelling Intel to a position of unquestioned leadership by the early 1990s. 486, Pentium, and Itanium Intel introduced the 486 microprocessor in 1989, and in 1990 established a second design team, designing the processors code-named "P5" and "P6" in parallel and committing to a major new processor every two years, versus the four or more years such designs had previously taken. Engineers Vinod Dham and Rajeev Chandrasekhar (Member of Parliament, India) were key figures on the core team that invented the 486 chip and later, Intel's signature Pentium chip. The P5 project was earlier known as "Operation Bicycle," referring to the cycles of the processor through two parallel execution pipelines.
The P5 was introduced in 1993 as the Intel Pentium, substituting a registered trademark name for the former part number (numbers, such as 486, cannot be legally registered as trademarks in the United States). The P6 followed in 1995 as the Pentium Pro and improved into the Pentium II in 1997. New architectures were developed alternately in Santa Clara, California and Hillsboro, Oregon. The Santa Clara design team embarked in 1993 on a successor to the x86 architecture, codenamed "P7". The first attempt was dropped a year later but quickly revived in a cooperative program with Hewlett-Packard engineers, though Intel soon took over primary design responsibility. The resulting implementation of the IA-64 64-bit architecture was the Itanium, finally introduced in June 2001. The Itanium's performance running legacy x86 code did not meet expectations, and it failed to compete effectively with x86-64, which was AMD's 64-bit extension of the 32-bit x86 architecture (Intel uses the name Intel 64, previously EM64T). In 2017, Intel announced that the Itanium 9700 series (Kittson) would be the last Itanium chips produced. The Hillsboro team designed the Willamette processors (initially code-named P68), which were marketed as the Pentium 4. During this period, Intel undertook two major supporting advertising campaigns. The first campaign, the 1991 "Intel Inside" marketing and branding campaign, is widely known and has become synonymous with Intel itself. The idea of "ingredient branding" was new at the time, with only NutraSweet and a few others making attempts to do so. This campaign established Intel, which had been a component supplier little-known outside the PC industry, as a household name. The second campaign, Intel's Systems Group, which began in the early 1990s, showcased manufacturing of PC motherboards, the main board component of a personal computer, and the one into which the processor (CPU) and memory (RAM) chips are plugged. 
The Systems Group campaign was lesser known than the Intel Inside campaign. Shortly after, Intel began manufacturing fully configured "white box" systems for the dozens of PC clone companies that rapidly sprang up. At its peak in the mid-1990s, Intel manufactured over 15% of all PCs, making it the third-largest supplier at the time. During the 1990s, Intel Architecture Labs (IAL) was responsible for many of the hardware innovations for the PC, including the PCI Bus, the PCI Express (PCIe) bus, and Universal Serial Bus (USB). IAL's software efforts met with a more mixed fate; its video and graphics software was important in the development of software digital video, but later its efforts were largely overshadowed by competition from Microsoft. The competition between Intel and Microsoft was revealed in testimony by then IAL Vice-president Steven McGeady at the Microsoft antitrust trial (United States v. Microsoft Corp.). Pentium flaw In June 1994, Intel engineers discovered a flaw in the floating-point math subsection of the P5 Pentium microprocessor. Under certain data-dependent conditions, the low-order bits of the result of a floating-point division would be incorrect. The error could compound in subsequent calculations. Intel corrected the error in a future chip revision, and under public pressure it issued a total recall and replaced the defective Pentium CPUs (which were limited to some 60, 66, 75, 90, and 100 MHz models) on customer request. The bug was discovered independently in October 1994 by Thomas Nicely, Professor of Mathematics at Lynchburg College. He contacted Intel but received no response. On October 30, he posted a message about his finding on the Internet. Word of the bug spread quickly and reached the industry press. The bug was easy to replicate; a user could enter specific numbers into the calculator on the operating system. Consequently, many users did not accept Intel's statements that the error was minor and "not even an erratum." 
During Thanksgiving 1994, The New York Times ran a piece by journalist John Markoff spotlighting the error. Intel changed its position and offered to replace every chip, quickly putting in place a large end-user support organization. This resulted in a $475 million charge against Intel's 1994 revenue. Dr. Nicely later learned that Intel had discovered the FDIV bug in its own testing a few months before him (but had decided not to inform customers). The "Pentium flaw" incident, Intel's response to it, and the surrounding media coverage propelled Intel from being a technology supplier generally unknown to most computer users to a household name. Dovetailing with an uptick in the "Intel Inside" campaign, the episode is considered to have been a positive event for Intel, changing some of its business practices to be more end-user focused and generating substantial public awareness, while avoiding a lasting negative impression. Intel Core The Intel Core line originated from the original Core brand, with the release of the 32-bit Yonah CPU, Intel's first dual-core mobile (low-power) processor. Derived from the Pentium M, the processor family used an enhanced version of the P6 microarchitecture. Its successor, the Core 2 family, was released on July 27, 2006. This was based on the Intel Core microarchitecture, and was a 64-bit design. Instead of focusing on higher clock rates, the Core microarchitecture emphasized power efficiency and a return to lower clock speeds. It also provided more efficient decoding stages, execution units, caches, and buses, reducing the power consumption of Core 2-branded CPUs while increasing their processing capacity. In November 2008, Intel released the first generation Core processors based on the Nehalem microarchitecture. Intel also introduced a new naming scheme, with the three variants now named Core i3, i5, and i7. Unlike the previous naming scheme, these names no longer correspond to specific technical features.
It was succeeded by the Westmere microarchitecture in 2010, with a die shrink to 32 nm and the addition of Intel HD Graphics. In 2011, Intel released the Sandy Bridge-based 2nd generation Core processor family. This generation featured an 11% performance increase over Nehalem. It was succeeded by Ivy Bridge-based 3rd generation Core, introduced at the 2012 Intel Developer Forum. Ivy Bridge featured a die shrink to 22 nm, and supported both DDR3 memory and DDR3L chips. Intel continued its tick-tock model of a microarchitecture change followed by a die shrink until the 6th generation Core family based on the Skylake microarchitecture. This model was deprecated in 2016, with the release of the seventh generation Core family based on Kaby Lake, ushering in the process–architecture–optimization model. From 2016 until 2021, Intel released further optimizations on the Skylake microarchitecture with Kaby Lake R, Amber Lake, Whiskey Lake, Coffee Lake, Coffee Lake R, and Comet Lake. Intel struggled to shrink their process node from 14 nm to 10 nm, with the first microarchitecture under that node, Cannon Lake (marketed as 8th generation Core), only being released in small quantities in 2018. In 2019, Intel released the 10th generation of Core processors, codenamed "Amber Lake", "Comet Lake", and "Ice Lake". Ice Lake, based on the Sunny Cove microarchitecture, was produced on the 10 nm process and was limited to low-power mobile processors. Both Amber Lake and Comet Lake were based on a refined 14 nm node, with the former used for low-power mobile products and the latter for desktop and high-performance mobile products. In September 2020, 11th generation Core mobile processors, codenamed Tiger Lake, were launched. Tiger Lake is based on the Willow Cove microarchitecture and a refined 10 nm node.
Intel later released 11th generation Core desktop processors (codenamed "Rocket Lake"), fabricated using Intel's 14 nm process and based on the Cypress Cove microarchitecture, on March 30, 2021. It replaced Comet Lake desktop processors. All 11th generation Core processors feature new integrated graphics based on the Intel Xe microarchitecture. Both desktop and mobile products are set to be unified under a single process node with the release of 12th generation Intel Core processors (codenamed "Alder Lake") in late 2021. This generation will be fabricated using Intel's enhanced 10 nm process, marketed as Intel 7, for both desktop and mobile processors, and is based on a hybrid architecture utilizing high-performance Golden Cove cores and high-efficiency Gracemont (Atom) cores. Meltdown, Spectre, and other security vulnerabilities In early January 2018, it was reported that all Intel processors made since 1995 (besides Intel Itanium and pre-2013 Intel Atom) have been subject to two security flaws dubbed Meltdown and Spectre. The impact on performance resulting from software patches is "workload-dependent". Several procedures to help protect home computers and related devices from the Spectre and Meltdown security vulnerabilities have been published. Spectre patches have been reported to significantly slow down performance, especially on older computers; on the newer 8th generation Core platforms, benchmark performance drops of 2–14 percent have been measured. Meltdown patches may also produce performance loss. It is believed that "hundreds of millions" of systems could be affected by these flaws. On March 15, 2018, Intel reported that it would redesign its CPUs (performance losses to be determined) to protect against the Spectre security vulnerability, and expects to release the newly redesigned processors later in 2018. On May 3, 2018, eight additional Spectre-class flaws were reported. Intel reported that they are preparing new patches to mitigate these flaws.
On August 14, 2018, Intel disclosed three additional chip flaws referred to as L1 Terminal Fault (L1TF). They reported that previously released microcode updates, along with new, pre-release microcode updates, can be used to mitigate these flaws. On January 18, 2019, Intel disclosed three new vulnerabilities affecting all Intel CPUs, named "Fallout", "RIDL", and "ZombieLoad", allowing a program to read information recently written, read data in the line-fill buffers and load ports, and leak information from other processes and virtual machines. Coffee Lake-series CPUs are even more vulnerable, due to hardware mitigations for Spectre. On March 5, 2020, computer security experts reported another Intel chip security flaw, besides the Meltdown and Spectre flaws, known as the "Intel CSME bug". This newly found flaw is not fixable with a firmware update, and affects nearly "all Intel chips released in the past five years". Use of Intel products by Apple Inc. (2005–2019) On June 6, 2005, Steve Jobs, then CEO of Apple, announced that Apple would be transitioning the Macintosh from its long favored PowerPC architecture to the Intel x86 architecture because the future PowerPC road map was unable to satisfy Apple's needs. This was seen as a win for Intel, although an analyst called the move "risky" and "foolish", as Intel's current offerings at the time were considered to be behind those of AMD and IBM. The first Mac computers containing Intel CPUs were announced on January 10, 2006, and Apple had its entire line of consumer Macs running on Intel processors by early August 2006. The Apple Xserve server was updated to Intel Xeon processors from November 2006 and was offered in a configuration similar to Apple's Mac Pro. Despite Apple's use of Intel products, relations between the two companies were strained at times. Rumors of Apple switching from Intel processors to their own designs began circulating as early as 2011.
On June 22, 2020, during Apple's annual WWDC, Tim Cook, Apple's CEO, announced that Apple would be switching its entire Mac line from Intel CPUs to its own custom processors over about two years. In the short term, this transition was expected to have minimal effects on Intel, as Apple only accounted for 2% to 4% of its revenue. However, Apple's shift to its own chips might prompt other PC manufacturers to reassess their reliance on Intel and the x86 architecture. In November 2020, Apple unveiled the Apple M1, its first processor designed for the Mac. Solid-state drives (SSD) In 2008, Intel began shipping mainstream solid-state drives (SSDs) with up to 160 GB storage capacities. As with their CPUs, Intel develops SSD chips using ever-smaller nanometer processes. These SSDs make use of industry standards such as NAND flash, mSATA, PCIe, and NVMe. In 2017, Intel introduced SSDs based on 3D XPoint technology under the Optane brand name. In 2021, SK Hynix acquired most of Intel's NAND memory business for $7 billion, with a remaining transaction worth $2 billion expected in 2025. Intel also discontinued its consumer Optane products in 2021. Supercomputers The Intel Scientific Computers division was founded in 1984 by Justin Rattner, to design and produce parallel computers based on Intel microprocessors connected in a hypercube internetwork topology. In 1992, the name was changed to the Intel Supercomputing Systems Division, and development of the iWarp architecture was also subsumed. The division designed several supercomputer systems, including the Intel iPSC/1, iPSC/2, iPSC/860, Paragon and ASCI Red. In November 2014, Intel revealed that it was going to use light beams to speed up supercomputers. Fog computing On November 19, 2015, Intel, alongside ARM Holdings, Dell, Cisco Systems, Microsoft, and Princeton University, founded the OpenFog Consortium, to promote interests and development in fog computing.
Intel's Chief Strategist for the IoT Strategy and Technology Office, Jeff Faders, became the consortium's first president. Self-driving cars Intel is one of the biggest stakeholders in the self-driving car industry, having entered the race in mid-2017 by joining forces with Mobileye. The company is also one of the first in the sector to research consumer acceptance, after an AAA report quoted a 78% nonacceptance rate of the technology in the US. Safety levels of the technology, the thought of abandoning control to a machine, and the psychological comfort of passengers in such situations were the major discussion topics initially. The commuters surveyed also stated that they did not want to see everything the car was doing, referring primarily to the steering wheel turning on its own with no one in the driver's seat. Intel also learned that voice control is vital, and that a well-designed human–machine interface eases discomfort and restores some sense of control. Intel included only 10 people in this study, which limits its credibility; in a video posted on YouTube, Intel acknowledged this and called for further testing. Programmable devices Intel has sold Stratix, Arria, and Cyclone FPGAs since acquiring Altera in 2015. In 2019, Intel released Agilex FPGAs: chips aimed at data centers, 5G applications, and other uses. Competition, antitrust and espionage By the end of the 1990s, microprocessor performance had outstripped software demand for that CPU power. Aside from high-end server systems and software, whose demand dropped with the end of the "dot-com bubble", consumer systems ran effectively on increasingly low-cost systems after 2000. Intel's strategy of producing ever-more-powerful processors and obsoleting their predecessors stumbled, leaving an opportunity for rapid gains by competitors, notably AMD.
This, in turn, lowered the profitability of the processor line and ended an era of unprecedented dominance of the PC hardware by Intel. Intel's dominance in the x86 microprocessor market led to numerous charges of antitrust violations over the years, including FTC investigations in both the late 1980s and in 1999, and civil actions such as the 1997 suit by Digital Equipment Corporation (DEC) and a patent suit by Intergraph. Intel's market dominance (at one time it controlled over 85% of the market for 32-bit x86 microprocessors) combined with Intel's own hardball legal tactics (such as its infamous 338 patent suit versus PC manufacturers) made it an attractive target for litigation, but few of the lawsuits ever amounted to anything. A case of industrial espionage arose in 1995 that involved both Intel and AMD. Bill Gaede, an Argentine formerly employed both at AMD and at Intel's Arizona plant, was arrested for attempting in 1993 to sell the i486 and P5 Pentium designs to AMD and to certain foreign powers. Gaede videotaped data from his computer screen at Intel and mailed it to AMD, which immediately alerted Intel and authorities, resulting in Gaede's arrest. Gaede was convicted and sentenced to 33 months in prison in June 1996. Corporate affairs Leadership and corporate structure Robert Noyce was Intel's CEO at its founding in 1968, followed by co-founder Gordon Moore in 1975. Andy Grove became the company's president in 1979 and added the CEO title in 1987 when Moore became chairman. In 1998, Grove succeeded Moore as chairman, and Craig Barrett, already company president, took over as CEO. On May 18, 2005, Barrett handed the reins of the company over to Paul Otellini, who had been the company president and COO and who was responsible for Intel's design win in the original IBM PC. The board of directors elected Otellini as president and CEO, and Barrett replaced Grove as Chairman of the Board. Grove stepped down as chairman but was retained as a special adviser.
In May 2009, Barrett stepped down as chairman of the board and was succeeded by Jane Shaw. In May 2012, Intel vice chairman Andy Bryant, who had held the posts of CFO (1994) and Chief Administrative Officer (2007) at Intel, succeeded Shaw as executive chairman. In November 2012, president and CEO Paul Otellini announced that he would step down in May 2013 at the age of 62, three years before the company's mandatory retirement age. During a six-month transition period, Intel's board of directors commenced a search process for the next CEO, in which it considered both internal managers and external candidates such as Sanjay Jha and Patrick Gelsinger. Financial results revealed that, under Otellini, Intel's revenue increased by 55.8 percent (US$34.2 billion to $53.3 billion), while its net income increased by 46.7% (US$7.5 billion to $11 billion). On May 2, 2013, Executive Vice President and COO Brian Krzanich was elected as Intel's sixth CEO, a selection that became effective on May 16, 2013, at the company's annual meeting. Reportedly, the board concluded that an insider could proceed with the role and exert an impact more quickly, without the need to learn Intel's processes, and Krzanich was selected on such a basis. Intel's software head Renée James was selected as president of the company, a role that is second to the CEO position. As of May 2013, Intel's board of directors consisted of Andy Bryant, John Donahoe, Frank Yeary, Ambassador Charlene Barshefsky, Susan Decker, Reed Hundt, Paul Otellini, James Plummer, David Pottruck, David Yoffie, and creative director will.i.am.
The board was described by former Financial Times journalist Tom Foremski as "an exemplary example of corporate governance of the highest order" and received a rating of ten from GovernanceMetrics International, a form of recognition that has only been awarded to twenty-one other corporate boards worldwide. On June 21, 2018, Intel announced the resignation of Brian Krzanich as CEO, following the exposure of a relationship he had had with an employee. Bob Swan was named interim CEO as the board began a search for a permanent CEO. On January 31, 2019, Swan transitioned from his role as CFO and interim CEO and was named by the board as the seventh CEO to lead the company. On January 13, 2021, Intel announced that Swan would be replaced as CEO by Pat Gelsinger, effective February 15. Gelsinger is a former Intel chief technology officer who had previously been head of VMware.

Board of directors
As of March 25, 2021:
Omar Ishrak (chairman), chairman and former CEO of Medtronic
Pat Gelsinger, CEO of Intel
James Goetz, managing director of Sequoia Capital
Alyssa Henry, Square, Inc. executive
Risa Lavizzo-Mourey, former president and CEO of the Robert Wood Johnson Foundation
Tsu-Jae King Liu, professor at the UC Berkeley College of Engineering
Gregory Smith, CFO of Boeing
Dion Weisler, former president and CEO of HP Inc.
Andrew Wilson, CEO of Electronic Arts
Frank Yeary, managing member of Darwin Capital

Ownership
As of 2017, Intel shares are mainly held by institutional investors (The Vanguard Group, BlackRock, Capital Group Companies, State Street Corporation and others).

Employment
Intel has a mandatory retirement policy for its CEOs when they reach age 65. Andy Grove retired at 62, while both Robert Noyce and Gordon Moore retired at 58. Grove retired as chairman and as a member of the board of directors in 2005 at age 68. Intel's headquarters are located in Santa Clara, California, and the company has operations around the world.
Its largest workforce concentration anywhere is in Washington County, Oregon (in the Portland metropolitan area's "Silicon Forest"), with 18,600 employees at several facilities. Outside the United States, the company has facilities in China, Costa Rica, Malaysia, Israel, Ireland, India, Russia, Argentina and Vietnam, in 63 countries and regions internationally. In the U.S., Intel employs significant numbers of people in California, Colorado, Massachusetts, Arizona, New Mexico, Oregon, Texas, Washington and Utah. In Oregon, Intel is the state's largest private employer. The company is the largest industrial employer in New Mexico, while in Arizona the company has 12,000 employees as of January 2020. Intel invests heavily in research in China, and about 100 researchers, or 10% of the total number of researchers at Intel, are located in Beijing. In 2011, the Israeli government offered Intel $290 million to expand in the country. As a condition, Intel would employ 1,500 more workers in Kiryat Gat and between 600 and 1,000 workers in the north. In January 2014, it was reported that Intel would cut about 5,000 jobs from its work force of 107,000. The announcement was made a day after it reported earnings that missed analyst targets. In March 2014, it was reported that Intel would embark upon a $6 billion plan to expand its activities in Israel. The plan calls for continued investment in existing and new Intel plants until 2030. Intel employs 10,000 workers at four development centers and two production plants in Israel. Due to declining PC sales, in 2016 Intel cut 12,000 jobs. In 2021, Intel reversed course under new CEO Pat Gelsinger and started hiring thousands of engineers. Diversity Intel has a Diversity Initiative, including employee diversity groups as well as supplier diversity programs. Like many companies with employee diversity groups, these include groups based on race and nationality as well as sexual identity and religion.
In 1994, Intel sanctioned one of the earliest corporate Gay, Lesbian, Bisexual, and Transgender employee groups, and supports a Muslim employees group, a Jewish employees group, and a Bible-based Christian group. Intel has received a 100% rating on numerous Corporate Equality Indices released by the Human Rights Campaign, including the first one released in 2002. In addition, the company is frequently named one of the 100 Best Companies for Working Mothers by Working Mother magazine. In January 2015, Intel announced the investment of $300 million over the next five years to enhance gender and racial diversity in its own company as well as the technology industry as a whole. In February 2016, Intel released its Global Diversity & Inclusion 2015 Annual Report. The male-female mix of US employees was reported as 75.2% men and 24.8% women. For US employees in technical roles, the mix was reported as 79.8% male and 20.1% female. NPR reports that Intel is facing a retention problem (particularly for African Americans), not just a pipeline problem. Economic impact in Oregon in 2009 In 2011, ECONorthwest conducted an economic impact analysis of Intel's economic contribution to the state of Oregon. The report found that in 2009 "the total economic impacts attributed to Intel's operations, capital spending, contributions and taxes amounted to almost $14.6 billion in activity, including $4.3 billion in personal income and 59,990 jobs". Through multiplier effects, every 10 Intel jobs were found, on average, to support 31 jobs in other sectors of the economy. School funding in New Mexico in 1997 In Rio Rancho, New Mexico, Intel is the leading employer. In 1997, a community partnership between Sandoval County and Intel Corporation funded and built Rio Rancho High School. Intel Israel Intel has been operating in the State of Israel since Dov Frohman founded the Israeli branch of the company in 1974 in a small office in Haifa.
Intel Israel currently has development centers in Haifa, Jerusalem and Petah Tikva, and has a manufacturing plant in the Kiryat Gat industrial park that develops and manufactures microprocessors and communications products. Intel employed about 10,000 employees in Israel in 2013. Maxine Fesberg served as CEO of Intel Israel and vice president of Intel Global from 2007. In December 2016, Fesberg announced her resignation; her position as chief executive officer (CEO) has been filled by Yaniv Gerti since January 2017. Acquisitions and investments (2010–present) In 2010, Intel purchased McAfee, a manufacturer of computer security technology, for $7.68 billion. As a condition for regulatory approval of the transaction, Intel agreed to provide rival security firms with all necessary information that would allow their products to use Intel's chips and personal computers. After the acquisition, Intel had about 90,000 employees, including about 12,000 software engineers. In September 2016, Intel sold a majority stake in its computer-security unit to TPG Capital, reversing the five-year-old McAfee acquisition. In August 2010, Intel and Infineon Technologies announced that Intel would acquire Infineon's Wireless Solutions business. Intel planned to use Infineon's technology in laptops, smartphones, netbooks, tablets and embedded computers in consumer products, eventually integrating its wireless modem into Intel's silicon chips. In March 2011, Intel bought most of the assets of Cairo-based SySDSoft. In July 2011, Intel announced that it had agreed to acquire Fulcrum Microsystems Inc., a company specializing in network switches. The company used to be included on the EE Times list of 60 Emerging Startups. In October 2011, Intel reached a deal to acquire Telmap, an Israeli-based navigation software company. The purchase price was not disclosed, but Israeli media reported values around $300 million to $350 million.
In July 2012, Intel agreed to buy 10% of the shares of ASML Holding NV for $2.1 billion, plus another $1 billion for a further 5% of shares subject to shareholder approval, to fund relevant research and development efforts, as part of a €3.3 billion ($4.1 billion) deal to accelerate the development of 450-millimeter wafer technology and extreme ultraviolet lithography by as much as two years. In July 2013, Intel confirmed the acquisition of Omek Interactive, an Israeli company that makes technology for gesture-based interfaces, without disclosing the monetary value of the deal. An official statement from Intel read: "The acquisition of Omek Interactive will help increase Intel's capabilities in the delivery of more immersive perceptual computing experiences." One report estimated the value of the acquisition between US$30 million and $50 million. The acquisition of Indisys, a Spanish natural language recognition startup, was announced in September 2013. The terms of the deal were not disclosed, but an email from an Intel representative stated: "Intel has acquired Indisys, a privately held company based in Seville, Spain. The majority of Indisys employees joined Intel. We signed the agreement to acquire the company on May 31 and the deal has been completed." Indisys explained that its artificial intelligence (AI) technology "is a human image, which converses fluently and with common sense in multiple languages and also works in different platforms." In December 2014, Intel bought PasswordBox. In January 2015, Intel purchased a 30% stake in Vuzix, a smart glasses manufacturer. The deal was worth $24.8 million. In February 2015, Intel announced its agreement to purchase German network chipmaker Lantiq, to aid in its expansion of its range of chips in devices with Internet connection capability. In June 2015, Intel announced its agreement to purchase FPGA design company Altera for $16.7 billion, in its largest acquisition to date. The acquisition completed in December 2015.
In October 2015, Intel bought cognitive computing company Saffron Technology for an undisclosed price. In August 2016, Intel purchased deep-learning startup Nervana Systems for over $400 million. In December 2016, Intel acquired computer vision startup Movidius for an undisclosed price. In March 2017, Intel announced that it had agreed to purchase Mobileye, an Israeli developer of "autonomous driving" systems, for US$15.3 billion. In June 2017, Intel Corporation announced an investment of over for its upcoming Research and Development (R&D) centre in Bangalore. In January 2019, Intel announced an investment of over $11 billion in a new Israeli chip plant, as announced by the Israeli Finance Minister. In November 2021, Intel recruited some of the employees of the Centaur Technology division of VIA Technologies in a deal worth $125 million, effectively acquiring the talent and know-how of VIA's x86 division; it is not clear what will happen to the x86 license held by VIA. In December 2021, Intel said it would invest $7.1 billion to build a new chip-packaging and testing factory in Malaysia. The new investment will expand the operations of its Malaysian subsidiary across Penang and Kulim, creating more than 4,000 new Intel jobs and more than 5,000 local construction jobs. In December 2021, Intel announced its plan to take its Mobileye automotive unit public via an IPO of newly issued stock in 2022, maintaining its majority ownership of the company. Acquisition table (2009–present) Ultrabook fund (2011) In 2011, Intel Capital announced a new fund to support startups working on technologies in line with the company's concept for next generation notebooks. The company set aside a $300 million fund to be spent over the following three to four years in areas related to ultrabooks. Intel announced the ultrabook concept at Computex in 2011.
The ultrabook is defined as a thin (less than 0.8 inches [~2 cm] thick) notebook that utilizes Intel processors and also incorporates tablet features such as a touch screen and long battery life. At the Intel Developers Forum in 2011, four Taiwan ODMs showed prototype ultrabooks that used Intel's Ivy Bridge chips. Intel planned to improve the power consumption of its chips for ultrabooks, such as the new Ivy Bridge processors in 2013, which would have a default thermal design power of only 10 W. Intel's price goal for ultrabooks was below $1,000; however, according to two presidents from Acer and Compaq, this goal would not be achieved if Intel did not lower the price of its chips. Open source support Intel has participated significantly in open source communities since 1999. For example, in 2006 Intel released MIT-licensed X.org drivers for the integrated graphics cards of its i965 family of chipsets. Intel released FreeBSD drivers for some networking cards, available under a BSD-compatible license, which were also ported to OpenBSD. Binary firmware files for non-wireless Ethernet devices were also released under a BSD license allowing free redistribution. Intel ran the Moblin project until April 23, 2009, when it handed the project over to the Linux Foundation. Intel also ran the LessWatts.org campaign. However, after the release of the wireless products called Intel Pro/Wireless 2100, 2200BG/2225BG/2915ABG and 3945ABG in 2005, Intel was criticized for not granting free redistribution rights for the firmware that must be included in the operating system for the wireless devices to operate. As a result, Intel became a target of campaigns to allow free operating systems to include binary firmware on terms acceptable to the open source community. Linspire-Linux creator Michael Robertson outlined the difficult position Intel was in when releasing to open source, as Intel did not want to upset its large customer Microsoft.
Theo de Raadt of OpenBSD also claimed that Intel was being "an Open Source fraud" after an Intel employee presented a distorted view of the situation at an open-source conference. In spite of the significant negative attention Intel received as a result of the wireless dealings, the binary firmware still has not gained a license compatible with free software principles. Intel has also supported other open source projects such as Blender and Open 3D Engine. Corporate identity Logo In its history, Intel has had three logos. The first Intel logo featured the company's name stylized in all lowercase, with the letter e dropped below the other letters. The second logo was inspired by the "Intel Inside" campaign, featuring a swirl around the Intel brand name. The third logo, introduced in 2020, was inspired by the previous logos. It removes the swirl as well as the classic blue color in almost all parts of the logo, except for the dot in the "i". Intel Inside Intel has become one of the world's most recognizable computer brands following its long-running Intel Inside campaign. The idea for "Intel Inside" came out of a meeting between Intel and one of the major computer resellers, MicroAge. In the late 1980s, Intel's market share was being seriously eroded by upstart competitors such as Advanced Micro Devices (now AMD), Zilog, and others who had started to sell their less expensive microprocessors to computer manufacturers. This was because, by using cheaper processors, manufacturers could make cheaper computers and gain more market share in an increasingly price-sensitive market. In 1989, Intel's Dennis Carter visited MicroAge's headquarters in Tempe, Arizona, to meet with MicroAge's VP of Marketing, Ron Mion. MicroAge had become one of the largest distributors of Compaq, IBM, HP, and others and thus was a primary although indirect driver of demand for microprocessors. Intel wanted MicroAge to petition its computer suppliers to favor Intel chips.
However, Mion felt that the marketplace should decide which processors they wanted. Intel's counterargument was that it would be too difficult to educate PC buyers on why Intel microprocessors were worth paying more for. Mion felt that the public didn't really need to fully understand why Intel chips were better; they just needed to feel they were better. So Mion proposed a market test. Intel would pay for a MicroAge billboard somewhere saying, "If you're buying a personal computer, make sure it has Intel inside." In turn, MicroAge would put "Intel Inside" stickers on the Intel-based computers in their stores in that area. To make the test easier to monitor, Mion decided to run it in Boulder, Colorado, where MicroAge had a single store. Virtually overnight, the sales of personal computers in that store dramatically shifted to Intel-based PCs. Intel very quickly adopted "Intel Inside" as its primary branding and rolled it out worldwide. As is often the case with computer lore, other tidbits have been combined to explain how things evolved. "Intel Inside" has not escaped that tendency, and there are other "explanations" that have been floating around. Intel's branding campaign started with "The Computer Inside" tagline in 1990 in the US and Europe. The Japan chapter of Intel proposed an "Intel in it" tagline and kicked off the Japanese campaign by hosting EKI-KON (meaning "Station Concert" in Japanese) at the Tokyo railway station dome on Christmas Day, December 25, 1990. Several months later, "The Computer Inside" incorporated the Japan idea to become "Intel Inside", which eventually became the worldwide branding campaign in 1991, under Intel marketing manager Dennis Carter. A case study, "Inside Intel Inside", was put together by Harvard Business School. The five-note jingle was introduced in 1994 and by its tenth anniversary was being heard in 130 countries around the world.
The initial branding agency for the "Intel Inside" campaign was DahlinSmithWhite Advertising of Salt Lake City. The Intel swirl logo was the work of DahlinSmithWhite art director Steve Grigg under the direction of Intel president and CEO Andy Grove. The Intel Inside advertising campaign sought public brand loyalty and awareness of Intel processors in consumer computers. Intel paid some of the advertiser's costs for an ad that used the Intel Inside logo and xylo-marimba jingle. In 2008, Intel planned to shift the emphasis of its Intel Inside campaign from traditional media such as television and print to newer media such as the Internet. Intel required that a minimum of 35% of the money it provided to the companies in its co-op program be used for online marketing. The Intel 2010 annual financial report indicated that $1.8 billion (6% of the gross margin and nearly 16% of the total net income) was allocated to all advertising, with Intel Inside being part of that. Sonic logo The famous five-note D♭ D♭ G♭ D♭ A♭ jingle, played on xylophone/xylomarimba and used as Intel's sonic logo, tag, and audio mnemonic, was produced by Musikvergnuegen and written by Walter Werzowa, once a member of the Austrian 1980s sampling band Edelweiss. The sonic Intel logo was remade in 1994 to coincide with the launch of the Pentium. It was modified in 1999 to coincide with the launch of the Pentium III, although it overlapped with the 1994 version, which was phased out in 2002. Advertisements for products featuring Intel processors with prominent MMX branding featured a version of the jingle with an embellishment (shining sound) after the final note. The sonic logo was remade a second time in 2004 to coincide with the new logo change. Again, it overlapped with the 1999 version and was not mainstreamed until the launch of the Core processors in 2006, with the melody unchanged. Another remake of the sonic logo is set to debut with Intel's new visual identity.
While it has not been introduced as of early 2021, the company has made use of numerous variants since its rebranding in 2020 (including the 2004 version). Processor naming strategy In 2006, Intel expanded its promotion of open specification platforms beyond Centrino, to include the Viiv media center PC and the business desktop Intel vPro. In mid-January 2006, Intel announced that it was dropping the long-running Pentium name from its processors. The Pentium name was first used for Intel's P5-core processors and was adopted to comply with court rulings that prevent the trademarking of a string of numbers; competitors could therefore not simply give their processors the same name, as had happened with the prior 386 and 486 processors (both of which had copies manufactured by IBM and AMD). Intel phased out the Pentium names from mobile processors first, when the new Yonah chips, branded Core Solo and Core Duo, were released. The desktop processors changed when the Core 2 line of processors was released. By 2009, Intel was using a good-better-best strategy, with Celeron being good, Pentium better, and the Intel Core family representing the best the company had to offer. According to spokesman Bill Calder, Intel has maintained only the Celeron brand, the Atom brand for netbooks, and the vPro lineup for businesses. Since late 2009, Intel's mainstream processors have been called Celeron, Pentium, Core i3, Core i5, Core i7, and Core i9 in order of performance from lowest to highest. The first-generation Core products carry a 3-digit name, such as i5-750, and the second-generation products carry a 4-digit name, such as the i5-2500. In both cases, a K suffix indicates an unlocked processor, enabling additional overclocking abilities (for instance, 2500K). vPro products carry the Intel Core i7 vPro processor or the Intel Core i5 vPro processor name. In October 2011, Intel started to sell its Core i7-2700K "Sandy Bridge" chip to customers worldwide.
Since 2010, "Centrino" has been applied only to Intel's WiMAX and Wi-Fi technologies. Typography Neo Sans Intel is a customized version of Neo Sans, based on Neo Sans and Neo Tech, designed by Sebastian Lester in 2004. It was introduced alongside Intel's rebranding in 2006. Previously, Intel used Helvetica as its standard typeface in corporate marketing. Intel Clear is a global font announced in 2014, designed to be used across all communications. The font family was designed by Red Peak Branding and Dalton Maag. Initially available in Latin, Greek and Cyrillic scripts, it replaced Neo Sans Intel as the company's corporate typeface. Intel Clear Hebrew and Intel Clear Arabic were later added by Dalton Maag Ltd. Neo Sans Intel remained in use in the logo and to mark processor type and socket on the packaging of Intel's processors. In 2020, as part of a new visual identity, a new typeface, Intel One, was designed. It replaced Intel Clear as the primary typeface in most of the company's branding, although the two are used alongside each other. In the logo, Intel One replaced Neo Sans Intel, which is still used to mark processor type and socket on the packaging of Intel's processors. Intel Brand Book It is a book produced by Red Peak Branding as part of the new brand identity campaign, celebrating Intel's achievements while setting a new standard for what Intel looks, feels and sounds like. Litigation and regulatory attacks Patent infringement litigation (2006–2007) In October 2006, Transmeta filed a lawsuit against Intel for patent infringement on computer architecture and power efficiency technologies. The lawsuit was settled in October 2007, with Intel agreeing to pay US$150 million initially and US$20 million per year for the next five years. Both companies agreed to drop lawsuits against each other, while Intel was granted a perpetual non-exclusive license to use current and future patented Transmeta technologies in its chips for 10 years.
Antitrust allegations and litigation (2005–2009) In September 2005, Intel filed a response to an AMD lawsuit, disputing AMD's claims and claiming that Intel's business practices are fair and lawful. In a rebuttal, Intel deconstructed AMD's offensive strategy and argued that AMD struggled largely as a result of its own bad business decisions, including underinvestment in essential manufacturing capacity and excessive reliance on contracting out chip foundries. Legal analysts predicted the lawsuit would drag on for a number of years, since Intel's initial response indicated its unwillingness to settle with AMD. In 2008, a court date was finally set. On November 4, 2009, New York's attorney general filed an antitrust lawsuit against Intel Corp, claiming the company used "illegal threats and collusion" to dominate the market for computer microprocessors. On November 12, 2009, AMD agreed to drop the antitrust lawsuit against Intel in exchange for $1.25 billion. A joint press release published by the two chip makers stated "While the relationship between the two companies has been difficult in the past, this agreement ends the legal disputes and enables the companies to focus all of our efforts on product innovation and development." An antitrust lawsuit and a class-action suit relating to cold-calling employees of other companies have been settled. Allegations by Japan Fair Trade Commission (2005) In 2005, the Japan Fair Trade Commission found that Intel violated the Japanese Antimonopoly Act. The commission ordered Intel to eliminate discounts that had discriminated against AMD. To avoid a trial, Intel agreed to comply with the order. Allegations by the European Union (2007–2008) In July 2007, the European Commission accused Intel of anti-competitive practices, mostly against AMD.
The allegations, going back to 2003, include giving preferential prices to computer makers buying most or all of their chips from Intel, paying computer makers to delay or cancel the launch of products using AMD chips, and providing chips at below standard cost to governments and educational institutions. Intel responded that the allegations were unfounded and instead characterized its market behavior as consumer-friendly. General counsel Bruce Sewell responded that the commission had misunderstood some factual assumptions regarding pricing and manufacturing costs. In February 2008, Intel announced that its office in Munich had been raided by European Union regulators. Intel reported that it was cooperating with investigators. Intel faced a fine of up to 10% of its annual revenue if found guilty of stifling competition. AMD subsequently launched a website promoting these allegations. In June 2008, the EU filed new charges against Intel. In May 2009, the EU found that Intel had engaged in anti-competitive practices and subsequently fined Intel €1.06 billion (US$1.44 billion), a record amount. Intel was found to have paid companies, including Acer, Dell, HP, Lenovo and NEC, to exclusively use Intel chips in their products, and had therefore harmed other, less successful companies, including AMD. The European Commission said that Intel had deliberately acted to keep competitors out of the computer chip market, and in doing so had committed a "serious and sustained violation of the EU's antitrust rules". In addition to the fine, Intel was ordered by the commission to immediately cease all illegal practices. Intel said that it would appeal against the commission's verdict. In June 2014, the General Court, which sits below the European Court of Justice, rejected the appeal. Allegations by regulators in South Korea (2007) In September 2007, South Korean regulators accused Intel of breaking antitrust law.
The investigation began in February 2006, when officials raided Intel's South Korean offices. The company risked a penalty of up to 3% of its annual sales if found guilty. In June 2008, the Fair Trade Commission ordered Intel to pay a fine of US$25.5 million for taking advantage of its dominant position to offer incentives to major Korean PC manufacturers on the condition of not buying products from AMD. Allegations by regulators in the United States (2008–2010) New York started an investigation of Intel in January 2008 on whether the company violated antitrust laws in pricing and sales of its microprocessors. In June 2008, the Federal Trade Commission also began an antitrust investigation of the case. In December 2009, the FTC announced it would initiate an administrative proceeding against Intel in September 2010. In November 2009, following a two-year investigation, New York Attorney General Andrew Cuomo sued Intel, accusing them of bribery and coercion, claiming that Intel bribed computer makers to buy more of their chips than those of their rivals and threatened to withdraw these payments if the computer makers were perceived as working too closely with its competitors. Intel has denied these claims. On July 22, 2010, Dell agreed to a settlement with the U.S. Securities and Exchange Commission (SEC) to pay $100M in penalties resulting from charges that Dell did not accurately disclose accounting information to investors. In particular, the SEC charged that from 2002 to 2006, Dell had an agreement with Intel to receive rebates in exchange for not using chips manufactured by AMD. These substantial rebates were not disclosed to investors, but were used to help meet investor expectations regarding the company's financial performance; "These exclusivity payments grew from 10 percent of Dell's operating income in FY 2003 to 38 percent in FY 2006, and peaked at 76 percent in the first quarter of FY 2007." 
Dell eventually did adopt AMD as a secondary supplier in 2006, and Intel subsequently stopped their rebates, causing Dell's financial performance to fall. Corporate responsibility record Intel has been accused by some residents of Rio Rancho, New Mexico of allowing volatile organic compounds (VOCs) to be released in excess of their pollution permit. One resident claimed that a release of 1.4 tons of carbon tetrachloride was measured from one acid scrubber during the fourth quarter of 2003 but an emission factor allowed Intel to report no carbon tetrachloride emissions for all of 2003. Another resident alleges that Intel was responsible for the release of other VOCs from their Rio Rancho site and that a necropsy of lung tissue from two deceased dogs in the area indicated trace amounts of toluene, hexane, ethylbenzene, and xylene isomers, all of which are solvents used in industrial settings but also commonly found in gasoline, retail paint thinners and retail solvents. During a sub-committee meeting of the New Mexico Environment Improvement Board, a resident claimed that Intel's own reports documented more than of VOCs were released in June and July 2006. Intel's environmental performance is published annually in their corporate responsibility report. Conflict-free production In 2009, Intel announced that it planned to undertake an effort to remove conflict resources—materials sourced from mines whose profits are used to fund armed militant groups, particularly within the Democratic Republic of the Congo—from its supply chain. Intel sought conflict-free sources of the precious metals common to electronics from within the country, using a system of first- and third-party audits, as well as input from the Enough Project and other organizations. During a keynote address at Consumer Electronics Show 2014, Intel CEO at the time, Brian Krzanich, announced that the company's microprocessors would henceforth be conflict free. 
In 2016, Intel stated that it had expected its entire supply chain to be conflict-free by the end of the year. In its 2012 rankings on the progress of consumer electronics companies relating to conflict minerals, the Enough Project rated Intel the best of 24 companies, calling it a "Pioneer of progress". In 2014, chief executive Brian Krzanich urged the rest of the industry to follow Intel's lead by also shunning conflict minerals. Age discrimination complaints Intel has faced complaints of age discrimination in firing and layoffs. Intel was sued in 1993 by nine former employees over allegations that they were laid off because they were over the age of 40. A group called FACE Intel (Former and Current Employees of Intel) claims that Intel weeds out older employees. FACE Intel claims that more than 90 percent of people who have been laid off or fired from Intel are over the age of 40. Upside magazine requested data from Intel breaking out its hiring and firing by age, but the company declined to provide any. Intel has denied that age plays any role in Intel's employment practices. FACE Intel was founded by Ken Hamidi, who was fired from Intel in 1995 at the age of 47. Hamidi was blocked by a 1999 court decision from using Intel's email system to distribute criticism of the company to employees; the decision was overturned in 2003 in Intel Corp. v. Hamidi. Tax dispute in India In August 2016, Indian officials of the Bruhat Bengaluru Mahanagara Palike (BBMP) parked garbage trucks on Intel's campus and threatened to dump them for evading payment of property taxes between 2007 and 2008, to the tune of . Intel had reportedly been paying taxes as a non-air-conditioned office, when the campus in fact had central air conditioning. Other factors, such as land acquisition and construction improvements, added to the tax burden.
Previously, Intel had appealed the demand in the Karnataka high court in July, during which the court ordered Intel to pay BBMP half the owed amount of plus arrears by August 28 of that year. See also 5 nm ASCI Red Advanced Micro Devices Bumpless Build-up Layer Comparison of ATI Graphics Processing Units Comparison of Intel processors Comparison of Nvidia graphics processing units Cyrix Engineering sample (CPU) Graphics Processing Unit (GPU) Intel Developer Zone (Intel DZ) Intel Driver Update Utility Intel GMA (Graphics Media Accelerator) Intel HD and Iris Graphics Intel Loihi Intel Museum Intel Science Talent Search List of Intel chipsets List of Intel CPU microarchitectures List of Intel manufacturing sites List of Intel microprocessors List of Intel graphics processing units List of Semiconductor Fabrication Plants Wintel Intel related biographical articles on Wikipedia Andy Grove Bill Gaede Bob Colwell Craig Barrett (chief executive) Gordon Moore Justin Rattner Pat Gelsinger Paul Otellini Robert Noyce Sean Maloney Notes References External links 01.org — Open-source projects, contributed to and maintained by Intel engineers 1968 establishments in California American companies established in 1968 Companies based in Santa Clara, California Companies in the Dow Jones Industrial Average Companies listed on the Nasdaq Computer companies established in 1968 Computer companies of the United States Foundry semiconductor companies Linux companies Manufacturing companies based in the San Francisco Bay Area Manufacturing companies established in 1968 Mobile phone manufacturers Motherboard companies Multinational companies headquartered in the United States Netbook manufacturers Semiconductor companies of the United States Software companies based in the San Francisco Bay Area Software companies established in 1968 Superfund sites in California Technology companies of the United States Technology companies based in the San Francisco Bay Area Technology companies 
established in 1968 1970s initial public offerings Software companies of the United States Computer memory companies Computer storage companies
54218264
https://en.wikipedia.org/wiki/Dymas%20of%20Phrygia
Dymas of Phrygia
In Greek mythology, Dymas (Ancient Greek: Δύμας) was a Phrygian king. Mythology The father of Dymas was given as one Eioneus, son of Proteus, by some ancient mythographers. According to Dictys, he was a descendant of Phoenix, son of Agenor, as recounted by Helen to Hecuba to prove their kinship. Dymas' wife was called Eunoë, a daughter of the river god Sangarius. In fact, Dymas and his Phrygian subjects are closely connected to the River Sangarius, which empties into the Black Sea. By his wife Eunoë or the naiad Evagora, Dymas was the father of Hecuba (also called Hecabe), wife to King Priam of Troy. King Dymas is also said by Homer to have had a son named Asius, who fought (and died) during the Trojan War – not to be confused with his namesake, Asius son of Hyrtacus, who also fought (and died) before Troy. The scholiasts credit Dymas with another son, named Otreus, who fought the Amazons a generation before the Trojan War. The etymology of the name Dymas is obscure, although it is probably non-Hellenic. Notes References Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website. Dictys Cretensis, from The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian translated by Richard McIlwaine Frazer, Jr. (1931-). Indiana University Press. 1966. Online version at the Topos Text Project. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library. Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library. Quintus Smyrnaeus, The Fall of Troy translated by Way. A. S. Loeb Classical Library Volume 19.
London: William Heinemann, 1913. Online version at theio.com Quintus Smyrnaeus, The Fall of Troy. Arthur S. Way. London: William Heinemann; New York: G.P. Putnam's Sons. 1913. Greek text available at the Perseus Digital Library. Kings of Phrygia Kings in Greek mythology Characters in Greek mythology
5369387
https://en.wikipedia.org/wiki/Android%2013
Android 13
Android 13, codenamed Tiramisu, is an upcoming major release and 20th version of the Android mobile operating system. The first preview version was released on February 10, 2022. History Android 13 (internally codenamed Tiramisu) was announced in an Android blog posted on February 10, 2022, and the first Developer Preview was immediately released for the Google Pixel series (from Pixel 4 to Pixel 6, dropping support for the Pixel 3 and Pixel 3a). It was released roughly four months after the stable release of Android 12. Another Developer Preview is planned for March, followed by four planned beta versions, released in April, May, June and July. Platform stability is expected in June, with Beta 3. Features The first Developer Preview includes small changes, which will be expanded upon through the development phase. Notably, the split screen has a slightly tweaked UI, with the two apps having rounded corners. A new photo picker is introduced with the main goal of improving user privacy by restricting app media access. Most apps have not implemented this picker yet. In this same vein of privacy, a new permission level is introduced, NEARBY_WIFI_DEVICES. This permission allows access to various Wi-Fi related functions, such as searching for nearby devices and networks, without needing to request access to location as in prior Android versions. The Quick Settings pulldown animation has been changed, and small changes to dialog windows such as the Internet toggle have been made. The media player is in the process of a redesign, but is not yet active as of Developer Preview 1. Additionally, silent mode now disables vibration completely, including haptics. The multiple users feature has been improved: it is now possible to select which apps can be accessed from the guest user. App data is sandboxed between users, so no information is shared.
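The new permission model can be illustrated with a manifest sketch. This is an illustrative assumption, not taken from this article: the permission name android.permission.NEARBY_WIFI_DEVICES and the neverForLocation flag belong to the Android 13 platform API, while the package name and the fallback declaration below are placeholders.

```xml
<!-- Sketch of an AndroidManifest.xml fragment targeting Android 13 (API 33).
     Package name "com.example.wifiscanner" is a placeholder. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.wifiscanner">

    <!-- New in Android 13: Wi-Fi device discovery no longer requires the
         location permission when the neverForLocation flag is asserted. -->
    <uses-permission
        android:name="android.permission.NEARBY_WIFI_DEVICES"
        android:usesPermissionFlags="neverForLocation" />

    <!-- On Android 12 and below, Wi-Fi scans still require the
         location-based permission. -->
    <uses-permission
        android:name="android.permission.ACCESS_FINE_LOCATION"
        android:maxSdkVersion="32" />
</manifest>
```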
This version opens support for third-party apps to use themed Material You icons. Split screen mode now persists through app changes, meaning it is now possible to use other apps and the phone launcher, and split screen apps will stay paired together in the Overview menu. Animations have been improved, notably the fingerprint scanner glow on the Pixel 6 series. The app label font has been changed in the Pixel Launcher, and subtle haptics have been added throughout the user experience. The version easter egg remains the same as Android 12, but the Android version has been changed to "Tiramisu" in settings and the Quick Settings panel. Many of the changes are from Android 12.1 "12L", such as the dock displayed on large screens, and other improvements for large format devices. These are mainly intended for foldables and tablets, but they can be enabled on phones too by changing the DPI settings. See also Android (operating system) Android version history Windows 11 References External links Android 13 Developer Preview - Official Website Android (operating system) 2022 software
21014022
https://en.wikipedia.org/wiki/List%20of%20OpenGL%20applications
List of OpenGL applications
This is a non-exhaustive list of popular OpenGL programs. Many programs that use OpenGL are games. Games developed in OpenGL Ballenger, a platformer Sauerbraten, an open source 3D FPS and also a game engine Doom (2016 video game), an FPS Minecraft, a sandbox video game Photography and video Adobe After Effects, a digital motion graphics and compositing software Adobe Photoshop, a popular photo and graphics editing software Adobe Premiere Pro, a real-time, timeline based video editing software application ArtRage, traditional media painting software Kodi, a cross-platform, open source media center Modeling and CAD 3D Studio Max, modeling, animation and rendering package Autodesk AutoCAD, 2D/3D CAD Autodesk Maya, modeling, animation, sculpting, and rendering package that uses its own scripting language, MEL Blender, 3D CAD, animation and game engine Cadence Allegro, computer-aided design for electronics Google SketchUp, easy-to-use 3D modeler Modo (software), high-end 3D modeling, animation, rigging, rendering and visual effects package Houdini, modeling, animation, effects, rendering and compositing package developed by Side Effects Software Rhinoceros, NURBS modeling for Windows SAP2000, structural analysis program LARSA4D, structural analysis program Scilab, mathematical tool, clone of MATLAB VirtualMec, 3D CAD for the Meccano construction system Visualization and miscellaneous Algodoo, a freeware 2D physics simulator Avogadro, a 3D molecular viewer and editor BALLView, an interactive 3D visualization software Celestia, 3D astronomy program Enhanced Machine Controller (EMC2), G-code interpreter for CNC machines Google Earth, Earth mapping software InVesalius, cross-platform software for visualization and reconstruction of medical images Mari (software), 3D texturing and painting software PyMOL, a 3D molecular viewer QuteMol, a 3D molecular renderer Really Slick Screensavers, 3D screensavers SpaceEngine, real and procedural 3D planetarium software Stellarium, high-quality
night sky simulator Universe Sandbox, an interactive space and gravity simulator Vectorworks, a cross-platform Mac/Windows 2D and 3D CAD for architectural & landscape design, offers a Renderworks module based on the Maxon CineRender engine Virtools, a real-time 3D engine Vizard, a platform for building and rendering enterprise and academic virtual reality applications developed by WorldViz See also List of OpenCL applications Lists of software
20695102
https://en.wikipedia.org/wiki/Merlin%20M4000
Merlin M4000
The BT Merlin M4000 was a personal computer sold by British Telecom during the 1980s as part of the Merlin range of electronic machinery for businesses. It was not developed by BT but was a rebadged Logica VTS-2300 Kennet, and a completely different machine from the Merlin Tonto which was a rebadged ICL OPD. Merlin M4000 was designed as a general purpose computer but was not IBM PC compatible, and so could not run the major business applications around at the time as these were tied to the IBM PC hardware. Hardware Merlin M4000 computers were packaged inside a substantial and heavy steel desktop case weighing approximately 12 kg. Inside the case was the main board, power supply, floppy and hard drives, and expansion cards. The design was reasonably modular as the case and main board were able to accommodate expansion cards and additional memory. A separate keyboard with 114 keys connected to the main unit using a reversed British telephone plug with the clip on the left hand side. Most monitors were amber monochrome but later colour screens were sold. An 8086 CPU was used. The maximum RAM was 768 KB, made up of 256 KB on the main board plus two additional 256 KB RAM cards. A security socket was located on the rear of the main unit, although it is unclear how it was used in practice. Networking was accomplished using ARCNET or Cambridge Ring LAN cards. An RS-232 optical fibre modem was also available. The M4204T and M4213T computers were TEMPEST certified to BTR/01/202(4). Storage media The M4204T had two internal 720 kB 5¼-inch floppy drives and the M4213T had one internal 720 kB 5¼-inch floppy drive and one internal hard drive with a capacity of either 10 MB or 20 MB. An external 76 MB hard drive and/or a 150 MB Tandberg QIC tape drive could also be connected to the M4000. Software The CP/M-86 and Concurrent DOS (CDOS) operating systems were developed for Merlin M4000 computers.
PC DOS and MS-DOS applications could not be run directly, but it was practical for vendors to cross-port their applications, if there was sufficient demand. WordStar was available and Prospero Pascal was a popular development platform. Most Merlin M4000 computers were used to run bespoke software rather than off-the-shelf application software. A few application software packages were commercially available including: Lex9b word processor. MerlinWord word processor. A telephone directory/database program that was mainly used by switchboard operators. Software development tools including an 8086 assembler and COBOL compiler. An Asteroids game, initially written as a demonstration of the bitmapped graphics. Usage Merlin M4000 computers were commonplace in the United Kingdom during the 1980s, although most were sold to the public sector under large contracts rather than to the private sector. Major customers included the Royal Navy as part of the OASIS II project, with sales of subsequent models as part of the Oasis 4 project, and the Department of Health and Social Security, with sales also being made to HM Customs and Excise and the Forestry Commission. Merlin M4000 computers were installed in DSS offices across the country where they were used for Case Paper location (tracking files as they moved from one room to another) and calculating benefits. Some M4000 computers were used internally by BT, although it is not clear if they were ever used in conjunction with System X telephone exchanges. Many theatres in the UK used Merlin M4000 computers running the RITA booking software that was written either by or in conjunction with the Royal Shakespeare Company. Successor The M4204T and M4213T computers were available in 1990 from the TEMPEST division of BT, which sold TEMPEST-certified computer equipment for high-security applications. They were replaced by the M5000 range of IBM PC-compatible, TEMPEST-certified computers running MS-DOS.
References External links Merlin M4000 page at old-computers.com Computers designed in the United Kingdom BT Group Personal computers
234279
https://en.wikipedia.org/wiki/Loki%20Entertainment
Loki Entertainment
Loki Entertainment Software, Inc. was an American video game developer based in Tustin, California, that ported several video games from Microsoft Windows to Linux. It took its name from the Norse deity Loki. Although successful in its goal of bringing games to the Linux platform, the company folded in January 2002 after filing for bankruptcy. History Loki Software was founded on November 9, 1998, by Scott Draeker, a former lawyer who became interested in porting games to Linux after being introduced to the system through his work as a software licensing attorney. By December of that year, Loki had gained the rights to produce a port of Activision's then-upcoming strategy game Civilization: Call to Power for Linux. This was to become Loki's first actual product, with the game hitting stores in May 1999. From there they gained contracts to port many other titles, such as Myth II: Soulblighter, Railroad Tycoon II, and Eric's Ultimate Solitaire. Throughout the next two years, up until its eventual closure, the company would continue to bring more games to Linux. After facing financial difficulties, Loki filed for bankruptcy in August 2001. The majority of the staff was laid off in January 2002 and Loki formally closed on January 31. Legacy Loki Software, although a commercial failure, is credited with the birth of the modern Linux game industry. Loki developed several free software tools, such as the Loki installer (also known as Loki Setup), and supported the development of the Simple DirectMedia Layer. They also started the OpenAL audio library project (now being run by Creative Technology and Apple Inc.) and with id Software wrote GtkRadiant. These are still often credited as being the cornerstones of Linux game development. They also worked on and extended several already developed tools, such as GCC and GDB. The book Programming Linux Games, written in the early 2000s by Loki intern John R. Hall, explains the major APIs Loki used to produce Linux games.
Loki also offered a start to many figures still in the Linux and gaming industries. Ryan C. Gordon (also known as icculus), a former employee of Loki, has been responsible for the Linux and Mac OS X ports of many commercial games after the demise of the company. Mike Phillips would help start Linux Game Publishing, which was itself founded in response to Loki's closure. Nicholas Vining would go on to do some porting work and is currently the lead programmer at Gaslamp Games, which would later release their game Dungeons of Dredmor for Linux. Sam Lantinga would also later join Blizzard Entertainment and found Galaxy Gameworks to commercially support the Simple DirectMedia Layer; he would later also join Valve's Linux team. Although many Loki ports have been unsupported since Loki's closure, Linux Game Publishing managed to pick up the rights to MindRover and offer a supported and updated version of the game's Linux port. id Software picked up the support for the Linux release of Quake III Arena, hiring Timothee Besset to maintain it; he would later also be responsible for porting some of id's later products to Linux. To celebrate the release of the movie Postal in 2007, Running with Scissors published a multiplayer-only version of Postal 2, without the single-player campaign. In 2004 the source header files for Rune were released freely by Human Head Studios, but so far no one has updated the Linux version of Rune, though the company stated that a sequel was in the making and delayed the development of Prey 2. Software contractor Frank C. Earl claimed in 2010 to hold the porting rights for the entire Myth series and said he would port it to Linux. In 2009 Kevin Bentley worked on a Descent 3 patch for Linux; the game was re-released in 2014 on Steam by Rebecca Heineman, who was granted access to the source code. On October 16, 2011, Project Magma released a new version of Myth II: Soulblighter for Linux.
Games published In addition to the published titles, there is also an unfinished port of Deus Ex. The later update of Deus Ex for Microsoft Windows features the OpenGL driver for the Unreal engine from Loki Software's Linux port. This makes the title more compatible with Wine. See also Linux Game Publishing Steam Ryan C. Gordon Linux gaming References External links Official website Icculus.org Ryan Gordon's site, hosting many Loki projects as well as other Linux gaming resources Activision and Loki Partner to Bring Games to Linux Linux PR, October 11, 1999 Linux.com - Loki: In The Trenches (Interview with Loki Software Staff) Companies that filed for Chapter 11 bankruptcy in 2001 Defunct companies based in Greater Los Angeles Linux companies Linux game porters Video game companies established in 1998 Video game companies disestablished in 2002 Defunct video game companies of the United States
2441752
https://en.wikipedia.org/wiki/Arsenate%20mineral
Arsenate mineral
Arsenate minerals usually refer to the naturally occurring orthoarsenates, possessing the (AsO4)3− anion group and, more rarely, other arsenates with anions like AsO3(OH)2− (also written HAsO42−) (example: pharmacolite Ca(AsO3OH)·2H2O) or (very rarely) [AsO2(OH)2]− (example: andyrobertsite). Arsenite minerals are much less common. Both the Dana and the Strunz mineral classifications place the arsenates together with the phosphate minerals. Example arsenate minerals include: Annabergite Ni3(AsO4)2·8H2O Austinite CaZn(AsO4)(OH) Clinoclase Cu3(AsO4)(OH)3 Conichalcite CaCu(AsO4)(OH) Cornubite Cu5(AsO4)2(OH)4 Cornwallite Cu2+5(AsO4)2(OH)2 Erythrite Co3(AsO4)2·8H2O Mimetite Pb5(AsO4)3Cl Olivenite Cu2(AsO4)OH Nickel–Strunz Classification -08- Phosphates IMA-CNMNC proposes a new hierarchical scheme (Mills et al., 2009). This list uses it to modify the Classification of Nickel–Strunz (mindat.org, 10 ed, pending publication). Abbreviations: "*" - discredited (IMA/CNMNC status). "?" - questionable/doubtful (IMA/CNMNC status).
"REE" - Rare-earth element (Sc, Y, La, Ce, Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu) "PGE" - Platinum-group element (Ru, Rh, Pd, Os, Ir, Pt) 03.C Aluminofluorides, 06 Borates, 08 Vanadates (04.H V[5,6] Vanadates), 09 Silicates: Neso: insular (from Greek νησος nēsos, island) Soro: grouping (from Greek σωροῦ sōros, heap, mound (especially of corn)) Cyclo: ring Ino: chain (from Greek ις [genitive: ινος inos], fibre) Phyllo: sheet (from Greek φύλλον phyllon, leaf) Tekto: three-dimensional framework Nickel–Strunz code scheme: NN.XY.##x NN: Nickel–Strunz mineral class number X: Nickel–Strunz mineral division letter Y: Nickel–Strunz mineral family letter ##x: Nickel–Strunz mineral/group number, x add-on letter Class: arsenates and vanadates 08.A Arsenates and vanadates without additional anions, without H2O 08.AA With small cations (some also with larger ones): 05 Alarsite 08.AB With medium-sized cations: 25 Xanthiosite, 30 Lammerite, 35 Mcbirneyite, 35 Stranskiite, 35 Pseudolyonsite, 40 Lyonsite 08.AC With medium-sized and large cations: 05 Howardevansite; 10 Arseniopleite, 10 Caryinite, 10 Johillerite, 10 Nickenichite, 10 Bradaczekite, 10 Yazganite, 10 Odanielite; 25 Berzeliite, 25 Manganberzeliite, 25 Palenzonaite, 25 Schäferite; 75 Ronneburgite, 80 Tillmannsite, 85 Filatovite 08.AD With only large cations: 10 Weilite, 10 Svenekite; 30 Schultenite, 35 Chernovite-(Y), 35 Dreyerite, 35 Wakefieldite-(La), 35 Wakefieldite-(Nd), 35 Wakefieldite-(Ce), 35 Wakefieldite-(Y); 40 Pucherite, 50 Gasparite-(Ce), 50 Rooseveltite; 55 Tetrarooseveltite, 60 Chursinite, 65 Clinobisvanite 08.B Arsenates and vanadates with additional anions, without H2O 08.BA With small and medium-sized cations: 10 Bergslagite 08.BB With only medium-sized cations, (OH, etc.):RO4 ≤ 1:1: 15 Sarkinite; 30 Zincolivenite, 30 Eveite, 30 Olivenite, 30 Adamite, 30 Auriacusite; 35 Paradamite, 40 Wilhelmkleinite, 50 Namibite, 60 Urusovite, 65 Theoparacelsite, 70 Turanite, 75 Stoiberite, 80 Fingerite, 85
Averievite 08.BC With only medium-sized cations, (OH, etc.):RO4 > 1:1 and < 2:1: 05 Angelellite, 15 Aerugite 08.BD With only medium-sized cations, (OH, etc.):RO4 = 2:1: 05 Cornwallite, 10 Arsenoclasite, 15 Parwelite, 20 Reppiaite, 30 Cornubite 08.BE With only medium-sized cations, (OH, etc.):RO4 > 2:1: 20 Clinoclase, 25 Gilmarite, 25 Arhbarite, 30 Allactite, 30 Flinkite, 35 Chlorophoenicite, 35 Magnesiochlorophoenicite, 40 Gerdtremmelite, 45 Arakiite, 45 Kraisslite, 45 Dixenite, 45 Hematolite, 45 Mcgovernite, 45 Turtmannite, 45 Carlfrancisite, 50 Synadelphite, 55 Holdenite, 60 Kolicite, 65 Sabelliite, 70 Jarosewichite, 75 Theisite, 80 Coparsite 08.BF With medium-sized and large cations, (OH, etc.):RO4 < 0.5:1: 20 Nabiasite 08.BG With medium-sized and large cations, (OH, etc.):RO4 = 0.5:1: 05 Arsenbrackebuschite, 05 Brackebuschite, 05 Gamagarite, 05 Arsentsumebite, 05 Feinglosite, 05 Bushmakinite, 05 Tokyoite, 05 Calderonite 08.BH With medium-sized and large cations, (OH, etc.):RO4 = 1:1: 10 Durangite, 10 Maxwellite, 10 Tilasite; 30 Carminite, 30 Sewardite; 35 Austinite, 35 Adelite, 35 Duftite, 35 Arsendescloizite, 35 Conichalcite, 35 Gabrielsonite, 35 Nickelaustinite, 35 Cobaltaustinite, 35 Tangeite, 35 Gottlobite; 40 Descloizite, 40 Cechite, 40 Pyrobelonite, 40 Mottramite; 45 Bayldonite, 45 Vesignieite; 50 Paganoite, 65 Leningradite 08.BK With medium-sized and large cations, (OH, etc.): 10 Medenbachite, 10 Cobaltneustadtelite, 10 Neustadtelite, 20 Heyite, 25 Jamesite 08.BL With medium-sized and large cations, (OH, etc.):RO4 = 3:1: 05 Beudantite, 05 Hidalgoite, 05 Gallobeudantite, 05 Kemmlitzite, 10 Segnitite, 10 Arsenogorceixite, 10 Arsenocrandallite, 10 Arsenogoyazite, 10 Dussertite, 10 Philipsbornite, 13 Arsenowaylandite, 13 Arsenoflorencite-(Ce), 13 Arsenoflorencite-(La), 13 Arsenoflorencite-(Nd), 13 Graulichite-(Ce) 08.BM With medium-sized and large cations, (OH, etc.):RO4 = 4:1: 05 Retzian-(Ce), 05 Retzian-(La), 05 Retzian-(Nd), 10 Kolitschite 08.BN With only 
large cations, (OH, etc.):RO4 = 0.33:1: 05 Fermorite, 05 Johnbaumite-M, 05 Johnbaumite, 05 Clinomimetite, 05 Hedyphane, 05 Mimetite-M, 05 Mimetite, 05 Morelandite, 05 Svabite, 05 Turneaureite, 05 Vanadinite 08.BO With only large cations, (OH, etc.):RO4 1:1: 10 Preisingerite, 10 Schumacherite, 15 Atelestite, 15 Hechtsbergite, 20 Kombatite, 20 Sahlinite, 35 Kuznetsovite, 45 Schlegelite 08.C Arsenates and vanadates without additional anions, with H2O 08.CA With small and large/medium cations: 30 Arsenohopeite, 35 Warikahnite, 50 Keyite, 55 Pushcharovskite, 60 Prosperite 08.CB With only medium-sized cations, RO4:H2O = 1:1: 10 Nyholmite, 10 Miguelromeroite, 10 Sainfeldite, 10 Villyaellenite; 15 Krautite, 15 Fluckite; 20 Cobaltkoritnigite, 20 Koritnigite; 25 Yvonite, 30 Geminite, 35 Schubnelite, 40 Radovanite, 45 Kazakhstanite, 50 Kolovratite, 55 Irhtemite, 60 Burgessite 08.CC With only medium-sized cations, RO4:H2O = 1:1.5: 10 Kaatialaite, 15 Leogangite 08.CD With only medium-sized cations, RO4:H2O = 1:2: 10 Mansfieldite, 10 Scorodite, 10 Yanomamite; 15 Parascorodite, 25 Sterlinghillite, 30 Rollandite 08.CE With only medium-sized cations, RO4:H2O ≤ 1:2.5: 05 Geigerite, 05 Chudobaite, 15 Brassite, 20 Rosslerite, 30 Veselovskyite, 30 Ondrušite, 30 Lindackerite, 30 Pradetite; 40 Ferrisymplesite, 40 Manganohörnesite, 40 Annabergite, 40 Erythrite, 40 Hörnesite, 40 Köttigite, 40 Parasymplesite, 45 Symplesite, 60 Kaňkite, 65 Steigerite, 70 Metaschoderite, 70 Schoderite, 85 Metaköttigite 08.CF With large and medium-sized cations, RO4:H2O > 1:1: 05 Grischunite 08.CG With large and medium-sized cations, RO4:H2O = 1:1: 05 Gaitite, 05 Parabrandtite, 05 Talmessite, 10 Roselite, 10 Rruffite, 10 Brandtite, 10 Zincroselite, 10 Wendwilsonite, 15 Ferrilotharmeyerite, 15 Cabalzarite, 15 Lotharmeyerite, 15 Cobaltlotharmeyerite, 15 Mawbyite, 15 Cobalttsumcorite, 15 Nickellotharmeyerite, 15 Schneebergite, 15 Nickelschneebergite, 15 Tsumcorite, 15 Thometzekite, 15 Manganlotharmeyerite, 15
Mounanaite, 15 Krettnichite; 20 Zincgartrellite, 20 Lukrahnite, 20 Gartrellite, 20 Helmutwinklerite, 20 Rappoldite, 20 Phosphogartrellite; 25 Pottsite, 35 Nickeltalmessite 08.CH With large and medium-sized cations, RO4:H2O < 1:1: 05 Walentaite, 15 Picropharmacolite; 55 Smolianinovite, 55 Fahleite; 60 Barahonaite-(Fe), 60 Barahonaite-(Al) 08.CJ With only large cations: 20 Haidingerite, 25 Vladimirite, 30 Ferrarisite, 35 Machatschkiite; 40 Phaunouxite, 40 Rauenthalite; 50 Pharmacolite, 55 Mcnearite, 65 Sincosite, 65 Bariosincosite, 75 Guerinite 08.D Arsenates and vanadates 08.DA With small (and occasionally larger) cations: 05 Bearsite, 35 Philipsburgite, 50 Ianbruceite 08.DB With only medium-sized cations, (OH, etc.):RO4 < 1:1: 05 Pitticite, 35 Sarmientite, 40 Bukovskyite, 45 Zykaite, 75 Braithwaiteite 08.DC With only medium-sized cations, (OH, etc.):RO4 = 1:1 and < 2:1: 07 Euchroite, 10 Legrandite, 12 Strashimirite; 15 Arthurite, 15 Ojuelaite, 15 Cobaltarthurite, 15 Bendadaite; 20 Coralloite, 30 Maghrebite, 32 Tinticite, 55 Mapimite, 57 Ogdensburgite 08.DD With only medium-sized cations, (OH, etc.):RO4 = 2:1: 05 Chenevixite, 05 Luetheite; 10 Akrochordite, 10 Guanacoite 08.DE With only medium-sized cations, (OH, etc.):RO4 = 3:1: 15 Bulachite, 25 Ceruleite, 40 Juanitaite 08.DF With only medium-sized cations, (OH, etc.):RO4 > 3:1: 10 Liskeardite, 15 Rusakovite, 20 Liroconite, 30 Chalcophyllite, 35 Parnauite 08.DG With large and medium-sized cations, (OH, etc.):RO4 < 0.5:1: 05 Shubnikovite, 05 Lavendulan, 05 Lemanskiite, 05 Zdenekite 08.DH With large and medium-sized cations, (OH, etc.):RO4 < 1:1: 30 Arseniosiderite, 30 Kolfanite, 30 Sailaufite; 45 Mahnertite; 50 Andyrobertsite, 50 Calcioandyrobertsite; 60 Bouazzerite 08.DJ With large and medium-sized cations, (OH, etc.):RO4 = 1:1: 15 Camgasite, 45 Attikaite 08.DK With large and medium-sized cations, (OH, etc.):RO4 > 1:1 and < 2:1: Richelsdorfite, 10 Bariopharmacosiderite, 10 Pharmacosiderite, 10 Natropharmacosiderite,
10 Hydroniumpharmacosiderite; 12 Pharmacoalumite, 12 Natropharmacoalumite, 12 Bariopharmacoalumite 08.DL With large and medium-sized cations, (OH, etc.):RO4 = 2:1: 15 Agardite-(Ce), 15 Agardite-(La), 15 Agardite-(Nd), 15 Agardite-(Y), 15 Goudeyite, 15 Zalesiite, 15 Mixite, 15 Plumboagardite; 20 Cheremnykhite, 20 Dugganite, 20 Wallkilldellite-(Fe), 20 Wallkilldellite-(Mn) 08.DM With large and medium-sized cations, (OH, etc.):RO4 > 2:1: 05 Esperanzaite, 10 Clinotyrolite, 10 Tyrolite, 15 Betpakdalite-CaCa, 15 Betpakdalite-NaCa, 20 Phosphovanadylite-Ba, 20 Phosphovanadylite-Ca, 25 Yukonite, 40 Santafeite 08.E Uranyl arsenates 08.EA UO2:RO4 = 1:2: 05 Orthowalpurgite, 05 Walpurgite; 10 Hallimondite 08.EB UO2:RO4 = 1:1: 05 Metarauchite, 05 Heinrichite, 05 Kahlerite, 05 Novacekite, 05 Uranospinite, 05 Zeunerite; 10 Metazeunerite, 10 Metauranospinite, 10 Metaheinrichite, 10 Metakahlerite, 10 Metakirchheimerite, 10 Metalodevite, 10 Metanovacekite; 15 Uramarsite, 15 Trogerite, 15 Abernathyite, 15 Natrouranospinite; 20 Chistyakovaite, 25 Arsenuranospathite 08.EC UO2:RO4 = 3:2: 10 Arsenuranylite, 15 Hugelite, 20 Arsenovanmeersscheite, 45 Nielsbohrite 08.ED Unclassified: 10 Asselbornite 08.F Polyarsenates and [4]-polyvanadates 08.FA Polyarsenates and [4]-polyvanadates, without OH and H2O; dimers of corner-sharing RO4 tetrahedra: 05 Blossite, 10 Ziesite, 15 Chervetite, 25 Petewilliamsite 08.FC [4]-Polyvanadates, with H2O only: 05 Fianelite, 15 Pintadoite 08.FD [4]-Polyvanadates, with OH and H2O: 05 Martyite, 05 Volborthite 08.FE Ino-[4]-vanadates: 05 Ankinovichite, 05 Alvanite 08.X Unclassified Strunz arsenates and vanadates References
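The NN.XY.##x code scheme used throughout the classification above can be sketched as a small parser. The helper name, the example codes, and the sample comments are illustrative assumptions; the field meanings follow the scheme as described in this article.

```python
import re

# Hypothetical helper illustrating the Nickel–Strunz code scheme NN.XY.##x:
# NN = mineral class number, X = division letter, Y = family letter,
# ## = mineral/group number, x = optional add-on letter.
STRUNZ_RE = re.compile(r"^(\d{2})\.([A-Z])([A-Z])\.(\d{2})([a-z]?)$")

def parse_strunz(code):
    """Split a Nickel–Strunz code such as '08.BH.35' into its parts."""
    m = STRUNZ_RE.match(code)
    if not m:
        raise ValueError(f"not a NN.XY.##x code: {code!r}")
    cls, division, family, number, addon = m.groups()
    return {
        "class": cls,          # e.g. '08' = phosphates, arsenates, vanadates
        "division": division,  # e.g. 'B' = with additional anions, no H2O
        "family": family,
        "number": number,
        "addon": addon or None,
    }

# Example: austinite is listed above under 08.BH.35.
print(parse_strunz("08.BH.35"))
```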
516645
https://en.wikipedia.org/wiki/MECC
MECC
The Minnesota Educational Computing Consortium (later Corporation), most commonly known as MECC, was an organization founded in 1973. The goal of the organization was to coordinate and provide computer services to schools in the state of Minnesota; however, its software eventually became popular in schools around the world. MECC had its headquarters in the Brookdale Corporate Center in Brooklyn Center, Minnesota. History Origins During the 1960s, Minnesota was a center of computer technology, what City Pages would describe 50 years later as a "Midwestern Silicon Valley". IBM, Honeywell, Control Data and other companies had facilities in the state. In 1963, their presence inspired a group of teachers at the University of Minnesota College of Education's laboratory school to introduce computers into classrooms via teleprinters and time-sharing. The group began with long-distance calls to Dartmouth College's General Electric computer to use John George Kemeny and Thomas E. Kurtz's new Dartmouth BASIC language, then moved to Minneapolis-based Pillsbury Company's own GE computer. In 1968, 20 Minneapolis-Saint Paul area school districts and the College of Education founded Total Information for Educational Systems (TIES) to provide time-sharing service on a HP 2000, training, and software. The presence of computer-company employees on many school boards accelerated TIES's expansion and helped make Minnesota a leader in computer-based education. TIES's success, and similar projects run by Minneapolis Public Schools and Minnesota State University, Mankato, led to the founding of MECC in 1973 by the state legislature. As a Joint Powers Authority, with the support of the University of Minnesota, the Minnesota State Colleges and Universities System, and the Minnesota Department of Education, MECC's role was to study and coordinate computer use in schools for both administrative and educational purposes. 
Schools, including the universities, had to get MECC's approval for most computing expenses, and were also its customers for computer-related services. After study of educational needs, a single educational computer center in the Minneapolis area was recommended for use by schools throughout the state (the University of Minnesota's MERITSS computer provided time-sharing services to its campuses and to state universities). MECC hoped that every Minnesota school, regardless of size, would have a terminal connected to the computer center. Computing facilities SUMITS, a UNIVAC 1110 mainframe, was installed at the MECC facility at 1925 Sather (the address was later changed to 2520 Broadway Drive), next to Highway 280. The facility was a sturdy industrial building originally used for electrical maintenance; part of the building was already occupied by the University of Minnesota's Lauderdale computing facility. SUMITS was a batch processing system rather than a time-sharing system, however, and its performance failed to meet the terms of the contract. In 1977 it was replaced with a Control Data Corporation Cyber 73 mainframe, known as the MECC Timesharing System (MTS). It became the largest such system for education in the world, with up to 448 simultaneous connections from up to 2000 terminals throughout the state, most of them Teletype Model 33 teleprinters, connected at 110 and 300 baud through telephones by using acoustically coupled modems. After several years most of the phone lines were replaced with direct circuits to schools across the state. By 1982 MTS had more than 950 programs in its library. One of the most popular was The Oregon Trail, originally written for the Minneapolis Public Schools' computer. Programming was the largest single use for MTS, with up to 45% of the system used for one of almost a dozen computer languages.
To support its growing number of users (70 to 80% of all Minnesota public schools in 1981, and available to 96% of Minnesota students from 7 am to 11 pm daily by 1982), primarily running programs written in the BASIC language, both timesharing systems developed shared-memory (MULTI) BASIC systems. Through this and less efficient methods, multiuser programs and chat systems appeared in addition to electronic mail and BBS programs; some of the ideas were derived from MERITSS programs, but the MULTI versions were more efficient. The MERITSS chat program, even though it operated via fast-access system files, could not match the efficiency of a MULTI chat program, which copied the input/output into memory to be delivered to the user. The University of Minnesota Computer Center (UCC, as it was called then) rejected implementing MULTI due to concerns about system stability. UCC tried to retrofit the MULTI mail program for its own use because of its good user interface, but this proved impossible. It then tried again with an older fast-access system file version, which worked but was unreliable. After test runs with several other universities' mail programs, two developers at UCC implemented their own version, which also contained a message-board feature and served as the campus-wide e-mail solution for a couple of years. Microcomputer technology As MECC's Cyber 73 entered service, microcomputers began to appear. By 1978 they appeared to offer features wished for in the classroom, such as a graphical display. Through an evaluation and bidding process, the Apple II was chosen by MECC for state schools over other candidates, such as the Radio Shack TRS-80; the win was an important early deal in the history of Apple Inc. Any school in the state could buy Apple computers through MECC, which resold them at cost, without having to go through complex evaluation and purchasing procedures. 
Through what InfoWorld described as an "enviable showcase" for its products, Apple sold more than 2,000 computers during the next three years and more than 5,000 by 1983, making MECC the company's largest reseller. In late 1981 MECC switched to a discount agreement for the Atari 400 and 800, and distributed software through the Atari Program Exchange. The use of microcomputers quickly increased, with 85% of school districts using them by 1981 compared to 75% for time-sharing, and the Cyber 73 shut down in 1983. By then each Minnesota public school had an average of three to four computers, compared to Milwaukee, where only 20 of 110 elementary schools had computers. MECC offered computer training to teachers and administrators, and 10 consortium consultants traveled throughout the state assisting school districts. MECC developed hundreds of microcomputer educational programs, many converted from the time-sharing originals; by 1979 some MECC programs for the Apple II could be downloaded from the timesharing system. MECC distributed The Oregon Trail and others in its library to Minnesota schools for free, and charged other customers $10 to $20 per diskette, each containing several programs. By July 1981 it had 29 software packages available. Projector slides, student worksheets, and other resources for teachers accompanied the software. As control over computer resources moved to local levels within Minnesota, MECC's focus on selling software grew. Beginning in 1980 with the Iowa Department of Education, 5,000 school districts around the world purchased site licenses for MECC software. MECC had distributed 250,000 copies of its software around the world by 1982, and the "Institutional Membership" business became so successful that state subsidies ended. In 1983 MECC became a taxable, profit-making company, owned by the state of Minnesota but otherwise independent. By the 1985-1986 school year MECC offered more than 300 products and had about $7 million in annual sales. 
Activities During its lifetime, the company produced a number of programs that have become well-known to American Generation X and Y students. Besides The Oregon Trail, these included The Secret Island of Dr. Quandary, The Yukon Trail, The Amazon Trail, Odell Lake, Zoyon Patrol, Number Munchers, Word Munchers, Fraction Munchers, Super Munchers, Lemonade Stand, Spellevator, Storybook Weaver, My Own Stories, Museum Madness, Jenny's Journeys, and DinoPark Tycoon. The game Freedom!, which had the player try to escape from slavery on the Underground Railroad, was released in 1992 but pulled from the market in 1993 following complaints from parents about its classroom use. Closure MECC was financially successful and dominated the market for Apple II software used within schools, but its management believed that the company needed more capital in order to compete for the home market and to develop software for other platforms, such as the IBM PC and the Macintosh. As the state of Minnesota did not have the capital to fund such plans, it spun off the company as a private corporation in 1991 to the venture capital fund North American Fund II for $5.25 million. An IPO followed in March 1994, and the publicly traded company, with about $30 million in annual revenue (about one third from The Oregon Trail), was acquired by SoftKey in 1995 for $370 million in stock. Although MECC continued to develop software, including the successful Oregon Trail II in 1995, its offices in Brooklyn Center, Minnesota, closed in October 1999 amid layoffs. 
References
External links
MECC (Archive)
The MECC Interactive Catalog maintained by Apple Pugetsound Program Library Exchange
History of MECC from Stanford University
https://en.wikipedia.org/wiki/Wiki
Wiki
A wiki is a hypertext publication collaboratively edited and managed by its own audience directly using a web browser. A typical wiki contains multiple pages for the subjects or scope of the project and can be either open to the public or limited to use within an organization for maintaining its internal knowledge base. Wikis are enabled by wiki software, otherwise known as wiki engines. A wiki engine, being a form of a content management system, differs from other web-based systems such as blog software in that the content is created without any defined owner or leader, and wikis have little inherent structure, allowing structure to emerge according to the needs of the users. Wiki engines usually allow content to be written using a simplified markup language and sometimes edited with the help of a rich-text editor. There are dozens of different wiki engines in use, both standalone and part of other software, such as bug tracking systems. Some wiki engines are open source, whereas others are proprietary. Some permit control over different functions (levels of access); for example, editing rights may permit changing, adding, or removing material. Others may permit access without enforcing access control. Other rules may be imposed to organize content. The online encyclopedia project, Wikipedia, is the most popular wiki-based website, and is one of the most widely viewed sites in the world, having been ranked in the top twenty since 2007. Wikipedia is not a single wiki but rather a collection of hundreds of wikis, with each one pertaining to a specific language. In addition to Wikipedia, there are hundreds of thousands of other wikis in use, both public and private, including wikis functioning as knowledge management resources, notetaking tools, community websites, and intranets. The English-language Wikipedia has the largest collection of articles: as of February 2020, it has over 6 million articles. 
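As a toy illustration of the access levels mentioned above, a wiki engine might gate actions on a per-role permission table roughly like this (the role names and permission sets here are illustrative assumptions, not drawn from any particular engine):

```python
# Minimal sketch of per-role editing rights on a wiki.
# Roles and permission sets are illustrative assumptions only.
PERMISSIONS = {
    "anonymous": {"read"},
    "registered": {"read", "edit", "create"},
    "administrator": {"read", "edit", "create", "delete", "protect"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in PERMISSIONS.get(role, set())

print(can("anonymous", "edit"))       # False here; an open wiki might allow it
print(can("administrator", "delete")) # True
```

An engine that "permits access without enforcing access control" would simply grant every action to every role.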
Ward Cunningham, the developer of the first wiki software, WikiWikiWeb, originally described wiki as "the simplest online database that could possibly work." "Wiki" is a Hawaiian word meaning "quick." Characteristics In their book The Wiki Way: Quick Collaboration on the Web, Ward Cunningham and co-author Bo Leuf described the essence of the Wiki concept: "A wiki invites all users—not just experts—to edit any page or to create new pages within the wiki Web site, using only a standard "plain-vanilla" Web browser without any extra add-ons." "Wiki promotes meaningful topic associations between different pages by making page link creation intuitively easy and showing whether an intended target page exists or not." "A wiki is not a carefully crafted site created by experts and professional writers and designed for casual visitors. Instead, it seeks to involve the typical visitor/user in an ongoing process of creation and collaboration that constantly changes the website landscape." A wiki enables communities of editors and contributors to write documents collaboratively. All that people require to contribute is a computer, Internet access, a web browser, and a basic understanding of a simple markup language (e.g. MediaWiki markup language). A single page in a wiki website is referred to as a "wiki page", while the entire collection of pages, which are usually well-interconnected by hyperlinks, is "the wiki". A wiki is essentially a database for creating, browsing, and searching through information. A wiki allows non-linear, evolving, complex, and networked text, while also allowing for editor argument, debate, and interaction regarding the content and formatting. A defining characteristic of wiki technology is the ease with which pages can be created and updated. Generally, there is no review by a moderator or gatekeeper before modifications are accepted and thus lead to changes on the website. 
Many wikis are open to alteration by the general public without requiring registration of user accounts. Many edits can be made in real-time and appear almost instantly online, but this feature facilitates abuse of the system. Private wiki servers require user authentication to edit pages, and sometimes even to read them. Maged N. Kamel Boulos, Cito Maramba, and Steve Wheeler write that open wikis produce a process of Social Darwinism. "... because of the openness and rapidity that wiki pages can be edited, the pages undergo an evolutionary selection process, not unlike that which nature subjects to living organisms. 'Unfit' sentences and sections are ruthlessly culled, edited and replaced if they are not considered 'fit', which hopefully results in the evolution of a higher quality and more relevant page." Editing Source editing Some wikis have an Edit button or link directly on the page being viewed if the user has permission to edit the page. This can lead to a text-based editing page where participants can structure and format wiki pages with a simplified markup language, sometimes known as wikitext, wiki markup or wikicode (it can also lead to a WYSIWYG editing page; see Visual editing below). For example, starting lines of text with asterisks could create a bulleted list. The style and syntax of wikitexts can vary greatly among wiki implementations, some of which also allow HTML tags. Layout consistency Wikis have favored plain-text editing, with fewer and simpler conventions than HTML for indicating style and structure. Although restricting access to HTML and Cascading Style Sheets (CSS) limits users' ability to alter the structure and formatting of wiki content, there are some benefits. Limited access to CSS promotes consistency in the look and feel, and having JavaScript disabled prevents a user from implementing code that may limit other users' access. 
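The asterisk rule mentioned above can be sketched in a few lines of code; this is a minimal illustration only, as real engines such as MediaWiki handle nesting, inline markup, and many more rules:

```python
def render_bullets(wikitext: str) -> str:
    """Translate lines starting with '*' into an HTML bulleted list.

    A toy sketch of simplified wiki markup; the function name is
    illustrative and not taken from any real wiki engine.
    """
    html_lines = []
    in_list = False
    for line in wikitext.splitlines():
        if line.startswith("*"):
            if not in_list:
                html_lines.append("<ul>")
                in_list = True
            # Strip the leading asterisk(s) and surrounding spaces.
            html_lines.append(f"<li>{line.lstrip('* ').strip()}</li>")
        else:
            if in_list:
                html_lines.append("</ul>")
                in_list = False
            html_lines.append(line)
    if in_list:
        html_lines.append("</ul>")
    return "\n".join(html_lines)

print(render_bullets("Shopping:\n* milk\n* eggs"))
```

A production renderer would also have to escape HTML in the input and resolve inline markup such as links, which this sketch deliberately omits.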
Visual editing Wikis can also make WYSIWYG editing available to users, usually through a JavaScript control that translates graphically entered formatting instructions into the corresponding HTML tags or wikitext. In those implementations, the markup of a newly edited, marked-up version of the page is generated and submitted to the server transparently, shielding the user from this technical detail. An example of this is the VisualEditor on Wikipedia. WYSIWYG controls do not, however, always provide all the features available in wikitext, and some users prefer not to use a WYSIWYG editor. Hence, many of these sites offer some means to edit the wikitext directly. Version history Some wikis keep a record of changes made to wiki pages; often, every version of the page is stored. This means that authors can revert to an older version of the page when necessary, for example because content was accidentally deleted or the page was vandalized to include offensive or malicious text or other inappropriate content. Edit summary Many wiki implementations, such as MediaWiki, the software that powers Wikipedia, allow users to supply an edit summary when they edit a page. This is a short piece of text summarizing the changes they have made (e.g. "Corrected grammar," or "Fixed formatting in table."). It is not inserted into the article's main text but is stored along with that revision of the page, allowing users to explain what has been done and why. This is similar to a log message when making changes in a revision-control system. This enables other users to see which changes have been made by whom and why, often in a list of summaries, dates and other short, relevant content, a list which is called a "log" or "history." Navigation Within the text of most pages, there are usually many hypertext links to other pages within the wiki. 
This form of non-linear navigation is more "native" to a wiki than structured/formalized navigation schemes. Users can also create any number of index or table-of-contents pages, with hierarchical categorization or whatever form of organization they like. These may be challenging to maintain "by hand", as multiple authors and users may create and delete pages in an ad hoc, unorganized manner. Wikis can provide one or more ways to categorize or tag pages to support the maintenance of such index pages. Some wikis, including the original, have a backlink feature, which displays all pages that link to a given page. It is also typically possible in a wiki to create links to pages that do not yet exist, as a way to invite others to share what they know about a subject new to the wiki. Wiki users can typically "tag" pages with categories or keywords, to make it easier for other users to find the article. For example, a user creating a new article on cold-weather biking might "tag" this page under the categories of commuting, winter sports and bicycling. Linking and creating pages Links are created using a specific syntax, the so-called "link pattern". Originally, most wikis used CamelCase to name pages and create links. These are produced by capitalizing words in a phrase and removing the spaces between them (the word "CamelCase" is itself an example). While CamelCase makes linking easy, it also leads to links in a form that deviates from the standard spelling. To link to a page with a single-word title, one must abnormally capitalize one of the letters in the word (e.g. "WiKi" instead of "Wiki"). CamelCase-based wikis are instantly recognizable because they have many links with names such as "TableOfContents" and "BeginnerQuestions." A wiki can render the visible anchor of such links "pretty" by reinserting spaces, and possibly also reverting to lower case. 
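The space-reinsertion step just described can be sketched with a regular expression (a simplified illustration, not the algorithm of any particular wiki engine):

```python
import re

def prettify(page_name: str) -> str:
    """Reinsert spaces into a CamelCase link anchor,
    e.g. 'TableOfContents' -> 'Table Of Contents'."""
    # Insert a space before every capital letter except the first character.
    return re.sub(r"(?<!^)(?=[A-Z])", " ", page_name)

print(prettify("TableOfContents"))  # Table Of Contents
print(prettify("RichardWagner"))    # Richard Wagner
```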
This reprocessing of the link to improve the readability of the anchor is, however, limited by the loss of capitalization information caused by CamelCase reversal. For example, "RichardWagner" should be rendered as "Richard Wagner", whereas "PopularMusic" should be rendered as "popular music". There is no easy way to determine which capital letters should remain capitalized. As a result, many wikis now have "free linking" using brackets, and some disable CamelCase by default. Searching Most wikis offer at least a title search, and sometimes a full-text search. The scalability of the search depends on whether the wiki engine uses a database. Some wikis, such as PmWiki, use flat files. MediaWiki's first versions used flat files, but it was rewritten by Lee Daniel Crocker in the early 2000s to be a database application. Indexed database access is necessary for high-speed searches on large wikis. Alternatively, external search engines such as Google Search can sometimes be used on wikis with limited searching functions to obtain more precise results. History WikiWikiWeb was the first wiki. Ward Cunningham started developing WikiWikiWeb in Portland, Oregon, in 1994, and installed it on the Internet domain c2.com on March 25, 1995. It was named by Cunningham, who remembered a Honolulu International Airport counter employee telling him to take the "Wiki Wiki Shuttle" bus that runs between the airport's terminals. According to Cunningham, "I chose wiki-wiki as an alliterative substitute for 'quick' and thereby avoided naming this stuff quick-web." Cunningham was, in part, inspired by Apple's HyperCard, which he had used. HyperCard, however, was single-user. Apple had designed a system allowing users to create virtual "card stacks" supporting links among the various cards. Cunningham developed Vannevar Bush's ideas of associative linking by allowing users to "comment on and change one another's text." 
Cunningham says his goals were to link together people's experiences to create a new literature to document programming patterns, and to harness people's natural desire to talk and tell stories with a technology that would feel comfortable to those not used to "authoring". Wikipedia became the most famous wiki site, launched in January 2001 and entering the top ten most popular websites in 2007. In the early 2000s, wikis were increasingly adopted in enterprises as collaborative software. Common uses included project communication, intranets, and documentation, initially for technical users. Some companies use wikis as their only collaborative software and as a replacement for static intranets, and some schools and universities use wikis to enhance group learning. There may be greater use of wikis behind firewalls than on the public Internet. On March 15, 2007, the word wiki was listed in the online Oxford English Dictionary. Alternative definitions In the late 1990s and early 2000s, the word "wiki" was used to refer to both user-editable websites and the software that powers them; the latter definition is still occasionally in use. Wiki inventor Ward Cunningham wrote in 2014 that the word "wiki" should not be used to refer to a single website, but rather to a mass of user-editable pages or sites, so that a single website is not "a wiki" but "an instance of wiki". He wrote that the concept of wiki federation, in which the same content can be hosted and edited in more than one location in a manner similar to distributed version control, meant that the concept of a single discrete "wiki" no longer made sense. Implementations Wiki software is a type of collaborative software that runs a wiki system, allowing web pages to be created and edited using a common web browser. It may be implemented as a series of scripts behind an existing web server or as a standalone application server that runs on one or more web servers. 
The content is stored in a file system, and changes to the content are stored in a relational database management system. A commonly implemented software package is MediaWiki, which runs Wikipedia. Alternatively, personal wikis run as a standalone application on a single computer. Wikis can also be created on a "wiki farm", where the server-side software is implemented by the wiki farm owner. Some wiki farms can also host private, password-protected wikis. Free wiki farms generally contain advertising on every page. For more information, see Comparison of wiki hosting services. Trust and security Controlling changes Wikis are generally designed with the philosophy of making it easy to correct mistakes, rather than making it difficult to make them. Thus, while wikis are very open, they provide a means to verify the validity of recent additions to the body of pages. The most prominent, on almost every wiki, is the "Recent Changes" page: a list showing recent edits, or edits made within a given time frame. Some wikis can filter the list to remove minor edits and edits made by automatic importing scripts ("bots"). From the change log, other functions are accessible in most wikis: the revision history shows previous page versions and the diff feature highlights the changes between two revisions. Using the revision history, an editor can view and restore a previous version of the article. This gives the author great power to eliminate edits. The diff feature can be used to decide whether or not this is necessary. A regular wiki user can view the diff of an edit listed on the "Recent Changes" page and, if it is an unacceptable edit, consult the history, restoring a previous revision; this process is more or less streamlined, depending on the wiki software used. In case unacceptable edits are missed on the "recent changes" page, some wiki engines provide additional content control. 
A page, or a set of pages, can be monitored to ensure that it keeps its quality. A person willing to maintain pages will be notified of modifications, allowing them to quickly verify the validity of new edits. This can be seen as a very pro-author and anti-editor feature. A watchlist is a common implementation of this. Some wikis also implement "patrolled revisions", in which editors with the requisite credentials can mark some edits as not vandalism. A "flagged revisions" system can prevent edits from going live until they have been reviewed. Trustworthiness and reliability of content Critics of publicly editable wiki systems argue that these systems could be easily tampered with by malicious individuals ("vandals") or even by well-meaning but unskilled users who introduce errors into the content, while proponents maintain that the community of users can catch such malicious or erroneous content and correct it. Lars Aronsson, a data systems specialist, summarizes the controversy as follows: "Most people when they first learn about the wiki concept, assume that a Web site that can be edited by anybody would soon be rendered useless by destructive input. It sounds like offering free spray cans next to a grey concrete wall. The only likely outcome would be ugly graffiti and simple tagging and many artistic efforts would not be long lived. Still, it seems to work very well." High editorial standards in medicine and health sciences articles, in which users typically use peer-reviewed journals or university textbooks as sources, have led to the idea of expert-moderated wikis. Some wikis allow one to link to specific versions of articles, which has been useful to the scientific community, in that expert peer reviewers could analyse articles, improve them and provide links to the trusted version of that article. 
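Version-specific links of the kind just described are typically built around a revision identifier; MediaWiki, for instance, exposes permanent links of the form index.php?title=...&oldid=.... A sketch, with a hypothetical revision id chosen purely for illustration:

```python
from urllib.parse import urlencode

def permanent_link(base: str, title: str, oldid: int) -> str:
    """Build a MediaWiki-style permanent link to one stored revision."""
    return f"{base}/index.php?{urlencode({'title': title, 'oldid': oldid})}"

# The revision id 123456 is made up for this example.
print(permanent_link("https://en.wikipedia.org/w", "Wiki", 123456))
# -> https://en.wikipedia.org/w/index.php?title=Wiki&oldid=123456
```

Because the link pins a specific revision rather than the live page, a reviewer's endorsement keeps pointing at exactly the text that was reviewed, even if the page is later edited.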
Noveck points out that "participants are accredited by members of the wiki community, who have a vested interest in preserving the quality of the work product, on the basis of their ongoing participation." On controversial topics that have been subject to disruptive editing, a wiki author may restrict editing to registered users. Security The open philosophy of wiki – allowing anyone to edit content – does not ensure that every editor is well-intentioned. For example, vandalism (changing wiki content to something offensive, adding nonsense, maliciously removing encyclopedic content, or deliberately adding incorrect information, such as hoax information) can be a major problem. On larger wiki sites, such as those run by the Wikimedia Foundation, vandalism can go unnoticed for some period of time. Wikis, because of their open nature, are susceptible to intentional disruption, known as "trolling". Wikis tend to take a soft-security approach to the problem of vandalism, making damage easy to undo rather than attempting to prevent damage. Larger wikis often employ sophisticated methods, such as bots that automatically identify and revert vandalism and JavaScript enhancements that show characters that have been added in each edit. In this way, vandalism can be limited to just "minor vandalism" or "sneaky vandalism", where the characters added/eliminated are so few that bots do not identify them and users do not pay much attention to them. An example of a bot that reverts vandalism on Wikipedia is ClueBot NG. ClueBot NG can revert edits, often within minutes, if not seconds. The bot uses machine learning in lieu of heuristics. The amount of vandalism a wiki receives depends on how open the wiki is. For instance, some wikis allow unregistered users, identified by their IP addresses, to edit content, while others limit this function to just registered users. Edit wars can also occur as users repetitively revert a page to the version they favor. 
In some cases, editors with opposing views of which content should appear or what formatting style should be used will change and re-change each other's edits. This results in the page being "unstable" from a general user's perspective, because each time a general user comes to the page, it may look different. Some wiki software allows an administrator to stop such edit wars by locking a page from further editing until a decision has been made on what version of the page would be most appropriate. Some wikis are in a better position than others to control behavior due to governance structures existing outside the wiki. For instance, a college teacher can create incentives for students to behave themselves on a class wiki they administer by limiting editing to logged-in users and pointing out that all contributions can be traced back to the contributors. Bad behavior can then be dealt with under university policies. Potential malware vector Malware can also be a problem for wikis, as users can add links to sites hosting malicious code. For example, a German Wikipedia article about the Blaster Worm was edited to include a hyperlink to a malicious website. Users of vulnerable Microsoft Windows systems who followed the link would be infected. A countermeasure is the use of software that prevents users from saving an edit that contains a link to a site listed on a blacklist of malicious sites. Communities Applications The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all Web sites in terms of traffic. Other large wikis include the WikiWikiWeb, Memory Alpha, Wikivoyage, and Susning.nu, a Swedish-language knowledge base. Medical and health-related wiki examples include Ganfyd, an online collaborative medical reference that is edited by medical professionals and invited non-medical experts. Many wiki communities are private, particularly within enterprises. 
They are often used as internal documentation for in-house systems and applications. Some companies use wikis to allow customers to help produce software documentation. A study of corporate wiki users found that they could be divided into "synthesizers" and "adders" of content. Synthesizers' frequency of contribution was affected more by their impact on other wiki users, while adders' contribution frequency was affected more by being able to accomplish their immediate work. From a study of thousands of wiki deployments, Jonathan Grudin concluded that careful stakeholder analysis and education are crucial to successful wiki deployment. In 2005, the Gartner Group, noting the increasing popularity of wikis, estimated that they would become mainstream collaboration tools in at least 50% of companies by 2009. Wikis can be used for project management. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. In the mid-2000s, the increasing trend among industries toward collaboration put greater pressure on educators to make students proficient in collaborative work, inspiring even greater interest in wikis being used in the classroom. Wikis have found some use within the legal profession and within the government. Examples include the Central Intelligence Agency's Intellipedia, designed to share and collect intelligence; DKospedia, which was used by the American Civil Liberties Union to assist with review of documents about the internment of detainees in Guantánamo Bay; and the wiki of the United States Court of Appeals for the Seventh Circuit, used to post court rules and allow practitioners to comment and ask questions. 
The United States Patent and Trademark Office operates Peer-to-Patent, a wiki to allow the public to collaborate on finding prior art relevant to the examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. Cornell Law School founded a wiki-based legal dictionary called Wex, whose growth has been hampered by restrictions on who can edit. In academic contexts, wikis have also been used as project collaboration and research support systems. City wikis A city wiki (or local wiki) is a wiki used as a knowledge base and social network for a specific geographical locale. The term 'city wiki' or its foreign language equivalent (e.g. German 'Stadtwiki') is sometimes also used for wikis that cover not just a city, but a small town or an entire region. A city wiki contains information about specific instances of things, ideas, people and places. Much of this information might not be appropriate for encyclopedias such as Wikipedia (e.g. articles on every retail outlet in a town), but might be appropriate for a wiki with more localized content and viewers. A city wiki could also contain information about subjects that may or may not be appropriate for a general knowledge wiki, such as:
Details of public establishments such as public houses, bars, accommodation or social centers
Owner name, opening hours and statistics for a specific shop
Statistical information about a specific road in a city
Flavors of ice cream served at a local ice cream parlor
A biography of a local mayor and other persons
WikiNodes WikiNodes are pages on wikis that describe related wikis. They are usually organized as neighbors and delegates. A neighbor wiki is simply a wiki that may discuss similar content or may otherwise be of interest. A delegate wiki is a wiki that agrees to have certain content delegated to that wiki. 
One way of finding a wiki on a specific subject is to follow the wiki-node network from wiki to wiki; another is to take a Wiki "bus tour". Participants The four basic types of users who participate in wikis are reader, author, wiki administrator and system administrator. The system administrator is responsible for the installation and maintenance of the wiki engine and the container web server. The wiki administrator maintains wiki content and is provided additional functions about pages (e.g. page protection and deletion), and can adjust users' access rights by, for instance, blocking them from editing. Growth factors A study of several hundred wikis showed that a relatively high number of administrators for a given content size is likely to reduce growth; that access controls restricting editing to registered users tends to reduce growth; that a lack of such access controls tends to fuel new user registration; and that higher administration ratios (i.e. admins/user) have no significant effect on content or population growth. Conferences Active conferences and meetings about wiki-related topics include:
Atlassian Summit, an annual conference for users of Atlassian software, including Confluence.
OpenSym (called WikiSym until 2014), an academic conference dedicated to research about wikis and open collaboration.
SMWCon, a bi-annual conference for users and developers of Semantic MediaWiki.
TikiFest, a frequently held meeting for users and developers of Tiki Wiki CMS Groupware.
Wikimania, an annual conference dedicated to the research and practice of Wikimedia Foundation projects like Wikipedia.
Former wiki-related events include:
RecentChangesCamp (2006–2012), an unconference on wiki-related topics.
RegioWikiCamp (2009–2013), a semi-annual unconference on "regiowikis", or wikis on cities and other geographic areas. 
Legal environment Joint authorship of articles, in which different users participate in correcting, editing, and compiling the finished product, can also cause editors to become tenants in common of the copyright, making it impossible to republish without permission of all co-owners, some of whose identities may be unknown due to pseudonymous or anonymous editing. Where persons contribute to a collective work such as an encyclopedia, there is, however, no joint ownership if the contributions are separate and distinguishable. Despite most wikis' tracking of individual contributions, the action of contributing to a wiki page is still arguably one of jointly correcting, editing, or compiling, which would give rise to joint ownership. Some copyright issues can be alleviated through the use of an open content license. Version 2 of the GNU Free Documentation License includes a specific provision for wiki relicensing; Creative Commons licenses are also popular. When no license is specified, an implied license to read and add content to a wiki may be deemed to exist on the grounds of business necessity and the inherent nature of a wiki, although the legal basis for such an implied license may not exist in all circumstances. Wikis and their users can be held liable for certain activities that occur on the wiki. If a wiki owner displays indifference and forgoes controls (such as banning copyright infringers) that he could have exercised to stop copyright infringement, he may be deemed to have authorized infringement, especially if the wiki is primarily used to infringe copyrights or obtains a direct financial benefit, such as advertising revenue, from infringing activities. In the United States, wikis may benefit from Section 230 of the Communications Decency Act, which protects sites that engage in "Good Samaritan" policing of harmful material, with no requirement on the quality or quantity of such self-policing. 
It has also been argued, however, that a wiki's enforcement of certain rules, such as anti-bias, verifiability, reliable sourcing, and no-original-research policies, could pose legal risks. When defamation occurs on a wiki, theoretically, all users of the wiki can be held liable, because any of them had the ability to remove or amend the defamatory material from the "publication." It remains to be seen whether wikis will be regarded as more akin to an internet service provider, which is generally not held liable due to its lack of control over publications' contents, than a publisher. It has been recommended that trademark owners monitor what information is presented about their trademarks on wikis, since courts may use such content as evidence pertaining to public perceptions. Joshua Jarvis notes, "Once misinformation is identified, the trademark owner can simply edit the entry." See also Comparison of wiki software Content management system CURIE Dispersed knowledge List of wikis Mass collaboration Universal Edit Button Wikis and education Notes References Further reading External links Exploring with Wiki, an interview with Ward Cunningham by Bill Verners WikiIndex and WikiApiary, directories of wikis WikiMatrix, a website for comparing wiki software and hosts WikiTeam, a volunteer group to preserve wikis associated with Archive Team Murphy, Paula (April 2006). Topsy-turvy World of Wiki. University of California. Ward Cunningham's correspondence with etymologists Hawaiian words and phrases Hypertext Self-organization Social information processing Articles containing video clips
51291118
https://en.wikipedia.org/wiki/Qualified%20digital%20certificate
Qualified digital certificate
In the context of Regulation (EU) No 910/2014 (eIDAS), a qualified digital certificate is a public key certificate issued by a qualified trust service provider that ensures the authenticity and data integrity of an electronic signature and its accompanying message and/or attached data. Description eIDAS defines several tiers of electronic signatures that can be used in conducting public sector and private transactions within and across the borders of EU member states. A qualified digital certificate, in addition to other specific services provided by a qualified trust service provider, is required to elevate the status of an electronic signature to that of being considered a qualified electronic signature. Using cryptography, the digital certificate, also known as a public key certificate, contains information to link it to its owner and the digital signature of the trust entity that verifies the authenticity of the content that has been signed. According to eIDAS, to be considered a qualified digital certificate, the certificate must meet the requirements provided in Annex I of Regulation (EU) No 910/2014, including, but not limited to: Identification that the certificate is a qualified certificate for electronic signature Identification of the qualified trust service provider who issued the qualified certificate Corresponding electronic signature validation data and electronic signature creation data Indication of the certificate’s period of validity Unique certificate identity code of the trust service provider Qualified trust service provider’s advanced electronic signature or electronic seal Vision The need for non-repudiation and authentication of electronic signatures was originally addressed in the Electronic Signatures Directive 1999/93/EC to help facilitate secure transactions, specifically those that occur across the borders of EU Member states.
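The Annex I attributes enumerated above lend themselves to a simple completeness check. The following is a minimal Python sketch under assumed, illustrative field names; a real qualified certificate is an X.509 structure following the regulation's formal profile, not a flat record like this:

```python
from dataclasses import dataclass, fields
from datetime import date

@dataclass
class QualifiedCertificate:
    """Illustrative subset of the Annex I attributes; field names are
    assumptions, not the regulation's formal certificate profile."""
    qc_statement: bool              # marked as a qualified certificate
    issuer_identification: str      # the qualified trust service provider
    signature_validation_data: str  # electronic signature validation data
    valid_from: date                # start of the validity period
    valid_until: date               # end of the validity period
    certificate_identity_code: str  # unique code assigned by the provider
    provider_signature: str         # provider's advanced e-signature or seal

def missing_annex_i_fields(cert):
    """Return the names of required attributes that are absent or empty."""
    missing = [f.name for f in fields(cert) if not getattr(cert, f.name)]
    if cert.valid_from and cert.valid_until and cert.valid_until <= cert.valid_from:
        missing.append("valid_until")  # validity period must be well-formed
    return missing
```

A certificate missing any of these attributes would not meet Annex I, so a signature based on it could not attain qualified status.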
The eIDAS Regulation later replaced the Directive and defined the standards to be used in the creation of qualified digital certificates by trust service providers. Role of a qualified trust service provider A qualified digital certificate can only be issued by a qualified trust service provider that has received authorization from its member state’s supervisory body to provide qualified trust services for creating qualified electronic signatures. The provider must be listed on the EU Trust List; otherwise, it is not permitted to provide qualified digital certificates or other qualified trust services. The trust service provider is required to abide by the guidelines established under eIDAS for creating qualified digital certificates, which include: providing a valid date and time stamp of when the certificate was created; immediately revoking any signature whose certificate has expired; providing appropriate training to all employees involved in providing trust services; and using trustworthy equipment and software capable of preventing certificates from being forged. Legal implications of electronic signatures with qualified digital certificates In court, a qualified electronic signature provides the highest level of probative value, which makes it difficult to refute its authorship. A qualified electronic signature, along with its qualified certificate, is given the same consideration as a handwritten signature when used as evidence in legal proceedings. The validity of a qualified electronic signature that has been created with a qualified certificate must be accepted by other EU member states regardless of which member state the signature was produced in. Global perspective In other parts of the world, similar concepts have been created to define standards for electronic signatures.
In Switzerland, the digital signing standard ZertES has comparable standards that address the conformity and regulation of trust service providers who produce digital certificates. In the United States, the NIST Digital Signature Standard (DSS) does not provide a comparable standard for regulating qualified certificates that would address non-repudiation of a signatory’s qualified certificate. An amendment to NIST DSS is currently being discussed that would be more in line with how eIDAS and ZertES handle trusted services. See also Qualified website authentication certificate References Authentication methods Signature Computer law Cryptography standards
2304415
https://en.wikipedia.org/wiki/UDP%20hole%20punching
UDP hole punching
UDP hole punching is a commonly used technique employed in network address translation (NAT) applications for maintaining User Datagram Protocol (UDP) packet streams that traverse the NAT. NAT traversal techniques are typically required for client-to-client networking applications on the Internet involving hosts connected in private networks, especially in peer-to-peer, Direct Client-to-Client (DCC) and Voice over Internet Protocol (VoIP) deployments. UDP hole punching establishes connectivity between two hosts communicating across one or more network address translators. Typically, third-party hosts on the public transit network are used to establish UDP port states that may be used for direct communications between the communicating hosts. Once port state has been successfully established and the hosts are communicating, port state may be maintained either by normal communications traffic, or in the prolonged absence thereof, by keep-alive packets, usually consisting of empty UDP packets or packets with minimal, non-intrusive content. Overview UDP hole punching is a method for establishing bidirectional UDP connections between Internet hosts in private networks using network address translators. The technique is not applicable in all scenarios or with all types of NATs, as NAT operating characteristics are not standardized. Hosts inside a private network connected via a NAT to the Internet typically use the Session Traversal Utilities for NAT (STUN) method or Interactive Connectivity Establishment (ICE) to determine the public address of the NAT, which their communication peers require. In this process, another host on the public network is used to establish port mapping and other UDP port state that is assumed to be valid for direct communication between the application hosts.
Since UDP state usually expires after short periods of time in the range of tens of seconds to a few minutes, and the UDP port is closed in the process, UDP hole punching employs the transmission of periodic keep-alive packets, each renewing the life-time counters in the UDP state machine of the NAT. UDP hole punching will not work with symmetric NAT devices (also known as bi-directional NAT) which tend to be found in large corporate networks. In symmetric NAT, the NAT's mapping associated with the connection to the well-known STUN server is restricted to receiving data from the well-known server, and therefore the NAT mapping the well-known server sees is not useful information to the endpoint. In a somewhat more elaborate approach both hosts will start sending to each other, using multiple attempts. On a Restricted Cone NAT, the first packet from the other host will be blocked. After that the NAT device has a record of having sent a packet to the other machine, and will let any packets coming from this IP address and port number through. This technique is widely used in peer-to-peer software and Voice over Internet Protocol telephony. It can also be used to assist the establishment of virtual private networks operating over UDP. The same technique is sometimes extended to Transmission Control Protocol (TCP) connections, though with less success because TCP connection streams are controlled by the host OS, not the application, and sequence numbers are selected randomly; thus any NAT device that performs sequence-number checking will not consider the packets to be associated with an existing connection and drop them. Flow Let A and B be the two hosts, each in its own private network; NA and NB are the two NAT devices with globally reachable IP addresses EIPA and EIPB respectively; S is a public server with a well-known, globally reachable IP address. 
A and B each begin a UDP conversation with S; the NAT devices NA and NB create UDP translation states and assign temporary external port numbers EPA and EPB.
S examines the UDP packets to get the source ports used by NA and NB (the external NAT ports EPA and EPB).
S passes EIPA:EPA to B and EIPB:EPB to A.
A sends a packet to EIPB:EPB. NA examines A's packet and creates the following tuple in its translation table: (Source-IP-A, EPA, EIPB, EPB).
B sends a packet to EIPA:EPA. NB examines B's packet and creates the following tuple in its translation table: (Source-IP-B, EPB, EIPA, EPA).
Depending on the state of NA's translation table when B's first packet arrives (i.e. whether the tuple (Source-IP-A, EPA, EIPB, EPB) has been created by the time of arrival of B's first packet), B's first packet is dropped (no entry in translation table) or passed (an entry has been made).
Depending on the state of NB's translation table when A's first packet arrives (i.e. whether the tuple (Source-IP-B, EPB, EIPA, EPA) has been created by the time of arrival of A's first packet), A's first packet is dropped (no entry in translation table) or passed (an entry has been made).
At worst, the second packet from A reaches B; likewise, at worst, the second packet from B reaches A. Holes have been "punched" in the NAT and both hosts can communicate directly.
If both hosts have Restricted cone NATs or Symmetric NATs, the external NAT ports will differ from those used with S. On some routers, the external ports are picked sequentially, making it possible to establish a conversation through guessing nearby ports.
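The flow above can be exercised end to end on a single machine. The following Python sketch (standard library only) simulates S, A and B with UDP sockets on the loopback interface; no real NATs are involved, so the "external" endpoints S observes are simply the sockets' local addresses, and all names are illustrative:

```python
import socket
import threading

def rendezvous(sock):
    """S: a host with a well-known address. It records each client's
    source endpoint (the NAT's EIP:EP in a real deployment) and
    passes each client the endpoint of the other."""
    clients = {}
    while len(clients) < 2:
        name, addr = sock.recvfrom(64)
        clients[name] = addr
    (_, addr_a), (_, addr_b) = clients.items()
    sock.sendto(("%s:%d" % addr_b).encode(), addr_a)
    sock.sendto(("%s:%d" % addr_a).encode(), addr_b)

def host(name, server_addr, results):
    c = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    c.settimeout(5)
    c.sendto(name, server_addr)           # begin a UDP conversation with S
    endpoint, _ = c.recvfrom(64)          # S reports the peer's endpoint
    ip, port = endpoint.decode().rsplit(":", 1)
    c.sendto(b"punch from " + name, (ip, int(port)))  # send directly to peer
    data, _ = c.recvfrom(64)              # the peer's direct packet arrives
    results[name.decode()] = data.decode()
    c.close()

# Wire up S plus hosts A and B, all on the loopback interface.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server_sock.bind(("127.0.0.1", 0))
results = {}
threads = [threading.Thread(target=rendezvous, args=(server_sock,))]
threads += [threading.Thread(target=host,
                             args=(n, server_sock.getsockname(), results))
            for n in (b"A", b"B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
server_sock.close()
print(results)  # each host has received a packet sent directly by the other
```

With real NATs, the direct sends may need several retransmissions before a hole opens; the loopback simulation skips that because no packets are filtered.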
See also Hamachi Freenet ICMP hole punching TCP hole punching Hole punching (networking) WebRTC Port Control Protocol (PCP) Teredo tunneling References External links Peer-to-Peer Communication Across Network Address Translators, PDF contains a detailed explanation of the hole punching process STUNT Simple Traversal of UDP Through NATs and TCP too Network Address Translation and Peer-to-Peer Applications (NATP2P) Computer network security
3569061
https://en.wikipedia.org/wiki/Ciena
Ciena
Ciena Corporation is an American telecommunications networking equipment and software services supplier based in Hanover, Maryland. The company has been described by The Baltimore Sun as the "world's biggest player in optical connectivity." The company reported revenues of $3.57 billion for 2019. Ciena had approximately 6,000 employees, as of October 2018. Gary Smith serves as president and chief executive officer (CEO). Customers include AT&T, Deutsche Telekom, Korea Telecom, Sprint Corporation, and Verizon Communications. History Early history and initial public offering Ciena was founded in 1992 under the name HydraLite by electrical engineer David R. Huber. Huber served as chief executive officer, while Optelecom, a company building optical networking products, provided "management assistance and production facilities," and co-founder Kevin Kimberlin "provided initial equity capital during the formation of the Company". The company subsequently received funding from Sevin Rosen Funds as a result of a demonstration at its laboratory attended by Jon Bayless, a partner at the firm, who saw the value in applying HydraLite's fiber-optic technology to cable television. Sevin Rosen offered funding immediately, investing $1.25 million in April 1994. Ciena received $40 million in venture capital financing, including $3.3 million from Sevin Rosen Funds. Other early investors in the company included Charles River Ventures, Japan Associated Finance Co., Star Venture, and Vanguard Venture Partners. Bayless also recruited physicist Patrick Nettles, a former colleague at the telecommunications company Optilink, to serve as Ciena's first CEO, and Lawrence P. Huang, another former colleague, to accept the sales chief role. Huber and Nettles began working from an office in Dallas in February 1994; Huber would remain with Ciena until 1995. The name of the company was changed to Ciena in 1994.
Its first products were introduced in May 1996, and Sprint Corporation was the company's first customer. At $195 million, the company's first-year sales were the highest ever recorded by a startup at the time. Ciena had sold $54.8 million in products to Sprint alone by November 1996. WorldCom also became an early customer, and Sprint and WorldCom accounted for 97 percent of Ciena's revenue, as of early 1997. Ciena began diversifying its clientele and acquiring smaller contracts in 1997. Ciena went public on NASDAQ in February 1997, and was the largest initial public offering of a startup company to date, with a valuation of $3.4 billion. The company's headquarters were relocated to Maryland in March 1997. Ciena earned approximately $370 million in revenue and profits of $110 million for the fiscal year ending in October 1997. Customers at the time included AT&T, Bell Atlantic, and Digital Teleport. In March 1998, Nettles and Michael Birck of Tellabs began discussing a possible merger. Tellabs announced the purchase of Ciena for $7.1 billion in June. Revenue surpassed $700 million by August 1998, and Ciena had approximately 1,300 employees at the time. The merger was not completed. Financial performance and shareholder disapproval were cited in the media as reasons for the abandoned acquisition proposal in September 1998. 2000s–present Following the telecoms crash, Ciena's annual sales decreased from $1.6 billion to approximately $300 million. To address the company's challenges, Smith replaced Nettles as the company's CEO in 2001, and Nettles became executive chairman; Ciena was the second largest fiber optic networking equipment producer in the U.S. at the time. The company raised $1.52 billion by selling 11 million shares of stock and $600 million in convertible bonds in 2001. 
While many telecommunications companies experienced downturns during the early 2000s, Ciena's cash influx provided flexibility and allowed the company to expand its product portfolio to include a broader range of advanced networking solutions and other technologies. Ciena also completed a series of strategic acquisitions, buying 11 companies between 1997 and early 2004. Ciena spent more than $2 billion to purchase five networking technology companies during 2001–2004. AT&T, which previously tested select Ciena equipment, signed a supply agreement in 2001. In 2002, Ciena reported $361.1 million in sales and a loss of $1.59 billion, and had approximately 3,500 employees. The company was the fourth largest producer of fiber optic equipment in the U.S. by 2003. In 2003, a federal court jury determined that Corvis Corporation, another fiber optic telecommunications equipment provider established by Huber in 1997, infringed a patent owned by Ciena. In 2008, Ciena earned $902 million and reported a profit of $39 million. The company earned $653 million and reported a loss of $580 million in 2009; Ciena was generating approximately two-thirds of its revenue in the U.S. at the time. Ciena had net losses until 2015, when the company earned $2.4 billion in sales and posted a $12 million profit. Ciena's global workforce increased from 4,300 in 2011 to 5,345 by October 2015. The company's research and development budget for its Ottawa facilities was approximately $180 million per year, as of 2015. Ciena earned $2.8 billion in revenue in 2017, and reported annual sales of approximately $3.09 billion in 2018. The company ranked number 770 and number 744 on the Fortune 1000 in 2017 and 2018, respectively. Acquisitions Ciena acquired the telecommunications company AstraCom Inc. in 1997 for $13.1 million. Fourteen of AstraCom's engineers signed four-year contracts with Ciena, and joined the company's new research and development team in Alpharetta, Georgia. 
In early 1998, the company acquired Norcross, Georgia-based ATI Telecom International Ltd. and its subsidiary Alta Telecom in a transaction worth $52.5 million. Alta's engineering and installation products were used by service providers for switching, transport, and wireless communications; the company continued to operate as a subsidiary of Ciena. Ciena purchased Terabit Technology Inc., a producer of detectors for data transmission based in Santa Barbara, California, for $11.7 million in April 1998. The company acquired Cupertino, California-based Lightera Networks Inc. and Marlborough, Massachusetts-based Omnia Communications Inc. for $980 million in stock in 1999. The company purchased Cyras Corp. of Fremont, California during 2000–2001 for $2 billion in stock. ONI Systems, a San Jose, California-based producer of phone and computer data equipment, was acquired by Ciena for $900 million in stock in June 2002. The acquisitions of Cyras, which produced optical switch systems, and ONI, which made transport equipment for data transfer, allowed Ciena to focus on networks in metropolitan areas. Ciena purchased WaveSmith Networks Inc., an optical-networking equipment manufacturer based in Acton, Massachusetts, for $158 million in stock in 2003. Ciena acquired the Ottawa-based data storage networking company Akara Corp. for $45 million in 2003. Akara expanded Ciena's product line and storage networking capabilities, and continued to operate as a subsidiary. Catena Networks and New Jersey-based Internet Photonics were purchased by Ciena in 2004. The stock transactions were valued at $486.7 million and $150 million, respectively. Catena had approximately 220 employees at the time, and the purchase of Internet Photonics marked Ciena's entrance into the cable industry. In 2008, Ciena acquired World Wide Packets Inc. (WWP), a Spokane Valley, Washington-based producer of switches and software for Ethernet services, for approximately $296 million. 
WWP offered the LightningEdge operating system and network management tools, and had more than 100 customers in 25 countries at the time. WWP became a wholly owned subsidiary, and the company's office and 65 employees in Spokane, Washington were used by Ciena until mid 2018. Ciena acquired Nortel's optical technology and Carrier Ethernet division for approximately $770 million during 2009–2010. Nortel's Metro Ethernet Networks business developed next-generation optical-transmission equipment and had more than 1,000 customers in 65 countries at the time. The business had approximately 1,400 employees in Canada, including 1,125 in Ottawa and 250 in Montreal. In 2017, Ciena's 1,600 Ottawa personnel were relocated to a new campus in Kanata, Ontario, along with employees of Catena. These 1,600, many of whom worked for Nortel, comprise less than 30 percent of Ciena's workforce, but represent the company's largest operational hub and complete half of its research and development work. Ciena acquired Cyan, which offers platforms and software systems for network operators, for approximately $400 million in 2015. The assets of TeraXion Inc., a network management system company based in Quebec City, were purchased for $32 million in 2016. Ciena acquired Packet Design, an Austin-based network performance management software company specializing in network optimization, route analytics, and topology, in 2016. In 2018, Ciena purchased software and services company DonRiver for an undisclosed amount. Operations in India Ciena opened a campus in Gurgaon, India, in 2006. The campus focuses on research and development, and was further expanded in 2018 to begin manufacturing products for local markets. There were approximately 1,500 employees on site, representing 20 percent of the company's global workforce, as of May 2018. Ciena and Sify partnered in mid 2018 to increase the information and communications technology company's network capacity from 100G to 400G.
Ciena's converged packet optical products support big data analysis, cloud computing, and the Internet of things across 40 of Sify's data centers in India. In 2019, Bharti Airtel used Ciena equipment to build a 130,000 km photonic control plane network, connecting more than 4,000 locations in India. Ciena provides converged packet optical and Ethernet services to Bharti Airtel, Jio, and Vodafone Idea Limited, and supplies equipment to the Government of India, as of mid 2019. Rajesh Nambiar was named the chairman and president of Ciena India in mid 2019. Products Ciena develops and markets equipment, software and services, primarily for the telecommunications industry and large cloud service firms. Its products and services support the transport and management of voice and data traffic on communications networks. Network infrastructure Ciena's network equipment includes optical network switches and routing platforms to manage data load on telecommunications networks. The company launched its WaveLogic 5 modem platform in 2019. The platform provides network capacity up to 800G. Ciena also provides technology and equipment for undersea cable networks. Software and analytics The company's Blue Planet software platform is used by telecoms companies for programming communications networks, including for network automation. It includes a service that uses machine learning algorithms to analyze network anomalies, predict issues, and identify actions network operators can take to prevent outages and further disruptions. References Companies listed on the New York Stock Exchange Companies formerly listed on the Nasdaq Companies based in Anne Arundel County, Maryland Networking companies of the United States Networking hardware companies Telecommunications equipment vendors American companies established in 1992 Telecommunications companies established in 1992 1992 establishments in Maryland 1997 initial public offerings
2504464
https://en.wikipedia.org/wiki/Oracle%20Applications
Oracle Applications
Oracle Applications comprise the applications software or business software of the Oracle Corporation, both in the cloud and on-premises. The term refers to the non-database and non-middleware parts. The suite of applications includes enterprise resource planning, enterprise performance management, supply chain & manufacturing, human capital management, and advertising and customer experience. Oracle initially launched its application suite with financials software in the late 1980s. By 2009, the offering extended to supply chain management, human-resource management, warehouse-management, customer-relationship management, call-center services, product-lifecycle management, and many other areas. Both in-house expansion and the acquisition of other companies have vastly expanded Oracle's application software business. In February 2017, Oracle released Oracle E-Business Suite (EBS/e-BS) Release 12 (R12), a bundling of several Oracle Applications. The release date coincided with new releases of other Oracle-owned products: JD Edwards EnterpriseOne, Siebel Systems and PeopleSoft. Oracle also has a portfolio of enterprise applications for the cloud (SaaS) known as Oracle Fusion Cloud Applications. These cloud applications include Oracle Cloud ERP, Oracle Cloud EPM, Oracle Cloud HCM, Oracle Cloud SCM, and Oracle Advertising and CX. Cloud applications Oracle provides SaaS applications, also known as Oracle Fusion Cloud Applications. The following enterprise cloud applications are available on Oracle Cloud. Oracle Enterprise Resource Planning (ERP) Cloud Oracle Enterprise Performance Management (EPM) Cloud Oracle Human Capital Management (HCM) Cloud Oracle Supply Chain Management (SCM) Cloud Oracle Advertising and Customer Experience (CX) Cloud Oracle Enterprise Resource Planning (ERP) Oracle Cloud ERP is a cloud-based ERP software application suite that manages enterprise functions including accounting, financial management, project management, and procurement.
Oracle Enterprise Performance Management (EPM) Oracle Cloud EPM is a cloud-based EPM software application suite that manages enterprise operational processes including planning, budgeting, and reporting. Oracle Human Capital Management (HCM) Oracle Cloud HCM is a cloud-based HCM software application suite that manages global HR, talent, and workforce management. Oracle Cloud HCM was released in 2011 as a part of Oracle Fusion Applications. Oracle Supply Chain Management (SCM) Oracle Cloud SCM, also known as Oracle Supply Chain & Manufacturing, is a cloud-based SCM software application suite used by companies to build and manage intelligent supply chains. This includes support for procurement, order management, manufacturing, product lifecycle management, maintenance, logistics, and supply chain planning and execution. Oracle Advertising and Customer Experience (CX) Oracle Advertising and Customer Experience (CX) is a cloud-based application suite that includes tools for advertising, marketing, sales, e-commerce, and customer service. The suite also includes: Oracle CX (with Oracle Sales, Oracle Service, Oracle Marketing, Oracle Commerce) Oracle Advertising (with Oracle Activation and Oracle MOAT Measurement) Industry vertical applications ATG / Endeca—also branded as on-premises "Oracle Commerce" Oracle Retail Micros (Retail and Hospitality, acquired post 2012) Primavera Agile AutoVue (for processing CAD and graphics data) NetSuite NetSuite was a cloud computing company acquired by Oracle in 2016. In 2019, NetSuite moved onto Oracle Cloud. NetSuite is a cloud business software platform. 
On-premises applications Oracle E-Business Suite Oracle PeopleSoft Oracle Siebel CRM Oracle JD Edwards EnterpriseOne Oracle JD Edwards World Endeca Inquira Silver Creek Datanomic Hyperion Campus Solutions Oracle's E-Business Suite (also known as EB-Suite/EBS, eBus or "E-Biz") consists of a collection of enterprise resource planning (ERP), customer relationship management (CRM), human capital management (HCM), and supply-chain management (SCM) computer applications either developed or acquired by Oracle. The software utilizes Oracle's core Oracle relational database management system technology. The E-Business Suite contains several product lines often known by short acronyms. Significant technologies incorporated into the applications include the Oracle database technologies, (engines for RDBMS, PL/SQL, Java, .NET, HTML and XML), the "technology stack" (Oracle Forms Server, Oracle Reports Server, Apache Web Server, Oracle Discoverer, Jinitiator and Sun's Java). It makes the following enterprise applications available as part of Oracle eBusiness Suite: Asset Lifecycle Management Customer Relationship Management (CRM) Enterprise Resource Planning (ERP) Human Capital Management (HCM) Procurement Product Life-cycle Management Supply Chain Management Manufacturing See also CEMLI List of acquisitions by Oracle (includes acquisitions which extended Applications portfolio) Oracle Fusion Applications References Further reading Cameron, Melanie. Oracle General Ledger Guide (2009) McGraw-Hill. . Cameron, Melanie. Oracle Procure-to-Pay Guide (2009) McGraw-Hill. . External links Oracle Applications home page Oracle software Accounting software Project management software
9909291
https://en.wikipedia.org/wiki/Unified%20Video%20Decoder
Unified Video Decoder
Unified Video Decoder (UVD), previously called Universal Video Decoder, is the name given to AMD's dedicated video decoding ASIC. There are multiple versions implementing a multitude of video codecs, such as H.264 and VC-1. UVD was introduced with the Radeon HD 2000 Series and is integrated into some of AMD's GPUs and APUs. UVD occupies a considerable amount of the die surface and is not to be confused with AMD's Video Coding Engine (VCE). Overview The UVD is based on an ATI Xilleon video processor, which is incorporated onto the same die as the GPU and is part of ATI Avivo HD for hardware video decoding, along with the Advanced Video Processor (AVP). UVD, as stated by AMD, handles decoding of the H.264/AVC and VC-1 video codecs entirely in hardware. The UVD technology is based on the Cadence Tensilica Xtensa processor, which was originally licensed by ATI Technologies Inc. in 2004. UVD/UVD+ In early versions of UVD, video post-processing is passed to the pixel shaders and OpenCL kernels. MPEG-2 decoding is not performed within UVD, but in the shader processors. The decoder meets the performance and profile requirements of Blu-ray and HD DVD, decoding H.264 bitstreams at bitrates up to 40 Mbit/s. It has context-adaptive binary arithmetic coding (CABAC) support for H.264/AVC. Unlike video acceleration blocks in previous-generation GPUs, which demanded considerable host-CPU involvement, UVD offloads the entire video-decoder process for VC-1 and H.264 except for video post-processing, which is offloaded to the shaders. MPEG-2 decode is also supported, but the bitstream/entropy decode is not performed for MPEG-2 video in hardware. Previously, neither the ATI Radeon R520 series' ATI Avivo nor the Nvidia GeForce 7 series' PureVideo assisted front-end bitstream/entropy decompression in VC-1 and H.264; the host CPU performed this work. UVD handles VLC/CAVLC/CABAC, frequency transform, pixel prediction and in-loop deblocking, but passes the post-processing to the shaders.
Post-processing includes denoising, de-interlacing, and scaling/resizing. AMD has also stated that the UVD component occupies only 4.7 mm² of die area on the 65 nm fabrication process node. A variant of UVD, called UVD+, was introduced with the Radeon HD 3000 series. UVD+ adds HDCP support for higher-resolution video streams, but was also marketed simply as UVD.

UVD 2

The UVD saw a refresh with the release of the Radeon HD 4000 series products. UVD 2 features full bitstream decoding of H.264/MPEG-4 AVC and VC-1, as well as iDCT-level acceleration of MPEG-2 video streams. Performance improvements allow dual video stream decoding and a picture-in-picture mode, making UVD 2 fully BD-Live compliant. UVD 2.2 features a re-designed local memory interface and improves compatibility with MPEG-2/H.264/VC-1 videos. It was nevertheless marketed under the same "UVD 2 Enhanced" name, described as the "special core-logic, available in RV770 and RV730 series of GPUs, for hardware decoding of MPEG2, H.264 and VC-1 video with dual-stream decoding"; this naming likely reflects UVD 2.2 being only an incremental update to UVD 2.

UVD 3

UVD 3 adds hardware support for MPEG-2 decoding (entropy decode), DivX and Xvid via MPEG-4 Part 2 decoding (entropy decode, inverse transform, motion compensation), and Blu-ray 3D via MVC (entropy decode, inverse transform, motion compensation, in-loop deblocking), along with 120 Hz stereo 3D support, and is optimized to use less CPU processing power. UVD 3 also adds support for Blu-ray 3D stereoscopic displays.

UVD 4

UVD 4 includes improved frame interpolation in the H.264 decoder. UVD 4.2 was introduced with the AMD Radeon Rx 200 series and the Kaveri APU.

UVD 5

UVD 5 was introduced with the AMD Radeon R9 285. New to UVD 5 is full support for 4K H.264 video, up to level 5.2 (4Kp60).
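The division of decode work between the UVD block and the shaders described in the UVD/UVD+ section above can be summarized in a small lookup structure. This is purely an illustrative sketch of the stated stage split; the stage names and the function are invented for illustration and are not part of any real driver API.

```python
# Illustrative sketch of which decode stages early UVD hardware handled,
# per the description above. Not a real driver API; names are hypothetical.
UVD_STAGES = {
    "H.264": {"entropy (CAVLC/CABAC)": "UVD", "frequency transform": "UVD",
              "pixel prediction": "UVD", "in-loop deblocking": "UVD",
              "post-processing": "shaders"},
    "VC-1":  {"entropy (CAVLC/CABAC)": "UVD", "frequency transform": "UVD",
              "pixel prediction": "UVD", "in-loop deblocking": "UVD",
              "post-processing": "shaders"},
    # MPEG-2 entropy decode is NOT done by early UVD; it runs on the
    # CPU/shader processors instead.
    "MPEG-2": {"entropy (VLC)": "CPU/shaders", "frequency transform": "shaders",
               "post-processing": "shaders"},
}

def offloaded_to_uvd(codec):
    """Return the decode stages the UVD block itself handles for a codec."""
    return [stage for stage, unit in UVD_STAGES[codec].items() if unit == "UVD"]
```

For H.264 and VC-1 this yields the full entropy-to-deblocking pipeline, while for MPEG-2 it yields nothing, matching the text's statement that MPEG-2 bitstream decoding stayed off the UVD block.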
UVD 6

The UVD 6.0 decoder and Video Coding Engine 3.1 encoder were first used in GPUs based on GCN 3, including the Radeon R9 Fury series and "Carrizo" APUs, followed by the AMD Radeon Rx 300 series (Pirate Islands GPU family) and the AMD Radeon Rx 400 series (Arctic Islands GPU family). The UVD version in "Fiji"- and "Carrizo"-based graphics hardware was also announced to support High Efficiency Video Coding (HEVC, H.265) hardware video decoding, up to 4K with 8-bit color (H.265 version 1, main profile); UVD 6.3 in the AMD Radeon 400 series adds support for 10-bit color HDR for both the H.265 and VP9 video codecs.

UVD 7

The UVD 7.0 decoder and Video Coding Engine 4.0 encoder are included in the Vega-based GPUs, but there is still no fixed-function VP9 hardware decoding.

UVD 7.2

AMD's Vega 20 GPU, present in the Instinct MI50, Instinct MI60 and Radeon VII cards, includes VCE 4.1 and two UVD 7.2 instances.

VCN 1

Starting with the integrated graphics of the Raven Ridge APUs (Ryzen 2200G/2400G), the former UVD and VCE have been replaced by the new "Video Core Next" (VCN). VCN 1.0 adds full hardware decoding for the VP9 codec.

Format support

Availability

Most Radeon HD 2000 series video cards implement UVD for hardware decoding of 1080p high-definition content. However, the Radeon HD 2900 series video cards do not include UVD (though they can provide partial functionality through their shaders). Before the launch of the Radeon HD 2900 XT, UVD was incorrectly stated to be present on the product pages and package boxes of add-in partners' products, which advertised the cards as featuring either ATI Avivo HD or explicitly UVD; only the ATI Avivo HD claim is correct. The exclusion of UVD was also confirmed by AMD officials. UVD 2 is implemented in the Radeon RV7x0 and R7x0 series GPUs. This also includes the RS7x0 series used in the AMD 700 chipset series IGP motherboards.
Feature overview

APUs

GPUs

Operating system support

The UVD SIP core needs to be supported by the device driver, which provides one or more interfaces such as VDPAU, VAAPI or DXVA. One of these interfaces is then used by end-user software, for example VLC media player or GStreamer, to access the UVD hardware and make use of it. AMD Catalyst, AMD's proprietary graphics device driver that supports UVD, is available for Microsoft Windows and some Linux distributions. Additionally, a free device driver is available that also supports the UVD hardware.

Linux

Support for UVD has been available in AMD's proprietary Catalyst driver since version 8.10 (October 2008), through X-Video Motion Compensation (XvMC) or X-Video Bitstream Acceleration (XvBA). Since April 2013, UVD is supported by the free and open-source "radeon" device driver through the Video Decode and Presentation API for Unix (VDPAU). An implementation of VDPAU is available as a Gallium3D state tracker in Mesa 3D. On 28 June 2014, Phoronix published benchmarks of the Unified Video Decoder through the VDPAU interface, running MPlayer on Ubuntu 14.04 with version 10.3-testing of Mesa 3D.

Windows

Microsoft Windows has supported UVD since the hardware launched. UVD currently supports only the DXVA (DirectX Video Acceleration) API specification on the Microsoft Windows and Xbox 360 platforms for hardware-accelerated video decoding, so media player software must also support DXVA to make use of UVD hardware acceleration.

Others

Running custom FreeRTOS-based firmware on the Radeon HD 2400's UVD core (based on an Xtensa CPU), interfaced with an STM32 ARM-based board via I2C, was attempted as of January 2012.

Predecessors and successor

Predecessors

The Video Shader and ATI Avivo are similar technologies incorporated into previous ATI products.

Successor

The UVD was succeeded by AMD Video Core Next in the Raven Ridge series of APUs released in October 2017.
The VCN combines both encode (VCE) and decode (UVD).

See also

Hardware video acceleration technologies:

Nvidia
PureVideo - Nvidia
GeForce 256's Motion Compensation
High-Definition Video Processor
Video Processing Engine
Nvidia NVENC
Nvidia NVDEC

AMD
Unified Video Decoder - AMD
Video Shader - ATI

Intel
Quick Sync Video - Intel
Clear Video - Intel

Qualcomm
Qualcomm Hexagon

Other
VDPAU (Video Decode and Presentation API for Unix), from NVIDIA
Video Acceleration API (VA API), an alternative video acceleration API to XvBA for Linux/UNIX operating systems that supports XvBA as a backend
Video Coding Engine, AMD's hardware decoder and encoder (codec transcoder), first introduced at the end of 2011 with the Radeon HD 7900
X-Video Bitstream Acceleration (XvBA), AMD's hardware acceleration API for Linux/UNIX operating systems
Bit stream decoder (BSD)
Comparison of AMD graphics processing units
DirectX Video Acceleration (DxVA), Microsoft's hardware acceleration API for Microsoft Windows based operating systems

Notes

References

External links

ATI Avivo HD Technology Brief, July 2008
AMD Video Technologies, October 2010
Presentation slides comparing CPU decode, ATI Avivo HD and PureVideo HD, and decode comparison of VC-1 and H.264 video
AMD Media Codecs (an optional download)
https://en.wikipedia.org/wiki/Internet%20access
Internet access
Internet access is the ability of individuals and organizations to connect to the Internet using computer terminals, computers, and other devices, and to access services such as email and the World Wide Web. Internet access is sold by Internet service providers (ISPs) delivering connectivity at a wide range of data transfer rates via various networking technologies. Many organizations, including a growing number of municipal entities, also provide cost-free wireless access and landlines. Availability of Internet access was once limited, but it has grown rapidly. In 1995, well under one percent of the world's population had access, with well over half of those users living in the United States, and consumer use was through dial-up. By the first decade of the 21st century, many consumers in developed nations used faster broadband technology, and by 2014, 41 percent of the world's population had access, broadband was almost ubiquitous worldwide, and global average connection speeds exceeded one megabit per second.

History

The Internet developed from the ARPANET, which was funded by the US government to support projects within the government and at universities and research laboratories in the US, but it grew over time to include most of the world's large universities and the research arms of many technology companies. Use by a wider audience came only in 1995, when restrictions on the use of the Internet to carry commercial traffic were lifted. In the early to mid-1980s, most Internet access was from personal computers and workstations directly connected to local area networks (LANs) or from dial-up connections using modems and analog telephone lines. LANs typically operated at 10 Mbit/s, while modem data rates grew from 1200 bit/s in the early 1980s to 56 kbit/s by the late 1990s. Initially, dial-up connections were made from terminals or computers running terminal emulation software to terminal servers on LANs.
These dial-up connections did not support end-to-end use of the Internet protocols and only provided terminal-to-host connections. The introduction of network access servers supporting the Serial Line Internet Protocol (SLIP) and later the Point-to-Point Protocol (PPP) extended the Internet protocols to dial-up users and made the full range of Internet services available to them, although at the lower data rates dial-up allows. An important factor in the rapid rise of Internet access speed has been advances in MOSFET (MOS transistor) technology. The MOSFET, originally invented by Mohamed Atalla and Dawon Kahng in 1959, is the building block of Internet telecommunications networks. The laser, proposed by Charles H. Townes and Arthur Leonard Schawlow in 1958 and first demonstrated in 1960, was adopted for MOS light wave systems around 1980, which led to exponential growth of Internet bandwidth. Continuous MOSFET scaling has since led to online bandwidth doubling every 18 months (Edholm's law, which is related to Moore's law), with the bandwidths of telecommunications networks rising from bits per second to terabits per second.

Broadband Internet access, often shortened to just broadband, is simply defined as "Internet access that is always on, and faster than the traditional dial-up access", and so covers a wide range of technologies. At the core of these broadband Internet technologies are complementary MOS (CMOS) digital circuits, the speed capabilities of which were extended with innovative design techniques. Broadband connections are typically made using a computer's built-in Ethernet networking capabilities or a NIC expansion card. Most broadband services provide a continuous "always on" connection; there is no dial-in process, and broadband does not interfere with voice use of phone lines.
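The bandwidth doubling every 18 months attributed to Edholm's law above can be illustrated with a short calculation. The 18-month doubling period comes from the text; the starting rate used in the example is an arbitrary illustration, not a figure from the article.

```python
def edholm_projection(start_bps, months, doubling_period_months=18):
    """Project bandwidth growth, assuming a doubling every 18 months
    (Edholm's law, as described in the text)."""
    return start_bps * 2 ** (months / doubling_period_months)

# Example: over 15 years (180 months) the rate doubles 10 times,
# a factor of 2**10 = 1024 - e.g. 56 kbit/s grows to roughly 57 Mbit/s.
projected = edholm_projection(56_000, 180)
```

Ten doublings per fifteen years is what carries networks from the bit-per-second era toward terabit rates over a few decades, which is the qualitative point the paragraph makes.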
Broadband provides improved access to Internet services such as:

Faster World Wide Web browsing
Faster downloading of documents, photographs, videos, and other large files
Telephony, radio, television, and videoconferencing
Virtual private networks and remote system administration
Online gaming, especially massively multiplayer online role-playing games, which are interaction-intensive

In the 1990s, the National Information Infrastructure initiative in the U.S. made broadband Internet access a public policy issue. In 2000, most Internet access to homes was provided using dial-up, while many businesses and schools were using broadband connections. In 2000 there were just under 150 million dial-up subscriptions in the 34 OECD countries and fewer than 20 million broadband subscriptions. By 2005, broadband had grown and dial-up had declined so that the numbers of subscriptions were roughly equal at 130 million each. In 2010, in the OECD countries, over 90% of Internet access subscriptions used broadband, broadband had grown to more than 300 million subscriptions, and dial-up subscriptions had declined to fewer than 30 million. The broadband technologies in widest use are digital subscriber line (DSL), ADSL, and cable Internet access. Newer technologies include VDSL and optical fiber extended closer to the subscriber in both telephone and cable plants. Fiber-optic communication, while only recently being used in fiber-to-the-premises and fiber-to-the-curb schemes, has played a crucial role in enabling broadband Internet access by making transmission of information at very high data rates over longer distances much more cost-effective than copper wire technology. In areas not served by ADSL or cable, some community organizations and local governments are installing Wi-Fi networks. Wireless, satellite, and microwave Internet are often used in rural, undeveloped, or other hard-to-serve areas where wired Internet is not readily available.
Newer technologies being deployed for fixed (stationary) and mobile broadband access include WiMAX, LTE, and fixed wireless. Starting in roughly 2006, mobile broadband access has been increasingly available at the consumer level using "3G" and "4G" technologies such as HSPA, EV-DO, HSPA+, and LTE.

Availability

In addition to access from home, school, and the workplace, Internet access may be available from public places such as libraries and Internet cafes, where computers with Internet connections are available. Some libraries provide stations for physically connecting users' laptops to LANs. Wireless Internet access points are available in public places such as airport halls, in some cases just for brief use while standing. Some access points may also provide coin-operated computers. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels also have public terminals, usually fee-based. Coffee shops, shopping malls, and other venues increasingly offer wireless access to computer networks, referred to as hotspots, for users who bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A Wi-Fi hotspot need not be limited to a confined location: multiple hotspots combined can cover a whole campus or park, or even an entire city. Additionally, mobile broadband access allows smartphones and other digital devices to connect to the Internet from any location from which a mobile phone call can be made, subject to the capabilities of that mobile network.

Speed

The bit rates for dial-up modems range from as little as 110 bit/s in the late 1950s to a maximum of 33 to 64 kbit/s (V.90 and V.92) in the late 1990s. Dial-up connections generally require the dedicated use of a telephone line. Data compression can boost the effective bit rate for a dial-up modem connection from 220 (V.42bis) to 320 (V.44) kbit/s.
However, the effectiveness of data compression is quite variable, depending on the type of data being sent, the condition of the telephone line, and a number of other factors. In reality, the overall data rate rarely exceeds 150 kbit/s. Broadband technologies supply considerably higher bit rates than dial-up, generally without disrupting regular telephone use. Various minimum data rates and maximum latencies have been used in definitions of broadband, ranging from 64 kbit/s up to 4.0 Mbit/s. In 1988 the CCITT standards body defined "broadband service" as requiring transmission channels capable of supporting bit rates greater than the primary rate which ranged from about 1.5 to 2 Mbit/s. A 2006 Organisation for Economic Co-operation and Development (OECD) report defined broadband as having download data transfer rates equal to or faster than 256 kbit/s. And in 2015 the U.S. Federal Communications Commission (FCC) defined "Basic Broadband" as data transmission speeds of at least 25 Mbit/s downstream (from the Internet to the user's computer) and 3 Mbit/s upstream (from the user's computer to the Internet). The trend is to raise the threshold of the broadband definition as higher data rate services become available. The higher data rate dial-up modems and many broadband services are "asymmetric"—supporting much higher data rates for download (toward the user) than for upload (toward the Internet). Data rates, including those given in this article, are usually defined and advertised in terms of the maximum or peak download rate. In practice, these maximum data rates are not always reliably available to the customer. Actual end-to-end data rates can be lower due to a number of factors. In late June 2016, internet connection speeds averaged about 6 Mbit/s globally. Physical link quality can vary with distance and for wireless access with terrain, weather, building construction, antenna placement, and interference from other radio sources. 
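The FCC's 2015 "Basic Broadband" thresholds quoted above (at least 25 Mbit/s downstream and 3 Mbit/s upstream) lend themselves to a trivial check. This is a sketch; the function name and the example rates are invented for illustration.

```python
def is_basic_broadband(down_mbps, up_mbps):
    """FCC (2015) 'Basic Broadband': at least 25 Mbit/s downstream
    and 3 Mbit/s upstream, per the definition quoted in the text."""
    return down_mbps >= 25 and up_mbps >= 3

# A 20/1 Mbit/s ADSL line falls short of the definition, while a
# typical 100/5 Mbit/s cable connection qualifies.
```

Note that because both thresholds must be met, a heavily asymmetric service with a fast downstream but a sub-3 Mbit/s upstream still fails the definition.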
Network bottlenecks may exist at points anywhere on the path from the end-user to the remote server or service being used, not just on the first or last link providing Internet access to the end-user.

Network congestion

Users may share access over a common network infrastructure. Since most users do not use their full connection capacity all of the time, this aggregation strategy (known as contended service) usually works well, and users can burst to their full data rate at least for brief periods. However, peer-to-peer (P2P) file sharing and high-quality streaming video can require high data rates for extended periods, which violates these assumptions and can cause a service to become oversubscribed, resulting in congestion and poor performance. The TCP protocol includes flow-control mechanisms that automatically throttle back the bandwidth being used during periods of network congestion. This is fair in the sense that all users that experience congestion receive less bandwidth, but it can be frustrating for customers and a major problem for ISPs. In some cases the amount of bandwidth actually available may fall below the threshold required to support a particular service such as video conferencing or streaming live video, effectively making the service unavailable. When traffic is particularly heavy, an ISP can deliberately throttle back the bandwidth available to classes of users or for particular services. This is known as traffic shaping, and careful use can ensure a better quality of service for time-critical services even on extremely busy networks. However, overuse can lead to concerns about fairness and network neutrality, or even charges of censorship when some types of traffic are severely or completely blocked.

Outages

An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption.
Less-developed countries are more vulnerable due to their small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. On April 25, 1997, due to a combination of human error and a software bug, an incorrect routing table at MAI Network Service (a Virginia Internet service provider) propagated across backbone routers and caused major disruption to Internet traffic for a few hours.

Technologies

When the Internet is accessed using a modem, digital data is converted to analog for transmission over analog networks such as the telephone and cable networks. A computer or other device accessing the Internet is either connected directly to a modem that communicates with an Internet service provider (ISP), or the modem's Internet connection is shared via a LAN, which provides access in a limited area such as a home, school, computer laboratory, or office building. Although a connection to a LAN may provide very high data rates within the LAN, actual Internet access speed is limited by the upstream link to the ISP. LANs may be wired or wireless. Ethernet over twisted-pair cabling and Wi-Fi are the two most common technologies used to build LANs today, but ARCNET, Token Ring, LocalTalk, FDDI, and other technologies were used in the past. Ethernet is the name of the IEEE 802.3 standard for physical LAN communication, and Wi-Fi is a trade name for a wireless local area network (WLAN) that uses one of the IEEE 802.11 standards. Ethernet cables are interconnected via switches and routers. Wi-Fi networks are built using one or more wireless antennas called access points.
Many "modems" (cable modems, DSL gateways, and optical network terminals (ONTs)) provide the additional functionality to host a LAN, so most Internet access today is through a LAN such as that created by a Wi-Fi router connected to a modem or a combined modem-router, often a very small LAN with just one or two devices attached. And while LANs are an important form of Internet access, this raises the question of how, and at what data rate, the LAN itself is connected to the rest of the global Internet. The technologies described below are used to make these connections; in other words, they are how customers' modems (customer-premises equipment) are most often connected to Internet service providers (ISPs).

Dial-up technologies

Dial-up access

Dial-up Internet access uses a modem and a phone call placed over the public switched telephone network (PSTN) to connect to a pool of modems operated by an ISP. The modem converts a computer's digital signal into an analog signal that travels over a phone line's local loop until it reaches a telephone company's switching facilities or central office (CO), where it is switched to another phone line that connects to another modem at the remote end of the connection. Operating on a single channel, a dial-up connection monopolizes the phone line and is one of the slowest methods of accessing the Internet. Dial-up is often the only form of Internet access available in rural areas, as it requires no new infrastructure beyond the already existing telephone network. Typically, dial-up connections do not exceed a speed of 56 kbit/s, as they are primarily made using modems that operate at a maximum data rate of 56 kbit/s downstream (towards the end user) and 34 or 48 kbit/s upstream (toward the global Internet).

Multilink dial-up

Multilink dial-up provides increased bandwidth by channel bonding multiple dial-up connections and accessing them as a single data channel.
It requires two or more modems, phone lines, and dial-up accounts, as well as an ISP that supports multilinking – and of course any line and data charges are also doubled. This inverse multiplexing option was briefly popular with some high-end users before ISDN, DSL and other technologies became available. Diamond and other vendors created special modems to support multilinking.

Hardwired broadband access

The term broadband includes a broad range of technologies, all of which provide higher data rate access to the Internet. The following technologies use wires or cables, in contrast to the wireless broadband described later.

Integrated Services Digital Network

Integrated Services Digital Network (ISDN) is a switched telephone service capable of transporting voice and digital data, and is one of the oldest Internet access methods. ISDN has been used for voice, video conferencing, and broadband data applications. ISDN was very popular in Europe, but less common in North America. Its use peaked in the late 1990s before the availability of DSL and cable modem technologies. Basic rate ISDN, known as ISDN-BRI, has two 64 kbit/s "bearer" or "B" channels. These channels can be used separately for voice or data calls or bonded together to provide a 128 kbit/s service. Multiple ISDN-BRI lines can be bonded together to provide data rates above 128 kbit/s. Primary rate ISDN, known as ISDN-PRI, has 23 bearer channels (64 kbit/s each) for a combined data rate of 1.5 Mbit/s (US standard). An ISDN E1 (European standard) line has 30 bearer channels and a combined data rate of 1.9 Mbit/s.

Leased lines

Leased lines are dedicated lines used primarily by ISPs, business, and other large enterprises to connect LANs and campus networks to the Internet using the existing infrastructure of the public telephone network or other providers.
Delivered using wire, optical fiber, and radio, leased lines are used to provide Internet access directly as well as the building blocks from which several other forms of Internet access are created. T-carrier technology dates to 1957 and provides data rates that range from 56 and 64 kbit/s (DS0) to 1.544 Mbit/s (DS1 or T1), to 44.736 Mbit/s (DS3 or T3). A T1 line carries 24 voice or data channels (24 DS0s), so customers may use some channels for data and others for voice traffic or use all 24 channels for clear-channel data. A DS3 (T3) line carries 28 DS1 (T1) channels. Fractional T1 lines are also available in multiples of a DS0 to provide data rates between 56 kbit/s and 1.544 Mbit/s. T-carrier lines require special termination equipment that may be separate from or integrated into a router or switch and which may be purchased or leased from an ISP. In Japan the equivalent standard is J1/J3. In Europe, a slightly different standard, E-carrier, provides 32 user channels (64 kbit/s each) on an E1 (2.048 Mbit/s) and 512 user channels or 16 E1s on an E3 (34.368 Mbit/s). Synchronous Optical Networking (SONET, in the U.S. and Canada) and Synchronous Digital Hierarchy (SDH, in the rest of the world) are the standard multiplexing protocols used to carry high-data-rate digital bit-streams over optical fiber using lasers or highly coherent light from light-emitting diodes (LEDs). At lower transmission rates data can also be transferred via an electrical interface. The basic unit of framing is an OC-3c (optical) or STS-3c (electrical), which carries 155.52 Mbit/s. Thus an OC-3c will carry three OC-1 (51.84 Mbit/s) payloads, each of which has enough capacity to include a full DS3. Higher data rates are delivered in OC-3c multiples of four, providing OC-12c (622.08 Mbit/s), OC-48c (2.488 Gbit/s), OC-192c (9.953 Gbit/s), and OC-768c (39.813 Gbit/s). The "c" at the end of the OC labels stands for "concatenated" and indicates a single data stream rather than several multiplexed data streams.
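Several of the rate figures above are simple channel multiples: ISDN services are multiples of the 64 kbit/s B channel, and the SONET hierarchy is built from multiples of the 51.84 Mbit/s OC-1 base rate. A sketch (function names are invented for illustration):

```python
B_CHANNEL_KBPS = 64     # one ISDN "bearer" (B) channel
OC1_MBPS = 51.84        # base SONET rate (STS-1 / OC-1)

def isdn_bonded_kbps(b_channels):
    """Aggregate rate from bonding ISDN B channels
    (e.g. 2 for a BRI, 23 for a US PRI)."""
    return b_channels * B_CHANNEL_KBPS

def oc_rate_mbps(n):
    """Line rate of a SONET OC-n signal: n multiples of the OC-1 rate."""
    return n * OC1_MBPS

# Two bonded BRI channels give the 128 kbit/s service mentioned earlier,
# and the 23 bearers of a US ISDN-PRI yield 1472 kbit/s (~1.5 Mbit/s).
# OC-3 = 155.52 Mbit/s, and OC-768 = 39813.12 Mbit/s (~39.813 Gbit/s).
```

Running the OC calculation for n = 3, 12, 48, 192, 768 reproduces the ladder of rates listed in the text, each step a factor of four above the last.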
The 1, 10, 40, and 100 gigabit Ethernet (GbE, 10 GbE, and 40/100 GbE) IEEE standards (802.3) allow digital data to be delivered over copper wiring at distances up to 100 m and over considerably longer distances using optical fiber.

Cable Internet access

Cable Internet provides access using a cable modem on hybrid fiber-coaxial wiring originally developed to carry television signals. Either fiber-optic or coaxial copper cable may connect a node to a customer's location at a connection known as a cable drop. In a cable modem termination system, all nodes for cable subscribers in a neighborhood connect to a cable company's central office, known as the "head end". The cable company then connects to the Internet using a variety of means, usually fiber-optic cable or digital satellite and microwave transmissions. Like DSL, broadband cable provides a continuous connection with an ISP. Downstream, the direction toward the user, bit rates can be as much as 1000 Mbit/s in some countries, with the use of DOCSIS 3.1. Upstream traffic, originating at the user, ranges from 384 kbit/s to more than 50 Mbit/s. DOCSIS 4.0 promises up to 10 Gbit/s downstream and 6 Gbit/s upstream; however, this technology has yet to be deployed in real-world usage. Broadband cable access tends to serve fewer business customers because existing television cable networks tend to serve residential buildings, and commercial buildings do not always include wiring for coaxial cable networks. In addition, because broadband cable subscribers share the same local line, communications may be intercepted by neighboring subscribers. Cable networks regularly provide encryption schemes for data traveling to and from customers, but these schemes may be thwarted.

Digital subscriber line (DSL, ADSL, SDSL, and VDSL)

Digital subscriber line (DSL) service provides a connection to the Internet through the telephone network.
Unlike dial-up, DSL can operate using a single phone line without preventing normal use of the telephone line for voice calls. DSL uses the high frequencies, while the low (audible) frequencies of the line are left free for regular telephone communication. These frequency bands are subsequently separated by filters installed at the customer's premises. DSL originally stood for "digital subscriber loop". In telecommunications marketing, the term digital subscriber line is widely understood to mean asymmetric digital subscriber line (ADSL), the most commonly installed variety of DSL. The data throughput of consumer DSL services typically ranges from 256 kbit/s to 20 Mbit/s in the direction to the customer (downstream), depending on DSL technology, line conditions, and service-level implementation. In ADSL, the data throughput in the upstream direction (i.e., toward the service provider) is lower than that in the downstream direction (i.e., toward the customer), hence the designation asymmetric. With a symmetric digital subscriber line (SDSL), the downstream and upstream data rates are equal. Very-high-bit-rate digital subscriber line (VDSL or VHDSL, ITU G.993.1) is a DSL standard approved in 2001 that provides data rates up to 52 Mbit/s downstream and 16 Mbit/s upstream over copper wires, and up to 85 Mbit/s down- and upstream on coaxial cable. VDSL is capable of supporting applications such as high-definition television, as well as telephone services (voice over IP) and general Internet access, over a single physical connection. VDSL2 (ITU-T G.993.2) is a second-generation version and an enhancement of VDSL. Approved in February 2006, it is able to provide data rates exceeding 100 Mbit/s simultaneously in both the upstream and downstream directions. However, the maximum data rate is achieved only at a range of about 300 meters, and performance degrades as distance and loop attenuation increase.
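The asymmetry of ADSL described above has a direct effect on transfer times. A sketch: the 20 Mbit/s downstream rate is the top of the consumer range quoted in the text, while the 1 Mbit/s upstream rate and the file size are hypothetical example figures.

```python
def transfer_seconds(megabytes, rate_mbit_s):
    """Ideal transfer time for a file, ignoring protocol overhead
    and line conditions."""
    return megabytes * 8 / rate_mbit_s

# On an asymmetric 20/1 Mbit/s ADSL line, a 100 MB file downloads
# in 40 s but takes 800 s (over 13 minutes) to upload.
down = transfer_seconds(100, 20)
up = transfer_seconds(100, 1)
```

The 20:1 ratio between the two directions is why asymmetric services suit browsing and streaming well but are a poor fit for upload-heavy workloads.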
DSL Rings

DSL Rings (DSLR) or Bonded DSL Rings is a ring topology that uses DSL technology over existing copper telephone wires to provide data rates of up to 400 Mbit/s.

Fiber to the home

Fiber-to-the-home (FTTH) is one member of the Fiber-to-the-x (FTTx) family that includes Fiber-to-the-building or basement (FTTB), Fiber-to-the-premises (FTTP), Fiber-to-the-desk (FTTD), Fiber-to-the-curb (FTTC), and Fiber-to-the-node (FTTN). These methods all bring data closer to the end user on optical fibers. The differences between the methods have mostly to do with just how close to the end user the delivery on fiber comes. All of these delivery methods are similar in function and architecture to the hybrid fiber-coaxial (HFC) systems used to provide cable Internet access. Fiber connections to customers are either AON (active optical network) or, more commonly, PON (passive optical network). Examples of fiber-optic Internet access standards are G.984 (GPON, G-PON) and 10G-PON (XG-PON). ISPs may instead use Metro Ethernet for corporate and institutional customers. The use of optical fiber offers much higher data rates over relatively longer distances. Most high-capacity Internet and cable television backbones already use fiber-optic technology, with data switched to other technologies (DSL, cable, LTE) for final delivery to customers. In 2010, Australia began rolling out its National Broadband Network across the country using fiber-optic cables to 93 percent of Australian homes, schools, and businesses. The project was abandoned by the subsequent LNP government in favour of a hybrid FTTN design, which turned out to be more expensive and introduced delays. Similar efforts are underway in Italy, Canada, India, and many other countries (see Fiber to the premises by country).

Power-line Internet

Power-line Internet, also known as broadband over power lines (BPL), carries Internet data on a conductor that is also used for electric power transmission.
Because of the extensive power line infrastructure already in place, this technology can provide people in rural and low-population areas access to the Internet with little cost in terms of new transmission equipment, cables, or wires. Data rates are asymmetric and generally range from 256 kbit/s to 2.7 Mbit/s. Because these systems use parts of the radio spectrum allocated to other over-the-air communication services, interference between the services is a limiting factor in the introduction of power-line Internet systems. The IEEE P1901 standard specifies that all power-line protocols must detect existing usage and avoid interfering with it. Power-line Internet has developed faster in Europe than in the U.S. due to a historical difference in power system design philosophies. Data signals cannot pass through the step-down transformers used, so a repeater must be installed on each transformer. In the U.S. a transformer serves a small cluster of one to a few houses. In Europe, it is more common for a somewhat larger transformer to serve a cluster of 10 to 100 houses. Thus a typical U.S. city requires an order of magnitude more repeaters than a comparable European city.

ATM and Frame Relay

Asynchronous Transfer Mode (ATM) and Frame Relay are wide-area networking standards that can be used to provide Internet access directly or as building blocks of other access technologies. For example, many DSL implementations use an ATM layer over the low-level bitstream layer to enable a number of different technologies over the same link. Customer LANs are typically connected to an ATM switch or a Frame Relay node using leased lines at a wide range of data rates. While still widely used, with the advent of Ethernet over optical fiber, MPLS, VPNs and broadband services such as cable modem and DSL, ATM and Frame Relay no longer play the prominent role they once did.
Wireless broadband access

Wireless broadband is used to provide both fixed and mobile Internet access with the following technologies.

Satellite broadband

Satellite Internet access provides fixed, portable, and mobile Internet access. Data rates range from 2 kbit/s to 1 Gbit/s downstream and from 2 kbit/s to 10 Mbit/s upstream. In the northern hemisphere, satellite antenna dishes require a clear line of sight to the southern sky, due to the equatorial position of all geostationary satellites. In the southern hemisphere, this situation is reversed, and dishes are pointed north. Service can be adversely affected by moisture, rain, and snow (known as rain fade). The system requires a carefully aimed directional antenna. Satellites in geostationary Earth orbit (GEO) operate in a fixed position above the Earth's equator. At the speed of light (about 300,000 km/s), it takes a quarter of a second for a radio signal to travel from the Earth to the satellite and back. When other switching and routing delays are added and the delays are doubled to allow for a full round-trip transmission, the total delay can be 0.75 to 1.25 seconds. This latency is large when compared to other forms of Internet access, which have typical latencies ranging from 0.015 to 0.2 seconds. Long latencies negatively affect some applications that require real-time response, particularly online games, voice over IP, and remote control devices. TCP tuning and TCP acceleration techniques can mitigate some of these problems. GEO satellites do not cover the Earth's polar regions. HughesNet, Exede, AT&T and Dish Network have GEO systems. Satellites in low Earth orbit (LEO, below about 2,000 km) and medium Earth orbit (MEO, between LEO and geostationary altitude) are less common, operate at lower altitudes, and are not fixed in their position above the Earth. Because of their lower altitude, more satellites and launch vehicles are needed for worldwide coverage.
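The GEO propagation delay quoted above, and the latency advantage of lower orbits, follow directly from orbital altitude and the speed of light. A minimal sketch (the 550 km LEO altitude is an illustrative figure, not from the text):

```python
SPEED_OF_LIGHT_KM_S = 299_792  # approximate speed of light in vacuum

def hop_delay_s(altitude_km: float) -> float:
    """Time for a radio signal to travel Earth -> satellite -> Earth."""
    return 2 * altitude_km / SPEED_OF_LIGHT_KM_S

geo = hop_delay_s(35_786)  # geostationary altitude above the equator
leo = hop_delay_s(550)     # an illustrative low-Earth-orbit altitude

print(f"GEO: {geo * 1000:.0f} ms per hop")   # ~239 ms, "a quarter of a second"
print(f"LEO: {leo * 1000:.1f} ms per hop")   # ~3.7 ms
# Doubling the GEO figure for a full request/response gives ~0.48 s of
# propagation alone, consistent with the 0.75-1.25 s total once switching
# and routing delays are added.
```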
This makes the required initial investment very large, which at one point drove OneWeb and Iridium into bankruptcy. However, their lower altitudes allow lower latencies and higher speeds, which make real-time interactive Internet applications more feasible. LEO systems include Globalstar, Starlink, OneWeb and Iridium. The O3b constellation is a medium Earth-orbit system with a latency of 125 ms. COMMStellation™ was a proposed LEO system, scheduled for launch in 2015, that was expected to have a latency of just 7 ms.

Mobile broadband

Mobile broadband is the marketing term for wireless Internet access delivered through mobile phone towers (cellular networks) to computers, mobile phones (called "cell phones" in North America and South Africa, and "hand phones" in Asia), and other digital devices using portable modems. Some mobile services allow more than one device to be connected to the Internet using a single cellular connection through a process called tethering. The modem may be built into laptop computers, tablets, mobile phones, and other devices; added to some devices using PC cards, USB modems, and USB sticks or dongles; or separate wireless modems can be used. New mobile phone technology and infrastructure is introduced periodically and generally involves a change in the fundamental nature of the service: non-backwards-compatible transmission technology, higher peak data rates, new frequency bands, and wider channel bandwidths. These transitions are referred to as generations. The first mobile data services became available during the second generation (2G). Quoted download (to the user) and upload (to the Internet) data rates are peak or maximum rates, and end users will typically experience lower data rates. WiMAX was originally developed to deliver fixed wireless service, with wireless mobility added in 2005. CDPD, CDMA2000 EV-DO, and MBWA are no longer being actively developed.
In 2011, 90% of the world's population lived in areas with 2G coverage, while 45% lived in areas with 2G and 3G coverage. 5G was designed to be faster and have lower latency than its predecessor, 4G. It can be used for mobile broadband in smartphones or in separate modems that emit Wi-Fi or connect through USB to a computer, or for fixed wireless.

Fixed wireless

Fixed wireless connections use neither satellites nor equipment designed to move, such as smartphones. They rely on customer premises equipment, such as antennas, that cannot be moved over a significant geographical area without losing the signal from the ISP. Microwave wireless broadband or 5G may be used for fixed wireless.

WiMAX

Worldwide Interoperability for Microwave Access (WiMAX) is a set of interoperable implementations of the IEEE 802.16 family of wireless-network standards certified by the WiMAX Forum. It enables "the delivery of last mile wireless broadband access as an alternative to cable and DSL". The original IEEE 802.16 standard, now called "Fixed WiMAX", was published in 2001 and provided 30 to 40 megabit-per-second data rates. Mobility support was added in 2005. A 2011 update provides data rates up to 1 Gbit/s for fixed stations. WiMAX offers a metropolitan area network with a signal radius of about 50 km (30 miles), far surpassing the 30-metre (100-foot) wireless range of a conventional Wi-Fi LAN. WiMAX signals also penetrate building walls much more effectively than Wi-Fi. WiMAX is most often used as a fixed wireless standard.

Wireless ISP

Wireless Internet service providers (WISPs) operate independently of mobile phone operators. WISPs typically employ low-cost IEEE 802.11 Wi-Fi radio systems to link up remote locations over great distances (Long-range Wi-Fi), but may use other higher-power radio communications systems as well, such as microwave and WiMAX.
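The difference between WiMAX's roughly 50 km signal radius and conventional Wi-Fi's roughly 30 m range is even starker expressed as coverage area, since area grows with the square of the radius. A quick illustration:

```python
import math

def coverage_km2(radius_km: float) -> float:
    """Idealized circular coverage area for a given signal radius."""
    return math.pi * radius_km ** 2

wimax = coverage_km2(50)      # ~50 km WiMAX signal radius
wifi = coverage_km2(0.030)    # ~30 m conventional Wi-Fi range

print(f"WiMAX: {wimax:,.0f} km^2, Wi-Fi: {wifi * 1e6:.0f} m^2")
print(f"ratio: {wimax / wifi:,.0f}x")  # (50 / 0.03)^2, nearly 2.8 million times
```

This idealization ignores terrain, walls, and interference, but it shows why a single WiMAX base station can serve a metropolitan area that would need vast numbers of Wi-Fi access points.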
Traditional 802.11a/b/g/n/ac is an unlicensed omnidirectional service designed to span between 100 and 150 m (300 to 500 ft). By focusing the radio signal using a directional antenna (where allowed by regulations), 802.11 can operate reliably over a distance of many kilometres, although the technology's line-of-sight requirements hamper connectivity in areas with hilly or heavily foliated terrain. In addition, compared to hard-wired connectivity, there are security risks (unless robust security protocols are enabled); data rates are usually slower (2 to 50 times slower); and the network can be less stable, due to interference from other wireless devices and networks, weather, and line-of-sight problems. With the increasing popularity of unrelated consumer devices operating on the same 2.4 GHz band, many providers have migrated to the 5 GHz ISM band. If the service provider holds the necessary spectrum license, it could also reconfigure various brands of off-the-shelf Wi-Fi hardware to operate on its own band instead of the crowded unlicensed ones. Using higher frequencies carries various advantages: regulatory bodies usually allow more power and more directional antennas; there is much more bandwidth to share, allowing both better throughput and improved coexistence; fewer consumer devices operate over 5 GHz than over 2.4 GHz, so fewer interferers are present; and the shorter wavelengths do not propagate as well through walls and other structures, so much less interference leaks outside the homes of consumers. Proprietary technologies like Motorola Canopy and Expedience can be used by a WISP to offer wireless access to rural and other markets that are hard to reach using Wi-Fi or WiMAX. There are a number of companies that provide this service.

Local Multipoint Distribution Service

Local Multipoint Distribution Service (LMDS) is a broadband wireless access technology that uses microwave signals operating between 26 GHz and 29 GHz.
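The frequency trade-off described above can be quantified with the standard free-space path-loss formula, FSPL(dB) = 20·log10(d) + 20·log10(f) + 32.44 for distance d in km and frequency f in MHz. A rough sketch comparing the bands mentioned in this section (a 1 km link is an illustrative choice):

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss (Friis) in dB, for distance in km and frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# A 1 km point-to-point link at the bands discussed in this section
for label, f in [("2.4 GHz Wi-Fi", 2_400), ("5 GHz Wi-Fi", 5_000), ("28 GHz LMDS", 28_000)]:
    print(f"{label:>14}: {fspl_db(1.0, f):.1f} dB")

# Moving from 2.4 GHz to 5 GHz adds 20*log10(5/2.4) ~ 6.4 dB of free-space loss
# at any distance; attenuation through walls adds further loss on top of this,
# which is why less 5 GHz signal leaks outside the homes of consumers.
```

This free-space model omits terrain and foliage, which is why the line-of-sight caveats above dominate real deployments.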
Originally designed for digital television transmission (DTV), it was conceived as a fixed wireless, point-to-multipoint technology for use in the last mile. Data rates range from 64 kbit/s to 155 Mbit/s. Distance from the base station is typically limited to a few kilometres, but longer links are possible in some circumstances. LMDS has been surpassed in both technological and commercial potential by the LTE and WiMAX standards.

Hybrid Access Networks

In some regions, notably in rural areas, the length of the copper lines makes it difficult for network operators to provide high-bandwidth services. One alternative is to combine a fixed-access network, typically XDSL, with a wireless network, typically LTE. The Broadband Forum has standardised an architecture for such Hybrid Access Networks.

Non-commercial alternatives for using Internet services

Grassroots wireless networking movements

Deploying multiple adjacent Wi-Fi access points is sometimes used to create city-wide wireless networks, usually ordered by the local municipality from commercial WISPs. Grassroots efforts have also led to wireless community networks widely deployed in numerous countries, both developing and developed. Rural wireless-ISP installations are typically not commercial in nature and are instead a patchwork of systems built up by hobbyists mounting antennas on radio masts and towers, agricultural storage silos, very tall trees, or whatever other tall objects are available. Where radio spectrum regulation is not community-friendly, where the channels are crowded, or where equipment cannot be afforded by local residents, free-space optical communication can also be deployed in a similar manner for point-to-point transmission in air (rather than in fiber optic cable).

Packet radio

Packet radio connects computers or whole networks operated by radio amateurs with the option to access the Internet.
Note that under amateur radio licensing rules, Internet access and e-mail should be strictly related to the activities of radio amateurs.

Sneakernet

The term, a tongue-in-cheek play on net(work) as in Internet or Ethernet, refers to the wearing of sneakers as the transport mechanism for the data. For those who do not have access to or cannot afford broadband at home, downloading large files and disseminating information is done by transmission through workplace or library networks, taken home and shared with neighbors by sneakernet. The Cuban El Paquete Semanal is an organized example of this. There are various decentralized, delay-tolerant peer-to-peer applications which aim to fully automate this using any available interface, including both wireless (Bluetooth, Wi-Fi mesh, P2P or hotspots) and physically connected ones (USB storage, Ethernet, etc.). Sneakernets may also be used in tandem with computer network data transfer to increase data security or overall throughput for big data use cases. Innovation continues in the area; for example, AWS has announced Snowball, and bulk data processing is also done in a similar fashion by many research institutes and government agencies.

Pricing and spending

Internet access is limited by the relation between pricing and available resources to spend. Regarding the latter, it is estimated that 40% of the world's population has less than US$20 per year available to spend on information and communications technology (ICT). In Mexico, the poorest 30% of society has an estimated US$35 per year (US$3 per month) and in Brazil, the poorest 22% of the population has merely US$9 per year to spend on ICT (US$0.75 per month). From Latin America it is known that the borderline between ICT as a necessity good and ICT as a luxury good is roughly around the “magical number” of US$10 per person per month, or US$120 per year.
This is the amount of ICT spending people deem a basic necessity. Current Internet access prices exceed the available resources by a large margin in many countries. Dial-up users pay the costs for making local or long-distance phone calls, usually pay a monthly subscription fee, and may be subject to additional per-minute or traffic-based charges and connect-time limits by their ISP. Though less common today than in the past, some dial-up access is offered for "free" in return for watching banner ads as part of the dial-up service. NetZero, BlueLight, Juno, Freenet (NZ), and Free-nets are examples of services providing free access. Some wireless community networks continue the tradition of providing free Internet access. Fixed broadband Internet access is often sold under an "unlimited" or flat-rate pricing model, with price determined by the maximum data rate chosen by the customer, rather than a per-minute or traffic-based charge. Per-minute and traffic-based charges and traffic caps are common for mobile broadband Internet access. Internet services like Facebook, Wikipedia and Google have built special programs to partner with mobile network operators (MNOs) to introduce zero-rating of their data volumes as a means to provide their services more broadly in developing markets. With increased consumer demand for streaming content such as video on demand and peer-to-peer file sharing, demand for bandwidth has increased rapidly and for some ISPs the flat-rate pricing model may become unsustainable. However, with fixed costs estimated to represent 80–90% of the cost of providing broadband service, the marginal cost to carry additional traffic is low. Most ISPs do not disclose their costs, but the cost to transmit a gigabyte of data in 2011 was estimated to be about $0.03. Some ISPs estimate that a small number of their users consume a disproportionate portion of the total bandwidth.
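At the estimated 2011 cost of about $0.03 per gigabyte, the marginal cost of even heavy usage is small, which is the core of the flat-rate debate. A back-of-envelope sketch (the monthly usage figures are hypothetical):

```python
COST_PER_GB = 0.03  # estimated 2011 cost to transmit one gigabyte of data

def marginal_cost_usd(monthly_traffic_gb: float) -> float:
    """Marginal cost an ISP incurs carrying a user's monthly traffic."""
    return monthly_traffic_gb * COST_PER_GB

for usage_gb in (10, 100, 500):  # light, moderate, heavy user (illustrative)
    print(f"{usage_gb:>4} GB/month -> ${marginal_cost_usd(usage_gb):.2f}")
```

Even the hypothetical 500 GB/month user adds only about $15 of marginal traffic cost, which is why critics argue caps are motivated by factors other than the cost of delivering bandwidth.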
In response, some ISPs are considering, are experimenting with, or have implemented combinations of traffic-based pricing, time-of-day or "peak" and "off-peak" pricing, and bandwidth or traffic caps. Others claim that because the marginal cost of extra bandwidth is very small, with 80 to 90 percent of the costs fixed regardless of usage level, such steps are unnecessary or motivated by concerns other than the cost of delivering bandwidth to the end user. In Canada, Rogers Hi-Speed Internet and Bell Canada have imposed bandwidth caps. In 2008 Time Warner began experimenting with usage-based pricing in Beaumont, Texas. In 2009 an effort by Time Warner to expand usage-based pricing into the Rochester, New York area met with public resistance, however, and was abandoned. On August 1, 2012, in Nashville, Tennessee and on October 1, 2012, in Tucson, Arizona, Comcast began tests that impose data caps on area residents. In Nashville, exceeding the 300 GB cap requires the temporary purchase of 50 GB of additional data.

Digital divide

Despite its tremendous growth, Internet access is not distributed equally within or between countries. The digital divide refers to “the gap between people with effective access to information and communications technology (ICT), and those with very limited or no access”. The gap between people with Internet access and those without is one of many aspects of the digital divide. Whether someone has access to the Internet can depend greatly on financial status and geographical location, as well as government policies. "Low-income, rural, and minority populations have received special scrutiny as the technological 'have-nots'." Government policies play a tremendous role in bringing Internet access to or limiting access for underserved groups, regions, and countries.
For example, in Pakistan, which is pursuing an aggressive IT policy aimed at boosting its drive for economic modernization, the number of Internet users grew from 133,900 (0.1% of the population) in 2000 to 31 million (17.6% of the population) in 2011. In North Korea there is relatively little access to the Internet due to the government's fear of the political instability that might accompany the benefits of access to the global Internet. The U.S. trade embargo is a barrier limiting Internet access in Cuba. Access to computers is a dominant factor in determining the level of Internet access. In 2011, in developing countries, 25% of households had a computer and 20% had Internet access, while in developed countries the figures were 74% and 71%, respectively. The majority of people in developing countries do not have Internet access: about 4 billion people are without it. When buying computers was legalized in Cuba in 2007, private ownership of computers soared (there were 630,000 computers available on the island in 2008, a 23% increase over 2007). Internet access has changed the way in which many people think and has become an integral part of people's economic, political, and social lives. The United Nations has recognized that providing Internet access to more people in the world will allow them to take advantage of the “political, social, economic, educational, and career opportunities” available over the Internet. Several of the 67 principles adopted at the World Summit on the Information Society convened by the United Nations in Geneva in 2003 directly address the digital divide. To promote economic development and a reduction of the digital divide, national broadband plans have been and are being developed to increase the availability of affordable high-speed Internet access throughout the world.
Growth in number of users

Access to the Internet grew from an estimated 10 million people in 1993, to almost 40 million in 1995, to 670 million in 2002, and to 2.7 billion in 2013. With market saturation, growth in the number of Internet users is slowing in industrialized countries, but continues in Asia, Africa, Latin America, the Caribbean, and the Middle East. There were roughly 0.6 billion fixed broadband subscribers and almost 1.2 billion mobile broadband subscribers in 2011. In developed countries people frequently use both fixed and mobile broadband networks. In developing countries mobile broadband is often the only access method available.

Bandwidth divide

Traditionally the divide has been measured in terms of the existing numbers of subscriptions and digital devices ("have and have-not of subscriptions"). Recent studies have measured the digital divide not in terms of technological devices, but in terms of the existing bandwidth per individual (in kbit/s per capita). As shown in the figure, the digital divide in kbit/s is not monotonically decreasing, but re-opens with each new innovation. For example, "the massive diffusion of narrow-band Internet and mobile phones during the late 1990s" increased digital inequality, just as "the initial introduction of broadband DSL and cable modems during 2003–2004 increased levels of inequality". This is because a new kind of connectivity is never introduced instantaneously and uniformly to society as a whole, but diffuses slowly through social networks. As the figure shows, during the mid-2000s communication capacity was more unequally distributed than during the late 1980s, when only fixed-line phones existed. The most recent increase in digital equality stems from the massive diffusion of the latest digital innovations (i.e. fixed and mobile broadband infrastructures, e.g. 3G and fiber optics FTTH).
As shown in the figure, Internet access in terms of bandwidth was more unequally distributed in 2014 than it was in the mid-1990s.

Rural access

One of the great challenges for Internet access in general and for broadband access in particular is to provide service to potential customers in areas of low population density, such as to farmers, ranchers, and small towns. In cities where the population density is high, it is easier for a service provider to recover equipment costs, but each rural customer may require expensive equipment to get connected. While 66% of Americans had an Internet connection in 2010, that figure was only 50% in rural areas, according to the Pew Internet & American Life Project. Virgin Media advertised over 100 towns across the United Kingdom "from Cwmbran to Clydebank" that have access to their 100 Mbit/s service. Wireless Internet service providers (WISPs) are rapidly becoming a popular broadband option for rural areas. The technology's line-of-sight requirements may hamper connectivity in some areas with hilly and heavily foliated terrain. However, the Tegola project, a successful pilot in remote Scotland, demonstrates that wireless can be a viable option. The Broadband for Rural Nova Scotia initiative is the first program in North America to guarantee access to "100% of civic addresses" in a region. It is based on Motorola Canopy technology. As of November 2011, under 1000 households have reported access problems. Deployment of a new cell network by one Canopy provider (Eastlink) was expected to provide the alternative of 3G/4G service, possibly at a special unmetered rate, for areas harder to serve by Canopy. In New Zealand, a fund has been formed by the government to improve rural broadband and mobile phone coverage. Current proposals include: (a) extending fibre coverage and upgrading copper to support VDSL, (b) focusing on improving the coverage of cellphone technology, or (c) regional wireless.
Several countries have started to deploy Hybrid Access Networks to provide faster Internet services in rural areas by enabling network operators to efficiently combine their XDSL and LTE networks.

Access as a civil or human right

The actions, statements, opinions, and recommendations outlined below have led to the suggestion that Internet access itself is or should become a civil or perhaps a human right. Several countries have adopted laws requiring the state to work to ensure that Internet access is broadly available or preventing the state from unreasonably restricting an individual's access to information and the Internet:

Costa Rica: A 30 July 2010 ruling by the Supreme Court of Costa Rica stated: "Without fear of equivocation, it can be said that these technologies [information technology and communication] have impacted the way humans communicate, facilitating the connection between people and institutions worldwide and eliminating barriers of space and time. At this time, access to these technologies becomes a basic tool to facilitate the exercise of fundamental rights and democratic participation (e-democracy) and citizen control, education, freedom of thought and expression, access to information and public services online, the right to communicate with government electronically and administrative transparency, among others. This includes the fundamental right of access to these technologies, in particular, the right of access to the Internet or World Wide Web."

Estonia: In 2000, the parliament launched a massive program to expand access to the countryside. The Internet, the government argues, is essential for life in the twenty-first century.

Finland: By July 2010, every person in Finland was to have access to a one-megabit-per-second broadband connection, according to the Ministry of Transport and Communications, and by 2015, access to a 100 Mbit/s connection.
France: In June 2009, the Constitutional Council, France's highest court, declared access to the Internet to be a basic human right in a strongly worded decision that struck down portions of the HADOPI law, a law that would have tracked abusers and, without judicial review, automatically cut off network access to those who continued to download illicit material after two warnings.

Greece: Article 5A of the Constitution of Greece states that all persons have a right to participate in the Information Society and that the state has an obligation to facilitate the production, exchange, diffusion, and access to electronically transmitted information.

Spain: Starting in 2011, Telefónica, the former state monopoly that holds the country's "universal service" contract, has to guarantee to offer "reasonably" priced broadband of at least one megabit per second throughout Spain.

In December 2003, the World Summit on the Information Society (WSIS) was convened under the auspices of the United Nations. After lengthy negotiations between governments, businesses and civil society representatives, the WSIS Declaration of Principles was adopted, reaffirming the importance of the Information Society to maintaining and strengthening human rights:

1. We, the representatives of the peoples of the world, assembled in Geneva from 10–12 December 2003 for the first phase of the World Summit on the Information Society, declare our common desire and commitment to build a people-centred, inclusive and development-oriented Information Society, where everyone can create, access, utilize and share information and knowledge, enabling individuals, communities and peoples to achieve their full potential in promoting their sustainable development and improving their quality of life, premised on the purposes and principles of the Charter of the United Nations and respecting fully and upholding the Universal Declaration of Human Rights.

3.
We reaffirm the universality, indivisibility, interdependence and interrelation of all human rights and fundamental freedoms, including the right to development, as enshrined in the Vienna Declaration. We also reaffirm that democracy, sustainable development, and respect for human rights and fundamental freedoms as well as good governance at all levels are interdependent and mutually reinforcing. We further resolve to strengthen the rule of law in international as in national affairs.

The WSIS Declaration of Principles makes specific reference to the importance of the right to freedom of expression in the "Information Society" in stating:

4. We reaffirm, as an essential foundation of the Information Society, and as outlined in Article 19 of the Universal Declaration of Human Rights, that everyone has the right to freedom of opinion and expression; that this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers. Communication is a fundamental social process, a basic human need and the foundation of all social organisation. It is central to the Information Society. Everyone, everywhere should have the opportunity to participate and no one should be excluded from the benefits the Information Society offers.

A poll of 27,973 adults in 26 countries, including 14,306 Internet users, conducted for the BBC World Service between 30 November 2009 and 7 February 2010 found that almost four in five Internet users and non-users around the world felt that access to the Internet was a fundamental right. 50% strongly agreed, 29% somewhat agreed, 9% somewhat disagreed, 6% strongly disagreed, and 6% gave no opinion.
The 88 recommendations made by the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression in a May 2011 report to the Human Rights Council of the United Nations General Assembly include several that bear on the question of the right to Internet access:

67. Unlike any other medium, the Internet enables individuals to seek, receive and impart information and ideas of all kinds instantaneously and inexpensively across national borders. By vastly expanding the capacity of individuals to enjoy their right to freedom of opinion and expression, which is an “enabler” of other human rights, the Internet boosts economic, social and political development, and contributes to the progress of humankind as a whole. In this regard, the Special Rapporteur encourages other Special Procedures mandate holders to engage on the issue of the Internet with respect to their particular mandates.

78. While blocking and filtering measures deny users access to specific content on the Internet, States have also taken measures to cut off access to the Internet entirely. The Special Rapporteur considers cutting off users from Internet access, regardless of the justification provided, including on the grounds of violating intellectual property rights law, to be disproportionate and thus a violation of article 19, paragraph 3, of the International Covenant on Civil and Political Rights.

79. The Special Rapporteur calls upon all States to ensure that Internet access is maintained at all times, including during times of political unrest.

85. Given that the Internet has become an indispensable tool for realizing a range of human rights, combating inequality, and accelerating development and human progress, ensuring universal access to the Internet should be a priority for all States.
Each State should thus develop a concrete and effective policy, in consultation with individuals from all sections of society, including the private sector and relevant Government ministries, to make the Internet widely available, accessible and affordable to all segments of the population.

Network neutrality

Network neutrality (also net neutrality, Internet neutrality, or net equality) is the principle that Internet service providers and governments should treat all data on the Internet equally, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication. Advocates of net neutrality have raised concerns about the ability of broadband providers to use their last-mile infrastructure to block Internet applications and content (e.g. websites, services, and protocols), and even to block out competitors. Opponents claim net neutrality regulations would deter investment into improving broadband infrastructure and try to fix something that isn't broken. In April 2017, the newly appointed FCC chairman, Ajit Varadaraj Pai, began considering a proposal to roll back net neutrality in the United States. On December 14, 2017, the FCC voted 3–2 in favor of abolishing net neutrality.

Natural disasters and access

Natural disasters disrupt internet access in profound ways. This is important not only for telecommunication companies who own the networks and the businesses who use them, but for emergency crews and displaced citizens as well. The situation is worsened when hospitals or other buildings necessary to disaster response lose their connection. Knowledge gained from studying past internet disruptions by natural disasters could be put to use in planning or recovery. Additionally, because of both natural and man-made disasters, studies in network resiliency are now being conducted to prevent large-scale outages.
One way natural disasters impact internet connection is by damaging end sub-networks (subnets), making them unreachable. A study of local networks after Hurricane Katrina found that 26% of subnets within the storm coverage were unreachable. At Hurricane Katrina's peak intensity, almost 35% of networks in Mississippi were without power, while around 14% of Louisiana's networks were disrupted. Of those unreachable subnets, 73% were disrupted for four weeks or longer and 57% were at “network edges where important emergency organizations such as hospitals and government agencies are mostly located”. Extensive infrastructure damage and inaccessible areas were two explanations for the long delay in restoring service. Cisco has developed a Network Emergency Response Vehicle (NERV), a truck that provides portable communications for emergency responders when traditional networks are disrupted. A second way natural disasters destroy internet connectivity is by severing submarine cables, the fiber-optic cables laid on the ocean floor that provide international internet connections. A sequence of undersea earthquakes cut six out of seven international cables connected to Taiwan and caused a tsunami that wiped out one of its cable landing stations. The impact slowed or disabled internet connection for five days within the Asia-Pacific region as well as between the region and the United States and Europe. With the rise in popularity of cloud computing, concern has grown over access to cloud-hosted data in the event of a natural disaster. Amazon Web Services (AWS) has been in the news for major network outages in April 2011 and June 2012. AWS, like other major cloud hosting companies, prepares for typical outages and large-scale natural disasters with backup power as well as backup data centers in other locations. AWS divides the globe into five regions and then splits each region into availability zones.
A data center in one availability zone should be backed up by a data center in a different availability zone. Theoretically, a natural disaster would not affect more than one availability zone. This holds as long as human error is not added to the mix. During the major storm of June 2012, only the primary data center was disabled by the weather, but human error disabled the secondary and tertiary backups, affecting companies such as Netflix, Pinterest, Reddit, and Instagram. See also Back-channel, a low bandwidth, or less-than-optimal, transmission channel in the opposite direction to the main channel Broadband mapping in the United States Comparison of wireless data standards Connectivity in a social and cultural sense Fiber-optic communication History of the Internet IP over DVB, Internet access using MPEG data streams over a digital television network List of countries by number of broadband Internet subscriptions National broadband plan Public switched telephone network (PSTN) Residential gateway White spaces (radio), a group of technology companies working to deliver broadband Internet access via unused analog television frequencies References External links European broadband Corporate vs. Community Internet, AlterNet, June 14, 2005 – on the clash between US cities' attempts to expand municipal broadband and corporate attempts to defend their markets Broadband data, from Google public data FCC Broadband Map Types of Broadband Connections, Broadband.gov Broadband Human rights by issue Rights
33386147
https://en.wikipedia.org/wiki/Royal%20Radar%20Establishment%20Automatic%20Computer
Royal Radar Establishment Automatic Computer
The Royal Radar Establishment Automatic Computer, or the RREAC, was an early solid-state computer, announced in 1962. It was made with transistors; many of Britain's previous experimental computers had used the thermionic valve, also known as a vacuum tube. History Background Britain had built the world's first programmable electronic computer, Colossus, during the war in late 1943 and early 1944 for use at Bletchley Park, and the world's first stored-program computer, the Manchester Baby, which ran its first program on 21 June 1948. The Germans had built the electro-mechanical Z3, which used relays, in Berlin in 1941. The Atanasoff–Berry Computer of 1942 has also been called the world's first electronic digital computing device. ENIAC was built in 1946 at the Moore School of Electrical Engineering at the University of Pennsylvania; both ENIAC and Colossus have been claimed as the world's first electronic computer. The Electronic Delay Storage Automatic Calculator (EDSAC) ran its first programs on 6 May 1949 at the University of Cambridge Mathematical Laboratory, about a month after the Manchester Mark 1 was put to research work at the University of Manchester. In May 1952 Geoffrey Dummer thought up the idea of the integrated circuit at the TRE, the former name of the RRE. By April 1962 there were 323 computers installed in Britain, which had cost around £23 million. At the time, the American government alone had over 900 computers, with over 10,000 in the whole country. However, most of these performed simple tasks that a pocket calculator would later manage. Computer research in the UK took place at various sites, including the National Physical Laboratory in Teddington and the RRE in Worcestershire. Manchester University led the way again in 1962 with its Atlas Computer, then said to be the most powerful computer in the world and one of the world's first supercomputers. Three were built: the first for Manchester University, and one each for BP and the Atlas Computer Laboratory in Oxfordshire.
A major British computer manufacturer at the time was International Computers and Tabulators (ICT), later part of Britain's International Computers Limited (ICL). The RRE College of Electronics, like the RRE itself, was run by the Ministry of Aviation in the 1960s. In September 1963 the government, via the Department of Scientific and Industrial Research, funded £1 million of research into electronics and computers, with half going to the RRE and the NPL. Later in its existence, the RRE formed part of Britain's first connection to the Internet: opened by the Queen in 1976, the link ran from UCL in London via the RRE to Norway and on to the USA. Later, in 1984, the Internet's engineering task force first met at the RRE's successor, the Royal Signals and Radar Establishment. The RREAC Work in transistor technology at the RRE took place in the Physics Department under Dr R.A. Smith. The RREAC was first announced in 1962; it was earlier known as the RRE All-Transistor Computer, and construction began in 1960. George G. Macfarlane was one of the designers. RREAC had a 36-bit word and 24K words of core store, and used five-hole paper tape for input and output and magnetic tape for data storage. The world's first transistorised computer was the Manchester Transistor Computer, operational in 1953. The 1955 Harwell CADET is a contender for the title of first fully transistorised computer; many of the early transistorised computers used valves for non-computing elements such as the power supply and clock. Software RREAC used ALGOL 60 as its programming language, the first computer language to support nested function definitions and an ancestor of some of today's main programming languages. See also Flex machine List of transistorized computers References The Times, 11 April 1962, page 23 One-of-a-kind computers Computer-related introductions in 1962 Early British computers Transistorized computers 1962 in computing
18038576
https://en.wikipedia.org/wiki/OVirt
OVirt
oVirt is a free, open-source virtualization management platform. It was founded by Red Hat as a community project on which Red Hat Virtualization is based. It allows centralized management of virtual machines, compute, storage and networking resources from an easy-to-use, web-based front-end with platform-independent access. KVM on the x86-64, PowerPC64 and s390x architectures is the only supported hypervisor, but there is an ongoing effort to support the ARM architecture in a future release. Architecture oVirt consists of two basic components, the oVirt engine and the oVirt node. The oVirt engine backend is written in Java, while the frontend is developed with the GWT web toolkit. The oVirt engine runs on top of the WildFly (formerly JBoss) application server. The frontend can be accessed through a webadmin portal for administration, or through a user portal whose privileges and features can be fine-tuned. User administration can be managed locally or by integrating oVirt with LDAP or AD services. The oVirt engine stores data in a PostgreSQL database. Data warehousing and reporting capabilities depend on additional history and reports databases that can optionally be instantiated during the setup procedure. A REST API is available for customizing or adding engine features. An oVirt node is a server running RHEL, CentOS, Scientific Linux, or experimentally Debian, with the KVM hypervisor enabled and a VDSM (Virtual Desktop and Server Manager) daemon written in Python. Management of resources initiated from the webadmin portal is sent through the engine backend, which issues appropriate calls to the VDSM daemon. VDSM controls all resources available to the node (compute, storage, networking) and the virtual machines running on it, and is also responsible for providing feedback to the engine about all initiated operations. Multiple nodes can be clustered from the oVirt engine webadmin portal to enhance RAS (reliability, availability and serviceability).
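The REST API mentioned above can be exercised with any HTTP client. A minimal sketch follows; the engine host and credentials are hypothetical, while the /ovirt-engine/api base path and the /vms collection follow oVirt's documented REST API conventions. The request is built but not sent, so no live engine is assumed:

```python
import base64
import urllib.request

# Hypothetical engine host and credentials; only the /ovirt-engine/api
# base path and the /vms collection follow oVirt's REST API conventions.
ENGINE = "https://engine.example.com/ovirt-engine/api"
USER, PASSWORD = "admin@internal", "secret"

def list_vms_request():
    """Build (but do not send) a GET request for the VM collection."""
    token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    req = urllib.request.Request(f"{ENGINE}/vms")
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/json")
    return req
```

Sending the request with `urllib.request.urlopen` (or any other client) would return the engine's VM inventory in the negotiated representation.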
The oVirt engine can be installed on a standalone server, or can itself run inside a virtual machine hosted on a cluster of nodes (a self-hosted engine). The self-hosted engine can be manually installed or automatically deployed via a virtual appliance. oVirt is built upon several other projects including libvirt, Gluster, PatternFly, and Ansible. Features Virtual datacenters, managed by oVirt, are categorized into storage, networking and clusters that consist of one or more oVirt nodes. Data integrity is ensured by fencing, with agents that can use various resources such as baseboard management controllers or uninterruptible power supplies. Storage is organized within entities called storage domains and can be local or shared. Storage domains can be created using the following storage solutions or protocols: NFS iSCSI Fibre Channel POSIX-compliant filesystem GlusterFS Network management allows defining multiple VLANs that can be bridged to the network interfaces available on the nodes. Configuration of bonded interfaces, IP addresses, subnet masks and gateways on managed nodes is supported within the webadmin portal interface, as is SR-IOV on hardware configurations that support this feature. Management features for compute resources include CPU pinning, defining NUMA topology, enabling kernel same-page merging, memory over-provisioning, HA VM reservation, etc. Virtual machine management enables selecting high-availability priority, live migration, live snapshots, cloning virtual machines from snapshots, creating virtual machine templates, and using cloud-init for automated configuration during provisioning and deployment of virtual machines. Supported guest operating systems include Linux, Microsoft Windows and FreeBSD. Access to virtual machines can be achieved from the webadmin portal using the SPICE, VNC and RDP protocols.
oVirt can be integrated with many open source projects, including OpenStack Glance and Neutron for disk and network provisioning, Foreman/Katello for VM/node provisioning or pulling relevant errata information into webadmin portal and can be further integrated with ManageIQ for a complete virtual infrastructure lifecycle management. Disaster recovery features include the ability to import any storage domain into different oVirt engine instances and replication can be managed from oVirt with GlusterFS geo-replication feature, or by utilizing synchronous/asynchronous block level replication provided by storage hardware vendors. oVirt engine backups can be automated and periodically transferred to a remote location. oVirt supports hyper-converged infrastructure deployment scenarios. Self-hosted engine and Gluster-based storage domains allow centralized management of all resources that can be seamlessly expanded, simply by adding an appropriate number of nodes to the cluster, without having any single points of failure. oVirt provides deep integration with Gluster, including Gluster specific performance improvements. See also Red Hat Virtualization (RHV) Kernel-based Virtual Machine (KVM) Comparison of platform virtualization software References External links Free software for cloud computing Free software programmed in Java (programming language) Red Hat software Virtualization-related software for Linux
24801418
https://en.wikipedia.org/wiki/Bishop%20Stuart%20University
Bishop Stuart University
Bishop Stuart University (BSU) is a private, not-for-profit, multi-campus university in Uganda. Location BSU has its main campus at Kakoba Hill, off Buremba Road, east of downtown Mbarara. The coordinates of the main campus of the university are 0°36'10.0"S, 30°41'44.0"E (Latitude:-0.602778, Longitude:30.695556). The second campus is located at Ruharo Hill, also in the Mbarara Metropolitan Area. History BSU is named after Cyril Stuart, who was the Anglican Bishop of Uganda in the middle of the 20th century. BSU started operations in 2003 at the campus of the then Kakoba National Teachers' College (KNTC) in the western Ugandan city of Mbarara. KNTC ceased operations at the end of 2005, and in 2006, BSU took over the premises and grounds previously occupied by the teachers' college. In 2006, BSU held its maiden graduation ceremony; however, the certificates, diplomas, and degrees were awarded by Uganda Christian University. The first batch of students to graduate on BSU's own letterhead were the graduates of 2009. On 10 October 2014, at the university's 10th graduation ceremony, Minister of Education Jessica Alupo announced that BSU had been cleared by the Uganda National Council for Higher Education to receive a university charter. The charter was granted and delivered to the university in late October 2014.
Academic affairs , BSU had the following academic faculties and departments: Faculty of Agriculture, Environmental Sciences and Technology Department of Agriculture Faculty of Business, Economics & Governance Department of Social Work & Social Administration Department of Business Studies Department of Development Studies Department of Environmental Studies Department of Economics & Management Faculty of Education, Arts & Media Studies Department of Education Foundations Department of Humanities Department of Languages Department of Science Education Faculty of Law Department of Law Academic courses The university offers courses at certificate, diploma, undergraduate, and postgraduate levels. The programs offered include: Certificate/Short courses 1. Certificate in Computerized Accounting 2. Certificate in NGO Management 3. Certificate in Monitoring and Evaluation 4. Certificate in Project Planning and Management 5. Certificate in Research and Usage of Research Software 6. Certificate in Computer Applications 7. Certificate in Oil and Gas Management Essentials 8. Certificate in Community-Based Rehabilitation 9. Certificate in Human Resource Management 10. Certificate in Public Administration and Management 11. Certificate in Information Technology Essentials 12. Certificate in Graphic Design 13. Certificate in Website Design and Development 14. Certificate in Network Maintenance 15. Cisco Certified Network Associate 16. Certificate in Administrative Law 17. Advanced Certificate in Appropriate and Sustainable Technologies Diploma courses 1. Diploma in Computer Science 2. Diploma in Midwifery Extension 3. Diploma in Nursing Science 4. Diploma in Nursing Extension 5. Diploma in Public Health 6. Diploma in Agribusiness Management and Community Development 7. Diploma in Information Technology 8. Diploma in Animal Health And Production 9. Diploma in Law 10. Diploma in Ethics and Human Rights 11. Diploma in Primary Education 12. Diploma in Industrial Fine Art Design 13. 
Diploma in Science and Technology Education 14. Diploma in Early Childhood Education 15. Diploma in Journalism and Mass Communication 16. Diploma in Library and Information Science 17. Diploma in Office Management and Secretarial Studies 18. Diploma in Procurement and Supply Chain Management 19. Diploma in Records Management and Information Science 20. Diploma in Social Work and Social Administration 21. Diploma in Project Planning and Management 22. Diploma in Public Administration and Management 23. Diploma in Development Studies 24. Diploma in Accounting and Finance 25. Diploma in Microfinance & Business Enterprise Management 26. Diploma in Human Resource Management 27. Diploma in Business Administration 28. Diploma in Community Psychology 29. Diploma in Guidance and Counseling. Undergraduate courses 1. Bachelor of Science in Agricultural Economics and Resource Management 2. Bachelor of Public Health 3. Bachelor of Nursing Science 4. Bachelor of Animal Health and Production 5. Bachelor of Computer Science 6. Bachelor of Agribusiness Management and Community Development 7. Bachelor of Sports Science 8. Bachelor of Information Technology 9. Bachelor of Agriculture and Community Development 10. Bachelor of Laws 11. Bachelor of Arts in Ethics And Human Rights 12. Bachelor of Arts in Theology 13. Bachelor in Industrial Fine Art Design 14. Bachelor of Arts with Education 15. Bachelor of Primary Education 16. Bachelor of Secondary Education 17. Bachelor of Arts in Performing and Leisure Arts 18. Bachelor of Science and Technology Education 19. Bachelor of Science with Education 20. Bachelor of Arts in Development Management 21. Bachelor of Arts in Development Economics 22. Bachelor of Arts in Journalism and Mass Communication 23. Bachelor of Business Administration 24. Bachelor of Guidance and Counseling 25. Bachelor of Conservation & Natural Resources Environmental Management 26. Bachelor of Community Psychology 27. Bachelor of Library and Information Science 28. 
Bachelor of Office Management and Secretarial Studies 29. Bachelor of Planning and Community Development 30. Bachelor of Science in Economics and Statistics 31. Bachelor of Procurement and Supply Chain Management 32. Bachelor of Project Planning and Management 33. Bachelor of Records Management and Information Science 34. Bachelor of Science in Accounting & Finance 35. Bachelor of Microfinance & Business Enterprise Management 36. Bachelor of Social Work and Social Administration 37. Bachelor of Development Studies 38. Bachelor of Science in Environmental Sciences 39. Bachelor of Public Administration and Management 40. Bachelor of Science in Accounting and Finance 41. Bachelor of Human Resource Management 42. Bachelor of Tourism and Hospitality Management 43. Bachelor of Economics and Management 44. Bachelor of Cooperative Management and Development. Post Graduate, Masters and PhD courses 1. Doctor of Philosophy in Agriculture and Community Innovations 2. Master of Science in Climatic Change and Food Security 3. Master of Business Information Technology 4. Master of Public Health 5. Master of Agriculture and Rural Innovations 6. Master of Science in Agronomy (Dry Land Farming) 7. Postgraduate Diploma in Agriculture And Rural Innovations 8. Doctor of Philosophy in Language, Culture, and Society 9. Master of Arts in Literature and Communication 10. Master of Education in Administration and Planning 11. Postgraduate Diploma in Education 12. Postgraduate Diploma in Education Management 13. Doctor of Philosophy in Development Studies 14. PhD in Development Management 15. Master of Science in Counseling Psychology 16. Master of Social Work 17. Master of Social Economics and Community Management 18. Master of Business Administration 19. Master of Arts in Development Studies 20. Master of Arts in Public Administration and Management 21. Postgraduate Diploma in Counseling 22. Postgraduate Diploma in Development Studies 23. 
Postgraduate Diploma in Public Administration and Management 24. Postgraduate Diploma on Office Management and Secretarial Studies. See also Education in Uganda List of universities in Uganda List of university leaders in Uganda Mbarara Notable alumni Robert Mugabe Kakyebezi References External links Bishop Stuart University Homepage Mbarara Mbarara District Ankole sub-region Educational institutions established in 2002 2002 establishments in Uganda
106162
https://en.wikipedia.org/wiki/Poem%20code
Poem code
The poem code is a simple, and insecure, cryptographic method which was used during World War II by the British Special Operations Executive (SOE) to communicate with their agents in Nazi-occupied Europe. The method works by the sender and receiver pre-arranging a poem to use. The sender chooses a set number of words at random from the poem and gives each letter in the chosen words a number. The numbers are then used as a key for a transposition cipher to conceal the plaintext of the message. The cipher used was often double transposition. To indicate to the receiver which words had been chosen, an indicator group of letters is sent at the start of the message. Description To encrypt a message, the agent would select words from the poem as the key. Every poem code message commenced with an indicator group of five letters, whose positions in the alphabet indicated which five words of an agent's poem would be used to encrypt the message. For instance, suppose the poem is the first stanza of Jabberwocky: ’Twas brillig, and the slithy toves      Did gyre and gimble in the wabe: All mimsy were the borogoves,      And the mome raths outgrabe. We could select the five words THE WABE TOVES TWAS MOME, which are at positions 4, 13, 6, 1, and 21 in the poem, and describe them with the corresponding indicator group DMFAU. The five words are written sequentially, and their letters numbered to create a transposition key to encrypt a message. Numbering proceeds by first numbering the A's in the five words starting with 1, then continuing with the B's, then the C's, and so on; any absent letters are simply skipped. In our example of THE WABE TOVES TWAS MOME, the two A's are numbered 1, 2; the B is numbered 3; there are no C's or D's; the four E's are numbered 4, 5, 6, 7; there are no F's or G's; the H is numbered 8; and so on through the alphabet. This results in a transposition key of 15 8 4, 19 1 3 5, 16 11 18 6 13, 17 20 2 14, 9 12 10 7. 
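The indicator group and the letter-numbering procedure just described are mechanical enough to sketch in Python (a modern illustration, not period SOE practice; the function names are my own):

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def indicator_group(word_positions):
    """The Nth letter of the alphabet stands for the Nth word of the poem."""
    return "".join(ALPHABET[p - 1] for p in word_positions)

def transposition_key(words):
    """Number the letters of the chosen words alphabetically: all the A's
    first (1, 2, ...), then the B's, and so on; absent letters are skipped."""
    letters = "".join(w.upper() for w in words)
    key = [0] * len(letters)
    n = 1
    for ch in ALPHABET:
        for i, c in enumerate(letters):
            if c == ch:
                key[i] = n
                n += 1
    return key
```

With the Jabberwocky example, `indicator_group([4, 13, 6, 1, 21])` yields `DMFAU`, and `transposition_key("THE WABE TOVES TWAS MOME".split())` reproduces the key 15 8 4 19 1 3 5 16 11 18 6 13 17 20 2 14 9 12 10 7 given above.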
This defines a permutation which is used for encryption. First, the plaintext message is written in the rows of a grid that has as many columns as the transposition key is long. Then the columns are read out in the order given by the transposition key. For example, the plaintext "THE OPERATION TO DEMOLISH THE BUNKER IS TOMORROW AT ELEVEN RENDEZVOUS AT SIX AT FARMER JACQUES" would be written on grid paper, along with the transposition key numbers, like this:

15  8  4 19  1  3  5 16 11 18  6 13 17 20  2 14  9 12 10  7
 T  H  E  O  P  E  R  A  T  I  O  N  T  O  D  E  M  O  L  I
 S  H  T  H  E  B  U  N  K  E  R  I  S  T  O  M  O  R  R  O
 W  A  T  E  L  E  V  E  N  R  E  N  D  E  Z  V  O  U  S  A
 T  S  I  X  A  T  F  A  R  M  E  R  J  A  C  Q  U  E  S  X

The columns would then be read out in the order specified by the transposition key numbers:

PELA DOZC EBET ETTI RUVF OREE IOAX HHAS MOOU LRSS TKNR ORUE NINR EMVQ TSWT ANEA TSDJ IERM OHEX OTEA

The indicator group (DMFAU) would then be prepended, resulting in this ciphertext:

DMFAU PELAD OZCEB ETETT IRUVF OREEI OAXHH ASMOO ULRSS TKNRO RUENI NREMV QTSWT ANEAT SDJIE RMOHE XOTEA

In most uses of code poems, this process of selecting an indicator group and transposing the text would be repeated once (double transposition) to further scramble the letters. As an additional security measure, the agent would add prearranged errors into the text as security checks; for example, there might be an intentional error in every 18th letter. If the agent was captured or the poem was found, a message transmitted by the enemy would lack these security checks and so could be recognized as not genuine. Analysis The code's advantage is that it provides relatively strong security while requiring no codebook. However, the encryption process is error-prone when done by hand, and for security reasons messages should be at least 200 words long. The security check was usually not effective: if a code was used after being intercepted and decoded by the enemy, any security checks were revealed. Further, the security check could often be tortured out of the agent. 
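The grid encryption above amounts to a single columnar transposition, which can be sketched as follows (a modern illustration; padding the final row with X matches the worked example, though SOE practice varied):

```python
def transpose(plaintext, key):
    """Columnar transposition: write the text row by row under the key,
    then read the columns out in ascending order of the key numbers."""
    cols = len(key)
    text = "".join(c for c in plaintext.upper() if c.isalpha())
    text += "X" * (-len(text) % cols)              # pad the last row
    rows = [text[i:i + cols] for i in range(0, len(text), cols)]
    order = sorted(range(cols), key=lambda i: key[i])
    return "".join("".join(row[i] for row in rows) for i in order)

# The key derived from THE WABE TOVES TWAS MOME
KEY = [15, 8, 4, 19, 1, 3, 5, 16, 11, 18, 6, 13, 17, 20, 2, 14, 9, 12, 10, 7]
```

Applied to the plaintext in the worked example, `transpose` reproduces the column groups shown above (PELA DOZC EBET ...); a second application of the same procedure would give the double transposition that was usual in practice.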
There are a number of other weaknesses. Because the poem is re-used, if one message is broken by any means (including threat, torture, or even cryptanalysis), past and future messages will be readable. If the agent used the same poem code words to send a number of similar messages, these words could be discovered easily by enemy cryptographers. If the words could be identified as coming from a famous poem or quotation, then all of the future traffic submitted in that poem code could be read. The German cryptologic units were successful in decoding many of the poems by searching through collections of poems. Since the poems used must be memorable for ease of use by an agent, there is a temptation to use well-known poems or poems from well-known poets, further weakening the encryption (e.g., SOE agents often used verses by Shakespeare, Racine, Tennyson, Molière, Keats, etc.). Development When Leo Marks was appointed codes officer of the Special Operations Executive (SOE) in London during World War II, he very quickly recognized the weakness of the technique, and the consequent damage to agents and to their organizations on the Continent, and began to press for changes. Eventually, the SOE began using original compositions (thus not in any published collection of poems from any poet) to give added protection (see The Life That I Have, an example). Frequently, the poems were humorous or overtly sexual to make them memorable ("Is de Gaulle's prick//Twelve inches thick//Can it rise//To the size//Of a proud flag-pole//And does the sun shine//From his arse-hole?"). Another improvement was to use a new poem for each message, with the poem written on fabric rather than memorized. Gradually the SOE replaced the poem code with more secure methods. Worked-out Keys (WOKs) were the first major improvement, an invention of Marks: pre-arranged transposition keys given to the agents, which made the poem unnecessary. 
Each message would be encrypted with one key, which was written on special silk; once the message had been sent, the key was disposed of by tearing that piece off the silk. A project of Marks, named by him "Operation Gift-Horse", was a deception scheme aimed at disguising the more secure WOK code traffic as poem code traffic, so that German cryptographers would think "Gift-Horsed" messages were easier to break than they actually were. This was done by adding false duplicate indicator groups to WOK keys, to give the appearance that an agent had repeated the use of certain words of their code poem. Gift-Horse was intended to waste the enemy's time, and was deployed prior to D-Day, when code traffic increased dramatically. The poem code was ultimately replaced with the one-time pad, specifically the letter one-time pad (LOP). In LOP, the agent was provided with a string of letters and a substitution square. The plaintext was written under the string on the pad. The pairs of letters in each column (such as P,L) indicated a unique letter on the square (Q). The pad was never reused, while the substitution square could be reused without loss of security. This enabled rapid and secure encoding of messages. Bibliography Between Silk and Cyanide by Leo Marks, HarperCollins (1998); Marks was the Head of Codes at SOE, and this book is an account of his struggle to introduce better encryption for use by field agents; it contains more than 20 previously unpublished code poems by Marks, as well as descriptions of how they were used and by whom. See also Book cipher The Life That I Have (also known as Yours, arguably the most famous code poem) Classical ciphers History of cryptography Special Operations Executive Cryptography
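The letter one-time pad described above can be sketched as follows. SOE's actual substitution squares are not reproduced here; this illustration substitutes a Vigenère-style tableau, in which the square's cell for a key letter and a plaintext letter is their alphabet positions added modulo 26, so the specific pair (P,L) → Q from the text would land elsewhere on SOE's real squares:

```python
import string

ALPHABET = string.ascii_uppercase

def lop_encrypt(plaintext, key):
    """Letter one-time pad: each (key letter, plaintext letter) pair
    selects one cell of the substitution square. A Vigenere-style
    tableau stands in for SOE's actual squares here."""
    assert len(key) >= len(plaintext), "the pad must not be shorter than the message"
    return "".join(
        ALPHABET[(ALPHABET.index(k) + ALPHABET.index(p)) % 26]
        for k, p in zip(key, plaintext)
    )

def lop_decrypt(ciphertext, key):
    """Invert the square lookup to recover the plaintext."""
    return "".join(
        ALPHABET[(ALPHABET.index(c) - ALPHABET.index(k)) % 26]
        for k, c in zip(key, ciphertext)
    )
```

For example, `lop_encrypt("HELLO", "XMCKL")` gives `EQNVZ`, and decrypting with the same pad recovers the plaintext. As with any one-time pad, security rests entirely on the key letters being random and never reused.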
7379076
https://en.wikipedia.org/wiki/Apple%20menu
Apple menu
The Apple menu is a drop-down menu that is on the left side of the menu bar in the classic Mac OS, macOS and A/UX operating systems. The Apple menu's role has changed throughout the history of Apple Inc.'s operating systems, but the menu has always featured a version of the Apple logo. System 6 and earlier In System 6.0.8 and earlier, the Apple menu featured a Control Panel, as well as Desk Accessories such as a Calculator, the Scrapbook and Alarm Clock. If MultiFinder (an early implementation of computer multitasking) was active, the Apple menu also allowed the user to switch between multiple running applications. The Macintosh user could add third-party Desk Accessories via the System Utility "Font/DA Mover". However, there was a limitation on the number of Desk Accessories that could be displayed in the Apple menu. Third-party shareware packages such as OtherMenu added a second customizable menu (without the trademarked Apple logo) that allowed users to install Desk Accessories beyond Apple's limitations. System 7.0–9.2.2 System 7.0 introduced the Apple Menu Items folder in the System Folder. This allowed users to place alias(es) to their favorite software and documents in the menu. The Menu Manager forced these additions into alphabetical order, which prompted users to rename their aliases with leading spaces, numbers and other characters in order to get them into the order that suited them the best. Several third-party utilities provided a level of customization of the order of the items added to the Apple menu without having to rename each item. The Apple menu also featured a Shut Down command, implemented by a Desk Accessory. An alias to the Control Panels folder was also present. System 7.0 was also the first version to feature the rainbow striped logo, as opposed to the black logo found in previous versions. 
In System 7.0, the black logo was retained in grayscale modes, and was used when the Monitors control panel was set to display "Thousands" or "Millions" of grays, even though the rest of the display was in color. System 7.0 featured built-in multitasking, so MultiFinder was removed as an option. The feature allowing users to switch between multiple running applications, as in System 6, was given its own menu (appearing as the icon of the active application) on the opposite side of the menu bar. Beginning in Mac OS 8.5, this new menu was given a unique "tear-off" capability, which detached the menu from the menu bar to become a free-floating window when the user dragged the cursor downwards off the bottom of the menu; the torn-off window was provided by an application called "Application Switcher". System 7.5 added an Apple Menu Options control panel, which added submenus to folders and disks in the Apple Menu, showing the contents of the folder or disk. Prior versions of System 7 showed only a standard menu entry that opened the folder in the Finder. Apple Menu Options also added Recent Applications, Recent Documents, and Recent Servers to the Apple Menu; the user could specify the desired number of Recent Items.
The Apple menu is now dedicated to managing features of the Macintosh computer, with commands to get system information, update software, launch the Mac App Store, open System Preferences, set Dock preferences, set the location (network configuration), view recent items (applications, documents and servers), Force Quit applications, power management (sleep, restart, shut down), log out, etc. See also Start menu References Macintosh operating systems user interface MacOS user interface
52630003
https://en.wikipedia.org/wiki/Mach%2037
Mach 37
MACH37 is an American startup accelerator that was established in 2013 as a division of the Virginia-based Center for Innovative Technology (CIT) with funding from the Commonwealth of Virginia. In 2017 CIT partnered with VentureScope, a strategic innovation consultancy and venture firm, to revamp MACH37's operating model and curriculum. Following that successful partnership, MACH37 became fully owned and operated by VentureScope in 2020. MACH37 focuses primarily on honing and strengthening startups' product-market fit through extensive customer discovery and market research, expanding emerging companies' professional networks, fostering founder wellbeing, and providing emerging companies in the cyber security industry with access to investment capital and an immediate customer base. In an October 2020 article, Forbes named MACH37 "the Granddaddy" of top cyber accelerators, noting that MACH37 was one of the first accelerators in the world dedicated to cyber and cyber-adjacent technologies and that it has outlasted many of its peer accelerators while strengthening over time. The name "MACH37" is a reference to the escape velocity of Earth's atmosphere. VentureScope applies Lean Startup methodology at MACH37, helping startups rapidly adapt their search for a workable business model and test their hypotheses about customer needs and market demands. Program MACH37 offers three separate accelerator programs: a pre-accelerator for early-stage startups that need to establish themselves before undertaking the rigor of a full accelerator program, a cyber accelerator that runs two 90-day cohorts of 5-8 cyber startup companies per year, and a growth-stage accelerator that provides custom acceleration for more mature startups in the next phase of growing and scaling their businesses. 
MACH37 hosts its acceleration curriculum in a variety of forums, including live in-person sessions at its Tysons office space, special in-person sessions hosted in strategic partner spaces to facilitate collaboration, virtual live online sessions, and asynchronous sessions on its virtual platform. Like other accelerator programs, it offers the program in exchange for equity and partners with investors to help participants receive initial seed investments, including some regional economic development partners, like CIT, that offer investment if companies establish themselves in or relocate to Virginia. Each MACH37 cyber accelerator 90-day cohort is intended to be an intense period of assessment, customer discovery, clarification, design, development, outreach and growth, with the ultimate goal of finding product-market fit, market traction, and additional investment. Companies conclude the program by giving a presentation and answering questions in front of a panel of potential investors at a large community event known as Launch Day. In an interview with the Washington Business Journal, former CIT chief Pete Jobse stated "What the accelerator is designed to do is make sure the concepts [and] the markets that appear to be interesting for these new technologies are actually there for them". During the 90-day cohorts, MACH37 hosts a variety of events to foster connection across the cyber community, both locally in Virginia, Maryland and Washington, D.C., and globally, as companies and investors participate from countries across the world. MACH37 is known for providing an important platform and community where various cyber security industry leaders can speak, share ideas, and ultimately collaborate on key problems. MACH37 has been seen by some as an opportunity for emerging technology companies to gain access to government contracts relating to cyber security, but it has historically focused predominantly on the commercial sector. 
MACH37 was supported by Governor Terry McAuliffe, who initiated a memorandum of understanding between MACH37 and the University of Virginia's College at Wise. However, in 2016, McAuliffe also expressed a desire to transition its ownership share to private corporations such as Amazon Web Services, one of MACH37's sponsors. The following year, in 2017, CIT made good on that request and began to transition the program toward private ownership by VentureScope. MACH37 is currently managed by experienced entrepreneurs, VentureScope CEO Jason Chen and COO Jennifer Quarrie, who have worked globally in strategy, innovation, venture, entrepreneurship, wellbeing, product development, regional economic development and change management. History MACH37 has had 78 different startup companies as participants since its founding. The program was started in early 2013 as a division of the Virginia Center for Innovative Technology (CIT) with support from Virginia's government as part of an initiative to provide new technology for the intelligence agencies of the United States, as well as to create new jobs and establish Virginia as a cyber security capital. MACH37 was previously managed by a group known as the MACH37 Partners, which consisted of founders Rick Gordon, Dan Woolley, Robert Stratton, Tom Weithman, David Ihrie, and Pete Jobse, all of whom had previously worked in the software industry. Later, Pete Jobse, President of CIT, was succeeded by Ed Albrigo. Initially, MACH37 was sustained entirely through public funding by the Virginia government. In 2015, the aerospace/defense company General Dynamics agreed to partially fund MACH37's operating budget. This new funding from General Dynamics came at no reduction to the state budget for MACH37. This was considered the first major step in transferring control of MACH37 to the private sector. 
According to The Washington Post, MACH37 has operated differently than other accelerators because it seeks founders with extensive technical backgrounds but limited entrepreneurial experience, who have never run a company before. This has allowed participants to focus exclusively on product design and development while MACH37 provides the connections to financial backing that would otherwise demand additional resources and expertise from the startups. 70% of MACH37 participants in 2015 and 63% in 2016 received additional funding from private investors after completing the program. As of 2016, MACH37 has partnerships with Microsoft BizSpark, Rackspace, Virtru, and Square 1 Bank and is sponsored by Amazon Web Services, General Dynamics, and SAP SE. MACH37 also has the not-for-profit organization MITRE as a part of its network. In July 2017, it was announced that Rick Gordon, Dan Woolley and Bob Stratton were no longer at MACH37. Tom Weithman became the President of the company, Jason Chen became the managing director of operations, and Mary Beth Borgwing became the managing director of cyber. See also Business incubator List of MACH37 startups References External links Financial services companies established in 2013 American companies established in 2013 Venture capital firms of the United States Business incubators of the United States Startup accelerators 2013 establishments in Virginia
3007056
https://en.wikipedia.org/wiki/Corsair%20Gaming
Corsair Gaming
Corsair Gaming, Inc. is an American computer peripherals and hardware company headquartered in Fremont, California. The company, known previously as Corsair Components and Corsair Memory, was incorporated in California in January 1994 as Corsair Microsystems and was reincorporated in Delaware in 2007. Corsair designs and sells a range of products for computers, including high-speed DRAM modules, ATX power supplies (PSUs), USB flash drives (UFDs), CPU/GPU and case cooling, gaming peripherals (such as keyboards and computer mice), computer cases, solid-state drives (SSDs), and speakers. Corsair maintains a production facility in Taoyuan City, Taiwan, for assembly, testing, and packaging of select products, with distribution centers in North America, Europe, and Asia and sales and marketing offices in major markets worldwide. The company trades under the ticker symbol CRSR on the NASDAQ stock exchange. Lockdown orders associated with the COVID-19 pandemic, and the accompanying rise in demand for computing equipment, including in the computer gaming sector, led to a significant short-term increase in Corsair's revenue. History The company was founded as Corsair Microsystems Inc. in 1994 by Andy Paul, Don Lieberman, and John Beekley. Corsair originally developed level 2 cache modules, called cache on a stick (COASt) modules, for OEMs. After Intel incorporated the L2 cache in the processor with the release of its Pentium Pro processor family, Corsair changed its focus to DRAM modules, primarily in the server market. This effort was led by Richard Hashim, one of the early employees at Corsair. In 2002, Corsair began shipping DRAM modules that were designed to appeal to computer enthusiasts, who were using them for overclocking. Since then, Corsair has continued to produce memory modules for PCs, and has added other PC components as well. Corsair expanded its DRAM memory module production into the high-end market for overclocking. 
This expansion targets high-power platforms, allowing users to extract more performance from the CPU and RAM. The Corsair Vengeance Pro series and Corsair Dominator Platinum series are built for overclocking applications. Corsair has since expanded its product line to include many types of high-end gaming peripherals, high-performance air and water cooling solutions, and other enthusiast-grade components. Around 2009, Corsair contacted CoolIT Systems to integrate their liquid cooling technology into Corsair's offerings, which resulted in a long-term partnership. Transactions On July 26, 2017, EagleTree Capital entered into an agreement to acquire a majority stake in Corsair from Francisco Partners and several other minority shareholders in a deal valued at $525 million. Corsair founder and CEO Andy Paul retained his equity stake and remained in his role as CEO. On June 27, 2018, Corsair announced that it would acquire Elgato Gaming from the Munich-based company Elgato, excluding its Eve division, which was spun off as Eve Systems. On July 24, 2019, it was announced that Corsair Components, Inc. had acquired ORIGIN PC Corp. On December 16, 2019, Corsair announced its intention to acquire SCUF Gaming. On August 21, 2020, Corsair filed registration documents with the U.S. Securities and Exchange Commission for a planned $100 million IPO. Products The company's products include: DRAM and DIMM memory modules for desktop and laptop PCs USB flash drives ATX and SFX PSUs Computer cases Pre-built high end gaming PCs Liquid CPU and GPU cooling solutions Computer fans Solid-state drives Audio headsets for gaming Headset stands Gaming Keyboards Computer mice Mousepads Gaming Chairs Microphones Capture Cards PC Components Since the custom computer industry has experienced an increased interest in products with RGB lighting, Corsair has added this feature to almost all of its product lines. 
In the gaming industry, Corsair has its biggest share of the market in memory modules (around 44%) and gaming keyboards (around 14%). See also List of computer hardware manufacturers References External links Corsair SEC Filings Companies based in Fremont, California Computer companies established in 1994 Computer memory companies Computer peripheral companies Computer power supply unit manufacturers Companies listed on the Nasdaq Impact of the COVID-19 pandemic on the video game industry Technology companies based in the San Francisco Bay Area 1994 establishments in California Computer enclosure companies Computer hardware cooling 2020 initial public offerings
1199209
https://en.wikipedia.org/wiki/Bruce%20Tognazzini
Bruce Tognazzini
Bruce "Tog" Tognazzini (born 1945) is an American usability consultant and designer. He currently works in partnership with Donald Norman and Jakob Nielsen in the Nielsen Norman Group, which specializes in human-computer interaction. He was with Apple Computer for fourteen years, then with Sun Microsystems for four years, then WebMD for another four years. He has written two books, Tog on Interface and Tog on Software Design, published by Addison-Wesley, and he publishes the webzine Asktog, with the tagline "Interaction Design Solutions for the Real World". Background Tog (as he is widely known in computer circles) built his first electro-mechanical computer in 1957, landing a job in 1959 working with the world's first check-reading computer, NCR's ERMA (Electronic Recording Method of Accounting), at Bank of America in San Francisco. Tog was an early and influential employee of Apple Computer, where he worked from 1978 to 1992. In June 1978, Steve Jobs, having seen one of his early programs, The Great American Probability Machine, had Jef Raskin hire him as Apple's first applications software engineer. He is listed on the back of his book Tog on Interface (Addison Wesley, 1991) as "Apple Employee #66" (the same employee number he held later at WebMD). In his early days at Apple, while developing Apple's first human interface for the Apple II computer, he published Super Hi-Res Chess, a novelty program for the Apple II that, despite its name, did not play chess or have any hi-res (high-resolution) graphics; instead, it seemed to crash to the Applesoft BASIC prompt with an error message, but was actually a parody of Apple's BASIC command line interface that seemingly took over control of one's computer, refusing to give it back until the magic word was discovered. 
His extensive work in user-interface testing and design, including publishing the first edition, in September, 1978, and seven subsequent editions of The Apple Human Interface Guidelines, played an important role in the direction of Apple's product line from the early days of Apple into the 1990s. (Steve Smith and Chris Espinosa also played a key role, incorporating the initial material on the Lisa and Macintosh computers in the fourth and fifth editions in the early 1980s.) He and his partner, John David Eisenberg, wrote Apple Presents...Apple, the disk that taught new Apple II owners how to use the computer. This disk became a self-fulfilling prophecy: At the time of its authoring, there was no standard Apple II interface. Because new owners were all being taught Tog and David's interface, developers soon began writing to it, aided by Tog's Apple Human Interface Guidelines, and reinforced by AppleWorks, a suite of productivity applications for the Apple II into which Tog had also incorporated the same interface. Others often report him as one of the fathers of the Macintosh interface, a claim he has always been careful to refute. Although he did consult with Jef Raskin in the early days of the Macintosh, during the later, critical development period of the Mac, he was assigned to scale down the Lisa interface, not for the Mac, but for the Apple II. Although he and James Batson were able to develop a viable interface for the Apple II that matched the mousing speed of the much faster Macintosh, the Apple executive staff elected not to ship a mouse with the Apple II for fear of cannibalizing Macintosh sales. It was only after Steve Jobs' early departure from Apple, in 1985, that Tog came to oversee the interface for both machines. During this period, Tog was responsible for the design of the Macintosh's hierarchical menus and invented time-out dialog boxes, which, after a visible countdown, carry out the default activity without the user explicitly clicking. 
He also invented the "package" illusion later used by Apple for Macintosh applications: applications, along with all their supporting files, reside inside a "package" that, in turn, appears to be the application itself, appearing as an application icon, not as a folder. This illusion makes possible the simple drag-and-drop installation and deletion of Mac applications. While working at Sun Microsystems, in 1992 and 1993, he produced the Starfire video prototype, intended to convey a usability-centered vision of the office of the future. The video predicted the rise of a new technology that would become known as the World Wide Web. Popular Science magazine reported in March 2009 that Microsoft had just produced a new video showing life in the year 2019: "The 2019 Microsoft details with this video is almost identical to the 2004 predicted in this video produced by Sun Microsystems in 1992." While at Sun Microsystems, Tog also filed 58 US patents, 57 of which were issued, in the areas of aviation safety, GPS, and human-computer interaction. Among them is US Patent 6278660, a time-zone-tracking wristwatch with built-in GPS and simple time-zone maps that sets itself using the GPS satellites' atomic clocks and re-sets itself automatically whenever it crosses into a new time zone. In 2000, after his four-year stint at WebMD, Tog joined his colleagues as the third principal at the Nielsen Norman Group, along with Jakob Nielsen and Don Norman. Bibliography The Apple Human Interface Guidelines (1987) (uncredited, author is Apple Computer, Inc) Tog on Interface (1992) Tog on Software Design (1995) References External links Ask Tog - Bruce Tognazzini's official site. The Starfire Home Page, including link to download film Apple Inc. employees American people of Swiss descent Living people 1945 births People from the San Francisco Bay Area
26146089
https://en.wikipedia.org/wiki/Iraqi%20Ground%20Forces%20Command
Iraqi Ground Forces Command
The Ground Forces Command at Victory Base Complex near Baghdad Airport was the most important fighting formation in the Iraqi Army. The headquarters of the Iraqi Ground Forces Command and the Iraqi Joint Forces Command are the same entity. Since 2006, and probably up to the U.S. withdrawal in 2011, the Ground Forces Command supervised the bulk of the military units of the army. History From 2003 until 2006, the units of the reforming Iraqi Army were under U.S. Army operational control. Their formation had been managed by the Coalition Military Assistance Training Team, which then became part of Multi-National Security Transition Command – Iraq. After they became operational, they were transferred to the operational control of Multi-National Corps Iraq or one of its subordinate formations. On May 3, 2006, a significant command-and-control development took place: the Iraqi Army command and control center opened in a ceremony at the IGFC headquarters at Camp Victory. The IGFC was established to exercise command and control of assigned Iraqi Army forces and, upon assuming operational control, to plan and direct operations to defeat the Iraqi insurgency. At the time, the IGFC was commanded by Lt. Gen. Abdul-Qadar. The JHQ-AST (Joint Headquarters Advisory Support Team) had been established in 2004 to guide the IGFC/IJFHQ through this process. The JHQ-AST was a subordinate element of MNSTC-I. The Advisory Support Team's mission was described as to 'mentor and assist the Iraqi Joint Headquarters in order to become capable of exercising effective national command and control of the Iraqi Armed Forces, contributing to the capability development process, and contributing to improving the internal security situation within Iraq in partnership with coalition forces.' 
In 2006 the ten planned divisions began to be certified and assume battlespace responsibility: the 6th and 8th before June 26, 2006, the 9th on June 26, 2006, the 5th on July 3, 2006, the 4th on August 8, 2006, and the 2nd on December 21, 2006. After divisions were certified, they began to be transferred from U.S. operational control to Iraqi control of the IGFC. On 7 September 2006, Prime Minister Nouri al-Maliki signed a document taking control of Iraq's small naval and air forces and the 8th Division of the Iraqi Army, based in the south. At a ceremony marking the occasion, Gen. George Casey, the top U.S. commander in Iraq stated "From today forward, the Iraqi military responsibilities will be increasingly conceived and led by Iraqis." Previously, the U.S.-led Multi-National Force Iraq, commanded by Casey, gave orders to the Iraqi armed forces through a joint American-Iraqi headquarters and chain of command. Senior U.S. and coalition officers controlled army divisions but smaller units were commanded by Iraqi officers. After the handover, the chain of command flows directly from the prime minister in his role as Iraqi commander in chief, through his Defense Ministry to an Iraqi military headquarters, the Iraqi Joint Forces Command. From there, the orders go to Iraqi units on the ground. The other nine Iraqi divisions remain under U.S. command, with authority gradually being transferred. U.S. military officials said there was no specific timetable for the transition. U.S. military spokesman Maj. Gen. William Caldwell said it would be up to al-Maliki to decide "how rapidly he wants to move along with assuming control. ... They can move as rapidly thereafter as they want. I know, conceptually, they've talked about perhaps two divisions a month." After the 8th Division's transfer on September 7, 2006, the 3rd Division was transferred on December 1, 2006. Another unspecified division also was transferred to IGFC control. 
Also transferred to the Iraqi chain of command were smaller logistics units: on November 1, 2006, the 5th Motor Transport Regiment (MTR) became the fifth of nine MTRs to be transferred to the Iraqi Army divisions. Plans for 2007 included, MNF-I said, great efforts to make the Iraqi Army able to sustain itself logistically. Transfers of divisions to IGFC control continued in 2007: the 1st Division on February 15, the 10th Division on February 23, and the 7th Division on November 1. The new 14th Division also held its opening ceremony in Basrah on November 14, 2007. Ministerial Order #151, dated 19 February 2008, directed that the brigades of all the divisions be renumbered sequentially. Instead of each division having 1st/2nd/3rd/4th Brigades, each brigade now has a unique identifying number. Staff organisation M1: administration, personnel M2: military intelligence, arms control, weather and military geography M3: leadership, planning, operations, training and exercise planning for the Army M4: Logistical Tasks / Materials Management / Maintenance M5: Civil-Military Cooperation (CIMIC) M6: Communications / IT / Management Service – Staff Maj. Gen. Saad, Iraqi Joint Headquarters M6 Forces under Command The IGFC does not control all the fighting formations of the Iraqi Army. The Baghdad Operational Command reports separately to the National Operations Center. "The 9th (Mechanized) Division has the entire army armoured (tank) capability. It is ethnically diverse. Some of the battalions of the 10th Division are manned by Shi’a militia." It appears, from January 2010 reports, that the Operational Commands are to be the basis for future Iraqi Army corps. 
Iraqi Ground Force Command (IGFC) Nineveh Operational Command – Mosul 2nd Division – Mosul + 5 (Citadel) Motorized Bde 6 (Scorpions) Infantry (AAslt) Bde 7 Infantry Bde 8 Infantry Bde 2nd Motor Transport Regiment 3rd Motorised Division – Al-Kasik + 16th Division – (shared with Peshmerga) – 'Division number 16, which protects the area extending from Khaneqin to Ridar, and division number 15, which protects everything between Ridar, Badinan and Mosul, are under the command of the Iraqi Army and receive their military instructions from Baghdad. The rest of the border guard will be under the command of the regional presidency and the Kurdistan parliament. Each division comprises 14,750 fighters. The two divisions therefore make up 29,500 fighters.' 15th Division – (shared with Peshmerga) Diyala Operational Command – Sulamaniyah, Diyala, Kirkuk, Salahadin 4th Motorised Division – Tikrit – certified and assumed responsibility for most of Salah ad Din Governorate and At-Ta'mim Governorate provinces, including the major cities Samarra and Tikrit, on August 8, 2006. The 4th Division's battalions are former ING units, recruited locally. It is ethnically diverse and has operational control of a number of Strategic Infrastructure Battalions protecting oil pipelines. 14th Motorised (AAslt) Bde (1-4), 15th (Eagles) Motorised Bde (2-4), 16th Infantry Bde (3-4), 4 Bde (Samara brigade) (forming; 17th Bde planned for summer 2008?) 4th Motor Transport Regiment 5th Infantry Division (Iron) – Diyala Governorate – Division is certified and assumed responsibility for the battle space on July 3, 2006. The 5th Division’s brigade headquarters, and battalions were components of the NIA. 
18th Infantry (AAslt) Bde (1-5) 19th (Desert Lions) Infantry (AAslt) Bde (former 2-5) 20th Motorised Bde (former 3-5) 21st Motorised Bde (former 4-5) 5th Motor Transport Regiment 12th Light Infantry Division – Tikrit (probably planned to become Mech) Split off from 4 Div in mid-2008 46th Light Infantry Brigade (former 1 Strategic Infrastructure Bde) 47th Light Infantry Brigade (former 2 Strategic Infrastructure Bde) 48th Light Infantry Brigade (former 9 Strategic Infrastructure Bde) 49th Brigade (4-4). Basrah Operational Command – Basrah 8th Commando Division – Diwaniyah – The 8th Division is composed of former ING units, some of which were formed as early as 2004, but the division headquarters did not assume control of its area of operations until January 2006. As of March 2007, the division commander was Maj. Gen. Othman Ali Farhood. 30th Commando (Mot) Brigade (Diwaniyah) (1-8) 31st Commando (Mot) Brigade (HQ Hillah) (former 2-8) 32nd Commando (Mot) Brigade (HQ Kut) (former 3-8) 33rd Commando (Mot) Brigade (HQ Hussaniyah (Karbala)) (4-8). 8th Field Engineer Regiment 8th Transport and Provisioning Regiment 10th Motorised Division – An Nasiriyah (Tallil) – On February 23, 2007, the 10th Division, at that time based in Basrah, was certified and operational responsibility was transferred to the IGFC. However, since that time, the 14th Division has been formed in Basrah and the 10th Division transferred north to An Nasiriyah. Division commander is General Abdul Al Lateef, as of November 2006. 38th Motorised Brigade (HQ Batria Airport, most battalions Al Amarah)(1-10) 39th Infantry Brigade (HQ Samawah) (2-10) – 2nd BN currently attached to the 8th DIV and operating in KHIDIR North BABIL 40th Motorised Brigade (3-10) 41st Motorised Brigade (Majaar al Kabir) – formed in November 2008 (fmr 4-10?) 
10th Field Engineer Regiment 10th Transport and Provisioning Regiment (Nasiriyah (Camp Ur)) 14th Motorised Division – Camp Wessam, Basrah 50th Motorised Brigade (Basrah) 51st Motorised Brigade (Basrah) 52nd Motorised Brigade (Basrah) 53rd Motorised Brigade (Basrah) – forming in mid-2008 14th Field Engineer Regiment (Basra (Shaibah)) 14th Transport and Provisioning Regiment Anbar Operational Command – Ramadi 1st Division – Fallujah – 1 Infantry Bde – Ramadi 2 Infantry Bde – Lake Tharthar 3 Motorized Bde – temporarily assigned to the 5th Division in Diyala 4 Bde – forming 7th Infantry Division – Ramadi, West Al Anbar Province – transferred to IGFC, November 1, 2007. The 7th Division was raised in early 2005 to replace the disbanded, Sunni-dominated ING units which proved unreliable. 26 Infantry Bde (former 1-7) 27 Infantry Bde (former 2-7) 28 Infantry Bde (former 3-7) 29th Brigade (operational since 3 April 2008). References External links Army units and formations of Iraq Military units and formations established in 2006
6153869
https://en.wikipedia.org/wiki/First%20Asia%20Institute%20of%20Technology%20and%20Humanities
First Asia Institute of Technology and Humanities
FAITH Colleges (First Asia Institute of Technology and Humanities) is an institution of higher learning and research located in the City of Tanauan in Batangas. Since its inception on 8 September 2000, FAITH Colleges has been envisioned as a premier educational organization in the high-growth region south of Metro Manila. It aims to contribute to the humane and holistic development of the Filipino nation and the individual by training and producing graduates who are technologically skilled, well-rounded and competent, as well as grounded in Christian humanistic values. FAITH Colleges is committed to the pursuit of a culture of academic excellence and social and environmental awareness in the community it serves, and to actively undertaking research in science, technology, and the humanities. FAITH Colleges offers K-to-12, college, and post-baccalaureate education: the FAITH Total Child Prep School for preschool; FAITH Catholic School for K-to-12; Fidelis Senior High for senior high school; Tertiary Schools; and the School of Graduate Studies. FAITH Colleges is the youngest HEI in the country to be recognized as a Center of Development for IT. FAITH has Level III PACUCOA accreditation for the following academic programs: Business Administration, Computer Science, Information Technology, and Psychology. FAITH Colleges also became the first HEI in Region IV to receive PACUCOA Level III re-accreditation status for BS Psychology and BS Information Technology. The PACUCOA (Philippine Association of Colleges and Universities Commission on Accreditation) is a private accrediting agency which gives formal recognition to an educational institution by attesting that its academic program maintains excellent standards in its educational operations, in the context of its aims and objectives. 
FAITH Colleges is also currently one of the leading institutions by number of board passers in Accountancy, Engineering, Nursing, Criminology, Psychometrics, Elementary Education, and Secondary Education. FAITH Colleges is currently among the top 100 biggest schools across the country offering senior high school. It has two distinct senior high schools accredited by the Department of Education, FAITH Catholic Senior High School and Fidelis Senior High, which offer Academic (STEM, HUMSS, ABM, and GAS), Arts & Design, Sports, and Technology and Livelihood tracks. The FAITH Catholic School, on the other hand, is PAASCU-accredited. FAITH Catholic School gave the Philippines its first gold medal in World Robotics. PAASCU stands for the Philippine Accrediting Association of Schools, Colleges and Universities, a service organization that accredits academic programs which meet standards of quality education, officially recognized by the Philippine Department of Education. The five-hectare FAITH campus features world-class facilities that rival those of many of Metro Manila's established colleges and universities, with fully air-conditioned classrooms and laboratories used for all levels of education. Its Multiversity Library is a three-level structure that contains thousands of books, periodicals, and reference materials in both hard-copy and digital formats. The campus also houses complementary facilities for sports, extra-curricular activities and school functions, such as the Indoor Sports Arena and Activity Center (ISAAC), a multi-purpose covered court, a football field, a baseball diamond, a multi-purpose hall, a beach volleyball court, and a chess plaza. In January 2017, the Space Lounge was inaugurated; it serves as the main cafeteria for FAITH Colleges' Tertiary and Senior High School students. In SY 2017-2018, FAITH Colleges opened its state-of-the-art four-storey building, the Nuspace Center, to house the growing FAITH academic community and other offices. 
In May 2018, Archbishop of Lipa Gilbert Garcera led the blessing of the Nuspace Center. Known for its innovation and technology-driven education, FAITH Colleges is dubbed the garden campus of Batangas. Its expansive grounds feature the ASEAN Garden, the Japanese-inspired Serenity Garden, lush gardens for reflection, and wide open spaces such as the College Promenade, where a statue of the Virgin Mary called Mater et Magistra (Mother and Teacher) atop a 30-ft vertical pillar watches over the entire campus. The School Chapel offers a retreat for prayer, fellowship, and worship. Degrees School of Technology Bachelor of Science in Information Technology Bachelor of Science in Entertainment and Multimedia Computing Bachelor of Science in Computer Science Bachelor of Science in Industrial Engineering Bachelor of Science in Computer Engineering Bachelor of Science in Electronics Engineering (EcE) Bachelor of Science in Electrical Engineering School of Management Bachelor of Science in Accountancy Bachelor of Science in Management Accounting Bachelor of Science in Business Administration Bachelor of Science in Entrepreneurship Bachelor of Science in Hospitality Management Bachelor of Science in Tourism Management School of Humanities Bachelor of Arts in Communications Bachelor of Arts in Multimedia Arts Bachelor of Science in Psychology Bachelor of Science in Nursing Bachelor of Science in Medical Technology Bachelor of Science in Secondary Education Bachelor of Elementary Education Bachelor of Physical Education, major in School P.E. Bachelor of Science in Criminology External links First Asia Institute of Technology and Humanities Schools in Batangas Universities and colleges in Batangas National Collegiate Athletic Association (Philippines) Education in Tanauan, Batangas
9465799
https://en.wikipedia.org/wiki/Morfeo%20Open-Source%20Software%20Community
Morfeo Open-Source Software Community
Morfeo Open-Source Software Community is a group that promotes the use of open source software, focused on improving technology transfer between companies, on building social networks for collaboration, and on providing small companies with resources for carrying out this task. The group is backed by the regional governments of Andalusia, Aragon, Castile-La Mancha, Extremadura, Catalonia and Valencia. It relies on its members' contributions, and Telefónica I+D releases proprietary software components and provides resources for the group. The organization works with projects including MyMobileWeb, SMARTFlow (a workflow platform), CORBA Components and service-oriented architecture (SOA) components. Other projects include B2Booking, EasyConf, MyMobileSearch and UptaZone. External links Morfeo Open-Source Software Community WIKI:Morfeo Open-Source Software Community Free and open-source software organizations
3022819
https://en.wikipedia.org/wiki/Credential%20service%20provider
Credential service provider
A credential service provider (CSP) is a trusted entity that issues security tokens or electronic credentials to subscribers. A CSP forms part of an authentication system, most typically identified as a separate entity in a federated authentication system. A CSP may be an independent third party, or may issue credentials for its own use. The term CSP is used frequently in the context of the US government's eGov and e-authentication initiatives. An example of a CSP would be an online site whose primary purpose may be, for example, internet banking, but whose users may be subsequently authenticated to other sites, applications or services without further action on their part. History In any authentication system, some entity is required to authenticate the user on behalf of the target application or service. For many years, the security implications of the growing multiplicity of services and applications that would ultimately require authentication were poorly understood. The result is that not only are users burdened with many credentials that they must remember or carry around with them, but applications and services must also perform some level of registration and then some level of authentication of those users. Credential service providers were created to address this: a CSP separates those functions from the application or service and typically provides trust to that application or service over a network (such as the Internet). CSP Process The CSP establishes a mechanism to uniquely identify each subscriber and the associated tokens and credentials issued to that subscriber. The CSP registers or gives the subscriber a token to be used in an authentication protocol and issues credentials as needed to bind that token to the identity, or to bind the identity to some other useful verified attribute. The subscriber may be given electronic credentials to go with the token at the time of registration, or credentials may be generated later as needed. 
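The token-binding step described above can be sketched in code. The following is a minimal, hypothetical illustration only: the function names, the HMAC-based credential format, and the in-memory signing key are all assumptions made for the example, not part of any CSP standard or product.

```python
# Hypothetical sketch of a CSP registration step: the CSP issues a token
# (a random secret) and a credential that binds the token to the
# subscriber's verified identity. Illustrative only.
import hashlib
import hmac
import secrets

CSP_SIGNING_KEY = secrets.token_bytes(32)  # the CSP's own signing key (assumed in-memory)

def register_subscriber(identity: str) -> tuple[str, str]:
    """Issue a token and a credential binding that token to the identity."""
    token = secrets.token_hex(16)
    binding = f"{identity}:{token}".encode()
    credential = hmac.new(CSP_SIGNING_KEY, binding, hashlib.sha256).hexdigest()
    return token, credential

def verify_credential(identity: str, token: str, credential: str) -> bool:
    """A relying party (via the CSP) checks the identity-token binding."""
    binding = f"{identity}:{token}".encode()
    expected = hmac.new(CSP_SIGNING_KEY, binding, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential)
```

A real CSP would of course persist registration records and protect its signing key in hardware; the sketch only shows why a credential verifies for the registered identity and fails for any other identity or token.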
Subscribers have a duty to maintain control of their tokens and comply with their responsibilities to the CSP. The CSP maintains registration records for each subscriber to allow recovery of registration records. In an e-authentication model, a claimant in an authentication protocol is a subscriber to some CSP. At some point, an applicant registers with a Registration Authority (RA), which verifies the identity of the applicant, typically through the presentation of paper credentials and by records in databases. This process is called identity proofing. The RA, in turn, vouches for the identity of the applicant (and possibly other verified attributes) to a CSP. The applicant then becomes a subscriber of the CSP. There is always a relationship between the RA and the CSP. Importance A CSP can establish confidence in a user's identity through an electronic authentication process. As a result, some regulatory agencies require individuals to prove their identities through a CSP. Today, regulatory agencies require physicians to be authenticated electronically before they can issue any prescription for controlled dangerous substances (CDS). Physicians must use federally approved CSPs in order to receive a two-factor authentication credential or digital certificate. These CSPs conduct identity proofing that meets National Institute of Standards and Technology Special Publication 800-63-1 Assurance Level 3. CSP and the US Government The federal government is currently the CSP for e-government transactions. However, the government plans to focus its attention on applications and leave the credential management business to other industries. In 2004, the US government proposed an E-authentication initiative. 
The goals of the initiative include:
Build and enable mutual trust needed to support widespread use of electronic interactions between the public and the US Government.
Minimize the burden on the public when obtaining trusted electronic services from the government.
Deliver common interoperable authentication solutions, appropriately matching the levels of risk and business risks.
As a result of this initiative, campuses may start offering students, faculty and staff access to certain federal applications. However, before this happens, the government will impose the following requirements:
FedFed Membership requirements for levels 1 & 2
Credential Assessment
Signing Business and Operating Rules
Technical Interoperability at SAML 1.0
FedFed Membership requirements for levels 3 & 4
Cross-certification with Federal PKI
Service Provider Requirements to Join Federal Federation Directly
Those service providers wishing to join the Federal Federation directly will have to agree to:
eAuthentication Business and Operating rules in Risk Analysis
Service levels
Security levels
Compliance with FIPS and NIST SPs
Reporting requirements
Procedural, audit and documentation requirements.
Providers Below is a short list of some CSPs with a short description of the services they provide. Equifax Equifax provides credentialing solutions certified to meet federal security and privacy requirements. Equifax offers more than basic name-and-address identification credentials, providing methods of discerning an electronic identity in order to ensure that only trusted users have access to sensitive data and secure networks. MediQuin MediQuin is a credential service provider located in Irvine, California. MediQuin provides medical credentialing, provider applications, enrollment forms, verification services, and other medical-related credential services. Med Advantage Med Advantage provides numerous verification services. 
Board Certification - Verify current certificate level
Criminal Background - Verify state and/or federal criminal history
DEA/CDS Registration - Verify by NTIS and/or by certificate
Education - Verify medical education and postgraduate education
FSMB - Query the Federation of State Medical Boards
License - Verify state license(s)
Malpractice Claims - Verify from the carrier
Malpractice Insurance - Verify from the carrier or certificate
NPDB - Query the National Practitioner Databank
HIPDB - Query the Healthcare Integrity and Protection Databank
Privileges - Verify hospital admitting privileges and delineation of privileges
References - Verify professional references
Sanctions - Query Medicare/Medicaid and state license
Work History - Extract work history from the curriculum vitae
Costs Below is a table that shows the approximate cost for a Credential Service Provider in different Categories. The Kantara Initiative The Identity Assurance Accreditation and Approval Program is a Kantara Initiative program that uses CSPs to provide the private sector with more reliable digital credentials. Windows Windows uses CSPs to implement authentication protocols. With Windows Vista, a new authentication package called Credential Security Support Provider (CredSSP) was introduced. CredSSP uses the client-side CSP to enable applications to delegate a user's credentials to the target server. References Federated identity
224748
https://en.wikipedia.org/wiki/JOVIAL
JOVIAL
JOVIAL is a high-level programming language based on ALGOL 58, specialized for developing embedded systems (specialized computer systems designed to perform one or a few dedicated functions, usually embedded as part of a larger, more complete device, including mechanical parts). It was a major system programming language through the 1960s and 70s. History JOVIAL was developed as a new "high-order" programming language starting in 1959 by a team at System Development Corporation (SDC) headed by Jules Schwartz to compose software for the electronics of military aircraft. The name JOVIAL is an acronym for Jules' Own Version of the International Algebraic Language; International Algorithmic Language (IAL) was a name proposed originally for ALGOL 58. According to Schwartz, the language was originally called OVIAL, but this was opposed for various reasons. JOVIAL was then suggested, with no meaning attached to the J. Somewhat jokingly it was suggested that the language be named after Schwartz, since he was the meeting chairperson, and this unofficial name stuck. During the 1960s, JOVIAL was a part of the US Military L-project series, particularly the ITT 465L Strategic Air Command Control System (the Strategic Automated Command and Control System (SACCS) project), due to a lack of real-time computing programming languages available. Some 95 percent of the SACCS project, managed by International Telephone & Telegraph (ITT) with software mainly written by SDC, was written in JOVIAL. The software project took two years and fewer than 1,400 programmer years, less than half of the equivalent time in the SAGE L-project. During the late 1970s and early 1980s, the United States Air Force adopted a standardized central processing unit (CPU), the MIL-STD-1750A, and subsequent JOVIAL programs were built for that processor. 
Several commercial vendors provided compilers and related programming tools to build JOVIAL for processors such as the MIL-STD-1750A, including Advanced Computer Techniques (ACT), TLD Systems, Proprietary Software Systems (PSS), and others. JOVIAL was standardized during 1973 with MIL-STD-1589 and was revised during 1984 with MIL-STD-1589C. It is still used to update and maintain software on older military vehicles and aircraft. There are three dialects in common use: J3, J3B-2, and J73. JOVIAL is no longer maintained and distributed by the USAF JOVIAL Program Office (JPO). Software formerly distributed by the JPO is still available through commercial resources at Software Engineering Associates, Inc., (SEA) as are other combinations of host/target processors including Windows, Linux, Mac OS X on PowerPC, SPARC, VAX, 1750A, PowerPC, TI-9989, Zilog Z800x, Motorola 680x0, and IBM System 360, System 370, and System z. Further, DDC-I, which acquired parts of Advanced Computer Techniques, also lists JOVIAL compilers and related tools. Most software implemented in JOVIAL is mission-critical, and maintenance is growing more difficult. In December 2014, it was reported that software derived from JOVIAL code produced in the 1960s was involved in a major failure of the United Kingdom's air traffic control infrastructure, and that the agency that uses it, NATS Holdings, was having to train its IT staff in JOVIAL so they could maintain this software, which was not scheduled for replacement until 2016. Influence Languages influenced by JOVIAL include CORAL, SYMPL, Space Programming Language (SPL), and to some extent CMS-2. An interactive subset of JOVIAL called TINT, similar to JOSS, was developed in the 1960s. Features JOVIAL includes features not found in standard ALGOL, such as items (now called structures), arrays of items, status variables (now called enumerations) and inline assembly language. It also included provisions for "packed" data within tables. 
Table packing refers to the allocation of items within an entry to words of storage (bits in a unit of data). This was important with respect to the limited memory and storage of the computing systems of the JOVIAL era. The Communication Pool (COMPOOL) in JOVIAL is similar to libraries of header files for languages such as PL/I and C. Applications Notable systems using embedded JOVIAL software include:
Milstar communications satellite
Advanced Cruise Missile
B-52, B-1B, B-2 bombers
C-130, C-141, C-17 transport aircraft
F-111, F-15, F-16 (prior to Block 50), F-117 fighter aircraft
LANTIRN
U-2 aircraft
Boeing E-3 Sentry AWACS aircraft (prior to Block 40/45)
Navy Aegis cruisers
Army Multiple Launch Rocket System (MLRS)
Army Sikorsky UH-60 Black Hawk helicopters
F100, F117, F119 jet engines
NORAD air defense & control system (Hughes HME-5118ME system)
NATO Air Defence Ground Environment (NADGE) system
RL10 rocket engines
Civil NAS (National Airspace System) Air Traffic Control
APG-70, APG-71, and APG-73 airborne radar systems
Example The following example is taken from the "Computer Programming Manual for the JOVIAL (J73) Language".

PROC RETRIEVE(CODE:VALUE);
BEGIN
  ITEM CODE U;
  ITEM VALUE F;
  VALUE = -99999.;
  FOR I:0 BY 1 WHILE I<1000;
    IF CODE = TABCODE(I);
    BEGIN
      VALUE = TABVALUE(I);
      EXIT;
    END
END

This example defines a procedure named RETRIEVE which takes an unsigned integer input argument CODE and a floating-point output argument VALUE. It searches the 1000-element array TABCODE for an entry that matches CODE, and then sets the floating-point variable VALUE to the element of array TABVALUE having the same matching array index. If no matching element is found, VALUE is set to −99999.0. References External links The Development of Jovial April 2006 archive of the JOVIAL Program Office Page on Jules Schwartz, including film of a humorous talk on the development of JOVIAL DODSSP U.S. 
Department of Defense Single Stock Point for Military Specifications, Standards and Related Publications Software Engineering Associates DDC-I, Inc.: DDC-I JOVIAL Compiler System (DJCS) Archived at Ghostarchive and the Wayback Machine: Procedural programming languages Avionics programming languages Systems programming languages High Integrity Programming Language ALGOL 58 dialect
2916856
https://en.wikipedia.org/wiki/Ion%20mobility%20spectrometry
Ion mobility spectrometry
Ion mobility spectrometry (IMS) is an analytical technique used to separate and identify ionized molecules in the gas phase based on their mobility in a carrier buffer gas. Though heavily employed for military or security purposes, such as detecting drugs and explosives, the technique also has many laboratory analytical applications, including the analysis of both small and large biomolecules. IMS instruments are extremely sensitive stand-alone devices, but are often coupled with mass spectrometry, gas chromatography or high-performance liquid chromatography in order to achieve a multi-dimensional separation. They come in various sizes, ranging from a few millimeters to several meters depending on the specific application, and are capable of operating under a broad range of conditions. IMS instruments such as microscale high-field asymmetric-waveform ion mobility spectrometry can be palm-portable for use in a range of applications including volatile organic compound (VOC) monitoring, biological sample analysis, medical diagnosis and food quality monitoring. Systems operated at higher pressure (i.e. atmospheric conditions, 1 atm or 1013 hPa) are often accompanied by elevated temperature (above 100 °C), while lower pressure systems (1-20 hPa) do not require heating. History IMS was first developed primarily by Earl W. McDaniel of Georgia Institute of Technology in the 1950s and 1960s when he used drift cells with low applied electric fields to study gas phase ion mobilities and reactions. In the following decades, he coupled his new technique with a magnetic-sector mass spectrometer, with others also utilizing his techniques in new ways. IMS cells have since been attached to many other mass spectrometers, gas chromatographs and high-performance liquid chromatography setups. IMS is a widely used technique, and improvements and other uses are continually being developed. 
Applications Perhaps ion mobility spectrometry's greatest strength is the speed at which separations occur—typically on the order of tens of milliseconds. This feature, combined with its ease of use, relatively high sensitivity, and highly compact design, has allowed IMS as a commercial product to be used as a routine tool for the field detection of explosives, drugs, and chemical weapons. Major manufacturers of IMS screening devices used in airports are Morpho and Smiths Detection. Smiths purchased Morpho Detection in 2017 and was subsequently required to divest the trace-detection side of the business, which was sold to Rapiscan Systems in mid-2017. The products are listed under ETD Itemisers. The latest model is a non-radiation 4DX. In the pharmaceutical industry IMS is used in cleaning validations, demonstrating that reaction vessels are sufficiently clean to proceed with the next batch of pharmaceutical product. IMS is much faster and more accurate than the HPLC and total organic carbon methods previously used. IMS is also used for analyzing the composition of drugs produced, thereby finding a place in quality assurance and control. As a research tool, ion mobility is becoming more widely used in the analysis of biological materials, specifically in proteomics and metabolomics. For example, IMS-MS using MALDI as the ionization method has helped make advances in proteomics, providing faster high-resolution separations of protein fragments in analysis. Moreover, it is a promising tool for glycomics, as rotationally averaged collision cross section (CCS) values can be obtained. CCS values are important distinguishing characteristics of ions in the gas phase, and in addition to empirical determination they can also be calculated computationally when the 3D structure of the molecule is known. This way, adding CCS values of glycans and their fragments to databases will increase structural identification confidence and accuracy. 
Outside of laboratory purposes, IMS has found great usage as a detection tool for hazardous substances. More than 10,000 IMS devices are in use worldwide in airports, and the US Army has more than 50,000 IMS devices. In industrial settings, uses of IMS include checking equipment cleanliness and detecting emission contents, such as determining the amount of hydrochloric and hydrofluoric acid in a stack gas from a process. It is also applied for industrial purposes to detect harmful substances in air. In metabolomics, IMS is used to detect lung cancer, chronic obstructive pulmonary disease, sarcoidosis, potential rejections after lung transplantation and relations to bacteria within the lung (see breath gas analysis). Ion mobility The physical quantity ion mobility K is defined as the proportionality factor between an ion's drift velocity vd in a gas and an electric field of strength E:

v_d = K E

Ion mobilities are commonly reported as reduced mobilities, correcting to standard gas density n0, which can be expressed in terms of standard temperature T0 = 273 K and standard pressure p0 = 1013 hPa:

K_0 = K \frac{p}{p_0} \frac{T_0}{T}

This does not correct for effects other than the change in gas density, and the reduced ion mobility is therefore still temperature dependent. The ion mobility K can, under a variety of assumptions, be calculated by the Mason-Schamp equation

K = \frac{3Q}{16 n} \sqrt{\frac{2\pi}{\mu k T}} \, \frac{1}{\sigma}

where Q is the ion charge, n is the drift gas number density, μ is the reduced mass of the ion and the drift gas molecules, k is the Boltzmann constant, T is the drift gas temperature, and σ is the collision cross section between the ion and the drift gas molecules. Often, N is used instead of n for the drift gas number density and Ω instead of σ for the ion-neutral collision cross section. This relation holds approximately in the low electric field limit, where the ratio E/N is small and thus the thermal energy of the ions is much greater than the energy gained from the electric field between collisions. 
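As a rough numerical illustration of the Mason-Schamp relation, the sketch below evaluates K in SI units for a hypothetical singly charged ion of 200 Da in nitrogen at ambient conditions; the CCS value of 1.5 × 10⁻¹⁸ m² (150 Å²) is an assumed example figure, not measured data.

```python
# Illustrative evaluation of the Mason-Schamp equation in SI units.
# The ion mass, drift gas, and CCS below are made-up example values.
import math

E_CHARGE = 1.602176634e-19  # elementary charge, C
K_BOLTZ = 1.380649e-23      # Boltzmann constant, J/K
AMU = 1.66053906660e-27     # atomic mass unit, kg

def mason_schamp_K(q, n, mu, T, sigma):
    """Low-field ion mobility K [m^2/(V s)]: K = 3q/(16n) * sqrt(2*pi/(mu*k*T)) / sigma."""
    return (3 * q / (16 * n)) * math.sqrt(2 * math.pi / (mu * K_BOLTZ * T)) / sigma

T = 300.0                              # drift gas temperature, K
p = 101300.0                           # pressure, Pa (about 1013 hPa)
n = p / (K_BOLTZ * T)                  # drift gas number density, m^-3 (ideal gas)
m_ion, m_gas = 200 * AMU, 28 * AMU     # hypothetical 200 Da ion, N2 drift gas
mu = m_ion * m_gas / (m_ion + m_gas)   # reduced mass, kg
K = mason_schamp_K(E_CHARGE, n, mu, T, 1.5e-18)  # on the order of 1e-4 m^2/(V s)
```

With these example numbers K comes out near 1.6 × 10⁻⁴ m²/(V s), i.e. about 1.6 cm²/(V s), which is the typical order of magnitude for small singly charged ions at atmospheric pressure. Note how the equation predicts K proportional to the ion charge Q.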
With these ions having similar energies to the buffer gas molecules, diffusion forces dominate ion motion in this case. The ratio E/N is typically given in Townsends (Td) and the transition between low- and high-field conditions is typically estimated to occur between 2 Td and 10 Td. When low-field conditions no longer prevail, the ion mobility itself becomes a function of the electric field strength, which is usually described empirically through the so-called alpha function. Ionization The molecules of the sample need to be ionized, usually by a corona discharge, atmospheric pressure photoionization (APPI), electrospray ionization (ESI), or radioactive atmospheric-pressure chemical ionization (R-APCI) source, e.g. a small piece of 63Ni or 241Am, similar to the one used in ionization smoke detectors. ESI and MALDI techniques are commonly used when IMS is paired with mass spectrometry. Doping materials are sometimes added to the drift gas for ionization selectivity. For example, acetone can be added for chemical warfare agent detection, chlorinated solvents added for explosives, and nicotinamide added for drug detection. Analyzers Ion mobility spectrometers exist based on various principles, optimized for different applications. A review from 2014 lists eight different ion mobility spectrometry concepts. Drift tube ion mobility spectrometry Drift tube ion mobility spectrometry (DTIMS) measures how long a given ion takes to traverse a given length in a uniform electric field through a given atmosphere. In specified intervals, a sample of the ions is let into the drift region; the gating mechanism is based on a charged electrode working in a similar way to the control grid in a triode. For precise control of the ion pulse width admitted to the drift tube, more complex gating systems such as a Bradbury-Nielsen or a Field Switching Shutter are employed. 
Once in the drift tube, ions are subjected to a homogeneous electric field ranging from a few volts per centimeter up to many hundreds of volts per centimeter. This electric field then drives the ions through the drift tube where they interact with the neutral drift molecules contained within the system and separate based on their ion mobility, arriving at the detector for measurement. Ions are recorded at the detector in order from the fastest to the slowest, generating a response signal characteristic for the chemical composition of the measured sample. The ion mobility K can then be experimentally determined from the drift time tD of an ion traversing, within a homogeneous electric field, the potential difference U over the drift length L:

K = \frac{L^2}{t_D U}

A drift tube's resolving power RP can, when diffusion is assumed as the sole contributor to peak broadening, be calculated as

R_P = \frac{t_D}{\Delta t_D} = \sqrt{\frac{L E Q}{16 k T \ln 2}}

where tD is the ion drift time, ΔtD is the full width at half maximum, L is the tube length, E is the electric field strength, Q is the ion charge, k is the Boltzmann constant, and T is the drift gas temperature. Ambient pressure methods allow for higher resolving power and greater separation selectivity due to a higher rate of ion-molecule interactions and are typically used for stand-alone devices, as well as for detectors for gas, liquid, and supercritical fluid chromatography. As shown above, the resolving power depends on the total voltage drop the ion traverses. Using a drift voltage of 25 kV in a 15 cm long atmospheric pressure drift tube, a resolving power above 250 is achievable even for small, singly charged ions. This is sufficient to achieve separation of some isotopologues based on their difference in reduced mass μ. Low pressure drift tube Reduced pressure drift tubes operate using the same principles as their atmospheric pressure counterparts, but at a drift gas pressure of only a few torr. 
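The two drift-tube relations above can be checked numerically. The sketch below uses the 15 cm tube and 25 kV drift voltage quoted in the text; the drift time in the usage line is an arbitrary example value, and the function names are illustrative.

```python
# Numerical sketch of the drift-tube relations, in SI units.
import math

K_BOLTZ = 1.380649e-23      # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

def mobility_from_drift(L, U, t_d):
    """K = L^2 / (t_d * U): mobility from drift time in a homogeneous field."""
    return L**2 / (t_d * U)

def diffusion_limited_rp(L, E, q, T):
    """R_P = sqrt(L*E*q / (16*k*T*ln 2)): diffusion-only resolving power."""
    return math.sqrt(L * E * q / (16 * K_BOLTZ * T * math.log(2)))

L, U, T = 0.15, 25000.0, 300.0  # 15 cm tube, 25 kV drift voltage, room temperature
E = U / L                       # field strength, V/m
rp = diffusion_limited_rp(L, E, E_CHARGE, T)   # about 295 for a singly charged ion
K = mobility_from_drift(L, U, 0.01)            # example 10 ms drift time
```

For these parameters the diffusion-limited resolving power comes out just under 300, consistent with the "above 250" figure quoted in the text; since L·E equals the total voltage drop U, the result depends only on U, the charge, and the temperature, not on the tube length itself.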
Due to the vastly reduced number of ion-neutral interactions, much longer drift tubes or much faster ion shutters are necessary to achieve the same resolving power. However, reduced pressure operation offers several advantages. First, it eases interfacing the IMS with mass spectrometry. Second, at lower pressures, ions can be stored for injection from an ion trap and re-focused radially during and after the separation. Third, high values of E/N can be achieved, allowing for direct measurement of K(E/N) over a wide range. Travelling wave Though drift electric fields are normally uniform, non-uniform drift fields can also be used. One example is the travelling wave IMS (TWIMS), a low pressure drift tube IMS in which the electric field is only applied in a small region of the drift tube. This region then moves along the drift tube, creating a wave that pushes the ions towards the detector and removing the need for a high total drift voltage. A direct determination of collision cross sections (CCS) is not possible using TWIMS. Calibrants can help circumvent this major drawback; however, they should be matched to the size, charge and chemical class of the given analyte. An especially noteworthy variant is the "SUPER" IMS, which combines ion trapping by so-called structures for lossless ion manipulations (SLIM) with several passes through the same drift region to achieve extremely high resolving powers. Trapped ion mobility spectrometry In trapped ion mobility spectrometry (TIMS), ions are held stationary (or trapped) in a flowing buffer gas by an axial electric field gradient (EFG) profile while the application of radio frequency (rf) potentials results in trapping in the radial dimension. TIMS operates in the pressure range of 2 to 5 hPa and replaces the ion funnel found in the source region of modern mass spectrometers. 
It can be coupled with nearly any mass analyzer through either the standard mode of operation for beam-type instruments or selective accumulation mode (SA-TIMS) when used with trapping mass spectrometry (MS) instruments. Effectively, the drift cell is prolonged by the ion motion created through the gas flow. Thus, TIMS devices require neither large size nor high voltage in order to achieve high resolution, for instance achieving over 250 resolving power from a 4.7 cm device through the use of extended separation times. However, the resolving power strongly depends on the ion mobility and decreases for more mobile ions. In addition, TIMS can be capable of higher sensitivity than other ion mobility systems because no grids or shutters exist in the ion path, improving ion transmission both during ion mobility experiments and while operating in a transparent MS-only mode. High-field asymmetric waveform ion mobility spectrometry DMS (differential mobility spectrometer) or FAIMS (field asymmetric ion mobility spectrometer) make use of the dependence of the ion mobility K on the electric field strength E at high electric fields. Ions are transported through the device by the drift gas flow and subjected to different field strengths in the orthogonal direction for different amounts of time. Ions are deflected towards the walls of the analyzer based on the change of their mobility. Thereby, only ions with a certain mobility dependence can pass the filter created in this way. Differential mobility analyzer A differential mobility analyzer (DMA) makes use of a fast gas stream perpendicular to the electric field. Ions of different mobilities thereby follow different trajectories. This type of IMS corresponds to the sector instruments in mass spectrometry. They also work as a scannable filter. Examples include the differential mobility detector first commercialized by Varian in the CP-4900 MicroGC. Aspiration IMS operates with open-loop circulation of sampled air. 
The sample flow is passed through an ionization chamber and then enters the measurement area, where the ions are deflected towards one or more measuring electrodes by a perpendicular electric field which can be either static or varying. The output of the sensor is characteristic of the ion mobility distribution and can be used for detection and identification purposes. A DMA can separate charged aerosol particles or ions according to their mobility in an electric field prior to their detection, which can be done with several means, including electrometers or the more sophisticated mass spectrometers. Drift gas The drift gas composition is an important parameter for the IMS instrument design and resolution. Often, different drift gas compositions can allow for the separation of otherwise overlapping peaks. Elevated gas temperature assists in removing ion clusters that may distort experimental measurements. Detector Often the detector is a simple Faraday plate coupled to a transimpedance amplifier; however, more advanced ion mobility instruments are coupled with mass spectrometers in order to obtain both size and mass information simultaneously. Notably, the detector influences the optimum operating conditions for the ion mobility experiment. Combined methods IMS can be combined with other separation techniques. Gas chromatography When IMS is coupled with gas chromatography, common sample introduction is with the GC capillary column directly connected to the IMS setup, with molecules ionized as they elute from the GC. A similar technique is commonly used for HPLC. A novel design for corona discharge ionization ion mobility spectrometry (CD-IMS) as a detector after capillary gas chromatography was produced in 2012. In this design, a hollow needle was used for corona discharge creation and the effluent entered the ionization region on the upstream side of the corona source. 
In addition to the practical conveniences in coupling the capillary to the IMS cell, this direct axial interfacing helps achieve more efficient ionization, resulting in higher sensitivity. When used with GC, a differential mobility analyzer is often called a differential mobility detector (DMD). A DMD is often a type of microelectromechanical system, radio frequency modulated ion mobility spectrometry (MEMS RF-IMS) device. Though small, it can fit into portable units, such as transferable gas chromatographs or drug/explosives sensors. For instance, it was incorporated by Varian in its CP-4900 DMD MicroGC, and by Thermo Fisher in its EGIS Defender system, designed to detect narcotics and explosives in transportation or other security applications. Liquid chromatography Coupled with LC and MS, IMS has become widely used to analyze biomolecules, a practice heavily developed by David E. Clemmer, now at Indiana University (Bloomington). Mass spectrometry When IMS is used with mass spectrometry, ion mobility spectrometry-mass spectrometry offers many advantages, including a better signal-to-noise ratio, isomer separation, and charge state identification. IMS has commonly been attached to several mass spectrometry analyzers, including quadrupole, time-of-flight, and Fourier transform ion cyclotron resonance instruments. Dedicated software Ion mobility mass spectrometry is a rather recently popularized gas phase ion analysis technique. As such, there is not a large software offering to display and analyze ion mobility mass spectrometric data, apart from the software packages that are shipped along with the instruments. ProteoWizard, OpenMS, and msXpertSuite are free software according to the OpenSourceInitiative definition. While ProteoWizard and OpenMS have features to allow spectrum scrutiny, those software packages do not provide combination features. 
In contrast, msXpertSuite features the ability to combine spectra according to various criteria: retention time, m/z range, drift time range, for example. msXpertSuite thus more closely mimics the software that usually comes bundled with the mass spectrometer. See also Electrical mobility Viehland-Mason Theory Explosive detection References Bibliography External links Mass spectrometry Explosive detection
32214697
https://en.wikipedia.org/wiki/School%20of%20Computing%20and%20Information%20Sciences%2C%20Saint%20Louis%20University%2C%20Baguio
School of Computing and Information Sciences, Saint Louis University, Baguio
School of Computing and Information Sciences CHED Center of Development for Information Technology The youngest School in SLU traces its roots to the vision of then VP for Finance and later University President, Rev. Fr. Ghisleen de Vos (1976–1983). Forward-thinking and possessed of a progressive management style, Fr. de Vos foresaw the full automation of some university systems, such as the accounting and enrolment processes, at a period when computerization was not yet widely practiced in the country. With the acquisition of IBM systems in 1969 and in 1980, SLU also catered to the computing needs of other institutions in nearby regions. The SLU Computer Center handled these tasks until 1990, when it evolved into the Institute of Information and Computing Science and offered a course in Computer Science. The institute was converted into a college soon after, in 1994, and eventually the management of the computing and IT needs of the different sectors of the university was devolved to the newly installed MIS and SLU NET Offices. Courses in Information Technology, Mathematics, Information Management, and Library and Information Science were added over time. New as it was then, the school was already a trailblazer in IT education. It was the first in the region to offer a graduate program in IT, in 1995. The advanced curriculum was further strengthened with globe-spanning linkages, faculty scholarships and training, and invitations to international lecturers. The School hosted the first ever Northern Luzon international IT conference in 2007, with students, professionals and experts from the world over in attendance. It has since conducted annual regional IT congresses which showcase research and projects in the field from different universities and industries. This Center of Development in IT education continuously introduces program innovations to match current demands and skills in the profession. 
The School's ICT Research Laboratory designed and manages the University's Learning Management System and the Research Digital Repository System, which serve as online storehouse portals for course notes, research, forums, and class records. The School has worked on and is currently completing studies in promising areas of IT research such as natural language processing using local dialects (e.g., Ilokano and Tagalog), computational mathematics and algorithms, mobile and wireless computing, and the measurement of IT literacy and fluency. People skilled in digital arts technology are among the most in-demand workers in several industries today. To meet this demand, and in support of the Philippine government's call for HEIs to offer ladderized technical or vocational programs, the School offers short diploma courses in digital animation, multimedia systems, digital design, editing and publishing, and the like. The latest addition to the School's graduate programs, the Master of Science in Service Management Engineering (MSSME), makes SLU the first in the country to offer this emerging academic program. The degree aims at advancing, managing, evaluating and optimizing systems in the global service industry. Developed in coordination with Prof. Dr. Guido Dedene, a renowned global IT expert, this multidisciplinary program also includes subjects from the Schools of Engineering and Architecture, and Accountancy and Business Management of SLU. The School is distinguished to be one of the select HEIs tapped by the Philippine Statistical Research and Training Center as the focal point for regional training to accelerate statistical capability building in the nation. Apart from producing technologically savvy professionals, the School wants to make itself socially relevant through the sharing of its expertise and resources. 
It donated numerous computer units in 2007 to the Baguio City National High School (BCNHS) as part of a collaborative project with the Close the Gap (CTG) alliance program of Belgium. As a component of the project, the School additionally designed and conducted a series of training programs for the teachers of the BCNHS on several computer and web-based applications. The School's future looks bright as it continues to soar with the speed of rapid modernization. The School of Computing and Information Sciences recognizes, though, that the power to create, command, and control information technology comes with great responsibility. The School therefore prides itself not only on setting new academic directions towards the advancement of IT and computing education and research, but also on advocating the ethical use of information and computing. SLU was the first institutional internet service provider in Northern Luzon when it became a member - 1 of only 10 in the country then - of the Philippine Network Foundation (PHNet) consortium in 1994. References Universities and colleges in Baguio
13345651
https://en.wikipedia.org/wiki/Richard%20Kilmer
Richard Kilmer
Richard Kilmer (born Hemet, California, 1969) is a technology entrepreneur, software programmer, and conference host and speaker in the open-source software community. He is an open-source contributor and developer of commercial software applications built in Ruby and Flash. His best known open-source software creation is RubyGems, a package manager for the Ruby programming language most commonly used in downloads and deployments of the Ruby on Rails web application framework. He is currently the co-founder and CEO of CargoSense, Inc. In 2001, he co-founded both the non-profit corporation Ruby Central, Inc., dedicated to the promotion of the Ruby programming language, and the for-profit corporation InfoEther, Inc., created to focus on applying the Ruby computer language in business. He served as president and CEO of InfoEther until its acquisition by LivingSocial in March 2011. At LivingSocial he was appointed a vice president, working in roles in R&D, and led the software development of numerous projects in Merchant Services and mobile. After several years at LivingSocial, he left in 2013 to form his current company, CargoSense, Inc., a Software-as-a-Service (SaaS) company aimed at bringing innovation to the logistics supply chain in numerous industries using sensor technology in the Internet of Things arena. Prior to 2001, he was the co-founder and Chief Technology Officer of a leading-edge P2P software company, where he was granted two U.S. patents and co-wrote a massive Java codebase. Between 2002 and 2005 his for-profit company performed work for DARPA on both a massively multi-agent logistics software system and the Semantic Web project, developing an early Web Ontology Language (OWL) library. Both projects drew on his expertise in computer security gained as a systems security manager while in the U.S. Air Force stationed at The Pentagon. 
While an active board member of the non-profit Ruby Central, he hosted the annual international conferences put on by that organization for both Ruby and Ruby on Rails. By 2006, the Ruby on Rails conferences had become so large and popular that Ruby Central entered into an agreement with O'Reilly Media to co-promote Rails events in both the U.S. and Europe. Previously, Kilmer had spoken at numerous O'Reilly Media open-source conferences. He has also been a consistent contributor at the Foo Camp events put on by O'Reilly Media and is a technology blogger. References External links CargoSense, Inc. Ruby Central, Inc. Rails Conferences Rich Kilmer video interview on the Power of Ruby 1969 births Living people Computer programmers American chief technology officers American technology chief executives
16352023
https://en.wikipedia.org/wiki/Automata-based%20programming%20%28Shalyto%27s%20approach%29
Automata-based programming (Shalyto's approach)
Automata-based programming is a programming technology. Its defining characteristic is the use of finite state machines to describe program behavior. The transition graphs of state machines are used in all stages of software development (specification, implementation, debugging and documentation). Automata-based programming technology was introduced by Anatoly Shalyto in 1991. Switch-technology was developed to support automata-based programming. Automata-based programming is considered a general-purpose program development methodology rather than just another finite state machine implementation. Automata-based programming The main idea of the suggested approach is to construct computer programs the same way the automation of technological processes (and other kinds of processes) is done. On the basis of data domain analysis, the sources of input events, the control system (a system of interacting finite state machines) and the control objects implementing output actions are singled out. These control objects can also form yet another type of input action, transmitted through feedback from the control objects back to the finite state machines. Main features In recent years great attention has been paid to the development of programming technology for embedded systems and real-time systems. These systems have special requirements for the quality of software. One of the best known approaches for this class of tasks is synchronous programming. Simultaneously with the advance of synchronous programming in Europe, an approach to software development for critical systems called automata-based programming or state-based programming was being created in Russia. The term event is used more and more widely in programming; recently it has become one of the most commonly used terms in software development. In contrast, the offered approach is based on the term state (State-Driven Architecture). 
After introduction of the term input action, which can denote an input variable or an event, the term automaton without outputs can be brought in. After adding the term output action, the term “automaton” can be used; it is the finite deterministic automaton. That is why the sort of programming based on this term was called “automata-based programming”, and the process of software creation can be named “automata software design”. The feature of this approach is that the automata used for development are defined with the help of transition graphs. In order to distinguish the nodes of these graphs, the term state coding has been introduced. With multivalued state coding, a single variable can be used to distinguish the states of an automaton; the number of states is equal to the number of values this variable can take on. This allowed the introduction of the term program observability (that is, the value of the state variable can be checked). Using the concept of “state”, in contrast to the concepts of “events” and “variables”, allows one to understand and specify the task and its parts (subtasks) more clearly. It is necessary to note that automata-based programming implies debugging by drawing up protocols (logging) in terms of automata. For this approach there is a formal and isomorphic method of transforming a transition graph into software source code. So when using high-level programming languages, the simplest way is to use a construct similar to the switch construct of the C programming language. That is why the first implementation of automata-based programming was called “Switch-Technology”. Additional information about automata-based programming can be found in the “Switch-technology” article. Nowadays automata-based programming has been developed in several directions, for different types of tasks to be solved and for various types of computing devices. 
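As a minimal illustration of the switch construct described above, the sketch below implements a two-state turnstile automaton in Java. The Turnstile class and all of its names are invented for this example; they are not taken from Switch-technology itself. One variable holds the coded state, a single switch over that variable implements the transition graph, and output actions sit on the transitions:

```java
// Hypothetical two-state turnstile automaton in the switch style.
// State coding: one int variable, one value per node of the transition graph.
public class Turnstile {
    // Input actions (events)
    public static final int COIN = 0;
    public static final int PUSH = 1;

    // States (multivalued state coding)
    public static final int LOCKED = 0;
    public static final int UNLOCKED = 1;

    private int state = LOCKED;

    // Program observability: the state variable can be checked at any time.
    public int getState() { return state; }

    // One switch over the state variable implements the automaton;
    // output actions are placed on the arcs (Mealy style).
    public void handle(int event) {
        switch (state) {
            case LOCKED:
                if (event == COIN) {
                    unlock();            // output action on the arc
                    state = UNLOCKED;
                }
                break;
            case UNLOCKED:
                if (event == PUSH) {
                    lock();              // output action on the arc
                    state = LOCKED;
                }
                break;
        }
        log(event);  // protocol (logging) in terms of the automaton
    }

    private void unlock() { /* drive the control object */ }
    private void lock()   { /* drive the control object */ }
    private void log(int event) {
        System.out.println("event=" + event + " -> state=" + state);
    }
}
```

The logging call mirrors the debugging-by-protocol idea: every event is recorded together with the resulting automaton state.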
A Russian registration certificate was issued for the automata-based programming core and for the automata-based programming plug-in for the Eclipse IDE. Logical control In 1996, the Russian Foundation for Basic Research, in the context of publishing project #96-01-14066, supported the publication of a book in which the offered technology was described in application to logical control systems. In such systems there are no events; input and output actions are binary variables, and the operating system works in scanning mode. Systems of this class are usually implemented on programmable logic controllers, which have a relatively small amount of memory, and programming is performed using specialized languages (for example, the language of ladder schemes or functional blocks). Methods of formal source code generation for such languages were developed for the cases in which the specification of the project being developed is represented by a system of transition graphs of interacting automata. State-based programming Subsequently, the automata approach was extended to event-based (reactive) systems. In such systems all of the limitations mentioned above are removed. As is obvious from the name of these systems, events are used among the input actions. Output actions can be represented by arbitrary functions. Any real-time operating system can be used as an environment. The automata implementation of event-based systems was made with the help of the procedural approach to software development, hence the name “state-based programming”. When using this method, output actions are assigned to the arcs, loops or nodes of the transition graphs (in the general case, mixed Moore-Mealy automata are used). This allows representing in a compact form the sequences of actions that are the reactions to the corresponding input actions. 
One of the features of this approach to programming reactive systems is that the centralization of program logic is achieved by eliminating logic from the event handlers and forming a system of interacting automata that are called from these handlers. Automata in such a system can interact by nesting, by the ability to call each other, and through the interchange of state numbers. Another important feature of this approach is that automata are used thrice: for specification, for implementation (they remain in the source code) and for drawing up the protocol, which is performed, as said above, in terms of automata. The latter allows verification that the automata system functions properly. Logging is performed automatically on the basis of the created program; it can be used for debugging programs with complicated behavior. This approach also allows effective documenting of the decisions made during the design process, especially those related to the formalization of program behavior. All this made it possible to start the Foundation for Open Project Documentation, in the context of which many projects on the improvement of automata-based programming are being developed. State-based object-oriented programming The composite approach, based on both the object-oriented and automata-based programming paradigms, can be rather useful for solving tasks from a very large spectrum. This approach was called “state-based object-oriented programming”. Its main feature is that, as in Turing machines, controlling (automata) states are explicitly singled out. The number of these states is noticeably smaller than the number of all other object states (for example, run-time states). The term “states space” was introduced in programming; it means the set of an object's controlling states. This approach thus provides more understandable behavior than in the case when such a space is not singled out explicitly. 
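A hedged sketch of state-based object-oriented programming follows; the MediaPlayer class and its names are invented for illustration. The point is that the object's few controlling states are enumerated explicitly as a "states space", separate from its other run-time data, and behavior is dispatched on those controlling states only:

```java
// Hypothetical example: controlling states singled out explicitly.
public class MediaPlayer {
    // The states space: a small, explicitly enumerated set of controlling states.
    enum ControlState { STOPPED, PLAYING, PAUSED }

    private ControlState state = ControlState.STOPPED;
    private long positionMs = 0;   // run-time state, NOT a controlling state

    public ControlState getState() { return state; }

    public void play() {
        // Behavior is dispatched on the controlling state only.
        switch (state) {
            case STOPPED: positionMs = 0; state = ControlState.PLAYING; break;
            case PAUSED:  state = ControlState.PLAYING; break;
            case PLAYING: break; // already playing: no transition
        }
    }

    public void pause() {
        if (state == ControlState.PLAYING) state = ControlState.PAUSED;
    }

    public void stop() {
        state = ControlState.STOPPED;
        positionMs = 0;
    }
}
```

Because the states space is explicit, the object's behavior can be read directly off the enum and the transitions, independently of run-time data such as the playback position.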
The minimal set of documents that visually and clearly describe the structural (static) and behavioral (dynamic) sides of a software project is described. From experience adapting the suggested approach, one can conclude that the application of automata makes program behavior clearer, just as the use of objects makes program structure clearer. The existence of high-quality project documentation makes subsequent program refactoring (changing its structure while retaining its functionality) much easier. Computational algorithms The automata approach can be used for the implementation of computational algorithms. It was shown that an arbitrary iterative algorithm can be implemented with the help of a construction equivalent to the do ... while loop operator, inside which there is a single switch operator that implements an automaton. The automata-based approach is very effective for implementing some algorithms of discrete mathematics, for example the tree parsing algorithm. A new state-based approach to the creation of algorithm visualizers was offered. Such visualization software is widely used in the Computer Technologies department of Saint Petersburg State University of Information Technologies, Mechanics and Optics for teaching students programming and discrete mathematics. This approach allows the visualizer's logic to be represented as a system of interacting finite state machines. This system consists of pairs of automata; each of these pairs contains a “forward” and a “backward” automaton, which provide step-by-step forwards and backwards execution of algorithms respectively. Instrumentation Various software tools have been developed to support automata-based programming. One of these tools is UniMod. This tool is based on the following concepts: UML, Switch-technology, the Eclipse IDE, the Java programming language, and open source code (http://unimod.sourceforge.net/). All this enables one to speak of UniMod as an implementation of executable UML. 
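The do ... while construction with a single switch, described above for computational algorithms, can be sketched as follows. Euclid's gcd algorithm is chosen here purely for illustration; the class and state names are invented for this sketch:

```java
// Hypothetical sketch: an iterative algorithm (Euclid's gcd) written as a
// do ... while loop whose body is one switch implementing an automaton.
public class GcdAutomaton {
    public static int gcd(int a, int b) {
        // State coding for the automaton driving the iteration.
        final int CHECK = 0, STEP = 1, DONE = 2;
        int state = CHECK;
        do {
            switch (state) {
                case CHECK:                 // decide whether to keep iterating
                    state = (b == 0) ? DONE : STEP;
                    break;
                case STEP:                  // one iteration of the algorithm
                    int r = a % b;
                    a = b;
                    b = r;
                    state = CHECK;
                    break;
            }
        } while (state != DONE);
        return a;
    }
}
```

Splitting the loop into CHECK and STEP states makes the iteration's control flow explicit, which is what enables the step-by-step forward and backward execution used in the algorithm visualizers mentioned above.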
Publications Collected articles on automata-based programming were published by ITMO University. The bulletin contains 28 articles on different problems of automata-based programming. In 2009, the first book about automata-based programming was published in St. Petersburg, Russia. See also Communicating sequential processes Executable UML References Bibliography Russian original External links Automata programming homepage (in Russian), (in English) UniMod Examples of using the UniMod tool (in Russian, in English) Programming paradigms ITMO University
42750
https://en.wikipedia.org/wiki/Jakarta%20Enterprise%20Beans
Jakarta Enterprise Beans
Jakarta Enterprise Beans (EJB; formerly Enterprise JavaBeans) is one of several Java APIs for modular construction of enterprise software. EJB is a server-side software component that encapsulates business logic of an application. An EJB web container provides a runtime environment for web related software components, including computer security, Java servlet lifecycle management, transaction processing, and other web services. The EJB specification is a subset of the Java EE specification. Specification The EJB specification was originally developed in 1997 by IBM and later adopted by Sun Microsystems (EJB 1.0 and 1.1) in 1999 and enhanced under the Java Community Process as JSR 19 (EJB 2.0), JSR 153 (EJB 2.1), JSR 220 (EJB 3.0), JSR 318 (EJB 3.1) and JSR 345 (EJB 3.2). The EJB specification provides a standard way to implement the server-side (also called "back-end") 'business' software typically found in enterprise applications (as opposed to 'front-end' user interface software). Such software addresses the same types of problem, and solutions to these problems are often repeatedly re-implemented by programmers. Jakarta Enterprise Beans is intended to handle such common concerns as persistence, transactional integrity and security in a standard way, leaving programmers free to concentrate on the particular parts of the enterprise software at hand. 
General responsibilities The EJB specification details how an application server provides the following responsibilities: Transaction processing Integration with the persistence services offered by the Jakarta Persistence (JPA) Concurrency control Event-driven programming using Jakarta Messaging (JMS) and Jakarta Connectors (JCA) Asynchronous method invocation Job scheduling Naming and directory services via Java Naming and Directory Interface (JNDI) Interprocess Communication using RMI-IIOP and Web services Security (JCE and JAAS) Deployment of software components in an application server Additionally, the Jakarta Enterprise Beans specification defines the roles played by the EJB container and the EJBs as well as how to deploy the EJBs in a container. Note that the EJB specification does not detail how an application server provides persistence (a task delegated to the JPA specification), but instead details how business logic can easily integrate with the persistence services offered by the application server. History Businesses found that using EJBs to encapsulate business logic brought a performance penalty. This is because the original specification allowed only for remote method invocation through CORBA (and optionally other protocols), even though the large majority of business applications actually do not require this distributed computing functionality. The EJB 2.0 specification addressed this concern by adding the concept of local interfaces which could be called directly without performance penalties by applications that were not distributed over multiple servers. The EJB 3.0 specification (JSR 220) was a departure from its predecessors, following a new light-weight paradigm. EJB 3.0 shows an influence from Spring in its use of plain Java objects, and its support for dependency injection to simplify configuration and integration of heterogeneous systems. 
EJB 3.0, along with other versions of EJB, can be integrated with MuleSoft v4 using the MuleSoft-certified PlektonLabs EJB Connector. Gavin King, the creator of Hibernate, participated in the EJB 3.0 process and is an outspoken advocate of the technology. Many features originally in Hibernate were incorporated in the Java Persistence API, the replacement for entity beans in EJB 3.0. The EJB 3.0 specification relies heavily on the use of annotations (a feature added to the Java language with its 5.0 release) and convention over configuration to enable a much less verbose coding style. Accordingly, in practical terms EJB 3.0 is much more lightweight and nearly a completely new API, bearing little resemblance to the previous EJB specifications. Example The following shows a basic example of what an EJB looks like in code:

@Stateless
public class CustomerService {
    private EntityManager entityManager;

    public void addCustomer(Customer customer) {
        entityManager.persist(customer);
    }
}

The above defines a service class for persisting a Customer object (via O/R mapping). The EJB takes care of managing the persistence context, and the addCustomer() method is transactional and thread-safe by default. As demonstrated, the EJB focuses only on business logic and persistence and knows nothing about any particular presentation. Such an EJB can be used by a class in e.g. the web layer as follows:

@Named
@RequestScoped
public class CustomerBacking {
    @EJB
    private CustomerService customerService;

    public String addCustomer(Customer customer) {
        customerService.addCustomer(customer);
        context.addMessage(...); // abbreviated for brevity
        return "customer_overview";
    }
}

The above defines a JavaServer Faces (JSF) backing bean in which the EJB is injected by means of the @EJB annotation. Its addCustomer method is typically bound to some UI component, such as a button. 
Contrary to the EJB, the backing bean does not contain any business logic or persistence code, but delegates such concerns to the EJB. The backing bean does know about a particular presentation, of which the EJB had no knowledge. Types of Enterprise Beans An EJB container holds two major types of beans: Session Beans that can be either "Stateful", "Stateless" or "Singleton" and can be accessed via either a Local (same JVM) or Remote (different JVM) interface or directly without an interface, in which case local semantics apply. All session beans support asynchronous execution for all views (local/remote/no-interface). Message Driven Beans (MDBs, also known as Message Beans). MDBs also support asynchronous execution, but via a messaging paradigm. Session beans Stateful Session Beans Stateful Session Beans are business objects having state: that is, they keep track of which calling client they are dealing with throughout a session, and thus access to the bean instance is strictly limited to only one client at a time. If concurrent access to a single bean is attempted anyway, the container serializes those requests, but via the @AccessTimeout annotation the container can instead throw an exception. Stateful session beans' state may be persisted (passivated) automatically by the container to free up memory after the client hasn't accessed the bean for some time. The JPA extended persistence context is explicitly supported by Stateful Session Beans. Examples Checking out in a web store might be handled by a stateful session bean that would use its state to keep track of where the customer is in the checkout process, possibly holding locks on the items the customer is purchasing (from a system architecture's point of view, it would be less ideal to have the client manage those locks). Stateless Session Beans Stateless Session Beans are business objects that do not have state associated with them. 
However, access to a single bean instance is still limited to only one client at a time; concurrent access to the bean is prohibited. If concurrent access to a single bean is attempted, the container simply routes each request to a different instance. This makes a stateless session bean automatically thread-safe. Instance variables can be used during a single method call from a client to the bean, but the contents of those instance variables are not guaranteed to be preserved across different client method calls. Instances of Stateless Session Beans are typically pooled. If a second client accesses a specific bean right after a method call on it made by a first client has finished, it might get the same instance. The lack of overhead to maintain a conversation with the calling client makes them less resource-intensive than stateful beans. Examples Sending an e-mail to customer support might be handled by a stateless bean, since this is a one-off operation and not part of a multi-step process. A user of a website clicking on a "keep me informed of future updates" box may trigger a call to an asynchronous method of the session bean to add the user to a list in the company's database (this call is asynchronous because the user does not need to wait to be informed of its success or failure). Fetching multiple independent pieces of data for a website, like a list of products and the history of the current user, might be handled by asynchronous methods of a session bean as well (these calls are asynchronous because they can execute in parallel that way, which potentially increases performance). In this case, the asynchronous method will return a Future instance. Singleton Session Beans Singleton Session Beans are business objects having a global shared state within a JVM. Concurrent access to the one and only bean instance can be controlled by the container (Container-managed concurrency, CMC) or by the bean itself (Bean-managed concurrency, BMC). 
CMC can be tuned using the @Lock annotation, which designates whether a read lock or a write lock will be used for a method call. Additionally, Singleton Session Beans can explicitly request to be instantiated when the EJB container starts up, using the @Startup annotation. Examples Loading a global daily price list that will be the same for every user might be done with a singleton session bean, since this will prevent the application from having to do the same query to a database over and over again. Message driven beans Message Driven Beans are business objects whose execution is triggered by messages instead of by method calls. The Message Driven Bean is used among others to provide a high level ease-of-use abstraction for the lower level JMS (Java Message Service) specification. It may subscribe to JMS message queues or message topics, which typically happens via the activationConfig attribute of the @MessageDriven annotation. They were added in EJB to allow event-driven processing. Unlike session beans, an MDB does not have a client view (Local/Remote/No-interface), i.e. clients cannot look up an MDB instance. An MDB just listens for any incoming message on, for example, a JMS queue or topic and processes them automatically. Only JMS support is required by the Java EE spec, but Message Driven Beans can support other messaging protocols. Such protocols may be asynchronous but can also be synchronous. Since session beans can also be synchronous or asynchronous, the prime difference between session- and message driven beans is not the synchronicity, but the difference between (object oriented) method calling and messaging. Examples Sending a configuration update to multiple nodes might be done by sending a JMS message to a 'message topic' and could be handled by a Message Driven Bean listening to this topic (the message paradigm is used here since the sender does not need to know the number of consumers, their location, or even their exact type). 
Submitting a job to a work cluster might be done by sending a JMS message to a 'message queue' and could also be handled by a Message Driven Bean, but this time listening to a queue (the message paradigm and the queue are used, since the sender doesn't have to care which worker executes the job, but it does need assurance that a job is only executed once). Processing timing events from the Quartz scheduler can be handled by a Message Driven Bean; when a Quartz trigger fires, the MDB is automatically invoked. Since Java EE doesn't know about Quartz by default, a JCA resource adapter would be needed and the MDB would be annotated with a reference to this. Execution EJBs are deployed in an EJB container, typically within an application server. The specification describes how an EJB interacts with its container and how client code interacts with the container/EJB combination. The EJB classes used by applications are included in the jakarta.ejb package (formerly javax.ejb). (The jakarta.ejb.spi package is a service provider interface used only by EJB container implementations.) Clients of EJBs do not instantiate those beans directly via Java's new operator, but instead have to obtain a reference via the EJB container. This reference is usually not a reference to the implementation bean itself, but to a proxy, which either dynamically implements the local or remote business interface that the client requested or dynamically implements a sub-type of the actual bean. The proxy can then be directly cast to the interface or bean. A client is said to have a 'view' on the EJB, and the local interface, remote interface and bean type itself respectively correspond with the local view, remote view and no-interface view. This proxy is needed in order to give the EJB container the opportunity to transparently provide cross-cutting (AOP-like) services to a bean, such as transactions, security, interceptions, injections, and remoting. 
As an example, a client invokes a method on a proxy, which will first start a transaction with the help of the EJB container and then call the actual bean method. When the bean method returns, the proxy ends the transaction (i.e. by committing it or doing a rollback) and transfers control back to the client. The EJB Container is responsible for ensuring the client code has sufficient access rights to an EJB. Security aspects can be declaratively applied to an EJB via annotations. Transactions EJB containers must support both container-managed ACID transactions and bean-managed transactions. Container-managed transactions (CMT) are by default active for calls to session beans. That is, no explicit configuration is needed. This behavior may be declaratively tuned by the bean via annotations, and if needed such configuration can later be overridden in the deployment descriptor. Tuning includes switching off transactions for the whole bean or specific methods, or requesting alternative strategies for transaction propagation and starting or joining a transaction. Such strategies mainly deal with what should happen if a transaction is or isn't already in progress at the time the bean is called. The following variations are supported: Required, RequiresNew, Mandatory, Supports, NotSupported and Never. Alternatively, the bean can also declare via an annotation that it wants to handle transactions programmatically via the JTA API. This mode of operation is called Bean Managed Transactions (BMT), since the bean itself handles the transaction instead of the container. Events JMS (Java Message Service) is used to send messages from beans to clients, to let clients receive asynchronous messages from these beans. MDBs can be used to receive messages from clients asynchronously using either a JMS Queue or a Topic. Naming and directory services As an alternative to injection, clients of an EJB can obtain a reference to the session bean's proxy object (the EJB stub) using the Java Naming and Directory Interface (JNDI). 
This alternative can be used in cases where injection is not available, such as in non-managed code or standalone remote Java SE clients, or when it's necessary to programmatically determine which bean to obtain. JNDI names for EJB session beans are assigned by the EJB container via the following scheme (entries in square brackets denote optional parts):

java:global[/<app-name>]/<module-name>/<bean-name>[!<fully-qualified-interface-name>]
java:app/<module-name>/<bean-name>[!<fully-qualified-interface-name>]
java:module/<bean-name>[!<fully-qualified-interface-name>]

A single bean can be obtained by any name matching the above patterns, depending on the 'location' of the client. Clients in the same module as the required bean can use the module scope and larger scopes, clients in the same application as the required bean can use the app scope and higher, etc. E.g. code running in the same module as the CustomerService bean (as given by the example shown earlier in this article) could use the following code to obtain a (local) reference to it:

CustomerServiceLocal customerService = (CustomerServiceLocal) new InitialContext().lookup("java:module/CustomerService");

Remoting/distributed execution

For communication with a client written in the Java programming language, a session bean can expose a remote view via an @Remote-annotated interface. This allows those beans to be called from clients in other JVMs, which themselves may be located on other (remote) systems. From the point of view of the EJB container, any code in another JVM is remote. Stateless and Singleton session beans may also expose a "web service client view" for remote communication via WSDL and SOAP or plain XML. This follows the JAX-RPC and JAX-WS specifications. JAX-RPC support, however, is proposed for future removal. To support JAX-WS, the session bean is annotated with the @WebService annotation, and methods that are to be exposed remotely with the @WebMethod annotation. Although the EJB specification does not mention exposure as RESTful web services in any way and has no explicit support for this form of communication, the JAX-RS specification does explicitly support EJB.
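The remote view can be sketched as follows; the interface and bean are hypothetical examples, not from the original article:

```java
import javax.ejb.Remote;
import javax.ejb.Stateless;

// The business interface, marked as a remote view.
@Remote
interface CustomerServiceRemote {
    String customerName(long customerId);
}

// The bean; clients in other JVMs call it through a container-supplied proxy.
@Stateless
public class CustomerServiceBean implements CustomerServiceRemote {
    @Override
    public String customerName(long customerId) {
        return "customer-" + customerId; // placeholder for a real lookup
    }
}
```

A remote client would then look up a portable JNDI name such as java:global/myapp/mymodule/CustomerServiceBean!CustomerServiceRemote (application and module names here are assumptions) and invoke methods on the returned proxy, with arguments and results passed by value.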
Following the JAX-RS spec, Stateless and Singleton session beans can be root resources via the @Path annotation, and EJB business methods can be mapped to resource methods via the @GET, @PUT, @POST and @DELETE annotations. This, however, does not count as a "web service client view", which is used exclusively for JAX-WS and JAX-RPC. Communication via web services is typical for clients not written in the Java programming language, but is also convenient for Java clients that have trouble reaching the EJB server via a firewall. Additionally, web-service-based communication can be used by Java clients to circumvent the arcane and ill-defined requirements for the so-called "client libraries": a set of jar files that a Java client must have on its class path in order to communicate with the remote EJB server. These client libraries potentially conflict with libraries the client may already have (for instance, if the client itself is also a full Java EE server), and such a conflict is deemed to be very hard or impossible to resolve.

Legacy

Home interfaces and required business interface

With EJB 2.1 and earlier, each EJB had to provide a Java implementation class and two Java interfaces. The EJB container created instances of the Java implementation class to provide the EJB implementation. The Java interfaces were used by client code of the EJB.

Required deployment descriptor

With EJB 2.1 and earlier, the EJB specification required a deployment descriptor to be present. This was needed to implement a mechanism that allowed EJBs to be deployed in a consistent manner regardless of the specific EJB platform that was chosen. Information about how the bean should be deployed (such as the name of the home or remote interfaces, whether and how to store the bean in a database, etc.) had to be specified in the deployment descriptor. The deployment descriptor is an XML document having an entry for each EJB to be deployed.
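A stateless session bean acting as a JAX-RS root resource might look like this sketch (names and paths are hypothetical):

```java
import javax.ejb.Stateless;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// One class is both an EJB and a REST resource: business methods get EJB
// services (transactions, security) while being mapped to HTTP verbs.
@Stateless
@Path("customers")
public class CustomerResource {

    @GET
    @Path("{id}")
    @Produces("application/json")
    public String get(@PathParam("id") long id) {
        return "{\"id\": " + id + "}"; // placeholder payload
    }
}
```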
This XML document specifies the following information for each EJB:
Name of the Home interface
Java class for the Bean (business object)
Java interface for the Home interface
Java interface for the business object
Persistent store (only for Entity Beans)
Security roles and permissions
Stateful or Stateless (for Session Beans)
Old EJB containers from many vendors required more deployment information than that in the EJB specification. They would require the additional information as separate XML files, or in some other configuration file format. An EJB platform vendor generally provided their own tools that would read this deployment descriptor, and possibly generated a set of classes that would implement the now-deprecated Home and Remote interfaces. Since EJB 3.0 (JSR 220), the XML descriptor has been replaced by Java annotations set in the Enterprise Bean implementation (at source level), although it is still possible to use an XML descriptor instead of (or in addition to) the annotations. If an XML descriptor and annotations are both applied to the same attribute within an Enterprise Bean, the XML definition overrides the corresponding source-level annotation, although some XML elements can also be additive (e.g., an activation-config-property in XML with a different name than one already defined via an @ActivationConfigProperty annotation will be added instead of replacing all existing properties).

Container variations

Starting with EJB 3.1, the EJB specification defines two variants of the EJB container: a full version and a limited version. The limited version adheres to a proper subset of the specification called EJB 3.1 Lite and is part of Java EE 6's web profile (which is itself a subset of the full Java EE 6 specification).
EJB 3.1 Lite excludes support for the following features:
Remote interfaces
RMI-IIOP Interoperability
JAX-WS Web Service Endpoints
EJB Timer Service (@Schedule, @Timeout)
Asynchronous session bean invocations (@Asynchronous)
Message-driven beans
EJB 3.2 Lite excludes fewer features. In particular, it no longer excludes @Asynchronous and @Schedule/@Timeout, but for @Schedule it does not support the "persistent" attribute that full EJB 3.2 does support. The complete list of exclusions for EJB 3.2 Lite is:
Remote interfaces
RMI-IIOP Interoperability
JAX-WS Web Service Endpoints
Persistent timers ("persistent" attribute on @Schedule)
Message-driven beans

Version history

EJB 4.0, final release (2020-05-22)
Jakarta Enterprise Beans 4.0, as a part of Jakarta EE 9, was a tooling release that mainly moved API package names from the top-level javax package to the top-level jakarta package. Other changes included removal of deprecated APIs that were pointless to move to the new top-level package, and the removal of features that depended on features removed from Java or elsewhere in Jakarta EE 9. The following APIs were removed:
methods relying on java.security.Identity, which was removed from Java 14.
methods relying on Jakarta XML RPC, to reflect the removal of XML RPC from the Jakarta EE 9 Platform.
the deprecated EJBContext.getEnvironment() method.
"Support for Distributed Interoperability", to reflect the removal of CORBA from Java 11 and the Jakarta EE 9 Platform.
Other minor changes include marking the Enterprise Beans 2.x API Group as "Optional" and making the Schedule annotation repeatable.

EJB 3.2.6, final release (2019-08-23)
Jakarta Enterprise Beans 3.2, as a part of Jakarta EE 8: despite still using the "EJB" abbreviation, this set of APIs was officially renamed to "Jakarta Enterprise Beans" by the Eclipse Foundation so as not to tread on the Oracle "Java" trademark.

EJB 3.2, final release (2013-05-28)
JSR 345.
Enterprise JavaBeans 3.2 was a relatively minor release that mainly contained specification clarifications and lifted some restrictions that were imposed by the spec but over time appeared to serve no real purpose. A few existing full EJB features were also required to be in EJB 3.2 Lite, and functionality that was proposed to be pruned in EJB 3.1 was indeed pruned (made optional). The following features were added:
Passivation of a stateful session bean can be deactivated via an attribute on the @Stateful annotation (passivationCapable = false)
TimerService can retrieve all active timers in the same EJB module (previously it could only retrieve timers for the bean in which the TimerService was called)
Lifecycle methods (e.g. @PostConstruct) can be transactional for stateful session beans using the existing @TransactionAttribute annotation
AutoCloseable interface implemented by the embeddable container

EJB 3.1, final release (2009-12-10)
JSR 318. The purpose of the Enterprise JavaBeans 3.1 specification is to further simplify the EJB architecture by reducing its complexity from the developer's point of view, while also adding new functionality in response to the needs of the community:
Local view without interface (no-interface view)
.war packaging of EJB components
EJB Lite: definition of a subset of EJB
Portable EJB Global JNDI Names
Singletons (Singleton Session Beans)
Application Initialization and Shutdown Events
EJB Timer Service Enhancements
Simple Asynchrony (@Asynchronous for session beans)

EJB 3.0, final release (2006-05-11)
JSR 220 - Major changes: This release made it much easier to write EJBs, using annotations rather than the complex deployment descriptors used in version 2.x. The use of home and remote interfaces and the ejb-jar.xml file were also no longer required in this release, having been replaced with a business interface and a bean that implements the interface.
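The EJB 3.0 simplification described above can be illustrated with a minimal sketch (hypothetical names): a plain business interface plus one annotated class replace the old home/remote interfaces and the ejb-jar.xml descriptor:

```java
import javax.ejb.Local;
import javax.ejb.Stateless;

// The business interface; no home interface is needed anymore.
@Local
interface GreeterLocal {
    String greet(String name);
}

// The bean class; the @Stateless annotation replaces the deployment descriptor.
@Stateless
public class GreeterBean implements GreeterLocal {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}
```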
EJB 2.1, final release (2003-11-24)
JSR 153 - Major changes:
Web service support (new): stateless session beans can be invoked over SOAP/HTTP. Also, an EJB can easily access a Web service using the new service reference.
EJB timer service (new): event-based mechanism for invoking EJBs at specific times.
Message-driven beans accept messages from sources other than JMS.
Message destinations (the same idea as EJB references, resource references, etc.) have been added.
EJB query language (EJB-QL) additions: ORDER BY, AVG, MIN, MAX, SUM, COUNT, and MOD.
XML schema is used to specify deployment descriptors, replacing DTDs.

EJB 2.0, final release (2001-08-22)
JSR 19 - Major changes: Overall goals:
The standard component architecture for building distributed object-oriented business applications in Java. Make it possible to build distributed applications by combining components developed using tools from different vendors.
Make it easy to write (enterprise) applications: application developers will not have to understand low-level transaction and state management details, multi-threading, connection pooling, and other complex low-level APIs.
Will follow the "Write Once, Run Anywhere" philosophy of Java. An enterprise Bean can be developed once, and then deployed on multiple platforms without recompilation or source code modification.
Address the development, deployment, and runtime aspects of an enterprise application's life cycle.
Define the contracts that enable tools from multiple vendors to develop and deploy components that can interoperate at runtime.
Be compatible with existing server platforms. Vendors will be able to extend their existing products to support EJBs.
Be compatible with other Java APIs.
Provide interoperability between enterprise Beans and Java EE components as well as non-Java programming language applications.
Be compatible with the CORBA protocols (RMI-IIOP).
EJB 1.1, final release (1999-12-17)
Major changes:
XML deployment descriptors
Default JNDI contexts
RMI over IIOP
Security - role driven, not method driven
Entity Bean support - mandatory, not optional
Goals for Release 1.1:
Provide better support for application assembly and deployment.
Specify in greater detail the responsibilities of the individual EJB roles.

EJB 1.0 (1998-03-24)
Announced at JavaOne 1998, Sun's third Java developers conference (March 24 through 27). Goals for Release 1.0:
Defined the distinct "EJB Roles" that are assumed by the component architecture.
Defined the client view of enterprise Beans.
Defined the enterprise Bean developer's view.
Defined the responsibilities of an EJB Container provider and server provider; together these make up a system that supports the deployment and execution of enterprise Beans.

External links
EJB 3.0 API Javadocs
The EJB 3.0 Specification
Sun's EJB 3.0 Tutorial
EJB (3.0) Glossary
EJB FAQ
JSR 345 (EJB 3.2)
JSR 318 (EJB 3.1)
JSR 220 (EJB 3.0)
JSR 153 (EJB 2.1)
JSR 19 (EJB 2.0)
"Working with Message-Driven Beans" from EJB3 in Action, Second Edition
Client invokes an EJB
https://en.wikipedia.org/wiki/Lawrence%20Lessig
Lawrence Lessig
Lester Lawrence Lessig III (born June 3, 1961) is an American academic, attorney, and political activist. He is the Roy L. Furman Professor of Law at Harvard Law School and the former director of the Edmond J. Safra Center for Ethics at Harvard University. Lessig was a candidate for the Democratic Party's nomination for president of the United States in the 2016 U.S. presidential election but withdrew before the primaries. Lessig is a proponent of reduced legal restrictions on copyright, trademark, and radio frequency spectrum, particularly in technology applications. In 2001, he founded Creative Commons, a non-profit organization devoted to expanding the range of creative works available for others to build upon and to share legally. Prior to his most recent appointment at Harvard, he was a professor of law at Stanford Law School, where he founded the Center for Internet and Society, and at the University of Chicago. He is a former board member of the Free Software Foundation and Software Freedom Law Center; the Washington, D.C. lobbying groups Public Knowledge and Free Press; and the Electronic Frontier Foundation. He was elected to the American Philosophical Society in 2007. As a political activist, Lessig has called for state-based activism to promote substantive reform of government with a Second Constitutional Convention. In May 2014, he launched a crowd-funded political action committee which he termed Mayday PAC with the purpose of electing candidates to Congress who would pass campaign finance reform. Lessig is also the co-founder of Rootstrikers, and is on the boards of MapLight and Represent.Us. He serves on the advisory boards of the Democracy Café and the Sunlight Foundation. In August 2015, Lessig announced that he was exploring a possible candidacy for President of the United States, promising to run if his exploratory committee raised $1 million by Labor Day. 
After accomplishing this, on September 6, 2015, Lessig announced that he was entering the race to become a candidate for the 2016 Democratic Party's presidential nomination. Lessig described his candidacy as a referendum on campaign finance reform and electoral reform legislation. He stated that, if elected, he would serve a full term as president with his proposed reforms as his legislative priorities. He ended his campaign in November 2015, citing rule changes from the Democratic Party that precluded him from appearing in the televised debates.

Academic career

Lessig earned a B.A. degree in economics and a B.S. degree in management (Wharton School) from the University of Pennsylvania, an M.A. degree in philosophy from the University of Cambridge (Trinity) in England, and a J.D. degree from Yale Law School in 1989. After graduating from law school, he clerked for a year for Judge Richard Posner at the 7th Circuit Court of Appeals in Chicago, Illinois, and for another year for Justice Antonin Scalia at the Supreme Court. Lessig started his academic career at the University of Chicago Law School, where he was a professor from 1991 to 1997. As co-director of the Center for the Study of Constitutionalism in Eastern Europe there, he helped the newly independent Republic of Georgia draft a constitution. From 1997 to 2000, he was at Harvard Law School, holding for a year the chair of Berkman Professor of Law, affiliated with the Berkman Klein Center for Internet & Society. He subsequently joined Stanford Law School, where he established the school's Center for Internet and Society. Lessig returned to Harvard in July 2009 as professor and director of the Edmond J. Safra Center for Ethics. In 2013, Lessig was appointed as the Roy L. Furman Professor of Law and Leadership; his chair lecture was titled "Aaron's Laws: Law and Justice in a Digital Age."

In popular culture

Lessig was portrayed by Christopher Lloyd in "The Wake Up Call", during season 6 of The West Wing.
Political background

Lessig has been politically liberal since studying philosophy at Cambridge in the mid-1980s. By the late 1980s, two influential conservative judges, Judge Richard Posner and Justice Antonin Scalia, selected him to serve as a law clerk, choosing him because they considered him brilliant rather than for his ideology, effectively making him the "token liberal" on their staffs. Posner would later call him "the most distinguished law professor of his generation." Lessig has emphasized in interviews that his philosophy experience at Cambridge radically changed his values and career path. Previously, he had held strong conservative or libertarian political views, desired a career in business, was a highly active member of Teenage Republicans, served as the youth governor for Pennsylvania through the YMCA Youth and Government program in 1978, and almost pursued a Republican political career. What was intended to be a year abroad at Cambridge instead convinced him to stay another two years to complete an undergraduate degree in philosophy and develop his changed political values. During this time, he also traveled in the Eastern Bloc, where he acquired a lifelong interest in Eastern European law and politics. Lessig remains skeptical of government intervention but favors some regulation, calling himself "a constitutionalist." On one occasion, Lessig commended the John McCain campaign for a letter to YouTube in which it discussed fair-use rights and took issue with YouTube for indulging overreaching copyright claims that led to the removal of various campaign videos.

Internet and computer activism

"Code is law"

In computer science, "code" typically refers to the text of a computer program (the source code). In law, "code" can refer to the texts that constitute statutory law.
In his 1999 book Code and Other Laws of Cyberspace, Lessig explores the ways in which code in both senses can be an instrument for social control, leading to his dictum that "Code is law." Lessig later updated his work to keep up with the prevailing views of the time and released the book as Code: Version 2.0 in December 2006.

Remix culture

Lessig has been a proponent of remix culture since the early 2000s. In his 2008 book Remix he presents this as a desirable cultural practice distinct from piracy. Lessig further articulates remix culture as intrinsic to technology and the Internet. Remix culture is therefore an amalgam of practice, creativity, "read/write" culture and the hybrid economy. According to Lessig, the problem with remix arises when it is at odds with stringent US copyright law. He has compared this to the failure of Prohibition, both in its ineffectiveness and in its tendency to normalize criminal behavior. Instead he proposes more lenient licensing, namely Creative Commons licenses, as a remedy to maintain the "rule of law" while combating plagiarism.

Free culture

On March 28, 2004, Lessig was elected to the FSF's board of directors. He proposed the concept of "free culture". He also supports free and open-source software and open spectrum. At his free culture keynote at the O'Reilly Open Source Convention 2002, a few minutes of his speech were about software patents, which he views as a rising threat to free software, open-source software and innovation. In March 2006, Lessig joined the board of advisors of the Digital Universe project. A few months later, Lessig gave a talk on the ethics of the Free Culture Movement at the 2006 Wikimania conference. In December 2006, his lecture On Free, and the Differences between Culture and Code was one of the highlights of 23C3, whose motto was "Who can you trust?". Lessig claimed in 2009 that, because 70% of young people obtain digital information from illegal sources, the law should be changed.
In a foreword to the Freesouls book project, Lessig makes an argument in favor of amateur artists in the world of digital technologies: "there is a different class of amateur creators that digital technologies have ... enabled, and a different kind of creativity has emerged as a consequence." Lessig is also a well-known critic of copyright term extensions.

Net neutrality

Lessig has long been known as a supporter of net neutrality. In 2006, he testified before the US Senate that he believed Congress should ratify Michael Powell's four Internet freedoms and add a restriction on access-tiering, i.e. he does not believe content providers should be charged different amounts. The reason is that the Internet, under its neutral end-to-end design, is an invaluable platform for innovation, and the economic benefit of innovation would be threatened if large corporations could purchase faster service to the detriment of newer companies with less capital. However, Lessig has supported the idea of allowing ISPs to give consumers the option of different tiers of service at different prices. He was reported on CBC News as saying that he has always been in favour of allowing internet providers to charge differently for consumer access at different speeds. He said, "Now, no doubt, my position might be wrong. Some friends in the network neutrality movement as well as some scholars believe it is wrong—that it doesn't go far enough. But the suggestion that the position is 'recent' is baseless. If I'm wrong, I've always been wrong."

Legislative reform

Despite presenting an anti-regulatory standpoint in many fora, Lessig still sees the need for legislative enforcement of copyright.
He has called for limiting copyright terms for creative professionals to five years, but believes their work, much of it independent, would become more easily and quickly available if a bureaucratic renewal procedure were introduced allowing copyrights to be extended for up to 75 years after this five-year term. Lessig has repeatedly taken the stance that privatization through legislation, like that seen in the 1980s in the UK with British Telecommunications, is not the best way to help the Internet grow. He said, "When government disappears, it's not as if paradise will take its place. When governments are gone, other interests will take their place," "My claim is that we should focus on the values of liberty. If there is not government to insist on those values, then who?" "The single unifying force should be that we govern ourselves."

Legal challenges

From 1999 to 2002, Lessig led a high-profile challenge to the Sonny Bono Copyright Term Extension Act. Working with the Berkman Center for Internet and Society, Lessig led the team representing the plaintiff in Eldred v. Ashcroft. The plaintiff in the case was joined by a group of publishers who frequently published work in the public domain and a large number of amici including the Free Software Foundation, the American Association of Law Libraries, the Bureau of National Affairs, and the College Art Association. In March 2003, Lessig acknowledged severe disappointment with his Supreme Court defeat in the Eldred copyright-extension case, where he unsuccessfully tried to convince Chief Justice William Rehnquist, who had sympathies for deregulation, to back his "market-based" approach to intellectual property regulation. In August 2013, Lessig brought suit against Liberation Music PTY Ltd., after Liberation issued a takedown notice for one of Lessig's lectures on YouTube which had used the song "Lisztomania" by the band Phoenix, whom Liberation Music represents.
Lessig sought damages under section 512(f) of the Digital Millennium Copyright Act, which holds parties liable for misrepresentations of infringement or removal of material. Lessig was represented by the Electronic Frontier Foundation and Jones Day. In February 2014, the case ended with a settlement in which Liberation Music admitted wrongdoing in issuing the takedown notice, issued an apology, and paid a confidential sum in compensation.

Killswitch

In October 2014, Killswitch, a film featuring Lawrence Lessig, as well as Aaron Swartz, Tim Wu, and Edward Snowden, received its world premiere at the Woodstock Film Festival, where it won the award for Best Editing. In the film, Lessig frames the story of two young hacktivists, Swartz and Snowden, who symbolize the disruptive and dynamic nature of the Internet. The film reveals the emotional bond between Lessig and Swartz, and how it was Swartz (the mentee) who challenged Lessig (the mentor) to engage in the political activism that led to Lessig's crusade for campaign finance reform. In February 2015, Killswitch was invited to screen at the Capitol Visitor's Center in Washington, DC by Congressman Alan Grayson. The event was held on the eve of the Federal Communications Commission's historic decision on net neutrality. Lessig, Congressman Grayson, and Free Press CEO Craig Aaron spoke about the importance of protecting net neutrality and the free and open Internet. Congressman Grayson called Killswitch "One of the most honest accounts of the battle to control the Internet -- and access to information itself." Richard von Busack of Metro Silicon Valley writes of Killswitch, "Some of the most lapidary use of found footage this side of The Atomic Café". Fred Swegles of the Orange County Register remarks, "Anyone who values unfettered access to online information is apt to be captivated by Killswitch, a gripping and fast-paced documentary."
Kathy Gill of GeekWire asserts that "Killswitch is much more than a dry recitation of technical history. Director Ali Akbarzadeh, producer Jeff Horn, and writer Chris Dollar created a human centered story. A large part of that connection comes from Lessig and his relationship with Swartz."

The Electors Trust

In December 2016, Lessig and Laurence Tribe established The Electors Trust under the aegis of EqualCitizens.US to provide pro bono legal counsel, as well as a secure communications platform, for those of the 538 members of the United States Electoral College who were considering a vote of conscience against Donald Trump in the presidential election. Lessig hosts the podcast Another Way in conjunction with The Young Turks Network.

Money in politics activism

At the iCommons iSummit 07, Lessig announced that he would stop focusing his attention on copyright and related matters and would work on political corruption instead, as the result of a transformative conversation with Aaron Swartz, a young internet prodigy whom Lessig had met through his work with Creative Commons. This new work was partially facilitated through his wiki, Lessig Wiki, which he has encouraged the public to use to document cases of corruption. Lessig criticized the revolving door phenomenon in which legislators and staffers leave office to become lobbyists and have become beholden to special interests. In February 2008, a Facebook group formed by law professor John Palfrey encouraged him to run for Congress from California's 12th congressional district, the seat vacated by the death of Representative Tom Lantos. Later that month, after forming an "exploratory project", he decided not to run for the vacant seat.

Rootstrikers

Despite having decided to forgo running for Congress himself, Lessig remained interested in attempting to change Congress to reduce corruption. To this end, he worked with political consultant Joe Trippi to launch a web-based project called "Change Congress".
In a press conference on March 20, 2008, Lessig explained that he hoped the Change Congress website would help provide technological tools voters could use to hold their representatives accountable and reduce the influence of money on politics. He is a board member of MAPLight.org, a nonprofit research group illuminating the connection between money and politics. Change Congress later became Fix Congress First, and was finally named Rootstrikers. In November 2011, Lessig announced that Rootstrikers would join forces with Dylan Ratigan's Get Money Out campaign, under the umbrella of the United Republic organization. Rootstrikers subsequently came under the aegis of Demand Progress, an organization co-founded by Aaron Swartz.

Article V convention

In 2010, Lessig began to organize for a national Article V convention. He co-founded Fix Congress First! with Joe Trippi. In a speech in 2011, Lessig revealed that he was disappointed with Obama's performance in office, criticizing it as a "betrayal", and he criticized the president for using "the (Hillary) Clinton playbook". Lessig has called for state governments to call for a national Article V convention, including by supporting Wolf-PAC, a national organization attempting to call an Article V convention to address the problem. The convention Lessig supports would be populated by a "random proportional selection of citizens", which he suggested would work effectively. He said "politics is a rare sport where the amateur is better than the professional." He promoted this idea at a September 24–25, 2011, conference he co-chaired with the Tea Party Patriots' national coordinator; in his October 5, 2011, book, Republic, Lost: How Money Corrupts Congress—and a Plan to Stop It; and at the Occupy protest in Washington, DC. Reporter Dan Froomkin said the book offers a manifesto for the Occupy Wall Street protestors, focusing on the core problem of corruption in both political parties and their elections.
An Article V convention does not dictate a solution, but Lessig would support a constitutional amendment that would allow legislatures to limit political contributions from non-citizens, including corporations, anonymous organizations, and foreign nationals, and he also supports public campaign financing and electoral college reform to establish the one person, one vote principle.

New Hampshire Rebellion

The New Hampshire Rebellion is a walk to raise awareness about corruption in politics. The event began in 2014 with a 185-mile march in New Hampshire. In its second year the walk expanded to include other locations in New Hampshire. From January 11 to 24, 2014, Lessig and many others, like New York activist Jeff Kurzon, marched the 185 miles from Dixville Notch, New Hampshire to Nashua to promote the idea of tackling "the systemic corruption in Washington". Lessig chose this language over the related term "campaign finance reform," commenting that "Saying we need campaign finance reform is like referring to an alcoholic as someone who has a liquid intake problem." The walk was to continue the work of NH native Doris "Granny D" Haddock, and was held in honor of deceased activist Aaron Swartz. The New Hampshire Rebellion also marched 16 miles from Hampton to New Castle on the New Hampshire Seacoast. The initial location was also chosen because of its important and visible role in the quadrennial New Hampshire primaries, the traditional first primary of the presidential election.

2016 presidential candidacy

Lessig announced the launch of his long-shot presidential campaign on September 6, 2015. On August 11, 2015, Lessig had announced the launch of an exploratory campaign for the purpose of gauging his prospects of winning the Democratic Party's nomination for president of the United States in the 2016 election. Lessig pledged to seek the nomination if he raised $1 million by Labor Day 2015.
The announcement was widely reported in national media outlets, and was timed to coincide with a media blitz by the Lessig 2016 campaign. Lessig was interviewed in The New York Times and Bloomberg. Campaign messages and Lessig's electoral finance reform positions were circulated widely on social media. His campaign was focused on a single issue: the Citizen Equality Act, a proposal that couples campaign finance reform with other laws aimed at curbing gerrymandering and ensuring voting access. As an expression of his commitment to the proposal, Lessig initially promised to resign once the Citizen Equality Act became law and turn the presidency over to his vice president, who would then serve out the remainder of the term as a typical American president and act on a variety of issues. In October 2015, Lessig abandoned his automatic resignation plan and adopted a full policy platform for the presidency, though he did retain the passage of the Citizen Equality Act as his primary legislative objective. Lessig made a single campaign stop in Iowa, with an eye toward the first-in-the-nation precinct caucuses: at Dordt College, in Sioux Center, in late October. He announced the end of his campaign on November 2, 2015.

Electoral College reform

In 2017, Lessig announced a movement to challenge the winner-take-all Electoral College vote allocation in the various states, called Equal Votes. Lessig was also a counsel for electors in the Supreme Court case Chiafalo v. Washington, where the court decided states could force electors to follow the state's popular vote.

Awards and honors

In 2002, Lessig received the Award for the Advancement of Free Software from the Free Software Foundation (FSF). He also received the Scientific American 50 Award for having "argued against interpretations of copyright that could stifle innovation and discourse online." Then, in 2006, Lessig was elected to the American Academy of Arts and Sciences.
In 2011, Lessig was named to the Fastcase 50, "honoring the law's smartest, most courageous innovators, techies, visionaries, and leaders." Lessig was awarded honorary doctorates by the Faculty of Social Sciences at Lund University, Sweden, in 2013 and by the Université catholique de Louvain in 2014. He received the 2014 Webby Lifetime Achievement award for co-founding Creative Commons and defending net neutrality and the free and open software movement. Personal life Lessig was born in Rapid City, South Dakota, the son of Patricia (West), who sold real estate, and Lester L. "Jack" Lessig, an engineer. He grew up in Williamsport, Pennsylvania. In May 2005, it was revealed that Lessig had experienced sexual abuse by the director of the American Boychoir School, which he had attended as an adolescent. Lessig had previously reached a settlement with the school under confidential terms. He revealed his experiences in the course of representing another student victim, John Hardwicke, in court. In August 2006, he succeeded in persuading the New Jersey Supreme Court to radically restrict the scope of the immunity that had protected nonprofits that failed to prevent sexual abuse from legal liability. Lessig is married to Bettina Neuefeind, a German-born Harvard University colleague; the two married in 1999. He and Neuefeind have three children: Willem, Coffy, and Tess. Defamation lawsuit against the New York Times In 2019, during the criminal investigation of Jeffrey Epstein, it was discovered that the MIT Media Lab, under its former director Joichi Ito, had accepted secret donations from Epstein after Epstein had been convicted on criminal charges. Ito eventually resigned following this discovery. After making supportive comments to Ito, Lessig wrote a Medium post in September 2019 to explain his stance. 
In his post, Lessig acknowledged that universities should not take donations from convicted criminals like Epstein who had become wealthy through actions unrelated to their criminal convictions; however, if such donations were to be accepted, it was better to take them secretly than to publicly connect the university to the criminal. Lessig's essay drew criticism, and about a week later Nellie Bowles of The New York Times interviewed Lessig, who reiterated his stance on such donations broadly. The article used the headline "A Harvard Professor Doubles Down: If You Take Epstein’s Money, Do It in Secret", which Lessig confirmed was based on a statement he had made to the Times. Lessig took issue with the headline overlooking his argument that MIT should not accept such donations in the first place, and also criticized the first two lines of the article, which read "It is hard to defend soliciting donations from the convicted sex offender Jeffrey Epstein. But Lawrence Lessig, a Harvard Law professor, has been trying." He subsequently accused the Times of writing clickbait with a headline crafted to defame him, and stated that the circulation of the article on social media had hurt his reputation. In January 2020, Lessig filed a defamation lawsuit against the Times, including writer Bowles, business editor Ellen Pollock, and executive editor Dean Baquet. The Times stated it would "vigorously" defend against Lessig's claim, saying that what it had published was accurate and had been reviewed by senior editors following Lessig's initial complaints. In April 2020, the New York Times changed its original headline to read: "What Are the Ethics of Taking Tainted Funds? A conversation with Lawrence Lessig about Jeffrey Epstein, M.I.T. and reputation laundering." Lessig subsequently withdrew his defamation lawsuit. Notable cases Golan v. Gonzales (representing multiple plaintiffs) Eldred v. 
Ashcroft (representing plaintiff Eric Eldred; lost) Kahle v. Ashcroft (also see Brewster Kahle; dismissed) United States v. Microsoft (special master and author of an amicus brief addressing the Sherman Act) Lessig was appointed special master by Judge Thomas Penfield Jackson in 1997; the appointment was vacated by the United States Court of Appeals for the District of Columbia Circuit, which ruled that the powers granted to Lessig exceeded the scope of the federal statute providing for special masters; Judge Jackson then solicited Lessig's amicus brief. Lessig said about this appointment: "Did Justice Jackson pick me to be his special master because he had determined I was the perfect mix of Holmes, and Ed Felten? No, I was picked because I was a Harvard Law Professor teaching the law of cyberspace. Remember: So is 'fame' made." MPAA v. 2600 (submitted an amicus brief with Yochai Benkler in support of 2600) McCutcheon v. FEC (submitted an amicus brief in support of the FEC) Chiafalo v. Washington (representing Chiafalo) Bibliography Code and Other Laws of Cyberspace (Basic Books, 1999) The Future of Ideas (Vintage Books, 2001) Free Culture (Penguin, 2004) Code: Version 2.0 (Basic Books, 2006) Remix: Making Art and Commerce Thrive in the Hybrid Economy (Penguin, 2008) Republic, Lost: How Money Corrupts Congress—and a Plan to Stop It (Twelve, 2011) One Way Forward: The Outsider's Guide to Fixing the Republic (Kindle Single/Amazon, 2012) Lesterland: The Corruption of Congress and How to End It (2013, CC-BY-NC) Republic, Lost: The Corruption of Equality and the Steps to End It (Twelve, rev. 
ed., 2015) America, Compromised (University of Chicago Press, 2018) Fidelity & Constraint: How the Supreme Court Has Read the American Constitution (Oxford University Press, 2019) They Don't Represent Us: Reclaiming Our Democracy (Dey Street/William Morrow, 2019) Filmography RiP!: A Remix Manifesto, a 2008 documentary film The Internet's Own Boy: The Story of Aaron Swartz, 2014 documentary film Killswitch, 2015 documentary film The Swamp, 2020 documentary film Kim Dotcom: The Most Wanted Man Online, 2021 documentary film See also Copyleft Free software movement Free content FreeCulture.org Open educational resources Gratis versus libre Open content Law of the Horse Lobbying in the United States Second Constitutional Convention of the United States proposal for constitutional reform Killswitch (film) References External links (includes Curriculum Vitae and Lessig blog 2002–2009) Lessig Blog, beyond 2009 (Presidential Campaign site) 1961 births 21st-century American non-fiction writers 21st-century American politicians Access to Knowledge activists Alumni of Trinity College, Cambridge American bloggers American lawyers American legal scholars American people of German descent American political writers Articles containing video clips Candidates in the 2016 United States presidential election Computer law scholars Copyright activists Copyright scholars Creative Commons-licensed authors Harvard Law School faculty Law clerks of the Supreme Court of the United States Living people Massachusetts Democrats Members of the Creative Commons board of directors Open content activists People from Rapid City, South Dakota People from Williamsport, Pennsylvania Scholars of constitutional law Sexual abuse victim advocates Stanford Law School faculty University of Chicago faculty Webby Award winners Wharton School of the University of Pennsylvania alumni Wired (magazine) people Yale Law School alumni
36732
https://en.wikipedia.org/wiki/Bruce%20Schneier
Bruce Schneier
Bruce Schneier (born January 15, 1963) is an American cryptographer, computer security professional, privacy specialist, and writer. Schneier has been a Lecturer in Public Policy at the Harvard Kennedy School and a Fellow at the Berkman Klein Center for Internet & Society since November 2013. He is a board member of the Electronic Frontier Foundation, Access Now, and The Tor Project, and an advisory board member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the author of several books on general security topics, computer security, and cryptography, and is a squid enthusiast. In 2015, Schneier received the EPIC Lifetime Achievement Award from the Electronic Privacy Information Center. Early life Bruce Schneier is the son of Martin Schneier, a Brooklyn Supreme Court judge. He grew up in the Flatbush neighborhood of Brooklyn, New York, attending P.S. 139 and Hunter College High School. After receiving a physics bachelor's degree from the University of Rochester in 1984, he went to American University in Washington, D.C., where he received his master's degree in computer science in 1988. He was awarded an honorary Ph.D. from the University of Westminster in London, England, in November 2011. The award was made by the Department of Electronics and Computer Science in recognition of Schneier's 'hard work and contribution to industry and public life'. Schneier was a founder and chief technology officer of Counterpane Internet Security (now BT Managed Security Solutions). He was later CTO of Resilient Systems and, after IBM acquired that company, worked for IBM until he left at the end of June 2019. Writings on computer security and general security In 1991, Schneier was laid off from his job and started writing for computer magazines. Later he decided to write a book on applied cryptography "since no such book existed". He took his articles, wrote a proposal for John Wiley & Sons, and the publisher bought it. 
In 1994, Schneier published Applied Cryptography, which details the design, use, and implementation of cryptographic algorithms. "This book allowed me to write more, to start consulting, to start my companies, and really launched me as an expert in this field, and it really was because no one else has written this book. I wanted to read it so I had to write it. And it happened in a really lucky time when everything started to explode on the Internet". In 2010 he published Cryptography Engineering, which is focused more on how to use cryptography in real systems and less on its internal design. He has also written books on security for a broader audience. In 2000, Schneier published Secrets and Lies: Digital Security in a Networked World; in 2003, Beyond Fear: Thinking Sensibly About Security in an Uncertain World; in 2012, Liars and Outliers: Enabling the Trust that Society Needs to Thrive; and in 2015, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. Schneier writes a freely available monthly Internet newsletter on computer and other security issues, Crypto-Gram, as well as a security weblog, Schneier on Security. The blog focuses on the latest threats, and his own thoughts. The weblog started out as a way to publish essays before they appeared in Crypto-Gram, making it possible for others to comment on them while the stories were still current, but over time the newsletter became a monthly email version of the blog, re-edited and re-organized. Schneier is frequently quoted in the press on computer and other security issues, pointing out flaws in security and cryptographic implementations ranging from biometrics to airline security after the September 11 attacks. Schneier revealed on his blog that in the December 2004 issue of the SIGCSE Bulletin, three Pakistani academics, Khawaja Amer Hayat, Umar Waqar Anis, and S. 
Tauseef-ur-Rehman, from the International Islamic University in Islamabad, Pakistan, plagiarized an article written by Schneier and got it published. The same academics subsequently plagiarized another article by Ville Hallivuori on "Real-time Transport Protocol (RTP) security" as well. Schneier complained to the editors of the periodical, which generated a minor controversy. The editor of the SIGCSE Bulletin removed the paper from their website and demanded official letters of admission and apology. Schneier noted on his blog that International Islamic University personnel had requested him "to close comments in this blog entry"; Schneier refused to close comments on the blog, but he did delete posts which he deemed "incoherent or hostile". Viewpoints Blockchain Schneier warns about misplaced trust in blockchain and the lack of use cases, calling blockchain a solution in search of a problem. "What blockchain does is shift some of the trust in people and institutions to trust in technology. You need to trust the cryptography, the protocols, the software, the computers and the network. And you need to trust them absolutely, because they’re often single points of failure." "I’ve never seen a legitimate use case for blockchain. I’ve never seen any system where blockchain provides security in a way that is impossible to provide in any other way." He goes on to say that cryptocurrencies are useless and are only used by speculators looking for quick riches. Cryptography To Schneier, peer review and expert analysis are important for the security of cryptographic systems. Mathematical cryptography is usually not the weakest link in a security chain; effective security requires that cryptography be combined with other things. The term Schneier's law was coined by Cory Doctorow in a 2004 speech. 
Doctorow attributes the law to Bruce Schneier, who wrote in 1998: "Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break. It's not even hard. What is hard is creating an algorithm that no one else can break, even after years of analysis." Similar sentiments had been expressed by others before. In The Codebreakers, David Kahn states: "Few false ideas have more firmly gripped the minds of so many intelligent men than the one that, if they just tried, they could invent a cipher that no one could break", and in "A Few Words On Secret Writing", in July 1841, Edgar Allan Poe had stated: "Few persons can be made to believe that it is not quite an easy thing to invent a method of secret writing which shall baffle investigation. Yet it may be roundly asserted that human ingenuity cannot concoct a cipher which human ingenuity cannot resolve." Schneier also coined the term "kid sister cryptography", writing in the preface to Applied Cryptography that "There are two kinds of cryptography in this world: cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files." Digital rights management Schneier is critical of digital rights management (DRM) and has said that it allows a vendor to increase lock-in. Proper implementation of control-based security for the user via trusted computing is very difficult, and security is not the same thing as control. Schneier insists that "owning your data is a different way of thinking about data." Full disclosure Schneier is a proponent of full disclosure, i.e. making security issues public. Homeland security Schneier has said that homeland security money should be spent on intelligence, investigation, and emergency response. Defending against the broad threat of terrorism is generally better than focusing on specific potential terrorist plots. According to Schneier, analysis of intelligence data is difficult but is one of the better ways to deal with global terrorism. 
Human intelligence has advantages over automated and computerized analysis, and increasing the amount of intelligence data that is gathered does not help to improve the analysis process. Agencies that were designed around fighting the Cold War may have a culture that inhibits the sharing of information; the practice of sharing information is more important and less of a security threat in itself when dealing with more decentralized and poorly funded adversaries such as al Qaeda. Regarding PETN—the explosive that has become terrorists' weapon of choice—Schneier has written that only swabs and dogs can detect it. He also believes that changes to airport security since 11 September 2001 have done more harm than good and he defeated Kip Hawley, former head of the Transportation Security Administration, in an Economist online debate by 87% to 13% regarding the issue. He is widely credited with coining the term "security theater" to describe some such changes. As a Fellow of Berkman Center for Internet & Society at Harvard University, Schneier is exploring the intersection of security, technology, and people, with an emphasis on power. Movie plot threat "Movie-plot threat" is a term Schneier coined that refers to very specific and dramatic terrorist attack scenarios, reminiscent of the behavior of terrorists in movies, rather than what terrorists actually do in the real world. Security measures created to protect against movie plot threats do not provide a higher level of real security, because such preparation only pays off if terrorists choose that one particular avenue of attack, which may not even be feasible. Real-world terrorists would also be likely to notice the highly specific security measures, and simply attack in some other way. The specificity of movie plot threats gives them power in the public imagination, however, so even extremely unrealistic security theater countermeasures may receive strong support from the public and legislators. 
Among many other examples of movie plot threats, Schneier described banning baby carriers from subways, for fear that they might contain explosives. Starting in April 2006, Schneier has held an annual contest to create the most fantastic movie-plot threat. In 2015, during the eighth and final contest, he noted that it may have run its course. System design Schneier has criticized security approaches that try to prevent any malicious incursion, instead arguing that designing systems to fail well is more important. The designer of a system should not underestimate the capabilities of an attacker, as technology may make it possible in the future to do things that are not possible at the present. Under Kerckhoffs's principle, the need for one or more parts of a cryptographic system to remain secret increases the fragility of the system; whether details about a system should be obscured depends upon the availability of persons who can make use of the information for beneficial uses versus the potential for attackers to misuse the information. Cryptographic algorithms Schneier has been involved in the creation of many cryptographic algorithms, including the Blowfish and Twofish block ciphers, the Yarrow and Fortuna random number generators, the Solitaire cipher, and, as part of a team, the Threefish cipher and the Skein hash function. Publications Schneier, Bruce. Applied Cryptography, John Wiley & Sons, 1994. Schneier, Bruce. Protect Your Macintosh, Peachpit Press, 1994. Schneier, Bruce. E-Mail Security, John Wiley & Sons, 1995. Schneier, Bruce. Applied Cryptography, Second Edition, John Wiley & Sons, 1996. Schneier, Bruce; Kelsey, John; Whiting, Doug; Wagner, David; Hall, Chris; Ferguson, Niels. The Twofish Encryption Algorithm, John Wiley & Sons, 1999. Schneier, Bruce; Banisar, David. The Electronic Privacy Papers, John Wiley & Sons, 1997. Schneier, Bruce. Secrets and Lies: Digital Security in a Networked World, John Wiley & Sons, 2000. Schneier, Bruce. Beyond Fear: Thinking Sensibly About Security in an Uncertain World, Copernicus Books, 2003. Ferguson, Niels; Schneier, Bruce. Practical Cryptography, John Wiley & Sons, 2003. Schneier, Bruce. 
Secrets and Lies: Digital Security in a Networked World, John Wiley & Sons, 2004. Schneier, Bruce. Schneier on Security, John Wiley & Sons, 2008. Ferguson, Niels; Schneier, Bruce; Kohno, Tadayoshi. Cryptography Engineering, John Wiley & Sons, 2010. Schneier, Bruce. Liars and Outliers: Enabling the Trust that Society Needs to Thrive, John Wiley & Sons, 2012. Schneier, Bruce. Carry On: Sound Advice from Schneier on Security, John Wiley & Sons, 2013. Schneier, Bruce. Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World, W. W. Norton & Company, 2015. Schneier, Bruce. Click Here to Kill Everybody: Security and Survival in a Hyper-connected World, W. W. Norton & Company, 2018. Schneier, Bruce. We Have Root: Even More Advice from Schneier on Security, John Wiley & Sons, 2019. Activism Schneier is a board member of the Electronic Frontier Foundation. See also Attack tree Failing badly Snake oil (cryptography) Alice and Bob References External links Personal website, Schneier.com Bruce Schneier's books, Schneier.com/books.html Profile of Bruce Schneier in Politico Magazine "Glenn Greenwald's Encryption Guru," by Alex Carp, March 16, 2014 Talking security with Bruce Almighty Schneier at the 2009 RSA conference, video with Schneier participating on the Cryptographer's Panel, April 21, 2009, Moscone Center, San Francisco Bruce Schneier on Real Law Radio, Bruce talks with Bob DiCello on the legal news talk radio program, Real Law Radio, about the case involving a Philadelphia school that allegedly spied on its students via the webcam on their computers (Podcasts/Saturday February 27, 2010). Bruce Schneier at Google, 19 June 2013. Schneier discusses various aspects of Internet computing and global geo-politics including trust, power relations, control, cooperative systems, ethics, laws, and security technologies. 
(55 minutes) Bruce Schneier interviewed on The WELL by Jon Lebkowsky, August 2012 1963 births Living people American cryptographers American technology writers Berkman Fellows 20th-century American Jews American University alumni University of Rochester alumni People associated with computer security Modern cryptographers Cypherpunks Privacy activists American chief technology officers Hunter College High School alumni Writers about computer security Writers from New York City Writers from Minneapolis Wired (magazine) people 21st-century American Jews
44927189
https://en.wikipedia.org/wiki/Cloud%20Cruiser
Cloud Cruiser
Cloud Cruiser is a cloud-based financial management company based in Silicon Valley. The company's software manages the finances of cloud computing and traditional IT environments. It was founded in January 2010 by David Zabrowski and Gregory Howard. Cloud Cruiser has offices in Roseville, California, San Jose, California, and the Netherlands. History Cloud Cruiser was founded in January 2010 by David Zabrowski and Gregory Howard. Zabrowski had previously been a general manager at Hewlett-Packard. He became CEO of Neterion in 2002 and was later Entrepreneur in Residence at Wavepoint Ventures. Howard was a software developer and then the development director for CIMS Lab. He also worked for IBM Corporation and the Computer Task Group. Cloud Cruiser partnered with companies such as HP, Microsoft, VMware, Amazon Marketplace, and Cisco. In July 2010, Wavepoint Ventures led Cloud Cruiser's first round of funding. It included investments from Roger Akers of Akers Capital and other San Francisco Bay Area angel investors. In March 2011, Cloud Cruiser came out of stealth mode and released its eponymous software later that year. In June 2012, the company raised an additional $6 million in series B funding, led by ONSET Ventures. In 2013, Cloud Cruiser became available for Windows Server 2012 R2 through Microsoft Azure. In February 2014, the company's software was made available to OpenStack users through Rackspace. Cloud Cruiser started offering OpenStack integration in 2011. In October 2014, Cloud Cruiser 4 was released. In 2017, Hewlett Packard Enterprise acquired the company. See also Cloud computing Financial management References External links Internal HPE site Companies based in San Jose, California Companies based in Roseville, California Software companies established in 2010 Cloud computing providers Financial software companies Hewlett-Packard acquisitions 2017 mergers and acquisitions
52110877
https://en.wikipedia.org/wiki/MovieRide%20FX
MovieRide FX
MovieRide FX is a patented automated special visual effects video compositing engine used in the MovieRide FX mobile application for Android (requires Android 2.3 or later) and iOS (compatible with iPhone 4 and up, iPad, and iPod Touch (new generation); requires iOS 7 or later). MovieRide FX allows the user to personalize a “Hollywood-style” movie clip by inserting themselves into the clip as the “actor”. Features The MovieRide FX app uses the mobile device's camera to record a video of the user and insert it into a pre-packaged “Hollywood-style” movie clip. The "actor" is extracted from the recorded video clip through various known effects such as masking, keying, and motion tracking. The "actor" is then inserted into one of the pre-packaged movie clips created by the MovieRide FX visual effects artists. This is done through an automated process requiring little or no artistic or technical skill from the user. The custom movie clips pre-packaged with MovieRide FX offer the user a variety of movie scenarios. Additional clips based on popular television and movie themes are continually being developed and are available on a freemium basis. Sharing Once the user's footage has automatically been composited into a movie clip and rendered as an .mp4 file, it can be shared via social media, such as Facebook, YouTube, and Twitter, and by e-mail. History 2012 MovieRide FX was created by Grant Waterston and Johann Mynhardt, who started development in 2012. 2013 The beta version was released on Google Play in July 2013. In August 2013 MovieRide FX was a New Media Award winner in the “New Media” category of the Accolade International Awards in Los Angeles. In October 2013 MovieRide FX was awarded exhibitor space in the ‘start-up village’ at the Apps-World Expo in London. 2014 MovieRide FX reached the 100,000–500,000 downloads category on the Google Play Store in June 2014. The official Android version was launched in July 2014, and the iOS version was released in August 2014. 
MovieRide FX was selected as one of the "Top 150" startups at the Pioneer Festival in Vienna in September 2014. In November 2014 MovieRide FX was shortlisted for the Appster Awards in the “Best Entertainment App” and “Most Innovative App” categories and was awarded exhibitor space at the ‘start-up village’ at the Apps-World Expo in London. Patent applications were filed in South Africa, the EU and USA in April 2014. 2015 In September 2015 MovieRide FX was shortlisted for “Best Software innovation” at The Technology Expo Awards in London. 2016 In April 2016 MovieRide FX was nominated for a National Science and Technology Forum (NSTF) award for 'Research leading to Innovation by a corporate organization' In August 2016 Movie Ride FX won two Gold Awards at the 2016 Mobile Marketing Awards (MMA Smarties SA). These two Gold awards were for the 'Innovation' and 'Best in Show’ categories. In December 2016 FlicJam Inc. was formed in the US to access the larger global market. EU patent application was published in March 2016. 2017 South African patent was granted in February 2017. 2018 US patent was granted in March 2018. References Android (operating system) software IOS software 2013 software Cross-platform mobile software Mobile applications Compositing software Mobile video editing software Information technology in South Africa
1545557
https://en.wikipedia.org/wiki/10979%20Fristephenson
10979 Fristephenson
10979 Fristephenson, provisional designation , is a carbonaceous Sulamitis asteroid from the inner regions of the asteroid belt, approximately in diameter. It was discovered during the Palomar–Leiden Trojan survey on 29 September 1973, by Ingrid and Cornelis van Houten at Leiden, and Tom Gehrels at Palomar Observatory in California, United States. The dark C-type asteroid was named for British historian of astronomy Francis Richard Stephenson. Orbit and classification Fristephenson is a member of the Sulamitis family (), a small family of 300 known carbonaceous asteroids named after 752 Sulamitis. It orbits the Sun in the inner main belt at a distance of 2.3–2.7 AU once every 3 years and 10 months (1,407 days; semi-major axis of 2.46 AU). Its orbit has an eccentricity of 0.08 and an inclination of 6° with respect to the ecliptic. The body's observation arc begins at Palomar on 19 September 1973, ten days prior to its official discovery observation. Palomar–Leiden Trojan survey The survey designation "T-2" stands for the second Palomar–Leiden Trojan survey, named after the fruitful collaboration of the Palomar and Leiden observatories during the 1960s and 1970s. Gehrels used Palomar's Samuel Oschin telescope (also known as the 48-inch Schmidt Telescope), and shipped the photographic plates to Ingrid and Cornelis van Houten at Leiden Observatory, where astrometry was carried out. The trio are credited with several thousand asteroid discoveries. Physical characteristics Fristephenson has an absolute magnitude of 15.1. Based on the Moving Object Catalog (MOC) of the Sloan Digital Sky Survey, the asteroid has the spectrum of a carbonaceous C-type asteroid, which agrees with its classification into the Sulamitis family, as well as with its low geometric albedo measured by the Wide-field Infrared Survey Explorer. As of 2018, no rotational lightcurve has been obtained from photometric observations. The body's rotation period, pole and shape remain unknown. 
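The quoted orbital period and semi-major axis are mutually consistent under Kepler's third law, T² = a³ (with T in years and a in astronomical units); a quick check using the article's figures, where the roughly two-day difference simply reflects rounding of the semi-major axis:

```python
import math

# Kepler's third law for a heliocentric orbit: T^2 = a^3,
# with T in Julian years and a in astronomical units (AU).
a = 2.46                     # semi-major axis from the article, in AU
T_years = math.sqrt(a ** 3)  # orbital period in years
T_days = T_years * 365.25    # converted to days

print(round(T_years, 2), round(T_days))  # ≈ 3.86 years, ≈ 1409 days
```

This agrees with the stated period of 3 years and 10 months (1,407 days).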
Diameter and albedo According to the survey carried out by the NEOWISE mission of NASA's WISE telescope, Fristephenson measures 5.327 kilometers in diameter and its surface has an albedo of 0.057. Naming This minor planet was named after Francis Richard Stephenson (born 1941), a British historian of astronomy at Durham University. The official naming citation was published by the Minor Planet Center on 26 November 2004 (). References External links Asteroid Lightcurve Database (LCDB), query form (info ) Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (10001)-(15000) – Minor Planet Center 010979 Discoveries by Cornelis Johannes van Houten Discoveries by Ingrid van Houten-Groeneveld Discoveries by Tom Gehrels 4171 Minor planets named for people Named minor planets 4386 19730929
439697
https://en.wikipedia.org/wiki/Optical%20mark%20recognition
Optical mark recognition
Optical mark recognition (also called optical mark reading and OMR) is the process of reading information that people mark on surveys, tests and other paper documents. OMR is used to read questionnaires and multiple-choice examination papers in which answers are recorded as shaded areas. OMR background Many OMR devices have a scanner that shines a light onto a form. The device then looks at the contrasting reflectivity of the light at certain positions on the form. It will detect the black marks because they reflect less light than the blank areas of the form. Some OMR devices use forms that are printed on transoptic paper. The device can then measure the amount of light that passes through the paper. It will pick up any black marks on either side of the paper because they reduce the amount of light passing through. In contrast to a dedicated OMR device, desktop OMR software allows a user to create their own forms in a word processor and print them on a laser printer. The OMR software then works with a common desktop image scanner with a document feeder to process the forms once filled out. OMR is generally distinguished from optical character recognition (OCR) by the fact that a complicated pattern recognition engine is not required: the marks are constructed in such a way that there is little chance that the OMR device will not read them correctly. This does require the image to have high contrast and an easily recognizable or irrelevant shape. A related field to OMR and OCR is the recognition of barcodes, such as the UPC bar code found on product packaging. One of the most familiar applications of OMR is the use of #2 pencil (HB in Europe) bubble optical answer sheets in multiple choice question examinations. Students mark their answers, or other personal information, by darkening circles on a form. The sheet is then graded by a scanning machine. 
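The contrast-based detection just described can be sketched in a few lines. The following is illustrative only, not any vendor's actual algorithm: it classifies a bubble region of a scanned grayscale form as filled when its mean pixel value falls below a darkness threshold; the region size and threshold are hypothetical.

```python
# Minimal sketch of contrast-based mark detection (illustrative only).
# The scanned form is modeled as a 2-D list of grayscale pixel values:
# 0 = black (absorbs light), 255 = white (reflects light).

def is_filled(image, row, col, size=10, threshold=128):
    """Return True if the bubble region starting at (row, col)
    reflects little light, i.e. its mean pixel value is dark."""
    region = [image[r][c]
              for r in range(row, row + size)
              for c in range(col, col + size)]
    return sum(region) / len(region) < threshold

# A toy 20x20 "scan": a darkened bubble on the left, blank on the right.
scan = [[0 if c < 10 else 255 for c in range(20)] for r in range(20)]

print(is_filled(scan, 0, 0))   # darkened bubble → True
print(is_filled(scan, 0, 10))  # blank bubble → False
```

Averaging over the whole region is what lets real systems tolerate imprecisely filled ovals, as described below for modern reflectance-based readers.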
In the United States and most European countries, a horizontal or vertical "tick" in a rectangular "lozenge" is the most commonly used type of OMR form; the most familiar form in the United Kingdom is the UK National Lottery form. Lozenge marks represent a later technology that is easier to mark and easier to erase. The large "bubble" marks are legacy technology from very early OMR machines that were so insensitive that a large mark was required for reliability. In most Asian countries, a special marker is used to fill in an optical answer sheet. Students, likewise, mark answers or other information by darkening circles on a pre-printed sheet. The sheet is then automatically graded by a scanning machine. Many of today's OMR applications involve people filling in specialized forms. These forms are optimized for computer scanning, with careful registration in the printing, and careful design so that ambiguity is reduced to the minimum possible. Due to its extremely low error rate, low cost, and ease of use, OMR is a popular method of tallying votes. OMR marks are also added to items of printed mail so that folder inserter equipment can be used. The marks are added to each (normally facing/odd) page of a mail document and consist of a sequence of black dashes that folder inserter equipment scans in order to determine when the mail should be folded and then inserted in an envelope. Optical answer sheet An optical answer sheet or bubble sheet is a special type of form used in multiple choice question examinations. OMR is used to detect answers. The Scantron Corporation creates many optical answer sheets, although certain uses require their own customized system. Optical answer sheets usually have a set of blank ovals or boxes that correspond to each question, often on separate sheets of paper. Bar codes may mark the sheet for automatic processing, and each series of ovals filled will return a certain value when read. 
In this way students' answers can be digitally recorded, or identifying information captured.

Reading

The first optical answer sheets were read by shining a light through the sheet and measuring how much of the light was blocked using phototubes on the opposite side. As some phototubes are mostly sensitive to the blue end of the visible spectrum, blue pens could not be used, as blue inks reflect and transmit blue light. Because of this, number two pencils had to be used to fill in the bubbles: graphite is a very opaque substance that absorbs or reflects most of the light that hits it.

Modern optical answer sheets are read based on reflected light, measuring lightness and darkness. They do not need to be filled in with a number two pencil, though these are recommended over other types (due to the lighter marks made by higher-number pencils and the smudges from number 1 pencils). Black ink will be read, though many systems will ignore marks that are the same color the form is printed in. This also allows optical answer sheets to be double-sided, because marks made on the opposite side will not interfere with reflectance readings as much as with opacity readings. Most systems accommodate human error in filling in ovals imprecisely: as long as the mark does not stray into the other ovals and the oval is almost filled, the scanner will detect it as filled in.

Designing and Printing

Design of OMR Sheet – An OMR sheet must be designed to precise dimensions, with tolerances on the order of 0.05 mm; if the dimensions are not held to this precision, reading accuracy may suffer. It is therefore advisable that the sheet be designed, printed and cut with precision.

Types of OMR Sheet

Single Part – Sheets are printed on 105 gsm to 120 gsm paper on an A4/legal sheet.
Double Part (Carbonless) – Two sheets are printed, one on 105 gsm paper and one on 60–70 gsm paper, on an A4 sheet.
The bottom of the first sheet and the top of the second sheet are chemically treated so that the impression made on the first sheet comes through on the second sheet.

Three Part (Carbonless) – Three sheets are printed, one on 105 gsm paper and the other two on 60–70 gsm paper, on an A4 sheet. The bottom of the first sheet, the top and bottom of the second sheet and the top of the third sheet are chemically treated so that the impression made on the first sheet comes through on the second and third sheets.

Errors

It is possible for optical answer sheets to be printed incorrectly, such that all ovals will be read as filled. This occurs if the outline of the ovals is too thick, or is irregular. During the 2008 U.S. presidential election, this occurred with over 19,000 absentee ballots in the Georgia county of Gwinnett, and was discovered after around 10,000 had already been returned. The slight difference was not apparent to the naked eye, and was not detected until a test run was made in late October. This required all ballots to be transferred to correctly printed ones by sequestered workers.

OMR software

OMR software is a computer software application that makes OMR possible on a desktop computer by using an image scanner to process surveys, tests, attendance sheets, checklists, and other plain-paper forms printed on a laser printer. OMR software is used to capture data from OMR sheets; dedicated data-capturing scanning devices, by contrast, depend on many factors such as paper thickness, the dimensions of the OMR sheet and its design pattern.

Commercial OMR software

One of the first OMR software packages that used images from common image scanners was Remark Office OMR, made by Gravic, Inc. (originally named Principia Products, Inc.). Remark Office OMR 1.0 was released in 1991. The need for OMR software originated because early optical mark recognition systems used dedicated scanners and special pre-printed forms with drop-out colors and registration marks. Such forms typically cost US$0.10 to $0.19 a page.
In contrast, OMR software users design their own mark-sense forms with a word processor or built-in form editor, print them locally on a printer, and can save thousands of dollars on large numbers of forms.

Identifying optical marks within a form, such as for processing census forms, has been offered by many forms-processing (batch transaction capture) companies since the late 1980s. Mostly this is based on a bitonal image and a pixel count, with minimum and maximum pixel counts used to eliminate extraneous marks, such as those left by a dirty eraser, which when converted into a black-and-white (bitonal) image can look like a legitimate mark. This method can therefore cause problems when a user changes their mind, so some products started to use grayscale to better identify the intent of the marker; internally, Scantron and NCS scanners used grayscale.

OMR Development Libraries

Open source OMR software

Some OMR software is developed and distributed under free or open-source licenses.

History

Optical mark recognition (OMR) is the scanning of paper to detect the presence or absence of a mark in a predetermined position. It has evolved from several other technologies. In the early 19th and 20th centuries, patents were granted for machines that would aid the blind. OMR is now used as an input device for data entry. Two early forms of OMR are paper tape and punch cards, which use actual holes punched into the medium instead of pencil-filled circles on the medium. Paper tape was used as early as 1857 as an input device for the telegraph. Punch cards were created in 1890 and were used as input devices for computers. The use of punch cards declined greatly in the early 1970s with the introduction of personal computers. With modern OMR, where the presence of a pencil-filled bubble is recognized, the recognition is done via an optical scanner.
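The bitonal pixel-count method described above, with minimum and maximum counts used to reject extraneous marks, can be sketched as follows. This is a hypothetical illustration; the function name and thresholds are invented for the example.

```python
# Hypothetical sketch of the bitonal pixel-count method: count black
# pixels in a mark zone and accept the mark only when the count lies
# between a minimum (rejecting specks and dirty-eraser residue) and a
# maximum (rejecting blotches far larger than a deliberate mark).

def classify_mark(bitonal_zone, min_pixels, max_pixels):
    """bitonal_zone: 2D list of 0/1 values (1 = black pixel)."""
    count = sum(sum(row) for row in bitonal_zone)
    if count < min_pixels:
        return "blank"        # too few pixels - likely eraser residue
    if count > max_pixels:
        return "extraneous"   # too many pixels - not a deliberate mark
    return "marked"

speck  = [[1, 0], [0, 0]]     # 1 black pixel
bubble = [[1, 1], [1, 1]]     # 4 black pixels
print(classify_mark(speck, 2, 6))   # -> blank
print(classify_mark(bubble, 2, 6))  # -> marked
```

Grayscale readers refine this by weighting each pixel's darkness rather than counting pure black pixels, which better captures the intent of a half-erased mark.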
The first mark sense scanner was the IBM 805 Test Scoring Machine; this read marks by sensing the electrical conductivity of graphite pencil lead using pairs of wire brushes that scanned the page. In the 1930s, Richard Warren at IBM experimented with optical mark sense systems for test scoring, as documented in US Patents 2,150,256 (filed in 1932, granted in 1939) and 2,010,653 (filed in 1933, granted in 1935).

The first successful optical mark-sense scanner was developed by Everett Franklin Lindquist, as documented in US Patent 3,050,248 (filed in 1955, granted in 1962). Lindquist had developed numerous standardized educational tests, and needed a better test scoring machine than the then-standard IBM 805. The rights to Lindquist's patents were held by the Measurement Research Center until 1968, when the University of Iowa sold the operation to Westinghouse Corporation.

During the same period, IBM also developed a successful optical mark-sense test-scoring machine, as documented in US Patent 2,944,734 (filed in 1957, granted in 1960). IBM commercialized this as the IBM 1230 Optical mark scoring reader in 1962. This and a variety of related machines allowed IBM to migrate a wide variety of applications developed for its mark sense machines to the new optical technology. These applications included a variety of inventory management and trouble reporting forms, most of which had the dimensions of a standard punched card.

While the other players in the educational testing arena focused on selling scanning services, Scantron Corporation, founded in 1972, had a different model; it would distribute inexpensive scanners to schools and make profits from selling the test forms. As a result, many people came to think of all mark-sense forms (whether optically sensed or not) as scantron forms. In 1983, Westinghouse Learning Corporation was acquired by National Computer Systems (NCS).
In 2000, NCS was acquired by Pearson Education, where the OMR technology formed the core of Pearson's Data Management group. In February 2008, M&F Worldwide purchased the Data Management group from Pearson; the group is now part of the Scantron brand.

OMR has been used in many situations, as mentioned below. The use of OMR in inventory systems was a transition between punch cards and bar codes, and it is no longer used much for this purpose. OMR is still used extensively for surveys and testing, though.

Usage

The use of OMR is not limited to schools or data collection agencies; many businesses and health care agencies use OMR to streamline their data input processes and reduce input error. OMR, OCR, and ICR technologies all provide a means of data collection from paper forms. OMR may be done using an OMR (discrete read head) scanner or an imaging scanner.

Applications

There are many other applications for OMR, for example:

In the process of institutional research
Community surveys
Consumer surveys
Tests and assessments
Evaluations and feedback
Data compilation
Product evaluation
Time sheets and inventory counts
Membership subscription forms
Lotteries and voting
Geocoding (e.g. postal codes)
Mortgage loan, banking, and insurance applications

Field types

OMR offers different field types to provide the format the questioner desires. These fields include:

Multiple, where there are several options but only one is chosen. For example, the form might ask for one of the options ABCDE; 12345; completely disagree, disagree, indifferent, agree, completely agree; or similar.
Grid, where the bubbles or lines are set up in a grid format for the user to fill in a phone number, name, ID number and so on.
Add, which totals the answers to a single value.
Boolean, answering yes or no to all that apply.
Binary, answering yes or no to only one.
Dotted lines fields, developed by Smartshoot OMR, which allow border dropping like traditional color dropping.
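The field types above can be modelled as different rules for mapping raw bubble readings to values. The following is a hypothetical sketch; the function name, rules and inputs are illustrative and not taken from any real OMR product.

```python
# Hypothetical sketch of how field types might map raw bubble readings
# to values. `marks` is the list of filled option indices for a field.

def read_field(kind, options, marks):
    if kind == "multiple":   # several options, but only one may be chosen
        return options[marks[0]] if len(marks) == 1 else None
    if kind == "grid":       # one digit per column, e.g. a phone number
        return "".join(str(m) for m in marks)
    if kind == "add":        # total the marked values into one number
        return sum(options[m] for m in marks)
    if kind == "boolean":    # yes/no for every option that applies
        return [options[m] for m in marks]
    raise ValueError(f"unknown field type: {kind}")

print(read_field("multiple", "ABCDE", [2]))       # -> C
print(read_field("grid", None, [5, 5, 1, 2]))     # -> 5512
print(read_field("add", [1, 2, 5, 10], [1, 3]))   # -> 12
```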
Capabilities/requirements

Both in the past and at present, some OMR systems require special paper, special ink and a special input reader (Bergeron, 1998). This restricts the types of questions that can be asked and does not allow for much variability when the form is being input. Progress in OMR now allows users to create and print their own forms and use a scanner (preferably one with a document feeder) to read the information. The user is able to arrange questions in a format that suits their needs while still being able to easily input the data. OMR systems approach one hundred percent accuracy and take only about 5 milliseconds on average to recognize a mark. Users can use squares, circles, ellipses and hexagons for the mark zone. The software can then be set to recognize filled-in bubbles, crosses or check marks.

OMR can also be used for personal use. There are all-in-one printers on the market that will print the photos the user selects by filling in bubbles for size and paper selection on a printed index sheet. Once the sheet has been filled in, the individual places it on the scanner to be scanned, and the printer prints the photos according to the marks that were indicated.

Disadvantages

There are also some disadvantages and limitations to OMR. If the user wants to gather large amounts of text, OMR complicates the data collection. There is also the possibility of missing data in the scanning process, and incorrectly numbered or unnumbered pages can be scanned in the wrong order. Also, unless safeguards are in place, a page could be rescanned, providing duplicate data and skewing the results. As a result of the widespread adoption and ease of use of OMR, standardized examinations can consist primarily of multiple-choice questions, changing the nature of what is being tested.
See also

AI effect
Applications of artificial intelligence
Clock mark
Electronic data capture
Mark sense
Object recognition
Optical character recognition
Pattern recognition
Benjamin D. Wood
List of emerging technologies
Outline of artificial intelligence
IOTA (technology)
IOTA is an open-source distributed ledger and cryptocurrency designed for the Internet of things (IoT). It uses a directed acyclic graph to store transactions on its ledger, motivated by a potentially higher scalability than blockchain-based distributed ledgers. IOTA does not use miners to validate transactions; instead, nodes that issue a new transaction on the network must approve two previous transactions. Transactions can therefore be issued without fees, facilitating microtransactions. The network currently achieves consensus through a coordinator node operated by the IOTA Foundation. As the coordinator is a single point of failure, the network is currently centralized.

IOTA has been criticized for its unusual design, and it is unclear whether it will work in practice. As a result, IOTA was rewritten from the ground up for a network update called Chrysalis, or IOTA 1.5, which launched on 28 April 2021. In this update, controversial design decisions such as ternary encoding and quantum-resistant cryptography were left behind and replaced with established standards. A testnet for a follow-up update called Coordicide, or IOTA 2.0, was deployed in late 2020, with the aim of releasing a distributed network that no longer relies on the coordinator for consensus in 2021.

History

The value-transfer protocol IOTA, named after the smallest letter of the Greek alphabet, was created in 2015 by David Sønstebø, Dominik Schiener, Sergey Ivancheglo, and Serguei Popov. Initial development was funded by an online public crowdsale, with the participants buying the IOTA value token with other digital currencies. Approximately 1,300 BTC were raised, corresponding to approximately US$500,000 at that time, and the total token supply was distributed pro rata over the initial investors. The IOTA network went live in 2016.
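The rule that every new transaction must approve two earlier ones can be sketched as a toy directed acyclic graph. This is a hypothetical illustration only: tip selection here is uniformly random, whereas real IOTA nodes use weighted random walks, proof of work and signatures, none of which are modelled.

```python
# Hypothetical toy model of a tangle-style DAG: transactions arriving
# concurrently each approve two current "tips" (transactions not yet
# approved by anyone), which is what lets the ledger branch into a DAG
# rather than a chain.

import random

class Tangle:
    def __init__(self):
        self.approves = {"genesis": []}   # tx -> list of txs it approves
        self.tips = {"genesis"}           # txs with no approvers yet

    def issue_round(self, tx_ids):
        """Transactions arriving in the same round select tips from the
        same snapshot of the tip set."""
        snapshot = sorted(self.tips)
        for tx in tx_ids:
            # approve two tips (the same tip twice if only one exists)
            self.approves[tx] = [random.choice(snapshot),
                                 random.choice(snapshot)]
        approved = {t for tx in tx_ids for t in self.approves[tx]}
        self.tips = (self.tips - approved) | set(tx_ids)

tangle = Tangle()
tangle.issue_round(["tx0", "tx1"])         # both approve the genesis
tangle.issue_round(["tx2", "tx3", "tx4"])  # approve among tx0/tx1
print(len(tangle.approves))                # 6 transactions incl. genesis
```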
IOTA Foundation

In 2017, early IOTA token investors donated 5% of the total token supply for continued development and to endow what later became the IOTA Foundation. In 2018, the IOTA Foundation was chartered as a Stiftung in Berlin, with the goal of assisting in the research and development, education and standardisation of IOTA technology. The IOTA Foundation is a board member of the International Association for Trusted Blockchain Applications (INATBA), and a founding member of the Trusted IoT Alliance and Mobility Open Blockchain Initiative (MOBI), to promote blockchain and distributed ledgers in regulatory approaches, the IoT ecosystem and mobility.

Following a dispute between IOTA founders David Sønstebø and Sergey Ivancheglo, Ivancheglo resigned from the board of directors on 23 June 2019. On 10 December 2020 the IOTA Foundation Board of Directors and supervisory board announced that the Foundation had officially parted ways with David Sønstebø.

DCI vulnerability disclosure

On 8 September 2017, researchers Ethan Heilman from Boston University and Neha Narula et al. from MIT's Digital Currency Initiative (DCI) reported on potential security flaws in IOTA's former Curl-P-27 hash function. The IOTA Foundation received considerable backlash for its handling of the incident. FT Alphaville reported legal posturing by an IOTA founder against a security researcher for his involvement in the DCI report, as well as instances of aggressive language levelled against a Forbes contributor and other unnamed journalists covering the DCI report. The Centre for Blockchain Technologies at University College London severed ties with the IOTA Foundation due to legal threats against security researchers involved in the report.

Attacks

As a speculative blockchain- and cryptocurrency-related technology, IOTA has been the target of phishing, scamming, and hacking attempts, which have resulted in the theft of user tokens and extended periods of downtime.
In January 2018, more than US$10 million worth of IOTA tokens were stolen from users who had used a malicious online seed-creator (a seed is the password that protects ownership of IOTA tokens). The seed-generator scam was the largest fraud in IOTA history to date, with over 85 victims. In January 2019, UK and German law enforcement agencies arrested a 36-year-old man from Oxford, England, believed to be behind the theft.

On 26 November 2019 a hacker discovered a vulnerability in a third-party payment service, provided by MoonPay, integrated in the mobile and desktop wallet managed by the IOTA Foundation. The attacker compromised over 50 IOTA seeds, resulting in the theft of approximately US$2 million worth of IOTA tokens. After receiving reports that hackers were stealing funds from user wallets, the IOTA Foundation shut down the coordinator on 12 February 2020. This had the side effect of effectively shutting down the entire IOTA cryptocurrency. Users at risk were given seven days, until 7 March 2020, to migrate their potentially compromised seed to a new seed. The coordinator was restarted on 10 March 2020.

IOTA 1.5 (Chrysalis) and IOTA 2.0 (Coordicide)

The IOTA network is currently centralized: a transaction on the network is considered valid if and only if it is referenced by a milestone issued by a node operated by the IOTA Foundation, called the coordinator. In 2019 the IOTA Foundation announced that it would like to operate the network without a coordinator in the future, using a two-stage network update, termed Chrysalis for IOTA 1.5 and Coordicide for IOTA 2.0. The Chrysalis update went live on 28 April 2021, and removed controversial design choices such as ternary encoding and Winternitz one-time signatures, to create an enterprise-ready blockchain solution. In parallel, Coordicide is currently being developed, to create a distributed network that no longer relies on the coordinator for consensus.
A testnet of Coordicide was deployed in late 2020, with the aim of releasing a final version in 2021.

Characteristics

The Tangle

The Tangle is the moniker used to describe IOTA's directed acyclic graph (DAG) transaction settlement and data integrity layer. It is structured as a string of individual transactions that are interlinked to each other and stored through a network of node participants. The Tangle does not have miners validating transactions; rather, network participants are jointly responsible for transaction validation, and must confirm two transactions already submitted to the network for every transaction they issue. Transactions can therefore be issued to the network at no cost, facilitating micropayments. To avoid spam, every transaction requires computational resources based on Proof of Work (PoW) algorithms, to find the answer to a simple cryptographic puzzle.

IOTA supports both value and data transfers. A second-layer protocol provides encryption and authentication of messages, or data streams, transmitted and stored on the Tangle as zero-value transactions. Each message holds a reference to the address of a follow-up message, connecting the messages in a data stream and providing forward secrecy. Authorised parties with the correct decryption key can therefore follow a data stream only from their point of entry. When the owner of the data stream wants to revoke access, the owner can change the decryption key when publishing a new message. This gives the owner granular control over the way in which data is shared with authorised parties.

IOTA token

The IOTA token is a unit of value in the IOTA network. There is a fixed supply of 2,779,530,283,277,761 IOTA tokens in circulation on the IOTA network. IOTA tokens are stored in IOTA wallets protected by an 81-character seed, similar to a password. To access and spend the tokens, IOTA provides a cryptocurrency wallet.
A hardware wallet can be used to keep credentials offline while facilitating transactions. As of 8 December 2021, each IOTA token has a value of $1.17, giving the cryptocurrency a market capitalisation of $3.26bn, according to CoinMarketCap data.

Coordinator node

IOTA currently requires a majority of honest actors to prevent network attacks. However, as the concept of mining does not exist on the IOTA network, it is unlikely that this requirement will always be met. Therefore, consensus is currently obtained through referencing of transactions issued by a special node operated by the IOTA Foundation, called the coordinator. The coordinator issues zero-value transactions at given time intervals, called milestones. Any transaction directly or indirectly referenced by such a milestone is considered valid by the nodes in the network. The coordinator is an authority operated by the IOTA Foundation and as such a single point of failure for the IOTA network, which makes the network centralized.

Markets

IOTA is traded in megaIOTA units (1,000,000 IOTA) on digital currency exchanges such as Bitfinex, and listed under the MIOTA ticker symbol. Like other digital currencies, IOTA's token value has soared and fallen.

Fast Probabilistic Consensus (FPC)

The crux of cryptocurrencies is to stop double spends, the ability to spend the same money twice in two simultaneous transactions. Bitcoin's solution has been to use Proof of Work (PoW), making it a significant financial burden to have a minted block be rejected for a double spend. IOTA has designed a voting algorithm called Fast Probabilistic Consensus (FPC) to form a consensus on double spends. Instead of starting from scratch, the IOTA Foundation started with Simple Majority Consensus (SMC), where the first opinion update is defined by

s_i(1) = \begin{cases} 1, & \eta_i(0) \geq \tau \\ 0, & \text{otherwise} \end{cases}

where s_i(t) is the opinion of node i at time t, \eta_i(t) is the fraction of the queried nodes that hold opinion 1, and \tau is the threshold for majority, set by the implementation.
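A toy simulation of this style of threshold voting is sketched below. It is a hypothetical illustration, not the production implementation: the parameters (k queried nodes, threshold tau, and the random-threshold range [beta, 1 - beta] used by FPC's later rounds) are chosen for demonstration.

```python
# Hypothetical simulation of FPC-style voting on one conflict: each node
# queries k random nodes and adopts opinion 1 or 0 when the observed
# fraction of 1-opinions crosses a threshold - fixed in the first round,
# then drawn at random from [beta, 1 - beta] in later rounds.

import random

def fpc(opinions, k=5, tau=0.5, beta=0.3, rounds=20, seed=7):
    rng = random.Random(seed)
    n = len(opinions)
    for t in range(rounds):
        # round 0 uses the fixed threshold tau, later rounds a random one
        u = tau if t == 0 else rng.uniform(beta, 1 - beta)
        new = []
        for i in range(n):
            sample = [opinions[rng.randrange(n)] for _ in range(k)]
            eta = sum(sample) / k
            if eta > u:
                new.append(1)
            elif eta < u:
                new.append(0)
            else:
                new.append(opinions[i])   # keep the old opinion on a tie
        opinions = new
        if len(set(opinions)) == 1:       # all nodes agree - stop early
            break
    return opinions

result = fpc([1] * 60 + [0] * 40)
print(set(result))   # typically collapses to a single shared opinion
```

Randomizing the later thresholds is what makes it hard for an adversary to steer the vote, since it cannot predict which side of the threshold a borderline fraction will fall on.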
After the first round, successive opinions are updated at time t+1 by

s_i(t+1) = \begin{cases} 1, & \eta_i(t) > 1/2 \\ 0, & \eta_i(t) < 1/2 \\ s_i(t), & \text{otherwise} \end{cases}

However, this model is fragile against malicious attackers, which is why the IOTA Foundation decided not to use it. Instead the IOTA Foundation decided to augment the leaderless consensus mechanism called Random neighbors majority consensus (RMC), which is similar to SMC except that the set of nodes whose opinions are queried is randomized. They took RMC and augmented it to create FPC by having the threshold for majority be a random number generated from a decentralized random number generator (dRNG). For FPC, the first round is the same, while for successive rounds

s_i(t+1) = \begin{cases} 1, & \eta_i(t) > U_t \\ 0, & \eta_i(t) < U_t \\ s_i(t), & \text{otherwise} \end{cases}

where U_t \sim \text{Uniform}[\beta, 1-\beta], with \beta \in [0, 1/2], is a randomized threshold for majority. Randomizing the threshold for majority makes it extremely difficult for adversaries to manipulate the consensus by either making it converge to a specific value or prolonging consensus. Note that FPC is only utilized to form consensus on a transaction during a double spend. Ultimately, IOTA uses Fast Probabilistic Consensus for consensus and Proof of Work as a rate controller. Because IOTA does not use PoW for consensus, the network's overall energy use per transaction is extremely small.

Applications and testbeds

Proof-of-concepts building on IOTA technology are being developed in the automotive and IoT industries by corporations such as Jaguar Land Rover, STMicroelectronics and Bosch. IOTA is a participant in smart city testbeds, to establish digital identity, waste management and local trade of energy. In project Alvarium, formed under the Linux Foundation, IOTA is used as an immutable storage and validation mechanism. The privacy-centered search engine Xayn uses IOTA as a trust anchor for its aggregated AI model.

On 11 February 2020, the Eclipse Foundation and IOTA Foundation jointly launched the Tangle EE (Enterprise Edition) Working Group.
Tangle EE is aimed at enterprise users that can take IOTA technology and enable larger organizations to build applications on top of the project, with the Eclipse Foundation providing a vendor-neutral governance framework. Announcements of partners were critically received.

In 2017, IOTA released the data marketplace, a pilot for a market where connected sensors or devices can store, sell or purchase data. The data marketplace was received critically by the cryptocurrency community over the extent of the involvement of its participants, with claims that "the IOTA Foundation was actively asking publications to use Microsoft's name following the data marketplace announcement". Izabella Kaminska criticized a Jaguar press release: "our interpretation is that it's very unlikely Jaguar will be bringing a smart-wallet-enabled marketplace any time soon."

Criticism

IOTA promises to achieve the same benefits that blockchain-based DLTs bring (decentralization, distribution, immutability and trust) while removing the downsides of wasted resources associated with mining, as well as transaction costs. However, several of the design features of IOTA are unusual, and it is unclear whether they work in practice.

The security of IOTA's consensus mechanism against double-spending attacks is unclear as long as the network is immature. Essentially, in the IoT, with heterogeneous devices having varying levels of low computational power, sufficiently strong computational resources will render the tangle insecure. This is a problem in traditional proof-of-work blockchains as well; however, they provide a much greater degree of security through higher fault tolerance and transaction fees. At the beginning, when there is a lower number of participants and incoming transactions, a central coordinator is needed to prevent an attack on the IOTA tangle. Critics have opposed the role of the coordinator as the single source of consensus in the IOTA network.
Polychain Capital founder Olaf Carlson-Wee says: "IOTA is not decentralized, even though IOTA makes that claim, because it has a central 'coordinator node' that the network needs to operate. If a regulator or a hacker shut down the coordinator node, the network would go down." This was demonstrated during the Trinity attack incident, when the IOTA Foundation shut down the coordinator to prevent further thefts.

Following a discovered vulnerability in October 2017, the IOTA Foundation transferred potentially compromised funds to addresses under its control, providing a process for users to later apply to the IOTA Foundation in order to reclaim their funds. Additionally, IOTA has seen several network outages as a result of bugs in the coordinator as well as DDoS attacks. During the seed-generator scam, a DDoS network attack was abused, leaving initial thefts undetected.

In 2020, the IOTA Foundation announced that it would like to operate the network without a coordinator in the future, but implementation of this is still in an early development phase.
List of Nintendo development teams
Nintendo is one of the world's biggest video game development companies, having created several successful franchises. Because of its storied history, the developer employs a methodical system of software and hardware development that is mainly centralized within its offices in Kyoto and Tokyo, in cooperation with its division Nintendo of America in Redmond, Washington. The company also owns several worldwide subsidiaries and funds partner affiliates that contribute technology and software for the Nintendo brand.

Main offices

Nintendo (NCL) has a central office located in Minami-ku, Kyoto, Kyoto Prefecture, Japan, and a nearby building, its pre-2000 headquarters, now serving as a research and development building, located in Higashiyama-ku, Kyoto, Kyoto Prefecture, Japan. Its original Kyoto headquarters can still be found in the city. Additionally, Nintendo has a third operation in Tokyo, Japan, where research and development and manufacturing are conducted. All three offices are interconnected and often hold video conferences for communication and presentation purposes.

In 2009, it was revealed that Nintendo was expanding both its Redmond and Kyoto offices. The new office building complex of Nintendo of America in Redmond would expand its localization, development, debugging, production, and clerical teams. Nintendo also announced the purchase of a 40,000 square-meter lot that would house an all-new research and development (R&D) office, making it easier for the company's two other Kyoto R&D offices to collaborate, as well as expanding the total work force for upcoming console development and new software for current and future hardware.

Nintendo owns several buildings throughout Kyoto and Tokyo housing subsidiary and affiliated companies.
One of the more famous buildings was the Nihonbashi, Chuo-ku, Tokyo building, previously known as the Nintendo Tokyo Prefecture Building and jokingly called "The Pokémon Building", which accommodated the complete Pokémon family, including The Pokémon Company, Creatures Inc., and Genius Sonority. In 2020, Nintendo revealed that it was going to unify all four of its Tokyo buildings into just one. With this, several divisions and affiliated companies came to share the same building, including Game Freak, Nintendo's subsidiary 1-Up Studio and, after 13 years, HAL Laboratory with its Tokyo studio and headquarters.

Buildings

Former offices

Nintendo Sapporo Office – Sapporo, Japan – closed
Nintendo Fukuoka Office – Fukuoka, Japan – closed
Nintendo Tokyo Prefecture Building – Tokyo, Japan – closed
Nintendo Tokyo Office (previous) – Tokyo, Japan – closed

Divisions

Entertainment Planning and Development (EPD)

The Nintendo Entertainment Planning & Development division was created on 16 September 2015, as part of a company-wide organizational restructure that took place under Nintendo's then newly appointed president, Tatsumi Kimishima. The division was created from the merger of two of the company's largest divisions, Entertainment Analysis & Development (EAD) and Software Planning & Development (SPD). The division assumed both of its predecessors' roles, focusing on the development of games and software for Nintendo platforms and mobile devices; it also manages and licenses the company's various intellectual properties. Shinya Takahashi, formerly general manager of the SPD division, serves as general manager of the new division, as well as supervisor for both the Business Development and Development Administration & Support divisions. Katsuya Eguchi and Yoshiaki Koizumi maintained their positions as deputy general managers of EPD, which they previously held under EAD.
Platform Technology Development (PTD)

The Nintendo Platform Technology Development division was created on 16 September 2015, as part of a company-wide organizational restructure that took place under Nintendo's then newly appointed president, Tatsumi Kimishima. The division was created from the merger of two Nintendo divisions: Integrated Research & Development (IRD), which specialized in hardware development, and System Development (SDD), which specialized in operating system development, its development environment and network services. The new division assumed both of its predecessors' roles. Ko Shiota, formerly deputy general manager of the IRD division, serves as general manager (GM), while Takeshi Shimada, formerly deputy general manager of the Software Environment Development Department of the SDD division, serves as deputy general manager.

Business Development Division (BDD)

The Nintendo Business Development division was formed following Nintendo's foray into software development for smart devices, such as mobile phones and tablets. It is responsible for refining Nintendo's business model for the dedicated game system business, and for furthering Nintendo's venture into development for smart devices.

Subsidiaries

Although most of the research and development is done in Japan, there are also R&D facilities in the United States, Europe and China.

Nintendo Software Technology (NST)

Nintendo Software Technology Corp. (or NST) is an American video game developer located inside Nintendo of America's headquarters in Redmond, Washington. The studio was created by Nintendo as a first-party developer to create games for the North American market, though its games have also been released in other territories such as Europe and Japan, exclusively for Nintendo consoles. The studio's best-known projects include the Mario vs. Donkey Kong series, the Crosswords series, Wii Street U and other video games and applications.
Nintendo Technology Development (NTD)

Nintendo Technology Development Inc. (or NTD) is a Washington-based, hardware-focused research and development group for Nintendo. The group focuses on the creation of various software technologies, hardware tools, and SDKs for first-party use and third-party licensing across Nintendo platforms, in collaboration with the Nintendo Integrated Research & Development division led by Genyo Takeda. Several side projects and unreleased prototypes are commonly linked to this Washington-based subsidiary. NTD is also responsible for some low-level coding.

Nintendo European Research and Development (NERD)

Nintendo European Research & Development SAS (or NERD), formerly known as Mobiclip, is a Nintendo subsidiary located in Paris, France. The team currently focuses on developing software technologies, such as video compression, and middleware for Nintendo platforms. While an independent company, Mobiclip licensed video codecs to Sony Pictures Digital, Fisher-Price and Nintendo for the Game Boy Advance, Nintendo DS, Wii and Nintendo 3DS. The team has recently been involved in the development of the Wii U Chat application, in co-operation with Vidyo.

Most external first-party software development is done in Japan, since the only overseas subsidiaries are Retro Studios in the United States and Next Level Games in Canada. Although these studios are all subsidiaries of Nintendo, they are often referred to as external resources by the Nintendo Software Planning & Development division when involved in joint development processes with Nintendo's internal developers.

1-Up Studio

1-Up Studio, formerly Brownie Brown, is a Japanese Nintendo-funded and owned video game development studio opened on 30 June 2000 and based in Tokyo, Japan.
On 1 February 2013, Brownie Brown announced on their official website that, due to their recent co-development efforts with Nintendo, they were undergoing a change in internal structure, which included changing the company's name to 1-Up Studio. The studio is known for the development of the Magical Vacation series, Mother 3 and A Kappa's Trail. Since 2013, it has served as a development support studio for Nintendo EPD.

iQue

iQue, originally a Chinese joint venture between its founder, Wei Yen, and Nintendo, manufactures and distributes official Nintendo consoles and games for the mainland Chinese market under the iQue brand. The product lineup for the Chinese market is considerably different from that for other markets. For example, Nintendo's only console in China is the iQue Player, a modified version of the Nintendo 64. In 2013, the company became a fully owned subsidiary of Nintendo. Since 2016 it has handled translation and localization of Nintendo games into simplified Chinese. In 2018 it stopped manufacturing consoles in China, and in 2019 it began hiring programmers and testers to transition into a development support company for Nintendo EPD.

Mario Club

Originally a team within Nintendo itself, Mario Club Co., Ltd. was separated into a subsidiary in July 2009. The studio handles testing, quality control and debugging for Nintendo-published titles.

Monolith Soft

Monolith Soft is a Japanese video game development company that has created video games for the PlayStation 2, Nintendo GameCube, Wii, Nintendo DS, and cell phones. The company currently has two main studios: its Tokyo Software Development Studio, which is housed in the company's headquarters, and the recently opened Kyoto Software Development Studio. The company was previously owned by Bandai Namco until 2007, when Bandai Namco transferred 80% of its 96% stake to Nintendo. The remaining 16% was later sold as well, so the company is currently 96% owned by Nintendo and 4% by third parties.
A majority of Monolith Soft's staff are former employees of Square Co., who transferred to the new company shortly after the creation of Chrono Cross. They were previously involved with the creation of Xenogears, from which the Xenosaga series is derived. Monolith Soft's Tokyo Software Development Studio is usually associated with the Xeno series, the Baten Kaitos series and Disaster: Day of Crisis, while its Kyoto Software Development Studio currently serves as a development co-operation studio.

NDcube

NDcube Co., Ltd. (エヌディーキューブ株式会社 Enudī Kyūbu Kabushiki Gaisha) is a Nintendo subsidiary and Japanese video game developer based in Japan, with offices in Tokyo and Sapporo. The company was founded on 1 March 2000 through a joint venture between Nintendo and advertising firm Dentsu, hence the "Nd" in the name. In 2010, Nintendo bought out 96% of the shares, with ad partner Dentsu stepping aside. Since its founding, NDcube has kept a low profile, working on various Japanese GameCube and Game Boy Advance titles. Two notable games that reached western shores are F-Zero: Maximum Velocity and Tube Slider. As seen in the credits for Mario Party 9, NDcube houses many ex-Hudson Soft employees, several of whom previously worked on earlier entries in the Mario Party series. The company is currently best known for the Wii Party series and for taking over the Mario Party series after Hudson Soft was absorbed into Konami.

Next Level Games

Next Level Games is a Canadian video game developer based in Vancouver. The company has been working with Nintendo since 2005, starting with Super Mario Strikers, and since 2014 it has worked exclusively under contract with Nintendo. In January 2021, Nintendo revealed that it had purchased Next Level Games, after over a decade of working with the developer on a contract basis, the last six years exclusively.
Next Level Games has worked on the two most recent entries in the Luigi's Mansion series, the Mario Strikers series, Punch-Out!! for the Wii, and Metroid Prime: Federation Force for the Nintendo 3DS.

Retro Studios

Retro Studios, Inc. is an American video game developer based in Austin, Texas. The company was founded in October 1998 by Nintendo and video game veteran Jeff Spangenberg, after he left Acclaim Entertainment, as an independent studio making games exclusively for Nintendo. The studio started with four Nintendo GameCube projects whose development was chaotic and unproductive and did not impress Nintendo producer Shigeru Miyamoto, but he suggested they create a new game in the Metroid series. Eventually the four games in development were cancelled so Retro could focus only on Metroid Prime, which was released for the GameCube in 2002, the same year Nintendo acquired the studio outright by purchasing the majority of Spangenberg's stock. Retro Studios is now one of the most renowned Nintendo first-party developers, thanks to the development of the Metroid Prime series, its assistance on Mario Kart 7, and its revival of the Donkey Kong Country series.

SRD

SRD Co., Ltd., also known as Systems Research and Development, is currently a Nintendo subsidiary located in Kyoto, Japan. The company was founded in 1979 and began work with Nintendo on the Famicom in 1982. Since then it has assisted in the programming of nearly every Nintendo-developed game on nearly every Nintendo console. During Nintendo's early years, SRD was essentially Nintendo's programming team, as Nintendo had no in-house programmers until the 1990s; F-Zero was the last title on which SRD worked as the main programmers. After this, SRD became a programming support company for Nintendo and continued as such until February 2022, when Nintendo acquired the company as a subsidiary.
Affiliate companies Former divisions and subsidiaries References Nintendo divisions and subsidiaries development teams
1408207
https://en.wikipedia.org/wiki/Nokia%20DX%20200
Nokia DX 200
DX 200 is a digital switching platform developed by Nokia Networks.

Architecture

DX 200 is a versatile, fault-tolerant, modular and highly scalable telephone exchange and general-purpose server platform, designed for high-performance, high-availability applications. Its hardware is built from loosely coupled redundant computer units, backed by a distributed software architecture. The architecture of DX 200 allows live migration as well as software updates during live operation. Unlike many other switching platforms, DX 200 performs live software updates without code patching, so running code is not polluted by unnecessary jump instructions. Furthermore, as opposed to the "integration guessing" of various software patches, the DX 200 architecture makes proper integration testing of software components possible. A live software update requires two computer units: one executes the old code ("working"), while the other has the new software loaded but is otherwise idle ("spare"). During a process called "warming", memory areas (e.g. dynamically allocated memory, with the exception of procedure stacks) are moved from the old to the new computer unit. This implies that the handling of data structures must be compatible between the old and the new software versions. Copying the data does not require any programming effort, as long as allocation of data is done using the TNSDL language. Developing software for the DX 200 platform is rather straightforward for any well-educated software developer. The TNSDL language, which plays a vital role in producing asynchronously communicating fault-tolerant software modules, is easy to learn. The software architecture of DX 200 is a combination of highly efficient traditional solutions and a modern actor-model-based, highly concurrent design. DX 200 products are known for availability exceeding 99.999% ("five nines") as well as high performance.

Applications

DX 200 is a generic architecture, suitable for versatile computing applications.
Applications include traditional Mobile Switching Centers (MSC), LTE mobile packet switching gateways, as well as VoIP application servers.

Operating systems

Any generic operating system can be ported to DX 200 relatively easily. Linux, ChorusOS and DMX are the operating systems most frequently used on DX 200. DMX is the 'native' OS of DX 200. DMX has a microkernel architecture; advanced functions, like the TCP/IP stack and live migration components, are implemented as separate libraries.

Hardware flavors

DX 200 has several hardware flavors.
Sub-rack DX 200: Computer units are built up from several cards, packed together as sub-racks. Very similar to the old-style PC architecture, where the motherboard did not contain every vital piece; the disk controller, video card, network card, etc. were on separate extension cards.
Cartridge DX 200: Computer units are standalone cards. Similar to the modern PC architecture, where "everything" is integrated into the motherboard.
ATCA DX 200: Advanced Telecommunications Computing Architecture industry-standard hardware.
IPA2800: A specific version of Cartridge DX 200, suitable for very high-throughput real-time media processing and transmission. Typical applications are media gateways and Radio Network Controllers.

Reborn

Nokia Networks shifted its focus from hardware products to software and services. The highly valuable business logic was kept, while the products which used to run on DX 200 hardware variants are now available as cloud solutions, running on generic multi-purpose hardware from various vendors. The first generation of such products used virtual machines in place of DX 200 computer units. That software architecture supports virtual environments like Linux's KVM or VMware products. Resource usage dynamically adapts to needs, giving optimal performance while reducing costs.
The next generation, in line with leading industry trends, is based on Linux, uses software containers, and is focused on microservices, dynamic service discovery and orchestration. In place of the live migration offered by the TNSDL language, session data are stored in a distributed database. While live migration moved sessions from one specific computer to another, in the new architecture sessions may be moved between containers, thus allowing scaling in (reducing the number of containers and VMs) in the system. Storing session data in a more generic serialized format (e.g. JSON) instead of binary structures also allows more relaxed version updates.

History

Development of the system started at Televa, the Finnish state-owned telecom equipment producer, in the early 1970s, under the leadership of Keijo Olkkola. The first order was received in 1973 for a 100-subscriber local exchange for the small and remote island community of Houtskär, to be delivered in 1979. After the first installation in 1982, the DX 200 captured a 50% share of the Finnish fixed-line exchange market. The exchange's modular design and the development of microprocessor technology enabled a gradual increase in the system's capacity. By 1987 the installed base had grown to 400,000 subscriber lines. Early export markets included China, Nepal, the United Arab Emirates, Sri Lanka, Sweden, Turkey and the Soviet Union. In 1984, development of a version of the exchange for the Nordic Mobile Telephone network was started. In 1991, the world's first GSM call was made using Nokia devices; the core network components were based on the DX 200 platform. In 2005, a DX 200-based VoIP server was provided to the Finnish operator Saunalahti, providing a state-of-the-art fixed-mobile convergence solution. This is a prime example of how well DX 200 is suited to internet server development, and of the overall flexibility of the DX 200 platform.
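The version-tolerance benefit of generic serialization mentioned above can be illustrated with a small sketch. This is a hypothetical example, not DX 200 code; the field names and defaults are invented for illustration. The point is that a newer software version can read a JSON session record written by an older version, supplying defaults for fields the old version did not yet have, which is much harder with fixed binary structures.

```python
import json

# Defaults for fields introduced by a hypothetical newer software version.
NEW_FIELD_DEFAULTS = {"codec": "AMR-NB"}

def load_session(serialized):
    """Deserialize a session record, tolerating older schema versions
    by filling in defaults for fields they did not yet contain."""
    return {**NEW_FIELD_DEFAULTS, **json.loads(serialized)}

# A record written by an "old" version, with no 'codec' field.
old_record = json.dumps({"call_id": 42, "state": "connected"})
session = load_session(old_record)  # loads cleanly, codec defaulted
```

Because the merge puts the defaults first, any field present in the stored record wins, so newer records that do carry a codec value are left untouched.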
In 2009, the world's first voice calls in LTE networks were made using commercial, 3GPP-standardized user and network equipment. In 2013, NSN demonstrated its operational telco cloud solution, which marked the end of the traditional DX 200 hardware product line.

References

DX 200 Telephone exchange equipment
1609942
https://en.wikipedia.org/wiki/Forward-confirmed%20reverse%20DNS
Forward-confirmed reverse DNS
Forward-confirmed reverse DNS (FCrDNS), also known as full-circle reverse DNS, double-reverse DNS, or iprev, is a networking parameter configuration in which a given IP address has both forward (name-to-address) and reverse (address-to-name) Domain Name System (DNS) entries that match each other. This is the standard configuration expected by the Internet standards supporting many DNS-reliant protocols. David Barr published an opinion in RFC 1912 (Informational) recommending it as a best practice for DNS administrators, but there are no formal requirements for it codified within the DNS standard itself. FCrDNS verification can create a weak form of authentication that there is a valid relationship between the owner of a domain name and the owner of the network that has been given an IP address. While weak, this authentication is strong enough that it can be used for whitelisting purposes, because spammers and phishers cannot usually bypass this verification when they use zombie computers for email spoofing. That is, the reverse DNS might verify, but it will usually be part of a different domain than the claimed domain name. Using an ISP's mail server as a relay may solve the reverse DNS problem, because the requirement is only that the forward and reverse lookups for the sending relay match; they do not have to be related to the from-field or sending domain of the messages it relays. Other methods for establishing a relation between an IP address and a domain in email are the Sender Policy Framework (SPF) and the MX record. ISPs that will not or cannot configure reverse DNS will generate problems for hosts on their networks, by virtue of being unable to support applications or protocols that require the reverse DNS to agree with the corresponding A (or AAAA) record. ISPs that cannot or will not provide reverse DNS ultimately will be limiting the ability of their client base to use Internet services they provide effectively and securely.
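The check itself is simple to express in code. The sketch below is illustrative, not taken from any standard: it uses Python's standard `socket` module to resolve the IP to a hostname via its PTR record, resolve that hostname back to addresses, and confirm the original IP is among them. The lookup functions are injectable so the logic can be exercised without live DNS.

```python
import socket

def fcrdns_check(ip,
                 reverse_lookup=socket.gethostbyaddr,
                 forward_lookup=socket.gethostbyname_ex):
    """Return the PTR hostname if `ip` passes forward-confirmed
    reverse DNS, otherwise None.

    1. Reverse lookup: ip -> hostname (PTR record).
    2. Forward lookup: hostname -> list of addresses.
    3. Confirm the original ip is among the forward results.
    """
    try:
        hostname, _aliases, _addrs = reverse_lookup(ip)
        _name, _fwd_aliases, addresses = forward_lookup(hostname)
    except OSError:  # no PTR record, NXDOMAIN, timeout, ...
        return None
    return hostname if ip in addresses else None
```

A mail server applying such a check would typically record the confirmed hostname when the check passes and treat a mismatch as a weak negative signal rather than grounds for outright rejection.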
Applications

Most e-mail mail transfer agents (server software) use FCrDNS verification and, if there is a valid domain name, put it into the "Received:" trace header field. Some e-mail mail transfer agents will perform FCrDNS verification on the domain name given in the SMTP HELO and EHLO commands. This can violate RFC 2821, so e-mail is usually not rejected by default. The Sender Policy Framework email anti-forgery system uses a FCrDNS check in its "ptr:" mechanism. However, the use of this "ptr:" mechanism has been discouraged since the first standardization of SPF in 2006 (in RFC 4408). Some e-mail spam filters use FCrDNS checks as an authentication method for domain names or for whitelisting purposes, according to RFC 8601, for example. SpamCop uses the FCrDNS check, which sometimes causes problems for SpamCop users who are also customers of Internet service providers who do not provide properly matching DNS and rDNS records for mail servers. Some FTP, Telnet and TCP Wrapper servers perform FCrDNS checks. Some IRC servers perform FCrDNS checks to prevent abuse.

References

Domain Name System Email authentication Internet protocols Network protocols
19074058
https://en.wikipedia.org/wiki/ThinkVantage%20Technologies
ThinkVantage Technologies
ThinkVantage Technologies is a set of system support utilities intended to reduce the total cost of ownership of Lenovo-brand desktop and laptop computers.

Utilities
Access Connections – to graphically and securely manage and switch network connections between ethernet, wireless LAN, and wireless WAN
Lenovo Mobile Broadband Activation – to support mobile broadband activation for Windows 7, Windows Vista, and Windows XP laptops that support it
Active Protection System – to enable the accelerometer to halt a laptop's spinning-platter hard drive
Password Manager – to save user login information for websites and Windows applications, and subsequently auto-fill those passwords on their respective sites and applications
Client Security Solution – to manage passwords, encryption keys and electronic credentials
Fingerprint Software – to manage biometric data for built-in fingerprint readers
GPS – for GPS use on Windows XP, Vista, and 7 computers
LANDesk for ThinkVantage – for client management; a Lenovo-exclusive version of LANDesk Management Suite designed to integrate with other ThinkVantage software
Productivity Center – to access online documentation and tools
Power Manager – to manage power usage in Windows XP, Vista, and 7 ThinkPad laptops as well as Windows XP, Vista, 7, and 8 desktops
ThinkPad Help Center – to access a user's guide for the ThinkVantage suite
Access Help online User's Guide – to search an online database of help documents
Secure Data Disposal – to shred confidential information
System Update (TVSU) – to generate a system-tailored list of updates with their respective descriptions, and a choice of installation methods
Base Software Administrator – to customize Lenovo preloads
Lenovo QuickLaunch – to provide a simplified, customized version of Windows' Start menu
Lenovo Solution Center – to manage the ThinkVantage suite and certain system upkeep tasks on Windows 7 and 8
Lenovo SimpleTap – for Windows 7, to provide easy access to on-screen tiles on touch-enabled ThinkPads and tablets, as well as certain ThinkCentre systems with multi-touch screens

The Lenovo ThinkVantage Technologies that can also run on some other platforms are:
System Migration Assistant – to transfer a user's personal data and environment between PC systems
Rescue and Recovery – to deploy updates, recover from crashes, and provide remote access if the system will not boot or function while booted

Legacy ThinkVantage software
ImageUltra Builder – to create distributable software structures
Hardware Password Manager – to save BIOS, disk, and motherboard passwords in one place

IBM developed ThinkVantage Technologies. The tools were included with the sale of IBM's PC division to Lenovo Group in 2005.

History

In 2002 IBM heavily promoted these tools as part of its "Think" campaign, intended to instill confidence that IBM computers were easier to use and quicker to recover from disaster. In 2004 IBM made two of the utilities, Rescue and Recovery with Rapid Restore and IBM System Migration Assistant, separately available for use on non-IBM systems.

References

External links
ThinkVantage Applications
ThinkVantage Home

Thinkvantage Lenovo
167999
https://en.wikipedia.org/wiki/RISC%20OS
RISC OS
RISC OS is a computer operating system originally designed by Acorn Computers Ltd in Cambridge, England. First released in 1987, it was designed to run on the ARM chipset, which Acorn had designed concurrently for use in its new line of Archimedes personal computers. RISC OS takes its name from the reduced instruction set computer (RISC) architecture it supports. Between 1987 and 1998, RISC OS was included in every ARM-based Acorn computer model, including the Acorn Archimedes line, Acorn's R line (with RISC iX as a dual-boot option), RiscPC, A7000, and prototype models such as the Acorn NewsPad and Phoebe computer. A version of the OS, named NCOS, was used in Oracle Corporation's Network Computer and compatible systems. After the break-up of Acorn in 1998, development of the OS was forked and continued separately by several companies, including RISCOS Ltd, Pace Micro Technology, and Castle Technology. Since then, it has been bundled with several ARM-based desktop computers such as the Iyonix PC and A9home. Today, the OS remains forked and is independently developed by RISC OS Open Limited and the community. The most recent stable versions run on the ARMv3/ARMv4 RiscPC, the ARMv5 Iyonix, ARMv7 Cortex-A8 processors (such as that used in the BeagleBoard and Touch Book) and Cortex-A9 processors (such as that used in the PandaBoard), and the low-cost educational Raspberry Pi computer. SD card images have been released for download free of charge to Raspberry Pi 1, 2, 3, & 4 users, with a full graphical user interface (GUI) version and a command-line-only version (RISC OS Pico, at 3.8 MB).

History

RISC OS was originally released in 1987 as Arthur 1.20. The next version, Arthur 2, became RISC OS 2 and was released in April 1989. RISC OS 3.00 was released with the A5000 in 1991, and contained many new features. By 1996, RISC OS had been shipped on over 500,000 systems. Acorn officially halted work on the OS in January 1999, renaming themselves Element 14.
In March 1999 a new company, RISCOS Ltd, licensed the rights to develop a desktop version of RISC OS from Element 14, and continued the development of RISC OS 3.8, releasing it as RISC OS 4 in July 1999. Meanwhile, Element 14 had also kept a copy of RISC OS 3.8 in house, which they developed into NCOS for use in set-top boxes. In 2000, Element 14 sold RISC OS to Pace Micro Technology, who later sold it to Castle Technology Ltd. In May 2001, RISCOS Ltd launched RISC OS Select, a subscription scheme allowing users access to the latest RISC OS 4 updates. These upgrades are released as soft-loadable ROM images, separate to the ROM where the boot OS is stored, and are loaded at boot time. Select 1 was shipped in May 2002, with Select 2 following in November 2002 and the final release of Select 3 in June 2004. In the same month, RISC OS 4.39, dubbed RISC OS Adjust, was released. RISC OS Adjust was a culmination of all the Select Scheme updates to date, released as a physical set of replaceable ROMs for the RiscPC and A7000 series of machines. Meanwhile, in October 2002, Castle Technology released the Acorn clone Iyonix PC. This ran a 32-bit (in contrast to 26-bit) variant of RISC OS, named RISC OS 5. RISC OS 5 is a separate evolution of RISC OS based upon the NCOS work done by Pace. The following year, Castle Technology bought RISC OS from Pace for an undisclosed sum. In October 2006, Castle announced a shared source license plan, managed by RISC OS Open Limited, for elements of RISC OS 5. In October 2018, RISC OS 5 was re-licensed under the Apache 2.0 license. In December 2020, the source code of RISC OS 3.71 was leaked to The Pirate Bay. Supported hardware Versions of RISC OS run or have run on the following hardware. RISC OS Open Limited adopted the 'even numbers are stable' version numbering scheme post version 5.14, hence some table entries above include two latest releases – the last stable one and the more recent development one. 
A special cut-down version, RISC OS Pico (for 16 MiB cards and larger), styled to start up like a BBC Micro, was released for BASIC's 50th anniversary. RISC OS has also been used by both Acorn and Pace Micro Technology in various TV-connected set-top boxes, sometimes referred to instead as NCOS. RISC OS can also run on a range of computer system emulators that emulate the earlier Acorn machines listed above.

Features

OS core

The OS is single-user and employs cooperative multitasking (CMT). While most current desktop OSes use preemptive multitasking (PMT) and multithreading, RISC OS remains a CMT system. By 2003, many users had called for the OS to migrate to PMT. The OS memory protection is not comprehensive. The core of the OS is stored in ROM, giving a fast boot-up time and safety from operating system corruption. RISC OS 4 and 5 are stored in flash memory, or as a ROM image on an SD card on single-board computers such as the BeagleBoard or Raspberry Pi, allowing the operating system to be updated without having to replace the ROM chip. The OS is made up of several modules. These can be added to and replaced, including soft-loading of modules not present in ROM at run time and on-the-fly replacement. This design has led to OS developers releasing rolling updates to their versions of the OS, while third parties are able to write OS replacement modules to add new features. OS modules are accessed via software interrupts (SWIs), similar to system calls in other operating systems. Most of the OS has defined application binary interfaces (ABIs) to handle filters and vectors. The OS provides many ways in which a program can intercept and modify its operation. This simplifies the task of modifying its behaviour, either in the GUI, or deeper. As a result, there are several third-party programs which allow customising the OS look and feel.
File system

The file system is volume-oriented: the top level of the file hierarchy is a volume (disc, network share) prefixed by the file system type. To determine file type, the OS uses metadata instead of file extensions. Colons are used to separate the file system from the rest of the path; the root is represented by a dollar ($) sign and directories are separated by a full stop (.). Extensions from foreign file systems are shown using a slash (example.txt becomes example/txt). For example, ADFS::HardDisc4.$ is the root of the disc named HardDisc4 using the Advanced Disc Filing System (ADFS). RISC OS filetypes can be preserved on other systems by appending the hexadecimal type as ',xxx' to filenames. When using cross-platform software, filetypes can be invoked on other systems by appending '/[extension]' to the filename under RISC OS. A file system can present a file of a given type as a volume of its own, similar to a loop device. The OS refers to this function as an image filing system. This allows transparent handling of archives and similar files, which appear as directories with some special properties. Files inside the image file appear in the hierarchy underneath the parent archive. It is not necessary for the archive to contain the data it refers to: some symbolic link and network share file systems put a reference inside the image file and go elsewhere for the data. The file system abstraction layer API uses 32-bit file offsets, making the largest single file 4 GiB (minus 1 byte) long. However, prior to RISC OS 5.20 the file system abstraction layer and many RISC OS-native file systems limited support to 31 bits (just under 2 GiB) to avoid dealing with apparently negative file extents when expressed in two's complement notation.

File formats

The OS uses metadata to distinguish file formats. Some common file formats from other systems are mapped to filetypes by the MimeMap module.
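The path conventions described above can be illustrated with a small sketch. The helper functions are hypothetical, invented for this example, and not part of RISC OS or any real tool: dots inside a foreign filename become slashes, and a full path is assembled from the file system name, the volume, the $ root and dot-separated directories.

```python
def to_riscos_leafname(name):
    """Map a foreign filename to the RISC OS convention:
    'example.txt' -> 'example/txt' (extension dot becomes a slash)."""
    return name.replace(".", "/")

def riscos_path(filesystem, disc, dirs, leaf):
    """Assemble a full RISC OS path such as
    ADFS::HardDisc4.$.Docs.example/txt from its components."""
    parts = ["$"] + list(dirs) + [to_riscos_leafname(leaf)]
    return "{}::{}.{}".format(filesystem, disc, ".".join(parts))
```

For instance, `riscos_path("ADFS", "HardDisc4", ["Docs"], "example.txt")` assembles the same kind of path shown in the text, with the colon pair separating the file system from the volume and full stops separating directories.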
Kernel

The RISC OS kernel is single-tasking and controls the handling of interrupts, DMA services, memory allocation and the video display; the cooperative multi-tasking is provided by the WindowManager module.

Desktop

The WIMP interface is based on a stacking window manager and incorporates three mouse buttons (named Select, Menu and Adjust), context-sensitive menus, window order control (i.e. send to back) and dynamic window focus (a window can have input focus at any position on the stack). The icon bar (Dock) holds icons which represent mounted disc drives, RAM discs, running applications, system utilities and docked files, directories or inactive applications. These icons have context-sensitive menus and support drag-and-drop operation. They represent the running application as a whole, irrespective of whether it has open windows. The GUI functions on the concept of files. The Filer, a spatial file manager, displays the contents of a disc. Applications are run from the Filer view and files can be dragged to the Filer view from applications to perform saves. Application directories are used to store applications. The OS differentiates them from normal directories through the use of an exclamation mark (also called a pling or shriek) prefix. Double-clicking on such a directory launches the application rather than opening the directory. The application's executable files and resources are contained within the directory, but normally they remain hidden from the user. Because applications are self-contained, this allows drag-and-drop installing and removing. The Style Guide encourages a consistent look and feel across applications. It was introduced in RISC OS 3 and specifies application appearance and behaviour. Acorn's own main bundled applications were not updated to comply with the guide until RISCOS Ltd's Select release in 2001.

Font manager

RISC OS was the first operating system to provide scalable anti-aliased fonts.
Anti-aliased fonts were already familiar from Arthur, and their presence in RISC OS was confirmed in an early 1989 preview, featuring in the final RISC OS 2 product, launched in April 1989. A new version of the font manager employing "new-style outline fonts" was made available after the release of RISC OS, offering full support for the printing of scalable fonts, and was provided with Acorn Desktop Publisher. It was also made available separately and bundled with other applications. This outline font manager provides support for the rendering of font outlines to bitmaps for screen and printer use, employing anti-aliasing for on-screen fonts, utilising sub-pixel anti-aliasing and caching for small font sizes. At the time of the introduction of Acorn's outline font manager, the developers of rival desktop systems were either contemplating or promising outline font support for still-unreleased products such as Macintosh System 7 and OS/2 version 2. Since 1994, in RISC OS 3.5, it has been possible to use an outline anti-aliased font in the WindowManager for UI elements, rather than the bitmap system font from previous versions. RISC OS 4 does not support Unicode, but "RISC OS 5 provides a Unicode Font Manager which is able to display Unicode characters and accept text in UTF-8, UTF-16 and UTF-32. Other parts of the RISC OS kernel and core modules support text described in UTF-8." Support for the characters of RISC OS (and some other historic computers) was added to Unicode 13.0 (in 2020).

Bundled applications

RISC OS is available in several distributions, all of which include a small standard set of desktop applications, but some of which also include a much wider set of useful programs. Some of those richer distributions are freely available, some are paid for.

Backward compatibility

Limited software portability exists with subsequent versions of the OS and hardware. Single-tasking BBC BASIC applications often require only trivial changes, if any.
Successive OS upgrades have raised more serious issues of backward compatibility for desktop applications and games. Applications still being maintained by their author(s) or others have sometimes been amended to provide compatibility. The introduction of the RiscPC in 1994 and its later StrongARM upgrade raised issues of incompatible code sequences and proprietary squeezing (data compression). Patching of applications for the StrongARM was facilitated, and Acorn's UnsqueezeAIF software unsqueezed images according to their AIF header. The incompatibilities prompted the release by The ARM Club of its Game On! and StrongGuard software, which allowed some formerly incompatible software to run on new and upgraded systems. The version of the OS for the A9home prevented the running of software without an AIF header (in accordance with Application Note 295) to stop it "trashing the desktop". The Iyonix PC and A9home saw further software incompatibility because of the deprecated 26-bit addressing modes. Most applications under active development have since been rewritten. Static code analysis to detect 26-bit-only sequences can be undertaken using ARMalyser. Its output can be helpful in making 32-bit versions of older applications for which the source code is unavailable. Some older 26-bit software can be run without modification using the Aemulor emulator. Additional incompatibilities were introduced with newer ARM cores, such as ARMv7 in the BeagleBoard and ARMv8 in later Raspberry Pi models. These include changes to unaligned memory access in ARMv6/v7 and the removal of the SWP instructions in ARMv8.
See also

Acorn C/C++
ArtWorks
Drobe
riscos.info
ROX Desktop, a graphical desktop environment for the X Window System, inspired by the user interface of RISC OS
Sibelius (scorewriter), originally an application for RISC OS, rewritten for Windows in 1998
RISC OS character set

References

External links

RISC OS Open

Acorn operating systems
ARM operating systems
Desktop environments
Free software operating systems
Software using the Apache license
Window-based operating systems
1987 software
49435246
https://en.wikipedia.org/wiki/George%20Parbury
George Parbury
George Parbury (1807–1881) was a British publisher with a special interest in India, a freemason in India and London, Master of the Merchant Taylors' livery company, Justice of the Peace for two counties and Deputy Lieutenant of the Tower Hamlets.

Biography

George Parbury was born 24 January 1807, and baptised on 18 February at St. Leonard's, Shoreditch. He was the second child and eldest son of Hannah Warne and Charles Parbury, the “head of the firm of Parbury, Allen, and Co., the eminent booksellers connected with India”. George was apprenticed to his father in March 1823. In December 1826 he was granted permission to travel to India and reside in Bengal; the surety of £500 was provided by “Charles Parbury and William H Allen, booksellers of Leadenhall Street”. George arrived in Calcutta on the steamship Enterprise in 1828.

Parbury had been sent by his father to work with William Thacker's bookselling firm in Calcutta. Thacker (1791–1872) had received a licence from the East India Company, allowing him to reside at Fort William “to dispose of Messrs. Black Parbury and Co.’s consignment”, presumably shipped from England, thus marking the beginning of Thacker's company in Calcutta, and was later made a partner of W. Thacker and Co., St. Andrew's Library, Calcutta. There were also family connections between Thacker and Parbury: William Thacker's third marriage, at St Pancras church on 29 December 1841, was to Helen Parbury, George's youngest sister.

Soon after he arrived in Calcutta George Parbury became a freemason: in August 1830 he was initiated in the Aurora Lodge of Candour and Cordiality No. 816, Calcutta. Later, after his return to England in 1832, Parbury joined Moira Lodge No. 109 (now No. 92) in London, and became Master of the Lodge in 1838. In England George met, or was re-acquainted with, 22-year-old Mary Ann Joanna Ellis of Hertford, and married her in St Andrew's church there on 21 May 1833.
In April 1834 their first child, George Edward Ellis, was born; the infant survived only four months, and was buried in the same church. Parbury gained Freedom of the City of London on 3 September 1835, followed three months later by Livery status in the Merchant Taylors’ Company. George and Mary's second child, also George, was born in July 1836, and a third (Emily) was born the following year.

Eighteen months later, in May 1839, he sailed from Portsmouth on the Owen Glendower, arriving in Calcutta on 20 August. George remained there for some eight months, and then returned to England, this time overland to Bombay and then by ship. Early on 13 August 1840 Parbury departed by river from Calcutta on a steam vessel to Allahabad, which was as far as it could go at the time. He then travelled overland, via Agra, Delhi, Bahr and Simla, eventually reaching Bombay at the end of November, 109 days after setting out. On the first of December he was on the steamer Cleopatra, en route to Aden and Suez. After travelling overland and reaching Cairo on 21 December, he sailed from Alexandria on “the splendid steamer, Great Liverpool”, which set off on the 24th, travelling via Malta, Gibraltar, Falmouth and the Isle of Wight quarantine station. Parbury set foot on land on 16 January 1841, six and a half weeks after leaving Bombay, and just over five months from Calcutta.

Soon afterwards Parbury published a description of his travels. The first edition, published anonymously, was dated London, 20 June 1841. A year later, a second edition was published under the name of George Parbury, Esq., MRAS. Parbury's book – a copy of which he had lodged in the Royal Asiatic Society library – was soon given a warm review in The Asiatic Journal.

J. H. Stocqueler had also written a Handbook of India based on his various experiences as a traveller and his residence in India for some twenty years.
When he finally left India he sailed in distinguished company from Calcutta to Suez on the Hindostan, and thence overland to take a ship from Alexandria; his account appeared from the same publisher in 1844. Parbury's colleagues, W Thacker and Co., chose to attack Stocqueler's work in 1845, but without informing him first. They accused Stocqueler of not having acknowledged his "obligation to Mr Parbury’s 'Hand Book of India and Egypt'" when presenting his own book. Stocqueler's rejoinder was published as a letter to The Madras Athenaeum, stating in part:

Has it ever occurred to him that in sending people to buy his personal narrative, facetiously dubbed a "Hand Book of India and Egypt," I should be guilty of leading people to purchase a volume which treats of the merest fraction of India in the most superficial style, and was found so ridiculously insufficient as a guide to Overland Travellers that none of the passengers by the Hindostan in 1843 (I speak of them as I was one of them, though doubtless others have been in the same predicament) could gather from its pages the slightest information that was of any use to them. I declare most solemnly that my Hand Book was solely undertaken and put forth because Mr Parbury's was so wretchedly imperfect, and for no other reason.

Soon after the birth of his fourth and last child by Mary (Edward Fraser, in March 1843) George sailed again to India. He returned from Calcutta on the recently launched steamship Bentinck, departing in March 1844. In October of the following year, Mary died of consumption at Mansfield House, 37 Russell Square, aged 34.

In 1849 Parbury was married again, this time to Lucy Wilson Key, the fourth child of John Key, later Sir John Key, first baronet, Lord Mayor of London and Master of the Stationers' Company.
Lucy, who was 15 years Parbury's junior and only 10 years younger than his first wife, provided George with five more children: three sons and two daughters, born variously in Germany, Calcutta and England. One of his grandchildren was Florence Tyzack Parbury.

Parbury turned his attention to Merchant Taylors again. In July 1855 he was appointed a Warden and a member of the Court of Assistants, involving him more in the management of the guild. He was appointed Master of Merchant Taylors in 1866 and, in that capacity, he hosted the following year's annual banquet in Merchant Taylors’ Hall. Well over a hundred of the great and the good attended, including 20 MPs, the Presidents of learned institutions, members of the clergy, senior military figures and eminent members of the aristocracy, including the Marquis of Salisbury, the Earl of Sandwich and Viscount Stratford de Redcliffe. His Excellency the United States Minister (Charles Francis Adams, Sr., son of President John Quincy Adams) was a prominent guest, whose health was toasted, along with others, at the end of the evening. The principal speaker was the Chancellor of the Exchequer (Benjamin Disraeli).

Parbury was appointed Deputy Lieutenant of the Tower Hamlets on 4 September 1858, and was a Justice of the Peace for the counties of Surrey and Middlesex. He died on 27 January 1881 at the family home, Thornbury House, Caterham in Surrey, and was buried in the family vault at Kensal Green Cemetery.

References

Further reading

John Carpenter, "The Life of George Parbury, associate of Allen, Thacker and Spink", FIBIS Journal, Autumn 2015, p. 3

1807 births
1881 deaths
19th-century publishers (people)
British East India Company
Freemasons of the United Grand Lodge of England
Worshipful Company of Merchant Taylors
English justices of the peace
Burials at Kensal Green Cemetery