33019246 | https://en.wikipedia.org/wiki/IdeaPad%20S%20series | IdeaPad S series | The IdeaPad S Series is a series of notebook computers launched by Lenovo in October 2008. The IdeaPad S10 was initially scheduled for launch in September, but its release was delayed in the United States until October.
The S series began with the IdeaPad S10, the lowest-cost model: a 10.2-inch subnotebook powered by an Intel Atom processor. Later, more expensive Atom-powered laptops were added to the series. Once the Atom CPU line was discontinued, the lightweight S series switched to alternatives such as low-power AMD A-series processors, Intel Celeron and Pentium chips, and low-cost versions of Intel's Y-series CPUs.
2008
The IdeaPad S10, the first laptop in the IdeaPad S Series of netbooks, was released in 2008.
S10
The IdeaPad S10 was Lenovo's first netbook. While Engadget found the design unremarkable, the low starting price was well received. The S10 featured a TFT active matrix 1024×576 or 1024×600 display with an 80 or 160 GB hard disk drive and 512 MB or 1 GB of DDR2 RAM, both of which could be upgraded via a trap door on the bottom of the netbook. The initial S10 featured 512 MB of RAM soldered to the system board with an expansion SO-DIMM slot for further upgrades to 2 or 2.5 GB (2.5 GB was only usable with an operating system that supports sparse memory regions). The processor was an Intel Atom that ran at 1.6 GHz. The S10 supported IEEE 802.11b/g wireless networking and had two USB ports, an ExpressCard expansion slot, a 4-in-1 media reader, and a VGA output. These computers received positive consumer reviews and a 9/10 rating from Wired magazine.
In May 2009 Lenovo introduced the S10-2. While the S10-2 shared many traits with the S10/S10e, it omitted the ExpressCard/34 slot, featured a new physical design, added an additional USB port, and offered a larger keyboard and touchpad along with higher-capacity hard drive and SSD options.
2009
The IdeaPad S Series netbooks released by Lenovo in 2009 were the S10e, S10-2, and the S12.
S10e
The IdeaPad S10e was a re-launch of the IdeaPad S10, with features updated for the education market. The netbook included a quick-start operating system and 5 hours of battery life at a low starting price. It weighed 2.8 lbs, with a form factor of 9.8 × 7.7 × 0.9–1.4 inches. The netbook offered a wide keyboard occupying almost the entire width of the chassis, and LAPTOP Magazine reported that it was easy for even adults to type on.
S10-2
The IdeaPad S10-2 was a 10-inch netbook with a 1.6 GHz Intel Atom processor, 1GB RAM, a 6-cell battery, and Intel GMA Integrated Graphics. Notebook Review reported that the netbook's design offered "a cleaner and smoother appearance all around". The specifications of the netbook are as follows:
Processor: Intel Atom N270 1.6 GHz, or Intel Atom N280 1.66 GHz with Hyper-Threading
RAM: 1GB DDR2 667 MHz
Display: 10.1" (WSVGA, Glossy, LED-backlit, 1024x600)
Storage: 160GB 5400rpm
Graphics: Intel GMA 950 Integrated
Wi-Fi: Broadcom 802.11b/g
Card reader: 4-in-1
Dimensions: 10.2 × 7.6 × 0.7–1.8 inches
Operating system: Windows XP Home Edition (SP3)
S12
The IdeaPad S12 received a fairly positive review from PC Magazine. Its well-received features included the 12.1-inch widescreen with a 1280 x 800 resolution, the keyboard, the ExpressCard slot, and the battery life. However, the reviewers were critical of the netbook's price and weight. The specifications of the netbook are as follows:
Processor: Intel Atom N270 1.6 GHz
RAM: 1GB (up to 3 GB) DDR2-667
Storage: 160GB 5400rpm SATA
Display: 12.1" (1280x800)
Graphics: Intel GMA 950
Wi-Fi: 802.11b/g
Dimensions: 11.5 x 9.0 x 1.4 (inches)
Weight:
Operating system: MS Windows XP Home
2010
The IdeaPad netbooks released in 2010 were the S10-3, S10-3t, and S10-3s.
S10-3
The IdeaPad S10-3 netbook was praised for its full-size keyboard, design, light chassis, and low price. It was criticized for its navigation experience, touchpad, low capacity hard drive, and the lack of options for customization. Michael Prospero from LAPTOP Magazine indicated in his review that Lenovo had addressed some of the issues raised about the S10-2 netbook and praised the keyboard and the design. He also indicated that the storage capacity was not on par with competitor offerings and that the touchpad could have been improved.
S10-3t
The IdeaPad S10-3t was a netbook that was also a convertible tablet. The S10-3t netbook was among the first computers to use the 1.83 GHz Intel Atom N470 processor. The software BumpTop was preloaded and offered a desk-like view of the desktop in 3D for ease of use.
S10-3s
The IdeaPad S10-3s was roughly an inch narrower than the S10-2, with a form factor of 10.6 x 6.6 x 1.4 inches. The netbook was also slightly lighter than similar netbooks and weighed 2.6 lbs. The netbook offered the following specifications:
Processor: Intel Atom N450 1.66 GHz
RAM: 1GB DDR2
Graphics: Intel GMA 3150
Storage: 160GB 5400RPM SATA
Display: 10.1" (maximum resolution of 1024x600)
2011
The IdeaPad S Series netbooks released in 2011 were the S205 and the S215.
S205
The S205 had an AMD Fusion E-350 dual-core processor, an 11.6-inch widescreen display with a 16:9 aspect ratio, and ATI Mobility Radeon 6310M graphics. The specifications of the S205 are as follows:
Processor: Up to 1.60 GHz AMD Dual-Core E-350
RAM: Up to 4GB DDR3 1066 MHz
Graphics: Up to AMD Radeon HD 6310M (512 MB graphics memory)
Dimensions (mm): 290 x 18~26.3 x 193
Weight: starting at 1.35 kg
S215
The Lenovo IdeaPad S215 contained a 500 GB, 5,400 RPM traditional hard drive and 8 GB of solid-state storage.
2012
S300
Detailed specifications of the S300 are as follows:
Processor: several options (e.g. Celeron 887)
RAM: 4GB
Storage: SATA 500GB HDD
Display: 14"
Graphics: Intel GMA 950
Operating system: MS Windows 7
References
External links
IdeaPad S Series on Lenovo
1779657 | https://en.wikipedia.org/wiki/List%20of%20collaborative%20software | List of collaborative software | This list is divided into proprietary or free software and open-source software, with several comparison tables of product and vendor characteristics. It also includes a section on project collaboration software, which is a standard feature of collaboration platforms.
Collaborative software
Comparison of notable software
General Information
Comparison of unified communications features
Comparison of collaborative software features
Comparison of targets
Open source software
The following are open source applications for collaboration:
Standard client–server software
Access Grid, for audio and video-based collaboration
Axigen
Citadel/UX, with support for native groupware clients (Kontact, Novell Evolution, Microsoft Outlook) and web interface
Cyn.in
EGroupware, with support for native groupware clients (Kontact, Novell Evolution, Microsoft Outlook) and web interface
Group-Office groupware and CRM
Kolab, various native PIM clients
Kopano
OpenGroupware.org
phpGroupWare
Scalix
SOGo, integrated email, calendaring with Apple iCal, Mozilla Thunderbird and native Outlook compatibility
Teambox, Basecamp-style project management software with focus on GTD task management and conversations. (Only V3 and prior are open-source.)
Zarafa
Zentyal, with support for native groupware clients (Kontact, Novell Evolution) natively for Microsoft Outlook and web interface
Zimbra
Zulip
Groupware: Web-based software
Axigen
Bricolage, content management system
BigBlueButton, Web meetings
Collabora Online, Enterprise-ready edition of LibreOffice enabling real-time collaborative editing of documents, spreadsheets, presentations and graphics
DotNetNuke, also called DNN: module-based, evolved from ASP 1.0 demo applications
EGroupware, a free open source groupware software intended for businesses from small to enterprises
EtherPad, collaborative drafting with chat
Feng Office Community Edition
FusionForge, has wiki, forums, mailing lists, FTP, SSH, subdomains, hosting, email alias, backups, CVS/SVN, task management
Group-Office, web-based groupware for sharing calendars, files, e-mail, CRM, projects, mobile synchronization, and more.
Horde
HumHub a free and open-source enterprise social network solution
IceWarp Server
Jumper 2.0, collaborative search engine and knowledge management platform
Kolab Groupware, integrated Roundcube web frontend
Kune, collaborative federated social network, based on Apache Wave
Loomio, for making decisions together (AGPL).
MediaWiki, which provides core content management and integrates with many other tools via extensions
Nextcloud, file hosting service, functionally similar to Dropbox, Office 365 or Google Drive when used with its integrated office suite solutions Collabora Online or OnlyOffice
OnlyOffice Community Server, available for Microsoft and Linux
OpenBroadcaster LPFM IPTV broadcast automation tools
Overleaf for creating LaTeX documents
phpGroupWare
Simple Groupware
SOGo, integrated email, calendaring with Apple iCal, Mozilla Thunderbird and native Outlook compatibility
Tiki Wiki CMS Groupware, has wiki, forums, calendar, ticket system, workflow engine
Tine 2.0
Tonido, free collaborative software with workspace synchronizing, Web access from personal desktop; cross-platform
Zarafa, full MAPI MS Exchange replacement for Linux, GPL+proprietary
Kopano, full MAPI MS Exchange replacement for Linux, GPL+proprietary
Zentyal
Zimbra
Other
Alfresco, enterprise content management system: document management, workflow, and portal
Drupal Framework, open source content management framework: document management, web pages, attachments, forums, photos, social profiles, collaboration tools
Liferay Enterprise Portal, open source enterprise portal: document management, wiki, social tools, workflow
LogicalDOC, document management system: document management, workflow
Nuxeo EP, enterprise content management system: document management, workflow
OpenKM, open source document management system: document management
Project collaboration software
Web-based software
Ceiton, workflow-based project management with Gantt chart, scheduling calendar and time-tracking
Central Desktop, has project management, wiki, file upload, review and approve, calendar, document management
Clarizen, online on-demand, collaborative project execution software
dotProject
Easy Projects
EGroupware, is free open source groupware software intended for businesses from small to enterprises
Feng Office Community Edition
Fle3
Gitea
GitLab
Group-Office, web-based groupware for sharing calendars, files, e-mail, CRM, projects, mobile synchronization, and more.
GroveSite, online collaboration, project and document management; online relational database
Horde
InLoox, web-based project management and collaboration software with Outlook integration
LiquidPlanner, web-based project management and collaboration software
Mindquarry, has document synchronizing, wiki, task management
PBworks is a commercial real-time collaborative editing (RTCE) system
phpGroupWare, has a project collaboration module
Plone, content management
project.net
Projectplace, full suite of collaborative project tools
Redmine, for software projects includes issue tracking, wiki, basic file and document management with hooks to major version control systems: SVN, Git, etc.
Simple Groupware
TeamLab, has forums, blogs, bookmarks, wiki, task management, instant messaging, mobile version, CRM, online document editors
Trac, has wiki, document management, ticket system and version control system
Traction TeamPage, integrated action tracking, wiki, live status, notification, and streams organized by person, task, project, and shared permissioned space.
web2project, a dotProject fork with active current development and some innovations, such as subprojects
WiserEarth, social network and database that include an open-source Groupware (closed)
Wrike, interactive web-based project management software and tools for remote collaboration
Zoho Projects, a web-based project management software with collaboration features such as Interactive Feeds, Chat, Calendar, Forum, Wiki Pages and shared Document management
Other
Croquet project, collaborative virtual environment
Open Wonderland, open source Java toolkit to make collaborative 3D virtual worlds
Wiki engines: see List of wiki software
Realtime editors: see Collaborative real-time editor
Revision control for software engineering projects: see Comparison of revision control software
Collaborative development environment
Tools for collaborative writing such as O'Reilly Media's wiki-like git-managed authoring platform Atlas
Comparison
See also
Cloud collaboration
Collaborative workflow
Collaborative editor
Comparison of project management software
Comparison of wiki software
Document collaboration
Document-centric collaboration
List of wiki farms
List of wiki software
References
Collaborative software
Groupware
1229692 | https://en.wikipedia.org/wiki/MediaDefender | MediaDefender | MediaDefender, Inc. (now Peer Media Technologies) was an anti-copyright-infringement company that offered services designed to prevent alleged copyright infringement via peer-to-peer distribution. It used unusual tactics such as flooding peer-to-peer networks with decoy files that tied up users' computers and bandwidth. MediaDefender was based in Los Angeles, California, in the United States. As of March 2007, the company had approximately 60 employees and used 2,000 servers hosted in California with contracts for 9 Gbit/s of bandwidth.
Organizations of this type were hired to stymie peer-to-peer (P2P) traders through a variety of methods, including posting fake files online and recording individuals who contributed copyrighted material, but also to market to individuals using P2P networks. Clients included Universal Pictures, 20th Century Fox, Virgin Records, HBO, Paramount Pictures, and BMG. On August 1, 2005, the digital media entertainment company ARTISTdirect announced that it had acquired MediaDefender for $42.5 million in cash.
In May 2008, MediaDefender performed a distributed denial-of-service attack on Revision3, despite the fact that Revision3 was not hosting unauthorized materials. Jim Louderback, Revision3's CEO, charged that these attacks violated the Economic Espionage Act and the Computer Fraud and Abuse Act. As of May 2008, the Federal Bureau of Investigation was investigating the incident.
In August 2009, ARTISTdirect restructured MediaDefender and MediaSentry, creating Peer Media Technologies.
Miivi.com
In February 2007, MediaDefender launched a video sharing site called Miivi.com. On July 4, 2007, file-sharing news site TorrentFreak alleged that Miivi.com was created to trap uploaders of copyrighted content. The site's origins were discovered by a blogger who looked up Miivi.com domain registration information.
After the allegation was re-posted throughout the blogosphere, Miivi.com was shut down on July 4, 2007. In an interview with Ars Technica, chief executive Randy Saaf stated that "MediaDefender was working on an internal project that involved video and didn't realize that people would be trying to go to it and so we didn't password-protect the site". MediaDefender blamed file-sharing groups such as The Pirate Bay for starting the story. Following MediaDefender's subsequent email leak, TorrentFreak alleged that this statement had been a deliberate falsehood. Saaf denied that MiiVi was "a devious product" and that the company aimed to entrap users, stating only that it was part of MediaDefender's "trade secrets."
The MPAA denied any involvement with MediaDefender. On September 14, 2007, internal emails from MediaDefender were leaked onto BitTorrent file-sharing networks. The emails contradicted MediaDefender's claim that MiiVi was an "internal test site", revealing additional detailed information about the website and showing that the site was closed when the connection between it and MediaDefender became public knowledge. It was scheduled to be re-launched as www.viide.com, but was never subsequently opened to the public.
Leaked information
Beginning on September 14, 2007, MediaDefender experienced a security breach caused by a group of hackers led by high school student "Ethan". This group called themselves MediaDefender-Defenders. According to an SEC filing, this ultimately cost parent company ARTISTdirect at least $825,000. The breach included emails, a phone conversation, and a number of internal anti-infringement tools, including some source code.
Leaked e-mails
On September 14, 2007, 6,621 of the company's internal e-mails were leaked, containing information contradicting previous statements and details of strategies intended to deceive copyright infringers. The emails link MediaDefender to projects that management previously denied involvement in. The Associated Press and other media outlets suggest that the leak may confirm speculation that MiiVi.com was an anti-copyright infringement "honeypot" site. One e-mail suggests using the MiiVi client program to turn users' PCs into drones for MediaDefender's eMule spoofing activities. The leaked e-mails discuss responses to unexpected and negative press, and expose upcoming projects, problems in and around the office, Domino's pizza orders, and other personal information about employees. Beyond strategic information, the leak also exposed login information for FTP and MySQL servers, making available a large library of MP3 files likely including artists represented by MediaDefender's clients. The emails also revealed that MediaDefender probably was negotiating with the New York Attorney General's office to allow them access to information about users accessing pornographic material. As of September 15, 2007, there had been no official response from the company. However, evidence exists that MediaDefender had been employing both legal and illegal actions to remove copies of the leaked emails from their respective hosting sites. In addition to the usual cease-and-desist letters from their legal department, IP addresses that are owned by MediaDefender were found to have been used in denial-of-service attacks against sites hosting the leaked emails.
The e-mails also revealed direction by MediaDefender founder Randy Saaf to have developer Ben Ebert attempt to eliminate the information about MiiVi from MediaDefender's English Wikipedia entry. Ebert responds in an email on the same day saying, "I will attempt to get all to miivi removed from wiki. I should easily be able to get It contested. We'll see if I can get rid of it."
Leaked phone conversation
On September 16, 2007, MediaDefender-Defenders released a 25-minute excerpt of a phone conversation between the New York Attorney General's office and MediaDefender as a torrent on The Pirate Bay. MediaDefender-Defenders claims in information released with the phone conversation that they have infiltrated the "internals" of the company.
Leaked source code
On September 20, 2007, MediaDefender-Defenders released the source code of TrapperKeeper, MediaDefender's decoy system, on The Pirate Bay. A large chunk of MediaDefender's software thus became available via BitTorrent.
Revision3 controversy
Revision3 is an Internet television network which distributes video content legally through various means, including the BitTorrent protocol. During the Memorial Day weekend in 2008, Revision3 came under a Denial of Service attack originating from MediaDefender IP addresses. The attack left the company's service inaccessible until mid-Tuesday the following week. Revision3 CEO Jim Louderback accused MediaDefender of injecting its decoy files into Revision3's BitTorrent service through a vulnerability, then automatically perpetrating the attack after Revision3 increased security.
Randy Saaf defended MediaDefender's actions by stating "Our systems were targeting a tracker not even knowing it was Revision3's tracker", adding that the denial-of-service attack resulted when "Revision3 changed some configurations" to their BitTorrent tracker.
See also
Copyright social conflict
Cyberterrorism
BayTSP
Streisand effect
Torrent poisoning
References
External links
MediaDefender's Official Website
Net2EZ owned by Media Defender
"Leaked Media Defender e-mails reveal secret government project" - Arstechnica
"MPAA Caught Uploading Fake Torrents" — TorrentFreak (IP addresses of fake torrents traced back to MediaDefender)
"Anti-Piracy Gang Launches their own Video Download Site to Trap People" — TorrentFreak (The domain registration of a fake video upload/download service called miivi has been traced to MediaDefender.)
Torrent Freak article about the 9/14/2007 Media Defender internal email leak
P2P sites ridicule MediaDefender takedown notices in wake of e-mail leak
Post of a list of leaked programs
TorrentFreak's article on MediaDefender problems
Copyright enforcement companies
Cybercrime
Defunct technology companies of the United States
36797393 | https://en.wikipedia.org/wiki/Lightwork%20Design%20Ltd. | Lightwork Design Ltd. | Lightwork Design Ltd. is a computer software company specialising in 3D Rendering software. Its headquarters are in Sheffield, United Kingdom.
Early history
Lightworks was founded in Sheffield in 1989 with the goal of creating a software toolkit for producing photorealistic renders from 3D geometry. While originally based in the Sheffield Science Park, Lightworks is now based in Rutledge House, a building situated next to the Sheffield Botanical Gardens and originally built in the 1850s as the Victoria Park Hotel. The first Lightworks product was demonstrated at the 1990 Autofact exhibition in Detroit, USA. Sales of the initial Lightworks product commenced in early 1991, and the company signed its first major CAD developer, Unigraphics (now Siemens), in 1993. The company signed its first international customer, CPU of Japan, in 1994.
In 1995 Lightworks began to develop the MachineWorks toolkit. This led to the foundation of MachineWorks, which is now a separate company. In 1997, Lightworks launched a large model navigation system called Navisworks, which was sold in 2007 to Autodesk for $26 million. In 1999, Lightworks and Intel formed a partnership, and the company started to develop their products for the Linux platform.
Recent developments
Partnership with NVIDIA
At SIGGRAPH 2013, Lightworks announced a new partnership with NVIDIA to develop an SDK to provide access to NVIDIA's Iray technology. This SDK, called Iray+, is intended to provide physically accurate ray-tracing to clients who need to present their products to customers, and will have the ability to use cloud- and network-based rendering.
Products
Lightworks' current products are Lightworks Author and Iray+.
Lightworks Author
Lightworks Author was the first product launched by Lightwork Design. The main use of the product is in architectural design, interior design, engineering, and automotive design. The software works across Mac OS X, Windows, and Linux, in 32- and 64-bit binaries.
Lightworks Iray+
Lightworks Iray+ was introduced at SIGGRAPH 2013. It uses the GPU-accelerated ray tracing engine developed by the NVIDIA Advanced Rendering Center to provide interactive product visualisations. Lightworks are also developing an Iray plugin for 3DS Max.
Previous Products
Artisan
In its 20th year, Lightworks announced a new product, Lightworks Artisan, at SIGGRAPH 2009. Also known as Renditioner for Trimble's SketchUp, this is a ready-to-deliver rendering product that is used alongside CAD/CAM programmes such as ZWCAD, BricsCAD, Kubotek's KeyCreator, and several others.
In 2010, Artisan was integrated with Ascon's Kompas 3D and BeLight Software. The product introduced a number of new technologies including SnapShot capabilities and pre-loaded content libraries.
It is aimed towards 3D designers who want to create realistic images using a CAD programme. "SnapShot technology" refers to the ability to record the state of storage at any given moment. In Artisan, this allows users to save their renders at different stages, so that they may return to those stages and alter selected elements, such as textures, colour, or background images. Since 2014, Artisan has been marketed and sold by Pictorex Ltd.
Customers and Partners
Partners include Siemens, NVIDIA, and Spatial, while customers include PTC, Graphisoft, ACA, and Ascon.
Other information
Every year, Lightworks selects a local charity to support through fundraising events. In 2013, Lightworks raised money for Bluebell Wood Children's Hospice, while in 2014 the company sponsored a week of breakfasts at the Cathedral Archer Project, a homelessness charity. A team from Lightworks undertook the "Ben Nevis Challenge" in June 2014, while the Lightworks running team, "Runworks", competes in the yearly Sheffield Half Marathon corporate charity running challenge, most recently running the "cancelled" 2014 race.
References
External links
Lightworks Iray+ demonstration using NVIDIA's Nitro
Lightworks Website
1989 establishments in England
Companies based in Sheffield
Software companies of the United Kingdom
3552055 | https://en.wikipedia.org/wiki/Ensoniq%20Soundscape%20S-2000 | Ensoniq Soundscape S-2000 | Soundscape S-2000 was Ensoniq's first direct foray into the PC sound card market. The card arrived on the market in 1994. It is a full-length ISA digital audio and sample-based synthesis device, equipped with a 2 MiB Ensoniq-built ROM-based patch set. Some OEM versions of the card feature a smaller 1 MiB patch set. It was praised for its then-high quality music synthesis and sound output, high compatibility and good software support.
Hardware overview
Ensoniq advertisements for the Soundscape stated "Finally, A Sound Card from a company that knows sound!", claiming that "the same wavetable technology that drives our $3,000 keyboards is available for your PC". The card uses an 'OTTO' synthesizer chip with a companion 'Sequoia' chip for MIDI duties, along with a Motorola 68EC000 8 MHz controller (a low-cost variant of the ubiquitous 68000 with selectable bus width) and a small amount of RAM. Although it has RAM, the card does not support uploading of sound samples for the synthesizer. The on-board coprocessor was a much-advertised feature of the card, marketed to reduce the overhead of music synthesis through the device. Digital-to-analog audio conversion was handled by an Analog Devices CODEC such as the AD1848, which was capable of sampling at up to 48 kHz. The Soundscape S-2000's sample-based synthesizer lacked an effects processor, meaning digital effects such as reverb and chorus were not supported.
Advertisements also mentioned the card's three CD-ROM interface connectors. As was common with multimedia cards produced in the mid-90s, a variety of proprietary interfaces were supported. This was necessary as ATAPI IDE CD-ROM drives were not yet common, and the majority of optical drives produced required one of the many proprietary interfaces that were prevalent at the time. In addition, the majority of PCs purchased typically had only a single IDE channel, which might already be controlling two hard drives or a hard drive and a tape/disc backup device, so offering additional CD-ROM interfaces was often convenient to the consumer.
One advantage compared to competitors was that the card did not require TSRs, minimizing its conventional memory footprint. The card loaded its own operating system into the onboard RAM during system initialization via the card's 'SSINIT.EXE' utility program. The Soundscape also has a hardware MPU-401 implementation which, combined with the lack of TSRs, allowed a high degree of compatibility, which was a significant benefit as many DOS games often ran in a custom flavor of protected mode or were particularly demanding of conventional memory space.
Soundscape S-2000 has several descendants, including the SoundscapeDB, Soundscape Elite, Soundscape OPUS, Soundscape VIVO90, and AudioPCI. The original Soundscape S-2000 was replaced by the Soundscape II, a board based on the ELITE but without the daughtercard (it was an upgrade option).
Compatibility
The Soundscape was compatible with a variety of popular sound standards such as Ad Lib, Creative Sound Blaster 2.0, Microsoft Windows Sound System, General MIDI, Roland MT-32 (though with different instrument sounds) and MPU-401, in addition to its own, well-supported native Soundscape mode. Of critical importance at the time was support for the Creative Labs Sound Blaster, the ubiquitous sound standard of the day. The Soundscape can emulate the Sound Blaster 2.0, an 8-bit monaural device with FM synthesis capability. While the digital sound emulation was quite good, the FM synthesis emulation left much to be desired. Emulating FM synthesis in software was too demanding for the CPUs of the time (typically an 80486), so Ensoniq mapped most FM synthesis commands onto the card's sample-based synthesis engine. As a result, FM music did not sound correct, because it had been composed with FM synthesis in mind rather than real instruments. The results could be especially poor if the game was an older title that used FM synthesis for sound effects.
The reasoning behind using emulation instead of real hardware was cost and demand. FM synthesis hardware support for games at the time required an additional chip, the Yamaha OPL-2 or OPL-3. By the time the Soundscape arrived, most games supported General MIDI output, offering high-quality music support for users who purchased the card and reducing the need for a dedicated OPL-3 synthesis chip. The board's Sound Blaster support could be toggled on and off, allowing a second sound card (usually an existing sound device) to take care of actual OPL-3 FM synthesis if desired. The advantage of a single card that could do either, however, was very important for mindshare amongst consumers, and was critical for OEM system sellers because adding a separate card would add cost to the system.
Software support
The Soundscape was well supported by many mid-to-late 1990s programs, both directly and via General MIDI. Ensoniq released drivers for many operating systems, including IBM OS/2, MS-DOS, Microsoft Windows 3.1, Windows 95, and Windows NT. Ensoniq later released Microsoft DirectSound-capable drivers as well. In these operating systems, programs accessed the sound card through its driver, allowing full hardware support without the need for the software developer to support the card directly.
References
External links
Case, Loyd. "In Search Of The Ultimate... Sound Card". Computer Gaming World, December 1994: pp. 138–148.
Ensoniq Corp. Soundscape S-2000 Manual, Ensoniq, 1994.
Ensoniq Corp. Web Site by Ensoniq Corp., Multimedia Division Product Information and Support Pages, 1998, retrieved December 25, 2005
Ensoniq FAQ by Ensoniq Corp., Multimedia Division Product Information and Support Pages, 1997, retrieved December 27, 2005
Weksler, Mike & McGee, Joe. "CGW Sound Card Survey". Computer Gaming World, October 1993: pp. 76–84.
Sound cards
5175808 | https://en.wikipedia.org/wiki/Stoned%20%28computer%20virus%29 | Stoned (computer virus) | Stoned is a boot sector computer virus created in 1987. It is one of the first viruses and is thought to have been written by a student in Wellington, New Zealand. By 1989 it had spread widely in New Zealand and Australia, and variants became very common worldwide in the early 1990s.
A computer infected with the original version had a one-in-eight probability that the screen would declare: "Your PC is now Stoned!", a phrase found in the boot sectors of infected floppy disks and the master boot records of infected hard disks, along with the phrase "Legalise Marijuana". Later variants produced a range of other messages.
Original version
The original "Your PC is now stoned. Legalise Marijuana" was thought to have been written by a student in Wellington, New Zealand.
This initial version appears to have been written by someone with experience only of IBM PC 360 KB floppy drives, as it misbehaves on the IBM AT's 1.2 MB floppy drive and on systems with more than 96 files in the root directory. On higher-capacity disks, such as 1.2 MB disks, the original boot sector may overwrite a portion of the directory.
The message was displayed if the value sampled at boot was exactly divisible by 8. On many IBM PC clones of the time, boot times varied, so the message displayed at random (1 time in 8). On some IBM PC compatible machines, and on original IBM PC computers, the boot time was constant, so an infected computer would either never display the message or always display it. An infected computer with a 360 KB floppy drive and a hard disk of 20 MB or less that never displayed the message was one of the first examples of an asymptomatic virus carrier: it worked with no impediment to its function, yet infected any disk inserted into it.
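A minimal sketch of this check in C. The exact timing source the virus sampled is not documented here, so treating it as the BIOS timer tick count is an assumption; the test itself is just the divisibility-by-8 rule described above:

    #include <stdio.h>

    /* Sketch of Stoned's display check: the message appears only when the
     * sampled boot-time value is divisible by 8, i.e. its low three bits
     * are zero. On clones the value at boot varied, giving a 1-in-8 chance;
     * on machines with a constant boot time the outcome was always the same. */
    static int should_display(unsigned int boot_ticks)
    {
        return (boot_ticks & 7) == 0;   /* divisible by 8 */
    }

    int main(void)
    {
        unsigned int t;
        int hits = 0;

        for (t = 0; t < 8000; t++)      /* simulate 8000 varying boot times */
            hits += should_display(t);

        printf("displayed %d times out of 8000 (expected 1000)\n", hits);
        return 0;
    }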
On hard disks, the original master boot record is moved to cylinder 0, head 0, sector 7. On floppy disks, the original boot sector is moved to cylinder 0, head 1, sector 3, which is the last root directory sector on 360 kB disks. The virus therefore "safely" overwrites that sector only if the root directory holds no more than 96 files, since the last directory sector is unused in that case.
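The 96-file condition can be verified with ordinary sector arithmetic. A short worked sketch in C, assuming the standard 360 kB FAT12 layout (40 cylinders, 2 heads, 9 sectors per track; one reserved boot sector, two 2-sector FATs, and a 112-entry root directory):

    #include <stdio.h>

    #define HEADS 2
    #define SPT   9   /* sectors per track on a 360 kB floppy */

    /* Convert a cylinder/head/sector address to a linear sector number. */
    static int chs_to_lba(int c, int h, int s)
    {
        return (c * HEADS + h) * SPT + (s - 1);
    }

    int main(void)
    {
        /* Where Stoned hides the original floppy boot sector. */
        int hidden = chs_to_lba(0, 1, 3);           /* = 11 */

        /* Boot sector (1) plus two FATs of 2 sectors each put the root
         * directory at linear sectors 5..11: 112 entries at 16 entries
         * per 512-byte sector is 7 sectors. */
        int dir_first = 1 + 2 * 2;                  /* = 5  */
        int dir_last  = dir_first + 112 / 16 - 1;   /* = 11 */

        printf("hidden copy at sector %d, last directory sector %d\n",
               hidden, dir_last);
        /* Sector 11 holds directory entries 97..112, so overwriting it is
         * harmless only while the root directory has at most 96 files. */
        return 0;
    }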
The PC was typically infected by booting from an infected diskette. Computers, at the time, would default to booting from the A: diskette drive if it had a diskette. The virus was spread when a floppy diskette was accessed with an infected computer. That diskette was now, itself, a source for further spread of the virus. This was much like a recessive gene - difficult to eliminate - because a user could have any number of infected diskettes and yet not have their systems infected with the virus unless they inadvertently boot from an infected diskette. Cleaning the computer without cleaning all diskettes left the user susceptible to a repeat infection. The method also furthered the spread of the virus in that borrowed diskettes, if placed into the system, were now able to carry the virus to a new host.
Variants
The virus image is very easily modified (patched); in particular a person with no knowledge of programming can alter the message displayed. Many variants of Stoned circulated, some only with different messages.
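As an illustration of how little is involved, the following sketch in C patches the message in a dumped 512-byte boot-sector image. The file name is hypothetical, and the sketch assumes the variant stores the text verbatim; it searches for the string rather than hard-coding an offset, and keeps the replacement the same length so the surrounding code is left untouched:

    #include <stdio.h>
    #include <string.h>

    #define SECTOR 512

    int main(void)
    {
        unsigned char img[SECTOR];
        const char *old_msg = "Your PC is now Stoned!";
        const char *new_msg = "Your PC is now Cloned!";   /* same length */
        size_t i, n = strlen(old_msg);
        FILE *f = fopen("bootsector.img", "r+b");         /* hypothetical dump */

        if (!f || fread(img, 1, SECTOR, f) != SECTOR)
            return 1;

        for (i = 0; i + n <= SECTOR; i++) {
            if (memcmp(img + i, old_msg, n) == 0) {
                memcpy(img + i, new_msg, n);   /* patch the bytes in place */
                fseek(f, 0, SEEK_SET);
                fwrite(img, 1, SECTOR, f);
                break;
            }
        }
        fclose(f);
        return 0;
    }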
Beijing, Bloody!
The virus has the string "Bloody! Jun. 4, 1989". On this date, the Tiananmen Square protests were suppressed by the People's Republic of China.
Swedish Disaster
The virus has the string "The Swedish Disaster".
Manitoba
Manitoba has no activation routine and does not store the original boot sector on floppies; it simply overwrites the original boot sector. 2.88 MB EHD floppies are corrupted by the virus.
Manitoba uses 2 KB of memory while resident.
NoInt, Bloomington, Stoned III
NoInt tries to stop programs from detecting it. This causes read errors if the computer tries to access the partition table. Systems infected with NoInt have a decrease of 2 kB in base memory.
Flame, Stamford
A variant of Stoned was called Flame (later unrelated sophisticated malware was given the same name). The early Flame uses 1 kB of DOS memory. It stores the original boot sector or master boot record at cylinder 25, head 1, sector 1 regardless of the media.
Flame saves the current month of the system when it is infected. When the month changes, Flame displays colored flames on the screen and overwrites the master boot record.
Angelina
Angelina has stealth mechanisms. On hard disks, the original master boot record is moved to cylinder 0, head 0, sector 9.
Angelina contains the following embedded text, not displayed by the virus: "Greetings from ANGELINA!!!/by Garfield/Zielona Gora" (Zielona Góra is a town in Poland).
In October 1995 Angelina was discovered in new factory-sealed Seagate Technology 5850 (850MB) IDE drives.
In 2007 a batch of Medion laptops sold through the Aldi supermarket chain appeared to be infected with Angelina. A Medion press release explained that the virus was not really present; rather, it was a spurious warning caused by a bug in the pre-installed antivirus software, Bullguard. A patch was released to fix the error. The Bullguard malfunction highlights one of the issues (along with loss of performance and frustrating pop-ups asking the user for money) with OEMs pre-installing what Microsoft internally referred to as "craplets" onto Windows PCs to offset the licensing costs of Windows, a practice widely condemned in the tech media, even by reporters who are usually friendly to Microsoft.
Bitcoin blockchain incident
On 15 May 2014, the signature of the Stoned virus was inserted into the bitcoin blockchain. This caused Microsoft Security Essentials to recognize copies of the blockchain as the virus, prompting it to remove the file in question and forcing the node to reload the blockchain from that point, continuing the cycle.
Only the signature of the virus had been inserted into the blockchain; the virus itself was not there, and if it were, it would not be able to function.
The situation was averted shortly thereafter, when Microsoft prevented the blockchain from being recognized as Stoned. Microsoft Security Essentials did not lose the ability to detect a real instance of Stoned.
See also
Brain (computer virus), an earlier boot sector virus
Michelangelo (computer virus), a boot sector virus based on Stoned
Comparison of computer viruses
References
Boot viruses
Hacking in the 1980s
894924 | https://en.wikipedia.org/wiki/ISO/IEC%2015504 | ISO/IEC 15504 | ISO/IEC 15504 Information technology – Process assessment, also termed Software Process Improvement and Capability Determination (SPICE), is a set of technical standards documents for the computer software development process and related business management functions. It is one of the joint International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) standards, which was developed by the ISO and IEC joint subcommittee, ISO/IEC JTC 1/SC 7.
ISO/IEC 15504 was initially derived from process lifecycle standard ISO/IEC 12207 and from maturity models like Bootstrap, Trillium and the Capability Maturity Model (CMM).
ISO/IEC 15504 has been superseded by ISO/IEC 33001:2015 Information technology – Process assessment – Concepts and terminology as of March, 2015.
Overview
ISO/IEC 15504 is the reference model for maturity models (consisting of capability levels, which in turn consist of process attributes, which further consist of generic practices). Assessors place the evidence they collect during an assessment against this model, allowing them to give an overall determination of an organization's capability for delivering products (software, systems, and IT services).
History
A working group was formed in 1993 to draft the international standard and used the acronym SPICE. SPICE initially stood for Software Process Improvement and Capability Evaluation, but in consideration of French concerns over the meaning of the word evaluation, SPICE was renamed Software Process Improvement and Capability Determination. SPICE is still used for the user group of the standard, and as the title of the annual conference. The first SPICE conference was held in Limerick, Ireland in 2000; SPICE 2003 was hosted by ESA in the Netherlands, SPICE 2004 in Portugal, SPICE 2005 in Austria, SPICE 2006 in Luxembourg, SPICE 2007 in South Korea, SPICE 2008 in Nuremberg, Germany, and SPICE 2009 in Helsinki, Finland.
The first versions of the standard focused exclusively on software development processes. This was expanded to cover all related processes in a software business, for example project management, configuration management, quality assurance, and so on. The list of processes covered grew to cover six areas: organizational, management, engineering, acquisition supply, support, and operations.
In a major revision to the draft standard in 2004, the process reference model was removed and is now related to the ISO/IEC 12207 (Software Lifecycle Processes). The issued standard now specifies the measurement framework and can use different process reference models. There are five general and industry models in use.
Part 5 specifies software process assessment and part 6 specifies system process assessment.
The latest work in the ISO standards working group includes creation of a maturity model, which is planned to become ISO/IEC 15504 part 7.
The standard
The Technical Report (TR) version of ISO/IEC TR 15504 was divided into 9 parts. The initial International Standard was recreated in 5 parts, as proposed by Japan when the TRs were published in 1997.
The International Standard (IS) version of ISO/IEC 15504 now comprises 6 parts. The 7th part is currently in an advanced Final Draft Standard form and work has started on part 8.
Part 1 of ISO/IEC TR 15504 explains the concepts and gives an overview of the framework.
Reference model
ISO/IEC 15504 contains a reference model. The reference model defines a process dimension and a capability dimension.
The process dimension in the reference model is not the subject of part 2 of ISO/IEC 15504, but part 2 refers to external process lifecycle standards including ISO/IEC 12207 and ISO/IEC 15288. The standard defines means to verify conformity of reference models.
Processes
The process dimension defines processes divided into the five process categories of:
customer-supplier
engineering
supporting
management
organization
With new parts being published, the process categories will expand, particularly for IT service process categories and enterprise process categories.
Capability levels and process attributes
For each process, ISO/IEC 15504 defines a capability level on a six-point scale: level 0 (incomplete), level 1 (performed), level 2 (managed), level 3 (established), level 4 (predictable), and level 5 (optimizing).
The capability of processes is measured using process attributes. The international standard defines nine process attributes:
1.1 Process performance
2.1 Performance management
2.2 Work product management
3.1 Process definition
3.2 Process deployment
4.1 Process measurement
4.2 Process control
5.1 Process innovation
5.2 Process optimization
Each process attribute consists of one or more generic practices, which are further elaborated into practice indicators to aid assessment performance.
Rating scale of process attributes
Each process attribute is assessed on a four-point (N-P-L-F) rating scale:
Not achieved (0–15%)
Partially achieved (>15–50%)
Largely achieved (>50–85%)
Fully achieved (>85–100%).
The rating is based upon evidence collected against the practice indicators, which demonstrate fulfillment of the process attribute.
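The boundaries above translate directly into threshold checks, and the attribute ratings in turn determine a process's capability level. A minimal sketch in C: the percentage cut-offs follow the list above, while the level-derivation rule (lower-level attributes Fully achieved, current-level attributes at least Largely achieved) is a common reading of the measurement framework and should be treated as an assumption here, as are the sample percentages:

    #include <stdio.h>

    /* Map an achievement percentage onto the N-P-L-F scale defined above. */
    static char rate(double pct)
    {
        if (pct <= 15.0) return 'N';   /* Not achieved       (0-15%)    */
        if (pct <= 50.0) return 'P';   /* Partially achieved (>15-50%)  */
        if (pct <= 85.0) return 'L';   /* Largely achieved   (>50-85%)  */
        return 'F';                    /* Fully achieved     (>85-100%) */
    }

    /* Capability level of each attribute, indexed in the order listed
     * above: PA1.1, PA2.1, PA2.2, PA3.1, PA3.2, PA4.1, PA4.2, PA5.1, PA5.2. */
    static const int level_of[9] = { 1, 2, 2, 3, 3, 4, 4, 5, 5 };

    /* Assumed derivation rule: to reach level n, attributes below level n
     * must be rated F and attributes at level n at least L. */
    static int capability_level(const char r[9])
    {
        int target, i;
        for (target = 1; target <= 5; target++)
            for (i = 0; i < 9; i++) {
                if (level_of[i] < target && r[i] != 'F')
                    return target - 1;
                if (level_of[i] == target && r[i] != 'L' && r[i] != 'F')
                    return target - 1;
            }
        return 5;
    }

    int main(void)
    {
        /* Hypothetical attribute achievement percentages for one process. */
        double pct[9] = { 95, 88, 70, 40, 20, 10, 5, 0, 0 };
        char r[9];
        int i;

        for (i = 0; i < 9; i++) {
            r[i] = rate(pct[i]);
            printf("attribute %d (level %d): %5.1f%% -> %c\n",
                   i + 1, level_of[i], pct[i], r[i]);
        }
        printf("capability level: %d\n", capability_level(r));   /* 2 here */
        return 0;
    }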
Assessments
ISO/IEC 15504 provides a guide for performing an assessment.
This includes:
the assessment process
the model for the assessment
any tools used in the assessment
Assessment process
Performing assessments is the subject of parts 2 and 3 of ISO/IEC 15504. Part 2 is the normative part and part 3 gives a guidance to fulfill the requirements in part 2.
One of the requirements is to use a conformant assessment method for the assessment process. The actual method is not specified in the standard although the standard places requirements on the method, method developers and assessors using the method. The standard provides general guidance to assessors and this must be supplemented by undergoing formal training and detailed guidance during initial assessments.
The assessment process can be generalized as the following steps:
initiate an assessment (assessment sponsor)
select assessor and assessment team
plan the assessment, including processes and organizational unit to be assessed (lead assessor and assessment team)
pre-assessment briefing
data collection
data validation
process rating
reporting the assessment result
An assessor can collect data on a process by various means, including interviews with persons performing the process, collecting documents and quality records, and collecting statistical process data. The assessor validates this data to ensure it is accurate and completely covers the assessment scope. The assessor assesses this data (using his expert judgment) against a process's base practices and the capability dimension's generic practices in the process rating step. Process rating requires some exercising of expert judgment on the part of the assessor and this is the reason that there are requirements on assessor qualifications and competency. The process rating is then presented as a preliminary finding to the sponsor (and preferably also to the persons assessed) to ensure that they agree that the assessment is accurate. In a few cases, there may be feedback requiring further assessment before a final process rating is made.
Assessment model
The process assessment model (PAM) is the detailed model used for an actual assessment. This is an elaboration of the process reference model (PRM) provided by the process lifecycle standards.
The process assessment model (PAM) in part 5 is based on the process reference model (PRM) for software: ISO/IEC 12207.
The process assessment model in part 6 is based on the process reference model for systems: ISO/IEC 15288.
The standard allows other models to be used instead, if they meet ISO/IEC 15504's criteria, which include a defined community of interest and meeting the requirements for content (i.e. process purpose, process outcomes and assessment indicators).
Tools used in the assessment
There exist several assessment tools. The simplest comprise paper-based tools. In general, they are laid out to incorporate the assessment model indicators, including the base practice indicators and generic practice indicators. Assessors write down the assessment results and notes supporting the assessment judgment.
There are a limited number of computer based tools that present the indicators and allow users to enter the assessment judgment and notes in formatted screens, as well as automate the collated assessment result (i.e. the process attribute ratings) and creating reports.
Assessor qualifications and competency
For a successful assessment, the assessor must have a suitable level of the relevant skills and experience.
These skills include:
personal qualities such as communication skills.
relevant education and training and experience.
specific skills for particular categories, e.g. management skills for the management category.
ISO/IEC 15504 related training and experience in process capability assessments.
The competency of assessors is the subject of part 3 of ISO/IEC 15504.
In summary, the ISO/IEC 15504 specific training and experience for assessors comprise:
completion of a 5-day lead assessor training course
performing at least one assessment successfully under supervision of a competent lead assessor
performing at least one assessment successfully as a lead assessor under the supervision of a competent lead assessor. The competent lead assessor defines when the assessment is successfully performed. There exist schemes for certifying assessors and guiding lead assessors in making this judgement.
Uses
ISO/IEC 15504 can be used in two contexts:
Process improvement, and
Capability determination (= evaluation of supplier's process capability).
Process improvement
ISO/IEC 15504 can be used to perform process improvement within a technology organization. Process improvement is always difficult, and initiatives often fail, so it is important to understand the initial baseline level (process capability level), and to assess the situation after an improvement project. ISO 15504 provides a standard for assessing the organization's capacity to deliver at each of these stages.
In particular, the reference framework of ISO/IEC 15504 provides a structure for defining objectives, which facilitates specific programs to achieve these objectives.
Process improvement is the subject of part 4 of ISO/IEC 15504. It specifies requirements for improvement programmes and provides guidance on planning and executing improvements, including a description of an eight step improvement programme. Following this improvement programme is not mandatory and several alternative improvement programmes exist.
Capability determination
An organization considering outsourcing software development needs to have a good understanding of the capability of potential suppliers to deliver.
ISO/IEC 15504 (Part 4) can also be used to inform supplier selection decisions. The ISO/IEC 15504 framework provides a framework for assessing proposed suppliers, as assessed either by the organization itself, or by an independent assessor.
The organization can determine a target capability for suppliers, based on the organization's needs, and then assess suppliers against a set of target process profiles that specify this target capability. Part 4 of the ISO/IEC 15504 specifies the high level requirements and an initiative has been started to create an extended part of the standard covering target process profiles. Target process profiles are particularly important in contexts where the organization (for example, a government department) is required to accept the cheapest qualifying vendor. This also enables suppliers to identify gaps between their current capability and the level required by a potential customer, and to undertake improvement to achieve the contract requirements (i.e. become qualified). Work on extending the value of capability determination includes a method called Practical Process Profiles - which uses risk as the determining factor in setting target process profiles. Combining risk and processes promotes improvement with active risk reduction, hence reducing the likelihood of problems occurring.
Acceptance of ISO/IEC 15504
ISO/IEC 15504 has been successful as:
ISO/IEC 15504 is available through National Standards Bodies.
It has the support of the international community.
Over 4,000 assessments have been performed to date.
Major sectors are leading the pace such as automotive, space and medical systems with industry relevant variants.
Domain-specific models like Automotive SPICE and SPICE 4 SPACE can be derived from it.
There have been many international initiatives to support take-up such as SPICE for small and very small entities.
On the other hand, ISO/IEC 15504 may not be as popular as CMMI for the following reasons:
ISO/IEC 15504 is not available as a free download, but must be purchased from the ISO. (Automotive SPICE, on the other hand, can be freely downloaded from the link supplied below.) CMM, and later CMMI, were originally available as free downloads from the SEI website. However, beginning with CMMI v2.0, a license must be purchased from the SEI.
The CMM, and later CMMI, were originally sponsored by the US Department of Defense (DoD). Now, however, DoD no longer funds CMMI or mandates its use.
The CMM was created first, and reached critical 'market' share before ISO 15504 became available.
The CMM has subsequently been replaced by the CMMI, which incorporates many of the ideas of ISO/IEC 15504, but also retains the benefits of the CMM.
Like the CMM, ISO/IEC 15504 was created in a development context, making it difficult to apply in a service management context. But work has started to develop an ISO/IEC 20000-based process reference model (ISO/IEC 20000-4) that can serve as a basis for a process assessment model. This is planned to become part 8 to the standard (ISO/IEC 15504-8). In addition there are methods available that adapt its use to various contexts.
See also
ISO/IEC JTC 1/SC 7
Further reading
Cass, A. et al. “SPiCE in Action - Experiences in Tailoring and Extension.” Proceedings. 28th Euromicro Conference. IEEE Comput. Soc, 2003. Print.
Eito-Brun, Ricardo. “Comparing SPiCE for Space (S4S) and CMMI-DEV: Identifying Sources of Risk from Improvement Models.” Communications in Computer and Information Science. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. 84–94. Print.
International Conference on Software Process Improvement and Capability Determination (2011-2018)
Mesquida, Antoni Lluís, Antònia Mas, and Esperança Amengual. “An ISO/IEC 15504 Security Extension.” Communications in Computer and Information Science. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. 64–72. Print.
Schlager, Christian et al. “Hardware SPICE Extension for Automotive SPICE 3.1.” Communications in Computer and Information Science. Cham: Springer International Publishing, 2018. 480–491. Print.
External links
ISO/IEC 33001:2015 - Information technology — Process assessment — Concepts and terminology
VDA QMC Homepage for Automotive SPICE
References
Software engineering standards
Software development process
15504
28133958 | https://en.wikipedia.org/wiki/Michael%20T.%20Goodrich | Michael T. Goodrich | Michael T. Goodrich is a mathematician and computer scientist. He is a distinguished professor of computer science and the former chair of the department of computer science in the Donald Bren School of Information and Computer Sciences at the University of California, Irvine.
University career
He received his B.A. in Mathematics and Computer Science from Calvin College in 1983 and his PhD in Computer Sciences from Purdue University in 1987 under the supervision of Mikhail Atallah. He then served as a professor in the Department of Computer Science at Johns Hopkins University until 2001 and has since been a Chancellor's Professor at the University of California, Irvine in the Donald Bren School of Information and Computer Sciences.
Awards and honors
Goodrich is a Fellow of the American Association for the Advancement of Science, a Fulbright Scholar, a Fellow of the IEEE, and a Fellow of the Association for Computing Machinery. In 2018 he was elected as a foreign member of the Royal Danish Academy of Sciences and Letters.
He is also a recipient of the IEEE Computer Society Technical Achievement Award in 2006, the DARPA Spirit of Technology Transfer Award, and the ACM Recognition of Service Award.
References
External links
Michael T. Goodrich
Year of birth missing (living people)
Living people
American computer scientists
Researchers in geometric algorithms
Graph drawing people
Computer security academics
Calvin University alumni
Purdue University alumni
University of California, Irvine faculty
Johns Hopkins University faculty
Fellows of the Association for Computing Machinery
Fellows of the American Association for the Advancement of Science
Fellow Members of the IEEE
Members of the Royal Danish Academy of Sciences and Letters
18146174 | https://en.wikipedia.org/wiki/China%E2%80%93Finland%20relations | China–Finland relations | Finnish-Chinese relations are the foreign relations between Finland and China.
History
Along with Sweden and Denmark, Finland was one of the first Western countries to recognize the People's Republic of China and form diplomatic relations with the country in 1950. The embassy in Beijing was opened in April 1952, and the first resident Finnish ambassador to China, Helge von Knorring, presented his letter of credence to Mao Zedong on 9 May 1952.
Later that same year, an economic department headed by Olavi J. Mattila was opened at the embassy to foster the development of trade relations. As a consequence, Finland became the first capitalist country to sign a bilateral trade agreement with the People's Republic of China in 1953.
These steps, as well as Finland's staunch support for the PRC's membership in the UN, formed a solid basis for the nations' relations well into the 1980s. Since the early 1990s, there has been at least one official minister-level state visit from Finland to China each year.
Human rights
Hong Kong national security law
In June 2020, Finland openly opposed the Hong Kong national security law.
Trade
Finland and China have had an agreement on economic, industrial, scientific and technological co-operation since 1973; the agreement was last revised in 2005. The two principal trade organizations between the countries are the Finland-China Trade Association and the China Council for the Promotion of International Trade (CCPIT).
One of the fastest growing areas of trade between the two countries is in environmental protection and information technology.
Information technology
Linux, the open-source operating system kernel first developed by Finnish software engineer Linus Torvalds, is playing a major role in the development of China's IT sector – and in the country's rapid industrial development as a whole. Researcher, investor and open-source promoter Mikko Puhakka believes that Linux has established a good foundation for Finnish-Chinese cooperation.
The roots of Chinese open-source development are in Finland. Professor Gong Min started using and developing Linux while doing his PhD research at the Helsinki University of Technology in the early 1990s. Returning to China in 1996, he took Linux with him on 20 floppy disks. This is seen as the birth of Chinese open-source development, and Professor Gong Min is now among the 20 most influential people in China.
The first Linux server was started in May 1997 in Changzhou, under the domain name cLinux.ml.org. By 2008, a local distribution of Linux, Red Flag Linux, had become a compulsory subject in 1,000 Chinese universities. The Chinese view open source code as a critical factor in the country's development.
Nokia is the largest Finnish investor in China.
Sister cities
Shanghai-Espoo
The cities of Espoo and Shanghai have been sister cities since 1998. Ever since, various collaboration programs have been carried out in the areas of science and technology, urban development, culture, education, health care and environmental protection. The most recent high-level meeting took place in June 2012, when the Mayor of Espoo, Jukka Mäkelä, met the Mayor of Shanghai, Han Zheng, during a visit to China.
In recent years, Finland-China relations have been strengthened by various Espoo-Shanghai actions.
Aalto University signed a Memorandum of Understanding with Tongji University, and the Aalto Design Factory Shanghai was subsequently established on the Tongji University campus in spring 2010.
The China–Finland ICT Alliance was initiated in April 2009 with workshops in Beijing and Shanghai, opened by Matti Vanhanen, former Prime Minister of Finland, and Wan Gang, Minister of Science and Technology of China. The initiative is run by TIVIT Oy – Strategic Centre for Science, Technology and Innovation in ICT.
The China–Finland Golden Bridge Innovation Center is an Espoo-based transparent service platform supported by both countries' governments. In March 2010, the Golden Bridge Memorandum of Understanding was signed by Chinese Vice Minister of Commerce Gao Hucheng and Finnish Minister of Economic Affairs Mauri Pekkarinen.
Rovio, the Espoo-based Finnish video game developer and entertainment company that created Angry Birds, opened its first overseas office in Shanghai's Baoshan district in October 2011. In July 2012, Rovio opened an Angry Birds brand store in Shanghai.
Zhongguancun ZPark Software Park, China's largest science and technology park, established a presence in Espoo in 2012.
Spying in Finland
China and Russia are suspected of large-scale spying on the IT networks of the Finnish Ministry for Foreign Affairs. The spying focused on data traffic between Finland and the European Union and is believed to have continued for four years. It was uncovered in spring 2013, and the Finnish Security Intelligence Service (Supo) was investigating the breach.
References
External links
Chinese Embassy in Finland
Finnish Embassies in China
Finland China Society
Finland
China, People's Republic of |
916132 | https://en.wikipedia.org/wiki/Nikon | Nikon | Nikon Corporation, also known simply as Nikon, is a Japanese multinational corporation headquartered in Tokyo, Japan, specializing in optics and imaging products. The companies held by Nikon form the Nikon Group.
Nikon's products include cameras, camera lenses, binoculars, microscopes, ophthalmic lenses, measurement instruments, rifle scopes, spotting scopes, and the steppers used in the photolithography steps of semiconductor fabrication, of which it is the world's second largest manufacturer. The company was the eighth-largest chip equipment maker as reported in 2017. It has also diversified into new areas such as 3D printing and regenerative medicine to compensate for the shrinking digital camera market.
Among Nikon's notable product lines are Nikkor imaging lenses (for F-mount cameras, large format photography, photographic enlargers, and other applications), the Nikon F-series of 35 mm film SLR cameras, the Nikon D-series of digital SLR cameras, the Coolpix series of compact digital cameras, and the Nikonos series of underwater film cameras. Nikon's main competitors in camera and lens manufacturing include Canon, Sony, Fujifilm, Panasonic, Pentax, and Olympus.
Founded on July 25, 1917 as Nippon Kōgaku Kōgyō Kabushikigaisha ("Japan Optical Industries Co., Ltd."), the company was renamed Nikon Corporation, after its cameras, in 1988. Nikon is a member of the Mitsubishi group of companies (keiretsu).
History
Nikon Corporation was established on 25 July 1917 when three leading optical manufacturers merged to form a comprehensive, fully integrated optical company known as Nippon Kōgaku Tōkyō K.K. Over the next sixty years, this growing company became a manufacturer of optical lenses (including those for the first Canon cameras) and equipment used in cameras, binoculars, microscopes and inspection equipment. During World War II the company operated thirty factories with 2,000 employees, manufacturing binoculars, lenses, bomb sights, and periscopes for the Japanese military.
Reception outside Japan
After the war Nippon Kōgaku reverted to producing its civilian product range in a single factory. In 1948, the first Nikon-branded camera was released, the Nikon I. Nikon lenses were popularised by the American photojournalist David Douglas Duncan. Duncan was working in Tokyo when the Korean War began. Duncan had met a young Japanese photographer, Jun Miki, who introduced Duncan to Nikon lenses. From July 1950 to January 1951, Duncan covered the Korean War. Fitting Nikon optics (especially the NIKKOR-P.C 1:2 f=8,5 cm) to his Leica rangefinder cameras produced high contrast negatives with very sharp resolution at the centre field.
Names and brands
Founded in 1917 as Nippon Kōgaku Kōgyō Kabushikigaisha ("Japan Optical Industries Corporation"), the company was renamed Nikon Corporation, after its cameras, in 1988. The name Nikon, which dates from 1946, was originally intended only for its small-camera line and was spelled "Nikkon", adding an "n" to the "Nikko" brand name. The similarity to the Carl Zeiss AG brand "Ikon" caused some early problems in Germany, as Zeiss complained that Nikon violated its trademark. From 1963 to 1968 the Nikon F in particular was therefore labeled 'Nikkor'.
The Nikkor brand was introduced in 1932, a westernised rendering of the earlier Nikkō, an abbreviation of the company's original full name (Nikkō coincidentally means "sunlight" and is the name of a Japanese town). Nikkor is the Nikon brand name for its lenses.
Another early brand used on microscopes was Joico, an abbreviation of "Japan Optical Industries Co". Expeed is the brand Nikon has used for its image processors since 2007.
Rise of the Nikon F series
The Nikon SP and other 1950s and 1960s rangefinder cameras competed directly with models from Leica and Zeiss. However, the company quickly ceased developing its rangefinder line to focus its efforts on the Nikon F single-lens reflex line of cameras, which was successful upon its introduction in 1959. For nearly 30 years, Nikon's F-series SLRs were the most widely used small-format cameras among professional photographers, and they were also used by the U.S. space program: first in 1971 on Apollo 15, as a lighter and smaller alternative to the Hasselblad cameras used in the Mercury, Gemini and Apollo programs (12 of which are still on the Moon), then on Skylab in 1973, and again in 1981.
Nikon popularized many features in professional SLR photography, such as the modular camera system with interchangeable lenses, viewfinders, motor drives, and data backs; integrated light metering and lens indexing; electronic strobe flashguns instead of expendable flashbulbs; electronic shutter control; evaluative multi-zone "matrix" metering; and built-in motorized film advance. However, as autofocus SLRs became available from Minolta and others in the mid-1980s, Nikon's line of manual-focus cameras began to seem out of date.
Despite introducing one of the first autofocus models, the slow and bulky F3AF, the company's determination to maintain lens compatibility with its F-mount prevented rapid advances in autofocus technology. Canon introduced a new type of lens-camera interface with its entirely electronic Canon EOS cameras and Canon EF lens mount in 1987. The much faster lens performance permitted by Canon's electronic focusing and aperture control prompted many professional photographers (especially in sports and news) to switch to the Canon system through the 1990s.
Post-millennium film camera production
Once Nikon introduced affordable consumer-level DSLRs such as the Nikon D70 in the mid-2000s, sales of its consumer and professional film cameras fell rapidly, following the general trend in the industry. In January 2006, Nikon announced it would stop making most of its film camera models and all of its large format lenses, and focus on digital models.
Nevertheless, Nikon remained the only major camera manufacturer still making film SLR cameras for a long time. The high-end Nikon F6 and the entry-level FM10 remained in production until October 2020.
Digital photography
Digital single-lens reflex and point and shoot cameras
Nikon created some of the first digital SLRs (DSLRs, Nikon NASA F4) for NASA, used on the Space Shuttle since 1991. After a 1990s partnership with Kodak to produce digital SLR cameras based on existing Nikon film bodies, Nikon released the Nikon D1 SLR under its own name in 1999. Although it used an APS-C-size light sensor only 2/3 the size of a 35 mm film frame (later called a "DX sensor"), the D1 was among the first digital cameras to have sufficient image quality and a low enough price for some professionals (particularly photojournalists and sports photographers) to use it as a replacement for a film SLR. The company also has a Coolpix line, which grew as consumer digital photography became increasingly prevalent through the early 2000s.
Through the mid-2000s, Nikon's line of professional and enthusiast DSLRs and lenses, including their backward-compatible AF-S lens line, remained in second place behind Canon in SLR camera sales, and Canon had several years' lead in producing professional DSLRs with light sensors as large as traditional 35 mm film frames. All Nikon DSLRs from 1999 to 2007, by contrast, used the smaller DX-size sensor.
Management changes at Nikon in 2005 led to new camera designs such as the full-frame Nikon D3 in late 2007, the Nikon D700 a few months later, and mid-range SLRs. Nikon regained much of its reputation among professional and amateur enthusiast photographers as a leading innovator in the field, especially because of the speed, ergonomics, and low-light performance of its latest models. The mid-range Nikon D90, introduced in 2008, was also the first SLR camera to record video. Since then, video mode has been introduced to many more Nikon and non-Nikon DSLR cameras, including the Nikon D3S, Nikon D7000, Nikon D5100, Nikon D3100 and Nikon D3200.
More recently, Nikon has released a photograph and video editing suite called ViewNX to browse, edit, merge and share images and videos. Despite the market growth of mirrorless interchangeable-lens cameras, Nikon has not neglected its F-mount single-lens reflex cameras, releasing professional DSLRs such as the D780 and the D6 in 2020.
Mirrorless interchangeable-lens cameras
In reaction to the growing market for mirrorless cameras, Nikon released its first mirrorless interchangeable-lens cameras, together with a new lens mount, in 2011. The system was called Nikon 1, and its first bodies were the Nikon 1 J1 and the V1. The system was built around a 1-inch (CX) format image sensor with a 2.7x crop factor. This format was small compared to those of its competitors, resulting in a loss of image quality and dynamic range and fewer possibilities for restricting depth of field. This is probably why the system never became truly successful, and in 2018 Nikon officially discontinued it, after three years without a new camera body (the last was the Nikon 1 J5).
Also in 2018, Nikon introduced a whole new mirrorless system to its lineup: the Nikon Z system. The first cameras to use it were the Z 6 and the Z 7, both with a full-frame (FX) sensor format, in-body image stabilization (IBIS) and a built-in electronic viewfinder. The Z-mount is not only for FX cameras: in 2019 Nikon introduced the Z 50 with a DX-format sensor, without IBIS but compatible with every Z-mount lens. The handling, ergonomics and button layout are similar to those of Nikon DSLR cameras, easing the transition for those switching from them. This shows that Nikon is putting more of its focus on its mirrorless line.
In 2020 Nikon updated both the Z 6 and the Z 7. The updated models are called the Z 6 II and the Z 7 II. The improvements over the original models include the new EXPEED 6 processor, an added card slot, improved video and AF features, higher burst rates, battery grip support and USB-C power delivery.
In 2021, Nikon released two mirrorless cameras, the Z fc and the Z 9. The Nikon Z fc is the second Z-series APS-C (DX) mirrorless camera in the lineup, designed to evoke the company's famous FM2 SLR from the 1980s. It offers manual controls, including dedicated dials for shutter speed, exposure compensation and ISO. The Z 9 became Nikon's new flagship product, succeeding the D6 and marking the start of a new era of Nikon cameras. It includes a 46-megapixel full-frame (FX) format stacked CMOS sensor which is stabilized and has a very fast readout speed, making a mechanical shutter not only unneeded but absent from the camera. Along with the sensor, the 3.7-million-dot, 760-nit EVF, the 30 fps continuous burst at full resolution with a buffer of 1000+ compressed raw photos, 4K 120 fps ProRes internal recording, 8K 30 fps internal recording and the 120 Hz subject-recognition AF system make it one of the most advanced cameras on the market, with its main rivals being the Canon EOS R3 and the Sony α1 (as of February 2022).
Movie camera production
Although few models were introduced, Nikon made movie cameras as well. The R10 and R8 Super Zoom Super 8 models (introduced in 1973) were the top of the line and the company's last attempt in the amateur movie field. The cameras had a special gate and claw system to improve image steadiness and overcome a major drawback of the Super 8 cartridge design. The R10 model has a high-speed 10X macro zoom lens.
Unlike other brands, Nikon never attempted to offer projectors or their accessories.
Thai operations
Nikon has shifted much of its manufacturing to Thailand, with some production (especially of Coolpix cameras and some low-end lenses) in Indonesia. The company constructed a factory in Ayutthaya, north of Bangkok, Thailand, in 1991. By 2000, it had 2,000 employees. Steady growth over the next few years and an increase of floor space from the original 19,400 square meters (208,827 square feet) to 46,200 square meters (497,300 square feet) enabled the factory to produce a wider range of Nikon products. By 2004, it had more than 8,000 workers.
The range of products produced at Nikon Thailand includes plastic molding, optical parts, painting, printing, metal processing, plating, spherical lens processing, aspherical lens processing, prism processing, electrical and electronic mounting, and silent wave motor and autofocus unit production.
As of 2009, all of Nikon's DX-format DSLR cameras and the D600, a prosumer FX camera, are produced in Thailand, while their professional and semi-professional Nikon FX-format (full-frame) cameras (D700, D3, D3S, D3X, D4, D800 and the retro-styled Df) are built in Japan, in the city of Sendai. The Thai facility also produces most of Nikon's digital "DX" zoom lenses, as well as numerous other lenses in the Nikkor line.
Nikon-Essilor Co. Ltd.
In 1999, Nikon and Essilor signed a memorandum of understanding to form a global strategic alliance in corrective lenses by forming a 50/50 joint venture in Japan called Nikon-Essilor Co. Ltd.
The main purpose of the joint venture is to further strengthen the corrective lens business of both companies.
This was to be achieved through the integrated strengths of Nikon's strong brand, backed by advanced optical technology and a strong sales network in the Japanese market, coupled with the high productivity and worldwide marketing and sales network of Essilor, the world leader in the industry.
Nikon-Essilor Co. Ltd. started its business in January 2000, responsible for research, development, production and sales mainly for ophthalmic optics.
Recent development
Revenue from Nikon's camera business dropped 30% in the three years prior to fiscal 2015. In 2013, it forecast the first drop in sales of interchangeable-lens cameras since Nikon's first digital SLR in 1999. The company's net profit fell from a peak of ¥75.4 billion (fiscal 2007) to ¥18.2 billion for fiscal 2015. Nikon plans to reassign over 1,500 employees, resulting in job cuts of 1,000, mainly in the semiconductor lithography and camera businesses, by 2017 as the company shifts focus to its medical and industrial devices businesses for growth.
Film cameras
In January 2006 Nikon announced the discontinuation of all but two models of its film cameras, focusing its efforts on the digital camera market. It continues to sell the fully manual FM10, and still offers the high-end fully automatic F6. Nikon has also committed to service all the film cameras for a period of ten years after production ceases.
Film 35 mm SLR cameras with manual focus
High-end (Professional – Intended for professional use, heavy duty and weather resistance)
Nikon F series (1959, known in Germany for legal reasons as the Nikkor F)
Nikon F2 series (1971)
Nikon F3 series (1980)
Midrange
Nikkorex series (1960)
Nikkormat F series (1965, known in Japan as the Nikomat F series)
Nikon FM (1977)
Nikon FM2 series (1982)
Nikon FM10 (1995)
Nikon FM3A (2001)
Midrange with electronic features
Nikkormat EL series (1972, known in Japan as the Nikomat EL series)
Nikon EL2 (1977)
Nikon FE (1978)
Nikon FE2 (1983)
Nikon FA (1983)
Nikon F-601M (1990, known in North America as the N6000)
Nikon FE10 (1996)
Entry-level (Consumer)
Nikon EM (1979)
Nikon FG (1982)
Nikon FG-20 (1984)
Nikon F-301 (1985, known in North America as the N2000)
Film APS SLR cameras
Nikon Pronea 600i / Pronea 6i (1996)
Nikon Pronea S (1997)
Film 35 mm SLR cameras with autofocus
High-end (Professional – Intended for professional use, heavy duty and weather resistance)
Nikon F3AF (1983, modified F3 body with Autofocus Finder DX-1)
Nikon F4 (1988) – (World's first professional auto-focus SLR camera and world's first professional SLR camera with a built-in motor drive)
Nikonos RS (1992) (Professional when reviewed in underwater conditions) – (World's first underwater auto-focus SLR camera)
Nikon F5 (1996)
Nikon F6 (2004)
High-end (Prosumer – Intended for pro-consumers who want the main mechanical/electronic features of the professional line but don't need the same heavy-duty construction and weather resistance)
Nikon F-501 (1986, known in North America as the N2020)
Nikon F-801 (1988, known in the U.S. as the N8008)
Nikon F-801S (1991, known in the U.S. as the N8008S)
Nikon F90 (1992, known in the U.S. as the N90)
Nikon F90X (1994, known in the U.S. as the N90S)
Nikon F80 (2000, known in the U.S. as the N80)
Nikon F100 (1999)
Mid-range (Consumer)
Nikon F-601 (1990, known in the U.S. as the N6006)
Nikon F70 (1994, known in the U.S. as the N70)
Nikon F75 (2003, known in the U.S. as the N75)
Entry-level (Consumer)
Nikon F-401 (1987, known in the U.S. as the N4004)
Nikon F-401S (1989, known in the U.S. as the N4004S)
Nikon F-401X (1991, known in the U.S. as the N5005)
Nikon F50 (1994, known in the U.S. as the N50)
Nikon F60 (1999, known in the U.S. as the N60)
Nikon F65 (2000, known in the U.S. as the N65)
Nikon F55 (2002, known in the U.S. as the N55)
Professional Rangefinder cameras
Nikon I (1948)
Nikon M (1949)
Nikon S (1951)
Nikon S2 (1954)
Nikon SP (1957)
Nikon S3 (1958)
Nikon S4 (1959) (entry-level)
Nikon S3M (1960)
Nikon S3 2000 (2000)
Nikon SP Limited Edition (2005)
Compact cameras
Between 1983 and the early 2000s, Nikon made a broad range of compact cameras. Nikon first named the cameras with a series name (like the L35/L135-series, the RF/RD-series, the W35-series, the EF or the AW-series). In later production cycles, the cameras were double-branded with a series name on the one hand and a sales name on the other. Sales names were, for example, Zoom-Touch for cameras with a wide zoom range, Lite-Touch for ultra-compact models, Fun-Touch for easy-to-use cameras and Sport-Touch for splash-water resistance. After the late 1990s, Nikon dropped the series names and continued only with the sales names. Nikon's APS cameras were all named Nuvis.
The cameras came in all price ranges, from entry-level fixed-lens cameras to the top models Nikon 35Ti and 28Ti, with titanium bodies and 3D Matrix Metering.
Movie cameras
Double 8 (8mm)
NIKKOREX 8 (1960)
NIKKOREX 8F (1963)
Super 8
Nikon Super Zoom 8 (1966)
Nikon 8X Super Zoom (1967)
Nikon R8 Super Zoom (1973)
Nikon R10 Super Zoom (1973)
Professional Underwater cameras
Nikonos I Calypso (1963, originally known in France as the Calypso/Nikkor)
Nikonos II (1968)
Nikonos III (1975)
Nikonos IV-A (1980)
Nikonos V (1984)
Nikonos RS (1992) (World's first underwater Auto-Focus SLR camera)
Digital cameras
Nikon's raw image format is NEF, for Nikon Electronic File. The "DSCN" prefix for image files stands for "Digital Still Camera – Nikon."
Digital compact cameras
The Nikon Coolpix series are digital compact cameras produced in many variants: superzoom, bridge, travel-zoom, miniature compact and waterproof/rugged cameras. The top compact models form the "Performance" series, indicated by a "P" prefix.
Larger sensor compact cameras
Coolpix series models since 2008 are listed.
Nikon Coolpix P6000, 2008-08-07 (CCD, 14 megapixels, 4x zoom)
Nikon Coolpix P7000, 2010-09-08 (CCD, 10.1 megapixels, 7x zoom)
Nikon Coolpix P7100, 2011-08-24 (roughly same specifications as predecessor)
Nikon Coolpix P7700
Nikon Coolpix A, 2013-03-05 (16MP DX-CMOS sensor)
Nikon Coolpix A900
Nikon Coolpix P7800
Light-weight fast lens compact cameras
Nikon Coolpix P300
Nikon Coolpix P310
Nikon Coolpix P330
Nikon Coolpix P340
Bridge cameras
Nikon Coolpix L810, February 2012 – 16 MP, 26x optical zoom, no Wi-Fi, fixed LCD, ISO 80–1600
Nikon Coolpix L820, January 2013 – 16 MP, 30x optical zoom, no Wi-Fi, fixed LCD, ISO 125–3200
Nikon Coolpix L830, January 2014 – 16 MP, 34x optical zoom with 68x Dynamic Fine Zoom, no Wi-Fi, tilting LCD, ISO 125–1600 (3200 in Auto)
Nikon Coolpix L840, February 2015 – 16 MP, 38x optical zoom with 76x Dynamic Fine Zoom, built-in Wi-Fi and NFC (Near Field Communication), 3-inch high-resolution tilting LCD, ISO 125–1600 (ISO 3200, 6400 available in Auto mode)
Nikon Coolpix P500, February 2011 – 12.1 MP, 36x optical zoom, tilt LCD, ISO 160–3200
Nikon Coolpix P510, February 2012 – 16.1 MP, 41.7x optical zoom (24–1000 mm), no Wi-Fi, vari-angle LCD, ISO 100–3200
Nikon Coolpix P520, January 2013 – 18.1 MP, 42x optical zoom, optional Wi-Fi, vari-angle LCD, ISO 80–3200
Nikon Coolpix P530, February 2014 – 16.1 MP, 42x optical zoom & 84x Dynamic Fine Zoom, optional Wi-Fi, fixed LCD, ISO 100–1600 (ISO 3200, 6400 in PASM modes)
Nikon Coolpix P600, February 2014 – 16.1 MP, 60x optical zoom and 120x Dynamic Fine Zoom, built-in Wi-Fi, vari-angle LCD, ISO 100–1600 (ISO 3200, 6400 in PASM modes)
Nikon Coolpix P610
Nikon Coolpix B500, February 2016 – 16 MP, 40x optical zoom, tilt LCD, ISO 160–6400
Nikon Coolpix P900
Nikon Coolpix P950
Nikon Coolpix P1000
Mirrorless interchangeable-lens cameras
Nikon Z series – Nikon Z-mount lenses
Nikon Z 7, FX/Full Frame sensor, August 23, 2018
Nikon Z 6, FX/Full Frame sensor, August 23, 2018
Nikon Z 50, DX/APS-C sensor, October 10, 2019
Nikon Z 5, FX/Full Frame sensor, July 21, 2020
Nikon Z 6II, FX/Full Frame sensor, October 14, 2020
Nikon Z 7II, FX/Full Frame sensor, October 14, 2020
Nikon Z fc, DX/APS-C sensor, July 2021
Nikon Z 9, FX/Full Frame sensor, October 28, 2021
Nikon 1 series – CX sensor, Nikon 1 mount lenses
Nikon 1 J1, September 21, 2011: 10 MP
Nikon 1 V1, September 21, 2011: 10 MP
Nikon 1 J2, August 10, 2012: 10 MP
Nikon 1 V2, October 24, 2012: 14 MP
Nikon 1 J3, January 8, 2013: 14 MP
Nikon 1 S1, January 8, 2013: 10 MP
Nikon 1 AW1: 14 MP
Nikon 1 V3: 18 MP, tilt LCD
Nikon 1 J4: 18 MP
Nikon 1 J5: 20 MP
Digital single lens reflex cameras
High-end (Professional – Intended for professional use, heavy duty and weather resistance)
Nikon D1, DX sensor, June 15, 1999 – Discontinued
Nikon D1X, DX sensor, February 5, 2001 – Discontinued
Nikon D1H, DX sensor, high speed, February 5, 2001 – Discontinued
Nikon D2H, DX sensor, high speed, July 22, 2003 – Discontinued
Nikon D2X, DX sensor, September 16, 2004 – Discontinued
Nikon D2HS, DX sensor, high speed, February 16, 2005 – Discontinued
Nikon D2XS, DX sensor, June 1, 2006 – Discontinued
Nikon D3, FX/Full Frame sensor, August 23, 2007 – Discontinued
Nikon D3X, FX/Full Frame sensor, December 1, 2008 – Discontinued
Nikon D3S, FX/Full Frame sensor, October 14, 2009 – Discontinued
Nikon D4, FX/Full Frame sensor, January 6, 2012 – Discontinued
Nikon D4S, FX/Full Frame sensor, February 25, 2014 – Discontinued (In U.S.A. only)
Nikon D5, FX/Full Frame sensor, January 5, 2016
Nikon D6, FX/Full Frame sensor, February 12, 2020
High-end (Prosumer – Intended for pro-consumers who want the main mechanical/electronic features of the professional line but don't need the same heavy-duty construction and weather resistance)
Nikon D100, DX sensor, February 21, 2002 – Discontinued
Nikon D200, DX sensor, November 1, 2005 – Discontinued
Nikon D300, DX sensor, August 23, 2007 – Discontinued
Nikon D300S, DX sensor, July 30, 2009 – Discontinued
Nikon D700, FX/Full Frame sensor, July 1, 2008 – Discontinued
Nikon D800, FX/Full Frame sensor, February 7, 2012 – Discontinued
Nikon D800E, FX/Full Frame sensor, April 2012 – Discontinued
Nikon D600, FX/Full Frame sensor, September 13, 2012 – Discontinued
Nikon D610, FX/Full Frame sensor, October 2013
Nikon Df, FX/Full Frame sensor, November 2013
Nikon D810, FX/Full Frame sensor, June 2014
Nikon D750, FX/Full Frame sensor, September 11, 2014
Nikon D810A, FX/Full Frame Sensor, February 2015
Nikon D500, DX sensor, January 5, 2016
Nikon D850, FX/Full Frame sensor, announced July 25, 2017
Nikon D780, FX/Full Frame sensor, January 7, 2020
Midrange and professional usage cameras with DX sensor
Nikon D70, January 28, 2004 – Discontinued
Nikon D70S, April 20, 2005 – Discontinued
Nikon D80, August 9, 2006 – Discontinued
Nikon D90, August 27, 2008 – Discontinued
Nikon D7000, September 15, 2010 – Discontinued
Nikon D7100, February 21, 2013 – Discontinued (In U.S.A. only)
Nikon D7200, March 2, 2015
Nikon D7500, April 12, 2017
Upper-entry-level (Consumer) – DX sensor
Along with the D750 and D500 above, these are the only Nikon DSLRs with an articulated (tilt-and-swivel) display.
Nikon D5000, April 14, 2009 – Discontinued
Nikon D5100, April 5, 2011 – Discontinued
Nikon D5200, November 6, 2012 – Discontinued
Nikon D5300, October 17, 2013
Nikon D5500, January 5, 2015 – Discontinued
Nikon D5600, November 10, 2016
Entry-level (Consumer) – DX sensor
Nikon D50, April 20, 2005 – Discontinued
Nikon D40, November 16, 2006 – Discontinued
Nikon D40X, March 6, 2007 – Discontinued
Nikon D60, January 29, 2008 – Discontinued
Nikon D3000, July 30, 2009 – Discontinued
Nikon D3100, August 19, 2010 – Discontinued
Nikon D3200, April 19, 2012 – Discontinued
Nikon D3300, January 7, 2014 – Discontinued (In U.S.A. only)
Nikon D3400, August 17, 2016 – Discontinued
Nikon D3500, August 3, 2018
Photo optics
Lenses for Nikon Z-mount
Nikon introduced the Z-mount in 2018 for their system of digital full-frame and APS-C (DX) mirrorless cameras.
Lenses for F-mount cameras
The Nikon F-mount is a type of interchangeable lens mount developed by Nikon for its 35 mm Single-lens reflex cameras. The F-mount was first introduced on the Nikon F camera in 1959.
See Nikon F-mount → Nikkor
Lenses with integrated motors: List of Nikon F-mount lenses with integrated autofocus motors
Other lenses for photography and imaging
Electronic flash units
Nikon uses the term Speedlight for its electronic flashes. Recent models include the SB-R200, SB-300, SB-400, SB-600, SB-700, SB-800, SB-900, SB-910, SB-5000 and R1C1.
Film scanners
Nikon's digital capture line also includes a successful range of dedicated scanners for a variety of formats, including Advanced Photo System (IX240), 35 mm, and 60 mm film.
(1988) LS-3500 (4096x6144, 4000 dpi, 30 bits per pixel) HP-IB (requires a third-party NuBus card; intended for Mac platforms, for which there is a Photoshop plug-in).
(1992) Coolscan LS-10 (2700 dpi) SCSI. First to be named "Coolscan" to denote LED illumination.
(1994) LS-3510AF (4096x6144, 4000 dpi, 30 bits per pixel) Auto-focus SCSI (usually employed on Mac platforms with a Photoshop plug-in; TWAIN is available for PC platforms).
(1995) LS-4500AF (4 x 5 inch and 120/220 formats, 1000x2000 dpi, 35mm format 3000x3000). 12bit A/D. SCSI. Fitted with auto-focus lens.
(1996) Super Coolscan LS-1000 (2592x3888, 2700 dpi) SCSI. Scan time cut by half.
(1996) Coolscan II LS-20 E (2700 dpi) SCSI
(1998) Coolscan LS-2000 (2700 dpi, 12-bit) SCSI, multiple sample, "CleanImage" software
(1998) Coolscan III LS-30 E (2700 dpi, 10-bit) SCSI
(2001) Coolscan IV LS-40 ED (2900 dpi, 12-bit, 3.6D) USB, SilverFast, ICE, ROC, GEM
(2001) Coolscan LS-4000 ED (4000 dpi, 14-bit, 4.2D) Firewire
(2001) Coolscan LS-8000 ED (4000 dpi, 14-bit, 4.2D) Firewire, multiformat
(2003) Coolscan V LS-50 ED (4000 dpi, 14-bit, 4.2D) USB
(2003) Super Coolscan LS-5000 ED (4000 dpi, 16bit, 4.8D) USB
(2004) Super Coolscan LS-9000 ED (4000 dpi, 16bit, 4.8D) Firewire, multiformat
Nikon introduced its first scanner, the Nikon LS-3500, with a maximum resolution of 4096 x 6144 pixels, in 1988. Prior to the development of 'cool' LED lighting, this scanner used a halogen lamp (hence the name 'Coolscan' for the following models). The resolution of the first LED-based Coolscan model didn't increase, but the price was significantly lower. Colour depth, scan quality, imaging and hardware functionality as well as scanning speed were gradually improved with each following model. The final 'top of the line' 35mm Coolscan LS-5000 ED was a device capable of archiving greater numbers of slides: 50 framed slides or 40 images on a film roll. It could scan all these in one batch using special adapters. A single maximum-resolution scan was performed in no more than 20 seconds, as long as no post-processing was also performed. With the launch of the Coolscan 9000 ED, Nikon introduced its most up-to-date film scanner, which, like the Minolta Dimage scanners, was one of the only film scanners that, due to a special version of Digital ICE, could scan Kodachrome film reliably, both dust- and scratch-free. In late 2007, much of the software's code had to be rewritten to make it Mac OS 10.5 compatible. Nikon announced it would discontinue supporting its Nikon Scan software for the Macintosh as well as for Windows Vista 64-bit. Third-party software solutions like SilverFast or VueScan provide alternatives to the official Nikon drivers and scanning software, and maintain updated drivers for most current operating systems. Between 1994 and 1996, Nikon developed three flatbed scanner models named Scantouch, which couldn't keep up with competitive flatbed products and were hence discontinued to allow Nikon to focus on its dedicated film scanners.
Sport optics
Binoculars
Sprint IV
Sportstar IV
Travelite V
Travelite VI
Travelite EX
Mikron
Action VII
Action VII Zoom
Aculon
Action EX
Sporter I
Venturer 8/10x32
Venturer 8x42
Prostaff 5
Prostaff 7
Monarch ATB
Monarch 3
Monarch 5
Monarch 7
Monarch HG
StabilEyes
Superior E
Marine
EDG II
Spotting scopes
Prostaff 3 16-48x60
Prostaff 5 60
Prostaff 5 80
Spotter XL II WP
Spotting Scope R/A II
Spotting Scope 80
Fieldscope 60mm
Fieldscope ED78/ EDII
Fieldscope III/EDIII
Fieldscope ED82
Fieldscope ED50
Fieldscopes EDG 65 /85
Fieldscope EDG 85 VR
Rifle scopes
BLACK
Monarch 7
Monarch 5
Monarch 3
Monarch
Laser IRT
Prostaff 5
Encore
Coyote Special
Slughunter
Inline
Buckmaster II
Buckmaster
AR
ProStaff II
Prostaff
Team REALTREE
Rimfire
Handgun
Nikon Metrology
Overview
Nikon Metrology, a division of Nikon, produces hardware and software products for 2D and 3D measurement, from nano-scale to large-scale measurement volumes. Products include optical laser probes, X-ray computed tomography, coordinate-measuring machines (CMM), laser radar systems (LR), microscopes, portable CMMs, large-volume metrology, motion measurement and adaptive robotic controls, semiconductor systems, and metrology software including CMM-Manager, CAMIO Studio, Inspect-X, Focus, and Automeasure. Measurements are performed using tactile and non-contact probes; measurement data is collected in software and processed for comparison to nominal CAD (computer-aided design) data or part specifications, or for recreating/reverse-engineering physical workpieces.
Origins
The origins of Nikon go back to 1917, when three Japanese optical manufacturers joined to form Nippon Kogaku KK ('Japan Optics'). In 1925 a microscope with a revolving nosepiece and interchangeable objectives was produced. Significant growth for the microscopy division occurred over the next 50 years as Nikon pioneered the development of polarising and stereo microscopes along with new products for the measuring and inspection (metrology) markets. These new products included devices targeted for industrial use, such as optical comparators, autocollimators, profile projectors and automated vision-based systems. Continued effort through the next three decades yielded the release of products including the Optiphot and Labophot microscopes, the Diaphot microscope, the Eclipse range of infinity optics, and finally the DS camera series and the Coolscope with the advent of digital sensors. With the acquisition of Metris in 2009, the Nikon Metrology division was born. Nikon Metrology products include a full range of 2D and 3D, optical, tactile, non-contact and X-ray metrology solutions, ranging from nanometer resolution on microscopic samples to μm resolution in volumes large enough to house a commercial airliner.
Products
Coordinate-Measuring-Machines
Bridge, Gantry and Horizontal Arm CMMs
Digital / Analog Tactile and / or Non-Contact Optical sensors
Portable arms – 6 and 7 axis models
Laser Scanning – Optical Line Scanners in single Line and Multi-line (Cross Scanner) configurations
X-ray-and-CT-Inspection
Video-Microscope-Measuring – Optical Probe and Multi-Sensor options available
Microscope-Systems
Large Volume Systems
Application Software – several options available depending on specific application and hardware.
CMM-Manager – Multi-sensor 3D Metrology software for third party CMMs, Articulated Arms, and Nikon video-measurement systems
Automeasure, NIS Elements, E-Max, Automeasure Eyes – 2D / 3D imaging software for use on Nikon video-measurement systems
Focus, CMM-Manager, CAMIO – Software for 3D Metrology
Lithography equipment
Overview
Nikon manufactures scanners and steppers for the manufacture of integrated circuits and flat panel displays, and semiconductor device inspection equipment. The steppers and scanners represent about one third of the income for the company as of 2008.
Nikon developed the first lithography equipment from Japan. The equipment enjoyed high demand from global chipmakers, Japanese semiconductor companies and other major companies such as Intel, and Nikon was the world's leading producer of semiconductor lithography systems from the 1980s to 2002. Nikon then saw a sharp drop in its market share, from less than 40 percent in the early 2000s to no more than 20 percent as of 2013. The company has been losing an estimated ¥17 billion a year in its precision instruments unit.
In contrast, ASML, a Dutch company, has grabbed over 80 percent of the lithography systems market as of 2015 by adopting an open-innovation method of product development, which includes the acquisition of U.S.-based light source manufacturer Cymer. In 2017, Nikon announced that it would cut nearly 1,000 jobs, mainly in the lithography systems business, and halt its development of next-generation equipment.
Legal disputes
In February 2019, Nikon, ASML and Carl Zeiss AG, a leading supplier to ASML, entered into a definitive settlement and cross-license agreement relating to multiple disputes over patents for lithography equipment that had been underway since 2001, and agreed to drop all worldwide lawsuits regarding the issue.
Under the latest settlement, ASML and Zeiss paid approximately $170 million to Nikon. The two companies had paid a total of $87 million to Nikon in 2004 for a similar legal dispute.
Market position and products
As of February 2018, Nikon held 10.3 percent revenue share in the semiconductor lithography market while the share of ASML was over 80 percent.
As of 2019, Nikon develops and sells the following lithography-related equipment:
Cutting-edge flat panel display lithography equipment (The FX series)
i-line steppers
KrF steppers
ArF steppers
ArF immersion steppers
Inspection and alignment equipment
Other products
Nikon also manufactures eyeglasses, sunglasses, and glasses frames under the brands Nikon, Niji, Nobili-Ti, Presio, and Velociti VTI. Nikon's other products include ophthalmic equipment, loupes, monoculars, binocular telescopes, metal 3D printers, material processing equipment, regenerative medicine contract manufacturing, cell sorting equipment, and cell culture observation systems.
Nikon no longer manufactures its own image sensors as it outsources the manufacturing to Sony.
Since 2019, Sendai Nikon, a Nikon group company, manufactures Lidar sensors for Velodyne as part of a partnership between the two companies.
Sponsorship
Awards and exhibitions
In Japan, Nikon runs the Nikon Salon exhibition spaces, the Nikkor Club for amateur photographers (to whom it distributes the series of Nikon Salon books), the Nikon Small World Photomicrography Competition and the Nikon Small World in Motion Competition, and arranges the Ina Nobuo Award, Miki Jun Award and Miki Jun Inspiration Awards.
Others
As of November 19, 2013, Nikon is the "Official Camera" of Walt Disney World Resort and Disneyland Resort.
Nikon is the official co-sponsor of Galatasaray SK Football Team.
In 2014 Nikon sponsored the Copa Sadia do Brasil 2014 and the AFC Champions League.
The company sponsors the Nikon-Walkley Press Photographer of the Year award, as well as the Nikon Photography Prizes, which are administered by the Walkley Foundation in Australia.
Cultural references
Singer Paul Simon referenced Nikon cameras in his 1973 song "Kodachrome".
Dexter Morgan, main character of the Showtime series Dexter, can be seen using a Nikon camera throughout the show.
In the movie Hackers, the character "Lord Nikon" got his alias because of his photographic memory.
In the lyrics to the Oak Ridge Boys song "American Made", a reference to Nikon cameras is made ("I got a Nikon camera, a Sony color TV").
In the movie "The French Connection", the drug dealer gives his girlfriend a Nikon F camera.
In the film "The Most Beautiful" by Akira Kurosawa, the "East Asian Optical Company" scenes were filmed at the Nippon Kogaku factory in Totsuka, Yokohama, Japan.
In the TV show Veronica Mars, Veronica, the main character, uses a Nikon Coolpix 8800 throughout season one and a Nikon DSLR in all other seasons.
Awards and recognition
Nikon was ranked 134th among India's most trusted brands according to the Brand Trust Report 2012, a study conducted by Trust Research Advisory. In the Brand Trust Report 2013, Nikon was ranked 28th among India's most trusted brands and subsequently, according to the Brand Trust Report 2014, Nikon was ranked 178th among India's most trusted brands.
See also
Digital single-lens reflex camera
Full-frame digital SLR
History of the single-lens reflex camera
Lenses for SLR and DSLR cameras
Nikon Instruments
Nikkor
Nikon F
Nikon Coolpix series
Nikon Museum
Nikon F-mount
Nikon S-mount
Perspective control lens
Single-lens reflex camera
Canon Inc
Notes and references
External links
Optics manufacturing companies
Photography companies of Japan
Defense companies of Japan
Electronics companies of Japan
Equipment semiconductor companies
Electronics companies established in 1917
Technology companies established in 1917
Japanese brands
Lens manufacturers
Mitsubishi companies
Multinational companies headquartered in Japan
Companies listed on the Tokyo Stock Exchange
1917 establishments in Japan |
40922 | https://en.wikipedia.org/wiki/Communications%20security | Communications security | Communications security is the discipline of preventing unauthorized interceptors from accessing telecommunications in an intelligible form, while still delivering content to the intended recipients.
In the North Atlantic Treaty Organization culture, including United States Department of Defense culture, it is often referred to by the abbreviation COMSEC. The field includes cryptographic security, transmission security, emissions security and physical security of COMSEC equipment and associated keying material.
COMSEC is used to protect both classified and unclassified traffic on military communications networks, including voice, video, and data. It is used for both analog and digital applications, and both wired and wireless links.
Voice over secure internet protocol (VoSIP) has become the de facto standard for securing voice communication, replacing the need for Secure Terminal Equipment (STE) in much of NATO, including the U.S.A. USCENTCOM moved entirely to VoSIP in 2008.
Specialties
Cryptographic security: The component of communications security that results from the provision of technically sound cryptosystems and their proper use. This includes ensuring message confidentiality and authenticity; a code sketch of these two properties follows this list.
Emission security (EMSEC): The protection resulting from all measures taken to deny unauthorized persons information of value that might be derived from intercepts of communications systems and cryptographic equipment, and from the interception and analysis of compromising emanations from cryptographic equipment, information systems, and telecommunications systems.
Transmission security (TRANSEC): The component of communications security that results from the application of measures designed to protect transmissions from interception and exploitation by means other than cryptanalysis (e.g. frequency hopping and spread spectrum).
Physical security: The component of communications security that results from all physical measures necessary to safeguard classified equipment, material, and documents from access thereto or observation thereof by unauthorized persons.
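As an illustration of the cryptographic security component, the sketch below shows confidentiality and authenticity provided together by authenticated encryption. It is a minimal example assuming Python's third-party cryptography package; the message, labels and key handling are hypothetical, and real COMSEC systems rely on approved equipment and keying material rather than ad hoc code.

```python
# Minimal sketch: confidentiality plus authenticity via AES-GCM.
# Assumes the third-party "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # stand-in for loaded keying material
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # 96-bit nonce, unique per message

plaintext = b"move to alternate frequency plan"  # hypothetical traffic
associated = b"NET-7"                            # authenticated, not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, associated)
# Decryption verifies the authentication tag; any tampering raises InvalidTag.
assert aesgcm.decrypt(nonce, ciphertext, associated) == plaintext
```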
Related terms
AKMS = the Army Key Management System
AEK = Algorithmic Encryption Key
CT3 = Common Tier 3
CCI = Controlled Cryptographic Item - equipment which contains COMSEC embedded devices
ACES = Automated Communications Engineering Software
DTD = Data Transfer Device
ICOM = Integrated COMSEC, e.g. a radio with built in encryption
TEK = Traffic Encryption Key
TED = Trunk Encryption Device such as the WALBURN/KG family
KEK = Key Encryption Key (see the sketch following this list)
KPK = Key production key
OWK = Over the Wire Key
OTAR = Over the Air Rekeying
LCMS = Local COMSEC Management Software
KYK-13 = Electronic Transfer Device
KOI-18 = Tape Reader General Purpose
KYX-15 = Electronic Transfer Device
KG-30 = family of COMSEC equipment
TSEC = Telecommunications Security (sometimes erroneously referred to as transmission security or TRANSEC)
SOI = Signal operating instructions
SKL = Simple Key Loader
TPI = Two person integrity
STU-III (obsolete secure phone, replaced by STE)
STE - Secure Terminal Equipment (secure phone)
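Several of the terms above fit together operationally: a TEK encrypts the traffic itself, while a KEK protects the TEK when it is distributed, for example during OTAR. The sketch below models that relationship with AES key wrap, assuming Python's third-party cryptography package; the key sizes and names are illustrative, not a depiction of any fielded COMSEC device.

```python
# Toy model of the KEK/TEK relationship: the Key Encryption Key (KEK)
# wraps a Traffic Encryption Key (TEK) for distribution, as in OTAR.
# Assumes the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(32)  # Key Encryption Key, pre-placed at both ends
tek = os.urandom(32)  # fresh Traffic Encryption Key to distribute

wrapped_tek = aes_key_wrap(kek, tek)             # what travels over the air
received_tek = aes_key_unwrap(kek, wrapped_tek)  # far end recovers the TEK
assert received_tek == tek
```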
Types of COMSEC equipment:
Crypto equipment: Any equipment that embodies cryptographic logic or performs one or more cryptographic functions (key generation, encryption, and authentication).
Crypto-ancillary equipment: Equipment designed specifically to facilitate efficient or reliable operation of crypto-equipment, without performing cryptographic functions itself.
Crypto-production equipment: Equipment used to produce or load keying material
Authentication equipment:
DoD Electronic Key Management System
The Electronic Key Management System (EKMS) is a United States Department of Defense (DoD) key management, COMSEC material distribution, and logistics support system. The National Security Agency (NSA) established the EKMS program to supply electronic keys to COMSEC devices in a secure and timely manner, and to provide COMSEC managers with an automated system capable of ordering, generation, production, distribution, storage, security accounting, and access control.
The Army's platform in the four-tiered EKMS, AKMS, automates frequency management and COMSEC management operations. It eliminates paper keying material, hardcopy SOI, and the associated time- and resource-intensive courier distribution. It has four components:
LCMS provides automation for the detailed accounting required for every COMSEC account, and electronic key generation and distribution capability.
ACES is the frequency management portion of AKMS. ACES has been designated by the Military Communications Electronics Board as the joint standard for use by all services in development of frequency management and cryptonet planning.
CT3 with DTD software is a fielded, ruggedized hand-held device that handles, views, stores, and loads SOI, key, and electronic protection data. The DTD provides an improved net-control device to automate crypto-net control operations for communications networks employing electronically keyed COMSEC equipment.
SKL is a hand-held PDA that handles, views, stores, and loads SOI, Key, and electronic protection data.
Key Management Infrastructure (KMI) Program
KMI is intended to replace the legacy Electronic Key Management System to provide a means for securely ordering, generating, producing, distributing, managing, and auditing cryptographic products (e.g., asymmetric keys, symmetric keys, manual cryptographic systems, and cryptographic applications). This system is currently being fielded by Major Commands and variants will be required for non-DoD Agencies with a COMSEC Mission.
See also
Dynamic secrets
Electronics technician (United States Navy)
Information security
Information warfare
List of telecommunications encryption terms
NSA encryption systems
NSA product types
Operations security
Secure communication
Signals intelligence
Traffic analysis
References
National Information Systems Security Glossary
https://web.archive.org/web/20121002192433/http://www.dtic.mil/whs/directives/corres/pdf/466002p.pdf
Cryptography machines
Cryptography
Military communications
Military radio systems
Encryption devices |
30319957 | https://en.wikipedia.org/wiki/Angus%20Reach | Angus Reach | Angus Bethune Reach (23 January 1821 – 15 November 1856) was a 19th-century British writer, noted for both his journalism and fiction. He was an acquaintance of such contemporary novelists as William Makepeace Thackeray and Edmund Yates, and counted the journalist and novelist Shirley Brooks as his greatest friend.
Journalistic career
Reach was born in Inverness, Scotland, to solicitor Roderick Reach and his wife Ann. He attended school at Inverness Royal Academy, beginning early in life to contribute a series of articles to the local Inverness Courier. Following a short period of study at Edinburgh University he moved in 1841 to London, where he gained a job as a court reporter for the Morning Chronicle newspaper. Reach's early duties included coverage of events at the Old Bailey and later the House of Commons, before he gained greater recognition contributing to an investigative journalism series on the conditions of the urban poor in the manufacturing districts of England. He subsequently became the Chronicle's arts critic, a post he held for over ten years.
In addition to his work for the Chronicle, Reach wrote the gossip column Town and Table Talk for the Illustrated London News and corresponded from London for the Inverness Courier. He later joined the staff of the celebrated satirical journal Punch, having contributed previously to two of its rivals, The Man in the Moon and The Puppet Show. He developed a reputation as a humourist, including for his satires The Comic Bradshaw and The Natural History of Humbugs.
Other works
Reach's novel, originally serialised as Clement Lorimer, or, The Book with the Iron Clasps, ran in monthly instalments through 1848–9, before being collected in a single volume and later republished in two parts as Leonard Lindsay, or, The Story of a Buccaneer. The work, a crime thriller set in the world of horseracing, has been described as a "template for the pulp tradition." He also published works of travel writing, including Claret and Olives, an account of a tour of France originally serialised in the Chronicle.
Personal life
Reach was married and was survived by his wife.
Reach figured in the anecdotes of a number of his literary friends. One concerned his profound colourblindness, a condition of which Reach was apparently unaware until adulthood. Purportedly, while dining with a friend – the ophthalmologist Jabez Hogg – Reach asked a waiter to bring him ink to complete a letter to the Chronicle. The ink was brought in a wineglass and a distracted Reach, unable to distinguish it by colour from his glass of claret, had to be stopped by his friend from drinking the ink. Another tale, told by Thackeray, concerned the pronunciation of his name. On their first meeting, Thackeray reportedly pronounced Reach's name to rhyme with "beach", and the latter informed him that the correct rendering was disyllabic: "REE-ack". Thackeray apologised for his mistake but later, when offering Reach dessert from a bowl of peaches, asked him "Mr Re-ak, will you take a pe-ak?"
Illness and death
In 1854 Reach suffered an attack described variously in contemporary accounts as a "paralytic" illness and a "softening of the brain", and identified by modern biographers as a probable cerebral haemorrhage. The attack left Reach unable to work and to provide for his wife: his friends, led by the author Albert Richard Smith, organised a benefit performance at the Olympia Theatre in London to raise funds to support Reach's family during his incapacitation. The performance included many of the works Reach himself had written or translated: all the seats in the house sold out, and such figures as Charles Dickens numbered among the audience. A repeat performance, at the Drury Lane Theatre, was attended by Queen Victoria and Prince Albert. For another year Shirley Brooks fulfilled Reach's obligations to the Chronicle, writing his columns and paying the proceeds to Reach's wife, but Reach was never to recover and died in November 1856.
Contemporary commentators attributed Reach's illness to overwork, including as a result of the frequent changes of ownership experienced by the Chronicle. Later biographers have suggested that alcohol consumption is likely to have contributed to his declining health.
Reach was buried in Norwood. Following his death his friend Thackeray contributed to the erection of a monument in his memory.
References
External links
Angus Reach at the Oxford Dictionary of National Biography
1821 births
1856 deaths
Victorian novelists
Alumni of the University of Edinburgh
British male journalists
19th-century British novelists
19th-century British journalists
Male journalists
British male novelists |
42283768 | https://en.wikipedia.org/wiki/Doctor%20in%20a%20cell | Doctor in a cell | By combining computer science and molecular biology, researchers have been able to work on a programmable biological computer that in the future may navigate within the human body, diagnosing diseases and administering treatments. This is what Professor Ehud Shapiro from the Weizmann Institute termed a “Doctor in a cell”.
Pioneering work
In 1998 Shapiro presented a conceptual design for an autonomous, programmable molecular Turing machine, realized at the time as a mechanical device, and a vision of how such machines can cause a revolution in medicine.
The vision, termed "Doctor in a Cell", suggested that smart drugs, made of autonomous molecular computing devices programmed with medical knowledge, could supplant present-day drugs by analyzing the molecular state of their environment (input) based on programmed medical knowledge (program) and, if deemed necessary, releasing a drug molecule in response (output).
First steps towards realization of the vision
To realize this vision, Shapiro set up a wet lab at Weizmann. Within a few years the lab had made pioneering steps towards realizing it: (1) a molecular implementation of a programmable autonomous automaton, in which the input was encoded as a DNA molecule, the "software" (automaton transition rules) was encoded by short DNA molecules, and the "hardware" was made of DNA-processing enzymes; (2) a simplified implementation of an automaton in which the DNA input molecule is used as fuel; (3) a stochastic molecular automaton in which transition probabilities can be programmed by varying the concentration of "software" molecules, specifically the relative concentrations of molecules encoding competing transition rules; and (4) an extension of the stochastic automaton with input and output mechanisms, allowing it to interact with the environment in a pre-programmed way and release a specific cancer drug molecule upon detecting mRNA expression levels characteristic of a specific cancer. These biomolecular computers were demonstrated in a test tube, wherein a number of cancer markers were pre-mixed to emulate different marker combinations. The biomolecular computers identified the presence of cancer markers (simultaneously and independently identifying small-cell lung cancer markers and prostate cancer markers). The computer, equipped with medical knowledge, analysed the situation, diagnosed the type of cancer and then released the appropriate drug.
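In software terms, the stochastic automaton of step (3) behaves like a probabilistic rule system: the relative concentrations of competing "software" molecules set the odds that a given transition fires. The toy Python model below mimics that behavior; the marker names, the confidence value and the decision rule are illustrative assumptions, not the lab's actual molecular encoding.

```python
# Toy software analogue of the stochastic diagnostic automaton: each
# marker check can "misfire" with some probability, standing in for the
# competing transition-rule concentrations of the molecular version.
import random

markers = {"mRNA_A": True, "mRNA_B": True}  # emulated test-tube contents

def diagnose(confidence=0.9):
    """Return YES (release drug) only if every marker check succeeds."""
    state = "YES"
    for present in markers.values():
        detected = present if random.random() < confidence else not present
        if not detected:
            state = "NO"
    return state

runs = [diagnose() for _ in range(10_000)]
print("fraction of automata releasing the drug:", runs.count("YES") / len(runs))
```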
DNA computers capable of simple logical deductions
In 2009, Shapiro and PhD student Tom Ran presented the prototype of an autonomous programmable molecular system, based on the manipulation of DNA strands, which is capable of performing simple logical deductions. This prototype is the first simple programming language implemented at the molecular scale. Introduced into the body, such a system has immense potential to accurately target specific cell types and administer the appropriate treatment, as it can perform millions of calculations at the same time and 'think' logically. Prof. Shapiro's team aims to make these computers perform highly complex actions and answer complicated questions, following a logical model first proposed by Aristotle over 2,000 years ago. The biomolecular computers are extremely small: three trillion computers can fit into a single drop of water. If the computers were given the rule 'All men are mortal' and the fact 'Socrates is a man', they would answer 'Socrates is mortal'. Multiple rules and facts were tested by the team, and the biomolecular computers answered them correctly each time.
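The deduction itself is ordinary forward chaining from rules and facts, which the molecular system carries out with DNA strands. The minimal Python sketch below reproduces the Socrates example in conventional software; the tuple encoding is a hypothetical stand-in for, not a model of, the DNA-strand implementation.

```python
# Forward-chaining sketch of the deduction described above: from the rule
# "All men are mortal" and the fact "Socrates is a man", derive
# "Socrates is mortal".
facts = {("man", "Socrates")}
rules = [(("man", "?x"), ("mortal", "?x"))]  # if ?x is a man, then ?x is mortal

changed = True
while changed:
    changed = False
    for (body_pred, _), (head_pred, _) in rules:
        for pred, subject in list(facts):
            if pred == body_pred and (head_pred, subject) not in facts:
                facts.add((head_pred, subject))
                changed = True

print(facts)  # {('man', 'Socrates'), ('mortal', 'Socrates')}
```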
‘User-friendly’ DNA computers
The team has also found a way to make these microscopic computing devices ‘user-friendly’ by creating a compiler – a program for bridging between a high-level computer programming language and DNA computing code. They sought to develop a hybrid in silico/in vitro system that supports the creation and execution of molecular logic programs in a similar way to electronic computers, enabling anyone who knows how to operate an electronic computer, with absolutely no background in molecular biology, to operate a biomolecular computer.
DNA computers via computing bacteria
In 2012, Prof. Ehud Shapiro and Dr. Tom Ran succeeded in creating a genetic device that operates independently in bacterial cells. The device has been programmed to identify certain parameters and mount an appropriate response. The device searches for transcription factors, proteins that control the expression of genes in the cell. A malfunction of these molecules can disrupt gene expression. In cancer cells, for example, the transcription factors regulating cell growth and division do not function properly, leading to increased cell division and the formation of a tumor. The device, composed of a DNA sequence inserted into a bacterium, performs a "roll call" of transcription factors. If the results match pre-programmed parameters, it responds by creating a protein that emits a green light, supplying a visible sign of a "positive" diagnosis. In follow-up research, the scientists plan to replace the light-emitting protein with one that will affect the cell's fate, for example a protein that can cause the cell to commit suicide. In this manner, the device will cause only "positively" diagnosed cells to self-destruct. Following the success of the study in bacterial cells, the researchers are planning to test ways of recruiting such bacteria as an efficient system to be conveniently inserted into the human body for medical purposes (which should not be problematic given our natural microbiome; research suggests there are roughly 10 times more bacterial cells than human cells sharing our body space in a symbiotic fashion). Yet another research goal is to operate a similar system inside human cells, which are much more complex than bacteria.
References
Implants (medicine)
Medical technology
Nanotechnology |
149426 | https://en.wikipedia.org/wiki/Subnetwork | Subnetwork | A subnetwork or subnet is a logical subdivision of an IP network. The practice of dividing a network into two or more networks is called subnetting.
Computers that belong to the same subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields: the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.
The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range to belong to this network, with as the subnet broadcast address. The IPv6 address specification is a large address block with 2^96 addresses, having a 32-bit routing prefix.
For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an IP address. For example, the prefix would have the subnet mask .
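As a brief illustration, Python's standard ipaddress module can derive the mask and prefix relationship directly (the address below is from the RFC 5737 documentation range, chosen only as an example, not taken from the text above):

```python
import ipaddress

# A /24 network from the IPv4 documentation address range (example values).
net = ipaddress.ip_network("192.0.2.0/24")

print(net.netmask)            # 255.255.255.0
print(net.network_address)    # 192.0.2.0
print(net.broadcast_address)  # 192.0.2.255
print(net.prefixlen)          # 24
```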
Traffic is exchanged between subnetworks through routers when the routing prefixes of the source address and the destination address differ. A router serves as a logical or physical boundary between the subnets.
The benefits of subnetting an existing network vary with each deployment scenario. In the address allocation architecture of the Internet using CIDR and in large organizations, it is necessary to allocate address space efficiently. Subnetting may also enhance routing efficiency, or have advantages in network management when subnetworks are administratively controlled by different entities in a larger organization. Subnets may be arranged logically in a hierarchical architecture, partitioning an organization's network address space into a tree-like routing structure, or other structures such as meshes.
Network addressing and routing
Computers participating in a network such as the Internet each have at least one network address. Usually, this address is unique to each device and can be configured automatically with the Dynamic Host Configuration Protocol (DHCP) by a network server, manually by an administrator, or automatically by stateless address autoconfiguration.
An address fulfills the functions of identifying the host and locating it on the network. The most common network addressing architecture is Internet Protocol version 4 (IPv4), but its successor, IPv6, has been increasingly deployed since approximately 2006. An IPv4 address consists of 32 bits. An IPv6 address consists of 128 bits. In both systems, an IP address is divided into two logical parts, the network prefix and the host identifier. All hosts on a subnetwork have the same network prefix. This prefix occupies the most-significant bits of the address. The number of bits allocated within a network to the prefix may vary between subnets, depending on the network architecture. The host identifier is a unique local identification and is either a host number on the local network or an interface identifier.
This addressing structure permits the selective routing of IP packets across multiple networks via special gateway computers, called routers, to a destination host if the network prefixes of origination and destination hosts differ, or sent directly to a target host on the local network if they are the same. Routers constitute logical or physical borders between the subnets, and manage traffic between them. Each subnet is served by a designated default router but may consist internally of multiple physical Ethernet segments interconnected by network switches.
The routing prefix of an address is identified by the subnet mask, written in the same form used for IP addresses. For example, the subnet mask for a routing prefix that is composed of the most-significant 24 bits of an IPv4 address is written as .
The modern standard form of specification of the network prefix is CIDR notation, used for both IPv4 and IPv6. It counts the number of bits in the prefix and appends that number to the address after a slash (/) character separator. This notation was introduced with Classless Inter-Domain Routing (CIDR).
In IPv6 this is the only standards-based form to denote network or routing prefixes.
For example, the IPv4 network with the subnet mask is written as , and the IPv6 notation designates the address and its network prefix consisting of the most significant 32 bits.
In classful networking in IPv4, before the introduction of CIDR, the network prefix could be directly obtained from the IP address, based on its highest order bit sequence. This determined the class (A, B, C) of the address and therefore the subnet mask. Since the introduction of CIDR, however, the assignment of an IP address to a network interface requires two parameters, the address and a subnet mask.
Given an IPv4 source address, its associated subnet mask, and the destination address, a router can determine whether the destination is on a locally connected network or a remote network. The subnet mask of the destination is not needed, and is generally not known to a router. For IPv6, however, on-link determination is different in detail and requires the Neighbor Discovery Protocol (NDP). IPv6 address assignment to an interface carries no requirement of a matching on-link prefix and vice versa, with the exception of link-local addresses.
Since each locally connected subnet must be represented by a separate entry in the routing tables of each connected router, subnetting increases routing complexity. However, by careful design of the network, routes to collections of more distant subnets within the branches of a tree hierarchy can be aggregated into a supernetwork and represented by single routes.
Internet Protocol version 4
Determining the network prefix
An IPv4 subnet mask consists of 32 bits; it is a sequence of ones (1) followed by a block of zeros (0). The ones indicate bits in the address used for the network prefix and the trailing block of zeros designates that part as being the host identifier.
The following example shows the separation of the network prefix and the host identifier from an address () and its associated subnet mask (). The operation is visualized in a table using binary address formats.
The result of the bitwise AND operation of IP address and the subnet mask is the network prefix . The host part, which is , is derived by the bitwise AND operation of the address and the one's complement of the subnet mask.
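The same separation can be reproduced with plain integer arithmetic; a minimal sketch using example values (the concrete address and mask from the article's table are not reproduced here):

```python
import ipaddress

addr = int(ipaddress.ip_address("192.0.2.130"))    # example host address
mask = int(ipaddress.ip_address("255.255.255.0"))  # /24 subnet mask

prefix = addr & mask              # bitwise AND yields the network prefix
host = addr & ~mask & 0xFFFFFFFF  # AND with the mask's one's complement

print(ipaddress.ip_address(prefix))  # 192.0.2.0
print(ipaddress.ip_address(host))    # 0.0.0.130
```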
Subnetting
Subnetting is the process of designating some high-order bits from the host part as part of the network prefix and adjusting the subnet mask appropriately. This divides a network into smaller subnets. The following diagram modifies the above example by moving 2 bits from the host part to the network prefix to form four smaller subnets each one quarter of the previous size.
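In code, the same two-bit move can be expressed as follows, again with an example /24 network:

```python
import ipaddress

net = ipaddress.ip_network("192.0.2.0/24")

# Moving 2 bits from the host part to the prefix yields four /26 subnets,
# each one quarter of the original size.
for subnet in net.subnets(prefixlen_diff=2):
    print(subnet)
# 192.0.2.0/26, 192.0.2.64/26, 192.0.2.128/26, 192.0.2.192/26
```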
Special addresses and subnets
IPv4 uses specially designated address formats to facilitate recognition of special address functionality. The first and the last subnets obtained by subnetting a larger network have traditionally had a special designation and, early on, special usage implications. In addition, IPv4 uses the all ones host address, i.e. the last address within a network, for broadcast transmission to all hosts on the link.
The first subnet obtained from subnetting a larger network has all bits in the subnet bit group set to zero (0). It is therefore called subnet zero. The last subnet obtained from subnetting a larger network has all bits in the subnet bit group set to one (1). It is therefore called the all-ones subnet.
The IETF originally discouraged the production use of these two subnets. When the prefix length is not available, the larger network and the first subnet have the same address, which may lead to confusion. Similar confusion is possible with the broadcast address at the end of the last subnet. Therefore, reserving the subnet values consisting of all zeros and all ones on the public Internet was recommended, reducing the number of available subnets by two for each subnetting. This inefficiency was removed, and the practice was declared obsolete in 1995 and is only relevant when dealing with legacy equipment.
Although the all-zeros and the all-ones host values are reserved for the network address of the subnet and its broadcast address, respectively, in systems using CIDR all subnets are available in a subdivided network. For example, a network can be divided into sixteen usable networks. Each broadcast address, i.e. , , …, , reduces only the host count in each subnetwork.
Subnet host count
The number of subnetworks available and the number of possible hosts in a network may be readily calculated. For instance, the network may be subdivided into the following four subnets. The highlighted two address bits become part of the network number in this process.
The remaining bits after the subnet bits are used for addressing hosts within the subnet. In the above example, the subnet mask consists of 26 bits, making it 255.255.255.192, leaving 6 bits for the host identifier. This allows for 62 host combinations (2^6 − 2).
In general, the number of available hosts on a subnet is 2^h − 2, where h is the number of bits used for the host portion of the address. The number of available subnets is 2^n, where n is the number of bits used for the network portion of the address.
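A small sketch of the calculation (the 31-bit point-to-point exception discussed below is deliberately ignored here; the network value is an example from the documentation range):

```python
import ipaddress

def usable_hosts(network: str) -> int:
    """2**h - 2 usable addresses for an ordinary IPv4 subnet."""
    net = ipaddress.ip_network(network)
    return net.num_addresses - 2  # subtract network and broadcast addresses

print(usable_hosts("198.51.100.0/26"))  # 62, i.e. 2**6 - 2
```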
There is an exception to this rule for 31-bit subnet masks, which means the host identifier is only one bit long for two permissible addresses. In such networks, usually point-to-point links, only two hosts (the end points) may be connected and a specification of network and broadcast addresses is not necessary.
Internet Protocol version 6
The design of the IPv6 address space differs significantly from IPv4. The primary reason for subnetting in IPv4 is to improve efficiency in the utilization of the relatively small address space available, particularly to enterprises. No such limitations exist in IPv6, as the large address space available, even to end-users, is not a limiting factor.
As in IPv4, subnetting in IPv6 is based on the concepts of variable-length subnet masking (VLSM) and the Classless Inter-Domain Routing methodology. It is used to route traffic between the global allocation spaces and within customer networks between subnets and the Internet at large.
A compliant IPv6 subnet always uses addresses with 64 bits in the host identifier. Given the address size of 128 bits, it therefore has a /64 routing prefix. Although it is technically possible to use smaller subnets, they are impractical for local area networks based on Ethernet technology, because 64 bits are required for stateless address autoconfiguration. The Internet Engineering Task Force recommends the use of subnets for point-to-point links, which have only two hosts.
IPv6 does not implement special address formats for broadcast traffic or network numbers, and thus all addresses in a subnet are acceptable for host addressing. The all-zeroes address is reserved as the subnet-router anycast address.
In the past, the recommended allocation for an IPv6 customer site was an address space with a 48-bit () prefix. However, this recommendation was revised to encourage smaller blocks, for example using 56-bit prefixes. Another common allocation size for residential customer networks has a 64-bit prefix.
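For illustration, splitting a /48 allocation into /56 blocks can be sketched as follows (the prefix below is the reserved IPv6 documentation range, not an allocation from the source):

```python
import ipaddress

site = ipaddress.ip_network("2001:db8::/48")
blocks = list(site.subnets(new_prefix=56))

print(len(blocks))  # 256 blocks of /56
print(blocks[0])    # 2001:db8::/56
```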
See also
Autonomous system (Internet)
References
Further reading
External links
Cisco-IP Addressing and Subnetting for New Users
Netmask Quick Reference Chart
Routing
IP addresses
Internet architecture |
53220151 | https://en.wikipedia.org/wiki/GNOME%20SoundConverter | GNOME SoundConverter | GNOME SoundConverter is an unofficial GNOME-based free and open-source transcoder for digital audio files. It uses GStreamer to read and write files. It has a multi-threaded design and can also extract the audio from video files.
It has long been available in the repositories of many Linux distributions, including Debian, Fedora, openSUSE, Ubuntu, Gentoo and Arch Linux.
Features
Change filenames based on custom or predefined patterns
Create folders according to tags or a selected location
Optionally delete the original file
Adjust the bitrate
Import all metadata, including images, from the original file
See also
OggConvert
List of Linux audio software
Comparison of free software for audio
References
External links
2004 software
Audio software for Linux
Audio software that uses GTK
Free audio software
Free software programmed in Python
Software that uses GStreamer |
50999276 | https://en.wikipedia.org/wiki/TEA%20%28text%20editor%29 | TEA (text editor) | TEA is a graphical text editor for power users. It is designed for low resource consumption, a wide range of functions, and adaptability, and is available for all desktop operating systems supported by Qt 6, 5, or 4.6+, which includes OS/2 and Haiku OS. Its user interface is localized in several languages.
UI concept
The functional scope of TEA exceeds that of a pure text editor since it is designed as a desktop environment for text editing. It has five tabs on the right border of the window:
edit
files
options
dates
manual
edit represents the actual text editor. At the top of the text editor there is a tab bar for switching between multiple opened text files. The edit tab contains the text editing window. Below that window is another window that displays the editing history, and below the history is the FIF, the "Famous Input Field". The FIF is a special command line for entering TEA-specific commands. The editing history and the FIF are also visible in the four other tabs.
The files tab contains a file manager for navigating the computer's file system and opening files.
options is a settings tab for changing the behavior of TEA and modifying the content of the menu bar.
dates contains a calendar.
The manual tab contains a detailed user manual, including instructions for the FIF.
Features
Syntax highlighting: C, C++, Bash script, BASIC, C#, D, Fortran, Java, LilyPond, Lout, Lua, NASM, NSIS, Pascal, Perl, PHP, PO (gettext), Python, Seed7, TeX/LaTeX, Vala, Verilog, XML, HTML, XHTML, Dokuwiki, MediaWiki
TEA includes a selection of color schemes and themes for changing the display colors
Highlighting of the current line can be activated, a feature that is particularly useful for proofreading, where non-electronic texts or bitmaps containing text have to be compared with text on the screen. A typical use is editing scanned texts that were converted into text files with an OCR program, e.g. for creating corpora in linguistics.
The file manager has a bookmark menu in which folder paths can be stored for quick navigation.
Spellchecker
Freely definable text snippets
Formatting for: HTML, XHTML, DocBook, LaTeX, Lout, DokuWiki and MediaWiki
Text conversion functions (upper case, lower case, Morse, etc.)
Text statistics functions: text statistics; extract words; word lengths; UNITAZ quantity sorting; UNITAZ alphabetic sorting; substring counting, plain and by regular expression
Math functions
FIF
The Famous Input Field is a TEA-specific command line. In order to find and replace text, enter e.g. SOURCETEXT~TARGETTEXT and click on Replace, Replace All or Replace all in opened files in the Search menu. The string SOURCETEXT will be replaced by the string TARGETTEXT in the chosen way.
In addition, the FIF includes three separate search buttons, located on the right side.
History
Originally TEA was a program for Windows. Version 1.0.0.49, released on 30 December 2001, shows that the acronym TEA then still stood for Text Editing and Authoring. Later a version for Linux using GTK+ was written, which made it possible to compile the program for both Windows and Linux. TEA is one of those programs that were later rewritten using Qt (as was, for example, the media player VLC).
The program and the website were initially available only in Russian, which limited the program's popularity and reach outside Ukraine and Russia. The website is now bilingual (Russian and English) and the program itself has been localized in several languages.
References
External links
semiletov.org/tea/ & tea.ourproject.org
historically: tea-editor.sourceforge.net / www.roxton.kiev.ua
TEA: A Smooth Text Editor That Hits the Sweet Spot
The Qt-based Tea Text Editor: Managing Image and Text Files in One Application
OS/2 text editors
Linux text editors
MacOS text editors
Text editors
Unix text editors
Windows text editors
Linux integrated development environments
Free text editors
Free integrated development environments
Free software programmed in C++
Software that was ported from GTK to Qt |
30953704 | https://en.wikipedia.org/wiki/School%20of%20Computer%20Science%20and%20Electronic%20Engineering%2C%20Essex%20University | School of Computer Science and Electronic Engineering, Essex University | The School of Computer Science and Electronic Engineering at the University of Essex is an academic department that focuses on teaching and research in computer science and electronic engineering. It was formed by the merger of two departments, notable for being amongst the first in England in their fields: the Department of Electronic Systems Engineering (1966) and the Department of Computer Science (1966).
Achievements
The School/Department is notable for the following achievements:
The Department's MSc course in Telecommunications was the first in the world to cover the complete telecommunication system, including both switching and transmission.
The world's first telephone based system for deaf people to communicate with each other was invented and developed in the department by Don Pearson in 1981. The system was based on sign language - cameras and display devices were able to work within the limited telephone bandwidth to enable sign language communication two decades before the widespread use of broadband and web-cameras.
The department produced the first MSc on the Theory of Programming Languages (1970; Laski, Turner) called Program Linguistics.
In 1970, Charles Broyden developed the BFGS method for numerical optimisation. The method is still the industry standard, in constant use around the world after nearly 40 years; a brief usage sketch follows below.
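As one illustration of that ubiquity, SciPy exposes BFGS as a standard solver. The sketch below is illustrative only; the test function and starting point are arbitrary choices, not taken from the source:

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    """Classic optimisation test function with a global minimum at (1, 1)."""
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

# BFGS builds an approximation to the inverse Hessian from gradient
# information, avoiding the cost of computing second derivatives directly.
result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="BFGS")
print(result.x)  # expected to be close to [1.0, 1.0]
```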
Current notable research
The Photonics Hyperhighway project began in 2010 and is planned to run until 2016. It is funded by the Engineering and Physical Sciences Research Council (EPSRC) and aims to focus on energy-efficient, ultra-high-capacity ICT infrastructure. The project plans to make broadband internet 100 times faster and includes a partnership with the BBC to help broadcast ultra-high-definition content.
The School also conducts research into artificial intelligence and computer games with the UK Research Network on Artificial Intelligence and Video Game Technologies.
Notable alumni and staff
Richard Bartle, co-creator of MUD1 (the first Multi-User Dungeon), author of Designing Virtual Worlds, and a lecturer at the University.
Tony Brooker, the University's founding Chair of Computer Science (1967).
Charles George Broyden, a senior lecturer in the department from 1967 to 1970, independently discovered the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method, which has been a key technique in solving optimization problems; he was also well known for Broyden's method and the Broyden family of methods. In 2009, the journal Optimization Methods and Software named the Charles Broyden Prize after him to "honor this remarkable researcher".
Riccardo Poli, a major contributor to the field of genetic programming and a current lecturer at the University.
Edward Tsang, known for his work on constraint satisfaction and computational finance
Ray Turner, notable for his work on logic in computer science and for his pioneering work in the philosophy of computer science. Emeritus Professor
References
External links
University of Essex School of Computer Science and Electronic Engineering
Essex
University of Essex
Educational institutions established in 2007
2007 establishments in England |
24782330 | https://en.wikipedia.org/wiki/Master%20boot%20record | Master boot record | A master boot record (MBR) is a special type of boot sector at the very beginning of partitioned computer mass storage devices like fixed disks or removable drives intended for use with IBM PC-compatible systems and beyond. The concept of MBRs was publicly introduced in 1983 with PC DOS 2.0.
The MBR holds the information on how the logical partitions, containing file systems, are organized on that medium. The MBR also contains executable code to function as a loader for the installed operating system—usually by passing control over to the loader's second stage, or in conjunction with each partition's volume boot record (VBR). This MBR code is usually referred to as a boot loader.
The organization of the partition table in the MBR limits the maximum addressable storage space of a partitioned disk to 2 TiB. Approaches to slightly raise this limit assuming 32-bit arithmetic or 4096-byte sectors are not officially supported, as they fatally break compatibility with existing boot loaders and most MBR-compliant operating systems and system tools, and can cause serious data corruption when used outside of narrowly controlled system environments. Therefore, the MBR-based partitioning scheme is in the process of being superseded by the GUID Partition Table (GPT) scheme in new computers. A GPT can coexist with an MBR in order to provide some limited form of backward compatibility for older systems.
MBRs are not present on non-partitioned media such as floppies, superfloppies or other storage devices configured to behave as such.
Overview
Support for partitioned media, and thereby the master boot record (MBR), was introduced with IBM PC DOS 2.0 in March 1983 in order to support the 10 MB hard disk of the then-new IBM Personal Computer XT, still using the FAT12 file system. The original version of the MBR was written by David Litton of IBM in June 1982. The partition table supported up to four primary partitions, of which DOS could only use one. This did not change when FAT16 was introduced as a new file system with DOS 3.0. Support for an extended partition, a special primary partition type used as a container to hold other partitions, was added with DOS 3.2, and nested logical drives inside an extended partition came with DOS 3.30. Since MS-DOS, PC DOS, OS/2 and Windows were never enabled to boot off them, the MBR format and boot code remained almost unchanged in functionality, except for in some third-party implementations, throughout the eras of DOS and OS/2 up to 1996.
In 1996, support for logical block addressing (LBA) was introduced in Windows 95B and DOS 7.10 in order to support disks larger than 8 GB. Disk timestamps were also introduced. This also reflected the idea that the MBR is meant to be operating system and file system independent. However, this design rule was partially compromised in more recent Microsoft implementations of the MBR, which enforce CHS access for FAT16B and FAT32 partition types /, whereas LBA is used for /.
Despite sometimes poor documentation of certain intrinsic details of the MBR format (which occasionally caused compatibility problems), it has been widely adopted as a de facto industry standard, due to the broad popularity of PC-compatible computers and its semi-static nature over decades. This was even to the extent of being supported by computer operating systems for other platforms. Sometimes this was in addition to other pre-existing or cross-platform standards for bootstrapping and partitioning.
MBR partition entries and the MBR boot code used in commercial operating systems, however, are limited to 32 bits. Therefore, the maximum disk size supported on disks using 512-byte sectors (whether real or emulated) by the MBR partitioning scheme (without 33-bit arithmetic) is limited to 2 TiB. Consequently, a different partitioning scheme must be used for larger disks, as they have become widely available since 2010. The MBR partitioning scheme is therefore in the process of being superseded by the GUID Partition Table (GPT). The official approach does little more than ensuring data integrity by employing a protective MBR. Specifically, it does not provide backward compatibility with operating systems that do not support the GPT scheme as well. Meanwhile, multiple forms of hybrid MBRs have been designed and implemented by third parties in order to maintain partitions located in the first physical 2 TiB of a disk in both partitioning schemes "in parallel" and/or to allow older operating systems to boot off GPT partitions as well. The present non-standard nature of these solutions causes various compatibility problems in certain scenarios.
The MBR consists of 512 or more bytes located in the first sector of the drive.
It may contain one or more of:
A partition table describing the partitions of a storage device. In this context the boot sector may also be called a partition sector.
Bootstrap code: Instructions to identify the configured bootable partition, then load and execute its volume boot record (VBR) as a chain loader.
Optional 32-bit disk timestamp.
Optional 32-bit disk signature.
Disk partitioning
IBM PC DOS 2.0 introduced the FDISK utility to set up and maintain MBR partitions. When a storage device has been partitioned according to this scheme, its MBR contains a partition table describing the locations, sizes, and other attributes of linear regions referred to as partitions.
The partitions themselves may also contain data to describe more complex partitioning schemes, such as extended boot records (EBRs), BSD disklabels, or Logical Disk Manager metadata partitions.
The MBR is not located in a partition; it is located at a first sector of the device (physical offset 0), preceding the first partition. (The boot sector present on a non-partitioned device or within an individual partition is called a volume boot record instead.) In cases where the computer is running a DDO BIOS overlay or boot manager, the partition table may be moved to some other physical location on the device; e.g., Ontrack Disk Manager often placed a copy of the original MBR contents in the second sector, then hid itself from any subsequently booted OS or application, so the MBR copy was treated as if it were still residing in the first sector.
Sector layout
By convention, there are exactly four primary partition table entries in the MBR partition table scheme, although some operating systems and system tools extended this to five (Advanced Active Partitions (AAP) with PTS-DOS 6.60 and DR-DOS 7.07), eight (AST and NEC MS-DOS 3.x as well as Storage Dimensions SpeedStor), or even sixteen entries (with Ontrack Disk Manager).
Partition table entries
An artifact of hard disk technology from the era of the PC XT, the partition table subdivides a storage medium using units of cylinders, heads, and sectors (CHS addressing). These values no longer correspond to their namesakes in modern disk drives, as well as being irrelevant in other devices such as solid-state drives, which do not physically have cylinders or heads.
In the CHS scheme, sector indices have (almost) always begun with sector 1 rather than sector 0 by convention, and due to an error in all versions of MS-DOS/PC DOS up to including 7.10, the number of heads is generally limited to 255 instead of 256. When a CHS address is too large to fit into these fields, the tuple (1023, 254, 63) is typically used today, although on older systems, and with older disk tools, the cylinder value often wrapped around modulo the CHS barrier near 8 GB, causing ambiguity and risks of data corruption. (If the situation involves a "protective" MBR on a disk with a GPT, Intel's Extensible Firmware Interface specification requires that the tuple (1023, 255, 63) be used.) The 10-bit cylinder value is recorded within two bytes in order to facilitate making calls to the original/legacy INT 13h BIOS disk access routines, where 16 bits were divided into sector and cylinder parts, and not on byte boundaries.
Due to the limits of CHS addressing, a transition was made to using LBA, or logical block addressing. Both the partition length and partition start address are sector values stored in the partition table entries as 32-bit quantities. The sector size used to be considered fixed at 512 (2^9) bytes, and a broad range of important components including chipsets, boot sectors, operating systems, database engines, partitioning tools, backup and file system utilities and other software had this value hard-coded. Since the end of 2009, disk drives employing 4096-byte sectors (4Kn or Advanced Format) have been available, although the size of the sector for some of these drives was still reported as 512 bytes to the host system through conversion in the hard-drive firmware and referred to as 512 emulation drives (512e).
Since block addresses and sizes are stored in the partition table of an MBR using 32 bits, the maximum size, as well as the highest start address, of a partition using drives that have 512-byte sectors (actual or emulated) cannot exceed 2 TiB − 512 bytes ( bytes or (2^32−1) sectors × 512 (2^9) bytes per sector). Alleviating this capacity limitation was one of the prime motivations for the development of the GPT.
Since partitioning information is stored in the MBR partition table using a beginning block address and a length, it may in theory be possible to define partitions in such a way that the allocated space for a disk with 512-byte sectors gives a total size approaching 4 TiB, if all but one partition are located below the 2 TiB limit and the last one is assigned as starting at or close to block 2^32−1 with a size of up to 2^32−1 sectors, thereby defining a partition that requires 33 rather than 32 bits for the sector address to be accessed. However, in practice, only certain LBA-48-enabled operating systems, including Linux, FreeBSD and Windows 7, that use 64-bit sector addresses internally actually support this. Due to code space constraints and the nature of the MBR partition table to only support 32 bits, boot sectors, even if enabled to support LBA-48 rather than LBA-28, often use 32-bit calculations, unless they are specifically designed to support the full address range of LBA-48 or are intended to run on 64-bit platforms only. Any boot code or operating system using 32-bit sector addresses internally would cause addresses to wrap around accessing this partition and thereby result in serious data corruption over all partitions.
For disks that present a sector size other than 512 bytes, such as USB external drives, there are limitations as well. A sector size of 4096 results in an eight-fold increase in the size of a partition that can be defined using MBR, allowing partitions up to 16 TiB (2^32 × 4096 bytes) in size. Versions of Windows more recent than Windows XP support the larger sector sizes, as well as Mac OS X, and Linux has supported larger sector sizes since 2.6.31 or 2.6.32, but issues with boot loaders, partitioning tools and computer BIOS implementations present certain limitations, since they are often hard-wired to reserve only 512 bytes for sector buffers, causing memory to become overwritten for larger sector sizes. This may cause unpredictable behaviour as well, and therefore should be avoided when compatibility and standard conformity is an issue.
Where a data storage device has been partitioned with the GPT scheme, the master boot record will still contain a partition table, but its only purpose is to indicate the existence of the GPT and to prevent utility programs that understand only the MBR partition table scheme from creating any partitions in what they would otherwise see as free space on the disk, thereby accidentally erasing the GPT.
System bootstrapping
On IBM PC-compatible computers, the bootstrapping firmware (contained within the ROM BIOS) loads and executes the master boot record. The PC/XT (type 5160) used an Intel 8088 microprocessor. In order to remain compatible, all x86 architecture systems start with the microprocessor in an operating mode referred to as real mode. The BIOS reads the MBR from the storage device into physical memory, and then it directs the microprocessor to the start of the boot code. Since the BIOS runs in real mode, the processor is in real mode when the MBR program begins to execute, and so the beginning of the MBR is expected to contain real-mode machine code.
Since the BIOS bootstrap routine loads and runs exactly one sector from the physical disk, having the partition table in the MBR with the boot code simplifies the design of the MBR program. It contains a small program that loads the Volume Boot Record (VBR) of the targeted partition. Control is then passed to this code, which is responsible for loading the actual operating system. This process is known as chain loading.
Popular MBR code programs were created for booting PC DOS and MS-DOS, and similar boot code remains in wide use. These boot sectors expect the FDISK partition table scheme to be in use and scan the list of partitions in the MBR's embedded partition table to find the only one that is marked with the active flag. The boot code then loads and runs the volume boot record (VBR) of the active partition.
There are alternative boot code implementations, some of which are installed by boot managers, which operate in a variety of ways. Some MBR code loads additional code for a boot manager from the first track of the disk, which it assumes to be "free" space that is not allocated to any disk partition, and executes it. An MBR program may interact with the user to determine which partition on which drive should boot, and may transfer control to the MBR of a different drive. Other MBR code contains a list of disk locations (often corresponding to the contents of files in a filesystem) of the remainder of the boot manager code to load and to execute. (The first relies on behavior that is not universal across all disk partitioning utilities, most notably those that read and write GPTs. The last requires that the embedded list of disk locations be updated when changes are made that would relocate the remainder of the code.)
On machines that do not use x86 processors, or on x86 machines with non-BIOS firmware such as Open Firmware or Extensible Firmware Interface (EFI) firmware, this design is unsuitable, and the MBR is not used as part of the system bootstrap. EFI firmware is instead capable of directly understanding the GPT partitioning scheme and the FAT filesystem format, and loads and runs programs held as files in the EFI System partition. The MBR will be involved only insofar as it might contain a partition table for compatibility purposes if the GPT partition table scheme has been used.
There is some MBR replacement code that emulates EFI firmware's bootstrap, which makes non-EFI machines capable of booting from disks using the GPT partitioning scheme. It detects a GPT, places the processor in the correct operating mode, and loads the EFI compatible code from disk to complete this task.
Disk identity
In addition to the bootstrap code and a partition table, master boot records may contain a disk signature. This is a 32-bit value that is intended to identify uniquely the disk medium (as opposed to the disk unit—the two not necessarily being the same for removable hard disks).
The disk signature was introduced by Windows NT version 3.5, but it is now used by several operating systems, including the Linux kernel version 2.6 and later. Linux tools can use the NT disk signature to determine which disk the machine booted from.
Windows NT (and later Microsoft operating systems) uses the disk signature as an index to all the partitions on any disk ever connected to the computer under that OS; these signatures are kept in Windows Registry keys, primarily for storing the persistent mappings between disk partitions and drive letters. It may also be used in Windows NT BOOT.INI files (though most do not), to describe the location of bootable Windows NT (or later) partitions. One key (among many), where NT disk signatures appear in a Windows 2000/XP registry, is:
HKEY_LOCAL_MACHINE\SYSTEM\MountedDevices\
If a disk's signature stored in the MBR was (in that order) and its first partition corresponded with logical drive C: under Windows, then the REG_BINARY data under the key value \DosDevices\C: would be:
A8 E1 B9 D2 00 7E 00 00 00 00 00 00
The first four bytes are the disk signature. (In other keys, these bytes may appear in reverse order from that found in the MBR sector.) These are followed by eight more bytes, forming a 64-bit integer, in little-endian notation, which are used to locate the byte offset of this partition. In this case, corresponds to the hexadecimal value (). Under the assumption that the drive in question reports a sector size of 512 bytes, dividing this byte offset by 512 results in 63, which is the physical sector number (or LBA) containing the first sector of the partition (unlike the sector count used in the sectors value of CHS tuples, which counts from one, the absolute or LBA sector value starts counting from zero).
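The arithmetic can be checked directly; a minimal sketch decoding the twelve example bytes quoted above, assuming a reported sector size of 512 bytes:

```python
# The example REG_BINARY value from above: disk signature plus byte offset.
value = bytes.fromhex("A8E1B9D2007E000000000000")

disk_signature = value[:4]                           # first four bytes
byte_offset = int.from_bytes(value[4:12], "little")  # 64-bit little-endian

print(disk_signature.hex())  # a8e1b9d2
print(byte_offset)           # 32256 (0x7E00)
print(byte_offset // 512)    # 63, the partition's first sector (LBA)
```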
If this disk had another partition with the values following the disk signature (under, e.g., the key value \DosDevices\D:), it would begin at byte offset (), which is also the first byte of physical sector .
Starting with Windows Vista, the disk signature is also stored in the Boot Configuration Data (BCD) store, and the boot process depends on it. If the disk signature changes, cannot be found or has a conflict, Windows is unable to boot. Unless Windows is forced to use the overlapping part of the LBA address of the Advanced Active Partition entry as pseudo-disk signature, Windows' usage is conflictive with the Advanced Active Partition feature of PTS-DOS 7 and DR-DOS 7.07, in particular if their boot code is located outside the first 8 GB of the disk, so that LBA addressing must be used.
Programming considerations
The MBR originated in the PC XT. IBM PC-compatible computers are little-endian, which means the processor stores numeric values spanning two or more bytes in memory least significant byte first. The format of the MBR on media reflects this convention. Thus, the MBR signature will appear in a disk editor as the sequence 55 AA.
The bootstrap sequence in the BIOS will load the first valid MBR that it finds into the computer's physical memory at address :. The last instruction executed in the BIOS code will be a "jump" to that address in order to direct execution to the beginning of the MBR copy. The primary validation for most BIOSes is the signature at offset , although a BIOS implementer may choose to include other checks, such as verifying that the MBR contains a valid partition table without entries referring to sectors beyond the reported capacity of the disk.
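A minimal sketch of that primary validation, reading the first sector of a raw disk image (the file name is a placeholder) and checking the signature:

```python
SECTOR_SIZE = 512

with open("disk.img", "rb") as f:  # placeholder path to a raw disk image
    sector = f.read(SECTOR_SIZE)

# The 16-bit signature 0xAA55 is stored little-endian, so it appears on
# disk as the byte sequence 55 AA at offsets 510-511.
if len(sector) == SECTOR_SIZE and sector[510:512] == b"\x55\xaa":
    print("valid MBR signature")
else:
    print("no MBR signature found")
```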
To the BIOS, removable (e.g. floppy) and fixed disks are essentially the same. For either, the BIOS reads the first physical sector of the media into RAM at absolute address , checks the signature in the last two bytes of the loaded sector, and then, if the correct signature is found, transfers control to the first byte of the sector with a jump (JMP) instruction. The only real distinction that the BIOS makes is that (by default, or if the boot order is not configurable) it attempts to boot from the first removable disk before trying to boot from the first fixed disk. From the perspective of the BIOS, the action of the MBR loading a volume boot record into RAM is exactly the same as the action of a floppy disk volume boot record loading the object code of an operating system loader into RAM. In either case, the program that the BIOS loaded is going about the work of chain loading an operating system.
While the MBR boot sector code expects to be loaded at physical address :, all the memory from physical address : (address : is the last one used by a Phoenix BIOS) to :, later relaxed to : (and sometimes up to :), the end of the first 640 KB, is available in real mode. The INT 12h BIOS interrupt call may help in determining how much memory can be allocated safely (by default, it simply reads the base memory size in KB from segment:offset location :, but it may be hooked by other resident pre-boot software like BIOS overlays, RPL code or viruses to reduce the reported amount of available memory in order to keep other boot stage software like boot sectors from overwriting them).
The last 66 bytes of the 512-byte MBR are reserved for the partition table and other information, so the MBR boot sector program must be small enough to fit within 446 bytes of memory or less.
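Given that layout (446 bytes of boot code, a 64-byte table of four 16-byte partition entries, and the two signature bytes), the entries can be decoded as in the following sketch of the classic entry format:

```python
import struct

def parse_partition_table(sector: bytes):
    """Decode the four 16-byte entries of a classic MBR partition table."""
    entries = []
    for i in range(4):
        raw = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        status, chs_first, ptype, chs_last, lba_start, num_sectors = \
            struct.unpack("<B3sB3sII", raw)
        if ptype != 0:  # a partition type of 0 marks an unused entry
            entries.append({
                "active": bool(status & 0x80),  # bit 7 is the active flag
                "type": ptype,
                "lba_start": lba_start,         # 32-bit start sector
                "num_sectors": num_sectors,     # 32-bit length in sectors
            })
    return entries
```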
The MBR code examines the partition table, selects a suitable partition and loads the program that will perform the next stage of the boot process, usually by making use of INT 13h BIOS calls. The MBR bootstrap code loads and runs (a boot loader- or operating system-dependent) volume boot record code that is located at the beginning of the "active" partition. The volume boot record will fit within a 512-byte sector, but it is safe for the MBR code to load additional sectors to accommodate boot loaders longer than one sector, provided they do not make any assumptions on what the sector size is. In fact, at least 1 KB of RAM is available at address in every IBM XT- and AT-class machine, so a 1 KB sector could be used with no problem. Like the MBR, a volume boot record normally expects to be loaded at address :. This derives from the fact that the volume boot record design originated on unpartitioned media, where a volume boot record would be directly loaded by the BIOS boot procedure; as mentioned above, the BIOS treats MBRs and volume boot records (VBRs) exactly alike. Since this is the same location where the MBR is loaded, one of the first tasks of an MBR is to relocate itself somewhere else in memory. The relocation address is determined by the MBR, but it is most often : (for MS-DOS/PC DOS, OS/2 and Windows MBR code) or : (most DR-DOS MBRs). (Even though both of these segmented addresses resolve to the same physical memory address in real mode, for Apple Darwin to boot, the MBR must be relocated to : instead of :, since the code depends on the DS:SI pointer to the partition entry provided by the MBR, but it erroneously refers to it via :SI only.) It is important not to relocate to other addresses in memory because many VBRs will assume a certain standard memory layout when loading their boot file.
The Status field in a partition table record is used to indicate an active partition. Standard-conformant MBRs will allow only one partition marked active and use this as part of a sanity-check to determine the existence of a valid partition table. They will display an error message, if more than one partition has been marked active. Some non-standard MBRs will not treat this as an error condition and just use the first marked partition in the row.
Traditionally, values other than (not active) and (active) were invalid and the bootstrap program would display an error message upon encountering them. However, the Plug and Play BIOS Specification and BIOS Boot Specification (BBS) allowed other devices to become bootable as well since 1994. Consequently, with the introduction of MS-DOS 7.10 (Windows 95B) and higher, the MBR started to treat a set bit 7 as active flag and showed an error message for values .. only. It continued to treat the entry as physical drive unit to be used when loading the corresponding partition's VBR later on, thereby now also accepting other boot drives than as valid, however, MS-DOS did not make use of this extension by itself. Storing the actual physical drive number in the partition table does not normally cause backward compatibility problems, since the value will differ from only on drives other than the first one (which have not been bootable before, anyway). However, even with systems enabled to boot off other drives, the extension may still not work universally, for example, after the BIOS assignment of physical drives has changed when drives are removed, added or swapped. Therefore, per the BIOS Boot Specification (BBS), it is best practice for a modern MBR accepting bit 7 as active flag to pass on the DL value originally provided by the BIOS instead of using the entry in the partition table.
BIOS to MBR interface
The MBR is loaded at memory location : with the following CPU registers set up when the prior bootstrap loader (normally the IPL in the BIOS) passes execution to it by jumping to : in the CPU's real mode.
CS:IP = : (fixed)
Some Compaq BIOSes erroneously use : instead. While this resolves to the same location in real mode memory, it is non-standard and should be avoided, since MBR code assuming certain register values or not written to be relocatable may not work otherwise.
DL is supported by IBM BIOSes as well as most other BIOSes. The Toshiba T1000 BIOS is known not to support this properly, and some old Wyse 286 BIOSes use DL values greater or equal to 2 for fixed disks (thereby reflecting the logical drive numbers under DOS rather than the physical drive numbers of the BIOS). USB sticks configured as removable drives typically get an assignment of DL = , , etc. However, some rare BIOSes erroneously presented them under DL = , just as if they were configured as superfloppies.
A standard conformant BIOS assigns numbers greater or equal to exclusively to fixed disk / removable drives, and traditionally only values and were passed on as physical drive units during boot. By convention, only fixed disks / removable drives are partitioned, therefore, the only DL value an MBR could see traditionally was . Many MBRs were coded to ignore the DL value and work with a hard-wired value (normally ), anyway.
The Plug and Play BIOS Specification and BIOS Boot Specification (BBS) have allowed other devices to become bootable as well since 1994. The latter recommends that MBR and VBR code should use DL rather than internally hardwired defaults. This will also ensure compatibility with various non-standard assignments (see examples above), as far as the MBR code is concerned.
Bootable CD-ROMs following the El Torito specification may contain disk images mounted by the BIOS to appear as floppies or superfloppies on this interface. DL values of and may also be used by Protected Area Run Time Interface Extension Services (PARTIES) and Trusted Computing Group (TCG) BIOS extensions in Trusted mode to access otherwise invisible PARTIES partitions, disk image files located via the Boot Engineering Extension Record (BEER) in the last physical sector of a hard disk's Host Protected Area (HPA). While designed to emulate floppies or superfloppies, MBR code accepting these non-standard DL values makes it possible to use images of partitioned media at least in the boot stage of operating systems.
DH bit 5 = 0: device supported through INT 13h; else: don't care (should be zero). DH is supported by some IBM BIOSes.
Some of the other registers may typically also hold certain register values (DS, ES, SS = ; SP = ) with original IBM ROM BIOSes, but this is nothing to rely on, as other BIOSes may use other values. For this reason, MBR code by IBM, Microsoft, Digital Research, etc. never did take any advantage of it. Relying on these register values in boot sectors may also cause problems in chain-boot scenarios.
Systems with Plug-and-Play BIOS or BBS support will provide a pointer to PnP data in addition to DL:
DL = boot drive unit (see above)
ES:DI = points to "$PnP" installation check structure
This information allows the boot loader in the MBR (or VBR, if passed on) to actively interact with the BIOS or a resident PnP / BBS BIOS overlay in memory in order to configure the boot order, etc., however, this information is ignored by most standard MBRs and VBRs. Ideally, ES:DI is passed on to the VBR for later use by the loaded operating system, but PnP-enabled operating systems typically also have fallback methods to retrieve the PnP BIOS entry point later on so that most operating systems do not rely on this.
MBR to VBR interface
By convention, a standard conformant MBR passes execution to a successfully loaded VBR, loaded at memory location :, by jumping to : in the CPU's real mode with the following registers maintained or specifically set up:
CS:IP = : (constant)
DL = boot drive unit (see above)
MS-DOS 2.0-7.0 / PC DOS 2.0-6.3 MBRs do not pass on the DL value received on entry, but rather use the boot status entry in the partition table entry of the selected primary partition as the physical boot drive unit. Since this is, by convention, in most MBR partition tables, it won't change things unless the BIOS attempted to boot off a physical device other than the first fixed disk / removable drive in the row. This is also the reason why these operating systems cannot boot off a second hard disk, etc. Some FDISK tools allow marking partitions on secondary disks as "active" as well. In this situation, knowing that these operating systems cannot boot off other drives anyway, some of them continue to use the traditionally fixed value of as the active marker, whereas others use values corresponding with the currently assigned physical drive unit (, ), thereby, at least in theory, allowing booting off other drives. In fact, this will work with many MBR codes, which take a set bit 7 of the boot status entry as the active flag rather than insisting on , however, MS-DOS/PC DOS MBRs are hard-wired to accept the fixed value of only. Storing the actual physical drive number in the partition table will also cause problems, when the BIOS assignment of physical drives changes, for example when drives are removed, added or swapped. Therefore, for a normal MBR, accepting bit 7 as the active flag and otherwise just using and passing on to the VBR the DL value originally provided by the BIOS allows for maximum flexibility. MS-DOS 7.1 - 8.0 MBRs have changed to treat bit 7 as the active flag and any values .. as invalid, but they still take the physical drive unit from the partition table rather than using the DL value provided by the BIOS. DR-DOS 7.07 extended MBRs treat bit 7 as the active flag and use and pass on the BIOS DL value by default (including non-standard values .. used by some BIOSes also for partitioned media), but they also provide a special NEWLDR configuration block in order to support alternative boot methods in conjunction with LOADER and REAL/32 as well as to change the detail behaviour of the MBR, so that it can also work with drive values retrieved from the partition table (important in conjunction with LOADER and AAPs, see NEWLDR offset 0x000C), translate Wyse non-standard drive units .. to .., and optionally fix up the drive value (stored at offset 0x019 in the Extended BIOS Parameter Block (EBPB) or at sector offset 0x01FD ) in loaded VBRs before passing execution to them (see NEWLDR offset 0x0014); this also allows other boot loaders to use NEWLDR as a chain-loader, configure its in-memory image on the fly and "tunnel" the loading of VBRs, EBRs, or AAPs through NEWLDR.
The contents of DH and ES:DI should be preserved by the MBR for full Plug-and-Play support (see above), however, many MBRs, including those of MS-DOS 2.0 - 8.0 / PC DOS 2.0 - 6.3 and Windows NT/2000/XP, do not. (This is unsurprising, since those versions of DOS predate the Plug-and-Play BIOS standard, and previous standards and conventions indicated no requirements to preserve any register other than DL.) Some MBRs set DH to 0.
The MBR code passes additional information to the VBR in many implementations:
DS:SI = points to the 16-byte MBR partition table entry (in the relocated MBR) corresponding with the activated VBR. PC-MOS 5.1 depends on this to boot if no partition in the partition table is flagged as bootable. In conjunction with LOADER, Multiuser DOS and REAL/32 boot sectors use this to locate the boot sector of the active partition (or another bootstrap loader like IBMBIO.LDR at a fixed position on disk) if the boot file (LOADER.SYS) could not be found. PTS-DOS 6.6 and S/DOS 1.0 use this in conjunction with their Advanced Active Partition (AAP) feature. In addition to support for LOADER and AAPs, DR-DOS 7.07 can use this to determine the necessary INT 13h access method when using its dual CHS/LBA VBR code and it will update the boot drive / status flag field in the partition entry according to the effectively used DL value. Darwin bootloaders (Apple's boot1h, boot1u, and David Elliott's boot1fat32) depend on this pointer as well, but additionally they don't use DS, but assume it to be set to instead. This will cause problems if this assumption is incorrect. The MBR code of OS/2, MS-DOS 2.0 to 8.0, PC DOS 2.0 to 7.10 and Windows NT/2000/XP provides this same interface as well, although these systems do not use it. The Windows Vista/7 MBRs no longer provide this DS:SI pointer. While some extensions only depend on the 16-byte partition table entry itself, other extensions may require the whole 4 (or 5 entry) partition table to be present as well.
DS:BP = optionally points to the 16-byte MBR partition table entry (in the relocated MBR) corresponding with the activated VBR. This is identical to the pointer provided by DS:SI (see above) and is provided by MS-DOS 2.0-8.0, PC DOS 2.0-7.10, Windows NT/2000/XP/Vista/7 MBRs. It is, however, not supported by most third-party MBRs.
Under DR-DOS 7.07 an extended interface may be optionally provided by the extended MBR and in conjunction with LOADER:
AX = magic signature indicating the presence of this NEWLDR extension ()
DL = boot drive unit (see above)
DS:SI = points to the 16-byte MBR partition table entry used (see above)
ES:BX = start of boot sector or NEWLDR sector image (typically )
CX = reserved
In conjunction with GPT, an Enhanced Disk Drive Specification (EDD) 4 Hybrid MBR proposal recommends another extension to the interface:
EAX = ("!GPT")
DL = boot drive unit (see above)
DS:SI = points to a Hybrid MBR handover structure, consisting of a 16-byte dummy MBR partition table entry (with all bits set except for the boot flag at offset and the partition type at offset ) followed by additional data. This is partially compatible with the older DS:SI extension discussed above, if only the 16-byte partition entry, not the whole partition table is required by these older extensions.
Since older operating systems (including their VBRs) do not support this extension nor are they able to address sectors beyond the 2 TiB barrier, a GPT-enabled hybrid boot loader should still emulate the 16-byte dummy MBR partition table entry if the boot partition is located within the first 2 TiB.
ES:DI = points to "$PnP" installation check structure (see above)
Editing and replacing contents
Though it is possible to manipulate the bytes in the MBR sector directly using various disk editors, there are tools to write fixed sets of functioning code to the MBR. Since MS-DOS 5.0, the program FDISK has included the switch /MBR, which will rewrite the MBR code. Under Windows 2000 and Windows XP, the Recovery Console can be used to write new MBR code to a storage device using its fixmbr command. Under Windows Vista and Windows 7, the Recovery Environment can be used to write new MBR code using the BOOTREC /FIXMBR command.
Some third-party utilities may also be used for directly editing the contents of partition tables (without requiring any knowledge of hexadecimal or disk/sector editors), such as MBRWizard.
dd is also a commonly used POSIX command to read or write to any location on a storage device, MBR included. In Linux, ms-sys may be used to install a Windows MBR. The GRUB and LILO projects have tools for writing code to the MBR sector, namely grub-install and lilo -mbr. The GRUB Legacy interactive console can write to the MBR, using the setup and embed commands, but GRUB2 currently requires grub-install to be run from within an operating system.
Various programs are able to create a "backup" of both the primary partition table and the logical partitions in the extended partition.
Linux sfdisk (on a SystemRescueCD) is able to save a backup of the primary and extended partition table. It creates a file that can be read in a text editor, or this file can be used by sfdisk to restore the primary/extended partition table. An example command to back up the partition table is sfdisk -d /dev/hda > hda.out and to restore is sfdisk /dev/hda < hda.out. It is possible to copy the partition table from one disk to another this way, which is useful for setting up mirroring, but note that sfdisk executes the command without prompting or warnings when using sfdisk -d /dev/sda | sfdisk /dev/sdb.
See also
Extended boot record (EBR)
Volume boot record (VBR)
GUID Partition Table (GPT)
BIOS Boot partition
EFI System partition
Boot engineering extension record (BEER)
Host protected area (HPA)
Device configuration overlay (DCO)
Apple partition map (APM)
Amiga rigid disk block (RDB)
Volume Table of Contents (VTOC)
BSD disklabel
Boot loader
Disk cloning
Recovery disc
GNU Parted
Partition alignment
Notes
References
Further reading
External links
Article on master boot record
The MBR and how it fits into the BIOS boot process
BIOS
Booting
Disk partitions |
13427539 | https://en.wikipedia.org/wiki/Apache%20CouchDB | Apache CouchDB | Apache CouchDB is an open-source document-oriented NoSQL database, implemented in Erlang.
CouchDB uses multiple formats and protocols to store, transfer, and process its data. It uses JSON to store data, JavaScript as its query language using MapReduce, and HTTP for an API.
CouchDB was first released in 2005 and later became an Apache Software Foundation project in 2008.
Unlike a relational database, a CouchDB database does not store data and relationships in tables. Instead, each database is a collection of independent documents. Each document maintains its own data and self-contained schema. An application may access multiple databases, such as one stored on a user's mobile phone and another on a server. Document metadata contains revision information, making it possible to merge any differences that may have occurred while the databases were disconnected.
CouchDB implements a form of multiversion concurrency control (MVCC) so it does not lock the database file during writes. Conflicts are left to the application to resolve. Resolving a conflict generally involves first merging data into one of the documents, then deleting the stale one.
Other features include document-level ACID semantics with eventual consistency, (incremental) MapReduce, and (incremental) replication. One of CouchDB's distinguishing features is multi-master replication, which allows it to scale across machines to build high-performance systems. A built-in Web application called Fauxton (formerly Futon) helps with administration.
History
Couch is an acronym for cluster of unreliable commodity hardware.
The CouchDB project was created in April 2005 by Damien Katz, a former Lotus Notes developer at IBM. He self-funded the project for almost two years and released it as an open-source project under the GNU General Public License.
In February 2008, it became an Apache Incubator project and was offered under the Apache License instead. A few months later, it graduated to a top-level project. This led to the first stable version being released in July 2010.
In early 2012, Katz left the project to focus on Couchbase Server.
Since Katz's departure, the Apache CouchDB project has continued, releasing 1.2 in April 2012 and 1.3 in April 2013. In July 2013, the CouchDB community merged the codebase for BigCouch, Cloudant's clustered version of CouchDB, into the Apache project. The BigCouch clustering framework is included in the current release of Apache CouchDB.
Native clustering is supported as of version 2.0.0, and the new Mango query server provides a simple JSON-based way to perform CouchDB queries without JavaScript or MapReduce.
Main features
ACID Semantics
CouchDB provides ACID semantics. It does this by implementing a form of Multi-Version Concurrency Control, meaning that CouchDB can handle a high volume of concurrent readers and writers without conflict.
Built for Offline
CouchDB can replicate to devices (like smartphones) that can go offline, and handles data synchronization when the device is back online.
Distributed Architecture with Replication
CouchDB was designed with bi-directional replication (or synchronization) and off-line operation in mind. That means multiple replicas can have their own copies of the same data, modify it, and then sync those changes at a later time.
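As a brief sketch of how one replication can be triggered over the HTTP API (the database names are illustrative, and a local unauthenticated node on port 5984 is assumed, as elsewhere in this article):

  import json
  import urllib.request

  # Ask the local node to replicate one database into another; adding
  # "continuous": True would keep the target in sync as changes arrive.
  req = urllib.request.Request(
      "http://127.0.0.1:5984/_replicate",
      data=json.dumps({"source": "music", "target": "music-backup",
                       "create_target": True}).encode("utf-8"),
      headers={"Content-Type": "application/json"},
      method="POST",
  )
  with urllib.request.urlopen(req) as resp:
      print(resp.read().decode())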
Document Storage
CouchDB stores data as "documents", as one or more field/value pairs expressed as JSON. Field values can be simple things like strings, numbers, or dates; but ordered lists and associative arrays can also be used. Every document in a CouchDB database has a unique id and there is no required document schema.
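A minimal, hypothetical document illustrating this model (the _id and _rev fields are maintained by CouchDB; the remaining fields are free-form):

  {
    "_id": "6e1295ed6c29495e54cc05947f18c8af",
    "_rev": "1-2902191555",
    "title": "There is Nothing Left to Lose",
    "artist": "Foo Fighters",
    "year": 1999
  }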
Eventual Consistency
CouchDB guarantees eventual consistency to be able to provide both availability and partition tolerance.
Map/Reduce Views and Indexes
The stored data is structured using views. In CouchDB, each view is constructed by a JavaScript function that acts as the Map half of a map/reduce operation. The function takes a document and transforms it into a single value that it returns. CouchDB can index views and keep those indexes updated as documents are added, removed, or updated.
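As a sketch of this mechanism, a view's map function is written in JavaScript but stored as a string inside a design document, which is uploaded over HTTP. The database name, view name, and document fields below are illustrative:

  import json
  import urllib.request

  # A design document holding one view; the map function is JavaScript
  # kept as a string inside the JSON body.
  design = {
      "_id": "_design/albums",
      "views": {
          "by_year": {
              "map": "function (doc) { if (doc.year) { emit(doc.year, doc.title); } }"
          }
      },
  }

  req = urllib.request.Request(
      "http://127.0.0.1:5984/music/_design/albums",
      data=json.dumps(design).encode("utf-8"),
      headers={"Content-Type": "application/json"},
      method="PUT",
  )
  with urllib.request.urlopen(req) as resp:
      print(resp.status, resp.read().decode())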
HTTP API
All items have a unique URI that gets exposed via HTTP. It uses the HTTP methods POST, GET, PUT and DELETE for the four basic CRUD (Create, Read, Update, Delete) operations on all resources.
CouchDB also offers a built-in Web-based administration interface, Fauxton (formerly Futon).
Use cases and production deployments
The replication and synchronization capabilities of CouchDB make it well suited for use in mobile devices, where a network connection is not guaranteed and the application must keep working offline.
CouchDB is well suited for applications with accumulating, occasionally changing data, on which pre-defined queries are to be run and where versioning is important (CRM and CMS systems, for example). Master-master replication is an especially interesting feature, allowing easy multi-site deployments.
Users
Users of CouchDB include:
Amadeus IT Group, for some of their back-end systems.
Credit Suisse, for internal use at commodities department for their marketplace framework.
Meebo, for their social platform (Web and applications). Meebo was acquired by Google and most products were shut down on July 12, 2012.
npm, for their package registry.
Sophos, for some of their back-end systems.
The BBC, for its dynamic content platforms.
Canonical began using it in 2009 for its synchronization service "Ubuntu One", but stopped using it in November 2011.
CANAL+ for international on-demand platform at CANAL+ Overseas.
Protogrid, as storage back-end for their rapid application development framework.
Data manipulation: documents and views
CouchDB manages a collection of JSON documents. The documents are organised via views. Views are defined with aggregate functions and filters, which are computed in parallel, much like MapReduce.
Views are generally stored in the database and their indexes updated continuously. CouchDB supports a view system using external socket servers and a JSON-based protocol. As a consequence, view servers have been developed in a variety of languages (JavaScript is the default, but there are also PHP, Ruby, Python and Erlang).
Accessing data via HTTP
Applications interact with CouchDB via HTTP. The following demonstrates a few examples using cURL, a command-line utility. These examples assume that CouchDB is running on localhost (127.0.0.1) on port 5984.
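A few representative commands (the database name and document ID are illustrative):

  # Create a database
  curl -X PUT http://127.0.0.1:5984/example

  # Create a document with a server-generated ID
  curl -X POST http://127.0.0.1:5984/example -H "Content-Type: application/json" -d '{"title": "Hello CouchDB"}'

  # Fetch a document by its ID
  curl -X GET http://127.0.0.1:5984/example/some_doc_id

  # Delete the database
  curl -X DELETE http://127.0.0.1:5984/example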
PouchDB
PouchDB is a JavaScript implementation of CouchDB that is API-compatible with it. CouchDB can thus be used on the server side with PouchDB embedded in the application itself, and the two can be synchronized once the application comes online. This is especially useful for progressive web applications that rely on an offline-first approach.
Open source components
CouchDB includes a number of other open source projects as part of its default package.
See also
Document-oriented database
XML database
References
Bibliography
External links
CouchDB: The Definitive Guide
Complete HTTP API Reference
Simple PHP5 library to communicate with CouchDB
Asynchronous CouchDB client for Java
Asynchronous CouchDB client for Scala
CouchDB
Client-server database management systems
Cross-platform software
Database-related software for Linux
Distributed computing architecture
Document-oriented databases
Erlang (programming language)
Free database management systems
NoSQL
Structured storage
Unix network-related software
Free software programmed in Erlang
2005 software |
1123994 | https://en.wikipedia.org/wiki/LAN%20Manager | LAN Manager | LAN Manager was a network operating system (NOS) available from multiple vendors and developed by Microsoft in cooperation with 3Com Corporation. It was designed to succeed 3Com's 3+Share network server software which ran atop a heavily modified version of MS-DOS.
History
The LAN Manager OS/2 operating system was co-developed by IBM and Microsoft. It originally used the Server Message Block (SMB) protocol atop either the NetBIOS Frames (NBF) protocol or a specialized version of the Xerox Network Systems (XNS) protocol. These legacy protocols had been inherited from previous products such as MS-Net for MS-DOS, Xenix-NET for MS-Xenix, and the aforementioned 3+Share. A version of LAN Manager for Unix-based systems called LAN Manager/X was also available.
In 1990, Microsoft announced LAN Manager 2.0 with a host of improvements, including support for TCP/IP as a transport protocol. The last version of LAN Manager, 2.2, which included an MS-OS/2 1.31 base operating system, remained Microsoft's strategic server system until the release of Windows NT Advanced Server in 1993.
Versions
1987 – MS LAN Manager 1.0 (Basic/Enhanced)
1989 – MS LAN Manager 1.1
1991 – MS LAN Manager 2.0
1992 – MS LAN Manager 2.1
1992 – MS LAN Manager 2.1a
1993 – MS LAN Manager 2.2
1994 – MS LAN Manager 2.2a
Many vendors shipped licensed versions, including:
3Com Corporation 3+Open
HP LAN Manager/X
IBM LAN Server
Tapestry Torus
The Santa Cruz Operation
Password hashing algorithm
The LM hash is computed as follows (a code sketch of these steps appears after the list):
The user's password is restricted to a maximum of fourteen characters.
The user's password is converted to uppercase.
The user's password is encoded in the System OEM code page.
This password is NULL-padded to 14 bytes.
The "fixed-length" password is split into two 7-byte halves.
These values are used to create two DES keys, one from each 7-byte half, by converting the seven bytes into a bit stream with the most significant bit first, and inserting a parity bit after every seven bits (so 1010100 becomes 10101000). This generates the 64 bits needed for a DES key. (A DES key ostensibly consists of 64 bits; however, only 56 of these are actually used by the algorithm. The parity bits added in this step are later discarded.)
Each of the two keys is used to DES-encrypt the constant ASCII string "KGS!@#$%", resulting in two 8-byte ciphertext values. The DES CipherMode should be set to ECB, and PaddingMode should be set to NONE.
These two ciphertext values are concatenated to form a 16-byte value, which is the LM hash.
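The following Python sketch follows these steps; it assumes the pycryptodome package for DES and simplifies the OEM code-page step to ASCII:

  from Crypto.Cipher import DES  # pycryptodome

  def _expand_key(half7: bytes) -> bytes:
      # Turn 7 bytes into an 8-byte DES key by appending a placeholder
      # parity bit after every 7 bits (DES discards the parity bits).
      bits = "".join(f"{b:08b}" for b in half7)                         # 56 bits
      key_bits = "".join(bits[i:i + 7] + "0" for i in range(0, 56, 7))  # 64 bits
      return bytes(int(key_bits[i:i + 8], 2) for i in range(0, 64, 8))

  def lm_hash(password: str) -> bytes:
      # Uppercase, encode (simplified to ASCII here), truncate/pad to 14 bytes.
      pwd = password.upper().encode("ascii", "replace")[:14].ljust(14, b"\x00")
      out = b""
      for half in (pwd[:7], pwd[7:]):
          cipher = DES.new(_expand_key(half), DES.MODE_ECB)
          out += cipher.encrypt(b"KGS!@#$%")
      return out

  # An empty password yields the well-known constant halves:
  assert lm_hash("") == bytes.fromhex("AAD3B435B51404EE" * 2)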
Security weaknesses
LAN Manager authentication uses a particularly weak method of hashing a user's password known as the LM hash algorithm, stemming from the mid-1980s, when viruses transmitted by floppy disks were the major concern. Although it is based on DES, a well-studied block cipher, the LM hash has several weaknesses in its design.
This makes such hashes crackable in a matter of seconds using rainbow tables, or in a few minutes using brute force. Starting with Windows NT, it was replaced by NTLM, which is still vulnerable to rainbow tables and to brute-force attacks unless long, unpredictable passwords are used (see password cracking). NTLM is used for logon with local accounts except on domain controllers, since Windows Vista and later versions no longer maintain the LM hash by default. Kerberos is used in Active Directory environments.
The major weaknesses of LAN Manager authentication protocol are:
Password length is limited to a maximum of 14 characters chosen from the 95 ASCII printable characters.
Passwords are not case sensitive. All passwords are converted into uppercase before generating the hash value. Hence LM hash treats PassWord, password, PaSsWoRd, PASSword and other similar combinations the same as PASSWORD. This practice effectively reduces the LM hash key space to 69 characters.
A 14-character password is broken into 7+7 characters and the hash is calculated for each half separately. This way of calculating the hash makes it dramatically easier to crack, as the attacker only needs to brute-force 7 characters twice instead of the full 14 characters. This makes the effective strength of a 14-character password equal to only 2 × 69^7 ≈ 2^43 combinations, or twice that of a 7-character password, which is 3.7 trillion times less complex than the theoretical strength of a 14-character single-case password. As of 2020, a computer equipped with a high-end graphics processor (GPU) can compute 40 billion LM hashes per second. At that rate, all 7-character passwords from the 95-character set can be tested and broken in half an hour; all 7-character alphanumeric passwords can be tested and broken in 2 seconds.
If the password is 7 characters or less, the second half of the hash will always produce the same constant value (0xAAD3B435B51404EE). Therefore, a password that is 7 characters long or shorter can be identified visually without tools (though with high-speed GPU attacks, this matters less).
The hash value is sent to network servers without salting, making it susceptible to man-in-the-middle attacks such as replaying the hash. Without salt, time–memory trade-off pre-computed dictionary attacks, such as a rainbow table, are feasible. In 2003, Ophcrack, an implementation of the rainbow table technique, was published. It specifically targets the weaknesses of LM encryption, and includes pre-computed data sufficient to crack virtually all alphanumeric LM hashes in a few seconds. Many cracking tools, such as RainbowCrack, Hashcat, L0phtCrack and Cain, now incorporate similar attacks and make cracking of LM hashes fast and trivial.
Workarounds
To address the security weaknesses inherent in LM encryption and authentication schemes, Microsoft introduced the NTLMv1 protocol in 1993 with Windows NT 3.1. For hashing, NTLM uses Unicode support, replacing LMhash=DESeach(DOSCHARSET(UPPERCASE(password)), "KGS!@#$%") by NThash=MD4(UTF-16-LE(password)), which does not require any padding or truncating that would simplify the key. On the negative side, the same DES algorithm was used with only 56-bit encryption for the subsequent authentication steps, and there is still no salting. Furthermore, Windows machines were for many years configured by default to send and accept responses derived from both the LM hash and the NTLM hash, so the use of the NTLM hash provided no additional security while the weaker hash was still present. It also took time for artificial restrictions on password length in management tools such as User Manager to be lifted.
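A minimal sketch of the NT hash computation; note that MD4 availability in Python's hashlib depends on the underlying OpenSSL build, so this is an assumption about the local environment:

  import hashlib

  def nt_hash(password: str) -> str:
      # NThash = MD4(UTF-16-LE(password)); no uppercasing, padding or truncation.
      # hashlib.new("md4") requires an OpenSSL build that still provides MD4.
      return hashlib.new("md4", password.encode("utf-16-le")).hexdigest()

  print(nt_hash("password"))  # 8846f7eaee8fb117ad06bdd830b7586c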
While LAN Manager is considered obsolete and current Windows operating systems use the stronger NTLMv2 or Kerberos authentication methods, Windows systems before Windows Vista/Windows Server 2008 enabled the LAN Manager hash by default for backward compatibility with legacy LAN Manager and Windows ME or earlier clients, or legacy NetBIOS-enabled applications. It has for many years been considered good security practice to disable the compromised LM and NTLMv1 authentication protocols where they aren't needed.
Starting with Windows Vista and Windows Server 2008, Microsoft disabled the LM hash by default; the feature can be enabled for local accounts via a security policy setting, and for Active Directory accounts by applying the same setting via domain Group Policy. The same method can be used to turn the feature off in Windows 2000, Windows XP and NT. Users can also prevent a LM hash from being generated for their own password by using a password at least fifteen characters in length.
NTLM hashes have in turn become vulnerable in recent years to various attacks that effectively make them as weak today as LanMan hashes were back in 1998.
Reasons for continued use of LM hash
Many legacy third party SMB implementations have taken considerable time to add support for the stronger protocols that Microsoft has created to replace LM hashing because the open source communities supporting these libraries first had to reverse engineer the newer protocols—Samba took 5 years to add NTLMv2 support, while JCIFS took 10 years.
Poor patching regimes subsequent to software releases supporting the feature becoming available have contributed to some organisations continuing to use LM Hashing in their environments, even though the protocol is easily disabled in Active Directory itself.
Lastly, prior to the release of Windows Vista, many unattended build processes still used a DOS boot disk (instead of Windows PE) to start the installation of Windows using WINNT.EXE, something that requires LM hashing to be enabled for the legacy LAN Manager networking stack to work.
See also
NT LAN Manager
Active Directory
Password cracking
Dictionary attack
Remote Program Load (RPL)
Security Account Manager
Notes
References
External links
Alt URL
Computer access control protocols
Discontinued Microsoft software
Network operating systems
OS/2
Password authentication
Broken hash functions
Microsoft Windows security technology
1987 software |
45714713 | https://en.wikipedia.org/wiki/Monero | Monero | Monero (; XMR) is a decentralized cryptocurrency. It uses a public distributed ledger with privacy-enhancing technologies that obfuscate transactions to achieve anonymity and fungibility. Observers cannot decipher addresses trading monero, transaction amounts, address balances, or transaction histories.
The protocol is open source and based on CryptoNote, a concept described in a 2013 white paper authored by Nicolas van Saberhagen. The cryptography community used this concept to design Monero, and deployed its mainnet in 2014. Monero uses ring signatures, zero-knowledge proofs, "stealth addresses", and IP address obscuring methods to obfuscate transaction details. These features are baked into the protocol, though users can optionally share view keys for third party auditing. Transactions are validated through a miner network running RandomX, a proof of work algorithm. The algorithm issues new coins to miners, and was designed to be resistant to ASIC mining.
Monero has the third largest developer community among cryptocurrencies, behind bitcoin and Ethereum. Its privacy features have attracted cypherpunks and users desiring privacy measures not provided in other cryptocurrencies. It is increasingly used in illicit activities such as money laundering, darknet markets, ransomware, and cryptojacking. The United States Internal Revenue Service (IRS) has posted bounties for contractors that can develop monero tracing technologies.
Background
Monero's roots can be traced back to CryptoNote, a cryptocurrency protocol first described in a white paper published by Nicolas van Saberhagen (presumed pseudonymous) in October 2013. The author described privacy and anonymity as "the most important aspects of electronic cash" and called bitcoin's traceability a "critical flaw". A Bitcointalk forum user "thankful_for_today" coded these ideas into a coin they dubbed BitMonero. Other forum users disagreed with thankful_for_today's direction for BitMonero, and forked it in 2014 to create monero. Monero translates to coin in Esperanto, and the Esperanto plural moneroj is sometimes used. Both van Saberhagen and thankful_for_today remain anonymous.
Monero has the third largest community of developers, behind bitcoin and Ethereum. The protocol's lead maintainer was previously South African developer Riccardo Spagni. Much of the core development team chooses to remain anonymous.
Privacy
Monero's key features are those around privacy and anonymity. Even though it is a public and decentralized ledger, all transaction details are obfuscated. This contrasts with bitcoin, where all transaction details, user addresses, and wallet balances are public and transparent. These features have given monero a loyal following among crypto anarchists, cypherpunks, and privacy advocates.
The addresses of users sending monero are protected through ring signatures, which groups a sender's address with other addresses. Obfuscation of transaction amounts began in 2017 with the implementation of ring confidential transactions (RingCTs). Developers also implemented a zero-knowledge proof method, "Bulletproofs", which guarantee a transaction occurred without revealing its value. Monero recipients are protected through "stealth addresses", addresses generated by users to receive funds, but untraceable to an owner by a network observer. These privacy features are enforced on the network by default, though users have the option to share a private view key to permit third party auditing of their wallet, or a transaction key to audit a transaction.
Monero uses Dandelion++, a protocol which obscures the IP address of devices producing transactions. This is done through a method of transaction broadcast propagation; new transactions are initially passed to one node on monero's peer-to-peer network, and a repeated probabilistic method is used to determine when the transaction should be sent to just one node or broadcast to many nodes in a process called flooding. This method was motivated by the growing blockchain analysis market and the potential use of botnets for analysis.
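As a toy illustration of the stem/fluff idea only (the topology, the transition probability q, and the function names are invented for this example and do not reflect the actual Dandelion++ specification):

  import random

  def propagate(tx, node, peers, q=0.1):
      # Stem phase: keep relaying to a single random peer until a biased
      # coin flip (probability q per hop) switches to the fluff phase,
      # in which the transaction is broadcast to all of the node's peers.
      while random.random() > q:
          node = random.choice(peers[node])
          print(f"stem: {tx} relayed to node {node}")
      print(f"fluff: node {node} broadcasts {tx} to peers {peers[node]}")

  peers = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
  propagate("tx123", 0, peers)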
Efforts to trace transactions
In April 2017, researchers highlighted three major threats to monero users' privacy. The first relies on leveraging the ring signature size of zero, and ability to see the output amounts. The second, "Leveraging Output Merging", involves tracking transactions where two outputs belong to the same user, such as when they send funds to themselves ("churning"). Finally, "Temporal Analysis", shows that predicting the right output in a ring signature could potentially be easier than previously thought. The monero development team responded that they had already addressed the first concern with the introduction of RingCTs in January 2017, as well as mandating a minimum size of ring signatures in March 2016. In 2018, researchers presented possible vulnerabilities in a paper titled "An Empirical Analysis of Traceability in the Monero Blockchain". The monero team responded in March 2018.
In September 2020, the United States Internal Revenue Service's criminal investigation division (IRS-CI), posted a $625,000 bounty for contractors who could develop tools to help trace monero, other privacy-enhanced cryptocurrencies, the bitcoin Lightning Network, or other "layer 2" protocols. The contract was awarded to blockchain analysis groups Chainalysis and Integra FEC.
Mining
Monero uses a proof of work algorithm, RandomX, to validate transactions. The method was introduced in November 2019 to replace the former algorithm CryptoNightR. Both algorithms were designed to be resistant to ASIC mining, which is commonly used to mine other cryptocurrencies such as Bitcoin. Monero can be mined somewhat efficiently on consumer-grade hardware such as x86, x86-64, ARM and GPUs, a design decision based on the Monero project's opposition to the mining centralisation that ASIC mining creates, but one that has also resulted in Monero's popularity among malware-based non-consensual miners.
Illicit use
Monero's privacy features have made it popular for illicit purposes. (Kshetri, Nir (2018). "Cryptocurrencies: Transparency Versus Privacy". Computer. IEEE Computer Society. 51 (11): 99–111.)
Darknet markets
Monero is a common medium of exchange on darknet markets. In August 2016, dark market AlphaBay permitted its vendors to start accepting monero as an alternative to bitcoin. The site was taken offline by law enforcement in 2017, but it was relaunched in 2021 with monero as the sole permitted currency. Reuters reported in 2019 that three of the five largest darknet markets accepted monero, though bitcoin was still the most widely used form of payment in those markets.
Mining malware
Hackers have embedded malware into websites and applications that hijack victim CPUs to mine monero (sometimes called cryptojacking). In late 2017, malware and antivirus service providers blocked Coinhive, a JavaScript implementation of a monero miner that was embedded in websites and apps, in some cases by hackers. Coinhive generated the script as an alternative to advertisements; a website or app could embed it and use website visitors' CPUs to mine the cryptocurrency while the visitor consumed the content of the webpage, with the site or app owner getting a percentage of the mined coins. Some websites and apps did this without informing visitors, and some hackers implemented it in a way that drained visitors' CPUs. As a result, the script was blocked by companies offering ad blocking subscription lists, antivirus services, and antimalware services. Coinhive had been previously found hidden in Showtime-owned streaming platforms, as well as Starbucks Wi-Fi hotspots in Argentina. In 2018, researchers found similar malware that mines monero and sends it to Kim Il-sung University in North Korea.
Ransomware
Monero is sometimes used by ransomware groups. According to CNBC, in the first half of 2018, monero was used in 44% of cryptocurrency ransomware attacks.
One group behind the 2017 WannaCry ransomware attack, the Shadow Brokers, attempted to exchange the ransom they collected in bitcoin to monero. Ars Technica and Fast Company reported that the exchange was successful, but BBC News reported that the service the criminal attempted to use, ShapeShift, denied any such transfer. The Shadow Brokers began accepting monero as payment later in 2017.
In 2021, CNBC, the Financial Times, and Newsweek reported that demand for monero was increasing following the recovery of a bitcoin ransom paid in the Colonial Pipeline cyber attack. The May 2021 hack forced the pipeline to pay a $4.4M ransom in bitcoin, though a large portion was recovered by the United States federal government the following month. The group behind the attack, DarkSide, normally requests payment in either bitcoin or monero, but charge a 10-20% premium for payments made in bitcoin due to its increased traceability risk. Ransomware group REvil removed the option of paying ransom in bitcoin in 2021, demanding only monero. Ransomware negotiators, groups that help victims pay ransoms, have contacted monero developers to understand the technology. Despite this, CNBC reported that bitcoin was still the currency of choice demanded in most ransomware attacks, as insurers refuse to pay monero ransom payments because of traceability concerns.
Regulatory responses
The attribution of monero to illicit markets has influenced some exchanges to forgo listing it. This has made it more difficult for users to exchange monero for fiat currencies or other cryptocurrencies. Exchanges in South Korea and Australia have delisted monero and other privacy coins due to regulatory pressure.
In 2018, Europol and its director Rob Wainwright wrote that the year would see criminals shift from using bitcoin to using monero, as well as Ethereum, dash, and zcash. Bloomberg and CNN reported that this demand for monero was because authorities were becoming better at monitoring the bitcoin blockchain.
Publicity
After many online payment platforms shut down access for white nationalists following the Unite the Right rally in 2017, some of them, including Christopher Cantwell and Andrew Auernheimer ("weev"), started using and promoting monero.
In December 2017, the Monero team announced a partnership with 45 musicians and several online stores for monero to be used as a form of payment for their merchandise.
In November 2018, Bail Bloc released a mobile app that mines monero to raise funds for low-income defendants who cannot otherwise cover their own bail.
References
External links
2014 software
Cryptocurrency projects
Currencies introduced in 2014
Private currencies |
872847 | https://en.wikipedia.org/wiki/SCO%E2%80%93Linux%20disputes | SCO–Linux disputes | The SCO–Linux disputes were a series of legal and public disputes between the software company SCO Group (SCO) and various Linux vendors and users. The SCO Group alleged that its license agreements with IBM meant that source code IBM wrote and donated to be incorporated into Linux was added in violation of SCO's contractual rights. Members of the Linux community disagreed with SCO's claims; IBM, Novell and Red Hat filed claims against SCO.
On August 10, 2007, a federal district court judge in SCO v. Novell ruled on summary judgment that Novell, not the SCO Group, was the rightful owner of the copyrights covering the Unix operating system. The court also ruled that "SCO is obligated to recognize Novell's waiver of SCO's claims against IBM and Sequent". After the ruling, Novell announced they had no interest in suing people over Unix and stated "We don't believe there is Unix in Linux". The final district court ruling, on November 20, 2008, affirmed the summary judgment, and added interest payments and a constructive trust.
On August 24, 2009, the U.S. Court of Appeals for the Tenth Circuit partially reversed the district court judgment. The appeals court remanded back to trial on the issues of copyright ownership and Novell's contractual waiver rights. The court upheld the $2,547,817 award granted to Novell for the 2003 Sun agreement.
On March 30, 2010, following a jury trial, Novell, and not The SCO Group, was unanimously found to be the owner of the UNIX and UnixWare copyrights. The SCO Group, through bankruptcy trustee Edward Cahn, decided to continue the lawsuit against IBM for causing a decline in SCO revenues.
On March 1, 2016, SCO's lawsuit against IBM was dismissed with prejudice; SCO filed an appeal later that month.
Overview
Unix is a major computer operating system, developed in the United States of America. Prior to the events of this case, the intellectual property rights (IP) in Unix were held by Unix System Laboratories (USL), part of AT&T, but the area of IP ownership was complex. By 2003, the rights in Unix had been transferred several times and there was dispute as to the correct owner in law. Also, some of the code within Unix had been written prior to the Copyright Act of 1976, or was developed by third parties, or was developed or licensed under different licenses existing at the time. The software company SCO Group (SCO), formerly Caldera International, asserted in 2003 that it was the owner of Unix, and that other Unix-type operating systems—particularly the free operating system Linux and other variants of Unix sold by competitor companies—were violating their intellectual property by using Unix code without a license in their works.
SCO initially claimed, and tried to assert, a legal means to litigate directly against all end-users of these operating systems, as well as the companies or groups providing them—potentially a very substantial case and one that would throw fear into the market about using them. However, it was unable to formulate such a case, since the Unix copyrights were weakly worded, there was no basis in patent law, and breach of trade secrets would only affect the one or few companies who might have been alleged to have disclosed trade secrets. Lacking grounds to sue all users generally, SCO dropped this aspect of its cases.
The assertions were heavily contested. Claims of SCO's own copyright violations of these other systems were raised, along with claims related to SCO being bound by, or violating, the GPL licence, under which SCO conducted business related to these systems. Claims were also made that the case was substantially financed and promoted by Microsoft and investment businesses with links to Microsoft; around that time (1998–2004 onwards), Microsoft was fiercely engaged in various FUD tactics such as its Get the facts campaign, that sought to undermine or discredit Linux as a possible competitor to its own Windows operating systems and server systems.
In the end, SCO launched only a few main legal cases—against IBM for improper disclosure and breach of copyright related to its AIX operating system, against Novell for interference (clouding the issue of ownership), against DaimlerChrysler for non-compliance with a demand to certify certain matters related to Unix usage, and against Linux business and former client AutoZone for violating SCO's rights by using Linux. Separately, the Linux company Red Hat also filed a legal claim against SCO for making false claims that affected its (Red Hat's) business, and to seek a court declaration that SCO had no ownership rights in Linux code.
In 2007, a court ruled in SCO v. Novell that Novell and not SCO was the owner of the Unix copyrights. Most of these cases have since been resolved, or largely resolved, and none of the rulings have been in SCO's favor.
Timeline and major cases
At the beginning of 2003, SCO claimed that there had been "misappropriation of its UNIX System V code into Linux". However, the company refused to identify the specific segments of code, claiming that it was a secret which they would reveal only to the court. They did say that the code could be found in the SMP, RCU and a few other parts of the Linux kernel.
On 6 March 2003, they announced that they were suing IBM for $1 billion, claiming that IBM transferred SCO trade secrets into Linux. That amount later rose to $3 billion, and then again to $5 billion. In May 2003, Novell publicly stated that it, not SCO, owned the AT&T Unix intellectual property in question.
Some observers noted that the USL v. BSDi case had shown that the Unix copyrights were weak and unenforceable. SCO has not claimed patent infringement, as according to the US Patent and Trademark Office database, no AT&T or Novell patent was ever assigned to SCO. The UNIX trademark was not owned by SCO. That left arguing over trade secrets, which, after some opposition, was hard to take beyond a breach of contract between SCO and IBM, and consequently a claim only against IBM. SCO was looking for something directed at the greater Linux community, and has since explicitly dropped all trade secret claims from their case.
With little legal ground left, SCO began numerous legal claims and threats against many of the major names in the computer industry, including IBM, Hewlett-Packard, Microsoft, Novell, Silicon Graphics, Sun Microsystems and Red Hat.
By mid-2004, five major lawsuits had been filed:
SCO v. IBM
Red Hat v. SCO
SCO v. Novell (not directly related to Linux, the suit has more to do with Unix copyrights)
SCO v. DaimlerChrysler
SCO v. AutoZone
In these cases, SCO publicly implied that a number of other parties had committed copyright infringement, including not only Linux developers, but also Linux users.
UNIX SVRx
SCO's claims are derived from several contracts that may have transferred UNIX System V Release 4 intellectual property assets. The UNIX IP rights originated with Unix System Laboratories (USL), a division of AT&T. In 1993, USL sold all UNIX rights and assets to Novell, including copyrights, trademarks, and active licensing contracts. Some of these rights and assets, plus additional assets derived from Novell's development work, were then sold to the Santa Cruz Operation in 1995. The Santa Cruz Operation had developed and was selling a PC-based UNIX until 2000, when it then resold its UNIX assets to Caldera Systems, which later reorganized into Caldera International and changed its name to SCO Group.
Through this chain of sales, SCO claims to be the "owner of UNIX". The validity of these claims is hotly contested by others. SCO claims copyright to all UNIX code developed by USL, referred to as SVRx, and licensing contracts originating with AT&T, saying that these are inherited through the same chain of sales. The primary document SCO presents as evidence of these claims is the "Asset Purchase Agreement", defining the sale between Novell and the Santa Cruz Operation. SCO says that this includes all copyrights to the UNIX code base and contractual rights to the licensing base. The other parties disagree.
UNIX copyrights ownership
The status of copyrights from USL is murky, since UNIX code is a compilation of elements with different copyright histories. Some code was released without copyright notice before the Copyright Act of 1976 made copyright automatic. This code may be in the public domain and not subject to copyright claims. Other code is affected by the USL v. BSDi case, and is covered by the BSD License.
Groklaw uncovered an old settlement made between Unix System Laboratories (USL) and The University of California in the case of USL v. BSDi. This settlement ended a copyright infringement suit against the university for making BSD source code freely available that USL felt infringed their copyrights. The university filed a counter suit, saying that USL had taken BSD source code and put it in UNIX without properly acknowledging the university's copyright. This settlement muddies the question of SCO's ownership of major parts of the UNIX source code. This uncertainty is particularly significant in regard to SCO's claims against Linux, which uses some BSD code.
Novell challenges SCO's interpretation of the purchase agreement. In response to a letter SCO sent to 1500 companies on May 12, 2003, Novell exchanged a series of letters with SCO beginning in May 2003, claiming that the copyrights for the core UNIX System V were not included in the asset purchase agreement and are retained by Novell. In October 2003, Novell registered those copyrights with the US Copyright Office.
In response to these challenges from Novell, SCO filed a "slander of title" suit against Novell, SCO v. Novell. This claimed that Novell was interfering with their business activities by clouding the ownership of UNIX copyrights. SCO's claim for special damages was dismissed on June 9, 2004, for "failure to specifically plead special damages." However, SCO was given 30 days "to amend its complaint to more specifically plead special damages". In the same ruling, the judge stated that it was questionable whether or not the Asset Purchase Agreement transferred the relevant copyrights, reasoning that the APA amendment by which SCO was claiming to have acquired those rights contained no transfer language in the form of "seller hereby conveys to buyer" and that it used ambiguous language when it came to the question of when and how and which rights were to be transferred.
SCO filed an amended complaint. In late July, 2005, Novell filed an answer to SCO's complaint, denying all of its accusations. Novell also filed its own Slander of Title counter-lawsuit against SCO. Novell has also filed claims for numerous breaches of the APA (Asset Purchase Agreement) between Novell and the Santa Cruz Operation. Under the APA, Santa Cruz (and later SCO after SCO purchased Santa Cruz Operation's Unix Business) was given the right to market and sell Unixware as a product, retaining 100% of all revenues. Santa Cruz Operation (and later SCO) also was given the responsibility of administering Unix SVR4 license agreements on behalf of Novell. When money was paid for licensing, SCO was to turn over 100% of the revenue to Novell, and then Novell would return 5% as an Administration Fee. Novell claims that SCO signed Unix SVR4 licensing agreements with Microsoft and Sun Microsystems, as well as with numerous Linux End Users for Unix IP allegedly in the Linux Kernel, and then refused to turn the money over to Novell. Novell is suing for 100% of the revenue, claiming SCO is not entitled to the 5% administration fee since they breached their contract with Novell. Novell's counterclaims asked the court to place appropriate funds from SCO into escrow until the case is resolved, since SCO's cash is diminishing quickly.
Novell also retained the right to audit SCO's Unix Licensing Business under the APA. Novell claims that SCO has not turned over vital information about the Microsoft, Sun, and Linux End User License Agreements, despite repeated demands by Novell for them to do so. Novell, in another claim that is part of their counter suit, is asking the court to compel SCO to allow Novell to perform this audit of SCO's Unix Business.
On August 10, 2007, Judge Dale Kimball, hearing the SCO v. Novell case, ruled that "the court concludes that Novell is the owner of the UNIX and UnixWare Copyrights".
License administration standing
The Novell to Santa Cruz Operation Asset Purchase Agreement also involved the administration of some 6000 standing licensing agreements between various UNIX users and the previous owners. These licensees include universities, software corporations and computer hardware companies. SCO's claimed ownership of the licenses has become an issue in three aspects of the SCO–Linux controversies. The first was the cancellation of IBM's license, the second was SCO's complaint against DaimlerChrysler (see SCO v. DaimlerChrysler), and the third is the derivative works claim of the SCO v. IBM case.
In May 2003, SCO canceled IBM's SVRx license to its version of UNIX, AIX. This was based on SCO's claim of unrestricted ownership of the System V licensing contracts inherited from USL. IBM ignored the license cancellation, claiming that an amendment to the original license made it "irrevocable". In addition, as part of the Purchase Agreement, Novell retained certain rights of control over the administration of the licenses which were sold, including rights to act on SCO's behalf in some cases. Novell exercised one of these rights by revoking SCO's cancellation of the IBM license. SCO disputed the validity of both of these actions, and amended its SCO v. IBM complaint to include copyright infringement, based on IBM's continued sale and use of AIX without a valid SVRx license.
In December 2003, SCO demanded that all UNIX licensees certify some items, some related to the use of Linux, that were not provided for in the license agreement language. Since DaimlerChrysler failed to respond, SCO filed the SCO v. DaimlerChrysler suit in March 2004. All claims related to the certification demands were summarily dismissed by the court.
Control of derivative works
The third issue based on the UNIX licensees agreement is related to SCO's claims of control of derivative works.
Many UNIX licensees have added features to the core UNIX SVRx system and those new features contain computer code not in the original SVRx code base. In most cases, software copyright is owned by the person or company that develops the code. SCO, however, claims that the original licensing agreements define this new code as a derivative work. They also claim that they have the right to control and restrict the use and distribution of that new code.
These claims are the basis of SCO v. IBM. SCO's initial complaint said that IBM violated the original licensing agreement by not maintaining confidentiality with the new code, developed and copyrighted by IBM, and releasing it to the Linux project.
IBM claims that the license agreement (noted in the $Echo newsletter of April 1985) and subsequent licenses defines derivative works as the developer's property. This leaves IBM free to do as it wishes with its new code. In August 2004, IBM filed a motion for partial summary judgment. The motion stated that IBM has the right to do as it wishes with software not part of the original SVRx code. In February 2005, the motion was dismissed as premature, because discovery was not yet complete. IBM refiled this motion along with other summary judgment motions as noted below in September 2006.
SCO allegations of copyright and trade secret violations
SCO claims that Linux infringes SCO's copyright, trade secrets, and contractual rights. This claim is fundamental to the SCOsource program, where SCO has demanded that Linux users obtain licenses from SCOsource to be properly licensed to use the code in question. Exactly which parts of Linux are involved remains unclear as many of their claims are still under seal in the SCO v. IBM lawsuit.
SCO originally claimed in SCO v. IBM that IBM had violated trade secrets. But these alleged violations by IBM would not have involved Linux distributors or end users. SCO's trade secret claims were dropped by SCO in their amended complaint.
SCO also claimed line-for-line literal copying of code from UNIX code files to Linux kernel files and obfuscated copying of code, but originally refused to publicly identify which code was in violation. SCO submitted to the court evidence of their claims under seal but much of it was excluded from the case after it was challenged by IBM as not meeting the specificity requirements to be included.
These examples have fallen into two groups. The first are segments of files or whole files alleged to originate in UNIX SVRx code such as the errno.h header file. The second group are files and materials contributed by IBM that originated with IBM development work associated with AIX and Dynix, IBM's two UNIX products.
Each of these has a different set of issues. In order for copyright to be violated, several conditions must be met. First, the claimant must be able to show that they own the copyrights for the material in question. Second, all or a significant part of the source must be present in the infringing material. There must be enough similarity to show direct copying of material.
SVRx code allegedly in Linux
The issue of ownership of the SVRx code base was discussed above. Besides the unresolved issue of what was actually transferred from Novell to Santa Cruz Operation, there are also the portions of the SVRx code base that are covered by BSD copyrights or that are in the public domain.
SCO's first public disclosure of what they claimed is infringing code was at its SCO Forum conference in August 2003 at the MGM Grand Las Vegas. The first, known as the Berkeley Packet Filter, was distributed under the BSD License and is freely usable by anyone. The second example was related to memory allocation functions, also released under the BSD License. It is no longer in the Linux code base.
SCO has also claimed that code related to application programming interfaces was copied from UNIX. However, this code and the underlying standards they describe are in the public domain and are also covered by rights USL sold to The Open Group. A later claim was made to code segments related to ELF file format standards. This material was developed by the Tool Interface Standard (TIS) Committee and placed in the public domain. SCO claims that the TIS Committee had no authority to place ELF in the public domain, even though SCO's predecessor in interest was a member of the committee.
SCO has claimed that some are violating UNIX SVRx copyrights by putting UNIX code into Linux. They may or may not have brought this claim directly in any of their cases. The IBM case is about derivative works, not SVRx code (see below). The Novell case is about copyright ownership. DaimlerChrysler was about contractual compliance statements.
The "may or may not" comes from AutoZone's case. In AutoZone, SCO's complaint claimed damages for AutoZone's use of Linux. However, when objecting to AutoZone's request for a stay pending the IBM case, SCO apparently contradicted their written complaint, claiming that the case was entirely about AutoZone copying certain libraries (outside the Linux kernel) from a UNIX system to a Linux-based system to facilitate moving an internal application to the Linux platform faster; SCO's original complaint does not appear to mention these libraries. AutoZone denies having done this with UNIX libraries. If SCO's oral description of their case is the correct one, then their AutoZone claim has nothing to do with the Linux kernel or the actions of any distributors.
The copyright issue is addressed directly in two of the cases. The first is by IBM in their counterclaim in SCO v. IBM. The issue is central to a pending motion by IBM, stating that IBM violated no copyrights in its Linux related activities. It is also addressed by Red Hat in the Red Hat v. SCO case. Red Hat claims that SCO's statements about infringement in Linux are unproven and untrue, damaging to them and violates the Lanham Act. Red Hat asks for an injunction to stop claims of violations without proof. They also ask for a judgment that they violated no SCO copyrights. A hearing on the IBM motion was held on September 15, 2004. Judge Kimball took the motion under advisement. The Red Hat case is on hold.
Allegations of reverse copying
EWeek has reported allegations that SCO may have copied parts of the Linux kernel into SCO UNIX as part of its Linux Kernel Personality feature. If true, this would mean that SCO is guilty of a breach of the Linux kernel copyrights. SCO has denied this allegation, but according to Groklaw, one SCO employee confirmed it in a deposition.
IBM code in Linux
SCO has claimed a number of instances of IBM Linux code as breaches of contract. These examples include code related to symmetric multiprocessing (SMP), Journaled File System (JFS), Read-copy-update (RCU) and Non-Uniform Memory Access (NUMA). This code is in the Linux kernel, having been added by IBM through the normal kernel submission process. The code was developed and copyrighted by IBM, which had added these features to AIX and Dynix.
SCO claims that they have "control rights" to this due to their licensing agreements with IBM. SCO disavows claiming that they own the code IBM wrote, rather comparing their "control rights" to an easement, rights which allow them to prohibit IBM from publicizing the code they wrote, even though IBM owns the copyrights. They base this claim on language in the original license agreement that requires non-disclosure of the code and claim that all code developed by UNIX licensees that is used with the code under license be held in confidence. This claim is discussed above at Control of derivative works.
SCO and the GPL
Before changing their name to the SCO Group, the company was known as Caldera International.
Caldera was one of the major distributors of Linux between 1994 and 1998. In August 1998, the company split into Caldera Systems and Caldera Thin Clients, with Caldera Systems taking over the Linux systems business and Caldera Thin Clients concentrating on the Thin Clients and embedded business. The parent and shell company Caldera, Inc. ceased to exist in 2000 after a settlement with Microsoft in the Caldera v. Microsoft lawsuit.
Caldera Systems was reorganized to become Caldera International in 2001; the company was renamed The SCO Group in 2002.
Some, like Eben Moglen, have suggested that because Caldera distributed the allegedly infringing code under the GNU General Public License (GPL), that this act would license any proprietary code in Linux.
SCO has stated that they did not know their own code was in Linux, so releasing it under the GPL does not count. However, as late as July and August 2006, long after that claim was made, they were still distributing ELF files (the subject of one of SCO's claims regarding SVRx) under the GPL.
SCO has also claimed, in early stages of the litigation, that the GPL is invalid and non-binding and legally unenforceable. In response, supporters of the GPL, such as Eben Moglen, claimed that SCO's right to distribute Linux relied upon the GPL being a valid copyright license. Later court filings by the SCO Group in SCO v. IBM use SCO's alleged compliance with the license as a defense to IBM's counterclaims.
The GPL has become an issue in SCO v. IBM. Under U.S. copyright law, distribution of creative works whose copyright is owned by another party is illegal without permission from the copyright owner, usually in the form of a license; the GPL is such a license, and thus allows distribution, but only under limited conditions. Since IBM released the relevant code under the terms of the GPL, it claims that the only permission that SCO has to copy and distribute IBM's code in Linux is under the terms and conditions of the GPL, one of which requires the distributor to "accept" the GPL. IBM says that SCO violated the GPL by denouncing the GPL's validity, and by claiming that the GPL violates the U.S. Constitution, together with copyright, antitrust and export control laws. IBM also claims that SCO's SCOsource program is incompatible with the requirement that redistributions of GPLed works must be free of copyright licensing fees (fees may be charged for the acts of duplication and support). IBM has brought counterclaims alleging that SCO has violated the GPL and breached IBM's copyrights by collecting licensing fees while distributing IBM's copyrighted material.
Status of current lawsuits
SCO v. IBM
On March 7, 2003, SCO filed suit against IBM. Initially this lawsuit was about breach of contract and trade secrets. Later, SCO dropped the trade secrets claim, so the claim is breach of contract. SCO also added a copyright claim related to IBM's continued use of AIX, but not related to Linux. The judge subsequently stated that the SCO Group had indeed made a claim of copyright infringement against IBM regarding Linux. IBM filed multiple counter claims, including charges of both patent violations, which were later dropped, and violation of copyright law.
On February 8, 2005, Judge Kimball ruled that IBM's motions for summary judgment were premature but added:
Viewed against the backdrop of SCO's plethora of public statements concerning IBM's and others' infringement of SCO's purported copyrights to the UNIX software, it is astonishing that SCO has not offered any competent evidence to create a disputed fact regarding whether IBM has infringed SCO's alleged copyrights through IBM's Linux activities.
On June 28, 2006, Judge Brooke Wells granted, in part, IBM's motion to limit SCO's claims and excluded 186 of SCO's 294 items of allegedly misused intellectual property (IBM had challenged 201 of them for various reasons). Wells cited a number of factors including SCO's inability to provide sufficient specificity in these claims:
In December 2003, near the beginning of this case, the court ordered SCO to, "identify and state with specificity the source code(s) that SCO is claiming forms the basis of their action against IBM." Even if SCO lacked the code behind methods and concepts at this early stage, SCO could have and should have, at least articulated which methods and concepts formed "the basis of their action against IBM." At a minimum, SCO should have identified the code behind their method and concepts in the final submission pursuant to this original order entered in December 2003 and Judge Kimball’s order entered in July 2005.
This left about 100 of SCO's items of allegedly misused intellectual property (the merits of which have not yet been judged), out of 294 items originally disclosed by SCO.
Following the partial summary judgment rulings in the SCO v. Novell Slander of Title case, Judge Kimball asked the parties in SCO v IBM to prepare by August 31, 2007, a statement of the status of this case.
Red Hat v. SCO
Red Hat filed suit against SCO on August 4, 2003. Red Hat sued SCO for false advertising, deceptive trade practices and asked for a declaratory judgment of noninfringement of any of SCO's copyrights. This case has been stayed pending resolution of the IBM case.
SCO v. Novell
After SCO initiated their Linux campaign, they said that they were the owners of UNIX. Novell claimed these statements were false, and that they still owned the rights in question. After Novell registered the copyrights to some key UNIX products, SCO filed suit against Novell on January 20, 2004. Novell removed the suit to federal court on February 6, 2004.
On July 29, 2005, Novell filed its answer with the court, denying SCO's claims. Novell also filed counterclaims asking the court to force SCO to turn over the revenues it had received from UNIX licenses, less a 5% administrative fee. Additionally, Novell asked the court to place the funds in a "constructive trust" in order to ensure that SCO could pay Novell since the company's assets were depleting rapidly.
On August 10, 2007, Judge Dale Kimball, hearing the SCO v. Novell case, ruled that "the court concludes that Novell is the owner of the UNIX and UnixWare Copyrights". Novell was awarded summary judgments on a number of claims, and a number of SCO claims were denied. SCO was instructed to account for and pass to Novell an appropriate portion of income relating to SCOSource licences to Sun Microsystems and Microsoft. A number of matters are not disposed of by Judge Kimball's ruling, and the outcome of these are still pending.
On July 16, 2008, the trial court issued an order awarding Novell $2,547,817 and ruled that SCO was not authorized to enter into the 2003 agreement with Sun. On November 20, 2008, final judgment in the case affirmed the August 10 ruling, and added interest of $918,122 plus $489 per diem after August 29, 2008, along with a constructive trust of $625,486.90.
On August 24, 2009, the U.S. Court of Appeals for the Tenth Circuit partially reversed the August 10, 2007 district court summary judgment ruling. The appeals court remanded back to trial on the issues of copyright ownership and Novell's contractual waiver rights. The court upheld the $2,547,817 award granted to Novell for the 2003 Sun agreement. On March 30, 2010, after a three-week trial before Judge Ted Stewart, a jury returned a verdict "confirming Novell's ownership of the Unix copyrights."
On June 10, 2010, Judge Ted Stewart denied SCO's motion for another trial and ruled for Novell on all remaining issues.
On July 7, 2010, SCO appealed the new judgments to the United States Court of Appeals for the Tenth Circuit.
On August 30, 2011, the Tenth Circuit Court of Appeals affirmed the District Court ruling in its entirety, rejecting SCO's attempt to re-argue the case before the Court of Appeals.
SCO v. AutoZone
AutoZone, a corporate user of Linux and former user of SCO OpenServer, was sued by SCO on March 3, 2004. SCO claimed that AutoZone violated SCO's copyrights by using Linux. The suit was stayed pending the resolution of the IBM, Red Hat and Novell cases.
On September 26, 2008, Judge Robert C. Jones lifted the stay, effective December 31, 2008. He initially scheduled discovery for April 9, 2010. SCO filed an amended complaint on August 14, 2009. On August 31, 2009, AutoZone replied, and filed a motion to dismiss in part.
On October 22, 2009, Edward Cahn, SCO's Chapter 11 trustee, sought bankruptcy court approval for an agreement he reached with AutoZone. According to the court filings, the confidential settlement resolves all claims between SCO and AutoZone.
SCO v. DaimlerChrysler
In December 2003, SCO demanded that some UNIX licensees certify certain issues regarding their use of Linux. DaimlerChrysler, a former UNIX user and current Linux user, did not respond to this demand. On March 3, 2004, SCO filed suit against DaimlerChrysler for violating their UNIX license agreement by failing to respond to the certification request. Almost every claim SCO made was ruled against on summary judgment. The last remaining issue, whether DaimlerChrysler had made a timely response, was dismissed by agreement of SCO and DaimlerChrysler in December 2004. SCO retains the right to continue the case at a future date, provided it pays DaimlerChrysler's legal fees.
Other issues and conflicts
SCO announces that it will not sue its own customers
On June 23, 2003, SCO sent out a letter announcing that it would not be suing its own Linux customers.
SCO and SGI
In August 2003, SCO presented two examples of what they claimed was illegal copying of copyrighted code from UNIX to Linux. One of the examples (Berkeley Packet Filter) was not related to original UNIX code at all. The other example did, however, seem to originate from the UNIX code and was apparently contributed by a UNIX vendor, Silicon Graphics. However, an analysis by the Linux community later revealed that:
The code originated from an even older version of UNIX which at some point was published by Caldera, thus making any claim of copyright infringement shaky.
The code did not do anything. It was in a part of the Linux kernel that was written in anticipation of a Silicon Graphics architecture that was never released.
It had already been removed from the kernel two months earlier.
The contested segment was small (80 lines) and trivial.
SCO and BayStar Capital
In October 2003, BayStar Capital and Royal Bank of Canada invested US$50 million in The SCO Group to support the legal cost of SCO's Linux campaign. It was later shown that BayStar had been referred to SCO by Microsoft, whose proprietary Windows operating system competes with Linux. According to Lawrence R. Goldfarb, managing partner of BayStar Capital: "It was evident that Microsoft had an agenda".
On April 22, 2004, The New York Times reported that BayStar Capital, a private hedge fund which had arranged for $50M in funding for SCO in October 2003, was asking for its $20M back. The remainder of the $50M was from Royal Bank of Canada. SCO stated in their press release that they believed that BayStar did not have grounds for making this demand.
On August 27, 2004, SCO and BayStar resolved their dispute.
SCO and Canopy Group
The Canopy Group is an investment group with shares in a trust of different companies. It is a group owned by the Noorda family, also founders of Novell.
Until February 2005, Canopy held SCO shares, and the management of SCO held shares of Canopy. The two parties became embroiled in a bitter dispute when the Noorda family sought to oust board member Ralph Yarro III on claims of misappropriation. Amid internal problems that were not made public (which included the suicides of Canopy's director of information systems, Robert Penrose, and of Val Kriedel, the daughter of Ray Noorda), the Canopy Group agreed to buy back all the shares that SCO held in Canopy in exchange for Canopy's SCO shares and cash.
SCO and Canopy Group are now mostly independent, though SCO continues to rent their Utah office space from Canopy.
Microsoft funding of SCO controversy
On March 4, 2004, a leaked SCO internal e-mail detailed how Microsoft had raised up to $106 million via the BayStar referral and other means. Blake Stowell of SCO confirmed the memo was real, but claimed it to be "a misunderstanding". BayStar claimed the deal was suggested by Microsoft, but that no money for it came directly from them.
In addition to the BayStar involvement, Microsoft paid SCO $6M (USD) in May 2003 for a license to "Unix and Unix-related patents", despite the lack of Unix-related patents owned by SCO. License deals between the two companies may have reached at least $16M (USD) according to U.S. Securities and Exchange Commission (SEC) filings. The deal was widely seen in the press as a boost to SCO's finances that would help SCO with its lawsuit against IBM.
SCOsource
After its initial claim of copyright infringement in the Linux kernel, The SCO Group started its SCOsource initiative, which sold licenses to SCO's claimed copyrighted software, other than OpenServer and UnixWare licenses. After a small number of high-profile sales (including one that was denied by the claimed purchaser), SCO claimed to offer corporate users of Linux a license at US$699 per processor running Linux. However, many individuals found it impossible to buy such a license from SCO. SCO said that participants of the SCOsource initiative would not be liable for any claims that SCO made against Linux users.
The Michael Davidson E-Mail
On July 14, 2005, an email was unsealed that had been sent from Michael Davidson to Reg Broughton (both Caldera International employees) in 2002, before many of the lawsuits. In it, Davidson reported that the company had hired an outside consultant to review the Linux code and compare it to the Unix source code, in order to find possible copyright infringement.
Davidson himself said that he had not expected to find anything significant based on his own knowledge of the code, and had voiced his opinion that the exercise was "a waste of time". After 4 to 6 months of the consultant's work, Davidson reported that the search had found no evidence of copyright infringement.
See also
Timeline of SCO-Linux controversies
Copyfraud
Association of Licensed Automobile Manufacturers (ALAM) - similar attempt to sue buyers of automobiles
References
External links
The SCO Group - Official website
Free Software Foundation position regarding SCO's attacks - 6 essays by Eben Moglen, Richard Stallman, and Bradley Kuhn
Groklaw - An online community dedicated to following the progress of the various lawsuits and investigating the claims SCO makes
Tuxrocks - An archive of court documents related to the various lawsuits
SCO Controversy Timeline
SCO:Without Fear and Without Research
Linux's lucky lawsuit - Why the SCO lawsuit is a good thing in the long run
The Michael Davidson Email
Novell hits back at SCO in Unix dispute
Fact and fiction in the Microsoft-SCO relationship
Linus Torvalds explains to Groklaw that he was the original author of code that SCO claims to have authored
Computing-related controversies and disputes
Intellectual property law |
10067215 | https://en.wikipedia.org/wiki/Quality%20engineering | Quality engineering | Quality engineering is the discipline of engineering concerned with the principles and practice of product and service quality assurance and control. In software development, it is the management, development, operation and maintenance of IT systems and enterprise architectures with a high quality standard.
Description
Quality engineering is the discipline of engineering that creates and implements strategies for quality assurance in product development and production as well as software development.
Quality engineers focus on optimizing product quality in the sense defined by W. Edwards Deming.
The quality engineering body of knowledge includes:
Management and leadership
The quality system
Elements of a quality system
Product and process design
Classification of quality characteristics
Design inputs and review
Design verification
Reliability and maintainability
Product and process control
Continuous improvement
Quality control tools
Quality management and planning tools
Continuous improvement techniques
Corrective action
Preventive action
Statistical process control (SPC)
Risk management
Roles
Auditor: Quality engineers may be responsible for auditing their own companies or their suppliers for compliance to international quality standards such as ISO 9000 and AS9100. They may also be independent auditors under an auditing body.
Process quality: Quality engineers may be tasked with value stream mapping and statistical process control to determine if a process is likely to produce defective product. They may create inspection plans and criteria to ensure defective parts are detected prior to completion.
Supplier quality: Quality engineers may be responsible for auditing suppliers or performing root cause and corrective action at their facility or overseeing such activity to prevent the delivery of defective product.
Software
IT services are increasingly interlinked in workflows across platform, device and organisational boundaries, for example in cyber-physical systems, business-to-business workflows, or when using cloud services. In such contexts, quality engineering facilitates the necessary all-embracing consideration of quality attributes.
In such contexts an "end-to-end" view of quality, from management to operation, is vital. Quality engineering integrates methods and tools from enterprise architecture management, software product management, IT service management, software engineering and systems engineering, as well as from software quality management and information security management. Quality engineering therefore goes beyond the classic disciplines of software engineering, information security management and software product management, since it integrates management issues (such as business and IT strategy, risk management, business process views, knowledge and information management, and operative performance management), design considerations (including the software development process, requirements analysis and software testing) and operative considerations (such as configuration, monitoring and IT service management). In many of the fields where it is used, quality engineering is closely linked to compliance with legal and business requirements, contractual obligations and standards. As far as quality attributes are concerned, the reliability, security and safety of IT services play a predominant role.
In quality engineering, quality objectives are implemented in a collaborative process. This process requires the interaction of largely independent actors whose knowledge is based on different sources of information.
Quality objectives
Quality objectives describe basic requirements for software quality. In quality engineering they often address the quality attributes of availability, security, safety, reliability and performance. With the help of quality models like ISO/IEC 25000 and methods like the Goal Question Metric approach it is possible to attribute metrics to quality objectives. This allows measuring the degree of attainment of quality objectives. This is a key component of the quality engineering process and, at the same time, a prerequisite for its continuous monitoring and control. To measure quality objectives effectively and efficiently, it is favourable to integrate manually determined figures (e.g. from expert estimates or reviews) with automatically determined metrics (e.g. from statistical analysis of source code or automated regression tests) as a basis for decision-making.
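As an illustration of attributing a metric to a quality objective, the following minimal sketch attaches a measurement function and a threshold to an availability objective so that attainment can be checked automatically. All names and the 99.9% threshold are assumptions made for the example; they are not prescribed by ISO/IEC 25000 or the Goal Question Metric approach.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class QualityObjective:
    name: str
    metric: Callable[[List[int]], float]  # maps raw measurements to one value
    threshold: float                      # minimum acceptable value

def attainment(objective: QualityObjective, measurements: List[int]):
    """Return the measured value and whether the objective is attained."""
    value = objective.metric(measurements)
    return value, value >= objective.threshold

# Availability measured as the fraction of successful health checks (1 = ok).
availability = QualityObjective(
    name="availability",
    metric=lambda checks: sum(checks) / len(checks),
    threshold=0.999,
)

value, met = attainment(availability, [1] * 9990 + [0] * 10)
print(f"availability = {value:.4f}, objective attained: {met}")
```

In the same spirit, a manually determined figure (say, a review score) can be fed through the same check alongside automatically collected metrics, which is the integration the paragraph above favours.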
Actors
The end-to-end quality management approach to quality engineering requires numerous actors with different responsibilities and tasks, different expertise and involvement in the organisation.
Different roles involved in quality engineering:
Business architect,
IT architect,
Security officer,
Requirements engineer,
Software quality manager,
Test manager,
Project manager,
Product manager and
Security architect.
Typically, these roles are distributed over geographic and organisational boundaries. Therefore, appropriate measures need to be taken to coordinate the heterogeneous tasks of the various roles in quality engineering and to consolidate and synchronize the data and information necessary in fulfilling the tasks, and to make them available to each actor in an appropriate form.
Knowledge management
Knowledge management plays an important part in quality engineering. The quality engineering knowledge base comprises a wide variety of structured and unstructured data, ranging from code repositories via requirements specifications, standards, test reports and enterprise architecture models to system configurations and runtime logs. Software and system models play an important role in mapping this knowledge. The data of the quality engineering knowledge base are generated, processed and made available both manually and tool-based in a geographically, organisationally and technically distributed context. Of prime importance is the focus on quality assurance tasks, early recognition of risks, and appropriate support for the collaboration of actors.
This results in the following requirements for a quality engineering knowledge base:
Knowledge is available in the quality required. Important quality criteria are that knowledge is consistent and up-to-date as well as complete and adequate in terms of granularity in relation to the tasks of the appropriate actors.
Knowledge is interconnected and traceable in order to support interaction between the actors and to facilitate analysis of data. Such traceability relates not only to interconnectedness of data across different levels of abstraction (e.g. connection of requirements with the services realizing them) but also to their traceability over time periods, which is only possible if appropriate versioning concepts exist. Data can be interconnected both manually as well as (semi-)automatically; a minimal sketch of such trace links follows this list.
Information has to be available in a form that is consistent with the domain knowledge of the appropriate actors. Therefore, the knowledge base has to provide adequate mechanisms for information transformation (e.g. aggregation) and visualization. The RACI concept is an example of an appropriate model for assigning actors to information in a quality engineering knowledge base.
In contexts where actors from different organisations or levels interact with each other, the quality engineering knowledge base has to provide mechanisms for ensuring confidentiality and integrity.
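The traceability requirement above can be pictured as a graph of versioned artifacts connected by trace links. The sketch below is purely illustrative; Artifact, trace_from and the example keys are assumptions for the example, not part of any standard or tool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Artifact:
    kind: str      # "requirement", "service", "test report", ...
    key: str
    version: int   # versioning keeps links traceable over time

# Trace links connect artifacts across levels of abstraction.
links = {
    (Artifact("requirement", "REQ-12", 3), Artifact("service", "checkout", 7)),
    (Artifact("service", "checkout", 7), Artifact("test report", "TR-0424", 1)),
}

def trace_from(artifact, links):
    """Follow trace links one step downstream from a given artifact."""
    return [dst for src, dst in links if src == artifact]

print(trace_from(Artifact("requirement", "REQ-12", 3), links))
# -> [Artifact(kind='service', key='checkout', version=7)]
```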
Quality engineering knowledge bases offer a whole range of possibilities for analysis and finding information in order to support quality control tasks of actors.
Collaborative processes
The quality engineering process comprises all tasks carried out manually or in a (semi-)automated way to identify, fulfil and measure quality features in a chosen context. The process is highly collaborative in the sense that it requires the interaction of actors who otherwise act largely independently of each other.
The quality engineering process has to integrate any existing sub-processes, which may range from highly structured processes such as IT service management to processes with limited structure such as agile software development. Another important aspect is a change-driven procedure, where change events, such as changed requirements, are dealt with in the local context of the information and actors affected by such change. A prerequisite for this is methods and tools that support change propagation and change handling.
The objective of an efficient quality engineering process is the coordination of automated and manual quality assurance tasks. Code review and elicitation of quality objectives are examples of manual tasks, while regression tests and the collection of code metrics are examples of automated tasks. The quality engineering process (or its sub-processes) can be supported by tools such as ticketing systems or security management tools.
See also
Seven Basic Tools of Quality
Engineering management
Manufacturing engineering
Mission assurance
Systems engineering
W. Edwards Deming
Associations
American Society for Quality
INFORMS
Institute of Industrial Engineers
External links
Txture is a tool for textual IT-Architecture documentation and analysis.
mbeddr is a set of integrated and extensible languages for embedded software engineering, plus an integrated development environment (IDE).
References
Information technology management
Enterprise architecture
Engineering disciplines
Systems engineering
Software quality
Knowledge management |
18586421 | https://en.wikipedia.org/wiki/Cybercrime%20in%20Canada | Cybercrime in Canada | Computer crime, or cybercrime in Canada, is an evolving international phenomenon. People and businesses in Canada and other countries may be affected by computer crimes that may or may not originate within the borders of their country. From a Canadian perspective, 'computer crime' may be considered to be defined by the Council of Europe Convention on Cybercrime (November 23, 2001). Canada contributed, and is a signatory, to this international convention, which covers the following categories of criminal offences involving the use of computers:
Offences against the confidentiality, integrity and availability of computer data and systems;
Computer-related offences;
Content-related offences;
Offences related to infringements of copyright and related rights; and
Ancillary liability.
Canada is also a signatory to the Additional Protocol to the Convention on Cybercrime, concerning the criminalization of acts of a racist and xenophobic nature committed through computer systems (January 28, 2003). As of July 25, 2008, Canada had not yet ratified the Convention on Cybercrime or this Additional Protocol.
Canadian computer crime laws
The Criminal Code contains a set of laws dealing with computer crime issues.
Criminal Offences Contained in the Convention on Cybercrime (November 23, 2001)
As Canada has not yet ratified the Convention on Cybercrime, its Criminal Code may not fully address the areas of criminal law set out in the Convention.
Computer-related offences
Computer-related forgery
Computer-related fraud
Content-related offences
Offences related to child pornography
Offences related to infringements of copyright and related rights
Ancillary liability
Attempt and aiding or abetting
Corporate liability
Criminal offences in the Additional Protocol to the Convention on Cybercrime
As Canada has not yet ratified this Additional Protocol to the Convention on Cybercrime, its Criminal Code may not fully address the following criminal offences:
Dissemination of racist and xenophobic material through computer systems
Racist and xenophobic motivated threat
Racist and xenophobic motivated insult
Denial, gross minimization, approval or justification of genocide or crimes against humanity
Aiding and abetting
Laws
Criminal Code
Section 342 of the Criminal Code deals with theft, forgery of credit cards and unauthorized use of computer
Section 184 of the Criminal Code deals with privacy
Section 402 of the Criminal Code deals with Identity theft
Section 403 of the Criminal Code deals with Identity fraud
Canadian computer criminals
The Canadian hacker group 'The Brotherhood of Warez' hacked the Canadian Broadcasting Corporation's website on April 20, 1997, replacing the homepage with the message "The Media Are Liars".
References
Canada |
1751434 | https://en.wikipedia.org/wiki/Jargon%20Software | Jargon Software | Jargon Software Inc. is a computer software development company that specializes in development and deployment tools and business applications for mobile handheld devices such as Pocket PC and Symbol PDA devices.
The company is based in Minneapolis, Minnesota, United States, and is a privately held Minnesota corporation. It markets its products both directly and through selected resellers to corporate, governmental and other organizations around the world.
History
Jargon Software was formed in 1997 to create an Internet application architecture that would overcome the inherent limitations of traditional web-enabling technologies, avoid long and complicated download and installation procedures, insulate applications from future technology changes, and offer an upgrade path to emerging technologies. The company was originally named Viking Software Corp. The name was changed to Jargon Software in late 1998 in light-hearted recognition of the many buzzwords pertaining to the technology that underlies the company's products. The founders and principals of Jargon Software are Richard D. Rubenstein, Timothy J. Bloch, and Thomas L. Dietsche.
Products
Jargon Software products are used by developers to develop and deploy smart-client mobile software applications that can run both online and offline. Jargon Software designs mobile products that integrate sales-order processing, inventory, field service, and inspections for small and large companies in any industry. The open Jargon design is intended to make business applications usable on a wide range of PDAs.
Industries where Jargon mobile software works especially well include motor vehicles & parts, furniture & home furnishings, building & construction materials, electronic equipment, appliances, garden equipment & supplies, foods & beverages, health & personal care, clothing & clothing accessories, sporting goods, real estate management, florists, office supplies, hardware, farm implements, jewelry, petroleum products, pets & pet supplies.
ForceField Mobile SFA
ForceField Mobile SFA (sales force automation) is an extensible application that works like a remote control for office systems, letting salespeople work in the field with their customers rather than in the office.
All data is stored on the mobile PDA, while the application can also operate in real time when a connection is present.
The development tool creates XML files that define the client-side user interface (UI), with embedded JavaScript for client-side logic. Developers create applications that directly manipulate individual client components via a server's responses to HTTP requests or via embedded JavaScript functions that are linked to UI events.
The deployment engine runs on various mobile devices, including handhelds, tablets, and laptop PCs. It interprets and executes XML applications that are downloaded from a host server (similar to reading web pages).
Since these XML pages are hosted, deployment overhead is essentially eliminated when installing new versions of the applications. The user can get the latest version by simply clicking a button while online; the devices do not need to be brought back to the home office to be restaged.
Successful mobile applications must be able to run when disconnected from the network, due to coverage dead spots or restrictions on use of wireless devices in certain areas (e.g. medical facilities). This results in the need to store data locally on the mobile device. Jargon Reader can store low-volume data automatically in text and table components. Embedded SQL databases (such as the Oracle Lite Database from Oracle Corporation) are used for higher volume storage requirements.
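The hosted-deployment and offline behaviour described in the last two paragraphs can be sketched roughly as follows. This is illustrative Python under assumed names (the URL and cache path are hypothetical), not Jargon's actual implementation:

```python
import urllib.request
from pathlib import Path

APP_URL = "http://host.example.com/apps/orders.xml"  # hypothetical host server
CACHE = Path("orders.xml")                           # local copy on the device

def load_app_definition() -> str:
    """Fetch the latest hosted XML application if online, else use the cache."""
    try:
        with urllib.request.urlopen(APP_URL, timeout=5) as resp:
            xml = resp.read().decode("utf-8")
        CACHE.write_text(xml, encoding="utf-8")      # refresh the local copy
        return xml
    except OSError:                                  # offline or unreachable host
        return CACHE.read_text(encoding="utf-8")
```

On this model, the "get the latest version" button simply re-runs the online branch, which is why devices never need to be restaged at the home office.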
Client-host synchronization is achieved at the application level, resulting in more efficiency and eliminating the need for additional mobile middleware.
Jargon Writer
Jargon Writer is a platform-neutral and language-neutral development system that uses XML to construct a mobile application's graphical user interfaces with a robust yet lightweight set of components, an event handling framework, and locally executed procedures. A generic HTTP interface is used to run procedures on remote hosts via middleware products or built-in web server features such as Active Server Pages (ASP) or PHP scripts. The HTTP interface can also request text documents, images and other files from any server on the network to which the client is connected.
FTP uploads and downloads are also supported.
Jargon Writer uses a point-and-click, drag-and-drop WYSIWYG layout editor with fill-in-the blanks attribute tables, and a text editor for writing JavaScript functions. It also includes PDA Emulator and Windows versions of the Jargon Reader deployment products so that developers can run applications as they are being developed to see how they look and behave.
Jargon Reader
Jargon Reader deployment products are a family of high-performance client engines that use a patented methodology to automatically download and execute the XML-based applications developed with Jargon Writer.
Using the information in these XML client application files, Jargon Reader renders (draws) the graphical user interface and executes logic functions in response to user interface events such as selecting a button. Various peripherals such as barcode readers, mag-card readers, RFID readers and mobile printers are also supported.
Versions of Jargon Reader have been written for the following client platforms:
Jargon Reader for Pocket PC runs on WinCE and PocketPC-based handheld devices including HP, Dell and Symbol
Jargon Reader for Windows runs on all 32-bit Windows PC platforms (Windows 95/98/ME/NT/2000/XP)
Patents
, "System and method for deploying and implementing software applications over a distributed network", - Timothy J. Bloch, Thomas L. Dietsche, Richard D. Rubenstein - 2007
External links
Company homepage
Development software companies
Software companies established in 1997
Integrated development environments
Personal digital assistant software
Pocket PC software |
45079240 | https://en.wikipedia.org/wiki/Encryption%20ban%20proposal%20in%20the%20United%20Kingdom | Encryption ban proposal in the United Kingdom | The UK encryption ban is a pledge by former British prime minister David Cameron to ban online messaging applications that offer end-to-end encryption, such as WhatsApp, iMessage, and Snapchat, under a nationwide surveillance plan. Cameron's proposal came in response to services that allow users to communicate without providing the UK security services access to their messages, which could allegedly give suspected terrorists a safe means of communication.
Proposal
On 15 January 2015, David Cameron asked American president Barack Obama to increase pressure on American Internet companies to work more closely with British intelligence agencies, in order to deny potential terrorists a "safe space" to communicate, as well as seeking co-operation to implement tighter surveillance controls. Under the proposals, messaging apps would have to either add a backdoor to their programs or risk a potential ban within the UK. To justify the proposal, Cameron asked: "In our country, do we want to allow a means of communication between people, which even in extremis, with a signed warrant from the home secretary personally, that we cannot read?" In defending surveillance of Internet messaging, Cameron pointed out that the British state already possessed the legal ability to read people's private letters and to surveil their private phone calls.
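For background, the "safe space" at issue is a property of end-to-end encryption itself: keys exist only at the communicating endpoints, so the provider relays ciphertext it cannot decrypt. The sketch below, using the PyNaCl library, illustrates that model; it is a simplified example, not the actual protocol of WhatsApp, iMessage, or Snapchat.

```python
from nacl.public import PrivateKey, Box

alice, bob = PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts with her own private key and Bob's public key.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

# The provider's server relays and stores only `ciphertext`; without one
# of the private keys, which never leave the devices, it cannot read it.
plaintext = Box(bob, alice.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```

A backdoor of the kind the proposal implies would require the provider (or the state) to hold an additional key or intercept keys at the endpoints, which is the weakening that critics pointed to.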
In July 2016, newly appointed home secretary Amber Rudd confirmed the proposed Investigatory Powers Bill would grant any Secretary of State the powers to force communication service providers to remove or disable end-to-end encryption.
Criticism
The UK's Information Commissioner Christopher Graham criticized the plans, saying "We must avoid knee-jerk reactions. In particular, I am concerned about any compromising of effective encryption for consumers of online services." The ISPA claimed that the proposal risks "undermining the UK's status as a good and safe place to do business". While David Cameron claimed that app providers have "a social responsibility to fight the battle against terrorism", the founder of Lavabit criticized the proposals, saying the introduction of backdoors would leave systems more vulnerable.
Resultant legislation
The resulting legislation was the Investigatory Powers Act 2016 (nicknamed the Snoopers' Charter), which comprehensively sets out, and in limited respects expands, the electronic surveillance powers of the UK Intelligence Community and police. It also aims to improve the safeguards on the exercise of those powers.
See also
Mass surveillance in the United Kingdom
Internet censorship in the United Kingdom
Web blocking in the United Kingdom
References
External links
U.K. PM Backpedals On ‘Encryption Ban’, Sort Of
Cameron wants to ban encryption – he can say goodbye to digital Britain
Here's Why Britain's Proposed Encryption Ban Is Totally Unworkable
David Cameron in 'cloud cuckoo land' over encrypted messaging apps ban
Banning all encryption won't make us safer, no matter what David Cameron says
David Cameron Wants To Ban Encryption
Internet censorship in the United Kingdom |
57624225 | https://en.wikipedia.org/wiki/Structure%20of%20the%20Italian%20Air%20Force | Structure of the Italian Air Force | This article provides an overview of the entire chain of command and organization of the Italian Air Force as of 1 January 2018 and includes all currently active units. The Armed Forces of Italy are under the command of the Italian Supreme Defense Council, presided over by the President of the Italian Republic. The Italian Air Force is commanded by the Chief of the Air Force General Staff, or "Capo di Stato Maggiore dell’Aeronautica Militare", in Rome.
The source for this article is the booklet "L’ORDINAMENTO IN AERONAUTICA MILITARE", which is published every year by the Air Force General Staff for the students of the Air Force Academy. The booklet for 2017–2018 can be found in PDF format on the website of the Italian Air Force.
Chief of the Air Force General Staff
The Chief of the Air Force General Staff heads the Air Force General Staff in Rome, manages the operational aspects of the air force, and supervises four major commands.
Capo di Stato Maggiore Aeronautica Militare (Chief of the Air Force General Staff - CaSMA), in Rome
Air Force General Staff, in Rome
Comando della Squadra Aerea (Air Fleet Command - CSA), in Rome
Comando Logistico dell'AM (Air Force Logistic Command - CLOG AM), in Rome
Comando 1ª Regione Aerea (1st (North Italy) Air Region Command - 1ªRA), in Milan
Comando delle Scuole dell’AM/3ª Regione Aerea (Air Force Schools Command - 3rd (South Italy) Air Region - CSAM/3ªRA), in Bari
Air Force General Staff
The following offices report directly to the Chief of the Air Force General Staff.
Chief of the Air Force General Staff
Ufficio Generale del Capo di SMA (Main Office of the Chief of the Air Force General Staff)
Segreteria Particolare del Capo di SMA (Special Secretariat of the Chief of the Air Force General Staff)
Direzione per l’Impiego del Personale Militare dell’Aeronautica (Air Force Military Personnel Employment Directorate - DIPMA)
Ufficio Generale di Coordinamento della Prevenzione Antinfortunistica e della Tutela Ambientale (Accident Prevention and Environmental Protection Main Coordination Office - UCOPRATA)
Ufficio Generale di Coordinamento della Vigilanza Antinfortunistica (Accident Vigilance Main Coordination Office - UCOVA)
Ufficio Generale Centro di Responsabilità Amministrativa – Aeronautica Militare (Administrative Responsibility Center Main Office - Air Force - UCRA-AM)
Ispettorato per la Sicurezza del Volo (Flight Safety Inspectorate - ISV)
Istituto Superiore per la Sicurezza del Volo (Higher Flight Safety Institute - ISSV)
Ufficio del Generale del Ruolo delle Armi (General of the Arms Office)
Capo del Corpo del Genio Aeronautico (Air Force Engineer Corps Chief)
Capo del Corpo di Commissariato Aeronautico (Air Force Commissariat Corps Chief)
Capo del Corpo Sanitario Aeronautico (Air Force Medical Corps Chief)
Commissioni di Avanzamento e la Segreteria Permanente (Advancement Commissions and Permanent Secretariat)
Segreteria Permanente Commissione Superiore Avanzamento (Permanent Secretariat Higher Promotion Commission)
Commissione Ordinaria Avanzamento Ufficiali (Officers Promotion Commission)
Commissione Permanente Avanzamento Marescialli dell'A.M. (Non-commissioned Officer Promotion Commission)
Commissione Permanente Avanzamento Sergenti dell'A.M. (Sergeants Promotion Commission)
Commissione Permanente Avanzamento Volontari in S.P. dell'A.M. (Volunteers in Permanent Service Promotion Commission)
Aiutante di Volo del Capo di SMA (Chief of SMA Flight Adjutant)
Consulente Giuridico del Capo di SMA (Chief of SMA Legal Advisor)
Consulente Tecnico Militare Problematiche d’Avanzamento (Technical-Military Consultant for Promotion Issues)
Presidente Capo dei Sottufficiali, Graduati e militari di Truppa per l’AM (Air Force Non-commissioned Officers, Graduates and Soldiers Chief President)
Ufficio Vicario Episcopale per l’AM (Air Force Episcopal Vicar Office)
Comando Carabinieri per l’AM (Air Force Carabinieri Command)
Deputy Chief of the Air Force General Staff
The Deputy Chief of the Air Force General Staff manages the bureaucratic aspects of the Air Force.
Sottocapo di Stato Maggiore Aeronautica Militare (Deputy Chief of the Air Force General Staff - SCaSMA)
Ufficio del Sottocapo (Deputy Chief Office)
Segreteria Particolare del SCaSMA (SCaSMA Special Secretariat)
Ufficiale Addetto del SCaSMA (SCaSMA Adjutant)
1° Reparto "Ordinamento e Personale" (1st Department "Regulations and Personnel" - SMA-ORD)
3° Reparto "Pianificazione dello Strumento Aerospaziale" (3rd Department "Planning of the Aeroespacial Instrument" - SMA-PIANI)
Centro di Eccellenza per Aeromobili a Pilotaggio Remoto (CDE APR) (Center of Excellence for Remotely Piloted Aircraft), at Amendola Air Base
4° Reparto "Logistica" (4th Department "Logistics" - SMA-LOG)
5° Reparto "Comunicazione" (5th Department "Communications" - SMA-COM)
6° Reparto "Affari Economici e Finanziari" (6th Department "Economic and Financial Affairs" - SMA-FIN)
Reparto Generale Sicurezza (General Security Department - SMA-SEC)
Centro Coordinamento Sicurezza (Security Coordination Center), at Rome Ciampino Airport
33× Nuclei Sicurezza (33× Security Squads)
Ufficio Generale per lo Spazio (Space Main Office - SMA-SPAZIO)
Ufficio Generale per la Circolazione Aerea Militare (Military Air Traffic Main Office - SMAUCAM)
Ufficio Generale Consulenza e Affari Giuridici Aeronautica Militare (Air Force Counsel and Legal Affairs Main Office - SMAUCAG)
Ufficio Generale per l'Innovazione Manageriale (Managerial Innovation Main Office - SMA-UIM)
Air Force Command Rome
The Air Force Command Rome (COMAER), at Centocelle Airport, has territorial and liaison functions for the city of Rome and provides administrative support to the air force headquarters and to units based at Centocelle Airport and Vigna di Valle Airport.
Comando Aeronautica Militare Roma (Air Force Command Rome - COMAER), at Centocelle Airport
Comando Supporti Enti di Vertice (Higher Commands Support Command - COMSEV), at Centocelle Airport
Distaccamento Aeronautico Terminillo (Aeronautical Detachment Terminillo), on Monte Terminillo
Comando Aeroporto di Centocelle / Quartier Generale del COMAER (Centocelle Air Base Command / COMAER Headquarters), at Centocelle Airport
Comando Aeroporto Vigna di Valle / Centro Storiografico e Sportivo dell’AM (Vigna di Valle Air Base Command / Air Force History & Sport Center), at Vigna di Valle Airport
Museo Storico (History Museum)
Centro Sportivo (Sport Center)
Gruppo Servizi Generali (General Services Squadron)
Plotone Protezione delle Forze (Force Protection Platoon)
Aviation Inspector for the Navy
The Ispettore dell’Aviazione per la Marina (Aviation Inspector for the Navy - ISPAVIAMAR) reports to the Chief of the Air Force General Staff and the Chief of the Navy General Staff. ISPAVIAMAR oversees the technical and logistic aeronautical aspects, and the training, of the Italian military's airborne anti-submarine forces. The inspector is a brigadier general of the air force, whose office and staff reside in the navy's headquarters in Rome. The only unit assigned to ISPAVIAMAR is the 41° Stormo AntiSom Athos Ammannato, which is under the operational control of the Italian Navy.
Chief of the Air Force General Staff / Chief of the Navy General Staff
Aviation Inspector for the Navy - ISPAVIAMAR
41° Stormo AntiSom "Athos Ammannato" (41st (Anti-submarine) Wing), at Sigonella Air Base
86° Gruppo CAE (86th Crew Training Squadron)
88° Gruppo AntiSom (88th Anti-submarine Squadron) with 4× P-72A ASW
441° Gruppo Servizi Tecnico-Operativi (441st Technical Services Squadron)
541° Gruppo Servizi Logistico-Operativi (541st Logistic Services Squadron)
941° Gruppo Efficienza Aeromobili (941st Maintenance Squadron)
Gruppo Protezione delle Forze (Force Protection Squadron)
Air Fleet Command
The Air Fleet Command (Comando della Squadra Aerea or CSA) controls all operational units, the intelligence and electronic warfare capabilities, and the operational headquarters of the air force. The CSA ensures that each unit is equipped, trained and prepared for combat duty, and controls the units during combat operations.
Air Fleet Command, at Centocelle Airport, in Rome
Comando Operazioni Aeree (Air Operations Command), in Poggio Renatico
Comando delle Forze da Combattimento (Combat Forces Command), in Milan
Comando Forze per la Mobilità e il Supporto (Airlift and Support Forces Command), in Rome
9ª Brigata Aerea ISTAR-EW (9th Intelligence, Surveillance, Target Acquisition, and Reconnaissance - Electronic Warfare (ISTAR-EW) Air Brigade), at Pratica di Mare Air Base
1ª Brigata Aerea Operazioni Speciali (1st Special Operations Air Brigade), at Furbara Air Base
Italian Air Force Delegation, at NATO's Tactical Leadership Programme (TLP), at Albacete Air Base (Spain)
Air Operations Command
The Comando Operazioni Aeree (Air Operations Command - COA) conducts all operations of the Aeronautica Militare. COA controls all military radar installations in Italy, and its Air Operations Center commands and controls the defence of Italy's airspace.
Comando Operazioni Aeree (Air Operations Command - COA), in Poggio Renatico
Italian Air Operations Center (ITA-AOC), in Poggio Renatico, reports to NATO's Integrated Air Defense System CAOC Torrejón in Spain
Reparto Servizi Coordinamento e Controllo Aeronautica Militare (Air Force Coordination and Control Service Department - RSCCAM), at Ciampino Air Base (Air traffic management)
Servizo Coordinamento e Controllo Aeronautica Militare Abano Terme
Servizo Coordinamento e Controllo Aeronautica Militare Linate
Servizo Coordinamento e Controllo Aeronautica Militare Brindisi
Reparto Preparazione alle Operazioni (Operations Preparation Department - RPO), in Poggio Renatico
Reparto Mobile Comando e Controllo (Mobile Command and Control Regiment - RMCC), at Bari Air Base (Air-transportable command and control post)
Gruppo Sistemi TLC e Comando e Controllo (Command and Control and Telematic Systems Squadron)
Gruppo Servizi di Supporto (Support Services Squadron)
Reparto Supporto Servizi Generali (General Service Support Regiment - RSSG), in Poggio Renatico
Gruppo Servizi Tecnico-Operativi (Technical Services Squadron)
Gruppo Servizi Logistico-Operativi (Logistic Services Squadron)
Gruppo Protezione delle Forze (Force Protection Squadron)
Reparto Difesa Aerea Missilistica Integrata (Integrated Missile Air-defence Regiment - Rep. DAMI), in Poggio Renatico
11° Gruppo DAMI (11th Integrated Missile Air-defence Squadron - 11° GrDAMI), in Poggio Renatico
22° Gruppo Radar Aeronautica Militare (22nd Air Force Radar Squadron - 22° GrRAM), in Licola
Servizio Difesa Aerea (Air-defense Service)
Servizio Tecnico Operativo (Technical Service)
Servizio Logistico Operativo (Logistic Service)
Compagnia Protezione delle Forze (Force Protection Company)
Italian Air Force Delegation, at the French Air Force's Commandement de la défense aérienne et des opérations aériennes (CDAOA), in Paris (France)
Italian Air Force Delegation, at NATO's European Air Transport Command (EATC) at Eindhoven Air Base (Netherlands)
9th ISTAR-EW Air Brigade
9th Intelligence, Surveillance, Target Acquisition and Reconnaissance - Electronic Warfare (ISTAR-EW) Air Brigade, at Pratica di Mare Air Base
Comando Aeroporto di Pratica di Mare (Pratica di Mare Air Base Command)
Reparto Tecnico-Operativo (Technical Services Regiment)
Reparto Logistico-Operativo (Logistic Services Regiment)
Gruppo Protezione delle Forze (Force Protection Squadron)
Reparto Supporto Tecnico Operativo Guerra Elettronica (Electronic Warfare Technical-operational Support Regiment - ReSTOGE), at Pratica di Mare Air Base
Gruppo Supporto Operativo (Operational Support Squadron)
Gruppo Sistemi Difesa Aerospaziale (Air-space Defense Systems Squadron)
Gruppo Supporto Tecnico (Technical Support Squadron)
Reparto Addestramento Controllo Spazio Aereo (Air-space Control Training Regiment - RACSA), at Pratica di Mare Air Base
Gruppo Addestramento Operativo Traffico Aereo (Air-traffic Training Squadron)
Gruppo Addestramento Operativo Difesa Aerea (Air-defence Training Squadron)
Gruppo Supporto Tecnico (Technical Support Squadron)
Centro Informazioni Geotopografiche Aeronautiche (Air Force Geo-topographic Information Center - CIGA), at Pratica di Mare Air Base
Centro Nazionale Meteorologia e Climatologia Aeronautica (Air Force National Meteorological and Climatological Center - CNMCA), at Pratica di Mare Air Base
Centro Operativo per la Meteorologia (Meteorological Operational Center - COMET), at Pratica di Mare Air Base
Combat Forces Command
Comando delle Forze da Combattimento (Combat Forces Command), in Milan
Comando Aeroporto Aviano (Aviano Air Base Command)
Gruppo Servizi Tecnico-Operativi (Technical Services Squadron)
Gruppo Protezione delle Forze (Force Protection Squadron)
313° Gruppo Addestramento Acrobatico (313th Acrobatic Training Squadron – Frecce Tricolori), at Rivolto Air Base with MB-339PAN, planned to be replaced by T-345A Trainer
Squadriglia Collegamenti Linate (Communication Flight Linate), at Linate Air Base with NH-500E helicopters and S.208M planes
Distaccamento Aeroportuale Piacenza (Airport Detachment Piacenza)
Italian Air Force Delegation, at Holloman Air Force Base (USA)
Italian Air Force Delegation, at Moody Air Force Base (USA)
2° Stormo "Mario D'Agostini" (2nd Wing), at Rivolto Air Base
Gruppo Missili (Missile Squadron) with Spada Air-defence systems with Aspide 2000 missiles (to be replaced with CAMM-ER missiles)
80° Gruppo OCU (Missile Systems Operational Conversion Squadron)
402° Gruppo Servizi Tecnico-Operativi (402nd Technical Services Squadron)
502° Gruppo Servizi Logistico-Operativi (502nd Logistic Services Squadron)
Compagnia Protezione delle Forze (Force Protection Company)
4° Stormo "Amedeo d'Aosta" (4th Wing), at Grosseto Air Base
9° Gruppo Caccia (9th Fighter Squadron) with Eurofighter Typhoon
20° Gruppo OCU Caccia (20th Fighter Operational Conversion Squadron) with Eurofighter Typhoon (Twin-seat variant)
404° Gruppo Servizi Tecnico-Operativi (404th Technical Services Squadron)
504° Gruppo Servizi Logistico-Operativi (504th Logistic Services Squadron)
904° Gruppo Efficienza Aeromobili (904th Maintenance Squadron)
Gruppo Protezione delle Forze (Force Protection Squadron)
6° Stormo "Alfredo Fusco" (6th Wing), at Ghedi Air Base
102° Gruppo OCU/CBOC (102nd Operational Conversion/All-weather Fighter-Bomber Squadron) with Tornado IDS (the entire wing to be reequipped with F-35A Lightning II)
154° Gruppo CBOC/CRO (154th All-weather Fighter-Bomber/All-weather Reconnaissance Fighter Squadron) with Tornado IDS
155° Gruppo ETS (155th Electronic Warfare Tactical Suppression Squadron) with Tornado ECR
406° Gruppo Servizi Tecnico-Operativi (406th Technical Services Squadron)
506° Gruppo Servizi Logistico-Operativi (506th Logistic Services Squadron)
906° Gruppo Efficienza Aeromobili (906th Maintenance Squadron)
Gruppo Protezione delle Forze (Force Protection Squadron)
32° Stormo "Armando Boetto" (32nd Wing), at Amendola Air Base
13° Gruppo (13th Squadron) with F-35A Lightning II
28° Gruppo APR (28th Unmanned Aerial Vehicle Squadron) with MQ-1C Predator A+ and MQ-9A Predator B
61° Gruppo APR (61st Unmanned Aerial Vehicle Squadron) with MQ-1C Predator A+, forward based, at Sigonella Air Base (will receive P.1HH HammerHead UAVs)
432° Gruppo Servizi Tecnico-Operativi (432nd Technical Services Squadron)
532° Gruppo Servizi Logistico-Operativi (532nd Logistic Services Squadron)
932° Gruppo Efficienza Aeromobili (932nd Maintenance Squadron)
632^ Squadriglia Collegamenti (632nd Liaison Flight) with MB-339A and MB-339CDII for UAV-pilot training
Gruppo Protezione delle Forze (Force Protection Squadron)
Distaccamento Aeronautico Jacotenente (Aeronautical Detachment Jacotenente)
36° Stormo "Riccardo Hellmuth Seidl" (36th Wing), at Gioia del Colle Air Base
10° Gruppo Caccia (10th Fighter Squadron) with Eurofighter Typhoon
12° Gruppo Caccia (12th Fighter Squadron) with Eurofighter Typhoon
436° Gruppo Servizi Tecnico-Operativi (436th Technical Services Squadron)
536° Gruppo Servizi Logistico-Operativi (536th Logistic Services Squadron)
936° Gruppo Efficienza Aeromobili (936th Maintenance Squadron)
Gruppo Protezione delle Forze (Force Protection Squadron)
37° Stormo "Cesare Toschi" (37th Wing), at Trapani Air Base
18° Gruppo Caccia (18th Fighter Squadron) Eurofighter Typhoon
437° Gruppo Servizi Tecnico-Operativi (437th Technical Services Squadron)
537° Gruppo Servizi Logistico-Operativi (537th Logistic Services Squadron)
937° Gruppo Efficienza Aeromobili (937th Maintenance Squadron)
Gruppo Protezione delle Forze (Force Protection Squadron)
Distaccamento Aeroportuale Pantelleria (Airport Detachment Pantelleria)
Distaccamento Aeronautico Lampedusa (Aeronautical Detachment Lampedusa)
51° Stormo "Ferruccio Serafini" (51st Wing), at Istrana Air Base
132° Gruppo CIO/CBR (132nd mixed Squadron) with AMX, AMX-T and Eurofighter Typhoon
451° Gruppo Servizi Tecnico-Operativi (451st Technical Services Squadron)
551° Gruppo Servizi Logistico-Operativi (551st Logistic Services Squadron)
951° Gruppo Efficienza Aeromobili (951st Maintenance Squadron)
Gruppo Protezione delle Forze (Force Protection Squadron)
Distaccamento Aeroportuale San Nicolò (Airport Detachment San Nicolò)
Airlift and Support Forces Command
Comando delle Forze per la Mobilità e il Supporto (Airlift and Support Forces Command), at Centocelle Airport
Comando Aeroporto Capodichino (Capodichino Air Base Command), supporting Naval Support Activity Naples, home of the United States Sixth Fleet
Gruppo Servizi Tecnico-Operativi (Technical Services Squadron)
Gruppo Servizi Logistico-Operativi (Logistic Services Squadron)
Compagnia Protezione delle Forze (Force Protection Company)
Comando Aeroporto Sigonella (Sigonella Air Base Command), supporting Naval Air Station Sigonella and the 41st (Anti-submarine) Wing
Italian Air Force Delegation, at NATO Air Base Geilenkirchen (Germany), at NATO's E-3A Component
Italian Air Force Delegation, at Little Rock Air Force Base (USA), at the US Air Force's 19th Airlift Wing (C-130J Super Hercules training)
Italian Air Force Delegation, at RAF Brize Norton (UK), at the Royal Air Force's No. 2 Group RAF
Italian Air Force Delegation, at the European Tactical Airlift Centre (ETAC), at Zaragoza Air Base (Spain)
14° Stormo "Sergio Sartof" (14th Wing), at Pratica di Mare Air Base
8° Gruppo (8th Squadron) with 4× KC-767 tankers
71° Gruppo (71st Electronic Warfare Squadron) with G550CAEW, P.180 Avanti
914° Gruppo Efficienza Aeromobili (914th Maintenance Squadron)
Centro Addestramento Equipaggi (Crew Training Center)
31° Stormo "Carmelo Raiti" (31st Wing), at Rome Ciampino Airport
93° Gruppo (93rd Squadron) with 3× A319CJ, 2× Falcon 50
306° Gruppo (306th Squadron) with 1× Falcon 900EX, 2× Falcon 900EASy, 4× VH-139A
431° Gruppo Servizi Tecnico-Operativi (431st Technical Services Squadron)
531° Gruppo Servizi Logistico-Operativi (531st Logistic Services Squadron)
931° Gruppo Efficienza Aeromobili (931st Maintenance Squadron)
Centro Addestramento Equipaggi (Crew Training Center)
Gruppo Protezione delle Forze (Force Protection Squadron)
15° Stormo "Stefano Cagna" (15th Search and Rescue Wing), at Cervia Air Base
23° Gruppo Volo (23rd Squadron) with AW-101A helicopters
80° Centro CSAR (80th Combat Search and Rescue Center), at Decimomannu Air Base with AW-139A helicopters
81° Centro Addestramento Equipaggi (81st Crew Training Center), at Cervia Air Base with AW-139A and AW-101A helicopters
82° Centro CSAR (82nd Combat Search and Rescue Center), at Trapani Air Base with AW-139A helicopters
83° Gruppo CSAR (83rd Combat Search and Rescue Squadron), at Cervia Air Base with AW-139A helicopters
84° Centro CSAR (84th Combat Search and Rescue Center), at Gioia del Colle Air Base with AW-139A helicopters
85° Centro CSAR (85th Combat Search and Rescue Center), at Pratica di Mare Air Base with AW-139A helicopters
415° Gruppo Servizi Tecnico-Operativi (415th Technical Services Squadron)
515° Gruppo Servizi Logistico-Operativi (515th Logistic Services Squadron)
915° Gruppo Efficienza Aeromobili (915th Maintenance Squadron)
Gruppo Protezione delle Forze (Force Protection Squadron)
16° Stormo Protezione delle Forze (16th Force Protection Wing), at Martina Franca Air Base
Battaglione Fucilieri dell'Aria (Air-Fusiliers Battalion)
Gruppo Addestramento STO/FP (Survive to Operate / Force Protection Training Squadron)
Centro Cinofili dell’Aeronautica Militare (Air Force Canine Center), at Grosseto Air Base
416° Gruppo Servizi Tecnico-Operativi (416th Technical Services Squadron)
516° Gruppo Servizi Logistico-Operativi (516th Logistic Services Squadron)
46ª Brigata Aerea "Silvio Angelucci" (46th Air Brigade), at Pisa Air Base
2° Gruppo (2nd Squadron) with C-130J Super Hercules
50° Gruppo (50th Squadron) with C-130J-30 Super Hercules
98° Gruppo (98th Squadron) with C-27J Spartan, EC-27J Jedi, MC-27J Praetorian
446° Gruppo Servizi Tecnico-Operativi (446th Technical Services Squadron)
546° Gruppo Servizi Logistico-Operativi (546th Logistic Services Squadron)
946° Gruppo Efficienza Aeromobili (946th Maintenance Squadron)
Centro Addestramento Equipaggi (Crew Training Center)
Gruppo Protezione delle Forze (Force Protection Squadron)
Distaccamento Aeroportuale Sarzana Luni (Airport Detachment Sarzana Luni)
1st Special Operations Air Brigade
1ª Brigata Aerea Operazioni Speciali "Vezio Mezzetti" (1st Special Operations Air Brigade), at Furbara Air Base
9° Stormo "Francesco Baracca" (9th Combat Search and Rescue Wing), at Grazzanise Air Base
21° Gruppo Volo (21st Squadron) with AB-212 helicopters (to be replaced with AW-101A helicopters)
Gruppo Fucilieri dell'Aria (Air-Fusiliers Squadron)
909° Gruppo Efficienza Aeromobili (909th Maintenance Squadron)
17° Stormo Incursori (17th Raider Wing), at Furbara Air Base (Tier-1 Special Forces)
Gruppo Operativo (Operational Squadron)
Gruppo Addestramento (Training Squadron)
Gruppo Servizi di Supporto (Support Services Squadron)
Compagnia Protezione delle Forze (Force Protection Company)
Air Fleet Command Structure Graphic
Air Force Logistic Command
The Air Force Logistic Command provides operational units with all required logistics, combat support and service support functions.
Comando Logistico dell'Aeronautica Militare (Air Force Logistic Command), in Rome
2ª Divisione – Supporto Tecnico Operativo Aeromobili, Armamento e Avionica (2nd Division – Aircraft, Armaments and Avionics Support), in Rome
3ª Divisione – Supporto Tecnico Operativo Sistemi Comando e Controllo, Comunicazioni e Telematica (3rd Division – Command and Control, Communication and IT Support), in Rome
Servizio dei Supporti (Support Service), in Rome
Servizio di Commissariato e Amministrazione (Commissariat and Administration Service), in Rome
Servizio Infrastrutture (Infrastructure Service), in Rome
Servizio Sanitario Aeronautica Militare (Air Force Medical Service), in Rome
Centro Sperimentale di Volo (Flight Test Center), at Pratica di Mare Air Base
Poligono Sperimentale e di Addestramento Interforze di Salto di Quirra (Joint Test and Training Range Salto di Quirra), in Perdasdefogu
2nd Division – Aircraft, Armaments and Avionics Support
2ª Divisione – Supporto Tecnico Operativo Aeromobili Armamento ed Avionica (2nd Division – Aircraft, Armaments and Avionics Support), in Rome
1° Reparto - Supporto Aeromobili (1st Department - Aircraft Support), in Rome
1° Servizio Tecnico Distaccato (1st Technical Service Detachment), in Caselle to liaise with Finmeccanica
2° Servizio Tecnico Distaccato (2nd Technical Service Detachment), in Turin to liaise with Leonardo S.p.A.
3° Servizio Tecnico Distaccato (3rd Technical Service Detachment), in Villanova d'Albenga to liaise with Piaggio Aerospace
4° Servizio Tecnico Distaccato (4th Technical Service Detachment), in Samarate to liaise with AgustaWestland
5° Servizio Tecnico Distaccato (5th Technical Service Detachment), in Venezia Tessera
6° Servizio Tecnico Distaccato (6th Technical Service Detachment), in Venegono Superiore to liaise with Alenia Aermacchi
7° Servizio Tecnico Distaccato (7th Technical Service Detachment), in Barlassina
8° Servizio Tecnico Distaccato (8th Technical Service Detachment), in Campi Bisenzio to liaise with SELEX Galileo
9° Servizio Tecnico Distaccato (9th Technical Service Detachment), in Foligno to liaise with Officine Meccaniche Aeronautiche
10° Servizio Tecnico Distaccato (10th Technical Service Detachment), in Pomezia to liaise with Leonardo and Northrop Grumman Italia
11° Servizio Tecnico Distaccato (11th Technical Service Detachment), in Frosinone to liaise with AgustaWestland
12° Servizio Tecnico Distaccato (12th Technical Service Detachment), in Capodichino to liaise with Tecnam
13° Servizio Tecnico Distaccato (13th Technical Service Detachment), in Brindisi to liaise with Alenia Aeronautica, AgustaWestland and Avio
2° Reparto - Supporto Sistemi Avionici e Armamento (2nd Department - Avionic and Armaments Support), in Rome
Centro Polifunzionale Velivoli Aerotattici (Multifunctional Tactical Aircraft Center - CEPVA), at Cameri Air Base
Comando Aeroporto di Cameri (Cameri Air Base Command)
Gruppo Servizi Tecnico-Operativi (Technical Services Squadron)
Gruppo Servizi Logistico-Operativi (Logistic Services Squadron)
Plotone Protezione delle Forze (Force Protection Platoon)
Nucleo Iniziale di Formazione (NIF) JSF (Initial JSF/F-35 Formation Unit)
1° Reparto Manutenzione Velivoli (1st Aircraft Maintenance Regiment) responsible for Tornado and Eurofighter Typhoon
2° Reparto Manutenzione Missili (2nd Missile Maintenance Regiment), at Padua Air Base responsible for air-launched and ground-launched missiles
Centro Manutenzione Armamento (Weapons Maintenance Center)
Gruppo Servizi Generali (General Services Squadron)
Plotone Protezione delle Forze (Force Protection Platoon)
3° Reparto Manutenzione Velivoli (3rd Aircraft Maintenance Regiment), at Istrana Air Base responsible for AMX
5° Gruppo Manutenzione Velivoli (5th Aircraft Maintenance Squadron), at Capodichino Air Base responsible for air ground equipment
6° Reparto Manutenzione Elicotteri (6th Helicopter Maintenance Regiment), at Pratica di Mare Air Base responsible for helicopters and P.180 Avanti
10° Reparto Manutenzione Velivoli (10th Aircraft Maintenance Regiment), at Galatina Air Base responsible for MB-339 and T-345A Trainer
11° Reparto Manutenzione Velivoli (11th Aircraft Maintenance Regiment), at Sigonella Air Base responsible for P-72A ASW
Centro Logistico Polivalente (Multi-use Logistic Center), at Guidonia Air Base
Gruppo Logistica e Rifornimenti (Logistic and Supply Squadron)
Gruppo Calibrazione e Sopravvivenza (Calibration and Survival Squadron)
Centro Logistico Munizionamento e Armamento Aeronautica Militare (Air Force Ammunition and Weapons Logistic Center), in Orte
Gruppo Logistica e Rifornimenti (Logistic and Supply Squadron)
Gruppo Efficienza Materiale Armamento (Weapons Materiel Efficiency Squadron)
Gruppo Servizi Generali (General Services Squadron)
Compagnia Protezione delle Forze (Force Protection Company)
Gruppo Rifornimento Area Nord (Supply Squadron North), in Sanguinetto
Gruppo Rifornimento Area Sud (Supply Squadron South), in Francavilla Fontana
Italian Air Force Delegation, at the International Eurofighter Support Team, at BAE Systems Military Air & Information, in Warton (UK)
Italian Air Force Delegation, at the International Eurofighter Support Team, at EADS CASA, in Madrid (Spain)
Italian Air Force Delegation, at the International Eurofighter Support Team and International Weapon System Support Centre, at Eurofighter GmbH, in Hallbergmoos (Germany)
Italian Air Force Delegation, at the C-130J program, at Wright-Patterson Air Force Base, in Dayton (USA)
3rd Division – Command and Control, Communication and IT Support
3ª Divisione – Supporto Tecnico Operativo Sistemi Comando e Controllo, Comunicazioni e Telematica (3rd Division – Command and Control, Communication and IT Support), in Rome
1° Reparto - Sistemi Difesa Aerea, Assistenza al Volo, Telecomunicazioni (1st Department - Air-defence, Flight Support, and Communication Systems)
2° Reparto - Sistemi Automatizzati (2nd Department - Automatic Systems) providing hardware and software support
Reparto Gestione ed Innovazione Sistemi Comando e Controllo (Command and Control Systems Maintenance and Innovation Regiment - ReGISCC), at Pratica di Mare Air Base (Manages the air force's classified communication network)
Gruppo Gestione Sistemi Comando e Controllo (Command and Control Systems Management Squadron)
Gruppo Innovazione, Sviluppo e Sperimentazione C4-ISR (C4-ISR Innovations, Development and Experimentation Squadron)
Reparto Sistemi Informativi Automatizzati (Automatic Information Systems Regiment - ReSIA), in Acquasanta
4th Communication and Air-defence Systems and Flight Support Brigade
4ª Brigata Telecomunicazioni e Sistemi per la Difesa Aerea e l’Assistenza al Volo (4th Communication and Air-defence Systems and Flight Support Brigade - 4ª Brigata TLC e Sist. DA/AV), at Latina Air Base
Gruppo Addestramento e Formazione TLC e Sist. DA/AV (Training and Formation Squadron)
Compagnia Protezione delle Forze (Force Protection Company)
1° Reparto Tecnico Comunicazioni (1st Technical Communications Regiment), in Milan
Gruppo Manutenzione (Maintenance Squadron)
Squadriglia TLC (Communications Flight), at Decimomannu Air Base
Squadriglia TLC (Communications Flight), at Padua Air Base
Centro Aeronautica Militare di Montagna (Air Force Mountain Center) on Monte Cimone
2° Reparto Tecnico Comunicazioni (2nd Technical Communications Regiment), at Bari Air Base
Gruppo Manutenzione (Maintenance Squadron)
Squadriglia TLC (Communications Flight), at Ciampino Air Base
Centro Tecnico per la Meteorologia (Meteorology Technical Center), at Vigna di Valle Airport
112ª Squadriglia Radar Remota (112th Remote Radar Station Flight), in Mortara
113ª Squadriglia Radar Remota (113th Remote Radar Station Flight), in Lame di Concordia
114ª Squadriglia Radar Remota (114th Remote Radar Station Flight), in Potenza Picena
115ª Squadriglia Radar Remota (115th Remote Radar Station Flight), in Capo Mele
121ª Squadriglia Radar Remota (121st Remote Radar Station Flight), in Poggio Ballone
123ª Squadriglia Radar Remota (123rd Remote Radar Station Flight), in Capo Frasca
131ª Squadriglia Radar Remota (131st Remote Radar Station Flight), in Jacotenente
132ª Squadriglia Radar Remota (132nd Remote Radar Station Flight), in Capo Rizzuto
133ª Squadriglia Radar Remota (133rd Remote Radar Station Flight), in San Giovanni Teatino
134ª Squadriglia Radar Remota (134th Remote Radar Station Flight), in Lampedusa
135ª Squadriglia Radar Remota (135th Remote Radar Station Flight), in Marsala
136ª Squadriglia Radar Remota (136th Remote Radar Station Flight), in Otranto
137ª Squadriglia Radar Remota (137th Remote Radar Station Flight), in Mezzogregorio
Services
Servizio dei Supporti (Support Service), in Rome
1° Reparto – Supporto Operativo (1st Department - Operational Support), in Rome
3° Stormo, at Villafranca Air Base (Out of area air base construction, management and support wing)
Gruppo Servizi Generali (General Services Squadron)
Gruppo Mobile Supporto Operativo (Mobile Operational Support Squadron)
Gruppo Servizi di Supporto Operativo (Operational Support Services Squadron)
Gruppo Autotrasporti (Transport Squadron)
Gruppo Protezione delle Forze (Force Protection Squadron)
Centro Addestrativo Personale Fuori Area (Out of Area Personnel Training Center)
Centro Tecnico Rifornimenti (Technical Supply Center), at Fiumicino Air Base
1° Gruppo Ricezione e Smistamento (GRS) (1st Reception and Sorting Squadron), in Novara
2° Gruppo Manutenzione Autoveicoli (2nd Motor-vehicles Maintenance Squadron), in Forlì
3° Gruppo Manutenzione Autoveicoli (3rd Motor-vehicles Maintenance Squadron), in Mungivacca
Comando Rete POL (Petroleum, Oil, Lubricants) (POL Network Command), at Parma Air Base, which manages NATO's North Italian Pipeline System
2° Reparto – Servizio Chimico-Fisico (2nd Department - Chemical-Physical Service), in Rome
1° Laboratorio Tecnico di Controllo, at Padua Air Base
2° Laboratorio Tecnico di Controllo, at Fiumicino Air Base
3° Laboratorio Tecnico di Controllo, in Mungivacca
4° Laboratorio Tecnico di Controllo, at Parma Air Base
5° Laboratorio Tecnico di Controllo, at Decimomannu Air Base
6° Laboratorio Tecnico di Controllo, at Trapani Air Base
Distaccamento Aeroportuale di Brindisi (Airport Detachment Brindisi)
Gruppo Servizi Generali (General Services Squadron)
Compagnia Protezione delle Forze (Force Protection Company)
Italian Air Force Delegation, at MoD Bicester (UK)
Italian Air Force Delegation, at the German Air Force's Weapon System Support Center 1, in Erding (Germany) (Turbo-Union RB199 and Eurojet EJ200 engines maintenance)
Italian Air Force Delegation, at Torrejón Air Base (Spain)
Servizio di Commissariato e Amministrazione (Commissariat and Administration Service), in Rome
1° Reparto – Commissariato (1st Department - Commissariat), provisioning, clothing, personal equipment department
2° Reparto – Amministrazione (2nd Department - Administration), human resources department
Servizio Infrastrutture (Infrastructure Service), in Rome
1° Reparto – Programmi (1st Department - Planning)
2° Reparto – Lavori (2nd Department - Construction)
1° Reparto Genio Aeronautica Militare (1st Air Force Engineer Regiment), at Villafranca Air Base
27° Gruppo Genio Campale (27th Field Engineer Squadron), at Villafranca Air Base
102° Servizio Tecnico Distaccato Infrastrutture (102nd Detached Infrastructure Technical Service), at Ghedi Air Base
106° Servizio Tecnico Distaccato Infrastrutture (106th Detached Infrastructure Technical Service), at Parma Air Base
108° Servizio Tecnico Distaccato Infrastrutture (108th Detached Infrastructure Technical Service), at Istrana Air Base
113° Servizio Tecnico Distaccato Infrastrutture (113th Detached Infrastructure Technical Service), in Poggio Renatico
2° Reparto Genio Aeronautica Militare (2nd Air Force Engineer Regiment), at Ciampino Air Base
8° Gruppo Genio Campale (8th Field Engineer Squadron), at Ciampino Air Base
201° Servizio Tecnico Distaccato Infrastrutture (201st Detached Infrastructure Technical Service), at Pisa Air Base
205° Servizio Tecnico Distaccato Infrastrutture (205th Detached Infrastructure Technical Service), at Decimomannu Air Base
208° Servizio Tecnico Distaccato Infrastrutture (208th Detached Infrastructure Technical Service), at Pratica di Mare Air Base
209° Servizio Tecnico Distaccato Infrastrutture (209th Detached Infrastructure Technical Service), at Grosseto Air Base
3° Reparto Genio Aeronautica Militare (3rd Air Force Engineer Regiment), at Bari Air Base
16° Gruppo Genio Campale (16th Field Engineer Squadron), at Bari Air Base
301° Servizio Tecnico Distaccato Infrastrutture (301st Detached Infrastructure Technical Service), at Amendola Air Base
302° Servizio Tecnico Distaccato Infrastrutture (302nd Detached Infrastructure Technical Service), at Gioia del Colle Air Base
304° Servizio Tecnico Distaccato Infrastrutture (304th Detached Infrastructure Technical Service), at Sigonella Air Base
308° Servizio Tecnico Distaccato Infrastrutture (308th Detached Infrastructure Technical Service), in Pozzuoli
Servizio Sanitario Aeronautica Militare (Air Force Medical Service), in Rome
Commissione Sanitaria di Appello (Medical Examination Commission), in Rome
Istituto Perfezionamento Addestramento Medicina Aeronautica e Spaziale (Aeronautical and Space Medicine Training Institute), in Rome
Infermeria Principale (Main Infirmary), at Pratica di Mare Air Base
Istituto di Medicina Aerospaziale dell'A.M. di Milano (Air Force Medical Institute, Milan)
Istituto di Medicina Aerospaziale dell'A.M. di Roma (Air Force Medical Institute, Rome)
Centro Aeromedico Psicofisiologico (Air-medical Psychophysiological Center), at Bari Air Base
Dipartimento Militare di Medicina Legale dell'Aeronautica Militare (Air Force Forensic Medicine Military Department), at Bari Air Base
Flight Test Center
Centro Sperimentale di Volo (Flight Test Center), at Pratica di Mare Air Base
Reparto Sperimentale Volo (Test Flight Department)
311° Gruppo Volo (311th Squadron) with various types of aircraft
Gruppo Tecnico (Technical Squadron)
Gruppo Gestione Software (Software Management Squadron)
Gruppo Ingegneria per l’Aero-Spazio (Aero-Space Engineering Squadron)
Gruppo Armamento e Contromisure (Weapons and Countermeasures Squadron)
Reparto Tecnologie Materiali Aeronautici e Spaziali (Air and Space Technologies and Materials Department - RTMAS)
Gruppo Materiali Strutturali (Structural Materials Squadron)
Gruppo Materiali di Consumo (Fuel Materials Squadron)
Gruppo Indagini Balistiche (Ballistic Research Squadron)
Gruppo Indagini Tecniche (Technical Research Squadron)
Gruppo Analisi e prove Chimiche e Fisiche (Chemical and Physical Analysis and Test Squadron)
Reparto Medicina Aeronautica e Spaziale (Air and Space Medicine Department - RMAS)
Gruppo Alta Quota ed Ambienti Estremi (High Altitude and Extreme Environments Squadron)
Gruppo Biodinamica (Biodynamic Squadron)
Gruppo Fattori Umani (Human Factor Squadron)
Italian Air Force Delegation, at the École du personnel navigant d'essais et de réception (EPNER), at Istres-Le Tubé Air Base (France)
Joint Test and Training Range
Poligono Sperimentale e di Addestramento Interforze di Salto di Quirra (Joint Test and Training Range Salto di Quirra), in Perdasdefogu
Gruppo Impiego Operativo (Operational Squadron) with NH-500E and AB-212 helicopters (the latter to be replaced with AW-139A)
Gruppo Servizi Logistico-Operativi (Logistic Services Squadron)
Reparto Sperimentale e di Standardizzazione al Tiro Aereo (Air Firing Test and Standardization Regiment), at Decimomannu Air Base
Gruppo Servizi Tecnico-Operativi (Technical Services Squadron)
Gruppo Servizi Logistico-Operativi (Logistic Services Squadron)
Centro Aeronautica Militare Sperimentale e di Standardizzazione al Tiro Aereo (Air Force Air Firing Test and Standardization Center)
Compagnia Protezione delle Forze (Force Protection Company)
Distaccamento Capo San Lorenzo (Capo San Lorenzo Detachment), in Villaputzu
Poligono Capo Frasca (Capo Frasca Training Range), in Arbus
1st Air Region
The 1st Air Region provides territorial functions and liaises with communal, provincial, and regional administrations in the North of Italy.
1ª Regione Aerea (1st Air Region), in Milan
Comando Aeroporto / Quartier Generale della 1ª Regione Aerea - Linate (Air Base Command / Headquarters 1st Air Region), at Linate Air Base
Gruppo Servizi Tecnico-Operativi (Technical Services Squadron)
Gruppo Servizi Logistico-Operativi (Logistic Services Squadron)
Compagnia Protezione delle Forze (Force Protection Company)
Centro Logistico di Supporto Areale / Istituto “U. Maddalena” (Area Logistic Support Center / "U. Maddalena" Institute), in Cadimare
Istituto "Umberto Maddalena" ("Umberto Maddalena" High School)
Gruppo Servizi Generali (General Services Squadron)
Distaccamento Aeronautica Militare di Capo Mele (Air Force Detachment Capo Mele)
Distaccamento Aeroportuale Dobbiaco (Airport Detachment Toblach)
Air Force Schools Command - 3rd Air Region
The Air Force Schools Command - 3rd Air Region is based in Bari and responsible for the education and training of all members of the Aeronautica Militare; it also provides territorial functions and liaises with communal, provincial, and regional administrations in the South of Italy.
Comando Scuole dell'Aeronautica Militare - 3ª Regione Aerea (Air Force Schools Command - 3rd Air Region CSAM/3ªRA), in Bari
Accademia Aeronautica (Air Force Academy), in Pozzuoli
Reparto Servizi Tecnici Generali (General Technical Services Regiment)
Gruppo Servizi Tecnici (Technical Services Squadron)
Gruppo Servizi Vari (Various Services Squadron)
Gruppo Telematico (Telematic Squadron)
Plotone Protezione delle Forze (Force Protection Platoon)
Centro di Formazione Aviation English (Aviation English Formation Center), in Loreto
Gruppo Servizi Generali (General Services Squadron)
Plotone Protezione delle Forze (Force Protection Platoon)
Istituto di Scienze Militari Aeronautiche (Military Aeronautical Sciences Institute), in Florence
Reparto Servizi Tecnici Generali (General Technical Services Regiment)
Gruppo Servizi Tecnici (Technical Services Squadron)
Gruppo Servizi Vari (Various Services Squadron)
Plotone Protezione delle Forze (Force Protection Platoon)
Scuola Militare Aeronautica Giulio Douhet (Military Aeronautical High School Giulio Douhet), in Florence
Scuola Marescialli Aeronautica Militare / Comando Aeroporto Viterbo (Air Force Non Commissioned Officers School / Airport Command Viterbo), in Viterbo
Gruppo Servizi Generali (General Services Squadron)
Plotone Protezione delle Forze (Force Protection Platoon)
Scuola Specialisti Aeronautica Militare (Air Force Specialists School), in Caserta
Gruppo Servizi Generali (General Services Squadron)
Plotone Protezione delle Forze (Force Protection Platoon)
Scuola Volontari Aeronautica Militare (Air Force Volunteers School), in Taranto
Gruppo Servizi Generali (General Services Squadron)
Plotone Protezione delle Forze (Force Protection Platoon)
Scuola di Aerocooperazione (Air-cooperation School – an inter-service coordination & training center), in Guidonia Montecelio (50% air force staffed)
Vice Comandante (Air Force Schools Command - Deputy Commander)
Quartier Generale CSAM/3ªRA (Headquarters CSAM/3ªRA), in Bari
Gruppo Servizi Tecnico-Operativi (Technical Services Squadron)
Gruppo Servizi Logistico-Operativi (Logistic Services Squadron)
Plotone Protezione delle Forze (Force Protection Platoon)
Centro di Selezione Aeronautica Militare (Air Force Selection Center), in Guidonia Montecelio
Distaccamento Aeroportuale Alghero (Airport Detachment Alghero)
Distaccamento Aeronautico Monte Scuro (Aeronautical Detachment Monte Scuro)
Centro di Sopravvivenza in Montagna (Mountain Survival Center)
Distaccamento Aeronautico Otranto (Aeronautical Detachment Otranto)
Distaccamento Aeronautico Siracusa (Aeronautical Detachment Syracuse)
Centro Addestramento Equipaggi - Multi Crew (Crew Training Center - Multi Crew, for other armed services, police forces, and government agencies), at Pratica di Mare Air Base
204° Gruppo Volo (204th Squadron) with P.180 Avanti
Gruppo Istruzione Professionale (Professional Training Squadron)
Italian Air Force Delegation, at Kalamata Air Base (Greece), at the Hellenic Air Force's 120th Training Wing
Italian Air Force Delegation, at Sheppard Air Force Base (USA), at the Euro-NATO Joint Jet Pilot Training Program
60° Stormo (60th (Glider) Wing), in Guidonia Montecelio
Gruppo di Volo a Vela (Glider Squadron) with G103 Twin Astir II, Nimbus-4D, Nimbus 4M and LAK-17A gliders, S.208M, MB-339A and MB-339CDII planes, and NH-500E helicopters
460° Gruppo Servizi Tecnico-Operativi (460th Technical Services Squadron)
560° Gruppo Servizi Logistico-Operativi (560th Logistic Services Squadron)
Servizio Efficienza Aeromobili (Maintenance Service)
Compagnia Protezione delle Forze (Force Protection Company)
61° Stormo (61st (Jet Training) Wing), at Galatina Air Base
212° Gruppo Volo (212th Squadron) with M-346A Master
213° Gruppo Volo (213th Squadron) with MB-339CDII, being replaced by T-345A Trainer
214° Gruppo Volo (214th Squadron) training navigators and weapon officers on the MB-339A, planned to be replaced by T-345A Trainer
461° Gruppo Servizi Tecnico-Operativi (461st Technical Services Squadron)
561° Gruppo Servizi Logistico-Operativi (561st Logistic Services Squadron)
961° Gruppo Efficienza Aeromobili (961st Maintenance Squadron)
Compagnia Protezione delle Forze (Force Protection Company)
Italian Air Force Delegation, at Cazaux Air Base (France), at the French Air Force's 8e Escadre de Chasse
70° Stormo Giulio Cesare Graziani (70th (Basic Training) Wing), at Latina Air Base
207° Gruppo Volo (207th Squadron) with SF.260EA and P.180 Avanti
Gruppo Istruzione Professionale (Professional Training Squadron)
470° Gruppo Servizi Tecnico-Operativi (470th Technical Services Squadron)
570° Gruppo Servizi Logistico-Operativi (570th Logistic Services Squadron)
970° Gruppo Efficienza Aeromobili (970th Maintenance Squadron)
Compagnia Protezione delle Forze (Force Protection Company)
72° Stormo (72nd (Helicopter Training) Wing), at Frosinone Air Base
208° Gruppo Volo (208th Squadron) with NH-500E and AW-139A
Gruppo Istruzione Professionale (Professional Training Squadron)
472° Gruppo Servizi Tecnico-Operativi (472nd Technical Services Squadron)
572° Gruppo Servizi Logistico-Operativi (572nd Logistic Services Squadron)
972° Gruppo Efficienza Aeromobili (972nd Maintenance Squadron)
Compagnia Protezione delle Forze (Force Protection Company)
Air Force Structure Graphic
See also
Military of Italy
Carabinieri
Guardia di Finanza
Italian Army
Italian Navy
Structure of the Italian Army
References
External links
Aeronautica Militare web page
Military units and formations established in 1923
Aeronautica Militare
1923 establishments in Italy
Structure of contemporary air forces |
69981152 | https://en.wikipedia.org/wiki/Rank%20One | Rank One | Rank One is a high school activities management platform available as a web-based software and mobile app. Originally built for high school athletic departments, the platform has expanded its features to include management software for high school marching bands, drill teams and theatre departments. Rank One is located in the US, with corporate offices in Dallas, Texas.
History
Formation
Founded in 2007 by Brian Mann, Rank One began as Rank One Sport, and was created as a department management tool for high school athletic trainers in Texas. Early versions of the software provided athletic trainers with the ability to create and track schedules and rosters through an online dashboard. In 2009, the company introduced electronic forms, allowing athletic trainers to complete paperless student compliance forms in the software. In 2010, Rank One Sport expanded to include Oklahoma and began actively marketing to high schools outside the Texas market; it hired its first employee in 2011. In 2014, Rank One Sport was acquired by AllPlayers Network, Inc. As of 2022, Jason McKay is listed as chairman and CEO of AllPlayers Network, including the Rank One brand.
Branding Variations
Rank One began as Rank One Sport in 2007. In 2018, Rank One Sport introduced a new product called Rank One Health, with similar branding and often appearing together. In 2019, the Rank One Sport and Rank One Health brands were combined under the name Rank One Sport + Health. In 2021, Rank One Sport + Health became Rank One, with features previously associated with the Rank One Health brand transferred to a new company called MedOutreach.
Contracts
In 2019, Rank One signed a State Management contract with TAPPS and a Contex Concussion and Health Management contract with the University Interscholastic League (UIL).
Partnerships
In 2018, Rank One Sport partnered with Children's Health to create a product called Rank One Health. Rank One Health was a mobile app that provided HIPAA compliant secure messaging between athletic trainers and local healthcare providers.
Rank One has established partnerships with other software companies, providing API integration to allow secure data sharing across multiple software platforms. As of 2021, Rank One has announced partnerships with companies such as Rapid Replay, From Now On, and Hometown Ticketing.
Notes
External links
Rank One official website
Sports companies
Sports officiating technology
Sports software
American companies established in 2007
Software companies established in 2007 |
42173089 | https://en.wikipedia.org/wiki/PSIPRED | PSIPRED | PSI-blast based secondary structure PREDiction (PSIPRED) is a method used to investigate protein structure. It uses artificial neural network machine learning methods in its algorithm. It is a server-side program, featuring a website serving as a front-end interface, which can predict a protein's secondary structure (beta sheets, alpha helixes and coils) from the primary sequence.
PSIPRED is available as a web service and as software. The software is distributed as source code, technically licensed as proprietary software: modification is allowed, but freeware provisions forbid for-profit distribution of the software and its results.
Secondary structure
Secondary structure is the general three-dimensional form of local segments of biopolymers such as proteins and nucleic acids (DNA, RNA). It does not, however, describe specific atomic positions in three-dimensional space, which are considered to be the tertiary structure. Secondary structure can be formally defined by the hydrogen bonds of the biopolymer, as observed in an atomic-resolution structure. In proteins, the secondary structure is defined by the patterns of hydrogen bonds between backbone amino and carboxyl groups. Conversely, for nucleic acids, the secondary structure consists of the hydrogen bonding between the nitrogenous bases. The hydrogen bonding patterns may be significantly distorted, which makes automatic determination of secondary structure difficult. Efforts to use computers to predict protein secondary structures, based only on their given primary structure sequences, have been ongoing since the 1970s.
Secondary structure prediction involves a set of methods in bioinformatics that aim to predict the local secondary structures of proteins and RNA sequences based only on knowledge of their primary structure – amino acid or nucleotide sequence, respectively. For proteins, a prediction consists of assigning regions of the amino acid sequence as highly probable alpha helixes, beta strands (often noted as extended conformations), or turns. The success of a prediction is determined by comparing it to the results of the DSSP algorithm applied to the crystal structure of the protein; for nucleic acids, it may be determined from the hydrogen bonding pattern. Specialized algorithms have been developed to detect specific well-defined patterns such as transmembrane helixes and coiled coils in proteins, or canonical micro-RNA structures in RNA.
Basic information
The idea of this method is to use information from evolutionarily related proteins to predict the secondary structure of a new amino acid sequence. PSI-BLAST is used to find related sequences and to build a position-specific scoring matrix. This matrix is processed by an artificial neural network, constructed and trained to predict the secondary structure of the input sequence; in short, it is a machine learning method.
Prediction algorithm (method)
The prediction method or algorithm is split into three stages: generating a sequence profile, predicting initial secondary structure, and filtering the predicted structure. PSIPRED works to normalize the sequence profile generated by PSI-BLAST.
Then, by using neural networking, initial secondary structure is predicted. For each amino acid in the sequence, the neural network is fed a window of 15 amino acids. Additional inputs indicate whether the window spans the N or C terminus of the chain. This results in a final input layer of 315 input units, divided into 15 groups of 21 units. The network has one hidden layer of 75 units and 3 output nodes (one for each secondary structure element: helix, sheet, coil).
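To make the encoding concrete, the sketch below builds the 315-unit input vector for each residue from a position-specific scoring matrix, in Python. It is a minimal illustration, not PSIPRED's reference code; in particular, the placement of the terminus flag as the 21st feature of each group is an assumption.

import numpy as np

WINDOW = 15      # residues per window
FEATURES = 21    # 20 profile scores plus 1 terminus flag per position

def encode_windows(pssm):
    # Build one 315-unit input vector (15 groups of 21) per residue.
    n = pssm.shape[0]
    half = WINDOW // 2
    inputs = np.zeros((n, WINDOW * FEATURES))
    for i in range(n):
        for w in range(WINDOW):
            j = i - half + w                 # absolute sequence position
            base = w * FEATURES
            if 0 <= j < n:
                inputs[i, base:base + 20] = pssm[j]
            else:
                inputs[i, base + 20] = 1.0   # window overhangs a terminus
    return inputs

profile = np.random.rand(30, 20)             # toy profile: 30 residues
print(encode_windows(profile).shape)         # (30, 315)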
A second neural network is used to filter the predicted structure of the first network. This network is also fed a window of 15 positions. The indicator marking a window that spans a chain terminus is forwarded as well. This results in 60 input units, divided into 15 groups of four. The network has one hidden layer of 60 units and results in three output nodes (one for each secondary structure element: helix, sheet, coil).
The three final output nodes deliver a score for each secondary structure element for the central position of the window. Using the secondary structure with the highest score, PSIPRED generates the protein prediction. The Q3 value is the fraction of residues predicted correctly in the secondary structure states, namely helix, strand, and coil.
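Because Q3 is simply per-residue accuracy over the three states, it is straightforward to compute; a minimal sketch:

def q3(predicted, observed):
    # Fraction of residues whose predicted state (H/E/C) matches.
    assert len(predicted) == len(observed)
    hits = sum(p == o for p, o in zip(predicted, observed))
    return hits / len(observed)

print(q3("HHHECCC", "HHHECCE"))   # 6 of 7 correct, about 0.857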
See also
Jpred
Protein design
Protein function prediction
De novo protein structure prediction
Molecular design software
List of protein structure prediction software
Comparison of software for molecular mechanics modeling
Modelling biological systems
Protein fragment library
Lattice proteins
Statistical potential
References
Structural bioinformatics software
Neural network software |
3871241 | https://en.wikipedia.org/wiki/Information%20assurance%20vulnerability%20alert | Information assurance vulnerability alert | An information assurance vulnerability alert (IAVA) is an announcement of a computer application software or operating system vulnerability, issued in the form of alerts, bulletins, and technical advisories identified by US-CERT (https://www.us-cert.gov/).
US-CERT is managed by the National Cybersecurity and Communications Integration Center (NCCIC), which is part of the Cybersecurity and Infrastructure Security Agency (CISA) within the U.S. Department of Homeland Security (DHS). The NCCIC realigned its organizational structure in 2017, integrating functions previously performed independently by the U.S. Computer Emergency Readiness Team (US-CERT) and the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT).
These selected vulnerabilities are the mandated baseline, or minimum configuration of all hosts residing on the GIG. US-CERT analyzes each vulnerability and determines if it is necessary or beneficial to the Department of Defense to release it as an IAVA. Implementation of IAVA policy will help ensure that DoD Components take appropriate mitigating actions against vulnerabilities to avoid serious compromises to DoD computer system assets that would potentially degrade mission performance.
Information assurance vulnerability management (IAVM) program
The combatant commands, services, agencies and field activities are required to implement vulnerability notifications in the form of alerts, bulletins, and technical advisories. USCYBERCOM has the authority to direct corrective actions, which may ultimately include disconnection of any enclave, or affected system on the enclave, not in compliance with the IAVA program directives and vulnerability response measures (i.e. communication tasking orders or messages). USCYBERCOM will coordinate with all affected organizations to determine operational impact to the DoD before instituting a disconnection.
Background
On November 16, 2018, President Trump signed into law the Cybersecurity and Infrastructure Security Agency Act of 2018. This landmark legislation elevated the mission of the former National Protection and Programs Directorate (NPPD) within the Department of Homeland Security (DHS) and established CISA, which includes the National Cybersecurity and Communications Integration Center (NCCIC). NCCIC realigned its organizational structure in 2017, integrating like functions previously performed independently by the U.S. Computer Emergency Readiness Team (US-CERT) and the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT).
According to the IAVA policy memorandum (discussed below), the alert system should:
Identify a system administrator to be the point of contact for each relevant network system,
Send alert notifications to each point of contact,
Require confirmation by each point of contact acknowledging receipt of each alert notification,
Establish a date for the corrective action to be implemented, and
Enable DISA to confirm whether the correction has been implemented.
The Deputy Secretary of Defense issued an Information Assurance Vulnerability Alert (IAVA) policy memorandum on December 30, 1999. Current events of the time demonstrated that widely known vulnerabilities exist throughout DoD networks, with the potential to severely degrade mission performance. The policy memorandum instructs the DISA to develop and maintain an IAVA database system that would ensure a positive control mechanism for system administrators to receive, acknowledge, and comply with system vulnerability alert notifications. The IAVA policy requires the Component Commands, Services, and Agencies to register and report their acknowledgement of and compliance with the IAVA database. According to the policy memorandum, the compliance data to be reported should include the number of assets affected, the number of assets in compliance, and the number of assets with waivers.
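As an illustration of the record-keeping the memorandum calls for, the sketch below models one alert's acknowledgment and compliance data in Python. Every field name and value here is invented for illustration and does not reflect any actual DoD or DISA system.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class IavaAlert:
    alert_id: str
    poc_email: str                     # point of contact (system administrator)
    acknowledged: bool = False         # POC confirmed receipt of the alert
    fix_due: Optional[date] = None     # date set for the corrective action
    affected_assets: int = 0
    compliant_assets: int = 0
    waived_assets: int = 0

    def overdue(self, today):
        # True if the correction date passed with assets still exposed.
        return (self.fix_due is not None and today > self.fix_due
                and self.compliant_assets + self.waived_assets < self.affected_assets)

alert = IavaAlert("EXAMPLE-0001", "sysadmin@example.mil", acknowledged=True,
                  fix_due=date(2024, 1, 31), affected_assets=120,
                  compliant_assets=95, waived_assets=5)
print(alert.overdue(date(2024, 2, 15)))   # True: 20 assets unresolved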
See also
Attack (computing)
Computer security
Information security
IT risk
Threat (computer)
Vulnerability (computing)
Security Technical Implementation Guide
Security Content Automation Protocol
External links
Office of the Inspector General, DoD Compliance with the Information Assurance Vulnerability Alert Policy, Dec 2001.
Chairman of the Joint Chiefs of Staff Instruction, 6510.01E, August 2007.
DoD IA Policy Chart DoD IA Policy Chart
IAVA Site
Security compliance
United States Department of Defense information technology
Cyberwarfare |
26956016 | https://en.wikipedia.org/wiki/VRPN | VRPN | VRPN (Virtual-Reality Peripheral Network) is a device-independent, network-based interface for accessing virtual reality peripherals in VR applications. It was originally designed and implemented by Russell M. Taylor II at the Department of Computer Science of the University of North Carolina at Chapel Hill. VRPN was maintained and supported by Sensics while it was in business. It is currently maintained by ReliaSolve and developed in collaboration with a productive community of contributors. It is described more fully at vrpn.org and in VRPN-VRST.
The purpose of VRPN is to provide a unified interface to input devices, like motion trackers or joystick controllers. It also provides the following:
Time-stamping of data
Multiple simultaneous access to peripheral devices
Automatic re-connection of failed servers
Storage and playback of sessions
The VRPN system consists of programming interfaces for both the client application and the hardware drivers and a server application that communicates with the hardware devices. The client interfaces are written in C++ but have been wrapped in C#, Python and Java.
A typical application of VRPN is to encode and send 6DoF motion capture data through the network in real time.
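For instance, a motion-capture client in Python might look like the sketch below. It assumes the receiver API exposed by the bindings shipped with the VRPN source tree (vrpn.receiver.Tracker, register_change_handler); names and callback payloads vary between builds, so treat them as assumptions rather than a documented interface.

import vrpn   # Python wrapper built from the VRPN source tree

def on_position(userdata, data):
    # data carries one timestamped 6DoF sample from the server
    print(userdata, data.get("sensor"), data.get("position"))

tracker = vrpn.receiver.Tracker("Tracker0@localhost")    # device@server
tracker.register_change_handler("my-tag", on_position, "position")

while True:
    tracker.mainloop()   # pump the connection; callbacks fire from here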
Networking
A VRPN client can establish a connection with a VRPN server (the device providing the data) in two ways: either over TCP (reliable, but less efficient), or over UDP (unreliable, but lower-latency and more efficient). The "unreliable" mode is generally preferred when the latency is critical.
The "unreliable" connection initialization sequence makes use of both the TCP and UDP protocols. It works as follows:
the client opens a TCP socket for listening on an arbitrary port;
the client sends the port number of this socket, along with its own machine name, in a UDP datagram directed to a well-known port of the VRPN server (the default is 3883);
the server opens a TCP connection with the client, to the port number communicated at step 2;
if the TCP connection is established, each device tells the other the supported VRPN version;
if the versions are not compatible, the connection is dropped;
otherwise, each device begins to listen on a new UDP port (different from those used before) and sends the port number to the other device, by using the previously created TCP connection;
from now on, all the data is sent over the two UDP ports opened at step 6.
The advantages of this approach are: fast connection time and fast failure detection during connection.
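A rough client-side rendering of steps 1 and 2 with raw sockets (illustrative only; the real datagram payload format is VRPN-internal, so the string below is a stand-in):

import socket

SERVER = "tracker-host"    # hostname of the VRPN server (assumed)
VRPN_UDP_PORT = 3883       # well-known server port from step 2

# Step 1: listen on an arbitrary TCP port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("", 0))     # port 0 lets the OS choose
listener.listen(1)
my_port = listener.getsockname()[1]

# Step 2: advertise our machine name and TCP port over UDP.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
message = f"{socket.gethostname()} {my_port}".encode()
udp.sendto(message, (SERVER, VRPN_UDP_PORT))

# Step 3: the server connects back to the advertised port; the version
# exchange and the switch to fresh UDP data ports (steps 4-7) follow.
conn, addr = listener.accept()

Note how the datagram in step 2 embeds the client's machine name and port inside the application payload, which is the detail at the heart of the NAT problem discussed next.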
However, the "unreliable" connection initialization protocol does not honor the strict layering protocol design principle, as the application-level VRPN payload leaks information about lower levels in the network stack, namely the machine names and TCP/UDP port numbers. Because of this design choice, it is impossible to establish a VRPN connection between two devices connected through a NAT: the router would need to translate not only the layer-3 information in the packet headers, but also the references to IP addresses and port numbers inside the VRPN payload.
To deal with this problem, VRPN offers a second "reliable", TCP-only connection initialization mode, which is a standard TCP server-client interaction: the VRPN server listens on a well-known TCP port and the client initiates a connection. In this mode, all the data is sent on the same TCP connection, and no UDP communication is required.
Supported Devices
Trackers (listed alphabetically)
3rdTech HiBall-3000 Wide Area Tracker (formerly the UNC Ceiling tracker).
Antilatency positional tracking system.
ART optical tracking systems, including Flystick2 and Flystick3. The receiving code is part of the standard source distribution.
Analog devices used as a tracker (Magellan, CerealBox with joysticks attached, Radamec SPI, Mouse, ...).
ARToolkit VRPN tracker available from Universidad de los Andes.
Ascension Flock-of-birds (either running through one serial port, or with each sensor connected to its own serial port). This driver (and the other tracker drivers) resets the tracker in case of power cycle, serial disconnect or other flukes. Use of this driver on a Nest of Birds will burn out the transmitter drive circuitry.
Button devices used as teleporters or trackers (Global Haptics GeoOrb, ...).
Crossbow RGA300 accelerometer using a serial interface.
GameTrak devices.
Immersion Microscribe.
Inertialmouse and Event Mouse from Bauhaus University Weimar.
InterSense IS-600 and IS-900 (using augmented Fastrak interface on any architecture).
Logitech 3D mouse.
Microsoft Kinect (two different VRPN servers available.)
Motion Analysis Corporation (VRPN server is built into the vendor's server)
MotionNode inertial tracking device.
NDI Polaris optical tracking system.
Novint force-feedback device.
OptiTrack Motive (was NaturalPoint OptiTrack Tracking Tools) (VRPN server is built into vendor server).
Origin Systems DynaSight tracker (with passive reflector). This driver also supports the older tracker in the SeeReal D4D stereo Display.
OSVR Hacker Developer Kit
Other InterSense trackers (using InterSense native library, even USB-based ones); there is currently a discussion on the VRPN email list about whether the position and orientation information are returned consistently when using this interface.
PS-Tech optical tracking system.
PhaseSpace tracking system.
PNI SpacePoint.
Polhemus Fastrak tracker and 3Space trackers on several architectures, Liberty and LibertyHS tracker under at least Linux. The Patriot tracker is supported using the Liberty driver. G4 Powertrack.
Razer Hydra game controller.
Sensable Technologies PHANToM force-feedback device.
Sensics tracker.
Sensics zSight tracker.
Serial-port GPS device.
Vicon (VRPN server is built into the vendor's server).
Viewpoint Eye tracker.
Wintracker III magnetic tracking system from Virtual Realities Ltd.
WorldViz Precision Position Tracker PPT 1.2.
Yost Labs 3Space Sensor (and wireless 3Space sensors).
zSpace immersive interactive hardware and software platform (VRPN server built into vendor server).
Other devices (listed alphabetically)
3DConnexion SpaceNavigator, SpaceExplorer, Spacemouse Pro, Navigator for Notebooks, SpaceTraveler devices, and SpaceMouseWireless (buttons and 6DOF differential analog).
5DT glove tracker (analog device with five values for the fingers plus pitch and roll). Also, the 5DT16 glove is supported along with a driver to convert the 16 analog values into button presses.
B&G systems CerealBox button/dial/slider/joystick controllers plugged into any server-capable machine.
Biosciences Tools thermal-control system.
CH Products Fighterstick
DirectInput enabled joysticks (including force-feedback joysticks) on Windows (see howto). Also, DirectInput enabled rumble packs on Windows.
Dream Cheeky USB drum kit.
Euclideon Holographics Hologram Devices (Hologram Table, Hologram Room, Hologram Wall).
Fraunhofer IMK ADBox and Fakespace Cubic Mouse.
Global Haptics GeOrb (buttons and analogs).
Haydon-Kerk IDEA drives, linear-motion controllers.
Hillcrest Labs' Freespace devices.
Joystick controllers: Contour ShuttleXpress, Futaba InterLink Elite, Griffin PowerMate, Logitech Extreme 3D Pro, Saitek ST290 Pro, Microsoft SideWinder Precision 2, Microsoft SideWinder, Microsoft Xbox S (raw controller on all O/S), Microsoft Xbox 360 (raw controller on all O/S), Afterglow Ax1 For Xbox 360 (raw controller on all O/S).
Keyboard on Windows.
Logitech Magellan and Spaceball 6DOF motion controllers with buttons (including the Spaceball 5000).
LUDL XY stages through LibUSB.
Mouse devices on Linux (when logged in at the console) and Windows.
National Instruments A/D cards.
Nintendo Wii Remote (also acting as a tracker).
NRL ImmersionBox serial driver (support for buttons only).
Other joysticks on Windows.
PC joysticks running under Linux.
Radamec Serial Position Interface video/movie camera tracker (unscaled zoom/focus, untested motion base).
Retrolink GameCube.
Serial mice: The buttons on several styles of serial mice plugged into a serial port.
SGI button and dial boxes (on an SGI or other machines).
Totally Neat Gadget (TNGs) from MindTel (buttons and analogs).
Xbox 360 game controller.
UNC's hand-held controller (or any device with up to 5 buttons; can be plugged into the parallel port on a Linux or Windows box—its use is deprecated, use the TNG3 instead).
Wanda analog/button device.
Win32 sound servers, based on the Miles SDK (obsolete), the AuSIM sound hardware, and Microsoft DirectSound.
XKeys devices from P.I. Engineering: the Desktop, Professional, Jog&Shuttle, Joystick, and foot pedal.
Zaber.com's linear positioning elements.
References
External links
VRPN Home
VRPN Github Wiki
Sensics
Department of Computer Science at UNC
Home page of Russell M. Taylor II
Computer networks |
287790 | https://en.wikipedia.org/wiki/Filesystem%20Hierarchy%20Standard | Filesystem Hierarchy Standard | The Filesystem Hierarchy Standard (FHS) is a reference describing the conventions used for the layout of a UNIX system. It has been made popular by its use in GNU/Linux distributions, but it is used by other UNIX variants as well. It is maintained by the Linux Foundation. The latest version is 3.0, released on 3 June 2015.
Directory structure
In the FHS, all files and directories appear under the root directory /, even if they are stored on different physical or virtual devices. Some of these directories only exist on a particular system if certain subsystems, such as the X Window System, are installed.
Most of these directories exist in all Unix-like operating systems and are generally used in much the same way; however, the descriptions here are those used specifically for the FHS and are not considered authoritative for platforms other than Linux.
FHS compliance
Most Linux distributions follow the Filesystem Hierarchy Standard and declare it their own policy to maintain FHS compliance. GoboLinux and NixOS provide examples of intentionally non-compliant filesystem implementations.
Some distributions generally follow the standard but deviate from it in some areas. The FHS is a "trailing standard", and so documents common practices at a point in time. Of course, times change, and distribution goals and needs call for experimentation. Some common deviations include:
Modern Linux distributions include a /sys directory as a virtual filesystem (sysfs, comparable to /proc, which is a procfs), which stores and allows modification of the devices connected to the system, whereas many traditional Unix-like operating systems use /sys as a symbolic link to the kernel source tree.
Many modern Unix-like systems (like FreeBSD via its ports system) install third-party packages into /usr/local, while keeping code considered part of the operating system in /usr.
Some Linux distributions no longer differentiate between /lib and /usr/lib and have /lib symlinked to /usr/lib.
Some Linux distributions no longer differentiate between /bin and /usr/bin and between /sbin and /usr/sbin. They may symlink /bin to /usr/bin and /sbin to /usr/sbin. Other distributions choose to consolidate all four, symlinking them to /usr/bin.
Modern Linux distributions include a /run directory as a temporary filesystem (tmpfs), which stores volatile runtime data, following the FHS version 3.0. According to the FHS version 2.3, such data were stored in /var/run, but this was a problem in some cases because this directory is not always available at early boot. As a result, these programs have had to resort to trickery, such as using /dev/.udev, /dev/.mdadm, /dev/.systemd or /dev/.mount directories, even though the device directory is not intended for such data. Among other advantages, this makes the system easier to use normally with the root filesystem mounted read-only.
For example, below are the changes Debian made in its 2013 Wheezy release:
/dev/.* → /run/*
/dev/shm → /run/shm
/dev/shm/* → /run/*
/etc/* (writeable files) → /run/*
/lib/init/rw → /run
/var/lock → /run/lock
/var/run → /run
/tmp → /run/tmp
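On a live system these deviations are easy to observe; the minimal sketch below inspects the paths discussed above (output depends on the distribution):

import os
from pathlib import Path

# Paths that distributions commonly consolidate or relocate.
for p in ["/bin", "/sbin", "/lib", "/var/run", "/var/lock"]:
    path = Path(p)
    if path.is_symlink():
        print(p, "->", os.readlink(p))   # e.g. /bin -> usr/bin, /var/run -> /run
    elif path.exists():
        print(p, "is a real directory")
    else:
        print(p, "does not exist here")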
History
FHS was created as the FSSTND (short for "Filesystem Standard"), largely based on similar standards for other Unix-like operating systems. Notable examples are the description of the file system layout that has existed since the release of Version 7 Unix (in 1979), and those of SunOS and its successor, Solaris.
Release history
See also
Unix directory structure
XDG Base Directory Specification
Notes
References
External links
Full specification texts
objectroot – a proposal for a new filesystem hierarchy, based on object-oriented design principles
The Dotted Standard Filename Hierarchy, yet another very different hierarchy (used in cLIeNUX) (mirror)
Computer standards
Linux
System administration
Unix file system technology |
13764535 | https://en.wikipedia.org/wiki/Information%20Security%20Automation%20Program | Information Security Automation Program | The Information Security Automation Program (ISAP, pronounced "I Sap") is a U.S. government multi-agency initiative to enable automation and standardization of technical security operations. While a U.S. government initiative, its standards-based design can benefit all information technology security operations. ISAP's high-level goals include standards-based automation of security checking and remediation, as well as automation of technical compliance activities (e.g. FISMA). ISAP's low-level objectives include enabling standards-based communication of vulnerability data, customizing and managing configuration baselines for various IT products, assessing information systems and reporting compliance status, using standard metrics to weight and aggregate potential vulnerability impact, and remediating identified vulnerabilities.
ISAP's technical specifications are contained in the related Security Content Automation Protocol (SCAP). ISAP's security automation content is either contained within, or referenced by, the National Vulnerability Database.
ISAP is being formalized through a trilateral memorandum of agreement (MOA) between Defense Information Systems Agency (DISA), the National Security Agency (NSA), and the National Institute of Standards and Technology (NIST). The Office of the Secretary of Defense (OSD) also participates and the Department of Homeland Security (DHS) funds the operation infrastructure on which ISAP relies (i.e., the National Vulnerability Database).
External links
Information Security Automation Program web site
Security Content Automation Protocol web site
National Vulnerability Database web site
This document incorporates text from Information Security Automation Program Overview (v1 beta), a public domain publication of the U.S. government.
Agencies of the United States government
Computer security
National security |
49470425 | https://en.wikipedia.org/wiki/Mass%20marketing%20fraud | Mass marketing fraud | Mass-marketing fraud (or mass market fraud) is a scheme that uses mass-communication media – including telephones, the Internet, mass mailings, television, radio, and personal contact – to contact, solicit, and obtain money, funds, or other items of value from multiple victims in one or more jurisdictions. Frauds in which victims are persuaded to part with their money by promises of cash, prizes, services, or high returns on investment are part of mass market fraud.
Characteristics and classification
Such scams or consumer frauds generally fall into four categories:
Pretending to sell something you do not have, and taking the money
Supplying goods or services which are of lower quality than those paid for, or failing to supply the goods and services sought
Persuading customers to buy something they do not really want through oppressive marketing techniques
Disguising one's identity in order to perpetrate a fraud
Alternatively, mass market fraud may also be classified as follows:
On basis of communication mechanism: 'Internet fraud', 'mail fraud' and 'tele-marketing fraud'
On basis of Scheme central to fraud: 'lottery fraud', 'insurance fraud', 'loan fraud', 'mobile tower fraud', 'quiz fraud'
Victim reporting reveals that Internet-based solicitations are among the most common: in the United States, web sites and e-mails accounted for 60 percent of reported contacts in 2009, and Canada noted a 46 percent spike in Internet-related complaints from 2008 to 2009.
As per the United States Department of Justice, mass-marketing fraud schemes generally fall into three main categories: (i) advance-fee fraud schemes; (ii) bank and financial account schemes; and (iii) investment opportunities. Advance-fee fraud schemes are the most popular. This type of scheme is based on the concept that a victim will be promised a substantial benefit – such as a million-dollar prize, lottery winnings, a substantial inheritance, or some other item of value – but must pay in advance some purported fee or series of fees before the victim can receive that benefit.
Mobile phones, the internet, and electronic media have given the following distinct advantages to mass market fraudsters:
Low setup cost and mass scale reach
Fraud at a distance
Ease in financial transaction
Nature
Mass marketing fraud schemes are predominantly transnational/interstate in nature. Perpetrators operate from multiple foreign countries and utilize the financial infrastructure of one or more countries to transfer and launder funds. Law enforcement investigations have exposed such schemes operating not only in multiple countries in North America, Europe, and Africa, but also in countries and jurisdictions as diverse as Brazil, Costa Rica, Hong Kong, India, Israel, the Philippines, Thailand, and the United Arab Emirates.
Many of the frauds perpetrated online work on the principle of a large number of victims losing relatively small sums of money. Research by the Office of Fair Trading (2006) illustrates that in many of the frauds relatively small sums of money are lost – frequently less than £100. The fraudster's tactic is to take a small enough sum that the victim will not bother to report the fraud.
Mass-marketing fraud – whether committed via the Internet, telemarketing "boiler rooms," the mail, television or radio advertising, mass meetings, or even one-on-one talks over people's kitchen tables – has two elements in common. First, the criminals who conduct any mass-marketing fraud scheme aim to defraud multiple individuals or businesses to maximize their criminal revenues. Second, the schemes invariably depend on persuading victims to transfer money or funds to the criminals based on promises of valuable goods, services, or benefits, and then never delivering the promised goods, services, or benefits to the victims.
The reporting of mass marketing fraud is quite low; as per the OFT, only 1 to 3 percent of incidents are reported.
Common examples
Common mass marketing scams in Australia, Canada, US, UK and other western countries include foreign lotteries and sweepstakes, traditional West African fraud schemes and 419 letter scams, charity scams, romance scams, boiler room or share sale fraud, credit card interest reduction schemes, auction and retail website schemes, investment schemes, counterfeit check fraud schemes (including schemes targeting attorneys), emergency assistance schemes, merchandise purchase/product misrepresentation schemes, psychic/clairvoyant schemes, bank and financial account schemes, recovery schemes, sale of merchandise/overpayment schemes, and service schemes. Bank and financial account schemes not only involve fraud but also identity theft through phishing and vishing.
In developing countries, including India, lottery scams through e-mails, SMS messages, and other unsolicited messages, 'Nigerian letter fraud', phishing, and pyramid scams are popular. Recently, a mass quiz-competition fraud, commonly known as "Chehra pechano" (identify the face of the actor/actress), has been run by fraudsters through print media (quiz campaigns in the classified/business columns of daily newspapers), live TV programs on popular Indian channels such as Sony TV, Sahara One, B4U, Mahua, Houseful, Cinema TV and Maha Movie, and online websites.
At present, there is no authoritative statistical data available on financial losses due to mass market fraud. However, in 2006, the United Kingdom Office of Fair Trading (OFT) estimated that each year 3.2 million United Kingdom adults (6.5 percent of the adult population) fall victim to mass-marketing schemes, collectively losing £3.5 billion. Similarly, a June 2008 study by the Australian Bureau of Statistics (ABS) found 453,100 victims (56.2 percent) who reportedly lost AU$977 million (US$905.7 million as of June 27, 2008) in selected schemes such as lottery, pyramid, and phishing schemes. In India there is no single agency to estimate the loss due to such mass market fraud.
As per the IMMFWG, there are strong indications that the order of magnitude of global mass-marketing fraud losses is in the tens of billions of dollars per year.
Initiatives to combat fraud
Mass-marketing frauds are committed by fraudsters whose counterparts receive the illicit proceeds in countries on every continent. The methods of mass-marketing fraud and its money laundering components are similar to those of drug trafficking. A project led by the Financial Crimes Enforcement Network (FinCEN) examined global money flows relating to mass marketing fraud schemes. One major step to combat mass marketing fraud was the formation of the International Mass-Marketing Fraud Working Group (IMMFWG) in September 2007. It consists of law enforcement, regulatory, and consumer protection agencies from Australia, Belgium, Canada, the Netherlands, Nigeria, the UK, the US, and Europol.
The US and Canada have made changes to substantive and procedural laws. Canada made changes to the Competition Act (Bill C-20); the Extradition Act and Canada Evidence Act (Bill C-40); and the Omnibus Criminal Code Amendments (Bill C-51), as well as legislation related to proceeds of crime and wiretapping, which affects telemarketing cases. The US provided significant enhancements to fraud-related criminal offenses in the Senior Citizens Against Marketing Scams Act of 1994 and the Deceptive Mail Prevention and Enforcement Act of 2000. One significant piece of legislation affecting mass-marketing fraud enacted since 2003 is the U.S. SAFEWEB Act.
Competition Bureau Canada partnered with the Ontario Provincial Police (OPP) and the Royal Canadian Mounted Police to upgrade the earlier telemarketing-fraud call centre, PhoneBusters, into the Canadian Anti-Fraud Centre (CAFC), now a central agency in Canada that collects information and criminal intelligence on such matters as mass marketing fraud (i.e., telemarketing), advance fee fraud (i.e., West African letters), Internet fraud, and identity theft complaints.
The Royal Canadian Mounted Police has also started a Reporting Economic Crime On-Line (RECOL) initiative. Canada has created a system known as CANSHARE, a web-based database connecting consumer affairs departments across the country.
The US has had a 'Consumer Sentinel' complaint database maintained by the FTC since 1997. Consumer Sentinel collects information about all types of consumer fraud and identity theft from the FTC and over 125 other organizations and makes those data available to law enforcement partners around the world for use in their investigations.
In the United States, the FBI and a nonprofit private-sector entity, the National White Collar Crime Center, have started an 'Internet Crime Complaint Center' (IC3) for online reporting of complaints about Internet-related crime such as Internet fraud and lottery/sweepstakes scams.
Canadian and American law enforcement authorities have shown great creativity in developing and participating in a wide range of public education and prevention measures against cross-border fraud. For example, in Canada, the "Stop Phone Fraud – It's a Trap" marketing campaign provided public- and private-sector entities with a variety of educational materials and resources. In the United States, the National Consumers League, with a grant from the United States Department of Justice, developed a "Telemarketing Fraud Education Kit" for distribution to government agencies; nonprofit consumer, civic, community, and labor organizations; and schools.
ScamWatch is a central portal of Government of Australia administered by Australian Competition and Consumer Commission, for issuing news and alert on fraud and scams and online reporting of the frauds and scams. Stay Smart Online is the Australian Government's online safety and security website, designed to help everyone understand the risks and simple steps we can take to protect our personal and financial information online. The Little Book of Scams has been published by Australian Competition and Consumer Commission. The book is available free of cost. The little black book of scams highlights a variety of popular scams that regularly target Australian consumers and small business in areas such as fake lotteries, internet shopping, mobile phones, online banking, employment, investment opportunities.
The Office of Fair Trading in, which is closed now, was earlier responsible for all Mass Marketing Frauds in . OFT had provided the advice and support to the fraud victim through letter or email, leaflets, DVD, website with advice and prevention advice to chronic victims. OFT published the 'Research on impact of mass marketed scams' in December 2006, document on support for victims of fraud and fraud typologies and victims of fraud. The government has a separate act known as the 'Fraud Act' to cover frauds mainly committed by false representation, failing to disclose information and abuse of position.
The Action Fraud is the 's National fraud and internet crime reporting centre. Action fraud covers all type of A-Z frauds including Mass Marketing Frauds. Action Fraud has partnered with charity 'Victim Support' to help the victims.
The Government of New Zealand also has a Scamwatch portal on its Consumer Protection website, maintained by the Ministry of Business, Innovation and Employment. The aim of Scamwatch is to provide consumers with the information they need to protect themselves from scams, so they can recognise a set-up and avoid the hook and the inevitable sting of a scam. Consumer Protection has teamed up with its trusted partner Netsafe to allow victims to report through the ORB. Consumer Affairs does not have investigative or enforcement powers; victim reports are sent to Consumer Protection, which may publish information about them on the Alerts section of its Scamwatch website. The ORB works with partner agencies to direct victim reports to the organisation best able to investigate or advise on various types of online incidents, including scams and frauds, spam messages, objectionable material, privacy breaches, and problems while shopping online.
References
Fraud |
58950754 | https://en.wikipedia.org/wiki/Don%20Granitz | Don Granitz | Donald L. Granitz (August 24, 1928 – January 28, 2016) was a Christian missionary and an American football player and coach. He served as the head football coach at Taylor University in Upland, Indiana from 1952 to 1954. After serving as a missionary in Brazil, Granitz returned to the United States to become the athletic director at Bethel College in Mishawaka, Indiana in 1971.
References
1928 births
2016 deaths
Bethel Pilots athletic directors
Taylor Trojans baseball coaches
Taylor Trojans football coaches
Taylor Trojans football players
People from Ambridge, Pennsylvania
Players of American football from Pennsylvania |
54256448 | https://en.wikipedia.org/wiki/Indian%20Institute%20of%20Information%20Technology%2C%20Bhagalpur | Indian Institute of Information Technology, Bhagalpur | Indian Institute of Information Technology, Bhagalpur (abbreviated IIIT Bhagalpur) is one of the Indian Institutes of Information Technology (IIIT), a group of institutes of higher education in India focused on information technology. It was established by the Ministry of Education (MoE), formerly the Ministry of Human Resource Development, Government of India, together with a few industry partners, as a Not-for-profit Public Private Partnership (N-PPP) institution. IIIT Bhagalpur was declared an Institute of National Importance (INI) in September 2020. The institute started functioning in July 2017 on a 50-acre campus at Bhagalpur College of Engineering. It was mentored by the Indian Institute of Technology Guwahati (IITG) until April 2019, when it got its own director.
History
The Government of India decided to establish new IIITs in different states in 2010, and the National Screening Committee (NSC), in its second meeting on 14 March 2012, asked the Government of Bihar for a detailed project report (DPR). The proposal for IIIT Bhagalpur was passed by the NSC on 2 September 2016.
IIIT Bhagalpur has been set up on a public–private partnership (PPP) basis. Fifty percent of the stakes are held by the Ministry of Education (MoE), Government of India, thirty-five percent by the state government, and the rest by the industry partner Beltron. The Indian Institute of Technology Guwahati (IITG) was declared IIIT Bhagalpur's mentor institute. Pinakeswar Mahanta (later director of NIT Arunachal Pradesh), then Dean of Faculty Affairs at IITG, served as director of IIIT Bhagalpur, followed by Saurabh Basu, Dean of the Outreach Education Program at IIT Guwahati. Arvind Choubey has since been appointed as the institute's permanent director.
Emblem
The logo of the institute was designed by Mohijeet Das, a designer who graduated from the Department of Design, IIT Guwahati. The logo takes inspiration from artifacts closely associated with Bhagalpur, such as the ancient university of Vikramashila and the Bhagalpuri saree.
Campus
The permanent campus of IIIT Bhagalpur will be set up on land near Bhagalpur College of Engineering (BCE), and an initial budget was allocated for the construction of the IIIT. Construction of the permanent campus was scheduled to start on 7 October 2021 and is expected to be completed within 18 months. The IIIT building will be resistant to earthquakes and floods. The state government had initially identified land in Chandi block, Nalanda district, but could not go ahead with the acquisition due to protests from landowners.
Temporary campus
Currently, academic operations are running in a temporary building; Bhagalpur College of Engineering (BCE) has provided buildings on its campus to IIIT Bhagalpur.
Academics
IIIT Bhagalpur offers three B.Tech courses, in Electronics and Communication Engineering, Computer Science Engineering, and Mechatronics, with an intake capacity of 150 students in Computer Science Engineering, 75 in Electronics and Communication Engineering, and 38 in Mechatronics. The institute started M.Tech and Ph.D. programs in August 2021.
Student life
The Dining and Recreation Centre, also called the Common Activity Centre (CAC), contains a student mess and facilities for extra-curricular activities, such as a music room and a gymnasium.
Student council
The Student Council is the main elected student body that supervises all clubs and festivals. It has a budget which it distributes to the various clubs, and students can form new clubs, based on their interests, with the formal permission of the council. The Student Senate is an elected student body that focuses on academic matters and on issues such as hostel and mess committee governance.
Student clubs
To enhance extra-curricular activities and skills, different clubs have been formed. The Student Council is divided into three societies: the Cultural Society, the Sports Society, and the Technical Society. The Technical Society includes four clubs: the AI/ML Club, Coding Club, Robotics Club, and Web Development Club. The Cultural Society includes seven clubs: Art and Craft, Dance Club, Dramatics Club, Literature Club, Music and Singing Club, Photography Club, and Quiz Club. The Sports Society covers badminton, volleyball, cricket, and athletics.
References
Engineering colleges in Bihar
Bhagalpur
Universities and colleges in Bhagalpur
Educational institutions established in 2017
2017 establishments in Bihar |
1324595 | https://en.wikipedia.org/wiki/Siebel%20Systems | Siebel Systems | Siebel CRM Systems, Inc. was a software company principally engaged in the design, development, marketing, and support of customer relationship management (CRM) applications—notably Siebel CRM.
The company was founded by Thomas Siebel and Patricia House in 1993. At first known mainly for its sales force automation products, the company expanded into the broader CRM market. By the late 1990s, Siebel Systems was the dominant CRM vendor, peaking at 45% market share in 2002.
On September 12, 2005, Oracle Corporation announced it had agreed to buy Siebel Systems for $5.8 billion. "Siebel" is now a brand name owned by Oracle Corporation.
Siebel Systems is Oracle's on-premises CRM system, and Oracle's cloud applications for CRM are Oracle Advertising and Customer Experience (CX).
History
Siebel Systems, Inc. began in sales force automation software, then expanded into marketing and customer service applications, including CRM. From the time it was founded in 1993, the company grew quickly.
Benefiting from the explosive growth of the CRM market in the late 1990s, Siebel Systems was named the fastest growing company in the United States in 1999 by Fortune magazine.
Thomas Siebel, Pat House
Siebel's "first experience with sales technology was in the late 1980s, when he worked for ... Oracle." At the time, Siebel Systems co-founder Pat House was also working for Oracle. Siebel left Oracle to try his hand at a startup; in 1992, House left Oracle as well, and together they worked on what became Siebel Systems in 1993.
Key dates
1993: Siebel Systems, Inc. is founded.
1995: Siebel delivers Siebel Sales Enterprise software for sales force automation.
1995: Siebel 2.0 (Release end of 1995)
Siebel Customer Relationship Management (CRM)
Siebel Sales Enterprise
1996: Siebel becomes a publicly traded company.
1997: Siebel 3.0 (Release Feb 1997)
1998: Siebel 98
1998: Siebel Systems acquires Scopus Technology, Inc. "for its customer-service and support products."
1999: Siebel 99
2000: Siebel 6 (also known as Siebel 2000)
2000: Revenue surpasses the $1 billion mark.
2001: Siebel 7.0 (Released 2001, was the first web-based version)
2002: Siebel 7.5 (Released in 2002)
2004: Siebel 7.7 (Released in 2004)
2005: Siebel 7.8 (Released in 2005)
2006: Oracle acquires Siebel Systems.
2007: Oracle Siebel 8.0 (Released in 2007)
2007: Oracle Business Intelligence Enterprise Edition Plus (released 2007)
2007: Oracle Business Intelligence Applications (Formerly Siebel Analytics) (released 2007)
2008: Oracle Siebel 8.1 (Released in 2008)
2011: Oracle Siebel 8.2 (Released in 2011)
Oracle Sales Cloud
Oracle Fusion CRM
Oracle CRM On Demand
2015: Oracle Siebel 15.0 (Released 11 May 2015)
2016: Oracle Siebel 16.0 (Released 29 Apr 2016)
2017: Oracle Siebel 17.0 (Released 31 Jul 2017)
2018: Oracle Siebel 18.0 (Released 23 Jan 2018)
2019: Oracle Siebel 19.0 (Released 21 Jan 2019)
2020: Oracle Siebel 20.0 (Released 21 Jan 2020)
2021: Oracle Siebel 21.0 (Released 21 Jan 2021)
2022: Oracle Siebel 22.0 (Released 21 Jan 2022)
See also
Oracle Advertising and Customer Experience (CX)
Oracle CRM
References
External links
Siebel Developer’s Reference
Companies based in Redwood Shores, California
Software companies based in California
Oracle acquisitions
CRM software companies
Software companies established in 1977
1977 establishments in California
Business services companies established in 1977
2006 mergers and acquisitions
Software companies of the United States |
1862027 | https://en.wikipedia.org/wiki/Id%20Tech%203 | Id Tech 3 | id Tech 3, popularly known as the Quake III Arena engine, is a game engine developed by id Software for their video game Quake III Arena. It has been adopted by numerous games. During its time, it competed with the Unreal Engine; both engines were widely licensed.
While id Tech 3 is based on the id Tech 2 engine, a large amount of the code was rewritten. Its successor, id Tech 4, was derived from id Tech 3, as was Infinity Ward's IW engine used in Call of Duty 2 onwards.
At QuakeCon 2005, John Carmack announced that the id Tech 3 source code would be released under the GNU General Public License v2.0 or later, and it was released on August 19, 2005. Originally distributed by id via FTP, the code can be downloaded from id's GitHub account.
Features
Graphics
Unlike most other game engines released at the time (including its primary competitor, the Unreal Engine), id Tech 3 requires an OpenGL-compliant graphics accelerator to run; the engine does not include a software renderer.
id Tech 3 introduced spline-based curved surfaces in addition to planar volumes, which are responsible for many of the surfaces present within the game.
Shaders
The graphical technology of the game is based tightly around a "shader" system, where the appearance of many surfaces can be defined in text files referred to as "shader scripts". Shaders are described and rendered as several layers; each layer contains a texture, a "blend mode" that determines how to superimpose it over the previous layer, and texture orientation modes such as environment mapping, scrolling, and rotation. These features can readily be seen within the game, with many bright and active surfaces in each map and even on character models. The shader system goes beyond visual appearance, defining the contents of volumes (e.g. a water volume is defined by applying a water shader to its surfaces), light emission, and which sound to play when a volume is trodden upon. To assist in the calculation of these shaders, id Tech 3 implements a fast inverse square root function, which attracted significant attention in the game development community for its clever use of integer operations.
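A minimal sketch of that routine is shown below. The magic constant and the single Newton–Raphson refinement step follow the widely cited Q_rsqrt function from the GPL source release; memcpy is used here in place of the original pointer casts, a presentational choice rather than part of the original code.

#include <stdint.h>
#include <string.h>

/* Fast inverse square root in the style of Q_rsqrt from the released
 * Quake III Arena source. The float's bit pattern is treated as an
 * integer to produce a cheap first guess at 1/sqrt(x), and one
 * Newton-Raphson step then refines the guess. */
static float q_rsqrt(float number)
{
    const float threehalfs = 1.5f;
    float x2 = number * 0.5f;
    float y = number;
    uint32_t i;

    memcpy(&i, &y, sizeof i);            /* reinterpret float bits as int */
    i = 0x5f3759df - (i >> 1);           /* magic constant and shift      */
    memcpy(&y, &i, sizeof y);
    y = y * (threehalfs - (x2 * y * y)); /* one Newton-Raphson iteration  */
    return y;
}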
Video
In-game videos all use a proprietary format called "RoQ", which was originally created by Graeme Devine, the co-designer of Quake 3, for the game The 11th Hour. Internally, RoQ uses vector quantization to encode video and DPCM to encode audio. While the format itself is proprietary, it was successfully reverse-engineered in 2001, and the actual RoQ decoder is present in the Quake 3 source code release. RoQ has seen little use outside games based on the id Tech 3 or id Tech 4 engines, but is supported by several video players (such as MPlayer), and a handful of third-party encoders exist. One notable exception is the Unreal Engine-based game Postal 2: Apocalypse Weekend, which uses RoQ files for its intro and outro cutscenes, as well as for a joke cutscene that plays after a mission at the end of part one.
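The DPCM half of the scheme can be illustrated with a generic decoder: rather than storing absolute sample values, the stream stores differences that the decoder accumulates. This is a sketch of the general principle only; the actual RoQ bitstream uses its own quantization table and channel interleaving, which are not reproduced here.

#include <stddef.h>
#include <stdint.h>

/* Generic DPCM decoding: each encoded byte is a difference from the
 * previous sample, so reconstruction is a running sum. Illustrative
 * only; not the actual RoQ audio format. */
static void dpcm_decode(const int8_t *deltas, size_t n, int16_t *out)
{
    int16_t sample = 0;
    for (size_t k = 0; k < n; k++) {
        sample += deltas[k];   /* accumulate the stored difference */
        out[k] = sample;
    }
}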
Models
id Tech 3 loads 3D models in the MD3 format. The format uses vertex movements (sometimes called per-vertex animation) as opposed to skeletal animation in order to store animation. The animation features in the MD3 format are superior to those in id Tech 2's MD2 format because an animator is able to have a variable number of key frames per second instead of MD2's standard 10 key frames per second. This allows for more complex animations that are less "shaky" than the models found in Quake II.
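At run time, per-vertex animation amounts to linearly blending the vertex positions of the two key frames that bracket the current animation time. The sketch below illustrates the idea with plain floats; the real MD3 format stores positions as scaled 16-bit integers, so an actual loader decompresses them first.

typedef struct { float x, y, z; } vec3;

/* Blend between two key frames of a per-vertex animation. 'frac' is
 * the normalized position between frame a and frame b (0.0 to 1.0). */
static void lerp_frame(const vec3 *a, const vec3 *b, int num_verts,
                       float frac, vec3 *out)
{
    for (int v = 0; v < num_verts; v++) {
        out[v].x = a[v].x + frac * (b[v].x - a[v].x);
        out[v].y = a[v].y + frac * (b[v].y - a[v].y);
        out[v].z = a[v].z + frac * (b[v].z - a[v].z);
    }
}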
Another important feature of the MD3 format is that models are broken up into three different parts which are anchored to each other. Typically, this is used to separate the head, torso and legs so that each part can animate independently for the sake of animation blending (i.e. a running animation on the legs and a shooting animation on the torso). Each part of the model has its own set of textures.
The character models are lit and shaded using Gouraud shading while the levels (stored in the BSP format) are lit either with lightmaps or Gouraud shading depending on the user's preference. The engine is able to take colored lights from the lightgrid and apply them to the models, resulting in a lighting quality that was, for its time, very advanced.
In the GPL version of the source code, most of the code dealing with the MD4 skeletal animation files was missing. It is presumed that id simply never finished the format, although almost all licensees derived their own skeletal animation systems from what was present. Ritual Entertainment did this for its game Heavy Metal: F.A.K.K.²; the game's SDK formed the basis of an MD4 support implementation completed by someone using the pseudonym Gongo.
Dynamic shadows
The engine is capable of three different kinds of shadows. One just places a circle with faded edges at the characters' feet, commonly known as the "blob shadow" technique. The other two modes project an accurate polygonal shadow across the floor. The difference between the latter two modes is that one relies on opaque, solid black shadows, while the other attempts (with mixed success) to project depth-pass stencil shadow volumes in a medium-transparent black. Neither of these techniques clips the shadow volumes, causing the shadows to extend down walls and through geometry.
Other rendering features
Other visual features include volumetric fog, mirrors, portals, decals, and wave-form vertex distortion.
Sound
id Tech 3's sound system outputs to two channels using a looping output buffer, mixed from 96 tracks with stereo spatialization and Doppler effect. All of the sound mixing is done within the engine, which can create problems for licensees hoping to implement EAX or surround sound support. Several popular effects, such as echoes, are also absent.
A major flaw of the sound system is that the mixer is not given its own thread, so if the game stalls for too long (particularly while navigating the menus or connecting to a server), the small output buffer will begin to loop, a very noticeable artifact. This problem was also present in the Doom 3, Quake, and Quake II engines.
Networking
id Tech 3 uses a "snapshot" system to relay information about game "frames" to the client over UDP. The server updates object interaction at a fixed rate independent of the rate at which clients update the server with their actions, and then attempts to send the state of all objects at that moment (the current server frame) to each client. The server attempts to omit as much information as possible about each frame, relaying only differences from the last frame the client confirmed as received (delta encoding). All data packets are compressed by Huffman coding with static pre-calculated frequency data to reduce bandwidth use even further.
Quake 3 also integrated a relatively elaborate cheat-protection system called "pure server". Any client connecting to a pure server automatically has pure mode enabled, and while pure mode is enabled only files within data packs can be accessed. Clients are disconnected if their data packs fail one of several integrity checks. The cgame.qvm file, with its high potential for cheat-related modification, is subject to additional integrity checks. Developers must manually deactivate pure server to test maps or mods that are not in data packs using the PK3 file format. Later versions supplemented pure server with PunkBuster support, though all the hooks to it are absent from the source code release because PunkBuster is closed source software, and including support for it in the source code release would have caused any redistributors/reusers of the code to violate the GPL.
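The delta encoding behind the snapshot system described above can be illustrated with a toy encoder that writes a change mask plus only the fields that differ from the client's last acknowledged state. The entity structure and mask layout below are invented for the example; the actual Quake III wire format is considerably richer.

#include <stdint.h>

typedef struct {
    int16_t pos[3];
    uint8_t health;
} entity_state;

/* Toy snapshot delta encoder: emit one mask byte saying which fields
 * changed relative to the baseline (the last state the client
 * acknowledged), followed by only those fields. */
static int delta_encode(const entity_state *base, const entity_state *cur,
                        uint8_t *out)
{
    int n = 1;          /* out[0] is reserved for the change mask */
    uint8_t mask = 0;

    for (int i = 0; i < 3; i++) {
        if (cur->pos[i] != base->pos[i]) {
            mask |= (uint8_t)(1u << i);
            out[n++] = (uint8_t)(cur->pos[i] & 0xff);
            out[n++] = (uint8_t)((cur->pos[i] >> 8) & 0xff);
        }
    }
    if (cur->health != base->health) {
        mask |= (uint8_t)(1u << 3);
        out[n++] = cur->health;
    }
    out[0] = mask;
    return n;           /* number of bytes written */
}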
Virtual machine
id Tech 3 uses a virtual machine to control object behavior on the server, effects and prediction on the client, and the user interface. This presents many advantages, as mod authors do not need to worry about crashing the entire game with bad code, clients can show more advanced effects and game menus than was possible in Quake II, and the user interface for mods is entirely customizable.
Virtual machine files are developed in ANSI C, using LCC to compile them to a 32-bit RISC pseudo-assembly format. A tool called q3asm then converts them to QVM files, which are multi-segmented files consisting of static data and instructions based on a reduced set of the input opcodes. Unless operations that require a specific endianness are used, a QVM file will run the same on any platform supported by Quake 3. The virtual machine also contained bytecode compilers for the x86 and PowerPC architectures; on other platforms, QVM instructions are executed via an interpreter.
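At its core, executing bytecode on the interpreter path is a dispatch loop over a virtual stack. The sketch below uses a four-opcode instruction set invented purely for illustration; the real QVM instruction set produced by q3asm is larger.

#include <stdio.h>

/* Tiny stack-machine interpreter illustrating how a QVM-style virtual
 * machine executes bytecode. The opcodes here are hypothetical. */
enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void vm_run(const int *code)
{
    int stack[64];
    int sp = 0;   /* stack pointer   */
    int pc = 0;   /* program counter */

    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];         break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[--sp]);      break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    const int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    vm_run(program);   /* prints 5 */
    return 0;
}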
ioquake3
Ioquake3 is a game engine project which aims to build upon the id Tech 3 source code release in order to remove bugs, clean up source code and to add more advanced graphical and audio features via SDL and OpenAL. ioquake3 is also intended to act as a clean base package, upon which other projects may be built. The game engine supports Ogg Vorbis format and video capture of demos in .avi format.
The project was started shortly after the source code release with the goal of creating a bug-free, enhanced open source Quake III engine source code distribution upon which new games and projects can be based. In addition, the project aims to provide an improved environment in which Quake III: Arena, the Team Arena expansion pack and all the popular mods can be played. Notable features added by the project include builtin VoIP support, Anaglyph stereo rendering (for viewing with 3D glasses), and numerous security fixes. A list of some of the features is available on the project's website.
Ioquake3 has been the basis of several game projects based on the id Tech 3 engine, such as OpenArena (mimicking Quake III Arena), Tremulous, Smokin' Guns, Urban Terror, Turtle Arena and World of Padman as well as game engine projects such as efport (a Star Trek: Voyager – Elite Force Holomatch engine recreation project), ioJedi Outcast, ioJedi Academy, ioDoom3 and OpenMoHAA. The engine and its associated games have been included in several Linux and BSD distributions.
The source code for the Return to Castle Wolfenstein and Wolfenstein: Enemy Territory engines was released under the GNU GPL-3.0-or-later on August 12, 2010. The ioquake3 developers announced the start of corresponding engine projects (iortcw, iowolfet, Enemy Territory: Legacy) soon after.
The ioquake3 project has also been used in the academic arena as the basis for a variety of research in institutions such as Stanford University's Center for Computer Research in Music and Acoustics (CCRMA), Notre Dame as the foundation for VR research, and Swinburne University of Technology's Centre for Advanced Internet Architectures.
There are also collaborative efforts from researchers at Carnegie Mellon University and the University of Toronto that use ioquake3 as a platform for their published research. Students have used ioquake3 as the basis for advanced graphics work for their theses as well, such as Stephan Reiter's work, which has been noted by the LLVM project for its synthesis of the ioquake3 engine, ray-tracing rendering techniques, and LLVM.
Though the name "ioquake3" is based on Ryan "Icculus" Gordon's site icculus.org, Ryan does not lead the project. Instead, he maintains a mentor role and provides hosting for the mailing lists and the SVN repository used by the project.
Games using the engine
Games based on the source release
OpenArena – An open source standalone game based heavily on the Quake III Arena-style deathmatch. The gameplay attempts to emulate Quake III Arena in that the player scores frags to win the game using a balanced set of weapons, each designed for different situations. OpenArena is also capable of running some Quake III Arena based mods such as Tremulous 1.0. OpenArena runs on ioquake3 and version 0.8 has been successfully ported to Android.
Space Trader – An action/strategy game from HermitWorks Entertainment.
Smokin' Guns – An open source first person game intended to be a semi-realistic simulation of the "Old West's" atmosphere. Originally a Quake III Arena modification, it became a stand-alone game and was ported back to the ioquake3 engine in 2009.
Urban Terror – A Quake III Arena total conversion mod; while designed and released to work with the retail Quake III Arena, it is also compatible with open source engine alternatives. The gameplay can be compared to Counter-Strike, with a larger focus on movement with its parkour features. Urban Terror runs on the ioquake3 engine.
Tremulous – Tremulous is an open sourced asymmetric alien vs human team based first-person shooter with elements of real time strategy. Each team may construct and defend a base, consisting of essential and support structures which aid the players in some way. Victory for a team is typically done by eliminating enemy spawn structures and remaining players. Tremulous started as a Quake III Arena mod, but as of version 1.1 the game has become stand-alone on the ioquake3 engine.
Games using a proprietary license
Based on id Tech 3
Quake III Arena (1999) – id Software
Quake III: Team Arena (2000) – id Software
Quake III Revolution (2001) – Bullfrog Productions
Star Trek: Voyager – Elite Force (2000) – Raven Software
Star Trek: Voyager – Elite Force – Expansion Pack (2001) – Raven Software
Return to Castle Wolfenstein (2001) – Gray Matter Interactive (SP) / Nerve Software (MP)
Trinity: The Shatter Effect (canceled) – Gray Matter Interactive
Soldier of Fortune II: Double Helix (2002) – Raven Software
Star Wars Jedi Knight II: Jedi Outcast (2002) – Raven Software
Star Wars Jedi Knight: Jedi Academy (2003) – Raven Software
Resident Evil: Dead Aim (2003) – Capcom / Cavia
Wolfenstein: Enemy Territory (2003) – Splash Damage
Call of Duty (2003) – Infinity Ward
Call of Duty: United Offensive (2004) – Gray Matter Interactive / Treyarch
Call of Duty Classic (2009) – Infinity Ward
Severity (canceled) – Cyberathlete Professional League
Iron Grip: Warlord (2008) – Isotx
Dark Salvation (2009) – Mangled Eye Studios
Quake Live (2010) – id Software
Using id Tech 3 with ÜberTools
Heavy Metal: F.A.K.K.² (2000) – Ritual Entertainment
American McGee's Alice (2000) – Rogue Entertainment
007: Agent Under Fire (2001) – EA Redwood Shores
Medal of Honor: Allied Assault (2002) – 2015, Inc.
Medal of Honor: Allied Assault – Spearhead (2002) – EA Los Angeles
Medal of Honor: Allied Assault – Breakthrough (2003) – TKO Software
Star Trek: Elite Force II (2003) – Ritual Entertainment
007: Everything or Nothing (2004) – EA Redwood Shores
See also
id Tech 4
List of game engines
References
External links
Original Quake III source code repository (id Tech 3) on idsoftware.com
id's current Quake III source code repository (id Tech 3) on github.com
ioquake3 project page, community continuation
1999 software
Formerly proprietary software
Free game engines
Game engines for Linux
Id Tech
Quake (series)
Virtual reality |
105985 | https://en.wikipedia.org/wiki/Lex%20%28software%29 | Lex (software) | Lex is a computer program that generates lexical analyzers ("scanners" or "lexers").
Lex is commonly used with the yacc parser generator. Lex, originally written by Mike Lesk and Eric Schmidt and described in 1975, is the standard lexical analyzer generator on many Unix systems, and an equivalent tool is specified as part of the POSIX standard.
Lex reads an input stream specifying the lexical analyzer and writes source code which implements the lexical analyzer in the C programming language.
In addition to C, some old versions of Lex could also generate a lexer in Ratfor.
Open source
Although originally distributed as proprietary software, some versions of Lex are now open source. Open-source versions of Lex, based on the original proprietary code, are now distributed with open-source operating systems such as OpenSolaris and Plan 9 from Bell Labs. One popular open-source version of Lex, called flex, or the "fast lexical analyzer", is not derived from the proprietary code.
Structure of a Lex file
The structure of a Lex file is intentionally similar to that of a yacc file: files are divided into three sections, separated by lines that contain only two percent signs, as follows:
The definitions section defines macros and imports header files written in C. It is also possible to write any C code here, which will be copied verbatim into the generated source file.
The rules section associates regular expression patterns with C statements. When the lexer sees text in the input matching a given pattern, it will execute the associated C code.
The C code section contains C statements and functions that are copied verbatim to the generated source file. These statements presumably contain code called by the rules in the rules section. In large programs it is more convenient to place this code in a separate file linked in at compile time.
Example of a Lex file
The following is an example Lex file for the flex version of Lex. It recognizes strings of numbers (positive integers) in the input, and simply prints them out.
/*** Definition section ***/
%{
/* C code to be copied verbatim */
#include <stdio.h>
%}
%%
/*** Rules section ***/
/* [0-9]+ matches a string of one or more digits */
[0-9]+ {
/* yytext is a string containing the matched text. */
printf("Saw an integer: %s\n", yytext);
}
.|\n { /* Ignore all other characters. */ }
%%
/*** C Code section ***/
int main(void)
{
/* Call the lexer, then quit. */
yylex();
return 0;
}
If this input is given to flex, it will be converted into a C file, lex.yy.c. This can be compiled into an executable which matches and outputs strings of integers. For example, given the input:
abc123z.!&*2gj6
the program will print:
Saw an integer: 123
Saw an integer: 2
Saw an integer: 6
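Assuming the example is saved as count.l (a hypothetical file name), a typical build with flex looks like the following; linking with -lfl supplies the default yywrap() that the example does not define itself.

flex count.l                # generates lex.yy.c
cc lex.yy.c -o count -lfl   # compile and link against the flex library
./count                     # reads stdin and prints the integers it sees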
Using Lex with other programming tools
Using Lex with parser generators
Lex and parser generators, such as Yacc or Bison, are commonly used together. Parser generators use a formal grammar to parse an input stream, something which Lex cannot do using simple regular expressions, as Lex is limited to simple finite state automata.
It is typically preferable to have a parser, one generated by Yacc for instance, accept a stream of tokens (a "token-stream") as input, rather than having to process a stream of characters (a "character-stream") directly. Lex is often used to produce such a token-stream.
Scannerless parsing refers to parsing the input character-stream directly, without a distinct lexer.
Lex and make
make is a utility that can be used to maintain programs involving Lex. make assumes that a file with the extension .l is a Lex source file. The make internal macro LFLAGS can be used to specify Lex options to be invoked automatically by make.
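For example, a minimal makefile relying on make's built-in inference rules might look like the sketch below (GNU make assumed, with a hypothetical scanner source named scan.l; note that the recipe line must begin with a tab).

# make's built-in rules derive scan.c from scan.l by running
# $(LEX) $(LFLAGS); only the final link step is spelled out here.
LEX    = flex
LFLAGS =                 # extra Lex/flex options go here

scanner: scan.o
	$(CC) -o $@ scan.o -lfl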
See also
Flex lexical analyser
Yacc
Ragel
PLY (Python Lex-Yacc)
Comparison of parser generators
References
External links
Using Flex and Bison at Macworld.com
Compiling tools
Unix programming tools
Unix SUS2008 utilities
Plan 9 commands
Finite automata
Lexical analysis |
8369001 | https://en.wikipedia.org/wiki/Kerkythea | Kerkythea | Kerkythea is a standalone rendering system that supports raytracing and Metropolis light transport, uses physically accurate materials and lighting, and is distributed as freeware. Currently, the program can be integrated with any software that can export files in obj and 3ds formats, including 3ds Max, Blender, LightWave 3D, SketchUp, Silo and Wings3D.
History
Kerkythea started development in 2004 and released its first version in April 2005. Initially it was only compatible with Microsoft Windows, but an updated release in October 2005 made it Linux compatible. As of January 2016, it is also available for Mac OS X. In May 2009 it was announced that the development team had started a new commercial renderer, although Kerkythea would continue to be updated and remain free and available. A new version called 'Boost' was released in 2013.
In June 2018 the main developer announced the third version of Kerkythea called "Kerkythea 2018 Boost".
Exporters
There are 6 official exporters for Kerkythea.
Blender
Blend2KT
Exporter to XML format
3D Studio Max
3dsMax2KT 3dsMax Exporter
Maya
Maya2KT Maya Exporter
GMax
GMax2KT GMax Exporter
SketchUp
SU2KT SketchUp Exporter
SU2KT Light Components
Features
Supported 3D file formats
3DS format
OBJ format
XML (internal) format
SIA (Silo) format (partially supported)
Supported image formats
All supported by FreeImage library (JPEG, BMP, PNG, TGA and HDR included)
Supported materials
Matte
Perfect reflections/refractions
Blurry reflections/refractions
Translucency (SSS)
Dielectric material
Thin glass material
Phong shading material
Ward anisotropic material
Anisotropic Ashikhmin material
Lafortune material
Layered material (additive combination of the above with use of alpha maps)
Supported shapes
Triangulated meshes
Sphere
Planes
Supported lights
Omni light
Spot light
Projector light
Point diffuse
Area diffuse
Point light with spherical soft shadows
Ambient lighting
Sky lighting (Physical sky, SkySphere bitmap (normal or HDRI))
Supported textures
Constant colors
Bitmaps (normal and HDRI)
Procedurals (Perlin noise, marble, wood, windy, checker, wireframe, normal ramp, Fresnel ramp)
Any weighted or multiplicative combination of the above
Supported features
Bump mapping
Normal mapping
Clip mapping
Bevel mapping (an innovative KT feature)
Edge outlining
Depth of field
Fog
Isotropic volume scattering
Faked caustics
Faked translucency
Dispersion
Anti-aliasing (Texture filtering, edge antialiasing)
Selection rendering
Surface and material instancing
Supported camera types
Planar projection (Pinhole, thin lens)
Cylindrical pinhole
Spherical pinhole
Supported rendering techniques
Classic ray tracing
Path tracing (Kajiya)
Bidirectional path tracing (Veach & Guibas)
Metropolis light transport (Kelemen, Kalos et al.)
Photon mapping (Jensen) (mesh maps, photon maps, final gathering, irradiance caching, caustics)
Diffuse interreflection (Ward)
Depth rendering
Mask rendering
Clay rendering
Application environment
OpenGL real-time viewer (basic staging capabilities)
Integrated material editor
Easy rendering customization
Sun/sky customization
Script system
Command line mode
See also
YafaRay, free and open-source ray tracing software that uses an XML scene description language.
POV-Ray, free and open-source ray tracing software
LuxRender, free and open-source "unbiased" rendering system
References
External links
Kerkythea's Forum, where you can find the new version (release candidate)
3D rendering software for Linux
Freeware 3D graphics software
Global illumination software
Proprietary freeware for Linux
Rendering systems |
36215164 | https://en.wikipedia.org/wiki/SquidGuard | SquidGuard | SquidGuard is a URL redirector software, which can be used for content control of websites users can access. It is written as a plug-in for Squid and uses blacklists to define sites for which access is redirected. SquidGuard must be installed on a Unix or Linux computer such as a server computer. The software's filtering extends to all computers in an organization, including Windows and Macintosh computers.
It was originally developed by Pål Baltzersen and Lars Erik Håland, and was implemented and extended by Lars Erik Håland in the 1990s at Tele Danmark InterNordia. Version 1.4, the current stable version, was released in 2009, and version 1.5 was in development as of 2010. New features in version 1.4 included optional authentication via a MySQL database.
SquidGuard is free software licensed under the GNU General Public License (GPL) version 2. It is included in many Linux distributions including Debian, openSUSE and Ubuntu.
Blacklist Sources
The URL filtering capabilities of SquidGuard depend largely on the quality of the blacklists used with it. Several options are available: free lists can be found at Shallalist.de or at Université Toulouse 1 Capitole, and commercial lists at Squidblacklist.org.
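A downloaded blacklist is wired into SquidGuard through its configuration file, where each category becomes a destination group that access rules can pass or block. The following is a minimal, hypothetical squidGuard.conf; the paths, the category name and the redirect URL are assumptions for illustration.

# Minimal hypothetical squidGuard.conf
dbhome /var/lib/squidguard/db        # where the blacklist files live
logdir /var/log/squidguard

dest ads {                           # one category from a blacklist
    domainlist ads/domains
    urllist    ads/urls
}

acl {
    default {
        pass !ads all                # allow everything except "ads"
        redirect http://localhost/blocked.html
    }
}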
See also
DansGuardian
Internet censorship
Content-control software
References
External links
WhatsApp Monitoring
Content-control software
Free network-related software
Linux security software
Unix network-related software
Unix security-related software |
28091049 | https://en.wikipedia.org/wiki/Richmond%20Shakespear | Richmond Shakespear | Sir Richmond Campbell Shakespear (11 May 1812 – 16 December 1861) was an Indian-born British Indian Army officer. He helped to influence the Khan of Khiva to abolish the capture and sale of Russian slaves in Khiva. This likely forestalled the Russian conquest of Khiva, although it ultimately did not prevent it.
Background
Richmond Shakespear came from a family with deep ties to British activities in Asia. While his ancestors were rope-makers hailing from Shadwell (where there was a ropewalk named after them, Shakespear's Walk), by the seventeenth century the Shakespears were involved in British military and civil service in Asia, eventually raising their families in India, although the children were still educated in England.
Richmond Shakespear was the youngest son of John Talbot Shakespear and Amelia Thackeray, who both served in the Bengal Civil Service. Amelia was the eldest daughter of William Makepeace Thackeray (I), the grandfather of the novelist William Makepeace Thackeray (II). He was born in India on 11 May 1812, educated at the Charterhouse School, and entered Addiscombe Military Seminary in 1827. He gained a commission as Second Lieutenant for the Bengal Artillery in 1828, and moved back to India in 1829. He was positioned in various stations in Bengal until 1837, when he became Assistant at Gorakhpur.
Intervention in Khiva
In 1839, he became Political Assistant to the British Mission to Herat, with his main duty as artillery instruction.
He was sent by his commander, Major d'Arcy Todd, to negotiate with the Khan of Khiva for the release of the Russian captives held there, whose captivity the British worried might provoke a Russian invasion. Shakespear was successful in the negotiations and marched to Orenburg with 416 freed Russian men, women, and children. Because of this accomplishment, he was posted to Moscow and St. Petersburg, where he was received by Tsar Nicholas I.
When he returned to London, he was knighted by Queen Victoria on 31 August 1841.
After Khiva
In 1842, Shakespear became Secretary to Major General George Pollock, who was commanding forces in Peshawar for the relief of Sir Robert Sale at Jalalabad. In 1843, he was appointed Deputy Commissioner of Sagar and promoted to brevet captain. Later that year, he was transferred to Gwalior, where he was promoted to regimental captain in 1846, and where he remained until 1848.
Between 1848 and 1849, he served in the Second Anglo-Sikh War. For his services he received the Punjab Medal.
Civil duties
Shakespear returned to civil duties in Gwalior toward the end of 1849. For the next ten years, he continued to advance in political and military roles in the British Raj, culminating in his appointment as Agent to the Governor-General for Central India.
He became a Companion of the Bath in 1860.
He died on 16 December 1861 from bronchitis, and was survived by his wife, six daughters, and three sons. His youngest son was John Shakespear (1861–1942).
References
Further reading
External links
Collection Papers of Richmond Shakespear at the UK National Archives
An account of the freeing of the Russian slaves at Khiva
The Shakespeare Family History Site
1812 births
1861 deaths
Bengal Artillery officers
Deaths from bronchitis
British military personnel of the First Anglo-Sikh War
British military personnel of the Second Anglo-Sikh War
People educated at Charterhouse School
Alumni of Addiscombe Military Seminary
Companions of the Order of the Bath
The Great Game |
858364 | https://en.wikipedia.org/wiki/Sun%20Public%20License | Sun Public License | The Sun Public License (SPL) is a software license that applies to some open-source software released by Sun Microsystems (such as NetBeans before the 5.5 version). It has been approved by the Free Software Foundation (FSF) as a free software license, and by the Open Source Initiative (OSI) as an open source license. It is derived from the Mozilla Public License.
This license has been superseded by the Common Development and Distribution License, which is also derived from the MPL.
References
External links
Sun Public License, version 1.0
Free and open-source software licenses
Copyleft software licenses |
9937024 | https://en.wikipedia.org/wiki/GNU%20IceCat | GNU IceCat | GNU IceCat, formerly known as GNU IceWeasel, is a completely free and open-source version of the Mozilla Firefox web browser distributed by the GNU Project. It is compatible with Linux, Windows, Android and macOS.
IceCat is released as a part of GNUzilla, GNU's rebranding of a code base that used to be the Mozilla Application Suite. As an internet suite, GNUzilla also includes a mail and newsgroup program, and an HTML composer.
Mozilla produces free and open-source software, but the binaries include trademarked artwork. The GNU Project attempts to keep IceCat in synchronization with upstream development of Firefox (long-term support versions) while removing all trademarked artwork and non-free add-ons. It also maintains a large list of free software plugins. In addition, it includes several security features not found in the mainline Firefox browser.
History
Origins of the name
The Mozilla Corporation owns the trademark to the Firefox name and denies the use of the name "Firefox" to unofficial builds that fall outside certain guidelines. Unless distributions use the binary files supplied by Mozilla, fall within the stated guidelines, or else have special permission, they must compile the Firefox source with a compile-time option enabled that creates binaries without the official branding of Firefox and related artwork, using either the built-in free artwork, or artwork provided at compile time.
This policy led to a long debate within the Debian Project in 2004 and 2005. During this debate, the name "Iceweasel" was coined to refer to rebranded versions of Firefox. The first known use of the name in this context is by Nathanael Nerode, in reply to Eric Dorland's suggestion of "Icerabbit". It was intended as a parody of "Firefox". Iceweasel was subsequently used as the example name for a rebranded Firefox in the Mozilla Trademark Policy, and became the most commonly used name for a hypothetical rebranded version of Firefox. By January 1, 2005, rebranding was being referred to as the "Iceweasel route".
In August 2005, the GNUzilla project adopted the GNU IceWeasel name for a rebranded distribution of Firefox that made no references to nonfree plugins.
The term "ice weasel" appeared earlier in a line which cartoonist Matt Groening fictionally attributed to Friedrich Nietzsche: "Love is a snowmobile racing across the tundra and then suddenly it flips over, pinning you underneath. At night, the ice weasels come."
Debian was originally given permission to use the trademarks, and adopted the Firefox name. However, because the artwork in Firefox had a proprietary copyright license at the time, which was not compatible with the Debian Free Software Guidelines, the substituted logo had to remain. In 2006, Mozilla withdrew their permission for Debian to use the Firefox name due to significant changes to the browser that Mozilla deemed outside the boundaries of its policy, changes which Debian felt were important enough to keep, and Debian revived the Iceweasel name in its place.
Subsequently, on 23 September 2007, one of the developers of the GNU IceWeasel package announced that the name would be changed to GNU IceCat from IceWeasel in the next release, so as to avoid confusion with Debian's separately maintained, unrelated rebranding of Firefox. The name change took place as planned and IceCat is the current name.
IceCat was ported to the Firefox 3 codebase during Google Summer of Code of 2008.
Version history
Distribution
GNU IceCat is freely downloadable for the IA-32, x86-64, and PowerPC architectures. Both binaries and source are available, though the current build is available only for Linux. Some distributions offer binary and source packages through their repositories, such as Trisquel, Parabola GNU/Linux-libre and Fedora.
IceCat is also available for macOS 10.4 and higher. Any Mac user with these versions of macOS can install IceCat through Fink.
For the Mac, it is available for both IA-32 and PowerPC architectures.
Unofficial builds are available for Windows (Vista or newer) and Android (2.3 or newer).
Additional security features
IceCat includes additional security features, such as the option to block third-party zero-length image files that result in third-party cookies, also known as web bugs (this feature is available in Firefox 1.0, 1.5, and 3.0, but the UI option was absent in 2.0). GNU IceCat also provides warnings for URL redirection.
In version 3.0.2-g1, the certificate of the certificate authority CAcert.org was added to the list of trusted root certificates. Concerns about that decision were raised in a discussion on the savannah-hackers-public mailing list.
The GNU LibreJS extension detects and blocks non-free non-trivial JavaScript.
IceCat also has functionality to set a different user agent string for different domains via about:config. For example, setting a mobile user agent string for a desired DNS domain makes it possible to view the mobile version of a website on a desktop operating system.
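As an illustration, such an override could be expressed as a preference keyed by domain. The exact preference name below is an assumption and may differ between versions, so it should be checked against the browser's own about:config listing.

// Hypothetical user.js entry: serve a mobile user agent string only
// to example.com, leaving the default for all other domains.
user_pref("general.useragent.override.example.com",
          "Mozilla/5.0 (Android 10; Mobile; rv:91.0) Gecko/91.0 Firefox/91.0");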
Licensing
Gnuzilla was available under the MPL/GPL/LGPL tri-license that Mozilla used for source code. Unlike Mozilla, IceCat's default icons are under the same tri-license.
See also
Comparison of web browsers
History of Mozilla Firefox
Mozilla software rebranded by Debian
SeaMonkey, a more traditional continuation of Mozilla Suite
References
External links
GNU.org, Homepage of Gnuzilla and IceCat
Mozilla
Free email software
Software forks
Free web browsers
Gecko-based software
IceCat
POSIX web browsers
Web browsers based on Firefox
Free and open-source Android software |
604522 | https://en.wikipedia.org/wiki/Indian%20Navy | Indian Navy | The Indian Navy is the naval branch of the Indian Armed Forces. The President of India is the Supreme Commander of the Indian Navy. The Chief of the Naval Staff, a four-star admiral, commands the navy. As a blue-water navy, it operates significantly in the region from the Persian Gulf and the Horn of Africa to the Strait of Malacca, and routinely conducts anti-piracy operations and partners with other navies in the region. It also conducts routine two- to three-month-long deployments in the South and East China Seas as well as the western Mediterranean Sea simultaneously.
The primary objective of the navy is to safeguard the nation's maritime borders and, in conjunction with other Armed Forces of the union, act to deter or defeat any threats or aggression against the territory, people or maritime interests of India, both in war and peace. Through joint exercises, goodwill visits and humanitarian missions, including disaster relief, the Indian Navy promotes bilateral relations between nations.
As of June 2019, the Indian Navy had 67,252 active and 75,000 reserve personnel in service and a fleet of 150 ships and submarines, and 300 aircraft. As of November 2021, the operational fleet consisted of 1 active aircraft carrier and 1 amphibious transport dock, 8 landing ship tanks, 10 destroyers, 13 frigates, 1 ballistic missile submarine, 16 conventionally powered attack submarines, 24 corvettes, 1 mine countermeasure vessel, 4 fleet tankers and numerous other auxiliary vessels, small patrol boats and other sophisticated ships. It is considered a multi-regional, power-projection blue-water navy.
History
Early maritime history
The maritime history of India dates back 6,000 years, to the birth of the art of navigation during the Indus Valley Civilisation. A Kutch mariner's logbook from the 19th century recorded that India's first tidal dock was built at Lothal around 2300 BC, during the Indus Valley Civilisation, near the present-day harbour of Mangrol on the Gujarat coast. The Rig Veda credits Varuna, the Hindu god of water and the celestial ocean, with knowledge of the ocean routes, and describes the use of ships with a hundred oars in naval expeditions by Indians. There are also references to the side wings of a ship, called Plava, which stabilised the vessel during storms; the Plava is considered a precursor of modern-day stabilizers. The first recorded use of a mariner's compass, called the Matsya Yantra, dates to the 4th and 5th centuries AD.
During his conquest of India, Alexander the Great built a harbour at Patala, and his army retreated to Mesopotamia on ships built in Sindh. Records from the later period of his conquests show that the Maurya emperor Chandragupta Maurya established an Admiralty Division under a Superintendent of Ships as part of his war office. Many historians of ancient India recorded Indian trade relations with numerous countries, even those as far away as Java and Sumatra, and there are also references to trade routes to countries in the Pacific and Indian Oceans. India also had trade relations with the Greeks and the Romans; the Roman historian Gaius Plinius Secundus mentioned Indian traders carrying away large masses of gold and silver from Rome in payment for skins, precious stones, clothes, indigo, sandalwood, herbs, perfumes, and spices.
Between the 5th and 10th centuries AD, the Kalinga Empire conquered Western Java, Sumatra and Malaya, and the Andaman and Nicobar Islands served as an important halt for trade ships en route to these lands as well as to China. During 844–848 AD, the daily revenue from these territories was estimated at around 200 maunds of gold. During 984–1042 AD, under the reigns of Raja Raja Chola I, Rajendra Chola I and Kulothunga Chola I, naval expeditions of the Chola dynasty captured territory in Burma, Sumatra, Sri Lanka, and Malaya, while suppressing the pirate activities of Sumatran warlords.
By the 14th and 15th centuries, Indian shipbuilding skills and maritime ability were sophisticated enough to produce ships with a capacity to carry over a hundred men. Ships also had compartments included in their design, so that even if one compartment was damaged, the ship would remain afloat; these features were developed by Indian shipbuilders before Europeans were aware of the idea.
However, by the end of the thirteenth century Indian naval power had started to decline, and had reached its low point by the time the Portuguese entered India. Soon after they set foot in India, the Portuguese began to hunt down all Asian vessels not permitting their trade. Amidst this, in 1529, a naval battle at Bombay Harbour resulted in the surrender of Thane, Karanja, and Bandora, and by 1534 the Portuguese had taken complete control of the harbour. The Zamorin of Calicut challenged Portuguese trade when Vasco da Gama refused to pay the customs levy as per the trade agreement. This resulted in two major naval battles: the first, the Battle of Cochin, was fought in 1504, and the second took place four years later off Diu. Both battles exposed the weakness of Indian maritime power and simultaneously helped the Portuguese gain mastery over Indian waters.
In the late seventeenth century, Indian naval power saw a remarkable revival. The alliance of the Moghuls and the Sidis of Janjira emerged as a major power on the west coast. On the southern front, the first sovereign of the Maratha Empire, Chhatrapati Shivaji Maharaj, started creating his own fleet, commanded by notable admirals such as Sidhoji Gujar and Kanhoji Angre. The Maratha Navy under Angre's leadership kept the English, Dutch and Portuguese away from the Konkan coast, but it declined markedly after his death in 1729.
1612 origins to independence
The origins of the Indian Navy date to 1612, when an English vessel under the command of Captain Best encountered and defeated the Portuguese. This incident, along with the trouble pirates caused to merchant vessels, forced the British to maintain a fleet near Surat, Gujarat. The East India Company (HEIC) formed a naval arm, and its first squadron of fighting ships reached the Gujarat coast on 5 September 1612, with the objective of protecting British merchant shipping off the Gulf of Cambay and up the Narmada and Tapti rivers. As the HEIC continued to expand its rule and influence over different parts of India, the responsibilities of the Company's Marine increased too.
Over time, the British predominantly operated from Bombay, and in 1686 the HEIC's naval arm was renamed the Bombay Marine. At times the Bombay Marine engaged Dutch, French, Maratha, and Sidi vessels; much later, it was also involved in the First Anglo-Burmese War of 1824. In 1834, the Bombay Marine became Her Majesty's Indian Navy, which saw action in the First Opium War of 1840 and in the Second Anglo-Burmese War in 1852. For unrecorded reasons, the Navy's name reverted to the Bombay Marine from 1863 to 1877, after which it was named Her Majesty's Indian Marine. At that time, the Marine operated in two divisions: the Eastern Division at Calcutta under the Superintendent of the Bay of Bengal, and the Western Division at Bombay under the Superintendent of the Arabian Sea.
In 1892 the Marine was rechristened the Royal Indian Marine, and by the end of the 19th century it operated over fifty ships. The Marine participated in World War I with a fleet of patrol vessels, troop carriers, and minesweepers. In 1928, D. N. Mukherji became the first Indian to be granted a commission, in the rank of engineer sub-lieutenant. Also in 1928, the RIM was accorded combatant status, which entitled it to be considered a true fighting force and to fly the White Ensign of the Royal Navy. In 1934, the Marine was upgraded to a full naval force, thus becoming the Royal Indian Navy (RIN), and was presented the King's Colours in recognition of its services to the British Crown.
During the early stages of World War II, the tiny Royal Indian Navy consisted of five sloops, one survey vessel, one depot ship, one patrol vessel and numerous assorted small craft; personnel strength stood at only 114 officers and 1,732 sailors. The onset of war led to an expansion in the number of vessels and personnel. By June 1940, the navy had doubled in terms of both personnel and materiel, and by 1942 it had expanded to nearly six times its pre-war strength. The navy was actively involved in operations during the war around the world, and was heavily involved in operations around the Indian Ocean, including convoy escorts, mine-sweeping and supply, as well as supporting amphibious assaults.
When hostilities ceased in August 1945, the Royal Indian Navy had expanded to a personnel strength of over 25,000 officers and sailors. Its fleet comprised seven sloops, four frigates, four corvettes, fourteen minesweepers, sixteen trawlers, two depot ships, thirty auxiliary vessels, one hundred and fifty landing craft, two hundred harbour craft and several offensive and defensive motor launches. During World War II the Navy suffered 275 casualties: twenty-seven officers, two warrant officers and 123 ratings killed in action, two ratings missing in action, and a further 14 officers, two warrant officers and 123 ratings wounded. For their role in the war, the officers and ratings of the Navy received the following honours and decorations: a KBE (Mil.), a knighthood, a CB (Mil.), 10 CIEs, two DSOs, a CBE, 15 DSCs, an OBE, 28 DSMs, eight OBIs, two IOMs, 16 BEMs, 10 Indian Defence Service Medals, a Royal Humane Society Medal, 105 mentions in dispatches and 118 assorted commendations. Immediately after the war, the navy underwent a rapid, large-scale demobilisation of vessels and personnel.
From the inception of India's naval force, some senior Indian politicians had voiced concerns about the degree of "Indianisation" of the Navy and its subordination to the Royal Navy in all important aspects. On the eve of WWII, the RIN had no Indian senior line officers and only a single Indian senior engineer officer. Even by the war's end, the Navy remained a predominantly British-officered service; in 1945, no Indian officer held a rank above engineer commander and only a few Indian officers in the executive branch held substantive senior line officer rank. This situation, coupled with inadequate levels of training and discipline, poor communication between officers and ratings, instances of racial discrimination and the ongoing trials of ex-Indian National Army personnel ignited the Royal Indian Navy mutiny by Indian ratings in 1946. A total of 78 ships, 20 shore establishments and 20,000 sailors were involved in the strike, which spread over much of India. After the strike began, the sailors received encouragement and support from the Communist Party in India; unrest spread from the naval ships, and led to student and worker hartals in Bombay. The strike ultimately failed as the sailors did not receive substantial support from either the Indian Army or from political leaders in Congress or the Muslim League. On 21 July 1947, H.M.S. Choudhry and Bhaskar Sadashiv Soman, both of whom would eventually command the Pakistani and Indian Navies, respectively, became the first Indian RIN officers to attain the acting rank of captain.
Independence to the end of the 20th century
Following independence and the partition of India on 15 August 1947, the RIN's depleted fleet of ships and remaining personnel were divided between the newly independent Dominion of India and Dominion of Pakistan. 21 percent of the Navy's officer cadre and 47 percent of its sailors opted to join the portion of the fleet which became the Royal Pakistan Navy. The Indian share of the Navy consisted of 32 vessels along with 11,000 personnel. Effective from the same date, all British officers were compulsorily retired from the Navy and its reserve components, with Indian officers being promoted to replace British senior officers. However, a number of British flag and senior officers were invited to continue serving in the RIN, as only nine of the Navy's Indian commissioned officers had more than 10 years' service, with the majority of them only having served from five to eight years. Rear Admiral John Talbot Savignac Hall headed the Navy as its first Commander-in-Chief (C-in-C) post-Independence. In January 1948, D.N. Mukherji, the first Indian officer in the RIN, became the first Indian to be promoted acting engineer captain. In May 1948, Captain Ajitendu Chakraverti became the first Indian officer to be appointed to the rank of commodore. When India became a republic on 26 January 1950, the Royal prefix was dropped and the name Indian Navy was officially adopted. The prefix for naval vessels was changed from His Majesty's Indian Ship (HMIS) to Indian Naval Ship (INS). At the same time, the imperial crown in insignia was replaced with the Lion Capital of Ashoka and the Union Jack in the canton of the White Ensign was replaced with the Indian Tricolour.
By 1955, the Navy had largely overcome its post-Independence personnel shortfalls. During the early years following independence, many British officers continued to serve in the Navy on secondment from the Royal Navy, owing to the post-Independence retirement or transfer of many experienced officers to the Royal or Pakistan navies. Hall was succeeded as C-in-C by Admiral Sir Edward Parry in 1948, who handed over to Admiral Sir Charles Thomas Mark Pizey in 1951. Admiral Pizey also became the first Chief of the Naval Staff in 1955, and was succeeded by Vice Admiral Sir Stephen Hope Carlill the same year. The pace of "Indianisation" continued steadily through the 1950s. By 1952, senior naval appointments had begun to be filled by Indian officers, and by 1955, basic training for naval cadets was conducted entirely in India. In 1956, Ram Dass Katari became the first Indian flag officer, and was appointed the first Indian Commander of the Fleet on 2 October. On 22 April 1958, Vice Admiral Katari assumed command of the Indian Navy from Carlill as the first Indian Chief of the Naval Staff. With the departure in 1962 of the last British officer on secondment to the Navy, Commodore David Kirke, the Chief of Naval Aviation, the Indian Navy finally became an entirely Indian service.
The Indian Navy's first engagement in action was against the Portuguese Navy during the liberation of Goa in 1961. Operation Vijay followed years of escalating tension caused by the Portuguese refusal to relinquish their colonies in India. On 21 November 1961, Portuguese troops fired on the passenger liner Sabarmati near Anjadip Island, killing one person and injuring another. During Operation Vijay, the Indian Navy supported troop landings and provided fire support. An Indian cruiser sank a Portuguese patrol boat, while Indian frigates destroyed the Portuguese frigate Afonso de Albuquerque. The 1962 Sino-Indian War was fought largely in the Himalayas, and the Navy had only a defensive role in the war.
At the outbreak of the Indo-Pakistani War of 1965, the Navy had one aircraft carrier, two cruisers, nineteen destroyers and frigates, and one tanker. Of these ships, ten were under refit; the others were largely involved in coastal patrols. During the war, the Pakistani Navy attacked the Indian coastal city of Dwarka, although there were no military resources in the area. While this attack was insignificant, India deployed naval resources to patrol the coast and deter further bombardment. Following these wars of the 1960s, India resolved to strengthen the profile and capabilities of its armed forces.
The dramatic change in the Indian Navy's capabilities and stance was emphatically demonstrated during the Indo-Pakistani War of 1971. Under the command of Admiral Sardarilal Mathradas Nanda, the navy successfully enforced a naval blockade of West and East Pakistan. Pakistan's lone long-range submarine, PNS Ghazi, sank off the coast of Visakhapatnam around midnight on 3–4 December 1971, following an attack by an Indian destroyer. On 4 December, the Indian Navy executed Operation Trident, a devastating attack on the Pakistan Naval Headquarters at Karachi that sank a minesweeper, a destroyer and an ammunition supply ship. The attack also irreparably damaged another destroyer and the oil storage tanks at the Karachi port. To commemorate this, 4 December is celebrated as Navy Day. This was followed by Operation Python on 8 December 1971, which further degraded the Pakistan Navy's capabilities. The Indian frigate INS Khukri, commanded by Captain M. N. Mulla, was sunk by the Pakistani submarine PNS Hangor, while INS Kirpan was damaged on the west coast. In the Bay of Bengal, the aircraft carrier INS Vikrant was deployed to enforce the naval blockade of East Pakistan. Sea Hawk and Alizé aircraft from INS Vikrant sank numerous gunboats and Pakistani merchant marine ships. To demonstrate its solidarity as an ally of Pakistan, the United States sent Task Force 74, centred on the aircraft carrier USS Enterprise, into the Bay of Bengal. In response, Soviet Navy submarines trailed the American task force, which moved away from the Indian Ocean towards Southeast Asia to avert a confrontation. In the end, the Indian naval blockade of Pakistan choked off the supply of reinforcements to the Pakistani forces, which proved decisive in the overwhelming defeat of Pakistan.
Since playing a decisive role in that victory, the navy has served as a deterrent force, maintaining peace for India in a region of turmoil. In 1983, the Indian Navy planned Operation Lal Dora to support the government of Mauritius against a feared coup. In 1986, in Operation Flowers are Blooming, the Indian Navy averted an attempted coup in the Seychelles. In 1988, India launched Operation Cactus to successfully thwart a coup d'état by PLOTE in the Maldives. Naval maritime reconnaissance aircraft detected the ship hijacked by the PLOTE rebels, and Indian marine commandos recaptured it and arrested the rebels. During the 1999 Kargil War, the Western and Eastern fleets were deployed in the northern Arabian Sea as part of Operation Talwar. They safeguarded India's maritime assets from a potential Pakistani naval attack and deterred Pakistan from attempting to block India's sea-trade routes. The Indian Navy's aviators flew sorties, and marine commandos fought alongside Indian Army personnel in the Himalayas.
In October 1999, the Navy along with the Indian Coast Guard rescued MV Alondra Rainbow, a pirated Japanese cargo ship.
21st century onwards
In the 21st century, the Indian Navy has played an important role in maintaining peace for India on the maritime front, despite continuing turmoil in its neighbourhood. It has been deployed for humanitarian relief in times of natural disasters and crises across the globe, as well as to keep India's maritime trade routes free and open.
The Indian Navy was a part of the joint forces exercises, Operation Parakram, during the 2001–2002 India–Pakistan standoff. More than a dozen warships were deployed to the northern Arabian Sea.
In October, the Indian Navy took over operations to secure the Strait of Malacca, to relieve US Navy resources for Operation Enduring Freedom.
The navy plays an important role in providing humanitarian relief in times of natural disasters, including floods, cyclones and tsunamis. In the aftermath of the 2004 Indian Ocean earthquake and tsunami, the Indian Navy launched massive disaster relief operations to help affected Indian states as well as the Maldives, Sri Lanka and Indonesia. Over 27 ships, dozens of helicopters, at least six fixed-wing aircraft and over 5,000 naval personnel were deployed in relief operations. These included Operation Madad in Andhra Pradesh and Tamil Nadu, Operation Sea Waves in the Andaman and Nicobar Islands, Operation Castor in the Maldives, Operation Rainbow in Sri Lanka and Operation Gambhir in Indonesia. Operation Gambhir was one of the largest and fastest force mobilisations that the Indian Navy has undertaken; Indian naval rescue vessels and teams reached neighbouring countries less than 12 hours after the tsunami struck. Lessons from the response led to a decision to enhance amphibious force capabilities, including the acquisition of landing platform docks such as INS Jalashwa, as well as smaller amphibious vessels.
During the 2006 Israel–Lebanon conflict, the Indian Navy launched Operation Sukoon and evacuated 2,280 persons from war-torn Lebanon between 20 and 29 July 2006, including 436 Sri Lankan, 69 Nepalese and 7 Lebanese nationals. Also in 2006, Indian naval doctors served for 102 days on board the hospital ship USNS Mercy to conduct medical camps in the Philippines, Bangladesh, Indonesia and East Timor. In 2007, the Indian Navy supported relief operations for the survivors of Cyclone Sidr in Bangladesh. In 2008, Indian naval vessels were the first to launch international relief operations for victims of Cyclone Nargis in Myanmar. The same year, the navy deployed warships, including INS Tabar, into the Gulf of Aden to combat piracy off Somalia. Tabar prevented numerous piracy attempts and escorted hundreds of ships safely through the pirate-infested waters. The navy also undertook anti-piracy patrols near the Seychelles at that country's request.
In February 2011, the Indian Navy launched Operation Safe Homecoming and rescued Indian nationals from war-torn Libya. Between January and March 2011, the navy ran Operation Island Watch to deter piracy attempts by Somali pirates off the Lakshadweep archipelago, with numerous successes in preventing pirate attacks. During the 2015 crisis in Yemen, the Indian Navy took part in Operation Raahat and rescued 3,074 individuals, of whom 1,291 were foreign nationals. On 15 April 2016, a P-8I Poseidon long-range patrol aircraft thwarted a piracy attack on the high seas by flying over MV Sezai Selaha, a merchant vessel that was being targeted by a pirate mother ship and two skiffs a considerable distance from Mumbai.
Current role
Currently, the principal roles of the Indian Navy are:
In conjunction with other Armed Forces of the union, act to deter or defeat any threats or aggression against the territory, people or maritime interests of India, both in war and peace;
Project influence in India's maritime area of interest, to further the nation's political, economic and security objectives;
In co-operation with the Indian Coast Guard, ensure good order and stability in India's maritime zones of responsibility;
Provide maritime assistance (including disaster relief) in India's maritime neighbourhood.
Command and organisation
Organisation
While the President of India serves as the Supreme Commander of the Indian Armed Forces, the organisational structure of the Indian Navy is headed by the Chief of the Naval Staff (CNS), who holds the rank of admiral. The CNS is assisted by the Vice Chief of Naval Staff (VCNS), a vice admiral, and heads the Integrated Headquarters (IHQ) of the Ministry of Defence (Navy), based in New Delhi. The Deputy Chief of Naval Staff (DCNS), a vice admiral, is a Principal Staff Officer, along with the Chief of Personnel (COP) and the Chief of Materiel (COM), both of whom are also vice admirals. The Director General Medical Services (Navy), a Surgeon Vice Admiral, heads the medical services of the Indian Navy.
The Indian Navy operates two operational commands and one training command, each headed by a Flag Officer Commanding-in-Chief (FOC-in-C) of the rank of vice admiral. The Eastern and Western commands each have a fleet commanded by a rear admiral: the Western Fleet, based at Mumbai, is commanded by the Flag Officer Commanding Western Fleet (FOCWF), and the Eastern Fleet, based at Visakhapatnam, is commanded by the Flag Officer Commanding Eastern Fleet (FOCEF). Each also has a commodore commanding submarines (COMCOS): the Commodore Commanding Submarines (East) and the Commodore Commanding Submarines (West). The Flag Officer Submarines, the single-point class authority for submarines, is based at the Eastern Naval Command. The Southern Naval Command is home to the Flag Officer Sea Training (FOST).
Additionally, the Andaman and Nicobar Command is a unified Indian Navy, Indian Army, Indian Air Force and Indian Coast Guard theatre command, based at the territory's capital, Port Blair. The Commander-in-Chief, Andaman and Nicobar Command (CINCAN) receives staff support from, and reports directly to, the Chairman of the Chiefs of Staff Committee (COSC) in New Delhi. The command was set up in the Andaman and Nicobar Islands in 2001.
Facilities
The Indian Navy maintains operational and training bases in Gujarat, Karnataka, Goa, Maharashtra, Lakshadweep, Kerala, Odisha, Tamil Nadu, Andhra Pradesh, West Bengal, and the Andaman and Nicobar Islands. These bases serve various purposes, such as logistics and maintenance support, ammunition support, air stations, hospitals, MARCOS bases, coastal defence, missile defence, submarine and missile-boat bases, and forward operating bases. Of these, INS Shivaji is one of the oldest naval bases in India; commissioned in February 1945 as HMIS Shivaji, it now serves as the premier Technical Training Establishment (TTE) of the Indian Navy.
In May 2005, the Indian Navy commissioned INS Kadamba at Karwar, south of Goa. Built under the first phase of Project Seabird, it was initially an exclusively navy-controlled base that did not share port facilities with commercial shipping. The Indian Navy also has berthing rights in Oman and Vietnam, and operates a monitoring station in Madagascar, fitted with radars and surveillance gear to intercept maritime communications. It further plans to build 32 radar stations in the Seychelles, Mauritius, the Maldives and Sri Lanka. According to Intelligence Online, published by the France-based global intelligence-gathering organisation Indigo Publications, the Navy is believed to operate a listening post at Ras al-Hadd in Oman, located directly across the Arabian Sea from Gwadar Port in Balochistan, Pakistan.
The navy operates INS Kattabomman, a VLF and ELF transmission facility at Vijayanarayanapuram near Tirunelveli in Tamil Nadu, and maintains two bases dedicated to MARCOS. Project Varsha is a highly classified project undertaken by the Navy to construct a high-tech base under the Eastern Naval Command; the base is said to house nuclear submarines and also a VLF facility.
Training
The Indian Navy has a specialised training command responsible for organising, conducting and overseeing all basic, professional and specialist training throughout the Navy. The Flag Officer Commanding-in-Chief of the Southern Naval Command also serves as the head of the training command. The Chief of Personnel (COP) at naval headquarters is responsible for the framework of training, and exercises this responsibility through the Directorate of Naval Training (DNT). The Indian Navy's training year runs from 1 July to 30 June of the following year.
Officer training is conducted at the Indian Naval Academy (INA) at Ezhimala, on the coast of Kerala. Established in 2009, it is the largest naval academy in Asia. Cadets from the National Defence Academy also move to the INA for their later terms. The Navy also has specialised training establishments for gunnery, aviation, leadership, logistics, music, medicine, physical training, educational training, engineering, hydrography and submarines at several naval bases along the coastline of India. Naval officers attend the tri-service institutions (the National Defence College, the College of Defence Management and the Defence Services Staff College) for staff courses leading to higher command and staff appointments. The Navy's war college is the Naval War College, Goa. The Navy also operates a dedicated naval architecture wing under the Directorate of Naval Architecture at IIT Delhi, and trains officers and sailors from the navies of friendly foreign countries.
Rank structure
The Navy has 10,393 officers and 56,835 sailors against a sanctioned strength of 11,827 officers and 71,656 sailors. This figure includes naval aviation, marine commando and Sagar Prahari Bal personnel.
Officers
India uses the rank of midshipman in its navy; all future officers carry the rank upon entering the Indian Naval Academy, and are commissioned as sub-lieutenants upon finishing their course of study.
While provision exists for the rank of Admiral of the Fleet, it is primarily intended for major wartime use and as an honour; no officer of the Indian Navy has yet been conferred the rank. Both the Army and the Air Force have had officers conferred with the equivalent rank: Field Marshals Sam Manekshaw and K. M. Cariappa of the Army, and Marshal of the Indian Air Force (MIAF) Arjan Singh.
The highest-ranking naval officer in the organisational structure is the Chief of the Naval Staff, who holds the rank of admiral.
Rating personnel
In the Indian Navy, sailors are initially enrolled as seamen 2nd class. As they rise through the ranks, they can attain the highest enlisted rank, master chief petty officer 1st class. Sailors who show leadership qualities and fulfil the requisite conditions of education, age and so on may be commissioned through the Commission Worthy and Special Duties (CW & SD) schemes.
Naval Air Arm
The naval air arm of the Indian Navy currently operates twenty-one air squadrons: ten operate fixed-wing aircraft, eight are helicopter squadrons, and the remaining three are equipped with unmanned aerial vehicles (UAVs). Building on the legacy inherited from the Royal Navy before independence, naval aviation in India began with the establishment of a Directorate of Naval Aviation at Naval Headquarters (NHQ) in early 1948; later that year, officers and sailors from the Indian Navy were sent to Britain for pilot training. In 1951, the Fleet Requirement Unit (FRU) was formed to meet the aviation requirements of the navy.
On 1 January 1953, charge of the Cochin airfield was handed over to the navy by the Directorate General of Civil Aviation, and on 11 March the FRU was commissioned at Cochin with ten newly acquired Sealand aircraft. The navy's first air station, INS Garuda, was commissioned two months later. From February 1955 to December 1958, ten Firefly aircraft were acquired, and the indigenously developed HAL HT-2 trainer was inducted into the FRU to meet pilot-training requirements. On 17 January 1959, the FRU was commissioned as Indian Naval Air Squadron (INAS) 550, the first Indian naval air squadron.
The air arm currently operates the aircraft carrier INS Vikramaditya, which can carry over thirty aircraft, including MiG-29K fighters and Kamov Ka-31, Kamov Ka-28, Sea King, and indigenously built HAL Dhruv and Chetak helicopters. The Ka-31 helicopters provide airborne early warning cover for the fleet, while the Sea King, Ka-28 and HAL Dhruv serve in the anti-submarine role. MARCOS also use Sea King and HAL Dhruv helicopters when conducting operations. Maritime patrol and reconnaissance operations are carried out by the Boeing P-8 Poseidon and the Ilyushin Il-38. The UAV arm consists of IAI Heron and Searcher Mk II aircraft operated from both surface ships and shore establishments for surveillance missions.
The Indian Navy also maintains an aerobatic display team, the Sagar Pawan, which is to replace its present HJT-16 Kiran aircraft with the newly developed HJT-36.
MARCOS
The Marine Commando Force (MCF), also known as MARCOS, is a special operations unit raised by the Indian Navy in 1987 for amphibious warfare, close-quarters combat, counter-terrorism, direct action, special reconnaissance, unconventional warfare, hostage rescue, personnel recovery, combat search and rescue, asymmetric warfare, foreign internal defence, counter-proliferation, and amphibious reconnaissance, including hydrographic reconnaissance. Since its inception, the force has proved itself in various operations and wars, notably Operation Pawan, Operation Cactus, UNOSOM II, the Kargil War and Operation Black Tornado, and it is actively deployed on anti-piracy operations throughout the year.
Equipment
Ships
The names of all in-service ships and naval bases of the Indian Navy are prefixed with the letters INS, designating Indian Naval Ship or Indian Navy Station, while sail boats are prefixed with INSV (Indian Naval Sailing Vessel). The fleet is a mixture of domestically built and foreign-built vessels. The surface fleet comprises 1 aircraft carrier, 1 amphibious transport dock, 8 landing ship tanks, 11 destroyers, 13 frigates, 23 corvettes, 10 large offshore patrol vessels, 4 fleet tankers, 7 survey ships, 1 research vessel, 3 training vessels, and various auxiliary vessels, landing craft utility vessels and small patrol boats.
After INS Viraat was decommissioned on 6 March 2017, the Navy was left with one aircraft carrier in active service, INS Vikramaditya, which serves as the flagship of the fleet. Vikramaditya (formerly the Admiral Gorshkov) is a modified Kiev-class carrier procured from Russia at a total cost of $2.3 billion in December 2013. The Navy also operates an Austin-class amphibious transport dock, re-christened INS Jalashwa in Indian service, and maintains a fleet of landing ship tanks.
The navy currently operates guided-missile destroyers of the Kolkata, Delhi and Rajput classes. The ships of the Rajput class will be replaced in the near future by the next-generation Visakhapatnam class (Project 15B), which will feature a number of improvements.
In addition to destroyers, the navy operates several classes of frigates, such as the three Shivalik-class (Project 17) and six Talwar-class frigates. Seven additional frigates of the Project 17A class, a follow-on to the Shivalik class, are on order. The older frigates will be replaced one by one as the new classes of frigates are brought into service over the next decade.
Smaller littoral-zone combatants in service take the form of corvettes, of which the Indian Navy operates the Kamorta, Kora, Khukri and Veer classes. Replenishment tankers such as the Jyoti class and the newer Deepak class help improve the navy's endurance at sea.
Aircraft
Submarines
The Navy's sub-surface fleet includes one nuclear-powered attack submarine, one ballistic missile submarine and 15 conventionally powered attack submarines. The conventional attack submarines consist of the Kalvari (French Scorpène design), Sindhughosh (Russian Kilo design) and Shishumar (German Type 209/1500 design) classes.
India also possesses a single Akula-class nuclear-powered attack submarine, INS Chakra, under lease from Russia for a period of ten years. Three hundred Indian Navy personnel were trained in Russia to operate the submarine. Negotiations are ongoing with Russia for the lease of a second Akula-class submarine.
INS Arihant was launched on 26 July 2009 in Visakhapatnam and was quietly commissioned into active service in August 2016. The Navy plans to have six nuclear-powered ballistic missile submarines in service in the near future. Arihant is both the lead boat of India's nuclear-powered ballistic missile submarines and the first nuclear-powered submarine to be built in India.
Weapon systems
The Navy uses a mix of indigenously developed and foreign-made missile systems. These include submarine-launched ballistic missiles, ship-launched ballistic missiles, cruise and anti-ship missiles, air-to-air missiles, surface-to-air missiles, torpedoes, anti-aircraft guns, main guns and anti-submarine rocket launchers. Its inventory includes the AK-190 gun, the Kh-35E Uran anti-ship missile fired from quadruple launchers, and the RBU-2000 anti-submarine rocket launcher.
In recent years, the BrahMos has been one of the most advanced missile systems adopted by the Indian Navy. Jointly developed by India's Defence Research and Development Organisation (DRDO) and Russia's NPO Mashinostroyeniya, the BrahMos is the world's fastest anti-ship cruise missile in operation. It has been tailored to Indian needs and features a large proportion of India-designed components and technology, including its fire-control systems, transporter erector launchers, and onboard navigational attack systems. Successful test firings of the BrahMos from Indian warships have given the Navy a precision land-attack capability.
India has also fitted its Boeing P-8I reconnaissance aircraft with all-weather, active-radar-homing, over-the-horizon AGM-84L Harpoon Block II missiles and Mk 54 lightweight torpedoes. Indian warships' primary air-defence shield is provided by the Barak 1 surface-to-air missile, while an advanced version, the Barak 8, is in development in collaboration with Israel. India's next-generation submarines will be armed with the Exocet anti-ship missile system. Among indigenous missiles, the ship-launched version of the Prithvi-II, called Dhanush, can carry nuclear warheads.
The K-15 Sagarika (Oceanic) submarine-launched ballistic missile (SLBM), which has a range of at least 700 km (some sources claim 1,000 km), forms part of India's nuclear triad and has been extensively tested for integration with the Arihant class of nuclear submarines. A longer-range SLBM, the K-4, is in the process of induction, to be followed by the K-5.
Electronic warfare and systems management
Sangraha is a joint electronic warfare programme of the Defence Research and Development Organisation (DRDO) and the Indian Navy. The programme is intended to develop a family of electronic warfare suites for use on different naval platforms, capable of detecting, intercepting and classifying pulsed, carrier-wave, pulse-repetition-frequency-agile, frequency-agile and chirp radars. The systems are suitable for deployment on platforms such as helicopters, vehicles and ships. Certain platforms, along with electronic support measures (ESM) capabilities, have electronic countermeasure (ECM) capabilities such as multiple-beam phased-array jammers.
The Indian Navy also relies on information technology to face the challenges of the 21st century. It is implementing a strategy to move from a platform-centric force to a network-centric force by linking all shore-based installations and ships via high-speed data networks and satellites, improving operational awareness. The network is referred to as the Navy Enterprise Wide Network (NEWN). The Indian Navy has provided training in information technology (IT) to all its personnel at the Naval Institute of Computer Applications (NICA) in Mumbai. Information technology is also used to provide better training, such as through simulators, and for better management of the force.
The Navy has a dedicated Information Technology Cadre under the Directorate of Information Technology. The cadre is responsible for the implementation of enterprise-wide networking and software development projects, development of cyber-security products, administration of shore and on-board networks, and management of critical naval networks and software applications.
Naval satellite
India's first exclusive defence satellite, GSAT-7, was successfully launched on an Ariane 5 rocket of the European space consortium Arianespace from the Kourou spaceport in French Guiana in August 2013. GSAT-7 was built by the Indian Space Research Organisation (ISRO) to serve for at least seven years in its orbital slot at 74°E, providing UHF, S-band, C-band and Ku-band relay capacity. Its Ku-band capability allows high-density data transmission, including both audio and video, and the satellite can also serve smaller and mobile terminals.
GSAT-7's footprint covers the Indian Ocean region, including both the Arabian Sea and the Bay of Bengal, enabling the Navy to operate in a network-centric environment with real-time networking of all its operational assets at sea and on land.
On 15 June 2019, the navy placed an order for the GSAT-7R satellite as a replacement for GSAT-7. The satellite costs Rs 1,589 crore (US$225.5 million) and is expected to be launched by 2020.
Activities
Fleet reviews
As supreme commander of the Indian Armed Forces, the President of India is entitled to inspect the fleet. The first president's fleet review in India was hosted by Dr. Rajendra Prasad on 10 October 1953, and reviews usually take place once in a president's term. In all, ten fleet reviews have taken place, including one in February 2006 by President Dr. A. P. J. Abdul Kalam; the latest was conducted in February 2016 by President Pranab Mukherjee.
The Indian Navy also conducted an international fleet review, named Bridges of Friendship, in February 2001 in Mumbai, in which many ships of friendly navies from around the world participated, including two from the US Navy. The second international fleet review, the International Fleet Review 2016, was held off the Visakhapatnam coast in February 2016, with the Indian Navy's focus on improving diplomatic relations and military compatibility with other nations.
Naval exercises
India often conducts naval exercises with friendly countries, designed to increase naval cooperation and strengthen cooperative security relationships. Some of these exercises take place annually or biennially.
Coordinated patrols include the Indo–Thai CORPAT (28 editions), the Indonesia–India CORPAT (33 editions) and IMCOR with Myanmar (8 editions). The Indian Navy conducted a naval exercise with the People's Liberation Army Navy in 2003, and also sent ships to the South China Sea to participate in a fleet review. In 2005, TROPEX (Theatre-level Readiness Operational Exercise) was held, during which the Indian Navy experimented with a doctrine of influencing a land and air battle in support of the Indian Army and the Indian Air Force. TROPEX has since been conducted annually, with the exception of 2016. In 2007, the Indian Navy conducted naval exercises with the Japan Maritime Self-Defense Force and the US Navy in the Pacific, and signed an agreement with Japan in October 2008 for joint naval patrolling in the Asia-Pacific region. Also in 2007, India conducted naval exercises with Vietnam, the Philippines and New Zealand, and began an annual naval exercise with South Korea, alongside participating in the South Korean International Fleet Review in 2008. The Indian Navy's first Atlantic Ocean deployment took place in 2009, during which the fleet exercised with the French, German, Russian and British navies. Every two years, navies from the Indian Ocean region meet at the Andaman and Nicobar Islands for Exercise MILAN. In 2021, India assisted in the US-led Exercise Cutlass Express as a trainer.
In 2007, India held the first Indian Ocean Naval Symposium (IONS), with the objective of providing a forum for all the littoral nations of the Indian Ocean to co-operate on mutually agreed areas for better security in the region. Over the past decade, Indian naval ships have made goodwill port calls to Israel, Turkey, Egypt, Greece, Thailand, Indonesia, Australia, New Zealand, Tonga, South Africa, Kenya, Qatar, Oman, the United Arab Emirates, Bahrain, Kuwait and various other countries.
Exploration
The Indian Navy regularly conducts adventure expeditions. The sail training ship INS Tarangini began circumnavigating the world on 23 January 2003, intending to foster good relations with other nations; she returned to India in May 2004 after visiting 36 ports in 18 nations.
Lt. Cdr. M. S. Kohli led the Indian Navy's first successful expedition to Mount Everest in 1965, and the Navy's ensign was again flown atop Everest on 19 May 2004 by a similar expedition. Another navy team also successfully scaled Everest from the north face, a technically more challenging route; the expedition was led by Cdr. Satyabrata Dam of the submarine arm, a mountaineer of international repute who has climbed many mountains, including in Patagonia and the Alps. In 2017, to commemorate the Navy's first expedition of 1965, a team set off to climb Mount Everest.
An 11-member Indian Navy team successfully completed an expedition to the Arctic. To prepare, the members first travelled to Iceland, where they attempted to summit a peak, and then flew to eastern Greenland, where, in the Kulusuk and Angmagssalik areas, they used Inuit boats to navigate the region's ice-choked fjords. They then crossed the Arctic Circle northward, reaching seventy degrees north on skis, and scaled an unnamed peak, naming it "Indian Peak".
The Indian naval ensign first flew in Antarctica in 1981. In Mission Dakshin Dhruv 2006, the Indian Navy completed a ski traverse to the South Pole, setting the record as the first military team to have done so. Three of the ten-member team (the expedition leader, Cdr. Satyabrata Dam, and leading medical assistants Rakesh Kumar and Vikas Kumar) are now among the few people in the world to have visited both poles and summited Mt. Everest, and the Indian Navy thus became the first organisation to have reached both poles and the summit of Mt. Everest. Cdr. Dilip Donde completed the first solo circumnavigation by an Indian citizen on 22 May 2010.
Future of the Indian Navy
By the end of the 14th Plan (2020), the Indian Navy expects to have over 150 ships and close to 500 aircraft. In addition to its existing mission of securing both sea flanks in the Bay of Bengal and the Arabian Sea, the navy would then be able to respond to emergencies far from the mainland. Marine assault capabilities will be enhanced by a new amphibious warfare facility at Kakinada, Andhra Pradesh.
The Indian Navy has initiated Phase II expansion of INS Kadamba, the third largest naval base, near Karwar. Phase II will involve expansion of the berthing facilities to accommodate 40–45 more front-line warships, including the aircraft carrier INS Vikramaditya, raise manpower to 300 officers and around 2,500 sailors, and build a naval air station with a 6,000-foot runway. This is to be followed by Phase IIA and IIB, at the end of which INS Kadamba will be able to base 50 front-line warships. The Indian Navy is also in the process of constructing a new naval base, INS Varsha, at Rambilli for its Arihant Class submarines.
India plans to construct a pair of aircraft carriers. The first, INS Vikrant, was launched in 2013 by Cochin Shipyard and undocked in June 2015. It is expected to be completed by February 2021 and to undergo extensive sea trials thereafter, with commissioning planned for the end of 2021. Vikrant displaces 44,000 tonnes and will be capable of operating up to 40 aircraft, including 30 HAL Tejas and MiG-29K fighters. The second ship, INS Vishal (formerly known as the Indigenous Aircraft Carrier-II), will displace around 65,000 tonnes and is expected to be delivered to the Indian Navy by the late 2030s. With the delivery of Vishal, the Navy's goal of having three aircraft carriers in service (two fully operational and a third in refit) will be achieved.
In November 2011, the Defence Acquisition Council cleared the acquisition of multi-role support vessels, and the Indian Navy subsequently issued an international request for proposals (RFP) for up to two large landing helicopter docks. The contenders are expected to tie up with local shipyards for construction of the ships.
In addition to aircraft carriers and large amphibious assault ships, the Indian Navy is acquiring numerous surface combatants, including Visakhapatnam-class destroyers, Project 17A frigates, anti-submarine shallow-water corvettes, anti-surface-warfare corvettes and mine countermeasures vessels. New submarine types include the conventional Kalvari class, Project 75I, and the nuclear Arihant class. New auxiliary ships include five replenishment oilers, a missile range instrumentation ship and an ocean surveillance ship.
The Indian Navy is planning to procure 22 General Atomics Sea Guardian drones at an estimated cost of $2 billion, in what would be the first sale of General Atomics drones to a non-NATO military.
Accidents
Accidents in the Indian Navy have been attributed to ageing ships in need of maintenance, delayed acquisitions by the Ministry of Defence, and human error. However, naval commentators also argue that as India's large navy of 160 ships clocks around 12,000 ship-days at sea every year, in varied waters and weather, some incidents are inevitable. Captains of erring ships are dismissed from their command following an enquiry. The accident on board the submarine INS Sindhuratna led to the resignation of the then Chief of the Naval Staff, Admiral D. K. Joshi, on 26 February 2014, who took moral responsibility. The navy is planning a new 'Safety Organisation' to improve the safety of its warships, nuclear submarines and aircraft in view of the planned increase in fleet strength over the next decade.
Indian Naval Ensign
From 1950 to 2001, the Indian Navy used a modified version of the British White Ensign, with the Union Flag replaced by the Indian Tricolour in the canton. In 2001, this flag was replaced with a white ensign bearing the Indian Navy crest, as the previous ensign was thought to reflect India's colonial past. However, complaints arose that the new ensign was hard to distinguish, as the blue of the naval crest merged easily with the sky and the ocean. Hence, in 2004, the ensign was changed back to the St. George's Cross design, with the addition of the emblem of India at the intersection of the cross. In 2014, the ensign and the naval crest were further modified to include the Devanagari inscription सत्यमेव जयते (Satyameva Jayate), meaning 'Truth Alone Triumphs' in Sanskrit.
The traditional crest of Indian Navy ships is topped by a crown featuring three sailing ships symbolising India's rich maritime history. The ribbon of the crown depicts the Ashoka Chakra surrounded by a horse and a bull. Each ship has a unique motif which is encircled by a ring of lotus buds.
Documents
The Indian Maritime Doctrine is a foundational document of the Indian Navy, published in three editions in 2004, 2009 and 2014. It is to be read alongside other foundational documents, such as the naval strategy Freedom to Use the Seas (2007) and its updated edition Ensuring Secure Seas (2015).
The 2004 edition (INBR 8) was published amid a larger strategic overhaul in the country. It contains a large number of key terms with their definitions, grouped into sections. Certain themes pervade the document: some are subtle, such as the ongoing and future transition to a blue-water navy, and others more prominent, such as the text relating to nuclear submarines and aircraft carriers, with justification and explanation of India's need for these transitions and acquisitions. The 2009 edition was updated to cover counter-terrorism, counter-piracy and coordination with other navies in these areas.
See also
Indian Ocean Naval Symposium
Information Management and Analysis Centre (IMAC)
Naval ranks and insignia of India
Integrated Defence Staff, tri-services
Exclusive economic zone of India, protected by IN
Indian Navy Football Club
List of ships of the Indian Navy
References
Sources
External links
Official web site
Defence agencies of India
Military units and formations established in 1612 |
53769340 | https://en.wikipedia.org/wiki/Suresh%20Venkatasubramanian | Suresh Venkatasubramanian | Suresh Venkatasubramanian is an Indian computer scientist and professor at Brown University. In 2021, he was appointed to the White House Office of Science and Technology Policy, advising on matters relating to fairness and bias in tech systems. He was formerly a professor at the University of Utah. He is known for his contributions to computational geometry and differential privacy, and his work has been covered by news outlets such as Science Friday, NBC News, and Gizmodo. He also runs the Geomblog, which has received coverage from the New York Times, Hacker News, KDnuggets and other media outlets. He has served as associate editor of the International Journal of Computational Geometry and Applications, as academic editor of PeerJ Computer Science, and on program committees for the IEEE International Conference on Data Mining, the SIAM Conference on Data Mining, NIPS, SIGKDD, SODA, and STACS.
Career
Suresh Venkatasubramanian attended the Indian Institute of Technology Kanpur for his BTech and received his PhD from Stanford University in 1999 under the joint supervision of Rajeev Motwani and Jean-Claude Latombe. Following his PhD, he joined AT&T Labs and served as an adjunct professor at the University of Pennsylvania, where he taught courses on computational geometry and streaming algorithms for GPGPUs. In 2007, he joined the University of Utah School of Computing as the John E. and Marva M. Warnock Presidential Endowed Chair for Faculty Innovation in Computer Science. He received a National Science Foundation CAREER Award in 2010, and in 2013–2014 he was a visiting scientist at the Simons Institute for the Theory of Computing at UC Berkeley and at Google. In 2021, he was appointed to the White House Office of Science and Technology Policy, advising on matters relating to fairness and bias in tech systems; in the same year, he moved to Brown University to join the Computer Science department and its Data Science Initiative. At Brown, he will be starting a new center on Computing for the People, intended to help think through what it means to do computer science that truly responds to the needs of people, instead of hiding behind a neutrality that merely gives more power to those already in power.
References
Theoretical computer scientists
Researchers in geometric algorithms
Living people
Stanford University alumni
University of Utah faculty
Science bloggers
IIT Kanpur alumni
Year of birth missing (living people) |
18032711 | https://en.wikipedia.org/wiki/Geoff%20Tootill | Geoff Tootill | Geoff C. Tootill (4 March 1922 – 26 October 2017) was an electronic engineer and computer scientist who worked in the Electrical Engineering Department at the University of Manchester with Freddie Williams and Tom Kilburn developing the Manchester Baby, "the world's first wholly electronic stored-program computer".
Education
Tootill attended King Edward's School, Birmingham on a Classics scholarship and in 1940 gained an entrance exhibition to study Mathematics at Christ's College, Cambridge. He was forced to do the course in two years (missing Part One of the Mathematics Tripos) as his studies were cut short by World War II. After the successful operation of the Manchester Baby computer, he was awarded an MSc by the Victoria University of Manchester for his thesis on "Universal High-Speed Digital Computers: A Small-Scale Experimental Machine".
Career
On leaving Cambridge in 1942, Tootill managed to get assigned to work on airborne radar at the Telecommunications Research Establishment (TRE) in Malvern. Here, he went out to airfields to troubleshoot problems with the operation of radar in night fighters, designed modifications and oversaw their implementation; he later said that this was the most responsible job he held in his life.
In 1947, he was recruited by Frederic Calland Williams to join another ex-TRE colleague, Tom Kilburn, at Manchester University, developing the world's first wholly electronic stored-program computer. In the UK, three projects were then under way to develop a stored-program computer (at Cambridge, the NPL and Manchester), and the main technical hurdle was the memory technology. In order to test the cathode ray tube memory designed by F. C. Williams as it was constructed, Kilburn and Tootill designed an elementary computer, known as the "Manchester Baby". The computer could store 32 instructions or numbers using a single cathode ray tube (CRT). On 21 June 1948, after months of patient work constructing and testing the Baby piece by piece, coping with the unreliable electronic components of the day, the machine finally ran a routine written by Kilburn (they didn't use the word "program" then) to find the highest proper factor of a number. In Tootill's words, "And we saw the thing had done a computation". A day or two later, the Baby ran successfully for 52 minutes to find the highest proper factor of 2^18 (262,144), which required about 3.5 million arithmetic operations.
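The routine itself was simple; the long running time came from brute force. The Baby had no divide instruction, so divisibility had to be tested by repeated subtraction, trying each candidate factor in turn. The following C sketch is a reconstruction of the method only, not of Kilburn's actual code:

```c
/* Brute-force highest proper factor, the way the Baby did it:
 * try every candidate below n, testing divisibility by repeated
 * subtraction because there is no divide instruction.  Checking
 * every candidate this way is why the 1948 run on n = 2^18
 * amounted to millions of machine operations and took 52 minutes. */
#include <stdio.h>

static long highest_proper_factor(long n)
{
    for (long candidate = n - 1; candidate >= 1; candidate--) {
        long remainder = n;
        while (remainder > 0)       /* divisibility test by repeated subtraction */
            remainder -= candidate;
        if (remainder == 0)         /* candidate divides n exactly */
            return candidate;
    }
    return 1;                       /* unreachable for n > 1 */
}

int main(void)
{
    long n = 1L << 18;              /* 262144, the value used in 1948 */
    printf("Highest proper factor of %ld is %ld\n", n, highest_proper_factor(n));
    return 0;
}
```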
After the Baby's first operation in June 1948, Alan Turing moved to Manchester so that he could use the Baby for a project he had been working on at the National Physical Laboratory, which had also been developing a stored-program computer. Tootill instructed Turing in the use of the Manchester Baby and debugged a program Turing had written to run on it.
In 1949, Tootill joined Ferranti, where he developed the logic design of the Ferranti Mark 1, the first commercial computer, which was based on the Baby. He stayed at Ferranti only briefly; later the same year, he joined the Royal Military College of Science at Shrivenham as a senior lecturer on a considerably higher salary, lecturing and leading laboratory studies on digital computing.
In 1956, Tootill joined the Royal Aircraft Establishment (RAE), Farnborough, researching issues for air traffic control systems. Here he wrote, with Stuart Hollingdale, "Electronic Computers" (Penguin, 1965), which ran through eight printings and was translated into Spanish and Japanese. Tootill was also a founding member of the British Computer Society in 1956.
In 1963, Tootill joined the newly formed European Space Research Organisation (ESRO, now the European Space Agency). He set up and directed the Control Centre of ESRO, with its ground stations. In 1969, he was assigned to a bureaucratic post in London, which he did not enjoy. In 1973, he joined the National Physical Laboratory at Teddington, where he developed communications standards for the European Informatics Network, an experimental computer network.
Tootill retired in 1982, but remained active in computing.
In 1997, drawing on his linguistics background (notably Latin, Greek, French and German), he designed a phonetic algorithm for encoding English names (recognising that, for example, Deighton and Dayton, or Shore and Shaw, sound the same), which garnered over 2,000 corporate users as part of a data-matching package developed by his son Steve.
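The details of Tootill's algorithm are not given here; as an illustration of the general technique, the sketch below implements a simplified Soundex encoding, the classic phonetic key. Note that plain Soundex fails on the very pairs Tootill targeted (it codes Deighton and Dayton differently, because the silent "gh" scores as a consonant), which illustrates why a more context-aware algorithm was needed:

```c
/* Illustration only: a simplified Soundex encoding, not Tootill's
 * algorithm.  It maps a name to a letter plus three digits, dropping
 * vowels and collapsing adjacent consonants with the same code. */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

static char code_of(char c)
{
    switch (toupper((unsigned char)c)) {
    case 'B': case 'F': case 'P': case 'V': return '1';
    case 'C': case 'G': case 'J': case 'K':
    case 'Q': case 'S': case 'X': case 'Z': return '2';
    case 'D': case 'T': return '3';
    case 'L': return '4';
    case 'M': case 'N': return '5';
    case 'R': return '6';
    default:  return '0';           /* vowels, H, W, Y carry no code */
    }
}

static void soundex(const char *name, char out[5])
{
    strcpy(out, "0000");
    if (!name[0])
        return;
    out[0] = toupper((unsigned char)name[0]);
    char prev = code_of(name[0]);
    int pos = 1;
    for (size_t i = 1; name[i] && pos < 4; i++) {
        char c = code_of(name[i]);
        if (c != '0' && c != prev)  /* skip vowels and repeated codes */
            out[pos++] = c;
        prev = c;
    }
}

int main(void)
{
    const char *names[] = { "Robert", "Rupert", "Dayton", "Deighton" };
    char key[5];
    for (int i = 0; i < 4; i++) {
        soundex(names[i], key);
        /* Robert and Rupert both give R163, but Dayton gives D350
         * while Deighton gives D235: the 'gh' defeats plain Soundex. */
        printf("%-8s -> %s\n", names[i], key);
    }
    return 0;
}
```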
In 1998, the Computer Conservation Society (in a project led by Christopher P. Burton) unveiled a replica of the Baby (now in the Museum of Science and Industry, Manchester) to commemorate the 50th anniversary of the running of the first electronically stored program, based in large part on Tootill's notes and recollections. A page from his June 1948 notebook details the code of the first program ever run on a stored-program computer, written by Tom Kilburn.
Personal life
As a boy, Tootill was interested in electronics, and built a radio set. He met Pamela Watson while in Malvern during World War II, where they were both members of the "Flying Rockets Concert Party". He and Pam were married in 1947 and had three sons, Peter, Colin and Stephen and two grandchildren, Mia and Duncan.
His first wife Pam died in 1979, and in 1981, Tootill married Joyce Turnbull, who survived him.
Books
References
1922 births
2017 deaths
People from Chadderton
People educated at King Edward's School, Birmingham
Alumni of Christ's College, Cambridge
English computer scientists
English electrical engineers
History of computing in the United Kingdom
Members of the British Computer Society
Scientists of the National Physical Laboratory (United Kingdom)
People associated with the Victoria University of Manchester
People associated with the Department of Computer Science, University of Manchester
European Space Agency personnel
Deaths from pneumonia in the United Kingdom |
8221202 | https://en.wikipedia.org/wiki/Xenomai | Xenomai | Xenomai is a real-time development framework cooperating with the Linux kernel to provide pervasive, interface-agnostic, hard real-time support to user space applications seamlessly integrated into the Linux environment.
The Xenomai project was launched in August 2001. In 2003 it merged with the Real-Time Application Interface (RTAI) project to produce RTAI/fusion, a production-grade real-time free software platform for Linux on top of Xenomai's abstract real-time operating system (RTOS) core. Eventually, the RTAI/fusion effort became independent from RTAI in 2005 as the Xenomai project.
Xenomai is based on an abstract RTOS core usable for building any kind of real-time interface: a nucleus exports a set of generic RTOS services, and any number of RTOS personalities, called "skins", can then be built over the nucleus, each providing its own specific interface to applications while using the services of the single generic core to implement it.
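In application code, this means programming against one skin's API while the nucleus provides the hard real-time scheduling underneath. Below is a minimal sketch of a 1 ms periodic real-time task, assuming the native skin of Xenomai 2.x (renamed "alchemy" in Xenomai 3); header paths and exact signatures vary between versions:

```c
/* Minimal periodic task sketch against the Xenomai 2.x native skin. */
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <native/task.h>
#include <native/timer.h>

#define PERIOD_NS 1000000ULL        /* 1 ms period */

static RT_TASK demo_task;

static void demo(void *arg)
{
    /* Make the calling task periodic, starting now. */
    rt_task_set_periodic(NULL, TM_NOW, PERIOD_NS);
    for (;;) {
        rt_task_wait_period(NULL);  /* block until the next period boundary */
        /* hard real-time work goes here */
    }
}

int main(void)
{
    mlockall(MCL_CURRENT | MCL_FUTURE);  /* avoid page faults in RT paths */
    /* Create a real-time task (default stack, priority 99) and start it. */
    rt_task_create(&demo_task, "demo", 0, 99, 0);
    rt_task_start(&demo_task, &demo, NULL);
    pause();                        /* keep the host Linux process alive */
    return 0;
}
```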
Xenomai vs. RTAI
There is a long list of differences between Xenomai and RTAI, though both projects share some ideas and support the RTDM layer. The major differences derive from the projects' goals and their respective implementations: while RTAI focuses on the lowest technically feasible latencies, Xenomai also treats clean extensibility (its RTOS skins), portability and maintainability as very important goals. Xenomai's path towards support for Ingo Molnár's PREEMPT_RT is another major difference from RTAI's objectives.
See also
Adeos
RTAI
References
External links
Radboud University – Xenomai exercises
Real-time operating systems |
21694 | https://en.wikipedia.org/wiki/NeXT | NeXT | NeXT, Inc. (later NeXT Computer, Inc. and NeXT Software, Inc.) was an American technology company that specialized in computer workstations intended for higher-education and business use. Based in Redwood City, California, and founded by Apple Computer co-founder Steve Jobs after he was forced out of Apple, the company introduced its first product, the NeXT Computer, in 1988, followed by the smaller NeXTcube and NeXTstation in 1990. These computers had relatively limited sales, with only about 50,000 units shipped in total. Nevertheless, their object-oriented programming environment and graphical user interfaces were trendsetters of computer innovation and highly influential.
NeXT partnered with Sun Microsystems to create a programming environment called OpenStep, which is the NeXTSTEP operating system's application layer hosted on a third-party operating system. In 1993, NeXT withdrew from the hardware industry to concentrate on marketing OPENSTEP for Mach, its own OpenStep implementation, for several original equipment manufacturers (OEMs). NeXT also developed WebObjects, one of the first enterprise web application frameworks, and although it was not very popular because of its high price of $50,000, it remains a prominent early example of a web server that is based on dynamic page generation rather than static content.
Apple purchased NeXT in 1997 for $429 million and 1.5 million shares of Apple stock, and Jobs, the Chairman and CEO of NeXT, was given an advisory role at Apple. Apple also promised that NeXT's operating system would be ported to Macintosh hardware, and combined with the Mac OS operating system, which would yield Mac OS X, later called macOS.
History
Background
In 1985, Apple co-founder Steve Jobs led Apple's SuperMicro division, which was responsible for developing the Macintosh and Lisa computers. They were commercial successes on university campuses because Jobs had personally visited a number of notable institutions to promote his products, and also because of the Apple University Consortium, a marketing program that allowed academics to buy them at a discount. The Consortium had earned over $50 million on computer sales by February 1984.
Jobs met Paul Berg, a Nobel laureate in chemistry, at a luncheon held in Silicon Valley to honor President of France François Mitterrand. Berg was frustrated by the time and expense of researching recombinant DNA via wet laboratories, and suggested that Jobs use his influence to create a "3M computer" for higher-education use: a workstation with at least a megabyte of RAM, a megapixel display and a megaflop of computing power.
Jobs was intrigued by Berg's concept of a workstation and contemplated starting a higher-education computer company in late 1985, amid increasing turmoil at Apple. Jobs's division had not released the upgraded versions of the Macintosh computer and much of the Macintosh Office software; as a result, its sales plummeted, and Apple was forced to write off millions of dollars in unsold inventory. In 1985, John Sculley ousted Jobs from his executive role at Apple and replaced him with Jean-Louis Gassée. Later that year, Jobs began a power struggle to regain control of the company. The board of directors sided with Sculley, and Jobs took a business trip to Western Europe and the Soviet Union on behalf of Apple.
Original NeXT team
In September 1985, after several months of being sidelined, Jobs resigned from Apple. He told the board he was leaving to set up a new computer company, and that he would be taking several Apple employees from the SuperMicro division with him, but he also promised that his new company would not compete with Apple and might even consider licensing their designs to them under the Macintosh brand.
A number of former Apple employees followed him to NeXT, including Joanna Hoffman, Bud Tribble, George Crow, Rich Page, Susan Barnes, Susan Kare, and Dan'l Lewin. After consulting with major educational buyers from around the country, including a follow-up meeting with Paul Berg, a tentative specification for the workstation was drawn up. It was designed to be powerful enough to run wet-lab simulations and affordable enough for college students to use in their dormitory rooms. Before the specifications were finished, however, Apple sued NeXT for "nefarious schemes" to take advantage of the cofounders' insider information. Jobs argued, "It is hard to think that a $2 billion company with 4,300-plus people couldn't compete with six people in blue jeans." The suit was eventually dismissed before trial.
In 1986, Jobs recruited the graphic designer Paul Rand to create a brand identity for the company. Jobs recalled, "I asked him if he would come up with a few options, and he said, 'No, I will solve your problem for you and you will pay me. You don't have to use the solution. If you want options go talk to other people.'" Rand created a 20-page brochure detailing the brand, including the precise angle used for the logo (28°) and a new spelling of the company name, NeXT.
1987–1993: NeXT Computer
First generation
In mid-1986, NeXT changed its business plan to develop both computer hardware and software, rather than just workstations. Rich Page, a NeXT co-founder who had previously directed Apple's Lisa team, led the hardware team, while Mach kernel engineer Avie Tevanian led the development of NeXT's operating system, NeXTSTEP. NeXT's first factory was established in Fremont, California, in 1987, capable of manufacturing about 150,000 machines per year. NeXT's first workstation was named the NeXT Computer, nicknamed "the cube" for its distinctive magnesium cubic case, designed by Hartmut Esslinger and his team at Frog Design Inc.
In 1987, Ross Perot became NeXT's first major outside-investor. He invested $20 million for 16% of NeXT's stock after seeing a segment about NeXT on a 1986 PBS documentary titled Entrepreneurs. In 1988, he joined the company's board of directors.
NeXT and Adobe collaborated on Display PostScript (DPS), a 2D graphics engine released in 1987. NeXT engineers wrote their own windowing engine on top of it to take full advantage of NeXTSTEP, using Display PostScript to draw on-screen elements such as title bars and scrollers for NeXTSTEP's user-space windowing system library.
The original design team anticipated completing the computer in early 1987 and launching it by mid-year. On October 12, 1988, the NeXT Computer received standing ovations when it was revealed at a private gala event, "NeXT Introduction: the Introduction to the NeXT Generation of Computers for Education", at the Louise M. Davies Symphony Hall in San Francisco, California. The following day, selected educators and software engineers were invited to attend the first public technical overview of the NeXT Computer at an event called "The NeXT Day", held at the San Francisco Hilton. The event gave developers interested in NeXT software an insight into the architecture, object-oriented programming and the NeXT Computer. The luncheon speaker was Steve Jobs.
The first NeXT Computers were trialled in 1989, after which NeXT sold a limited number of them to universities with a beta version of the NeXTSTEP operating system pre-installed. Initially, the NeXT Computer targeted United States higher-education institutions only, with a base price of US$6,500. The computer was widely reviewed in magazines, primarily its hardware. When asked if he was upset that the computer's debut was delayed by several months, Jobs responded, "Late? This computer is five years ahead of its time!"
The NeXT Computer was based on the 25 MHz Motorola 68030 central processing unit (CPU). The Motorola 88000 RISC chip was originally considered, but it was not available in sufficient quantities. The computer included between 8 and 64 MB of random-access memory (RAM), a 256 MB magneto-optical (MO) drive, a 40 MB (swap-only), 330 MB, or 660 MB hard disk drive, 10BASE2 Ethernet, NuBus, and a 17-inch MegaPixel grayscale display measuring 1120×832 pixels. In 1989, a typical new PC, Macintosh, or Amiga computer included a few megabytes of RAM, a 640×480 16-color or a 320×240 4,096-color display, a 10- to 20-megabyte hard drive, and few networking capabilities. The NeXT Computer was also the first computer to ship with a general-purpose DSP chip (Motorola 56001) on the motherboard, which supported sophisticated music and sound processing, including the Music Kit software.
The magneto-optical (MO) drive manufactured by Canon Inc. was the primary mass storage device. This drive technology was relatively new to the market, and the NeXT was the first computer to use it. MO drives were cheaper but much slower than hard drives, with an average seek time of 96 ms. Jobs negotiated down Canon's initial price of $150 per blank MO disk so that the disks could retail for only $50. The drive's design made it impossible to move files between computers without a network, because each NeXT Computer had only one MO drive and the disk could not be removed without shutting down the system. The drive's limited speed and capacity also made it insufficient as the primary medium for running the NeXTSTEP operating system.
In 1989, NeXT struck a deal for former Compaq reseller Businessland to sell the NeXT Computer in international markets. Selling through a retailer was a major change from NeXT's original business model of only selling directly to students and educational institutions. Businessland founder David Norman predicted that sales of the NeXT Computer would surpass sales of Compaq computers after 12 months.
That same year, Canon invested US$100 million in NeXT, for a 16.67 percent stake, making NeXT worth almost $600 million. Canon invested in NeXT with the condition of installing the NeXTSTEP environment on its own workstations, which would mean a greatly expanded market for the software. After NeXT exited the hardware business, Canon produced a line of PCs called object.station—including models 31, 41, 50, and 52—specifically designed to run NeXTSTEP on Intel chips. Canon also served as NeXT's distributor in Japan.
The NeXT Computer was released in 1990. In June 1991, Perot resigned from the board of directors to concentrate more time on his company, Perot Systems, a Plano, Texas–based software systems integrator.
Second generation
In 1990, NeXT released a second generation of workstations: the NeXTcube, a revised NeXT Computer, and the NeXTstation. The NeXTstation was nicknamed "the slab" for its low-rise box form factor. Jobs ensured that NeXT staffers did not nickname the NeXTstation the "pizza box", to avoid inadvertent comparison with competing Sun workstations, which already had that nickname.
The machines were initially planned to use a 2.88 MB floppy drive, but 2.88 MB floppy disks were expensive and the technology failed to supplant the 1.44 MB floppy. Realizing this, NeXT shipped a CD-ROM drive instead, a format that would eventually become a standard for storage. Color graphics were available on the NeXTstation Color and on the NeXTdimension graphics processor hardware for the NeXTcube. The new computers, built around the new Motorola 68040 processor, were cheaper and faster than their predecessors.
In 1992, NeXT launched "Turbo" variants of the NeXTcube and NeXTstation, with a 33 MHz 68040 processor and the maximum RAM capacity increased to 128 MB. In 1992, NeXT sold 20,000 computers, counting upgraded motherboards on back order as system sales. This was a small number compared with competitors, but the company reported sales of $140 million for the year, which encouraged Canon to invest a further $30 million to keep the company afloat.
In total, 50,000 NeXT machines were sold, including thousands to the then super-secret National Reconnaissance Office located in Chantilly, Virginia. NeXT's long-term plan was to migrate to the emerging high-performance industry standard Reduced Instruction Set Computing (RISC) architecture, with the NeXT RISC Workstation (NRW). Initially, the NRW was to be based on the Motorola 88110 processor, but it was later redesigned around dual PowerPC 601s, due to a lack of confidence in Motorola's commitment to the 88000-series architecture in the time leading up to the AIM alliance's transition to PowerPC.
1993–1996: NeXT Software, Inc.
In late 1991, anticipating its eventual withdrawal from the hardware business, NeXT started porting the NeXTSTEP operating system to Intel 80486-based IBM PC compatible computers. In January 1992, a demonstration of the port was shown at NeXTWorld Expo. By mid-1993 the port was complete, and version 3.1 (NeXTSTEP 486) was released.
NeXTSTEP 3.x was later ported to PA-RISC- and SPARC-based platforms, for a total of four versions: NeXTSTEP/NeXT (for NeXT's own hardware), NeXTSTEP/Intel, NeXTSTEP/PA-RISC, and NeXTSTEP/SPARC. Although the latter three ports were not widely used, NeXTSTEP gained popularity at institutions such as First Chicago NBD, Swiss Bank Corporation, O'Connor and Company, and other organizations, owing to its programming model. The software was used by many U.S. government agencies, including the United States Naval Research Laboratory, the National Security Agency, the Advanced Research Projects Agency, the Central Intelligence Agency, and the National Reconnaissance Office. Some IBM PC clone vendors offered somewhat customized hardware solutions that were delivered running NeXTSTEP on Intel, such as the Elonex NextStation and the Canon object.station 41.
In 1993, NeXT withdrew from the hardware business, and the company was renamed NeXT Software, Inc.; 230 of the 530 employees were laid off. NeXT negotiated to sell the hardware business, including the Fremont factory, to Canon, which later pulled out of the deal. Work on the PowerPC machines was stopped, along with all hardware production. Sun CEO Scott McNealy announced plans in 1993 to invest $10 million and use NeXT software in future Sun systems. NeXT partnered with Sun to create OpenStep, a programming environment consisting of NeXTSTEP's application layer hosted on a third-party operating system. In 1994, Microsoft and NeXT collaborated on a Windows NT port of OpenStep, which was never released.
After exiting the hardware business, NeXT focused on other operating systems, in effect returning to its original business plan. New products based on OpenStep were released, including OpenStep Enterprise, a version for Microsoft's Windows NT. NeXT also launched WebObjects, a platform for building large-scale dynamic web applications. It failed to achieve wide popularity, partly because of its initial high price of US$50,000, but it remains the first and most prominent early example of a web application server enabling dynamic page generation based on user interactions rather than static content. The platform was later bundled with macOS Server and Xcode, removed from them in 2009, and discontinued in 2016. It was used for a period of time by many large businesses, including Dell, Disney, WorldCom, and the BBC, and would eventually power Apple's iTunes Store and much of its corporate website.
1996–2006: Acquisition by Apple
On December 20, 1996, Apple Computer announced its intention to acquire NeXT. Apple paid $429 million in cash, which went to the initial investors, and 1.5 million Apple shares, which went to Jobs, who was deliberately not given cash for his part in the deal. The main purpose of the acquisition was to use NeXTSTEP as a foundation to replace the dated classic Mac OS. The deal was finalized on February 7, 1997, bringing Jobs back to Apple as a consultant; he was later appointed interim CEO. In 2000, Jobs took the CEO position as a permanent assignment, holding it until his resignation on August 24, 2011, shortly before his death on October 5, 2011.
Several NeXT executives replaced their Apple counterparts when Jobs restructured the company's board of directors. Over the next five years the NeXTSTEP operating system was ported to the PowerPC architecture, while an Intel port and an OpenStep Enterprise toolkit for Windows were also produced. The resulting operating system was code-named Rhapsody, and the cross-platform toolkit was called "Yellow Box". For backward compatibility, Apple added the "Blue Box" to Rhapsody, allowing existing Mac applications to run in a self-contained cooperative multitasking environment.
A server version of the new operating system was released as Mac OS X Server 1.0 in 1999, and the first consumer version, Mac OS X 10.0, in 2001. The OpenStep developer toolkit was renamed Cocoa. Rhapsody's Blue Box was renamed Classic Environment and changed to run applications full-screen without requiring a separate window. Apple included an updated version of the original Macintosh toolbox, called Carbon, that gave existing Mac applications access to the environment without the constraints of Blue Box. Some of NeXTSTEP's interface features are used in Mac OS X, including the Dock, the Services menu, the Finder's "Column" view, and the Cocoa text system.
NeXTSTEP's processor-independent capabilities were retained in Mac OS X, leading to both PowerPC and Intel-x86 versions (although only PowerPC versions were publicly available before 2006). Apple moved to Intel processors by August 2006.
Corporate culture and community
Jobs created a different corporate culture at NeXT in terms of facilities, salaries, and benefits. He had experimented with some structural changes at Apple, but at NeXT he abandoned conventional corporate structures, instead making a "community" with "members" rather than employees. There were only two salary levels at NeXT until the early 1990s: team members who joined before 1986 were paid more than those who joined afterward, which caused a few awkward situations where managers were paid less than their employees. Later, employees were given performance reviews and raises every six months. To foster openness, all employees had full access to the payrolls, although few ever took advantage of the privilege. NeXT's health insurance plan offered benefits not only to married couples but also to unmarried and same-sex couples, although the latter privilege was later withdrawn because of insurance complications. The payroll schedule also differed from that of other companies in Silicon Valley at the time: instead of being paid twice a month in arrears, employees were paid once a month in advance.
Jobs found office space in Palo Alto, California, at 3475 Deer Creek Road, occupying a glass-and-concrete building that featured a staircase designed by the architect I. M. Pei. The first floor had hardwood flooring and large worktables where the workstations would be assembled. To avoid inventory errors, NeXT used the just-in-time (JIT) inventory strategy. The company contracted out for all major components, such as mainboards and cases, and had the finished components shipped to the first floor for assembly. On the second floor was office space with an open floor plan. The only enclosed rooms were Jobs's office and a few conference rooms.
As NeXT expanded, more office space was needed. The company rented an office at 800 and 900 Chesapeake Drive, in Redwood City, also designed by Pei. The architectural centerpiece was a "floating" staircase with no visible supports. The open floor plan was retained, with furnishings that were luxurious, such as $5,000 chairs, $10,000 sofas, and Ansel Adams prints.
NeXT's Palo Alto office was subsequently occupied by Internet Shopping Network (a subsidiary of Home Shopping Network) in 1994, and later by SAP AG. Its Redwood City office was later occupied by ApniCure and OncoMed Pharmaceuticals Inc.
The first issue of NeXTWORLD magazine was printed in 1991. It was edited by Michael Miley and, later, Dan Ruby and was published in San Francisco by Integrated Media. It was the only mainstream periodical to discuss NeXT computers, their operating system, and NeXT application software. The publication was discontinued in 1994 after only four volumes. A developer conference, NeXTWORLD Expo, was held in 1991 and 1992 at the San Francisco Civic Center and in 1993 and 1994 at the Moscone Center in San Francisco, with Jobs as the keynote speaker.
Legacy
Though not very profitable, the company had a wide-ranging impact on the computer industry. Object-oriented programming and graphical user interfaces became more common after the 1988 launch of the NeXT Computer and NeXTSTEP. The technologically successful platform was often cited as a trendsetter as other companies started to emulate NeXT's object-oriented system.
Widely seen as a response to NeXT, Microsoft announced the Cairo project in 1991; the Cairo specification included similar object-oriented user-interface features for a coming consumer version of Windows NT. Although Cairo was ultimately abandoned, some elements were integrated into other projects.
By 1993, Taligent was considered by the press to be a competitor in objects and operating systems, even without any product release, with NeXT being a main point of comparison. For the first few years, Taligent's theoretical innovation was often compared to NeXT's older but mature and commercially established platform, but Taligent's debut release in 1995 was called "too little, too late", especially when compared with NeXT.
Several developers used the NeXT platform to write pioneering programs. For example, in 1990, the computer scientist Tim Berners-Lee used a NeXT Computer to develop the first web browser and web server. The games Doom and Quake were developed by id Software on NeXT computers. Other commercial programs released for NeXT computers included Altsys Virtuoso, a vector-drawing program with page-layout features that was ported to Mac OS and Microsoft Windows as Aldus FreeHand v4, and the Lotus Improv spreadsheet program.
See also
NeXT character set
Multi-architecture binary
Notes
References
Further reading
External links
Defunct computer companies based in California
Defunct software companies of the United States
Steve Jobs
Apple Inc. acquisitions
American companies established in 1985
American companies disestablished in 1997
Technology companies based in the San Francisco Bay Area
Computer companies established in 1985
Computer companies disestablished in 1997
Companies based in Redwood City, California
1985 establishments in California
1997 disestablishments in California
Defunct companies based in the San Francisco Bay Area
Privately held companies based in California
1997 mergers and acquisitions
Defunct computer hardware companies |
32478 | https://en.wikipedia.org/wiki/Vim%20%28text%20editor%29 | Vim (text editor) | Vim (a contraction of Vi IMproved) is a free and open-source, screen-based text editor program for Unix. It is an improved clone of Bill Joy's vi. Vim's author, Bram Moolenaar, derived Vim from a port of the Stevie editor for Amiga and released a version to the public in 1991. Vim is designed for use both from a command-line interface and as a standalone application in a graphical user interface. Vim is released under the Vim license, which includes some charityware clauses encouraging users who enjoy the software to consider donating to children in Uganda. The Vim license is compatible with the GNU General Public License through a special clause allowing distribution of modified copies under the GNU GPL version 2.0 or later.
Since its release for the Amiga, cross-platform development has made it available on many other systems. In 2006, it was voted the most popular editor amongst Linux Journal readers; in 2015 the Stack Overflow developer survey found it to be the third most popular text editor, and in 2019 the fifth most popular development environment.
History
Vim's forerunner, Stevie (ST Editor for VI Enthusiasts), was created by Tim Thompson for the Atari ST in 1987 and further developed by Tony Andrews and G.R. (Fred) Walter.
Basing his work on Stevie, Bram Moolenaar began working on Vim for the Amiga computer in 1988, with the first public release (Vim v1.14) in 1991.
At the time of its first release, the name "Vim" was an acronym for "Vi IMitation", but this changed to "Vi IMproved" late in 1993.
Interface
Like vi, Vim's interface is not based on menus or icons but on commands given in a text user interface; its GUI mode, gVim, adds menus and toolbars for commonly used commands but the full functionality is still expressed through its command line mode. Vi (and by extension Vim) tends to allow a typist to keep their fingers on the home row, which can be an advantage for a touch typist.
Vim has a built-in tutorial for beginners called vimtutor. It is usually installed along with Vim, but exists as a separate executable and can be run with a shell command. There is also the Vim Users' Manual, which details Vim's features, and a FAQ. The manual can be read from within Vim, or found online.
Vim also has a built-in help facility (using the :help command) that allows users to query and navigate through commands and features.
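A few representative queries illustrate how precisely help topics can be addressed (these are standard help invocations; the text after each hyphen is a description, not part of the command):
:help             - open the main help window
:help x           - help for the normal-mode "x" command
:help 'textwidth' - help for an option (note the single quotes)
:help i_CTRL-W    - help for CTRL-W as typed in insert mode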
Modes
Vim has 12 different editing modes, 6 of which are variants of the 6 basic modes. The most important modes are listed below, followed by a brief key-sequence sketch of how they are entered and left:
Normal mode – used for editor commands. This is also the default mode, unless the insertmode option is specified.
Visual mode – similar to normal mode, but used to highlight areas of text. Normal commands can be run on the highlighted area, for instance to move or edit a selection.
Insert mode – similar to editing in most modern editors. In this mode, typed text is inserted into the buffer.
Command-line or Cmdline mode – supports a single line input at the bottom of the Vim window. Normal commands (beginning with :), and some other keys for specific actions (including pattern search and the filter command) activate this mode. On completion of the command, Vim returns to the previous mode.
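A minimal sketch of moving between these modes (each key is pressed from normal mode unless noted):
i      - enter insert mode, placing inserted text before the cursor
<Esc>  - leave insert or visual mode and return to normal mode
v      - begin a characterwise selection in visual mode
:      - enter command-line mode; for example, :wq writes the file and quits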
Customization
Vim is highly customizable and extensible, making it an attractive tool for users who demand a large amount of control and flexibility over their text editing environment. Text input is facilitated by a variety of features designed to increase keyboard efficiency. Users can execute complex commands with "key mappings," which can be customized and extended. The "recording" feature allows for the creation of macros to automate sequences of keystrokes and call internal or user-defined functions and mappings. Abbreviations, similar to macros and key mappings, facilitate the expansion of short strings of text into longer ones and can also be used to correct mistakes. Vim also features an "easy" mode for users looking for a simpler text editing solution.
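As a brief sketch, a mapping and an abbreviation might be defined in a user's configuration like this (the particular mapping and abbreviation are illustrative choices, not Vim defaults):
" in normal mode, make ;w save the current file
nnoremap ;w :w<CR>
" while typing in insert mode, expand the misspelling "teh" to "the"
iabbrev teh the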
There are many plugins available that extend or add new functionality to Vim. These plugins are usually written in Vim's internal scripting language, vimscript (also known as VimL), but can be written in other languages as well.
There are projects bundling together complex scripts and customizations and aimed at turning Vim into a tool for a specific task or adding a major flavour to its behaviour. Examples include Cream, which makes Vim behave like a click-and-type editor, or VimOutliner, which provides a comfortable outliner for users of Unix-like systems.
Features and improvements over vi
Vim has a vi compatibility mode, but when that mode is not used, Vim has many enhancements over vi. Even in compatibility mode, however, Vim is not entirely compatible with vi as defined in the Single Unix Specification and POSIX (for example, Vim does not support vi's open mode, only visual mode). Vim's developers state that it is "very much compatible with Vi".
Vim's enhancements include completion; comparison and merging of files (known as vimdiff); a comprehensive integrated help system; extended regular expressions; and scripting languages, both native and through alternative interpreters such as Perl, Python, Ruby, and Tcl, including support for plugins. It adds a graphical user interface (known as gvim), limited integrated development environment-like features, mouse interaction (both with and without the GUI), folding, and editing of compressed or archived files in gzip, bzip2, zip, and tar format, as well as of files over network protocols such as SSH, FTP, and HTTP. Further enhancements include session state preservation, spell checking, split (horizontal and vertical) and tabbed windows, Unicode and other multi-language support, syntax highlighting, command, search, and cursor position histories that persist across sessions, multiple-level and branching undo/redo history which can persist across editing sessions, and visual mode.
While running, Vim saves the user's changes in a swap file with the ".swp" extension. The swap file can be used to recover after a crash. If a user tries to open a file and a swap file already exists, Vim will warn the user, and if the user proceeds, Vim will use a swap file with the extension ".swo" (or, if there is already more than one swap file, ".swn", ".swm", etc.). This feature can be disabled.
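For example (standard commands; "letter.txt" is a placeholder file name, and the text after each hyphen is a description):
:set noswapfile   - stop Vim from creating a swap file for the current buffer
vim -r letter.txt - from the shell, recover unsaved changes from a swap file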
Vim script
Vim script (also called Vimscript or VimL) is the scripting language built into Vim. Based on the ex editor language of the original vi editor, early versions of Vim added commands for control flow and function definitions. Since version 7, Vim script also supports more advanced data types such as lists and dictionaries and (a simple form of) object-oriented programming. Built-in functions such as map() and filter() allow a basic form of functional programming, and Vim script has lambda since version 8.0. Vim script is mostly written in an imperative programming style.
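A small sketch of these features, runnable with :source in Vim 8.0 or later (the variable names are arbitrary):
" a list and a dictionary, data types introduced in Vim 7
let langs = ['c', 'vim', 'python']
let years = {'vi': 1976, 'vim': 1991}
echo years['vim']
" map() with a string expression; v:val names the current element
echo map(copy(langs), 'toupper(v:val)')
" filter() with a Vim 8.0 lambda taking the index and the value
echo filter(copy(langs), {idx, val -> val != 'c'})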
Vim macros can contain a sequence of normal-mode commands, but can also invoke ex commands or functions written in Vim script for more complex tasks. Almost all extensions (called plugins or more commonly scripts) of the core Vim functionality are written in Vim script, but plugins can also utilize other languages like Perl, Python, Lua, Ruby, Tcl, or Racket. These plugins can be installed manually, or through a plugin manager such as Vundle, Pathogen, or Vim-Plug.
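For instance, a normal-mode macro that prefixes lines as list items might be recorded as follows (the text after each hyphen describes the keys and is not typed):
qa        - begin recording keystrokes into register a
0         - move to the start of the line
i- <Esc>  - insert "- " at the start of the line, then return to normal mode
j         - move down to the next line
q         - stop recording
5@a       - replay the macro five times, prefixing the next five lines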
Vim script files are stored as plain text, similarly to other code, and the filename extension is usually .vim. One notable exception to that is Vim's config file, .vimrc.
Examples
" This is the Hello World program in Vim script.
echo "Hello, world!"
" This is a simple while loop in Vim script.
let i = 1
while i < 5
echo "count is" i
let i += 1
endwhile
unlet i
Availability
Whereas vi was originally available only on Unix operating systems, Vim has been ported to many operating systems including AmigaOS (the initial target platform), Atari MiNT, BeOS, DOS, Windows starting from Windows NT 3.1, OS/2, OS/390, MorphOS, OpenVMS, QNX, RISC OS, Linux, BSD, and Classic Mac OS. Also, Vim is shipped with every copy of Apple macOS.
Independent ports of Vim are available for Android and iOS.
Neovim
Neovim is a fork, with additions, of Vim that strives to improve the extensibility and maintainability of Vim. Neovim has the same configuration syntax as Vim; thus the same configuration file can be used with both editors, although there are minor differences in details of options. If the added features of Neovim are not used, Neovim is compatible with almost all of Vim's features.
The Neovim project was started in 2014, with some Vim community members offering early support of the high-level refactoring effort to provide better scripting, plugins, and integration with modern GUIs. The project is free software and its source code is available on GitHub.
Neovim ran a successful fundraiser in March 2014 that supported at least one full-time developer. Several frontends are under development, making use of Neovim's capabilities.
The Neovim editor is available in Ubuntu's Personal Package Archives and through several conventional package managers, making it possible to install it on a variety of operating systems.
See also
Learning the vi and Vim Editors, a tutorial book for vi and vim, published by O'Reilly Media
Editor war – the rivalry between users of the Emacs and vi (Vim) text editors
List of text editors
Comparison of text editors
Vimperator
Pentadactyl
References
External links
1991 software
Amiga software
BeOS text editors
Classic Mac OS text editors
Computer science in the Netherlands
Cross-platform free software
DOS text editors
Free file comparison tools
Free software programmed in C
Free text editors
Information technology in the Netherlands
Linux text editors
MacOS text editors
MorphOS software
OpenVMS text editors
OS/2 text editors
Termcap
Unix text editors
Vi
Windows text editors
Text editors that use GTK
Free HTML editors
Linux integrated development environments
Hex editors
Free integrated development environments
Free integrated development environments for Python
Free and open-source software
Command-line software
Console applications |
1162908 | https://en.wikipedia.org/wiki/Whitesmiths | Whitesmiths | Whitesmiths Ltd. was a software company founded in New York City by P. J. Plauger, Mark Krieger and Gabriel Pham, and last located in Westford, Massachusetts. It sold a Unix-like operating system called Idris, as well as the first commercial C compiler, Whitesmiths C.
The Whitesmiths compiler, first written for the PDP-11, was released in 1978 and compiled a version of C similar to that accepted by Version 6 Unix (Dennis Ritchie's original C compiler). It was an entirely new implementation, borrowing no code from Unix. Today, it is mainly remembered for lending its name to a particular indentation style, originally used in the code examples that accompanied it and illustrated below. Whitesmiths' first customer for the C compiler was Fischer & Porter, a process control company then located in Warminster, Pennsylvania. Besides the PDP-11, the compiler had code generators for the Intel 8080/Zilog Z80, Motorola MC68000, and VAX-11, and it was commonly used as a cross compiler. Whitesmiths also developed a Pascal front end for the compiler that emitted C-language code for input to the C compiler.
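In the Whitesmiths style, braces are indented to the level of the statements they enclose. A short C sketch (the function itself is invented for the example):
#include <stdio.h>

/* Whitesmiths indentation: the braces sit at the same level as
   the statements they surround. */
static void greet(int times)
    {
    int i;
    for (i = 0; i < times; i++)
        {
        printf("hello\n");
        }
    }

int main(void)
    {
    greet(2);
    return 0;
    }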
By 1983 Whitesmiths was one of several vendors of Unix-like operating systems. That year Whitesmiths formed a technical and business alliance with France-based COSMIC Software. At that time, Whitesmiths published 16-bit compilers for machines like the PDP-11, while COSMIC published 8-bit compilers for Intel and Motorola CPUs; the alliance improved compilers for both markets. Whitesmiths was actively involved in developing the original ANSI C standard, supplying several members to the standards committee and hosting some technical sessions. It was one of the first suppliers of an ANSI C compliant compiler.
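The change that standardization brought is visible in function definitions. A sketch for comparison (the add function is invented; the first form is the old style accepted by Version 6-era compilers, the second is the ANSI C form with a prototype):
/* pre-ANSI (K&R) definition */
long add(a, b)
long a, b;
    {
    return a + b;
    }

/* equivalent ANSI C definition */
long add(long a, long b)
    {
    return a + b;
    }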
The company's president from 1978 to 1988 was P. J. Plauger. Whitesmiths merged with Intermetrics in December 1988, leading to further mergers and acquisitions.
References
External links
Whitesmiths Ltd. C Programmers' Manual
Official homepage of Cosmic Software
Software companies of the United States
Unix history |
22178437 | https://en.wikipedia.org/wiki/Daseuplexia | Daseuplexia | Daseuplexia is a genus of moths of the family Noctuidae. The genus was erected by George Hampson in 1906.
Species
Daseuplexia brevipennata Hreblay, Peregovits & Ronkay, 1999 (northern Vietnam)
Daseuplexia chloromagna Hreblay & Ronkay, 1998 (Nepal)
Daseuplexia duplicata Hreblay & Ronkay, 1998 (Nepal)
Daseuplexia erlangi Benedek, Babics & Saldaitis, 2011
Daseuplexia inexpecta Ronkay, Ronkay, Gyulai & Hacker, 2010
Daseuplexia issekutzi Ronkay, Ronkay, Gyulai & Hacker, 2010
Daseuplexia khami Benedek, Babics & Saldaitis, 2011 (Sichuan)
Daseuplexia lagenifera (Moore, 1882) (Darjeeling)
Daseuplexia lageniformis (Hampson, 1894) (Sikkim)
Daseuplexia majseae Benedek, Babics & Saldaitis, 2013 (Sichuan)
Daseuplexia marmorata Hreblay & Ronkay, 1998 (Nepal)
Daseuplexia minshana Benedek, Babics & Saldaitis, 2013 (northern Sichuan)
Daseuplexia nekrasovi Ronkay, Ronkay, Gyulai & Hacker, 2010
Daseuplexia oroplexina Ronkay, Ronkay, Gyulai & Hacker, 2010
Daseuplexia pittergabori Ronkay, Ronkay, Gyulai & Hacker, 2010
Daseuplexia shangrilai Benedek, Babics & Saldaitis, 2011 (Yunnan)
Daseuplexia tertia Hreblay & Ronkay, 1999 (Nepal)
Daseuplexia unicata Ronkay, Ronkay, Gyulai & Hacker, 2010
Daseuplexia viridicincta Hreblay & Ronkay, 1998 (Nepal)
References
Cuculliinae |
62382429 | https://en.wikipedia.org/wiki/Command%20Control%20%28event%29 | Command Control (event) | Command Control (also called CMD CTRL) is an annual, multi-day summit organized by Messe München that focuses primarily on cybersecurity topics. The event was organized for the first time in 2018. The next summit was supposed to take place in Munich from March 3 to March 4, 2020. However, the corona crisis led to the cancellation of Command Control in 2020. In addition, Messe München has decided to discontinue Command Control as an independent event.
History
A survey was commissioned by the organisers before the event took place. It showed that every second company in Germany was the target of a cyberattack in 2017, and that many companies pay too little attention to the role of their employees in defending against cyber threats. The first summit took place from September 20 to September 22, 2018 in Munich, Germany.
An index, the Command Control Cybersecurity-Index 2020, was created in 2019 based on surveys. According to the index, 78 percent of respondents consider a change of strategy in their company to be necessary when it comes to cybersecurity. A second summit was planned for March 3 to March 4, 2020 before its cancellation.
Speakers (selection)
See also
Cyberattack
Cybercrime
Computer security (Cybersecurity)
Hacker
References
External links
Official Website
Computer security conferences |
2215 | https://en.wikipedia.org/wiki/Sid%20Meier%27s%20Alpha%20Centauri | Sid Meier's Alpha Centauri | Sid Meier's Alpha Centauri is a 4X video game, considered a spiritual sequel to the Civilization series. Set in a science fiction depiction of the 22nd century, the game begins as seven competing ideological factions land on the planet Chiron ("Planet") in the Alpha Centauri star system. As the game progresses, Planet's growing sentience becomes a formidable obstacle to the human colonists.
Sid Meier, designer of Civilization, and Brian Reynolds, designer of Civilization II, developed Alpha Centauri after they left MicroProse to join with Jeff Briggs in creating a new video game developer: Firaxis Games. Electronic Arts released both Alpha Centauri and its expansion, Sid Meier's Alien Crossfire, in 1999. The following year, Aspyr Media ported both titles to Classic Mac OS while Loki Software ported them to Linux.
Alpha Centauri features improvements on Civilization II's game engine, including simultaneous multiplayer movement, social engineering, climate, customizable units, alien native life, additional diplomatic and spy options, additional ways to win, and greater moddability. Alien Crossfire introduces five new human and two non-human factions, as well as additional technologies, facilities, secret projects, native life, unit abilities, and a victory condition.
The game received wide critical acclaim, being compared favorably to Civilization II. Critics praised its science fiction storyline (comparing the plot to works by Stanley Kubrick, Frank Herbert, Arthur C. Clarke, and Isaac Asimov), the in-game writing, the voice acting, the user-created custom units, and the depth of the technology tree. Alpha Centauri also won several awards for best game of the year and best strategy game of the year.
Synopsis
Setting
Space-race victories in the Civilization series conclude with a journey to Alpha Centauri. Beginning with that premise, the Alpha Centauri narrative starts in the 22nd century, after the United Nations sends "Unity", a colonization mission, to Alpha Centauri's planet Chiron ("Planet"). Unbeknownst to humans, advanced extraterrestrials ("Progenitors") had been conducting experiments in vast distributed nervous systems, culminating in a planetary biosphere-sized presentient nervous system ("Manifold") on Chiron, leaving behind monoliths and artifacts on Planet to guide and examine the system's growth. Immediately prior to the start of the game, a reactor malfunction on the Unity spacecraft wakes the crew and colonists early and irreparably severs communications with Earth. After the captain is assassinated, the most powerful leaders on board build ideological factions with dedicated followers, conflicting agendas for the future of mankind, and "desperately serious" commitments. As the ship breaks up, seven escape pods, each containing a faction, are scattered across Planet.
In the Alien Crossfire expansion pack, players learn that alien experiments led to disastrous consequences at Tau Ceti, creating a hundred-million-year evolutionary cycle that ended with the eradication of most complex animal life in several neighboring inhabited star systems. After the disaster (referred to by Progenitors as the "Tau Ceti Flowering"), the Progenitors split into two factions: the Manifold Caretakers, opposed to further experimentation and dedicated to preventing another Flowering; and the Manifold Usurpers, favoring further experimentation and intending to induce a controlled Flowering in Alpha Centauri's Planet. In Alien Crossfire, these factions compete along with the human factions for control over the destiny of Planet.
Characters and factions
The game focuses on the leaders of seven factions, chosen by the player from the 14 possible leaders in Alpha Centauri and Alien Crossfire, and Planet (voiced by Alena Kanka). The characters are developed from the faction leaders' portraits, the spoken monologues accompanying scientific discoveries and the "photographs in the corner of a commlink – home towns, first steps, first loves, family, graduation, spacewalk." The leaders in Alpha Centauri comprise: Lady Deirdre Skye, a Scottish activist (voiced by Carolyn Dahl), of Gaia's Stepdaughters; Chairman Sheng-Ji Yang, a Chinese Legalist official (voiced by Lu Yu), of the Human Hive; Academician Prokhor Zakharov, a Russian academic (voiced by Yuri Nesteroff) of the University of Planet; CEO Nwabudike Morgan, a Namibian businessman (voiced by Regi Davis), of Morgan Industries; Colonel Corazon Santiago, a Puerto Rican militiawoman (voiced by Wanda Niño), of the Spartan Federation; Sister Miriam Godwinson, an American minister and social psychologist (voiced by Gretchen Weigel), of the Lord's Believers; and Commissioner Pravin Lal, an Indian surgeon and diplomat (voiced by Hesh Gordon), of the Peacekeeping Forces.
The player controls one of the leaders and competes against the others to colonize and conquer Planet. The Datalinks (voiced by Robert Levy and Katherine Ferguson) are minor characters who provide information to the player. Each faction excels at one or two important aspects of the game and follows a distinct philosophical belief, such as technological utopianism, Conclave Christianity, "free-market" capitalism, militarist survivalism, Chinese Legalism, U.N. Charter humanitarianism, or Environmentalist Gaia philosophy. The game takes place on Planet, with its "rolling red ochre plains" and "bands of lonely terraformed green".
The seven additional faction leaders in Alien Crossfire are Prime Function Aki Zeta-Five, a Norwegian research assistant-turned-cyborg (voiced by Allie Rivenbark), of The Cybernetic Consciousness; Captain Ulrik Svensgaard, an American fisherman and naval officer (voiced by James Liebman), of The Nautilus Pirates; Foreman Domai, an Australian labor leader (voiced by Frederick Serafin), of The Free Drones; Datajack Sinder Roze, a Trinidadian hacker (voiced by Christine Melton), of The Data Angels; Prophet Cha Dawn, a human born on Planet (voiced by Stacy Spenser) of The Cult of Planet; Guardian Lular H'minee, a Progenitor leader (voiced by Jeff Gordon), of The Manifold Caretakers; and Conqueror Judaa Maar, a Progenitor leader (voiced by Jeff Gordon), of The Manifold Usurpers.
Plot
The story unfolds via the introduction video, explanations of new technologies, videos obtained for completing secret projects, interludes, and cut-scenes. The native life consists primarily of simple wormlike alien parasites and a type of red fungus that spreads rapidly via spores. The fungus is difficult to traverse, provides invisibility for the enemy, provides few resources, and spawns "mindworms" that attack population centres and military units by neurally parasitising them. Mindworms can eventually be captured and bred in captivity and used as terroristic bioweapons, and the player eventually discovers that the fungus and mindworms can think collectively.
A voice intrudes into the player's dreams and soon waking moments, threatening more attacks if the industrial pollution and terraforming by the colonists is not reversed. The player discovers that Planet is a dormant semi-sentient hive organism that will soon experience a metamorphosis which will destroy all human life. To counter this threat, the player or a computer faction builds "The Voice of Alpha Centauri" secret project, which artificially links Planet's distributed nervous system into the human Datalinks, delaying Planet's metamorphosis into full self-awareness but incidentally increasing its ultimate intelligence substantially by giving it access to all of humanity's accumulated knowledge. Finally, the player or a computer faction embraces the "Ascent to Transcendence" in which humans too join their brains with the hive organism in its metamorphosis to godhood. Thus, Alpha Centauri closes "with a swell of hope and wonder in place of the expected triumphalism", reassuring "that the events of the game weren’t the entirety of mankind’s future, but just another step."
Gameplay
Alpha Centauri, a turn-based strategy game with a science fiction setting, is played from an isometric perspective. Many game features from Civilization II are present, but renamed or slightly tweaked: players establish bases (Civilization II's cities), build facilities (buildings) and secret projects (Wonders of the World), explore territory, research technology, and conquer other factions (civilizations). In addition to conquering all non-allied factions, players may also win by obtaining votes from three quarters of the total population (similar to Civilization IV's Diplomatic victory), "cornering the Global Energy Market", completing the Ascent to Transcendence secret project, or, for alien factions, constructing six Subspace Generators.
The main map (the upper two thirds of the screen) is divided into squares, on which players can establish bases, move units and engage in combat. Through terraforming, players may modify the effects of the individual map squares on movement, combat and resources. Resources are used to feed the population, construct units and facilities, and supply energy. Players can allocate energy between research into new technology and energy reserves. Unlike Civilization II, new technology grants access to additional unit components rather than pre-designed units, allowing players to design and re-design units as their factions' priorities shift. Energy reserves allow the player to upgrade units, maintain facilities, and attempt to win by the Global Energy Market scenario. Bases are military strongpoints and objectives that are vital for all winning strategies. They produce military units, house the population, collect energy, and build secret projects and Subspace Generators. Facilities and secret projects improve the performance of individual bases and of the entire faction.
In addition to terraforming, optimizing individual base performance and building secret projects, players may also benefit their factions through social engineering, probe teams, and diplomacy. Social engineering modifies the ideologically based bonuses and penalties forced by the player's choice of faction. Probe teams can sabotage and steal information, units, technology, and energy from enemy bases, while diplomacy lets the player create coalitions with other factions. It also allows the trade or transfer of units, bases, technology and energy. The Planetary Council, similar to the United Nations Security Council, takes Planet-wide actions and determines population victories.
In addition to futuristic technological advances and secret projects, the game includes alien life, structures and machines. "Xenofungus" and "sea fungus" provide movement, combat, and resource penalties, as well as concealment for "mind worms" and "spore launchers". Immobile "fungal towers" spawn native life. Native life, including the seaborne "Isles of the Deep" and "Sealurks" and airborne "Locusts of Chiron", use psionic combat, an alternate form of combat which ignores weapons and armor. Monoliths repair units and provide resources; artifacts yield new technology and hasten secret projects; landmarks provide resource bonuses; and random events add danger and opportunity. Excessive development leads to terraforming-destroying fungus blooms and new native life.
Alpha Centauri provides a single-player mode and supports customization and multiplayer. Players may customize the game by choosing options at the beginning of the game, using the built-in scenario and map editors, and modifying Alpha Centauri's game files. In addition to a choice of seven (or 14 in Alien Crossfire) factions, pre-game options include scenario game, customized random map, difficulty level, and game rules that include victory conditions, research control, and initial map knowledge. The scenario and map editors allow players to create customized scenarios and maps. The game's basic rules, diplomatic dialog, and the factions' starting abilities are stored in text files, which "the designers have done their best to make it reasonably easy to modify..., even for non-programmers." Alpha Centauri supports play by email ("PBEM") and a TCP/IP mode featuring simultaneous movement, and introduces direct player-to-player negotiation, allowing the unconstrained trade of technology, energy, maps, and other elements.
Development
Inspirations
In 1996, MicroProse released the lauded Civilization II, designed by Brian Reynolds. Spectrum Holobyte, which owned MicroProse at the time, opted to consolidate its business under the MicroProse name, moving the company from Maryland to California by the time the game shipped and laying off several MicroProse employees. Disagreements between the new management and its employees prompted Reynolds, Jeff Briggs, and Sid Meier (designer of the original Civilization) to leave MicroProse and found Firaxis. Although unable to use the same intellectual property as Civilization II, the new company felt that players wanted "a new sweeping epic of a turn-based game". Having just completed a game of human history up to the present, they wanted a fresh topic and chose science fiction.
With no previous experience in science fiction games, the developers believed future history was a fitting first foray. For the elements of exploring and terraforming an alien world, they chose a plausible near future situation of a human mission to colonize the solar system's nearest neighbour and human factions. Reynolds researched science fiction for the game's writing. His inspiration included "classic works of science fiction", including Frank Herbert's The Jesus Incident and Hellstrom's Hive, A Fire Upon the Deep by Vernor Vinge, and The Mote in God's Eye by Larry Niven and Jerry Pournelle for alien races; Kim Stanley Robinson's Red Mars, Slant by Greg Bear, and Stephen R. Donaldson's The Real Story for future technology and science; and Dune by Herbert and Bear's Anvil of Stars for negative interactions between humans.
Alpha Centauri set out to capture the whole sweep of humanity's future, including technology, futuristic warfare, social and economic development, the future of the human condition, spirituality, and philosophy. Reynolds also said that "getting philosophy into the game" was one of the attractions of the game. Believing good science fiction thrives on constraint, the developers began with near-future technologies. As they proceeded into the future, they tried to present a coherent, logical, and detailed picture of future developments in physics, biology, information technology, economics, society, government, and philosophy. Alien ecologies and mysterious intelligences were incorporated into Alpha Centauri as external "natural forces" intended to serve as flywheels for the backstory and a catalyst for many player intelligences. Chris Pine, creator of the in-game map of Planet, strove to make Planet look like a real planet, which resulted in evidence of tectonic action. Another concern was that Planet matched the story, which resulted in the fungus being connected across continents, as it is supposed to be a gigantic neural network.
Terraforming is a natural outgrowth of colonizing an alien world. The first playable prototype was just a map generator that tested climate changes during the game. This required the designers to create a world-builder program and climatic model far more powerful than anything they had done before. Temperature, wind, and rainfall patterns were modeled in ways that allow players to make changes: for example, creating a ridge-line and then watching the effects. In addition to raising terrain, the player can also divert rivers, dig huge boreholes into the planet's mantle, and melt ice caps.
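The ridge-line example suggests a rain-shadow effect. The toy C model below is only an illustration of the idea, not Firaxis code: air carrying moisture moves across a row of tiles, dropping more rain on rising ground and arriving drier on the far side of a ridge.
#include <stdio.h>

#define WIDTH 8

int main(void)
{
    int elevation[WIDTH] = {0, 1, 2, 5, 6, 3, 1, 0}; /* a ridge mid-row */
    double moisture = 10.0;   /* moisture carried by incoming air */
    double rainfall[WIDTH];
    int x;

    for (x = 0; x < WIDTH; x++)
    {
        /* rising ground wrings a larger share of moisture out as rain */
        int rise = (x > 0) ? elevation[x] - elevation[x - 1] : 0;
        double share = 0.1 + 0.1 * (rise > 0 ? rise : 0);
        double drop = moisture * (share > 1.0 ? 1.0 : share);
        rainfall[x] = drop;
        moisture -= drop;
    }
    for (x = 0; x < WIDTH; x++)
        printf("x=%d elevation=%d rain=%.2f\n", x, elevation[x], rainfall[x]);
    return 0;
}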
In addition to scientific advances, the designers speculated on the future development of human society. The designers allow the player to decide on a whole series of value choices and choose a "ruthless", "moderate", or "idealistic" stance. Reynolds said the designers don't promote a single "right" answer, instead giving each value choice positive and negative consequences. This design was intended to force the player to "think" and make the game "addictive". He also commented that Alpha Centauris fictional nature allowed them to draw their characters "a lot more sharply and distinctly than the natural blurring and greyness of history".
Chiron, the name of the planet, is the name of the only non-barbaric centaur in Greek mythology and an important loregiver and teacher for humanity. The name also pays homage to James P. Hogan's 1982 space opera novel Voyage from Yesteryear, in which a human colony is artificially planted by an automatic probe on a planet later named by colonists as Chiron. In the game, Chiron has two moons, named after the centaurs Nessus and Pholus, with the combined tidal force of Earth's Moon, and is the second planet out from Alpha Centauri A, the innermost planet being the Mercury-like planet named after the centaur Eurytion. Alpha Centauri B is also dubbed Hercules, a reference to him killing several centaurs in mythology, and the second star preventing the formation of larger planets. The arrival on Chiron is referred to as "Planetfall", which is a term used in many science fiction novels, including Robert A. Heinlein's Future History series, and Infocom's celebrated comic interactive fiction adventure Planetfall. Vernor Vinge's concept of technological singularity is the origin of the Transcendence concept. The game's cutscenes use montages of live-action video, CGI, or both; most of the former is from the 1992 experimental documentary Baraka.
Alpha Centauri
In July 1996, Firaxis began work on Alpha Centauri, with Reynolds heading the project. Meier and Reynolds wrote playable prototype code and Jason Coleman wrote the first lines of the development libraries. Because the development of Gettysburg took up most of Firaxis' time, the designers spent the first year prototyping the basic ideas. By late 1996, the developers were playing games on the prototype, and by the middle of the next year, they were working on a multiplayer engine. Although Firaxis intended to include multiplayer support in its games, an important goal was to create games with depth and longevity in single-player mode because they believed that the majority of players spend most of their time playing this way. Reynolds felt that smart computer opponents are an integral part of a classic computer game, and considered it a challenge to make them so. Reynolds' previous games omitted internet support because he believed that complex turn-based games with many player options and opportunities for player input are difficult to facilitate online.
Reynolds said that the most important principle of game design is for the designer to play the game as it is developed; he claimed that this was how a good artificial intelligence (AI) was built. To this end, he would track the decisions he made, and why he made them, as he played the game. The designer also watched what the computer players did, noting "dumb" actions and trying to discover why the computer made them. Reynolds then taught the computer his reasoning process so the AI could find the right choice when presented with several attractive possibilities. He said the AI for diplomatic personalities was the best he had done up to that point.
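That reasoning process resembles a utility-scoring loop, sketched below in C. The options, scoring terms, and weights are invented for illustration; they are not from the game's code.
#include <stdio.h>

/* Score each candidate action with hand-tuned weights, the way a
   designer might encode their own reasoning; the best score wins. */
struct option
{
    const char *name;
    double gain;  /* expected economic benefit */
    double risk;  /* expected military exposure */
};

int main(void)
{
    struct option options[3] = {
        {"found a new base", 0.9, 0.2},
        {"attack a neighbor", 0.4, 0.8},
        {"build a terraformer", 0.7, 0.1},
    };
    double w_gain = 1.0, w_risk = 0.6; /* invented tuning weights */
    int i, best = 0;
    double best_score = -1.0e9;

    for (i = 0; i < 3; i++)
    {
        double score = w_gain * options[i].gain - w_risk * options[i].risk;
        if (score > best_score)
        {
            best_score = score;
            best = i;
        }
    }
    printf("chosen action: %s\n", options[best].name);
    return 0;
}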
Doug Kaufman, a co-designer of Civilization II, was invited to join development as a game balancer. Reynolds credited Alpha Centauri's balance for its greater sense of urgency and more pressing pacing than in his earlier game, Sid Meier's Colonization. According to producer Timothy Train, in designing the strengths and weaknesses of the factions, the goal was to suggest, without requiring, certain strategies, and to give the player interesting and fun things to do without unbalancing the game. He did not want a faction to be dependent on a single strength, or any one faction's power to dominate the rest; Train felt that fun meant the factions always had something enjoyable to do with their attributes.
Around the summer of 1997, the staff began research on the scientific realities involved in interstellar travel. In late 1997, Bing Gordon—then Chief Creative Officer of Electronic Arts—joined the team, and was responsible for the Planetary Council, extensive diplomacy, and landmarks. A few months before the 1998 Electronic Entertainment Expo (E3), the team incorporated the Explore/Discover/Build/Conquer marketing campaign into the game. The game was announced in May 1998 at E3.
In the latter half of 1998, the team produced a polished and integrated interface, wrote the game manual and foreign language translations, painted the faction leader portraits and terrain, built the 3D vehicles and vehicle parts, and created the music. Michael Ely directed the Secret Project movies and cast the faction leaders. 25 volunteers participated in Firaxis' first public beta test. The beta testers suggested the Diplomatic and Economic victories and the Random Events.
The design team started with a very simple playable game. They strengthened the "fun" aspects and fixed or removed the unenjoyable ones, a process Sid Meier called "surrounding the fun". After the revision, they played it again, repeating the cycle of revision and play. Playing the game repeatedly and in-depth was a rule at Firaxis. In the single-player mode, the team tried extreme strategies to find any sure-fire paths to victory and to see how often a particular computer faction ends up at the bottom. The goal was a product of unprecedented depth, scope, longevity, and addictiveness, where the player is always challenged by the game to come up with new strategies with no all-powerful factions or unstoppable tactics. According to Reynolds, the process has been around since Sid Meier's early days at Microprose. At Firaxis, as iterations continue, they expand the group giving feedback, bringing in outside gamers with fresh perspectives. Alpha Centauri was the first Firaxis game with public beta testers.
Finally, Brian Reynolds discussed the use of the demo in the development process. The demo, originally a marketing tool released prior to the game, generated feedback, and many suggestions were incorporated into the retail version. According to Reynolds, the team improved the game's interface, added a couple of new features, fixed a few glitches, improved some rules, fine-tuned the game balance, and improved the AI. He added that they continued to release patches to enhance the game after its release.
In the months leading to the release of Alpha Centauri, multimedia producer Michael Ely wrote the 35 weekly episodes of Journey to Centauri detailing the splintering of the U.N. mission to Alpha Centauri.
Alien Crossfire
A month after Alpha Centauris February 1999 release, the Firaxis team began work on the expansion pack, Sid Meier's Alien Crossfire. Alien Crossfire features seven new factions (two that are non-human), new technologies, new facilities, new secret projects, new alien life forms, new unit special abilities, new victory conditions (including the new "Progenitor Victory") and several additional concepts and strategies. The development team included Train as producer and designer, Chris Pine as programmer, Jerome Atherholt and Greg Foertsch as artists, and Doug Kaufman as co-designer and game balancer.
The team considered several ideas, including a return to a post-apocalyptic earth and the conquest of another planet in the Alpha Centauri system, before deciding to keep the new title on Planet. The premise allowed them to mix and match old and new characters and delve into the mysteries of the monoliths and alien artifacts. The backstory evolved quickly, and the main conflict centered on the return of the original alien inhabitants. The idea of humans inadvertently caught up in an off-world civil war focused the story.
Train wanted to improve the "build" aspects, feeling that the god-game genre had always been heavily slanted towards the "conquer" end of the spectrum. He wanted to provide "builders" with the tools to construct an empire in the face of heated competition. The internet community provided "invaluable" feedback; the first "call for features" was posted around April 1999 and produced the Flechette Defense System, Algorithmic Enhancement, and The Nethack Terminus.
The team had several goals: factions should not be "locked-in" to certain strategies; players should have interesting things to do without unbalancing the game, and the factions must be fun to play. The team believed the "coolness" of the Progenitor aliens would determine the success or failure of Alien Crossfire. They strove to make them feel significantly different to play, but still compatible with the existing game mechanics. The developers eventually provided the aliens with Battle Ogres, a Planetary survey, non-blind research, and other powers to produce "a nasty and potent race that would take the combined might of humanity to bring them down". Chris Pine modified the AI to account for the additions. The team also used artwork, sound effects, music, and diplomatic text to set the aliens apart. Other than the aliens, the Pirates proved to be the toughest faction to balance because their ocean start gave them huge advantages.
Upon completion, the team felt that Alien Crossfire was somewhere between an expansion and a full-blown sequel. In the months leading to the release of Alien Crossfire, multimedia producer Michael Ely wrote the 9 episodes of Centauri: Arrival, introducing the Alien Crossfire factions. The game initially had a single production run. Electronic Arts bundled Alpha Centauri and Alien Crossfire in the Alpha Centauri Planetary Pack in 2000 and included both games in The Laptop Collection in 2003. In 2000, both Alpha Centauri and Alien Crossfire were ported to Classic Mac OS by Aspyr Media and to Linux by Loki Software.
Reception
Alpha Centauri received wide critical acclaim upon its release, with reviewers voicing respect for the game's pedigree, especially that of Reynolds and Meier. The video game review aggregator websites GameRankings and Metacritic, which collect data from numerous review websites, listed scores of 92% and 89%, respectively. The game was favorably compared to Reynold's previous title, Civilization II, and Rawn Shah of IT World Canada praised the expansion for a "believable" plot. However, despite its critical reception, it sold the fewest copies of all the games in the Civilization series. It sold more than 100,000 copies in its first two months of release. This was followed by 50,000 copies in April, May and June. In the United States, Alpha Centauri was the tenth-best-selling computer game of 1999's first half. Its sales in that country alone reached 224,939 copies by the end of 1999, and rose to 281,115 units by September 2000.
Critical reaction
The game showed well at the 1998 Electronic Entertainment Expo (E3). Walter Morbeck of GameSpot said that Alpha Centauri was "more than hi-tech physics and new ways to blow each other up", and that the game would feature realistic aliens. Terry Coleman of Computer Gaming World predicted that Alpha Centauri would be "another huge hit". OGR awarded it "Most Promising Strategy Game" and named it one of the top 25 games of E3 '98. In a vote of 27 journalists from 22 gaming magazines, Alpha Centauri won the "Best Turn-Based Strategy" E3 Show Award. Aaron John Loeb, the awards committee chairman, said "for those that understand the intricacies, the wonder, the glory of turn based 'culture building,' this is the game worth skipping class for."
Alpha Centauri's science fiction storyline received high praise; IGN considered the game an exception to PC sci-fi clichés, and GamePro compared the plot to the works of Stanley Kubrick and Isaac Asimov. J.C. Herz of The New York Times suggested that the game was a marriage of SimCity and Frank Herbert's Dune. GamePro's Dan Morris said "As the single-player campaign builds to its final showdown, the ramifications of the final theoretical discoveries elevate Alpha Centauri from great strategy game to science-fiction epic." Game Revolution said, "The well crafted story, admirable science-fiction world, fully realized scenario, and quality core gameplay are sure to please." Edge praised the uniqueness of expression, saying it was "the same kind of old-fashioned, consensual storytelling that once drew universes out of ASCII." The in-game writing and faction leaders were also well-received for their believability, especially the voice acting. GameSpot reviewer Denny Atkin called the factions and their abilities Alpha Centauri's "most impressive aspect". Greg Tito of The Escapist said, "the genius of the game is how it flawlessly blends its great writing with strategy elements."
Alpha Centauri's turn-based gameplay, including the technology trees and factional warfare, was commonly compared to Civilization and Civilization II. The Adrenaline Vault's Pete Hines said, "While Alpha Centauri is the evolutionary off-spring to [Civilization] and [Civilization II], it is not [Civilization II] in space. Although the comparison is inevitable because of the lineage, it is still short-sighted." Edge in 2006 praised "Alpha Centauri's greater sophistications as a strategy game." IGN said "Alpha Centauri is a better game than Civilization II; it's deep, rich, rewarding, thought-provoking in almost every way." Game Revolution's reviewer was less magnanimous, saying "Alpha Centauri is at least as good a game as Civilization 2. But it is its great similarity that also does it the most detriment. Alpha Centauri simply does not do enough that is new; it just doesn't innovate enough to earn a higher grade." The ability to create custom units was praised, as was the depth of the tech tree. The artificial intelligence of computer-controlled factions, which featured adaptability and behavioral subtlety, was given mixed comments; some reviewers thought it was efficient and logical, while others found it confusing or erratic. Edge was disappointed in the game's diplomacy, finding "no more and no less than is expected from the genre" and unhappy with "the inability to sound out any real sense of relationship or rational discourse."
The game's graphics were widely acknowledged to be above average at the time of its release, but not revolutionary. Its maps and interface were considered detailed and in accordance with a space theme, but the game was released with a limited color palette. The in-game cutscenes, particularly the full motion video that accompanied technological advances, were praised for their quality and innovation. Alpha Centauri's sound and music received similar comments; FiringSquad said "[The sound effect quality] sort of follows the same line as the unit graphics – not too splashy but enough to get the job done."
Next Generation reviewed the PC version of the game, rating it five stars out of five, and stated that "Sid Meier creates yet another masterpiece in this game that, at a glance, looks all too familiar."
Alpha Centauri has won several Game of the Year awards, including those from the Denver Post and the Toronto Sun. It won the "Turn-based Strategy Game of the Year" award from GameSpot as well. The Academy of Interactive Arts & Sciences named Alpha Centauri the "Computer Strategy Game of the Year" (along with nominations for "Game of the Year" and "Outstanding Achievement in Interactive Design"), and in 2000, Alpha Centauri won the Origins Award for Best Strategy Computer Game of 1999. The editors of PC Gamer US named Alpha Centauri their "Best Turn-Based Strategy Game" of 1999, and wrote that it "set a new standard for this venerable genre." Alpha Centauri has the distinction of receiving gaming magazine PC Gamer's highest score to date as of 2019 (98%), alongside Half-Life 2 and Crysis, surpassing Civilization II's score (97%). Alien Crossfire was a runner-up for Computer Games Strategy Plus's 1999 "Add-on of the Year" award, which ultimately went to Heroes of Might and Magic III: Armageddon's Blade.
Legacy
There have been no direct sequels beyond Alien Crossfire, something that writer Greg Tito attributed to Reynolds leaving Firaxis in 2000 to form Big Huge Games. Alien Crossfire producer and lead designer Timothy Train also left Firaxis with Reynolds. However, a spiritual sequel, Civilization: Beyond Earth, was announced by Firaxis in April 2014 and released on October 24, 2014; several of those who worked on Alpha Centauri helped to develop the new title. A review in Polygon noted, however, that while the new game has better graphics, its story fails to rival the original, a sentiment echoed by another review in PC Gamer. Another in Engadget noted "as a spiritual successor to Sid Meier's Alpha Centauri, however, it's a cut-rate disappointment".
Many of the features introduced in Alpha Centauri were carried over into subsequent Civilization titles; upon its release, Civilization III was compared unfavorably to Alpha Centauri, as its civilization traits were reminiscent of Alpha Centauri's faction bonuses and penalties. The government system in Civilization IV closely resembles Alpha Centauri's, and Civilization V includes a new victory condition: the completion of the 'Utopia project', which is reminiscent of the Ascent to Transcendence secret project.
According to Edge magazine, Alpha Centauri remained "highly regarded" in 2006. A decade after its release, Sold-Out Software and GOG.com re-released the game for online-download sales. Escapist Magazine reviewed the game in 2014, noting that "Alpha Centauri is still playable. It still has a unique flavor that is unlike anything else".
After the release of the expansion, multimedia producer Michael Ely wrote a trilogy of novels based on the game. Illustrator Rafael Kayanan also wrote a graphic novel entitled Alpha Centauri: Power of the Mindworms. Steve Jackson Games published GURPS Alpha Centauri, a sourcebook for the GURPS role-playing game set in the Alpha Centauri universe.
See also
Alpha Centauri in fiction
Group mind (science fiction)
Survivalism in fiction
Sid Meier's Civilization: Beyond Earth
Notes
References
Further reading
– covers the early years of colonization of the planet Chiron and describes the siege of United Nations HQ by the Spartans, the loss of Peacekeeper sovereignty and the consequent flight by the United Nations survivors into Gaian territory.
– occurs years after the events of Centauri Dawn and describes the Gaia's Stepdaughters' use of "mindworms" to rebuff an attack by the technologically superior Morgan Industries.
– follows the tension between the University of Planet and the Lord's Believers and describes the use of singularity bombs to destroy Morgan Industries and the Spartan Federation and the native life uprising which destroys humanity.
External links
Official website mirrored by alphacentauri2.info.
Unofficial patches
1999 video games
4X video games
Alpha Centauri in fiction
Aspyr games
City-building games
Civilization (series)
Firaxis Games games
Interactive Achievement Award winners
Interstellar travel in fiction
Linux games
Loki Entertainment games
Classic Mac OS games
MacOS games
Multiplayer and single-player video games
Origins Award winners
Religion in science fiction
Science fiction video games
Alpha Centauri
Turn-based strategy video games
Video games about extraterrestrial life
Video games developed in the United States
Video games scored by Jeff Briggs
Video games set on fictional planets
Video games with expansion packs
Video games with isometric graphics
Video games with voxel graphics
Windows games |
554281 | https://en.wikipedia.org/wiki/Haiku%20%28operating%20system%29 | Haiku (operating system) | Haiku is a free and open-source operating system compatible with the now discontinued BeOS. Its development began in 2001, and the operating system became self-hosting in 2008. The first alpha release was made in September 2009, and the last was November 2012; the first beta was released in September 2018, followed by beta 2 in June 2020 and beta 3 in July 2021.
Haiku is supported by Haiku, Inc., a non-profit organization based in Rochester, New York, United States, founded in 2003 by former project leader Michael Phipps.
History
Haiku began as the OpenBeOS project in 2001, the same year that Be, Inc. was bought by Palm, Inc. and BeOS development was discontinued. The focus of the project was to support the BeOS user community by creating an open-source, backward-compatible replacement for BeOS. The first project by OpenBeOS was a community-created "stop-gap" update for BeOS 5.0.3 in 2002.
Branding and style
In 2003, the non-profit organization Haiku, Inc. was registered in Rochester, New York, to financially support development, and in 2004, after a notification of infringement of Palm's trademark of the BeOS name was sent to OpenBeOS, the project was renamed Haiku. The original logo was designed by Stuart McCoy (nicknamed "stubear"), who was reportedly heavily involved in the early days of the Haiku Usability & Design Team and created mockups for Haiku R2.
Haiku developer and artist Stephan Assmus (nicknamed "Stippi"), who co-developed the graphics editing software WonderBrush for Haiku, updated it and developed the HVIF icon vector format used by Haiku, as well as the Haiku icon set, chosen by popular vote in a contest in 2007.
Milestones
Haiku reached its first milestone in September 2009 with the release of Haiku R1/Alpha 1. In November 2012, R1/Alpha 4.1 was released while work continued on nightly builds. After years in between official releases, Haiku R1/Beta 1 was released on 19 September 2018, followed by Haiku R1/Beta 2 on 9 June 2020. Haiku's latest release, R1/Beta 3, was released on 26 July 2021.
In between official releases, 'Nightly' builds (mainly meant for developer testing) are regularly listed on the Haiku Nightly page in both 64-bit and 32-bit (x86) editions.
Beyond R1
After initially targeting full BeOS 5 compatibility, a 2009 community decision updated the vision for R1 to include more ambitious support for modern hardware, web standards, and compatibility with FLOSS libraries.
Initial planning for R2 has started through the "Glass Elevator" project (a reference to the children's novel Charlie and the Great Glass Elevator). The only detail confirmed so far is that it will switch to a current GCC release.
A compatibility layer is planned that will allow applications developed for Haiku R1 to run on Haiku R2 and later. This was mentioned in a discussion on the Haiku mailing list by one of the lead developers, Axel Dörfler. Suggested new features include file indexing on par with Beagle, Google Desktop and macOS's Spotlight, greater integration of scalable vector graphics into the desktop, proper support for multiple users, and additional kits.
On "having a future"
At the 2010 edition of FOSDEM in Brussels, Niels Sascha Reedijk gave a talk titled "HAIKU OS has no Future", which cited Lee Edelman's queer-theory work on futurity and Matthew Fuller's software studies, describing Haiku as a "queer" operating system: "Our work will not ever define the future of operating systems, but what it does do is undermine the monotone machinery of the competition. It is in this niche that we can operate best. … Because even though we have no future, it does not mean that there will not arrive one eventually. Let us get there the most pleasant way possible."
Release history
Technology
Haiku is written in C++ and provides an object-oriented API.
The modular design of BeOS allowed individual components of Haiku to initially be developed in teams in relative isolation, in many cases developing them as replacements for the BeOS components prior to the completion of other parts of the operating system. The original teams developing these components, including both servers and APIs (collectively known in Haiku as "kits"), included:
App/Interface: develops the Interface, App and Support kits.
BFS: develops the Be File System, which is mostly complete with the resulting OpenBFS.
Game: develops the Game Kit and its APIs.
Input Server: the server that handles input devices, such as keyboards and mice, and how they communicate with other parts of the system.
Kernel: develops the kernel, the core of the operating system.
Media: develops the audio server and related APIs.
MIDI: implements the MIDI protocol.
Network: writes drivers for network devices and APIs relating to networking.
OpenGL: develops OpenGL support.
Preferences: recreates the preferences suite.
Printing: works on the print servers and drivers for printers.
Screen Saver: implements screen saver function.
Storage: develops the storage kit and drivers for required filesystems.
DataTranslations: recreates the reading/writing/conversion modules for the different file formats and data types.
A few kits have been deemed feature complete and the rest are in various stages of development.
The Haiku kernel is a modular hybrid kernel which began as a fork of NewOS, a modular monokernel written by former Be Inc. engineer Travis Geiselbrecht. Like the rest of the system, it is currently still under heavy development. Many features have been implemented, including a virtual file system (VFS) layer and symmetric multiprocessing (SMP) support.
Package management
Haiku includes a package management system called "Haiku Depot", enabling software to be compiled into dependency-tracking compressed packages. Packages can also be activated by installing them from remote repositories with pkgman, or by dropping them into a special packages directory. Haiku package management mounts activated packages over a read-only system directory. The Haiku package management system performs dependency solving with libsolv from the openSUSE project.
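For example, a package can be looked up, activated, and kept up to date from the command line with pkgman. The session below is a minimal sketch; the package name is illustrative:

$ pkgman search wonderbrush      # query the configured repositories
$ pkgman install wonderbrush     # download the .hpkg package and activate it
$ pkgman update                  # bring all activated packages up to date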
Compatibility with BeOS
Haiku R1 aims to be compatible with BeOS at both the source and binary level, allowing software written and compiled for BeOS to be compiled and run without modification on Haiku. This provides Haiku users with an instant library of applications to choose from (even programs whose developers are no longer in business or have no interest in updating them), in addition to allowing development of applications to resume from where they had been terminated following the demise of Be, Inc.
This dedication to compatibility has its drawbacks, though: it requires Haiku to use a forked version of the GCC compiler, based on version 2.95, released in 2001, which is now many years old. Switching to the newer version 7 of GCC breaks compatibility with BeOS software; therefore Haiku supports being built as a hybrid GCC7/GCC2 environment. This allows the system to run both GCC version 2 and version 7 binaries at the same time. The changes made to GCC 2.95 for Haiku include wide-character support and backports of fixes from GCC 3 and later.
This compatibility applies to 32-bit x86 systems only. The PowerPC version of BeOS R5 is not supported. As a consequence, the ARM, 68k, 64-bit x86 and PPC ports of Haiku use only the GCC version 7 compiler.
Despite these attempts, compatibility with a number of system add-ons that use private APIs will not be implemented. These include additional filesystem drivers and media codec add-ons, although the only affected add-ons for BeOS R5 not easily re-implemented are those for Indeo 5 media decoders, for which no specification exists.
R5 binary applications that run successfully under Haiku include Opera, Firefox, NetPositive, Quake II, Quake III, SeaMonkey, Vision and VLC.
Driver compatibility is incomplete, and unlikely to cover all kinds of BeOS drivers. 2D graphics drivers in general work exactly the same as on R5, as do network drivers. Moreover, Haiku offers a source-level FreeBSD network driver compatibility layer, which means that it can support any network hardware that will work on FreeBSD. Audio drivers using API versions prior to BeOS R5 are as-yet unsupported, and unlikely to be so; however, R5-era drivers work.
Low-level device drivers, namely those for storage devices and SCSI adapters, will not be compatible. USB drivers for both the second- (BeOS 5) and third- (BeOS Dano) generation USB stacks will work, however.
In some other aspects, Haiku is already more advanced than BeOS. For example, the interface kit allows the use of a layout system to automatically place widgets in windows, while on BeOS the developer had to specify the exact position of each widget by hand. This allows for GUIs that will render correctly with any font size and makes localization of applications much easier, as a longer string in a translated language will make the widget grow, instead of being partly invisible if the widget size were fixed.
System requirements
Processor: Intel Pentium (P5 microarchitecture) or better
Memory: 256 MB (2 GB is needed to compile Haiku within itself)
Hard disk: 1.5 GB free space
Reception
Jesse Smith from DistroWatch Weekly reviewed Haiku OS in 2010.
Rebecca Chapnik wrote a review of Haiku OS for MakeTechEasier.com in 2012.
Dedoimedo.com reviewed Haiku Alpha 4 in September 2013.
Jeremy Reimer wrote a review of Haiku Alpha 4 for Ars Technica in 2013.
Jesse Smith reviewed Haiku OS again in 2016.
In October 2018, Jack Wallen reviewed Haiku OS for Linux.com with extensive coverage of community statements: "To BeOS or not to BeOS, that is the Haiku". As of 2018, the FSF has included Haiku in a list of non-endorsed operating systems, stating the reason: "Haiku includes some software that you're not allowed to modify. It also includes nonfree firmware blobs."
See also
BeOS
Be File System
BeOS API
Comparison of operating systems
Haiku Vector Icon Format
KDL
List of BeOS applications
References
External links
Haiku Inc. company website
2002 software
BeOS
Free software operating systems
Free software programmed in C++
Object-oriented operating systems
Operating system distributions bootable from read-only media
Self-hosting software
Software using the MIT license
X86 operating systems |
12286078 | https://en.wikipedia.org/wiki/Workflow%20application | Workflow application | A workflow application is a software application which automates, to at least some degree, a process or processes. The processes are usually business-related but can be any process that requires a series of steps to be automated via software. Some steps of the process may require human intervention, such as an approval or the development of custom text, but functions that can be automated should be handled by the application. Advanced applications allow users to introduce new components into the operation.
For example, consider a purchase order that moves through various departments for authorization and eventual purchase. The order may be moved from department to department for approval automatically. When all authorizations are obtained, the requester of the purchase order is notified and given the authorization. A workflow process may involve frequent maintenance. For example, the normal approver of purchase orders may be on vacation, in which case, the application will request approval from alternate approvers.
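As an illustration of the routing logic just described, the sketch below models the approval chain in plain Python. It is not tied to any particular workflow product, and every name in it (departments, approvers, helper functions) is illustrative:

# Minimal sketch of the purchase-order routing described above; all names
# are illustrative. A real workflow application would add persistence,
# notifications, and a designer-built process definition.
APPROVAL_CHAIN = [
    {"department": "requesting manager", "primary": "alice", "alternate": "bob"},
    {"department": "finance", "primary": "carol", "alternate": "dave"},
]

def is_available(user):
    # Stand-in for a calendar lookup (e.g. the normal approver is on vacation).
    return user != "carol"

def approve(user, order):
    # Human-intervention point; stubbed here to always approve.
    print(f"{user} approves order {order['id']}")
    return True

def route(order):
    """Move the order through each department, using the alternate
    approver whenever the primary one is unavailable."""
    for step in APPROVAL_CHAIN:
        approver = step["primary"] if is_available(step["primary"]) else step["alternate"]
        if not approve(approver, order):
            return False  # rejected: the workflow stops here
    print(f"requester of {order['id']} is notified and given the authorization")
    return True

route({"id": "PO-1001"})  # "dave" approves in place of the unavailable "carol"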
Development
Workflow applications can be developed using a graphical designer, a programming language, or a combination of the two.
Some software products provide a means to create workflow applications with a diagram-based graphical designer alone. These types of systems rely on the ability to capture all of the information relevant to the workflow process through a specialized interface aimed at non-programmers, and then compile that information into a functional workflow application. Sometimes, however, the need for utilizing a programming language arises when more complex rules need to be integrated into the workflow, such as calculations for validating data in input forms.
For code-based workflow design, workflow applications can be developed with any general-purpose programming language, but specialized workflow languages also exist. These usually come with an associated graphical notation (such as BPMN), but some are textual or XML-based. Specialized languages that can be used for workflow definition in this way include:
XPDL
YAWL (Yet Another Workflow Language)
SCUFL (Simple Conceptual Unified Flow Language)
The above languages are based on XML syntax and while suitable for manipulation by software, they may be difficult for non-technical people to work with. Therefore, their use is generally augmented by graphical notations enabling the creation of flowchart-like diagrams that are easier for people to develop and interpret: creating such diagrams is in effect a form of "graphical" programming. The software package that allows a user to develop a workflow diagram will typically translate a diagram into its XML equivalent.
Another approach to develop workflow applications is to use a programming language in conjunction with libraries and interfaces that capture abstractions for task coordination. The following are examples of such libraries and interfaces:
Windows Workflow Foundation (WF)
Workflow OSID
The use of libraries is generally complementary to diagramming techniques, which are not always sufficient by themselves to create fully functional applications (unless the diagramming tool is part of a specific workflow management system). WF workflows, for example, can be created using Microsoft Visual Studio diagrammatically (their XML equivalent is XAML), and their functionality augmented with code written in C# or VB.NET: a given workflow can be called by an existing software application as a Web service. Software development tools such as Visual Studio or the numerous coding environments for Java will also allow particular components to be designed entirely in code and then used as building blocks in workflow diagrams after they are compiled.
One limitation of certain purely diagram-based techniques, such as BPMN above, is that to fit the purpose of workflow specification, such notations need to be enhanced with additional constructs to capture data passing, data transformations and routing conditions, to bind tasks to their implementation, etc. BPMN, while intended to serve as a standard, is deficient in this regard, and so several commercial packages (such as Microsoft Biztalk) address these needs in proprietary ways (specifically, by enhancing the basic set of diagramming icons with additional icons that support the needed functionality).
For the purpose of static analysis, e.g. to detect semantic errors at design-time, it is also possible to represent workflow in a mathematical form using a formal notation such as Petri nets.
References
External links
Application |
17124425 | https://en.wikipedia.org/wiki/Nokia%20Internet%20tablet | Nokia Internet tablet | Nokia Internet Tablets is the name given to a range of Nokia mobile Internet appliance products. These tablets fall in the range between a personal digital assistant (PDA) and an Ultra-Mobile PC (UMPC), and slightly below Intel's Mobile Internet device (MID).
Early trials and predecessors
Nokia had plans for an Internet tablet since before 2000. An early model, the Nokia M510, was test-manufactured in 2001; it ran EPOC and featured an Opera browser, speakers, and a 10-inch 800x600 screen, but it was not released because of fears that the market was not ready for it. The M510 was first leaked to the public in 2014.
Prior to the introduction of Nokia's Internet tablets, Nokia unveiled two "media devices" in 2003–04 which were mobile phones but had a form factor similar to the Internet tablets that followed them. The first of this type of device was the Nokia 7700, which was intended for mass production but was ultimately cancelled in favor of the Nokia 7710, which had a slightly more traditional form factor and better specifications.
Maemo
Nokia Internet Tablets run the Debian Linux-based Maemo, which draws much of its GUI, frameworks, and libraries from the GNOME project. It uses the embedded-targeted Matchbox as its window manager and uses Hildon, a lightweight GTK-based toolkit designed for handheld devices, as its GUI and application framework.
Alternative distributions
Maemo can be replaced entirely by a number of other Linux distributions.
NITdroid is a port of Google's Android.
Ubuntu has been ported.
Mer is a new distribution created by combining Ubuntu with the open source packages from Maemo.
Gentoo: an unofficial port of Gentoo Linux is available.
Models
See also
Internet appliance
Mobile Internet device
Tablet computer
WiMAX
CrunchPad
SmartQ 5
Nokia N1
Nokia Lumia 2520
Nokia T20
Notes
External links
Ari Jaaksi's Blog, Nokia's director of open source software operations
Internet Tablet Talk, an active web forum about Nokia's Internet Tablets (obsolete, see next)
talk-maemo-org TMO, the new URL of "Internet Tablet Talk"
, tutorials for Internet Tablet users
Information appliances |
170533 | https://en.wikipedia.org/wiki/Tcpdump | Tcpdump | tcpdump is a data-network packet analyzer computer program that runs under a command line interface. It allows the user to display TCP/IP and other packets being transmitted or received over a network to which the computer is attached. Distributed under the BSD license, tcpdump is free software.
Tcpdump works on most Unix-like operating systems: Linux, Solaris, FreeBSD, DragonFly BSD, NetBSD, OpenBSD, OpenWrt, macOS, HP-UX 11i, and AIX. In those systems, tcpdump uses the libpcap library to capture packets. The port of tcpdump for Windows is called WinDump; it uses WinPcap, the Windows version of libpcap.
History
tcpdump was originally written in 1988 by Van Jacobson, Sally Floyd, Vern Paxson and Steven McCanne who were, at the time, working in the Lawrence Berkeley Laboratory Network Research Group. By the late 1990s there were numerous versions of tcpdump distributed as part of various operating systems, and numerous patches that were not well coordinated. Michael Richardson (mcr) and Bill Fenner created www.tcpdump.org in 1999.
Common uses
tcpdump prints the contents of network packets. It can read packets from a network interface card or from a previously created saved packet file. tcpdump can write packets to standard output or a file.
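For example, a capture can be saved to a file and printed again later; the interface and file names below are illustrative:

$ tcpdump -i eth0 -c 100 -w capture.pcap   # save the first 100 packets to a file
$ tcpdump -r capture.pcap                  # read and print the previously saved packets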
It is also possible to use tcpdump for the specific purpose of intercepting and displaying the communications of another user or computer. A user with the necessary privileges on a system acting as a router or gateway through which unencrypted traffic such as Telnet or HTTP passes can use tcpdump to view login IDs, passwords, the URLs and content of websites being viewed, or any other unencrypted information.
The user may optionally apply a BPF-based filter to limit the number of packets seen by tcpdump; this renders the output more usable on networks with a high volume of traffic.
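A minimal example of such a filter, restricting a capture to web traffic involving a single host (the interface and address are illustrative):

$ tcpdump -n -i eth0 'tcp port 80 and host 192.0.2.10'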
Example of available capture interfaces on a Linux system:
$ tcpdump -D
1.eth0 [Up, Running, Connected]
2.any (Pseudo-device that captures on all interfaces) [Up, Running]
3.lo [Up, Running, Loopback]
4.bluetooth-monitor (Bluetooth Linux Monitor) [Wireless]
5.usbmon2 (Raw USB traffic, bus number 2)
6.usbmon1 (Raw USB traffic, bus number 1)
7.usbmon0 (Raw USB traffic, all USB buses) [none]
8.nflog (Linux netfilter log (NFLOG) interface) [none]
9.nfqueue (Linux netfilter queue (NFQUEUE) interface) [none]
10.dbus-system (D-Bus system bus) [none]
11.dbus-session (D-Bus session bus) [none]
12.bluetooth0 (Bluetooth adapter number 0)
13.eth1 [none, Disconnected]
Privileges required
In some Unix-like operating systems, a user must have superuser privileges to use tcpdump because the packet capturing mechanisms on those systems require elevated privileges. However, the -Z option may be used to drop privileges to a specific unprivileged user after capturing has been set up. In other Unix-like operating systems, the packet capturing mechanism can be configured to allow non-privileged users to use it; if that is done, superuser privileges are not required.
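For example, on a system where capture requires superuser privileges, a capture can be started with elevated rights that are then dropped to an unprivileged account once capturing is set up (the user, interface, and file names are illustrative):

$ sudo tcpdump -i eth0 -Z nobody -w dns.pcap port 53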
See also
Tcptrace, a tool for analyzing the logs produced by tcpdump
EtherApe, a network mapping tool that relies on sniffing traffic
Ngrep, a tool that can match regular expressions within the network packet payloads
netsniff-ng, a free Linux networking toolkit
Wireshark, a GUI based alternative to tcpdump
References
External links
Official site for tcpdump (and libpcap)
Official site for WinDump
A tcpdump Tutorial and Primer
ngrep, a tcpdump-like tool
Portable version of tcpdump for Windows
Official site for tcpdump for Android devices
Tutorial video for tcpdump in Linux
WinDump Color Highlighting
Network analyzers
Unix network-related software
Windows network-related software
Free software programmed in C
Cross-platform free software
Free network management software
Software using the BSD license |
1102237 | https://en.wikipedia.org/wiki/Joint%20Tactical%20Radio%20System | Joint Tactical Radio System | The Joint Tactical Radio System (JTRS) aimed to replace existing radios in the American military with a single set of software-defined radios that could have new frequencies and modes (“waveforms”) added via upload, instead of requiring multiple radio types in ground vehicles and circuit-board swaps in order to upgrade. JTRS has seen cost overruns and full program restructurings, along with the cancellation of some parts of the program. JTRS is widely seen as one of the DoD's greatest acquisition failures, having spent $6B over 15 years without delivering a radio.
JTRS HMS (Handheld, Manpack & Small Form-Fit) radios are jointly developed and manufactured by Thales and General Dynamics Mission Systems. These software-defined radios are designed as successors to the JTRS-compatible CSCHR (PRC-148 and PRC-152) handhelds, securely transmitting voice and data simultaneously using Type 2 cryptography and the new Soldier Radio Waveform.
The Army announced in June 2015 a Request for Proposal (RFP) for full-rate production of the HMS program. The goal was set for assessment in 2015–2016 and for full-rate production in 2017.
Overview
Launched with a Mission Needs Statement in 1997 and a subsequent requirements document in 1998 (which was revised several times), JTRS was a family of software-defined radios that were to work with many existing military and civilian radios. It included integrated encryption and Wideband Networking Software to create mobile ad hoc networks (MANETs).
The JTRS program was beset by delays and cost overruns, particularly in Ground Mobile Radios (GMR), run by Boeing. Problems included a decentralized management structure, changing requirements, and unexpected technical difficulties that increased size and weight beyond goals and made it harder to add the required waveforms.
The JTRS was built on the Software Communications Architecture (SCA), an open-architecture framework that tells designers how hardware and software are to operate in harmony. It governs the structure and operation of the JTRS, enabling programmable radios to load waveforms, run applications, and be networked into an integrated system. A Core Framework, providing a standard operating environment, must be implemented on every hardware set. Interoperability among radio sets was increased because the same waveform software can be easily ported to all radios.
The Object Management Group (OMG), a not-for-profit consortium that produces and maintains computer industry specifications for interoperable enterprise applications, is working toward building an international commercial standard based on the SCA.
JTRS Program of Record
The Joint Tactical Radio System (JTRS) evolved from a loosely associated group of radio replacement programs to an integrated effort to network multiple weapon system platforms and forward combat units where it matters most – at the last tactical mile. In 2005, JTRS was restructured under the leadership of a Joint Program Executive Officer (JPEO) headquartered in San Diego, California. The JPEO JTRS provides an enterprise acquisition and management approach to successfully and efficiently develop, produce, integrate, test and field the JTRS networking capability.
The JTRS Enterprise was composed of five ACAT 1D programs of record - Network Enterprise Domain (NED), Ground Mobile Radios (GMR), Handheld, Manpack, & Small Form Fit (HMS), Multifunctional Information Distribution System (MIDS) JTRS, and Airborne, Maritime Fixed/Station (AMF) and one ACAT III program - Handheld JTRS Enhanced Multi-Band Intra-Team Radio (JEM).
JTRS Network Enterprise Domain (NED)
JTRS NED was responsible for the development, sustainment, and enhancement of interoperable networking and legacy software waveforms. NED's product line consists of:
14 Legacy Waveforms
Bowman VHF
Collection Of Broadcasts From Remote Assets (COBRA)
Enhanced Position Location Reporting System (EPLRS)
Have Quick II
High Frequency Single sideband / Automatic link establishment (HF SSB/ALE)
NATO Standardization Agreement 5066 (HF 5066)
Link 16
Single Channel Ground and Airborne Radio System (SINCGARS)
Ultra High Frequency Demand Assigned Multiple Access Satellite communications (UHF DAMA SATCOM) 181/182/183/184
Ultra High Frequency Line-of-Sight Communications System (UHF LOS)
Very High Frequency Line-of-Sight Communications System (VHF LOS)
three Mobile Ad Hoc Networking Waveforms
Wideband Networking Waveform [WNW]
Soldier Radio Waveform [SRW]
Mobile User Objective System [MUOS]–Red Side Processing)
Network Enterprise Services (NES) including
JTRS WNW Network Manager (JWNM)
Soldier Radio Waveform Network Manager (SRWNM)
JTRS Enterprise Network Manager (JENM)
Enterprise Network Services (ENS)
JTRS Ground Mobile Radios (GMR)
JTRS GMR are a key enabler of the DoD and Army Transformation and will provide critical communications capabilities across the full spectrum of Joint operations. Through software reconfiguration, JTRS GMR can emulate current force radios and operate new internet protocol-based networking waveforms offering increased data throughput utilizing self-forming, self-healing, and managed communication networks. The GMR route and retransmit functionality links various waveforms in different frequency bands to form one internetwork. GMR can scale from one to four channels supporting multiple security levels and effectively use the frequency spectrum within the 2 megahertz to 2 gigahertz frequency range. The radios are Software Communications Architecture compliant with increased bandwidth through future waveforms and are interoperable with 4+ current force radio systems and the JTRS family of radios.
Now that the GMR contract has been completed, the Army plans to leverage knowledge gained from the GMR Program in the upcoming Mid-Tier Networking Vehicular Radio solicitation.
JTRS Handheld, Manpack & Small Form Fit (HMS)
JTRS HMS is a materiel solution meeting the requirements of the Office of the Assistant Secretary of Defense for Networks and Information Integration/DoD Chief Information Officer for a Software Communications Architecture (SCA) compliant hardware system hosting SCA-compliant software waveforms (applications).
The JTRS HMS contract was structured to address Increment 1, consisting of Phases 1 and 2. Increment 1, Phase 1 was to develop the AN/PRC-154 Rifleman Radio sets and embedded SFF-A (one channel), SFF-A (two channel) and SFF-D (one channel) versions. The AN/PRC-154 Rifleman Radio sets and SFFs were to utilize the Soldier Radio Waveform (SRW) in a sensitive but unclassified environment (Type 2). In order to mitigate program waveform porting and integration challenges, the SRW application, which is managed by the JTRS Network Enterprise Domain, was developed on a Waveform Development Environment with HMS as the lead platform for porting.
Increment 1, Phase 2 was to develop the two channel manpack, two channel handheld, and embedded SFF-B, versions that are all Type 1 compliant for use in a classified environment. Waveforms on the phased sets include Ultra High Frequency (UHF) Satellite Communications, Soldier Radio Waveform (SRW), High Frequency (HF), Enhanced Position Location and Reporting System (EPLRS), Mobile-User Objective System (MUOS), and Single Channel Ground to Air Radio System (SINCGARS).
Multifunctional Information Distribution System (MIDS) JTRS
MIDS is a secure, scalable, modular, wireless, and jam-resistant digital information system currently providing Tactical Air Navigation (TACAN), Link-16, and J-Voice to airborne, ground, and maritime joint and coalition warfighting platforms. MIDS provides real-time and low-cost information and situational awareness via digital and voice communications within the Joint Tactical Radio System (JTRS) Enterprise. The MIDS Program includes MIDS-LVT and the MIDS JTRS Terminal. MIDS-LVT is in full rate production and MIDS JTRS is in evolutionary development and limited production. MIDS JTRS is a “form fit function” replacement for MIDS–LVT and adds three additional channels for JTRS waveforms as required by joint and coalition warfighters.
JTRS Airborne & Maritime/Fixed Station (AMF)
AMF will provide a four-channel, full duplex, software-defined radio integrated into airborne, shipboard, and fixed-station platforms, enabling maritime and airborne forces to communicate seamlessly and with greater efficiency through implementation of five initial waveforms (i.e., Ultra-High Frequency Satellite Communications, Mobile User Objective System, Wideband Network Waveform, Soldier Radio Waveform, and Link 16) providing data, voice, and networking capabilities. JTRS AMF is software-reprogrammable, multi-band/multi-mode capable, mobile ad hoc network capable, and provides simultaneous voice, data, and video communications. The system is flexible enough to provide point-to-point and netted voice and data, whether it is between Service Command Centers, Shipboard Command Centers, Joint Operations Centers or other functional centers (e.g., intelligence, logistics, etc.). AMF will assist U.S. Armed Forces in the conduct of prompt, sustained, and synchronized operations, allowing warfighters the freedom to achieve information dominance in all domains; land, sea, air, space, and information.
JTRS Product Delivery
JTRS Network Enterprise Domain (NED)
Legacy waveform upgrades planned (VHF/UHF LOS, HQII, Bowman, EPLRS, Link 16)
Networking waveforms/management completed FQT, in JTRS IR (WNW 4.0.2, SRW 1.01.1c, SRWNM 1.0R, JWNM v4.0 3); Interim versions in JTRS IR (SRWNM v1.0+, TTNT v6.0)
Legacy waveforms completed FQT, in JTRS IR (VHF/UHF LOS, HQ II, COBRA, SATCOM 181/182/183/184, SINCGARS, EPLRS, JTRS Bowman, Link 16, HF)
JTRS Ground Mobile Radios (GMR)
GMR LUT (3QFY11)
System Integration Testing (Sept 2010)
91 sets for GMR DT/OT - 91 delivered
PEO-I purchasing 153 EDMs through Boeing Prime/ Boeing GMR agreement for SDD, Test, and fielding to IBCT #1 – 30 Delivered for Test
121 GMR Pre-EDMs and 73 open chassis radios delivered for GMR/WF development & test; 71 pre-EDMs for E-IBCT SO1
The GMR contract was completed 31 March 2012.
JTRS Handheld, Manpack & Small Form Fit (HMS)
MS C AN/PRC-154 (4QFY11); AN/PRC-155 (FY11/FY12); MUOS capable AN/PRC-155 (1QFY13)
BCT Integration Exercise (July 2010)
MUOS HPA PDR (July 2010)
EDMs Delivered: 14 Manpacks (AN/PRC- 155); 21 JTRS Rifleman Radio (AN/PRC-154); 163 JTRS Rifleman Radio (AN/PRC-154) (CV1); 213 SFF A; 21 SFF-D
Multifunctional Information Distribution System (MIDS) JTRS
IOC with F/A-18E/F (Jan 2011)
Operational Test (Jul – Oct 2010); >170 flight tests; >513 total flight test hours conducted on F/A-18E/F platform
TRL 7 achieved (May/Jun 2010); Completed DT Flight Test (Apr 2010); NSA Certification (Mar 2010)
MIDS JTRS Limited Production & Fielding Decision (Dec 2009) – 41 production terminals to support F/A-18E/F and JSTARS
Delivered 7 terminals to F/A-18 for OT
JTRS Airborne & Maritime/Fixed Station (AMF)
Delivered pre-production representative unit to AH-64D (Long Bow Apache) to support platform integration (Sep 2010)
Completed Initial Hardware/Software Demonstration – Small Airborne (Aug 2010)
Completed System CDR (Dec 2009)
Air-to-Air-to-Ground SRW demonstration (Jun 2009)
SDD contract awarded (Mar 2008)
Consolidated Single Channel Handheld Radios (CSCHR)
Delivering over 150,000 radios and accessories to the Services
Waveforms
JTRS was originally planned to use frequencies from 2 megahertz to 2 gigahertz. The addition of the Soldier Radio Waveform (SRW) waveform means the radios will also use frequencies above 2 GHz. Waveforms that were to be supported included:
Soldier Radio Waveform (SRW)
Single Channel Ground Air Radio System (SINCGARS) with Enhanced SINCGARS Improvement Program (ESIP), 30-88 MHz, FM, frequency hopping and single frequency
HAVE QUICK II military aircraft radio, 225-400 MHz, AM, frequency hopping
UHF SATCOM, 225-400 MHz, MIL-STD-188-181, -182, -183 and -184 protocols
Mobile User Objective System (MUOS): The JTRS HMS manpack is the only radio program of record that will deliver terminals supporting the next-generation UHF TACSAT MUOS program. 85% of all MUOS terminals are expected to be ground radios, so if JTRS HMS fails, MUOS (funded in the billions) fails as well, unless a COTS solution is developed. (This is only true from the Army viewpoint; other services also use MUOS, unrelated to JTRS.)
Enhanced Position Location Reporting System (EPLRS), 420-450 MHz spread spectrum
Wideband Networking Waveform (WNW) (under development)
Link-4A, -11B, -16, -22/TADIL tactical data links, 960-1215 MHz+
VHF-AM civilian Air Traffic Control, 108-137 MHz, 25 (US) and 8.33 (European) kHz channels
High Frequency (HF) - Independent sideband (ISB) with automatic link establishment (ALE), and HF Air Traffic Control (ATC), 1.5-30 MHz
VHF/UHF-FM Land Mobile Radio (LMR), low-band 25-54 MHz, mid-band 72-76 MHz, high-band 136-175 MHz, 220-band 216-225 MHz, UHF/T 380-512 MHz, 800-band 764-869 MHz, TV-band 686-960 MHz, includes P25 public safety and homeland defense standard
civilian marine VHF-FM radio, 156 MHz band
Second generation Anti-jam Tactical UHF Radio for NATO (SATURN), 225-400 MHz PSK Anti-jam
Identification Friend or Foe (IFF), includes Mark X & XII/A with Selective Identification Feature (SIF) and Air Traffic Control Radar Beacon System (ATCRBS), Airborne Collision Avoidance System (ACAS) and Traffic Alert & Collision Avoidance System (TCAS), and Automatic Dependent Surveillance – Addressable (ADS-A) and Broadcast (ADS-B) functionality, 1030 & 1090 MHz
Digital Wideband Transmission System (DWTS) Shipboard system for high capacity secure & nonsecure, line-of-sight (LOS), ship-to-ship, and ship-to-shore, 1350-1850 MHz
Soldier Radio & Wireless Local Area Network (WLAN), 1.755-1.850, 2.450-2.483.5 GHz, Army Land Warrior program 802.11
Cellular telephone & PCS, includes multiple US and overseas standards and NSA/NIST Type 1 through 4 COMSEC (SCIP)
Mobile Satellite Service (MSS), includes both VHF and UHF MSS bands and both fielded and emerging low Earth orbit and medium Earth orbit systems and standards, such as Iridium, Globalstar, et al. Includes capability for NSA/NIST Type 1 through 4 COMSEC, 1.61-2 [2.5] GHz. May allow use of geosynchronous satellites with special antenna.
Integrated Broadcast Service Module (IBS-M). Currently three legacies UHF military broadcasts (TIBS, TDDS, and TRIXS) which will be replaced in the future with a Common Interactive Broadcast (CIB).
BOWMAN, the UK Tri-Service HF, VHF and UHF tactical communications system.
Several of the above waveforms will not be supported in JTRS Increment 1 and have been deferred to "later increments". Currently, only Increment 1 is funded. The requirements document for JTRS Increment 2 is under development. JTRS Increment 1 threshold waveforms include:
Waveform/Applicable radios (based on JTRS ORD Amendment 3.2.1 dtd 28 Aug 06)
SRW: Small Form Fit, Manpack, AMF-Small Airborne, Ground Mobile Radio
WNW: Ground Mobile Radio, AMF-Small Airborne
MUOS: AMF-Small Airborne, AMF-Maritime, Manpack (funding was recently added for the manpack)
Link-16: AMF-Small Airborne, MIDS-J
UHF SATCOM DAMA: Manpack, Ground Mobile Radio, AMF-Maritime
SINCGARS ESIP with INC: Ground Mobile Radio
SINCGARS ESIP: Handheld, SFF, Manpack, Ground Mobile Radio
EPLRS: Handheld, SFF, Manpack, Ground Mobile Radio
HF SSB/ISB w/ALE: Ground Mobile Radio
HF SSB w/ALE: Manpack
JAN-TE: MIDS-J
Problems and restructuring
In March 2005, the JTRS program was restructured to add a Joint Program Executive Office, a unified management structure to coordinate development of the four radio versions.
In March 2006, the JPEO recommended changing the management structure, reducing the scope of the project, extending the deadline, and adding money. The JPEO's recommendations were accepted.
The program is focusing on the toughest part: transformational networking. The JTRS radio was to be a telephone, computer and router in one box that can transmit from 2 MHz to 2 GHz.
A September 2006 Government Accountability Office report said these changes had helped reduce the risk of more cost and schedule overruns to "moderate."
The U.S. military no longer plans to quickly replace all of its 750,000 tactical radios. The program is budgeted at $6.8 billion to produce 180,000 radios, an average cost per radio of $37,700. Program delays forced DOD to spend an estimated $11 billion to buy more existing tactical radios, such as the U.S. Marine Corps' Integrated, Intra-Squad Radio, the AN/PRC-117F and the AN/PRC-150.
On June 22, 2007, the Joint Program Executive Office issued the first JTRS-Approved radio (not JTRS-Certified) production contract. It gave Harris Corporation $2.7 billion and Thales Communications Inc. $3.5 billion for first-year procurement and allowed the firms to compete for more parts of the five-year program. Harris could make up to $7 billion; Thales, $9 billion.
In July 2008, the head of OSD AT&L conducted a 10-hour program review after costs continued to grow. Additionally, the JTRS Ground Mobile Radio program, originally funded at around $370 million, has now exceeded $1 billion despite reduced requirements.
In 2012, after the first 100 General Dynamics Manpack radios showed "poor reliability", the US Army placed a $250 million order for nearly four thousand more of them.
History
CONDOR (Command and Control on the Move Network, Digital Over the Horizon Relay)
During the Iraq War, the USMC developed its own system using commercial off-the-shelf technology, combining satellite communications with Wi-Fi, which has worked ever since.
Ground Mobile Radios
GMR - formerly Cluster 1, run by the Army, was to equip Marine and Army ground vehicles, Air Force Tactical Air Control Parties (TACPs), and Army helicopters. Cluster 1 also included the development of a Wideband Networking Waveform (WNW), a next-generation Internet protocol (IP)-based waveform designed to allow mobile ad hoc networking (MANET). In 2005, the cluster was renamed Ground Mobile Radios (GMR) with the Air Force TACP and Army helicopter radios deleted.
Handheld, Manpack & Small Form Fit
HMS - formerly Cluster 5, led by the Army, developed handheld, man-portable, and smaller radios. In 2006, it was renamed HMS, for Handheld, Manpack, and Small Form Fit.
Airborne & Maritime/Fixed Station
AMF - formerly Clusters 3 and 4: Cluster 3 aimed to develop a maritime / fixed radio. It was led by the Navy and grew out of the Navy's previous Digital Modular Radio program. Cluster 4, led by the Air Force, aimed to provide radios to Air Force and Navy fixed-wing aircraft and helicopters. In 2004, Clusters 3 and 4 were combined into the Airborne and Maritime / Fixed-Station program. In 2006, the Army helicopter radio was added to this cluster. In early 2008, JTRS AMF attained Milestone B after it received an additional $700 million. Cost estimates conducted by OSD's CAIG determined that the original amount, just over $500 million, was too little. On March 28, 2008, Lockheed Martin announced that the JTRS Joint Program Executive Office picked it to design and provide tactical communications and networking gear for the Air Force, Army, Navy and other users. The initial System Development and Demonstration (SDD) contract value is $766 million. Subcontractors will include BAE Systems, General Dynamics, Northrop Grumman, and Raytheon. Work will be conducted at Scottsdale, Ariz.; San Diego, Calif.; Tampa, Fla.; Fort Wayne, Ind.; Gaithersburg, Md.; St. Paul, Minn.; Wayne, N.J.; Charleston, S.C.; and Chantilly and Reston, Va.
MIDS JTRS - In 2006, the JTRS program took over the effort to improve the Multifunctional Information Distribution System Low Volume Terminal (MIDS-LVT) design, which was developed by a 5-nation consortium in the 1990s. This program was renamed MIDS-JTRS and also experienced cost growth and delays.
Special Radios
JEM - formerly Cluster 2, renamed the JTRS JEM program, adds JTRS capability to the existing handheld AN/PRC-148 Multiband Inter/Intra Team Radio (MBITR) to create the JTRS Enhanced MBITR (JEM). Led by U.S. Special Operations Command, the development effort has certified and fielded the radio.
Joint Tactical Networking Center (JTNC) and Joint Tactical Networks (JTN) - July 2012.
See also
Global Information Grid
References
External links
Software Communications Architecture Homepage
Joint Program Executive Office for the Joint Tactical Radio System Homepage
JTRS pages at globalsecurity.org
Failure to Communicate: Article describing the problems of the project
Military radio systems of the United States |
4357001 | https://en.wikipedia.org/wiki/Vintage%20Computer%20Festival | Vintage Computer Festival | The Vintage Computer Festival (VCF) is an international event celebrating the history of computing. It is held annually in various locations around the United States and various countries internationally. It was founded by Sellam Ismail in 1997. As of February 2015, most rights to the Vintage Computer Festival franchise are owned by the Vintage Computer Federation Inc., a 501(c)(3) educational non-profit organization.
Purpose
The Vintage Computer Festival promotes the preservation of "obsolete" computers by offering the public a chance to experience the technologies, people and stories that embody the remarkable tale of the computer revolution. VCF events include hands-on exhibit halls, VIP keynote speeches, consignment, technical classes, and other attractions depending on venue. It is consequently one of the premier physical markets for antique computer hardware.
Events
The Vintage Computer Federation owns VCF East (Wall Township, New Jersey), VCF Pacific Northwest (Seattle, Washington), VCF West (Mountain View, California), and future editions. Independent editions include VCF Midwest (metro Chicago, Illinois), VCF Europa (Munich and Berlin, Germany; Vintage Computer Festival Zürich, Switzerland), Vintage Computer Festival GB, and VCF Southeast (Atlanta, Georgia).
See also
WinWorld
References
External links
Vintage Computer Federation
Vintage Computer Festival Midwest
Vintage Computer Festival Europa
Vintage Computer Festival Zürich
Vintage Computer Festival West
Vintage Computer Festival Archives- Past show notes, exhibits, photos
Computer-related events
History of computing
Computing culture |
59915990 | https://en.wikipedia.org/wiki/Kiwaka | Kiwaka | Kiwaka is an educational game for iOS, macOS and tvOS designed to teach children about astronomy. The app was developed by the Portuguese software company Landka in collaboration with scientific institutions such as the European Space Agency (ESA) and the European Southern Observatory (ESO).
Kiwaka explores the concept of tangential learning. In the game, an elephant lights up stars to complete constellations and, after each level, an educational module is presented to teach more about these constellations. The importance of rules that regulate learning modules and the game experience is discussed by C. Moreno in a case study about Kiwaka.
The app was featured in the Kids section of the App Store and reached the top sales of apps for kids in 9 countries.
Features
Kiwaka is part game and part lesson, combining entertainment and education. The concept behind the app is a real legend according to which "fireflies carry lights from the stars". The purpose of the game is to help a pink elephant catch fireflies, revealing constellations in the night sky. Once all the stars in a constellation are completed, detailed information about the constellation is provided, such as a description of the associated Greek myth, a video explaining how to find the constellation in the night sky, and the location and description of the most important stars, galaxies and nebulae. The soundtrack was composed by Emmy nominee David Ari Leon.
Game Story
The game takes place in Kiwaka (a real location in the Democratic Republic of the Congo, Africa) where four creatures learn about an ancient legend according to which fireflies carry the light from the stars. The creatures set out on a journey to collect fireflies and learn about the mysteries and ancient myths behind each constellation. An interactive book app, "Kiwaka Story", telling the tale of the characters, was launched simultaneously with the game. The book app is targeted at young children and narrated by Diogo Morgado.
Gameplay
Kiwaka is a side-scrolling game with a simple tap control system. The player controls a pink elephant (Kudi) as he travels in a floating soap bubble collecting fireflies and avoiding different obstacles. Each firefly will light up a star in the night sky revealing hidden constellations. At the end of each level the player can look at a star map and learn about the constellations "earned" throughout the game. Two interactive star maps are presented: one for the northern hemisphere constellations and the other for the southern hemisphere constellations.
Tangential Learning
Kiwaka engages tangential learning by providing relevant scientific information about stars and constellations "earned" throughout the game. This information includes astronomy details about the constellations, an explanation on how to find the constellation in the night sky, and the locations, descriptions and images of the most important stars, galaxies and nebulae. Most of these images come from the European Space Agency (ESA) and the European Southern Observatory (ESO) image libraries and depict deep-space objects that can actually be found in the constellations. Links are provided to ESA and ESO websites for more information about these objects.
Kiwaka also presents information about the classical mythology behind each constellation. Users are introduced to classical Greek literature examples, such as the story of Cassiopeia and the myth of Perseus and the Medusa. The drawings of constellations depicted in the app are from Firmamentum Sobiescianum sive Uranographia, the famous Atlas of constellations by Johannes Hevelius.
Development and release
The project Kiwaka was developed by Landka over a period of two years, leading to the simultaneous publication of the iOS game "Kiwaka" and the book app "Kiwaka Story" on June 5, 2014. Kiwaka was featured in the Kids section of the App Store and reached the top sales of apps for kids in 9 countries. The macOS and tvOS versions were released in May 2017. The app received generally positive reviews from the press and the scientific community.
References
External links
2014 video games
IOS games
Indie video games
MacOS games
Side-scrolling platform games
Educational video games
Educational software
Video games developed in Portugal
Video games scored by David Ari Leon |
40402063 | https://en.wikipedia.org/wiki/1934%20Pittsburgh%20Panthers%20football%20team | 1934 Pittsburgh Panthers football team | The 1934 Pittsburgh Panthers football team, coached by Jock Sutherland, represented the University of Pittsburgh in the 1934 college football season. The Panthers finished the regular season with eight wins and a single loss (to Minnesota at home) and were considered the champions of the East. According to a 1967 Sports Illustrated article, Parke H. Davis, whose selections for 1869 to 1933 (all made in 1933) are recognized as "major" in the official NCAA football records book, named Pitt as one of that season's national champions, along with Minnesota, six months after his death on June 5, 1934. The article contained a "list of college football's mythical champions as selected by every recognized authority since 1924," which has served as the basis of the university's historical national championship claims, with Davis being the only major selector for three of them, including the posthumous 1934 pick (post-1933 selections are not "major").
Schedule
Preseason
The Pitt coaching staff underwent some off-season changes. Assistant coach Andy Gustafson was hired away by Dartmouth College. Bill Kern was promoted to head assistant coach. Dr. Eddie Baker became the head backfield coach. On February 5, the staffing was completed when the athletic committee appointed Walter Milligan and Howard O'Dell. Milligan coached the guards and O'Dell assisted Eddie Baker with the backfield.
Prior to spring football practice, Coach Sutherland and his staff held daily meetings at the stadium by position. Each player attended one meeting per week. One night was for centers, one night for tackles and so forth.
Coach Sutherland needed to replace 14 Panther players (8 of whom were regulars) who would graduate in June (Robert Hogan, Joseph Skladany, Tarciscio Onder, Frank Walton, James Simms, Frank Tiernan, Arthur Craft, Howard Gelini, Robert Timmons, John Meredith, Richard Matesic, Mike Sebastian, Howard O'Dell and John Love). On March 15, his task began in earnest with the arrival of 73 candidates at the first spring practice session. Forty-one members of the varsity and 32 rising sophomores from the previous year's freshman team reported. Another twenty or so were expected within a few days. Coach Sutherland was his usual cautious self: "There are some good prospects for the team, but it is a question of teaching them football. What we learn this spring will mean a lot next fall when we meet Washington and Jefferson, West Virginia, Southern California, Minnesota, Westminster, Nebraska, Notre Dame, Carnegie Tech, and Navy. We can hope for a fair season." On May 5, the varsity concluded the spring practice period with a game against an alumni team. The varsity prevailed 14 to 0.
On September 9, fifty potential squad members bussed to Camp Hamilton for two weeks of preseason conditioning to prepare for the hardest schedule of coach Sutherland's eleven-year career at Pitt. "Jock immediately launched the most intensive preliminary training period that a Pitt squad has experienced, with the result that the players quickly rounded into splendid physical condition and were in near mid-season form when the schedule opened." On the last morning of camp, the varsity defeated the reserves 40 to 0 in a full-game scrimmage. After lunch the Panther entourage returned to Pittsburgh for the start of the fall semester.
The Panther athletic department reinstated the 25-cent admission price for children. "We hope to be able to admit the youngsters for every game, although reservations for one or two of the big contests may interfere," Director W. Don Harrison declared.
Coaching staff
Roster
Game summaries
Washington & Jefferson
On September 29, Pitt and W. & J. met for the last time at Pitt Stadium. This was the thirty-second all-time meeting and Pitt led the series 16–13–2. The Presidents last won in 1924 and had not scored against Pitt in the previous 6 outings. Third-year coach Hank Day hoped to improve on the previous year's 2–7–1 record. Coach Day opined: "You can't build a brick house without bricks. We haven't got the man power to beat Pitt and all I hope is we keep the score down to respectable size."
Despite losing 8 starters and 6 reserves to graduation, coach Sutherland's lineup for the opening game against the Presidents listed 10 veterans and one sophomore, halfback Bobby LaRue.
Some 15,000 fans sat through a steady rainfall to watch the Panthers beat the Presidents 26–6. Pitt gained 317 yards and 18 first downs to W. & J.'s 163 yards and 3 first downs. After a scoreless first quarter, Pitt back Bobby LaRue returned a punt to the Presidents' 39-yard line. On first down Leo Malarkey lost 4 yards. On second down Henry Weisenbaugh raced 43 yards for the first touchdown of the season. Isadore Weinstock converted the point after and Pitt led 7 to 0. The Panther defense held and Pitt regained possession on their own 45-yard line. A nine-play, 55-yard drive ended with a 1-yard plunge into the end zone by Weinstock for Pitt's second touchdown. He missed the placement, but Pitt led at the half 13 to 0. Pitt drove the ball to the Presidents' 13-yard line on the opening drive of the third quarter and lost the ball on downs. On first down W. & J. halfback Don Croft sprinted 87 yards around end for the first W. & J. touchdown against Pitt since 1924. He missed the point after and the score read: Pitt 13 to W. & J. 6. After an exchange of punts, Pitt gained possession on their own 45-yard line. When the quarter ended the Panthers were on the W. & J. 3-yard line. "On the first play Weinstock battered right guard for a touchdown. Weinstock's try for the extra point was wide, but W. & J. was offside and Pitt was awarded the point. Score: Pitt 20; W. & J. 6." The Presidents advanced the ball to the Pitt 35-yard line, but turned it over on downs. The Panther offense proceeded to march 65 yards in three plays. Leon Shedlosky carried the ball the final 26 yards for the touchdown. Robert McClure missed the point after. Final score: Pitt 26; W. & J. 6.
The Pitt starting lineup for the game against Washington & Jefferson was Harvey Rooker (left end), Robert Hoel (left tackle), Charles Hartwig (left guard), George Shotwell (center), Ken Ormiston (right guard), Arthur Detzel (right tackle), Verne Baxter (right end), Miller Munjas (quarterback), Mike Nicksick (left halfback), Bobby LaRue (right halfback) and Isadore Weinstock (fullback). Substitutes appearing in the game for Pitt were Edward Quarantillo, John Valenti, William Glassford, Nick Kliskey, Marwood Stark, Averell Daniell, Karl Seiffert, Robert McClure, Arnold Greene, Leo Malarkey, Hub Randour, Leon Shedlosky, Stanley O'Neil, Henry Weisenbaugh and Leonard Rector.
at West Virginia
For the third year in a row, the Panthers traveled to Morgantown for their annual battle with the Mountaineers of West Virginia. Pitt led the series 20–8–1 and had won nine of the past 10 games. Coach Sutherland wanted his team to be more focused than they were in the W. & J. game, so the Panthers had a hard week of practice. The Panthers were healthy and the same lineup took the field to start the game.
The Mountaineers were led by first-year coach Charles "Trusty" Tallman, who had played end on the 1920–23 Mountaineer squads. He was the head coach at Marshall University from 1925 to 1928, compiling a record of 22–9–7, and then returned to Morgantown and coached the freshman team for five years prior to his appointment as head coach. West Virginia was 2–0 on the season with victories over West Virginia Wesleyan (19–0) at home and Duquesne (7–0) at Forbes Field. Jess Carver of the Sun-Telegraph noted: "So seriously has West Virginia taken the game that the new coach, Trusty Tallman, took his charges out of town yesterday afternoon into a nearby mountain retreat, something unheard of for so early in the season."
The Panthers disappointed the home team fans with a convincing 27–6 victory over the Mountaineers. The Pittsburgh Press noted: "Pitt made 16 first downs to three and not until the second half did the Mountaineers cross the 50-yard line." The Panthers' offense started their second possession from their own 37-yard line. On first down Bobby LaRue lost a yard. On second down Mike Nicksick threw a 64-yard touchdown pass to end Harvey Rooker. Isadore Weinstock converted the point after and Pitt led 7 to 0. In the second period, the Panther offense sustained a 72-yard drive that ended with a Leon Shedlosky 4-yard scamper to the end zone. Henry Weisenbaugh converted the point after and Pitt led 14 to 0 at halftime. The Mountaineers' fans had reason to cheer when an Eck Allen pass to Mickey Heath gained 56 yards to the Pitt 4-yard line. Allen bulled into the end zone on fourth down for the first West Virginia points against Pitt since 1929. Angelo Onder (brother of Pitt grad Tarciscio Onder) missed the placement. The Panther offense answered with two more touchdowns. A 77-yard drive culminated with a 35-yard end run by Weisenbaugh, and Verne Baxter caught a 40-yard touchdown pass from Leo Malarkey. Weisenbaugh converted one of the placements, and Pitt went home with a 27 to 6 victory.
The Mountaineers finished the season with a 6–4 record.
The Pitt starting lineup for the game against West Virginia was Harvey Rooker (left end), Robert Hoel (left tackle), Charles Hartwig (left guard), George Shotwell (center), Ken Ormiston (right guard), Arthur Detzel (right tackle), Verne Baxter (right end), Miller Munjas (quarterback), Mike Nicksick (left halfback), Bobby LaRue (right halfback) and Isadore Weinstock (fullback). Substitutes appearing in the game for Pitt were Edward Quarantillo, Leslie Wilkins, Averell Daniell, John Valenti, William Glassford, Leon Wohlgemuth, Nick Kliskey, Charles Gangloff, Frank Kutz, Marwood Stark, Stanley Olejniczak, Karl Seiffert, Vincent Sites, Louis Wojcihovski, Robert McClure, Arnold Greene, Leo Malarkey, Hub Randour, Joseph Troglione, Leon Shedlosky, Stanley O'Neil, Henry Weisenbaugh and Leonard Rector.
USC
On October 13, the Panthers welcomed the USC Trojans, who had never played football east of South Bend, Indiana. Jock Sutherland's Pitt teams had been routed by the Trojans in two Rose Bowl appearances (47–14 in 1930 and 35–0 in 1933). Jack Sell of the Post-Gazette noted: "A peek into the files reveals that ten Pitt players who are likely to see action today participated in the last meeting of these two schools, on January 2, 1933 in the Rose Bowl game at Pasadena. Captain Charles Hartwig, Weinstock, Rooker, Ormiston, Hoel, Shotwell, Wojcihovski, Munjas, Weisenbaugh and Nicksick all know just how it feels to lose by 35–0 and take the long train ride back home."
Howard Jones was in his tenth season as head coach of the Trojans, having attained a record of 84–11–3. The Trojans came east with a 3–1 record on the season. They opened with three straight wins and then lost at home to Washington State (19–0). After that loss Jack Franklin, the editor of the school newspaper, the "Daily Trojan", wrote an editorial criticizing the effort of the team. He wrote that members of the team "were Hollywood-struck boys who were as toys to some henna-haired film beauty or magnate." He added of the loss to Washington State: "It marked the victory of a team that plays football for the game's sake over a team of Hollywood-struck boys who once knew how to play football, but having been persuaded that they are already all-Americans now only go through the motions. The handwriting has been on the wall for a long time." Neither Coach Jones nor the players were pleased, but the coach admitted to the Pittsburgh Press: "That's the way they've been playing. I'm not alibiing for these boys, they'll have to play better than they have been."
In front of 55,000 fans, Pitt won the game 20 to 6, becoming the only team other than Notre Dame to beat USC in an intersectional game. It was also the first time a Howard Jones-coached team had lost two straight games in the same season.
The Panthers got on the scoreboard in the first quarter. Captain Charles Hartwig recovered Trojan halfback Clifford Probst's fumble on the USC 20-yard line. The Panther offense advanced the ball to the 1-yard line and Isadore Weinstock plunged into the end zone for the initial score. Weinstock missed the point after. Score: Pitt 6 to USC 0. In the second period the Panthers added to their score on a 22-yard touchdown dash by Henry Weisenbaugh. Weinstock converted the point after and Pitt led 13 to 0. The Trojans answered right before halftime with an 80-yard drive. USC quarterback Cotton Warburton completed a 6-yard pass to Calvin Clemens for the touchdown. Clemens' try for point was blocked by Pitt center George Shotwell. The halftime score read: Pitt 13 to USC 6. The Panthers managed another touchdown in the third stanza on one play, after gaining possession by blocking a punt on the USC 35-yard line. Substitute halfback Hubert Randour passed to end Verne Baxter for the final touchdown of the game. Weinstock converted the point for the 20 to 6 win.
The editor of the "Daily Trojan" may have been correct, as the Trojans never recovered their early-season form and won only one more game, finishing the season 4–6–1.
The Pitt starting lineup for the game against USC was Harvey Rooker (left end), Robert Hoel (left tackle), Charles Hartwig (left guard), George Shotwell (center), Ken Ormiston (right guard), Arthur Detzel (right tackle), Verne Baxter (right end), Miller Munjas (quarterback), Mike Nicksick (left halfback), Bobby LaRue (right halfback) and Isadore Weinstock (fullback). Substitutes appearing in the game for Pitt were Edward Quarantillo, Averell Daniell, William Glassford, Nick Kliskey, Frank Kutz, Stanley Olejniczak, Karl Seiffert, Robert McClure, Leo Malarkey, Hubert Randour and Henry Weisenbaugh.
Minnesota
On October 20, the Panthers played host to the Minnesota Gophers, led by third-year coach Bernie Bierman. The Gophers were 2–0 on the season and on a 10-game unbeaten streak since the final game of 1932. The Gopher lineup featured three consensus All-Americans: halfback/fullback Pug Lund, end Frank Larson and guard Bill Bevan. Fullback Stan Kostka and tackle Phil Bengtson were named to the North American Newspaper Alliance second team, and end Bob Tenner made the United Press second team. Coach Bierman told the Tribune: "My boys are in first class condition and an excellent spirit exists on the squad. If the boys go out there and play the brand of football of which they are capable, they should win. But they will have to play 60 minutes of great football."
Coach Sutherland acknowledged that the Gophers should be favored, but felt the game would be close.
Panther starting end Verne Baxter was sick and did not play; he was replaced by Karl Seiffert. Stanley Olejniczak started at right tackle in place of Arthur Detzel.
The Minnesota Gophers rallied in the final period to score two touchdowns and crushed the Panthers' hopes of a national title with a 13 to 7 victory. The first quarter and most of the second was a punting duel between Gopher All-American Lund and Pitt quarterback Miller Munjas. Late in the second period Pitt gained possession on their own 36-yard line. On first down Isadore Weinstock ran 12 yards around left end and, prior to being tackled, lateraled the ball to Mike Nicksick, who was trailing the play. Nicksick raced untouched to the end zone for the first score of the game. Weinstock was good on the placement and Pitt led 7 to 0 at halftime. Late in the third stanza, Pitt back Bobby LaRue fumbled Lund's punt and Gopher end Larson recovered for Minnesota on the Pitt 40-yard line. The Gopher offense advanced the ball to the Pitt 22-yard line as the third period came to a close. On the first play of the fourth quarter Julius Alfonse skirted left end for the first Gopher touchdown. Bevan tied the score with his point after. After the kick-off, the Panthers failed to make a first down and had to punt. The Gophers gained possession on their own 46-yard line. Six plays moved the ball to the Pitt 16-yard line. "A beautiful double lateral, Glen Seidel to Kostka to Lund, who threw a forward to Tenner, sent the latter racing around the right side for a touchdown. Bevan missed the try for point." Final score: Minnesota 13 to Pitt 7.
Chester L. Smith of The Press noted that 65,000 fans went home sad, but proud that the Pitt team fought to the bitter end. He added: "There were other thousands who were bitterly disappointed because Pitt athletic authorities ruled again against a nation-wide broadcast of the intersectional battle. The same ruling was made by Pitt wast[sic] week in the game between the Panthers and Southern California."
"During the first quarter, one of the Pitt cheer leader-acrobats attempted to chin himself on the west goal post crossbar. One end of the crossbar broke loose from an upright and sagged noticeably. The officials stopped the game until umpire Thorpe could climb the post and fasten the crossbar in place with a strip of Trainer Bud Moore's adhesive tape."
Coach Sutherland praised the Gophers: "Minnesota impressed me very much. They are a really fine football team. We were afraid of their power, and our advance strategy was to keep them in the hole as long as possible. We did a good job of it in the first half, but one fumble got us in a jam, and then the Gophers went to work. ..I thought my boys played to the limit of their ability, but the Gophers were just too good for us."
Minnesota finished the season with an 8–0 record and shared the national championship with Alabama and Pitt.
The Pitt starting lineup for the game against Minnesota was Harvey Rooker (left end), Robert Hoel (left tackle), Charles Hartwig (left guard), George Shotwell (center), Ken Ormiston (right guard), Stanley Olejniczak (right tackle), Karl Seiffert (right end), Miller Munjas (quarterback), Mike Nicksick (left halfback), Bobby LaRue (right halfback) and Isadore Weinstock (fullback). Substitutes appearing in the game for Pitt were Edward Quarantillo, Averell Daniell, William Glassford, Leon Wohlgemuth, Nick Kliskey, Frank Kutz, Arthur Detzel, Leslie Wilkins, Hubert Randour, Leon Shedlosky and Henry Weisenbaugh.
at Westminster
On October 27, the Panthers bussed about 60 miles north to New Castle, Pennsylvania, to play the Westminster Titans. The Titans, led by twin brothers Bill and Tom Gilbane, were 3–2 on the season, having beaten Slippery Rock, Edinboro and Thiel while losing to Fordham and John Carroll. The Gilbanes were graduates of Brown University and had been honorable mention All-Americans in 1932.
Coach Sutherland took the entire squad: "It's the one trip of the year we try to give all our players. I hope to use all of them against the Titans. I'll use as many as I dare, that is certain."
The New Castle News reported: "Playing under weather conditions that were more suitable for pneumonia or lumbago, than football, the University of Pittsburgh Panthers sloshed and slipped through mud and water at Taggart Stadium Saturday afternoon to take a 30 to 0 victory over a fighting Westminster College football team. About 1,500 hardy ardent field fans sat huddled in rain coats and blankets as the football men put on the show."
The Panthers managed five touchdowns but missed all the extra points. Isadore Weinstock and Hubert Randour each scored two touchdowns and Mike Nicksick added one. The Panther offense earned eighteen first downs and their defense surrendered one first down to the Titans. Thirty-three Panthers participated in the one-sided romp. The officials shortened the quarters to 12 minutes due to the rain, sleet and cold. The Titans finished the season with a 3–5–1 record.
The Pitt starting lineup for the game against Westminster was Leslie Wilkins (left end), Averell Daniell (left tackle), William Glassford (left guard), Nick Kliskey (center), Frank Kutz (right guard), Arthur Detzel (right tackle), Vincent Sites (right end), Robert McClure (quarterback), Hubert Randour (left halfback), Leon Shedlosky (right halfback) and Isadore Weinstock (fullback). Substitutes appearing in the game for Pitt were Edward Quarantillo, Harvey Rooker, Regis Flynn, Verne Baxter, John Valenti, Robert Hoel, Stanley Olejniczak, Gene Stoughton, Leon Wohlgemuth, Marwood Stark, Charles Hartwig, George Shotwell, Charles Gangloff, Arnold Greene, Miller Munjas, Stanley O'Neil, Mike Nicksick, Leo Malarkey, Joseph Troglione, Bobby LaRue, Arthur Ruff, Leonard Rector and Henry Weisenbaugh.
Notre Dame
On November 3, the Homecoming opponent for the Panthers was the "Fighting Irish" of Notre Dame. The Irish were led by first-year head coach Elmer Layden. His Irish arrived in Pittsburgh with a 3–1 record. They lost their home opener to Texas (7–6) and then won three straight against Purdue (18–7), Carnegie Tech (13–0) and Wisconsin (19–0). The Irish line was anchored by consensus All-American center Jack Robinson. Notre Dame led the all-time series 4–2–1, but Pitt had won the past two games. Notre Dame brought 42 players for the game, and they were housed at the Pittsburgh Athletic Association. The South Bend Tribune reported: "Coach Layden indicated this morning that he would use his first string line for at least half the game, doing most of the substituting in the backfield. Every member of the squad is in good shape and spirit is running high." Earlier in the week Layden told a reporter: "Pittsburgh will beat us by at least two touchdowns. Jock Sutherland has too many experienced players for us."
Coach Sutherland started almost the same lineup that faced Minnesota. Starting right end Verne Baxter recovered from the flu and replaced Vincent Sites. French Lane of The Chicago Tribune wrote: "Coach Jock Sutherland of the Panthers, with tears ready to roll down his wrinkled cheeks, said: 'We will be satisfied to win by 2 to 0, but I'm wondering how we can score a safety'."
Under ideal weather conditions, the Pittsburgh Panthers became the second team to defeat Notre Dame three years in a row as they shut out the Irish 19 to 0 in front of 64,000 fans. Most of the first quarter was a punting contest. The Panther offense advanced the ball on one possession to the Notre Dame 5-yard line, but lost the ball on downs. At the beginning of the second period, Layden substituted a new backfield and Sutherland replaced the entire Pitt lineup except left end Harvey Rooker. The Pitt defense forced a punt. Irish halfback Andy Pilney punted to Leon Shedlosky on the Pitt 42-yard line. With blockers in front, he sprinted 58 yards for the touchdown. Isadore Weinstock missed the point after and Pitt led 6 to 0. Late in the half Notre Dame recovered a Weinstock fumble on the Pitt 44-yard line. Pitt third-string quarterback Arnold Greene intercepted a pass and carried the ball to the end zone, but the officials ruled he had stepped out of bounds on the 2-yard line. Time ran out before the next play and Pitt led 6 to 0 at halftime. Midway through the third quarter, the Panthers gained possession on their 35-yard line. Five running plays advanced the ball to the Irish 46-yard line. From there, Mike Nicksick "slashed through left tackle, reversed his field, and ran forty-six yards for a touchdown as Pitt blockers scattered the Irish players like so many tenpins. Weinstock placekicked the extra point." Score: Pitt 13; Notre Dame 0. In the last quarter, the Irish offense penetrated to the Pitt 33-yard line. On third down Henry Weisenbaugh intercepted an errant pass and returned the ball to the Notre Dame 32-yard line. On first down Shedlosky was forced out of bounds on the Irish 3-yard line after a gain of 29 yards. On the next play, Nicksick scored the final touchdown. Weisenbaugh was wide on the placement. Final score: Pitt 19; Notre Dame 0. Notre Dame finished the season with a 6–3 record.
The Pitt offense made 9 first downs and netted 232 yards from scrimmage. The Panther defense intercepted 6 passes and held the Irish to 5 first downs and 97 yards total offense.
Les Biederman of The Press spoke with a smiling Coach Sutherland: "I think it was a darn hard-fought game. Notre Dame has a good team but Pitt looked awful good to me today."
The Pitt starting lineup for the game against Notre Dame was Harvey Rooker (left end), Robert Hoel (left tackle), Charles Hartwig (left guard), George Shotwell (center), Ken Ormiston (right guard), Stanley Olejniczak (right tackle), Verne Baxter (right end), Miller Munjas (quarterback), Mike Nicksick (left halfback), Bobby LaRue (right halfback) and Isadore Weinstock (fullback). Substitutes appearing in the game for Pitt were Edward Quarantillo, Averell Daniell, Frank Kutz, Marwood Stark, William Glassford, Nick Kliskey, Arthur Detzel, Vincent Sites, Robert McClure, Arnold Greene, Leon Shedlosky, Hubert Randour, Joseph Troglione, Stanley O'Neil, Henry Weisenbaugh and Leonard Rector.
List of national championship selectors
A "list of college football's mythical champions as selected by every recognized authority since 1924," printed in Sports Illustrated in 1967, revealed that Parke Davis' selection of Pitt after he was dead was the historical basis of the university's 1934 national championship claim, a selection that is not documented in the official NCAA football records book. After the death of Davis in June, 1934, Walter R. Okeson became the editor of the annual Spalding's Official Foot Ball Guide, which Davis had previously edited. In the Guide, Davis had compiled a list titled, "Outstanding Nationwide and Sectional Teams," for the seasons from 1869 onward. For several years, Okeson continued to add annual selections to this list, described as "Originally Compiled by the late Parke H. Davis." The 1935 Guide stated, in Okeson's review of the 1934 season, "Minnesota — Undefeated and untied, team was generally conceded to be national leader," and "Pittsburgh — Defeated only by Minnesota, team was generally rated as strongest in East." Okeson listed both schools as "Outstanding Nationwide Teams" for 1934.
These are the selectors that determined Pitt to be national champion in 1934, as recognized by College Football Data Warehouse: none
However, there are 39 selectors who chose Alabama and Minnesota (who defeated Pitt in Pittsburgh) as national champions for 1934, including 13 "major" selectors (i.e., those that were "national in scope").
All-Americans
Charles Hartwig, guard, Pitt's team captain. The following season his picture appeared on a Wheaties cereal box as a football hero. He battled back from an injury that caused him to miss his entire sophomore year. A media guide referred to him as a brilliant defensive player and workmanlike on offense. He was a Panther standout in the 1933 Rose Bowl and played in the 1935 East–West Shrine Game.
George Shotwell, center, became an All-American for his offensive line play in 1934. He was highly regarded for his all-around skills. An intelligent player, Shotwell was known as a keen diagnostician of plays. "I have never seen his superior in this respect, and only a coach knows how valuable this quality is," Coach Jock Sutherland said.
Isadore Weinstock, a smart and aggressive fullback who became an All-American in 1934. He was known as a crack ball-handler, especially on trick plays such as double passes and fake reverses. Weinstock was a fine blocker and also played defensive back, kicked extra points and handled kickoff duties. After suffering a broken nose, he became one of the first players to wear a face mask. He led the Panthers in scoring in 1934 with 63 points. After Pitt, he went on to the NFL, where he played three seasons at quarterback for Philadelphia and Pittsburgh.
*Bold - Consensus All-American
References
Pittsburgh
Pittsburgh Panthers football seasons
College football national champions
Pittsburgh Panthers football |
64534021 | https://en.wikipedia.org/wiki/Firefox%20Send | Firefox Send | Firefox Send was a free and open-source end-to-end encrypted file sharing web service developed by Mozilla. It was launched on March 12, 2019, and was taken offline on July 7, 2020, after the discovery that it was being used to spread malware and spear phishing attacks. The shutdown was made permanent on September 17, 2020, following employee layoffs in August that likely included the staff who would have been responsible for implementing abuse prevention and malware reporting mechanisms.
Functionality
Firefox Send allowed users to upload computer files, including large files up to 2.5 gigabytes, to the Send website, generating links from which the file could be accessed and downloaded. Users could also set expiration dates or a maximum number of downloads for each link.
The service was end-to-end encrypted, meaning that only the uploader and those with whom a link was shared could view the file.
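The general pattern behind such link-based end-to-end encryption can be sketched briefly. Firefox Send itself ran as a web application with client-side JavaScript, so the C sketch below (using libsodium) is only a language-neutral illustration, not Send's actual code; the service URL, file identifier and file contents are hypothetical. The essential point is that only the ciphertext and nonce are uploaded, while the key travels in the URL fragment, the part after "#" that browsers do not transmit to the server.

#include <sodium.h>
#include <stdio.h>

int main(void) {
    if (sodium_init() < 0) return 1;        /* libsodium must initialize */

    /* Generate a fresh random key and nonce on the client. */
    unsigned char key[crypto_secretbox_KEYBYTES];
    unsigned char nonce[crypto_secretbox_NONCEBYTES];
    crypto_secretbox_keygen(key);
    randombytes_buf(nonce, sizeof nonce);

    /* Encrypt the file locally; only ct and nonce would be uploaded. */
    const unsigned char file[] = "file contents";
    unsigned char ct[crypto_secretbox_MACBYTES + sizeof file];
    crypto_secretbox_easy(ct, file, sizeof file, nonce, key);

    /* Share the key in the URL fragment; the server never sees it. */
    char keyhex[2 * sizeof key + 1];
    sodium_bin2hex(keyhex, sizeof keyhex, key, sizeof key);
    printf("https://send.example/d/FILE_ID#%s\n", keyhex);
    return 0;
}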
History
In August 2017, Mozilla launched Firefox Send via its Test Pilot program. The developers wanted to experiment with end-to-end encrypted data syncing and this service allowed them to try out encrypting large files, gigabytes in size, as opposed to the megabytes usually synced by browsers.
Firefox Send was launched to the public on March 12, 2019. On July 7, 2020, the service was suspended because the absence of any form of authentication or abuse-reporting mechanism had attracted cybercriminals, who used it to distribute malware and mount spear phishing attacks. The suspension was intended to be temporary, lasting only as long as necessary to implement mandatory authentication via a Firefox Account for file uploads and mechanisms for reporting previously uploaded malware.
On September 17, 2020, as a part of Mozilla's business and products refocusing plans, the service was shut down permanently, along with Firefox Notes.
See also
describing "Encrypted Content-Encoding for HTTP", an encoding used by Firefox Send to bundle multiple uploaded encrypted files into one file for storing together on the server.
References
Firefox
Free and open-source software
Mozilla
Software using the Mozilla license
Discontinued software |
86020 | https://en.wikipedia.org/wiki/Iris%20%28mythology%29 | Iris (mythology) | In Greek mythology, Iris is a daughter of the gods Thaumas and Electra, the personification and goddess of the rainbow, and the messenger of the gods.
Family
According to Hesiod's Theogony, Iris is the daughter of Thaumas and the Oceanid Electra and the sister of the Harpies: Arke and Ocypete. During the Titanomachy, Iris was the messenger of the Olympian gods, while her sister Arke betrayed the Olympians and became the messenger of their enemies, the Titans.
She is the goddess of the rainbow. She also serves nectar to the goddesses and gods to drink.
Zephyrus, the god of the west wind, is her consort. Together they had a son named Pothos; alternatively, according to the sixth-century BC Greek lyric poet Alcaeus, they were the parents of Eros, the god of love, though Eros is usually said to be the son of Ares and Aphrodite. According to the Dionysiaca of Nonnus, Iris' brother is Hydaspes.
She is also known as one of the goddesses of the sea and the sky. Iris links the gods to humanity. She travels with the speed of wind from one end of the world to the other and into the depths of the sea and the underworld.
Mythology
Messenger of the gods
In some records Iris is a sister to fellow messenger goddess Arke (arch), who flew out of the company of the Olympian gods to join the Titans as their messenger goddess during the Titanomachy, making the two sisters messenger goddesses of opposing sides. Iris was said to have golden wings, whereas Arke had iridescent ones. Iris is also said to travel on the rainbow while carrying messages from the gods to mortals. During the Titan War, Zeus tore Arke's iridescent wings from her and gave them as a gift to the Nereid Thetis at her wedding, who in turn gave them to her son, Achilles, who wore them on his feet. Achilles was sometimes known as podarkes (feet like [the wings of] Arke). Podarces was also the original name of Priam, king of Troy.
Following her daughter Persephone's abduction by Hades, the goddess of agriculture Demeter withdrew to her temple in Eleusis and made the earth barren, causing a great famine which killed off mortals, and as a result sacrifices to the gods ceased. Zeus then sent Iris to Demeter, calling her to join the other gods and lift her curse; but as her daughter was not returned, Demeter was not persuaded.
According to the lost epic Cypria by Stasinus, it was Iris who informed Menelaus, who had sailed off to Crete, of what had happened back in Sparta while he was gone, namely his wife Helen's elopement with the Trojan Prince Paris as well as the death of Helen's brother Castor.
Iris is frequently mentioned as a divine messenger in The Iliad, which is attributed to Homer. She does not, however, appear in The Odyssey, where her role is instead filled by Hermes. Like Hermes, Iris carries a caduceus or winged staff. By command of Zeus, the king of the gods, she carries a ewer of water from the River Styx, with which she puts to sleep all who perjure themselves. In Book XXIII, she delivers Achilles's prayer to Boreas and Zephyrus to light the funeral pyre of Patroclus.
Iris also appears several times in Virgil's Aeneid, usually as an agent of Juno. In Book 4, Juno dispatches her to pluck a lock of hair from the head of Queen Dido, that she may die and enter Hades. In book 5, Iris, having taken on the form of a Trojan woman, stirs up the other Trojan mothers to set fire to four of Aeneas' ships in order to prevent them from leaving Sicily.
According to the Roman poet Ovid, after Romulus was deified as the god Quirinus, his wife Hersilia pleaded with the gods to let her become immortal as well so that she could be with her husband once again. Juno heard her plea and sent Iris down to her. With a single finger, Iris touched Hersilia and transformed her into an immortal goddess. Hersilia flew to Olympus, where she became one of the Horae and was permitted to live with her husband forevermore.
Other myths
According to the "Homeric Hymn to Apollo", when Leto was in labor prior to giving birth to her twin children Apollo and Artemis, all the goddesses were in attendance except for two, Hera and Eileithyia, the goddess of childbirth. On the ninth day of her labor, Leto told Iris to bribe Ilithyia and ask for her help in giving birth to her children, without allowing Hera to find out. According to Callimachus, Iris along with Ares ordered, on Hera's orders, all cities and other places to shun the pregnant Leto and deny her shelter where she could bring forth her twins.
According to Apollonius Rhodius, Iris turned back the Argonauts Zetes and Calais, who had pursued the Harpies to the Strophades ("Islands of Turning"). The brothers had driven off the monsters from their torment of the prophet Phineus, but did not kill them upon the request of Iris, who promised that Phineus would not be bothered by the Harpies again.
In a lesser-known narrative, Iris once came close to being raped by the satyrs after she attempted to disrupt their worship of Dionysus, perhaps at the behest of Hera. About fifteen black- and red-figure vase paintings dating from the fifth century BC depict the satyrs either menacingly advancing toward her or getting hold of her as she tries to interfere with the sacrifice.
In Euripides' play Herakles, Iris appears alongside Lyssa, cursing Heracles with the fit of madness in which he kills his three sons and his wife Megara.
Worship
Cult
There are no known temples or sanctuaries to Iris. While she is frequently depicted on vases and in bas-reliefs, few statues are known to have been made of Iris during antiquity. She was, however, depicted in sculpture on the west pediment of the Parthenon in Athens.
Iris does appear to have been the object of at least some minor worship, but the only preserved trace of her cult is a note by Athenaeus in Scholars at Dinner that the people of Delos sacrificed to Iris, offering her cheesecakes called basyniae, made of wheat flour, suet and honey boiled together.
Epithets
Iris had numerous poetic titles and epithets, including chrysopteros ("golden-winged"), podas ōkea ("swift-footed") or podēnemos ōkea ("wind-swift footed"), roscida ("dewy", Latin), Thaumantias ("daughter of Thaumas, wondrous one"), and aellopus ("storm-footed", "storm-swift"). She also watered the clouds with her pitcher, obtaining the water from the sea.
Representation
Iris is represented either as a rainbow or as a beautiful young maiden with wings on her shoulders. As a goddess, Iris is associated with communication, messages, the rainbow, and new endeavors. This personification of the rainbow was once described as a link between the heavens and the earth.
In some texts she is depicted wearing a coat of many colors, with which she creates the rainbows she rides from place to place. Iris' wings were said to be so beautiful that she could light up even a dark cavern, a trait seen in the story of her visit to Somnus to relay a message to Alcyone.
While Iris was principally associated with communication and messages, she was also believed to aid in the fulfillment of humans' prayers, either by fulfilling them herself or by bringing them to the attention of other deities.
Gallery
Notes
References
Ancient
Homer, The Iliad with an English Translation by A.T. Murray, PhD in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Hesiod, Theogony, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, MA., Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library.
Evelyn-White, Hugh, The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White. Homeric Hymns. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1914.
Euripides, The Complete Greek Drama, edited by Whitney J. Oates and Eugene O'Neill, Jr., in two volumes. 2. The Phoenissae, translated by E. P. Coleridge. New York, Random House, 1938.
Apollonius Rhodius, Argonautica, translated by R. C. Seaton (1853–1915). Loeb Classical Library Volume 001. London, William Heinemann Ltd, 1912. Online version at the Topos Text Project.
Callimachus. Hymns, translated by Alexander William Mair (1875–1928). London: William Heinemann; New York: G.P. Putnam's Sons. 1921. Online version at the Topos Text Project.
Apollodorus, Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library.
Vergil, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library.
Ovid. Metamorphoses, Volume I: Books 1–8. Translated by Frank Justus Miller. Revised by G. P. Goold. Loeb Classical Library No. 42. Cambridge, Massachusetts: Harvard University Press, 1977, first published 1916. . Online version at Harvard University Press.
Modern
Grimal, Pierre (1996). "Iris". The Dictionary of Classical Mythology. . pp. 237–238.
Peyré, Yves (2009). "Iris". A Dictionary of Shakespeare's Classical Mythology, ed. Yves Peyré.
Smith, William (1873). "Iris". Dictionary of Greek and Roman Biography and Mythology. London.
External links
IRIS from The Theoi Project
IRIS from Greek Mythology Link
Hesiod, the Homeric Hymns, and Homerica by Hesiod (English translation at Project Gutenberg)
The Iliad by Homer (English translation at Project Gutenberg)
The Argonautica, by c. 3rd century BC Apollonius Rhodius (English translation at Project Gutenberg)
Deities in the Iliad
Greek goddesses
Messenger goddesses
Characters in the Argonautica
Rainbows in culture
Rape of Persephone
Metamorphoses characters
Sky and weather goddesses
Personifications in Greek mythology
Deities in the Aeneid |
145908 | https://en.wikipedia.org/wiki/Apache%20License | Apache License | The Apache License is a permissive free software license written by the Apache Software Foundation (ASF). It allows users to use the software for any purpose, to distribute it, to modify it, and to distribute modified versions of the software under the terms of the license, without concern for royalties. The ASF and its projects release their software products under the Apache License. The license is also used by many non-ASF projects.
History
Beginning in 1995, the Apache Group (later the Apache Software Foundation) released successive versions of the Apache HTTP Server. Its initial license was essentially the same as the original 4-clause BSD license, with only the names of the organizations changed, and with an additional clause forbidding derivative works from bearing the Apache name.
In July 1999, the Berkeley Software Distribution accepted the argument put to it by the Free Software Foundation and retired their advertising clause (clause 3) to form the new 3-clause BSD license. In 2000, Apache did likewise and created the Apache License 1.1, in which derived products are no longer required to include attribution in their advertising materials, only in their documentation. Individual packages licensed under the 1.1 version may have used different wording due to varying requirements for attribution or mark identification, but the binding terms were the same.
In January 2004, ASF decided to depart from the BSD model and produced the Apache License 2.0. The stated goals of the license included making it easier for non-ASF projects to use, improving compatibility with GPL-based software, allowing the license to be included by reference instead of listed in every file, clarifying the license on contributions, and requiring a patent license on contributions that necessarily infringe a contributor's own patents. This license requires preservation of the copyright notice and disclaimer.
Licensing conditions
The Apache License is permissive; unlike copyleft licenses, it does not require a derivative work of the software, or modifications to the original, to be distributed using the same license. It still requires application of the same license to all unmodified parts. In every licensed file, original copyright, patent, trademark, and attribution notices must be preserved (excluding notices that do not pertain to any part of the derivative works). In every licensed file changed, a notification must be added stating that changes have been made to that file.
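For reference, the appendix to the Apache License 2.0 supplies a standard boilerplate notice that copyright owners attach to each file, with the bracketed fields replaced by their own identifying information:

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.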
If a NOTICE text file is included as part of the distribution of the original work, then derivative works must include a readable copy of these notices within a NOTICE text file distributed as part of the derivative works, within the source form or documentation, or within a display generated by the derivative works (wherever such third-party notices normally appear).
The contents of the NOTICE file do not modify the license, as they are for informational purposes only, and adding more attribution notices as addenda to the NOTICE text is permissible, provided that these notices cannot be understood as modifying the license. Modifications may have appropriate copyright notices, and may provide different license terms for the modifications.
Unless explicitly stated otherwise, any contributions submitted by a licensee to a licensor will be under the terms of the license, without any additional terms or conditions, but this does not preclude any separate agreements with the licensor regarding these contributions.
Apache License 2.0
The Apache License 2.0 assures users that they do not have to worry about infringing any patents by using the software: each contributor grants users a license to any of its patents that cover the software. That patent license is terminated if the user sues anyone over patent infringement related to the software, a condition added in order to deter patent litigation.
Compatibility
The Apache Software Foundation and the Free Software Foundation agree that the Apache License 2.0 is a free software license, compatible with the GNU General Public License (GPL) version 3, meaning that code under GPLv3 and Apache License 2.0 can be combined, as long as the resulting software is licensed under the GPLv3.
The Free Software Foundation considers all versions of the Apache License to be incompatible with the previous GPL versions 1 and 2. Furthermore, it considers Apache License versions before 2.0 incompatible with GPLv3. Because of version 2.0's patent license requirements, the Free Software Foundation recommends it over other non-copyleft licenses.
Reception and adoption
In October 2012, 8,708 projects located at SourceForge.net were available under the terms of the Apache License. In a blog post from May 2008, Google mentioned that over 25% of the nearly 100,000 projects then hosted on Google Code were using the Apache License, including the Android operating system.
According to Black Duck Software and GitHub, the Apache License is the third most popular license in the FOSS domain, after the MIT License and the GPLv2.
The OpenBSD project does not consider the Apache License 2.0 to be an acceptable free license because of its patent provisions. The OpenBSD project's policy holds that when a license forces one to give up a legal right that one otherwise has, that license is no longer free. Moreover, the project objects to involving contract law with copyright law, stating "...Copyright law is somewhat standardized by international agreements, contract law differs wildly among jurisdictions. So what the license means in different jurisdictions may vary and is hard to predict."
See also
Comparison of free and open-source software licenses
Software using the Apache license (category)
References
External links
Quick Summary of the Apache License 2.0
License
Free and open-source software licenses
Permissive software licenses
Software using the Apache license |
56209204 | https://en.wikipedia.org/wiki/Spectre%20%28security%20vulnerability%29 | Spectre (security vulnerability) | Spectre is a class of security vulnerabilities that affects modern microprocessors that perform branch prediction and other forms of speculation.
On most processors, the speculative execution resulting from a branch misprediction may leave observable side effects that may reveal private data to attackers. For example, if the pattern of memory accesses performed by such speculative execution depends on private data, the resulting state of the data cache constitutes a side channel through which an attacker may be able to extract information about the private data using a timing attack.
Two Common Vulnerabilities and Exposures IDs related to Spectre, CVE-2017-5753 (bounds check bypass, Spectre-V1, Spectre 1.0) and CVE-2017-5715 (branch target injection, Spectre-V2), have been issued. JIT engines used for JavaScript were found to be vulnerable, allowing a website to read data stored in the browser for another website, or the browser's memory itself.
In early 2018, Intel reported that it would redesign its CPUs to help protect against the Spectre and related Meltdown vulnerabilities (especially Spectre variant 2 and Meltdown, but not Spectre variant 1). On 8 October 2018, Intel was reported to have added hardware and firmware mitigations regarding Spectre and Meltdown vulnerabilities to its latest processors.
History
In 2002 and 2003, Yukiyasu Tsunoo and colleagues from NEC showed how to attack the MISTY and DES symmetric key ciphers, respectively. In 2005, Daniel Bernstein from the University of Illinois at Chicago reported an extraction of an OpenSSL AES key via a cache-timing attack, and Colin Percival demonstrated a working attack on an OpenSSL RSA key using the Intel processor's cache. In 2013, Yuval Yarom and Katrina Falkner from the University of Adelaide showed that measuring the access time to data lets a nefarious application determine whether the information was read from the cache: if it was, the access time would be very short, meaning the data read could include the private keys of encryption algorithms.
This technique was used to successfully attack GnuPG, AES and other cryptographic implementations. In January 2017, Anders Fogh gave a presentation at Ruhr University Bochum on automatically finding covert channels, especially on processors with a pipeline shared by more than one processor core.
Spectre proper was discovered independently by Jann Horn from Google's Project Zero and Paul Kocher in collaboration with Daniel Genkin, Mike Hamburg, Moritz Lipp and Yuval Yarom. Microsoft Vulnerability Research extended it to browsers' JavaScript JIT engines. It was made public in conjunction with another vulnerability, Meltdown, on 3 January 2018, after the affected hardware vendors had already been made aware of the issue on 1 June 2017. The vulnerability was called Spectre because it was "based on the root cause, speculative execution. As it is not easy to fix, it will haunt us for quite some time."
On 28 January 2018, it was reported that Intel shared news of the Meltdown and Spectre security vulnerabilities with Chinese technology companies, before notifying the U.S. government of the flaws.
On 29 January 2018, Microsoft was reported to have released a Windows update that disabled the problematic Intel Microcode fix—which had, in some cases, caused reboots, system instability, and data loss or corruption—issued earlier by Intel for the Spectre Variant 2 attack. Woody Leonhard of ComputerWorld expressed a concern about installing the new Microsoft patch.
Since the disclosure of Spectre and Meltdown in January 2018, a great deal of research on vulnerabilities related to speculative execution has been done. On 3 May 2018, eight additional Spectre-class flaws, provisionally named Spectre-NG by c't (a German computer magazine), were reported to affect Intel and possibly AMD and ARM processors. Intel reported that it was preparing new patches to mitigate these flaws. Affected are all Core i processors and Xeon derivatives since Nehalem (2010) and Atom-based processors since 2013. Intel postponed the release of its microcode updates to 10 July 2018.
On 21 May 2018, Intel published information on the first two Spectre-NG class side-channel vulnerabilities, CVE-2018-3640 (Rogue System Register Read, Variant 3a) and CVE-2018-3639 (Speculative Store Bypass, Variant 4), also referred to as Intel SA-00115 and HP PSR-2018-0074, respectively.
According to Amazon Deutschland, Cyberus Technology, SYSGO, and Colin Percival (FreeBSD), Intel revealed details of a third Spectre-NG variant, CVE-2018-3665 (Lazy FP State Restore, Intel SA-00145), on 13 June 2018. It is also known as Lazy FPU state leak (abbreviated "LazyFP") and "Spectre-NG 3".
On 10 July 2018, Intel revealed details on another Spectre-NG class vulnerability called "Bounds Check Bypass Store" (BCBS), aka "Spectre 1.1" (CVE-2018-3693), which was able to write as well as read out of bounds. Another variant named "Spectre 1.2" was mentioned as well.
In late July 2018, researchers at the universities of Saarland and California revealed ret2spec (aka "Spectre v5") and SpectreRSB, new types of code execution vulnerabilities using the Return Stack Buffer (RSB).
At the end of July 2018, researchers at the University of Graz revealed "NetSpectre", a new type of remote attack similar to Spectre V1, but which does not need attacker-controlled code to be run on the target device at all.
On 8 October 2018, Intel was reported to have added hardware and firmware mitigations regarding Spectre and Meltdown vulnerabilities to its latest processors.
In November 2018, five new variants of the attacks were revealed. Researchers attempted to compromise CPU protection mechanisms using code to exploit the CPU pattern history table, branch target buffer, return stack buffer, and branch history table.
In August 2019, a related transient execution CPU vulnerability, Spectre SWAPGS (CVE-2019-1125), was reported.
In late April 2021, a related vulnerability was discovered that breaks through the security systems designed to mitigate Spectre through use of the micro-op cache. The vulnerability is known to affect Skylake and later processors from Intel and Zen-based processors from AMD.
Mechanism
Spectre is a vulnerability that tricks a program into accessing arbitrary locations in the program's memory space. An attacker may read the content of accessed memory, and thus potentially obtain sensitive data.
Instead of a single easy-to-fix vulnerability, the Spectre white paper describes a whole class of potential vulnerabilities. They are all based on exploiting side effects of speculative execution, a common means of hiding memory latency and so speeding up execution in modern microprocessors. In particular, Spectre centers on branch prediction, which is a special case of speculative execution. Unlike the related Meltdown vulnerability disclosed at the same time, Spectre does not rely on a specific feature of a single processor's memory management and protection system, but is instead a more generalized idea.
The starting point of the white paper is a side-channel timing attack applied to the branch prediction machinery of modern out-of-order executing microprocessors. At the architectural level documented in processor data books, any results of misprediction are specified to be discarded after the fact, but the resulting speculative execution may still leave side effects, such as loaded cache lines. These can then affect the so-called non-functional aspects of the computing environment later on. If such side effects, including but not limited to memory access timing, are visible to a malicious program, and can be engineered to depend on sensitive data held by the victim process, then these side effects can result in such data becoming discernible. This can happen despite the formal architecture-level security arrangements working as designed; in this case, lower, microarchitecture-level optimizations to code execution can leak information not essential to the correctness of normal program execution.
The Spectre paper displays the attack in four essential steps:
First, it shows that branch prediction logic in modern processors can be trained to reliably hit or miss based on the internal workings of a malicious program.
It then goes on to show that the subsequent difference between cache hits and misses can be reliably timed, so that what should have been a simple non-functional difference can in fact be subverted into a covert channel which extracts information from an unrelated process's inner workings.
Thirdly, the paper synthesizes the results with return-oriented programming exploits and other principles with a simple example program and a JavaScript snippet run in a sandboxed browser; in both cases, the entire address space of the victim process (i.e. the contents of a running program) is shown to be readable by simply exploiting speculative execution of conditional branches in code generated by a stock compiler or the JavaScript machinery present in an existing browser (a minimal sketch of such a branch gadget appears after this list). The basic idea is to search existing code for places where speculation touches upon otherwise inaccessible data, manipulate the processor into a state where speculative execution has to contact that data, and then time the side effect of the processor being faster, if its by-now-prepared prefetch machinery indeed did load a cache line.
Finally, the paper concludes by generalizing the attack to any non-functional state of the victim process. It briefly discusses even such highly non-obvious non-functional effects as bus arbitration latency.
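As a concrete illustration of the conditional-branch gadget described in the third step, consider the minimal C sketch below. It mirrors the shape of the examples in the Spectre paper (a bounds-checked array access followed by a secret-dependent second load), but the array names, sizes and page-sized stride are illustrative choices rather than the paper's verbatim proof-of-concept code.

#include <stddef.h>
#include <stdint.h>

size_t  array1_size = 16;
uint8_t array1[16];           /* in-bounds data the attacker may index */
uint8_t array2[256 * 4096];   /* probe array: one page per byte value */

/* Variant 1 victim gadget: after the branch predictor has been trained
   with in-bounds values of x, an out-of-bounds x may still be used
   speculatively. The secret byte array1[x] then selects which line of
   array2 is pulled into the cache, leaving a timing footprint even
   though the architectural result is discarded. */
void victim_function(size_t x) {
    if (x < array1_size) {
        volatile uint8_t y = array2[array1[x] * 4096];
        (void)y;   /* the value is unused; only the cache state matters */
    }
}

In a real attack this function lives in the victim's code; the attacker merely arranges the training calls and later times reads of array2, as outlined in the Remote exploitation section below.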
Meltdown can be used to read privileged memory in a process's address space which even the process itself would normally be unable to access (on some unprotected OSes this includes data belonging to the kernel or other processes). It was shown that, under certain circumstances, the Spectre vulnerability is also capable of reading memory outside of the current process's memory space.
The Meltdown paper distinguishes the two vulnerabilities thus: "Meltdown is distinct from the Spectre Attacks in several ways, notably that Spectre requires tailoring to the victim process's software environment, but applies more broadly to CPUs and is not mitigated by KAISER."
Remote exploitation
While Spectre is simpler to exploit with a compiled language such as C or C++ by locally executing machine code, it can also be remotely exploited by code hosted on malicious web pages, for example in interpreted languages like JavaScript, which run locally in a web browser. The scripted malware would then have access to all the memory mapped to the address space of the running browser.
The exploit using remote JavaScript follows a similar flow to that of a local machine code exploit: Flush Cache → Mistrain Branch Predictor → Timed Reads (tracking hit / miss).
The unavailability of the clflush (cache-line flush) instruction in JavaScript requires an alternative approach. There are several automatic cache eviction policies from which the CPU may choose, and the attack relies on being able to force that eviction for the exploit to work. It was found that using a second index on the large array, kept several iterations behind the first index, would cause the least recently used (LRU) policy to be used. This allows the exploit to effectively clear the cache just by doing incremental reads on a large dataset.
The branch predictor would then be mistrained by iterating over a very large dataset using bitwise operations for setting the index to in-range values, and then using an out-of-bounds address for the final iteration.
A high-precision timer would then be required in order to determine if a set of reads led to a cache-hit or a cache-miss. While browsers like Chrome, Firefox, and Tor Browser (based on Firefox) have placed restrictions on the resolution of timers (required in Spectre exploit to determine if cache hit/miss), at the time of authoring the white paper, the Spectre author was able to create a high-precision timer using the web worker feature of HTML5.
Careful coding and analysis of the machine code generated by the just-in-time (JIT) compiler was required to ensure the cache-clearing and exploitative reads were not optimized out.
Impact
As of 2018, almost every computer system is affected by Spectre, including desktops, laptops, and mobile devices. Specifically, Spectre has been shown to work on Intel, AMD, ARM-based, and IBM processors. Intel responded to the reported security vulnerabilities with an official statement. AMD originally acknowledged vulnerability to one of the Spectre variants (GPZ variant 1), but stated that vulnerability to another (GPZ variant 2) had not been demonstrated on AMD processors, claiming it posed a "near zero risk of exploitation" due to differences in AMD architecture. In an update nine days later, AMD said that "GPZ Variant 2...is applicable to AMD processors" and defined upcoming steps to mitigate the threat. Several sources took AMD's news of the vulnerability to GPZ variant 2 as a change from AMD's prior claim, though AMD maintained that their position had not changed.
Researchers have indicated that the Spectre vulnerability can possibly affect some Intel, AMD, and ARM processors. Specifically, processors with speculative execution are affected by these vulnerabilities.
ARM has reported that the majority of their processors are not vulnerable, and published a list of the specific processors that are affected by the Spectre vulnerability: Cortex-R7, Cortex-R8, Cortex-A8, Cortex-A9, Cortex-A15, Cortex-A17, Cortex-A57, Cortex-A72, Cortex-A73 and Cortex-A75 cores. Other manufacturers' custom CPU cores implementing the ARM instruction set, such as those found in newer members of the Apple A series processors, have also been reported to be vulnerable. In general, higher-performance CPUs tend to make more intensive use of speculative execution, making them more vulnerable to Spectre.
Spectre has the potential to have a greater impact on cloud providers than Meltdown. Whereas Meltdown allows unauthorized applications to read from privileged memory to obtain sensitive data from processes running on the same cloud server, Spectre can allow malicious programs to induce a hypervisor to transmit the data to a guest system running on top of it.
Mitigation
Since Spectre represents a whole class of attacks, there most likely cannot be a single patch for it. While work is already being done to address special cases of the vulnerability, the original website devoted to Spectre and Meltdown states: "As [Spectre] is not easy to fix, it will haunt us for a long time." At the same time, according to Dell: "No 'real-world' exploits of these vulnerabilities [i.e., Meltdown and Spectre] have been reported to date [7 February 2018], though researchers have produced proof-of-concepts."
Several procedures to help protect home computers and related devices from the vulnerability have been published. Spectre patches have been reported to significantly slow down performance, especially on older computers; on the newer eighth-generation Core platforms, benchmark performance drops of 2–14 percent have been measured. On 18 January 2018, unwanted reboots caused by Meltdown and Spectre patches were reported, even on newer Intel chips.
It has been suggested that the cost of mitigation can be alleviated by processors which feature selective translation lookaside buffer (TLB) flushing, a feature called process-context identifier (PCID) under Intel 64 architecture and address space number (ASN) under Alpha. This is because selective flushing enables the TLB behavior crucial to the exploit to be isolated across processes without constantly flushing the entire TLB, which is the primary reason for the cost of mitigation.
In March 2018, Intel announced that they had developed hardware fixes for Meltdown and Spectre-V2, but not for Spectre-V1. The vulnerabilities were mitigated by a new partitioning system that improves process and privilege-level separation.
On 8 October 2018, Intel was reported to have added hardware and firmware mitigations regarding Spectre and Meltdown vulnerabilities to its Coffee Lake-R processors and onwards.
On 2 March 2019, Microsoft was reported to have released an important Windows 10 (v1809) software mitigation for the Spectre v2 CPU vulnerability.
Particular software
Since exploitation of Spectre through JavaScript embedded in websites is possible, it was planned to include mitigations against the attack by default in Chrome 64. Chrome 63 users could manually mitigate the attack by enabling the Site Isolation feature (chrome://flags#enable-site-per-process).
As of Firefox 57.0.4, Mozilla was reducing the resolution of JavaScript timers to help prevent timing attacks, with additional work on time-fuzzing techniques planned for future releases.
On 15 January 2018, Microsoft introduced mitigation for Spectre in Visual Studio, applied by compiling with the /Qspectre switch. To use it, a developer must download and install the appropriate libraries using the Visual Studio installer.
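A sketch of its use on the kind of bounds-checked read shown earlier (the array names repeat the illustrative gadget above; whether a barrier is actually emitted depends on the compiler's pattern analysis):

```c
/* Compile with:  cl /O2 /Qspectre gadget.c
 * With /Qspectre, MSVC may insert a speculation barrier (LFENCE)
 * after bounds checks of this shape, preventing the out-of-bounds
 * load from executing speculatively. */
#include <stddef.h>
#include <stdint.h>

extern size_t  array1_size;
extern uint8_t array1[], array2[];

uint8_t read_element(size_t x)
{
    if (x < array1_size)             /* barrier may be inserted here */
        return array2[array1[x] * 512];
    return 0;
}
```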
General approaches
On 4 January 2018, Google detailed a new technique on their security blog called "Retpoline" (return trampoline) which can overcome the Spectre vulnerability with a negligible amount of processor overhead. It involves compiler-level steering of indirect branches towards a different target that does not result in a vulnerable speculative out-of-order execution taking place. While it was developed for the x86 instruction set, Google engineers believe the technique is transferable to other processors as well.
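Compilers later exposed retpolines directly; a minimal sketch using GCC's flags (GCC 8 or newer; the thunk symbol is generated by the compiler):

```c
/* Compile with:  gcc -O2 -mindirect-branch=thunk -mfunction-return=thunk demo.c
 * GCC then routes indirect calls such as the one in apply() through a
 * return trampoline (an __x86_indirect_thunk_* routine) that traps any
 * mis-speculated branch target in a harmless spin loop. */
typedef int (*binop)(int, int);

static int add(int a, int b) { return a + b; }

int apply(binop f, int a, int b)
{
    return f(a, b);   /* indirect branch steered via the thunk */
}

int main(void)
{
    return apply(add, 2, 2) == 4 ? 0 : 1;
}
```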
On 25 January 2018, the current status and possible future considerations in solving the Meltdown and Spectre vulnerabilities were presented.
On 18 October 2018, MIT researchers suggested a new mitigation approach, called DAWG (Dynamically Allocated Way Guard), which may promise better security without compromising performance.
On 16 April 2019, researchers from UC San Diego and University of Virginia proposed Context-Sensitive Fencing, a microcode-based defense mechanism that surgically injects fences into the dynamic execution stream, protecting against a number of Spectre variants at just 8% degradation in performance.
Controversy
When Intel announced that Spectre mitigation can be switched on as a "security feature" instead of being an always-on bugfix, Linux creator Linus Torvalds called the patches "complete and utter garbage". Ingo Molnár then suggested the use of function-tracing machinery in the Linux kernel to fix Spectre without Indirect Branch Restricted Speculation (IBRS) microcode support. This would, as a result, have a performance impact only on processors based on Intel Skylake and newer architectures. This ftrace- and retpoline-based machinery was incorporated into Linux 4.15, released in January 2018.
Immune hardware
ARM:
Cortex-A53
Cortex-A32
Cortex-A7
Cortex-A5
See also
Foreshadow (security vulnerability)
Microarchitectural Data Sampling
Row hammer
SPOILER (security vulnerability)
Transient execution CPU vulnerabilities
References
Further reading
External links
Website detailing the Meltdown and Spectre vulnerabilities, hosted by Graz University of Technology
Google Project Zero write-up
Meltdown/Spectre Checker Gibson Research Corporation
Spectre & Meltdown vulnerability/mitigation checker for Linux
Speculative execution security vulnerabilities
Hardware bugs
Side-channel attacks
2018 in computing
X86 architecture
X86 memory management |
4075738 | https://en.wikipedia.org/wiki/Actor%20model%20later%20history | Actor model later history | In computer science, the Actor model, first published in 1973, is a mathematical model of concurrent computation. This article reports on the later history of the Actor model, in which major themes were investigation of the basic power of the model, study of issues of compositionality, development of architectures, and application to Open systems. It is the follow-on article to Actor model middle history, which reports on the initial implementations, initial applications, and development of the first proof theory and denotational model.
Power of the Actor Model
Investigations began into the basic power of the Actor model. Carl Hewitt [1985] argued that, because of the use of Arbiters, the Actor model was more powerful than logic programming (see indeterminacy in concurrent computation).
A family of Prolog-like concurrent message passing systems using unification of shared variables and data structure streams for messages were developed by Keith Clark, Hervé Gallaire, Steve Gregory, Vijay Saraswat, Udi Shapiro, Kazunori Ueda, etc. Some of these authors made claims that these systems were based on mathematical logic. However, like the Actor model, the Prolog-like concurrent systems were based on message passing and consequently were subject to indeterminacy in the ordering of messages in streams that was similar to the indeterminacy in arrival ordering of messages sent to Actors. Consequently, Carl Hewitt and Gul Agha [1991] concluded that the Prolog-like concurrent systems were neither deductive nor logical: they were not deductive because computational steps did not follow deductively from their predecessors, and they were not logical because no system of mathematical logic was capable of deriving the facts of subsequent computational situations from their predecessors.
Compositionality
Compositionality concerns composing systems from subsystems. Issues of compositionality had proven to be serious limitations for previous theories of computation, including the lambda calculus and Petri nets. For example, two lambda expressions are not a lambda expression, and two Petri nets are not a Petri net and cannot influence each other.
In his doctoral dissertation Gul Agha addressed issues of compositionality in the Actor model. Actor configurations have receptionists that can receive messages from outside and may have the addresses of the receptionists of other Actor configurations. In this way two Actor configurations can be composed into another configuration whose subconfigurations can communicate with each other. Actor configurations have the advantage that they can have multiple Actors (i.e. the receptionists) which receive messages from outside without the disadvantage of having to poll to get messages from multiple sources (see issues with getting messages from multiple channels).
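The receptionist idea can be pictured with a deliberately simplified, single-threaded C sketch (real Actor systems deliver messages asynchronously and in nondeterministic order; all names here are invented for illustration):

```c
#include <stdio.h>

/* A toy "configuration" exposes one receptionist actor; composing two
 * configurations amounts to handing one the other's receptionist address. */
typedef struct actor actor;
typedef void (*behavior)(actor *self, const char *msg);

struct actor {
    const char *name;
    behavior    behave;
    actor      *peer;    /* receptionist address learned from outside */
};

static void send_msg(actor *to, const char *msg)
{
    to->behave(to, msg); /* synchronous delivery, for simplicity only */
}

static void echo(actor *self, const char *msg)
{
    printf("%s received: %s\n", self->name, msg);
}

static void forward(actor *self, const char *msg)
{
    if (self->peer)
        send_msg(self->peer, msg);
}

int main(void)
{
    actor b = { "receptionist of configuration B", echo, NULL };
    actor a = { "receptionist of configuration A", forward, &b }; /* composition */
    send_msg(&a, "hello across configurations");
    return 0;
}
```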
Open Systems
Carl Hewitt [1985] pointed out that openness was becoming a fundamental challenge in software system development. Open distributed systems are required to meet the following challenges:
Monotonicity
Once something is published in an open distributed system, it cannot be taken back.
Pluralism
Different subsystems of an open distributed system include heterogeneous, overlapping and possibly conflicting information. There is no central arbiter of truth in open distributed systems.
Unbounded nondeterminism
Asynchronously, different subsystems can come up and go down and communication links can come in and go out between subsystems of an open distributed system. Therefore the time that it will take to complete an operation cannot be bounded in advance (see unbounded nondeterminism).
Inconsistency
Large distributed systems are inevitably inconsistent concerning their information about the information-system interactions of their human users.
Carl Hewitt and Jeff Inman [1991] worked to develop semantics for Open Systems to address issues that had arisen in Distributed Artificial Intelligence. Carl Hewitt and Carl Manning [1994] reported on the development of Participatory Semantics for Open Systems.
Computer Architectures
Researchers at Caltech under the leadership of Chuck Seitz developed the Cosmic Cube which was one of the first message-passing Actor architectures. Subsequently at MIT researchers under the leadership of Bill Dally developed the J Machine.
Attempts to relate Actor semantics to algebra and linear logic
Kohei Honda and Mario Tokoro 1991, José Meseguer 1992, Ugo Montanari and Carolyn Talcott 1998, M. Gaspari and G. Zavattaro 1999 have attempted to relate Actor semantics to algebra. Also John Darlington and Y. K. Guo 1994 have attempted to relate linear logic to Actor semantics.
However, none of the above formalisms addresses the crucial property of guarantee of service (see unbounded nondeterminism).
Recent developments
Recent developments in the Actor model have come from several sources.
Hardware development is furthering both local and nonlocal massive concurrency. Local concurrency is being enabled by new hardware for 64-bit many-core microprocessors, multi-chip modules, and high performance interconnect. Nonlocal concurrency is being enabled by new hardware for wired and wireless broadband packet switched communications. Both local and nonlocal storage capacities are growing exponentially. These hardware developments pose enormous modelling challenges. Hewitt [Hewitt 2006a, 2006b] is attempting to use the Actor model to address these challenges.
References
Carl Hewitt. The Challenge of Open Systems Byte Magazine. April 1985. Reprinted in The foundation of artificial intelligence---a sourcebook Cambridge University Press. 1990.
Carl Manning. Traveler: the actor observatory ECOOP 1987. Also appears in Lecture Notes in Computer Science, vol. 276.
William Athas and Charles Seitz Multicomputers: message-passing concurrent computers IEEE Computer August 1988.
William Dally and Wills, D. Universal mechanisms for concurrency PARLE 1989.
W. Horwat, A. Chien, and W. Dally. Experience with CST: Programming and Implementation PLDI. 1989.
Carl Hewitt. Towards Open Information Systems Semantics Proceedings of 10th International Workshop on Distributed Artificial Intelligence. October 23–27, 1990. Bandera, Texas.
Akinori Yonezawa, Ed. ABCL: An Object-Oriented Concurrent System MIT Press. 1990.
K. Kahn and Vijay A. Saraswat, "Actors as a special case of concurrent constraint (logic) programming", in SIGPLAN Notices, October 1990. Describes Janus.
Carl Hewitt. Open Information Systems Semantics Journal of Artificial Intelligence. January 1991.
Carl Hewitt and Jeff Inman. DAI Betwixt and Between: From "Intelligent Agents" to Open Systems Science IEEE Transactions on Systems, Man, and Cybernetics. November /December 1991.
Carl Hewitt and Gul Agha. Guarded Horn clause languages: are they deductive and Logical? International Conference on Fifth Generation Computer Systems, Ohmsha 1988. Tokyo. Also in Artificial Intelligence at MIT, Vol. 2. MIT Press 1991.
Kohei Honda and Mario Tokoro. An Object Calculus for Asynchronous Communication ECOOP 91.
José Meseguer. Conditional rewriting logic as a unified model of concurrency in Selected papers of the Second Workshop on Concurrency and compositionality. 1992.
William Dally, et al. The Message-Driven Processor: A Multicomputer Processing Node with Efficient Mechanisms IEEE Micro. April 1992.
S. Miriyala, G. Agha, and Y. Sami. Visualizing actor programs using predicate transition nets Journal of Visual Programming. 1992.
Gul Agha, Ian Mason, Scott Smith, and Carolyn Talcott: A Foundation for Actor Computation Journal of Functional Programming January 1993.
Carl Hewitt and Carl Manning. Negotiation Architecture for Large-Scale Crisis Management AAAI-94 Workshop on Models of Conflict Management in Cooperative Problem Solving. Seattle, WA. August 4, 1994.
John Darlington and Y. K. Guo: Formalizing Actors in Linear Logic International Conference on Object-Oriented Information Systems. Springer-Verlag. 1994.
Carl Hewitt and Carl Manning. Synthetic Infrastructures for Multi-Agency Systems Proceedings of ICMAS '96. Kyoto, Japan. December 8–13, 1996.
S. Frolund. Coordinating Distributed Objects: An Actor-Based Approach for Synchronization MIT Press. November 1996.
W. Kim. ThAL: An Actor System for Efficient and Scalable Concurrent Computing PhD thesis. University of Illinois at Urbana Champaign. 1997.
Mauro Gaspari and Gianluigi Zavattaro: An Algebra of Actors, Technical Report UBLCS-97-4, University of Bologna, May 1997
Ugo Montanari and Carolyn Talcott. Can Actors and Pi-Agents Live Together? Electronic Notes in Theoretical Computer Science. 1998.
M. Gaspari and G. Zavattaro: An Algebra of Actors Formal Methods for Open Object Based Systems, 1999.
N. Jamali, P. Thati, and G. Agha. An actor based architecture for customizing and controlling agent ensembles IEEE Intelligent Systems. 14(2). 1999.
P. Thati, R. Ziaei, and G. Agha. A Theory of May Testing for Actors Formal Methods for Open Object-based Distributed Systems. March 2002.
P. Thati, R. Ziaei, and G. Agha. A theory of may testing for asynchronous calculi with locality and no name matching Algebraic Methodology and Software Technology. Springer Verlag. September 2002. LNCS 2422.
Gul Agha and Prasanna Thati. An Algebraic Theory of Actors and Its Application to a Simple Object-Based Language, From OO to FM (Dahl Festschrift) LNCS 2635. Springer-Verlag. 2004.
Carl Hewitt. The repeated demise of logic programming and why it will be reincarnated What Went Wrong and Why: Lessons from AI Research and Applications. Technical Report SS-06-08. AAAI Press. March 2006b.
Carl Hewitt What is Commitment? Physical, Organizational, and Social COIN@AAMAS. 2006a.
Actor model (computer science)
History of computing |
70009709 | https://en.wikipedia.org/wiki/Gorretti%20Byomire | Gorretti Byomire | Gorretti Byomire is a Ugandan computer scientist, academic and disability rights activist. She is a lecturer in the Department of Applied Computing & Information Technology at Makerere University Business School (MUBS), in Kampala, Uganda. She concurrently serves as the Director of the Disability Resource & Learning Centre at MUBS.
Background and education
Goretti, a Ugandan by birth, was born circa 1984. She attended St. Theresa Namagunga Primary School. She then studied at Trinity College Nabbingo, for both her O-Level and A-level studies.
She holds a Bachelor of Business Computing degree and a Master of Science in Information Technology degree, both obtained from Makerere University, Uganda's oldest and largest public university. As of February 2022, she was pursuing a Doctor of Philosophy in Information Systems at the University of South Africa, in Pretoria.
Work experience
Goretti's career in the Information Technology arena goes back to 2007, after her first degree. She was hired as a graduate teaching assistant at MUBS, while she concurrently pursued her second degree. Over the years, she was promoted to Assistant Lecturer and then to full Lecturer.
Other considerations
Among her many responsibilities, she is a member of the MUBS University Council, where she represents people with disabilities (PWDs). She is also a member of the MUBS Technical Advisory Disability Committee (TADC). In addition, she serves as the "focal person" for the Uganda National Council for Disability (UNCD). She is reported to specialize in "disability rights, inclusive education, policy advocacy, technology"... and in the rights of youth, particularly girls, and of women.
Goretti Byomire is a Mandela Washington Fellow, Class of 2021. As part of the fellowship, she studied public management at the University of Minnesota. Three years earlier, in 2018, she had studied public management at Kenyatta University as a Fellow of the Young African Leaders Institute Regional Leadership Center (YALI RLC).
See also
Amanda Ngabirano
References
External links
Personal Profile at LinkedIn.com
Photos: MUBS Students With Different Abilities Call for Tolerance at Disability Awareness Day As of 2019.
1984 births
Living people
Ugandan women scientists
Makerere University academics
Makerere University alumni
Disability rights activists
University of South Africa alumni
People educated at Trinity College Nabbingo
21st-century Ugandan women scientists |
18782132 | https://en.wikipedia.org/wiki/HP%2064000 | HP 64000 | The HP 64000 Logic Development System, introduced 17 September 1979, is a tool for developing hardware and software for products based on commercial microprocessors from a variety of manufacturers. The systems assisted software development with assemblers and compilers for Pascal and C, provided hardware for in-circuit emulation of processors and memory, had debugging tools including logic analysis hardware, and a programmable read-only memory (PROM) chip programmer. A wide variety of optional cards and software were available tailored to particular microprocessors. When introduced the HP 64000 had two distinguishing characteristics. First, unlike most microprocessor development systems of the day, such as the Intel Intellec and Motorola EXORciser, it was not dedicated to a particular manufacturer's microprocessors, and second, it was designed such that up to six workstations would be connected via the HP-IB (IEEE-488) instrumentation bus to a common hard drive and printer to form a tightly integrated network.
Models
64100A, introduced in 1979. It was a desktop workstation which contained ten expansion slots for various optional cards. The initial offering of this workstation required an external hard disk for all disk storage, although the disk could be shared by up to six workstations via the HP-IB (IEEE-488) instrumentation bus. Later, a dual floppy drive option was added so that a workstation could be used without the shared hard drive. This workstation used the same custom HP 16-bit microprocessor found in the HP 9845C workstation. Software and hardware were offered to develop 8-bit and 16-bit microprocessors.
64110A, a more portable workstation with five card slots, was introduced in 1983. It used the same HP processor as the 64100A.
64120A card cage introduced in 1986. It fit the same option cards as the 64100A and 64110A, and was connected via an IEEE-488 bus to a standard HP 9000 Series 300 workstation running the HP-UX operating system rather than using a specially designed workstation such as the 64100A and 64110A. The name "HP 64000-UX Microprocessor Development Environment" was used with these systems. Software and hardware were introduced for development of 32-bit microprocessors.
64700A card cage was introduced in 1988. It was marketed as a lower cost development system (compared to the 64120A) that could be operated with an IBM PC-compatible personal computer rather than a workstation. Cards for this system carried the numbers 647xx, and were not compatible with the other systems.
Description
Terminology
As shown in the block diagram to the right, a 64000 system consisted of a number of components whose names had specific definitions:
Mainframe is the physical workstation or card cage holding the option cards.
Host is the processor that operates the mainframe. In the 64100A and 64110A the Host Bus is the workstation processor's address, data, input/output and control buses, which also connect to the cards in the card cage.
User system is the microprocessor system being developed. The terms user processor and user memory describe those components in the system being developed.
Emulation or Emulator refers to optional cards and other hardware that are connected to the mainframe via the plug-in cards and can replace the processor and/or memory in the user system. Emulation and analysis cards are interconnected with an Emulation Bus that is completely separated from the Host Bus.
Software Development
The 64000 provided a file system and text editor for writing software. There was a generic assembler / linker (manual at Bitsavers), a Pascal compiler (manual at Bitsavers), and a C compiler (manual at Bitsavers), which were supplemented with add-on cross-assemblers and cross-compilers for each particular microprocessor. A list of these by product number is:
* HPCM is the Hewlett Packard Computer Museum
In addition, there was a Pascal "Host Compiler", product number 64817A (manual at Bitsavers, disk image at HPCM), which could be used to write programs to execute on the workstation host processor.
In-Circuit Emulation
The 64000 system, through the use of optional cards and software, could perform in-circuit emulation of a variety of microprocessors and their memory. A complete emulation system typically consisted of:
A microprocessor emulator controller card, specific to each microprocessor.
An emulation "pod" or "probe", which contained interface electronics and was an external module to the mainframe. The processor in the user system was removed from its socket, and a cable from the emulation pod was connected in its place. The emulation pod contained a copy of the user processor that ran program code just as the user processor would, and it appeared to the user system as the normal processor.
An emulation memory controller card and one or more emulation memory cards. The emulation memory could be used to substitute for memory in the user system so that, for instance, user program code could be placed in the emulation memory and executed rather than needing to program ROM chips.
An "internal" analyzer card, which was a logic analyzer that monitored the operation of the emulated processor and memory.
Emulator software that allowed the operator to start and stop the emulated processor, examine the contents of memory and register locations, measure signal timing, observe program flow, and so on.
The photo at right shows a 64100A workstation emulating the processor of a user system via an emulator pod. The photo also shows a data acquisition pod for an "external" logic analyzer card in the 64100A that was measuring additional digital signals in the user system.
Emulator control boards connected to both the host (mainframe) bus and the emulation bus. They acted to pass control signals and data between the host and emulated systems. Depending on the model, the control board might also contain hardware to flag illegal opcodes or memory accesses or to act as an internal logic analyzer.
Memory Emulation allows RAM and/or ROM in the user system to be replaced by memory in the 64000 system. Two emulation memory controller boards were offered:
64151A Emulation Memory Controller (manual at Bitsavers), which had 16 address lines so could address 64 KB of memory, and
64155A Wide Address Memory Controller (manual at Bitsavers), which had 24 address lines so could address 16 MB of memory.
Memory maps for the user system could be specified in terms of RAM, ROM and protected memory. Attempted writes to ROM or accesses of protected memory were detected by the memory controller and could trigger actions such as program breakpoints.
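Conceptually, the controller's check resembles the following C sketch (the types and table layout are invented for illustration; the real 64000 controllers implemented this in hardware):

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { MAP_RAM, MAP_ROM, MAP_PROTECTED } map_kind;

typedef struct {
    uint32_t base, limit;   /* inclusive address range */
    map_kind kind;
} map_entry;

/* Returns true if the access is legal; a false result is where the
 * controller would trigger an action such as a program breakpoint. */
bool check_access(const map_entry *map, int entries,
                  uint32_t addr, bool is_write)
{
    for (int i = 0; i < entries; i++) {
        if (addr < map[i].base || addr > map[i].limit)
            continue;
        if (map[i].kind == MAP_PROTECTED)
            return false;   /* any access is flagged */
        if (map[i].kind == MAP_ROM && is_write)
            return false;   /* write to ROM is flagged */
        return true;
    }
    return false;           /* unmapped address */
}
```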
Memory cards of various capacities of static RAM were offered. The 64152B, 53B and 54B cards provided 32, 16 and 8 KB, respectively, and the 64161A, 62A and 63A cards provided 128, 64 and 32 KB, respectively. They could each be configured for 8-bit or 16-bit data buses. Memory cards were connected together and to the memory controller through an emulation memory bus. Accesses to emulation memory by either the host or user systems was through the controller card.
Once the emulated processor and memory took the place of the processor and memory in the user system, the designer could write and compile program code, load it into emulation memory and start the user system, running the program in the emulated processor.
Analysis
A 64000 system could act as a logic analyzer to measure digital signals within the user system. Two types of logic analysis cards were offered, "internal" analyzers which measured signals directly off the emulation bus within the mainframe, and "external" analyzers which used separate probes to physically connect to elements of the user system. Similar to the processor and memory emulation products, analysis functions were often divided into controller cards and data acquisition cards. Some of the emulation processor controller cards offered internal analysis functions without separate hardware.
Logic analysis hardware was also divided into state analyzers and timing analyzers. The former measured signals in synchronization with a system clock and could, for example, record the states of the address, data and control buses in the user system at each CPU cycle. This data was normally presented as a trace, showing the value on each bus for each CPU cycle. For many microprocessors, an "inverse assembler" was available that would convert values measured on the data bus to opcodes for the user processor.
The second form of logic analysis was timing analysis. A timing, or asynchronous logic, analyzer measured digital signals at specified time intervals, not necessarily synchronized to the user system clock. Such analysis could be used to find glitches or verify digital signals had proper timings.
In addition to these logic analyzer functions, "software analysis" options were available. These tools acted as what are now commonly called debuggers and profilers.
A list of analysis products is:
Similar to the way the emulation hardware used "pods" with interface hardware tailored to each microprocessor, the analysis hardware used preprocessors to act as an interface to the microprocessor. Aside from the 64304A Emulation Bus Preprocessor (manual at Bitsavers), each of the CPU-specific preprocessor interfaces was a circuit board that fit within the 64650A General Purpose Preprocessor module (manual at Bitsavers). That, in turn, connected to the logic analyzer card cables.
PROM Programmer
The 64100A has a space to the right of the keyboard that can accept a PROM programmer module. A common PROM programmer control card, the 64500A (manual at Bitsavers), was installed in the card cage. At least 11 programmer modules, numbered from 64502A to 64520A were available for a variety of PROM and programmable microcontroller chips from different manufacturers.
MAME Emulator
An emulation of the 64100A workstation is part of the MAME (Multiple Arcade Machine Emulator) system, under Manufacturer HP and titled "HP 64000". The emulator is open source and the source code is available.
References
External links
PDF documentation for HP 64000 from Bitsavers
first 64000 system from HP Computer Museum
64000 |
47978415 | https://en.wikipedia.org/wiki/HashiCorp | HashiCorp | HashiCorp is a software company with a freemium business model based in San Francisco, California. HashiCorp provides open-source tools and commercial products that enable developers, operators and security professionals to provision, secure, run and connect cloud-computing infrastructure. It was founded in 2012 by Mitchell Hashimoto and Armon Dadgar.
HashiCorp is headquartered in San Francisco, but their employees are distributed across the United States, Canada, Australia, India, and Europe.
HashiCorp offers both open-source and proprietary products.
History
On 29 November 2021, HashiCorp set terms for its IPO at 15.3 million shares at $68-$72 at a valuation of $13 billion.
HashiCorp considers its 1,500 workers to be remote workers first, rather than requiring them to come into an office on a full-time basis.
Open-source tools
HashiCorp provides a suite of open-source tools intended to support the development and deployment of large-scale service-oriented software installations. Each tool is aimed at specific stages in the life cycle of a software application, with a focus on automation. Many have a plugin-oriented architecture in order to provide integration with third-party technologies and services. Additional proprietary features for some of these tools are offered commercially and are aimed at enterprise customers.
The main product line consists of the following tools:
Vagrant (first released in 2010): supports the building and maintenance of reproducible software-development environments via virtualization technology.
Packer (first released in June 2013): a tool for building virtual-machine images for later deployment.
Terraform (first released in July 2014): infrastructure as code software which enables provisioning and adapting virtual infrastructure across all major cloud providers.
Consul (first released in April 2014): provides service mesh, DNS-based service discovery, distributed KV storage, RPC, and event propagation. The underlying event, membership, and failure-detection mechanisms are provided by Serf, an open-source library also published by HashiCorp.
Vault (first released in April 2015): provides secrets management, identity-based access, encrypting application data and auditing of secrets for applications, systems, and users.
Nomad (released in September 2015): supports scheduling and deployment of tasks across worker nodes in a cluster.
Serf (first released in 2013): a decentralized cluster membership, failure detection, and orchestration software product.
Sentinel (first released in 2017): a policy as code framework for HashiCorp products.
Boundary (first released in October 2020): provides secure remote access to systems based on trusted identity. Alternatives to Hashicorp Boundary include Teleport Community Edition, Bastion Hosts, and strongDM.
Waypoint (first released in October 2020): provides a modern workflow to build, deploy, and release across platforms.
Security issue
Around April 2021, a supply chain attack using the code-auditing tool Codecov allowed hackers limited access to HashiCorp's customers' networks. As a result, private credentials were leaked. HashiCorp revoked a private signing key and asked its customers to use a new rotated key.
References
External links
Software companies based in the San Francisco Bay Area
Companies based in San Francisco
Free software companies
Software companies established in 2012
Software companies of the United States
American companies established in 2012
2012 establishments in California
2021 initial public offerings
Companies listed on the Nasdaq |
21893088 | https://en.wikipedia.org/wiki/Moviestorm | Moviestorm | Moviestorm is a real-time 3D animation app published by Moviestorm Ltd. The software is available to and used by people of all age groups and appeals to those with a diverse range of backgrounds and interests, from amateur and professional film makers, through to businesses and education, as well as people just looking to simply tell stories or create messages to share using video. Moviestorm enables the user to create animated movies, using machinima technology. It takes the user from initial concept to finished, distributed movies. Sets and characters can be created and customised, and scenes can be filmed using multiple cameras.
Moviestorm is being used predominantly in education by students of film and media studies as a means to develop their skills and expand their portfolio, as well as a collaborative cross-curricular creative tool in education sectors from elementary to high school.
The software's website features a Web 2.0 social media service, which includes a video hosting service, and an online community where movie-makers can talk about their movies, find collaborators, and organise online events. Moviestorm also makes use of Twitter, YouTube and Facebook to release the latest news on the software and to interact with both current and potential users.
History
Founded as a startup in Cambridge by Machinima experts Matt Kelland and Dave Lloyd, Moviestorm got three investment rounds of £400k in 2005, £900k in 2007 and $3M in 2008.
Moviestorm has been generally available since August 2008 and over 160,000 people have now registered to use it.
The interface has undergone fairly radical change since its first incarnation. Many user interface improvements were implemented with the release of version 1.3 in June 2010, and version 1.4, released in August 2010, contained some long-awaited upgrades, especially in the Dressing room, which allows much more control over facial morphing of avatars. This release also featured a completely new lighting system which more closely resembles the 3-light systems used in real live-action filming. Version 1.5 was released on 8 December 2010 and featured many upgrades to the program, including an auto-save feature, a new video export format, and a "terrain editor", with which users can edit the default green mountains surrounding the set.
More recently, Moviestorm has released an iPad app that provides users with a simplified video creation solution, an approach to the genre suited to cross-curricular teaching and learning, converting PowerPoint presentations into self-presenting slideshows, and fun video messaging.
Moviestorm Ltd interacts with customers in its active online forums.
Business model
Users new to the program can try it for 14 days for free by registering at the website. Thereafter users can purchase the application outright with different content bundling options. Moviestorm Points can also be bought to acquire additional content from the online marketplace, or gifted to other users in return for advice or assistance or in payment for a user-created modification. Subscribers have access to the Modders Workshop, a tool which allows them to create their own 'props', and a wizard allows the direct import of models from Google SketchUp version 6. As of 2011, users can create their own custom "gestures" with the release of the Moviestorm skeletons.
Subscribers can also increase their points at any time by buying more points from the online marketplace. A subscription can be discontinued at any time, and resumed later with no penalty.
Examples of use
Children's animation
Blockhouse TV, based in Norwich, UK, utilised Moviestorm in their animated series for children, Jack and Holly. The first season, Jack and Holly's Christmas Countdown, was released in 2010. The second season, Jack and Holly's Cosmic Stories, was released in 2011.
Film and media teaching
Moviestorm has been used in film schools and media courses in many countries. Wan Smolbag Theatre in Vanuatu was one of the first to adopt it in 2008, under tutor John Herd. Students trained on Moviestorm have gone on to successful careers with the island's TV network. It is in use at many different educational sectors, from elementary schools to sixth form colleges and universities.
In addition to film teaching, Moviestorm has been used in educational contexts for a variety of other media, including computer games and music.
Other education
Some teachers have found Moviestorm useful as a cross-curricular tool for collaborative creative expression. Paul Carr at Sakuragaoka Junior and Senior High School, Japan uses it to help teach English to Japanese students. One of his techniques is to create silent videos for which the students then have to compose dialog. Other teachers have found it useful for helping autistic students to make presentations, since they can prepare their presentation as a video instead of having to stand up in front of a class.
Business
Commercial companies including Oracle Corporation and Fujitsu have used Moviestorm to create low-cost training videos. Other companies have used it to create cheap advertising content that can be produced in-house. Think Industries in Eastern England is an advertising and marketing company that uses Moviestorm to pitch its ideas to prospective clients. "Pitching is key, and you have to stand out," said owner Philip Morley in an interview in 2011. "Video is just so much more powerful than text. People will watch even if they don’t read documents. It’s now cost-effective to create custom videos for every pitch. I can re-use a lot of the material I already have, and just tweak it as I need. I can more or less change things in real time if necessary."
Music video
Moviestorm has been used as a low-cost alternative for bands wanting to create animated videos. The first commercial band to do so was Vice Romania in November 2008. Their video for This Is It was created by Lucinda McNary of Two Moon Graphics in Kansas. Moviestorm footage was combined with a character filmed in DAZ3D and composited using greenscreen.
In 2009, Priscilla Angelique started using Moviestorm to create videos for several tracks on her London-based label A Priscilla Thing. "Music videos are a very expensive and time consuming process but Moviestorm allows me to achieve shots and effects that even with a modest budget would still be very out of reach," she said in an interview in late 2010.
In November 2011, Chicago chiptune band I Fight Dragons ran a contest challenging Moviestorm users to make the official video for their single, Working. (Moviestorm user and then-film student Kera "162" Hildebrandt would win the contest with her entry.)
Previsualization and film pitching
Moviestorm's rapid production has led to it being used by live action filmmakers and scriptwriters for pre-production. Since the footage used in previsualization is not intended to be included in the final product, the quality of the graphics is not a critical consideration. Independent filmmaker D.L. Watson in Oregon used it to create a complete animated storyboard on his short film The Letter (2009). London-based scriptwriter Dean P. Wells uses it to test out movie ideas and then creates trailers based on his scripts.
See also
iClone
Muvizu
Xtranormal
Shark 3D
References
External links
Computeractive article
Moviestorm Ltd's page about the software
Business Weekly article
Review in Microfilmmaker Magazine
Article on Softpedia website
Fallopian On-Line Magazine article on Moviestorm's use by Wan Smolbag Theatre's Youth Centre in Vanuatu
Chipchick.com Moviestorm Lets You Make 3D Animated Movies on your Home Computer
Software companies established in 2008
2008 software
3D animation software
Machinima |
56570997 | https://en.wikipedia.org/wiki/Pierre%20Verbaeten | Pierre Verbaeten | Petrus Verbaeten, born April 23 and usually called Pierre, is a Belgian professor emeritus in the Computer Science Department at KU Leuven, with more than 226 publications to his name. He managed the .be internet domain from 1989 to 2000.
Biography
Verbaeten studied electronics at Katholieke Universiteit Leuven and graduated in 1969. The computer science programme there was founded in 1971. His first contact with computer science came during his military service. He then studied Applied Mathematics, which included a few computer science subjects.
Functions
Former chairman of the Department of Computer Science at the Katholieke Universiteit Leuven
Chairman of the Board of Directors of EURid since 2004
Member in the DistriNet research group at KU Leuven
Administrator of the top-level domain .be between 1989 and 2000
Member of DNS.be since 2000
Professor at KU Leuven since 1982
References
External links
Pierre Verbaeten on be.linkedin.com
Interview Beyond the beginning of .be
Belgian computer scientists
KU Leuven faculty
KU Leuven alumni
Living people
Year of birth missing (living people) |
11901366 | https://en.wikipedia.org/wiki/Kdenlive | Kdenlive | Kdenlive (; acronym for KDE Non-Linear Video Editor) is a free and open-source video editing software based on the MLT Framework, KDE and Qt. The project was started by Jason Wood in 2002, and is now maintained by a small team of developers.
With the release of Kdenlive 15.04.0 in 2015 it became part of the official KDE Projects.
Kdenlive packages are freely available for Linux, FreeBSD, and Microsoft Windows. As a whole it is distributed under the GPL-3.0-or-later license, while parts of the source code are available under other licenses, such as GPL-2.0-or-later.
History
The project was initially started by Jason Wood in 2002. The development of Kdenlive moved from the K Desktop Environment 3 version (which wasn't originally based on MLT) to KDE Platform 4, with an almost complete rewrite. This was completed with Kdenlive 0.7, released on 12 November 2008. Kdenlive 0.9.10, released on 1 October 2014, was the last KDE 4 release.
Kdenlive started to plan a move into the KDE Projects and its infrastructure in 2014. The port to KDE Frameworks 5 was finished with the release of 2015.04.0 as part of KDE Applications 5. The move to KDE is ongoing.
In early 2017 the development team started working on a refactoring of the program, and by June 2017 a first preview was available. By December 2017 the refactoring became the main focus of the development team with the release of the first usable preview. Release of the refactored version was originally planned for August 2018 in the KDE Applications 18.08 release. The refactored version of Kdenlive was released on 22 April 2019 in the KDE Applications 19.04 release.
Features
KDE's Kdenlive makes use of MLT, Frei0r effects, SoX and LADSPA libraries. Kdenlive supports all of the formats supported by FFmpeg or libav (such as QuickTime, AVI, WMV, MPEG, and Flash Video, among others), and also supports 4:3 and 16:9 aspect ratios for PAL, NTSC and various HD standards, including HDV and AVCHD. Video can also be exported to DV devices, or written to a DVD with chapters and a simple menu.
Multi-track editing with a timeline and support for an unlimited number of video and audio tracks.
A built-in title editor and tools to create, move, crop and delete video clips, audio clips, text clips and image clips.
Ability to add custom effects and transitions.
A wide range of effects and transitions. Audio signal processing capabilities include normalization, phase and pitch shifting, limiting, volume adjustment, reverb and equalization filters as well as others. Visual effects include options for masking, blue-screen, distortions, rotations, colour tools, blurring, obscuring and others.
Configurable keyboard shortcuts and interface layouts.
Rendering is done using a separate non-blocking process so it can be stopped, paused and restarted.
Kdenlive also provides a script called the Kdenlive Builder Wizard (KBW) that compiles the latest developer version of the software and its main dependencies from source, to allow users to test new features and report problems on the bug tracker.
Project files are stored in XML format.
An archiving feature allows exporting a project among all assets into a single folder or compressed archive.
Built-in audio mixer
See also
List of video editing software
Comparison of video editing software
References
External links
Free video software
KDE Applications
Software that uses FFmpeg
Video editing software
Video software that uses Qt
Video editing software for Linux
Video editing software for Windows |
18589829 | https://en.wikipedia.org/wiki/Thacker | Thacker | Thacker may refer to:
People
Angela Thacker (born 1964), American long jumper
Blaine Thacker (1941–2020), Member of Canadian Parliament
Brian Thacker (born 1945), American army officer; recipient of the Medal of Honor for action during the Vietnam war
Cathy Gillen Thacker (contemporary), American author of romance novels
Charles M. Thacker (1866–1918), Justice of the Oklahoma Supreme Court
Charles P. Thacker (1943–2017), American computer pioneer
David Thacker (born 1950), English award-winning theatre director
D. D. Thacker (1884–1961), Indian coal miner and philanthropist
Edwin Thacker (1913–1974), South African athlete
Eugene Thacker, American philosopher
Frank Thacker (1876–1949), English footballer
Gail Thacker (contemporary), avant-garde photographer and theater manager
Harry Thacker, (born 1994), English rugby union footballer
Henry Thacker (1870–1939), New Zealand physician and politician; member of Parliament 1914–22
Herbert Cyril Thacker (1870–1953), Canadian army general; Chief of the General Staff 1927–29
Jeremy Thacker, 18th-century writer and watchmaker
Julie Thacker (contemporary), American television writer
Lawrence Thacker, rugby league footballer of the 1930s and 1940s for England, and Hull
Mary Rose Thacker (1922–1983), former Canadian singles figure skater
Moe Thacker (1934–1997), American professional baseball player
Paul D. Thacker (contemporary), American journalist in medical topics
Ralph Thacker (1880–after 1915), American college football coach
Ransley Thacker (1891–1965), British lawyer and judge
Robert E. Thacker (1918–2020), American test pilot and model aircraft enthusiast
Stephanie Thacker (born 1965), United States Circuit Judge
Tab Thacker (1962–2007), American collegiate wrestler and actor
Thomas Thacker (died 1548), steward of Thomas Cromwell, Repton Priory
Tom Thacker (basketball) (born 1939), American professional basketball player
Tom Thacker (musician) (born 1974) Canadian singer and lead guitarist
Places
Thacker, West Virginia, U.S.
Thacker Creek
Other uses
Thacker Shield, a rugby league football trophy
Kevin Thacker, a fictional character in the 2008 film The Coverup, previously known as The Thacker Case
William Thacker, a fictional character in the 1999 film Notting Hill
Thackers, a fictional farm in the novel A Traveller in Time by Alison Uttley
See also
Thatcher (disambiguation)
Occupational surnames |
12640727 | https://en.wikipedia.org/wiki/Underground%20%28Dreyfus%20book%29 | Underground (Dreyfus book) | Underground: Tales of Hacking, Madness and Obsession on the Electronic Frontier is a 1997 book by Suelette Dreyfus, researched by Julian Assange. It describes the exploits of a group of Australian, American, and British black hat hackers during the late 1980s and early 1990s, among them Assange himself. The hackers it profiles include:
Craig Bowen (nickname), administrator of two important Australian BBSes (Pacific Island and Zen)
Par, a.k.a. The Parmaster, an American hacker who avoided capture by the United States Secret Service from July 1989 to November 1991
Phoenix, Electron and Nom, who were convicted in the first major Australian trial for computer crimes
Pad and Gandalf, the British founders of the notorious 8lgm group
the Australian Mendax (Julian Assange) and Prime Suspect, who managed to penetrate the DDN, NIC and the Nortel internal network, and the phreaker Trax. Together, the three were known as the "International Subversives".
Anthrax, another Australian hacker and phreaker
The book also mentions other hackers who had contacts with the protagonists, among them Erik Bloodaxe of the Legion of Doom and Corrupt of the Masters of Deception.
The first chapter of Underground relates the diffusion and reactions of the computer security community to the WANK worm that attacked DEC VMS computers over the DECnet in 1989 and was purportedly coded by a Melbourne hacker.
The book has sold 10,000 copies.
The author made the electronic edition of the book freely available in 2001; when this was announced on Slashdot, the server housing the book crashed due to the demand. It reached 400,000 downloads within two years.
The 2002 documentary In the Realm of the Hackers, directed by Kevin Anderson and centered on Phoenix and Electron, was inspired by this book.
See also
List of computer books
References
External links
Book Website
Online version of the book
Computer security books
1997 non-fiction books
Hacker culture
Julian Assange
Books about computer hacking
Works about computer hacking |
73956 | https://en.wikipedia.org/wiki/Adobe%20FrameMaker | Adobe FrameMaker | Adobe FrameMaker is a document processor designed for writing and editing large or complex documents, including structured documents. It was originally developed by Frame Technology Corporation, which was bought by Adobe.
Overview
FrameMaker became an Adobe product in October 1995 when Adobe purchased Frame Technology Corp. Adobe added SGML support, which eventually morphed into today's XML support. In April 2004, Adobe stopped supporting FrameMaker for the Macintosh.
This reinvigorated rumors surfacing in 2001 that product development and support for FrameMaker were being wound down. Adobe denied these rumors in 2001, later releasing FrameMaker 8 at the end of July 2007, FrameMaker 9 in 2009, FrameMaker 10 in 2011, FrameMaker 11 in 2012, FrameMaker 12 in 2014, FrameMaker (2015 release) in June 2015, FrameMaker 2017 in January 2017, FrameMaker 2019 in August 2018, and FrameMaker 2020 in 2020.
FrameMaker has two ways of approaching documents: structured and unstructured.
Structured FrameMaker is used to achieve consistency in documentation within industries such as aerospace, where several models of the same complex product exist, or pharmaceuticals, where translation and standardization are important requirements in communications about products. Structured FrameMaker uses SGML and XML concepts. The author works with an EDD (Element Definition Document), which is a FrameMaker-specific DTD (Document Type Definition). The EDD defines the structure of a document where meaningful units are designated as elements nested in each other depending on their relationships, and where the formatting of these elements is based on their contexts. Attributes or Metadata can be added to these elements and used for single source publishing or for filtering elements during the output processes (such as publishing for print or for Web-based display). The author can view the conditions and contexts in a tree-like structure derived from the grammar (as specified by the DTD) or as formatted in a typical final output form.
Unstructured FrameMaker uses tagged paragraphs without any imposed logical structure, except that expressed by the author’s concept, topic organization, and the formatting supplied by paragraph tags.
When a user opens a structured file in unstructured FrameMaker, the structure is lost.
MIF
MIF (Maker Interchange Format) is a markup language that functions as a companion to FrameMaker. The purpose of MIF is to represent FrameMaker documents in a relatively simple ASCII-based format, which can be produced or understood by other software systems and also by humans. Any document that can be created interactively in FrameMaker can also be represented, exactly and completely, in MIF (the reverse, however, is not true: a few FrameMaker features are available only through MIF). All versions of FrameMaker can export documents in MIF, and can also read MIF documents, including documents created by an earlier version or by another program.
History
While working on his master's degree in astrophysics at Columbia University, Charles "Nick" Corfield, a mathematician alumnus of the University of Cambridge, decided to write a WYSIWYG document editor on a Sun-2 workstation. He got the idea from his college roommate at Columbia, Ben Meiry, who went to work at Sun Microsystems as a technical consultant and writer, and saw that there was a market for a powerful and flexible desktop publishing (DTP) product for the professional market.
The only substantial DTP product at the time of FrameMaker's conception was Interleaf, which also ran on Sun workstations in 1981. Meiry saw an opportunity for a product to compete with Interleaf, enlisted Corfield to program it, and assisted him in acquiring the hardware, software, and technical connections to get him going in his Columbia University dorm room (where Corfield was still finishing his degree).
Corfield programmed his algorithms quickly. After only a few months, Corfield had completed a functional prototype of FrameMaker. The prototype caught the eyes of salesmen at the fledgling Sun Microsystems, which lacked commercial applications to showcase the graphics capabilities of their workstations. They got permission from Corfield to use the prototype as demoware for their computers, and hence, the primitive FrameMaker received plenty of exposure in the Unix workstation arena.
Steve Kirsch saw the demo and realized the potential of the product. Kirsch used the money he earned from Mouse Systems to fund a startup company, Frame Technology Corp., to commercialize the software.
Corfield chose to sue Meiry for release of rights to the software so they could more easily obtain additional investment capital with Kirsch. Meiry had little means to fight a lengthy and expensive lawsuit with Corfield and his new business partners, and he chose to relinquish his rights to FrameMaker and move on.
Originally written for SunOS (a variant of UNIX) on Sun machines, FrameMaker was a popular technical writing tool, and the company was profitable early on. Because of the flourishing desktop publishing market on the Apple Macintosh, the software was ported to the Mac as its second platform.
In the early 1990s, a wave of UNIX workstation vendors—Apollo, Data General, MIPS, Motorola and Sony—provided funding to Frame Technology for an OEM version for their platforms.
At the height of its success, FrameMaker ran on more than thirteen UNIX platforms, including NeXT Computer's NeXTSTEP, Dell's System V Release 4 UNIX and IBM's AIX operating systems.
Sun Microsystems and AT&T were promoting the OPEN LOOK GUI standard to win over Motif, so Sun contracted Frame Technology to implement a version of FrameMaker on their PostScript-based NeWS windowing system. The NeWS version of FrameMaker was successfully released to those customers adopting the OPEN LOOK standards.
At this point, FrameMaker was considered an extraordinary product for its day, not only enabling authors to produce highly structured documents with relative ease, but also giving users a great deal of typographical control in a reasonably intuitive and totally WYSIWYG way. The output documents could be of very high typographical quality.
Frame Technology later ported FrameMaker to Microsoft Windows, but the company lost direction soon after its release. Up to this point, FrameMaker had been targeting a professional market for highly technical publications, such as the maintenance manuals for the Boeing 777 project, and licensed each copy for $2,500. But the Windows version brought the product to the $500 price range, which cannibalized its own non-Windows customer base.
The company's attempt to sell sophisticated technical publishing software to the home DTP market was a disaster. A tool designed for 1,000-page manuals was too cumbersome and difficult for an average home user typing a one-page letter. And despite some initially enthusiastic users, FrameMaker never really took off in the academic market, because of the company's unwillingness to incorporate various functions (such as support for endnotes or long footnotes split across pages), or to improve the equation editor.
Sales plummeted and brought the company to the verge of bankruptcy. After several rounds of layoffs, the company was stripped to the bare bones.
Adobe Systems acquired the product and returned the focus to the professional market, releasing a new version under the name Adobe FrameMaker 5.1 in 1996. Today, Adobe FrameMaker is still a widely used publication tool for technical writers, although no version has been released for the Mac OS X operating system, limiting use of the product. The decision to cancel the Macintosh version caused considerable friction between Adobe and Mac users, including Apple itself, which relied on FrameMaker for creating documentation. As late as 2008, Apple manuals for OS X Leopard and the iPhone were still being developed on FrameMaker 7 in Classic mode; Apple has since switched to using InDesign.
FrameMaker versions 5.x through 7.2 (from mid-1995 to 2005) did not contain updates to major parts of the program (including its general user interface, table editing, and illustration editing), concentrating instead on bug fixes and the integration of XML-oriented features (previously part of the FrameMaker+SGML premium product). FrameMaker did not feature multiple undo until version 7.2 (its 2005 release).
FrameMaker 8 (2007) introduced Unicode, Flash, 3D, and built-in DITA support. Platform support included Windows (2000, XP, and Vista) and Sun Solaris (8, 9, and 10).
FrameMaker 9 (2009) introduced a redesigned user interface and several enhancements, including: full support for DITA, support for more media types, better PDF output, and enhanced WebDAV-based CMS integration. Platform support for Sun Solaris and Windows 2000 was dropped, leaving Windows XP and Windows Vista as the sole remaining platforms.
FrameMaker 10 (2011) again refined the user interface and introduced several changes, including: integration with content management systems via EMC Documentum 6.5 with Service Pack 1 and Microsoft SharePoint Server 2007 with Service Pack 2.
Other FrameMaker tools
FrameMaker Publishing Server is a server version of FrameMaker for the automated creation of multi-use content. Its web interface enables users to routinely aggregate differing information sources into detailed presentations for multiple environments and numerous devices.
Alternatives and competition
There were several major competitors in the technical publishing market, such as Arbortext, Interleaf, and Corel Ventura. Many academic users now use LaTeX, because modern editors have made that system increasingly user-friendly, and LyX allows LaTeX to be generated with little or no knowledge of LaTeX. Several formats, including DocBook XML, target authors of technical documents about computer hardware and software. Other alternatives to FrameMaker for technical writing include Help authoring tools and XML editors, while Scribus is an open-source desktop publishing alternative.
See also
Comparison of word processors
References
External links
Adobe FrameMaker Official Page
Blog post about FrameMaker (2019 release)
Element Descriptions in Structured FrameMaker 10 Using Element Descriptions to cut down writers’ training costs and efforts.
FrameUsers.com FrameMaker users' largest online reference site and community
History of FrameMaker
Adobe FrameMaker Online Forum
1986 software
FrameMaker
Desktop publishing software
IRIX software
NeXTSTEP software
Solaris software
Technical communication tools
Text editors
Typesetting software
XML
XML editors
XML software |
632275 | https://en.wikipedia.org/wiki/Tapwave%20Zodiac | Tapwave Zodiac | The Tapwave Zodiac is a mobile entertainment console. Tapwave announced the system in May 2003 and began shipping in October of that same year. The Zodiac was designed to be a high-performance mobile entertainment system centered on video games, music, photos, and video for 18- to 34-year-old gamers and technology enthusiasts. By running an enhanced version of the Palm Operating System (5.2T), Zodiac also provided access to Palm's personal information management software and many other applications from the Palm developer community. The company was based in Mountain View, California.
The Zodiac console was initially available in two models, Zodiac 1 (32MB) for US$299, and Zodiac 2 (128MB) for US$399. Some of the game titles for the product included Tony Hawk's Pro Skater 4 (Activision); Mototrax (Activision); Spy Hunter (Midway); Madden NFL 2005 (EA/MDM); Doom II (id Software); Golden Axe III and Altered Beast (Sega); Warfare Incorporated (Handmark); and Duke Nukem Mobile (3D Realms/MachineWorks).
Due to insufficient funding and strong competitive pressure from Sony's PlayStation Portable (PSP) (which was pre-announced at E3 on May 16, 2003, and shipped in North America on March 24, 2005) and Nintendo's DS (released on November 21, 2004), Tapwave sold its assets to an undisclosed multibillion-dollar corporation in Asia in July 2005.
The Zodiac console garnered strong product reviews and received many industry awards, including Popular Science's Best of What's New Award, Stuff magazine's Top 10 Gadgets of the Year, Wired magazine's Fetish Award, CNET's Editor's Choice Award, PC World's 2004 Next Gear Innovations Award, PC Magazine's first-place Last Gadget Standing at CES, Handheld Computing magazine's Most Innovative PDA of 2003, Time magazine's Best Gear of 2003, and Business Week's Best Products of 2003.
History of Tapwave
May 2001: Tapwave was founded by former Palm executives
May 2002: Tapwave closed initial Series-A funding
May 2002: Tapwave signed Palm OS licensing agreement with PalmSource
May 2003: Company was formally launched at Palm Developers Conference & E3
September 2003: Zodiac entertainment console launched at DEMO conference
October 2003: Zodiac console began shipping to customers directly from livescribe.com
November 2003: Tapwave announced that “over 1200 game developers” had signed up for the Tapwave developer program
February 2004: PalmGear and Tapwave announced a partnership to launch an online store featuring the best applications, game titles and ebooks available on the Palm OS platform
April 2004: Synchronization between Zodiac and Mac OS X desktops enabled by MarkSpace
June 2004: Zodiac launched into United States retail distribution with CompUSA
October 2004: Zodiac launched in United Kingdom and sold through PC World, Dixons, and Currys
October 2004: Zodiac launched in Singapore and distributed by ECS
November 2004: Zodiac launched in South Korea and co-branded with Sonokong (OEM)
December 2004: Audible announced audio book support for the Zodiac
December 2004: Tapwave announced a Wi-Fi SD card for the Zodiac with an “enhanced mail application and web browser”
January 2005: Tapwave and Virgin Digital announced strategic alliance for audio download and subscription services
July 2005: Tapwave discontinued the sale of the Zodiac mobile entertainment console and sold substantially all of its assets to an undisclosed multibillion-dollar corporation in Asia and wound down operations
Primary features
Music, images, and video
An MP3 music player is included in the system's applications, and allows the creation of custom playlists by dragging and dropping files. MP3 music files can be played from either SD slot or from the internal memory of the device. MP3 files can also be used as alarms, along with conventional Palm OS alarms.
Photos (JPEG or PNG format) could be downloaded to the device using the Palm Desktop software or loaded onto SD cards, and could be shared and made into a slideshow (with background music) on the device.
The bundled video player on the device, Kinoma, would only play videos in a proprietary format, converted using the Kinoma Producer software (which supported conversion of MPEG1, MPEG4, QuickTime, AVI and DivX). The software, however, was limited in its conversion abilities, enticing users to pay for the full version. It has been suggested that this difficulty in converting video for the device diminished the Zodiac's success. Several aftermarket DivX and XviD players have been developed (such as the TCPMP), and, at the time of bankruptcy, Tapwave was working on an update to supply MPEG-4 hardware decoding.
Device design
Due to the metal construction of the Zodiac, the device was seen as more solid than other PDAs. However, on some models the adhesive on the shoulder buttons failed, and occasionally the screen was scratched by the screen cover when grit entered. Furthermore, due to the insecure clip holding the stylus, it could be knocked loose and potentially lost. Some alternative cases solved this problem with their own stylus holder.
Compatibility
The Zodiac is a Palm OS 5-compatible device, and most software compatible with Palm OS 5 runs without issue. In particular, most Palm OS 5-compatible games play on the Zodiac. Tapwave also provided proprietary APIs to allow developers to take advantage of the Zodiac's graphics and sound hardware. Many freeware and shareware games and emulators are therefore available. For example, there are versions of Doom, Quake, Hexen, Hexen II, and Heretic, as well as versions of emulators such as UAE, ScummVM, and LJZ/LJP, a multi-system emulator. There have also been attempts to emulate PlayStation games on the Zodiac, the most successful emulator being pPSX; it is, however, nowhere near completion, and many games are not yet playable.
Battery
The device has a total battery life of about 3 hours when using video, backlight and screen, and CPU-intensive tasks; while running as a dedicated audio player it is closer to 6 hours. The original battery was a 1500 mAh Li-Ion; third-party replacements with 2000 mAh capacity are still available from some manufacturers.
Software
The Zodiac used a modified version of the Palm OS, designated version 5.2T. The main navigation menus consisted of 8 radially-arranged choices selected using either the touchscreen or the thumbstick. It also came with the Palm OS Productivity Suite (containing a calendar, to-do list, etc.), an eBook reader, the Wordsmith word processor and the powerOne graphing application. It came bundled with two games, Acid Solitaire (by Red Mercury) and Stunt Car Extreme (by Vasara Games).
Models
The Zodiac console was initially available in two models, Zodiac 1 (32MB), and Zodiac 2 (128MB). The Zodiac 2 was $100 more expensive than the original Zodiac.
Games
Games which utilized some or all of the Zodiac's hardware/software are incompatible with standard Palm OS devices. This exclusivity refers only to Palm OS handhelds, not to platforms outside of Palm OS (e.g., Doom II is also available for PC, but the Zodiac version listed here won't run on standard Palm OS handhelds). This list also excludes standard Palm OS games which are also available for Zodiac handhelds and were either identical or slightly improved on the Zodiac, called "Zodiac tuned" (e.g. a game available for standard Palm OS which only gains vibration support and the shoulder buttons as extra usable buttons when played on the Zodiac).
Some of the games were never released due to the discontinuation of the Zodiac in July 2005. However, the testing builds of some of these games were leaked and are playable.
Zodiac exclusive titles
Acedior
Altered Beast
Animated Dudes
Anotherball
Atari Retro
Avalanche
Bike or Die
Billiards
Bubble Shooter 2
Colony
Crossword Puzzles
Daedalus 3D – The Labyrinth
Dreamway
Firefly: a Pac-Man clone
FireHammer
Fish Tycoon
Frutakia
Galactic Realms
Gloop Zero by AeonFlame (formerly shareware, now freeware): a puzzle game where you direct the flow of liquid slime material to its goal by drawing platform lines and using other tools.
Golden Axe
Interstellar Flames
Jet Ducks
Kickoo's Breakout
Legacy
MegaBowling
MicroQuad
Orbz (was shareware, but is now freeware as of September 2006)
Paintball
Pocket Mini Golf
Pocket Mini Golf eXtra
RifleSLUGS-W: Wild Web Wars
Stunt Car Extreme: a 3D first- or third-person racing game; comes with the Zodiac CD.
Table Tennis 3D
The Green Myste
Tots ‘n’ Togs
Xploids
ZapEm
Zodiac exclusive titles, also available on SD card
Doom II
Duke Nukem Mobile
GTS Racing Challenge
Spy Hunter
Tiger Team: Apache vs Hind
Tony Hawk's Pro Skater 4
Z Pak: Adventure
Z Pak: Fun
"Zodiac tuned" titles
Madden NFL 2005 (CD-ROM; uploaded from PC)
Warfare Incorporated
Unreleased but leaked games
Street Hoops, tech demo
MTX: Mototrax, complete
Hockey Rage 2004, complete, but crashes on exit
Neverwinter Nights, tech demo
Terminator 3: complete game, with a few sound elements missing.
Tomb Raider, complete versions of the original first and second games
Ports
Several homebrew (freeware) games and emulators were released as ports.
ZDoomZ, a ZDoom port to Palm/Zodiac
ZHeretic, a Heretic port to Palm/Zodiac
ZHexen, a Hexen port to Palm/Zodiac
ZHexen2, Hexen II port to Palm/Zodiac
LJP, a multisystem emulator for Palm/Zodiac
LJZ, the old version of LJP, discontinued.
pPSX, emulates PSX games at limited speed without sound; low compatibility, incomplete.
ReverZi, an Othello/Reversi clone for Zodiac
ZodMAME, a MAME port to the Zodiac
ZodNEO, a NeoGeo port to the Zodiac
ZodSCUMM, ScummVM port to the Zodiac
ZSpectrum, a ZX Spectrum port to the Zodiac
REminiscence, a Zodiac port of Flashback
Thruster, a fast-paced cave flyer.
Noiz2sa
Orbital Sniper, look down from high above and shoot hostiles in a city grid layout while protecting innocent lives. (Freeware)
Zodtris, Zodiac only version of Tetris. (Freeware)
Zap 'Em, a close conversion of Zoop for PC (Freeware)
ZoT
Zyrian
Another World
ZodTTD, an OpenTTD port to the Zodiac
TCPMP, a media player that could play back many codecs that the Zodiac did not originally support
Hardware specifications
Two versions of the Zodiac are available, differing only in the amount of memory and case colour.
CPU: Motorola i.MX1 ARM9 processor (200 MHz)
Memory: the Zodiac 1 had 32 MB and the Zodiac 2 had 128 MB; both have 10 MB dedicated to system dynamic RAM
Graphic Accelerator: ATI Imageon W4200 2D graphics accelerator (with 8 MB dedicated SDRAM)
Controls: Analog controller (or joystick) with 360 degrees of motion, built-in triggers and action button array similar to other gaming consoles.
Display: 3.8 inch transflective 480×320 (half VGA), 16-bit colour backlit display (65,536 colours)
Sound: Yamaha sound and stereo speakers, 3.5 mm earphone plug
External Connectors: 2 expansion slots (both are MMC / SD capable, one is also SDIO capable), Zodiac Connector, 3.5 mm headphone jack
Wireless: Infrared, Bluetooth (compatible with some Wi-Fi SDIO cards depending on drivers)
Battery: two rechargeable lithium batteries totaling 1540 mA·h
Size and Weight:
For comparison, the Palm TX is smaller at 78×15×121 mm due to fewer buttons, but includes Wi-Fi.
Colors: Zodiac 1: Slate Gray, Zodiac 2: Charcoal Gray
Casing: Synthetic rubber, anodized aluminum, plastic
Peripherals and accessories
5V regulated DC switch-mode battery charger, using a proprietary connector
USB PC synchronization cable, incorporating a pass-through female charger connector (allowing charging from the mains while synchronizing)
Car battery charger
Cradle attachment for sync cable (poorly designed, unreliable electrical contacts)
Folding keyboard (some third-party Bluetooth and IR models; it is unknown whether a dedicated keyboard using the sync cable connector existed)
Some SDIO cameras could be used, such as the Veo Camera
See also
Handheld game console
References
External links
(Archive)
OpenHandhelds Zodiac File Archive
Tapwave Reborn Zodiac File Archive
Handheld game consoles
Palm OS devices
Sixth-generation video game consoles
Products and services discontinued in 2005 |
4224170 | https://en.wikipedia.org/wiki/Niles%20East%20High%20School | Niles East High School | Niles East High School was a public four-year high school in Skokie, Illinois. Operated by Niles Township High School District 219, Niles East first opened in 1938 and closed after the 1979–1980 school year. Niles East's sister schools Niles West High School and Niles North High School remain open. The school was known as Niles Township High School until Niles West High School opened in 1959. The school sports teams were named the Trojans. The school's greatest claims to fame are its two Nobel laureate alumni, perhaps even more notable because the school was open for only 42 years. It ranks high among schools around the world on the list "Nobel Prize laureates by secondary school affiliation." The school buildings were demolished by Oakton Community College.
History
In 1975 Niles Township High School District 219 announced that Niles East would be closed in 1980, with all students and faculty moved to Niles West and Niles North. On the evening of November 2, 1978, then-President Jimmy Carter attended a "Get Out the Vote" rally at Niles East, where he was given an honorary diploma from the school.
After closure
After Oakton Community College moved from its original Morton Grove campus to Des Plaines, Oakton opened a branch campus in the former Niles East building. District 219 administrative offices were temporarily located in the shuttered Niles East. Centre East for the Performing Arts was located in the former Niles East auditorium until its current facility opened near Golf Road and Skokie Boulevard. Oakton Community College demolished the original high school buildings in stages as new buildings opened. The only remaining structures of Niles East as of 2017 are the courtyard flagpole and the basement under the gymnasium, which is now used for storage.
Pop culture
After its closing in 1980, exterior and interior shots of the school were used in scenes from films such as Risky Business (1983) and the John Hughes films Sixteen Candles (1984), Pretty in Pink (1986), and Weird Science (1985).
School Songs
Fight Song
Nilehi, Nilehi,
Go out and win this game,
We'll help you try.
The Trojans were a mighty race,
They fought with lots of vim.
Let's keep our fighting spirit and we'll win!
Let's go now!
Gold and Blue,
We're true to you,
We'll stand behind you always to a man.
Let's keep our colors flying high,
Our motto is to do or die,
Let's win this game, Nilehi!
Let's go, Nilehi!
Let's go, Trojans!
Fight hard, Nilehi!
VICTORY IS OURS!!
Alma Mater
Gold and Blue
Gold and Blue we sing to you
To you we bring our hearts so true
When we go off to College, we will think of you old school
Where we gained lots of knowledge and learned the golden rule
Though years may come and years may go
Deep in our hearts we'll always know
That there's only one real high school
And so we sing anew
We love you Gold and Blue
Athletics
Niles East competed as a member of the Central Suburban League from 1972 until its closing in 1980. It was always a member of the Illinois High School Association (IHSA), which governs most athletic competition in Illinois. The IHSA currently recognizes Niles West High School as the caretaker of Niles East's competitive history. The following teams finished in the top four of their respective IHSA state championship tournament:
Baseball: 2nd place (1957–58)
Gymnastics (boys): 4th place (1961–62, 1967–68, 1974–75); 3rd place (1968–69); 2nd place (1962–63, 1963–64)
Swimming & Diving (boys): 4th place (1952–53)
Tennis (boys): 3rd place (1960–61)
Wrestling: 2nd place (1960–61)
Fencing: 1st place - State Champion Team (1969–70)
Notable alumni
Robert Horvitz (class of 1964) was the co-recipient of the 2002 Nobel Prize in Physiology or Medicine for discoveries concerning genetic regulation of organ development and programmed cell death.
Martin Chalfie (class of 1965) was the co-recipient of the 2008 Nobel Prize in Chemistry for the discovery and development of the green fluorescent protein, GFP.
William Campbell (class of 1949), U.S. Air Force lieutenant general.
David Kaplan (class of 1978), ESPN 1000 radio sportscaster and host of Sports Talk Live on Comcast SportsNet Chicago.
William Nack (class of 1959), author of the best seller Secretariat (he was a consultant to, and had a bit part in, the film adaptation) and Ruffian, also made into a movie; political/government reporter for Long Island Newsday and later senior editor for Sports Illustrated.
Jill Wine-Banks (class of 1961?), television news legal commentator, Watergate prosecutor, first woman executive director of the American Bar Association, and first woman general counsel of the U.S. Army.
References
External links
Niles Township High School District 219
Niles East Alumni Directory
Oakton Community College
Centre East for the Performing Arts
Schools of Skokie, Illinois 1900-1996 - Skokie Historical Society
Illinois High School "Glory Days"
"Teacher's Strike" 1976 documentary film
Educational institutions established in 1938
Educational institutions disestablished in 1980
Former high schools in Illinois
High schools in Skokie, Illinois
1938 establishments in Illinois |
34351618 | https://en.wikipedia.org/wiki/The%20Geochemist%27s%20Workbench | The Geochemist's Workbench | The Geochemist's Workbench (GWB) is an integrated set of interactive software tools for solving a range of problems in aqueous chemistry. The graphical user interface simplifies the use of the geochemical code.
History
The GWB package was originally developed at the Department of Geology of the University of Illinois at Urbana-Champaign over a period of more than twenty years, under the sponsorship initially of a consortium of companies and government laboratories, and later through license fees paid by a community of users. In 2011, the GWB development team moved to the Research Park at the University of Illinois, and subsequently off campus in Champaign, IL, where they operate as an independent company named Aqueous Solutions LLC. Since its release, many thousands of licensed copies have been installed in more than 90 countries. In 2014, a free Student Edition of the software was released, and was later expanded in 2021 to a Community Edition free to all aqueous chemists.
An early version of the software was one of the first applications of parallel vector computing, the predecessor to today's multi-core processors, to geological research. The current release is multithreaded, and as such retains features of the early parallel vector architecture.
Overview
The GWB is an integrated geochemical modeling package used for balancing chemical reactions, calculating stability diagrams and the equilibrium states of natural waters, tracing reaction processes, modeling reactive transport, plotting the results of these calculations, and storing the related data. The Workbench, designed for personal computers running Microsoft Windows, is distributed commercially in three packages: GWB Professional, Standard, and Essentials, as well as in the free GWB Community Edition.
GWB reads datasets of thermodynamic equilibrium constants (most commonly compiled from 0 to 300 °C along the steam saturation curve) with which it can calculate chemical equilibria. Thermodynamic datasets from other popular programs like PHREEQC, PHRQPITZ, WATEQ4F, and Visual MINTEQ have been formatted for the GWB, enabling comparison and validation of the different codes. The programs K2GWB, DBCreate, and logKcalc were written to generate thermodynamic data for GWB under pressures and temperatures beyond the limits of the default datasets. The GWB can couple chemical reaction with hydrologic transport to produce simulations known as reactive transport models. GWB can calculate flow fields dynamically, import them as numeric data, or take them directly from the USGS hydrologic flow code MODFLOW.
Uses in science and industry
Geochemists working in the field, office, lab, or classroom use the software to store their analyses, calculate the distribution of chemical mass, create plots and diagrams, evaluate their experiments, and solve real-world problems.
The software is used by environmental chemists, engineers, microbiologists, and remediators to gain quantitative understanding of the chemical and microbiological reactions which control the fate and mobility of contaminants in the biosphere. With this knowledge, they can develop predictive models of contaminant fate and transport, and test the effectiveness of costly remediation schemes before implementing them in the field.
Within the energy industry, petroleum engineers, mining geologists, environmental geochemists and geothermal energy developers use the software to search for resources, optimize recovery, and manage wastes, all while using safe and environmentally friendly practices. Geoscientists manage the side effects of energy production in carbon sequestration projects and in the design of nuclear waste repositories.
Uses in education
Hundreds of scholarly articles cite or use GWB and several textbooks apply the software to solve common problems in environmental protection and remediation, the petroleum industry, and economic geology.
Geochemistry students can save time performing routine but tedious tasks that are easily accomplished with the software. Instead of balancing chemical reactions and constructing Eh-pH diagrams by hand, for example, students can spend time exploring advanced topics like multi-component equilibrium, kinetic theory, or reactive transport. A free download of The Geochemist's Workbench Community Edition is available from the developer's website.
Other geochemical modeling programs in common use
Aqion
ChemEQL
ChemPlugin
CHESS, HYTEC
CHILLER, CHIM-XPT
CrunchFlow
EQ3/EQ6
GEOCHEM-EZ
GEMS-PSI
GWB Community Edition
HYDROGEOCHEM
MINEQL+
MINTEQA2
PHREEQC
SOLMINEQ.88, GAMSPATH.99
TOUGHREACT
Visual MINTEQ
WATEQ4F
WHAM
See also
Earth Science
Environmental Engineering
Groundwater
Geochemistry
Hydrogeology
Groundwater model
Geochemical model
Reactive transport model
References
External links
Community Edition website
Users group
Scientific simulation software
Geology software |
30642008 | https://en.wikipedia.org/wiki/Censorship%20of%20Twitter | Censorship of Twitter | Censorship of Twitter refers to Internet censorship by governments that block access to Twitter, or censorship by Twitter itself. Twitter censorship also includes governmental notice and take down requests to Twitter, which Twitter enforces in accordance with its Terms of Service when a government or authority submits a valid removal request to Twitter indicating that specific content (such as a tweet) is illegal in their jurisdiction.
Restrictions based on government request
Twitter acts on complaints by third parties, including governments, to remove illegal content in accordance with the laws of the countries in which people use the service. On processing a successful complaint about an illegal tweet from "government officials, companies or another outside party", the social networking site will notify users from that country that they may not see it.
France
Following the posting of antisemitic and racist posts by anonymous users, Twitter removed those posts from its service. Lawsuits were filed by the Union of Jewish Students (UEJF), a French advocacy group, and on January 24, 2013, Judge Anne-Marie Sauteraud ordered Twitter to divulge the personally identifiable information about the user who posted the antisemitic post, charging that the posts violated French laws against hate speech. Twitter responded by saying that it was "reviewing its options" regarding the French charges. Twitter was given two weeks to comply with the court order before daily fines of €1,000 (about US$1,300) would be assessed. Issues over jurisdiction arise because Twitter has no offices or employees within France, so it is unclear how a French court could sanction Twitter.
India
Twitter accounts spoofing the Prime Minister of India, such as "PM0India", "Indian-pm" and "PMOIndiaa", were blocked in India in August 2012 following violence in Assam.
During the curfew in Jammu and Kashmir after the Indian revocation of Jammu and Kashmir's special status on 5 August 2019, the Indian government approached Twitter to suspend accounts which were spreading rumours and anti-India content. This included the Twitter account of Syed Ali Shah Geelani, a Kashmiri separatist leader. On 3 August 2019, Geelani tweeted "India is about to launch the biggest genocide in the history of mankind", following which his account was suspended at the request of the authorities. Two days later, on August 5, the Indian parliament passed a resolution to bifurcate the Jammu and Kashmir state into two union territories.
In February 2021, Twitter blocked hundreds of accounts that were posting about the Indian farmers' protest from being accessed by users in India, at the request of the Ministry of Home Affairs; the government ministry alleged that the accounts were spreading misinformation. Later that month, Twitter became subject to the national Social Media Ethics Code, which expects all social media companies operating in the country to remove content by request of the government within 36 hours, and to appoint a local representative who is an Indian resident and passport holder.
On May 18, 2021, Bharatiya Janata Party national spokesperson Sambit Patra posted an image alleged to be from an internal Indian National Congress (INC) document, detailing a social media campaign against Prime Minister Narendra Modi to criticize his handling of the COVID-19 pandemic in India. The INC disputed the posts and claimed that they were fabricated. Twitter subsequently marked the post as containing manipulated media. The Ministry of Communications and Information Technology issued a request for Twitter to remove the label, alleging that Twitter's decision was "prejudged, prejudiced, and a deliberate attempt to colour the investigation by the local law enforcement agency." After Twitter refused to remove the label, its offices in New Delhi were raided by police.
In June 2021, Twitter lost its immunity as an "intermediary" under the Information Technology Act for its failure to appoint a local representative, meaning it would be considered the publisher of all material posted on the platform. Later the same month, police in Uttar Pradesh registered a case against Twitter accusing it of distributing child pornography.
Israel
In 2016, access to comments by the American blogger Richard Silverstein about a criminal investigation, which involved a minor and therefore was under a gag order according to Israeli law, was blocked to Israeli IP addresses, following a request by Israel's Ministry of Justice.
Pakistan
As of May 2014, Twitter regularly disables the ability to view specific "tweets" inside Pakistan, at the request of the Government of Pakistan on the grounds that they are blasphemous, having done so five times in that month.
On November 25, 2017, the NetBlocks internet shutdown observatory and the Digital Rights Foundation collected evidence of nationwide blocking of Twitter alongside other social media services, imposed by the government in response to protests by the religious political party Tehreek-e-Labaik. The technical investigation found that all major Pakistani fixed-line and mobile service providers were affected by the restrictions, which were lifted by the PTA the next day when protests abated following the resignation of Minister for Law and Justice Zahid Hamid.
Russia
On May 19, 2014, Twitter blocked a pro-Ukrainian political account for Russian users. It happened soon after a Russian official had threatened to ban Twitter entirely if it refused to delete "tweets" that violated Russian law, according to the Russian news site Izvestia.
On July 27, 2014, Twitter blocked an account belonging to a hacker collective that has leaked several internal Kremlin documents to the Internet.
On March 10, 2021, Russia's Federal Service for Supervision of Communications, Information Technology and Mass Media (Roskomnadzor) began throttling Twitter on all mobile devices and on 50% of computers, claiming that Twitter had failed to remove illegal content relating to suicide, child pornography, and drug use. The agency warned that Twitter could be blocked in Russia if it did not comply. In an e-mailed statement, Twitter said it was "deeply concerned" by the attempt to throttle online public conversation.
From March to April 2021, Roskomnadzor considered banning Twitter and removing its IP addresses from Russia completely, claiming it had the necessary "technical capabilities" to do so, but said it was met with denials and a lack of urgency from the social network. According to the agency, over 3,000 posts containing child pornography in violation of community guidelines were detected in 2021 and sent to Twitter for verification; Twitter sent no response concerning the illegal content and was thereafter accused of failing in its duty to enforce the social network's community guidelines.
On April 2, 2021, a Russian court found Twitter guilty on three counts of "violating regulations on restricting unlawful content," and ordered Twitter to pay three fines adding up to $117,000. On April 5, 2021, Russia extended its throttling of Twitter until May 15, 2021. On May 17, 2021, Roskomnadzor said that Twitter had removed 91% of the banned content and backed off from blocking Twitter. With some 600 posts still pending removal, the agency said it would continue throttling Twitter on mobile devices only, and that Twitter needed to remove all the banned items, and in future delete reportedly illegal posts within 24 hours, for all restrictions to be lifted.
South Korea
In August 2010, the Government of South Korea tried to block certain content on Twitter due to the North Korean government opening a Twitter account. The North Korean Twitter account created on August 12, uriminzok, loosely translated to mean "our people" in Korean, acquired over 4,500 followers in less than one week. On August 19, 2010, South Korea's state-run Communications Standards Commission banned the Twitter account for broadcasting "illegal information." According to BBC US and Canada, experts claim that North Korea has invested in "information technology for more than 20 years" with knowledge of how to use social networking sites. This appears to be "nothing new" for North Korea as the reclusive country has always published propaganda in its press, usually against South Korea, calling them "warmongers." With only 36 "tweets", the Twitter account was able to accumulate almost 9,000 followers. To date, the South Korean Commission has banned 65 sites, including this Twitter account.
Tanzania
On October 29, 2020, ISPs in Tanzania blocked social media in the country during election week. Other social media sites have been unblocked since then, but Twitter remains blocked across all ISPs.
Turkey
On April 20, 2014, the Frankfurter Allgemeine Zeitung (FAZ) reported that Twitter had blocked two accounts hostile to the government in Turkey, @Bascalan and @Haramzadeler333, both known for pointing out corruption. In fact, on March 26, 2014, Twitter announced that it had started to use its Country Withheld Content tool for the first time in Turkey. As of June 2014, Twitter was withholding 14 accounts and "hundreds of tweets" in Turkey.
Turkey submitted the highest volume of removal requests to Twitter in 2014, 2015, 2016, 2017 and 2018, and the third-highest in 2019.
Venezuela
Twitter images were temporarily blocked in Venezuela in February 2014, along with other sites used to share images, including Pastebin.com and Zello, a walkie-talkie app. In response to the block, Twitter offered Venezuelan users a workaround to use their accounts via text message on their mobile phones.
On February 27, 2019, internet monitoring group NetBlocks reported the blocking of Twitter by state-run Internet provider CANTV for a duration of 40 minutes. The disruption followed the sharing of a tweet made by opposition leader Juan Guaidó linking to a highly critical recording posted to SoundCloud, access to which was also restricted during the incident. The outages were found to be consistent with a pattern of brief, targeted filtering of other social platforms established during the country's presidential crisis.
Government blocking of Twitter access
In some cases, governments and other authorities take unilateral action to block Internet access to Twitter or its content.
The governments of China, Iran, North Korea, and Turkmenistan have blocked access to Twitter.
China
Twitter is officially blocked in China; however, many Chinese people circumvent the block to use it. Even major Chinese companies and national media outlets, such as Huawei and CCTV, use Twitter through a government-approved VPN. The official account of China's Ministry of Foreign Affairs started tweeting in English in December 2019, while dozens of Chinese diplomats, embassies and consulates run accounts on Twitter. In 2010, Cheng Jianping was sentenced to one year in a labor camp for "retweeting" a comment that suggested boycotters of Japanese products should instead attack the Japanese pavilion at the 2010 Shanghai Expo. Her fiancé, who posted the initial comment, claims it was actually a satire of anti-Japanese sentiment in China. According to a report by The Washington Post, in 2019 state security officials visited some users in China to request that they delete tweets. The Chinese police would produce printouts of tweets and advise users to delete either the specific messages or their entire accounts. The New York Times described the crackdown on Twitter users in China as "unusually broad and punitive". The targets of the crackdown even included Twitter lurkers with very few followers. In 2019, a Chinese student at the University of Minnesota was arrested and sentenced to six months in prison when he returned to China, for posting tweets mocking Chinese paramount leader Xi Jinping while in the US. On 3 July 2020, Twitter announced that all data and information requests from Hong Kong authorities were immediately paused after the Hong Kong national security law, which was imposed by the Chinese government, went into effect. According to official verdicts, as of 2020 hundreds of Chinese citizens had been sentenced to prison for tweeting, retweeting and liking posts on Twitter. According to documents obtained by The New York Times in 2021, Shanghai police were trying to use technological means to find out the true identities of Chinese users of specific accounts on foreign social media, including Twitter.
Egypt (2011 temporary block)
Twitter was inaccessible in Egypt on January 25, 2011, during the 2011 Egyptian protests. Some news reports blamed the government of Egypt for blocking it. Vodafone Egypt, Egypt's largest mobile network operator, denied responsibility for the action in a tweet. Twitter's news releases did not state who the company believed instituted the block. As of January 26, Twitter was still confirming that the service was blocked in Egypt. On January 27, various reports claimed that access to the entire Internet from within Egypt had been shut down.
Shortly after the Internet shutdown, engineers at Google, Twitter, and SayNow, a voice-messaging startup company acquired by Google in January, announced the Speak To Tweet service. Google stated in its official blog that the goal of the service was to assist Egyptian protesters in staying connected during the Internet shutdown. Users could phone in a "tweet" by leaving a voicemail, which was posted with the Twitter hashtag #Egypt. These comments could also be accessed without an Internet connection by dialing the same designated phone numbers. Those with Internet access could listen to the comments by visiting twitter.com/speak2tweet.
On February 2, 2011, connectivity was re-established by the four main Egyptian service providers. A week later, the heavy filtering that occurred at the height of the revolution had ended.
Iran
In 2009, during the 2009 Iranian presidential election, the Iranian government blocked Twitter due to fear of protests being organised. In September 2013, the blocking of both Twitter and Facebook was briefly lifted without notice due to a technical error; however, within a day the sites were blocked again.
Nigeria (2021 block)
On June 2, 2021, President of Nigeria Muhammadu Buhari made posts on Twitter that threatened retaliatory action against the Eastern Security Network (ESN), a paramilitary organization of the separatist group Indigenous People of Biafra responsible for attacks on government structures, military and police personnel in the South Eastern part of the country. President Buhari's tweets evoked the Nigerian Civil War as a theme, including the statement "those of us in the fields for 30 months, who went through the war, will treat them in the language they understand." After criticism of the posts, Twitter removed them claiming violations of its policy on abusive content, and temporarily suspended President Buhari's account.
President Buhari considered the actions to be a violation of his freedom of speech, while Minister of Information and Culture Lai Mohammed accused Twitter of operating under a "double standard". On June 4, 2021, Mohammed announced that Twitter's operations in Nigeria would be "suspended" indefinitely, arguing that the company had been engaging in activities "capable of undermining Nigeria’s corporate existence." He also stated that the National Broadcasting Commission (NBC) would be compelled to establish a licensing system for social media and "OTT" services.
On June 5, under directives issued pursuant to the suspension, Twitter was blocked by all internet service providers in the country. The block is performed by internet service providers blocking the IP addresses used by Twitter, and by DNS filtering, so that the DNS servers run by these providers do not return an IP address when an attempt is made to resolve the Twitter domain.
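As an aside for readers interested in how such a two-layer block manifests technically, the following Python sketch probes first name resolution and then a direct TCP connection, using only the standard library. The host and port are simply Twitter's public web endpoint, and interpreting a failure as evidence of filtering is a heuristic assumption, since ordinary outages produce the same symptoms.

    import socket

    HOST = "twitter.com"  # public web host; 443 is the standard HTTPS port
    PORT = 443

    def probe(host: str, port: int) -> None:
        # Step 1: ask the configured DNS resolver for an address. Under
        # DNS filtering, this lookup fails even though the site is up.
        try:
            infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
        except socket.gaierror as exc:
            print(f"DNS lookup failed ({exc}): consistent with DNS filtering")
            return
        addresses = sorted({info[4][0] for info in infos})
        print("resolved addresses:", addresses)

        # Step 2: attempt a TCP handshake with each address. Under IP
        # blocking, connections time out or are reset despite a
        # successful lookup.
        for addr in addresses:
            try:
                with socket.create_connection((addr, port), timeout=5):
                    print(f"{addr}: TCP connection succeeded")
            except OSError as exc:
                print(f"{addr}: TCP connection failed ({exc}): "
                      "consistent with IP blocking")

    probe(HOST, PORT)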
A large number of users attempted to bypass the blocks using VPN services; Attorney General Abubakar Malami signed a directive making any use of Twitter a prosecutable offense. On June 6, the diplomatic missions of Canada, the European Union, Ireland, the United Kingdom, and the United States issued a joint statement condemning the ban and related actions, as they "inhibit access to information and commerce" at precisely the moment Nigeria needs to foster inclusive dialogue and expression of opinions. The block received criticism from the Newspaper Proprietors' Association of Nigeria. The Nigerian Guild of Editors asked the government to seek other methods of resolving the dispute with Twitter.
On June 7, all broadcast media outlets were ordered by the NBC to cease use of Twitter, including use of the service as a source for information gathering.
North Korea
In April 2016, North Korea started to block Twitter "in a move underscoring its concern with the spread of online information". Anyone who tries to access it without special permission from the North Korean government, including foreign visitors and residents, is subject to punishment.
Turkey (2014 temporary block)
On March 21, 2014, access to Twitter in Turkey was temporarily blocked, after a court ordered that "protection measures" be applied to the service. This followed earlier remarks by Prime Minister Tayyip Erdogan, who vowed to "wipe out Twitter" following damaging allegations of corruption in his inner circle. However, on March 27, 2014, the Istanbul Anatolia 18th Criminal Court of Peace suspended the above-mentioned court order. Turkey's constitutional court later ruled that the ban was illegal. Two weeks after the Turkish government blocked the site, the Twitter ban was lifted. However, Twitter reports that the government of Turkey accounts for more than 52 percent of all content removal requests worldwide.
Turkmenistan
Foreign news and opposition websites are blocked in Turkmenistan, and international social networks such as Twitter are "often inaccessible".
United Kingdom (2011 threat of temporary block)
Then-Prime Minister David Cameron threatened to shut down Twitter among other social networking sites for the duration of the 2011 England riots, but no action was taken.
Suspending and restricting users
Under Twitter's Terms of Service, which require users' agreement, Twitter retains the right to temporarily or permanently suspend user accounts based on violations. One such example took place on December 18, 2017, when it banned the accounts belonging to Paul Golding, Jayda Fransen, Britain First, and the Traditionalist Worker Party. Donald Trump, then President of the United States, faced a limited degree of censorship in 2019 and, following the 2021 storming of the United States Capitol, was permanently suspended based on moderators' interpretation of two of his tweets. Trump had used the platform extensively as a means of communication, and had escalated tensions with other nations through his tweets. On January 8, 2021, at 6:21 EST, Twitter permanently suspended Trump's personal Twitter account. The President then posted four status updates on the POTUS Twitter account, which were subsequently removed. Twitter said it would not suspend government accounts, but would "instead take action to limit their use."
Twitter's policies have been described as subject to manipulation by users who may coordinate to flag politically controversial tweets as allegedly violating the platform's policies, resulting in deplatforming of controversial users. The platform has long been criticized for its failure to provide details of underlying alleged policy violations to the subjects of Twitter suspensions and bans.
In 2018, Twitter rolled out a "quality filter" that hid content and users deemed "low quality" from search results and limited their visibility, leading to accusations of shadow banning. After conservatives claimed it censored users from the political right, Alex Thompson, a writer for VICE, confirmed that many prominent Republican politicians had been "shadow banned" by the filter. Twitter later acknowledged the problem, stating that the filter had a software bug that would be fixed in the near future.
In October 2020, Twitter prevented users from tweeting about a New York Post article about the Biden–Ukraine conspiracy theory, relating to emails about Hunter Biden allegedly introducing a Ukrainian businessman to his father, Joe Biden. Senators Marsha Blackburn and Ted Cruz described the blocking of the New York Post on Twitter as "election interference". The New York Times reported in September 2021 that a Federal Election Commission inquiry into a complaint about the matter found Twitter had acted with a valid commercial reason, rather than a political purpose. The FEC inquiry also found that allegations Twitter had violated election laws by allegedly shadow banning Republicans and other means were "vague, speculative and unsupported by the available information."
See also
Deplatforming
Shadow banning
Twitter suspensions
References
Twitter |
18023495 | https://en.wikipedia.org/wiki/Nokia%206280%20Series | Nokia 6280 Series | The Nokia 6280 Series is a series of slider-type phones first released in Q4 2005.
Nokia 6280
The Nokia 6280 is a 3G mobile phone from Nokia. It is the 3G sister product to the 2G Nokia 6270. It features two cameras, a rear two megapixel camera with an 8x digital zoom and flash, and a front-mounted VGA camera for video calling only. It also has expandable memory via miniSD memory cards. The 6280 uses the Nokia Series 40 mobile platform and can be network locked using the base band 5 locking mechanism. It is available in four colours: Black, Plum, Burnt Orange and Silver.
The phone is slightly smaller than its 2G relative, at 100 x 46 x 21 mm in size and 115 g in weight.
Video
The 6280 can play back MPEG-4 ".mp4" video files, such as those designed to be played on an iPod, provided they have not been encrypted with DRM. AVI files can be transcoded using software on the PC. During video playback, the audio track tends to stop after about 20 minutes. To work around this problem it is possible to split MP4 files into several pieces; in software version 6.x or later the problem does not occur. The default format used by the phone for video encoding is ".3GP", which QuickTime will decode. This is the format used by the phone when video is selected from the camera option.
There are many free programs that can convert video files to and from the phone's formats, such as FFmpeg or Nokia PC Suite's Multimedia Player.
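As a hedged sketch of what such a conversion looks like in practice, the Python fragment below shells out to FFmpeg to transcode an AVI file into an MPEG-4 ".mp4" file sized for the phone's screen. FFmpeg must already be installed, and the scaling, bitrate and codec values are illustrative guesses rather than Nokia-specified parameters.

    import subprocess

    def convert_for_phone(src: str, dst: str) -> None:
        # Transcode to MPEG-4 Part 2 video in an .mp4 container, scaled
        # to a 320x240 screen. Bitrates here are illustrative values,
        # not Nokia-specified requirements.
        cmd = [
            "ffmpeg",
            "-i", src,           # input file (e.g. an AVI from the PC)
            "-vcodec", "mpeg4",  # plain MPEG-4 Part 2 video
            "-s", "320x240",     # match the phone's screen resolution
            "-b:v", "384k",      # modest video bitrate for smooth playback
            "-acodec", "aac",    # AAC audio track
            "-b:a", "64k",
            dst,
        ]
        subprocess.run(cmd, check=True)

    convert_for_phone("input.avi", "movie.mp4")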
Firmware update
Many firmware updates have been released for the Nokia 6280 that fix many of the issues related to the original firmware, which has been known to drop out and crash without warning. There is also an issue related to the silent profile on the phone. The new firmware has fixed some of these issues as well as adding more complete ID3 functionality for the music player and stereo Bluetooth capability. The newer firmware versions are similar to those of the Nokia 6288.
Although options to update the phone over the air (OTA) or by data cable are present in the menus, the phone cannot actually be updated by either of these methods. This is because phones with v3 firmware ended up being 'bricked' when using the PC software update (when the 6280 was still listed), due to certain coding not being written into the phone. Later versions of the phone are technically capable of such updates; however, the 6280 has been removed from the software update program.
Software updates can be completed by taking the phone to a licensed Nokia repair centre (such as Carphone Warehouse in the UK). These can be done for free under warranty (even if the phone isn't faulty), or out of warranty at a cost. The latest firmware version available from Carphone Warehouse UK is 6.43; however, higher versions may exist from Nokia directly.
Specifications
Nokia 6288
The Nokia 6288 is a 3G mobile phone made by Nokia. It is an updated version of the Nokia 6280. The mobile phone is a slide-design phone, featuring a 2.2-inch (320x240 pixels), 262,144-color TFT screen, and two cameras: one at the rear being a 2 MP camera (complete with 8x digital zoom and flash) and a front camera for video calling only. The phone can record .3gp video in 640x480 resolution, while it can play back both 3GP and MP4 videos. It has 6 MB of built-in memory and the option to expand up to 2 GB via the use of a miniSD card. The 6288 can be used on tri-band GSM (900/1800/1900 MHz) and WCDMA (2100 MHz). It also features PTT support, IM and email clients. The phone has Bluetooth 2 (with A2DP/AVRCP profile support), infrared and USB connections. The phone comes in two colours, black and white. The phone also has Java built in and supports most generic Java applications as well as specific S40 version applications.
Images
Specifications
References
A thread on the Nokia forums concerning the issue of the faults with the software of the 6280
External links
Nokia 6280 Product Page on Asia-Pacific Site
Nokia Support Discussions
How To Remove Dust From Under The Screen
Nokia 6280 page on GSMArena
Nokia 6280 page on IndiaGSM
Nokia 6288
Nokia 6288 Review - CNET.com.au
6280
Mobile phones introduced in 2005
Mobile phones with infrared transmitter
Slider phones |
716483 | https://en.wikipedia.org/wiki/DTN%20%28company%29 | DTN (company) | DTN, previously known as Telvent DTN, Data Transmission Network and Dataline, is a private company based in Burnsville, Minnesota that specializes in subscription-based services for the analysis and delivery of real-time weather, agricultural, energy, and commodity market information. As of 2018 the company has approximately 600,000 subscribers, mostly in the United States. DTN is known for its accurate meteorological forecasting and large network of weather stations, its market analysis services, and its early use of radio and satellite systems to transmit reports to its Midwestern consumers. DTN also operates The Progressive Farmer magazine. DTN was previously owned by Telvent and Schneider Electric, and since 2017 has been owned by Zurich-based TBG.
History
Formation (1984–1987)
In the early 1980s the Omaha-based company Scoular Grain was a growing agribusiness led by Nebraska grain industry executive Marshall Faith. Faith, along with several other investors, had acquired what was then Scoular-Bishop Grain Company in 1967 and expanded its operations from three grain elevators to dozens of locations in multiple states, and was beginning to branch out beyond grain warehousing. On 9 April 1984 the company created a new subsidiary incorporated under the name Scoular Information Services with the goal of improving communications with farmers. The project was led by Omaha native Roger Brodersen, Scoular's chief operating officer and the executive who supervised new-project development and corporate acquisitions for the company. The subsidiary soon became known as Dataline.
Dataline's chief product was an FM radio receiver unit that would pick up Dataline broadcasts transmitted via sideband signals. Farmers and agribusinesses would buy the receivers to get agricultural information and commodities updates. Dataline's customers preferred to lease the receivers rather than buy them, but Scoular was unwilling to finance the equipment, so Brodersen bought Dataline in 1986 and a year later took its stock public in order to raise the capital necessary to acquire the receivers and to develop other information products. Under the new model, subscribers would receive a radio signal receiver and a video terminal at no charge, and then pay a monthly subscription to receive 24-hour-a-day broadcasts of 20+ pages of market information, weather reports, and analysis. In late 1986 the monthly fees were $17.50 and the service was available in Nebraska, Minnesota, Iowa, and Illinois. Public broadcasting station WILL-TV in Champaign was one broadcaster that carried the Dataline signal.
At the beginning of 1987 Dataline had 5,300 subscribers; that grew to 10,000 by May and 13,500 in ten states by October. In addition to being accessible on terrestrial FM carriers, Dataline was also broadcast from the Galaxy 1 communications satellite. Monthly fees were $19.50 for 65 videotext pages of market quotes, grain and livestock information, commentary, weather, and reports. In September Dataline partnered with a subsidiary of ConAgra to add an "electronic catalog" feature to their information feed, allowing subscribers to browse farm supplies, equipment, and other products.
On 17 September 1987 the company re-incorporated and was renamed Data Transmission Network.
Expansion (1988–1997)
The company, commonly identified simply as DTN, expanded rapidly through the late 1980s and early 1990s; its operating cash flow grew roughly 30% each year beginning in 1989 and by 1993 was $12.9M. By mid-1994 DTN had roughly 77,000 subscribers, about 60,000 of which were in the agricultural sector; the remainder were mostly in the finance and energy industries. Subscriptions in 1994 were $26 a month; satellite connection was an extra $7/month and a color monitor an extra $20/month. By 1997 it had 152,000 customers, 110,000 of which were in agriculture.
Beginning in the mid-1990s DTN also grew its operations by acquiring other data and meteorology companies. In May 1996 it acquired its chief competitor FarmDayta (formerly Broadcast Partners) of Urbandale, Iowa for $73M, and adopted its 38,500 subscribers. In 1998 DTN merged its weather services division with Minneapolis-based Kavouras, a company founded in 1977 that specialized in radar-based weather reporting and prediction; DTN also acquired Weather Services Corporation (WSC), a Lexington, Massachusetts-based forecasting and climate prediction company that developed meteorological databases. The union of the three weather organizations became known as DTN/Meteorlogix.
The company also worked in the 1990s to expand into niche markets by developing more specialized services.
Sale and bankruptcy (1998–2003)
In the late 1990s, DTN entered a period of financial uncertainty, in part due to increasing competition with new free information channels like CNBC, the Weather Channel, and websites run by Farm Journal and Pioneer. In response, DTN sought to expand its customer base by selling a broader array of services, including inventory-tracking for auto dealers, coupon-printing kiosks for grocery stores, fire-alert information for forestry services, and other ventures. DTN gained 7,600 new subscribers but lost 4,500 from its core group of agricultural customers, and its stock fell 38% in 1999.
Since April 1998, DTN had been seeking an organization to buy the 1,300-employee company, but after 11 months it had not found a buyer willing to pay a price that the board felt was appropriate, so in mid-March 1999 founder Roger Brodersen called off the search. One week later, on 24 March, Brodersen and four other directors resigned and Peter Kamin was named the new chairman. Kamin, the leader of a stockholder group that had pressed to sell the company, resumed the search for a buyer.
In April 2000, the New York City-based private equity firm Veronis Suhler Stevenson agreed to buy out DTN for $451M, including $91M in debt. DTN at the time had approximately 166,000 clients. Veronis implemented a new strategy by splitting DTN into four separate divisions: agriculture, energy, weather, and financial services. Robert Gordon, who would later become DTN CEO, was hired to run the weather division.
Staffing up the four divisions proved slow and costly, and by 2003 sales were lower than in 1998. DTN entered Chapter 11 bankruptcy in late 2003 and its operation was taken over by a consortium of banks led by Wachovia, DTN's lead creditor. DTN emerged from bankruptcy six weeks later.
Revival (2004–2007)
Robert Gordon became president in January and CEO in July 2004. He gradually reduced staff to 675, eliminated redundancies, and pushed a new strategy of selling proprietary data at a premium price. Customers paid larger subscription fees ($100 a month for many weather customers) to receive customized, hourly webcasts based on more precise computer models; ag customers paid roughly 50% more for analyses of issues like Asian soybean rust and avian influenza; and energy customers gained access to online exchanges with real-time fuel prices.
Gordon's strategy proved successful and DTN began to re-expand. In July 2006 DTN acquired St. Louis-based Surface Systems, Inc. (SSI), a company specializing in weather monitors for roads and airport runways, and integrated its customers and its network of 6,000 surface sensors into its Meteorlogix division. In 2007 it acquired the majority of the Edmond, Oklahoma company WeatherBank, integrating its energy, transportation, public safety, and agriculture weather forecasting customers into Meteorlogix.
In 2007 DTN also arranged with publisher Time Warner to acquire The Progressive Farmer, an agricultural magazine founded in 1886 and read at the time of acquisition by approximately 600,000 subscribers. The deal for an undisclosed sum supported Time Warner's strategy of shedding smaller publications to focus on its larger magazine properties and DTN's mission of providing agricultural information.
DTN remained structured around four brands: DTN Ag, DTN Energy, DTN Market Access, and DTN Meteorlogix.
Telvent and Schneider (2008–2016)
On 28 October 2008 Madrid-based IT company Telvent acquired DTN in a $445M all-cash deal. DTN at the time of the acquisition had about 700 employees, 700,000 subscribers, and annual revenues of approximately $180M, with about 90% of its sales derived from its subscription-based services. After its acquisition the company became known as Telvent DTN.
Telvent itself was purchased in 2011 by Schneider Electric, a large energy management company based in Rueil-Malmaison, France. Schneider announced in early 2011 that it had reached a deal to acquire Telvent, and in August received regulatory approval to complete the €1.4B acquisition.
Telvent DTN's revenue at the time of the acquisition was approximately $213 million.
In June 2014 DTN acquired RevCo, a St. Louis-based agricultural data management business.
TBG and recent growth (2017–)
In 2016 Schneider put DTN up for sale after a strategic review found that it was not essential to the company. Several firms were interested in purchasing DTN (including London-based Euromoney Institutional Investor), but it was the Zurich-based venture capital firm TBG that ultimately prevailed, completing its $900M acquisition of the company in April 2017.
Since its purchase by TBG, DTN has acquired several smaller corporations or systems:
Wilkens Weather (acquired 29 September 2017), a marine weather forecasting company based in Houston, Texas and previously owned by Rockwell Collins.
Spensa Technologies (acquired 2 April 2018), a precision agriculture technology company founded in January 2009 and based in the Purdue Research Park in West Lafayette, Indiana.
Energy Management Institute (acquired 10 April 2018), a provider of training services and marketing analysis based in New York City.
Weather Decision Technologies (acquired 8 October 2018), a provider of weather decision support solutions based in Norman, Oklahoma.
PraxSoft (acquired 10 June 2019), previously Praxis Software, a provider of sensor interfaces and communications technology based in Orlando, Florida.
Weatherzone (acquired 1 October 2019), a weather information provider based in Australia.
ClearAg (acquired May 2020 for $12M), a system created by California-based Iteris for monitoring and predicting agricultural conditions.
Farm Market iD (acquired February 2021), a provider of aggregated farm and grower data based in Westmont, Illinois.
Online Fuels (acquired May 2021), a UK-based online trading platform for refined fuels.
MeteoGroup, a private Europe-based weather organisation acquired by parent TBG in September 2018, is being integrated into DTN.
Business sectors
Aviation
DTN is one of the main suppliers of weather intelligence to the aviation industry. Its weather data and patented algorithms for EDR turbulence, icing, and rapidly developing thunderstorms flow directly to airlines and airports, or via one of DTN's many partners, for electronic flight bags (EFB) and flight planning. The data helps keep crew and passengers safe in the air, especially around turbulence and thunderstorm events. DTN delivers weather intelligence directly or indirectly to over 250 airlines worldwide, and its large in-house team of aviation meteorologists supports aviation customers with risk consulting and products such as RAMTAFs.
Agriculture
DTN's agricultural division sells services to farmers and agribusinesses.
For farmers and other producers, DTN provides ag market information, insights on market prices and strategies, and detailed weather information and forecasts. DTN also creates applications used by growers to determine the best time to work fields or spray for pests, and offers Adapt-N-based field nitrogen analysis and planning.
For agribusinesses DTN offers market information and analysis, tools to manage grain trading and communications with growers, weather reports, and other services.
Since 2007 DTN has owned and operated The Progressive Farmer, a 600,000-circulation farm and agribusiness magazine founded in 1886. DTN serves about 50,000 growers in North America and most leading agribusinesses.
Weather
DTN generates weather reports, forecasts, and analysis aimed at customers in seven sectors: aviation, marine, utilities, renewable energy, transportation, sports, and construction/public safety. Forecasts and reports are based on large data sets that combine proprietary information from DTN's network of 6,000 weather stations with other global content sources. One such source is DTN Services and Systems Spain, where D-ATIS and D-VOLMET systems are manufactured for the company. Combining the various data into a single forecast is done in part through the National Center for Atmospheric Research's Dynamic Integrated foreCast (DICast), a technology licensed from the University Corporation for Atmospheric Research. DTN employs more than 120 professional meteorologists in strategic locations globally.
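The consensus approach behind systems like DICast can be illustrated with a toy performance-weighted blend. The following Python sketch is only a schematic illustration of that general idea, not NCAR's actual DICast implementation; the function name, the weighting scheme, and all numbers are invented for the example:

```python
def consensus_forecast(forecasts, recent_errors):
    """Blend several model forecasts into one consensus value.

    Sources with a lower recent mean absolute error receive more
    weight, so the blend dynamically favors whichever input model
    has been performing best lately.
    """
    weights = [1.0 / err for err in recent_errors]
    return sum(f * w for f, w in zip(forecasts, weights)) / sum(weights)

# Three hypothetical model forecasts of tomorrow's high (deg C),
# each paired with that model's recent mean absolute error.
blended = consensus_forecast([21.0, 23.0, 22.5], [1.2, 2.5, 1.8])
```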
The company's forecasts have been ranked as the most accurate in the US every year since 2006 for predictions of short-term precipitation and high temperatures. WeatherSentry Online is DTN Weather's principal platform.
Oil and gas
In the energy sector DTN focuses on the buying, selling, transportation, and storage of fuel. DTN offers real-time fuel market data, tools to monitor how fuel is distributed, and software for managing and automating fuel storage.
Trading
Energy trading and commodity markets are the focus of DTN's trading division. DTN sells streaming market data, tickers, news, historical analysis, stock and option quotes, and other related services. Its principal trading-related product is ProphetX, a software platform built for commodity and equity markets.
Products and services
Applications
DTN sells subscriptions to software applications which it makes available through web-based and mobile application platforms. For agriculture, DTN's primary application is MyDTN, an "agriculture application suite" that provides a variety of news, weather, and market information, as well as tools for calculating field nitrogen levels and farm profits. Other applications in the agriculture sector are:
DTN Ag Weather Tools, an iOS and Android app for weather updates, alerts, and information
DTN Connect, a web-based CRM tool
DTN Grain Portal, a web-based tool for managing grain transactions
For weather, DTN's principal application is WeatherSentry, which it licenses in various editions tailored to specific markets. Other DTN weather applications:
AviationSentry, which provides aviation-related weather reports on thunderstorms, turbulence, icing risk, etc.
Flight Route Alerting, with editions for airlines or helicopters
MetConsole, a web-based software for managing weather sensors and networks, available in versions tailored to AWOS, ATIS, LLWAS, and regional networks
RoadCast for road conditions
Total View RWIS Data Management System
For fuel and energy, DTN markets applications mostly to refineries and large fuel suppliers and buyers. Applications include:
DTN Allocation Tracker for fuel buyers to see where fuel is available
DTN Allocation Viewer for fuel sellers to monitor their inventory
DTN FastRacks for giving real-time fuel prices to buyers
DTN Fuel Admin for managing electronic bills of lading
DTN Fuel Buyer for monitoring market conditions and prices
DTN DataConnect Messaging for back-office document management
DTN TABS (Terminal Authorization and Billing System) for large sellers to manage loading at their terminals
DTN TIMS (Terminal Inventory Management System) for large sellers to manage supply and inventory
DTN Exchange for managing and selling fuel supplies
DTN Guardian3, a terminal automation system
For trading, DTN's principal application is ProphetX, an application for market data and analysis which it sells in customized editions for commodities, energy, or livestock trading. ProphetX is accessible online and via an iOS or Android app. Other DTN trading applications:
DTN IQFeed, which provides streaming quotes for stocks, options, and futures
DTN.IQ, similar to IQFeed but with more analytics
Services
Meteorological consulting for airlines, airports, helicopter operators, renewable energy companies, utilities, transportation, and sports.
DTN AgHost, a service that creates branded websites for ag clients to better communicate with their customers.
Hardware
DTN sells automated weather stations that can be set up at a member's location to collect local meteorological data, which DTN then uses to produce hyper-local reports and forecasts. Roughly 5,500 stations operate from DTN member farms, part of the company's larger 22,500-station North American weather station network.
DTN also has a Weather Systems group based in Spain and the Netherlands that installs airport weather systems.
The Progressive Farmer
Since 2007 DTN has owned and operated The Progressive Farmer, a monthly magazine founded in 1886 that focuses on how to operate a successful farm by covering subjects like marketing, management, crop and livestock production, and equipment. The magazine has a national circulation of about 500,000 and is based in Birmingham, Alabama.
Past products
The original radio receivers and video display terminals used to receive DTN broadcasts are now obsolete, but older equipment remains in service in some areas. DTN terminals would display information on a page-by-page basis, and supported the optional attachment of a line printer to create hard copy reports. The basic package contained 40 pages of information, including charts, commentary, news, futures quotes, and weather; the subscription price in 1986 was $210/year.
Operations
Locations
DTN is headquartered at offices on Rupp Drive in Burnsville, Minnesota, a city on the south side of the Minneapolis metropolitan area, and also operates from its original location on West Dodge Road in Omaha, Nebraska. The company has satellite offices in a number of other US cities:
Birmingham, Alabama — location of The Progressive Farmer magazine, acquired in 2007.
Chicago, Illinois
Springfield, Illinois
Urbandale, Iowa — location of FarmDayta, acquired in 1996.
St. Louis, Missouri — location of RevCo, acquired in 2014.
Hastings, Nebraska
New York, New York — location of Energy Management Institute, acquired in 2018.
Plano, Texas — location of Diamond Control Systems, acquired in 2001.
Houston, Texas — location of Wilkens Weather, acquired in 2017.
Norman, Oklahoma — location of Weather Decision Technologies, acquired in 2018.
Grand Forks, North Dakota — location of the Weather and Agriculture Division of Iteris, acquired in early May 2020.
Sydney, Australia — location of Weatherzone, acquired 1 October 2019.
In Europe, DTN has offices in Antwerp, Utrecht, Madrid, Seville and Appenzell. It also operates an Australian office in Brisbane.
Corporate governance
DTN is organized into four divisions: DTN Weather, DTN Agriculture, DTN Energy, and DTN Financial Analytics.
Executives
Since 14 July 2020 the position of President for DTN has been held by Marc Chesover. Other members of DTN's leadership:
Tom Dilworth — Chief Financial Officer
Lars Ewe — Chief Technology Officer
Doug Bennett — Chief Customer and Strategy Officer
Marc Norton — Chief Information Security Officer
John McPherson — Head of Human Resources and Integrations
Previous CEOs
Steve Matthesen, CEO (June 2019—July 2020)
Ron Sznaider, acting CEO (November 2018—June 2019)
Jerre Stead and Sheryl von Blucher, co-CEOs (January 2018—November 2018)
Kip Pendleton (October 2017—January 2018)
Ron Sznaider (January 2016—September 2017)
Board of Directors
Marc Chesover — President
Tom Dilworth — Chief Financial Officer
References
External links
DTN website
DTN/The Progressive Farmer
Companies based in Minneapolis
Agriculture companies of the United States
Financial software companies
Meteorological companies
Privately held companies based in Minnesota
Technology companies established in 1984
Wide area networks |
1280312 | https://en.wikipedia.org/wiki/Jon%20Kleinberg | Jon Kleinberg | Jon Michael Kleinberg (born 1971) is an American computer scientist and the Tisch University Professor of Computer Science and Information Science at Cornell University, known for his work in algorithms and networks. He is a recipient of the Nevanlinna Prize, awarded by the International Mathematical Union.
Early life and education
Jon Kleinberg was born in 1971 in Boston, Massachusetts. He received a Bachelor of Science degree in computer science from Cornell University in 1993 and a Ph.D. from Massachusetts Institute of Technology in 1996. He is the older brother of fellow Cornell computer scientist Robert Kleinberg.
Career
Since 1996 Kleinberg has been a professor in the Department of Computer Science at Cornell, as well as a visiting scientist at IBM's Almaden Research Center. His work has been supported by an NSF Career Award, an ONR Young Investigator Award, a MacArthur Foundation Fellowship, a Packard Foundation Fellowship, a Sloan Foundation Fellowship, and grants from Google, Yahoo!, and the NSF. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences. In 2011, he was elected to the United States National Academy of Sciences. In 2013 he became a fellow of the Association for Computing Machinery.
Research
Kleinberg is best known for his work on networks. One of his best-known contributions is the HITS algorithm, developed while he was at IBM. HITS is an algorithm for web search that builds on the eigenvector-based methods used in algorithms such as PageRank, recognizing that web pages or sites should be considered important not only if they are linked to by many others (as in PageRank), but also if they link to many others. Search engines themselves are examples of sites that are important because they link to many others. Kleinberg realized that this generalization implies two different classes of important web pages, which he called "hubs" and "authorities". The HITS algorithm automatically identifies the leading hubs and authorities in a network of hyperlinked pages.
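The mutual reinforcement between the two classes lends itself to a simple iterative computation: each page's authority score is updated to the sum of the hub scores of the pages linking to it, and each hub score to the sum of the authority scores of the pages it links to, with normalization after every pass. The Python sketch below is a minimal illustration of this idea (the toy graph and function are invented for the example), not Kleinberg's original implementation:

```python
def hits(graph, iterations=50):
    """Compute hub and authority scores for a directed link graph.

    `graph` maps each page to the list of pages it links to.
    """
    pages = set(graph) | {q for targets in graph.values() for q in targets}
    hubs = {p: 1.0 for p in pages}
    auths = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # A page is a strong authority if strong hubs link to it.
        auths = {p: sum(hubs[q] for q in pages if p in graph.get(q, [])) for p in pages}
        # A page is a strong hub if it links to strong authorities.
        hubs = {p: sum(auths[q] for q in graph.get(p, [])) for p in pages}
        # Normalize so the scores stay bounded across iterations.
        for scores in (auths, hubs):
            norm = sum(v * v for v in scores.values()) ** 0.5
            for p in scores:
                scores[p] /= norm
    return hubs, auths

# Toy web: a directory page linking out, and a popular page linked to.
links = {"directory": ["site_a", "site_b"], "site_a": ["site_b"]}
hub_scores, authority_scores = hits(links)
```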
Kleinberg is also known for his work on the algorithmic aspects of the small-world experiment. He was one of the first to realize that Stanley Milgram's famous "six degrees" letter-passing experiment implied not only that there are short paths between individuals in social networks but also that people seem to be good at finding those paths, an apparently simple observation that turns out to have profound implications for the structure of the networks in question. The formal model in which Kleinberg studied this question is a two-dimensional grid, where each node has both short-range connections (edges) to neighbours in the grid and long-range connections to nodes farther apart. For each node v, a long-range edge between v and another node w is added with a probability that decays as the second power of the distance between v and w. This is generalized to a d-dimensional grid, where the probability decays as the d-th power of the distance.
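The long-range connection rule can be made concrete with a small sampler. In the Python toy below (an illustrative construction only; the function and parameter names are invented for the example), each node of an n-by-n grid draws one long-range contact with probability proportional to d ** -r, where d is the lattice distance; setting the exponent r equal to the grid dimension (2 here) is the regime in which Kleinberg showed greedy routing finds short paths:

```python
import random

def long_range_contact(node, n, r=2.0):
    """Sample one long-range contact for `node` on an n-by-n grid.

    The contact w is drawn with probability proportional to
    d(node, w) ** -r, where d is the Manhattan lattice distance.
    """
    x, y = node
    others = [(i, j) for i in range(n) for j in range(n) if (i, j) != node]
    weights = [(abs(i - x) + abs(j - y)) ** -r for i, j in others]
    return random.choices(others, weights=weights, k=1)[0]

# Augment a 10x10 grid: every node keeps its short-range grid
# neighbours and gains one randomly sampled long-range edge.
contacts = {(i, j): long_range_contact((i, j), 10)
            for i in range(10) for j in range(10)}
```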
Kleinberg has written numerous papers and articles as well as a textbook on computer algorithms, Algorithm Design, whose first edition he co-authored with Éva Tardos and whose second edition he authored alone. Among other honors, he received a MacArthur Foundation Fellowship, also known as the "genius grant", in 2005 and the Nevanlinna Prize in 2006, an award given out once every four years along with the Fields Medal as the premier distinction in computational mathematics.
His book "Networks, Crowds, and Markets: Reasoning About a Highly Connected World" was published by Cambridge University Press in 2010.
Cornell's Association of Computer Science Undergraduates awarded him the "Faculty of the Year" award in 2002.
References
External links
Still the Rebel King – video
Interview with Jon Kleinberg, ACM Infosys Foundation Award recipient by Stephen Ibaraki
Yury Lifshits, Four Results of Jon Kleinberg: a talk for St. Petersburg Mathematical Society
American computer scientists
1971 births
Living people
Fellows of the Association for Computing Machinery
Members of the United States National Academy of Sciences
MacArthur Fellows
Nevanlinna Prize laureates
Cornell University faculty
Cornell University alumni
Massachusetts Institute of Technology alumni
20th-century American engineers
21st-century American engineers
20th-century American scientists
21st-century American scientists
Simons Investigator
Recipients of the ACM Prize in Computing
Network scientists |
66296484 | https://en.wikipedia.org/wiki/Pixel%20Game%20Maker%20MV | Pixel Game Maker MV | Pixel Game Maker MV (released as "Action Game Tsukuru" (アクションゲームツクール) in Japan) is a 2D action game production software published by Playism.
It allows for the creation of 2D games without the need for programming.
The software is abbreviated to "Actsuku" amongst the Japanese community and to PGMMV in English.
PGMMV was released by Gotcha Gotcha Games, a subsidiary of KADOKAWA, as a beta version in 2018. It was fully released in 2019.
Development
The development of Pixel Game Maker MV was directed by Takuya Hatakeyama for Kadokawa Corporation in Tokyo. The development team included people who had worked on console games for the Game Boy Advance and Nintendo 3DS. The software was intended to facilitate indie game development without the need for programming. Pixel Game Maker MV was developed with indie developers in mind, and its features were gradually introduced with reference to traditional 2D action games. During development the team attempted to recreate classic action games such as Super Mario Bros. and Mega Man to verify that most of the important features required to develop such games were included.
Takuya Hatakeyama reported that during development some issues, such as the software being resource-heavy or objects clipping, required extensive troubleshooting. These issues were fixed as far as possible, but he stated that they may still occur to some degree. To mitigate them, Pixel Game Maker MV incorporates tweaking functions for several parameters so that users can address such issues during the process of developing a game.
The software shipped with a series of quality-of-life features, such as tutorials and the “execute object action” function, which not only allows users to initiate the action of an object by bypassing other links and conditionals, but also lets them apply the action of one object to another so the same action can be reused for multiple objects. Other parameters, such as gravity and frame-by-frame animation adjustments, can be incorporated into specific designs. The early access version of PGMMV was released by Kadokawa Corporation as a beta on July 24, 2018. It was released on Steam on 19 September 2019.
Cross-platform support
At the time of its release, Pixel Game Maker MV could only produce games for Windows. Since version 1.0.3, however, the engine has been able to export games for the Nintendo Switch. Developers are required to enter into a partnership with Kadokawa Corporation to release games on the Nintendo platform, which supports a resolution of 1280×720. The company has used the same approach with games selected as winners of its "Pixel Game Maker MV Game Development Challenge", a periodic competition in which the winning contestants sign a publishing agreement that allows Kadokawa to publish the game on their behalf and produce derivative works. In those challenges, a resolution of 640×360 was also supported, provided that the display size was doubled.
Reception
Pixel Game Maker MV is reputed to have sold more than 2,000,000 copies worldwide. At the time of its release the software was considered a success within the independent developer community, but it was criticized for being a relatively new and undocumented product and for lacking features such as drag-and-drop functionality. Other reviews of Pixel Game Maker MV state that despite its potential, the engine is overshadowed by competitors such as Unreal Engine and Unity, as these have been continuously improved over many years while Pixel Game Maker MV is comparatively young. In 2020, it was praised by IGN for allowing developers to apply for worldwide publishing of games developed through PGMMV on the Nintendo eShop. The periodic game development contests organized by the team behind PGMMV, called the Pixel Game Maker MV Game Development Challenge, have generally been well received. Other reviews indicated that Pixel Game Maker MV is a flexible program with a decent resource library and a relatively easy user interface.
References
Video game development software
Video game IDE
Video game engines
Kadokawa Dwango franchises
Windows software
Top-down video games
Side-scrolling video games
Nintendo Switch games |
4090888 | https://en.wikipedia.org/wiki/Musix%20GNU%2BLinux | Musix GNU+Linux | Musix GNU+Linux is a discontinued live CD and DVD Linux distribution for the IA-32 processor family based on Debian. It contained a collection of software for audio production, graphic design, video editing and general purpose applications. The initiator and co-director of the project was Marcos Germán Guglielmetti.
Musix GNU+Linux was one of the few Linux distributions recognized by the Free Software Foundation as being composed completely of free software.
Musix was developed by a team from Argentina, Spain, Mexico and Brazil. The main language used in development discussion and documentation was Spanish; however, Musix had a community of users who speak Spanish, Portuguese, and English.
Software
Musix 1.0
The Musix 0.x and 1.0 Rx versions were released between 2005 and 2008, with Musix 1.0 R6 being the last stable release on DVD and Musix 1.0 R2R5 the last stable release on CD.
The Live CD system had 1350 software packages. The Live DVD had 2279 software packages.
Some of the programs included: Rosegarden and Ardour, both for musicians; Inkscape for vector design; GIMP for image manipulation; Cinelerra for video editing; and Blender for 3D animation.
Its desktop was very light (using only 18 MB of RAM with X.org), based on IceWM/ROX-Filer, and it had a unique feature: multiple "pinboards" ordered by General Purpose apps, Help, Office, Root/Admin, MIDI, Internet, Graphics, and Audio. The pinboards were arrays of desktop backgrounds and icons.
A small version of the KDE desktop was included by default in the Live-CD version, while the Live-DVD had a full KDE version supporting several languages.
Musix 2.0
Musix 2.0 was developed using the live-helper scripts from the Debian-Live project. The first Alpha version of Musix 2.0 was released on 25 March 2009 including two realtime-patched Linux-Libre kernels.
On 17 May 2009 the first beta version of Musix 2.0 was released.
The final Musix GNU+Linux 2.0 version on CD, DVD and USB was released in November 2009 by Daniel Vidal, Suso Comesaña, Carlos Sanchiavedraz, Joseangon and other Musix developers. This version was presented at the "Palau Firal de Congressos de Tarragona, España" by Suso Comesaña.
A similar Linux version, developed by Brazilian music teacher Gilberto André Borges, was named "Adriane" or "MusixBr". It was not a fork of Musix but was derived from Knoppix 6.1 Adriane.
See also
Comparison of Linux distributions
dyne:bolic – another free distribution for multimedia enthusiasts
GNU/Linux naming controversy
List of Linux distributions based on Debian
References
External links
Debian-based distributions
Free audio software
Operating system distributions bootable from read-only media
Knoppix
Linux media creation distributions
Free software only Linux distributions
2008 software
Linux distributions |
52742 | https://en.wikipedia.org/wiki/Desktop%20computer | Desktop computer | A desktop computer is a personal computer designed for regular use at a single location on or near a desk due to its size and power requirements. The most common configuration has a case that houses the power supply, motherboard (a printed circuit board with a microprocessor as the central processing unit, memory, bus, certain peripherals and other electronic components), disk storage (usually one or more hard disk drives, solid state drives, optical disc drives, and in early models a floppy disk drive); a keyboard and mouse for input; and a computer monitor, speakers, and, often, a printer for output. The case may be oriented horizontally or vertically and placed either underneath, beside, or on top of a desk.
History
Origins
Prior to the widespread use of microprocessors, a computer that could fit on a desk was considered remarkably small; the type of computers most commonly used were minicomputers, which were themselves far larger. Early computers took up the space of a whole room, and even minicomputers generally filled one or a few refrigerator-sized racks.
It was not until the 1970s that fully programmable computers appeared that could fit entirely on top of a desk. 1970 saw the introduction of the Datapoint 2200, a "smart" computer terminal complete with keyboard and monitor; it was designed to connect to a mainframe computer, but that didn't stop owners from using its built-in computational abilities as a stand-alone desktop computer. The HP 9800 series, which started out as programmable calculators in 1971 but was programmable in BASIC by 1972, used a smaller version of a minicomputer design based on ROM memory; it had small one-line LED alphanumeric displays and displayed graphics with a plotter. The Wang 2200 of 1973 had a full-size cathode ray tube (CRT) and cassette tape storage. The IBM 5100 in 1975 had a small CRT display and could be programmed in BASIC and APL. These were generally expensive specialized computers sold for business or scientific uses.
Growth and development
The Apple II, TRS-80, and Commodore PET were first-generation personal home computers launched in 1977, aimed at the consumer market rather than at businessmen or computer hobbyists. Byte magazine referred to these three as the "1977 Trinity" of personal computing. Throughout the 1980s and 1990s, desktop computers became the predominant type, the most popular being the IBM PC and its clones, followed by the Apple Macintosh, with the third-placed Commodore Amiga having some success in the mid-1980s but declining by the early 1990s.
Early personal computers, like the original IBM Personal Computer, were enclosed in a "desktop case", horizontally oriented to have the display screen placed on top, thus saving space on the user's actual desk, although these cases had to be sturdy enough to support the weight of CRT displays that were widespread at the time. Over the course of the 1990s, desktop cases gradually became less common than the more-accessible tower cases (Tower was a trademark of NCR created by ad agency Reiser Williams deYong) that may be located on the floor under or beside a desk rather than on a desk. Not only do these tower cases have more room for expansion, they have also freed up desk space for monitors which were becoming larger every year. Desktop cases, particularly the compact form factors, remain popular for corporate computing environments and kiosks. Some computer cases can be interchangeably positioned either horizontally (desktop) or upright (mini-tower).
Influential games such as Doom and Quake during the 1990s had pushed gamers and enthusiasts to frequently upgrade to the latest CPUs and graphics cards (3dfx, ATI, and Nvidia) for their desktops (usually a tower case) in order to run these applications, though this has slowed since the late 2000s as the growing popularity of Intel integrated graphics forced game developers to scale back. Creative Technology's Sound Blaster series were a de facto standard for sound cards in desktop PCs during the 1990s until the early 2000s, when they were reduced to a niche product, as OEM desktop PCs came with sound boards integrated directly onto the motherboard.
Decline
While desktops have long been the most common configuration for PCs, by the mid-2000s the growth shifted from desktops to laptops. Notably, while desktops were mainly produced in the United States, laptops had long been produced by contract manufacturers based in Asia, such as Foxconn. This shift led to the closure of the many desktop assembly plants in the United States by 2010. Another trend around this time was the increasing proportion of inexpensive base-configuration desktops being sold, hurting PC manufacturers such as Dell whose build-to-order customization of desktops relied on upselling added features to buyers.
Battery-powered portable computers had just a 2% worldwide market share in 1986. However, laptops have become increasingly popular, both for business and personal use.
Around 109 million notebook PCs shipped worldwide in 2007, a growth of 33% compared to 2006.
In 2008, it was estimated that 145.9 million notebooks were sold and that the number would grow in 2009 to 177.7 million. The third quarter of 2008 was the first time when worldwide notebook PC shipments exceeded desktops, with 38.6 million units versus 38.5 million units.
The sales breakdown of the Apple Macintosh has seen sales of desktop Macs staying mostly constant while being surpassed by Mac notebooks, whose sales rate has grown considerably; seven out of ten Macs sold were laptops in 2009, a ratio projected to rise to three out of four by 2010. The shift in form factors was partly due to the desktop iMac moving from the affordable G3 to the upscale G4 model, with subsequent releases positioned as premium all-in-ones. By contrast, the MSRP of the MacBook laptop lines has dropped through successive generations, such that the MacBook Air and MacBook Pro constitute the lowest price of entry to a Mac (with the exception of the even less expensive Mac Mini, which ships without a monitor and keyboard), and the MacBooks are the top-selling form factors of the Macintosh platform today.
The decades of development mean that most people already own desktop computers that meet their needs and have no need of buying a new one merely to keep pace with advancing technology. Notably, the successive release of new versions of Windows (Windows 95, 98, XP, Vista, 7, 8, 10 and so on) had been drivers for the replacement of PCs in the 1990s, but this slowed in the 2000s due to the poor reception of Windows Vista over Windows XP. Recently, some analysts have suggested that Windows 8 has actually hurt sales of PCs in 2012, as businesses have decided to stick with Windows 7 rather than upgrade. Some suggested that Microsoft has acknowledged "implicitly ringing the desktop PC death knell" as Windows 8 offers little upgrade in desktop PC functionality over Windows 7; instead, Windows 8's innovations are mostly on the mobile side.
The post-PC trend has seen a decline in the sales of desktop and laptop PCs. The decline has been attributed to increased power and applications of alternative computing devices, namely smartphones and tablet computers. Although most people exclusively use their smartphones and tablets for more basic tasks such as social media and casual gaming, these devices have in many instances replaced a second or third PC in the household that would have performed these tasks, though most families still retain a powerful PC for serious work.
Among PC form factors, desktops remain a staple in the enterprise market but have lost popularity among home buyers. PC makers and electronics retailers have responded by investing their engineering and marketing resources towards laptops (initially netbooks in the late 2000s, and then the higher-performance Ultrabooks from 2011 onwards), which manufacturers believe have more potential to revive the PC market than desktops.
In April 2017, StatCounter declared a "Milestone in technology history and end of an era" with the Android operating system more popular than Windows (the operating system that made desktops dominant over mainframe computers). Windows is still most popular on desktops (and laptops), while smartphones (and tablets) use Android, iOS or Windows 10 Mobile.
Resurgence
Although for casual use traditional desktops and laptops have seen a decline in sales, in 2018, global PC sales experienced a resurgence, driven by the business market. Desktops remain a solid fixture in the commercial and educational sectors. In addition, gaming desktops have seen a global revenue increase of 54% annually. For gaming, the global market of gaming desktops, laptops, and monitors is expected to grow to 61.1 million shipments by the end of 2023, up from 42.1 million, with desktops growing from 15.1 million shipments to 19 million. PC gaming as a whole now accounts for 28% of the total gaming market as of 2017. This is partially due to the increasing affordability of desktop PCs.
Types
By size
Full-size
Full-sized desktops are characterized by separate display and processing components. These components are connected to each other by cables or wireless connections. They often come in a tower form factor. These computers are easy to customize and upgrade per user requirements, e.g. with expansion cards.
Early extended-size tower computers (significantly larger than a mainstream ATX case) were sometimes labeled "deskside computers", but this naming is now quite rare.
Compact
Compact desktops are reduced in physical proportions compared to full-sized desktops. They are typically small-sized, inexpensive, low-power computers designed for basic tasks such as web browsing, accessing web-based applications, document processing, and audio/video playback. Hardware specifications and processing power are usually reduced and hence make them less appropriate for running complex or resource-intensive applications. A nettop is a notable example of a compact desktop.
All-in-one
An all-in-one (AIO) desktop computer integrates the system's internal components into the same case as the display, thus occupying a smaller footprint (with fewer cables) than desktops that incorporate a tower. All-in-one systems are rarely labeled as desktop computers.
This form factor was popular during the early 1980s for personal computers intended for professional use, such as the Kaypro II, Osborne 1, TRS-80 Model II, and Compaq Portable. Many manufacturers of home computers like Commodore and Atari included the computer's motherboard in the same enclosure as the keyboard; these systems were most often connected to a television set for display. Apple has manufactured several popular examples of all-in-one computers, such as the original Macintosh of the mid-1980s and the iMac of the late 1990s and 2000s. By the mid-2000s, many all-in-one designs had moved to flat panel displays, and later models incorporated touchscreen displays, allowing them to be used similarly to a mobile tablet.
Some all-in-one desktops, such as the iMac G4, have used laptop components in order to reduce the size of the system case. And like most laptops, some all-in-one desktop computers are characterized by an inability to customize or upgrade internal components, as the systems' cases do not provide convenient access to upgradable components, and faults in certain aspects of the hardware may require the entire computer to be replaced, regardless of the health of its remaining components. There have been exceptions to this; the monitor portion of HP's Z1 workstation can be angled flat, and opened like a vehicle hood for access to internal hardware.
By usage
Gaming computer
Gaming computers are desktop computers with high-performance CPUs, GPUs, and RAM, optimized for playing video games at high resolutions and frame rates. Gaming peripherals usually include mechanical keyboards for faster response times and a gaming mouse that can track movement at a higher dots-per-inch setting.
Home theater
These desktops are connected to home entertainment systems and are typically used for entertainment purposes. They come with high-definition displays, video graphics, surround sound, and TV tuner systems to complement typical PC features.
Thin client / Internet appliance
Over time some traditional desktop computers have been replaced with thin clients utilizing off-site computing solutions like the cloud. As more services and applications are served over the internet from off-site servers, local computing needs decrease, driving desktop computers to become smaller and cheaper and to need less powerful hardware. More applications and, in some cases, entire virtual desktops are moved off-site, and the desktop computer runs only an operating system or a shell application while the actual content is served from a server. Thin client computers may do almost all of their computing on a virtual machine at another site. Internal, hosted virtual desktops can offer users a completely consistent experience from anywhere.
Workstation
Workstations are an advanced class of personal computer designed for a single user, more powerful than a regular PC but less powerful than a server for general computing. They are capable of high-resolution and three-dimensional interfaces, and are typically used to perform scientific and engineering work. Like server computers, they are often connected with other workstations. The main form factor for this class is the tower case, but most vendors also produce compact or all-in-one low-end workstations. Most tower workstations can be converted to a rack-mount version.
Desktop server
Desktop servers are a class of servers oriented toward small businesses; they are typically entry-level server machines with computing power similar to a workstation or gaming PC and some mainstream server features, but with only basic graphics capabilities. Some desktop servers can be converted to workstations.
Uncommon types
Quantum technology
On January 29, 2021, Shenzhen SpinQ Technology announced that it would release the first-ever desktop quantum computer. This will be a miniaturized version of its previous quantum computer, based on the same technology (nuclear magnetic resonance), and will be a two-qubit device. Applications will mostly be educational, aimed at high school and college students. The company claims SpinQ will be released to the public by the fourth quarter of 2021.
Comparison with laptops
Desktops have an advantage over laptops in that the spare parts and extensions tend to be standardized, resulting in lower prices and greater availability. For example, the size and mounting of the motherboard are standardized into ATX, microATX, BTX or other form factors. Desktops have several standardized expansion slots, like conventional PCI or PCI Express, while laptops tend to have only one mini-PCI slot and one PC Card slot (or ExpressCard slot). Procedures for assembly and disassembly of desktops tend to be simple and standardized as well. This tends not to be the case for laptops, though adding or replacing some parts, like the optical drive, hard disk, or adding an extra memory module is often quite simple. This means that a desktop computer configuration, usually a tower case, can be customized and upgraded to a greater extent than laptops. This customization has kept tower cases popular among gamers and enthusiasts.
Another advantage of the desktop is that (apart from environmental concerns) power consumption is not as critical as in laptop computers because the desktop is exclusively powered from the wall socket. Desktop computers also provide more space for cooling fans and vents to dissipate heat, allowing enthusiasts to overclock with less risk. The two large microprocessor manufacturers, Intel and AMD, have developed special CPUs for mobile computers (i.e. laptops) that consume less power and produce less heat, but with lower performance levels.
Laptop computers, conversely, offer portability that desktop systems (including small form factor and all-in-one desktops) cannot, thanks to their compact size and clamshell design. The laptop's all-in-one design provides a built-in keyboard and a pointing device (such as a touchpad) for its user and can draw on power supplied by a rechargeable battery. Laptops also commonly integrate wireless technologies like WiFi, Bluetooth, and 3G, giving them a broader range of options for connecting to the internet, though this trend is changing as newer desktop computers come integrated with one or more of these technologies.
A desktop computer needs a UPS to handle electrical disturbances like short interruptions, blackouts, and spikes; achieving an on-battery time of more than 20–30 minutes for a desktop PC requires a large and expensive UPS. A laptop with a sufficiently charged battery can continue to be used for hours in case of a power outage and is not affected by short power interruptions and blackouts.
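As a rough back-of-the-envelope illustration of why (the battery capacity, efficiency, and load figures below are assumptions for the example, not measurements of any particular UPS), on-battery runtime is approximately the usable battery energy divided by the load power:

```python
def ups_runtime_minutes(battery_wh, load_w, inverter_efficiency=0.85):
    """Approximate on-battery runtime in minutes for a constant load."""
    return battery_wh * inverter_efficiency / load_w * 60

# A desktop drawing 300 W from a small 120 Wh UPS lasts about 20 minutes,
print(ups_runtime_minutes(120, 300))  # ~20.4
# while a 60 W laptop-class load on the same battery lasts well over an hour.
print(ups_runtime_minutes(120, 60))   # ~102.0
```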
A desktop computer often has the advantage over a comparable laptop in computational capacity. Overclocking is often more feasible on a desktop than on a laptop; similarly, hardware add-ons such as discrete graphics co-processors may be possible to install only in a desktop.
See also
Desktop replacement computer
Gaming computer
Home computer
Legacy ports
Operating system
Single board computer
Software
x86
x86-64
References
External links
Tour of the major components of a desktop computer at HowStuffWorks
Classes of computers
Personal computers
Office equipment |
20574670 | https://en.wikipedia.org/wiki/Trine%20%28video%20game%29 | Trine (video game) | Trine is a side-scrolling, action platform-puzzle video game developed by Frozenbyte and published by Nobilis. The game was originally released for Microsoft Windows in 2009, and has since been ported to Linux, OS X, and game consoles. The game takes place in a medieval fantasy setting and allows players to take control of three separate characters who can battle enemies and solve environmental puzzles.
A sequel, titled Trine 2, was released in 2011. A remake of Trine, titled Trine: Enchanted Edition, was released in 2014; the Enchanted Edition uses Trine 2's updated engine and includes online multiplayer. The third installment in the series, Trine 3: The Artifacts of Power, was released on August 20, 2015. A fourth installment, Trine 4: The Nightmare Prince, was released on October 8, 2019.
Gameplay
The player controls and switches between three different characters (a thief, a knight, and a wizard) to try to complete levels. There is also a cooperative play feature, whereby multiple players can join in at any time to control different characters simultaneously. Each character has their own health and energy meter. Energy is used for certain weapons and abilities, and is replenished by blue-colored bottles found throughout levels. Health is replenished by collecting heart-shaped containers, which result from destroying certain enemies.
The player also has a single experience rating that is shared among all characters, and is incremented by acquiring green-colored bottles found throughout levels. Every 50 experience points, each character is given one point towards the purchase of upgrades to their abilities. Treasure chests are also spread throughout levels, each containing a charm that offers the bearing character new or upgraded abilities. The player can transfer these objects between characters, though some will only have an effect on certain characters.
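As a minimal sketch of how such shared progression can be modeled (purely illustrative; the class and method names are invented and this is not Trine's actual code), the shared experience pool converts every 50 points into one upgrade point per character:

```python
class Party:
    """Shared experience pool granting upgrade points to every character."""

    POINTS_PER_UPGRADE = 50

    def __init__(self):
        self.experience = 0

    def collect_vial(self, xp=1):
        # Green experience bottles feed one pool shared by all three heroes.
        self.experience += xp

    def upgrade_points_per_character(self):
        # Every full 50 experience points grants each character one point.
        return self.experience // self.POINTS_PER_UPGRADE

party = Party()
for _ in range(120):
    party.collect_vial()
print(party.upgrade_points_per_character())  # 2 points per character so far
```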
Checkpoints are spread throughout levels, in the form of silver orbs on pedestals. Upon crossing a checkpoint, any deceased characters are brought back to life, and any characters below a certain amount of health and energy are replenished up to that amount. The amount of energy and health replenished is dependent upon the difficulty setting chosen by the player. When a character dies, the player must choose another living character to continue playing the level. If all three characters die, the player is sent back to the last checkpoint crossed, and all three characters are resurrected.
Enemies primarily include walking skeletons, spiders, and bats, along with boss characters, like giant skeletons and other large creatures. Some skeletons are armed with swords, others with bows and arrows, some spit fire, and some have shields. Skeletons are capable of scaling walls. Other dangers include lava, fireballs, giant sharp pendulums, and various other booby traps.
Trine uses Nvidia's PhysX physics engine to provide objects and characters with full physics interaction.
Characters
Zoya the Thief, the first of the three heroes introduced in the game, is voiced by Vicky Krueger. The Thief's weapon is her bow and arrow. The bow can be “charged” by holding down the fire button before releasing, and longer charges make for farther, straighter shots. The Thief also has a grappling hook which can be fired at wooden surfaces. Regular arrows and the grappling hook are unlimited, and do not diminish the Thief's energy. At some point during the game, the Thief can acquire the ability to shoot flaming arrows, which do diminish her energy. Flaming arrows inflict more damage on enemies, can break certain objects, and can light torches found in certain dark areas of the game. The Thief's possible upgrades include shooting more arrows with each shot, faster charging of the bow, and more damage inflicted with the flaming arrow. She is the quietest of the three heroes, and takes a strong liking to the magical forest ruins presented towards the end of the game.
Amadeus the Wizard, voiced by Kevin Howarth, has the ability to use sorcery to move objects remotely, as well as conjure new objects into existence. Initially, the Wizard is only able to conjure a cube-shaped object. At some point in the game, he acquires the ability to conjure an oblong platform (called a “plank” in the game). The box and plank behave as normal objects, obeying the laws of physics and gravity. The Wizard later acquires the ability to conjure a floating object shaped like a square pyramid (called a floating platform in the game), which remains at a fixed point in space unless the Wizard moves it.
Conjured objects are primarily used to help overcome obstacles and reach difficult areas. The plank, for example, can be used to bridge gaps. All conjuring and remote moving drains the Wizard's energy. The Wizard has no traditional attacks; however, he can crush certain enemies by hurling objects into them. He can also block attacks by conjuring or moving objects in their path. The Wizard's possible upgrades include the ability to conjure more than one box or plank into simultaneous existence (whereas initially only one of each could be on the screen at once), changing future conjured floating platforms into wood (so that the Thief can attach her grappling hook to them), and making the floating platform into an explosive that the Knight or Thief can trigger. In the game, he is shown as being wise but also foolish, cowardly but determined, and imagines himself to be a bit of a ladies' man.
Pontius the Knight's initial weapons are his sword and shield. He is voiced in the game by Brian Bowles, and is presented as a brave and loyal companion despite the fact he is not that bright, and has a strong love for food and drink. The player can at some point acquire a flaming sword during the game, which the Knight can use to inflict more damage as well as use to light torches; the player can also pick up a sledgehammer for Pontius. The Knight also has the ability to lift certain objects and hurl them, and his shield can be used to deflect enemy attacks, as well as falling objects and projectiles. The Knight's possible upgrades include additional sword damage, charging attacks, and additional sledgehammer attacks.
Plot
Trine takes place in a forsaken and ruined kingdom. After enjoying a period of great peace, the king died without leaving an heir, plunging the kingdom into political instability. Taking advantage of the chaos, an undead army suddenly appeared and attacked, forcing the inhabitants to abandon the realm, save for those few souls brave enough to face the perils that had now befallen it.
The game's story is primarily told by an all-knowing narrator voiced by Terry Wilton. Speaking after the fact, he fills in plot details between the levels, as well as introducing and concluding the game.
After some time, the Astral Academy, an institution of magical studies, is evacuated due to the undead menace; Zoya the Thief sees this as an opportunity to search the academy for treasure. Unknown to her, Amadeus the Wizard is just waking up after sleeping for a fortnight due to a backfired potion he prepared while trying to learn the fireball spell; he realizes he must escape immediately. Finally, Pontius the Knight also arrives, convinced that it is his duty to protect the academy. The three meet at the shrine of ancient treasure and, touching a magical object at the same time, disappear. The Wizard recalls that the treasure is actually an artifact called the Trine, which has the power to bind souls. This results in only one of them being able to physically exist, with the other two being forced to remain inside the Trine. Amadeus also remembers that the Trine was connected to the legend of a guardian, whose tomb could be found under the Astral Academy.
Searching for a way to free themselves of the Trine's effect, the three heroes explore the catacombs under the academy, finding the guardian's tomb. The Wizard deciphers some of the inscriptions on it and discovers that there were once three artifacts: one for the soul, one for the mind, and one for the body, each protected by a guardian. The guardians used the three objects to maintain peace throughout the kingdom. Amadeus believes that reuniting the three artifacts might undo the spell binding their souls. The inscriptions also suggest that the artifact of the mind was guarded in the castle of the old king. The trio searches the castle; while they do not find the artifact, they learn from the king's journal that the three relics were originally created in some ruins immersed in a large forest, the home of the three guardians.
In the ruins, one of the guardians gives the heroes visions of the past. These ruins were the resting place of the artifact of the body, but an earthquake left its shrine vulnerable and it was stolen. It was then somehow paired with the artifact of the mind. Without the Trine, the artifact of souls, the other two became tainted and gave birth to an evil tower and the undead, creatures with a physical body and capable of thought, but devoid of purpose and righteousness. The trio ascends the tower, avoiding the obstacles created by the tormented soul of the old king, and combines the Trine with the two lost artifacts, unbinding their souls. The undead are cleansed from the kingdom, allowing it to eventually recover, and the Wizard, the Thief, and the Knight are proclaimed its heroes. The game ends with the narrator describing what happens to the three heroes: Pontius gives in to his true passion and becomes the new king's royal ale provider, Zoya is given reign over the forest ruins, and Amadeus marries a lady called Margaret, who gives birth to triplets that master the fireball as infants.
Development
Trine was originally started as a side-project by Jukka Kokkonen, Frozenbyte's senior programmer, while the rest of the team was working on another project. The other project ran into publisher and funding problems however, and the team decided to focus their efforts instead on developing Trine.
Release
The game was first released for Windows on July 3, 2009. The PlayStation Network version was to be released in July 2009, but last-minute bugs discovered in testing caused a delay. It was released on September 17, 2009 in Europe and on October 22, 2009 in North America. A port of the game to OS X was released on November 2, 2010.
The game was later ported to Linux by Alternative Games, with the finished port first released as part of the Humble Frozenbyte Bundle. A version for Xbox Live Arcade was being developed by Atlus but, according to Frozenbyte, “most likely won't happen”.
On June 18, 2014, a beta for Trine: Enchanted Edition was released. It ports the game to the Trine 2 engine and adds online multiplayer. It was officially released on July 24, 2014, and Trine: Enchanted Edition was also released on PlayStation 4 and Wii U. Partnered with GameTrust, Frozenbyte announced and released Trine: Enchanted Edition on the Nintendo Switch on November 9, 2018.
Reception and legacy
Trine received generally favorable reviews, according to review aggregator Metacritic. Trine won GameSpot "Best Downloadable Game" Editor's Choice award at Electronic Entertainment Expo 2009.
PC Format magazine praised the game's "stunning attention to detail throughout", and added that its "beautifully fluid game mechanics are impossible not to appreciate." IT Reviews recommended Trine, and concluded: "Trine is an aesthetically pleasing and well executed puzzle platformer, with a distinct addictive streak when it comes to fully exploring the levels in order to upgrade your characters to their maximum power. When you're done with single player, the multiplayer mode adds extra life to the game, as the experience is genuinely different." IGN was more reserved, saying that "a lack of enemy variety, disappointing conclusion, and the wonky multiplayer keep Trine from greatness, but this is still a highly recommended puzzle platformer." The Australian video game talk show Good Games reviewers both gave Trine 15/20.
In February 2011, Frozenbyte announced Trine had sold approximately 400,000 copies across all platforms. Later that year on December 8, shortly before the release of the sequel, they stated that sales of the game had by then grown to 1.1 million copies. In October 2014, Frozenbyte announced that the Trine series had sold 7 million copies by then.
In November 2013, an announcer pack featuring the voice of the narrator was released for the multiplayer online battle arena game, Dota 2.
Sequels
To date, three sequels to Trine have been developed by Frozenbyte. Trine 2 was released in December 2011 for Windows, PlayStation 3, and Xbox 360, with later ports to the Wii U, PlayStation 4, and Nintendo Switch. Trine 2 included the series' first downloadable content pack, the Goblin Menace. Trine 3: The Artifacts of Power was released for personal computers in August 2015, with later ports for the PlayStation 4 and Nintendo Switch. Trine 3 veered from the previous games by basing the gameplay on 3D platforming rather than the 2.5D of the previous two games, and was generally not as well received due to this change. Trine 4: The Nightmare Prince was released in October 2019 for personal computer, PlayStation 4, Xbox One, and Nintendo Switch, returning to the 2.5D style of the first two games. Alongside Trine 4, the Trine: Ultimate Collection was released, containing all four games and additional content, as well as physical collectible items for the physical version of the game.
References
External links
Official website
2009 video games
Action-adventure games
Asymmetrical multiplayer video games
Cancelled Xbox 360 games
Fantasy video games
Frozenbyte games
Linux games
MacOS games
Multiplayer and single-player video games
Multiplayer hotseat games
Nintendo Network games
PlayStation 3 games
PlayStation 4 games
PlayStation Network games
Proprietary software that uses SDL
Puzzle-platform games
Side-scrolling platform games
SouthPeak Games
Video games with Steam Workshop support
Video games developed in Finland
Video games featuring female protagonists
Video games scored by Ari Pulkkinen
Video games with 2.5D graphics
Wii U eShop games
Windows games |
39573766 | https://en.wikipedia.org/wiki/Ninetology%20Outlook%20Pure | Ninetology Outlook Pure | The Ninetology Outlook Pure (T8700) is a tablet powered by a dual-core 1.0 GHz Cortex-A9 processor, running the Android 4.0 Ice Cream Sandwich operating system, with 3G capability. The device is the result of a collaboration between Clixster, Angkatan Koperasi Kebangsaan Malaysia Bhd (ANGKASA) and Ninetology.
History
Release
The Ninetology Outlook Pure (T8700) was announced at a launch event organized by Clixster on 16 May 2013 and was released to the public for purchase in June.
Features
Hardware
The Ninetology Outlook Pure (T8700) has a dual-core 1.0 GHz Cortex-A9 processor and a 7.0-inch HD LCD capacitive display with a resolution of 1024 x 600 (196 ppi pixel density). It measures 192.4 mm (H) x 122.5 mm (W) x 10.5 mm (T) and weighs 330 grams.
The Ninetology Outlook Pure (T8700) supports 3G and WiFi connectivity and has a 2.0-megapixel rear camera and a 0.3-megapixel front-facing camera.
It has a 3000 mAh lithium-ion battery.
Additional storage is available via a MicroSD card socket, which is certified to support up to 32 GB of additional storage.
Software
The Ninetology Outlook Pure runs the Android 4.0 Ice Cream Sandwich operating system and is preloaded with a variety of applications:
Web: Native Android Browser
Social: Facebook, YouTube
Media: Camera, Gallery, FM Radio, Music Player, Video Player
Personal Information Management: Calendar, Detail Contact Information
Utilities: Calculator, Alarm Clock, Google Maps, AirAsia, Voice Recorder, Tune Talk
Gaming: Diamond Dash, Subway Surfer
References
External links
http://ninetology.com/malaysia/products_tablets_outlook_pure_details.html
Smartphones
Tablet computers introduced in 2013
Android (operating system) devices |
13225880 | https://en.wikipedia.org/wiki/College%20of%20Computer%20Studies%2C%20University%20of%20Nueva%20Caceres | College of Computer Studies, University of Nueva Caceres | The University of Nueva Caceres College of Computer Studies was established when the population of Computer Science majors (formerly belonging to the College of Arts and Sciences) grew in size. It was formerly known as the College of Information Technology. This eventually changed in 2003 when the college added two new four-year courses: Bachelor of Science in Information Technology and Bachelor of Science in Information Management.
The college is headed by Cloyd San Juan, who serves as dean. A 2D animation course was added in 2006.
Most graduates work as professionals in their respective fields.
The UNC College of Computer Studies offers options to meet the needs of students who are interested in taking a four-year college degree course as well as those who plan to enter the job market directly after graduation. Students can choose from a variety of degrees, certificates, and individual courses.
The college offers courses that give students an opportunity to specialize in computers, while becoming exposed to a variety of liberal arts courses.
Three degrees of concentration are available: Bachelor of Science in Computer Science (BSCS), Bachelor of Science in Information Technology (BSIT) and Bachelor of Science in Information Management (BSIM).
Certificate courses are available for students who choose to focus on acquiring computer skills. These are two-year courses in Associate in Computer Technology (ACT), Computer Technician (CT), and Network Technician (NT).
University of Nueva Caceres |
15030594 | https://en.wikipedia.org/wiki/Quark/4 | Quark/4 | Quark/4 is a 1971 anthology of short stories and poetry edited by Samuel R. Delany and Marilyn Hacker. It is the fourth and final volume in the Quark series. The stories and poems are original to this anthology, with the exception of "Voortrekker", which had previously appeared in the magazine Frendz.
Contents
On Speculative Fiction, by Samuel R. Delany & Marilyn Hacker
"Basileikon: Summe", by Avram Davidson
"Voortrekker", by Michael Moorcock
"Brass and Gold, or Horse and Zeppelin in Beverly Hills", by Philip José Farmer
"The Song of Passing", by Marco Cacchioni
"Norman Vs. America", by Charles Platt
"The True Reason for the Dreadful Death of Mr. Rex Arundel", by Helen Adam
"Acid Soap Opera", by Gail Madonia
"Bodies", by Thomas M. Disch
"Nightsong", by Marilyn Hacker
"Cages", by Vonda N. McIntyre
"Man of Letters", by Marek Obtulowicz
"The Fourth Profession", by Larry Niven
Twelve Drawings, by Olivier Olivier
from The Day, by Stan Persky
References
1971 short story collections
Science fiction anthology series |
3193644 | https://en.wikipedia.org/wiki/Amiga%20Corporation | Amiga Corporation | Amiga Corporation was a United States computer company formed in the early 1980s as Hi-Toro. It is most famous for having developed the Amiga computer, code named Lorraine.
History
In the early 1980s Jay Miner, along with other Atari staffers, had become fed up with management and decamped. In September 1982, they set up another chip-set project under a new company in Santa Clara, California, called Hi-Toro (which meant "high bull" to them; the company was later renamed Amiga), where they could have some creative freedom. There, they started to create a new 68000-based games console, codenamed Lorraine, that could be upgraded to a full-fledged computer. The initial start-up financing of Amiga Corporation was provided by three dentists in Florida, who later recouped their investment when Commodore bought the company.
To raise money for the Lorraine project, Amiga designed and sold joysticks and game cartridges for popular game consoles such as the Atari 2600 and ColecoVision, as well as an odd input device called the Joyboard, essentially a joystick the player stood on.
During development in 1983, Amiga had exhausted venture capital and was desperate for more financing. Jay Miner approached his former employer, Atari, which then paid Amiga to continue development work. In return Atari was to obtain one-year exclusive use of the design. Atari had plans for a 68000-based machine, code-named "Mickey", that would have used customized chips, but details were sparse.
During this period a downturn started in the video game business that would soon turn into an outright rout known as the Video game crash of 1983. By the end of the year, Atari was losing about $1 million a day, and its owner, Warner Communications, became increasingly desperate to sell the company. For some time, no one was interested.
Meanwhile, at Commodore International a fight was brewing between Jack Tramiel, the president, and Irving Gould, the primary shareholder. Tramiel was pressing the development of a 32-bit machine to replace their earlier Commodore 64 and derived machines, fearing a new generation of machines like the Apple Macintosh would render the 64 completely obsolete. The fighting continued until Tramiel was dismissed on January 13, 1984.
Tramiel immediately formed a holding company, Tramel Technology, Ltd. (a phonetic spelling of "Tramiel"), and began to visit various US computer companies with the intention of purchasing a company for manufacturing and possible technology acquisitions. Tramiel visited Mindset (run by Roger Badersher, former head of Atari's Computer Division) and Amiga. Amiga initially entered talks with Tramiel, but the talks eventually fell through, as Tramiel told Amiga staff that he was very interested in the chipset but not the staff. In the meantime, he had set his chief engineer (former Commodore engineer Shiraz Shivji) the task of developing a new low-cost, high-end computer system.
Tramiel's design for his next generation computer was 95% completed by June (which only fueled speculation that Shivji and other engineers had taken technology with them from Commodore). Tramiel discovered that Warner Communications wanted to sell Atari, which at that point was losing about $10,000 a day. Interested in Atari's overseas manufacturing and worldwide distribution network for his new computer, he approached Atari and entered talks. After on again/off again negotiations with Atari in May and June 1984, Tramiel had secured his funding and bought Atari's Consumer Division (which included the console and home computer departments) that July; Tramel Technology, Ltd. soon became Atari Corporation. Commodore almost immediately filed an injunction against Tramiel and Atari, seeking to bar them from releasing their new computer.
One of Tramiel's first acts after forming Atari Corp. was to fire most of Atari's remaining staff and cancel almost all ongoing projects in order to review their continued viability. It was during this time in late July that Tramiel's representatives discovered the original Atari Inc./Amiga contract.
BYTE had reported in April 1984 that Amiga "is developing a 68000-based home computer with a custom graphics processor. With 128K bytes of RAM and a floppy-disk drive, the computer will reportedly sell for less than $1000 late this year". It turned out that Amiga was supposed to deliver the Amiga chipset to Atari Inc. on June 30, 1984, or forfeit the company and its technology. With the deadline fast approaching and still not having enough funds to finish development, the Amiga crew went on alert after hearing rumors that Tramiel was in closed negotiations to complete the purchase of Atari within several days. Remembering Tramiel's visit that spring during their investor campaign, they began scrambling for another large investor. So, at around the same time that Tramiel was in negotiations with Atari, Amiga wound up entering into discussions with Commodore. The discussions ultimately led to Commodore wanting to purchase Amiga outright, which would (from Commodore's viewpoint) cancel any outstanding contracts — including the contract given to the now defunct Atari Inc. So instead of Amiga delivering the chipset to Atari, Commodore delivered a check for $500,000 to Atari Corp. on Amiga's behalf (right about the time Tramiel's representatives were discovering the contract), in effect returning the funds invested in Amiga for completion of the Lorraine chipset.
Seeing a chance to gain some leverage, Tramiel immediately used the situation to countersue Commodore through its new (pending) subsidiary, Amiga, filing on August 13, 1984. He sought damages and an injunction to bar Amiga (and effectively Commodore) from producing anything with that technology. The suit sought to render Commodore's new acquisition (and the source for its next generation of computers) useless and to do to Commodore what it was trying to do to him.
Meanwhile, at Commodore, the Amiga team (according to conversations Curt Vendel of Atarimuseum.com had directly with Dave Needle and Joe Decuir of Amiga) sat in limbo for nearly the entire summer because of the lawsuit. No word on the status of the chipset, the Lorraine computer system or the team's fate was known. Finally, in the fall of 1984, Commodore informed the team that the Lorraine project was active again: the chipset was to be improved, the OS developed and the hardware design completed.
From this point on the former Amiga Corporation was a division of Commodore. Over the next few years many employees felt Commodore's management proved to be as annoying as Atari's, and most of the team members left, were laid off, or were fired. Meanwhile, Atari used this time to finish and release the Atari ST computer just months ahead of the release of the Amiga.
Both lawsuits themselves were eventually laid to rest in March 1987, when Commodore and Atari Corp. settled out of court in a closed decision.
See also
Amiga, Inc.
Commodore
References
External links
On the Edge: The Spectacular Rise and Fall of Commodore (2005), Variant Press. A book describing the formation of Amiga Corporation and subsequent acquisition by Commodore.
Amiga History Guide: Amiga 1982 - 1984
Amiga companies
Home computer hardware companies
Defunct computer hardware companies
Defunct computer companies based in California
Technology companies based in the San Francisco Bay Area
Companies based in Santa Clara, California
Computer companies established in 1982
Companies disestablished in 1984
1982 establishments in California
1984 disestablishments in California
Defunct companies based in the San Francisco Bay Area
sv:Amiga Inc. |
28205830 | https://en.wikipedia.org/wiki/Asprox%20botnet | Asprox botnet | The Asprox botnet (discovered around 2008), also known by its aliases Badsrc and Aseljo, is a botnet mostly involved in phishing scams and in performing SQL injections into websites in order to spread malware. It is highly infectious malware that spreads through email and through cloned websites, and it can be used to harvest personal or financial information and track online activity.
Operations
Since its discovery in 2008, the Asprox botnet has been involved in multiple high-profile attacks on various websites in order to spread malware. The botnet itself consists of roughly 15,000 infected computers as of May 2008, although the size of the botnet is highly variable, as its controllers have been known to deliberately shrink (and later regrow) it in order to prevent more aggressive countermeasures from the IT community.
The botnet propagates itself in a somewhat unusual way: it actively searches for and infects vulnerable websites running Active Server Pages. Once it finds a potential target, the botnet performs a SQL injection on the website, inserting an IFrame which redirects visitors to a site hosting malware.
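From the defensive side, the injected frames are detectable because they typically point off-site or are rendered invisibly small. The following is a minimal, hypothetical Python sketch of such a scanner; the trusted-host list and the simple substring check are illustrative assumptions, not part of any real Asprox countermeasure.

```python
# Hypothetical sketch: flag <iframe> tags that point off-site or are
# rendered invisibly small, two common traits of injected redirector frames.
from html.parser import HTMLParser

class IframeScanner(HTMLParser):
    def __init__(self, trusted_hosts):
        super().__init__()
        self.trusted_hosts = trusted_hosts
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        a = dict(attrs)
        src = a.get("src", "")
        # Off-site source, or a 0/1-pixel frame, is treated as suspicious.
        off_site = src and not any(h in src for h in self.trusted_hosts)
        tiny = a.get("width") in ("0", "1") or a.get("height") in ("0", "1")
        if off_site or tiny:
            self.suspicious.append(src)

scanner = IframeScanner(trusted_hosts=["example.org"])  # hypothetical site
scanner.feed('<p>ok</p><iframe src="http://bad.example/x" width="1" height="1"></iframe>')
print(scanner.suspicious)  # ['http://bad.example/x']
```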
The botnet usually attacks in waves; the goal of each wave is to infect as many websites as possible, thus achieving the highest possible spread rate. Once a wave is completed, the botnet lies dormant for an extended amount of time, likely to prevent aggressive counterreactions from the security community. The initial wave took place in July 2008, infecting an estimated 1,000 to 2,000 pages. An additional wave took place in October 2009, infecting an unknown number of websites. Another wave took place in June 2010, increasing the estimated total number of infected domains from 2,000 to an estimated 10,000 to 13,000 within a day.
Notable high-profile infections
While the infection targets of the Asprox botnet are randomly determined through Google searches, some high-profile websites have been infected in the past. Some of these infections have received individual coverage.
Sony PlayStation U.S.
Adobe's Serious Magic website
Several government, healthcare and business related websites
See also
Botnet
Malware
Email spam
Cybercrime
Internet security
References
Internet security
Multi-agent systems
Distributed computing projects
Spamming
Botnets |
3790487 | https://en.wikipedia.org/wiki/Plaintext-aware%20encryption | Plaintext-aware encryption | Plaintext-awareness is a notion of security for public-key encryption. A cryptosystem is plaintext-aware if it is difficult for any efficient algorithm to come up with a valid ciphertext without being aware of the corresponding plaintext.
From a lay point of view, this is a strange property. Normally, a ciphertext is computed by encrypting a plaintext. If a ciphertext is created this way, its creator would be aware, in some sense, of the plaintext. However, many cryptosystems are not plaintext-aware. As an example, consider the RSA cryptosystem without padding. In the RSA cryptosystem, plaintexts and ciphertexts are both values modulo N (the modulus). Therefore, RSA is not plaintext-aware: one way of generating a ciphertext without knowing the plaintext is to simply choose a random number modulo N.
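This failure can be demonstrated directly. The following minimal Python sketch (not a secure implementation) uses toy primes chosen purely for illustration: an adversary outputs a random value modulo N, which is a perfectly valid unpadded-RSA ciphertext, while having no idea of the corresponding plaintext.

```python
# A minimal sketch of why unpadded ("textbook") RSA is not plaintext-aware.
# Toy primes for illustration only; requires Python 3.8+ for pow(e, -1, m).
import secrets

p, q = 1009, 1013                    # toy primes; real RSA uses huge primes
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

# An adversary produces a valid ciphertext with no plaintext in mind:
c = secrets.randbelow(N)

# The holder of the private key recovers *some* plaintext m, which the
# ciphertext's creator never knew -- so plaintext-awareness fails.
m = pow(c, d, N)
assert pow(m, e, N) == c
print(f"random ciphertext {c} decrypts to plaintext {m}")
```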
In fact, plaintext-awareness is a very strong property. Any cryptosystem that is semantically secure and is plaintext-aware is actually secure against a chosen-ciphertext attack, since any adversary that chooses ciphertexts would already know the plaintexts associated with them.
History
The concept of plaintext-aware encryption was developed by Mihir Bellare and Phillip Rogaway in their paper on optimal asymmetric encryption, as a method to prove that a cryptosystem is chosen-ciphertext secure.
Further research
Limited research on plaintext-aware encryption has been done since Bellare and Rogaway's paper. Although several papers have applied the plaintext-aware technique in proving encryption schemes chosen-ciphertext secure, only three papers revisit the concept of plaintext-aware encryption itself, all focused on the definition given by Bellare and Rogaway, which inherently requires random oracles. Plaintext-aware encryption is known to exist when a public-key infrastructure is assumed.
Also, it has been shown that weaker forms of plaintext-awareness exist under the knowledge of exponent assumption, a non-standard assumption about Diffie-Hellman triples.
Finally, a variant of the Cramer-Shoup encryption scheme was shown to be fully plaintext-aware in the standard model under the knowledge-of-exponent assumption.
See also
Topics in cryptography
References
Theory of cryptography |
5231139 | https://en.wikipedia.org/wiki/Lego%20Creator%20%28video%20game%29 | Lego Creator (video game) | Lego Creator is a sandbox game for Microsoft Windows, which involves building with virtual Lego elements. The game has no missions, objectives, challenges, or money constraints. The game was released on 11 November 1998.
Gameplay
Lego Creator was initially conceived of as an 'Evergreen' replication of the physical toy. Starting with the 'Town' range, the game would expand at each release with the addition of further product themes. Functionality would also be enhanced with each 'content pack'.
Ultimately, individual ranges remained independent, and emphasis shifted toward a play experience and away from freeform construction. Originally, it had been hoped that the sheer scale of unlimited bricks might offset the loss of tactile merit, but such hope was compromised by the computers of the day. Plans included being able to build content that could be seamlessly dropped into separate Lego video games; this was abandoned as the complexity of doing so was further explored.
By the time the Harry Potter theme was introduced to the game, the series had shifted far from the original premise of freeform LEGO construction. Instead, the product moved toward a more limited build environment, but with superior gameplay.
In addition to the regular bricks in an assortment of colors, there are specified "Action Bricks", which move or make noise. Examples include the hinge, propeller, and siren. There is also a "Destructa Brick", a 1x2 tile with an image of dynamite superimposed on it. This can be used to destroy models in Play Mode, although the player's creations will automatically rebuild when returning to Build Mode. Minifigures can also be used; they can stand, sit, or walk, and can be set to drive vehicles along a path or road. In Play Mode, minifigures and vehicles can travel around the environment, special bricks can be interacted with, the sky can be set from day to night, and the player can control or see from the perspective of any minifigures set to move around, vehicles, and security cameras. Minifigures make gibberish sounds during Play Mode, and the game's instruction manual details how to replace the audio files for these sounds with custom files.
Awards
Lego Creator received four awards: the Computer Game Developers Spotlight Award for Best New Children's Game; the CODIE Software Publishers Association Excellence in Software Award for Best New Home Creativity Software (US); "Top 100 Family Tested" from Family PC Magazine; and the PIN Quality Mark Gold Award from Parents Information. It was also nominated at the 2nd annual Interactive Achievement Awards for Computer Children's Game of the Year.
Sequels
Lego Creator was followed by three sequels.
Lego Creator: Knights' Kingdom
Lego Creator: Knights' Kingdom is a medieval-themed construction and management simulation video game developed by Superscape and published by Lego Software in 2000. It is a stand-alone sequel to Lego Creator and is based on the first incarnation of the Lego Knights' Kingdom theme.
Lego Creator: Harry Potter
Lego Creator: Harry Potter is a construction and management simulation video game based on the 2001 Harry Potter film Harry Potter and the Philosopher's Stone and the Lego Harry Potter brand of building block sets. It was developed by Superscape and published by Lego Software in late 2001. It is the first Lego game based on a licensed property. In the game, the player can build Harry Potter-themed worlds and complete challenges.
Lego Creator: Harry Potter is based on the film version of Harry Potter and the Philosopher's Stone and allows the player to play as various characters and explore four general areas, plus five extra areas. The Inside Hogwarts area has four placeable extra rooms that lead to other areas, including Professor Snape's Potions Class and the Forbidden Corridor. The game includes many features that give the player considerable creative ability, including taking control of minifigures and animals, driving the Hogwarts Express, changing the weather from rain to snow and the time from night to day, casting spells and flying on broomsticks, and creating custom minifigures and models with classical and Harry Potter-style Lego faces, bodies, cloaks and even wands; the workshop contains everything from castle pieces and extras to standard pieces.
Creator: Harry Potter and the Chamber of Secrets
Creator: Harry Potter and the Chamber of Secrets is the sequel to Lego Creator: Harry Potter, which focuses on the second movie, Harry Potter and the Chamber of Secrets. This is the only Lego Creator installment not to be developed by Superscape, instead being developed by Qube Software and published by Electronic Arts and Lego Interactive.
While the sequel contains many of the same features as the debut game, more additional features were added to enhance the player's creative ability, including more models, more worlds, and more minifigures. Certain characters or animals can reach certain areas of the game. Completing tasks will unlock different worlds and models the player can use in their own world. These tasks are tutorials, which show the user all the features of the program.
References
1998 video games
Creator (video game)
Construction and management simulation games
Video games developed in the United Kingdom
Windows games
Windows-only games |
60059991 | https://en.wikipedia.org/wiki/Salvatore%20Aranzulla | Salvatore Aranzulla | Salvatore Aranzulla (born 24 February 1990 in Caltagirone) is an Italian blogger and entrepreneur. He is well known to the general Italian public as a popularizer and author of problem-solving tutorials for information technology (especially software).
Early life and education
He was raised in the Sicilian hamlet of Mirabella Imbaccari by parents Giovanni and Maria in a family of four children (his three brothers are Giuseppe, Davide and Elia). His father was a nurse in Caltagirone hospital and the mother a housekeeper. He attended the scientific high school in Piazza Armerina.
In 2008 he moved to Milan to study at Bocconi University, where he graduated in 2015.
When he was 12, he started exploring the Internet, and he soon began running a newsletter and a blog where he published practical advice on solving computer problems, thus becoming the youngest popularizer in Italy. As a teenager, he was also active as a bug hunter for some major websites and web browsers.
Popularization activities
Website
His blog had about 300,000 monthly readers between 2007 and 2008. When Aranzulla needed money to attend university, he started using Google Ads.
By 2016 the site drew 9 million readers every month, with 20 million page views, thanks to an accurate use of SEO.
In 2018 the website ranked in the top 30 in Italy (first in the general field of "Information Technology" and with a 40% quota of the "computer news" sector) and in March 2019, it ranked 57th in Italy.
The company, whose sole shareholder is Aranzulla himself, is located in Milan. It closed 2014 with a turnover of one million euros, rising to 1.6 million euros in 2016, two million in 2017 and three million euros in 2019. By 2018 the company had a staff of eight people.
The articles are written by 10 ghostwriters from various parts of Italy, such as Campania, Calabria, Lombardy and Tuscany.
Other editorial activities
He also wrote the technology column on the web portal Virgilio.it from 2008 to 2015. In 2016 he started a cooperation with the national newspaper Il Messaggero.
Italian Wikipedia's article deletion
In 2016 his article on the Italian Wikipedia was deleted because he was not considered sufficiently relevant. He replied by calling the Italian Wikipedia users who proposed the deletion "low-class competitors and sore losers". The page had previously been deleted 12 times over a period of 10 days in 2006, because it kept being recreated, probably by Aranzulla himself or close acquaintances.
The debate about the deletion was widely publicized; Il Gazzettino, Il Foglio and La Stampa covered the deletion critically. It was also noted that the deletion was originally requested by a person who owns a website with content similar to Aranzulla's.
Awards and honours
In 2018 the website received special recognition as best website from the comune of Perugia at the 2018 Macchianera Awards in Perugia.
In February 2019, he received the Candelora d'Oro from the comune of Catania.
In popular culture
In 2017 he had a cameo in the music video of the song "In the town" by Gabry Ponte and Sergio Sylvestre. In 2018 he dubbed a pop-up in the Italian version of Ralph Breaks the Internet, and he was also the Italian poster person for the Netflix television series Black Mirror.
In 2017 he was the protagonist of an episode of a Mediaset program broadcast from Singapore and hosted by Pio and Amedeo.
References
Bibliography
Super hacker. I segreti della sicurezza nella nuova era digitale
Hacker contro hacker. Manuale pratico e facile di controspionaggio informatico Sperling & Kupfer (2013)
Il metodo Aranzulla, Mondadori (2018)
External links
1990 births
Living people
Bocconi University alumni
People from Caltagirone
Italian Internet celebrities |
1097056 | https://en.wikipedia.org/wiki/Xaverian%20College | Xaverian College | Xaverian College is a Roman Catholic college in Manchester, England. The campus is in Victoria Park, two miles south of Manchester city centre. Established in 1862, Xaverian College has become one of the most oversubscribed sixth form colleges in the Greater Manchester region, alongside Loreto College, Manchester and Ashton Sixth Form College. It consistently ranks among the top 10 institutions for 16-18 education. Xaverian College is a member of the Association of Colleges. As of 2019, the acceptance rate is 30%.
It is located near world-renowned educational institutions such as the University of Manchester and the Royal Northern College of Music. Through its partnership with the University of Manchester, Xaverian houses the university's science foundation courses, and Xaverian College students are also able to access the University of Manchester Library, which holds over 4 million resources.
In 2008, Ofsted declared that "Xaverian College is outstanding in all aspects of its provision" with a Grade 1 rating in all the inspection criteria. The college holds a Catholic ethos and mission whilst ensuring a "community of learning, faith and service". Xaverian's values are zeal, compassion, humility, trust and simplicity.
History
1862-1976
The Xaverian Brothers, or Congregation of St Francis Xavier (CFX), are a Roman Catholic religious order founded by Theodore James Ryken in Bruges, Belgium, in 1839 and named after St. Francis Xavier. The order is dedicated to Roman Catholic education in the United Kingdom, the United States and many other countries.
The college was founded by the Xaverian Brothers in 1862 and until 1903 was housed in a four-storey building on Oxford Road, Manchester. On the move to the then gated Victoria Park, it was originally housed in a building known as Firwood, but over time, through new building projects and acquisition, the campus grew.
Firwood was home to the Brothers until 1993 when the last of them left. Another former house which has now become part of the college, Ward Hall, was used as a camp for American servicemen in the Second World War.
Mancunian Films, a motion picture production company, used the exterior of the college in several of their films, including It's A Grand Life, starring Frank Randle and Diana Dors. The film company sold their Dickenson Road Studios to the BBC in 1954, making Dickenson Road Studios the first regional BBC TV studio. When the BBC left in 1974 to move to Oxford Road, Xaverian inherited their lighting rigs, now used in the drama studio. From 1946 to 1977, the school was a direct grant grammar school.
1977 to present
The college was a Roman Catholic grammar school for boys until 1977, when it became a mixed sixth-form college. Direct Grant Grammar School status ended and Xaverian became a Sixth Form College for young men and women aged sixteen to nineteen within the Manchester Local Education Authority. In 1993, the College Principal Mrs Quinn led an expansion in student numbers, refurbished and modernised many of the buildings and updated the curriculum with vocationally based courses and the introduction of information technology across many subjects. Her greatest success, however, was to maintain the distinctive Xaverian mission and ethos in a period of much change and uncertainty.
Capital from the Xaverian Brothers and grants from the FEFC allowed a new multi-resource building, The Ryken, to be constructed in 2002. By 2005, the FEFC had become the Learning and Skills Council and recognised the college's progress by part funding a state-of-the-art new building, which was named Mayfield. In 2007 Mrs. Mary Hunter was made Principal. Her appointment can be seen as another watershed in the life of Xaverian. Mary Hunter, whose previous experience was in the general FE sector, brought both an objective eye and a heart-felt empathy to a college truly committed to a special Mission. This was recognised in the latest Ofsted Inspection when the college was graded outstanding in all areas of the report. The college was subsequently awarded Beacon status.
Admissions
In the inner city suburb of Rusholme, close to Wilmslow Road and Oxford Road, the college draws many of its students from ethnic minorities and from various socioeconomic classes. Admissions consist of three hierarchical priorities:
1. Pupils studying at one of the seven associated Roman Catholic High Schools and Trinity CE High School in Hulme are guaranteed a place at Xaverian if they wish to take it.
2. Next priority is given to students in Roman Catholic schools who are in partnership with Xaverian.
3. Priority then falls to Roman Catholic pupils at non-Roman Catholic schools who meet entry requirements.
In addition, there is a NHS Cadets vocational programme which has entry criteria that are not based on an applicant's religion or beliefs and the college also accommodates a group of approximately fifty Manchester University students undertaking foundation degrees in dentistry, medicine and pharmacy.
Campus
The college consists of nine buildings on two sides of Lower Park Road: Ward Hall, Birtles, Marylands, Firwood, Xavier, Sunbury, Ryken, Mayfield, and Teresa Quinn built from 1840 onwards. Additions and renovations have been an ongoing feature of the campus's development, with Birtles a key example of this process. The Ryken and Mayfield buildings, added at the start of the 21st century, along with Teresa Quinn, opened in 2020, house information technology equipment. The Ryken building was named after one of the founders of the Xaverian order, Theodore James Ryken. The college buildings are around the perimeter of a central grassed area where sporting and social activities take place.
Ward Hall (previously the US Embassy Northern Outpost in the Second World War) houses students of Art, Graphic Communication, Photography, and Textiles. It also features extensive film and media facilities, a cine room where students can organise film afternoons, and new classrooms for Criminology, Classical Civilisation, Law, Sociology, and History courses.
Birtles houses Sport, Geography, Music and Drama students. The new building was built with drama and music studios, rehearsal rooms, a recording suite and computer labs.
Marylands houses English Language and English Literature.
Firwood houses the main student common room, catering facilities, student services, learning support suite, additional learning support and tutorial rooms, college chapel and RE rooms, administration offices and the main reception.
Xavier is home to the University of Manchester foundation courses in Biology, Medicine and Dentistry and also houses Mathematics and Sciences.
Sunbury houses RE classes, Theology and Philosophy, the NHS cadet course and Uniformed Public Services, among others.
Ryken houses Foundation Level 1 courses, the careers service and the library. It also provides a seminar room for visiting speakers, and a large drop-in centre where students are able to make use of college ICT facilities.
Mayfield accommodates Accounting, Business Studies, Computer Science, Economics, Geography, Government and Politics, History, Mathematics, Modern Languages, ICT and Psychology. (Mayfield College was a Xaverian college in East Sussex.)
Teresa Quinn, the newest building on the campus, houses BTEC courses such as Criminology, Health & Social Care and Information Technology, and other non-A-level qualifications.
Notable alumni
Sixth form college
Caroline Aherne: actress and writer
Peter Ash: actor
Andrea Ashworth: writer and academic
Afshan Azad: actress, best known for playing Padma Patil in the Harry Potter films
Mark Collins: guitarist, The Charlatans
Sally Lindsay: actress and comedian
Mani: musician, notably the bassist for The Stone Roses and briefly Primal Scream
Chris Ofili: artist and recipient of the Turner Prize
Nedum Onuoha: footballer, playing for Queens Park Rangers F.C.
Lucy Powell: Labour MP for Manchester Central and former shadow secretary for education
Shaun Wright-Phillips: footballer, playing for MLS team New York Red Bulls
Grammar school
Brian Bagnall: cartoonist and writer for Private Eye (Bagnall was a writer for the satirical Dear Bill letters feature)
Chris Buckley: footballer
Anthony Burgess: author, poet, composer; A Clockwork Orange.
Wilfred Carr: Professor of the School of Education at the University of Sheffield from 1994
Denis Carter, Baron Carter: politician
James Cunningham: Bishop of Hexham and Newcastle, 1958–74
Augustine Hailwood: Conservative MP for Manchester Ardwick, 1916–22
Martin Hannett: record producer; co-founder of Factory Records
Peter Hebblethwaite: journalist
Bernard Hill: actor
Major Henry Kelly (VC)
Bernard Longley: Roman Catholic Archbishop of Birmingham from 2009
Gary Mounfield: musician, member of The Stone Roses
Tim Willocks: doctor and novelist
John Heffernan: industrial designer; designed the 1986 Aston Martin Virage and co-designed the 1991 Bentley Continental R
See also
Listed buildings in Manchester-M14
List of direct grant grammar schools
References
External links
Audio interview with Brother Cyril - headmaster of Xaverian College from 1962 to 1989.
EduBase
Catholic secondary schools in the Diocese of Salford
Schools sponsored by the Xaverian Brothers
Buildings and structures in Manchester
Education in Manchester
Defunct grammar schools in England
Educational institutions established in 1862
Sixth form colleges in Greater Manchester
1862 establishments in England |
663430 | https://en.wikipedia.org/wiki/GeoTIFF | GeoTIFF | GeoTIFF is a public domain metadata standard which allows georeferencing information to be embedded within a TIFF file. The potential additional information includes map projection, coordinate systems, ellipsoids, datums, and everything else necessary to establish the exact spatial reference for the file. The GeoTIFF format is fully compliant with TIFF 6.0, so software incapable of reading and interpreting the specialized metadata will still be able to open a GeoTIFF format file.
An alternative to the "inlined" TIFF geospatial metadata is the *.tfw World File sidecar file format which may sit in the same folder as the regular TIFF file to provide a subset of the functionality of the standard GeoTIFF described here.
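For illustration, a world file contains six plain-text numbers, one per line. A hypothetical .tfw for a 30-metre-resolution image might look as follows; the annotations on the right are explanatory only, and a real world file contains just the six numbers.

```
30.0          pixel size in the x-direction (map units per pixel)
0.0           rotation term
0.0           rotation term
-30.0         pixel size in the y-direction (negative: rows run top-down)
429985.0      x-coordinate of the center of the upper-left pixel
4655115.0     y-coordinate of the center of the upper-left pixel
```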
History
The GeoTIFF format was originally created by Dr. Niles Ritter while he was working at the NASA Jet Propulsion Laboratory. The reference implementation code was released mostly as public domain software, with some parts under a permissive X license. On September 14, 2019, the Open Geospatial Consortium published the OGC GeoTIFF standard, which defines the Geographic Tagged Image File Format (GeoTIFF) by specifying requirements and encoding rules for using the Tagged Image File Format (TIFF) for the exchange of georeferenced or geocoded imagery. The OGC GeoTIFF 1.1 standard formalizes the existing community GeoTIFF specification version 1.0 and aligns it with the continuing addition of data to the EPSG Geodetic Parameter Dataset.
Cloud Optimised GeoTIFF
"Cloud Optimized GeoTIFF" (COG) is a standard based on GeoTIFF, designed to make it straightforward to use GeoTIFFs hosted on HTTP webservers, so that users and software can make use of partial data within the file without having to download the entire file. It is designed to work with HTTP range requests, and specifies a particular layout of data and metadata within the GeoTIFF, such that clients can predict which range of bytes they need to download. COG is simply a specialisation of GeoTIFF, so COG files are TIFF files.
COG was developed within the Open Source Geospatial Foundation/GDAL project, starting in around 2016. The COG format can be read and written by many common geographic software tools including GDAL, QGIS, and GeoTrellis. Various providers now supply some of their data in COG format, including Google and DigitalGlobe.
See also
Digital raster graphic
GDAL - Open source GeoTIFF reader and writer
Tagged Image File Format (TIFF)
The *.tfw World File
References
External links
OGC GeoTIFF formalizes the existing community GeoTIFF specification version 1.0 and aligns it with the continuing addition of data to the EPSG Geodetic Parameter Dataset.
GeoTIFF.io Open-source website for viewing and analyzing GeoTIFF files
Cartography
GIS file formats
GIS raster file formats
Public-domain software with source code
Free software
Raster graphics file formats |
2503577 | https://en.wikipedia.org/wiki/Pertec%20Computer | Pertec Computer | Pertec Computer Corporation (PCC), formerly Peripheral Equipment Corporation (PEC), was a computer company based in Chatsworth, California, which originally designed and manufactured peripherals such as floppy drives, tape drives, instrumentation control and other hardware for computers.
Pertec's most successful products were hard disk drives and tape drives, which were sold as OEM products to the top computer manufacturers, including IBM, Siemens and DEC. Pertec manufactured multiple models of seven- and nine-track half-inch tape drives with densities of 800 CPI (NRZI) and 1600 CPI (PE), as well as phase-encoding formatters, which were used by a myriad of original equipment manufacturers as I/O devices for their product lines.
In the 1970s, Pertec entered the computer industry through several acquisitions of computer producers and started manufacturing and marketing mostly minicomputers for data processing and pre-processing. This split Pertec into two companies: Pertec Peripherals Corporation (PPC), which remained based in Chatsworth, California, and Pertec Computer Corporation (PCC), which was located at 17112 Armstrong Avenue, in Irvine, California.
Pertec and MITS
Pertec bought MITS, the manufacturer of the MITS Altair computer, for US$6.5 million in 1976. This purchase was motivated mainly by the ownership of the Microsoft BASIC sources and general license that Pertec erroneously assumed to be included in the deal. They also acquired iCOM, makers of micro peripherals, in the same year. They believed that these acquisitions would change them from selling computers mostly for hobbyists to selling them for small businesses.
Pertec changed their name, after the acquisition of MITS, from Pertec Corporation to Pertec Computer Corporation to "be more reflective of the company's present position and to clearly state our future direction".
As a result of the acquisition, Pertec became involved in the manufacturing of microprocessor-based computers. Their first models were expanded versions of the Altair models, typically coupled to the existing disk-drive range. Despite initially good sales, the Altair's 8080 CPU was becoming increasingly outdated, so Pertec decided to retire the Altair as well as the MITS name itself.
In 1978, the company launched the first of its own designs, the PCC-2000. This was based on two Intel 8085 series microprocessors: one of which was given over to I/O control. Being a high end machine, it was intended to be the core of what would now be described as a workgroup. The machine was intended to support four "dumb" terminals connected via RS-232 serial lines, in addition to its internal console. The basic machine had twin 8-inch floppy drives, each capable of storing 1.2 megabytes and could link to two Pertec twin 14-inch disk drives, giving a total of 22.4 megabytes of storage, which was a very large amount for the time. The system was generally supplied with a multi-user operating system called MTX, which included a BASIC interpreter that was similar to Business Basic. The PCC-2000 was also available with MITS DOS or CP/M. In the UK, several systems were run under BOS. Unfortunately, the PCC-2000 was too expensive for the market and was never a great success.
Pertec Business Systems
Pertec/MITS 300
The MITS 300 was the first product built and released by Pertec after its acquisition of MITS in 1977. Pertec produced the 300/25 and the 300/55. Both were fully integrated systems that included hardware and software in one package. The 300/25 used Pertec floppy diskette drives, and the 300/55 added a Pertec DC-3000 14-inch hard disk. The system consisted of the MITS second-generation Altair 8800 (the Altair 8800b) computer with a hard drive controller and the MITS datakeeper storage system. The complete 300/55 business system sold for $15,950 and included the Altair 8800b with 64K of dynamic RAM, a CRT terminal and a desk. The system was designed to handle a variety of business applications including word processing, inventory control and accounting.
This system was prone to overheating and had a very short life span.
The new system allowed for MITS peripherals including Altair Floppy Disc, Altair Line Printer, Teletypewriter, and the Altair CRT terminal.
The printer was a bidirectional Mits/Altair C-700 that could print 60 characters/second and 26 lines/minute.
Pertec PCC-2100
Pertec's primary line of computer products was aimed at the key-to-disk minicomputer systems that were used as front-end data processors for the IBM 360/370 and similar systems. This line was opened in the first half of the 1970s by the Pertec PCC-2100 data entry system, a fundamentally different machine from the PCC-2000 described above. The system was able to serve up to 16 coaxial terminals, two D3000 disk drives and one T1640 tape drive.
Pertec XL-40
Pertec XL-40, introduced in 1977, was a more successful successor to the Pertec PCC-2100. The XL-40 machine used custom 16-bit processors built from TI3000 or AMD2900 slices, up to 512 KB of operating memory, and dedicated master-capable DMA controllers for tape units, floppy and rigid disk units, printers, card readers and terminals.
The maximum configuration came in two different versions. One featured four T1600 / T1800 tape units (manufactured by Pertec), two floppy disk units (manufactured by IBM or Pertec) and four D1400 / D3400 rigid disk units (4.4, 8.8, 17.6 MB formatted capacity, manufactured by Pertec or Kennedy). The other one featured two large capacity disk units (up to 70 MB formatted capacity, manufactured by Kennedy or NEC), one line printer connected through long-line interface (DataProducts LP600, LP1200, B300, Printronix P300, P600), four station printers connected through coaxial cable (Centronics), one card reader (Pertec), four SDLC communication channels and 30 proprietary coax terminals (Model 4141 with 40x12 characters or Model 4143 with 80x25 characters).
The system was mainly used for key-to-disk operations to replace the previously popular IBM card punches and more advanced key-to-tape systems manufactured for example by Mohawk Data Sciences (MDS) or Singer. In addition to the basic key-to-disk function, the proprietary operating system, called XLOS, supported indexed file operations for on-line transaction processing even with data journaling. The system was programmed in two different ways. The data entry was either described in several tables that specified the format of the input record with optional automatic data validation procedures or the indexed file operations were programmed in a special COBOL dialect with IDX and SEQ file support.
System maintenance operations were performed in a protected supervisor mode; the system supported batched operations in the supervisor mode through the use of batch files that specified operator selections. The operating system interacted with the user through a series of prompts with automatic on-screen explanations and default selections, probably the ultimate user-friendliness achievable in text-only human-computer interaction. The XL-40 was also marketed by Triumph-Adler in Europe as TA1540, the beginning of a relationship that would eventually see a merger of the two companies.
Pertec 3200
Pertec's final in-house computer design was a complete departure, the MC68000-based Series 3200. The primary operating system was an in-house developed multi-tasking, multi-user system, but it could also run Unix. As with the XL40, Triumph-Adler marketed the system in Europe under their own brand with the model name MSX 3200 (there were eventually four models in the Triumph-Adler series: 3200, 3220, 3230 and 3240). The key-to-disk application from the XL40 was re-implemented on the 3200. The other main application was a BASIC-language-driven database, similar to the ones used by MAI Basic Four or the Pick operating system. These BASIC database business systems would be purchased by outside companies that bundled the PCC 3200 with their software to provide a complete small business package (accounts payable, accounts receivable, payroll, inventory, sales tracking, taxes, etc.) customized for specific businesses.
The 3200 was extremely advanced for the time, being intended to support up to 32 users, all using intelligent Z80-based terminals, each of which could optionally run CP/M, attached to the 3200's high-speed coax cable. Later an ISA-bus-to-3200-coax interface was made for the PC, which allowed the use of PCs as smart terminals for the 3200 or as networked systems running MS-DOS. It was the first Pertec product to support the emerging "Winchester" standard for miniature hard disks.
Eventual fate
Soon after the introduction of the 3200, Pertec Computer Corporation was purchased by Triumph-Adler. PCC was later acquired by Scan-Optics in February 1987. During the transition from systems based on custom-made CPUs to CPUs made by Intel and Motorola, prices for these systems dropped dramatically without an offsetting increase in demand, and companies such as PCC eventually dwindled to small remnants of their mid-1980s peak or were bought out by larger companies.
Pertec's PPC magnetic tape interface standard of the early 1970s rapidly became an industry-wide standard and is still in use by tape drive manufacturers today. Similarly, its PERTEC disk interface was an industry standard for pre-Winchester disk drives of the 1970s.
References
External links
Pertec documentation at bitsavers.org
Pertec at VirtualAltair
A piece of Pertec history at VirtualAltair
American companies established in 1967
American companies disestablished in 1987
Companies based in Los Angeles County, California
Computer companies established in 1967
Computer companies disestablished in 1987
Defunct companies based in California
Defunct computer companies of the United States |
6435232 | https://en.wikipedia.org/wiki/Sentiment%20analysis | Sentiment analysis | Sentiment analysis (also known as opinion mining or emotion AI) is the use of natural language processing, text analysis, computational linguistics, and biometrics to systematically identify, extract, quantify, and study affective states and subjective information. Sentiment analysis is widely applied to voice of the customer materials such as reviews and survey responses, online and social media, and healthcare materials for applications that range from marketing to customer service to clinical medicine. With the rise of deep language models, such as RoBERTa, more difficult data domains can also be analyzed, e.g., news texts where authors typically express their opinion/sentiment less explicitly.
Examples
The objective and challenges of sentiment analysis can be shown through some simple examples.
Simple cases
Coronet has the best lines of all day cruisers.
Bertram has a deep V hull and runs easily through seas.
Pastel-colored 1980s day cruisers from Florida are ugly.
I dislike old cabin cruisers.
More challenging examples
I do not dislike cabin cruisers. (Negation handling)
Disliking watercraft is not really my thing. (Negation, inverted word order)
Sometimes I really hate RIBs. (Adverbial modifies the sentiment)
I'd really truly love going out in this weather! (Possibly sarcastic)
Chris Craft is better looking than Limestone. (Two brand names, identifying the target of attitude is difficult).
Chris Craft is better looking than Limestone, but Limestone projects seaworthiness and reliability. (Two attitudes, two brand names).
The movie is surprising with plenty of unsettling plot twists. (Negative term used in a positive sense in certain domains).
You should see their decadent dessert menu. (Attitudinal term has shifted polarity recently in certain domains)
I love my mobile but would not recommend it to any of my colleagues. (Qualified positive sentiment, difficult to categorise)
Next week's gig will be right koide9! ("Quoi de neuf?", French for "what's new?". Newly minted terms can be highly attitudinal but volatile in polarity and often out of known vocabulary.)
Types
A basic task in sentiment analysis is classifying the polarity of a given text at the document, sentence, or feature/aspect level—whether the expressed opinion in a document, a sentence or an entity feature/aspect is positive, negative, or neutral. Advanced, "beyond polarity" sentiment classification looks, for instance, at emotional states such as enjoyment, anger, disgust, sadness, fear, and surprise.
Precursors to sentiment analysis include the General Inquirer, which provided hints toward quantifying patterns in text, and, separately, psychological research that examined a person's psychological state based on analysis of their verbal behavior.
Subsequently, the method described in a patent by Volcani and Fogel looked specifically at sentiment and identified individual words and phrases in text with respect to different emotional scales. A current system based on their work, called EffectCheck, presents synonyms that can be used to increase or decrease the level of evoked emotion in each scale.
Many other subsequent efforts were less sophisticated, using a mere polar view of sentiment, from positive to negative, such as work by Turney and by Pang, who applied different methods for detecting the polarity of product reviews and movie reviews respectively. This work is at the document level. One can also classify a document's polarity on a multi-way scale, which was attempted by Pang and Snyder among others: Pang and Lee expanded the basic task of classifying a movie review as either positive or negative to predicting star ratings on either a 3- or a 4-star scale, while Snyder performed an in-depth analysis of restaurant reviews, predicting ratings for various aspects of the given restaurant, such as the food and atmosphere (on a five-star scale).
First steps to bringing together various approaches—learning, lexical, knowledge-based, etc.—were taken in the 2004 AAAI Spring Symposium where linguists, computer scientists, and other interested researchers first aligned interests and proposed shared tasks and benchmark data sets for the systematic computational research on affect, appeal, subjectivity, and sentiment in text.
Even though in most statistical classification methods, the neutral class is ignored under the assumption that neutral texts lie near the boundary of the binary classifier, several researchers suggest that, as in every polarity problem, three categories must be identified. Moreover, it can be proven that specific classifiers such as the Max Entropy and SVMs can benefit from the introduction of a neutral class and improve the overall accuracy of the classification. There are in principle two ways for operating with a neutral class. Either, the algorithm proceeds by first identifying the neutral language, filtering it out and then assessing the rest in terms of positive and negative sentiments, or it builds a three-way classification in one step. This second approach often involves estimating a probability distribution over all categories (e.g. naive Bayes classifiers as implemented by the NLTK). Whether and how to use a neutral class depends on the nature of the data: if the data is clearly clustered into neutral, negative and positive language, it makes sense to filter the neutral language out and focus on the polarity between positive and negative sentiments. If, in contrast, the data are mostly neutral with small deviations towards positive and negative affect, this strategy would make it harder to clearly distinguish between the two poles.
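A toy sketch of the one-step, three-way approach using NLTK's naive Bayes classifier is shown below. The training sentences and bag-of-words features are invented for illustration; a real system would train on a large labeled corpus.

```python
# A toy sketch of three-way polarity classification with NLTK's
# naive Bayes classifier. Training data is invented for illustration.
import nltk

def features(sentence):
    # Bag-of-words presence features
    return {word.lower(): True for word in sentence.split()}

train = [
    (features("I love this boat"), "positive"),
    (features("great lines and a smooth ride"), "positive"),
    (features("I hate old cabin cruisers"), "negative"),
    (features("the hull design is ugly"), "negative"),
    (features("the boat is twenty feet long"), "neutral"),
    (features("it was built in Florida"), "neutral"),
]

classifier = nltk.NaiveBayesClassifier.train(train)
print(classifier.classify(features("I love the great lines")))  # positive
dist = classifier.prob_classify(features("it was built in Florida"))
print(dist.max(), dist.prob(dist.max()))  # most likely label and probability
```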
A different method for determining sentiment is the use of a scaling system whereby words commonly associated with a negative, neutral, or positive sentiment are given an associated number on a −10 to +10 scale (most negative to most positive) or simply from 0 to a positive upper limit such as +4. This makes it possible to adjust the sentiment of a given term relative to its environment (usually at the level of the sentence). When a piece of unstructured text is analyzed using natural language processing, each concept in the specified environment is given a score based on the way sentiment words relate to the concept and its associated score. This allows a more sophisticated understanding of sentiment, because it is now possible to adjust the sentiment value of a concept relative to the modifications that may surround it. Words that intensify, relax, or negate the sentiment expressed by the concept can affect its score. Alternatively, texts can be given separate positive and negative sentiment strength scores if the goal is to determine the sentiment in a text rather than the overall polarity and strength of the text.
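A minimal sketch of such a scaled lexicon, with simple handling of intensifying, relaxing, and negating words. Every word list and score below is an assumption chosen for illustration.

# Word scores on the -10..+10 scale; multipliers and negators adjust them.
LEXICON = {"good": 3, "awesome": 8, "bad": -3, "horrible": -8}
INTENSIFIERS = {"very": 1.5, "slightly": 0.5}
NEGATORS = {"not", "never"}

def sentence_score(sentence):
    tokens = sentence.lower().split()
    total = 0.0
    for i, token in enumerate(tokens):
        if token not in LEXICON:
            continue
        value = LEXICON[token]
        # The immediately preceding word may intensify, relax, or negate.
        if i > 0 and tokens[i - 1] in INTENSIFIERS:
            value *= INTENSIFIERS[tokens[i - 1]]
        if i > 0 and tokens[i - 1] in NEGATORS:
            value = -value
        total += value
    return total

print(sentence_score("not good"))      # -3.0
print(sentence_score("very awesome"))  # 12.0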
Other types of sentiment analysis include aspect-based sentiment analysis, grading sentiment analysis (positive, negative, neutral), multilingual sentiment analysis, and emotion detection.
Subjectivity/objectivity identification
This task is commonly defined as classifying a given text (usually a sentence) into one of two classes: objective or subjective. This problem can sometimes be more difficult than polarity classification. The subjectivity of words and phrases may depend on their context and an objective document may contain subjective sentences (e.g., a news article quoting people's opinions). Moreover, as mentioned by Su, results are largely dependent on the definition of subjectivity used when annotating texts. However, Pang showed that removing objective sentences from a document before classifying its polarity helped improve performance.
The term objective refers to text that carries factual information.
Example of an objective sentence: 'To be elected president of the United States, a candidate must be at least thirty-five years of age.'
The term subjective describes text that contains non-factual information in various forms, such as personal opinions, judgments, and predictions, also known as 'private states' in the terminology of Quirk et al. In the example below, the phrase 'We Americans' reflects such a private state. Moreover, the entity targeted by an opinion can take several forms, from a tangible product to an intangible topic, as stated in Liu (2010). Furthermore, Liu (2010) observed three types of attitudes: 1) positive opinions, 2) neutral opinions, and 3) negative opinions.
Example of a subjective sentence: 'We Americans need to elect a president who is mature and who is able to make wise decisions.'
This analysis is a classification problem.
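Treated as a classification problem, subjectivity detection can be prototyped with any standard text classifier. The following is a minimal sketch using NLTK's subjectivity corpus (5,000 subjective and 5,000 objective sentences) with a naive Bayes model; the feature function and train/test split are arbitrary choices made for this example.

import random
import nltk
from nltk.corpus import subjectivity

nltk.download("subjectivity")  # fetch the corpus on first run

def bag_of_words(tokens):
    return {word.lower(): True for word in tokens}

# Label every sentence in the corpus as subjective or objective.
data = ([(bag_of_words(sent), "subj")
         for sent in subjectivity.sents(categories="subj")] +
        [(bag_of_words(sent), "obj")
         for sent in subjectivity.sents(categories="obj")])
random.seed(0)
random.shuffle(data)
train, test = data[:8000], data[8000:]

classifier = nltk.NaiveBayesClassifier.train(train)
print(nltk.classify.accuracy(classifier, test))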
Collections of word or phrase indicators are defined for each class in order to locate desirable patterns in unannotated text. Separate word lists have been created for subjective expressions; lists of subjective indicator words and phrases have been developed by multiple researchers in the linguistics and natural language processing fields, as stated in Riloff et al. (2003). A dictionary of extraction rules must be created to measure given expressions. Over the years, feature extraction for subjectivity detection has progressed from hand-curated features to automated feature learning. Automated learning methods can be further divided into supervised and unsupervised machine learning, and pattern extraction from annotated and unannotated text has been explored extensively by academic researchers.
However, researchers have recognized several challenges in developing fixed sets of rules for such expressions. Much of the difficulty in rule development stems from the nature of textual information. Six challenges have been recognized by several researchers: 1) metaphorical expressions, 2) discrepancies in writing, 3) context sensitivity, 4) cue words with few usages, 5) time sensitivity, and 6) ever-growing volume.
Metaphorical expressions. Text that contains metaphorical expressions may impair extraction performance. Moreover, metaphors take different forms, which adds to the difficulty of detection.
Discrepancies in writing. Text obtained from the Internet spans distinct writing genres and styles, and these discrepancies complicate rule development.
Context sensitivity. Classification may vary based on the subjectivity or objectivity of the previous and following sentences.
Time sensitivity. The task is challenged by the time-sensitive nature of some textual data. If researchers need extended time to cross-validate a piece of fact in the news, the news may become outdated before the validation is complete.
Cue words with few usages.
Ever-growing volume. The task is also challenged by the sheer volume of textual data, whose ever-growing nature makes it overwhelmingly difficult to complete the task on time.
Previously, research mainly focused on document-level classification. However, document-level classification is less accurate, because an article may mix diverse types of expression. Research evidence shows that even a set of news articles expected to be dominated by objective expression turned out to consist of over 40% subjective expression.
To overcome these challenges, researchers conclude that classifier efficacy depends on the precision of the pattern learner, and that learners fed with large volumes of annotated training data outperform those trained on less comprehensive subjective features. However, one of the main obstacles to this type of work is generating a large dataset of manually annotated sentences. The manual annotation method has been less favored than automatic learning for three reasons:
Variations in comprehension. In the manual annotation task, annotators may disagree about whether an instance is subjective or objective because of the ambiguity of language.
Human error. Manual annotation is a meticulous task that requires intense concentration to finish.
Time consumption. Manual annotation is laborious: Riloff (1996) showed that 160 texts took one annotator 8 hours to finish.
All of these reasons can affect the efficiency and effectiveness of subjective and objective classification. Accordingly, two bootstrapping methods were designed to learn linguistic patterns from unannotated text data. Both methods start with a handful of seed words and unannotated textual data.
Meta-Bootstrapping, by Riloff and Jones (1999). Level one: generate extraction patterns based on pre-defined rules, and rank the extracted patterns by the number of seed words each pattern extracts. Level two: the top 5 words are marked and added to the dictionary. Repeat.
Basilisk (Bootstrapping Approach to Semantic Lexicon Induction using Semantic Knowledge), by Thelen and Riloff. Step one: generate extraction patterns. Step two: move the best patterns from the pattern pool to the candidate word pool. Step three: the top 10 words are marked and added to the dictionary. Repeat. (A minimal sketch of this style of loop is shown below.)
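The following self-contained toy sketch shows the general shape of such a bootstrapping loop. The pattern representation (one word of context on each side) and the scoring rules are deliberate simplifications assumed for this example; they are not the published Meta-Bootstrapping or Basilisk algorithms.

from collections import defaultdict

def bootstrap_lexicon(sentences, seed_words, iterations=5,
                      patterns_per_round=20, words_per_round=10):
    lexicon = set(seed_words)
    for _ in range(iterations):
        # Step 1: generate extraction patterns. Here a pattern is simply
        # the pair (previous word, next word) around a candidate word.
        extractions = defaultdict(set)
        for tokens in sentences:
            for i in range(1, len(tokens) - 1):
                extractions[(tokens[i - 1], tokens[i + 1])].add(tokens[i])
        # Step 2: keep the patterns that extract the most known lexicon words.
        best = sorted(extractions,
                      key=lambda p: len(extractions[p] & lexicon),
                      reverse=True)[:patterns_per_round]
        # Step 3: score new candidate words by how many of the best patterns
        # extract them, then add the top candidates to the lexicon. Repeat.
        scores = defaultdict(int)
        for pattern in best:
            for word in extractions[pattern] - lexicon:
                scores[word] += 1
        ranked = sorted(scores, key=scores.get, reverse=True)
        lexicon.update(ranked[:words_per_round])
    return lexicon

sentences = [["the", "virus", "infected", "the", "machine"],
             ["the", "worm", "infected", "the", "network"]]
print(sorted(bootstrap_lexicon(sentences, {"virus"},
                               iterations=1, patterns_per_round=1)))
# ['virus', 'worm']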
Overall, these algorithms highlight the need for automatic pattern recognition and extraction in subjectivity and objectivity tasks.
Subjectivity and objectivity classifiers can enhance several applications of natural language processing. One of their primary benefits is that they have popularized data-driven decision-making in various industries. According to Liu, the applications of subjective and objective identification have been implemented in business, advertising, sports, and social science.
Online review classification: In the business industry, the classifier helps a company better understand feedback on a product and the reasoning behind the reviews.
Stock price prediction: In the finance industry, the classifier aids the prediction model by processing auxiliary information from social media and other textual sources on the Internet. Previous studies on Japanese stock prices by Dong et al. indicate that a model with a subjectivity/objectivity module may perform better than one without it.
Social media analysis.
Students' feedback classification.
Document summarization: The classifier can extract target-specific comments and gather opinions made by one particular entity.
Complex question answering: The classifier can dissect complex questions by classifying their language as subjective or objective and identifying the focused target. Yu et al. (2003) developed sentence- and document-level clustering that identifies opinion pieces.
Domain-specific applications.
Email analysis: The subjectivity and objectivity classifier can help detect spam by tracing language patterns around target words.
Feature/aspect-based
It refers to determining the opinions or sentiments expressed on different features or aspects of entities, e.g., of a cell phone, a digital camera, or a bank. A feature or aspect is an attribute or component of an entity, e.g., the screen of a cell phone, the service for a restaurant, or the picture quality of a camera. The advantage of feature-based sentiment analysis is the possibility to capture nuances about objects of interest. Different features can generate different sentiment responses, for example a hotel can have a convenient location, but mediocre food. This problem involves several sub-problems, e.g., identifying relevant entities, extracting their features/aspects, and determining whether an opinion expressed on each feature/aspect is positive, negative or neutral. The automatic identification of features can be performed with syntactic methods, with topic modeling, or with deep learning. More detailed discussions about this level of sentiment analysis can be found in Liu's work.
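A deliberately naive sketch of the idea: score each known aspect term from opinion words in a small window around it. The aspect list, opinion lexicon, window size, and tokenization are all assumptions made for this example; practical systems use the syntactic, topic-modeling, or deep learning methods mentioned above.

# Hypothetical opinion lexicon and aspect inventory for a hotel domain.
OPINION = {"convenient": 1, "great": 1, "mediocre": -1, "terrible": -1}
ASPECTS = {"location", "food", "service", "screen"}

def aspect_sentiments(text, window=1):
    tokens = text.lower().replace(",", " ").replace(".", " ").split()
    results = {}
    for i, token in enumerate(tokens):
        if token in ASPECTS:
            # Sum opinion-word scores within `window` tokens of the aspect.
            nearby = tokens[max(0, i - window): i + window + 1]
            results[token] = sum(OPINION.get(t, 0) for t in nearby)
    return results

print(aspect_sentiments("The hotel has a convenient location, but mediocre food."))
# {'location': 1, 'food': -1}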
Intensity ranking
Emotions and sentiments are subjective in nature, so the degree of intensity expressed in a given text (at the document, sentence, or feature/aspect level) differs from case to case. Predicting only the emotion or sentiment class does not always convey complete information: the degree or level of the emotion or sentiment often plays a crucial role in understanding the exact feeling within a single class (e.g., 'good' versus 'awesome'). Some methods predict intensity with a stacked ensemble that combines the outputs of deep learning models based on convolutional neural networks, long short-term memory networks, and gated recurrent units; a minimal sketch of the stacking idea follows. More detailed discussions of such methods can be found in the work of Akhtar, Ekbal, and Cambria.
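A minimal sketch of stacking with scikit-learn. The three base regressors below are simple stand-ins for the convolutional, LSTM, and GRU networks described above, and the random data exists only to show the mechanics; none of this reproduces the cited models.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))     # stand-in text features
y = rng.uniform(0, 1, size=200)    # intensity scores in [0, 1]

base_models = [MLPRegressor(max_iter=500), SVR(), Ridge()]
train, held = slice(0, 150), slice(150, 200)

# Level 0: fit each base model, then collect its held-out predictions.
meta_features = np.column_stack([
    model.fit(X[train], y[train]).predict(X[held]) for model in base_models
])

# Level 1: a meta-regressor learns how to combine the base predictions.
meta = Ridge().fit(meta_features, y[held])
print(meta.predict(meta_features[:3]))  # combined intensity estimates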
Methods and features
Existing approaches to sentiment analysis can be grouped into three main categories: knowledge-based techniques, statistical methods, and hybrid approaches. Knowledge-based techniques classify text by affect categories based on the presence of unambiguous affect words such as happy, sad, afraid, and bored. Some knowledge bases not only list obvious affect words, but also assign arbitrary words a probable "affinity" to particular emotions. Statistical methods leverage elements from machine learning such as latent semantic analysis, support vector machines, "bag of words", "Pointwise Mutual Information" for Semantic Orientation, and deep learning. More sophisticated methods try to detect the holder of a sentiment (i.e., the person who maintains that affective state) and the target (i.e., the entity about which the affect is felt). To mine the opinion in context and get the feature about which the speaker has opined, the grammatical relationships of words are used. Grammatical dependency relations are obtained by deep parsing of the text. Hybrid approaches leverage both machine learning and elements from knowledge representation such as ontologies and semantic networks in order to detect semantics that are expressed in a subtle manner, e.g., through the analysis of concepts that do not explicitly convey relevant information, but which are implicitly linked to other concepts that do so.
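As one concrete example of the statistical family, the pointwise-mutual-information measure of semantic orientation (following Turney's approach) scores a phrase by how much more strongly it co-occurs with the word "excellent" than with the word "poor". A minimal sketch from raw co-occurrence counts; the counts below are placeholders, not measurements.

import math

def semantic_orientation(near_excellent, near_poor,
                         phrase_hits, excellent_hits, poor_hits, total):
    # SO(phrase) = PMI(phrase, "excellent") - PMI(phrase, "poor"),
    # with PMI(a, b) = log2(p(a, b) / (p(a) * p(b))).
    pmi_excellent = math.log2((near_excellent / total) /
                              ((phrase_hits / total) * (excellent_hits / total)))
    pmi_poor = math.log2((near_poor / total) /
                         ((phrase_hits / total) * (poor_hits / total)))
    return pmi_excellent - pmi_poor

# Placeholder counts; a positive result means the phrase leans positive.
print(semantic_orientation(120, 30, 1_000, 50_000, 40_000, 10_000_000))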
Open-source software tools, as well as a range of free and paid sentiment analysis tools, deploy machine learning, statistics, and natural language processing techniques to automate sentiment analysis on large collections of texts, including web pages, online news, internet discussion groups, online reviews, web blogs, and social media. Knowledge-based systems, on the other hand, make use of publicly available resources to extract the semantic and affective information associated with natural language concepts. Such a system can help perform affective commonsense reasoning. Sentiment analysis can also be performed on visual content, i.e., images and videos (see Multimodal sentiment analysis). One of the first approaches in this direction was SentiBank, which uses an adjective-noun pair representation of visual content. In addition, the vast majority of sentiment classification approaches rely on the bag-of-words model, which disregards context, grammar and even word order. Approaches that analyze sentiment based on how words compose the meaning of longer phrases have shown better results, but they incur an additional annotation overhead.
A human analysis component is required in sentiment analysis, as automated systems are not able to analyze the historical tendencies of the individual commenter or the platform, and are therefore often incorrect in classifying expressed sentiment. Automated systems misclassify approximately 23% of the comments that humans classify correctly. However, humans often disagree, and it is argued that inter-human agreement provides an upper bound that automated sentiment classifiers can eventually reach.
Evaluation
The accuracy of a sentiment analysis system is, in principle, how well it agrees with human judgments. This is usually measured by variant measures based on precision and recall over the two target categories of negative and positive texts. However, according to research human raters typically only agree about 80% of the time (see Inter-rater reliability). Thus, a program that achieves 70% accuracy in classifying sentiment is doing nearly as well as humans, even though such accuracy may not sound impressive. If a program were "right" 100% of the time, humans would still disagree with it about 20% of the time, since they disagree that much about any answer.
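A minimal sketch of these measures with scikit-learn, computing raw agreement (accuracy) and per-class precision and recall; the gold and system labels below are invented.

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

human  = ["pos", "neg", "neg", "pos", "neg", "pos"]   # human judgments
system = ["pos", "neg", "pos", "pos", "neg", "neg"]   # classifier output

precision, recall, f1, _ = precision_recall_fscore_support(
    human, system, labels=["pos", "neg"])
print("agreement:", accuracy_score(human, system))    # 4 of 6 match
print("precision:", precision, "recall:", recall)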
On the other hand, computer systems make very different errors than human assessors, and thus the figures are not entirely comparable. For instance, a computer system will have trouble with negations, exaggerations, jokes, or sarcasm, which are typically easy for a human reader to handle: some errors a computer system makes will seem overly naive to a human. In general, the utility of sentiment analysis, as it is defined in academic research, for practical commercial tasks has been called into question, mostly because the simple one-dimensional model of sentiment from negative to positive yields rather little actionable information for a client worried about the effect of public discourse on, for example, brand or corporate reputation.
To better fit market needs, evaluation of sentiment analysis has moved to more task-based measures, formulated together with representatives from PR agencies and market research professionals. The focus in e.g. the RepLab evaluation data set is less on the content of the text under consideration and more on the effect of the text in question on brand reputation.
Because the evaluation of sentiment analysis is becoming more and more task-based, each implementation needs a separate training model to get a more accurate representation of sentiment for a given data set.
Web 2.0
The rise of social media such as blogs and social networks has fueled interest in sentiment analysis. With the proliferation of reviews, ratings, recommendations and other forms of online expression, online opinion has turned into a kind of virtual currency for businesses looking to market their products, identify new opportunities and manage their reputations. As businesses look to automate the process of filtering out the noise, understanding the conversations, identifying the relevant content and actioning it appropriately, many are now looking to the field of sentiment analysis. Further complicating the matter is the rise of anonymous social media platforms such as 4chan and Reddit. If Web 2.0 was all about democratizing publishing, then the next stage of the web may well be based on democratizing data mining of all the content that is being published.
One step towards this aim is accomplished in research. Several research teams in universities around the world currently focus on understanding the dynamics of sentiment in e-communities through sentiment analysis. The CyberEmotions project, for instance, recently identified the role of negative emotions in driving social networks discussions.
The problem is that most sentiment analysis algorithms use simple terms to express sentiment about a product or service. However, cultural factors, linguistic nuances, and differing contexts make it extremely difficult to turn a string of written text into a simple pro or con sentiment. The fact that humans often disagree on the sentiment of text illustrates how big a task it is for computers to get this right. The shorter the string of text, the harder it becomes.
Even though short text strings might be a problem, sentiment analysis within microblogging has shown that Twitter can be seen as a valid online indicator of political sentiment. Tweets' political sentiment demonstrates close correspondence to parties' and politicians' political positions, indicating that the content of Twitter messages plausibly reflects the offline political landscape. Furthermore, sentiment analysis on Twitter has also been shown to capture the public mood behind human reproduction cycles globally, as well as other problems of public-health relevance such as adverse drug reactions.
While sentiment analysis has been popular for domains where authors express their opinions rather explicitly ("the movie is awesome"), such as social media and product reviews, robust methods have only recently been devised for other domains where sentiment is strongly implicit or indirect. For example, in news articles, mostly due to the expected journalistic objectivity, journalists often describe actions or events rather than directly stating the polarity of a piece of information. Earlier approaches using dictionaries or shallow machine learning features were unable to catch the "meaning between the lines", but recently researchers have proposed a deep learning based approach and dataset that are able to analyze sentiment in news articles.
Application in recommender systems
For a recommender system, sentiment analysis has proven to be a valuable technique. A recommender system aims to predict the preference of a target user for an item. Mainstream recommender systems work on explicit data sets. For example, collaborative filtering works on the rating matrix, and content-based filtering works on the meta-data of the items.
In many social networking services or e-commerce websites, users can provide text reviews, comments, or feedback on items. This user-generated text provides a rich source of users' sentiments and opinions about numerous products and items. Potentially, for an item, such text can reveal both the related features/aspects of the item and the users' sentiments on each feature. The item's features/aspects described in the text play the same role as the meta-data in content-based filtering, but the former are more valuable for the recommender system. Since these features are broadly mentioned by users in their reviews, they can be seen as the most crucial features that can significantly influence the user's experience with the item, while the meta-data of the item (usually provided by the producers instead of the consumers) may ignore features that matter to the users. For different items with common features, a user may give different sentiments. Also, a feature of the same item may receive different sentiments from different users. Users' sentiments on the features can be regarded as a multi-dimensional rating score, reflecting their preference for the items.
Based on the features/aspects and the sentiments extracted from user-generated text, a hybrid recommender system can be constructed. There are two types of motivation to recommend a candidate item to a user. The first is that the candidate item shares numerous features with the user's preferred items; the second is that the candidate item receives high sentiment on its features. For a preferred item, it is reasonable to believe that items with the same features will have a similar function or utility, so these items will also likely be preferred by the user. On the other hand, for a feature shared by two candidate items, other users may express positive sentiment toward one while expressing negative sentiment toward the other. Clearly, the more highly evaluated item should be recommended to the user. Based on these two motivations, a ranking score combining similarity and sentiment rating can be constructed for each candidate item.
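A minimal sketch of such a combined ranking score. The linear weighting and every number below are assumptions chosen for illustration; real systems learn or tune the combination.

def combined_score(similarity, sentiment, alpha=0.5):
    # Both inputs are assumed normalized to [0, 1]; alpha balances them.
    return alpha * similarity + (1 - alpha) * sentiment

candidates = {
    "hotel_a": {"similarity": 0.9, "sentiment": 0.4},
    "hotel_b": {"similarity": 0.7, "sentiment": 0.8},
}
ranked = sorted(candidates, key=lambda c: combined_score(**candidates[c]),
                reverse=True)
print(ranked)  # ['hotel_b', 'hotel_a']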
Beyond the difficulty of sentiment analysis itself, applying sentiment analysis to reviews or feedback also faces the challenge of spam and biased reviews. One direction of work focuses on evaluating the helpfulness of each review: poorly written reviews or feedback are hardly helpful to a recommender system, and a review can be designed to hinder sales of a target product and thus harm the recommender system even if it is well written.
Researchers have also found that long and short forms of user-generated text should be treated differently. An interesting result shows that short-form reviews are sometimes more helpful than long-form ones, because it is easier to filter out the noise in a short-form text. For long-form text, the growing length of the text does not always bring a proportionate increase in the number of features or sentiments in the text.
Lamba & Madhusudhan introduce a nascent way to cater to the information needs of today's library users by repackaging the results of sentiment analysis of social media platforms like Twitter and providing them as a consolidated time-based service in different formats. Further, they propose a new way of conducting marketing in libraries using social media mining and sentiment analysis.
See also
Emotion recognition
Consumer sentiment
Stylometry
References
Natural language processing
Affective computing
Social media
Polling
Sociology of technology
Hare (computer virus)

The Hare Virus was a destructive computer virus which infected DOS and Windows 95 machines in August 1996. It was also known as Hare.7610, Krsna and HD Euthanasia.
Description
The virus was capable of infecting .COM and .EXE executable files, as well as the master boot record of hard disks and the boot sector on floppy disks. The virus was set to read the system date of the computer and activate on August 22 and September 22, at which time it would erase the hard disk in the computer and display the following message:
HDEuthanasia by Demon Emperor: Hare Krsna, hare, hare
Timing
The timing of the virus is controversial: even though there is consensus that it started spreading in New Zealand at the end of spring or the beginning of summer 1996, experts disagree about the precise timing of the start of the spread. After a little while, its effects started to show up in South Africa and Canada. Hare arrived in the United States in May 1996 and then continued spreading globally to Western and Eastern Europe.
See also
Timeline of computer viruses and worms
Comparison of computer viruses
References
External links
Hare Virus by Online VSUM.
Hare Krishna virus looming (The Augusta Chronicle)
DOS file viruses
Boot viruses