3D computer graphics differ from 2D computer graphics in that a three-dimensional representation of geometric data is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be for later display or for real-time viewing. Despite these differences, 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and primarily 3D applications may use 2D rendering techniques.
3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences: a 3D model is the mathematical representation of any three-dimensional object (either inanimate or living), and a model is not technically a graphic until it is visually displayed. Thanks to 3D printing, 3D models are not confined to virtual space. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations.
Overview
A 3D scene of 8 red glass balls
The process of creating 3D computer graphics can be divided sequentially into three basic phases: 3D modeling, which describes the shape of an object; layout and animation, which describe the placement and motion of objects within a scene; and 3D rendering, which produces an image of an object.
Modeling
Main article: 3D modeling
A 3D rendering with raytracing and ambient occlusion using Blender and Yafray
The model describes the shape of an object. The two most common sources of 3D models are those created on the computer by an artist or engineer using a 3D modeling tool, and those scanned into a computer from real-world objects. Models can also be produced procedurally or via physical simulation.
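To make the idea of procedural modeling concrete, here is a minimal Python sketch (purely illustrative, not tied to any particular modeling tool): a model's vertices are generated by a rule rather than placed by hand, in this case a ring of points approximating a circle.

```python
import math

def circle_vertices(radius, segments):
    """Generate (x, y, z) vertices of a flat ring with the given resolution."""
    return [
        (radius * math.cos(2 * math.pi * i / segments),
         radius * math.sin(2 * math.pi * i / segments),
         0.0)
        for i in range(segments)
    ]

# Six vertices approximating a unit circle in the z = 0 plane.
for v in circle_vertices(1.0, 6):
    print(tuple(round(c, 3) for c in v))
```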
Layout and animation
Main article: Computer animation
In ray tracing, the number of reflections that "rays" can take, as well as various other attributes, can be tailored to achieve a desired visual effect. Modeled and rendered with Ashlar Cobalt.
Before an object can be rendered, it must be placed within a scene. This defines the spatial relationships between objects in the scene, including location and size. Animation refers to the temporal description of an object, i.e., how it moves and deforms over time. Popular methods include keyframing, inverse kinematics, and motion capture, and these techniques are often used in conjunction with each other. As with modeling, physical simulation is another way of specifying motion.
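As an illustration of keyframing, the following Python sketch (an assumed, simplified scheme, not any animation package's API) stores an object's position at a few key times and linearly interpolates between them for every frame in between.

```python
def interpolate_keyframes(keyframes, t):
    """keyframes: list of (time, position) pairs sorted by time."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            # Linear blend between the two surrounding key positions.
            return tuple(a + alpha * (b - a) for a, b in zip(v0, v1))

# An object keyed at three positions over two seconds.
keys = [(0.0, (0.0, 0.0, 0.0)), (1.0, (2.0, 1.0, 0.0)), (2.0, (2.0, 1.0, 3.0))]
print(interpolate_keyframes(keys, 0.5))   # (1.0, 0.5, 0.0)
```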
Rendering
Main article: 3D rendering
Rendering converts a model into an image either by simulating light transport to get photorealistic images, or by applying some kind of style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is usually performed using 3D computer graphics software or a 3D graphics API. The process of converting the scene into a suitable form for rendering also involves 3D projection, which allows a three-dimensional image to be viewed in two dimensions.
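The projection step mentioned above can be illustrated with a short Python sketch (assuming a simple pinhole/perspective camera, not any particular renderer's API): a point in camera space is divided by its depth to obtain 2D image coordinates.

```python
def project(point, focal_length=1.0):
    """Perspective-project a camera-space point onto the image plane."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    return (focal_length * x / z, focal_length * y / z)

# Two points at different depths: the farther one lands closer to the centre.
print(project((1.0, 1.0, 2.0)))   # (0.5, 0.5)
print(project((1.0, 1.0, 4.0)))   # (0.25, 0.25)
```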
Distinct from photorealistic 2D graphics
Not all computer graphics that appear 3D are based on a wireframe model. 2D computer graphics with 3D photorealistic effects are often achieved without wireframe modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers. Visual artists may also copy or visualize 3D effects and manually render photorealistic effects without the use of filters. See also still life.
History
William Fetter was credited with coining the term Computer Graphics in 1960, to describe his work at Boeing. One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and hand — produced by Ed Catmull and Fred Parke at the University of Utah.
A graphics processing unit or GPU (also occasionally called a visual processing unit or VPU) is a dedicated graphics rendering device for a personal computer, workstation, or game console. Modern GPUs are very efficient at manipulating and displaying computer graphics, and their highly parallel structure makes them more effective than typical CPUs for a range of complex algorithms. A GPU can sit on a dedicated video card, or it can be integrated directly into the motherboard, as it is in more than 90% of desktop and notebook computers (although integrated GPUs are usually far less powerful than their add-in counterparts).[1]
A GPU implements a number of graphics primitive operations in a way that makes running them much faster than drawing directly to the screen with the host CPU. The most common operations for early 2D computer graphics include the BitBLT operation (which combines several bitmap patterns using a RasterOp), usually in special hardware called a "blitter", and operations for drawing rectangles, triangles, circles, and arcs. Modern GPUs also have support for 3D computer graphics, and typically include digital video-related functions.
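As an illustration of what a BitBLT does, the following Python sketch (purely illustrative, not any real blitter's interface) combines a small source bitmap into a destination bitmap using a per-pixel raster operation such as XOR.

```python
def bitblt(dest, src, x, y, raster_op=lambda d, s: d ^ s):
    """Combine src into dest at offset (x, y), applying raster_op per pixel."""
    for row, src_row in enumerate(src):
        for col, s in enumerate(src_row):
            dest[y + row][x + col] = raster_op(dest[y + row][x + col], s)

screen = [[0] * 8 for _ in range(4)]     # 8x4 one-bit "framebuffer"
sprite = [[1, 1], [1, 1]]                # 2x2 solid block
bitblt(screen, sprite, x=3, y=1)         # XOR the sprite onto the screen
for row in screen:
    print(row)
```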
History
Early 1980s
Modern GPUs are descended from the monolithic graphic chips of the early 1980s and 1990s. These chips had limited BitBLT support in the form of sprites (if they had BitBLT support at all), and usually had no shape-drawing support. Some GPUs could run several operations in a display list, and could use DMA to reduce the load on the host processor; an early example was the ANTIC co-processor used in the Atari 800 and Atari 5200. In the late 1980s and early 1990s, high-speed, general-purpose microprocessors became popular for implementing high-end GPUs. Several high-end graphics boards for PCs and computer workstations used TI's TMS340 series (a 32-bit CPU optimized for graphics applications, with a frame buffer controller on-chip) to implement fast drawing functions; these were especially popular for CAD applications. Also, many laser printers from Apple shipped with a PostScript raster image processor (a special case of a GPU) running on a Motorola 68000-series CPU, or a faster RISC CPU like the AMD 29000 or Intel i960. A few very specialised applications used digital signal processors for 3D support, such as Atari Games' Hard Drivin' and Race Drivin' games.
As chip process technology improved, it eventually became possible to move drawing and BitBLT functions onto the same board (and, eventually, into the same chip) as a regular frame buffer controller such as VGA. These cut-down "2D accelerators" were not as flexible as microprocessor-based GPUs, but were much easier to make and sell.
1980s
The Commodore Amiga was the first mass-market computer to include a blitter in its video hardware, and IBM's 8514 graphics system was one of the first PC video cards to implement 2D primitives in hardware.
The Amiga was unique, for the time, in that it featured what would now be recognized as a full graphics accelerator, offloading practically all video generation functions to hardware, including line drawing, area fill, block image transfer, and a graphics coprocessor with its own (though primitive) instruction set. Prior to this (and for quite some time after on most systems), a general-purpose CPU had to handle every aspect of drawing the display.
1990s
By the early 1990s, the rise of Microsoft Windows sparked a surge of interest in high-speed, high-resolution 2D bitmapped graphics (which had previously been the domain of Unix workstations and the Apple Macintosh). For the PC market, the dominance of Windows meant PC graphics vendors could now focus development effort on a single programming interface, Graphics Device Interface (GDI).
In 1991, S3 Graphics introduced the first single-chip 2D accelerator, the S3 86C911 (which its designers named after the Porsche 911 as an indication of the speed increase it promised). The 86C911 spawned a host of imitators: by 1995, all major PC graphics chip makers had added 2D acceleration support to their chips. By this time, fixed-function Windows accelerators had surpassed expensive general-purpose graphics coprocessors in Windows performance, and these coprocessors faded away from the PC market.
Throughout the 1990s, 2D GUI acceleration continued to evolve. As manufacturing capabilities improved, so did the level of integration of graphics chips. Video acceleration became popular as standards such as VCD and DVD arrived, and the Internet grew in popularity and speed. Additional application programming interfaces (APIs) arrived for a variety of tasks, such as Microsoft's WinG graphics library for Windows 3.x, and their later DirectDraw interface for hardware acceleration of 2D games within Windows 95 and later.
In the early and mid-1990s, CPU-assisted real-time 3D graphics were becoming increasingly common in computer and console games, which led to increasing public demand for hardware-accelerated 3D graphics. Early examples of mass-marketed 3D graphics hardware can be found in fifth-generation video game consoles such as the PlayStation and Nintendo 64. In the PC world, notable failed first tries for low-cost 3D graphics chips were the S3 ViRGE, ATI Rage, and Matrox Mystique. These chips were essentially previous-generation 2D accelerators with 3D features bolted on. Many were even pin-compatible with the earlier-generation chips for ease of implementation and minimal cost. Initially, high-performance 3D graphics were possible only with separate add-on boards dedicated to accelerating 3D functions (and lacking 2D GUI acceleration entirely) such as the 3dfx Voodoo. However, as manufacturing technology again progressed, video, 2D GUI acceleration, and 3D functionality were all integrated into one chip. Rendition's Verite chipsets were the first to do this well enough to be worthy of note.
As DirectX advanced steadily from a rudimentary (and perhaps tedious) API for game programming to become one of the leading 3D graphics programming interfaces, 3D accelerators evolved at a remarkable pace. Direct3D 5.0 was the first version of the burgeoning API to really dominate the gaming market and stamp out many of the proprietary interfaces. Direct3D 7.0 introduced support for hardware-accelerated transform and lighting (T&L). 3D accelerators moved beyond being simple rasterizers, adding another significant hardware stage to the 3D rendering pipeline. The NVIDIA GeForce 256 (also known as NV10) was the first card on the market with this capability. Hardware transform and lighting set the precedent for the later pixel shader and vertex shader units, which were far more flexible and programmable.
2000 to present
With the advent of the DirectX 8.0 API and similar functionality in OpenGL, GPUs added programmable shading to their capabilities. Each pixel could now be processed by a short program that could include additional image textures as inputs, and each geometric vertex could likewise be processed by a short program before it was projected onto the screen. NVIDIA was first to produce a chip capable of programmable shading, the GeForce 3 (core named NV20). By October 2002, with the introduction of the ATI Radeon 9700 (also known as R300), the world's first Direct3D 9.0 accelerator, pixel and vertex shaders could implement looping and lengthy floating-point math, and in general were quickly becoming as flexible as CPUs and orders of magnitude faster for image-array operations. Pixel shading is often used for effects such as bump mapping, which adds texture to make an object look shiny, dull, rough, or even round or extruded.[2]
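To illustrate what such a per-pixel program does, here is a tiny software "pixel shader" in Python (purely illustrative, not written in any GPU shading language): a normal read from a bump map perturbs a simple diffuse lighting calculation.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def shade_pixel(base_color, normal, light_dir):
    """Lambertian (diffuse) shading: brightness = max(0, N . L)."""
    n, l = normalize(normal), normalize(light_dir)
    diffuse = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * diffuse for c in base_color)

light = (0.0, 0.0, 1.0)
flat_normal = (0.0, 0.0, 1.0)        # unperturbed surface normal
bumped_normal = (0.3, 0.2, 0.9)      # normal perturbed by a bump-map entry
print(shade_pixel((1.0, 0.2, 0.2), flat_normal, light))
print(shade_pixel((1.0, 0.2, 0.2), bumped_normal, light))
```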
Today, parallel GPUs have begun making computational inroads against the CPU, and a subfield of research, dubbed GPGPU (general-purpose computing on GPUs), has found its way into fields as diverse as oil exploration, scientific image processing, and even stock options pricing. There is increased pressure on GPU manufacturers from "GPGPU users" to improve hardware design, usually focusing on adding more flexibility to the programming model.[citation needed]
The newest version of DirectX, DirectX 10, is currently bundled with Microsoft Windows Vista.
GPU companies
There have been many companies producing GPUs over the years, under numerous brand names. The market is currently dominated by ATI (manufacturer of the Radeon graphics chip line) and NVIDIA (manufacturer of the GeForce graphics chip line). Intel also produces GPUs that are built into its motherboard chipsets, such as the 915 and 945. These chips are often less than optimal for playing 3D games, and fixes often have to be applied. Although most games will play on the Intel chips (except for the few that are specifically coded not to run on them), frame rates often become unplayable, even at the lowest settings. The 965 chipset is marginally faster, and finally includes hardware T&L, but the integrated nature of the chipset still imposes a large performance hit.
Computational functions
Modern GPUs use most of their transistors to do calculations related to 3D computer graphics. They were initially used to accelerate the memory-intensive work of texture mapping and rendering polygons, later adding units to accelerate geometric calculations such as translating vertices into different coordinate systems. Recent developments in GPUs include support for programmable shaders which can manipulate vertices and textures with many of the same operations supported by CPUs, oversampling and interpolation techniques to reduce aliasing, and very high-precision color spaces. Because most of these computations involve matrix and vector operations, engineers and scientists have increasingly studied the use of GPUs for non-graphical calculations.
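A short Python sketch of the kind of matrix and vector work described above (assuming the common 4×4 homogeneous-coordinate convention, not any specific API): a vertex is moved into a different coordinate system by a single matrix multiply.

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

# Translation by (5, 0, 0) expressed as a homogeneous transform matrix.
translate = [
    [1, 0, 0, 5],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]
vertex = (1.0, 2.0, 3.0, 1.0)          # model-space position, w = 1
print(mat_vec(translate, vertex))      # (6.0, 2.0, 3.0, 1.0) in world space
```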
In addition to the 3D hardware, today's GPUs include basic 2D acceleration and frame buffer capabilities (usually with a VGA compatibility mode). In addition, most GPUs made since 1995 support the YUV color space and hardware overlays (important for digital video playback), and many GPUs made since 2000 support MPEG primitives such as motion compensation and iDCT. Recent graphics cards even decode high-definition video on the card, taking some load off the central processing unit.
GPU forms
Dedicated graphics cards
The most powerful class of GPUs typically interface with the motherboard by means of an expansion slot such as PCI Express (PCIE) or Accelerated Graphics Port (AGP) and can usually be replaced or upgraded with relative ease, assuming the motherboard is capable of supporting the upgrade. However, a dedicated GPU is not necessarily removable, nor does it necessarily interface with the motherboard in a standard fashion. The term "dedicated" refers to the fact that dedicated graphics cards have RAM that is dedicated to the card's use, not to the fact that most dedicated GPUs are removable. Dedicated GPUs for portable computers are most commonly interfaced through a non-standard and often proprietary slot due to size and weight constraints. Such ports may still be considered AGP or PCI Express, even if they are not physically interchangeable with their counterparts.
Multiple cards can work together to draw a single image, so that pixel throughput can be doubled and antialiasing can be set to a higher quality. If the screen is split into left and right halves, each card can cache the textures and geometry for its own half, as in the sketch below.
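A minimal Python sketch of that split-frame idea (an assumption about the scheme described above, not any vendor's API): each card renders its half of the screen and the halves are joined into the final frame.

```python
WIDTH, HEIGHT = 8, 4

def render_half(side):
    # Stand-in for one GPU: fill its half of the frame with a marker value.
    value = 0 if side == "left" else 1
    return [[value] * (WIDTH // 2) for _ in range(HEIGHT)]

left, right = render_half("left"), render_half("right")
frame = [l_row + r_row for l_row, r_row in zip(left, right)]  # join the halves
for row in frame:
    print(row)
```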
Integrated graphics solutions
Integrated graphics solutions, or shared graphics solutions, are graphics processors that utilize a portion of a computer's system RAM rather than dedicated graphics memory. Such solutions are typically far less expensive to implement than dedicated graphics solutions, but tend to be far less capable and are generally considered unfit for modern games or graphically intensive programs such as Adobe Flash. (Examples of such IGPs would be offerings from SiS and VIA circa 2004.)[3] However, today's integrated solutions such as Intel's GMA X3000 (Intel G965), AMD's Radeon X1250 (AMD 690G) and NVIDIA's GeForce 7050 PV (NVIDIA nForce 630a) are more than capable of handling 2D graphics from Adobe Flash or low-stress 3D graphics. Of course, the aforementioned GPUs still struggle with high-end video games. Modern desktop motherboards often include an integrated graphics solution and have expansion slots available to add a dedicated graphics card later.
Because a GPU is extremely memory intensive, an integrated solution with no dedicated video memory finds itself competing with the CPU for the comparatively slow system RAM. System RAM may provide 2 GB/s to 12.8 GB/s of bandwidth, while dedicated GPUs enjoy between 10 GB/s and 88 GB/s depending on the model. Older integrated graphics chipsets lacked hardware transform and lighting, but newer ones include it.
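Where such bandwidth figures come from can be shown with a small worked example (the formula is the standard transfers-per-second times bytes-per-transfer product; the example parts and clock rates below are illustrative assumptions).

```python
def bandwidth_gb_s(mega_transfers_per_s, bus_width_bits):
    """Peak bandwidth = transfers per second x bytes moved per transfer."""
    return mega_transfers_per_s * 1e6 * (bus_width_bits / 8) / 1e9

# Dual-channel DDR2-800 system memory: 800 MT/s across an effective 128-bit bus.
print(bandwidth_gb_s(800, 128))    # 12.8 GB/s
# A dedicated card with 1400 MT/s GDDR3 on a 256-bit bus (illustrative numbers).
print(bandwidth_gb_s(1400, 256))   # 44.8 GB/s
```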
Hybrid solutions
This newer class of GPUs competes with integrated graphics in the low-end PC and notebook markets. The most common implementations of this are ATI's HyperMemory and NVIDIA's TurboCache. Hybrid graphics cards are somewhat more expensive than integrated graphics, but much less expensive than dedicated graphics cards. They also share memory with the system, but have a small amount of memory on board to compensate for the high latency of system RAM. Technologies within PCI Express make this possible. While these solutions are sometimes advertised as having as much as 512 MB of RAM, this refers to how much can be shared with the system memory.
Stream processing/GPGPU
Main articles: GPGPU and Stream processing
A newer application for GPUs is stream processing and the general-purpose graphics processing unit (GPGPU). This concept turns the massive floating-point computational power of a modern graphics accelerator's shader pipeline into general-purpose computing power, as opposed to being dedicated solely to graphical operations. In certain applications requiring massive vector operations, this can yield several orders of magnitude higher performance than a conventional CPU. The two largest discrete GPU designers, ATI and NVIDIA, are beginning to pursue this new market with an array of applications. ATI has teamed with Stanford University to create a GPU-based client for its Folding@Home distributed computing project that in certain circumstances yields results forty times faster than the conventional CPUs traditionally used in such applications.
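A minimal Python sketch of the stream-processing idea (purely illustrative; it is not the Folding@Home client or any GPU API): the same small kernel is applied independently to every element of a large data stream, which is exactly the pattern a GPU's many shader units can execute in parallel.

```python
def kernel(x, y, a=2.0):
    """A tiny per-element "shader": a*x + y."""
    return a * x + y

xs = [float(i) for i in range(8)]
ys = [1.0] * 8
out = list(map(kernel, xs, ys))    # on a GPU, each element would map to a thread
print(out)
```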
PlayStation
According to the book "Game Over" by David Sheff, the first conceptions of the PlayStation date back to 1986. Nintendo had been attempting to work with disk technology since the Famicom, but the medium had problems. Its rewritable magnetic nature could be easily erased (thus leading to a lack of durability), and the disks were a piracy risk. Consequently, when details of CD-ROM/XA (an extension of the CD-ROM format that combines compressed audio, visual and computer data, allowing all to be accessed simultaneously) came out, Nintendo was interested. CD-ROM/XA was being simultaneously developed by Sony and Philips. Nintendo approached Sony to develop a CD-ROM add-on, tentatively titled the "SNES-CD". A contract was struck, and work began. Nintendo's choice of Sony was due to a prior dealing: Ken Kutaragi, the person who would later be dubbed "The Father of the PlayStation", was the individual who had sold Nintendo on using the Sony SPC-700 processor as the eight-channel ADPCM sound synthesis set in the Super Famicom/SNES console through an impressive demonstration of the processor's capabilities.
Sony also planned to develop another, Nintendo compatible, Sony-branded console, but one which would be more of a home entertainment system playing both Super Nintendo cartridges and a new CD format which Sony would design. This was also to be the format used in SNES-CD discs, giving a large degree of control to Sony despite Nintendo's leading position in the video gaming market.
In 1991, the SNES-CD was to be announced at the June Consumer Electronics Show (CES). However, when Hiroshi Yamauchi read the original 1988 contract between Sony and Nintendo, he realized that the earlier agreement essentially handed Sony complete control over any and all titles written on the SNES CD-ROM format. Yamauchi was furious; deeming the contract totally unacceptable, he secretly canceled all plans for the joint Nintendo-Sony SNES-CD attachment. Indeed, instead of announcing their partnership, at 9 a.m. on the day of the CES, Nintendo chairman Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips, and was planning to abandon all the previous work Nintendo and Sony had accomplished. Lincoln and Minoru Arakawa had, unbeknownst to Sony, flown to Philips headquarters in Europe and formed an alliance of a decidedly different nature—one that would give Nintendo total control over its licenses on Philips machines.
The 9 a.m. CES announcement was a complete shock. Not only was it a complete surprise to the show goers (Sony had only just the previous night been optimistically showing off the joint project under the "Play Station" brand), but it was seen by many in the Japanese business community as a massive betrayal: a Japanese company snubbing another Japan-based company in favor of a European one was considered absolutely unthinkable in Japanese business.
After the collapse of the joint project, Sony considered halting its research, but ultimately the company decided to use what it had developed so far and make it into a complete, stand-alone console. This led Nintendo to file a lawsuit claiming breach of contract and to attempt, in U.S. federal court, to obtain an injunction against the release of the PlayStation, on the grounds that Nintendo owned the name.[citation needed] The federal judge presiding over the case denied the injunction. Thus, in October 1991, the first incarnation of the new Sony PlayStation was revealed; it is theorized that only 200 or so of these machines were ever produced.
By the end of 1992, Sony and Nintendo reached a deal whereby the "Sony Play Station" would still have a port for SNES games, but Nintendo would own the rights and receive the bulk of the profits from the games, and the SNES would continue to use the Sony-designed audio chip. However, at this point, Sony realized that the SNES technology was getting long in the tooth, and the next generation of console gaming was around the corner: work began in early 1993 on reworking the "Play Station" concept to target a new generation of hardware and software; as part of this process the SNES cartridge port was dropped, the space between the names was removed, and the PlayStation was born.
Launch
The PlayStation was launched in Japan on December 3, 1994, the United States on September 9, 1995, Europe on September 29, 1995, and Asia-Pacific in November 1995. In America, Sony enjoyed a very successful launch with titles of almost every genre including Battle Arena Toshinden, Twisted Metal, Warhawk, Philosoma, and Ridge Racer. Almost all of Sony's and Namco's launch titles went on to produce numerous sequels.
The launch price in the American market was US$299.00,[3] a price point later used by its successor, the PlayStation 2.
The PlayStation was also able to generate interest with a unique series of advertising campaigns. Many of the ads released at the time of launch were full of ambiguous content which had many gamers rabidly debating their meanings. The most well-known launch ads include the "Enos Lives" campaign, and the "U R Not e" ads (the "e" in "U R Not e" was always colored in red, to symbolize the word "ready", and the "Enos" meant "ready Ninth Of September", the U.S. launch date). The Enos ad could also be read as Sony written backward with phonetic sound of "E" replacing the "y". It is believed that these ads were an attempt to play off the gaming public's suspicion towards Sony as an unknown, untested entity in the video game market. The PlayStation 3 slogan, "PLAY B3YOND", resembles this slogan, as the 3 is red.
The PlayStation logo was designed by Manabu Sakamoto, who also designed the logo for Sony's VAIO computer products.
Titles
Well-known titles on the PlayStation include Castlevania: Symphony of the Night, Crash Bandicoot, Final Fantasy VII, Gran Turismo, Grand Theft Auto, Legacy of Kain: Soul Reaver, Metal Gear Solid, Parasite Eve, Resident Evil, Silent Hill, Tony Hawk's Pro Skater, Spyro the Dragon, Tekken, Tomb Raider, Twisted Metal, and Wipeout. The very last game for the system was FIFA Football 2005. As of May 18, 2004, Sony had shipped 100 million PlayStation and PSone consoles throughout the world. As of March 2007, 7,915 software titles had been released worldwide (counting games released in multiple regions as separate titles),[4] with cumulative software shipments of 962 million units.[5] The last German release for the PlayStation was Schnappi: Das kleine Krokodil in 2005, and the last releases in the United States and Japan were FIFA Soccer 2005 in 2004 and Black Matrix Zero OO in 2004, respectively.[citation needed]
Production run
Having lasted over 11 years, the PlayStation has enjoyed one of the longest production runs in the video game industry. This exceptionally long life cycle has since been used as evidence that PlayStation hardware can remain on the market for ten years or more. On March 23, 2006, Sony announced the end of production.[6]
Variants
The PlayStation went through a number of variants during its production run, each accompanied by a change in the part number. From an external perspective, the most notable change was the gradual reduction in the number of external connectors on the unit. This started very early on—the original Japanese launch units (SCPH-1000) had an S-Video port, which was removed on the next release. This also led to the strange situation where the US and European launch units had the same part number series (SCPH-100x) as the Japanese launch units, but had different hardware (Rev. C silicon and no S-Video port)—they were the same as the Japanese SCPH-3000, so for consistency should have been SCPH-3001 and SCPH-3002 (this numbering was used for the Yaroze machines, which were based on the same hardware and numbered DTL-H3000, DTL-H3001, and DTL-H3002). This series of machines had a reputation for CD drive problems—the optical pickup sled was made of thermoplastic, and eventually developed wear spots that moved the laser into a position where it was no longer parallel with the CD surface—a modification was made that replaced the sled with a die-cast one with hard nylon inserts, which corrected the problem.
With the release of the next series (SCPH-500x), the numbers moved back into sync. A number of changes were made to the unit internally (CD drive relocated, shielding simplified, PSU wiring simplified) and the RCA jacks and RFU power connectors were removed from the rear panel. This series also contained the SCPH-550x and SCPH-555x units, but these appear to have been bundle changes rather than actual hardware revisions.
These were followed by the SCPH-700x and SCPH-750x series—they are externally identical to the SCPH-500x machines, but have internal changes made to reduce manufacturing costs (for example, the system RAM went from 4 chips to 1, and the CD controller went from 3 chips to 1).
The final revision to the original PlayStation was the SCPH-900x series—these had the same hardware as the SCPH-750x machines with the exception of the removal of the parallel port and a slight reduction in the size of the PCB. The removal of the parallel port was probably partly because no official add-on had ever been released for it, and partly because it was being used to connect cheat cartridges that could be used to defeat the copy protection.
The PSone was based on substantially the same hardware as the SCPH-750x and 900x, but had the serial port deleted, the controller / memory card ports moved to the main PCB and the power supply replaced with a DC-DC converter that was also on the main PCB.
With the early units, many gamers experienced skipping full-motion video or dreaded physical "ticking" noises coming from their PlayStations. The problem appears to have come from poorly placed vents leading to overheating in some environments—the plastic moldings inside the console would warp very slightly and create knock-on effects with the laser assembly. The solution was to ensure the console was set on a surface that dissipated heat efficiently, in a well-ventilated area, or to raise the unit slightly by propping something under its edges. A common fix for already affected consoles was to turn the PlayStation sideways or upside-down (thereby using gravity to cancel the effects of the warped interior), although some gamers smacked the lid of the PlayStation to make a game load or work.
Sony then released a version dubbed "Dual Shock", which included a controller with two analog sticks and a built-in force-feedback feature.
Another version that was colored blue (as opposed to regular console units that were grey in color) was available to game developers and select press. Later versions of this were colored green—on a technical level, these units were almost identical to the retail units, but had a different CD controller in them that did not require the region code found on all pressed disks, since they were intended to be used with CD-R media for debugging. This also allowed the use of discs from different regions, but this was not officially supported; different debug stations existed for each region. The two different color cases were not cosmetic—the original blue debug station (DTL-H100x, DTL-H110x) contained "Revision B" silicon, the same as the early retail units (these units had silicon errata that needed software workarounds), the green units (DTL-H120x) had Rev. C hardware. As part of the required tests, you had to test your title on both. Contrary to popular belief, the RAM was the same as the retail units at 2 MB. The firmware was nearly identical—the only significant change was that debug printf()s got sent to the serial port if the title didn't open it for communications—this used a DTL-H3050 serial cable (the same as the one used for the Yaroze).
A white version was also produced that had the ability to play VCDs—this was only sold in Asia, since that format never really caught on anywhere else. From a developer perspective, the white PSX could be treated exactly like any other NTSC:J PlayStation.
Hacks
A number of these units appeared on the secondary market and were popular because they would run games from any region as well as CD-R copies, which tended to result in their commanding high prices. All the blue units tend to have CD problems, but the DTL-H110x units (with an external PSU block) are significantly more reliable than the original DTL-H100x ones.
"Chipped" Consoles
The installation of a modchip allowed the PlayStation's capabilities to be expanded, and several options were made available. By the end of the system's life cycle, almost anyone with minimal soldering experience was able to perform the modification. Such a modification allowed the playing of games from other regions, such as PAL titles on an NTSC console, as well as illicit copies of original games without restriction. Modchips also allow the playing of games recorded on a regular CD-R. This created a wave of games developed without official approval using free GNU compiler tools, as well as the illegal reproduction of original discs. With the introduction of such devices the console was very attractive to programmers and pirates alike.
Individuals who insisted on creating copies of games that would play like their original counterparts faced many issues at the time, as the discs produced by Sony were designed to be difficult to copy—and impossible to copy onto recordable media. Not only did the original discs have a specific black tint to them, they were mastered with a specific wobble in the lead-in—when amplified and sliced, this contained a 4-character sequence that was checked by the CD-ROM drive's mechacon chip, and the drive would only accept the disc if it was correct (this string varied depending on the region of the disc—"SCEI" for NTSC:J machines, "SCEA" for NTSC:U/C machines, "SCEE" for PAL machines and "SCEW" for the Net Yaroze). Since the tracking pattern is pressed into the disc at the time of manufacture and CD-Rs have a clean spiral, this cannot be reproduced on a CD-ROM recorder. Some companies (notably Datel) did manage to produce discs that booted on unmodified retail units, but this was beyond the average pirate. The other issue was that most PC drives used Mode 1 or Mode 2/Form 1 (2048 bytes/sector), while the PSX used a mixed-mode format with most data in Mode 2/Form 1 and streaming audio/video data in Mode 2/Form 2—which a lot of CD-R drives at the time could not handle well. Even after accurate copies were made, a modchip was still needed to send the correct code to the CD controller to enable the disc to be read (if a disc failed the security checks, it could be played as an audio CD, but the CD controller would reject any attempt at data transfers from it).
The creation and mass-production of these inexpensive modchips, coupled with their ease of installation, marked the beginning of widespread console videogame piracy. Coincidentally, CD-ROM burners were made available around this time. Prior to the PlayStation, the reproduction of copyrighted material for gaming consoles was restricted to either enthusiasts with exceptional technical ability, or others that had access to CD manufacturers. With this console, amateurs could replicate anything Sony was producing for a mere fraction of the MSRP.
Net Yaroze
A version of the PlayStation called the Net Yaroze was also produced. It was more expensive than the original PlayStation, colored black instead of the usual gray, and, most importantly, came with tools and instructions that allowed a user to program PlayStation games and applications without the need for a full developer suite, which cost many times the price of a PlayStation and was only available to approved video game developers. Naturally, the Net Yaroze lacked many of the features the full developer suite provided. Programmers were also limited by the 2 MB of total game space that the Net Yaroze allowed, meaning the entire game had to fit into the 2 MB of system RAM; users could not officially make actual game discs. The amount of space may seem small, but games like Ridge Racer ran entirely from system RAM (except for the streamed music tracks). It was unique in that it was the only officially retailed Sony PlayStation with no regional lockout; it would play games from any territory.
PSone with LCD screen and a DualShock controller
PSone
The PSone (also PSOne, PS one, or PS1), launched in 2000, is Sony's smaller (and redesigned) version of its PlayStation video game console. The PSone is about one-third smaller than the original PlayStation (38 mm × 193 mm × 144 mm versus 45 mm × 260 mm × 185 mm). It was released on July 7, 2000,[7] and went on to outsell all other consoles—including Sony's own brand-new PlayStation 2—throughout the remainder of the year. Sony also released a small LCD screen and an adaptor to power the unit for use in cars. The PSone is fully compatible with all PlayStation software. The PlayStation is now officially abbreviated as the "PS1" or "PSone." There were three differences between the PSone and the original: the first was a cosmetic change to the console, the second was the home menu's graphical user interface, and the third was added protection against modchips, achieved by changing the internal layout so that previous-generation modchip devices were unusable. The PSone also lacks the original PlayStation's serial port, which allowed multiple consoles to be hooked up for multi-TV multiplayer. The serial port could also be used for an external modchip, which may have been why it was removed, although size constraints may also be to blame.
Advanced Micro Devices (AMD)
AMD is publicly traded on the NYSE under the symbol AMD. Its market capitalization was around US$13 billion at the end of 2005.
Early AMD 8080 Processor (AMD AM9080ADC / C8080A), 1977
The company started as a producer of logic chips in 1969, then entered the RAM chip business in 1975. That same year, it introduced a reverse-engineered clone of the Intel 8080 microprocessor. During this period, AMD also designed and produced a series of bit-slice processor elements (Am2900, Am29116, Am293xx) which were used in various minicomputer designs.
During this time, AMD attempted to embrace the perceived shift towards RISC with its own AMD 29K processor, and it attempted to diversify into graphics and audio devices as well as flash memory. While the AMD 29K survived as an embedded processor and AMD continues to make industry-leading flash memory, AMD was not as successful with its other endeavours. AMD decided to switch gears and concentrate solely on Intel-compatible microprocessors and flash memory. This put the company in direct competition with Intel for x86-compatible processors and their flash memory secondary markets.
8086, 80286, 80386, Am486
AMD 80286 1982
In February 1982, AMD signed a contract with Intel, becoming a licensed second-source manufacturer of 8086 and 8088 processors. IBM wanted to use the Intel 8088 in its IBM PC, but IBM's policy at the time was to require at least two sources for its chips. AMD later produced the 80286, or 286, under the same arrangement, but Intel cancelled the agreement in 1986, and refused to hand over technical details of the i386 part. The growing popularity of the PC clone market meant Intel could produce CPUs on its own terms, rather than IBM's.
AMD challenged this decision, and subsequently won under arbitration. A long legal dispute followed, ending in 1991 when the Supreme Court of California sided with AMD and forced Intel to pay over $1 billion in compensation for violation of contract. Subsequent legal disputes centered on whether AMD had legal rights to use derivatives of Intel's microcode. Rulings were made in both directions. In the face of uncertainty, AMD was forced to develop "clean room" versions of Intel code. In this fashion, one engineering team described the function of the code, and a second team without access to the source code itself had to develop microcode that performed the same functionality.
In 1991 AMD released the Am386, its clone of the Intel 80386 processor. It took less than a year for AMD to sell a million units. AMD's 386DX-40 was very popular with smaller, independent clone manufacturers. AMD followed in 1993 with the Am486. Both sold at a significantly lower price than the Intel versions. The Am486 was used by a number of large OEMs, including Compaq, and proved popular, but again it was just a clone of Intel's processor technology. As product cycles shortened in the PC industry, cloning Intel's products became an ever less viable strategy for AMD, as it meant its technology would always be behind Intel's.
On December 30, 1994, the Supreme Court of California finally formally denied AMD rights to use the i386's microcode. Afterwards, AMD and Intel concluded an agreement, the details of which remain largely secret, which gave AMD the right to produce and sell microprocessors containing the microcode of the Intel 286, 386, and 486. The agreement appears to allow for full cross-licensing of patents and some copyrights, allowing each partner to use the other's technological innovations without charge. Whatever the details, no significant legal action resulted between AMD and Intel (until the 2005 antitrust suits in Japan and the U.S.), and the agreement evidently provided a "clean break".
K5
AMD's first completely in-house processor was the K5, launched belatedly in 1995. The "K" was a reference to "Kryptonite", which in comic book lore was the only substance that could harm Superman (a clear reference to Intel, which was dominant in the market).
It was intended to compete directly with the Intel Pentium CPU, which had been released in 1993, but architecturally it had more in common with the newly released Pentium Pro than with the Pentium or Cyrix's 6x86, decoding x86 instructions into micro-ops and executing them on a RISC core. There were a number of problems. Many consumers were upset to learn that the clock speed of their processor did not match the PR rating used to label some of the parts, and this was especially obvious at boot time, when the clock speed was posted to the main screen on many systems.
More tellingly, the K5 couldn't match the 6x86's integer performance, nor the Pentium's FPU performance. AMD tended to use benchmarks for its rating system that avoided FPU-intensive tasks. This, combined with the large die size and the fact that the design scaled badly, doomed the K5 to near-total failure in the marketplace. To its credit, the K5 did not suffer from the compatibility problems of the 6x86, nor did it run as hot.
While the K5 was arguably more advanced than the classic Pentium technologically, with modern features such as out-of-order execution and a RISC micro-op core, AMD was roughly two years behind Intel. Missing schedule deadlines and a lack of manufacturing expertise in scaling designs would continue to plague AMD until the K7. With a new fabrication plant in Austin, the company could not afford to wait for another in-house design.
NexGen / K6
Image:AMD-K6-2-300.jpg
AMD-K6-2-300
In 1996, AMD purchased NexGen specifically for the rights to their Nx series of x86 compatible processors. It is fair to say the technology gained in this acquisition saved AMD, which is somewhat ironic when one considers NexGen had been founded by ex-Intel employees.
AMD gave the NexGen design team their own building, left them alone, and gave them time and money to rework the Nx686. The result was branded the K6 processor, introduced in 1997. The redesign included a feedback dynamic instruction reordering mechanism, MMX instructions, and added the missing floating point unit (FPU). It was also made pin-compatible with Intel's Pentium, enabling it to be used in the widely available "Socket 7"-based motherboards. Like the Nx686 and Nx586 before it, the K6 translated the Pentium compatible x86 instruction set to RISC-like micro-instructions. In the following year, AMD released the K6-2 which added a set of floating point multimedia instructions called 3DNow!, preceding Intel's SSE instructions, as well as a new socket standard called "Super Socket 7", that extended the front side bus frequency from 66 to 100 MHz.
In January 1999, the final iteration of the K6-x series, the 450 MHz K6-III, was extremely competitive with Intel's top-of-the-line chips. This chip was essentially a K6-2 with 256 kilobytes of full-speed level 2 cache integrated into the core and a better branch prediction unit. While it matched, and generally beat, the Pentium II/III in integer operations, the FPU was a non-pipelined serial design and could not compete with Intel's more advanced FPU architecture. Although 3DNow! could theoretically compensate for this weakness, few game developers made use of it, the most notable exception being id Software's Quake II.
Throughout its lifetime, the K6 processor came close to, but never equalled, the performance of processor offerings from Intel. While there were brief periods when AMD announced a clock speed advantage, volume availability of products was limited as AMD suffered from manufacturing and yield problems. Furthermore, having deviated from the official Intel motherboard specifications with the Super Socket 7 format, the motherboards that worked with the K6 were of varying quality, especially with regard to implementation of the AGP graphics specification.
Overall the K6 proved popular with consumers, especially in markets outside North America, offering decent performance and a comparatively low price. But the problems surrounding the platform, and lack of availability for the announced high end parts, failed to establish AMD as a player in the corporate market. Intel responded to AMD's lower prices with the lower budget "Celeron" version of their Pentiums. While the Celerons were not as popular as Intel had hoped, this effectively left AMD struggling with low margins, chasing the low end of the market.
Athlon / K7
Image:Amd athlon classic.jpg
AMD Athlon "Classic" SlotA
It was clear that if AMD was to survive, the company had to change strategy. CEO and founder Jerry Sanders recognized this, and developed a famous "Virtual Gorilla" strategy. This used strategic industry partnerships to enable AMD to compete with Intel on a more equal technological footing.
The fruits of this were shown in August 1999, when AMD released the Athlon (K7) processor. Notably, the design team was led by Dirk Meyer, one of the lead engineers on the DEC Alpha project. Jerry Sanders had approached many of the engineering staff to work for AMD as DEC wound the project down, in this fashion acquiring a genuine world-class, enterprise-level processor design team for a bargain-basement price. It should be noted, though, that the Athlon design team included those who had worked on both the K5 and K6.
The Athlon had an advanced micro-architecture geared towards overall performance, with a notably powerful FPU. Compared to the P6, the Athlon was superior, solving many of the problems and bottlenecks inherent in the Intel design, and it delivered higher average per-clock execution throughput. The fundamental reason such a large design discrepancy was possible is that the original P6 had a much smaller transistor budget, since it was fabricated on a much earlier process than the Athlon. Early samples of the Athlon had branch predictor problems as well as low clock rates, but when it shipped with all architectural fixes at an amazing (for the time) 650 MHz, it sent shockwaves through the industry.
Intel was immediately forced into a panicked internal re-design of the P6 core, which fixed many of the pipeline stalls that compromised its performance. The result was called the "Coppermine" revision. However, the rushed nature of the work put enormous pressure on Intel's manufacturing facilities and, even after it was announced, availability of the improved Coppermine chips was poor.
In comparison, AMD found processor yields exceeded expectations. As a result, AMD announced 900-MHz and 1-GHz Athlons in early March 2000, and delivered them in volume that same month, again surprising the industry. Intel announced a 1-GHz Pentium a few days after AMD did, but was unable to ship the part in volume for several months. Working with Motorola as part of the "Virtual Gorilla" strategy, AMD also perfected copper interconnect manufacturing over a year before Intel, enjoying a clear advantage in manufacturing process technology, further improving clock speeds. Compounding Intel's embarrassment, an attempt to leapfrog AMD with a 1.13-GHz Pentium III resulted in an unreliable product that worked only on one specific customized motherboard and was heavily panned by prominent industry critics [1]. That chip was soon withdrawn from the market, having been installed in only a small handful of OEM systems [2]. All of this greatly bolstered AMD's credibility in the market; what was formerly a producer of cut-rate clone chips was now increasingly being considered a viable competitor and rival to Intel.
AMD worked hard to increase the reliability and performance of motherboards for the Athlon with a quality assurance program. Confident with its unprecedented control of the performance end of the market, AMD was able to release a second, budget line of processors based on the Athlon core, called the Duron. The combination of these technical and marketing successes did much to repair AMD's reputation, and the long-time industry jokes about the company noticeably dried up. AMD continued to undercut Intel on price at the low end with the K6, and as Intel suffered part shortages and yield problems, AMD's market share briefly rose to 23%.
From a marketing point of view, AMD made the most of its performance advantage, greater consistency in delivery, and better system stability. The centerpiece of its technical marketing strategy was a very large complement of easily reproducible performance benchmarks, based on a wide variety of applications. These efforts were bolstered by review sites and magazines, which added their own performance benchmarks that verified AMD's claims of the Athlon's performance superiority over its competition from Intel for many years. The importance of this publicity to AMD's credibility and reputation going forward cannot be overemphasized.
The spectacular success of the Athlon K7 processor started to diminish once Intel introduced the Pentium 4. Though the K7 was capable of much higher clock speeds than the Pentium III line, Intel's new NetBurst architecture, designed to deliver its performance through a deep pipeline and high clock speeds, started to outpace the Athlon line. The early Pentium 4 did not quite realize the performance Intel had hoped for, and in some applications the original K7 Thunderbird architecture exceeded the P4 in per-cycle performance. Because the popular consensus was that the P4 was faster, the Athlon XP was released, although the chip itself was not tremendously improved (other than the usual clock-speed increase). The Athlon XP remained competitive in overall performance until the release of the Pentium 4 Northwood core, which ran much more efficiently than the early Pentium 4s and at even higher clock speeds, topping out at 3.4 GHz.
Unable to compete with the raw clock speeds offered by the P4, AMD began, from the start of the Athlon XP era, using a model-numbering nomenclature that compared the performance of the Athlon XP with that of an earlier Thunderbird core revision. In reality most people saw the model numbering scheme for what it was: an attempt to disguise AMD's clock speed disadvantage compared to the P4. It did, however, make some consumers aware that Intel's high clock speeds did not necessarily yield superior performance over the Athlon XP. The definition began to be applied more loosely over time as AMD struggled to compete with the ramping clock speeds of the Northwood core (>3.0 GHz). The credibility of the scheme was only saved by the arrival of the K8, where model numbers once again correlated more reasonably with actual performance.
AMD64 / K8
The K8 is a major revision of the K7 architecture. Its most notable features are the addition of a 64-bit extension to the x86 instruction set (called AMD64 or x86-64), the incorporation of an on-chip memory controller, and the implementation of an extremely high-performance, point-to-point, multiprocessor-capable interconnect called HyperTransport. The extension of x86 to 64 bits was important to AMD, because it marked a true attempt to wrestle leadership of the x86 standard away from Intel.
This move by AMD was well timed to take advantage of a hole in Intel's product roadmap, namely a Pentium-compatible CPU that could deal with the inevitable transition to 64 bits. Some viewed this transition as slightly premature; however, it helped AMD snatch the standard away from Intel, and its solid 32-bit backwards compatibility made it a feasible chip even for home users. AMD's standard was adopted by Microsoft, Linux, and even Sun Microsystems. This left Intel in a position where it was forced to make an agreement with AMD to use the AMD64 extensions for its own 64-bit (EM64T) processors. The K8 is also notable for its Direct Connect Architecture.
The AMD64 project can be seen as the culmination of Jerry Sanders' "Virtual Gorilla" strategy, in which he set a corporate goal for AMD to become a powerful research corporation in its own right, and not just a low-margin, low-value, commodity clone manufacturer.
The AMD Opteron is the server version of the K8. AMD originally designed the Opteron to compete against Intel's IA-64 Itanium architecture, but the failure of the IA-64 project to achieve volume sales means it now competes with Intel's Xeon processor. AMD's technical leadership has considerably improved its credibility, and enabled AMD to make increasing market share inroads into the corporate sector.
On April 21, 2005, AMD released the world's first x86 server chip built on dual-core technology, which had been planned for the K8 line of processors from 2001 onward. The initial release was accompanied by the availability of the Opteron 865, 870, and 875 processors, and the 2xx versions followed shortly afterwards.
On May 31, 2005 AMD released its first desktop-based dual core processor family — the Athlon 64 X2. Unlike Intel's dual-core designs, the X2 mates two cores into a single chip, rather than two chips into a single package. Intel's method may have theoretical yield advantages, but gives up some performance advantages since interprocessor communication still happens over external pins, rather than internally. The X2 marks a significant step towards even greater productivity and scalability, especially for multi-threaded software applications.
Geode
In August 2003 AMD also purchased the Geode business (originally the Cyrix MediaGX) from National Semiconductor to augment its existing line of embedded x86 processor products. During the second quarter of 2004, it launched new low-power Geode NX processors based on the K7 Thoroughbred architecture with speeds of 667 MHz and 1 GHz (fanless), and 1.4 GHz (TDP 25 W).
Pacifica
AMD's Athlon series of processors is slated to include virtualization through the Pacifica technology specification. This technology stands in direct competition with Intel's 'Silvervale' virtualization technology.
Current production
AMD's main microprocessor manufacturing and design facilities are located in Dresden, Germany. Between 2003 and 2005, the company constructed a second (300 mm) manufacturing plant nearby in order to increase the number of chips it can produce, thus becoming more competitive with Intel. The new plant has been named "Fab 36", in recognition of AMD's 36 years of operation, and is expected to reach full production in mid-2006.
As part of its expanding microprocessor design program, AMD started an engineering design center in Bangalore. The AMD India Engineering Centre Private Limited, which opened in July, will contribute to the design of future generations of AMD microprocessors. The standalone facility will occupy approximately 38,000 square feet (3,500 m²) and is located on Richmond Road, Bangalore, Karnataka, India.
Partnerships
Image:NVIDIA logo.png
AMD continues to use industry partnerships as a means to counter Intel's superior financial resources. Notably, nVidia's nForce2 chipset for the Athlon platform generated substantial revenues for nVidia as a popular enthusiast part.
HyperTransport is a point-to-point interconnect standard developed by AMD and Alpha Processor Incorporated, and then turned over to an industry standards body for finalization. It is used in the nForce3 and nForce4 chipsets. While HyperTransport is not intended as a revenue-generating product line, by providing such technological leadership AMD enhances its standing within the computer industry. Again, innovation is key to AMD's "Virtual Gorilla" corporate strategy.
AMD has also formed a strategic partnership with IBM, under which AMD gained silicon on insulator (SOI) manufacturing technology, and detailed advice on 90-nm implementation. IBM holds many patents on SOI technology, and Intel is reluctant to implement the process for this reason, despite the significant reductions in power consumption offered.
AMD is also loosely partnered with end-user companies such as HP, Compaq, Asus, Alienware, ACER, Evesham Technology and several others in the area of processor distribution. Due to recent events regarding the lawsuit filed against Intel, AMD has gained a significant amount of market share in the end-user market.
Unlike some other companies, AMD provided the technical details required for the open source BIOS project LinuxBIOS [3].
Flash technology
While less visible to the general public than its CPU business, AMD is also a global leader in flash memory. To compete with Intel, AMD established a 50-50 partnership with Fujitsu called FASL in 1993, with manufacturing facilities in Aizu-Wakamatsu, Japan. In 2003 the long-term partnership was merged into a new company called FASL LLC, globally branded as Spansion [4], headquartered in Sunnyvale, California, USA. Under the deal, AMD took a 60 percent stake and Fujitsu 40 percent. Spansion has since been spun off in an IPO, and AMD now owns only about 37% of the company.
The new company sells flash memory products through AMD and Fujitsu and their respective sales forces. Notable product families include MirrorBit flash. At times the flash business has been extremely profitable, exceeding the financial performance of the CPU division, although the industry is somewhat prone to boom-bust cycles. AMD/Spansion claim a number of important milestones in flash development [5]:
1992: "Negative Gate Erase" technology introduced
1996: Industry's first 2.7-volt flash device
1997: Industry's first 1.8-volt flash device
1998: AMD and Fujitsu's first page-mode flash device
1999: AMD and Fujitsu's first burst-mode flash device
2001: MirrorBit™ technology introduced
2002: Advanced Sector Protection introduced
2003: Industry's first 512-megabit NOR flash memory unveiled
Anti-Virus
In the virus dictionary approach, when the anti-virus software examines a file, it refers to a dictionary of known viruses that the authors of the anti-virus software have identified. If a piece of code in the file matches any virus identified in the dictionary, then the anti-virus software can take one of the following actions:
attempt to repair the file by removing the virus itself from the file
quarantine the file (such that the file remains inaccessible to other programs and its virus can no longer spread)
delete the infected file
To achieve consistent success in the medium and long term, the virus dictionary approach requires periodic (generally online) downloads of updated virus dictionary entries. As civically minded and technically inclined users identify new viruses "in the wild", they can send their infected files to the authors of anti-virus software, who then include information about the new viruses in their dictionaries.
Dictionary-based anti-virus software typically examines files when the computer's operating system creates, opens, closes or e-mails them. In this way it can detect a known virus immediately upon receipt. Note too that a System Administrator can typically schedule the anti-virus software to examine (scan) all files on the user's hard disk on a regular basis.
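As a minimal sketch of the dictionary idea, the following toy scanner (Python, using made-up placeholder signatures rather than real ones) searches files for known byte patterns; a real product would use far larger dictionaries and much more sophisticated matching, and would then offer to repair, quarantine or delete any match.

# Illustrative sketch only: a toy signature ("dictionary") scanner.
# The byte patterns below are invented placeholders, not real virus signatures.
import os

SIGNATURE_DICTIONARY = {
    "Example.Virus.A": bytes.fromhex("deadbeef00112233"),   # hypothetical signature
    "Example.Virus.B": b"THIS-IS-A-FAKE-TEST-PATTERN",      # hypothetical signature
}

def scan_file(path):
    """Return the names of dictionary entries whose signature occurs in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, sig in SIGNATURE_DICTIONARY.items() if sig in data]

def scan_tree(root):
    """Scan every file under 'root', as an on-demand scan might."""
    for dirpath, _dirs, files in os.walk(root):
        for filename in files:
            path = os.path.join(dirpath, filename)
            matches = scan_file(path)
            if matches:
                print(f"{path}: possible infection by {', '.join(matches)}")
                # A real product would now offer to repair, quarantine, or delete.

if __name__ == "__main__":
    # Note: scanning the directory containing this script will flag the script
    # itself, because it literally contains the plain-text test pattern above,
    # much as anti-virus test files do.
    scan_tree(".")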
Although the dictionary approach can effectively contain virus outbreaks in the right circumstances, virus authors have tried to stay a step ahead of such software by writing "oligomorphic", "polymorphic" and more recently "metamorphic" viruses, which encrypt parts of themselves or otherwise modify themselves as a method of disguise, so as to not match the virus's signature in the dictionary.
Suspicious behavior approach
The suspicious behavior approach, by contrast, doesn't attempt to identify known viruses, but instead monitors the behavior of all programs. If one program tries to write data to an executable program, for example, the anti-virus software can flag this suspicious behavior, alert a user and ask what to do.
Unlike the dictionary approach, the suspicious behavior approach therefore provides protection against brand-new viruses that do not yet exist in any virus dictionaries. However, it can also raise a large number of false positives, and users may become desensitized to the warnings. If the user clicks "Accept" on every such warning, then the anti-virus software obviously gives no benefit to that user. This problem has worsened since 1997, as more non-malicious programs came to modify other .exe files without regard to the false positive issue. Thus, most modern anti-virus software uses this technique less and less.
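A minimal sketch of the suspicious behavior idea follows; the event records and field names here are invented for illustration, since real products intercept file operations inside the operating system rather than receiving them as Python dictionaries.

# Illustrative sketch only: flagging "suspicious behavior" from a stream of
# file-write events supplied by the caller as plain Python dictionaries.
EXECUTABLE_EXTENSIONS = (".exe", ".dll", ".sys", ".com")

def is_suspicious(event):
    """An event is suspicious if a program writes to an executable other than itself."""
    return (event["operation"] == "write"
            and event["target"].lower().endswith(EXECUTABLE_EXTENSIONS)
            and event["target"] != event["process_image"])

def monitor(events, ask_user):
    for event in events:
        if is_suspicious(event):
            allow = ask_user(f"{event['process_image']} is modifying "
                             f"{event['target']}. Allow?")
            if not allow:
                print("Blocked:", event)

# Example run with two synthetic events and a user who always refuses.
monitor(
    [
        {"operation": "write", "process_image": r"C:\games\setup.exe",
         "target": r"C:\Windows\notepad.exe"},
        {"operation": "write", "process_image": r"C:\apps\editor.exe",
         "target": r"C:\docs\notes.txt"},
    ],
    ask_user=lambda prompt: False,
)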
Other ways to detect viruses
Some anti-virus software uses other types of heuristic analysis. For example, it could try to emulate the beginning of the code of each new executable that the system invokes before transferring control to that executable. If the program seems to use self-modifying code or otherwise appears to act like a virus (for example, if it immediately tries to find other executables), one could assume that a virus has infected the executable. However, this method can produce a lot of false positives.
Yet another detection method involves using a sandbox. A sandbox emulates the operating system and runs the executable in this simulation. After the program has terminated, software analyzes the sandbox for any changes which might indicate a virus. Because of performance issues, this type of detection normally only takes place during on-demand scans.
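The following is a heavily simplified stand-in for sandbox analysis: instead of emulating the operating system, it merely snapshots one directory, runs a suspect command, and reports which files were created, modified or deleted. The command and directory in the usage note are placeholders.

# Illustrative sketch only: a crude stand-in for sandbox analysis. A real
# sandbox emulates the operating system; this toy snapshots one directory,
# runs the suspect program, and diffs the result.
import hashlib
import os
import subprocess

def snapshot(root):
    """Map every file under 'root' to a hash of its contents."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def run_and_diff(command, watched_dir):
    before = snapshot(watched_dir)
    subprocess.run(command, timeout=30)          # run the suspect program
    after = snapshot(watched_dir)
    created = set(after) - set(before)
    modified = {p for p in before if p in after and before[p] != after[p]}
    deleted = set(before) - set(after)
    return created, modified, deleted

# Hypothetical usage; any unexpected change to executables would be a red flag:
# created, modified, deleted = run_and_diff(["./suspect_program"], "./watched")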
Some virus scanners can also warn a user if a file is likely to contain a virus based on the file type.
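As one hypothetical example of such a file-type rule, a scanner might warn about executable extensions, and especially about executables disguised behind a document-like double extension; the extension list below is a small illustrative sample, not an authoritative one.

# Illustrative sketch only: warning based on file type.
RISKY_EXTENSIONS = {".exe", ".scr", ".pif", ".com", ".bat", ".vbs"}

def file_type_warning(filename):
    parts = filename.lower().rsplit(".", 2)
    if len(parts) >= 2 and "." + parts[-1] in RISKY_EXTENSIONS:
        if len(parts) == 3:   # e.g. "invoice.pdf.exe" - a disguised executable
            return f"Warning: '{filename}' looks like a disguised executable."
        return f"Note: '{filename}' is an executable file type; open only if trusted."
    return None

print(file_type_warning("invoice.pdf.exe"))
print(file_type_warning("holiday.jpg"))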
Issues of concern
The spread of e-mail viruses (arguably the most destructive and widespread computer viruses) could be inhibited far more cheaply and effectively, without the need to install anti-virus software, if the bugs in e-mail clients that allow downloaded code to execute and executables to spread and wreak havoc were fixed.
User education can effectively supplement anti-virus software; simply training users in safe computing practices (such as not downloading and executing unknown programs from the Internet) would slow the spread of viruses and obviate the need for much anti-virus software.
Computer users should not always run with administrator access to their own machine. If they simply ran in user mode, some types of viruses could not spread (or at least the damage they cause would be reduced).
The dictionary approach to detecting viruses does not always suffice, due to the continual creation of new viruses, yet the suspicious behavior approach does not work well due to the false positive problem; hence, the current understanding of anti-virus software will never conquer computer viruses.
Various methods exist of encrypting and packing malicious software which will make even well-known viruses undetectable to anti-virus software. Detecting these "camouflaged" viruses requires a powerful unpacking engine, which can decrypt the files before examining them. Unfortunately, many popular anti-virus programs lack such an engine and thus are often unable to detect encrypted viruses.
The ongoing writing and spreading of viruses and of panic about them gives the vendors of commercial anti-virus software a financial interest in the ongoing existence of viruses.
Some anti-virus software can considerably reduce performance. Users may disable the anti-virus protection to overcome the performance loss, thus increasing the risk of infection. For maximum protection the anti-virus software needs to be enabled all the time, often at the cost of slower performance (see also Software bloat). Some anti-virus software has less impact on performance.
It is sometimes necessary to temporarily disable virus protection when installing major updates, such as Windows Service Packs or graphics card driver updates. Having anti-virus protection running at the same time as installing a major update may prevent the update from installing properly, or at all.
My Experience with Windows
History of Microsoft Windows
In 1983 Microsoft announced its development of Windows, a graphical user interface (GUI) for its own operating system (MS-DOS), which had shipped for IBM PC and compatible computers since 1981. Microsoft modeled the GUI, which was first known as Interface Manager, after that of Apple's Mac OS. Bill Gates had been shown a Macintosh prototype by Steve Jobs early in its development, around 1981, and Microsoft partnered with Apple to create some of the important early Mac software, such as Word and Excel.
Contents
1 Early history
2 Success with Windows 3.0
3 A step sideways: OS/2
4 Windows 3.1 and NT
5 Windows 95
6 Windows NT 4.0
7 Windows 98
8 Windows Millennium Edition (Me)
9 Windows 2000
10 Windows XP: Merging the product lines
11 Windows Server 2003
12 Thin client: Windows Fundamentals for Legacy PCs
13 Late 2006: Windows Vista
14 2007: Windows Server "Longhorn"
15 Future development: Windows "Vienna"
16 History of the Microsoft Operating Systems
16.1 MS-DOS product progression
16.2 OS/2 product progression
16.3 Current NT-Line product progression
17 Timeline
18 Other
19 See also
20 External links
Early history
The first independent version of Microsoft Windows, version 1.0, released in 1985, lacked a degree of functionality and achieved little popularity. It was originally going to be called Interface Manager, but Rowland Hanson, the head of marketing at Microsoft, convinced the company that the name Windows would be more appealing to consumers. Windows 1.0 did not provide a complete operating system, but rather extended MS-DOS and shared the latter's inherent flaws and problems. Moreover, the programs that shipped with the early version were "toy" applications with little or limited appeal to business users.
Furthermore, legal challenges by Apple limited its functionality. For example, windows could only appear 'tiled' on the screen; that is, they could not overlap or overlie one another. Also, there was no trash can (place to store files prior to deletion), since Apple believed they owned the rights to that paradigm. Microsoft later removed both of these limitations by means of signing a licensing agreement.
Microsoft Windows version 2 came out in 1987, and proved slightly more popular than its predecessor. Much of the popularity for Windows 2.0 came by way of its inclusion as a "run-time version" with Microsoft's new graphical applications, Excel and Word for Windows. They could be run from MS-DOS, executing Windows for the duration of their activity, and closing down Windows upon exit (rumor has it that Windows was intended as a platform to run Microsoft Office applications first, and only later as a general-use GUI system).
Microsoft Windows received a major boost around this time when Aldus Pagemaker appeared in a Windows version, having previously run only on Macintosh. Some computer historians date this, the first appearance of a significant and non-Microsoft application for Windows, as the beginning of the success of Windows.
Versions 2.0x still used the real-mode memory model, which confined them to a maximum of 1 megabyte of memory. In such a configuration, Windows could run under another multitasker such as DESQview, which used the 286 protected mode. Later, two new versions were released, named Windows/286 2.1 and Windows/386 2.1. Windows/286 added only a few features to Windows 2.0 and still ran in real mode, but was the first version to support use of the High Memory Area (HMA). Windows/386 had a protected-mode kernel with EMS emulation; that kernel was win386.exe, not kernel.exe. Kernel.exe and all Windows applications were still real-mode programs, running over the protected-mode kernel via virtual 8086 mode, a feature new to the 80386, and even DOS applications ran over it in the same way.
Version 2.03, and later 3.0, faced legal challenges from Apple over its overlapping windows and other features Apple charged mimicked the "look and feel" of its operating system and "embodie[d] and generate[d] a copy of the Macintosh" in its OS. Judge William Schwarzer dropped all but 10 of the 189 charges that Apple had sued Microsoft with on January 5, 1989.
Success with Windows 3.0
Microsoft Windows scored a serious success with Windows 3.0, released in 1990. In addition to improved capabilities for native applications, Windows also allowed users to better multitask older MS-DOS based software than Windows/386 had, thanks to the introduction of virtual memory. It made PC compatibles serious competitors to the Apple Macintosh. Windows 3.0 benefited from the improved graphics available on PCs by this time (by means of VGA video cards), and from the Protected/Enhanced mode, which allowed Windows applications to use more memory in a more painless manner than their DOS counterparts could. Windows 3.0 could run in Real, Standard or 386 Enhanced mode, and was compatible with any Intel processor from the 8086/8088 up to the 80286 and 80386. Windows tried to auto-detect which mode to run in, although it could be forced into a specific mode using the switches /r (real), /s (standard) and /3 (386 enhanced) respectively. This was the first version to run Windows programs in protected mode; the 386 Enhanced mode kernel was an enhanced version of the protected-mode kernel in Windows/386 (again, that kernel is win386.exe, not krnl386.exe, which was a program that ran in ring 3 of protected mode and switched to that mode through DPMI).
Due to this backwards compatibility, applications also had to be compiled in a 16-bit environment, without ever using the full 32-bit capabilities of the 386 CPU.
A limited multimedia version, Windows 3.0 with Multimedia Extensions 1.0, was released several months later. This was bundled with the first sound card / CD-ROM multimedia kits e.g. Creative Labs Sound Blaster Pro along with titles such as MS Bookshelf. This version was the precursor to the multimedia features available in v3.1 later.
The features listed above, as well as growing market support, made Windows 3.0 wildly successful, selling around 10 million copies in the two years before the release of version 3.1. Windows 3.0 became a major source of income for Microsoft, and led the company to revise some of its earlier plans.
A step sideways: OS/2
During the mid to late 1980s, Microsoft and IBM had been co-operatively developing OS/2 as a successor to DOS, to take full advantage of the aforementioned protected mode of the Intel 80286 processor and allow the use of up to 16 MB of memory. OS/2 1.0, released in 1987, supported swapping and multitasking and allowed running of DOS executables.
A GUI, called the Presentation Manager (PM), was not available with OS/2 until version 1.1, released in 1988. Although some considered it to be in many ways superior to Windows, its API was incompatible with Windows. (Among other things, Presentation Manager placed X,Y coordinate 0,0 at the bottom left of the screen like Cartesian coordinates, while Windows put 0,0 at the top left of the screen like most other computer window systems.) Version 1.2, released in 1989, introduced a new file system, HPFS, to replace the DOS FAT file system used by Windows.
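As a small illustration of that incompatibility, converting a point between the two conventions amounts to flipping the y axis against the screen height; the screen height used below is an arbitrary example value, not anything specific to PM or Windows.

# Illustrative sketch only: converting a point between Presentation Manager's
# bottom-left origin (y grows upward) and Windows' top-left origin (y grows
# downward). SCREEN_HEIGHT is a made-up example value.
SCREEN_HEIGHT = 480

def pm_to_windows(x, y_pm, screen_height=SCREEN_HEIGHT):
    """Flip the y axis; the same conversion works in both directions."""
    return x, (screen_height - 1) - y_pm

print(pm_to_windows(10, 0))    # PM's bottom edge -> Windows y = 479
print(pm_to_windows(10, 479))  # PM's top edge    -> Windows y = 0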
By the early 1990s, conflicts developed in the Microsoft/IBM relationship. They co-operated with each other in developing their PC operating systems, and had access to each other's code. Microsoft wanted to further develop Windows, while IBM desired for future work to be based on OS/2. In an attempt to resolve this tension, IBM and Microsoft agreed that IBM would develop OS/2 2.0, to replace OS/2 1.3 and Windows 3.0, while Microsoft would develop a new operating system, OS/2 3.0, to later succeed OS/2 2.0.
This agreement soon however fell apart, and the Microsoft/IBM relationship was terminated. IBM continued to develop OS/2, while Microsoft changed the name of its (as yet unreleased) OS/2 3.0 to Windows NT. Both retained the rights to use OS/2 and Windows technology developed up to the termination of the agreement; Windows NT, however, was to be written anew, mostly independently (see below).
After an interim version 1.3 to fix many remaining problems with the 1.x series, IBM released OS/2 version 2.0 in 1992. This was a major improvement: it featured a new, object-oriented GUI, the Workplace Shell (WPS), that included a desktop and was considered by many to be OS/2's best feature. Microsoft would later imitate much of it in Windows 95. Version 2.0 also provided a full 32-bit API, offered smooth multitasking and could take advantage of the 4 gigabytes of address space provided by the Intel 80386. Still, much of the system internally remained 16-bit code, which required, among other things, device drivers to be 16-bit code as well. This was one of the reasons for the chronically poor supply of up-to-date device support for OS/2. Version 2.0 could also run DOS and Windows 3.0 programs, since IBM had retained the right to use the DOS and Windows code as a result of the breakup.
At the time, it was unclear who would win the so-called "Desktop wars". But in the end, OS/2 did not manage to gain enough market share, even though IBM released several improved versions subsequently (see below).
Windows 3.1 and NT
Image:MS Windows logo.png
Image:Windows 3.11 workspace.png
Typical Windows 3.11 desktop
In response to the impending release of OS/2 2.0, Microsoft developed Windows 3.1, which included several minor improvements to Windows 3.0 (such as display of TrueType scalable fonts, developed jointly with Apple), but primarily consisted of bugfixes and multimedia support. It also removed support for Real mode, and would only run on a 80286 or better processor. Later Microsoft also released Windows 3.11, a touch-up to Windows 3.1 which included all of the patches and updates that followed the release of Windows 3.1 in 1992. Around the same time, Microsoft released Windows for Workgroups (WfW), available both as an add-on for existing Windows 3.1 installations and in a version that included the base Windows environment and the networking extensions all in one package. Windows for Workgroups included improved network drivers and protocol stacks, and support for peer-to-peer networking. One optional download for WfW was the 'Wolverine' TCP/IP protocol stack, which allowed for easy access to the Internet through corporate networks. There were two versions of Windows for Workgroups, WfW 3.1 and WfW 3.11. Unlike the previous versions, Windows for Workgroups 3.11 only runs in 386 Enhanced mode, and requires at least an 80386SX processor.
All these versions continued version 3.0's impressive sales pace. Even though the 3.1x series still lacked most of the important features of OS/2, such as long file names, a desktop, or protection of the system against misbehaving applications, Microsoft quickly took over the OS and GUI markets for the IBM PC. The Windows API became the de facto standard for consumer software.
Meanwhile Microsoft continued to develop Windows NT. Microsoft hired Dave Cutler, one of the chief architects of VMS at Digital Equipment Corporation (later purchased by Compaq, now part of Hewlett-Packard) to develop NT into a more capable operating system. Cutler had been developing a follow-on to VMS at DEC called Mica, and when DEC dropped the project he brought the expertise and some engineers with him to Microsoft. DEC also believed he brought Mica's code to Microsoft and sued. Microsoft eventually paid $150 million US and agreed to support DEC's Alpha CPU chip in NT.
Windows NT 3.1 (Microsoft marketing wanted Windows NT to appear to be a continuation of Windows 3.1) was released in beta form to developers at the July 1992 Professional Developers Conference in San Francisco. At the conference, Microsoft announced its intention to develop a successor to both Windows NT and Windows 3.1's replacement (code-named Chicago), which would unify the two into one operating system. This successor was codenamed Cairo. In hindsight, Cairo was a much more difficult project than Microsoft had anticipated, and as a result NT and Chicago would not be unified until Windows XP; even then, parts of Cairo have still not made it into Windows. Specifically, the WinFS subsystem, Cairo's much-touted object file system, has been put on hold and will not be released with Longhorn/Vista.
Driver support was lacking due to the increased programming difficulty in dealing with NT's superior hardware abstraction model. This problem plagued the NT line all the way through Windows 2000. Programmers complained that it was too hard to write drivers for NT, and hardware developers were not going to go through the trouble of developing drivers for a small segment of the market. Additionally, although NT allowed for good performance and fuller exploitation of system resources, it was resource-intensive on limited hardware, and thus was only suitable for larger, more expensive machines. Windows NT would not work for private users because of its resource demands; moreover, its GUI was simply a copy of Windows 3.1's, which was inferior to the OS/2 Workplace Shell, so there was not a good reason to propose it as a replacement for Windows 3.1.
However, the same features made Windows NT perfect for the LAN server market (which in 1993 was experiencing a rapid boom, as office networking was becoming a commodity), as it enjoyed advanced network connectivity options and the efficient NTFS file system. Windows NT version 3.51 was Microsoft's entry into this market, a large part of which would be won over from Novell in the following years.
One of Microsoft's biggest advances initially developed for Windows NT was a new 32-bit API to replace the legacy 16-bit Windows API. This API was called Win32, and from then on Microsoft referred to the older 16-bit API as Win16. The Win32 API had three main implementations: one for Windows NT, one for Win32s (a subset of Win32 that could be used on Windows 3.1 systems), and one for Chicago. Thus Microsoft sought to ensure some degree of compatibility between the Chicago design and Windows NT, even though the two systems had radically different internal architectures.
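For readers unfamiliar with what the Win32 API looks like to a programmer, the following minimal sketch in C shows a complete Win32 program using the standard WinMain entry point and the MessageBoxA call from <windows.h>. It is offered purely as an illustration of the 32-bit API described above, not as code drawn from any of the systems discussed here.

    /* Minimal illustrative Win32 program. The same C source compiles
       against the 32-bit API on Windows NT or Chicago/Windows 95, and
       (for this simple call) even under the Win32s subset on Windows 3.1. */
    #include <windows.h>

    int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                       LPSTR lpCmdLine, int nCmdShow)
    {
        /* MessageBoxA shows a modal dialog and returns the button pressed. */
        MessageBoxA(NULL, "Hello from the Win32 API", "Win32 example", MB_OK);
        return 0;
    }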
Windows 95
Image:Am windows95 desktop.png
A typical Microsoft Windows 95 desktop
After Windows 3.11, Microsoft began to develop a new version of the operating system code-named Chicago. Chicago was designed to be fully 32-bit and to support pre-emptive multitasking, like OS/2 and Windows NT, which would improve its stability compared with the notoriously unstable Windows 3.11. Many parts of the operating system's core were rewritten; others went through an elaborate overhaul. The Win32 API was adopted as the standard external interface, with Win16 compatibility preserved through various measures and tricks. A new GUI was not originally planned as part of the release, although elements of the Cairo user interface were borrowed and added as other aspects of the release (notably Plug and Play) slipped.
Microsoft did not change all of the Windows code to 32-bit; parts of it remained 16-bit (albeit not directly using real mode) for reasons of compatibility, performance and development time. This, together with the numerous design flaws carried over from earlier Windows versions, eventually began to affect the operating system's efficiency and stability.
Microsoft marketing adopted Windows 95 as the product name for Chicago when it was released on August 24, 1995. Microsoft gained doubly from its release: first, it made it impossible for consumers to use a cheaper, non-Microsoft DOS; second, although traces of DOS were never completely removed from the system, and a version of DOS would be loaded briefly as a part of the bootstrap process, Windows 95 applications ran solely in 386 Enhanced Mode, with a flat 32-bit address space and virtual memory. These features made it possible for Win32 applications to address up to 2 gigabytes of virtual RAM (with another 2 GB reserved for the operating system), and (at least in theory) prevented them from corrupting the memory space of other Win32 applications. In this respect the functionality of Windows 95 moved closer to Windows NT, although Windows 95/98/Me did not support more than 512 megabytes of physical RAM without obscure system tweaks.
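As a small illustration of that flat address space, the standard Win32 call GetSystemInfo reports the range of addresses available to an application, which under the 2 GB user / 2 GB system split described above covers roughly the lower half of the 32-bit space. The sketch below simply prints those values; it is an example for illustration only, not code from the original article.

    /* Illustrative sketch: print the user-mode address range and page size
       that Win32 reports to an application via GetSystemInfo. Under the
       2 GB / 2 GB split this range covers roughly the lower 2 GB. */
    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);   /* fills in page size, address range, CPU info */

        printf("Lowest application address:  %p\n", si.lpMinimumApplicationAddress);
        printf("Highest application address: %p\n", si.lpMaximumApplicationAddress);
        printf("Page size: %lu bytes\n", (unsigned long)si.dwPageSize);
        return 0;
    }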
IBM continued to market OS/2, producing later versions in OS/2 3.0 and 4.0 (also called Warp). Responding to complaints about OS/2 2.0's high demands on computer hardware, version 3.0 was significantly optimized both for speed and size. Before Windows 95 was released, OS/2 Warp 3.0 was even shipped preinstalled by several large German hardware vendor chains. However, with the release of Windows 95, OS/2 began to lose market share.
It is probably impossible to nail down a specific reason why OS/2 failed to gain much market share. While OS/2 continued to run Windows 3.1 applications, it lacked support for anything but the Win32s subset of the Win32 API (see above). Unlike with Windows 3.1, IBM did not have access to the source code for Windows 95 and was unwilling to commit the time and resources to emulate the moving target of the Win32 API. IBM also introduced OS/2 into the United States v. Microsoft case, alleging unfair marketing tactics on Microsoft's part, but many people would probably agree that IBM's own marketing problems and lack of support for developers contributed at least as much to the failure.
Microsoft released five versions of Windows 95:
Windows 95 Original Release
Windows 95 A - included Windows 95 Service Pack 1 slipstreamed into the installation.
Windows 95 B - (OSR2) included several major enhancements, Internet Explorer (IE) 3.0 and full FAT32 file system support.
Windows 95 B USB - OSR2.1, included basic USB support.
Windows 95 C - (OSR2.5) included all the above features, plus IE 4.0. This was the last 95 version produced.
OSR2, OSR2.1, and OSR2.5 were not released to the general public; rather, they were available only to OEMs that would preload the OS onto computers. Some companies sold new hard drives with OSR2 preinstalled (officially justifying this as needed due to the hard drive's capacity).
Windows NT 4.0
Image:Nt4server.png
Windows NT 4.0 Server Desktop
As part of its effort to introduce Windows NT to the workstation market, Microsoft released Windows NT 4.0, which featured the new Windows 95 interface on top of the Windows NT kernel. (A patch was available for developers to make NT 3.51 use the new UI, but it was quite buggy; the new UI was first developed on NT, but Windows 95 was released before NT 4.0.)
Windows NT 4.0 came in four flavors:
Windows NT 4.0 Workstation
Windows NT 4.0 Server
Windows NT 4.0 Server, Enterprise Edition (includes support for 8-way SMP and clustering)
Windows NT 4.0 Terminal Server
Windows 98
Image:Word 6.0 on Win98SE.png
Windows 98SE with the "Jungle" theme, and a couple of the programs from Microsoft Office 4.3 running.
On June 25, 1998, Microsoft released Windows 98, which was widely regarded as a minor revision of Windows 95. It included new hardware drivers and the FAT32 file system to support disk partitions larger than the 2 GB allowed by Windows 95. USB support was far superior to the token, sketchy support provided by the OEM editions of Windows 95. It also controversially integrated the Internet Explorer browser into the Windows GUI and Windows Explorer file manager, prompting the opening of the United States v. Microsoft case, which dealt with the question of whether Microsoft was abusing its hold on the PC operating system market to push its products in other areas.
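For illustration, a program can ask Windows which file system a given volume uses through the standard Win32 call GetVolumeInformationA; the short sketch below (the drive letter "C:\" is only an example) shows how an application could tell FAT from FAT32 or NTFS. It is an assumed usage example, not code from the systems described here.

    /* Illustrative sketch: report the file system of drive C: using the
       standard Win32 GetVolumeInformationA call ("C:\" is just an example). */
    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        char fsName[MAX_PATH + 1] = {0};

        if (GetVolumeInformationA("C:\\", NULL, 0, NULL, NULL, NULL,
                                  fsName, sizeof(fsName)))
            printf("C: is formatted as %s\n", fsName);   /* e.g. FAT, FAT32, NTFS */
        else
            printf("Could not query volume information (error %lu)\n",
                   (unsigned long)GetLastError());
        return 0;
    }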
In 1999, Microsoft released Windows 98 Second Edition, an interim release whose most notable feature was the addition of Internet Connection Sharing (a brand name for a form of network address translation), which allowed several machines on a LAN to share a single Internet connection. Hardware support through device drivers was also improved. Many minor issues were found and fixed, making it, according to many, the most stable release based on the Win9x kernel.
Windows Millennium Edition (Me)
Image:WindowsME.png
Windows Millennium Edition Desktop
In September 2000, Microsoft introduced Windows Me (Millennium Edition), which upgraded Windows 98 with enhanced multimedia and Internet features. It also introduced the first version of System Restore, which allowed users to revert their system state to a previous "known-good" point in the case of system failure. System Restore was a notable feature that made its way into Windows XP. The first version of Windows Movie Maker was introduced as well.
Windows Me was conceived as a quick one-year project that served as a stopgap release between Windows 98 and Windows XP. As a result, Windows Me was not acknowledged as a unique OS along the lines of 95 or 98. Windows Me was widely and sometimes unfairly criticised for serious stability issues and for lacking real mode DOS support, to the point of being referred to as the "Mistake Edition".
Windows 2000
Image:Windows2000.png
Windows 2000 Desktop
Main article: Windows 2000
Image:Wlogo.png
Windows logo, as of circa 2000
Microsoft released Windows 2000, known during its development cycle as "NT 5.0", in February 2000. It was successfully deployed in both the server and workstation markets. Windows 2000, claimed by some to be the best Windows version to date, incorporated a number of features, in particular the user interface, from Windows 98. Windows 2000 also introduced Active Directory, a near-complete replacement of NT 4's Windows Server domain model, which built on industry-standard technologies like DNS, LDAP and Kerberos to connect machines to one another. Terminal Services, previously only available as a separate edition of NT 4, was expanded to all server versions.
While Windows 2000 could upgrade a computer running Windows 98, Microsoft did not see it as a product designed for home users; instead, Windows Me, a continuation of the Windows 95/98 product line, shipped shortly after Windows 2000.
Windows 2000 came in five editions:
Windows 2000 Professional
Windows 2000 Server
Windows 2000 Advanced Server
Windows 2000 Datacenter Server
Windows 2000 Small Business Server
Windows XP: Merging the product lines
See also: Features new to Windows XP
Image:Windows xp desktop.PNG
A typical Windows XP desktop.
In 2001, Microsoft introduced Windows XP (codenamed "Whistler"), which merged the Windows NT/2000 and Windows 3.1/95/98/Me product lines. Windows XP uses the Windows NT 5.1 kernel, marking the entry of the Windows NT core into the consumer market as a replacement for the aging 16-bit branch.
Windows XP is available in a number of versions:
"Windows XP Home Edition", for home desktops and laptops (notebooks)
"Windows XP Home Edition N", as above, but without a default installation of Windows Media Player, as mandated by a European Union ruling
"Windows XP Professional Edition", for business and power users
"Windows XP Professional Edition N", as above, but without a default installation of Windows Media Player, as mandated by a European Union ruling
Windows XP Media Center Edition (MCE), released in November 2002 for desktops and notebooks with an emphasis on audio, video, and PVR capability
Windows XP Media Center Edition 2003
Windows XP Media Center Edition 2004
Windows XP Media Center Edition 2005, released on October 12, 2004.
"Windows XP Tablet PC Edition", for tablet PCs (notebooks with touch screens)
Windows XP Embedded, for embedded systems
"Windows XP Starter Edition", for new computer users in developing countries
Windows XP Professional x64 Edition, released on April 25, 2005 for home and workstation systems utilizing 64-bit processors based on the x86 instruction set (AMD calls this AMD64, Intel calls it Intel EM64T)
Windows XP 64-bit Edition, a version for Intel's Itanium line of processors, maintains 32-bit compatibility solely through a software emulator. It is roughly analogous to Windows XP Professional in features. It was discontinued in September 2005, when the last vendor of Itanium workstations stopped shipping systems marketed as 'Workstations'.
Windows Server 2003
Image:Windows Server 2003 Enterprise Edition trial.png
Windows Server 2003 desktop and Start menu.
On April 24, 2003, Microsoft launched Windows Server 2003, a notable update to Windows 2000 Server encompassing many new security features, a new "Manage Your Server" wizard that simplifies configuring a machine for specific roles, and improved performance. It has the version number 5.2.
In December 2005, Microsoft released Windows Server 2003 R2, which added a number of management features for branch offices, file serving, and company-wide identity integration.
Windows Server 2003 is available in seven editions:
Small Business Server
Web Edition
Standard Edition
Enterprise Edition (32 and 64-bit)
Datacenter Edition
Compute Cluster Edition
Storage Server
Thin client: Windows Fundamentals for Legacy PCs
In March 2006, Microsoft plans to release a thin-client version of Windows XP Service Pack 2, called Windows Fundamentals for Legacy PCs (WinFLP). It will only be available to Software Assurance customers. The aim of WinFLP is to give companies a viable upgrade option for older PCs running Windows 95, 98, ME, and 2000, one that will be supported with patches and updates for the next several years. Most user applications will typically be run on a remote machine using Terminal Services or Citrix.
Late 2006: Windows Vista
Image:Vista 5308 Desktop.png
Windows Vista desktop, from the February 2006 CTP release
Main article: Windows Vista
See also: Features new to Windows Vista
The next client version of Windows, Windows Vista, is expected in fall 2006. According to Microsoft, it will bring enhanced security through a new restricted user mode called User Account Protection, replacing the "administrator-by-default" philosophy of Windows XP. Vista will also feature advanced graphics, a user interface called "Aero", a number of new applications (such as Calendar, Defender, a DVD maker, and some new games including Chess, Mahjong, and Purble Place), a revised and more secure version of Internet Explorer, a faster and more intuitive version of Windows Media Player, and a large number of underlying architectural changes.
2007: Windows Server "Longhorn"
Main article: Windows Server "Longhorn"
The next version of Windows Server, currently scheduled for release in the first half of 2007, is known by the codename Windows Server "Longhorn", but given Microsoft's announcement that its server products will maintain the year-based naming scheme, it is likely to be released as "Windows Server 2007". Server "Longhorn" builds on the technological and security advances first introduced with Windows Vista, and aims to be significantly more modular than its predecessor, Windows Server 2003.
Future development: Windows "Vienna"
Main article: Windows "Vienna"
The next major release after Vista is code-named "Vienna", though in previous years it was known by the code-name Blackcomb. Little is known about what Microsoft plans for the release of Windows following Vista.
History of the Microsoft Operating Systems
MS-DOS product progression
MS-DOS and PC-DOS
Windows 1.0
Windows 2.0
Windows 2.1 (aka Windows/286 and Windows/386)
Windows 3.0, Windows 3.1, Windows 3.11 (and Windows for Workgroups)
Windows 95 (Windows 4.0)
Windows 98 (Windows 4.1), Windows 98 Second Edition
Windows Millennium Edition (Windows 4.9)
OS/2 product progression
16-bit Versions: OS/2 1.0 (CLI only), 1.1, 1.2, 1.3
32-bit Versions: OS/2 2.0, 2.1, 2.11, 2.11 SMP, Warp 3, Warp 4
Until 32 bit Versions : OS/2 Warp 5
Current NT-Line product progression
Windows NT 3.1, 3.5, 3.51
Windows NT 4.0
Windows 2000 (Windows NT 5.0)
Windows XP (Windows NT 5.1)
Windows Server 2003 (Windows NT 5.2)
Timeline
November 1985: Windows 1.0 (16-bit)
1987: Windows 2.0 (16-bit)
May 1990: Windows 3.0 (16-bit)
1992: Windows 3.1 (16/32-bit)
1992: Windows for Workgroups 3.1 (16/32-bit)
July 1993: Windows NT 3.1 (32-bit)
December 1993: Windows for Workgroups 3.11 (16/32-bit)
September 1994: Windows NT 3.5 (32-bit)
May 1995: Windows NT 3.51 (32-bit)
August 24, 1995: Windows 95 (16/32-bit)
July 1996: Windows NT 4.0 (32-bit)
June 25, 1998: Windows 98 (16/32-bit)
February 17, 2000: Windows 2000 (32-bit)
September 14, 2000: Windows Me (16/32-bit)
October 25, 2001: Windows XP (32-bit)
April 25, 2003: Windows Server 2003 (32-bit and 64-bit)
2003: Windows XP Media Center Edition 2003 (32-bit)
October 12, 2004: Windows XP Media Center Edition 2005 (32-bit)
April 25, 2005: Windows XP Professional x64 Edition (64-bit)
Est. October 2006: Windows Vista (32-bit and 64-bit)
2007: Windows Server "Longhorn" (32-bit and 64-bit)
2009: Windows "Vienna" (32-bit and 64-bit)