Most of what has been said so far concerns audio and visual information generated in the real world and reaching human senses either directly or through a more or less transparent communication system. However, an increasing proportion of what we perceive today through communication devices is no longer generated in that way. Images and sound from computers and game consoles, originally almost exclusively synthetically generated, have an increasing share of naturally generated media, while what we perceive from TV sets and movie theatre screens, originally almost exclusively naturally generated, is more and more complemented by synthetically generated audio and video.
Because of this trend, the laying out of the digital media scene in preparation for the MPEG-4 stage would not be complete without a ride on these other types of bit. Indeed, in spite of its “moving picture” label, MPEG has been operating in this space since 1995, trying to build bridges between business communities in disciplines considered specific to them. The purpose of this page is therefore to clarify the background of synthetically generated pictures and sound, to help understand the role that MPEG has been, is and plans to be playing in it.
From the beginning, computers were machines capable of connecting to and controlling all sorts of devices, and thus ideally suited to replace the infinite number of ad-hoc solutions that for centuries humans had conceived to make a machine respond, in a predictable way, to external stimuli or to generate events directly correlated to its internal state. Besides typical “data processing” peripherals like card readers and line printers, over the years computer mice, joysticks, game pads, track balls, plotters, scanners and more were attached to computers. Of more interest for the purpose of this page, however, are other types of device such as microphones and loudspeakers, and video cameras and monitors.
As early as the 1950s, computers were already connected to oscilloscopes used as display devices, and in the 1960s direct-view storage tubes were also connected. In 1963, Bell Labs developed a computer-generated film entitled “Simulation of a two-gyro gravity attitude control system”. By the mid-1960s, major corporations started taking an interest in this field, and IBM was the first to make a commercially available graphics terminal (the IBM 2250). The unstoppable drive to connect all sorts of devices to computers is well demonstrated by the computer-controlled Head-Mounted Display (HMD) realised in 1966 at MIT. The HMD provided a synthetically generated stereoscopic 3D view by displaying two separate images, one for each eye. In 1975 Evans & Sutherland developed a frame buffer that could hold a picture.
In the very same years, after my second return from Japan in October 1973, my group at CSELT developed a 1 Mbyte frame store for video coding simulation, built with 16 Kbit RAM chips, that was capable of capturing and storing a few monochrome and composite (PAL) video frames in real time. The video store was interfaced to a GP-16, a simple but effective minicomputer manufactured by Selenia (now Alenia, at that time a company of the STET group), and video samples could be transferred to a magnetic tape and read by a mainframe computer. Video coding algorithms were tested on the mainframe and the processed data were again loaded on a magnetic tape, transferred from tape to RAM and visualised on the simulation system. If one considers that one cycle of this process could take days, it should not be difficult to understand why I have used the word “vestals” to describe the people running the mainframe computers in those days.
3D Graphics (3DG) is the name given to the set of computer programming techniques that allow the effective generation of realistic 3D images for projection on a 2D surface. The impressive evolution of this field is paradigmatic of the way academic interests successfully morphed into commercial exploitation. The development of output devices matched to the needs of synthetic-picture viewing was a necessary complement to the value of Computer Graphics. The field evolved in a matter of 15 years through a number of milestones, as briefly sketched in the table below.
|Hidden-surface||Determines which surfaces are “behind” an object and thus should be “hidden” when the computer creates a 2D image representing the part of a 3D scene a viewer should see.|
|Colour interpolation||Improves the realism of the synthetic image by interpolating across the polygons so reducing the aliasing caused by the sharp polygon edges.|
|Texture Mapping||Takes a 2D image of the surface of an object, and then applies it to a 3D object.|
|Z-buffer||Accelerates the hidden surface removal process by storing depth data for every pixel in an image buffer (called Z-buffer because Z represents the depth, Y the vertical position and X the horizontal position).|
|Phong shading||Interpolates the colours over a polygonal surface with accurate reflective highlights and shading.|
|Fractal||Covers the entire surface of a plane with a curve or geometric figure to create realistic simulations of natural phenomena such as mountains, coastlines, wood grain, etc.|
|Ray tracing||Simulates highly reflective surfaces by tracing every ray of light from the viewer’s perspective back into the 3D scene. If an object is reflective, the ray is followed as it bounces off the object until it either hits an object with an opaque, non-reflective surface or leaves the scene.|
|Radiosity||Determines how light reflects between surfaces using heat propagation formulae.|
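The Z-buffer entry in the table above lends itself to a tiny sketch. The pixel grid, fragment data and function names below are invented for illustration; the convention assumed is that a smaller depth value means closer to the viewer:

```python
# Minimal sketch of the Z-buffer idea: for each pixel, keep only the
# fragment closest to the viewer (smallest depth value).
# The grid size, fragment data and names are illustrative only.

WIDTH, HEIGHT = 4, 3
FAR = float("inf")

# One depth value and one colour per pixel.
z_buffer = [[FAR] * WIDTH for _ in range(HEIGHT)]
frame = [["background"] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, colour):
    """Write the fragment only if it is nearer than what is already stored."""
    if z < z_buffer[y][x]:
        z_buffer[y][x] = z
        frame[y][x] = colour

# Two overlapping fragments at the same pixel: the nearer one (z=1.0) wins,
# regardless of the order in which the fragments arrive.
plot(1, 1, 5.0, "far_triangle")
plot(1, 1, 1.0, "near_triangle")
plot(2, 0, 3.0, "far_triangle")

print(frame[1][1])  # near_triangle
```

The point of the technique is exactly this order independence: polygons can be rasterised in any sequence, at the cost of one depth value per pixel of memory.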
The establishment of commercial companies was made possible by the progress of 3DG technologies and the reduced price of computing. A milestone was reached in 1988 with the RenderMan format. This provided all the information required to render a 3D scene: objects, light sources, cameras, atmospheric effects, etc. 3DG developers could give their modeling systems the capability of producing RenderMan-compatible scene descriptions and output the content on machines supporting the format. In 1990 AutoDesk introduced 3D Studio, a 3D computer animation product that achieved a leading position in 3D computer-animation software.
Already in the 1970s Computer Graphics had entered the world of television and prompted the development of hardware/software systems for scanning and manipulating artwork, e.g. making it squash, stretch, spin, fly around the screen, etc. Morphing, a technique that transforms the image of one object into the image of another, was first demonstrated in 1983 with a video sequence showing a woman transforming herself into the shape of a lynx. Massive use of 3DG techniques in movies began in 1991 with “Terminator 2”, where the evil T-1000 robot was sometimes Robert Patrick, a real actor, and sometimes a 3D computer-animated version, and “Beauty and the Beast”, which contained 3D animated objects flat-shaded with bright colours so that they would blend in with the hand-drawn characters; it continued in 1995 with “Toy Story”, the first full-length computer-animated feature film.
In 1994 a group of companies established a consortium called Virtual Reality Modeling Language (VRML) – now Web3D Consortium – with the goal of developing a standard format to represent 3D worlds. The first specification, issued in 1997 as VRML 97, provided the coded representation of a 3D space that defined most of the commonly used semantics such as hierarchical transformations, light sources, viewpoints, geometry, animation, fog, material properties and texture mapping.
The need to cater to the growing communities of 3DG researchers and users prompted the establishment of the Special Interest Group on Computer Graphics (SIGGRAPH) of the Association for Computing Machinery (ACM). The first SIGGRAPH conference, held in 1973, was attended by 1,200 people, but today the conference has an attendance of tens of thousands of people.
What has been described so far could be called the high end of computer graphics, but there is another, originally low- to middle-end application domain that has given rise to an industry with an identity of its own, using the same computing technologies: computer games. While the 3DG field had a more traditional evolution – first academia and then exploitation – the computer game field targeted exploitation from the very beginning. However, today the progress in computing devices is blurring the borders between the two fields.
Here is a short tracking shot of this business area over four decades:
|1961||Space War||Probably the first video game, said to have been created by an MIT student for the Digital Equipment (DEC) PDP-1. It was very successful and was even used by DEC engineers as a diagnostic program|
|1966||Odyssey||The first home video game, in which players used manually controlled dots to catch spots of light. The game was licensed to Magnavox, which sold it in the consumer market|
|1971||Computer Space||The first arcade video game, based on Space War – limited success|
|1972||Pong||Arcade video game (name from ping-pong) – hugely successful|
|1976||Atari||(from the name of a move in the Japanese game “go”) is sold to Warner Communication|
|1977||2600 VCS||Atari introduces the first home game console with multiple games with 2Kbyte of ROM and 128 bytes of RAM|
|1978||Space Invaders||Taito releases the first blockbuster videogame installed in restaurants and corner stores|
|1979||Atari 800||Atari introduces the Atari 800, an 8-bit machine|
|Space Invaders||Translated to the Atari 2600 video home game system|
|1979||Activision||Is established by Atari developers, followed in the 1980s by other third-party development houses: Epyx, Broderbund, Sierra On-Line and SSI|
|1980||Odyssey2||Philips releases the console. This is followed by Intellivision (Mattel) and Pac-Man (Namco), with more than 300,000 arcade units sold since introduction – a huge hit and an unforgettable experience for many no-longer-so-young people who were raised in an environment populated by such computer game names as Zork, Donkey Kong, Galaxian, Centipede, Tempest, Ms. Pac-Man and Choplifter!|
|1981||Dozens of games for home computers such as Apple, Atari, and TRS-80 released|
|1981||The game industry is worth more than 6 B$ in sales. Atari alone does 1 B$ with Asteroids throughout its life span|
|1982||Gaming companies still producing hits today, such as Access Software, Electronic Arts, and Lucasfilm Games (now LucasArts), are established|
|Olympic Decathlon||Microsoft publishes this not particularly successful game, one reason why it took years before Microsoft would publish another computer game|
|1st half of 1980s||Home computers with game capabilities released: Atari 400 and 800, Commodore VIC-20 and C-64 (20 million units sold since its release in 1982). Common features: colour display capabilities, composite video output for TV sets, tape units, floppy disk drives and cartridge slots|
|1981||CGA||IBM releases the Color Graphics Adapter, the first PC colour video adaptor, with 4 colours|
|1984||Cartridge-based systems become suddenly unpopular, video game industry loses ground, home computers (Commodore et al.) gain ground because of the possibility to do other things than games|
|1985||C-64||Outsells Apple’s and Atari’s computers|
|Amiga||Launched with many advanced graphic features – another unforgettable experience|
|Nintendo||The Nintendo Entertainment System (NES) is characterised by strict control on software, a lockout chip, and a restriction of third-party companies to 5 games/year|
|TARGA||AT&T releases the first board for PC professional applications, capable of displaying 32 colours|
|1986||Sega||Sega Master System console, technically superior to Nintendo, but a market failure because of lack of games caused by Sega’s neglect of third-party developers|
|1988||Sierra On-Line||Uses the 16-colour Enhanced Graphics Adapter (EGA) graphics|
|1989||Genesis||16-bit console with Electronic Arts sports titles. Nintendo keeps its 8-bit console and releases Super Mario 3 (all-time best-seller); Amiga and Atari ST die out.|
|The first game using 256-colour Video Graphics Array (VGA) graphics is published|
|1991||Super-NES||Nintendo launches a 16-bit console|
|1992||Nintendo||7 billion USD in sales and higher profits than all U.S. movie and TV studios combined.|
|PC gaming explodes|
|1993||Real||Panasonic ships the 32-bit console from 3DO|
|Sega and Nintendo consoles held 80% of the game market|
|1994||Jaguar||Atari ships the 64-bit console|
|1995||Saturn||Sega ships the 32-bit console|
|PlayStation||Sony ships the 32-bit console|
|Windows 95||Microsoft releases the new OS that includes the Game SDK (DirectX), thus bringing major game performance under Windows.|
|1996||Ultra 64||Nintendo ships the 64-bit console|
|Internet boosts the growth of multi-player gaming|
|1997||3D-FX||3D acceleration starts to standardise and becomes a common game feature|
|Pentium II||at 200 MHz starts providing serious game experiences|
|1998||Many very good PC games appear. PlayStation rules in the console domain. The gaming business now resembles the movie business: 300 games/year released but only 30 making money, and 5 B$ in PC games, about the size of the movie industry.|
It took a considerable amount of time for computers of reasonable cost to provide a video output, because of two heavy requirements: a fast CPU to process a large amount of data, and a large memory to store at least one screenful of data that could be generated asynchronously by the CPU, then read out synchronously and converted to analogue form to drive a display.
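The memory requirement can be made concrete with some back-of-the-envelope arithmetic. The figures below (a PAL frame of 625 lines, roughly 700 samples per line, 8 bits per sample) are illustrative assumptions, not the actual specifications of the CSELT frame store mentioned earlier:

```python
# Rough estimate of the memory needed to hold one screenful of video,
# and how many 16 Kbit RAM chips of the era that would take.
# All figures are illustrative assumptions.

lines_per_frame = 625      # PAL frame (including blanking lines)
samples_per_line = 700     # assumed sampling of the active line
bits_per_sample = 8        # monochrome, 256 grey levels

bits_per_frame = lines_per_frame * samples_per_line * bits_per_sample
bytes_per_frame = bits_per_frame // 8
chips_16kbit = -(-bits_per_frame // (16 * 1024))  # ceiling division

print(f"{bytes_per_frame} bytes per frame, ~{chips_16kbit} x 16 Kbit RAM chips")
```

Hundreds of memory chips for a single frame explains why a megabyte-class frame store was a substantial engineering project in the mid-1970s.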
For audio, matters were much simpler because a waveform could easily be generated in real time, or read from a file on disk, and converted to analogue form to drive a loudspeaker. If the waveform corresponded to a musical score, it was rather easy to provide special hardware designed to produce different types of sound. As an example, the C-64 had a built-in analogue synthesiser chip, and many games had an obsessive tune accompanying the game that changed depending on the state of the game. In 1989 the first sound cards, the AdLib and the Sound Blaster, brought a more professional sound to the PC, replacing the original “beep” of the internal speaker.
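Why audio was so much lighter than video can be sketched in a few lines: a tone is just a stream of samples computed one arithmetic expression at a time. The 440 Hz note and 8 kHz sample rate below are illustrative choices, not figures from any of the machines discussed:

```python
# A tone synthesised sample by sample, as early computers could do in
# real time. Frequency and sample rate are illustrative assumptions.
import math

SAMPLE_RATE = 8000   # samples per second
FREQ = 440.0         # concert A
AMPLITUDE = 127      # fits an 8-bit signed DAC

def tone(duration_s):
    """Generate integer samples of a sine wave, ready for a DAC."""
    n = int(SAMPLE_RATE * duration_s)
    return [round(AMPLITUDE * math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE))
            for i in range(n)]

samples = tone(0.01)   # 10 ms of tone -> 80 samples
print(len(samples))
```

At 8,000 samples per second the CPU has 125 microseconds per sample, ample time even for the modest processors of the day, whereas a video display demands millions of pixels per second.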
The Musical Instrument Digital Interface (MIDI), developed in 1983 by Sequential Circuits and Roland, is a protocol to control electronic musical devices. A MIDI message can tell a synthesiser when to start and stop playing a specific note, the volume of the note, the aftertouch (the amount of pressure applied on the keys of a given channel), the instrument desired to play on a channel, how to change sounds, master volume, modulation devices, and even how to receive information. In more advanced uses, MIDI information can indicate the starting and stopping points of a song or the metric position within a song.
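The compactness of MIDI is easy to see in its message layout: a status byte carrying the message type and channel, followed by data bytes. The byte values below follow the MIDI 1.0 specification; the helper function name is made up for illustration:

```python
# Sketch of a MIDI 1.0 "Note On" channel voice message: one status byte
# (message type in the high nibble, channel in the low nibble) followed
# by two 7-bit data bytes (note number and velocity).

NOTE_ON = 0x90   # Note On message type; low nibble selects channel 0-15

def note_on(channel, note, velocity):
    """Build the 3-byte Note On message (helper name is illustrative)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([NOTE_ON | channel, note, velocity])

# Middle C (note 60) on channel 0 at velocity 100:
msg = note_on(0, 60, 100)
print(msg.hex())  # 903c64
```

Three bytes per note event is why a whole song fits in a few kilobytes of MIDI data, while the equivalent sampled audio would need megabytes.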