Humans Interact With Machines

Devising methods to interact with machines, and the very act of interacting with them, are endeavours of varying difficulty. Playing a musical instrument may require years of practice, while setting the time on a mechanical wristwatch is straightforward. Doing the same on a digital wristwatch may not be, and changing the program on a TV set is easy, even though setting up the programs for the first time may be a challenge. When the Video Cassette Recorder (VCR) was a common Consumer Electronics (CE) device, programming a recording time on it had become an emblem of user unfriendliness.

Early computers had extremely primitive forms of Human-Machine Interaction (HMI). The first peripherals attached to computers were those needed to input and output data, possibly in the form of programs: paper tape and card readers, teletypewriters and line printers. Even more primitive forms of interaction were also in use: booting one of the early Digital Equipment PDP-11 minicomputers, still used at CSELT in the mid-1970s, required the manual entry, through front-panel switches, of a basic sequence of binary instructions.

Figure 1 – The console of a PDP11/40

At that time interaction with computers had already improved considerably and was based on a very simple Command Line Interface (CLI). On the PDP-11, the RSX operating system had a simple command line structure: a 3-letter code indicating the function (e.g. PIP – Peripheral Interchange Program, to move files from one device to another) followed by a sequence of characters specific to the function invoked with those first 3 letters. The airline reservation systems, the earliest mainframe query protocols still in use, were developed in that period with the goal of packing as much information as possible into compact commands.
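As an illustration of how compact such commands were, copying a file from one disk to another could be expressed on a single line in which the destination precedes the source (the exact device names and switches varied across RSX versions, so this example should be read as indicative rather than authoritative):

PIP DK1:REPORT.TXT=DK0:REPORT.TXT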

With CPU power increasing in the 1960s and early 1970s, researchers began to consider new ways to reduce data entry time and typing errors. In the late 1970s, the drop in the price of computing power brought by microcomputers led to a popularisation of computing that later gave rise to the PC. Research was therefore started on the “next generation” of computers, because evolving computer interaction from the “vestals” model recalled above to the “anybody does what he likes with his own PC” model required substantial changes.

The most notable interface research was carried out at the Xerox Palo Alto Research Center (PARC) where the Alto computer, completed in 1973, was the first system equipped with all the elements of the modern Graphical User Interface (GUI): 3-button mouse, bit-mapped display and graphical windows. 

The Alto HMI allowed users to communicate with the computer in a way more congenial to humans than before. Visual elements with a graphic content – icons – were introduced because they can be tracked and processed more effectively by the right hemisphere of the brain, whereas characters, being a highly structured form of information, require sophisticated and highly specialised processing by the left hemisphere.

Eight years later (1981) Xerox introduced the Star, the commercial version of the Alto, whose interface added the desktop metaphor, overlapping and resizable windows, double-clickable icons, dialog boxes and a monochrome display with a resolution of 1024×768: virtually everything that we see today on our PC monitors, save for colour and higher resolutions. Xerox, however, was unable to commercially exploit this innovative development.

Apple Computer was the company that really benefitted from the new HMI. Xerox allowed Apple to take elements of the Star interface in exchange for Apple stock. The Lisa, the first Apple computer with the new HMI, released in 1983, flopped and was followed by the Macintosh the following year. The Macintosh turned out to be a success that, through alternating fortunes, continues to this day. For several years Apple spent millions of dollars to enhance the Macintosh GUI, a commitment that paid off in the late 1980s when the professional market boomed and Apple’s GUI became an emblem of the new world of personal computing, widely praised and adopted by artists, writers and publishers. The consistent implementation of user interfaces across applications was another reason for the success of the Macintosh, which made Apple, for some time in the early 1990s, the biggest PC manufacturer.

Unlike what could be seen at Xerox and Apple, the IBM PC running MS-DOS had a cryptic CLI, but things were evolving. Already in 1983 application programs like Visi On, by VisiCorp, the company that had developed the epoch-making VisiCalc spreadsheet, offered an integrated graphical software environment. In 1984 Digital Research announced GEM, its icon/desktop user interface for MS-DOS; the version eventually distributed, with just two unmovable, non-resizable windows for file browsing, was a deliberately crippled rendition of the original development.

In the second half of the 1980s, Microsoft embarked on the development of a new OS with a different GUI and for some time cooperated with IBM on its new OS, called OS/2, which IBM hoped would be generally adopted by the PC industry. Later, however, the partnership soured and Microsoft went it alone with Windows. At first the new interface was simply a special MS-DOS application providing a graphical shell with such features as a GUI and single-user multitasking. Apple sued Microsoft over the use of the Windows GUI, but Microsoft successfully resisted.

A similar process happened with UNIX. Like MS-DOS, UNIX had an obscure CLI inherited from the mainframe era. In the 1980s UNIX GUI shells were developed by consortia of workstation manufacturers to make their systems easier to use. The principal GUIs were OPEN LOOK, backed by Sun Microsystems, and Motif, by the Open Software Foundation (OSF).

With computers taking on many more forms than the traditional workstation or PC, HMI is becoming more and more crucial. One major case is that of mobile handsets, where the reduced device size puts more constraints on the ability of humans to interact with the range of new services on offer today. The hottest spot is the set of technologies that Apple has assembled for its iPod, iPhone and iPad devices, but parallel developments are happening on the rival Android front, as specialised by individual handset manufacturers. Companies endeavour to emulate one another and are sometimes brought to court in a string of legal battles that seems never to end.

It is now some 40 years since the GUI paradigm was first applied, and its use is ubiquitous. There is good reason to expect new forms of HMI, and many interesting components are indeed available, such as speech-to-text and voice and gesture commands. However, no new paradigm comparable to the one brought forward by Xerox four decades ago is on the horizon.