Telecom Bits And Computer Bits

Early in the history of computing it became apparent that CPUs should be designed to handle chunks of bits called “bytes” instead of, or in addition to, individual bits, obviously without altering the status of bits as the atomic components of information. After some odd initial choices (like the 6-bit UNIVAC byte), the number of bits in a byte soon converged to 8 (hence bytes are sometimes called “octets”). With the progress of technology, CPUs became capable of dealing with more bytes at the same time. In the late 1960s and 1970s minicomputers were based on a two-byte (16-bit) architecture that enabled the addressing of 64 kilobytes (2^16 bytes) of memory. Today the CPUs of some advanced game machines can handle many bytes at a time.

When the telcos decided to digitise speech, they, too, defined their own “byte”: the speech sample. After some initial dithering between 7 and 8 bits – all in the closed environment of CCITT meeting rooms in Geneva, with Americans favouring 7 bits and Europeans 8 – the eventual choice was 8 bits. Unlike the computer world, however, in which most processing involves bytes, telecom bytes are generated at the time of analogue-to-digital (A/D) conversion, immediately serialised, and kept in serial form until they are converted back to bytes just before digital-to-analogue (D/A) conversion. Because of the way D/A conversion works, the “natural” order of bits in a telecom byte is Most Significant Bit (MSB) to Least Significant Bit (LSB).
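
To make the bit ordering concrete, here is a minimal Python sketch (the function name is mine, purely illustrative) of how an 8-bit speech sample would be serialised MSB first onto the line:

    def serialise_msb_first(sample):
        """Serialise an 8-bit speech sample MSB first, as on a telecom line."""
        return [(sample >> i) & 1 for i in range(7, -1, -1)]

    # 0xB5 = 10110101: the most significant bit leaves the A/D converter first
    assert serialise_msb_first(0xB5) == [1, 0, 1, 1, 0, 1, 0, 1]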

The order of bits in a byte really depends on the architecture of the particular computer that processes the bytes. A similar ambiguity is found in multi-byte data, where the way bytes are stored in the computer’s memory is described as little-endian or big-endian. In a big-endian system, the most significant byte in the sequence is stored at the lowest storage address (i.e., first); in a little-endian system, the least significant byte is stored first.
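
A short Python illustration of the two conventions, using the standard struct module:

    import struct

    value = 0x0A0B0C0D                 # a four-byte integer

    big = struct.pack(">I", value)     # big-endian: MSB at the lowest address
    little = struct.pack("<I", value)  # little-endian: LSB at the lowest address

    print(big.hex())                   # 0a0b0c0d
    print(little.hex())                # 0d0c0b0a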

Transmission also responds to very different needs than storage or processing. In the 1960s the telcos started using serialised and comparatively high transmission rates of 1,544 or 2,048 kbit/s, but network equipment performed rather simple operations on such streams, one of the most important being the identification of “frame start”. Transmission channels are far from error free and, as we have already said, the codeword identifying TS 0 can be emulated by speech samples. This means that a receiver must be programmed to deal with the moment it is first switched on and with the moments when frame alignment has been lost. The data that have flowed in the meantime are, well, lost, but there is no reason to worry: after all, it is just a few milliseconds of speech.
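
As an illustration of what “identification of frame start” involves, here is a Python sketch of a frame-alignment search for the 2,048 kbit/s case, where bits 2 to 8 of TS 0 of alternate frames carry the pattern 0011011; the confirmation count is my own illustrative choice:

    FAS = [0, 0, 1, 1, 0, 1, 1]   # frame alignment signal, bits 2-8 of TS 0
    FRAME = 32 * 8                # 32 timeslots of 8 bits = 256 bits per frame

    def find_frame_start(bits, confirmations=2):
        """Search for the FAS and declare alignment only if the pattern
        recurs at its two-frame period, rejecting emulation by speech bits."""
        last = len(bits) - 2 * FRAME * (confirmations + 1)
        for start in range(last):
            if all(bits[start + k * 2 * FRAME + 1 : start + k * 2 * FRAME + 8] == FAS
                   for k in range(confirmations + 1)):
                return start
        return None  # not yet aligned: the speech flowing meanwhile is lost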

For quite some time the bitrate used for the transmission of computer data over the network was limited to a few hundred kbit/s, but the network had to perform rather sophisticated operations on the data. Data transmission must be error free, which means that codeword emulation must be avoided or compensated for, and retransmission is requested for all data that, for whatever reason, do not satisfy strict error-checking criteria.

Because the network does not have to perform complex operations on the speech samples (which does not mean that the logic behind the routing of those samples is simple), the transmission mode is “synchronous”. This means that the transmission channel can never be “idle”: speech samples are organised in fixed-length “frames”, each frame immediately followed by another. Most networks derive the clock from the information flowing through them, but what happens if there is no speech and all bits are set to zero? To avoid the case in which it is impossible to derive the clock, every other bit of each speech sample is inverted. Computer networks, on the other hand, transmit data in frames of variable length called “packets”.
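
The inversion is a simple exclusive-OR with a fixed mask, applied before transmission and undone at the receiver; a sketch in Python:

    ADI_MASK = 0b01010101      # alternate digit inversion mask

    def to_line(sample):
        """Invert every other bit of an 8-bit sample before transmission,
        so that even an all-zero signal produces transitions on the line."""
        return sample ^ ADI_MASK

    assert to_line(0x00) == 0x55           # silence is never a flat zero
    assert to_line(to_line(0xC7)) == 0xC7  # the operation undoes itself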

This is an area where I had another techno-ideological clash with my telco colleagues in Europe. While the work on H.261 was progressing, COST 211 bis was discussing ways to multiplex the same data that the original COST 211 project had found necessary: audio, facsimile, text messages and, because things were changing, even some computer data arriving through one of those funny multiples of 300 bit/s used in telephony modems. With all the respect I had for the work done in COST 211 (to which, by the way, I had been a major contributor myself), where data multiplexing was done in the best telco tradition of “frames” and “multiframes”, I thought that there should be more modern and efficient – i.e. packet-based – ways of multiplexing data.

In COST 211 I had already proposed the use of a packet transmission system for exchanging messages between terminals and the Multi-Conference Unit (MCU), the device that managed a “multiconference”, i.e. a videoconference with more than two users. The message proposal had been accepted by COST 211, but this was not surprising because in telcos the “signalling” function was dealt with by people with friendly ears for the IT language. My new proposal to define a packet-based multiplexer for media, however, was made in a completely different environment and fell on deaf (or closed) ears. This is why H.221, the multimedia multiplexer used with H.261, is a latter-day survivor of another age: it organises non-video data in chunks of 8 kbit/s subchannels, and each of these subchannels has its own framing structure that signals which bit in the frame is used for which purpose. It is unfortunate that there is no horror museum of telecom solutions, because this one would probably sit in the centre of it.

There were two reasons for this. The first, and more obvious, is that there are people who, having done certain things in a certain way throughout their lifetime, simply do not conceive that the same things can possibly be done in a different way, particularly if the new ideas come from younger folks driven by some alien, ill-understood new discipline. In this case my colleagues were so accustomed to the sequential processing of bits with a finite state machine that they could not conceive of a microprocessor processing the data stream in bytes rather than bits, instead of a special device designed on purpose to follow certain logic steps. The second reason is more convoluted. In some Post, Telephone and Telegraph (PTT) administrations, where the state had retained the telegraph and postal services but had licensed the telephone service to a private firm – even though the latter was still under some form of control by the state – there was ground for an argument that “packet transmission” was akin to telegraphy and that telcos should therefore not automatically be given a licence to manage packet data transmission services. Those telcos were then afraid of losing what at that time was – rightly – considered the next telco frontier.

This is what it means to be a regulated private company providing public services. Not many years ago, at a time when the telco business was said to be deregulated – while the state happily put its nose into the telephone service price list – one could see different companies digging up the same portion of the street more than once to lay the same cables to provide the same services, when doing it once would have sufficed for all. Or one could see different companies building the same wireless radio antennae twice or thrice, when one antenna would have sufficed for all (and would also have reduced power consumption and electromagnetic pollution). All this madness was driven by what I call the “electric pole-driven competition” philosophy, under the watchful eye of the European Commission, which made sure that no one even thought of “sharing the infrastructure”.

Yesterday, cables were laid only once and antennae hoisted only once, but then the business had to be based on back-door dealings, where bureaucrats issued rulings based on arcane principles, after proxy battles intelligible – if ever – only to the cognoscenti.

Frankly, I do not know which of the two I like better. If I could express a desire, I would like a regulated world without brainless bureaucrats (I agree, it is not easy…), or a competitive world where the achievements of competition are measured not by the number of times city streets are dug up to lay the same cables belonging to different operators offering the same services, but by the number of smart new services provided by different operators, obviously in competition, on the same plain old physical infrastructure. Actually, there is room for sharing some non-physical infrastructure too, but that is another story.

Until recently mine was a heretical view, but the current hard economic times have brought some resipiscence to some public authorities. The Commission of the European Communities (CEC) has started having second thoughts about imposing the building of separate mobile infrastructures by each operator and is now inclined to allow the sharing of infrastructure. There is no better means of bringing sanity back to people’s minds than the realisation that the bottom of the purse has been reached.

A further important difference between transmission in the telecom and computer worlds is that, when computers talk to computers via a network, they do so using a very different paradigm from the telephone network’s. The latter is called connection-oriented because it assumes that when subscriber A wants to talk to subscriber B, a unique path is set up between the two telephone addresses by means of signalling between nodes (switches), and that path is maintained (and charged!) for the entire duration of the conversation. The computer network model, instead, assumes that a computer is permanently “connected” to the network, i.e. that it is “always on”, so that when computer A wants to talk to computer B, it chops the data into packets of appropriate length and sends the first packet towards computer B, attaching to the packet the destination address and the source address. The network, being “always on”, knows how to deliver that packet through its different nodes to computer B. When computer A sends the second packet, there is no guarantee that the network will use the same route as for the first packet. The second packet may even arrive before the first, if the first has been kept queuing somewhere in other network nodes. This lack of guaranteed packet sequence is the reason why packet networks usually provide means to restore the packet order and to control the flow, so as to free applications from these concerns. This communication model is called connection-less.
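
Using today’s UDP sockets as a stand-in for the connection-less model (the address and packet size are illustrative), the “chop into packets and attach the addresses” idea looks like this in Python:

    import socket

    # No path is set up: every datagram carries its destination address,
    # and the network is free to route (and reorder) each one independently.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    data = b"some computer data, chopped into packets of appropriate length"
    for i in range(0, len(data), 16):
        sock.sendto(data[i:i + 16], ("192.0.2.1", 9999))  # hypothetical address
    sock.close()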

Several protocols were developed to enable transmitters and receivers to exchange computer data in the proper order. Among these is the ITU-T X.25 protocol, developed and widely deployed since the 1970s. X.25 packets use the High-level Data Link Control (HDLC) frame format. The equivalent of the 2 Mbit/s sync word is a FLAG character of 01111110 binary (7E hexadecimal). To avoid emulation of the FLAG by the data, the transmitter inserts a 0 after 5 consecutive 1s, and the receiver deletes a 0 if it follows 5 consecutive 1s (this is called “bit stuffing”). Having been developed by the telecommunication industry, X.25 unsurprisingly attempted a merger of the connection-oriented and connection-less models, in the sense that, once a path is established, packets follow one another in good order through the same route.
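
Bit stuffing is easy to express in code; a minimal Python sketch of both directions:

    def stuff(bits):
        """Transmitter: insert a 0 after five consecutive 1s so the
        payload can never emulate the 01111110 FLAG."""
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b else 0
            if run == 5:
                out.append(0)
                run = 0
        return out

    def unstuff(bits):
        """Receiver: delete the 0 that follows five consecutive 1s."""
        out, run, i = [], 0, 0
        while i < len(bits):
            out.append(bits[i])
            run = run + 1 if bits[i] else 0
            if run == 5:
                i += 1  # skip the stuffed 0
                run = 0
            i += 1
        return out

    payload = [1, 1, 1, 1, 1, 1, 1, 1]  # would otherwise emulate the FLAG
    assert unstuff(stuff(payload)) == payload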

The way data move through the nodes of a network is also paradigmatic of the different approaches of the telecommunication and computer worlds. Each of the 30 speech channels contained in a primary multiplex is instantaneously switched to its destination, but an X.25 packet is first stored in its entirety in the switch and only then routed to its destination. Because of the considerable delay that a packet can undergo in a complex X.25 network, a variation of the protocol – dubbed Fast Packet Switching (FPS) – was introduced in the late 1980s: the computer in the node interprets the destination address without storing the full packet and, as soon as it has understood it, immediately routes the packet to its destination.
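
The difference between the two switching disciplines can be sketched as follows (the link interface is hypothetical, purely for illustration):

    def store_and_forward(packet, link):
        """X.25 style: buffer the entire packet, then route it."""
        buffered = bytes(packet)     # wait for the last byte to arrive
        link.send(buffered)

    def fast_packet_switching(chunks, link):
        """FPS style: route on the header, relay the rest as it arrives."""
        header = next(chunks)        # only the destination address is read
        link.send(header)
        for chunk in chunks:         # the tail streams through, unbuffered
            link.send(chunk)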

It is nice to think of the intersection of two movements – “data from computers become analogue and are carried by the telephone network” and “speech signals become digital data and are processed by computers” – but this would be an ideological reading. ISDN (Integrated Services Digital Network) was a project created by the telcos to extend the reach of digital technologies from the core to the access network, the rather primitive – I would say naïve – service idea driving it being the provision of two telephone channels per subscriber. The hope was to optimise the design and management of the network, not to enable a better way of carrying computer data at a higher bitrate. Speech digitisation did make the speech signal processable by computers, but in practice the devices that handled digitised speech could hardly be described as computers: they were devices that managed bits in a very efficient but non-programmable way.

In the early 1980s there were more and more requests from users to connect geographically dispersed computers through any type of network. This demand prompted the launch of an ambitious project called Open Systems Interconnection (OSI). The goal was to develop a set of standards that would enable a computer of any make to communicate with another computer of any make across any type of network. The project was started in Technical Committee 97 (TC 97) “Data Processing” of the International Organisation for Standardisation (ISO) and, for obvious reasons, was jointly executed with ITU-T – probably the first example of a large-scale project executed jointly by two distinct Standards Developing Organisations (SDOs).

For modelling purposes, the project broke down the communication functions of a device talking to another device into a hierarchical set of layers. This led to the definition of a Reference Model consisting of seven “layers”, a major conceptual achievement of the project. Each layer performs the functions required to communicate with the corresponding layer of the other system (peer-to-peer communication), as if the other layers were not involved. Each layer relies on the layer hierarchically below it to perform the relevant lower-layer functions, and provides “services” to the next higher layer. The architecture is defined so that changes in one layer should not require changes in the other layers.

The seven OSI layers with the corresponding functions are: 

Name – Function
Physical – Transmission of unstructured bit streams over the physical link
Data link – Reliable transfer of data across the physical link
Network – Data transfer independent of the data transmission and switching technologies used to connect systems
Transport – Reliable and transparent transfer of data between end points
Session – Control structure for communication between applications
Presentation – Data transformations appropriate to provide a standardised application interface
Application – Services to the users of the OSI environment
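
A toy Python sketch of the layering principle: each layer treats what comes from the layer above as opaque payload and adds its own header, so that peer layers can talk to each other without involving the rest of the stack (the header format here is, of course, invented):

    LAYERS = ["application", "presentation", "session",
              "transport", "network", "data link"]

    def encapsulate(user_data):
        """Wrap user data in one (invented) header per layer; the outermost
        header belongs to the data link layer, and the physical layer then
        transmits the result as an unstructured bit stream."""
        payload = user_data
        for layer in LAYERS:
            payload = f"[{layer}]".encode() + payload
        return payload

    print(encapsulate(b"user data"))
    # b'[data link][network][transport][session][presentation][application]user data'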

In the mid-1980s the telco industry felt it was ready for the big plunge into the broadband network reaching individual subscribers that everybody had dreamed of for decades. The CCITT started a new project, quite independently of the OSI project, as the main sponsors were the “transmission and switching” parts of the telcos. The first idea was to scale up the old telecom network in bitrate. This, however, was soon abandoned, for two main reasons: the first was the expected increase in packet-based computer data traffic (the main reason for the American telcos to buy into the project), and the second was the idea that such a network could only be justified by the provision of digital video services such as videoconference and television (the main motivation for the European telcos).

Both of the applications envisaged were thus inherently variable in bitrate: computer data because of its packet-based, bursty nature, and video because the amount of information generated by a video source depends heavily on its “activity”. Asynchronous Transfer Mode (ATM) was the name given to the technology, which bore a clear derivation from FPS. The basic bitrate was 155 Mbit/s, an attempt at overcoming the differences in transmission bitrates between the two hierarchies spawned by the infamous 25-year-old split. The basic cell payload was 48 bytes, an attempt at reconciling the two main reasons for having the project: 64 bytes for computer data and 32 bytes for real-time transmission.
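
The arithmetic of the compromise is worth spelling out; the overhead figure follows from the 5-byte cell header of the final standard:

    PAYLOAD = 48              # bytes: the committee's middle point, (64 + 32) / 2
    HEADER = 5                # bytes of routing information per cell
    CELL = PAYLOAD + HEADER   # 53 bytes on the wire

    print(f"header overhead: {HEADER / CELL:.1%}")  # about 9.4% of the bitrate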

A horse designed by a committee?