When radio and television broadcasting were introduced, it was reasonable to expect that physical forms of distribution would eventually disappear. Why should people pay for something when the same thing is available for free over the air? This down-to-earth consideration, however, turned out to be wrong, because people listening to radio or watching television hear and see what others have decided they should listen to or watch. When home recorders became available, a similar forecast, that people would buy recorders and blank tapes, and would stop buying vinyl records and watching movies in theatres, was an equally easy guess. Why should people buy a record in a shop if, by waiting until the right time, they could get it for free from the broadcast channel and record it? Not only did the forecast not materialise, because people have better things to do in their lives than scanning newspapers to see when a given program will be aired, but great new businesses, even overshadowing the old ones, were created.
Now come Digital Media, and some people make the “obvious” forecast that they signal the end of media as we know them. People will be able to get anything for free, copy it as many times as they want and send it to as many people as they want. Why on earth should they buy something when they can get exactly the same thing for free? It is an easy temptation to shrug off the threat of the doomsayers of the moment and conclude that, just as with the other media technologies that seemed to signal the end of the media world we know, this time too some equilibrium point will be reached that will magically leave things unchanged or, maybe, even create some great new business.
Before taking a position on this debate, let me make a disclaimer. When humans are involved, it is hard to say that there is bound to be a single outcome to a given problem, and it may very well be that an equilibrium point can magically be found and that, some years from now, we will look back and conclude that, once more, humans are resilient and adaptability is their best feature. I personally do not believe this will happen, at least not in a clear-cut manner. If it does happen, and this seems the direction in which we are heading, it is likely that end users will be the losers.
Broadcasting turned out not to be such a threat because being forced to consume what others had selected was not such a good proposition. Home recording was also not a threat because many felt it was more convenient to buy what they were interested in than to go through the hassle of actively searching for a broadcast or a cassette from a friend. Digital Technologies, instead, make it possible to create a repository of all content on the network and let computers do the job of searching, copying, organising and playing back the content a user is interested in. There is total availability and no hassle to discourage a user from going the “easy way” of getting things for free.
Many think that Technical Protection Measures (TPMs), supplemented where required by legislation, are the way to enable rights holders to enforce the rights that have been granted to them by law for centuries. The problem is that there are other rights that have also been granted by law to end users.
Such a right, access to content, is one of the first examples of attention by Public Authorities (PAs) to the well-being of the populace. In antiquity the entire body of knowledge of a community was spread among its members, and each individual or group of individuals could contribute directly to the augmentation of the common knowledge and draw freely from it. Libraries were devised as a means to facilitate access to the common body of knowledge once its size had exceeded the otherwise considerable memorisation capabilities of people of those times.
Scale apart, the situation of today is not different from Babylon in 500 BC, Alexandria of Egypt in 100 BC or Rome in 200 AD. Public authorities still consider the provision of open and free access to content as part of their duties and use revenues from general taxation to allow free access to books and periodicals in a public library. In many countries they do the same with audio and video, by applying licence fees to those who own a radio or television receiver. The same happens in universities, where the most relevant books and periodicals in a given field are housed in a faculty library. The difference here is that this “service” is offered from the revenues a university gets from enrolment fees. This kind of “free” access, ultimately, is not free at all. Someone has to pay for it, in a tortuous and not necessarily fair way.
Protected content has the potential to provide a more equitable way to allow access to the body of knowledge that society “owns”, to the extent that it considers that access integral to the rights of citizens, while ensuring the rightful remuneration of those who have produced the content and enabled its representation and carriage. How should public authorities implement that access? Should the richness of information today be considered so large that there is no longer a need for public authorities to play a role in content access, even if citizens must pay to access it? Should PAs get a lump sum from rights holders in return for granting them monopoly rights to exploit their works? Should PAs get a percentage of rights holders’ current revenues to pay for a minimum free access to content for all citizens? There is no need to give an answer to these questions, and there need not be a single answer. It should be left to each individual community to decide where to put the balance between the reward to the individual ingenuity that originated new content and the society that provided the environment in which that content could be produced.
Protected content also has an impact on another aspect. Today the Web houses an amount of information, accessible to every internet user, which has never been seen before in human history. Because information is in clear text and represented in a standard way, it is possible to create “search engines” that visit web sites, read the information, process it and extract that value added called an “index”. Today the indices are the property of those who created them, and those owners can decide to offer access to anybody under the conditions they see fit.
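The index-building process described above, reading pages and extracting the value added of an index, can be sketched as a minimal inverted index. The page URLs and texts here are invented placeholders standing in for crawled web documents:

```python
from collections import defaultdict

def build_index(pages):
    """Build a simple inverted index: word -> set of page URLs containing it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

# Hypothetical crawled pages (URLs and texts are illustrative only).
pages = {
    "example.org/a": "digital media and protected content",
    "example.org/b": "free access to digital content",
}
index = build_index(pages)
print(sorted(index["digital"]))  # → ['example.org/a', 'example.org/b']
```

Once such an index exists, whoever owns it decides which queries are answered and how results are ranked, which is exactly the leverage the text describes.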
As long as the original information is freely available on the web, it should be acceptable for anybody to create whatever value added they can think of from it. But if content has value and an identified owner, it is legitimate to ask whether this value-added processing can be left to anybody, or whether the content owner should have its rights extended to this additional information. If content is protected, it will be the rights holders themselves who develop the indices, and there will be, in general, no way for a third party to do so. This has serious consequences for the openness of information, because the categories used by rights holders to classify pieces of content are not necessarily the same as those of a third party, who might have completely different views of what makes a piece of content important. So far rights holders have owned, obviously, the copyright to a given piece of content, but with control of the indices they can even dictate how the content should be considered, prioritised and used.
The nature of media so far has been such that the source was known but the target was anonymous. In most countries, one could purchase a newspaper at a newsstand, but the publisher would not know who the consumer was. One could receive a radio broadcast, but broadcasters would not know, except statistically and with varying degrees of accuracy, who their listeners were. Transactions, too, were anonymous. With digital networked media, transactions, even fine-grained patterns such as those related to individual web pages, become observable, and the web has given web site owners the opportunity to monitor the behaviour of visitors to their sites. Every single click on a page can be recorded, and this enables a web site owner to create huge databases of profiles. As most digital content will be traded by electronic means, the technical possibility exists for an organisation to build more and more accurate data about its customers, or for a government to identify “anomalous” behavioural patterns among its citizens. The use that individuals, companies and governments will make of such information is a high-profile problem.
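How a record of clicks turns into a database of profiles can be illustrated with a few lines of aggregation. The user identifiers and page paths below are invented for the sketch; a real server log would carry the same information:

```python
from collections import Counter, defaultdict

# Hypothetical click log: (user_id, page) pairs as a web server might record them.
clicks = [
    ("u1", "/news"), ("u1", "/sports"), ("u1", "/news"),
    ("u2", "/finance"), ("u2", "/news"),
]

# Aggregate clicks per user: each Counter is that visitor's behavioural profile.
profiles = defaultdict(Counter)
for user, page in clicks:
    profiles[user][page] += 1

print(profiles["u1"].most_common(1))  # → [('/news', 2)]
```

The point is how little machinery is needed: once every click is observable, building the profile is trivial, and the open question is only what is done with it.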
The next issue with protected content is how users can preserve the same level of content accessibility they have today. PAs have often used standards to accomplish some goal. Television standards, and a lot of them exist today, as I have reported, were often introduced at the instigation of PAs for the purpose of either protecting the content industry of one country or flooding other countries with content generated within the country. Traditionally, PAs have supported the approach of telecommunication standards with worldwide applicability, because providing communication between individuals, even across political boundaries, was considered part of their duty. The CE industry has always been keen to achieve standards for its products, because the added value of interoperability would increase the propensity of consumers to buy a particular product. The information technology industry has shunned standards in many cases: from the beginning, IT products were conceived as stand-alone pieces of equipment, with basic hardware to perform computations, an OS assembling basic functionality but tied to that hardware, and applications, again designed to run on the specific OS.
What is going to be the approach to standards when we deal with protected content? Indeed, if content is protected, standards may no longer be relevant. With its IPMP Extension, MPEG has already provided solutions that preserve the most valuable goal for end users: interoperability at the level of protected content. So it is possible to request that rights holders guarantee that people retain a practically enforceable right to access the content they are interested in, if they are ready to accept the rights holders’ access conditions.
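The basic idea behind packaging protected content, encrypting the content itself while delivering the access key separately as a licence, can be sketched in a toy form. This is emphatically not how MPEG IPMP or any real TPM works; it is a minimal illustration using a hash-derived keystream, with all names invented:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic keystream from a key (toy construction, not a real cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def protect(content: bytes, key: bytes) -> bytes:
    """XOR the content with the keystream; applying it twice restores the original."""
    return bytes(c ^ k for c, k in zip(content, keystream(key, len(content))))

license_key = b"issued-to-one-user"          # hypothetical licence, delivered separately
packaged = protect(b"some premium content", license_key)   # what is distributed
restored = protect(packaged, license_key)                  # what the licensed user sees
```

The separation matters for the argument in the text: the packaged content can be copied freely, but without non-discriminatory access to the protection technology and a licence, it remains unusable.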
Lastly, a necessary condition for a practically enforceable right to free speech in the Information Society is that individuals have general access to content protection technology. Indeed, if TPMs are used to package content so that it can be delivered in digital form, access to content protection technologies should not be discriminatory. This is a high societal goal that amounts to giving citizens the practical means to exercise their right to free speech. In the digital world, freedom of speech means being able to express oneself and to make the expression available for other citizens to access, under the conditions that the originator sets, while letting the originator retain control of his expression. The achievement of this goal, however, must be balanced against the critical nature of protection technologies. A rogue user may be given access to a technology because of the need not to discriminate against him, and as a result huge amounts of content may be compromised. In earlier times the management of these technologies would have been entrusted to the state. In the 21st century there should be better ways to manage the problem.