Free Software, Public Domain and Digital Commons (continuation)

Richard Stallman dismisses proprietary software for good reasons, but his recommendations are only followed by ICT experts. Let us take a closer look.

Professional software versus software for the masses

MS Windows currently holds 91.2% of the market for desktop operating systems. Apple has 5.27%, while GNU-like systems take only 3.61%. Apple's iOS and Google's Android dominate mobile devices, with 47.06% for Android, 43.83% for Apple and only 2.38% for MS Windows. Though Android is based on the Linux kernel, it isn't free software, according to Richard Stallman:

Google has complied with the requirements of the GNU General Public License for Linux, but the Apache license on the rest of Android does not require source release. Google said it would never publish the source code of Android 3.0 (aside from Linux). Android 3.1 source code was also withheld, making Android 3, apart from Linux, nonfree software pure and simple.

So, strictly speaking, there is no relevant [1] free software for smartphones. Even the OS of the Fairphone is based on Google's Android 4.2 Jelly Bean.

The market share of public servers, web servers and firewall systems shows quite a different picture. Of web servers, 38.6% run Linux and only 32.6% MS Windows. Where security is involved, Linux is used in 58-78% of systems and MS Windows in only 18-38%. Microsoft has a quasi-monopoly on the home computer market, while Apple and Google form an oligopoly for smartphones. GNU-like systems are only popular with corporations in the ICT world. Thus Stallman's pragmatic idealism is adhered to only by professionals, not by ordinary users. Ordinary users do not benefit from the freedom of the Free Software Foundation. About open source software Stallman claims it "is missing the point of free software", though it is useful to many ordinary users: Firefox, WordPress and Android are open source. By Stallman's own account, after 30 years of free software activism the situation has only worsened. How come? Let us dive a little deeper into history to complete the picture.

Back in history of software and computers

Though we owe the invention of computers to Europeans like Alan Turing, a Briton, and John von Neumann, a Hungarian born into a Jewish family that emigrated to the US in 1930, the first computers were developed in the US and Germany after the Second World War. The main players worldwide in commercialising computers were International Business Machines (IBM) and Siemens. The roots of IBM, a company based in New York, date back to the 1880s, decades before the development of electronic computers. In the decades leading up to the Second World War, IBM had operations in many countries that would be involved in it, on both the Allied and the Axis side. IBM had a lucrative subsidiary in Germany, of which it was the majority owner, as well as operations in Poland, Switzerland and other European countries. The bureaucracy of the Nazi concentration camps used IBM systems.

In the sixties and seventies IBM didn't sell most of its mainframes and minicomputers. Hardware was leased, making corporations dependent on IBM's expensive service contracts. With the introduction of microcomputers, IBM's quasi-monopoly threatened to be broken. In 1980 IBM approached Digital Research to license a forthcoming version of CP/M for its new product, the IBM Personal Computer, in order to take a share of the home computer market. But the talks failed over a signed non-disclosure agreement. Finally Bill Gates, the nerdy grandson of a banker, got the contract to supply Basic, and eventually the operating system, for the IBM Personal Computer.

Software and hardware systems from 1980 until today

Bit by bit Microsoft started to take over IBM's monopoly by selling MS-DOS. Hardware and software were no longer leased but sold freely on the market. But today software is leased again. You can no longer buy copies of Microsoft Office, iWork, Adobe Photoshop and the like: users have to subscribe and pay a monthly fee; if they don't, the application is blocked remotely over the internet. Apple, Adobe, Microsoft and others have become Application Service Providers (ASPs). This is the same business model IBM used for corporations with mainframes and minicomputers in the sixties: Software as a Service (SaaS).

Centralized hosting of business applications dates back to the 1960s. Starting in that decade, IBM and other mainframe providers conducted a service bureau business, often referred to as time-sharing or utility computing. Such services included offering computing power and database storage to banks and other large organizations from their worldwide data centres, connected through direct phone connections. Electronic data communication dates from long before the internet conquered the world: automated teller machines, which already made use of data communication, date from the 1960s too. The expansion of the internet during the 1990s brought the SaaS business model back, but now it entered our living rooms. This innovation in profiteering is called "cloud computing", a magic term to hide the greed of the big ICT companies. The fortresses that host data centres are built on solid soil. Instead of being loosened, control is tightened again and again. The possibilities to spy on users have only expanded. For millions of users Stallman's idealistic fight for free software was useless. Only ICT professionals understand his pragmatic idealism. They use a lot of free software and turn it into moneymakers by patenting it, the way Google did.

Political economy of software and hardware systems

But when you take a closer look at the history of operating systems and application software, you can see that it is not only about different flavours but also about different classes. These classes were initially tied to specific hardware: mainframe computers (IBM, Siemens, Unisys etc.) and minicomputers (IBM, Siemens, DEC with its PDP line, etc.). They were aimed at the professional market. A quality they shared was "strict backward compatibility with older software".

The negotiations between the corporations buying these computers and the computer vendors took place on an equal footing. Corporations were able to enforce their conditions. And this is the big difference with the situation today: a home computer user is not a business client with the same power as a corporation. There is no room for negotiation on the consumer software market. Just take it or leave it. No backward compatibility is offered any longer. If you buy a new computer, it comes with the latest operating system, and most of the time you can forget about running your old application software on it. You have to buy new application software too. The whole business system of personal computer hardware and software is a scam.

In the early days Microsoft produced MS-DOS 1.0 to 6.0 in one and the same version. You still had different flavours of application software: for word processing there were WordPerfect, XyWrite, Microsoft Word and dozens of others. Microsoft not only succeeded in gaining a quasi-monopoly in the operating system market, it also gained one in the market for application software. Its MS Office suite outcompeted all the others. What happened? Windows 95 was the first MS product that could compete on the market for computers with a graphical user interface, alongside Apple's Mac and the Unix systems used by professionals. From then on, Microsoft forced application developers to adapt to its demands: they needed the Microsoft application programming interface (API) to make their products run on Windows computers. The interoperability requirement between operating system and application software was Gates's Trojan horse.

So companies that developed application software were acquired one by one. Once Microsoft gained a monopoly on the home computer market, it started to compete on the professional market as well. It made systems in different classes: Windows 7 is available in six different editions, of which Home Premium, Professional and Ultimate were available at retail, all at different prices. Another annoying feature of MS policy is that it does not deliver a finished system. When you buy their system, you become just another guinea pig for MS software experiments. While vulnerability after vulnerability comes to light, users receive their weekly update to prevent the worst, never being sure that their system is secure.

There is a second reason why this is a scam, though in the marketing world it is considered normal practice. The six editions Microsoft made of Windows 7 could in fact be reduced to two: one for client computers and one for servers. I guess Microsoft made only one edition with full capabilities and then started stripping it down in order to sell as much as it could. Yet even the cheapest edition was quite expensive and didn't fit the budget of the working poor, besides lacking important features. But marketing strategies are not based on actual use value; when people start to see through them, the story can end abruptly. Anyway, the MS strategy didn't work on the smartphone market. Apple, having a completely closed and interoperable system from the beginning, outcompeted all others, even the posh BlackBerry. But soon Google's Android outcompeted the iPhone. Vertical integration by the three major companies dominates the market, and in my view that is the biggest problem of the computer software and hardware world.

Corporations operating on the stock market have to compete no matter what. But that kind of competition is no longer capable of delivering quality software and hardware at affordable prices. Standardisation, interoperability and connectivity are essential. Therefore cooperation is needed, but it is supplanted by competition, resulting in a loss of sustainability. The political economy of computer software and hardware lacks diversity and interoperability. In their paper "Quantifying economic sustainability: Implications for free-enterprise theory, policy and practice", Goerner, Lietaer and Ulanowicz conclude:

“…durable economic vitality requires exchange networks that exhibit the same balance of hardy weave, diverse alternatives, and efficient throughput performance that produces long-term vitality in all flow systems. On the progress side, the role diversity and intricate connectivity play in supporting vitality and averting disaster gives them a new status not visible in current theory.”

From this short overview of the history of hardware and software it becomes clear that they evolved from a certain diversity to almost no diversity at all, making the system less sustainable and introducing instability. What about "intricate connectivity"? The vertical integration of the large software companies delivers intricate connectivity only within the limited domain of each company's developers, not between them. But the most important thing to notice is that the software users are completely left out of the development strategy.

The strategic role of telecommunication

By focussing on the production of software and not analysing the crucial role of data communication, Richard Stallman didn't see the whole picture. To explain this we have, again, to go back to the days when the telephone network developed and spread. The public switched telephone network (PSTN), as engineers call it, appears to users like a flat many-to-many network, though it is in fact highly centralised. Users might think they are connected directly to their peers; they are not. A connection always passes through some central redistribution point that is easy to tap. The early telephone exchange was even operated manually. The capacity to control wired communication depends on the ownership of the telephone lines. The US Bell Telephone Company had a long-standing monopoly in the US dating from 1877. It merged into the American Telephone and Telegraph Company (AT&T) in 1885 [2]. The remains of its worldwide expansion can still be found in many countries and cities. Its influence on economies should not be underestimated: the Bell Telephone Manufacturing Company (part of ITT) was involved in the 1973 coup of the dictator Pinochet in Chile. In twentieth-century Europe most phone companies were government owned. In the 80s and 90s a wave of denationalisation swept the world. Many national phone companies and cable companies had turned into private companies by the time the internet left the safe academic reserve to enter our homes.

So it is no surprise that the largest internet providers in Belgium, for instance, are the former national phone company Belgacom and the former Flemish cable company Telenet. KPN in the Netherlands, Telefónica in Spain and others underwent a similar transformation. Vodafone Group, the British multinational telecommunications company and the world's third-largest mobile telecommunications company (behind China Mobile and SingTel), stems from Racal Strategic Radio Ltd, the UK's largest maker of military radio technology in the 80s. Many mobile phone providers originate from regular national phone companies. So the fact that national markets for internet providers are dominated by oligopolies can be traced back to the history of telecommunication. In the former Soviet Union all post offices had a secret room for opening private letters and wire-tapping telephone exchanges; regular post and telephone communication were often controlled by one and the same authority.

Today the majority of internet service providers stem from phone companies. The telecommunication sector is powerful and omnipresent, and its strategic role is underestimated. In modern times governments all over the world have tried to control telecommunication facilities. During the Spanish Civil War, the telephone exchange in Barcelona was fiercely fought over. In Poland during the 80s the communist government shut down all communication lines to counter the rise of Solidarnosc. During the war in former Yugoslavia in the 90s, telephone traffic between Serbia, Croatia and Bosnia was cut off. Since the revelations of Snowden, we know that control by the authorities remains substantial but hidden. Network connectivity, however, doesn't mean connectedness. For more about this issue, see "The biological implications of electronic media use".

Though today we have highly connected physical networks and shallow communication on social media, community life is more scattered than ever. Concerning software development, for instance, the nerds of the Free Software Foundation didn't succeed in involving the users of their software. On the positive side, we must admit that they do not treat software users as guinea pigs the way Microsoft does, but collecting user requirements is not on their agenda as far as I can see. Prove me wrong if you can [3]. Anyway, at the Internet Ungovernance Forum in Istanbul on 3 September 2014 the welcome speech contained three recommendations to the Internet Governance Forum, taking place the same day, also in Istanbul. The third recommendation read:

“There is a serious division between those who develop technologies and those who do “internet policy”. These people usually complain about each other. Let us complain about both. Even if activist minded, it is easy to fall into technocratic solutionism when fighting for policy change or developing disruptive technologies. Our question to you, what are you going to do to crack open your close knit networks with fancy vocabularies so that we can have an internet of the people?”

This was a clear call to involve internet users in both the development of software and internet governance, launched by the Turkish youngsters who occupied Taksim Gezi Park in Istanbul (from 28/05/2013 to 15/06/2013). And users are sidelined not only in internet governance but also in the development of user application software.

Taksim Gezi Park protests

Without user involvement the FSF is doomed to fail

Stallman's criticism may be genuine, but it is incomplete and therefore not effective. He lacks an analysis of the political economy of the telecommunication and computer industries. His dispute about open source is far too complicated to be understood by the ordinary user. In an article in Wired, Stallman rightly claims that "Freedom means having control over your own life", but adding "if the users don't control the program, the program controls the users" sounds like a reproach and is greatly exaggerated. One can also use proprietary software for one's own ends when one is aware of its limitations. And those controlling the users have names: Bill Gates, Satya Nadella, Mark Zuckerberg, Larry Page, Shantanu Narayen, Tim Cook, John Porter, Daniel S. Mead, Randall L. Stephenson, César Alierta, all CEOs in ICT or telecommunication, working with bankers and serving their shareholders but NOT the software and internet users. Stallman is powerless, and his only outlet is to blame the users:

“When you use proprietary programs or SaaSS, first of all you do wrong to yourself, because it gives some entity unjust power over you. For your own sake, you should escape. It also wrongs others if you make a promise not to share. It is evil to keep such a promise, and a lesser evil to break it; to be truly upright, you should not make the promise at all.”

This makes him a moralist preacher, though he claims to be an atheist. Though GNU has contributed to computer education in, for instance, India, it is far too weak to counter companies like Google, Microsoft, Apple, Adobe, Verizon, Telefónica or Telenet, which makes him a naïve dreamer. As long as the world of ICT and the world of ordinary users remain separated and dominated by powerful companies and trusts, users will be deprived of affordable software they can control. Without user involvement Stallman's cause is doomed to fail. On the other hand, users do not benefit from Stallman's purism. To a user the main worries about software are that it is an affordable, fair deal, that it adds quality of life, that it does what it says it does, that it is secure, and that it respects his rights and gives him the desired freedom. These requirements are summarized in four points by Douwe Smidt. His article is in Dutch, so I translate his main points:

  1. It has to be open source;
  2. It must be user-friendly;
  3. You have to be able to control your own data;
  4. It must be useful to you.

I would like to add a fifth point: it must be a fair deal for the user as well as for the developer. As to the internet, the Charter of Human Rights and Principles for the Internet of the Internet Rights & Principles Coalition (IRPC) is indispensable and certainly worth fighting for; Stallman's purism isn't. You will find the campaign page of the IRPC here.

[1] Firefox OS and Ubuntu Touch have only a minor market share. Firefox OS was demonstrated by Mozilla in February 2012. It was designed as a complete community-based alternative system for mobile devices, using open standards and HTML5 applications. The first commercially available Firefox OS phones were the ZTE Open and the Alcatel One Touch Fire. As of 2014 more companies have partnered with Mozilla, including Panasonic and Sony. Ubuntu Touch (also known as Ubuntu Phone) is a mobile version of the Ubuntu operating system developed by Canonical UK Ltd and the Ubuntu Community. It is designed primarily for touchscreen mobile devices such as smartphones and tablet computers.

[2] Around this time the enclosure of the commons was finalised. About the commons, see 'Introduction to defining the commons'.

[3] This issue is too important for careless handling. I've written down some thoughts about it in Dutch. An English paper on the subject is being finalised and will appear in the months to come.

Some references

Goerner, Sally J, Bernard Lietaer, Robert E. Ulanowicz (2009), “Quantifying economic sustainability: Implications for free-enterprise theory, policy and practice”, Ecological Economics 69 (2009) 76–81, retrieved at http://people.biology.ufl.edu/ulan/pubs/Goerner.pdf

Ulanowicz, R.E., Goerner, S.J., Lietaer, B., Gomez, R. (2009), "Quantifying sustainability: resilience, efficiency and the return of information theory", Ecological Complexity 6 (1), 27–36, retrieved at http://people.biology.ufl.edu/ulan/pubs/ECOCOMP2.pdf
