The History of the Operating System – From paper tape to Red Hat OpenShift

I’ve been thinking about operating systems a fair bit recently. A few months back I wrote a blog about CentOS being discontinued; Orb Data has just become a Red Hat partner; last week I ordered a copy of the recently released AmigaOS 3.2 for my 1980s Commodore Amiga; and apparently Microsoft has released Windows 11.

Operating systems are at the centre of everything we do with computers, and one way or another I have been working with various versions for about 35 years: first as a system administrator, then as a consultant, and latterly as an owner of a business that specialises in the management of large enterprises. During this time I’ve seen many changes, so I thought I’d look at a brief history of the operating system and share a few experiences along the way, before finally analysing what the future holds in a cloud-dominated world.

In the Beginning

Early computers did not have operating systems. Machines of that era were unrecognisable compared with today’s multi-processing systems that we can access from any location. These large computers were single-user machines that had to be booked in advance so that operators could enter programs and data via punched cards and magnetic or paper tape. A program would then run until it finished or, more likely, crashed. This “code” had to include all the instructions needed to talk to the hardware, so even the simplest program was highly complex. I went to high school in the US and was taught programming on Commodore PETs and Apple ][s before returning to the UK to do my Computer Science O level at 15, which taught me the history of paper tape. I decided not to move on to A level computing for that reason and instead carried on teaching myself to program on my Spectrum. Luckily the government soon realised this was a mistake and commissioned the BBC Micro for schools. You can read more about this here.

The first operating system used for real work was GM-NAA I/O, produced in 1956 by General Motors’ Research division for its IBM 704. The practice of customers producing their own operating systems continued until the 1960s, when IBM started work on its System/360 series of machines, all of which shared the same instruction set and input/output architecture. IBM intended to develop a single operating system for the new hardware: OS/360. Unfortunately, as detailed in The Mythical Man-Month: Essays on Software Engineering by Fred Brooks (1975), there were a series of issues with both project management and development. One of his many observations was that “any attempt to fix observed errors tends to result in the introduction of other errors”. In the end, IBM decided to lower its ambitions and release a family of operating systems instead of the original single version (OS/360 in its PCP, MFT and MVT variants, plus the smaller DOS/360).

Unix

It may surprise younger IT workers to know that when I worked in a Unix team at a large multi-national mobile phone company in the 1990s, I didn’t have a PC or laptop on my desk. Instead, I had a Sun workstation, which was powerful, had all the tools needed to manage a network of Unix systems, ran email natively, and even had word processing software installed (WordPerfect). PCs were simply not necessary. The first time I had a PC was when I was given a laptop as a consultant, and to this day I have only ever owned one myself: a machine from a company called Time, bought in 1995 for gaming (a mistake).

The story of Unix started back in the 1960s, when the Massachusetts Institute of Technology, Bell Labs, and General Electric started to develop Multics (which stood for Multiplexed Information and Computing Service), a time-sharing operating system for the GE-645 mainframe. Multics was innovative but large and complex, and as a result individuals and companies started to abandon the project. Among those who left were Ken Thompson and Dennis Ritchie, who decided to start a new, smaller project: a single-tasking operating system called Unics, for Uniplexed Information and Computing Service (a pun on Multics). Brian Kernighan said “no one can remember” why the spelling changed, but eventually the name became the more familiar Unix.

Due to its popularity in academic circles, the late 1970s and early 1980s saw Unix adopted by many startups. However, this also led to many incompatible versions, including DYNIX, HP-UX, SunOS/Solaris and AIX; even Microsoft had a go with Xenix. DYNIX was the first version of Unix I used, 34 years ago, on a computer called a Sequent. That operating system was memorable for offering a “dual universe”, which enabled you to switch between AT&T Unix and UCB (Berkeley) Unix environments.

Things started to improve in the late 1980s when AT&T Unix System Laboratories and Sun Microsystems developed System V Release 4 (SVR4), which was subsequently adopted by many commercial Unix vendors.

This led to a growth in popularity, to the point where today over 90% of the world’s top 500 fastest supercomputers run Unix or its variants. One of these variants was Linux, which is now dominant in this market. Linux was originally developed as an open-source project by Linus Torvalds and has since been carried forward through collaboration by a worldwide network of programmers. There are now over 600 Linux distributions, about 500 of which are in active development (Debian, Ubuntu, Red Hat Enterprise Linux, CentOS, Fedora, etc.).

As a sidenote, IBM obtained the rights to DYNIX in 1999, when it acquired Sequent and initiated Project Monterey, “to unify AIX with Sequent’s Dynix/ptx operating system and UnixWare.” By 2001, because of the popularity of Linux, this project was quietly ditched.

In 2000, Apple released Darwin, also a Unix system, which became the core of the Mac OS X operating system, later renamed macOS.

The Home Computer Revolution

In the late 1970s, several 8-bit home computers hit the market. Computers such as the Apple ][ (selling between five and six million units), the Commodore 64, the Atari 8-bit series and, in the UK, the BBC Micro, the Amstrad CPC and the ZX Spectrum series became popular household machines. All of these shipped with a built-in BASIC interpreter on ROM (although the Atari 400/800 shipped with a BASIC cartridge), which also served as the command-line interface. This allowed programming from the command line, but also let the user perform file management commands and load and save to disk or cassette. The most popular home computer, the Commodore 64, supplied its operating system, DOS (no, not that one), on ROM inside the disk drive hardware. On these 8-bit computers, a complex operating system would have compromised the performance of the machine without really being needed. Instead, games and other software took over the computer completely.

Bill Gates gets lucky

In 1980 IBM began to see the way the market was moving and started a project at its Florida lab (codenamed Project Chess) to create its own PC. Bill Lowe of IBM knew that if they followed IBM’s standard working practices the proposed IBM PC would take many years to come to market, by which time it would be too late. Instead, he suggested an open architecture, using both non-IBM technology and non-IBM software. Most of the hardware could be bought off the shelf and so was relatively easy, but the operating system was another matter.

IBM knew of Microsoft because of its BASIC language and so, at short notice, arranged a meeting for the following day. Famously, when the IBM executives arrived at Microsoft’s office they assumed the boyish-looking Bill Gates who met them and took them to the meeting room was the office boy. The only other Microsoft employee present was Steve Ballmer, as he was the only other person with a suit. IBM explained that they wanted not only Microsoft BASIC but also an operating system. Microsoft was honest enough to admit that this wasn’t really its business area and suggested they speak instead to Gary Kildall at Digital Research, who had developed an operating system called CP/M. Bill Gates facilitated this meeting and told Gary Kildall that IBM would be coming to speak to him and to be nice to them. IBM drove immediately to Pacific Grove, but for some unknown reason Gary had gone out and left his wife alone to handle the most important meeting of their company’s life. Unlike Microsoft, who had signed the IBM NDA immediately, thinking they had nothing to lose, Digital Research took advice from their lawyer and refused to sign. IBM left and returned to Microsoft.

Bill Gates didn’t need a second chance. He didn’t have an operating system, but he knew a local company that did. Seattle Computer Products had a product called the Quick and Dirty Operating System (QDOS), written by Tim Paterson and closely modelled on Gary Kildall’s CP/M. Microsoft bought the rights outright for $50,000 and then signed a deal to license it to IBM (as MS-DOS) for $50 per device. IBM therefore did not own the operating system and would have to pay for it for many years to come; by the time they realised their mistake (creating OS/2 as a result), it was too late. Microsoft had pulled off the deal of the century and, as a result, began its dominance of the PC market. Gary Kildall got nothing and ended up presenting Computer Chronicles on PBS TV.

Microsoft based all its operating systems up to Windows 95 on MS-DOS. This included the first versions of Windows (1.0 through to 3.11), which were simply graphical shells that ran on top of MS-DOS. As we know, Microsoft dominates this market to this day and currently supplies over 86% of all desktop and laptop operating systems.

Cloud

Many of you will have been shouting at your screens that Red Hat OpenShift is not an operating system. This is of course true, but the reason I started this blog was that a recent IBM presentation described OpenShift as the operating system of the cloud. It is not an operating system as we know it, but in a way they are right. Over time, the line between virtual machines, container orchestration and operating systems has blurred as computing has become more distributed. In many ways, container runtime software today plays the role that the operating system formerly played, including managing the hardware resources (processor, memory, I/O devices), applying scheduling policies, and providing the security controls that allow administrators to manage the system. At the same time, we have seen drastically simplified operating systems designed to run only on virtual systems.

If you look at the diagram below you can see my point:

In a traditional architecture, the operating system sits on top of the hardware and controls the application. In virtualised environments, a hypervisor was added to manage virtual machines, but each virtual machine was essentially a full copy of the operating system image plus the application. In a container deployment, however, the container image encapsulates an application and all its software dependencies but does not include the operating system. The container platform (e.g., Kubernetes, working with a container runtime) manages the containerised workloads and services in much the same way that an operating system manages local processes in a traditional deployment.
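To make the analogy concrete, below is a minimal sketch using the standard kubectl CLI against any Kubernetes cluster. The pod name is purely hypothetical, and kubectl top assumes the metrics-server add-on is installed. Each command has a rough single-machine equivalent that the operating system would traditionally provide:

# Rough single-machine OS equivalents for everyday administrative questions
kubectl get pods --all-namespaces     # a cluster-wide 'ps': what is running, and where
kubectl top nodes                     # 'top' for the cluster's processors and memory
kubectl describe pod my-app-7d4b9c    # inspect one workload, like examining a process
kubectl delete pod my-app-7d4b9c      # 'kill'; if a Deployment owns the pod, the scheduler restarts it

The questions an administrator asks (what is running, what resources is it using, how do I stop it) are now answered by the orchestrator rather than by any single machine’s operating system.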

This is taken a step further by Red Hat OpenShift, a hybrid cloud, enterprise Kubernetes application platform. It adds increased security and streamlined workflows to help DevSecOps teams get their applications to production faster, including built-in Jenkins pipelines and source-to-image (S2I) technology to go straight from application code to a container. More importantly for this blog, Red Hat OpenShift has all the components you need to run Kubernetes in production, including the underlying Linux platform, integrated networking, storage, monitoring, logging, installation and upgrades. The line between the operating system and the orchestration tool has become obscured.
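To illustrate how short that path from code to container is, here is a hedged sketch of the source-to-image workflow using the oc CLI and Red Hat’s public nodejs-ex sample application; the project name is just an example, and it assumes you are logged in to a test cluster. Note that no Dockerfile and no operating system image is written by hand at any point:

oc new-project s2i-demo                                      # a sandbox project; the name is illustrative
oc new-app nodejs~https://github.com/sclorg/nodejs-ex.git    # S2I builds a container straight from the source
oc logs -f buildconfig/nodejs-ex                             # follow the S2I build
oc expose service/nodejs-ex                                  # publish the application via a route

The platform detects the language, layers the application onto a maintained base image, and schedules the result, exactly the kind of work that used to mean hand-crafting an operating system install.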

This shift towards containerised software deployment, and the general movement towards the cloud, means that administrators not only need to learn the new container orchestration platforms (e.g., Kubernetes, OpenShift, Diamanti D10, VMware Tanzu) but also how to manage and monitor them. That is why Orb Data has not only become a Red Hat partner but is also recommending Instana and Turbonomic to monitor and manage the resulting deployments. We are running a series of webinars on these cloud products, which can be watched and subscribed to here.

And finally…

A LinkedIn article a few years ago reported that there are more mobile phones in the world than toothbrushes. If you have read that, you probably won’t be too surprised that, despite the dominance of Windows and Linux, it is Android that is installed on over 50% of the world’s new devices (and yes, I know Android is built on a Linux kernel).

Despite this, it appears the world has decided that, at least for now, it prefers Windows for desktops and laptops (87%), Linux for supercomputers, and Android and iOS for smartphones and tablets. Web servers are evenly split between Linux and Windows Server (about 29% each).

As for the future, I have four predictions:

01

Tighter Integration
The integration between Linux and container runtimes will get tighter. Red Hat is already leading the way here and is well placed to push this further.

02

NLP for document control
As we all know, documents often become scattered across a computer: some in the file system, others on the desktop, some in email and others in the myriad cloud storage offerings (e.g. Dropbox). In the future, improvements in NLP and AI will enable documents to be tagged as they arrive on a computer, making them much easier to find using the processing power of the device and advanced search.

03

Privacy
You are probably familiar with the experience of an item you have browsed on a retail site following you around in ads for the next few days. Changes to Apple’s and Google’s web-tracking tools mean that advertisers may lose this ability. Tracking data will instead be stored on the phone, within the operating system, so advertisers will have to pay the operating system suppliers, rather than companies like Facebook, to use it. It will also mean that if you throw your phone away, your profile goes with the device. Advertisers will need to work with local data without ever seeing that data themselves, so more advertising-related features will be built into the operating system locally, which you will be able to opt into if you wish.

04

Simplification
Lastly, I think base operating systems will become simpler and will act as a foundation for third-party developers to add functionality through app stores, add-ons or plugins. The Google Play store now has 3,482,452 apps available, the Apple App Store has 2,226,823 and the Windows Store has 669,000 (figures from Statista, 2021). This added functionality is beyond any operating system provider’s ability to supply itself, so the growth of these operating systems necessitates an open platform. Even the Raspberry Pi now has different options depending on your needs, including LibreELEC to turn a Raspberry Pi into an entertainment centre, RetroPie to turn it into a retro-gaming machine, and Raspberry Pi Desktop. This last one is Debian-based but provides the Raspberry Pi OS desktop, as well as most of the recommended software that comes with Raspberry Pi OS, for any PC or Apple Mac computer.

If you have any questions or comments about this blog or others that I have written either comment below or send me an email at simon.barnes@orb-data.com. Alternatively, if you need help with your operating system strategy then don’t hesitate to contact me.
