1969... When Bell Labs withdrew from the Multics project, on which our three friends had been working until then, their time was freed up. Kenneth Thompson saw an opportunity to start a personal project: he began working on an older PDP-7, in a bid to make it practically his own. The machine had an 18-bit architecture and 9 KB of memory, expandable up to 144 KB.

The PDP-7 was classified as a minicomputer. These computers were smaller than the 'mainframe' machines commonly used by companies in the 1960s. Mainframe computers were very large, used collectively, and had their own operating systems and programming languages developed especially for them. Minicomputers, by contrast, were suited to individual use, which encouraged experimentation and innovation by creative individuals like our three pioneers.

Kenneth wanted to program 'his' computer's operating system and utilities from scratch in a higher-level language rather than in plain assembly language. Assembly language translates directly into the machine language that drives a computer's electronic components. In those days there was a movement away from assembly language and toward more abstract, 'higher-level' programming. The reasons were that these languages:

1. because they use human logic and terms, are easier to visualize, analyze, plan, communicate, etc.;
2. because they are concise and logically ordered, are more time-efficient for achieving the same result;
3. because they are compiled to machine language by a compiler, are not tied to a specific architecture: they are portable to any system, as long as that system has a compiler installed that translates the higher-level language into the machine instructions that machine needs;
4. ...

The nature of software: input, operation & output
In the end, all the software on a computer is one large program telling the machine what operation to perform and what output to produce at any given time. Part of that program is active most of the time; other parts lie dormant in memory until activated. Memory makes it possible to use programs again and again without recreating them by hand every time. The first forms of memory were static electrical parts; next, memory was stored on primitive modular physical media: cardboard cards with holes, called punch cards, that acted as switches for electrical circuits. Machines could then function in a myriad of ways, depending on the cards inserted, which had to be fed in manually. Electronic (digital) memory, by contrast, made it possible to store and activate programs purely with electrical commands. This kind of memory could hold far more instructions and was stored on tapes and disks, later on floppies, hard drives and optical media such as CDs and DVDs, and later still on flash drives and SSDs.

RAM serves as temporary memory in which programs are held ready for the central processing unit (CPU), which is in fact the real computer: the brain that does the computations.

Programming a machine, as well as adding instructions in real time during operation later on, was from early on done with some form of keyboard. In the beginning such a board might consist of nothing but switches, whereas later on symbols were assigned to keys. Much later, mouse input was added. Output for human reading was at first expressed by mechanical components, then written on paper or punch cards (in the case of software development, for example), and only later shown on screen displays.

The point was to be able to instruct, compute, and make the results of that computation human-readable. In the beginning, programming was prepared outside the machine, but with the advent of electronic storage, formerly external logistical tools and activities became part of the machine itself.

The Operating System, or OS for short
The operating system of a computer is the software that bundles all the general basic operations needed to run most functionally specific programs. Those programs can therefore leave out any general instructions already embedded in the OS, and be loaded into this general operating program as modular additions, as needed.

UNIX
The basic operational program that Kenneth Thompson created, to be run every time he booted the computer, and meant to accommodate any modular program he wrote on the side, was eventually, in 1970, named the 'Unix Operating System' by his colleague Brian Kernighan. The name was a joking reference to the abandoned 'Multics' project.

Brian is also the man who later worked with Dennis Ritchie, the developer of C, to write the famous K&R book. Unix was an essential precursor to operating systems for small computers in general.

Development of higher level programming languages for UNIX
Kenneth used several languages in writing this OS. He started with FORTRAN but ended up with what he called the B programming language. This language was influenced by the existing high-level programming languages BCPL and Simplified ALGOL (SMALGOL).

Later, however, when the three started working with the PDP-11, a much more powerful computer, B proved slow and could not take full advantage of the PDP-11's greater technical capabilities. Dennis Ritchie therefore decided to rewrite the B language, improving it and adding functionality, in order to make it fit the PDP-11. At first the three called this language 'New B', or NB for short. But the language had become so distinct that eventually a new compiler was written for it, and its name was changed to C. In 1972 C was publicly released as an independent language.

In the following years C gradually matured until, by the end of the 1970s, it was ready to spread to the broader professional world. At first it was used only for writing software for the Unix operating system and for programs to be used with that OS, and C was largely perceived as Unix-specific. But since it was designed to be compiled down to the instructions that drive the hardware, it had the potential to be used on any kind of system, as long as a compiler was written for that system. As programmers across systems began to recognize the usefulness of C, they started writing compilers for it for their own systems. Gradually, C became a language of choice for virtually every system around at the time. Portability became one of C's major features.

In 1978 C received a first form of, albeit unofficial, standardization. The book 'The C Programming Language', first edition, by K&R (Kernighan & Ritchie), gave programmers clear instructions on how to write C. To this day it is seen as one of the most defining books on computer programming. The program 'Hello World', featured in it, became universally known as the very definition of the first program a beginner writes. This unofficial standard is referred to as K&R C.

In the 1980s C grew to be fully OS-independent and a premium language among programmers on all systems: Atari, Amiga, Apple, Tandy, IBM, etc. Computers of all kinds of architectures, running all kinds of operating systems, were being programmed in C. Gradually the language began to replace other popular high-level programming languages such as BASIC and Pascal.

During the same period, direct derivatives of C also arose, of which C++ became the most prominent. Alongside these new languages, and sometimes combined with them, assembly was still widely used as well. Only in the nineties do we see higher-level languages really replacing assembly.

It is important to emphasize that almost all modern languages trace their lineage back to C. C is very much THE defining programming language, and it is used alongside most modern languages because, although they improve on C in some important respects, they seem unable to entirely replace all of its functionality. C is arguably the most all-round language, and although it may work better combined with other languages, strictly speaking it does not need any other language to function.

The first real official standard of C was ratified by the American National Standards Institute (ANSI). This organization was created in 1918 for the purpose of harmonizing professional practices to ensure compatibility and portability among economic actors. Such standardization was acutely necessary in the world of information technology, in order to align hardware, systems, compilers and programmers with one another. In 1989 ANSI released its first official C standard. This version of C became known as both ANSI C and C89, after the organization's name and the year of first publication.

Since 1990, however, C standards have been released not by ANSI but by ISO/IEC JTC 1, the Joint Technical Committee of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), whose subcommittee specializes in the standardization of programming languages. In 1990 this committee released the ISO/IEC 9899:1990 standard of C, ISO C90 for short.

Please be reminded that, effectively, ANSI C, C89 and C90 describe the exact same C language.

Over the years, the ISO/IEC committee released C90, C95 (an amendment), C99, C11, C17 and at last C23, which is the current standard.

And here we are!
We are now ready to dive into C!

To be clear, the C standards I will follow are not C23 but C90 and C99, in order to stay fully compatible with the software era of 1995-2005, also called the early internet era, or, in terms of game development, the 3D generation. I will use Windows XP-compatible IDEs and compilers set up to follow these C standards.

You may ask yourself: is it not a waste of time to learn old C standards?
I will answer that in C, new standards remove almost nothing from the older standards; they merely build upon them, adding extra methods to do the same things faster and more efficiently. Older versions are perhaps slightly less efficient, but they are certainly more broadly compatible with existing programs and tools than newer versions. Also, in C, changes between versions are limited compared to other languages. In C++, for example, a cutting-edge, experimental and innovative derivative of C, later versions differ more from earlier ones than is the case in C. The latter tends to be conservative, aiming at stability foremost.

C-heers!

Profile

sorcerer_see: (Default)
sorcerer_see

October 2025

Page generated Jan. 16th, 2026 02:29 am
Powered by Dreamwidth Studios