Linux for Infrastructure Engineers: A Beginner’s Guide to History, Features & Concepts

1. Introduction

This article introduces Linux, an essential technology for infrastructure engineers. Linux is an open source operating system (OS). Windows and macOS are the most common operating systems on the desktop computers we use in daily life, and Linux desktop users are in the minority. However, Linux has a dominant presence on the servers that support commercial services. In fact, according to a 2024 Fortune Business Insights report, Linux holds the largest share of the global server OS market at 63.1%, far ahead of second-place Windows. Therefore, unless you are already proficient in another server OS such as Windows Server, we recommend acquiring Linux skills first. We hope this article will help you understand the basics of Linux and prove useful for your future career.

Reference: Server Operating System Market Volume, Share | Analysis, 2032

2. Birth/Development of Linux

The following traces how Linux came to hold the dominant market share it has today, starting with the broader history of computers.

□1940s
The world’s first practical computer (electronic calculator) is generally considered to be the ENIAC (Electronic Numerical Integrator and Computer), developed in the United States and unveiled in 1946. It was built for military purposes, such as calculating artillery ballistics. It was extremely powerful for its time, capable of performing approximately 5,000 operations per second. However, it used about 18,000 vacuum tubes to handle “0” and “1” signals, and frequent failures caused reliability problems. Its enormous power consumption was another issue.

Reference: ENIAC – Wikipedia

□1950s
Transistors were adopted to overcome the problems of vacuum-tube computers, making computers smaller and more powerful. In 1955, IBM announced the IBM 608, the first all-transistor calculator for commercial use, which first shipped in 1957.

Also in 1956, GM-NAA I/O, considered the world’s first operating system (OS), was developed. Large computers of the time required a great deal of manual work, such as swapping punch cards, every time a single program was executed, which wasted considerable time.
Reference: Punched card – Wikipedia (image: a FORTRAN Port-A-Punch card)

GM-NAA I/O, however, enabled batch processing in which the next program was automatically executed when the currently running program finished, thereby reducing the time spent waiting for input/output.

In addition, various programming languages appeared in the late 1950s, including FORTRAN (1957) and COBOL (1959).

□1960s
In 1964, IBM introduced a large mainframe called System/360, which greatly increased storage capacity and computing power. In addition, integrated circuits (ICs) replaced transistors.

At the same time, computers continued to shrink in size, giving rise to what were known as minicomputers: in 1965, Digital Equipment Corporation (DEC) began selling the 12-bit PDP-8. In 1969, UNIX was developed at AT&T Bell Labs in the U.S. with the aim of creating an easy-to-use general-purpose operating system for minicomputers.

□1970s

In 1971, the first version of UNIX, written in assembly language, went into operation. In 1973, UNIX was rewritten in the C language, which greatly enhanced its portability and versatility, and it spread to universities and research institutions such as UC Berkeley.

□1980s

In 1983, AT&T released UNIX System V, a commercial UNIX. It became the foundation for many commercial UNIX variants such as Solaris, AIX, and HP-UX. In parallel, the University of California, Berkeley developed BSD (Berkeley Software Distribution) UNIX with enhanced networking capabilities; BSD UNIX contributed greatly to the later spread of TCP/IP.

In 1981, MS-DOS was introduced as an operating system for the IBM PC, and in 1984, Mac OS was introduced for the Apple Macintosh, which led to a rapid expansion of demand for personal computers.

□1990s

UNIX was the mainstream of OS development, but it ran on expensive computers and workstations and was out of reach for individuals. In 1991, Linus Torvalds, a student at the University of Helsinki in Finland, began developing Linux from scratch as a UNIX-compatible operating system. Since then, Linux has continued to evolve as open source free software with the cooperation of many developers around the world. Today, the Linux kernel, the core of Linux, is a huge program with over 20 million lines of source code.

3. Overview of Linux Functions

Linux is an open source operating system (OS). In a narrow sense, the term refers to the core component called the “Linux kernel”; in a broader sense, it refers to the entire operating system that combines the Linux kernel with surrounding software. What is generally recognized as “Linux” is this broader sense, and it is distributed in the form of “Linux distributions” such as Red Hat, Debian, and Fedora.

The following is an overview of the main functions of Linux. Details of each function are described in a separate article.

System Call
A system call is a mechanism for an application program running in user mode (non-privileged mode) to request processing from the Linux kernel running in kernel mode (privileged mode). This mechanism prevents user programs from directly interfering with hardware or other processes and ensures the stability and security of the entire system.
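
The following is a minimal C sketch of this mechanism, assuming a Linux system with glibc. It calls the kernel through the familiar write() wrapper and then directly by system call number via syscall(); it is only meant to show that ordinary library calls ultimately become requests to the kernel.

    /* Minimal sketch: requesting work from the kernel from user mode. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const char msg[] = "hello from user mode\n";

        /* write() is a thin wrapper around the write system call;
           file descriptor 1 (standard output) is managed by the kernel. */
        write(1, msg, sizeof(msg) - 1);

        /* The same kind of request can be made directly by number. */
        long pid = syscall(SYS_getpid);
        printf("the kernel reports our process ID as %ld\n", pid);
        return 0;
    }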

Process Management
A process is a running program, and process management is the mechanism that determines which process may use the CPU, when, and for how long. It is implemented by the Linux kernel’s scheduler, which switches between processes periodically, making it possible for multiple processes to run in parallel (or at least to appear to run at the same time).
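
For illustration, here is a minimal C sketch, assuming a Linux/POSIX environment, in which fork() asks the kernel to create a second process; the scheduler then decides when the parent and the child each get the CPU.

    /* Minimal sketch: one process asks the kernel to create another. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* one process in, two processes out */
        if (pid == 0) {
            /* Child: scheduled independently of the parent. */
            printf("child  PID=%d PPID=%d\n", getpid(), getppid());
        } else if (pid > 0) {
            printf("parent PID=%d created child PID=%d\n", getpid(), pid);
            wait(NULL);                  /* wait for the child to finish */
        } else {
            perror("fork");
            return 1;
        }
        return 0;
    }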

Memory Management
Memory management is the function that allocates limited physical memory to multiple processes efficiently and safely. In Linux, the virtual memory mechanism gives each process its own independent memory space and prevents other processes from accessing it, which enhances security and stability. In addition, when physical memory runs short, Linux can provide more memory space than physically exists by temporarily using swap space on disk.
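
As a small illustration, the following C sketch (assuming a Linux environment) uses mmap() to ask the kernel for a private anonymous mapping; the address it returns is a virtual address inside this process's own address space, and physical pages are supplied only when the memory is actually touched.

    /* Minimal sketch: asking the kernel for memory via virtual memory. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 4096;               /* one page, a common page size */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* The mapping belongs to this process's private address space. */
        strcpy(p, "stored in this process's virtual memory");
        printf("%s (virtual address %p)\n", (char *)p, p);

        munmap(p, len);                  /* hand the region back to the kernel */
        return 0;
    }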

Device Management
Device management refers to the Linux kernel’s ability to identify the hardware (devices) connected to a computer so that they can be controlled and used properly. In Linux, devices are abstracted as files, just like regular data files. Such a file is called a “device file,” and input/output performed on it is translated into access to the physical device via the device driver.
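
As a small illustration of the “devices as files” model, the sketch below (assuming a typical Linux system, which provides the /dev/urandom character device) opens a device file and reads from it with the same read() call used for ordinary files; the kernel's random number driver services the request.

    /* Minimal sketch: reading from a device through its device file. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/urandom", O_RDONLY);   /* open the device file */
        if (fd < 0) { perror("open"); return 1; }

        unsigned char buf[4];
        if (read(fd, buf, sizeof(buf)) != sizeof(buf)) {  /* driver fills buf */
            perror("read");
            close(fd);
            return 1;
        }
        printf("4 random bytes from the device: %02x %02x %02x %02x\n",
               buf[0], buf[1], buf[2], buf[3]);
        close(fd);
        return 0;
    }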

File System
The file system is a mechanism for safely and efficiently performing operations such as file creation, reading, writing, and deletion on data stored on a hard disk or SSD. Communication with physical devices such as hard disks and SSDs is handled by a kernel module called a block device driver. Through the block device driver, the Linux kernel can safely and efficiently read and write data in block units without being aware of differences in hardware.
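
The following minimal C sketch creates, writes, reads back, and deletes a regular file through the same open/read/write interface; the file name demo.txt is just a placeholder chosen for this example.

    /* Minimal sketch: basic file operations through the file system. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "demo.txt";            /* placeholder file name */
        const char text[] = "written through the file system\n";

        int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }
        write(fd, text, sizeof(text) - 1);        /* kernel writes the blocks */
        close(fd);

        char buf[64] = {0};
        fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        read(fd, buf, sizeof(buf) - 1);           /* read the data back */
        close(fd);

        printf("read back: %s", buf);
        unlink(path);                             /* delete the file */
        return 0;
    }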

Network Stack
The network stack is the layered software structure responsible for network communication, handling protocols such as TCP/IP and UDP. The network stack in the kernel sends and receives data through network device drivers that control the machine’s network interfaces (NICs), which in turn connect to external equipment such as switches and routers.
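
As a minimal illustration, the sketch below (assuming a Linux environment) asks the kernel's network stack for a TCP endpoint and a UDP endpoint with socket(); any data later sent on these sockets would pass through the kernel's protocol layers and the network device driver.

    /* Minimal sketch: obtaining communication endpoints from the network stack. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int tcp_fd = socket(AF_INET, SOCK_STREAM, 0);  /* IPv4 + TCP */
        int udp_fd = socket(AF_INET, SOCK_DGRAM,  0);  /* IPv4 + UDP */
        if (tcp_fd < 0 || udp_fd < 0) { perror("socket"); return 1; }

        printf("kernel created TCP socket fd=%d and UDP socket fd=%d\n",
               tcp_fd, udp_fd);

        close(tcp_fd);
        close(udp_fd);
        return 0;
    }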

4. Linux Features

Open source
The most important feature is that Linux is free, open source software whose source code is publicly available. Anyone can freely use, modify, and redistribute it. Active community involvement also makes it easy to find information.

Highly stable and secure
Linux is recognized as a highly stable and secure OS: fatal bugs are rare, and security holes and other vulnerabilities are addressed quickly. It can be expected to run stably even over long periods of continuous operation.

Highly customizable
The OS can be installed in a minimal configuration, with only the necessary functions added afterwards; with such a lightweight configuration, Linux runs efficiently even on machines with modest specifications.

5. Conclusion

This article has explained the history, main functions, and features of Linux. Even if you already use Linux as a matter of course, you may have noticed that there is much about it you were not aware of. Details of each function will be covered in separate articles, and I hope you will continue reading them. Thank you for reading to the end.

6. References

[試して理解]Linuxのしくみ ([Try and Understand] How Linux Works), 技術評論社 (Gijutsu-Hyoron Co.)
意外に知らないLinuxの実像、UNIXからの歴史をおさらいしよう (The surprisingly little-known reality of Linux: a review of its history from UNIX)

https://halo.aichi-u.ac.jp/~mouri/lecture/comsci08.pdf
