Chapter 4: Operating System Concepts
Introduction
An operating system (OS) is the master program that manages all computer resources and provides services to application software. Without an OS, every program would need to handle hardware directly, manage memory, schedule its own execution, and more. The OS abstracts away hardware complexity and provides a consistent interface for programs.
Why This Matters
Understanding operating system concepts is fundamental to system programming. When you write system-level code, you're either working within an OS (like writing a device driver or kernel module) or building OS-like functionality (like creating a scheduler or memory allocator). These concepts form the foundation of how modern computers work.
How to Study This Chapter
- Relate to experience - Think about how your OS manages running programs
- Observe your system - Use Task Manager or Activity Monitor
- Experiment - Try running multiple programs, observe resource usage
- Draw diagrams - Visualize how components interact
What is an Operating System?
The operating system is software that:
- Manages hardware resources (CPU, memory, storage, I/O devices)
- Provides abstraction (files instead of disk sectors, processes instead of raw CPU)
- Enforces security (prevents programs from interfering with each other)
- Enables user interaction (graphical or command-line interface)
The OS as a Resource Manager
Think of the OS as a manager coordinating resources:
Applications
↓
Operating System (Resource Manager)
↓
Hardware
Without an OS: Each program would need device drivers, memory management, task scheduling - chaos!
With an OS: Programs request resources through system calls, OS handles the complexity.
The OS as an Extended Machine
The OS provides a simpler, more abstract view of hardware:
| Hardware Reality | OS Abstraction |
|---|---|
| Disk sectors, cylinders, heads | Files and directories |
| Physical memory addresses | Virtual address spaces |
| CPU instructions | Processes and threads |
| Network packets | Sockets and connections |
| Device registers | Device files and drivers |
Core OS Functions
1. Process Management
Process: A running program, with its own memory space, registers, and resources.
OS Responsibilities:
- Create and terminate processes
- Schedule processes for CPU time
- Provide inter-process communication (IPC)
- Handle synchronization (locks, semaphores)
Example: When you open a web browser, the OS:
- Loads the program from disk into memory
- Creates a process with a virtual address space
- Schedules it for CPU time
- Manages its interactions with other processes
2. Memory Management
OS Responsibilities:
- Allocate memory to processes
- Keep track of which memory is in use
- Implement virtual memory (more on this later)
- Protect processes from accessing each other's memory
- Swap memory to disk when RAM is full
Key concepts:
- Virtual memory: Each process thinks it has all memory to itself
- Paging: Memory divided into fixed-size pages
- Swapping: Moving inactive pages to disk
3. File System Management
OS Responsibilities:
- Organize data into files and directories
- Manage disk space allocation
- Provide file access methods (read, write, seek)
- Implement permissions and security
- Buffer and cache file data
File System Hierarchy:
/ (root)
├── bin/ (essential programs)
├── etc/ (configuration files)
├── home/ (user directories)
│ └── user/
├── var/ (variable data, logs)
└── dev/ (device files)
4. Device Management
OS Responsibilities:
- Provide uniform interface to diverse hardware
- Load and manage device drivers
- Handle I/O requests
- Manage device queues and buffering
Device Types:
- Block devices: Random access (hard drives, SSDs)
- Character devices: Sequential access (keyboard, serial ports)
- Network devices: Network interfaces
5. Security and Protection
OS Responsibilities:
- User authentication
- Access control (who can read/write/execute what)
- Resource isolation (processes can't interfere with each other)
- Privilege levels (kernel mode vs user mode)
OS Architecture Models
Monolithic Kernel
All OS services run in kernel space, with full hardware access.
+------------------------------------------+
| User Applications |
+------------------------------------------+
↕ (system calls)
+------------------------------------------+
| File System | Drivers | Network | MM | ← All in kernel
+------------------------------------------+
| Hardware |
+------------------------------------------+
Examples: Linux, traditional Unix, FreeBSD
Advantages:
- Fast (no context switching between services)
- Efficient communication between components
Disadvantages:
- Large kernel (complex)
- Driver bug can crash entire system
- Harder to maintain
Microkernel
Minimal kernel, most services run in user space.
+------------------------------------------+
| User Apps | File System | Drivers | Net | ← Services in user space
+------------------------------------------+
↕ (messages)
+------------------------------------------+
| Microkernel (IPC, basic scheduling) | ← Minimal kernel
+------------------------------------------+
| Hardware |
+------------------------------------------+
Examples: Minix, QNX, seL4
Advantages:
- More stable (driver crash doesn't kill kernel)
- More modular
- Better security isolation
Disadvantages:
- Slower (more context switches)
- More complex IPC
Hybrid Kernel
Compromise between monolithic and microkernel.
Examples: Windows NT, macOS/XNU
Approach: Keep performance-critical services in kernel, move others to user space.
Kernel Mode vs User Mode
Modern CPUs support privilege levels to protect the kernel.
User Mode (Ring 3)
- Applications run here
- Limited access to hardware
- Cannot execute privileged instructions
- Cannot access kernel memory
Kernel Mode (Ring 0)
- OS kernel runs here
- Full access to hardware
- Can execute any instruction
- Can access all memory
Mode Switching
When an application needs OS services, it uses a system call:
- Application invokes a system call (e.g., read())
- CPU switches from user mode to kernel mode
- Kernel executes the request
- Kernel returns the result
- CPU switches back to user mode
Example: Reading a file

```c
#include <fcntl.h>   // open, O_RDONLY
#include <unistd.h>  // read, close

// User mode
int fd = open("/etc/passwd", O_RDONLY); // System call: switches to kernel mode
char buffer[100];
read(fd, buffer, 100);                  // System call: switches to kernel mode
close(fd);                              // System call: switches to kernel mode
// Back in user mode
```
Processes and Threads
Processes
A process is an instance of a running program.
Process Components:
- Code (text segment): The program instructions
- Data: Global variables
- Heap: Dynamically allocated memory
- Stack: Local variables, function calls
- PCB (Process Control Block): OS bookkeeping (PID, state, registers, etc.)
Process States:

New
 ↓
Ready ←→ Running → Terminated
 ↑          ↓
 +---- Waiting
- New: Being created
- Ready: Waiting for CPU
- Running: Executing on CPU
- Waiting: Waiting for I/O or event
- Terminated: Finished execution
Threads
A thread is a lightweight process - multiple threads share the same process memory.
Thread Components:
- Own stack
- Own registers (including PC)
- Shares code, data, and heap with other threads in same process
Advantages of Threads:
- Lighter weight than processes
- Faster to create and context switch
- Easy communication (shared memory)
- Better for parallel programming
Example: Web browser
- One thread for UI
- One thread for network requests
- One thread for rendering
- All share same memory
Context Switching
When the OS switches between processes/threads:
- Save current process state (registers, PC, etc.)
- Load next process state from memory
- Resume next process
Cost: Context switching takes time (on the order of microseconds); too many switches slow the system down.
CPU Scheduling
The OS decides which process runs when.
Scheduling Algorithms
1. First-Come, First-Served (FCFS)
- Simplest: run processes in order they arrive
- Problem: long process blocks short ones
2. Shortest Job First (SJF)
- Run shortest process first
- Optimal average waiting time
- Problem: need to predict process length
3. Round Robin (RR)
- Each process gets a time slice (e.g., 10ms)
- Fair, prevents starvation
- Used by most modern OSes
4. Priority Scheduling
- Each process has priority
- Higher priority runs first
- Problem: low-priority processes may starve
5. Multilevel Feedback Queue
- Combines the ideas above
- Processes move between queues based on behavior
- Variants are used in Windows and classic Unix schedulers
Inter-Process Communication (IPC)
Processes need to communicate and synchronize.
IPC Mechanisms
1. Pipes
- Unidirectional data flow
- One process writes, another reads
ls | grep ".txt" # Pipe output of ls to grep
2. Message Queues
- Processes send messages to a queue
- Other processes read from queue
3. Shared Memory
- Multiple processes access same memory region
- Fastest IPC method
- Requires synchronization
4. Sockets
- Communication over network or locally
- Used for client-server applications
5. Signals
- Simple notifications
- Example: SIGKILL (terminate process), SIGINT (Ctrl+C)
Synchronization
When multiple processes/threads access shared resources, we need synchronization.
The Critical Section Problem
Problem: Two threads trying to update the same variable:

```c
// Thread 1:
count = count + 1;
// Thread 2:
count = count + 1;
```
Without synchronization: Final value might be incorrect due to race condition.
Synchronization Primitives
1. Mutex (Mutual Exclusion)
- Lock that only one thread can hold
- Ensures exclusive access to shared resource
2. Semaphore
- Counter-based synchronization
- Can allow N threads to access resource
3. Monitors
- High-level synchronization construct
- Combines mutex and condition variables
Deadlock
Deadlock: Two or more processes waiting for each other forever.
Example:
- Process A holds resource 1, wants resource 2
- Process B holds resource 2, wants resource 1
- Both wait forever
Conditions for Deadlock:
- Mutual exclusion: Resources can't be shared
- Hold and wait: Processes hold resources while waiting
- No preemption: Can't forcibly take resources
- Circular wait: Circular chain of processes waiting
Deadlock Prevention: Break one of the four conditions.
Key Concepts
- OS manages hardware resources and provides abstraction
- Kernel mode has full hardware access, user mode is restricted
- Processes are running programs with isolated memory
- Threads share memory within a process
- Scheduling determines which process runs when
- IPC enables process communication
- Synchronization prevents race conditions
Common Mistakes
- Confusing processes and programs - Program is code, process is running instance
- Ignoring kernel/user mode - Understanding this is crucial for security
- Forgetting context switch cost - Creating too many threads hurts performance
- Not handling deadlock - Always think about resource ordering
- Assuming atomic operations - Most operations aren't atomic
Debugging Tips
- Use system monitors - htop, top, Task Manager show processes
- Check process states - Is it waiting for I/O or CPU?
- Monitor context switches - Too many indicates problem
- Watch for deadlocks - Hung processes may be deadlocked
- Understand scheduling - Priority affects performance
Mini Exercises
- List all running processes on your system (use ps, Task Manager, etc.)
- Identify your OS's kernel architecture (monolithic, microkernel, hybrid)
- Run a program and find its Process ID (PID)
- Observe CPU usage - which processes use most CPU?
- Create a simple process that spawns child processes
- Research: What scheduling algorithm does your OS use?
- Find the number of threads your web browser is running
- Use a pipe in the command line: ls | wc -l
- Send a signal to a process (kill command)
- Monitor context switches on your system
Review Questions
- What are the main functions of an operating system?
- What's the difference between kernel mode and user mode?
- Explain the difference between a process and a thread.
- What is a context switch and why is it costly?
- What are the four conditions necessary for deadlock?
Reference Checklist
By the end of this chapter, you should be able to:
- Explain what an operating system does
- Differentiate between monolithic and microkernel architectures
- Understand kernel mode vs user mode
- Describe processes and threads
- Explain process states and scheduling
- Understand IPC mechanisms
- Recognize synchronization problems
- Identify potential deadlocks
Next Steps
Now that you understand operating system concepts, the next chapter dives into C programming - the language of systems. You'll learn why C is used for OS development, how to manage memory manually, and how to make system calls.
Key Takeaway: The operating system is the master program that manages all resources, provides abstraction over hardware, and enables multiple programs to run safely and efficiently. Understanding OS concepts is essential for system programming.