Operating system: A program that lets the user interact with the low-level hardware, so that the user does not need technical knowledge to run their computer.
Processes are programs in execution on the computer. Each process requires resources, which are allocated by the OS.
The process contains:
This includes info about the process:
There are 5 states it can go through while running:
A process can have child processes, each of which executes in the same way as the parent, forming a tree of processes.
They can share resources, share a subset of resources or share no resources. They can also execute concurrently or one after the other.
This is where processes live. It has 4 components:
There are two modes, user mode and kernel mode; in user mode, only instructions that cannot cause harm are allowed.
You swap to kernel mode when:
This takes a ready process and assigns it the resources it needs to begin executing. It also reassigns resources when a process terminates.
Independent processes: Processes that cannot be affected by the execution of other processes
Cooperating processes: Processes that may share data with other processes and rely on others to function
Advantages of multiprogramming
Multiprogramming aims to maximize CPU use
This selects the next process to allocate to the CPU.
It makes a scheduling decision when:
Pre-empting: Taking a process off the CPU
Burst time: The time a process spends executing on the CPU in one stretch.
Pre-emptive: Causes programs to stop executing before they're finished
Run whichever process arrives first, in arrival order.
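The FCFS rule above can be sketched as a few lines of Python. The burst times are hypothetical example values; with everything arriving at time 0, each process simply waits for the total burst time of the processes ahead of it.

```python
# FCFS: processes run in arrival order. Assuming all processes arrive
# at t=0, each one waits for the combined burst time of those before it.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # this process waits until 'elapsed'
        elapsed += b            # then runs for its full burst
    return waits

# Hypothetical burst times: P1=24, P2=3, P3=3
waits = fcfs_waiting_times([24, 3, 3])
print(waits)                     # [0, 24, 27]
print(sum(waits) / len(waits))   # average waiting time: 17.0
```

Note how one long process at the front (the "convoy effect") inflates everyone else's waiting time.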
Compares the waiting processes' estimated next CPU burst lengths and schedules the one with the shortest.
Some things to note:
Each process gets a time slice on the CPU; after that time has elapsed, it is replaced with the next process from the front of the queue. It can be implemented with a normal queue (FCFS order) or a priority queue.
The time slice or time quantum is denoted q and is usually 10 to 100 milliseconds.
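The round-robin rotation can be sketched with a plain FIFO queue, as described above. The process names, burst times, and quantum are made-up example values.

```python
from collections import deque

# Round Robin: each process runs for at most q time units, then goes
# to the back of the queue until its burst is used up.
def round_robin(bursts, q):
    queue = deque(bursts)        # (name, remaining burst) pairs
    order = []                   # (name, time run) slices, in CPU order
    while queue:
        name, remaining = queue.popleft()
        run = min(q, remaining)  # run for a full quantum, or less if done
        order.append((name, run))
        if remaining > run:      # not finished: back of the queue
            queue.append((name, remaining - run))
    return order

# Hypothetical workload: P1 needs 5 units, P2 needs 3, quantum q=2
print(round_robin([("P1", 5), ("P2", 3)], q=2))
# [('P1', 2), ('P2', 2), ('P1', 2), ('P2', 1), ('P1', 1)]
```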
Note: In an exam priority is given by a number e.g 1 - 5. The lower the number the higher the priority. So priority 1 is higher than priority 5.
If a new process arrives with a shorter remaining time than the currently executing one, it pre-empts it and takes its place.
If it isn't shorter, it waits its turn until no process in the queue has a shorter remaining time.
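The pre-emption rule above can be sketched as a one-time-unit-at-a-time simulation: at every tick, the arrived process with the shortest remaining time holds the CPU. Arrival and burst values below are hypothetical.

```python
import heapq

# SRTF (pre-emptive SJF): each time unit, run the arrived process with
# the shortest remaining time; a new shorter arrival pre-empts the current one.
def srtf(processes):  # processes: list of (name, arrival, burst)
    procs = sorted(processes, key=lambda p: p[1])  # by arrival time
    ready, timeline, t, i = [], [], 0, 0
    while i < len(procs) or ready:
        if not ready:                      # idle until the next arrival
            t = max(t, procs[i][1])
        while i < len(procs) and procs[i][1] <= t:
            name, _, burst = procs[i]
            heapq.heappush(ready, (burst, name))   # keyed by remaining time
            i += 1
        rem, name = heapq.heappop(ready)   # shortest remaining time wins
        timeline.append(name)              # run it for one time unit
        t += 1
        if rem - 1 > 0:
            heapq.heappush(ready, (rem - 1, name))
    return timeline

# Hypothetical: P1 arrives at t=0 (burst 4), P2 at t=1 (burst 2).
print(srtf([("P1", 0, 4), ("P2", 1, 2)]))
# ['P1', 'P2', 'P2', 'P1', 'P1', 'P1'] -- P2 pre-empts P1 at t=1
```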
The process with the highest priority gets the CPU.
Smallest integer = highest priority
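Because the smallest integer wins, a min-heap gives the scheduling order directly. The process names and priority numbers below are made up for illustration.

```python
import heapq

# Non-pre-emptive priority scheduling: pop the smallest priority number
# first (smallest integer = highest priority, as in the exam convention).
ready = [(3, "P1"), (1, "P2"), (5, "P3"), (2, "P4")]  # (priority, name)
heapq.heapify(ready)

order = [heapq.heappop(ready)[1] for _ in range(len(ready))]
print(order)  # ['P2', 'P4', 'P1', 'P3']
```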
This is where you put processes into two different queues depending on their performance requirements (e.g. how fast their response time needs to be). The aim is to make the computer run faster for the user by giving the processes the user needs more attention.
These let processes jump from one queue to another if their resource requirements change. It's difficult to implement efficiently.
Basic unit of CPU utilisation (subset of a process)
A thread shares the process's global variables, so threads can communicate with the other threads of the process directly.
Threads are scheduled using priority-based pre-emptive scheduling
Thread runs until
In order for processes to run they need access to memory. However, the computer only has a finite amount of memory, so just as we allocate CPU time we need to allocate memory space.
There are two partitions, one for kernel processes and one for user processes; doing this is more efficient.
Each partition is contained in a contiguous section of memory
When allocating memory to a process, we find a hole in main memory large enough to store the process's memory. We place memory in one block instead of scattering it, because reading a single contiguous block of bytes is a lot more efficient.
hole: block of available memory
When picking a hole for the memory we can use 3 different strategies to decide which one to pick. Ideally we want:
We have the following strategies:
First fit and best fit are usually considered better than worst fit.
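The three strategies can be sketched over a list of free hole sizes. The hole sizes and request below are hypothetical example values; each function returns the index of the chosen hole.

```python
# Hole-picking strategies for contiguous allocation.
def first_fit(holes, size):
    # First hole that is big enough, scanning from the start.
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    # Smallest hole that is big enough (least leftover space).
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    # Largest hole (biggest leftover, hoping it stays usable).
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # free hole sizes in KB (made up)
print(first_fit(holes, 212))  # 1 -- the 500 KB hole comes first
print(best_fit(holes, 212))   # 3 -- 300 KB leaves the smallest leftover
print(worst_fit(holes, 212))  # 4 -- 600 KB is the largest hole
```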
Under ideal conditions we'd have all of every process's memory in main memory, but there often isn't enough space for that. So instead we have virtual memory.
Virtual memory is the idea of storing process memory that should be in the RAM in a section of the hard drive, then swapping it into main memory when it's needed. In order for this to work we use a concept called paging.
This is storing a set of logical addresses which point to the actual physical memory, instead of addressing the real memory directly.
Paging means that a process, and the developer writing code for it, doesn't need to care how the underlying main memory is structured or work around the holes. Instead they get an abstract view which is much easier to work with.
Paging also means we can swap out the pages a process uses into what's called virtual memory: we remove the logical address's mapping to the frame and give the frame to another process. The OS maintains the list of frames that are free in the physical address space; when it runs out, we start using virtual memory.
Virtual memory: the ability to address more locations than the main memory has
This is where we only bring a page into main memory when it is required by the process during execution.
To begin, we create the page table by giving each process all the pages it needs, but no page is actually connected to main memory yet. Whether a page has a connected frame is stored in a valid/invalid bit; at the start, every page's valid/invalid bit is 0.
By performance we mean how fast we can access the page. We call this the effective access time.
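Effective access time is usually computed as a weighted average of the ordinary memory access time and the (much larger) page-fault service time, weighted by the page-fault rate p. The timing numbers below are hypothetical example values, not measurements.

```python
# EAT = (1 - p) * memory_access + p * page_fault_service_time,
# where p is the probability of a page fault on a given access.
def effective_access_time(p, memory_ns=200, fault_ns=8_000_000):
    return (1 - p) * memory_ns + p * fault_ns

print(effective_access_time(0))         # 200: no faults, plain memory speed
print(effective_access_time(1 / 1000))  # 8199.8: one fault per 1000 accesses
```

Even one fault in a thousand accesses slows the effective access time by roughly 40x here, which is why the page-fault rate matters so much.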
If the required page is not in memory we encounter a page fault; if there is also no free frame, we need to move a victim page into virtual memory to make room for a frame to connect to the page.
We need a page replacement algorithm to decide which page to swap out. The aim is to minimise the number of page faults.
The oldest page is the victim page. This is implemented with a queue.
FIFO sometimes suffers from Belady's anomaly, where increasing the number of allocated frames can increase the page fault rate.
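FIFO replacement and Belady's anomaly can both be seen in a short sketch. The reference string below is a hypothetical example chosen to show the anomaly: giving the process 4 frames produces more faults than 3.

```python
from collections import deque

# FIFO replacement: the oldest resident page is the victim.
def fifo_faults(refs, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.discard(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)                   # newest at the back
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults with 3 frames
print(fifo_faults(refs, 4))  # 10 faults with 4 frames: Belady's anomaly
```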
As the name suggests the least recently used page is swapped out.
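LRU can be sketched with a list kept in recency order, least recently used at the front. The reference string is the same hypothetical one as above; note LRU does not suffer from Belady's anomaly.

```python
# LRU replacement: evict the page whose last use is furthest in the past.
def lru_faults(refs, num_frames):
    frames, faults = [], 0   # list ordered least -> most recently used
    for page in refs:
        if page in frames:
            frames.remove(page)      # re-append below as most recent
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)        # evict least recently used page
        frames.append(page)
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10 faults
```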
This is used for comparison only. The algorithm is simple; you always pick the page which won't be used for the longest time, which of course makes it optimal.
The reason it's only theoretical is that it requires knowing the future, which the OS can't do.
This is when a process spends more time dealing with page faults than doing actual work, i.e. the page fault rate is very high.
This can happen when a process is allocated too few frames and so needs to keep swapping pages in and out of virtual memory.
It's the reason your computer slows to a halt when you reach the limits of your RAM capacity.
Load into memory all the pages that a process will need. This can be done by keeping track of the pages a process needs in a working set; if the process is suspended, then when it is eventually resumed we prepage all the pages that were in its working set.
Using prepaging is a balancing act between the cost of prepaging and minimising page faults.
These provide the bulk of the secondary storage of modern computers
Drives can be attached via an I/O bus
Platters range from 2 cm to 30 cm. HDD capacities range from 100 GB to 8 TB.
A storage device with no moving parts.
Advantages:
Disadvantages
Early storage device which holds data on spools of tape in cartridges. It is mainly used as backup storage for infrequently used data. The tape is kept on a spool and wound or rewound past a read-write head.
Addressed as a large 1-dimensional array of logical blocks.
The 1-dimensional array of logical blocks is mapped onto the sectors of the disk sequentially.
It is easy to map from a logical to a physical address as long as there are no bad sectors. However, the number of sectors per track is not constant.
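Under the simplifying assumption of a constant number of sectors per track (which, as noted above, real disks don't have), the sequential mapping can be sketched as plain integer arithmetic. The geometry numbers below are hypothetical.

```python
# Map a logical block number to (cylinder, head, sector), assuming a
# constant number of sectors per track. Geometry is made up.
HEADS = 4
SECTORS_PER_TRACK = 16
BLOCKS_PER_CYLINDER = HEADS * SECTORS_PER_TRACK

def block_to_chs(block):
    cylinder = block // BLOCKS_PER_CYLINDER
    head = (block % BLOCKS_PER_CYLINDER) // SECTORS_PER_TRACK
    sector = block % SECTORS_PER_TRACK
    return cylinder, head, sector

print(block_to_chs(0))    # (0, 0, 0) -- first block, first surface
print(block_to_chs(100))  # (1, 2, 4)
```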
This lets you attach an array of disks.
Aim is to minimize seek time.
Just serve each request in queue order. This is pretty inefficient as you're not optimizing at all.
Always serve the request closest to the current head position; this basically forms a path through the cylinders.
This algorithm is pretty good, however may cause starvation of requests.
Basically just constantly sweep from one end to the other, serving requests as the head passes them.
More uniform wait time than SCAN: it treats the cylinder list as circular and constantly rotates around it.
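Comparing total head movement makes the difference between these algorithms concrete. Below is a sketch of FCFS against a LOOK-style variant of SCAN (one that reverses at the last request rather than running to the disk edge); the request queue and starting cylinder are hypothetical example values.

```python
# Total head movement (in cylinders) for two scheduling strategies.
def fcfs_movement(requests, head):
    total = 0
    for r in requests:          # serve strictly in queue order
        total += abs(head - r)
        head = r
    return total

def look_movement(requests, head):
    # Sweep upward through all requests above the head, then reverse
    # and sweep down through the rest (LOOK: turn at the last request).
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    total, pos = 0, head
    for r in up + down:
        total += abs(pos - r)
        pos = r
    return total

reqs = [98, 183, 37, 122, 14, 124, 65, 67]  # pending cylinder requests
print(fcfs_movement(reqs, 53))   # 640 cylinders of head movement
print(look_movement(reqs, 53))   # 299 -- the sweep avoids the zig-zag
```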
Allocate part of disk space as virtual memory to extend main memory. You can carve this swap space out of the normal file system or give it its own disk partition.
File data is written to swap space until a write back to the file system is requested. Dirty pages go to swap space; text-segment (code) pages are simply thrown out and reread from the file system when needed, since they are read-only.
Redundant array of inexpensive disks. You get more reliability by having redundancy.
Redundancy increases the mean time to data loss, even though having more disks lowers the mean time to failure of some individual disk.
Combined with NVRAM to improve write performance.
RAID split into 6 levels
RAID within a storage array can still fail if the whole array fails, so it is common to have automatic replication.
Stripe: Splitting data across multiple disks
Mirror: Mirroring data across disk arrays or (single disks)
Snapshot: View of file system before a set of changes take place
Replication: Automatic duplication of writes between separate sites. This creates redundancy and enables disaster recovery.
Hot spare disk: A disk automatically used to replace a failed disk and rebuild the RAID set if possible. This decreases the mean time to repair.