Operating System Support
Computer System Architecture Report
by Alvin de Vela
on 26 February 2013


Transcript of operating system support

OPERATING SYSTEM SUPPORT

Objectives and Functions
-Convenience
-Efficiency
-Security
[Figure: Layers and Views of a Computer System]

Operating System Services
-Program creation
-Program execution
-Access to I/O devices
-Controlled access to files
-System access
-Error detection and response
-Accounting

What is an Operating System?
An operating system is a program (software) that acts as an intermediary (interface) between a user of a computer and the computer hardware. It provides an environment in which a user may execute programs.

Computer System Components (Operating System Concepts)
1. Hardware
2. Operating system
3. Applications programs
4. Users
[Figures: Computer System Components; the O/S as a Resource Manager]

Types of Operating System
-Interactive
-Batch
-Single program (uni-programming)
-Multi-programming (multi-tasking)

Early Systems (late 1940s to mid 1950s)
-No operating system; programs interact directly with the hardware
-Two main problems: scheduling and setup time

Simple Batch Systems
-Resident Monitor program; the monitor handles scheduling
-Users submit jobs to an operator, and the operator batches the jobs
-The monitor controls the sequence of events to process the batch
-When one job is finished, control returns to the monitor, which reads the next job

Job Control Language (JCL)
Instructions to the monitor, usually denoted by $, e.g.:
$JOB
$FTN
... some Fortran instructions
$LOAD
$RUN
... some data
$END
Multi-programmed Batch Systems
-I/O devices are very slow
-When one program is waiting for I/O, another can use the CPU
[Figures: Single Program; Multi-Programming with Two Programs; Multi-Programming with Three Programs]

Time Sharing Systems
-Allow users to interact directly with the computer, i.e. interactive use
-Multi-programming allows a number of users to interact with the computer

Scheduling (CPU Management)
Scheduling is the key to multi-programming. There are four types:
-Long term
-Medium term
-Short term
-I/O

Long Term Scheduling
-Determines which programs are submitted for processing, i.e. controls the degree of multi-programming
-Once submitted, a job becomes a process for the short term scheduler (or it becomes a swapped-out job for the medium term scheduler)

Medium Term Scheduling
-Part of the swapping function (covered later)
-Usually driven by the need to manage multi-programming
-If there is no virtual memory, memory management is also an issue

Short Term Scheduler
-Also called the dispatcher
-Makes the fine-grained decision of which job to execute next, i.e. which job actually gets to use the processor in the next time slot
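The report does not name a particular short-term policy, so as a rough illustration of the dispatcher's job (picking which ready process gets the processor in the next time slot), here is a minimal round-robin style sketch in C. The pcb_t type, the ready-queue variables and the function names are hypothetical, not taken from the slides.

    #include <stddef.h>

    /* Hypothetical minimal process descriptor (see the PCB section below). */
    typedef struct pcb {
        int pid;
        struct pcb *next;   /* link for the singly linked ready queue */
    } pcb_t;

    static pcb_t *ready_head = NULL;   /* front of the ready queue */
    static pcb_t *ready_tail = NULL;   /* back of the ready queue  */

    /* Add a process that has become ready to the back of the queue. */
    void ready_enqueue(pcb_t *p) {
        p->next = NULL;
        if (ready_tail) ready_tail->next = p; else ready_head = p;
        ready_tail = p;
    }

    /* Dispatcher: take the process at the front of the ready queue.
     * Under round robin, a preempted process is re-queued at the back,
     * so every ready process eventually gets a time slot. */
    pcb_t *dispatch(void) {
        pcb_t *p = ready_head;
        if (!p) return NULL;               /* nothing ready: CPU idles */
        ready_head = p->next;
        if (!ready_head) ready_tail = NULL;
        p->next = NULL;
        return p;                          /* runs during the next slice */
    }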
Process States
As a process executes, it changes state:
-new: the process is being created
-running: instructions are being executed
-waiting: the process is waiting for some event to occur
-ready: the process is waiting to be assigned to a processor
-terminated: the process has finished execution
[Figure: Process States (state transition diagram)]
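These five states can be captured directly in code. A minimal sketch in C, with one illustrative helper showing the dispatch transition (ready to running); the enum and function names are assumptions, not taken from the report.

    /* The five process states listed above. */
    typedef enum {
        STATE_NEW,         /* being created                         */
        STATE_RUNNING,     /* instructions are being executed       */
        STATE_WAITING,     /* waiting for some event, e.g. I/O      */
        STATE_READY,       /* waiting to be assigned to a processor */
        STATE_TERMINATED   /* has finished execution                */
    } proc_state_t;

    /* Example transition: the dispatcher moves a READY process to RUNNING.
     * Returns 0 on success, -1 if the transition is not legal. */
    int dispatch_transition(proc_state_t *s) {
        if (*s != STATE_READY) return -1;
        *s = STATE_RUNNING;
        return 0;
    }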

Process Control Block (PCB)
The PCB is the representation of a process in the operating system. Information associated with each process:
-Process state
-Program counter
-CPU registers
-CPU scheduling information
-Memory-management information
-Accounting information
-I/O status information
[Figures: Key Elements of the O/S; Process Scheduling - long-term and short-term queues, I/O queues, CPU, process request and exit]
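The PCB fields above map naturally onto a structure. The sketch below is only one possible layout (field names and sizes are assumptions, and real kernels record much more); it reuses the proc_state_t enum from the earlier sketch.

    #include <stdint.h>

    #define NUM_REGS 16   /* assumed register count, purely illustrative */

    /* One possible layout for the per-process information listed above. */
    typedef struct {
        proc_state_t state;           /* process state                 */
        uint64_t     program_counter; /* program counter               */
        uint64_t     regs[NUM_REGS];  /* CPU registers                 */
        int          priority;        /* CPU scheduling information    */
        uint64_t     page_table_base; /* memory-management information */
        uint64_t     cpu_time_used;   /* accounting information        */
        int          open_files[16];  /* I/O status information        */
    } pcb_info_t;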
Memory Management
-Subdividing memory to accommodate multiple processes
-Memory needs to be allocated to ensure a reasonable supply of ready processes to consume available processor time

Swapping
A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.

What is Swapping?
-A long term queue of processes is stored on disk
-Processes are "swapped" in as space becomes available
-As a process completes, it is moved out of main memory
-If none of the processes in memory are ready (i.e. all are I/O blocked):
  -Swap out a blocked process to an intermediate queue
  -Swap in a ready process or a new process
  -But swapping is itself an I/O operation...

Partitioning
-Splitting memory into sections to allocate to processes (including the operating system)
-Fixed-sized partitions, which may not all be of equal size
-A process is fitted into the smallest hole that will take it (best fit; see the sketch below)
-Some memory is wasted
-This leads to variable sized partitions
[Figure: Fixed Partitioning]
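A small sketch of the best-fit rule just described: scan the free holes and pick the smallest one that can still take the process. The hole_t record and the flat hole table are assumptions made for illustration; real allocators keep holes in lists or trees.

    #include <stddef.h>

    typedef struct {
        size_t base;   /* start address of the hole            */
        size_t size;   /* size of the hole in bytes            */
        int    free;   /* non-zero if this slot is a free hole */
    } hole_t;

    /* Best fit: choose the smallest free hole that is still large enough.
     * Returns the hole index, or -1 if no hole can take the process.
     * Any leftover space in the chosen hole is the wasted memory noted
     * above. */
    int best_fit(const hole_t *holes, int n, size_t request) {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (!holes[i].free || holes[i].size < request)
                continue;
            if (best == -1 || holes[i].size < holes[best].size)
                best = i;
        }
        return best;
    }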
Variable Sized Partitions (1)
-Allocate exactly the required memory to each process
-This leads to a hole at the end of memory, too small to use
-Only one small hole, so less waste
-When all processes are blocked, swap out a process and bring in another
-The new process may be smaller than the swapped-out process, leaving another hole

Variable Sized Partitions (2)
-Eventually memory contains lots of holes (fragmentation)
-Solutions:
  -Coalesce: join adjacent holes into one large hole
  -Compaction: from time to time, go through memory and move all holes into one free block (c.f. disk de-fragmentation)
[Figure: Effect of Dynamic Partitioning]

Relocation
-When a program is loaded into memory, the actual (absolute) memory locations are determined
-A process may occupy different partitions, and therefore different absolute memory locations, during its execution (as a result of swapping)
-Compaction will also cause a program to occupy a different partition, and hence different absolute memory locations

Paging
-Partition memory into small, equal, fixed-size chunks and divide each process into chunks of the same size
-The chunks of a process are called pages and the chunks of memory are called frames
[Figure: Logical and Physical Addresses - Paging]
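With fixed-size pages, translating a logical address is a matter of splitting it into a page number and an offset and mapping the page to a frame through the page table. A minimal sketch, assuming 4-Kbyte pages and a single flat page table (both assumptions for illustration; the Pentium II scheme later in the report uses two levels):

    #include <stdint.h>

    #define PAGE_SHIFT 12                    /* assumed 4-Kbyte pages */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)    /* 4096 bytes            */

    /* page_table[page_number] holds the frame number for that page. */
    uint32_t translate(const uint32_t *page_table, uint32_t logical) {
        uint32_t page   = logical >> PAGE_SHIFT;      /* which page?      */
        uint32_t offset = logical & (PAGE_SIZE - 1);  /* where in page?   */
        uint32_t frame  = page_table[page];           /* page -> frame    */
        return (frame << PAGE_SHIFT) | offset;        /* physical address */
    }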


Virtual Memory (Demand Paging)
-A memory management technique developed for multitasking kernels
-Makes application programming easier by hiding the fragmentation of physical memory
-The processes on a system may together require more memory than the system actually has
-We do not need all of a process in memory for it to run; pages can be swapped in as required
-So we can now run processes that are bigger than the total memory available
-Main memory is called real memory; the user/programmer sees a much bigger memory - virtual memory

Thrashing
-If a process does not have "enough" pages, the page-fault rate is very high
-This leads to low CPU utilization, with the operating system spending most of its time swapping to disk
-A thrashing process is busy swapping pages in and out

[Figure: Bonus - Alternate Inverted Page Table Structure]

Translation Lookaside Buffer
-The TLB is a small cache of the most recent virtual-to-physical mappings
-It is used to translate addresses for accesses to virtual memory
[Figure: TLB and Cache Operation (a special cache for page tables)]
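A sketch of a TLB probe, assuming a tiny fully associative table; the entry count and field names are illustrative (the "page name" and "page frame address" fields are described near the end of the transcript). On a hit the frame comes straight from this cache; on a miss the page table walk is performed and the entry is filled in.

    #include <stdint.h>

    #define TLB_ENTRIES 8   /* assumed size; real TLBs hold more entries */

    typedef struct {
        uint32_t page;    /* virtual page number ("page name")            */
        uint32_t frame;   /* physical frame number ("page frame address") */
        int      valid;   /* entry holds a live translation               */
    } tlb_entry_t;

    /* Look up a virtual page number in the TLB.
     * Returns 1 and fills *frame on a hit, 0 on a miss. */
    int tlb_lookup(const tlb_entry_t tlb[TLB_ENTRIES],
                   uint32_t page, uint32_t *frame) {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].page == page) {
                *frame = tlb[i].frame;   /* hit: recent mapping reused */
                return 1;
            }
        }
        return 0;                        /* miss: walk the page table  */
    }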
Segmentation
-Segments do not all have to be of the same length, though there is a maximum segment length
-An address consists of two parts, a segment number and an offset (a sketch of this translation follows the next list)
-Since segments are not of equal size, segmentation is similar to dynamic partitioning

Advantages of Segmentation
-Simplifies the handling of growing data structures
-Allows programs to be altered and recompiled independently, without re-linking and re-loading
-Lends itself to sharing among processes
-Lends itself to protection
-Some systems combine segmentation with paging
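The two-part address described under Segmentation (a segment number plus an offset) is translated by indexing a segment table and checking the offset against the segment's length. A minimal sketch; the segment_t layout and function name are assumptions.

    #include <stdint.h>

    typedef struct {
        uint32_t base;    /* where the segment starts in main memory */
        uint32_t length;  /* segment length (segments vary in size)  */
    } segment_t;

    /* Translate (segment number, offset) to a physical address.
     * Returns 0 and sets *phys on success, -1 if the segment number is
     * out of range or the offset is past the end of the segment (this
     * check is where segmentation lends itself to protection). */
    int seg_translate(const segment_t *table, uint32_t nsegs,
                      uint32_t seg, uint32_t offset, uint32_t *phys) {
        if (seg >= nsegs || offset >= table[seg].length)
            return -1;
        *phys = table[seg].base + offset;
        return 0;
    }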
Pentium II
The Pentium II provides hardware for both segmentation and paging, which can be enabled in four combinations:

Unsegmented, unpaged
-Virtual address = physical address
-Low complexity, high performance

Unsegmented, paged
-Memory is viewed as a paged linear address space
-Protection and management via paging
-e.g. Berkeley UNIX

Segmented, unpaged
-Memory is viewed as a collection of local address spaces
-Protection down to the single-byte level
-The translation table needed is on chip when the segment is in memory

Segmented, paged
-Segmentation is used to define logical memory partitions subject to access control
-Paging manages the allocation of memory within the partitions
-e.g. Unix System V
Pentium II Segmentation
-The "segment" field uses 2 bits to provide 4 levels of protection; typically 0: OS kernel, 1: OS, 2: apps needing special security, 3: general apps
-Each virtual address is a 16-bit segment and a 32-bit offset; 2 bits of the segment are the protection mechanism and 14 bits specify the segment
-Unsegmented virtual memory: 2^32 bytes = 4 Gbytes
-Segmented: 2^46 bytes = 64 terabytes
-The usable space can be larger - it depends on which process is active
-Half of it (8K segments of 4 Gbytes) is global; half is local and distinct for each process
[Figure: Pentium II Address Translation Mechanism]
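As a worked check of the numbers above: 14 bits of segment give 2^14 = 16K segments (8K global plus 8K local), each of up to 2^32 bytes, so 2^46 bytes = 64 terabytes of segmented virtual space. The sketch below pulls the 2 protection bits and the 14-bit segment number out of a 16-bit selector; placing the protection bits in the low 2 bits follows the usual x86 selector layout and is an assumption here.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint16_t selector  = 0x1F3;          /* example selector value     */
        unsigned privilege = selector & 0x3; /* low 2 bits: protection     */
        unsigned segment   = selector >> 2;  /* remaining 14 bits: segment */

        /* 2^14 segments, each up to 2^32 bytes => 2^46 bytes in total. */
        uint64_t total = (1ull << 14) * (1ull << 32);

        printf("privilege level %u, segment number %u\n", privilege, segment);
        printf("segmented space: %llu bytes (64 terabytes)\n",
               (unsigned long long)total);
        return 0;
    }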
Pentium II Protection
-The protection bits give 4 levels of privilege: 0 is the most protected, 3 the least
-The use of the levels is software dependent
-Usually level 3 is for applications, level 1 for the O/S and level 0 for the kernel (level 2 is often not used)
-Level 2 may be used for apps that have internal security, e.g. a database
-Some instructions only work in level 0
Pentium II Paging
-Segmentation may be disabled, in which case a linear address space is used
-Two-level page table lookup: first the page directory, then a page table
-The page directory for the current process is always in memory
-A TLB holding 32 page table entries is used
-Two page sizes are available: 4 Kbytes or 4 Mbytes
[Figures: Pentium Virtual Address Breakdown; Pentium Segment/Paging Operation]
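A sketch of the two-level lookup for 4-Kbyte pages: the 32-bit linear address splits into a 10-bit directory index, a 10-bit page table index and a 12-bit offset (10 + 10 + 12 = 32, matching the 1024-entry directory and 1024-entry page tables described later in the transcript). Entries are simplified to bare pointers and frame numbers; real Pentium entries also carry present, accessed and protection bits.

    #include <stdint.h>

    /* directory: up to 1024 pointers, each to a page table of 1024
     * frame numbers covering one 4-Mbyte page group.                */
    uint32_t two_level_translate(uint32_t **directory, uint32_t linear) {
        uint32_t dir_idx = (linear >> 22) & 0x3FF;  /* top 10 bits        */
        uint32_t tbl_idx = (linear >> 12) & 0x3FF;  /* next 10 bits       */
        uint32_t offset  =  linear        & 0xFFF;  /* low 12 bits (4 KB) */

        uint32_t *page_table = directory[dir_idx];  /* pick the group     */
        uint32_t  frame      = page_table[tbl_idx]; /* page -> frame      */
        return (frame << 12) | offset;              /* physical address   */
    }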
PowerPC Memory Management Hardware
-The 32-bit implementation uses paging with simple segmentation; the 64-bit implementation uses paging with more powerful segmentation
-Alternatively, both can do block address translation: map 4 large blocks of instructions and 4 of memory to bypass paging, e.g. for OS tables or graphics frame buffers
-The 32-bit effective address consists of:
  -a 12-bit byte selector (4-Kbyte pages)
  -a 16-bit page id (64K pages per segment)
  -4 bits indicating one of 16 segment registers
-The segment registers are under OS control
[Figures: PowerPC 32-bit Memory Management Formats; PowerPC 32-bit Address Translation]
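The 32-bit effective address breakdown above (4 + 16 + 12 bits) can be checked with a few shifts. A sketch; the structure and field names are chosen here purely for illustration.

    #include <stdint.h>

    typedef struct {
        unsigned segment_reg;  /* top 4 bits: one of 16 segment registers */
        unsigned page_id;      /* next 16 bits: 64K pages per segment     */
        unsigned byte_offset;  /* low 12 bits: byte within a 4-Kbyte page */
    } ppc_ea_t;

    ppc_ea_t split_effective_address(uint32_t ea) {
        ppc_ea_t f;
        f.segment_reg = (ea >> 28) & 0xF;      /* 4 bits  */
        f.page_id     = (ea >> 12) & 0xFFFF;   /* 16 bits */
        f.byte_offset =  ea        & 0xFFF;    /* 12 bits */
        return f;
    }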
COMPUTER SYSTEM ARCHITECTURE
DE VELA, Alvin
DIMACULANGAN, Andre Maurus
MALIBIRAN, Jan Carlo
THANK YOU!! THE END

Batch Systems
-The batch system was the first rudimentary operating system
-It reduces setup time by batching similar jobs (that is, jobs with common needs are batched together)
-Previously, machines ran only one application at a time
-Automatic job sequencing: control is automatically transferred from one job to another

Key Elements of the O/S
-Kernel
-Memory management
-Input/output
-User interface (GUI, CLI)

Memory Management Requirements
-Relocation
-Protection
-Sharing
-Logical organization
-Physical organization

Swapping (continued)
-Backing store: a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images
-Roll out, roll in: a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so that a higher-priority process can be loaded and executed
[Figure: Schematic View of Swapping]

Logical Address
-A reference to a memory location independent of the current assignment of data to memory
-Generated by the CPU; also referred to as a virtual address
-A translation must be made to the physical address

Physical Address
-The absolute address, i.e. the actual location in main memory

Paging
Hierarchical Paging
-Break up the logical address space into multiple page tables
-A simple technique is a two-level page table

Hashed Page Tables
-Common in address spaces > 32 bits
-The virtual page number is hashed into a page table. This page table contains a chain of elements hashing to the same location.
-Virtual page numbers are compared in this chain searching for a match. If a match is found, the corresponding physical frame is extracted.
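A sketch of the hashed lookup just described: hash the virtual page number into a bucket, then walk the chain comparing page numbers until a match yields the physical frame. The hash (a simple modulo), the node layout and the table size are assumptions for illustration.

    #include <stdint.h>
    #include <stddef.h>

    typedef struct hpt_node {
        uint64_t vpn;             /* virtual page number stored here   */
        uint64_t frame;           /* corresponding physical frame      */
        struct hpt_node *next;    /* chain of entries hashing together */
    } hpt_node_t;

    #define HPT_BUCKETS 1024u     /* assumed table size                */

    /* Returns 1 and sets *frame if the page is found, 0 otherwise
     * (the miss case leads to the page-fault path).                   */
    int hpt_lookup(hpt_node_t *const buckets[HPT_BUCKETS],
                   uint64_t vpn, uint64_t *frame) {
        for (const hpt_node_t *n = buckets[vpn % HPT_BUCKETS]; n; n = n->next) {
            if (n->vpn == vpn) {      /* compare page numbers in chain */
                *frame = n->frame;    /* extract the physical frame    */
                return 1;
            }
        }
        return 0;
    }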
Inverted Page Tables
-One entry for each real page of memory
-Entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page
-Decreases memory needed to store each page table, but increases time needed to search the table when a page reference occurs
TLB Operation
-The Page Name field stores the virtual address of the page for that translation entry
-The Page Frame Address field stores the physical memory location of that page
Pentium II Paging (detail)
-Segmentation may be disabled, in which case a linear address space is used
-Two-level page table lookup:
  -First, the page directory: 1024 entries maximum, splitting the 4-Gbyte linear memory into 1024 page groups of 4 Mbytes
  -Each page table has 1024 entries corresponding to 4-Kbyte pages
-One page directory can be used for all processes, one per process, or a mixture
-The page directory for the current process is always in memory
-A TLB holding 32 page table entries is used
-Two page sizes are available: 4 Kbytes or 4 Mbytes