System Software And Operating System Important Descriptive Questions 4
Q.31. Why is segmented paging important (as compared to a paging system)? What are the different pieces of the virtual address in a segmented paging? (6)
Ans:
Paging can be superimposed on a segment-oriented addressing mechanism to obtain efficient utilization of memory. This is a clever scheme that combines the advantages of both paging and segmentation. In such a scheme each segment has a descriptor with its pages identified, so address resolution uses three fields. First, the segment number identifies the segment and hence its page table. Next, the page number part of the virtual address selects the exact page within that page table. Once the exact page has been identified, the offset is used to obtain the main memory address reference. The final address resolution is exactly the same as in paging. The virtual address in a segmented paging scheme therefore consists of three pieces: a segment number, a page number within the segment, and an offset within the page.
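As an illustration, the sketch below extracts the three pieces from a 32-bit virtual address; the field widths (8-bit segment number, 12-bit page number, 12-bit offset) are assumed for the example only and are not prescribed by the scheme itself.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical split of a 32-bit virtual address for segmented paging:
   bits 31-24 segment number, bits 23-12 page number, bits 11-0 offset. */
#define OFFSET_BITS 12
#define PAGE_BITS   12

int main(void) {
    uint32_t va = 0x12345678;                        /* example virtual address */
    uint32_t offset  =  va & ((1u << OFFSET_BITS) - 1);
    uint32_t page    = (va >> OFFSET_BITS) & ((1u << PAGE_BITS) - 1);
    uint32_t segment =  va >> (OFFSET_BITS + PAGE_BITS);
    /* segment selects the page table, page indexes into that page table,
       and offset is added to the base address of the selected frame. */
    printf("segment=%u page=%u offset=%u\n",
           (unsigned)segment, (unsigned)page, (unsigned)offset);
    return 0;
}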
Q.32. Consider the situation in which the disk read/write head is currently located at track
45 (of tracks 0-255) and moving in the positive direction. Assume that the following track requests have been made in this order: 40, 67, 11, 240, 87. What is the order in which optimised C-SCAN would service these requests and what is the total seek distance? (6)
Ans:
Disk queue: 40, 67, 11, 240, 87; the disk head is currently located at track 45 and moving in the positive direction. The order in which optimised C-SCAN would service these requests is shown below.
Service order (head starts at 45, moving towards higher track numbers): 67, 87, 240, continue to 255, jump to 0, then 11, 40.
Total seek distance=(67-45)+(87-67)+(240-87)+(255-240)+255+(11-0)+(40-11)
=22+20+153+15+255+11+29
=505
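A small sketch (tracks 0-255, head at 45 moving towards higher numbers, as in the question; the return sweep is counted as 255 tracks, exactly as in the calculation above) that reproduces the service order and the total seek distance:

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

int main(void) {
    int req[] = {40, 67, 11, 240, 87};
    int n = 5, head = 45, maxtrack = 255;
    qsort(req, n, sizeof req[0], cmp);          /* 11 40 67 87 240 */
    int total = 0, pos = head;
    /* First sweep: service requests at or above the head, moving upwards. */
    for (int i = 0; i < n; i++)
        if (req[i] >= head) { printf("%d ", req[i]); total += req[i] - pos; pos = req[i]; }
    total += maxtrack - pos;                    /* continue to track 255 */
    total += maxtrack;                          /* jump back to track 0  */
    pos = 0;
    /* Second sweep: service the remaining (lower) requests, moving upwards again. */
    for (int i = 0; i < n; i++)
        if (req[i] < head) { printf("%d ", req[i]); total += req[i] - pos; pos = req[i]; }
    printf("\ntotal seek distance = %d\n", total);   /* prints 505 */
    return 0;
}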
Q.33. Explain any three policies for process scheduling that use resource consumption information. What is response ratio? (8)
Ans:
Three policies for process scheduling are described below in brief:
1. First-come First-served (FCFS) (FIFO)
– Jobs are scheduled in order of arrival
– Non-preemptive
• Problem:
– Average waiting time can be large if small jobs wait behind long ones
– May lead to poor overlap of I/O and CPU and convoy effects
2. Shortest Job First (SJF)
– Choose the job with the shortest next CPU burst
– Provably optimal for minimizing average waiting time
• Problem:
– Impossible to know the length of the next CPU burst
3. Round Robin(RR)
– Often used for timesharing
– Ready queue is treated as a circular queue (FIFO)
– Each process is given a time slice called a quantum
– It is run for the quantum or until it blocks
– RR allocates the CPU uniformly (fairly) across all participants. If the average queue length is n, each participant gets 1/n of the CPU time
– As the time quantum grows, RR becomes FCFS
– Smaller quanta are generally desirable, because they improve response time
• Problem:
– Overhead due to frequent context switches
Highest Response Ratio Next (HRRN) scheduling is a non-preemptive discipline, similar to Shortest Job First (SJF), in which the priority of each job depends on its estimated run time and also on the amount of time it has spent waiting. Jobs gain higher priority the longer they wait, which prevents indefinite postponement: jobs that have spent a long time waiting compete against those that are estimated to have short run times. The response ratio of a job is defined as (waiting time + estimated run time) / estimated run time, and HRRN dispatches the job with the highest response ratio, as sketched below.
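A minimal sketch of this selection rule (the job names, waiting times and estimated run times below are illustrative only):

#include <stdio.h>

struct job { const char *name; double wait; double est_run; };

int main(void) {
    /* Hypothetical jobs in the ready queue: time already spent waiting
       and estimated run time. */
    struct job q[] = { {"J1", 10.0, 5.0}, {"J2", 2.0, 0.5}, {"J3", 30.0, 20.0} };
    int n = 3, best = 0;
    double best_ratio = 0.0;
    for (int i = 0; i < n; i++) {
        /* response ratio = (waiting time + estimated run time) / estimated run time */
        double ratio = (q[i].wait + q[i].est_run) / q[i].est_run;
        printf("%s: response ratio = %.2f\n", q[i].name, ratio);
        if (ratio > best_ratio) { best_ratio = ratio; best = i; }
    }
    printf("dispatch next: %s\n", q[best].name);   /* the short job J2, which has also waited */
    return 0;
}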
Q.34. What is a semaphore? Explain a binary semaphore with the help of an example. (4)
Ans:
A semaphore is a synchronization tool that provides a general-purpose solution to controlling access to critical sections.
A semaphore is an abstract data type (ADT) that defines a nonnegative integer variable which, apart from initialization, is accessed only through two standard operations: wait and signal. The classical definition of wait in pseudocode is:
wait(S){
while(S<=0)
; // do nothing
S--;
}
The classical definition of signal in pseudocode is:
signal(S){
S++;
}
A binary semaphore is one that only takes the values 0 and 1. These semaphores are used to implement mutual exclusion.
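As an example, the sketch below uses a POSIX unnamed semaphore initialised to 1 as a binary semaphore protecting a critical section; the shared counter and the number of threads are arbitrary choices for the illustration (compile with -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;                      /* binary semaphore: takes only the values 0 and 1 */
long counter = 0;                 /* shared variable updated in the critical section */

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);         /* wait(S): blocks while S == 0, then decrements */
        counter++;                /* critical section */
        sem_post(&mutex);         /* signal(S): increments, letting one waiter in  */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);       /* initial value 1 gives mutual exclusion */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 with the semaphore in place */
    sem_destroy(&mutex);
    return 0;
}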
Q.35. Consider the following page reference and reference time strings for a program: Page reference string: 5,4,3,2,1,4,3,5,4,3,2,1,5,…..
Show how pages will be allocated using the FIFO page replacement policy. Also calculate the total number of page faults when allocated page blocks are 3 and 4 respectively. (8)
Ans:
Page reference string is: 5,4,3,2,1,4,3,5,4,3,2,1,5,…..
For three allocated page blocks, we have the following FIFO allocation. Page references marked with '+' cause a page fault; the resulting replacement evicts the earliest loaded page in memory:
Page reference | 5+ | 4+ | 3+ | 2+ | 1+ | 4+ | 3+ | 5+ | 4 | 3 | 2+ | 1+ | 5
Frame 1 | 5 | 5 | 5 | 2 | 2 | 2 | 3 | 3 | 3 | 3 | 3 | 1 | 1
Frame 2 | | 4 | 4 | 4 | 1 | 1 | 1 | 5 | 5 | 5 | 5 | 5 | 5
Frame 3 | | | 3 | 3 | 3 | 4 | 4 | 4 | 4 | 4 | 2 | 2 | 2
For four allocated page blocks, we have the following FIFO allocation. Page references marked with '+' cause a page fault and result in page replacement:
Page reference | 5+ | 4+ | 3+ | 2+ | 1+ | 4 | 3 | 5+ | 4+ | 3+ | 2+ | 1+ | 5+
Frame 1 | 5 | 5 | 5 | 5 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 2
Frame 2 | | 4 | 4 | 4 | 4 | 4 | 4 | 5 | 5 | 5 | 5 | 1 | 1
Frame 3 | | | 3 | 3 | 3 | 3 | 3 | 3 | 4 | 4 | 4 | 4 | 5
Frame 4 | | | | 2 | 2 | 2 | 2 | 2 | 2 | 3 | 3 | 3 | 3
Total number of page faults = 10 when allocated page blocks = 3
Total number of page faults = 11 when allocated page blocks = 4
(The number of faults increases even though more page blocks are allocated; this is an instance of Belady's anomaly for FIFO replacement.)
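A small sketch (the helper name fifo_faults is ours) that simulates FIFO replacement for this reference string and reproduces both fault counts:

#include <stdio.h>

/* Count page faults under FIFO replacement with the given number of frames. */
static int fifo_faults(const int *refs, int n, int frames) {
    int mem[8];                              /* frame contents; assumes frames <= 8 */
    int used = 0, next = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (mem[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            faults++;
            if (used < frames)
                mem[used++] = refs[i];       /* a free frame is still available */
            else {                           /* evict the earliest loaded page  */
                mem[next] = refs[i];
                next = (next + 1) % frames;
            }
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {5, 4, 3, 2, 1, 4, 3, 5, 4, 3, 2, 1, 5};
    printf("3 page blocks: %d faults\n", fifo_faults(refs, 13, 3));   /* 10 */
    printf("4 page blocks: %d faults\n", fifo_faults(refs, 13, 4));   /* 11 */
    return 0;
}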
Q.36. What are the different parameter passing mechanisms to a function? Explain with the help of example? (8)
Ans:
The various parameter-passing mechanisms are:
1. Call by value
2. Call by value-result
3. Call by reference
4. Call by name
In call by value mechanism, the values of actual parameters are passed to the called function. These values are assigned to the corresponding formal parameters. If a function changes the value of a formal parameter, the change is not reflected on the corresponding actual parameter. This is commonly used for built-in functions of the
language. Its main advantage is its simplicity: the compiler can treat a formal parameter as a local variable, which simplifies compilation considerably.
Call by value-result: This mechanism extends the capabilities of the call by value mechanism by copying the values of formal parameters back into the corresponding actual parameters at return. This mechanism inherits the simplicity of the call by value mechanism but incurs higher overheads.
Call by reference: Here the address of an actual parameter is passed to the called function. If the parameter is an expression, its value is computed and stored in a temporary location and the address of the temporary location is passed to the called function. If the parameter is an array element, its address is similarly computed at the time of call. This mechanism is very popular because it has ‘cleaner’ semantics than call by value-result.
Call by name: This parameter transmission mechanism has the same effect as if every occurrence of a formal parameter in the body of the called function is replaced by the name of the corresponding actual parameter. The actual parameter corresponding to a formal parameter can change dynamically during the execution of a function. This makes the call by name mechanism immensely powerful. However the high overheads make it less attractive in practice.
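As a brief example in C (C itself passes parameters by value; call by reference is obtained by passing the address of the actual parameter, which is what the reference mechanism described above amounts to):

#include <stdio.h>

void by_value(int x)      { x = x + 1; }     /* changes only the local copy           */
void by_reference(int *x) { *x = *x + 1; }   /* changes the caller's actual parameter */

int main(void) {
    int a = 10;
    by_value(a);
    printf("after call by value:     a = %d\n", a);   /* still 10 */
    by_reference(&a);
    printf("after call by reference: a = %d\n", a);   /* now 11   */
    return 0;
}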
Q.37. What is meant by inter process communication? Explain the two fundamental models of inter process communication. (8)
Ans:
Inter process Communication: The OS provides the means for cooperating processes to communicate with each other via an inter process communication (IPC) facility.
IPC provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space. IPC is particularly useful in a distributed environment where the communicating processes may reside on different computers connected with a network.
IPC is best implemented by a message-passing system, where communication among user processes is accomplished through the passing of messages. An IPC facility provides at least two operations:
send(message) and receive(message).
Two types of message-passing systems are as follows:
(a) Direct Communication: With direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the send and receive primitives are defined as:
• send(P, message)- Send a message to process P.
• receive(Q, message) - Receive a message from process Q.
A communication link in this scheme has the following properties:
• A link is established automatically between every pair of processes that want to communicate. The processes need to know only each other’s identity to communicate.
• A link is associated with exactly two processes.
• Exactly one link exists between each pair of processes.
(b) Indirect Communication: With indirect communication, the messages are sent to and received from mailboxes, or ports. Each mailbox has a unique identification. In this scheme, a process can communicate with some other process via a number of different
mailboxes. Two processes can communicate only if they share a mailbox. The send and receive primitives are defined as follows:
• send (A, message)- Send a message to mailbox A
• receive (A, message) - Receive a message from mailbox A.
In this scheme, a communication link has the following properties:
• A link is established between a pair of processes only if both members of the pair have a shared mailbox.
• A link may be associated with more than two processes.
• A number of different links may exist between each pair of communicating processes, with each link corresponding to one mailbox.
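As an illustration of the indirect (mailbox) model, the sketch below uses a POSIX message queue as the mailbox; the queue name /demo_mailbox and the message sizes are arbitrary, and the example assumes a system that provides POSIX message queues (on Linux, link with -lrt):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    /* Both communicating processes would open the same named mailbox. */
    mqd_t mq = mq_open("/demo_mailbox", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "hello";
    mq_send(mq, msg, strlen(msg) + 1, 0);       /* send(A, message)    */

    char buf[64];
    mq_receive(mq, buf, sizeof buf, NULL);      /* receive(A, message) */
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mailbox");
    return 0;
}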
Q.38. Differentiate between program translation and program interpretation. (6)
Ans:
The program translation model bridges the execution gap by translating a program written in a programming language, called the source program (SP), into an equivalent program in the machine or assembly language of the computer system, called the target program (TP). Following diagram depicts the program translation model.
[Diagram: Program translation model - Source program → Translator → Target program (machine language program); the translator reports errors, and the target program executes on the data.]
In the program interpretation process, the interpreter reads the source program and stores it in its memory. It bridges the execution gap without generating a machine language program, so in that sense the interpreter is also a language processor. However, it takes one statement of the higher-level language at a time, translates it into machine actions and executes it immediately; translation and execution are carried out for each statement. The absence of a target program implies the absence of an output interface of the interpreter. Thus the language-processing activity of an interpreter cannot be separated from its program execution activities. Hence we can say that the interpreter executes a program written in a programming language; in essence, the execution gap vanishes. The following figure depicts the program interpretation model.
[Diagram: Program interpretation model - the interpreter holds the source program and a program counter (PC) in its memory, reads the source program and the data, executes statements directly and reports errors; no target program is produced.]
Characteristics of the program translation model are:
• A program must be translated before it can be executed.
• The translated program may be saved in a file. The saved program may be executed repeatedly.
• A program must be retranslated following modifications.
Characteristics of the program interpretation model:
• The source program is retained in the source form itself i.e. no target program form exists.
• A statement is analyzed during its interpretation.
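As a minimal illustration of the interpretation model (the three-field statement format below is hypothetical), the loop reads one statement at a time, analyses it and executes it immediately; no target program is ever produced:

#include <stdio.h>

int main(void) {
    /* "Source program": one hypothetical statement per line, e.g. "+ 2 3". */
    const char *source[] = { "+ 2 3", "* 4 5", "- 9 1" };
    for (int pc = 0; pc < 3; pc++) {                      /* fetch the next statement */
        char op; int a, b, result = 0;
        sscanf(source[pc], "%c %d %d", &op, &a, &b);      /* analyse it               */
        switch (op) {                                     /* execute it immediately   */
            case '+': result = a + b; break;
            case '*': result = a * b; break;
            case '-': result = a - b; break;
        }
        printf("%s = %d\n", source[pc], result);
    }
    return 0;
}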
Q.39. Explain the differences between macros and subroutines. (4)
Ans:
Macros Vs Subroutines
(i) Macros are preprocessor directives, i.e., they are processed before the source program is passed to the compiler.
Subroutines are blocks of code that perform a specific task and are passed directly to the compiler.
(ii) In a macro call, the preprocessor replaces the macro template with its macro expansion, in a literal way.
As against this, in a function call the control is passed to a function along with
certain arguments, some calculations are performed in the function, and a useful value is returned from the function.
(iii) Macros increase the program size. For example, if we use a macro a hundred times in a program, the macro expansion goes into our source code at a hundred different places. Functions, in contrast, make the program smaller and compact: even if a function is called from a hundred different places in the program, it takes the same amount of space in the program.
(iv) Macros make the program run faster as they have already been expanded and placed in the source code before compilation. Whereas, passing arguments to a function and getting back the returned values does take time and would therefore slow down the program.
(v) Example of macro:
#include <stdio.h>
#define AREA(x) (3.14*(x)*(x))      // macro definition
int main() {
    float r1 = 6.25, r2 = 2.5, a;
    a = AREA(r1);                   // expanded to (3.14*(r1)*(r1))
    printf("\n Area of circle = %f", a);
    a = AREA(r2);                   // expanded to (3.14*(r2)*(r2))
    printf("\n Area of circle = %f", a);
    return 0;
}
Example of subroutine:
#include <stdio.h>
float AREA(float r) {               // subroutine
    return 3.14 * r * r;
}
int main() {
    float r1 = 6.25, r2 = 2.5, a;
    a = AREA(r1);                   // calls AREA()
    printf("\n Area of circle = %f", a);
    a = AREA(r2);                   // calls AREA()
    printf("\n Area of circle = %f", a);
    return 0;
}
Q.40. Explain the stack storage allocation model. (6)
Ans:
In stack-based allocation, objects are allocated in a last-in, first-out data structure: a stack (e.g., parameters and local variables of recursive subroutines). Characteristics of the stack storage allocation model:
• Grow and shrink on procedure calls and returns.
• Register allocation works best for stack-allocated objects.
• Memory allocation and freeing are partially predictable.
• Restricted but simple and efficient.
• Allocation is hierarchical: memory is freed in the opposite order of allocation. That is, if alloc(A) is followed by alloc(B) and then alloc(C), they must be freed in the order free(C), then free(B), then free(A), as illustrated below.
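A brief illustration of the last point: each call allocates its local variable on the run-time stack, nested calls allocate on top of it, and the locals are freed in the reverse order as the calls return (the recursion depth of 3 is arbitrary):

#include <stdio.h>

void recurse(int depth) {
    int local = depth;                 /* allocated on the stack when the call is made */
    printf("enter depth %d, &local = %p\n", depth, (void *)&local);
    if (depth < 3)
        recurse(depth + 1);            /* the nested call allocates a new frame on top */
    printf("leave depth %d\n", depth); /* local is freed here, opposite to the order of allocation */
}

int main(void) {
    recurse(1);
    return 0;
}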