Modify the solution to Exercise 3.

By switching the CPU among processes, the operating system can make the computer more productive.
In this chapter, we introduce the basic scheduling concepts and discuss CPU scheduling at great length. This is the students' first exposure to the idea of resource allocation and scheduling, so it is important that they understand how it is done.
Gantt charts, simulations, and play-acting are valuable ways to get the ideas across. Show how the ideas are used in other situations, such as waiting in line at a post office, a waiter time-sharing among customers, or even classes being an interleaved round-robin scheduling of professors. A simple project is to write several different CPU schedulers and compare their performance by simulation. The instructor can make the trace tape up in advance to provide the same data for all students.
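Such a scheduler-comparison project can be sketched very compactly. The burst times and the FCFS/SJF pairing below are illustrative choices, not data from the exercises:

```python
# Minimal sketch of the suggested project: run FCFS and nonpreemptive
# SJF on the same trace and compare average waiting times.
# The burst times below are made-up example data.

def avg_waiting_time(bursts):
    """Average waiting time when jobs run in the given order."""
    waited, elapsed = 0, 0
    for b in bursts:
        waited += elapsed   # this job waited for everything before it
        elapsed += b
    return waited / len(bursts)

trace = [24, 3, 3]          # hypothetical CPU bursts, in milliseconds

fcfs = avg_waiting_time(trace)          # run in arrival order
sjf  = avg_waiting_time(sorted(trace))  # run shortest-first

print(fcfs)  # 17.0 -> (0 + 24 + 27) / 3
print(sjf)   # 3.0  -> (0 + 3 + 6) / 3
```

Handing every student the same `trace` list plays the role of the shared trace tape.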
The first line of a job was the word JOB followed by the job number; the job was terminated by an END line carrying the job number again. Round-robin is more difficult, since it requires putting unfinished requests back in the ready queue.

Exercises

I/O-bound programs typically do not use up their entire CPU quantum.

Discuss how the following pairs of scheduling criteria conflict in certain settings:
a. CPU utilization and response time
b. Average turnaround time and maximum waiting time
Answer:
a. CPU utilization is increased if the overhead associated with context switching is minimized. The context-switching overhead can be lowered by performing context switches infrequently; this could, however, increase the response time for processes.
b. Average turnaround time is minimized by executing the shortest tasks first. Such a scheduling policy could, however, starve long-running tasks and thereby increase their waiting time. CPU utilization is maximized by running long-running, CPU-bound tasks without performing context switches.
What are the implications of assigning the following values to the parameters used by the algorithm? Consequently, the scheduling algorithm is almost memory-less, and simply predicts the length of the previous burst for the next quantum of CPU execution. What is the turnaround time of each process for each of the scheduling algorithms in part a?
What is the waiting time of each process for each of the scheduling algorithms in part a?
Which of the schedules in part a results in the minimal average waiting time over all processes? The four Gantt charts are omitted here; the answer is Shortest Job First.

Which of the following scheduling algorithms could result in starvation?
a. First-come, first-served
b. Shortest job first
c. Priority
Answer: Shortest-job-first and priority-based scheduling algorithms could result in starvation.

What would be the effect of putting two pointers to the same process in the ready queue?
What would be the major advantages and disadvantages of this scheme? How would you modify the basic RR algorithm to achieve the same effect without the duplicate pointers? In effect, that process will have increased its priority, since by getting time more often it is receiving preferential treatment.
The advantage is that more important jobs can be given more time; in other words, they receive higher priority in treatment. The consequence, of course, is that shorter jobs will suffer. To achieve the same effect without duplicate pointers, allot a longer amount of time to processes deserving higher priority; in other words, allow two or more possible quanta in the round-robin scheme.
Also assume that the context-switching overhead is 0.1 millisecond. What is the CPU utilization for a round-robin scheduler when:
a. The time quantum is 1 millisecond
b. The time quantum is 10 milliseconds
Answer: Irrespective of which process is scheduled, the scheduler incurs a 0.1-millisecond context-switching overhead for every quantum of work.
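The utilization figures follow directly from the ratio of quantum to quantum-plus-overhead. The 0.1-millisecond overhead is truncated in this copy of the exercise statement, so treat that figure as an assumption:

```python
# CPU utilization under round-robin: each quantum of useful work costs
# one context switch. The 0.1 ms overhead is an assumed figure (the
# number is truncated in this copy of the exercise).

def rr_utilization(quantum_ms, overhead_ms=0.1):
    return quantum_ms / (quantum_ms + overhead_ms)

print(round(rr_utilization(1), 3))   # 0.909 -> about 91% with a 1 ms quantum
print(round(rr_utilization(10), 3))  # 0.99  -> about 99% with a 10 ms quantum
```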
The program could maximize the CPU time allocated to it by not fully utilizing its time quanta. It could use a large fraction of its assigned quantum but relinquish the CPU before the end of the quantum, thereby increasing the priority associated with the process.
Larger priority numbers imply higher priority. All processes are given a priority of 0 when they enter the ready queue.
a. FCFS
b. LIFO
c. Multilevel feedback queues
Answer: FCFS discriminates against short jobs, since any short jobs arriving after long jobs will have a longer waiting time.
RR —treats all jobs equally giving them equal bursts of CPU time so short jobs will be able to leave the system faster since they will finish first. Multilevel feedback queues—work similar to the RR algorithm— they discriminate favorably toward short jobs. What is the time quantum in milliseconds for a thread with priority 10?
With priority 55? Assume a thread with priority 35 has used its entire time quantum without blocking. What new priority will the scheduler assign this thread? The higher the number, the lower the priority.
The scheduler recalculates process priorities once per second using the following function. What will be the new priorities for these three processes when priorities are recalculated? The priorities assigned to the processes are 80, 69, and 65, respectively. The scheduler lowers the relative priority of CPU-bound processes.

Concurrency is generally very hard for students to deal with correctly, and so we have tried to introduce it and its problems through the classic process-coordination problems. An understanding of these problems and their solutions is part of current operating-system theory and development.
We first use semaphores and monitors to introduce synchronization techniques. Next, Java synchronization is introduced to further demonstrate a language-based synchronization technique. We conclude with a discussion of how contemporary operating systems provide features for process synchronization and thread safety.

Exercises

The two processes, P0 and P1, share the following variables. Prove that the algorithm satisfies all three requirements for the critical-section problem.
This algorithm satisfies the three conditions. Mutual exclusion is preserved: if both processes set their flag to true, only one will succeed, namely the process whose turn it is.
The waiting process can enter its critical section only when the other process updates the value of turn. This algorithm does not provide strict alternation: a process only sets turn to the value of the other process upon exiting its critical section.
If this process wishes to enter its critical section again before the other process does, it repeats the process of entering its critical section and setting turn to the other process upon exiting. Assume two processes wish to enter their respective critical sections.
They both set their value of flag to true; however, only the thread whose turn it is can proceed, and the other thread waits. If bounded waiting were not preserved, it would be possible for the waiting process to wait indefinitely while the first process repeatedly entered and exited its critical section.
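The two-flag-plus-turn algorithm discussed above (Peterson's solution) can be sketched as follows. This relies on CPython's GIL for memory ordering, so it is a classroom demonstration rather than a production lock, and the iteration counts are arbitrary:

```python
# Sketch of the two-process algorithm above (Peterson's solution).
# Works as a demonstration under CPython's GIL; real hardware would
# need memory barriers.
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so spinning is cheap

flag = [False, False]
turn = 0
counter = 0                   # shared state guarded by the algorithm

def worker(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True                        # announce intent to enter
        turn = other                          # politely yield the tie-break
        while flag[other] and turn == other:
            pass                              # busy-wait (spin)
        counter += 1                          # critical section
        flag[i] = False                       # exit section

threads = [threading.Thread(target=worker, args=(i, 200)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 400: no increment was lost
```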
The processes share the following variables; the structure of process Pi is shown in Figure 6. This algorithm satisfies the three conditions. Before we show that the three conditions are satisfied, we give a brief explanation of what the algorithm does to ensure mutual exclusion. When a process i requires access to its critical section, it first sets its flag variable to want_in to indicate its desire.
It then performs the following steps. Given the above description, we can reason about how the algorithm satisfies the requirements in the following manner. Notice that a process enters the critical section only if the following requirement is satisfied: no other process has its flag variable set to in_cs. Since a process sets its own flag variable to in_cs before checking the status of other processes, we are guaranteed that no two processes will enter the critical section simultaneously.
When this happens, all processes realize that there are competing processes, enter the next iteration of the outer while(1) loop, and reset their flag variables to want_in. Now the only process that will set its flag variable to in_cs is the process whose index is closest to turn. It is, however, possible that new processes whose index values are even closer to turn might decide to enter the critical section at this point and might therefore be able to simultaneously set their flags to in_cs.
These processes would then realize there are competing processes and might restart the process of entering the critical section. However, at each iteration, the index values of the processes that set their flag variables to in_cs become closer to turn, and eventually we reach the following condition: only one process has its flag set to in_cs. This process then gets to enter the critical section.
The bounded-waiting requirement is satisfied by the fact that when a process k desires to enter the critical section, its flag is no longer set to idle. Therefore, any process whose index does not lie between turn and k cannot enter the critical section. In the meantime, all processes whose indices fall between turn and k and that desire to enter the critical section will indeed enter it, because the system always makes progress and the turn value monotonically becomes closer to k.
Eventually, either turn becomes k or there are no processes whose index values lie between turn and k, and therefore process k gets to enter the critical section. What other kinds of waiting are there in an operating system? Can busy waiting be avoided altogether? Busy waiting means that a process is waiting for a condition to be satisfied in a tight loop, without relinquishing the processor.
Alternatively, a process could wait by relinquishing the processor and blocking on a condition, to be awakened at some appropriate time in the future. Busy waiting can be avoided, but doing so incurs the overhead associated with putting a process to sleep and having to wake it up when the appropriate program state is reached. Spinlocks are not appropriate for single-processor systems because the condition that would break a process out of the spinlock can be obtained only by executing a different process.
If the process does not relinquish the processor, other processes do not get the opportunity to set the program condition required for the first process to make progress. In a multiprocessor system, other processes execute on other processors and can thereby modify the program state in order to release the first process from the spinlock.
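The spinlock behavior discussed above can be illustrated with a minimal sketch. Here `Lock.acquire(blocking=False)` stands in for an atomic test-and-set instruction, and the class name and thread counts are our own:

```python
# Sketch of a spinlock built on an atomic test-and-set primitive; the
# non-blocking Lock.acquire models test_and_set on the lock word.
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so spinning is cheap

class SpinLock:
    def __init__(self):
        self._flag = threading.Lock()   # models the lock word

    def acquire(self):
        while not self._flag.acquire(blocking=False):
            pass                        # busy-wait: burns CPU, never blocks

    def release(self):
        self._flag.release()

counter = 0
lock = SpinLock()

def work(n):
    global counter
    for _ in range(n):
        lock.acquire()
        counter += 1        # critical section
        lock.release()

threads = [threading.Thread(target=work, args=(200,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 800: mutual exclusion held
```

On a single processor this spinning wastes the rest of the quantum, which is exactly the objection raised in the answer above.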
If a user-level program is given the ability to disable interrupts, then it can disable the timer interrupt and prevent context switching from taking place, thereby allowing it to use the processor without letting other processes execute. Disabling interrupts is not sufficient in multiprocessor systems, since it only prevents other processes from executing on the processor on which interrupts were disabled; there are no limitations on what processes can execute on other processors, and therefore the process disabling interrupts cannot guarantee mutually exclusive access to program state.
For example, a server may wish to have only N socket connections at any point in time. As soon as N connections are made, the server will not accept another incoming connection until an existing connection is released.
A semaphore is initialized to the number of allowable open socket connections. When a connection is accepted, the acquire method is called; when a connection is released, the release method is called. A wait operation atomically decrements the value associated with a semaphore.
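The connection-limiting idea just described can be sketched directly with a counting semaphore. The limit N, the handler, and the bookkeeping below are illustrative:

```python
# Limiting concurrent connections with a counting semaphore, as
# described above. N and the handler body are illustrative.
import threading

N = 3                                   # max simultaneous connections
connections = threading.Semaphore(N)    # initialized to the limit
active = 0
lock = threading.Lock()

def handle_connection():
    global active
    connections.acquire()               # wait(): blocks once N are open
    with lock:
        active += 1
        assert active <= N              # never more than N at once
    # ... serve the client here ...
    with lock:
        active -= 1
    connections.release()               # signal(): frees a slot

threads = [threading.Thread(target=handle_connection) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(active)  # 0: every connection was released
```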
If two wait operations are executed on a semaphore when its value is 1 and the two operations are not performed atomically, then it is possible that both operations proceed to decrement the semaphore value, thereby violating mutual exclusion. The solution should exhibit minimal busy waiting.
Here is the pseudocode for implementing the operations:

A barbershop consists of a waiting room with n chairs and a barber room with one barber chair. If there are no customers to be served, the barber goes to sleep. If a customer enters the barbershop and all chairs are occupied, then the customer leaves the shop.
If the barber is busy but chairs are available, then the customer sits in one of the free chairs. If the barber is asleep, the customer wakes up the barber. Write a program to coordinate the barber and the customers.

A semaphore can be implemented using the following monitor code. Each condition variable is represented by a queue of threads waiting for the condition, and each thread has a semaphore associated with its queue entry. When a thread performs a wait operation, it creates a new semaphore (initialized to zero), appends the semaphore to the queue associated with the condition variable, and performs a blocking semaphore decrement operation on the newly created semaphore.
When a thread performs a signal on a condition variable, the first process in the queue is awakened by performing an increment on the corresponding semaphore.

Explain why this is true. Design a new scheme that is suitable for larger portions. These copy operations could be expensive if one were using large extents of memory for each buffer region.
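The queue-of-semaphores scheme for condition variables described above can be sketched as follows; the class and method names are our own:

```python
# Sketch of the scheme above: each condition variable keeps a queue of
# semaphores, one per waiting thread, each initialized to zero.
# signal() wakes the thread at the head of the queue.
import threading
from collections import deque

class QueueCondition:
    def __init__(self, monitor_lock):
        self.monitor_lock = monitor_lock   # the monitor's mutual-exclusion lock
        self.waiters = deque()

    def wait(self):
        sem = threading.Semaphore(0)       # new semaphore, initialized to zero
        self.waiters.append(sem)           # append to the condition's queue
        self.monitor_lock.release()        # leave the monitor while blocked
        sem.acquire()                      # blocking semaphore decrement
        self.monitor_lock.acquire()        # re-enter the monitor when signaled

    def signal(self):
        if self.waiters:                   # wake the first queued thread
            self.waiters.popleft().release()
```

A thread calls `wait()` while holding the monitor lock; `signal()` performs the increment on the head waiter's semaphore, exactly as the text describes.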
The increased cost of the copy operations means that the monitor is held for a longer period of time while a process is in the produce or consume operation, decreasing the overall throughput of the system. This problem can be alleviated by storing pointers to buffer regions within the monitor instead of storing the buffer regions themselves.
This operation should be relatively inexpensive, and therefore the period of time that the monitor is held will be much shorter, thereby increasing the throughput of the monitor. Propose a method for solving the readers-writers problem without causing starvation. Throughput in the readers-writers problem is increased by favoring multiple readers, as opposed to allowing a single writer to exclusively access the shared values.
On the other hand, favoring readers could result in starvation for writers. Starvation in the readers-writers problem can be avoided by keeping timestamps associated with waiting processes.
When a writer is finished with its task, it wakes up the process that has been waiting the longest. When a reader arrives and notices that another reader is accessing the database, it enters the critical section only if there are no waiting writers. These restrictions guarantee fairness.

The signal operations associated with monitors are not persistent in the following sense: if a signal is performed and no thread is waiting, the signal is lost, and if a subsequent wait operation is performed, the corresponding thread simply blocks.
A future wait operation would immediately succeed because of the earlier increment. Suggest how the implementation described in Section 6. If the signal operation were the last statement, then the lock could be transferred from the signalling process to the process that is the recipient of the signal.
Otherwise, the signalling process would have to explicitly release the lock and the recipient of the signal would have to compete with all other processes to obtain the lock to make progress.
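The starvation-free readers-writers policy from the earlier exercise can be sketched with a FIFO-ish "service" lock instead of explicit timestamps: every arrival, reader or writer, must pass through it in order, so a waiting writer holds back later readers. The class name is our own, and Python locks give only approximate arrival order:

```python
# Sketch of starvation-free readers-writers: the service lock forces
# readers and writers through in roughly arrival order, so a waiting
# writer blocks later-arriving readers instead of starving.
import threading

class FairRWLock:
    def __init__(self):
        self.service = threading.Lock()    # arrivals queue here, in order
        self.resource = threading.Lock()   # held by writer or first reader
        self.count_lock = threading.Lock()
        self.readers = 0

    def acquire_read(self):
        with self.service:                 # wait behind earlier arrivals
            with self.count_lock:
                self.readers += 1
                if self.readers == 1:
                    self.resource.acquire()    # first reader locks out writers

    def release_read(self):
        with self.count_lock:
            self.readers -= 1
            if self.readers == 0:
                self.resource.release()        # last reader admits writers

    def acquire_write(self):
        self.service.acquire()             # take our place in line
        self.resource.acquire()            # wait for current readers to drain
        self.service.release()             # let others queue up behind us

    def release_write(self):
        self.resource.release()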
Write a monitor that allocates three identical line printers to these processes, using the priority numbers to decide the order of allocation. Here is the pseudocode:

The sum of all unique numbers associated with all the processes currently accessing the file must be less than n.
Write a monitor to coordinate access to the file. The pseudocode is as follows:

How would the solution to the preceding exercise differ with the two different ways in which signaling can be performed? The solution to the previous exercise is correct under both situations.
However, it could suffer from the problem that a process might be awakened only to find that it is still not possible for it to make forward progress, either because there was not sufficient slack to begin with when it was awakened or because an intervening process gets control, obtains the monitor, and starts accessing the file.
Also, note that the broadcast operation wakes up all of the waiting processes. If the signal also transfers control of the monitor from the current thread to the target, then one could check whether the target would indeed be able to make forward progress and perform the signal only if that were so. Write a monitor using this scheme to implement the readers-writers problem. Explain why, in general, this construct cannot be implemented efficiently.
What restrictions need to be put on the await statement so that it can be implemented efficiently? Restrict the generality of B; see Kessels. A general implementation requires considerable complexity and might require some interaction with the compiler to evaluate the conditions at different points in time. One could restrict the boolean condition to be a disjunction of conjunctions, with each component being a simple check (equality or inequality with respect to a static value) on a program variable.
In that case, the boolean condition could be communicated to the runtime system, which could perform the check every time it needs to determine which thread to awaken. You may assume the existence of a real hardware clock that invokes a procedure tick in your monitor at regular intervals. Here is pseudocode for implementing this:

Solaris, Linux, and Windows use spinlocks as a synchronization mechanism only on multiprocessor systems.
In a multipro- cessor system, other processes execute on other processors and thereby modify the program state in order to release the first process from the spinlock.
Why is this restriction necessary? If the transaction needs to be aborted, then the values of the updated data values need to be rolled back to the old values. This requires the old values of the data entries to be logged before the updates are performed.
A schedule refers to the execution sequence of the operations for one or more transactions. A serial schedule is the situation where each transaction of a schedule is performed atomically. If a schedule consists of two different transactions where consecutive operations from the different transactions access the same data and at least one of the operations is a write, then we have what is known as a conflict. If a schedule can be transformed into a serial schedule by a series of swaps on nonconflicting operations, we say that such a schedule is conflict serializable.
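Conflict serializability as just defined can be checked mechanically: build a precedence graph with an edge from Ti to Tj whenever an operation of Ti conflicts with a later operation of Tj, then test the graph for cycles. The `(transaction, operation, item)` schedule encoding below is our own:

```python
# Check conflict serializability: add an edge Ti -> Tj whenever an
# operation of Ti conflicts with a LATER operation of Tj (same data
# item, at least one write), then look for a cycle in the graph.

def conflict_serializable(schedule):
    edges = set()
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and 'w' in (op1, op2):
                edges.add((t1, t2))     # t1 must precede t2
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    def cyclic(node, stack):            # depth-first cycle detection
        if node in stack:
            return True
        return any(cyclic(n, stack | {node}) for n in graph.get(node, []))
    return not any(cyclic(n, set()) for n in graph)

serial_ok = [('T1','r','A'), ('T1','w','A'), ('T2','r','A'), ('T2','w','A')]
tangled   = [('T1','r','A'), ('T2','w','A'), ('T1','w','A')]  # T1->T2, T2->T1

print(conflict_serializable(serial_ok))  # True
print(conflict_serializable(tangled))    # False
```

An acyclic precedence graph means the swaps of nonconflicting operations described above can reach a serial schedule.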
The two-phase locking protocol ensures conflict serializability because exclusive locks (which are used for write operations) must be acquired serially, without releasing any locks during the acquire (growing) phase. Other transactions that wish to acquire the same locks must wait for the first transaction to begin releasing locks.
By requiring that all locks must first be acquired before releasing any locks, we are ensuring that potential conflicts are avoided. How does the system process transactions that were issued after the rolled-back transaction but that have timestamps smaller than the new timestamp of the rolled-back transaction?
If the transactions that were issued after the rolled-back transaction had accessed variables that were updated by the rolled-back transaction, then these transactions would have to be rolled back as well. If they have not performed such operations (that is, there is no overlap with the rolled-back transaction in terms of the variables accessed), then these transactions are free to commit when appropriate. Processes may ask for a number of these resources and, once finished, will return them.
As an example, many commercial software packages provide a given number of licenses, indicating the number of applications that may run concurrently. When the application is started, the license count is decremented. When the application is terminated, the license count is incremented. If all licenses are in use, requests to start the application are denied.
Such requests will be granted only when an existing license holder terminates the application and a license is returned. The maximum number of resources and the number of available resources are declared as follows. Do the following:
a. Identify the data involved in the race condition.
b. Identify the location or locations in the code where the race condition occurs.
c. Using a semaphore, fix the race condition.
Answer: The data involved is the variable available_resources. The code that decrements available_resources and the code that increments available_resources are the statements that could be involved in race conditions. Use a semaphore to represent the available_resources variable, and replace the increment and decrement operations with semaphore increment and semaphore decrement operations. This leads to awkward programming for a process that wishes to obtain a number of resources at once; instead, allow a process to invoke decrease_count by simply calling decrease_count(count). The process will return from this function call only when sufficient resources are available.
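The fix just described can be sketched directly. The function names mirror the exercise's decrease_count/increase_count; the resource limit and the guard lock (which serializes multi-unit requests) are our own additions:

```python
# Sketch of the fix: the count of available resources lives inside a
# counting semaphore, so each decrement/increment is atomic, and
# decrease_count(count) blocks until 'count' resources are available.
# MAX_RESOURCES is an illustrative value.
import threading

MAX_RESOURCES = 5
available = threading.Semaphore(MAX_RESOURCES)
guard = threading.Lock()       # serializes multi-unit requests so two
                               # partial allocations cannot interleave

def decrease_count(count):
    with guard:                    # one multi-unit request at a time
        for _ in range(count):
            available.acquire()    # atomic decrement; blocks at zero

def increase_count(count):
    for _ in range(count):
        available.release()        # atomic increment

decrease_count(3)   # grab three resources (blocks if unavailable)
increase_count(3)   # return them
```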
It is important that the students learn the three basic approaches to deadlock: prevention, avoidance, and detection with recovery. It can be useful to pose a deadlock problem in human terms and ask why human systems never deadlock. Can the students transfer this understanding of human systems to computer systems?
Projects can involve simulation: ask the students to allocate the resources to prevent deadlock. The survey paper by Coffman, Elphick, and Shoshani is good supplemental reading, but you might also consider having the students go back to the papers by Havender, Habermann, and Holt.
The last two were published in CACM and so should be readily available.

Exercises

Show that the four necessary conditions for deadlock indeed hold in this example. State a simple rule for avoiding deadlocks in this system. The four necessary conditions for a deadlock are (1) mutual exclusion, (2) hold-and-wait, (3) no preemption, and (4) circular wait. The mutual-exclusion condition holds, as only one car can occupy a space in the roadway. A car cannot be removed (i.e., preempted) from its position in the roadway.
Lastly, there is indeed a circular wait, as each car is waiting for a subsequent car to advance; the circular-wait condition is also easily observed from the graphic. A simple rule that would avoid this traffic deadlock is that a car may not advance into an intersection if it is clear that the car will not be able to immediately clear the intersection. Discuss how the four necessary conditions for deadlock hold in this setting. Discuss how deadlocks could be avoided by eliminating any one of the four conditions.
Deadlock is possible because the four necessary conditions hold in the following manner: Deadlocks could be avoided by overcoming the conditions in the following manner:

Such synchronization objects may include mutexes, semaphores, condition variables, etc.
We can prevent the deadlock by adding a sixth object F. This solution is known as containment. Compare this scheme with the circular-wait scheme of Section 7. This is probably not a good solution because it yields too large a scope; it is better to define a locking policy with as narrow a scope as possible.
a. Runtime overheads
b. System throughput
Answer: A deadlock-avoidance scheme tends to increase the runtime overheads due to the cost of keeping track of the current resource allocation.
However, a deadlock-avoidance scheme allows for more concurrent use of resources than schemes that statically prevent the formation of deadlock. In that sense, a deadlock-avoidance scheme could increase system throughput. Resources break or are replaced, new processes come and go, and new resources are bought and added to the system.
a. Increase Available (new resources added).
b. Decrease Available (resource permanently removed from system).
c. Increase Max for one process (the process needs more resources than allowed; it may want more).
d. Decrease Max for one process (the process decides it does not need that many resources).
e. Increase the number of processes.
f. Decrease the number of processes.
Answer:
a. Increase Available (new resources added): This could safely be changed without any problems.
c. Increase Max for one process: This could have an effect on the system and introduce the possibility of deadlock.
d. Decrease Max for one process: This could safely be changed without any problems.
e. Increase the number of processes: This could be allowed, assuming that resources were allocated to the new process(es) such that the system does not enter an unsafe state.
f. Decrease the number of processes: This could safely be changed without any problems.
Since there are three processes and four resources, one process must be able to obtain two resources. This process requires no more resources and, therefore, will return its resources when done. Resources can be requested and released by processes only one at a time. Show that the system is deadlock-free if the following two conditions hold:
a. The maximum need of each process is between 1 and m resources.
b. The sum of all maximum needs is less than m + n.
Using the terminology of Section 7. Hence the system cannot be in a deadlock state.
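The middle of this proof is truncated in this copy; the standard pigeonhole argument it follows can be written out as:

```latex
% n processes, m resources, each Max_i >= 1, and \sum_i Max_i < m + n.
\begin{align*}
&\text{Suppose the system is deadlocked. Every process } P_i \text{ holds}\\
&Alloc_i \text{ resources and waits for one more, so } Need_i \geq 1,\\
&\text{where } Need_i = Max_i - Alloc_i. \text{ Since requests are made one}\\
&\text{unit at a time, a free unit would be granted to some waiter, so}\\
&\textstyle\sum_i Alloc_i = m. \text{ Then}\\
&\textstyle\sum_i Max_i = \sum_i Alloc_i + \sum_i Need_i \geq m + n,\\
&\text{contradicting } \textstyle\sum_i Max_i < m + n.
\end{align*}
```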
Assume that requests for chopsticks are made one at a time. The following rule prevents deadlock: Assume now that each philosopher requires three chopsticks to eat and that resource re- quests are still issued separately.
Describe some simple rules for determining whether a particular request could be satisfied without causing deadlock, given the current allocation of chopsticks to philosophers. When a philosopher makes a request for a chopstick, allocate the request if:

Need
      A  B  C
P0    7  4  3
P1    0  2  0
P2    6  0  0
P3    0  1  1
P4    4  3  1

If the value of Available is (2 3 0), we can see that a request from process P0 for (0 2 0) cannot be satisfied, as this lowers Available to (2 1 0), and no process could then safely finish.
What is the content of the matrix Need? Is the system in a safe state? If a request from process P1 arrives for (0, 4, 2, 0), can the request be granted immediately? The values of Need for processes P0 through P4 are, respectively, (0, 0, 0, 0), (0, 7, 5, 0), (1, 0, 0, 2), (0, 0, 2, 0), and (0, 6, 4, 2).
With Available being equal to (1, 5, 2, 0), either process P0 or P3 could run. Once process P3 runs, it releases its resources, which allows all other existing processes to run. Yes, the request can be granted immediately; this results in the value of Available being (1, 1, 0, 0). How could this assumption be violated? The optimistic assumption is that there will not be any form of circular wait in terms of resources allocated and processes making requests for them. This assumption could be violated if a circular wait does indeed occur in practice.
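The safety check used throughout these answers can be sketched as follows. The Allocation matrix for the exercise is not reproduced in this copy, so the matrices below are made-up example data, not the exercise's:

```python
# Banker's-algorithm safety check: repeatedly find a process whose Need
# can be met from Work, pretend it finishes, and reclaim its Allocation.
# The matrices below are illustrative example data.

def is_safe(available, allocation, need):
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # process i can finish; reclaim its resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]

print(is_safe(available, allocation, need))  # True
```

A request is granted only if pretending to grant it still leaves `is_safe` returning True.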
Create n threads that request and release resources from the banker. A banker will only grant the request if it leaves the system in a safe state. Ensure that access to shared data is thread-safe by employing Java thread synchronization as discussed in Section 7.
Farmers in the two villages use this bridge to deliver their produce to the neighboring town. The bridge can become deadlocked if both a northbound and a southbound farmer get on the bridge at the same time (Vermont farmers are stubborn and are unable to back up).
Using semaphores, design an algorithm that prevents deadlock. Initially, do not be concerned about starvation (the situation in which northbound farmers prevent southbound farmers from using the bridge, or vice versa). We want the student to learn about all of them.

Exercises

Internal fragmentation is the area in a region or a page that is not used by the job occupying that region or page.
This space is unavailable for use by the system until that job is finished and the page or region is released. A compiler is used to generate the object code for individual modules, and a linkage editor is used to combine multiple object modules into a single program binary.
How does the linkage editor change the binding of instructions and data to memory addresses? What information needs to be passed from the compiler to the linkage editor to facilitate the memory binding tasks of the linkage editor?
The linkage editor has to replace unresolved symbolic addresses with the actual addresses associated with the variables in the final program binary. To perform this, the modules should keep track of instructions that refer to unresolved symbols. During linking, each module is assigned a sequence of addresses in the overall program binary; once this has been performed, unresolved references to symbols exported by a module can be patched in the other modules, since every module contains the list of instructions that need to be patched.
Which algorithm makes the most efficient use of memory? Data allocated in the heap segments of programs is an example of such allocated memory.
What is required to support dynamic memory allocation in the following schemes:

Pure segmentation also suffers from external fragmentation, as a segment of a process is laid out contiguously in physical memory, and fragmentation occurs as segments of dead processes are replaced by segments of new processes. Segmentation, however, enables processes to share code; for instance, two different processes could share a code segment while having distinct data segments.
Pure paging does not suffer from external fragmentation but instead suffers from internal fragmentation. Processes are allocated memory at page granularity, and if a page is not completely utilized, the result is internal fragmentation and a corresponding wastage of space. Paging also enables processes to share code at the granularity of pages. How could the operating system allow access to other memory? Why should it or should it not?
An address on a paging system is a logical page number and an offset. The physical page is found by searching a table based on the logical page number to produce a physical page number. Because the operating system controls the contents of this table, it can limit a process to accessing only those physical pages allocated to the process.
There is no way for a process to refer to a page it does not own because the page will not be in the page table. This is useful when two or more processes need to exchange data—they just read and write to the same physical addresses which may be at varying logical addresses. This makes for very efficient interprocess communication.
Paging requires more memory overhead to maintain the translation structures. Segmentation requires just two registers per segment: one holding the base of the segment and one holding its length. Paging, on the other hand, requires one entry per page, and this entry provides the physical address at which the page is located. Code is stored starting with a small, fixed virtual address, such as 0.
The code segment is followed by the data segment, which is used for storing the program variables. When the program starts executing, the stack is allocated at the other end of the virtual address space and is allowed to grow toward lower virtual addresses. What is the significance of the above structure for the following schemes: This could be much higher than the actual memory requirements of the process. When a program needs to extend the stack or the heap, it needs to allocate a new page, but the corresponding page-table entry is preallocated.
If a memory reference takes nanoseconds, how long does a paged memory reference take? If we add associative registers, and 75 percent of all page-table references are found in the associative registers, what is the effective memory reference time?
Assume that finding a page-table entry in the associative registers takes zero time, if the entry is there. Segmentation and paging are often combined in order to improve upon each other. Segmented paging is helpful when the page table becomes very large: a large contiguous section of the page table that is unused can be collapsed into a single segment-table entry with a page-table address of zero.
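The effective-access-time question above can be checked numerically. The memory access time is elided in the exercise text, so the 200 ns figure below is an assumption chosen for illustration; the 75 percent hit ratio and the free associative-register lookup come from the exercise itself.

```python
def effective_access_time(mem_ns, hit_ratio):
    # Hit in the associative registers: one memory access (the lookup is free).
    # Miss: one access to fetch the page-table entry, plus one for the data.
    return hit_ratio * mem_ns + (1 - hit_ratio) * 2 * mem_ns

mem = 200  # assumed memory access time in ns (the figure is elided above)
print(2 * mem)                           # a paged reference with no TLB: 400 ns
print(effective_access_time(mem, 0.75))  # 250.0 ns with a 75% hit ratio
```

With these numbers, the associative registers cut the effective reference time from 400 ns to 250 ns.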
Paged segmentation handles the case of having very long segments that require a lot of time for allocation. By paging the segments, we reduce wasted memory due to external fragmentation as well as simplify the allocation.
Since segmentation is based on a logical division of memory rather than a physical one, segments of any size can be shared with only one entry in the segment table of each user. With paging there must be a common entry in the page tables for each page that is shared. Segment Base Length 0 1 14 2 90 3 4 96 What are the physical addresses for the following logical addresses? In certain situations the page tables could become large enough that paging the page tables would simplify memory allocation, by ensuring that everything is allocated as fixed-size pages rather than variable-sized chunks, and would also enable swapping out portions of the page table that are not currently used.
How many memory operations are performed when a user program executes a memory load operation? When a memory load operation is performed, three memory operations might be performed.
One is to translate the position where the page-table entry for the page can be found, since page tables themselves are paged. The second access is to read the page-table entry itself, while the third access is the actual memory load operation.
Under what circumstances is one scheme preferable over the other? When a program occupies only a small portion of its large virtual address space, a hashed page table might be preferred due to its smaller size.
The disadvantage of hashed page tables, however, is the problem that arises from conflicts when multiple pages map onto the same hashed page-table entry. If many pages map to the same entry, then traversing the list corresponding to that hash-table entry can incur a significant overhead; such overheads are minimal in the segmented paging scheme, where each page-table entry maintains information regarding only one page.
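The chaining behavior described above can be sketched as follows. Everything here is illustrative: the bucket count is kept artificially small so a collision actually occurs, and the page-to-frame mappings are made up.

```python
NUM_BUCKETS = 8  # deliberately tiny so a collision is visible

# Each bucket chains (virtual page, frame) pairs, as in a hashed page table.
buckets = [[] for _ in range(NUM_BUCKETS)]

def map_page(vpage, frame):
    buckets[vpage % NUM_BUCKETS].append((vpage, frame))

def lookup(vpage):
    # Conflict resolution: walk the chain hanging off this bucket.
    for v, f in buckets[vpage % NUM_BUCKETS]:
        if v == vpage:
            return f
    raise KeyError(vpage)

map_page(3, 10)
map_page(11, 20)   # 11 % 8 == 3, so this collides with page 3 and is chained
print(lookup(11))  # 20, found only after traversing the chain
```

The longer the chain in a bucket, the more entries `lookup` must inspect, which is exactly the overhead the answer warns about.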
Describe all the steps that the Intel takes in translating a logical address into a physical address. What are the advantages to the operating system of hardware that provides such complicated memory-translation hardware? Are there any disadvantages to this address-translation system? If so, what are they? If not, why is it not used by every manufacturer? The selector is an index into the segment descriptor table. The segment descriptor result plus the original offset is used to produce a linear address with a dir, page, and offset.
The dir is an index into a page directory. The entry from the page directory selects the page table, and the page field is an index into the page table. The entry from the page table, plus the offset, is the physical address. Such a page translation mechanism offers the flexibility to allow most operating systems to implement their memory scheme in hardware, instead of having to implement some parts in hardware and some in software.
Because it can be done in hardware, it is more efficient and the kernel is simpler.
Address translation can take longer due to the multiple table lookups it can invoke. Caches help, but there will still be cache misses. The objectives of this chapter are to explain these concepts and show how paging works.
A simulation is probably the easiest way to allow the students to program several of the page-replacement algorithms and see how they really work. If an interactive graphics display can be used to display the simulation as it works, the students may be better able to understand how paging works.
Assume that the page boundary is at and the move instruction is moving values from a source region of Assume that a page fault occurs while accessing location By this time the locations of For every memory-access operation, the page table needs to be consulted to check whether the corresponding page is resident and whether the program has read or write privileges for accessing it.
These checks would have to be performed in hardware. A TLB could serve as a cache and improve the performance of the lookup operation. What is the hardware support required to implement this feature?
When two processes are accessing the same set of program values (for instance, the code segment of the source binary), then it is useful to map the corresponding pages into the virtual address spaces of the two programs in a write-protected manner.
When a write does indeed take place, then a copy must be made to allow the two programs to individually access the different copies without interfering with each other.
The hardware support required to implement this is simply the following: if a write occurs to a write-protected page, a trap is raised and the operating system can resolve the issue. The computer has bytes of physical memory. The virtual memory is implemented by paging, and the page size is bytes. A user process generates the virtual address Explain how the system establishes the corresponding physical location. Distinguish between software and hardware operations.
The virtual address in binary form is Since the page size is , the page table size is The page table is held in registers. It takes 8 milliseconds to service a page fault if an empty frame is available or the replaced page is not modified, and 20 milliseconds if the replaced page is modified. Memory access time is nanoseconds. Assume that the page to be replaced is modified 70 percent of the time. What is the maximum acceptable page-fault rate for an effective access time of no more than nanoseconds?
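The arithmetic behind this question can be worked through numerically. The memory access time and the target effective access time are elided in the text above, so the 100 ns and 200 ns figures below are assumptions made for illustration; the 8 ms, 20 ms, and 70 percent figures come from the exercise.

```python
mem_ns = 100                 # assumed memory access time (elided above)
target_ns = 200              # assumed target effective access time (elided above)
fault_clean_ns = 8_000_000   # 8 ms: empty frame available or page unmodified
fault_dirty_ns = 20_000_000  # 20 ms: the replaced page is modified

# 70 percent of replaced pages are modified; integer math keeps this exact.
fault_ns = (70 * fault_dirty_ns + 30 * fault_clean_ns) // 100
print(fault_ns)  # 16400000 ns average fault service time

# EAT = (1 - p) * mem_ns + p * fault_ns <= target_ns; solve for p:
p_max = (target_ns - mem_ns) / (fault_ns - mem_ns)
print(p_max)  # about 6.1e-06, i.e. roughly one fault per 164,000 accesses
```

The striking point is how tiny the acceptable fault rate is: because a fault costs milliseconds while a memory access costs nanoseconds, even a handful of faults per million accesses blows the budget.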
What can you say about the system if you notice the following behavior? If the pointer is moving quickly, then the program is accessing a large number of pages simultaneously: it is most likely that, during the period between the point at which the bit corresponding to a page is cleared and the point at which it is checked again, the page is accessed again and therefore cannot be replaced. This results in more scanning of the pages before a victim page is found. If the pointer is moving slowly, then the virtual memory system is finding candidate pages for replacement extremely efficiently, indicating that many of the resident pages are not being accessed.
Also discuss under what circumstances the opposite holds. Consider the following sequence of memory accesses in a system that can hold four pages in memory: when page 5 is accessed, the least frequently used page-replacement algorithm would replace a page other than 1, and therefore would not incur a page fault when page 1 is accessed again. Consider the sequence in a system that holds four pages in memory: the most frequently used page-replacement algorithm evicts page 4 while fetching page 5, while the LRU algorithm evicts page 1.
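The reference strings for this exercise are elided above, but the claimed behavior is easy to reproduce with a made-up string. The simulator and the seven-element reference string below are illustrative, not the exercise's own data; page 1 is referenced frequently early on, so LFU keeps it while LRU evicts it.

```python
def simulate(refs, frames, evict):
    """Count page faults; `evict` picks a victim from the resident pages."""
    memory, faults = [], 0
    freq, last_use = {}, {}
    for t, page in enumerate(refs):
        freq[page] = freq.get(page, 0) + 1
        last_use[page] = t
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            memory.remove(evict(memory, freq, last_use))
        memory.append(page)
    return faults

lfu = lambda mem, freq, last: min(mem, key=lambda p: freq[p])
lru = lambda mem, freq, last: min(mem, key=lambda p: last[p])

refs = [1, 1, 2, 3, 4, 5, 1]   # page 1 is hot, then touched again after page 5
print(simulate(refs, 4, lfu))  # 5 faults: LFU keeps page 1 resident
print(simulate(refs, 4, lru))  # 6 faults: LRU evicts page 1, then refaults it
```

When page 5 arrives, LFU evicts page 2 (used once) while LRU evicts page 1 (least recently used), so the final access to page 1 faults only under LRU.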
This is unlikely to happen much in practice. Assume that the free-frame pool is managed using the least recently used replacement policy. Answer the following questions: If a page fault occurs and if the page does not exist in the free- frame pool, how is free space generated for the newly requested page? If a page fault occurs and if the page exists in the free-frame pool, how is the resident page set and the free-frame pool managed to make space for the requested page?
What does the system degenerate to if the number of resident pages is set to one? What does the system degenerate to if the number of pages in the free-frame pool is zero? The accessed page is then moved to the resident set. Which of the following will (probably) improve CPU utilization?
a. Install a faster CPU.
b. Install a bigger paging disk.
c. Increase the degree of multiprogramming.
d. Decrease the degree of multiprogramming.
e. Install more main memory.
f. Install a faster hard disk or multiple controllers with multiple hard disks.
g. Add prepaging to the page fetch algorithms.
h. Increase the page size.
The system obviously is spending most of its time paging, indicating over-allocation of memory. If the level of multiprogramming is reduced, resident processes would page fault less frequently and CPU utilization would improve. Another way to improve performance would be to get more physical memory or a faster paging drum.
a. Get a faster CPU: No.
b. Get a bigger paging drum: No.
c. Increase the degree of multiprogramming: No.
d. Decrease the degree of multiprogramming: Yes.
e. Install more main memory: Likely to improve CPU utilization, as more pages can remain resident and not require paging to or from the disks.
f. Install a faster hard disk, or multiple controllers with multiple hard disks: Also an improvement; as the disk bottleneck is removed by faster response and more throughput to the disks, the CPU will get more data more quickly.
g. Add prepaging to the page fetch algorithms: Again, the CPU will get more data faster, so it will be more in use. This is only the case if the paging action is amenable to prefetching (i.e., some of the accesses are sequential).
h. Increase the page size: Increasing the page size will result in fewer page faults if data is being accessed sequentially. If data access is more or less random, more paging action could ensue because fewer pages can be kept in memory and more data is transferred per page fault. So this change is as likely to decrease utilization as it is to increase it.
So this change is as likely to decrease utilization as it is to increase it. What is the sequence of page faults incurred when all of the pages of a program are currently non-resident and the first instruction of the program is an indirect memory load operation? What happens when the operating system is using a per-process frame allocation technique and only two pages are allocated to this process?
The following page faults take place: one to fetch the page containing the instruction, one to fetch the page containing the pointer, and one to fetch the page containing the target of the indirect load. With only two frames allocated, the operating system will generate three page faults, with the third page replacing the page containing the instruction. If the instruction needs to be fetched again to repeat the trapped instruction, then the sequence of page faults will continue indefinitely. If the instruction is cached in a register, then it will be able to execute completely after the third page fault.
What would you gain and what would you lose by using this policy rather than LRU or second-chance replacement? Such an algorithm could be implemented with the use of a reference bit. After every examination, the bit is set to zero; it is set back to one if the page is referenced. The algorithm would then select an arbitrary page for replacement from the set of pages unused since the last examination. The advantage of this algorithm is its simplicity: nothing other than a reference bit need be maintained. The disadvantage of this algorithm is that it ignores locality by using only a short time frame to decide whether to evict a page. We can do this minimization by distributing heavily used pages evenly over all of memory, rather than having them compete for a small number of page frames.
We can associate with each page frame a counter of the number of pages that are associated with that frame. Then, to replace a page, we search for the page frame with the smallest counter.
Define a page-replacement algorithm using this basic idea. Specifically address the problems of (1) what the initial value of the counters is, (2) when counters are increased, (3) when counters are decreased, and (4) how the page to be replaced is selected. How many page faults occur for your algorithm for the following reference string, with four page frames? What is the minimum number of page faults for an optimal page-replacement strategy for the reference string in part b with four page frames?
Define a page-replacement algorithm addressing these problems:
1. Initial value of the counters: 0.
2. Counters are increased: whenever a new page is associated with that frame.
3. Counters are decreased: whenever one of the pages associated with that frame is no longer required.
4. How the page to be replaced is selected: find a frame with the smallest counter; use FIFO for breaking ties.
Addresses are translated through a page table in main memory, with an access time of 1 microsecond per memory access.
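The counter-per-frame scheme defined above can be sketched as follows. This is a simplified, illustrative version: the reference string is made up (the exercise's own string is elided), and counter decrements are omitted, since in this single-process sketch pages leave a frame only when they are replaced.

```python
from collections import deque

def counter_replacement(refs, num_frames):
    """One counter per frame, bumped when a new page is loaded into the frame;
    the victim is the frame with the smallest counter, FIFO breaking ties."""
    frames = [None] * num_frames     # frame -> resident page
    counters = [0] * num_frames      # every counter starts at 0
    fifo = deque(range(num_frames))  # tie-break order
    faults = 0
    for page in refs:
        if page in frames:
            continue
        faults += 1
        victim = min(fifo, key=lambda f: counters[f])  # smallest counter wins
        fifo.remove(victim)
        fifo.append(victim)
        frames[victim] = page
        counters[victim] += 1        # a new page is now associated with it
    return faults

print(counter_replacement([1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7], 4))  # 9 faults
```

Note how the scheme differs from LRU: a frame that has hosted many pages is protected even if its current page is cold, which is exactly the kind of behavior the exercise asks students to evaluate.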
Thus, each memory reference through the page table takes two accesses. To improve this time, we have added an associative memory that reduces access time to one memory reference if the page-table entry is in the associative memory.
Assume that 80 percent of the accesses are in the associative memory and that, of the remaining, 10 percent (or 2 percent of the total) cause page faults. What is the effective memory access time? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem? Thrashing is caused by under-allocation of the minimum number of pages required by a process, forcing it to page-fault continuously.
The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming. It can be eliminated by reducing the level of multiprogramming. What if we maintained two working sets, one representing data and another representing code? As an example, the code being accessed by a process may retain the same working set for a long period of time. However, the data the code accesses may change, thus reflecting a change in the working set for data accesses. This could result in a large number of page faults.
However, once a process is scheduled, it is unlikely to generate page faults, since its resident set has been overestimated. Using Figure 9. , perform coalescing whenever possible. The following allocation is made by the buddy system: the byte request is assigned a byte segment.
The byte request is assigned a byte segment, the 60-byte request is assigned a 64-byte segment, and the byte request is assigned a byte segment. After the allocation, the following segment sizes are available: After the releases of memory, the only segment in use would be a byte segment containing bytes of data. The following segments will be free: What could be done to address this scalability issue?
This had long been a problem with the slab allocator: poor scalability with multiple CPUs. The issue arises from having to lock the global cache when it is being accessed, which has the effect of serializing cache accesses on multiprocessor systems. Solaris has addressed this by introducing a per-CPU cache, rather than a single global cache.
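Returning to the buddy-system exercise above, the splitting and coalescing behavior it describes can be sketched as a toy allocator. The 256-byte arena and the 60-byte request are illustrative stand-ins for the exercise's elided figures.

```python
class Buddy:
    """Toy buddy allocator over one power-of-two arena (illustrative only)."""
    def __init__(self, total):
        self.total = total
        self.free = {total: [0]}  # block size -> list of free offsets

    def alloc(self, request):
        size = 1
        while size < request:      # round the request up to a power of two
            size *= 2
        s = size                   # find the smallest free block that fits
        while s <= self.total and not self.free.get(s):
            s *= 2
        if s > self.total:
            raise MemoryError("no block large enough")
        offset = self.free[s].pop()
        while s > size:            # split, leaving one buddy free at each level
            s //= 2
            self.free.setdefault(s, []).append(offset + s)
        return offset, size

    def release(self, offset, size):
        while size < self.total:   # coalesce with the buddy whenever possible
            buddy = offset ^ size
            if buddy in self.free.get(size, []):
                self.free[size].remove(buddy)
                offset = min(offset, buddy)
                size *= 2
            else:
                break
        self.free.setdefault(size, []).append(offset)

b = Buddy(256)
off, sz = b.alloc(60)  # a 60-byte request gets a 64-byte segment
print(sz)              # 64
b.release(off, sz)     # releasing it coalesces the arena back to one block
print(b.free[256])     # [0]
```

The XOR trick `offset ^ size` finds a block's buddy directly, which is why coalescing in a buddy system is so cheap.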
What are the advantages of such a paging scheme? What modifications to the virtual memory system are needed to provide this functionality? The program could have a large code segment or use large arrays as data. These portions of the program could be allocated to larger pages, thereby decreasing the memory overheads associated with a page table.
The virtual memory system would then have to maintain multiple free lists of pages of the different sizes, and would also need more complex address-translation code to take the different page sizes into account.
First, generate a random page-reference string where page numbers range from Apply the random page-reference string to each algorithm, and record the number of page faults incurred by each algorithm.
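A starting point for this project, showing FIFO and LRU, could look like the following. The page-number range and string length are elided in the exercise, so the 0..9 range and 1000 references below are assumptions; a fixed seed plays the role of the instructor's pre-made trace tape.

```python
import random

def fifo_faults(refs, frames):
    memory, order, faults = set(), [], 0
    for p in refs:
        if p in memory:
            continue
        faults += 1
        if len(memory) == frames:
            memory.discard(order.pop(0))  # evict the oldest resident page
        memory.add(p)
        order.append(p)
    return faults

def lru_faults(refs, frames):
    memory, faults = [], 0  # list ordered from least to most recently used
    for p in refs:
        if p in memory:
            memory.remove(p)          # hit: move the page to the MRU end
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)         # evict the least recently used page
        memory.append(p)
    return faults

random.seed(0)  # fixed seed so every student sees the same "trace tape"
refs = [random.randint(0, 9) for _ in range(1000)]  # assumed page range 0..9
for frames in range(1, 8):
    print(frames, fifo_faults(refs, frames), lru_faults(refs, frames))
```

Plotting faults against the frame count also lets students look for Belady's anomaly: LRU, being a stack algorithm, never faults more with more frames, while FIFO can.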
Implement the replacement algorithms such that the number of page frames can vary from Assume that demand paging is used. Design two programs that communicate with shared memory using the Win32 API, as outlined in Section 9. The consumer process will then read and output the sequence from shared memory. In this instance, the producer process will be passed an integer parameter on the command line specifying the number of Catalan numbers to produce.

Everything is typically stored in files: the student should learn what a file is to the operating system, and what the problems are in providing naming conventions to allow files to be found by user programs, and protection.
Two problems can crop up with this chapter. First, terminology may be different between your system and the book. This can be used to drive home the point that concepts are important and terms must be clearly defined when you get to a new system. Second, it may be difficult to motivate students to learn about directory structures that are not the ones on the system they are using.
This can best be overcome if the students have two very different systems to consider, such as a single-user system for a microcomputer and a large, university time-shared system. Projects might include a report about the details of the file system for the local system. It is also possible to write programs to implement a simple file system, either in memory (allocate a large block of memory that is used to simulate a disk) or on top of an existing file system.
In many cases, the design of a file system is an interesting project of its own.

Exercises

What problems may occur if a new file is created in the same storage area or with the same absolute path name? How can these problems be avoided? Let F1 be the old file and F2 be the new file.
A user wishing to access F1 through an existing link will actually access F2. Note that the access protection for file F1 is used rather than the one associated with F2.
This can be accomplished in several ways: