2.14 What is the main advantage for an operating-system designer of using a virtual-machine architecture? What is the main advantage for a user?
Answer: The system is easy to debug, and security problems are easy to solve. Virtual machines also provide a good platform for operating system research since many different operating systems may run on one physical system.
2.15 Why is a just-in-time compiler useful for executing Java programs?
Answer: Java is an interpreted language. This means that the JVM interprets the bytecode instructions one at a time. Typically, most interpreted environments are slower than running native binaries, for the interpretation process requires converting each instruction into native machine code. A just-in-time (JIT) compiler compiles the bytecode for a method into native machine code the first time the method is encountered. This means that the Java program is essentially running as a native application (of course, the conversion process of the JIT takes time as well,
but not as much as bytecode interpretation). Furthermore, the JIT caches compiled code so that it may be reused the next time the method is encountered. A Java program that is run by a JIT rather than a traditional interpreter typically runs much faster.
2.16 What is the relationship between a guest operating system and a host operating system in a system like VMware? What factors need to be considered in choosing the host operating system?
Answer: A guest operating system provides its services by mapping them onto the functionality provided by the host operating system. A key issue that needs to be considered in choosing the host operating system is whether it is sufficiently general in terms of its system-call interface in order to be able to support the functionality associated with the guest operating system.
2.17 The experimental Synthesis operating system has an assembler incorporated within the kernel. To optimize system-call performance, the kernel assembles routines within kernel space to minimize the path that the system call must take through the kernel. This approach is the antithesis of the layered approach, in which the path through the kernel is extended to make building the operating system easier. Discuss the pros and cons of the Synthesis approach to kernel design and to system-performance optimization.
Answer: Synthesis is impressive due to the performance it achieves through on-the-fly compilation. Unfortunately, it is difficult to debug problems within the kernel due to the fluidity of the code. Also, such compilation is system specific, making Synthesis difficult to port (a new compiler must be written for each
architecture).
2.18 In Section 2.3, we described a program that copies the contents of one file to a destination file. This program works by first prompting the user for the name of the source and destination files. Write this program using either the Win32 or POSIX API. Be sure to include all necessary error checking, including ensuring that the source file exists. Once you have correctly designed and tested the program, if you used a system that supports it, run the program using a utility that traces system calls. Linux systems provide the strace command, and Solaris systems use the truss or dtrace command. On Mac OS X, the ktrace facility provides similar functionality.
Answer:
//Solution 1: The copy program implemented with the POSIX file-system calls on Linux.
//This program was written by Wendell on Mar. 5, 2008.
// Usage: copy src dst
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

#define BUFSIZE 8192

int main( int argc, char ** argv) {
    if (argc != 3) {
        printf("Usage: copy <source> <destination>\n");
        return -1; }
    int src, dst;
    char buf[BUFSIZE]; int n;
    src = open(argv[1], O_RDONLY);
    if (src < 0) {
        printf("Error: cannot open source file %s\n", argv[1]);
        return -1; }
    dst = open(argv[2], O_RDWR | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR | S_IXUSR);
    if (dst < 0) {
        printf("Error: cannot create destination file %s\n", argv[2]);
        close(src);
        return -1; }
    while ( ( n = read(src, buf, BUFSIZE) ) > 0) {
        if (write(dst, buf, n) != n)
            printf("Error: write failed\n"); }
    if (n < 0)
        printf("Error: read failed\n");
    close(src); close(dst); exit(0); }
//solution 2: copy using the Win32 ReadFile/WriteFile calls
#include <windows.h>
#include <stdio.h>

#define BUF_SIZE 8192

int main (int argc, LPTSTR argv []) {
    HANDLE hIn, hOut; DWORD nIn, nOut;
    CHAR Buffer [BUF_SIZE];
    if (argc != 3) {
        printf ("Usage: cp file1 file2\n");
        return 1; }
    hIn = CreateFile (argv [1], GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hIn == INVALID_HANDLE_VALUE) {
        printf ("Cannot open input file. Error: %x\n", GetLastError ());
        return 2; }
    hOut = CreateFile (argv [2], GENERIC_WRITE, 0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hOut == INVALID_HANDLE_VALUE) {
        printf ("Cannot open output file. Error: %x\n", GetLastError ());
        return 3; }
    while (ReadFile (hIn, Buffer, BUF_SIZE, &nIn, NULL) && nIn > 0) {
        WriteFile (hOut, Buffer, nIn, &nOut, NULL);
        if (nIn != nOut) {
            printf ("Fatal write error: %x\n", GetLastError ());
            return 4; } }
    CloseHandle (hIn);
    CloseHandle (hOut); return 0; }
//solution 3: using the Win32 CopyFile function
#include <windows.h>
#include <stdio.h>

int main (int argc, LPTSTR argv []) {
    if (argc != 3) {
        printf ("Usage: cp file1 file2\n");
        return 1; }
    if (!CopyFile (argv [1], argv [2], FALSE)) {
        printf ("CopyFile error: %x\n", GetLastError ());
        return 2; }
    return 0; }
/* Sorry, the copy program has not yet been written using the Solaris- or Mac OS X-specific APIs; solution 1 above uses the POSIX calls. */
Chapter 3
3.1 Describe the differences among short-term, medium-term, and long-term scheduling.
Answer:
• Short-term (CPU scheduler) —— selects from the jobs in memory those jobs that are ready to execute and allocates the CPU to them.
• Medium-term —— used especially with time-sharing systems as an intermediate scheduling level. A swapping scheme is implemented to remove partially run programs from memory and reinstate them later to continue where they left off.
• Long-term (job scheduler) —— determines which jobs are brought into memory for processing.
The primary difference is in the frequency of their execution. The short-term scheduler must select a new process quite often. The long-term scheduler is used much less often since it handles placing jobs in the system and may wait a while for a job to finish before it admits another one.
3.2 Describe the actions taken by a kernel to context-switch between processes.
Answer: In general, the operating system must save the state of the currently running process and restore the state of the process scheduled to be run next. Saving the state of a process typically includes the values of all the CPU registers in addition to memory allocation. Context switches must also perform many architecture-specific operations, including flushing data and instruction caches.
3.3 Consider the RPC mechanism. Describe the undesirable circumstances that could arise from not enforcing either the “at most once” or “at least once” semantics. Describe possible uses for a mechanism that had neither of these guarantees.
Answer: If an RPC mechanism cannot support either the “at most once” or “at least once” semantics, then the RPC server cannot guarantee that a remote procedure will not be invoked multiple times. Consider if a remote procedure were withdrawing money from a bank account on a system that did not support these semantics. It is possible that a single invocation of the remote procedure might lead to multiple withdrawals on the server.
For a system to support either of these semantics generally requires that the server maintain some form of client state such as the timestamp described in the text. If a system were unable to support either of these semantics, then such a system could only safely provide remote procedures that do not alter data or provide time-sensitive results. Using our bank account as an example, we certainly require “at most once” or “at least once” semantics for performing a withdrawal (or deposit!). However, an inquiry into an account balance or other account information such as name, address, etc. does not require these semantics.
solutions to the exercises
Chapter 1
1.1 In a multiprogramming and time-sharing environment, several users share the system simultaneously. This situation can result in various security problems. a. What are two such problems? b. Can we ensure the same degree of security in a time-shared machine as in a dedicated machine? Explain your answer.
Answer: a. Stealing or copying one’s programs or data; using system resources (CPU, memory, disk space, peripherals) without proper accounting.
b. Probably not, since any protection scheme devised by humans can inevitably be broken by a human, and the more complex the scheme, the more difficult it is to feel confident of its correct implementation.
1.2 The issue of resource utilization shows up in different forms in different types of operating systems. List what resources must be managed
carefully in the following settings: a. Mainframe or minicomputer systems b. Workstations connected to servers c. Handheld computers
Answer:
a. Mainframes: memory and CPU resources, storage, network bandwidth. b. Workstations: memory and CPU resources.
c. Handheld computers: power consumption, memory resources.
1.3 Under what circumstances would a user be better off using a timesharing system rather than a PC or single-user workstation?
Answer: When there are few other users, the task is large, and the hardware is fast, time-sharing makes sense. The full power of the system can be brought to bear on the user’s problem. The problem can be solved faster than on a personal computer. Another case occurs when lots of other users need resources at the same time.
A personal computer is best when the job is small enough to be executed reasonably on it and when performance is sufficient to execute the program to the user’s satisfaction.
1.4 Which of the functionalities listed below need to be supported by the operating system for the following two settings: (a) handheld devices and (b) real-time systems. a. Batch programming b. Virtual memory c. Time sharing
Answer: For real-time systems, the operating system needs to support virtual memory
and time sharing in a fair manner. For handheld systems, the operating system needs to provide virtual memory but does not need to provide time sharing. Batch programming is not necessary in either setting.
1.5 Describe the differences between symmetric and asymmetric multiprocessing.What are three advantages and one disadvantage of multiprocessor systems?
Answer: Symmetric multiprocessing treats all processors as equals, and I/O can be processed on any CPU. Asymmetric multiprocessing has one master CPU, and the remaining CPUs are slaves. The master distributes tasks among the slaves, and I/O is usually done by the master only.
Multiprocessors can save money by not duplicating power supplies, housings, and peripherals. They can execute programs more quickly and can have increased reliability. They are also more complex in both hardware and software than uniprocessor systems.
1.6 How do clustered systems differ from multiprocessor systems? What is required for two machines belonging to a cluster to cooperate to provide a highly available service?
Answer: Clustered systems are typically constructed by combining multiple computers into a single system to perform a computational task distributed across the cluster. Multiprocessor systems on the other hand could be a single physical entity comprising multiple CPUs. A clustered system is less tightly coupled than a multiprocessor system. Clustered systems communicate using messages, while processors in a multiprocessor system could communicate using shared memory.
In order for two machines to provide a highly available service, the state on the two machines should be replicated and should be consistently updated. When one of the machines fails, the other could then take over the functionality of the failed machine.
1.7 Distinguish between the client-server and peer-to-peer models of distributed systems.
Answer: The client-server model firmly distinguishes the roles of the client and server. Under this model, the client requests services that are provided by the server. The peer-to-peer model doesn’t have such strict roles. In fact, all nodes in the system are considered peers and thus may act as either clients or servers - or both. A node may request a service from another peer, or the node may in fact provide such a service to other peers in the system.
For example, let’s consider a system of nodes that share cooking recipes. Under the client-server model, all recipes are stored with the server. If a client wishes to access a recipe, it must request the recipe from the specified server. Using the peer-to-peer model, a peer node could ask other peer nodes for the specified recipe.
The node (or perhaps nodes) with the requested recipe could provide it to the requesting node. Notice how each peer may act as both a client (i.e., it may request recipes) and as a server (it may provide recipes).
1.8 Consider a computing cluster consisting of two nodes running a database. Describe two ways in which the cluster software can manage access to the data on the disk. Discuss the benefits and disadvantages of each.
Answer: Consider the following two alternatives: asymmetric clustering and parallel clustering. With asymmetric clustering, one host runs the database application with the other host simply monitoring it. If the server fails, the monitoring host becomes the active server. This is appropriate for providing redundancy. However, it does not utilize the potential processing power of both hosts. With parallel clustering, the database application can run in parallel on both hosts. The difficulty implementing parallel clusters is providing some form of distributed locking mechanism for files on the shared disk.
1.9 How are network computers different from traditional personal computers? Describe some usage scenarios in which it is advantageous to use network computers.
Answer: A network computer relies on a centralized computer for most of its services. It can therefore have a minimal operating system to manage its resources. A personal computer, on the other hand, has to be capable of providing all of the required functionality in a standalone manner without relying on a centralized server. Scenarios where administrative costs are high and where sharing leads to more efficient use of resources are precisely those settings where network computers are preferred.
1.10 What is the purpose of interrupts? What are the differences between a trap and an interrupt? Can traps be generated intentionally by a user program? If so, for what purpose?
Answer: An interrupt is a hardware-generated change-of-flow within the system. An interrupt handler is summoned to deal with the cause of the interrupt; control is then returned to the interrupted context and instruction. A trap is a software-generated interrupt. An interrupt can be used to signal the completion of an I/O to obviate the need for device polling. A trap can be used to call operating system routines or to catch arithmetic errors.
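For a concrete (hedged) illustration of an intentionally generated trap: in the C sketch below, the write() call crosses into the kernel through the architecture's trap/system-call instruction, which is exactly how a user program deliberately requests an operating-system routine.

/* Sketch: a user program deliberately entering the kernel through a trap.
 * The write() wrapper issues the trap (system-call) instruction; the OS
 * performs the I/O on the program's behalf and then returns control. */
#include <unistd.h>

int main(void) {
    const char msg[] = "hello from user mode, via a trap\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);  /* trap into the kernel */
    return 0;
}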
1.11 Direct memory access is used for high-speed I/O devices in order to avoid increasing the CPU’s execution load.
a. How does the CPU interface with the device to coordinate the transfer? b. How does the CPU know when the memory operations are complete?
c. The CPU is allowed to execute other programs while the DMA controller is
transferring data. Does this process interfere with the execution of the user programs? If so, describe what forms of interference are caused.
Answer: The CPU can initiate a DMA operation by writing values into special registers that can be independently accessed by the device. The device initiates the corresponding operation once it receives a command from the CPU. When the device is finished with its operation, it interrupts the CPU to indicate the completion of the operation.
Both the device and the CPU can be accessing memory simultaneously. The memory controller provides access to the memory bus in a fair manner to these two entities. A CPU might therefore be unable to issue memory operations at peak speeds since it has to compete with the device in order to obtain access to the memory bus.
1.12 Some computer systems do not provide a privileged mode of operation in hardware. Is it possible to construct a secure operating system for these computer systems? Give arguments both that it is and that it is not possible.
Answer: An operating system for a machine of this type would need to remain in control (or monitor mode) at all times. This could be accomplished by two methods:
a. Software interpretation of all user programs (like some BASIC, Java, and LISP systems, for example). The software interpreter would provide, in software, what the hardware does not provide.
b. Require that all programs be written in high-level languages so that all object code is compiler-produced. The compiler would generate (either in-line or by function calls) the protection checks that the hardware is missing.
1.13 Give two reasons why caches are useful.What problems do they solve? What problems do they cause? If a cache can be made as large as the device for which it is caching (for instance, a cache as large as a disk), why not make it that large and eliminate the device?
Answer: Caches are useful when two or more components need to exchange data, and the components perform transfers at differing speeds. Caches solve the transfer problem by providing a buffer of intermediate speed between the components. If the fast device finds the data it needs in the cache, it need not wait for the slower device. The data in the cache must be kept consistent with the data in the components. If a component has a data value change, and the datum is also in the cache, the cache must also be updated. This is especially a problem on multiprocessor systems where more than one process may be accessing a datum. A component may be eliminated by an equal-sized cache, but only if: (a) the cache and the component have equivalent state-saving capacity (that is, if the component retains its data when electricity is removed, the cache must retain data as well), and (b) the cache is affordable, because faster storage tends to be more expensive.
1.14 Discuss, with examples, how the problem of maintaining coherence of cached data manifests itself in the following processing environments: a. Single-processor systems b. Multiprocessor systems c. Distributed systems
Answer: In single-processor systems, the memory needs to be updated when a processor issues updates to cached values. These updates can be performed immediately or in a lazy manner. In a multiprocessor system, different processors might be caching the same memory location in their local caches. When updates are made, the other cached locations need to be invalidated or updated. In distributed systems, consistency of cached memory values is not an issue. However, consistency problems might arise when a client caches file data.
1.15 Describe a mechanism for enforcing memory protection in order to prevent a program from modifying the memory associated with other programs.
Answer: The processor could keep track of what locations are associated with each process and limit access to locations that are outside of a program’s extent. Information regarding the extent of a program’s memory could be maintained by using base and limits registers and by performing a check for every memory access.
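A minimal sketch of the base/limit idea follows (the register values are invented for illustration; on real hardware this comparison is performed on every memory access and a violation raises a trap to the operating system).

/* Sketch of the base/limit check the hardware would perform per access. */
#include <stdbool.h>
#include <stdio.h>

struct region { unsigned long base; unsigned long limit; };  /* per-process registers */

static bool access_ok(const struct region *r, unsigned long addr) {
    /* Legal only if base <= addr < base + limit; otherwise trap to the OS. */
    return addr >= r->base && addr < r->base + r->limit;
}

int main(void) {
    struct region r = { 0x4000, 0x1000 };              /* hypothetical values */
    printf("%d %d\n", access_ok(&r, 0x4800), access_ok(&r, 0x6000));  /* prints: 1 0 */
    return 0;
}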
1.16 What network configuration would best suit the following environments? a. A dormitory floor b. A university campus c. A state d. A nation
Answer:
a. A dormitory floor - A LAN.
b. A university campus - A LAN, possibly a WAN for very large campuses.
c. A state - A WAN.
d. A nation - A WAN.
1.17 Define the essential properties of the following types of operating systems: a. Batch
b. Interactive c. Time sharing d. Real time e. Network f. Parallel g. Distributed h. Clustered i. Handheld
Answer:
a. Batch. Jobs with similar needs are batched together and run through the computer as a group by an operator or automatic job sequencer. Performance is increased by attempting to keep CPU and I/O devices busy at all times through buffering, off-line operation, spooling, and multiprogramming. Batch is good for executing large jobs that need little interaction; jobs can be submitted and picked up later.
b. Interactive. This system is composed of many short transactions where the results of the next transaction may be unpredictable. Response time needs to be short (seconds) since the user submits and waits for the result.
c. Time sharing. This system uses CPU scheduling and multiprogramming to provide economical interactive use of a system. The CPU switches rapidly from one user to another. Instead of having a job defined by spooled card images, each program reads its next control card from the terminal, and output is normally printed immediately to the screen.
d. Real time. Often used in a dedicated application, this system reads information from sensors and must respond within a fixed amount of time to ensure correct performance.
e. Network. Provides operating system features across a network such as file sharing.
f. Parallel (SMP). Used in systems where there are multiple CPUs each running the same copy of the operating system. Communication takes place across the system bus.
g. Distributed. This system distributes computation among several physical processors. The processors do not share memory or a clock. Instead, each processor has its own local memory. They communicate with each other through various communication lines, such as a high-speed bus or local area network.
h. Clustered. A clustered system combines multiple computers into a single system to perform computational tasks distributed across the cluster.
i. Handheld. A small computer system that performs simple tasks such as calendars, email, and web browsing. Handheld systems differ from traditional desktop systems with smaller memory and display screens and slower processors.
1.18 What are the tradeoffs inherent in handheld computers?
Answer: Handheld computers are much smaller than traditional desktop PCs. This results in smaller memory, smaller screens, and slower processing capabilities than a standard desktop PC. Because of these limitations, most handhelds currently can perform only basic tasks such as calendars, email, and simple word processing. However, due to their small size, they are quite portable and, when they are equipped with wireless access, can provide remote access to electronic mail and the world wide web.
Chapter 2
2.1 The services and functions provided by an operating system can be divided into two main categories. Briefly describe the two categories and discuss how they differ.
Answer: One class of services provided by an operating system is to enforce protection between different processes running concurrently in the system. Processes are allowed to access only those memory locations that are associated with their address spaces. Also, processes are not allowed to corrupt files associated with other users. A process is also not allowed to access devices directly without operating system intervention.
The second class of services provided by an operating system is to provide new functionality that is not supported directly by the underlying hardware. Virtual memory and file systems are two such examples of new services provided by an operating system.
2.2 List five services provided by an operating system that are designed to make it more convenient for users to use the computer system. In what cases would it be impossible for user-level programs to provide these services? Explain.
Answer:
• Program execution. The operating system loads the contents (or sections) of a file into memory and begins its execution. A user-level program could not be trusted to properly allocate CPU time.
• I/O operations. Disks, tapes, serial lines, and other devices must be communicated with at a very low level. The user need only specify the device and the operation to perform on it, while the system converts that request into device- or controller-specific commands. User-level programs cannot be trusted to only access devices they should have access to and to only access them when they are otherwise unused.
• File-system manipulation. There are many details in file creation, deletion, allocation, and naming that users should not have to perform. Blocks of disk space are used by files and must be tracked. Deleting a file requires removing the name file information and freeing the allocated blocks. Protections must also be checked to assure proper file access. User programs could neither ensure adherence to protection methods nor be trusted to allocate only free blocks and deallocate blocks on file deletion.
• Communications. Message passing between systems requires messages be turned into packets of information, sent to the network controller, transmitted across a communications medium, and reassembled by the destination system. Packet ordering and data correction must take place. Again, user programs might not coordinate access to the network device, or they might receive packets destined for other processes.
• Error detection. Error detection occurs at both the hardware and software levels. At the hardware level, all data transfers must be inspected to ensure that data have
not been corrupted in transit. All data on media must be checked to be sure they have not changed since they were written to the media. At the software level, media must be checked for data consistency; for instance, do the number of allocated and
unallocated blocks of storage match the total number on the device? Such errors are frequently process-independent (for instance, the corruption of data on a disk), so there must be a global program (the operating system) that handles all types of errors. Also, by having errors processed by the operating system, processes need not contain code to catch and correct all the errors possible on a system.
2.3 Describe three general methods for passing parameters to the operating system.
Answer:
a. Pass parameters in registers
b. Registers pass starting addresses of blocks of parameters
c. Parameters can be placed, or pushed, onto the stack by the program, and popped off the stack by the operating system.
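As a hedged sketch of approach (b), the fragment below passes a single block address instead of individual parameters; the structure layout, system-call number, and the trap_to_kernel() stand-in are all invented for illustration.

/* Sketch of the "block of parameters" method: the caller fills a block in
 * memory and hands only its address to the (simulated) trap routine. */
#include <stdio.h>

struct open_params {            /* invented layout for illustration */
    const char *path;
    int flags;
    int mode;
};

/* Stand-in for the kernel's system-call entry point: a real system would use
 * a trap instruction here; this stub just unpacks the block. */
static long trap_to_kernel(int syscall_no, void *param_block) {
    struct open_params *p = param_block;
    printf("syscall %d: path=%s flags=%d mode=%d\n",
           syscall_no, p->path, p->flags, p->mode);
    return 0;
}

int main(void) {
    struct open_params p = { "/tmp/demo", 0, 0644 };
    return (int) trap_to_kernel(5 /* hypothetical OPEN number */, &p);
}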
2.4 Describe how you could obtain a statistical profile of the amount of time spent by a program executing different sections of its code. Discuss the importance of obtaining such a statistical profile.
Answer: One could issue periodic timer interrupts and monitor what instructions or what sections of code are currently executing when the interrupts are delivered. A statistical profile of which pieces of code were active should be consistent with the time spent by the program in different sections of its code. Once such a statistical profile has been obtained, the programmer could optimize those sections of code that are consuming more of the CPU resources.
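A hedged sketch of this technique on a POSIX system follows: a profiling timer delivers SIGPROF periodically, and the handler samples which section is currently running (the section marker is a convention invented for this example; a real profiler would sample the program counter instead).

/* Sketch: periodic SIGPROF samples approximate time spent per code section. */
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

static volatile sig_atomic_t current_section;   /* set by the program */
static volatile long samples[2];                /* sample counts per section */

static void on_prof(int sig) { (void)sig; samples[current_section]++; }

int main(void) {
    struct sigaction sa = { .sa_handler = on_prof };
    sigaction(SIGPROF, &sa, NULL);

    /* deliver SIGPROF every 10 ms of CPU time consumed by this process */
    struct itimerval it = { {0, 10000}, {0, 10000} };
    setitimer(ITIMER_PROF, &it, NULL);

    volatile double x = 0;
    current_section = 0;
    for (long i = 0; i < 50000000; i++) x += i;          /* section 0 */
    current_section = 1;
    for (long i = 0; i < 150000000; i++) x += i * 0.5;   /* section 1 */

    printf("section 0: %ld samples, section 1: %ld samples\n",
           (long) samples[0], (long) samples[1]);
    return 0;
}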
2.5 What are the five major activities of an operating system in regard to file management?
Answer:
• The creation and deletion of files
• The creation and deletion of directories
• The support of primitives for manipulating files and directories
• The mapping of files onto secondary storage
• The backup of files on stable (nonvolatile) storage media
2.6 What are the advantages and disadvantages of using the same system-call interface for manipulating both files and devices?
Answer: Each device can be accessed as though it were a file in the file system. Since most of the kernel deals with devices through this file interface, it is relatively
easy to add a new device driver by implementing the hardware-specific code to support this abstract file interface. Therefore, this benefits the development of both user program code, which can be written to access devices and files in the same manner, and device driver code, which can be written to support a well-defined API. The disadvantage of using the same interface is that it might be difficult to capture the functionality of certain devices within the context of the file access API, thereby resulting in either a loss of functionality or a loss of performance. Some of this could be overcome by the use of the ioctl operation, which provides a general-purpose interface for processes to invoke operations on devices.
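The sketch below (assuming a Linux/POSIX system with a terminal device) shows both sides of this trade-off: the device is opened and closed exactly like a file, while ioctl() exposes device-specific functionality, here the TIOCGWINSZ request for the terminal's window size.

/* Sketch: a device used through the ordinary file interface, plus ioctl()
 * for functionality that does not fit the read/write model. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/tty", O_RDONLY);     /* the terminal, opened like a file */
    if (fd < 0) { perror("open"); return 1; }

    struct winsize ws;
    if (ioctl(fd, TIOCGWINSZ, &ws) == 0)     /* device-specific request */
        printf("terminal is %d rows x %d columns\n", ws.ws_row, ws.ws_col);

    close(fd);
    return 0;
}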
2.7 What is the purpose of the command interpreter? Why is it usually separate from the kernel? Would it be possible for the user to develop a new command interpreter using the system-call interface provided by the operating system?
Answer: It reads commands from the user or from a file of commands and executes them, usually by turning them into one or more system calls. It is usually not part of the kernel since the command interpreter is subject to changes. A user should be able to develop a new command interpreter using the system-call interface provided by the operating system. The command interpreter allows a user to create and manage processes and also to determine the ways by which they communicate (such as through pipes and files). As all of this functionality could be accessed by a user-level program using the system calls, it should be possible for the user to develop a new command-line interpreter.
2.8 What are the two models of interprocess communication? What are the strengths and weaknesses of the two approaches?
Answer: The two models of interprocess communication are the message-passing model and the shared-memory model. Message passing is useful for exchanging smaller amounts of data and is easier to implement between separate machines, but it is typically slower than shared memory because each message passes through the kernel. Shared memory allows maximum speed and convenience of communication once the shared region is established, but the communicating processes must themselves provide the synchronization needed to avoid conflicting accesses.
2.9 Why is the separation of mechanism and policy desirable?
Answer: Mechanism and policy must be separate to ensure that systems are easy to modify. No two system installations are the same, so each installation may want to tune the operating system to suit its needs. With mechanism and policy separate, the policy may be changed at will while the mechanism stays unchanged. This arrangement provides a more flexible system.
2.10 Why does Java provide the ability to call from a Java program native methods that are written in, say, C or C++? Provide an example of a situation in which a native method is useful.
Answer: Java programs are intended to be platform independent. Therefore, the language does not provide access to most system-specific resources such as reading
from I/O devices or ports. To perform a system-specific I/O operation, you must write it in a language that supports such features (such as C or C++). Keep in mind that a Java program that calls a native method written in another language will no longer be architecture-neutral.
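A hedged sketch of what the native side of such a call can look like: the class name DiskInfo, the method freeBytes(), and the library name are all invented for illustration; the Java side would declare the method as native and load the library with System.loadLibrary.

/* Sketch of the C implementation behind a hypothetical Java native method:
 *   class DiskInfo { public native long freeBytes(String path); }
 * built as a shared library that the Java side loads with
 * System.loadLibrary("diskinfo"). */
#include <jni.h>
#include <sys/statvfs.h>

JNIEXPORT jlong JNICALL
Java_DiskInfo_freeBytes(JNIEnv *env, jobject obj, jstring jpath) {
    const char *path = (*env)->GetStringUTFChars(env, jpath, NULL);
    struct statvfs vfs;
    jlong result = -1;
    if (statvfs(path, &vfs) == 0)
        result = (jlong) vfs.f_bavail * (jlong) vfs.f_frsize;  /* free bytes */
    (*env)->ReleaseStringUTFChars(env, jpath, path);
    (void) obj;
    return result;
}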
2.11 It is sometimes difficult to achieve a layered approach if two components of the operating system are dependent on each other. Identify a scenario in which it is unclear how to layer two system components that require tight coupling of their functionalities.
Answer: The virtual memory subsystem and the storage subsystem are typically tightly coupled and require careful design in a layered system due to the following interactions. Many systems allow files to be mapped into the virtual memory space of an executing process. On the other hand, the virtual memory subsystem typically uses the storage system to provide the backing store for pages that do not currently reside in memory. Also, updates to the filesystem are sometimes buffered in physical memory before they are flushed to disk, thereby requiring careful coordination of the usage of memory between the virtual memory subsystem and the filesystem.
2.12 What is the main advantage of the microkernel approach to system design? How do user programs and system services interact in a microkernel architecture? What are the disadvantages of using the microkernel approach?
Answer: Benefits typically include the following: (a) adding a new service does not require modifying the kernel, (b) it is more secure as more operations are done in user mode than in kernel mode, and (c) a simpler kernel design and functionality typically results in a more reliable operating system. User programs and system services interact in a microkernel architecture by using interprocess communication mechanisms such as messaging. These messages are conveyed by the operating system. The primary disadvantages of the microkernel architecture are the overheads associated with interprocess communication and the frequent use of the operating system’s messaging functions in order to enable the user process and the system service to interact with each other.
2.13 In what ways is the modular kernel approach similar to the layered approach? In what ways does it differ from the layered approach?
Answer: The modular kernel approach requires subsystems to interact with each other through carefully constructed interfaces that are typically narrow (in terms of the functionality that is exposed to external modules). The layered kernel approach is similar in that respect. However, the layered kernel imposes a strict ordering of subsystems such that subsystems at the lower layers are not allowed to invoke operations corresponding to the upper-layer subsystems. There are no such restrictions in the modular-kernel approach, wherein modules are free to invoke each other without any constraints.
3.4 Using the program shown in Figure 3.24, explain what will be output at Line A.
Answer: PARENT: value = 5
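The reason is that fork() gives the child its own copy of the data. A minimal sketch in the spirit of Figure 3.24 (not the book's exact listing) makes this visible: the child's update happens in its private copy, so the parent still prints 5 at Line A.

/* Sketch in the spirit of Figure 3.24: the child's change to `value` affects
 * only its own copy of the address space, so the parent prints 5. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 5;

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {                 /* child: modifies its private copy */
        value += 15;
        return 0;
    } else if (pid > 0) {           /* parent: waits, then prints its copy */
        wait(NULL);
        printf("PARENT: value = %d\n", value);   /* Line A: prints 5 */
    }
    return 0;
}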
3.5 What are the benefits and detriments of each of the following? Consider both the systems and the programmers' levels.
a. Symmetric and asymmetric communication b. Automatic and explicit buffering c. Send by copy and send by reference
d. Fixed-sized and variable-sized messages
Answer:
a. Symmetric and asymmetric communication—A benefit of symmetric communication is that it allows a rendezvous between the
sender and receiver. A disadvantage of a blocking send is that a rendezvous may not be required and the message could be delivered asynchronously; received at a point of no interest to the sender. As a result, message-passing systems often provide both forms of synchronization.
b. Automatic and explicit buffering—Automatic buffering provides a queue with indefinite length, thus ensuring the sender will never have to block while waiting to copy a message. There are no specifications for how automatic buffering will be provided; one scheme may reserve sufficiently large memory where much of the memory is wasted. Explicit buffering specifies how large the buffer is. In this situation, the sender may be blocked while waiting for available space in the queue. However, it is less likely memory will be wasted with explicit buffering.
c. Send by copy and send by reference—Send by copy does not allow the receiver to alter the state of the parameter; send by reference does allow it. A benefit of send by reference is that it allows the programmer to write a distributed version of a centralized application. Java’s RMI provides both, however passing a parameter by reference requires declaring the parameter as a remote object as well.
d. Fixed-sized and variable-sized messages—The implications of this are mostly related to buffering issues; with fixed-size messages, a buffer with a specific size can hold a known number of messages. The number of variable-sized messages that can be held by such a buffer is unknown. Consider how Windows 2000 handles this situation: with fixed-sized messages (anything < 256 bytes), the messages are copied from the address space of the sender to the address space of the receiving process. Larger messages (i.e. variable-sized messages) use shared memory to pass the message.
3.6 The Fibonacci sequence is the series of numbers 0, 1, 1, 2, 3, 5, 8, .... Formally, it can be expressed as:
fib(0) = 0
fib(1) = 1
fib(n) = fib(n-1) + fib(n-2)
Write a C program using the fork() system call that generates the Fibonacci
sequence in the child process. The number of the sequence will be provided in the command line. For example, if 5 is provided, the first five numbers in the Fibonacci sequence will be output by the child process. Because the parent and child processes have their own copies of the data, it will be necessary for the child to output the sequence.
Have the parent invoke the wait() call to wait for the child process to complete before exiting the program. Perform necessary error checking to ensure that a non-negative number is passed on the command line.
Answer: // This program solves exercise 3.6
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char *argv[]) {
    if (argc != 2)
        exit(0);
    // By convention argv[0] holds the program name and argv[1] the first user
    // argument, so argc equals 2 only when exactly one number is supplied;
    // otherwise the program detects the error and exits.
    pid_t pid;               // process identifier (defined in <sys/types.h> on Linux)
    int i, a, b, fib;
    int n = atoi(argv[1]);   // convert the command-line string to an integer
    /* fork another process */
    pid = fork();
    if (pid < 0) { /* error occurred */
        fprintf(stderr, "Fork failed\n");
        exit(-1); }
    else if (pid == 0) { /* child process */
        if (n == 1)
            printf("0\n");
        else if (n == 2)
            printf("0 1\n");
        else if (n > 2) {
            a = 0; b = 1;
            printf("0 1");
            for (i = 3; i <= n; i++) {
                fib = a + b;
                printf(" %d", fib);
                a = b; b = fib; }
            printf("\n");
        } }
    else { /* parent process */
        /* parent will wait for the child to complete */
        wait(NULL);
        exit(0); } }
3.7 Repeat the preceding exercise, this time using the CreateProcess() in the Win32 API. In this instance, you will need to specify a separate program to be invoked from CreateProcess(). It is this separate program that will run as a child process outputting the Fibonacci sequence. Perform necessary error checking to ensure that a non-negative number is passed on the command line.
Answer: //These programs solve exercise 3.7
//program 1: cp
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_STRING 256

int main( int argc, char *argv[] ) {
    if (argc < 2)
        exit(0);
    STARTUPINFO si;
    PROCESS_INFORMATION pi;
    char command[MAX_STRING];
    ZeroMemory( &si, sizeof(si) );
    si.cb = sizeof(si);
    ZeroMemory( &pi, sizeof(pi) );
    sprintf(command, "fib.exe %s", argv[1]);
    // Start the child process.
    if( !CreateProcess( NULL, // No module name (use command line).
        command, // Command line.
        NULL, // Process handle not inheritable.
        NULL, // Thread handle not inheritable.
        FALSE, // Set handle inheritance to FALSE.
        0, // No creation flags.
        NULL, // Use parent's environment block.
        NULL, // Use parent's starting directory.
        &si, // Pointer to STARTUPINFO structure.
        &pi ) // Pointer to PROCESS_INFORMATION structure.
    ) {
        printf( "CreateProcess failed (%d).\n", GetLastError() );
        return -1; }
    // Wait until child process exits.
    WaitForSingleObject( pi.hProcess, INFINITE );
    // Close process and thread handles.
    CloseHandle( pi.hProcess );
    CloseHandle( pi.hThread ); }
//program 2: fib
#include <stdio.h>
#include <stdlib.h>

int main( int argc, char *argv[] ) {
    int max = atoi(argv[1]);
    if (max <= 0)
        exit(0);
    else {
        if (max == 1)
            printf("0\n");
        else if (max == 2)
            printf("0 1\n");
        else {
            int a = 0;
            int b = 1;
            int fib;
            printf("0 1");
            for (int i = 3; i <= max; i++) {
                fib = a + b;
                printf(" %d", fib);
                a = b;
                b = fib; }
            printf("\n"); } } }
3.8 Modify the Date server shown in Figure 3.19 so that it delivers random one-line fortunes rather than the current date. Allow the fortunes to contain multiple lines. The date client shown in Figure 3.20 can be used to read the multi-line fortunes returned by the fortune server.
Answer: //Client program requesting fortune from server.
import java.net.*; import java.io.*;
public class FortuneClient {
public static void main(String[] args) throws IOException { InputStream in = null; BufferedReader bin = null; Socket sock = null;
try {
sock = new Socket("127.0.0.1", 6013);
in = sock.getInputStream();
bin = new BufferedReader(new InputStreamReader(in));
String line;
while( (line = bin.readLine()) != null) System.out.println(line); }
catch (IOException ioe) {
System.err.println(ioe); }
finally {
sock.close(); } } }
//Fortune server listening to port 6013. import java.net.*; import java.io.*;
public class FortuneServer {
private static final String[] fortunes = {   // sample fortunes
    "Today is your lucky day.",
    "A journey of a thousand miles begins with a single step.",
    "Fortune favors the bold.",
    "He who hesitates is lost.",
    "You will soon receive good news.",
    "Better late than never." };
public static void main(String[] args) throws IOException { Socket client = null; ServerSocket sock = null;
try {
sock = new ServerSocket(6013); // now listen for connections while (true) {
client = sock.accept();
System.out.println("Client connection received.");
System.out.println("Sending a random fortune.");
// we have a connection
PrintWriter pout = new PrintWriter(client.getOutputStream(), true);
// write a random fortune to the socket
pout.println(fortunes[(int)(java.lang.Math.random() * fortunes.length)] );
pout.close(); client.close(); } }
catch (IOException ioe) {
System.err.println(ioe); }
finally {
if (sock != null) sock.close(); if (client != null) client.close(); } } }
3.9 An echo server is a server that echoes back whatever it receives from a client. For example, if a client sends the server the string Hello there! the server will respond with the exact data it received from the client — that is, Hello there!
Write an echo server using the Java networking API described in Section 3.6.1. This server will wait for a client connection using the accept() method. When a client connection is received, the server will loop, performing the following steps:
? Read data from the socket into a buffer.
? Write the contents of the buffer back to the client.
The server will break out of the loop only when it has determined that the client has closed the connection.
The date server shown in Figure 3.19 uses the java.io.BufferedReader class. BufferedReader extends the java.io.Reader class, which is used for reading character streams. However, the echo server cannot guarantee that it will read characters from clients; it may receive binary data as well. The class java.io.InputStream deals with data at the byte level rather than the character level. Thus, this echo server must use an object that extends java.io.InputStream. The read() method in the java.io.InputStream class returns -1 when the client has closed its end of the socket connection.
Answer:
// An echo server listening on port 6007.
// This server reads from the client and echoes back the result.
// When the client enters the character '.' – the server closes the connection. // This conforms to RFC 862 for echo servers.
import java.net.*; import java.io.*;
public class EchoServer {
public static final int DEFAULT_PORT = 6007; public static final int BUFFER_SIZE = 256;
public static void main(String[] args) throws IOException { ServerSocket sock = null;
byte[] buffer = new byte[BUFFER_SIZE]; InputStream fromClient = null; OutputStream toClient = null;
try {
// establish the socket
sock = new ServerSocket(DEFAULT_PORT);
while (true) { /**
* now listen for connections */
Socket client = sock.accept();
/**
* get the input and output streams associated with the socket. */
fromClient = new BufferedInputStream(client.getInputStream()); toClient = new BufferedOutputStream(client.getOutputStream()); int numBytes;
/** continually loop until the client closes the connection */
while ( (numBytes = fromClient.read(buffer)) != -1) { toClient.write(buffer,0,numBytes); toClient.flush(); }
fromClient.close(); toClient.close(); client.close(); } }
catch (IOException ioe) { } finally {
if (sock != null) sock.close(); } } }
3.10 In Exercise 3.6, the child process must output the Fibonacci sequence, since the parent and child have their own copies of the data. Another approach to designing this program is to establish a shared-memory segment between the parent and child processes. This technique allows the child to write the contents of the Fibonacci sequence to the shared- memory segment and has the parent output the sequence when the child completes. Because the memory is shared, any changes the child makes to
the shared memory will be reflected in the parent process as well.
This program will be structured using POSIX shared memory as described in Section 3.5.1. The program first requires creating the data structure for the shared-memory segment. This is most easily accomplished using a struct. This data structure will contain two items: (1) a fixed-sized array of size MAX_SEQUENCE that will hold the Fibonacci values; and (2) the size of the sequence the child process is to generate (sequence_size, where sequence_size ≤ MAX_SEQUENCE). These items can be represented in a struct as follows:
#define MAX_SEQUENCE 10
typedef struct {
    long fib_sequence[MAX_SEQUENCE];
    int sequence_size;
} shared_data;
The parent process will progress through the following steps:
a. Accept the parameter passed on the command line and perform error checking to ensure that the parameter is ≤ MAX_SEQUENCE.
b. Create a shared-memory segment of size shared_data.
c. Attach the shared-memory segment to its address space.
d. Set the value of sequence_size to the parameter on the command line.
e. Fork the child process and invoke the wait() system call to wait for the child to finish.
f. Output the value of the Fibonacci sequence in the shared-memory segment.
g. Detach and remove the shared-memory segment.
Because the child process is a copy of the parent, the shared-memory region will be attached to the child’s address space as well. The child process will then write the Fibonacci sequence to shared memory and finally will detach the segment. One issue of concern with cooperating processes involves synchronization issues. In this exercise, the parent and child processes must be synchronized so that the parent does not output the Fibonacci sequence until the child finishes generating the sequence. These two processes will be synchronized using the wait() system call; the parent process will invoke wait(), which will cause it to be suspended until the child process exits.
Answer: /* Example shared memory program. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/stat.h>
#include <sys/wait.h>
#define PERMS (S_IRUSR | S_IWUSR) #define MAX_SEQUENCE 10
typedef struct {
long fib_sequence[MAX_SEQUENCE]; int sequence_size; } shared_data;
int main(int argc, char *argv[]) {
int i, seq_size;
/* the process identifier */ pid_t pid;
/* the id of the shared memory segment */ int segment_id;
/* a pointer to the shared memory segment */ shared_data* shared_memory;
/* do some error checking to ensure the parameter was passed */ if (argc != 2) {
fprintf(stderr, "Usage: fib <sequence size>\n"); return -1; }
seq_size = atoi(argv[1]);
if (seq_size > MAX_SEQUENCE) {
fprintf(stderr, "Sequence size must be less than or equal to %d\n", MAX_SEQUENCE); return -1; }
/* allocate a shared memory segment */
if ( (segment_id = shmget(IPC_PRIVATE, sizeof(shared_data), PERMS)) == -1) { fprintf(stderr, "Unable to create shared memory segment\n"); return 1; }
printf("Shared memory segment ID = %d\n", segment_id);
/* now attach the shared memory segment at the specified address */ if ( (shared_memory = (shared_data *) shmat(segment_id, 0, 0)) == (shared_data *)-1) {
fprintf(stderr, "Unable to attach to shared memory segment\n"); return -1; }
/* set the size of the sequence */
shared_memory->sequence_size = seq_size;
/* now fork a child process */
if ( (pid = fork()) == (pid_t)-1) { return 1; }
/**
* now create a child process and have the child process set * the the shared memory segment to a certain value.
* The parent process will inquire on this shared value when * it returns from wait(). Thus, the call to wait() provides the synchronization. */
if (pid == 0) { /** child code */
printf("CHILD: shared memory attached at address %p\n", (void *) shared_memory);
/* now have the child generate the Fibonacci sequence .... */ shared_memory->fib_sequence[0] = 0; shared_memory->fib_sequence[1] = 1;
for (i = 2; i < shared_memory->sequence_size; i++)
shared_memory->fib_sequence[i] = shared_memory->fib_sequence[i-1] + shared_memory->fib_sequence[i-2];
/* now detach the shared memory segment */ shmdt((void *)shared_memory); }
else { /* parent code */ wait(NULL);
for (i = 0; i < shared_memory->sequence_size; i++)
printf("%ld ", shared_memory->fib_sequence[i]);
/* now detach and remove the shared memory segment */ shmdt((void *)shared_memory); shmctl(segment_id, IPC_RMID, NULL); }
return 0; }
3.11 Most UNIX and Linux systems provide the ipcs command. This command lists the status of various POSIX interprocess communication mechanisms, including shared-memory segments. Much of the information for the command comes from the data structure struct shmid_ds, which is available in the /usr/include/sys/shm.h file. Some of the fields of this structure include:
• int shm_segsz—size of the shared-memory segment
• short shm_nattch—number of attaches to the shared-memory segment
• struct ipc_perm shm_perm—permission structure of the shared-memory segment
The struct ipc_perm data structure (which is available in the file /usr/include/sys/ipc.h) contains the fields:
• unsigned short uid—identifier of the user of the shared-memory segment
• unsigned short mode—permission modes
• key_t key (on Linux systems, __key)—user-specified key identifier
The permission modes are set according to how the shared-memory segment is established with the shmget() system call. Permissions are identified according to the following:
Mode   Meaning
0400   Read permission of owner.
0200   Write permission of owner.
0040   Read permission of group.
0020   Write permission of group.
0004   Read permission of world.
0002   Write permission of world.
Permissions can be accessed by using the bitwise AND operator &. For example, if the statement mode & 0400 evaluates to true, the permission mode allows read permission by the owner of the shared-memory segment.
Shared-memory segments can be identified according to a user-specified key or according to the integer value returned from the shmget() system call, which represents the integer identifier of the shared-memory segment created. The shmid_ds structure for a given integer segment identifier can be obtained with the following shmctl() system call:
/* identifier of the shared memory segment */
int segment_id;
struct shmid_ds shmbuffer;

shmctl(segment_id, IPC_STAT, &shmbuffer);
If successful, shmctl() returns 0; otherwise, it returns -1. Write a C program that is passed an identifier for a shared-memory segment. This program will invoke the shmctl() function to obtain its shmid_ds structure. It will then output the following values of the given shared-memory segment:
• Segment ID
• Key
• Mode
• Owner UID
• Size
• Number of attaches
Answer:
// This program illustrates the functionality of the ipcs command on POSIX systems.
// This program is written for Linux 2.4 systems.
// Getting it to work on Solaris, OS X, etc. will require modifying the source code where commented.
// Usage: gcc -o sm sm.c
//        ./sm <segment id>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(int argc, char *argv[]) {
    /* the segment number */
    int segment_id;
    /* permissions of the segment */
    unsigned short mode;
    /** the shared memory segment */
    struct shmid_ds shmbuffer;

    /** do some error checking */
    if (argc < 2) {
        fprintf(stderr, "Usage: ./sm <segment id>\n");
        return -1; }

    /**
     * this needs to be set to the
     * shared memory segment number
     * being attached to. */
    segment_id = atoi(argv[1]);

    /* get the shm_ds information */
    if (shmctl(segment_id, IPC_STAT, &shmbuffer) == -1) {
        fprintf(stderr, "Unable to stat shared memory segment %d\n", segment_id);
        return -1; }

    /** now report the fields in shm_ds */
    printf("ID \t\t KEY \t MODE \t\t OWNER \t SIZE \t ATTACHES \n");
    printf("-- \t\t --- \t ---- \t\t ----- \t ---- \t -------- \n");
    printf("%d \t", segment_id);
    /** Linux has __key rather than key field */
    printf("%d \t", shmbuffer.shm_perm.__key);
    /** Mac OS X Darwin uses the key field */
    //printf("%d \t", shmbuffer.shm_perm.key);

    /** report on the permission */
    mode = shmbuffer.shm_perm.mode;

    /** OWNER */
    printf("%c", (mode & 0400) ? 'r' : '-');
    printf("%c", (mode & 0200) ? 'w' : '-');
    printf("%c", (mode & 0100) ? 'x' : '-');

    /** GROUP */
    printf("%c", (mode & 0040) ? 'r' : '-');
    printf("%c", (mode & 0020) ? 'w' : '-');
    printf("%c", (mode & 0010) ? 'x' : '-');

    /** WORLD */
    printf("%c", (mode & 0004) ? 'r' : '-');
    printf("%c", (mode & 0002) ? 'w' : '-');
    printf("%c", (mode & 0001) ? 'x' : '-');

    /** Darwin (Mac OS X) has the user_from_uid() function for the owner name */
    //printf(" \t %s", user_from_uid(shmbuffer.shm_perm.uid, 0));
    printf(" \t %d", (int) shmbuffer.shm_perm.uid);
    printf(" \t %d", (int) shmbuffer.shm_segsz);
    printf(" \t %d", (int) shmbuffer.shm_nattch);
    printf("\n");

    return 0; }
Chapter 4
4.1 Provide two programming examples in which multithreading does not provide better performance than a single-threaded solution.
Answer: I/O-bound programs have the property of performing only a small amount of computation before performing I/O. Such programs typically do not use up their entire CPU quantum. CPU-bound programs, on the other hand, use their entire quantum without performing any blocking I/O operations. Consequently, one could make better use of the computer's resources by giving higher priority to I/O-bound programs and allowing them to execute ahead of the CPU-bound programs.
5.2 Discuss how the following pairs of scheduling criteria conflict in certain settings.
a. CPU utilization and response time
b. Average turnaround time and maximum waiting time c. I/O device utilization and CPU utilization
Answer:
• CPU utilization and response time: CPU utilization is increased if the overheads associated with context switching are minimized. The context-switching overheads could be lowered by performing context switches infrequently. This could, however, result in increasing the response time for processes.
• Average turnaround time and maximum waiting time: Average turnaround time is minimized by executing the shortest tasks first.
Such a scheduling policy could however starve long-running tasks and thereby increase their waiting time.
• I/O device utilization and CPU utilization: CPU utilization is maximized by running long-running CPU-bound tasks without performing context switches. I/O device utilization is maximized by scheduling I/O-bound jobs as soon as they become ready to run, thereby incurring the overheads of context switches.
5.3 Consider the exponential average formula used to predict the length of the next CPU burst. What are the implications of assigning the following values to the parameters used by the algorithm? a. α = 0 and τ0 = 100 milliseconds b. α = 0.99 and τ0 = 10 milliseconds
Answer: When α = 0 and τ0 = 100 milliseconds, the formula always makes a prediction of 100 milliseconds for the next CPU burst. When α = 0.99 and τ0 = 10 milliseconds, the most recent behavior of the process is given much higher weight than the past history associated with the process. Consequently, the scheduling algorithm is almost memoryless, and it simply predicts the length of the previous burst as the prediction for the next quantum of CPU execution.
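The formula in question is τ(n+1) = α·t(n) + (1 − α)·τ(n); the small sketch below (burst values invented for illustration) shows the two extremes.

/* Sketch of the exponential-average prediction tau(n+1) = a*t(n) + (1-a)*tau(n). */
#include <stdio.h>

static double next_prediction(double alpha, double last_burst, double last_prediction) {
    return alpha * last_burst + (1.0 - alpha) * last_prediction;
}

int main(void) {
    /* alpha = 0: history only -- the prediction never moves from tau0 = 100 ms. */
    printf("%.2f\n", next_prediction(0.0, 37.0, 100.0));   /* prints 100.00 */
    /* alpha = 0.99: the prediction is almost entirely the most recent burst. */
    printf("%.2f\n", next_prediction(0.99, 37.0, 10.0));   /* prints 36.73 */
    return 0;
}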
5.4 Consider the following set of processes, with the length of the CPU-burst time given in milliseconds:
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            3
P4        1            4
P5        5            2
The processes are assumed to have arrived in the order P1 , P2 , P3 , P4 , P5 , all at time 0.
a. Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, a nonpreemptive priority (a smaller priority number implies a higher priority), and RR (quantum = 1) scheduling.
b. What is the turnaround time of each process for each of the scheduling algorithms in part a?
c. What is the waiting time of each process for each of the scheduling algorithms in part a?
d. Which of the schedules in part a results in the minimal average waiting time (over all processes)?
Answer:
a. The four Gantt charts are
b. Turnaround time
Process   FCFS   RR   SJF   Priority
P1        10     19   19    16
P2        11     2    1     1
P3        13     7    4     18
P4        14     4    2     19
P5        19     14   9     6
c. Waiting time (turnaround time minus burst time)
Process   FCFS   RR   SJF   Priority
P1        0      9    9     6
P2        10     1    0     0
P3        11     5    2     16
P4        13     3    1     18
P5        14     9    4     1
d. Shortest Job First
5.5 Which of the following scheduling algorithms could result in starvation? a. First-come, first-served b. Shortest job first c. Round robin
d. Priority
Answer: Shortest job first and priority-based scheduling algorithms could result in starvation.
5.6 Consider a variant of the RR scheduling algorithm where the entries in the ready queue are pointers to the PCBs.
a. What would be the effect of putting two pointers to the same process in the ready queue?
b. What would be the major advantages and disadvantages of this scheme?
c. How would you modify the basic RR algorithm to achieve the same effect without the duplicate pointers?
Answer:
a. In effect, that process will have increased its priority since by getting time more often it is receiving preferential treatment.
b. The advantage is that more important jobs could be given more time, in other words, higher priority in treatment. The consequence, of course, is that shorter jobs will suffer.
c. Allot a longer amount of time to processes deserving higher priority. In other words, have two or more quantums possible in the Round-Robin scheme.
5.7 Consider a system running ten I/O-bound tasks and one CPU-bound task. Assume that the I/O-bound tasks issue an I/O operation once for every millisecond of CPU computing and that each I/O operation takes 10 milliseconds to complete. Also assume that the context switching overhead is 0.1 millisecond and that all processes are long-running tasks. What is the CPU utilization for a round-robin scheduler when: a. The time quantum is 1 millisecond b. The time quantum is 10 milliseconds
Answer:
• The time quantum is 1 millisecond: Irrespective of which process is scheduled, the scheduler incurs a 0.1 millisecond context-switching cost for every context switch. This results in a CPU utilization of 1/1.1 * 100 = 91%.
• The time quantum is 10 milliseconds: The I/O-bound tasks incur a context switch after using up only 1 millisecond of the time quantum. The time required to cycle
through all the processes is therefore 10*1.1 + 10.1 (as each I/O-bound task executes for 1 millisecond and then incurs the context-switch overhead, whereas the CPU-bound task executes for 10 milliseconds before incurring a context switch). The CPU utilization is therefore 20/21.1 * 100 = 94%.
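A quick check of that arithmetic (a sketch of the calculation only, nothing OS-specific):

/* Sketch verifying the CPU-utilization arithmetic in exercise 5.7. */
#include <stdio.h>

int main(void) {
    /* Quantum = 1 ms: every millisecond of useful work costs 0.1 ms of switching. */
    printf("q = 1 ms:  %.1f%%\n", 1.0 / 1.1 * 100.0);        /* prints 90.9 */

    /* Quantum = 10 ms: ten I/O-bound tasks run 1 ms each (plus 0.1 ms switch),
     * then the CPU-bound task runs 10 ms (plus 0.1 ms switch). */
    double useful = 10 * 1.0 + 10.0;     /* 20 ms of computation per cycle */
    double total  = 10 * 1.1 + 10.1;     /* 21.1 ms elapsed per cycle */
    printf("q = 10 ms: %.1f%%\n", useful / total * 100.0);   /* prints 94.8 */
    return 0;
}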
5.8 Consider a system implementing multilevel queue scheduling. What strategy can a computer user employ to maximize the amount of CPU time allocated to the user’s process?
Answer: The program could maximize the CPU time allocated to it by not fully utilizing its time quantums. It could use a large fraction of its assigned quantum, but relinquish the CPU before the end of the quantum, thereby increasing the priority associated with the process.
5.9 Consider a preemptive priority scheduling algorithm based on dynamically changing priorities. Larger priority numbers imply higher priority. When a process is waiting for the CPU (in the ready queue, but not running), its priority changes at a rate α; when it is running, its priority changes at a rate β. All processes are given a priority of 0 when they enter the ready queue. The parameters α and β can be set to give many different scheduling algorithms. a. What is the algorithm that results from β > α > 0? b. What is the algorithm that results from α < β < 0?
Answer: a. FCFS b. LIFO
5.10 Explain the differences in the degree to which the following scheduling algorithms discriminate in favor of short processes: a. FCFS b. RR
c. Multilevel feedback queues
Answer:
a. FCFS — discriminates against short jobs since any short jobs arriving after long jobs will have a longer waiting time.
b. RR — treats all jobs equally (giving them equal bursts of CPU time) so short jobs will be able to leave the system faster since they will finish first.
c. Multilevel feedback queues — work similarly to the RR algorithm — they discriminate favorably toward short jobs.
5.11 Using the Windows XP scheduling algorithm, what is the numeric priority of a thread for the following scenarios?
a. A thread in the REALTIME PRIORITY CLASS with a relative priority of HIGHEST. b. A thread in the NORMAL PRIORITY CLASS with a relative priority of NORMAL. c. A thread in the HIGH PRIORITY CLASS with a relative priority of ABOVE NORMAL.
Answer: a. 26 b. 8 c. 14
5.12 Consider the scheduling algorithm in the Solaris operating system for time sharing threads:
a. What is the time quantum (in milliseconds) for a thread with priority 10? With priority 55? b. Assume a thread with priority 35 has used its entire time quantum without blocking. What new priority will the scheduler assign this thread? c. Assume a thread with priority 35 blocks for I/O before its time quantum has expired. What new priority will the scheduler assign this thread?
Answer: a. 160 and 40 b. 35 c. 54
5.13 The traditional UNIX scheduler enforces an inverse relationship between priority numbers and priorities: The higher the number, the lower the priority. The scheduler recalculates process priorities once per second using the following function:
Priority = (Recent CPU usage / 2) + Base
where base = 60 and recent CPU usage refers to a value indicating how often a process has used the CPU since priorities were last recalculated. Assume that recent CPU usage for process P1 is 40, process P2 is 18, and process P3 is 10. What will be the new priorities for these three processes when priorities are recalculated? Based on this information, does the traditional UNIX scheduler raise or lower the relative priority of a CPU-bound process?
Answer: The priorities assigned to the processes are 80, 69, and 65 respectively. The scheduler lowers the relative priority of CPU-bound processes.
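The arithmetic behind those numbers, as a small sketch:

/* Sketch of the traditional UNIX priority recalculation in exercise 5.13. */
#include <stdio.h>

int main(void) {
    int base = 60;
    int recent_cpu[] = { 40, 18, 10 };                 /* P1, P2, P3 */
    for (int i = 0; i < 3; i++)
        printf("P%d: %d/2 + %d = %d\n",
               i + 1, recent_cpu[i], base, recent_cpu[i] / 2 + base);
    /* Prints 80, 69, and 65: the heaviest CPU user gets the largest number,
     * i.e. the lowest priority under this scheduler. */
    return 0;
}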
Chapter 6
6.1 The first known correct software solution to the critical-section problem