现代操作系统作业答案


现代操作系统(第二版)习题答案


MODERN OPERATING SYSTEMS SECOND EDITIONPROBLEM SOLUTIONSANDREW S. TANENBAUM Vrije Universiteit Amsterdam, The NetherlandsPRENTICE HALLUPPER SADDLE RIVER, NJ 07458SOLUTIONS TO CHAPTER 1 PROBLEMS1. An operating system must provide the users with an extended (i.e., virtual) machine, and it must manage the I/O devices and other system resources.2. Multiprogramming is the rapid switching of the CPU between multiple processes in memory. It is commonly used to keep the CPU busy while one or more processes are doing I/O.3. Input spooling is the technique of reading in jobs, for example, from cards, onto the disk, so that when the currently executing processes are finished, there will be work waiting for the CPU. Output spooling consists of first copying printable files to disk before printing them, rather than printing directly as the output is generated. Input spooling on a personal computer is not very likely, but output spooling is.4. The prime reason for multiprogramming is to give the CPU something to do while waiting for I/O to complete. If there is no DMA, the CPU is fully occupied doing I/O, so there is nothing to be gained (at least in terms of CPU utilization) by multiprogramming. No matter how much I/O a program does, the CPU will be 100 percent busy. This of course assumes the major delay is the wait while data are copied. A CPU could do other work if the I/O were slow for other reasons (arriving on a serial line, for instance).5. Second generation computers did not have the necessary hardware to protect the operating system from malicious user programs.6. It is still alive. For example, Intel makes Pentium I, II, and III, and 4 CPUs with a variety of different properties including speed and power consumption. All of these machines are architecturally compatible. They differ only in price and performance, which is the essence of the family idea.7. A 25×80 character monochrome text screen requires a 2000-byte buffer. The 1024 ×768 pixel 24-bit color bitmap requires 2,359,296 bytes. In 1980 these two options would have cost $10 and $11,520, respectively. For current prices, check on how much RAM currently costs, probably less than $1/MB.8. Choices (a), (c), and (d) should be restricted to kernel mode.9. Personal computer systems are always interactive, often with only a single user. Mainframe systems nearly always emphasize batch or timesharing with many users. Protection is much more of an issue on mainframe systems, as is efficient use of all resources.10. Every nanosecond one instruction emerges from the pipeline. This meansthe machine is executing 1 billion instructions per second. It does not matter at all how many stages the pipeline has. A 10-stage pipeline with 1 nsec per2 PROBLEM SOLUTIONS FOR CHAPTER 1stage would also execute 1 billion instructions per second. All that matters is how often a finished instructions pops out the end of the pipeline.11. The manuscript contains 80 × 50 × 700 = 2.8 million characters. This is, of course, impossible to fit into the registers of any currently available CPU and is too big for a 1-MB cache, but if such hardware were available, the manuscript could be scanned in 2.8 msec from the registers or 5.8 msec from the cache. There are approximately 2700 1024-byte blocks of data, so scanning from the disk would require about 27 seconds, and from tape 2 minutes 7 seconds. Of course, these times are just to read the data. Processing and rewriting the data would increase the time.12. 
Logically, it does not matter if the limit register uses a virtual address or a physical address. However, the performance of the former is better. If virtual addresses are used, the addition of the virtual address and the base register can start simultaneously with the comparison and then can run in parallel. If physical addresses are used, the comparison cannot start until the addition is complete, increasing the access time.13. Maybe. If the caller gets control back and immediately overwrites the data, when the write finally occurs, the wrong data will be written. However, if the driver first copies the data to a private buffer before returning, then the caller can be allowed to continue immediately. Another possibility is to allow the caller to continue and give it a signal when the buffer may be reused, but this is tricky and error prone.14. A trap is caused by the program and is synchronous with it. If the program is run again and again, the trap will always occur at exactly the same position in the instruction stream. An interrupt is caused by an external event and its timing is not reproducible.15. Base = 40,000 and limit = 10,000. An answer of limit = 50,000 is incorrect for the way the system was described in this book. It could have been implemented that way, but doing so would have required waiting until the address + base calculation was completed before starting the limit check, thus slowing down the computer.16. The process table is needed to store the state of a process that is currently suspended, either ready or blocked. It is not needed in a single process system because the single process is never suspended.17. Mounting a file system makes any files already in the mount point directory inaccessible, so mount points are normally empty. However, a system administrator might want to copy some of the most important files normally located in the mounted directory to the mount point so they could be found in their normal path in an emergency when the mounted device was being checked or repaired.PROBLEM SOLUTIONS FOR CHAPTER 1 318. Fork can fail if there are no free slots left in the process table (and possibly if there is no memory or swap space left). Exec can fail if the file name given does not exist or is not a valid executable file. Unlink can fail if the file to be unlinked does not exist or the calling process does not have the authority to unlink it. 19. If the call fails, for example because fd is incorrect, it can return −1. It can also fail because the disk is full and it is not possible to write the number of bytes requested. On a correct termination, it always returns nbytes.20. It contains the bytes: 1, 5, 9, 2.21. Block special files consist of numbered blocks, each of which can be read or written independently of all the other ones. It is possible to seek to any block and start reading or writing. This is not possible with character special files. 22. System calls do not really have names, other than in a documentation sense. When the library procedure read traps to the kernel, it puts the number of the system call in a register or on the stack. This number is used to index into a table. There is really no name used anywhere. On the other hand, the name of the library procedure is very important, since that is what appears in the program.23. Yes it can, especially if the kernel is a message-passing system.24. As far as program logic is concerned it does not matter whether a call to a library procedure results in a system call. 
But if performance is an issue, if a task can be accomplished without a system call the program will run faster. Every system call involves overhead time in switching from the user context to the kernel context. Furthermore, on a multiuser system the operating system may schedule another process to run when a system call completes, further slowing the progress in real time of a calling process.25. Several UNIX calls have no counterpart in the Win32 API:Link: a Win32 program cannot refer to a file by an alternate name or see it in more than one directory. Also, attempting to create a link is a convenient way to test for and create a lock on a file.Mount and umount: a Windows program cannot make assumptions about standard path names because on systems with multiple disk drives the drive name part of the path may be different.Chmod: Windows programmers have to assume that every user can access every file.Kill: Windows programmers cannot kill a misbehaving program that is not cooperating.4 PROBLEM SOLUTIONS FOR CHAPTER 126. The conversions are straightforward:(a) A micro year is 10−6 × 365× 24× 3600= 31.536 sec. (b) 1000 meters or 1 km.(c) There are 240 bytes, which is 1,099,511,627,776 bytes. (d) It is 6 × 1024 kg. SOLUTIONS TO CHAPTER 2 PROBLEMS1. The transition from blocked to running is conceivable. Suppose that a process is blocked on I/O and the I/O finishes. If the CPU is otherwise idle, the process could go directly from blocked to running. The other missing transition,from ready to blocked, is impossible. A ready process cannot do I/O or anything else that might block it. Only a running process can block.2. You could have a register containing a pointer to the current process table entry. When I/O completed, the CPU would store the current machine state in the current process table entry. Then it would go to the interrupt vector for the interrupting device and fetch a pointer to another process table entry (the service procedure). This process would then be started up.3. Generally, high-level languages do not allow one the kind of access to CPU hardware that is required. For instance, an interrupt handler may be required to enable and disable the interrupt servicing a particular device, or to manipulate data within a process’stack area. Also, interrupt service routines must execute as rapidly as possible.4. There are several reasons for using a separate stack for the kernel. Two of them are as follows. First, you do not want the operating system to crash because a poorly written user program does not allow for enough stack space. Second, if the kernel leaves stack data in a user program’ s memory space upon return from a system call, a malicious user might be able to use this data to find out information about other processes.5. It would be difficult, if not impossible, to keep the file system consistent. Suppose that a client process sends a request to server process 1 to update a file. This process updates the cache entry in its memory. Shortly thereafter, another client process sends a request to server 2 to read that file. Unfortunately, if the file is also cached there, server 2, in its innocence, will return obsolete data. If the first process writes the file through to the disk after caching it, and server 2 checks the disk on every read to see if its cached copy is up-to-date, the system can be made to work, but it is precisely all these disk accesses that the caching system is trying to avoid.PROBLEM SOLUTIONS FOR CHAPTER 2 56. 
When a thread is stopped, it has values in the registers. They must be saved, just as when the process is stopped the registers must be saved. Timesharing threads is no different than timesharing processes, so each thread needs its own register save area.7. No. If a single-threaded process is blocked on the keyboard, it cannot fork.8. A worker thread will block when it has to read a Web page from the disk. If user-level threads are being used, this action will block the entire process, destroying the value of multithreading. Thus it is essential that kernel threads are used to permit some threads to block without affecting the others.9. Threads in a process cooperate. They are not hostile to one another. If yielding is needed for the good of the application, then a thread will yield. After all, it is usually the same programmer who writes the code for all of them.10. User-level threads cannot be preempted by the clock uless the whole process’ quantum has been used up. Kernel-level threads can be preempted individually. In the latter case, if a thread runs too long, the clock will interrupt the current process and thus the current thread. The kernel is free to pick adifferent thread from the same process to run next if it so desires.11. In the single-threaded case, the cache hits take 15 msec and cache misses take 90 msec. The weighted average is 2/3×15+ 1/3 ×90. Thus the mean request takes 40 msec and the server can do 25 per second. For a multithreaded server, all the waiting for the disk is overlapped, so every request takes 15 msec, and the server can handle 66 2/3 requests per second.12. Yes. If the server is entirely CPU bound, there is no need to have multiple threads. It just adds unnecessary complexity. As an example, consider a telephone directory assistance number (like 555-1212) for an area with 1 million people. If each (name, telephone number) record is, say, 64 characters, th e entire database takes 64 megabytes, and can easily be kept in the server’ s memory to provide fast lookup.13. The pointers are really necessary because the size of the global variable is unknown. It could be anything from a character to an array of floating-point numbers. If the value were stored, one would have to give the size to create3global, which is all right, but what type should the second parameter of set3global be, and what type should the value of read3global be?14. It could happen that the runtime system is precisely at the point of blocking or unblocking a thread, and is busy manipulating the scheduling queues. This would be a very inopportune moment for the clock interrupt handler to begin inspecting those queues to see if it was time to do thread switching, since they might be in an inconsistent state. One solution is to set a flag when the runtime system is entered. The clock handler would see this and set its own flag,6 PROBLEM SOLUTIONS FOR CHAPTER 2then return. When the runtime system finished, it would check the clock flag, see that a clock interrupt occurred, and now run the clock handler.15. Yes it is possible, but inefficient. A thread wanting to do a system call first sets an alarm timer, then does the call. If the call blocks, the timer returns control to the threads package. Of course, most of the time the call will not block, and the timer has to be cleared. Thus each system call that might block has to be executed as three system calls. If timers go off prematurely, all kinds of problems can develop. This is not an attractive way to build a threads package.16. 
The priority inversion problem occurs when a low-priority process is in its critical region and suddenly a high-priority process becomes ready and is scheduled. If it uses busy waiting, it will run forever. With user-level threads, it cannot happen that a low-priority thread is suddenly preempted to allow a high-priority thread run. There is no preemption. With kernel-level threads this problem can arise.17. Each thread calls procedures on its own, so it must have its own stack for the local variables, return addresses, and so on. This is equally true for user-level threads as for kernel-level threads.18. A race condition is a situation in which two (or more) processes are about to perform some action. Depending on the exact timing, one or the other goesfirst. If one of the processes goes first, everything works, but if another one goes first, a fatal error occurs.19. Yes. The simulated computer could be multiprogrammed. For example, while process A is running, it reads out some shared variable. Then a simulated clock tick happens and process B runs. It also reads out the same variable. Then it adds 1 to the variable. When process A runs, if it also adds one to the variable, we have a race condition.20. Yes, it still works, but it still is busy waiting, of course.21. It certainly works with preemptive scheduling. In fact, it was designed for that case. When scheduling is nonpreemptive, it might fail. Consider the case in which turn is initially 0 but process 1 runs first. It will just loop forever and never release the CPU.22. Yes it can. The memory word is used as a flag, with 0 meaning that no one is using the critical variables and 1 meaning that someone is using them. Put a 1 in the register, and swap the memory word and the register. If the register contains a 0 after the swap, access has been granted. If it contains a 1, access has been denied. When a process is done, it stores a 0 in the flag in memory. PROBLEM SOLUTIONS FOR CHAPTER 2 723. To do a semaphore operation, the operating system first disables interrupts. Then it reads the value of the semaphore. If it is doing a down and the semaphore is equal to zero, it puts the calling process on a list of blocked processes associated with the semaphore. If it is doing an up, it must check to see if any processes are blocked on the semaphore. If one or more processes are blocked, one of then is removed from the list of blocked processes and made runnable. When all these operations have been completed, interrupts can be enabled again.24. Associated with each counting semaphore are two binary semaphores, M, used for mutual exclusion, and B, used for blocking. Also associated with each counting semaphore is a counter that holds the number of up s minus the number of down s, and a list of processes blocked on that semaphore. To implement down, a process first gains exclusive access to the semaphores, counter, and list by doing a down on M. It then decrements the counter. If it is zero or more, it just does an up on M and exits. If M is negative, the process is put on the list of blocked processes. Then an up is done on M and a down is done on B to block the process. To implement up, first M is down ed to get mutual exclusion, and then the counter is incremented. If it is more than zero, no one was blocked, so all that needs to be done is to up M. If, however, the counter is now negative or zero, some process must be removed from the list. Finally, an up is done on B and M in that order.25. 
If the program operates in phases and neither process may enter the next phase until both are finished with the current phase, it makes perfect sense to use a barrier.26. With round-robin scheduling it works. Sooner or later L will run, and eventually it will leave its critical region. The point is, with priority scheduling, Lnever gets to run at all; with round robin, it gets a normal time slice periodically, so it has the chance to leave its critical region.27. With kernel threads, a thread can block on a semaphore and the kernel can run some other thread in the same process. Consequently, there is no problem using semaphores. With user-level threads, when one thread blocks on a semaphore, the kernel thinks the entire process is blocked and does not run it ever again. Consequently, the process fails.28. It is very expensive to implement. Each time any variable that appears in a predicate on which some process is waiting changes, the runtime system must re-evaluate the predicate to see if the process can be unblocked. With the Hoare and Brinch Hansen monitors, processes can only be awakened ona signal primitive.8 PROBLEM SOLUTIONS FOR CHAPTER 229. The employees communicate by passing messages: orders, food, and bags in this case. In UNIX terms, the four processes are connected by pipes. 30. It does not lead to race conditions (nothing is ever lost), but it is effectively busy waiting.31. If a philosopher blocks, neighbors can later see that he is hungry by checking his state, in test, so he can be awakened when the forks are available.32. The change would mean that after a philosopher stopped eating, neither of his neighbors could be chosen next. In fact, they would never be chosen. Suppose that philosopher 2 finished eating. He would run test for philosophers 1 and 3, and neither would be started, even though both were hungry and both forks were available. Similary, if philosopher 4 finished eating, philosopher 3 would not be started. Nothing would start him.33. Variation 1: readers have priority. No writer may start when a reader is active. When a new reader appears, it may start immediately unless a writer is currently active. When a writer finishes, if readers are waiting, they are all started, regardless of the presence of waiting writers. Variation 2: Writers have priority. No reader may start when a writer is waiting. When the last active process finishes, a writer is started, if there is one; otherwise, all the readers (if any) are started. Variation 3: symmetric version. When a reader is active, new readers may start immediately. When a writer finishes, a new writer has priority, if one is waiting. In other words, once we have started reading, we keep reading until there are no readers left. Similarly, once we have started writing, all pending writers are allowed to run.34. It will need nT sec.35. If a process occurs multiple times in the list, it will get multiple quanta per cycle. This approach could be used to give more important processes a larger share of the CPU. But when the process blocks, all entries had better be removed from the list of runnable processes.36. In simple cases it may be possible to determine whether I/O will be limiting by looking at source code. For instance a program that reads all its input filesinto buffers at the start will probably not be I/O bound, but a problem that reads and writes incrementally to a number of different files (such as a compiler) is likely to be I/O bound. 
If the operating system provides a facility such as the UNIX ps command that can tell you the amount of CPU time used by a program , you can compare this with total time to complete execution of the program. This is, of course, most meaningful on a system where you are the only user.37. For multiple processes in a pipeline, the common parent could pass to the operating system information about the flow of data. With this information PROBLEM SOLUTIONS FOR CHAPTER 2 9the OS could, for instance, determine which process could supply output to a process blocking on a call for input.38. The CPU efficiency is the useful CPU time divided by the total CPU time. When Q ≥ T, the basic cycle is for the process to run for T and undergo a process switch for S. Thus (a) and (b) have an efficiency of T/(S + T). When the quantum is shorter than T, each run of T will require T/Q process switches, wasting a time ST/Q. The efficiency here is thenT + ST/Q T 333333333which reduces to Q/(Q + S), which is the answer to (c). For (d), we just substitute Q for S and find that the efficiency is 50 percent. Finally, for (e), as Q → 0 the efficiency goes to 0.39. Shortest job first is the way to minimize average response time. 0 < X≤ 3: X , 3, 5, 6, 9.3 < X≤ 5: 3, X , 5, 6, 9.5 < X≤ 6: 3, 5, X , 6, 9.6 < X≤ 9: 3, 5, 6, X , 9.X >9: 3, 5, 6, 9, X.40. For round robin, during the first 10 minutes each job gets 1/5 of the CPU. At the end of 10 minutes, C finishes. During the next 8 minutes, each job gets 1/4 of the CPU, after which time D finishes. Then each of the three remaining jobs gets 1/3 of the CPU for 6 minutes, until B finishes, and so on. The finishing times for the five jobs are 10, 18, 24, 28, and 30, for an average of 22 minutes. For priority scheduling, B is run first. After 6 minutes it is finished. The other jobs finish at 14, 24, 26, and 30, for an average of 18.8 minutes. If the jobs run in the order A through E, they finish at 10, 16, 18, 22, and 30, for an average of 19.2 minutes. Finally, shortest job first yields finishing times of 2, 6, 12, 20, and 30, for an average of 14 minutes.41. The first time it gets 1 quantum. On succeeding runs it gets 2, 4, 8, and 15, so it must be swapped in 5 times.42. A check could be made to see if the program was expecting input and did anything with it. A program that was not expecting input and did not process it would not get any special priority boost.43. The sequence of predictions is 40, 30, 35, and now 25.44. The fraction of the CPU used is 35/50 + 20/100 + 10/200 + x/250. To beschedulable, this must be less than 1. Thus x must be less than 12.5 msec. 45. Two-level scheduling is needed when memory is too small to hold all the ready processes. Some set of them is put into memory, and a choice is made 10 PROBLEM SOLUTIONS FOR CHAPTER 2from that set. From time to time, the set of in-core processes is adjusted. This algorithm is easy to implement and reasonably efficient, certainly a lot better than say, round robin without regard to whether a process was in memory or not.46. The kernel could schedule processes by any means it wishes, but within each process it runs threads strictly in priority order. By letting the user process set the priority of its own threads, the user controls the policy but the kernel handles the mechanism.47. A possible shell script might beif [ ! 
–f numbers ]; then echo 0 > numbers; fi count=0 while (test $count != 200 ) docount=‘expr $count + 1 ‘ n=‘tail –1 numbers‘ expr $n + 1 >>numbers doneRun the script twice simultaneously, by starting it once in the background (using &) and again in the foreground. Then examine the file numbers . It will probably start out looking like an orderly list of numbers, but at some point it will lose its orderliness, due to the race condition created by running two copies of the script. The race can be avoided by having each copy of the script test for and set a lock on the file before entering the critical area, and unlocking it upon leaving the critical area. This can be done like this:if ln numbers numbers.lock then n=‘tail –1 numbers‘ expr $n + 1 >>numbersrm numbers.lock fiThis version will just skip a turn when the file is inaccessible, variant solutions could put the process to sleep, do busy waiting, or count only loops in which the operation is successful.SOLUTIONS TO CHAPTER 3 PROBLEMS1. In the U.S., consider a presidential election in which three or more candidates are trying for the nomination of some party. After all the primary electionsPROBLEM SOLUTIONS FOR CHAPTER 3 11are finished, when the delegates arrive at the party convention, it could happen that no candidate has a majority and that no delegate is willing to change his or her vote. This is a deadlock. Each candidate has some resources (votes) but needs more to get the job done. In countries with multiple political parties in the parliament, it could happen that each party supports a different version of the annual budget and that it is impossible to assemble a majority to pass the budget. This is also a deadlock.2. If the printer starts to print a file before the entire file has been received (this is often allowed to speed response), the disk may fill with other requests that can’ t be printed until the first file is done, but which use up disk space needed to receive the file currently being printed. If the spooler does not start to print a file until the entire file has been spooled it can reject a request that is too big.Starting to print a file is equivalent to reserving the printer; if the reservation is deferred until it is known that the entire file can be received, a deadlock of the entire system can be avoided. The user with the fil e that won’ t fit is still deadlocked of course, and must go to another facility that permits printing bigger files.3. The printer is nonpreemptable; the system cannot start printing another job until the previous one is complete. The spool disk is preemptable; you can delete an incomplete file that is growing too large and have the user send it later, assuming the protocol allows that4. Yes. It does not make any difference whatsoever.5. Yes, illegal graphs exist. We stated that a resource may only be held by a single process. An arc from a resource square to a process circle indicates that the process owns the resource. Thus a square with arcs going from it to two or more processes means that all those processes hold the resource, which violates the rules. Consequently, any graph in which multiple arcs leave a square and end in different circles violates the rules. Arcs from squares to squares or from circles to circles also violate the rules.6. 
A portion of all such resources could be reserved for use only by processes owned by the administrator, so he or she could always run a shell and programs needed to evaluate a deadlock and make decisions about which processes to kill to make the system usable again.7. Neither change leads to deadlock. There is no circular wait in either case.8. Voluntary relinquishment of a resource is most similar to recovery through preemption. The essential difference is that computer processes are not expected to solve such problems on their own. Preemption is analogous to the operator or the operating system acting as a policeman, overriding the normal rules individual processes obey.12 PROBLEM SOLUTIONS FOR CHAPTER 39. The process is asking for more resources than the system has. There is no conceivable way it can get these resources, so it can never finish, even if no other processes want any resources at all.10. If the system had two or more CPUs, two or more processes could run in parallel, leading to diagonal trajectories.11. Yes. Do the whole thing in three dimensions. The z-axis measures the number of instructions executed by the third process.12. The method can only be used to guide the scheduling if the exact instant at which a resource is going to be claimed is known in advance. In practice, this is rarely the case.13. A request from D is unsafe, but one from C is safe.14. There are states that are neither safe nor deadlocked, but which lead to deadlocked states. As an example, suppose we have four resources: tapes, plotters, scanners, and CD-ROMs, as in the text, and three processes competing for them. We could have the following situation:Has Needs Available。
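The safe/unsafe reasoning of problems 13 and 14 can be sketched as a banker's-algorithm-style safety check. The matrices below are illustrative values chosen for the sketch, not the (truncated) Has/Needs/Available table from the text; the routine only demonstrates the check the solutions describe.

#include <stdio.h>
#include <stdbool.h>

#define N 3 /* processes (illustrative) */
#define M 4 /* resource classes, e.g. tapes, plotters, scanners, CD-ROMs */

/* Returns true if some order exists in which every process can obtain
   what it still needs, finish, and release what it holds. */
static bool is_safe(int has[N][M], int needs[N][M], int avail[M])
{
    int work[M];
    bool done[N] = { false, false, false };

    for (int j = 0; j < M; j++)
        work[j] = avail[j];

    for (int pass = 0; pass < N; pass++) {
        bool progress = false;
        for (int i = 0; i < N; i++) {
            if (done[i])
                continue;
            bool can_finish = true;
            for (int j = 0; j < M; j++)
                if (needs[i][j] > work[j]) { can_finish = false; break; }
            if (can_finish) {
                for (int j = 0; j < M; j++)
                    work[j] += has[i][j]; /* process i finishes and releases */
                done[i] = true;
                progress = true;
            }
        }
        if (!progress)
            break;
    }
    for (int i = 0; i < N; i++)
        if (!done[i])
            return false; /* some process can never finish: unsafe */
    return true;
}

int main(void)
{
    int has[N][M]   = { {1,0,1,0}, {0,1,0,0}, {1,1,0,1} }; /* currently held */
    int needs[N][M] = { {1,1,0,0}, {0,0,1,0}, {0,1,1,0} }; /* still required */
    int avail[M]    = { 0, 1, 1, 0 };

    printf("state is %s\n", is_safe(has, needs, avail) ? "safe" : "unsafe");
    return 0;
}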

现代操作系统第二版中文版答案


MODERN OPERATING SYSTEMS 第一章答案

1. 操作系统必须向用户提供一台扩展的(即虚拟的)机器，并且它必须管理I/O设备和其他系统资源。

2. 多道程序设计就是CPU在内存中的多个进程之间快速切换。

当有一个或多个进程在进行I/O时，它通常被用来使CPU保持忙碌。

3. 输入spooling是把作业(例如从卡片)预先读入磁盘的技术，这样当当前正在执行的进程结束时，就有作业在等候CPU。

输出spooling则是在打印之前先把可打印文件复制到磁盘，而不是在输出产生时直接打印。

个人计算机上不太会有输入spooling，但输出spooling是有的。

4. 多道程序设计的主要原因是在等待I/O完成时让CPU有事可做。

如果没有DMA，CPU在进行I/O时被完全占用，因此(至少就CPU利用率而言)多道程序设计不会带来任何好处。

无论程序做多少I/O操作，CPU都是100%忙碌的。

当然，这里假定主要的延迟是复制数据时的等待。

如果I/O是由于其他原因而缓慢(例如数据从串行线路到达)，CPU就可以去做其他工作。

5. 第二代计算机没有保护操作系统免受恶意用户程序侵害所必需的硬件。

6. 它依然存在。

例如，Intel生产具有各种不同特性(包括速度和功耗)的Pentium I、II、III和4等CPU。

所有这些机器在体系结构上都是兼容的，它们仅在价格和性能上有所不同，而这正是系列机思想的本质。

7. 25×80字符的单色文本屏幕需要2000字节的缓冲区。

1024×768像素、24位颜色的位图需要2,359,296字节。

在1980年，这两种选择将分别耗费$10和$11,520。

至于当前的价格，可以查一下现在RAM的价格，大概每MB不到1美元。
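对第7题的数字可以做如下简单推算(其中1980年约每KB 5美元的单价，是由题目给出的2000字节约$10近似反推出来的假设值)：

25 × 80 = 2000 字节(单色文本缓冲区)
1024 × 768 × 3 = 2,359,296 字节(24位色位图，每像素3字节)
2,359,296 字节 ≈ 2304 KB，2304 KB × $5/KB = $11,520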

8. 选择(a),(c),(d)应该被限制在内核模式。

9. 个人计算机系统总是交互式的，而且常常只有一个用户。

而大型机系统几乎总是面向许多用户，强调批处理或分时。

在大型机系统上，保护是一个重要得多的问题，对所有资源的有效利用也是如此。

10. 每纳秒有一条指令从流水线中出来。

这意味着该机器每秒执行10亿条指令。

流水线有多少级根本无关紧要。

一个每级1纳秒的10级流水线同样是每秒执行10亿条指令。重要的只是完成的指令多久从流水线末端弹出一次。
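可以用一个简单算式说明吞吐率与流水线级数无关，级数只影响单条指令的延迟：

吞吐率 = 1条指令 / 1 ns = 10^9 条指令/秒(与级数k无关)
单条指令的流水延迟 = k × 1 ns(10级流水线即为10 ns)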

现代操作系统(中文第三版)习题答案

7、下面的哪一条指令只能在内核态中使用?
a)禁止所有的中断。
b)读日期-时间时钟。
c)设置日期-时间时钟。
d)改变存储器映像。
答:选择(a)、(c)、(d)应该被限制在内核模式。
8、考虑一个有两个CPU的系统，并且每一个CPU有两个线程(超线程)。假设有三个程序P0、P1、P2，分别以运行时间5ms、10ms、20ms开始。运行这些程序需要多少时间？假设这三个程序都是100%CPU受限的，在运行期间不阻塞，并且一旦指定就不改变CPU。 答：取决于操作系统如何调度，可能需要20ms、25ms或30ms才能运行完这些程序。如果P0和P1被调度到同一个CPU上而P2在另一个CPU上，需要20ms；如果P0和P2在同一个CPU上而P1在另一个CPU上，需要25ms；如果P1和P2在同一个CPU上而P0在另一个CPU上，需要30ms；如果三个程序都在同一个CPU上，则需要35ms。
16、装载文件系统会使装载点目录中原有的任何文件变得不可访问，因此装载点通常是空的。然而，系统管理员可能希望把通常位于被装载目录中的一些最重要的文件复制到装载点，使得在被装载的设备进行检查或修理的紧急情况下，仍能在其通常的路径上找到这些文件。
17、在一个操作系统中系统调用的目的是什么？ 答：系统调用允许用户进程访问和执行内核中的操作系统功能。用户程序使用系统调用来请求操作系统服务。
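下面给出一个最小的示意性C程序(仅作说明，不是教材中的代码)，演示用户程序通过系统调用(这里用POSIX的write)请求操作系统服务并检查其返回值，这也对应下面第19题中对write返回值的讨论：

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *msg = "hello via a system call\n";

    /* write()是系统调用的库封装：陷入内核，由内核完成真正的输出 */
    ssize_t n = write(STDOUT_FILENO, msg, strlen(msg));
    if (n == -1) {
        perror("write");  /* 调用失败时返回-1，并设置errno */
        return 1;
    }
    /* 成功时返回实际写入的字节数，可能小于请求写入的字节数 */
    printf("requested %zu bytes, wrote %zd bytes\n", strlen(msg), n);
    return 0;
}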
18、对于下列系统调用，给出引起失败的条件：fork、exec以及unlink。 答：如果进程表中没有空闲的槽(或者没有内存和交换空间)，fork将失败。如果所给的文件名不存在，或者不是一个有效的可执行文件，exec将失败。如果将要解除链接的文件不存在，或者调用unlink的进程没有权限，则unlink将失败。

19、在count = write(fd, buffer, nbytes);调用中，能在count中而不是nbytes中返回值吗？如果能，为什么？ 答：如果fd不正确，调用失败，将返回−1。同样，如果磁盘满，调用也会失败，要求写入的字节数和实际写入的字节数可能不相等。在正确终止时，总是返回nbytes。

20、有一个文件，其文件描述符是fd，内含下列字节序列：3，1，4，1，5，9，2，6，5，3，5。 有如下系统调用：
12、在用户程序进行一个系统调用以读写磁盘文件时，该程序指明所需要的文件、一个指向数据缓冲区的指针以及计数。然后，控制权转给操作系统，它调用相关的驱动程序。假设驱动程序启动磁盘并且直到中断发生才终止。在从磁盘读的情况下，很明显，调用者会被阻塞(因为文件中还没有数据)。在向磁盘写时会发生什么情况？需要把调用者阻塞一直等到磁盘传送完成为止吗？ 答：也许。如果调用者取回控制权并立即重写数据，那么当写操作最终发生时，将会写入错误的数据。然而，如果驱动程序在返回之前先把数据复制到一个专用的缓冲区，那么调用者就可以立即继续执行。另一个可能性是允许调用者继续执行，并且在缓冲区可以再用时给它一个信号，但是这需要很高的技巧，而且容易出错。

现代操作系统习题及答案


随着科技的不断进步和发展，现代操作系统成为了计算机领域的重要组成部分。

操作系统作为计算机硬件和软件之间的桥梁,起着管理和协调资源的重要作用。

为了更好地理解和掌握现代操作系统的相关知识,下面将给出一些习题及其答案,希望能够对读者有所帮助。

1. 什么是操作系统?它的主要功能是什么?答:操作系统是一种软件,它管理和控制计算机硬件资源,并为用户和应用程序提供接口。

其主要功能包括进程管理、内存管理、文件系统管理和设备管理等。

2. 请简要描述进程和线程的区别。

答:进程是程序的执行实例,拥有独立的内存空间和系统资源。

线程是进程中的一个执行单元,共享进程的资源,但拥有独立的执行路径。

进程之间相互独立,而线程之间可以共享数据和资源。
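下面是一段示意性的C代码(POSIX线程，编译时需加-lpthread)，演示同一进程内的线程共享全局数据，这正是线程区别于进程的关键之一：

#include <pthread.h>
#include <stdio.h>

static int counter = 0;                      /* 进程内所有线程共享的全局变量 */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);           /* 用互斥锁避免竞争条件 */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);       /* 两个线程的修改都可见：200000 */
    return 0;
}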

3. 什么是死锁?如何避免死锁的发生?答:死锁是指两个或多个进程互相等待对方释放资源,导致所有进程无法继续执行的情况。

为避免死锁的发生，可以设法破坏产生死锁的必要条件：在可能的场合避免互斥(例如采用假脱机技术)、破坏“占有并等待”条件(例如一次性申请全部资源)、允许资源被抢占、破坏“循环等待”条件(例如按固定顺序申请资源)；也可以引入银行家算法等资源预先分配/避免策略。
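作为破坏“循环等待”条件的一个小例子(示意性质，非教材代码)，下面的C程序让所有线程都按同一固定顺序申请两把互斥锁，从而不可能出现互相等待的环路：

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* 所有需要同时持有两把锁的代码都先取lock_a、再取lock_b，
   消除了“你等我、我等你”的环路，从而避免死锁。 */
static void *task(void *name)
{
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    printf("%s holds both locks\n", (const char *)name);
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, "thread 1");
    pthread_create(&t2, NULL, task, "thread 2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}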

4. 请简要描述虚拟内存的概念及其作用。

答:虚拟内存是一种计算机系统使用的内存管理技术,它将物理内存和磁盘空间结合起来,为每个进程提供了一个虚拟地址空间。

虚拟内存的作用包括扩大可用内存空间、提供更高的内存访问效率、实现进程间的内存隔离等。
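下面的C示例(仅作示意)用fork演示进程间的内存隔离：父子进程中同一个变量位于相同的虚拟地址，但各自拥有独立的副本，互不影响。

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int value = 1;
    pid_t pid = fork();            /* 复制出一个子进程，各自拥有独立的虚拟地址空间 */

    if (pid == 0) {                /* 子进程 */
        value = 100;               /* 只修改子进程自己地址空间里的副本 */
        printf("child:  value=%d at %p\n", value, (void *)&value);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent: value=%d at %p\n", value, (void *)&value);  /* 仍为1 */
    return 0;
}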

5. 什么是文件系统?请简要描述文件系统的组织结构。

答:文件系统是操作系统中用于管理和存储文件的一种机制。

文件系统的组织结构包括文件、目录和文件描述符等。

文件是存储在磁盘上的数据集合,目录是用于组织和管理文件的一种结构,文件描述符是操作系统中对文件进行操作的抽象。
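下面是一个简短的C示例(文件名example.txt是假设的，仅作演示)，说明文件描述符作为文件操作抽象的用法：open返回文件描述符，后续的read和close都以它为句柄。

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[64];

    int fd = open("example.txt", O_RDONLY);     /* 假设的文件名，仅作演示 */
    if (fd == -1) {
        perror("open");                          /* 打开失败：返回-1并设置errno */
        return 1;
    }

    ssize_t n = read(fd, buf, sizeof(buf) - 1);  /* 通过文件描述符读取数据 */
    if (n >= 0) {
        buf[n] = '\0';
        printf("read %zd bytes: %s\n", n, buf);
    }
    close(fd);                                   /* 用完后关闭文件描述符 */
    return 0;
}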

6. 请简要描述操作系统的进程调度算法。

答:操作系统的进程调度算法决定了进程在系统中的执行顺序。

常见的进程调度算法包括先来先服务(FCFS)、最短作业优先(SJF)、轮转调度(RR)和优先级调度等。

不同的调度算法有不同的优缺点,可以根据系统需求选择合适的算法。
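下面用一段示意性的C代码比较先来先服务(FCFS)与最短作业优先(SJF)：假设5个作业同时到达，运行时间(分钟)为示例数据，程序分别计算两种算法下的平均周转时间。

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* 所有作业同时到达、逐个运行时，平均周转时间 = 各作业完成时间之和 / 作业数 */
static double avg_turnaround(const int *burst, int n)
{
    double finish = 0.0, sum = 0.0;
    for (int i = 0; i < n; i++) {
        finish += burst[i];
        sum += finish;
    }
    return sum / n;
}

int main(void)
{
    int jobs[] = { 10, 6, 2, 4, 8 };          /* 假设的运行时间(分钟) */
    int n = (int)(sizeof(jobs) / sizeof(jobs[0]));

    printf("FCFS: %.1f min\n", avg_turnaround(jobs, n));  /* 按到达顺序运行 */

    qsort(jobs, n, sizeof(int), cmp_int);                 /* SJF：最短作业排在最前 */
    printf("SJF:  %.1f min\n", avg_turnaround(jobs, n));
    return 0;
}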

现代操作系统(第三版)答案


MODERNOPERATINGSYSTEMSTHIRD EDITION PROBLEM SOLUTIONSANDREW S.TANENBAUMVrije UniversiteitAmsterdam,The NetherlandsPRENTICE HALLUPPER SADDLE RIVER,NJ07458Copyright Pearson Education,Inc.2008SOLUTIONS TO CHAPTER1PROBLEMS1.Multiprogramming is the rapid switching of the CPU between multiple proc-esses in memory.It is commonly used to keep the CPU busy while one or more processes are doing I/O.2.Input spooling is the technique of reading in jobs,for example,from cards,onto the disk,so that when the currently executing processes arefinished, there will be work waiting for the CPU.Output spooling consists offirst copying printablefiles to disk before printing them,rather than printing di-rectly as the output is generated.Input spooling on a personal computer is not very likely,but output spooling is.3.The prime reason for multiprogramming is to give the CPU something to dowhile waiting for I/O to complete.If there is no DMA,the CPU is fully occu-pied doing I/O,so there is nothing to be gained(at least in terms of CPU utili-zation)by multiprogramming.No matter how much I/O a program does,the CPU will be100%busy.This of course assumes the major delay is the wait while data are copied.A CPU could do other work if the I/O were slow for other reasons(arriving on a serial line,for instance).4.It is still alive.For example,Intel makes Pentium I,II,and III,and4CPUswith a variety of different properties including speed and power consumption.All of these machines are architecturally compatible.They differ only in price and performance,which is the essence of the family idea.5.A25×80character monochrome text screen requires a2000-byte buffer.The1024×768pixel24-bit color bitmap requires2,359,296bytes.In1980these two options would have cost$10and$11,520,respectively.For current prices,check on how much RAM currently costs,probably less than$1/MB.6.Consider fairness and real time.Fairness requires that each process be allo-cated its resources in a fair way,with no process getting more than its fair share.On the other hand,real time requires that resources be allocated based on the times when different processes must complete their execution.A real-time process may get a disproportionate share of the resources.7.Choices(a),(c),and(d)should be restricted to kernel mode.8.It may take20,25or30msec to complete the execution of these programsdepending on how the operating system schedules them.If P0and P1are scheduled on the same CPU and P2is scheduled on the other CPU,it will take20mses.If P0and P2are scheduled on the same CPU and P1is scheduled on the other CPU,it will take25msec.If P1and P2are scheduled on the same CPU and P0is scheduled on the other CPU,it will take30msec.If all three are on the same CPU,it will take35msec.2PROBLEM SOLUTIONS FOR CHAPTER19.Every nanosecond one instruction emerges from the pipeline.This means themachine is executing1billion instructions per second.It does not matter at all how many stages the pipeline has.A10-stage pipeline with1nsec per stage would also execute1billion instructions per second.All that matters is how often afinished instruction pops out the end of the pipeline.10.Average access time=0.95×2nsec(word is cache)+0.05×0.99×10nsec(word is in RAM,but not in cache)+0.05×0.01×10,000,000nsec(word on disk only)=5002.395nsec=5.002395μsec11.The manuscript contains80×50×700=2.8million characters.This is,ofcourse,impossible tofit into the registers of any currently available CPU and is too big for a1-MB cache,but if such hardware were available,the manuscript could be scanned 
in2.8msec from the registers or5.8msec from the cache.There are approximately27001024-byte blocks of data,so scan-ning from the disk would require about27seconds,and from tape2minutes7 seconds.Of course,these times are just to read the data.Processing and rewriting the data would increase the time.12.Maybe.If the caller gets control back and immediately overwrites the data,when the writefinally occurs,the wrong data will be written.However,if the driverfirst copies the data to a private buffer before returning,then the caller can be allowed to continue immediately.Another possibility is to allow the caller to continue and give it a signal when the buffer may be reused,but this is tricky and error prone.13.A trap instruction switches the execution mode of a CPU from the user modeto the kernel mode.This instruction allows a user program to invoke func-tions in the operating system kernel.14.A trap is caused by the program and is synchronous with it.If the program isrun again and again,the trap will always occur at exactly the same position in the instruction stream.An interrupt is caused by an external event and its timing is not reproducible.15.The process table is needed to store the state of a process that is currentlysuspended,either ready or blocked.It is not needed in a single process sys-tem because the single process is never suspended.16.Mounting afile system makes anyfiles already in the mount point directoryinaccessible,so mount points are normally empty.However,a system admin-istrator might want to copy some of the most importantfiles normally located in the mounted directory to the mount point so they could be found in their normal path in an emergency when the mounted device was being repaired.PROBLEM SOLUTIONS FOR CHAPTER13 17.A system call allows a user process to access and execute operating systemfunctions inside the er programs use system calls to invoke operat-ing system services.18.Fork can fail if there are no free slots left in the process table(and possibly ifthere is no memory or swap space left).Exec can fail if thefile name given does not exist or is not a valid executablefile.Unlink can fail if thefile to be unlinked does not exist or the calling process does not have the authority to unlink it.19.If the call fails,for example because fd is incorrect,it can return−1.It canalso fail because the disk is full and it is not possible to write the number of bytes requested.On a correct termination,it always returns nbytes.20.It contains the bytes:1,5,9,2.21.Time to retrieve thefile=1*50ms(Time to move the arm over track#50)+5ms(Time for thefirst sector to rotate under the head)+10/100*1000ms(Read10MB)=155ms22.Block specialfiles consist of numbered blocks,each of which can be read orwritten independently of all the other ones.It is possible to seek to any block and start reading or writing.This is not possible with character specialfiles.23.System calls do not really have names,other than in a documentation sense.When the library procedure read traps to the kernel,it puts the number of the system call in a register or on the stack.This number is used to index into a table.There is really no name used anywhere.On the other hand,the name of the library procedure is very important,since that is what appears in the program.24.Yes it can,especially if the kernel is a message-passing system.25.As far as program logic is concerned it does not matter whether a call to a li-brary procedure results in a system call.But if performance is an issue,if a task can be accomplished 
without a system call the program will run faster.Every system call involves overhead time in switching from the user context to the kernel context.Furthermore,on a multiuser system the operating sys-tem may schedule another process to run when a system call completes, further slowing the progress in real time of a calling process.26.Several UNIX calls have no counterpart in the Win32API:Link:a Win32program cannot refer to afile by an alternative name or see it in more than one directory.Also,attempting to create a link is a convenient way to test for and create a lock on afile.4PROBLEM SOLUTIONS FOR CHAPTER1Mount and umount:a Windows program cannot make assumptions about standard path names because on systems with multiple disk drives the drive name part of the path may be different.Chmod:Windows uses access control listsKill:Windows programmers cannot kill a misbehaving program that is not cooperating.27.Every system architecture has its own set of instructions that it can execute.Thus a Pentium cannot execute SPARC programs and a SPARC cannot exe-cute Pentium programs.Also,different architectures differ in bus architecture used(such as VME,ISA,PCI,MCA,SBus,...)as well as the word size of the CPU(usually32or64bit).Because of these differences in hardware,it is not feasible to build an operating system that is completely portable.A highly portable operating system will consist of two high-level layers---a machine-dependent layer and a machine independent layer.The machine-dependent layer addresses the specifics of the hardware,and must be implemented sepa-rately for every architecture.This layer provides a uniform interface on which the machine-independent layer is built.The machine-independent layer has to be implemented only once.To be highly portable,the size of the machine-dependent layer must be kept as small as possible.28.Separation of policy and mechanism allows OS designers to implement asmall number of basic primitives in the kernel.These primitives are sim-plified,because they are not dependent of any specific policy.They can then be used to implement more complex mechanisms and policies at the user level.29.The conversions are straightforward:(a)A micro year is10−6×365×24×3600=31.536sec.(b)1000meters or1km.(c)There are240bytes,which is1,099,511,627,776bytes.(d)It is6×1024kg.SOLUTIONS TO CHAPTER2PROBLEMS1.The transition from blocked to running is conceivable.Suppose that a processis blocked on I/O and the I/Ofinishes.If the CPU is otherwise idle,the proc-ess could go directly from blocked to running.The other missing transition, from ready to blocked,is impossible.A ready process cannot do I/O or any-thing else that might block it.Only a running process can block.PROBLEM SOLUTIONS FOR CHAPTER25 2.You could have a register containing a pointer to the current process tableentry.When I/O completed,the CPU would store the current machine state in the current process table entry.Then it would go to the interrupt vector for the interrupting device and fetch a pointer to another process table entry(the ser-vice procedure).This process would then be started up.3.Generally,high-level languages do not allow the kind of access to CPU hard-ware that is required.For instance,an interrupt handler may be required to enable and disable the interrupt servicing a particular device,or to manipulate data within a process’stack area.Also,interrupt service routines must exe-cute as rapidly as possible.4.There are several reasons for using a separate stack for the kernel.Two ofthem are as 
follows.First,you do not want the operating system to crash be-cause a poorly written user program does not allow for enough stack space.Second,if the kernel leaves stack data in a user program’s memory space upon return from a system call,a malicious user might be able to use this data tofind out information about other processes.5.If each job has50%I/O wait,then it will take20minutes to complete in theabsence of competition.If run sequentially,the second one willfinish40 minutes after thefirst one starts.With two jobs,the approximate CPU utiliza-tion is1−0.52.Thus each one gets0.375CPU minute per minute of real time.To accumulate10minutes of CPU time,a job must run for10/0.375 minutes,or about26.67minutes.Thus running sequentially the jobsfinish after40minutes,but running in parallel theyfinish after26.67minutes.6.It would be difficult,if not impossible,to keep thefile system consistent.Sup-pose that a client process sends a request to server process1to update afile.This process updates the cache entry in its memory.Shortly thereafter,anoth-er client process sends a request to server2to read thatfile.Unfortunately,if thefile is also cached there,server2,in its innocence,will return obsolete data.If thefirst process writes thefile through to the disk after caching it, and server2checks the disk on every read to see if its cached copy is up-to-date,the system can be made to work,but it is precisely all these disk ac-cesses that the caching system is trying to avoid.7.No.If a single-threaded process is blocked on the keyboard,it cannot fork.8.A worker thread will block when it has to read a Web page from the disk.Ifuser-level threads are being used,this action will block the entire process, destroying the value of multithreading.Thus it is essential that kernel threads are used to permit some threads to block without affecting the others.9.Yes.If the server is entirely CPU bound,there is no need to have multiplethreads.It just adds unnecessary complexity.As an example,consider a tele-phone directory assistance number(like555-1212)for an area with1million6PROBLEM SOLUTIONS FOR CHAPTER2people.If each(name,telephone number)record is,say,64characters,the entire database takes64megabytes,and can easily be kept in the server’s memory to provide fast lookup.10.When a thread is stopped,it has values in the registers.They must be saved,just as when the process is stopped the registers must be saved.Multipro-gramming threads is no different than multiprogramming processes,so each thread needs its own register save area.11.Threads in a process cooperate.They are not hostile to one another.If yield-ing is needed for the good of the application,then a thread will yield.After all,it is usually the same programmer who writes the code for all of them. 
er-level threads cannot be preempted by the clock unless the whole proc-ess’quantum has been used up.Kernel-level threads can be preempted indivi-dually.In the latter case,if a thread runs too long,the clock will interrupt the current process and thus the current thread.The kernel is free to pick a dif-ferent thread from the same process to run next if it so desires.13.In the single-threaded case,the cache hits take15msec and cache misses take90msec.The weighted average is2/3×15+1/3×90.Thus the mean re-quest takes40msec and the server can do25per second.For a multithreaded server,all the waiting for the disk is overlapped,so every request takes15 msec,and the server can handle662/3requests per second.14.The biggest advantage is the efficiency.No traps to the kernel are needed toswitch threads.The biggest disadvantage is that if one thread blocks,the en-tire process blocks.15.Yes,it can be done.After each call to pthread create,the main programcould do a pthread join to wait until the thread just created has exited before creating the next thread.16.The pointers are really necessary because the size of the global variable isunknown.It could be anything from a character to an array offloating-point numbers.If the value were stored,one would have to give the size to create global,which is all right,but what type should the second parameter of set global be,and what type should the value of read global be?17.It could happen that the runtime system is precisely at the point of blocking orunblocking a thread,and is busy manipulating the scheduling queues.This would be a very inopportune moment for the clock interrupt handler to begin inspecting those queues to see if it was time to do thread switching,since they might be in an inconsistent state.One solution is to set aflag when the run-time system is entered.The clock handler would see this and set its ownflag, then return.When the runtime systemfinished,it would check the clockflag, see that a clock interrupt occurred,and now run the clock handler.PROBLEM SOLUTIONS FOR CHAPTER27 18.Yes it is possible,but inefficient.A thread wanting to do a system callfirstsets an alarm timer,then does the call.If the call blocks,the timer returns control to the threads package.Of course,most of the time the call will not block,and the timer has to be cleared.Thus each system call that might block has to be executed as three system calls.If timers go off prematurely,all kinds of problems can develop.This is not an attractive way to build a threads package.19.The priority inversion problem occurs when a low-priority process is in itscritical region and suddenly a high-priority process becomes ready and is scheduled.If it uses busy waiting,it will run forever.With user-level threads,it cannot happen that a low-priority thread is suddenly preempted to allow a high-priority thread run.There is no preemption.With kernel-level threads this problem can arise.20.With round-robin scheduling it works.Sooner or later L will run,and eventu-ally it will leave its critical region.The point is,with priority scheduling,L never gets to run at all;with round robin,it gets a normal time slice periodi-cally,so it has the chance to leave its critical region.21.Each thread calls procedures on its own,so it must have its own stack for thelocal variables,return addresses,and so on.This is equally true for user-level threads as for kernel-level threads.22.Yes.The simulated computer could be multiprogrammed.For example,while process A is running,it reads out some shared variable.Then a 
simula-ted clock tick happens and process B runs.It also reads out the same vari-able.Then it adds1to the variable.When process A runs,if it also adds one to the variable,we have a race condition.23.Yes,it still works,but it still is busy waiting,of course.24.It certainly works with preemptive scheduling.In fact,it was designed forthat case.When scheduling is nonpreemptive,it might fail.Consider the case in which turn is initially0but process1runsfirst.It will just loop forever and never release the CPU.25.To do a semaphore operation,the operating systemfirst disables interrupts.Then it reads the value of the semaphore.If it is doing a down and the sema-phore is equal to zero,it puts the calling process on a list of blocked processes associated with the semaphore.If it is doing an up,it must check to see if any processes are blocked on the semaphore.If one or more processes are block-ed,one of them is removed from the list of blocked processes and made run-nable.When all these operations have been completed,interrupts can be enabled again.8PROBLEM SOLUTIONS FOR CHAPTER226.Associated with each counting semaphore are two binary semaphores,M,used for mutual exclusion,and B,used for blocking.Also associated with each counting semaphore is a counter that holds the number of up s minus the number of down s,and a list of processes blocked on that semaphore.To im-plement down,a processfirst gains exclusive access to the semaphores, counter,and list by doing a down on M.It then decrements the counter.If it is zero or more,it just does an up on M and exits.If M is negative,the proc-ess is put on the list of blocked processes.Then an up is done on M and a down is done on B to block the process.To implement up,first M is down ed to get mutual exclusion,and then the counter is incremented.If it is more than zero,no one was blocked,so all that needs to be done is to up M.If, however,the counter is now negative or zero,some process must be removed from the list.Finally,an up is done on B and M in that order.27.If the program operates in phases and neither process may enter the nextphase until both arefinished with the current phase,it makes perfect sense to use a barrier.28.With kernel threads,a thread can block on a semaphore and the kernel canrun some other thread in the same process.Consequently,there is no problem using semaphores.With user-level threads,when one thread blocks on a semaphore,the kernel thinks the entire process is blocked and does not run it ever again.Consequently,the process fails.29.It is very expensive to implement.Each time any variable that appears in apredicate on which some process is waiting changes,the run-time system must re-evaluate the predicate to see if the process can be unblocked.With the Hoare and Brinch Hansen monitors,processes can only be awakened on a signal primitive.30.The employees communicate by passing messages:orders,food,and bags inthis case.In UNIX terms,the four processes are connected by pipes.31.It does not lead to race conditions(nothing is ever lost),but it is effectivelybusy waiting.32.It will take nT sec.33.In simple cases it may be possible to determine whether I/O will be limitingby looking at source code.For instance a program that reads all its inputfiles into buffers at the start will probably not be I/O bound,but a problem that reads and writes incrementally to a number of differentfiles(such as a compi-ler)is likely to be I/O bound.If the operating system provides a facility such as the UNIX ps command that can tell you the amount of CPU 
time used by a program,you can compare this with the total time to complete execution of the program.This is,of course,most meaningful on a system where you are the only user.34.For multiple processes in a pipeline,the common parent could pass to the op-erating system information about the flow of data.With this information the OS could,for instance,determine which process could supply output to a process blocking on a call for input.35.The CPU efficiency is the useful CPU time divided by the total CPU time.When Q ≥T ,the basic cycle is for the process to run for T and undergo a process switch for S .Thus (a)and (b)have an efficiency of T /(S +T ).When the quantum is shorter than T ,each run of T will require T /Q process switches,wasting a time ST /Q .The efficiency here is thenT +ST /QT which reduces to Q /(Q +S ),which is the answer to (c).For (d),we just sub-stitute Q for S and find that the efficiency is 50%.Finally,for (e),as Q →0the efficiency goes to 0.36.Shortest job first is the way to minimize average response time.0<X ≤3:X ,3,5,6,9.3<X ≤5:3,X ,5,6,9.5<X ≤6:3,5,X ,6,9.6<X ≤9:3,5,6,X ,9.X >9:3,5,6,9,X.37.For round robin,during the first 10minutes each job gets 1/5of the CPU.Atthe end of 10minutes,C finishes.During the next 8minutes,each job gets 1/4of the CPU,after which time D finishes.Then each of the three remaining jobs gets 1/3of the CPU for 6minutes,until B finishes,and so on.The fin-ishing times for the five jobs are 10,18,24,28,and 30,for an average of 22minutes.For priority scheduling,B is run first.After 6minutes it is finished.The other jobs finish at 14,24,26,and 30,for an average of 18.8minutes.If the jobs run in the order A through E ,they finish at 10,16,18,22,and 30,for an average of 19.2minutes.Finally,shortest job first yields finishing times of 2,6,12,20,and 30,for an average of 14minutes.38.The first time it gets 1quantum.On succeeding runs it gets 2,4,8,and 15,soit must be swapped in 5times.39.A check could be made to see if the program was expecting input and didanything with it.A program that was not expecting input and did not process it would not get any special priority boost.40.The sequence of predictions is 40,30,35,and now 25.41.The fraction of the CPU used is35/50+20/100+10/200+x/250.To beschedulable,this must be less than1.Thus x must be less than12.5msec. 
42.Two-level scheduling is needed when memory is too small to hold all theready processes.Some set of them is put into memory,and a choice is made from that set.From time to time,the set of in-core processes is adjusted.This algorithm is easy to implement and reasonably efficient,certainly a lot better than,say,round robin without regard to whether a process was in memory or not.43.Each voice call runs200times/second and uses up1msec per burst,so eachvoice call needs200msec per second or400msec for the two of them.The video runs25times a second and uses up20msec each time,for a total of 500msec per second.Together they consume900msec per second,so there is time left over and the system is schedulable.44.The kernel could schedule processes by any means it wishes,but within eachprocess it runs threads strictly in priority order.By letting the user process set the priority of its own threads,the user controls the policy but the kernel handles the mechanism.45.The change would mean that after a philosopher stopped eating,neither of hisneighbors could be chosen next.In fact,they would never be chosen.Sup-pose that philosopher2finished eating.He would run test for philosophers1 and3,and neither would be started,even though both were hungry and both forks were available.Similarly,if philosopher4finished eating,philosopher3 would not be started.Nothing would start him.46.If a philosopher blocks,neighbors can later see that she is hungry by checkinghis state,in test,so he can be awakened when the forks are available.47.Variation1:readers have priority.No writer may start when a reader is ac-tive.When a new reader appears,it may start immediately unless a writer is currently active.When a writerfinishes,if readers are waiting,they are all started,regardless of the presence of waiting writers.Variation2:Writers have priority.No reader may start when a writer is waiting.When the last ac-tive processfinishes,a writer is started,if there is one;otherwise,all the readers(if any)are started.Variation3:symmetric version.When a reader is active,new readers may start immediately.When a writerfinishes,a new writer has priority,if one is waiting.In other words,once we have started reading,we keep reading until there are no readers left.Similarly,once we have started writing,all pending writers are allowed to run.48.A possible shell script might beif[!–f numbers];then echo0>numbers;ficount=0while(test$count!=200)docount=‘expr$count+1‘n=‘tail–1numbers‘expr$n+1>>numbersdoneRun the script twice simultaneously,by starting it once in the background (using&)and again in the foreground.Then examine thefile numbers.It will probably start out looking like an orderly list of numbers,but at some point it will lose its orderliness,due to the race condition created by running two cop-ies of the script.The race can be avoided by having each copy of the script test for and set a lock on thefile before entering the critical area,and unlock-ing it upon leaving the critical area.This can be done like this:if ln numbers numbers.lockthenn=‘tail–1numbers‘expr$n+1>>numbersrm numbers.lockfiThis version will just skip a turn when thefile is inaccessible,variant solu-tions could put the process to sleep,do busy waiting,or count only loops in which the operation is successful.SOLUTIONS TO CHAPTER3PROBLEMS1.It is an accident.The base register is16,384because the program happened tobe loaded at address16,384.It could have been loaded anywhere.The limit register is16,384because the program contains16,384bytes.It could have been any 
length.That the load address happens to exactly match the program length is pure coincidence.2.Almost the entire memory has to be copied,which requires each word to beread and then rewritten at a different location.Reading4bytes takes10nsec, so reading1byte takes2.5nsec and writing it takes another2.5nsec,for a total of5nsec per byte compacted.This is a rate of200,000,000bytes/sec.To copy128MB(227bytes,which is about1.34×108bytes),the computer needs227/200,000,000sec,which is about671msec.This number is slightly pessimistic because if the initial hole at the bottom of memory is k bytes, those k bytes do not need to be copied.However,if there are many holes andmany data segments,the holes will be small,so k will be small and the error in the calculation will also be small.3.The bitmap needs1bit per allocation unit.With227/n allocation units,this is224/n bytes.The linked list has227/216or211nodes,each of8bytes,for a total of214bytes.For small n,the linked list is better.For large n,the bitmap is better.The crossover point can be calculated by equating these two formu-las and solving for n.The result is1KB.For n smaller than1KB,a linked list is better.For n larger than1KB,a bitmap is better.Of course,the assumption of segments and holes alternating every64KB is very unrealistic.Also,we need n<=64KB if the segments and holes are64KB.4.Firstfit takes20KB,10KB,18KB.Bestfit takes12KB,10KB,and9KB.Worstfit takes20KB,18KB,and15KB.Nextfit takes20KB,18KB,and9 KB.5.For a4-KB page size the(page,offset)pairs are(4,3616),(8,0),and(14,2656).For an8-KB page size they are(2,3616),(4,0),and(7,2656).6.They built an MMU and inserted it between the8086and the bus.Thus all8086physical addresses went into the MMU as virtual addresses.The MMU then mapped them onto physical addresses,which went to the bus.7.(a)M has to be at least4,096to ensure a TLB miss for every access to an ele-ment of X.Since N only affects how many times X is accessed,any value of N will do.(b)M should still be atleast4,096to ensure a TLB miss for every access to anelement of X.But now N should be greater than64K to thrash the TLB, that is,X should exceed256KB.8.The total virtual address space for all the processes combined is nv,so thismuch storage is needed for pages.However,an amount r can be in RAM,so the amount of disk storage required is only nv−r.This amount is far more than is ever needed in practice because rarely will there be n processes ac-tually running and even more rarely will all of them need the maximum al-lowed virtual memory.9.The page table contains232/213entries,which is524,288.Loading the pagetable takes52msec.If a process gets100msec,this consists of52msec for loading the page table and48msec for running.Thus52%of the time is spent loading page tables.10.(a)We need one entry for each page,or224=16×1024×1024entries,sincethere are36=48−12bits in the page numberfield.。

Modern Operating Systems: Chinese Answers


Part 1: Collected operating-system exercise answers

1. (Fixed partitioning) supports multiprogramming and is the simplest to manage, but leaves many memory fragments; (segmentation) keeps memory fragments as small as possible and gives the highest memory utilization.
2. For a virtual-memory system to serve its intended purpose, the programs it runs should exhibit good locality.
3. Improving memory utilization is achieved mainly through the memory-allocation function, whose basic task is to (allocate memory) to each program. Letting each program run without interference from the others is achieved mainly through the (memory protection) function.
4. In storage management suitable for multiprogramming, memory protection exists to prevent jobs from interfering with one another.
5. (Segmented storage management) is the method that favors dynamic linking of programs.
6. In a demand-paging system the page table is extended with several fields; the status (present) bit is consulted on (program accesses to memory).
7. Concerning demand segmentation, the correct statement is: (the size of a single segment is limited by the size of memory, but the total size of a job is not).
8. The characteristics of virtual memory are based on the (principle of locality).
9. The key technique for implementing virtual memory is (demand paging (or demand segmentation)).
10. "Thrashing" is caused by (a poorly chosen replacement algorithm).
11. In a demand-paging system the page table is extended with several fields; the modified (dirty) bit is consulted when (pages are swapped out).
12. Virtual memory lets a program address a space larger than physical memory.
13. Measurements of a demand-paging system show a CPU utilization of 20%, a utilization of 97.7% for the disk holding the swap space, and 5% for the other devices. From this one can conclude that the system is misbehaving. In this situation, (reducing the number of running processes) will raise CPU utilization.
14. In a demand-paging system, if the page number in a logical address exceeds the page-table length stored in the page-table control register, a (bounds-violation trap) is raised.
15. Measurements of a demand-paging system show a CPU utilization of 20%, a utilization of 97.7% for the disk holding the swap space, and 5% for the other devices. From this one can conclude that the system is misbehaving. In this situation, (adding memory modules to increase the amount of physical memory) will raise CPU utilization.
16. Management of the swap area on secondary storage should aim mainly at (maximizing swap-in/swap-out speed), while management of the file area should aim mainly at (maximizing utilization of the storage space).
17. In a demand-paging system, if the page that is needed is not in memory, a (page fault) is raised.
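Items 14 and 17 describe the two exceptional cases of paged address translation. The C sketch below shows where each check sits; the structure and function names are illustrative only and are not taken from any particular kernel or textbook.

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u                    /* assumed page size */

struct pte { bool present; uint32_t frame; };

struct page_table {
    uint32_t length;                       /* page-table length register */
    struct pte *entries;
};

/* Translate a logical address into a physical one; the two error returns
   correspond to items 14 and 17 above. */
int translate(const struct page_table *pt, uint32_t laddr, uint32_t *paddr)
{
    uint32_t page   = laddr / PAGE_SIZE;
    uint32_t offset = laddr % PAGE_SIZE;

    if (page >= pt->length)                /* item 14: page number beyond the table length */
        return -1;                         /* bounds-violation trap */
    if (!pt->entries[page].present)        /* item 17: page not in memory */
        return -2;                         /* page fault; the OS must bring the page in */

    *paddr = pt->entries[page].frame * PAGE_SIZE + offset;
    return 0;
}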

Modern Operating Systems: End-of-Chapter Answers

Chapter 2: Process Management. Part 1: textbook exercises (p. 81).

3. Why does the concurrent execution of programs give rise to intermittent (stop-and-go) execution? (p. 36)

4. Why do programs lose their closure property and reproducibility when executed concurrently? (p. 37)
[Answer] When programs execute concurrently, several programs share the system's resources, so the state of those resources is changed by several programs and execution loses its closure property. Having lost closure, it in turn loses reproducibility: after repeated runs, the computed results depend on the relative execution speeds of the concurrent programs, so execution is no longer reproducible.

5. Why is the concept of a process introduced in operating systems? (p. 37) What consequences does it have?
[Answer] The process concept is introduced to make the concurrent execution of multiple programs possible. A traditional program cannot execute concurrently with other programs; only after a process has been created for it can it run concurrently with other programs (processes). The reason is that a concurrently executing program (that is, a process) runs in a stop-and-go fashion: only with a process created for it can its context be saved in its PCB when it stops, and restored from the PCB when it is next scheduled, so that execution continues where it left off. A traditional program cannot meet this requirement. The benefit of creating processes is that multiple programs can execute concurrently, which greatly improves resource utilization and system throughput. Managing processes does have a cost, however: the memory occupied by process control blocks and the coordinating machinery, and the time spent on process switching, synchronization, and communication.

6. Compare processes and programs with respect to dynamism, concurrency, and independence. (p. 37)
[Answer] (1) Dynamism: since a process is the execution of a process entity, dynamism is its most fundamental characteristic. Dynamism also shows in that a process "comes into being by creation, runs by being scheduled, pauses when it cannot obtain resources, and disappears when it is destroyed." A process therefore has a definite lifetime, whereas a program is merely an ordered collection of instructions stored on some medium; it carries no notion of activity in itself and is a static entity.
(2) Concurrency: process concurrency means that multiple process entities reside in memory at the same time and can run within the same period. Concurrency is an important characteristic of processes, and it has also become an important characteristic of the OS. Indeed, the purpose of introducing processes is precisely to let a program execute concurrently with the programs of other processes, something programs by themselves cannot do.
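As a small illustration of the point in question 5, the POSIX sketch below creates a second process with fork so that parent and child can be scheduled concurrently; the printed messages are only for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* create a second process */

    if (pid < 0) {
        perror("fork");                 /* e.g., no free slot in the process table */
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        /* child: scheduled independently; the kernel keeps its context in its own PCB */
        printf("child %d running\n", (int)getpid());
        exit(EXIT_SUCCESS);
    } else {
        printf("parent %d created child %d\n", (int)getpid(), (int)pid);
        waitpid(pid, NULL, 0);          /* wait for the child to finish */
    }
    return 0;
}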

Modern Operating Systems (4th Edition): Answers

Chapter 5: Input/Output exercises

1. Advances in chip technology have made it possible to put an entire controller, including all the bus-access logic, on an inexpensive chip. What effect does this have on the model of Fig. 1-5?
Answer: (The question is slightly off; it should refer to Fig. 1-6.) In that figure, one controller serves two devices. A single controller can handle several devices, so not every device needs a controller of its own. If controllers become almost free, it is simplest to build the controller into the device itself. This design also allows several transfers to proceed in parallel and thus gives better performance.

2. Given the speeds listed in Fig. 5-1, is it possible to scan documents from a scanner at full speed and transmit them over an 802.11g network? Explain your answer.
Answer: Yes, easily. The scanner's maximum rate is 400 KB/sec, while the bus and the disk both run at 16.7 MB/sec, so neither the disk nor the bus comes anywhere close to saturation.

3. Figure 5-3(b) shows one way of having memory-mapped I/O even in the presence of separate buses for memory and for I/O devices: first try the memory bus and, if that fails, try the I/O bus. A clever computer-science student has thought of an improvement: try both buses in parallel, to speed up access to I/O devices. What do you think of this idea?
Answer: It is not a good idea. The memory bus is surely faster than the I/O bus. An ordinary memory request completes on the memory bus first, while the I/O bus is still busy. If the CPU had to wait for the I/O bus to finish as well, memory performance would be dragged down to the level of the I/O bus.

4. Suppose that a system uses DMA for data transfers from the disk controller to main memory. Further assume that it takes t2 ns on average to acquire the bus and t1 ns to transfer one word over the bus (t1 >> t2). After the CPU has programmed the DMA controller, how long will it take to transfer 1000 words from the disk controller to main memory if (a) word-at-a-time mode is used, (b) burst mode is used? Assume that commanding the disk controller requires acquiring the bus to send one word, and that acknowledging a transfer also requires acquiring the bus to send one word.
Answer: (a) 1000 × [(t1 + t2) + (t1 + t2) + (t1 + t2)]. The first (t1 + t2) acquires the bus and sends the command to the disk controller, the second transfers the word, and the third is for the acknowledgement.
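Part (b) is not worked out in this excerpt. The sketch below shows the arithmetic for both modes under the assumption, which is mine rather than the book's, that burst mode acquires the bus once for the command, once for the entire 1000-word burst, and once for the acknowledgement; the numeric values of t1 and t2 are placeholders.

#include <stdio.h>

int main(void)
{
    const double t1 = 100.0;   /* ns to transfer one word on the bus (placeholder) */
    const double t2 = 10.0;    /* ns to acquire the bus (placeholder) */
    const int    n  = 1000;    /* number of words */

    /* (a) word-at-a-time: command, one-word transfer, and acknowledgement per word */
    double word_mode  = n * (3.0 * (t1 + t2));

    /* (b) burst mode (assumed): one command, one bus acquisition for the whole
       n-word burst, one acknowledgement */
    double burst_mode = (t1 + t2) + (t2 + n * t1) + (t1 + t2);

    printf("word-at-a-time: %.0f ns\n", word_mode);
    printf("burst mode:     %.0f ns\n", burst_mode);
    return 0;
}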

Modern Operating Systems (4th Edition): File-System Chapter Answers

Chapter 4: File Systems exercises

Q1: Give five different path names for the file /etc/passwd. (Hint: think about the directory entries "." and "..".)
A:
/etc/passwd
/./etc/passwd
/././etc/passwd
/./././etc/passwd
/etc/../etc/passwd
/etc/../etc/../etc/passwd
/etc/../etc/../etc/../etc/passwd
/etc/../etc/../etc/../etc/../etc/passwd

Q2: In Windows, when a user double-clicks on a file listed by Explorer, a program is run with that file as a parameter. Give two different ways the operating system could know which program to run.

A: Windows uses file extensions: each extension corresponds to a file type and to some program that handles that type. Another way is to remember which program created the file and run that program; the Macintosh works this way.

Q3: In early UNIX systems, executable files (a.out files) began with a very specific magic number, not one chosen at random. These files had a header, followed by the text and data segments. Why did executable files need a very specific magic number, while other file types had a more or less random first word?
A: These systems loaded the program directly into memory and began executing at word 0, which was the magic number. To avoid executing the header as code, the magic number was a branch instruction whose target address was just past the header. That way the binary file could be read directly into the new process's address space and run starting at 0.

Q4: Is the open system call in UNIX absolutely essential? What would the consequences be of not having it?
A: The purpose of open is to bring the file's attributes and list of disk addresses into memory so that later calls can access them quickly. First of all, without open, every read would have to name the file to be read. The system would then have to fetch its i-node each time; the i-node could be cached, but that raises the problem of when to write it back to disk. It could be written back after a timeout; that is somewhat clumsy, but it might work.
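For reference, the usual pattern that open makes possible looks like the following POSIX sketch: the name is resolved once, and later reads use only the small file descriptor while the kernel keeps the i-node information at hand.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[512];
    ssize_t n;

    int fd = open("/etc/passwd", O_RDONLY);        /* resolve the name and load the i-node once */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    while ((n = read(fd, buf, sizeof(buf))) > 0)   /* later calls use only the descriptor */
        write(STDOUT_FILENO, buf, (size_t)n);

    close(fd);                                     /* release the per-process file state */
    return 0;
}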


37. (b) Priority scheduling
Execution order: B (0-6), E (6-14), A (14-24), C (24-26), D (26-30).
Turnaround time of B = 6 minutes, E = 14 minutes, A = 24 minutes, C = 26 minutes, D = 30 minutes.
Average = (6 + 14 + 24 + 26 + 30) / 5 = 20 minutes.
Key points for solving problems with semaphores
Steps: (1) identify the synchronization and mutual-exclusion relationships in the problem; (2) decide which semaphores to use; (3) give the semaphores their initial values (the usual values for mutual-exclusion and synchronization semaphores); (4) place the P and V operations (the order of the P operations must not be reversed; the order of the V operations is arbitrary).
51. int w_count = 0, m_count = 0;
semaphore w_mutex = 1, m_mutex = 1;
semaphore bathroom = 1;

woman_wants_to_enter:
    down(&w_mutex);
    w_count = w_count + 1;
    if (w_count == 1) down(&bathroom);
    up(&w_mutex);

woman_leaves:
    down(&w_mutex);
    w_count = w_count - 1;
    if (w_count == 0) up(&bathroom);
    up(&w_mutex);
Exercise: There is a plate on the table that can hold only one piece of fruit at a time. The father only puts apples on the plate and the mother only puts oranges on it; the son waits only for oranges from the plate and the daughter waits only for apples. Whenever the plate is empty, the father or the mother may place a fruit on it; only when the plate holds the fruit he or she wants may the son or the daughter take it. Give the synchronization relationships among the four people and write a program, using P and V operations, that makes their activities correct.

Problem analysis

37. (a) For round robin, during the first 10 minutes each job gets 1/5 of the CPU. At the end of 10 minutes, C finishes. During the next 8 minutes, each job gets 1/4 of the CPU, after which time D finishes. Then each of the three remaining jobs gets 1/3 of the CPU for 6 minutes, until B finishes, and so on. The finishing times for the five jobs are 10, 18, 24, 28, and 30, for an average of 22 minutes. Average = (10 + 18 + 24 + 28 + 30) / 5 = 22 minutes.
Figure 2-8 A multithreaded Web server
14. The biggest advantage is efficiency: no traps to the kernel are needed to switch threads. The biggest disadvantage is that if one thread blocks, the entire process blocks.

28. With kernel threads, a thread can block on a semaphore and the kernel can run some other thread in the same process. Consequently, there is no problem using semaphores. With user-level threads, when one thread blocks on a semaphore, the kernel thinks the entire process is blocked and does not run it ever again. Consequently, the process fails.

31. It does not lead to race conditions (nothing is ever lost), but it is effectively busy waiting.
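A small POSIX illustration of the point in answer 28: with kernel-level threads, one thread can block in sem_wait while the kernel keeps running its sibling. This is a sketch using standard pthreads and POSIX semaphores, not code taken from the book.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t sem;

static void *waiter(void *arg)
{
    (void)arg;
    sem_wait(&sem);                 /* blocks this thread only, not the whole process */
    printf("waiter: woken up\n");
    return NULL;
}

static void *worker(void *arg)
{
    (void)arg;
    printf("worker: still running while the waiter is blocked\n");
    sleep(1);                       /* pretend to do useful work */
    sem_post(&sem);                 /* let the waiter continue */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&sem, 0, 0);
    pthread_create(&t1, NULL, waiter, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&sem);
    return 0;
}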

Supplementary problem 2: Consider the following set of processes, with the length of the CPU burst time given in milliseconds:

Process   Duration   Priority   Arrival Time
P1        10         3          0
P2        1          1          0
P3        2          5          0
P4        1          4          0
P5        5          2          0

51. (continued)

man_wants_to_enter:
    down(&m_mutex);
    m_count = m_count + 1;
    if (m_count == 1) down(&bathroom);
    up(&m_mutex);

man_leaves:
    down(&m_mutex);
    m_count = m_count - 1;
    if (m_count == 0) up(&bathroom);
    up(&m_mutex);
37. (d) SJF
Execution order: C (0-2), D (2-6), B (6-12), E (12-20), A (20-30).
Turnaround time of C = 2 minutes, D = 6 minutes, B = 12 minutes, E = 20 minutes, A = 30 minutes.
Average = (2 + 6 + 12 + 20 + 30) / 5 = 14 minutes.


8. A worker thread will block when it has to read a Web page from the disk. If user-level threads are being used, this action will block the entire process, destroying the value of multithreading. Thus it is essential that kernel threads are used to permit some threads to block without affecting the others.

11. Threads in a process cooperate. They are not hostile to one another. If yielding is needed for the good of the application, then a thread will yield. After all, it is usually the same programmer who writes the code for all of them.

Relationships among the four people
The father and the mother must use the plate in mutual exclusion, so the relationship between them is mutual exclusion. The apples the father puts on the plate are eaten by the daughter, so those two are in a synchronization relationship; the oranges the mother puts on the plate are eaten by the son, so those two are also in a synchronization relationship.

Semaphore setup
One semaphore indicates whether a fruit may be placed on the plate, one semaphore indicates whether an orange may be taken, and one semaphore indicates whether an apple may be taken.

Set up three semaphores s, so, and sa, with initial values 1, 0, and 0, indicating respectively whether a fruit may be placed on the plate, whether an orange may be taken, and whether an apple may be taken.

semaphore s = 1, so = 0, sa = 0;

main()
{
    father();        /* the four run concurrently */
    mother();
    daughter();
    son();
}

father()
{
    while (true) {
        p(s);
        put an apple on the plate;
        v(sa);
    }
}

daughter()
{
    while (true) {
        p(sa);
        take the apple;
        v(s);
        eat the apple;
    }
}

(mother() and son() are symmetric, using the semaphore so for oranges.)
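For readers who want to run the scheme, the following is a self-contained POSIX version of the same idea (a sketch, not part of the original answer); each role performs three rounds so the program terminates.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t plate;    /* 1 = the plate is empty       */
static sem_t apple;    /* 1 = an apple is on the plate */
static sem_t orange;   /* 1 = an orange is on the plate */

static void *father(void *arg) {
    (void)arg;
    for (int i = 0; i < 3; i++) { sem_wait(&plate); puts("father: put an apple"); sem_post(&apple); }
    return NULL;
}
static void *mother(void *arg) {
    (void)arg;
    for (int i = 0; i < 3; i++) { sem_wait(&plate); puts("mother: put an orange"); sem_post(&orange); }
    return NULL;
}
static void *daughter(void *arg) {
    (void)arg;
    for (int i = 0; i < 3; i++) { sem_wait(&apple); puts("daughter: ate an apple"); sem_post(&plate); }
    return NULL;
}
static void *son(void *arg) {
    (void)arg;
    for (int i = 0; i < 3; i++) { sem_wait(&orange); puts("son: ate an orange"); sem_post(&plate); }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    sem_init(&plate, 0, 1);            /* the plate starts out empty */
    sem_init(&apple, 0, 0);
    sem_init(&orange, 0, 0);
    pthread_create(&t[0], NULL, father, NULL);
    pthread_create(&t[1], NULL, mother, NULL);
    pthread_create(&t[2], NULL, daughter, NULL);
    pthread_create(&t[3], NULL, son, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}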
A reader entering the reading room (getin):
    while (TRUE) {
        P(seats);      /* if there is no free seat, leave */
        P(mutex);      /* enter the critical section */
        fill in the registration table;
        enter the reading room and read;
        V(mutex);      /* leave the critical section */
        V(readers);
    }

A reader leaving the reading room (getout):
    while (TRUE) {
        P(readers);    /* is anyone reading in the room? */
        P(mutex);      /* enter the critical section */
        cancel the registration;
        leave the reading room;
        V(mutex);      /* leave the critical section */
        V(seats);      /* release one seat */
    }
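The getin and getout fragments above rely on three semaphores whose declarations are not shown in this excerpt; presumably something along these lines, where N (the number of seats in the reading room) is an assumed constant:

#define N 100                 /* assumed number of seats in the reading room */
semaphore seats = N;          /* free seats remaining */
semaphore mutex = 1;          /* protects the registration table */
semaphore readers = 0;        /* readers currently inside */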
Supplementary problem 2 (continued): The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
a. Draw four Gantt charts illustrating the execution of these processes under FCFS, SJF, non-preemptive priority (a smaller priority number means a higher priority), and RR scheduling (time quantum = 1).
b. What is the turnaround time of each process for each scheduling algorithm in part a?
c. What is the waiting time of each process for each scheduling algorithm in part a?
d. Which of the schedules in part a gives the minimal average waiting time?
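For the three non-preemptive policies the averages can be checked with a few lines of C; the execution orders below follow directly from the table (SJF by burst length, priority with the smallest number first). Round robin with a quantum of 1 needs a small simulation and is not shown here. This is a sketch, not a model answer.

#include <stdio.h>

/* Average turnaround and waiting time for a fixed non-preemptive execution
   order; all processes arrive at time 0, so waiting time = start time and
   turnaround time = completion time. */
static void report(const char *label, const int burst[], int n)
{
    int t = 0, sum_turnaround = 0, sum_waiting = 0;
    for (int i = 0; i < n; i++) {
        sum_waiting += t;
        t += burst[i];
        sum_turnaround += t;
    }
    printf("%-8s avg turnaround = %.1f ms, avg waiting = %.1f ms\n",
           label, (double)sum_turnaround / n, (double)sum_waiting / n);
}

int main(void)
{
    const int fcfs[] = {10, 1, 2, 1, 5};   /* P1, P2, P3, P4, P5 in arrival order       */
    const int sjf[]  = {1, 1, 2, 5, 10};   /* P2, P4, P3, P5, P1 by burst length        */
    const int prio[] = {1, 5, 10, 1, 2};   /* P2, P5, P1, P4, P3, smallest number first */

    report("FCFS", fcfs, 5);
    report("SJF", sjf, 5);
    report("Priority", prio, 5);
    return 0;
}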