Operating Systems: Selected End-of-Chapter Exercise Answers

Operating Systems, 5th Edition (Fei Xianglin): Reference Answers to Selected Exercises

1. Exercise 1
a) Description:
- A system call is one of a set of interfaces the operating system provides to user programs for accessing operating-system functions and services.
- A system call executes in kernel mode on behalf of the calling process; it is entered through a trap (software interrupt) and returns a status value indicating the result of the call.
b) Answer: The main purpose of system calls is to give user programs a safe, controlled way to use the privileged functions of the operating system. Through system calls, a user program can perform file operations, network communication, process management, and other services.

2. Exercise 2
a) Description:
- A process is an instance of a program in execution.
- A process consists of program code, associated data, and an execution context.
- Each process has its own virtual address space, register state, and resources.
- Processes are switched and scheduled by the operating system's scheduling mechanism.
b) Answer: The main characteristics of a process are concurrency, independence, and asynchrony. Concurrency means that multiple processes can exist and execute at the same time; independence means that each process has its own resources and execution context; asynchrony means that the order and timing of process execution are not predetermined.

3. Exercise 3
a) Description:
- Deadlock is a state in which two or more processes can make no further progress because they are competing for a limited set of resources.
- Deadlock can arise only when mutual exclusion, hold and wait, no preemption, and circular wait all hold.
b) Answer: Deadlock prevention and avoidance are important problems in operating systems. Prevention works by breaking one of the necessary conditions, for example the mutual-exclusion condition or the hold-and-wait condition; avoidance uses techniques such as resource-allocation graphs and the banker's algorithm.
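
To make the avoidance idea above concrete, here is a minimal Python sketch of the banker's-algorithm safety check. It is only an illustration: the number of processes, the resource types, and the matrix values are hypothetical example data, not taken from the exercise.

```python
def is_safe(available, allocation, need):
    """Banker's algorithm safety check: True if some order lets every process finish."""
    work = available[:]                      # resources currently free
    finish = [False] * len(allocation)       # which processes can run to completion
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            # pick an unfinished process whose remaining need fits in work
            if not finish[i] and all(n <= w for n, w in zip(nd, work)):
                work = [w + a for w, a in zip(work, alloc)]  # it finishes, releasing its resources
                finish[i] = True
                progress = True
    return all(finish)

# Hypothetical state: 3 resource types, 5 processes
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))   # True -> a safe sequence exists
```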

4. Exercise 4
a) Description:
- Page-replacement algorithms are an important means by which the operating system manages virtual memory.
- Their goal is to manage a large number of processes and pages effectively within a limited amount of physical memory.
- Common page-replacement algorithms include FIFO, LRU, and LFU.
b) Answer: The choice of page-replacement algorithm depends on the system's specific requirements and resource constraints. FIFO is the simplest algorithm: it always evicts the page that entered memory earliest. LRU evicts the page that has gone unused for the longest time (least recently used), and LFU evicts the page that has been accessed the fewest times (least frequently used).

5. Exercise 5
a) Description:
- The file system is the set of services and data structures in the operating system responsible for managing files and directories.

Operating Systems, 2nd Edition: End-of-Chapter Exercise Answers

The operating system is an important area of computer science: it manages the computer's hardware and software resources and gives users a good working environment. When studying operating systems, end-of-chapter exercises are an important way to consolidate and deepen one's knowledge. This article provides answers to the exercises in the second edition to help readers better understand and master the material.

Chapter 1: Introduction
1. The main functions of an operating system are process management, memory management, file-system management, and device management.
2. A process is an instance of a program in execution. The process control block (PCB) is the data structure the operating system uses to manage a process; it holds the process's state, program counter, register contents, and other information (a minimal sketch of a PCB follows this chapter's answers).
3. Multiprogramming means keeping several programs in memory at the same time and letting them execute alternately under a scheduling algorithm such as time-slice rotation.
4. Asynchronous input/output means that a program can issue I/O operations during its execution without having to wait for the I/O to complete.
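
As a rough illustration of item 2 above, the following Python sketch shows the kind of per-process bookkeeping a PCB holds. The field names are illustrative assumptions, not the layout of any real kernel.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Minimal process control block: the bookkeeping an OS keeps for each process."""
    pid: int                                          # process identifier
    state: str = "ready"                              # ready / running / blocked
    program_counter: int = 0                          # address of the next instruction
    registers: dict = field(default_factory=dict)     # saved CPU register values
    open_files: list = field(default_factory=list)    # resources held by the process
    priority: int = 0                                 # scheduling priority

p = PCB(pid=1)
p.state = "running"
print(p)
```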

Chapter 2: Process Management
1. The goals of process scheduling include raising system throughput, reducing response time, and improving fairness.
2. Process-scheduling algorithms include first-come, first-served (FCFS), shortest job first (SJF), priority scheduling, and round robin (time-slice rotation).
3. Starvation is the situation in which a process goes without CPU time for a long period; it can be addressed by adjusting priorities or introducing preemption.
4. Deadlock is the state in which several processes wait for one another indefinitely because they are contending for resources; it can be avoided by techniques such as pre-allocating resources or preventing circular wait.

Chapter 3: Memory Management
1. The main tasks of memory management are memory allocation, memory protection, and address translation.
2. Contiguous memory allocation includes fixed-partition allocation and variable (dynamic) partition allocation.
3. Paging and segmentation are common non-contiguous allocation schemes: paging divides a process's address space into fixed-size pages, while segmentation divides it into logical segments (a small address-translation sketch follows this chapter's answers).
4. Page-replacement algorithms include the optimal algorithm, first-in, first-out (FIFO), and least recently used (LRU).
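
To illustrate the paging scheme from item 3 above, here is a small Python sketch of logical-to-physical address translation. The page size and page-table contents are made-up example values, not data from the textbook.

```python
PAGE_SIZE = 256                     # hypothetical page size in bytes
page_table = {0: 5, 1: 9, 2: 3}     # page number -> frame number (toy example)

def translate(logical_addr):
    """Split a logical address into (page, offset) and map the page to a frame."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page not in page_table:
        raise LookupError(f"page fault: page {page} is not resident")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1A5)))        # page 1, offset 0xA5 -> frame 9 -> 0x9a5
```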

Chapter 4: File-System Management
1. A file is the logical unit the operating system uses to store and organize data; it may be a text file, an image file, an audio file, and so on.
2. The main functions of the file system include creating, deleting, reading, and writing files.
3. File systems can be organized using hierarchical directory structures, index structures, bitmap structures, and so on.

Operating Systems: Answers to Selected End-of-Chapter Exercises

Chapter 2: The Operating System's Run-Time Environment

2.2 Why do modern computers provide the two machine states, user mode and supervisor mode? The Intel 80386 provides four different machine states (supervisor mode is further divided into three privilege levels). How do you understand this?
Answer: The Intel 80386 splits the supervisor state, which may execute every instruction, into three privilege levels, and adds the user state, which may execute only non-privileged instructions, giving four machine states in all. Classified by what the processor is allowed to do, these four levels still fall into the two categories of supervisor mode and user mode, so the design is a finer-grained version of the basic two-state model and is fully consistent with it.

2.6 What is the program status word, and what does it mainly contain?
Answer: How does the processor know what state it is currently in, whether it may execute privileged instructions, and which instruction it should execute next? To answer these questions, every computer has a number of special registers: a dedicated register that indicates the next instruction to execute, called the program counter (PC), and a dedicated register that indicates the processor's status, called the program status word (PSW). The processor status recorded in the PSW typically includes: the condition codes, which reflect the result of the most recently executed instruction; the interrupt mask, which indicates whether interrupts are allowed (some machines, such as the PDP-11, use interrupt priority levels); and the CPU mode, supervisor or user, which indicates whether the operating system or an ordinary user program is currently running on the CPU and therefore whether privileged instructions and other special rights may be used.

2.11 How does the CPU detect an interrupt event, and what should be done after one is detected?
Answer: The processor's control unit contains an extra mechanism that can detect interrupts, called the interrupt-scanning mechanism. Normally, at the last moment of each instruction cycle it scans the interrupt register to check whether an interrupt signal has arrived. If there is no interrupt signal, the next instruction is executed as usual. If an interrupt has arrived, the interrupt hardware places the contents of the interrupt trigger, encoded in a prescribed form, into the corresponding bits of the program status word (bits 16-31 on the IBM PC); this value is called the interrupt code. Once the interrupt is detected, the corresponding interrupt-handling routine must be executed. The hardware first performs the following steps:
1. Push the processor's program status word (PSW) onto the stack.
2. Push the instruction pointer IP (the offset within the code segment) and the code segment register CS onto the stack, saving the return address of the interrupted program.
3. Fetch the interrupt vector of the accepted interrupt request (which contains the IP and CS values of the interrupt handler) so that control can transfer to the interrupt-handling routine.

Computer Operating Systems, 4th Edition: End-of-Chapter Exercise Answers

The computer operating system is a vital part of any computer system: it manages and controls the computer's hardware and software resources and gives users a convenient, efficient, and reliable working environment. Below are answers to the end-of-chapter exercises of the fourth edition.

I. The Concept of the Operating System
1. What is an operating system, and what are its main functions? An operating system is the program that manages a computer's hardware and software resources; it is the kernel and foundation of the computer system. Its main functions include processor management, memory management, device management, file management, and user-interface management. Processor management allocates and schedules CPU resources sensibly to raise CPU utilization; memory management handles the allocation, reclamation, and protection of memory space; device management controls and manages external devices effectively; file management handles the storage, retrieval, sharing, and protection of files; and user-interface management gives users a convenient way to operate the system.
2. How are operating systems classified? Operating systems can be classified by different criteria. By number of users, they divide into single-user and multi-user systems; by number of tasks, into single-tasking and multi-tasking systems; and by system function, into batch, time-sharing, real-time, network, and distributed operating systems.

II. Processes
1. What is a process, and how does it differ from a program? A process is one execution of a program on a data set; it is the basic unit of resource allocation and scheduling in the system. The differences are: a program is a static collection of instructions, whereas a process is a dynamic execution; a program can be stored indefinitely, whereas a process has a lifetime; a process has concurrency, whereas a program does not; and a process is made up of the program, its data, and a process control block (PCB).
2. What are the three basic process states, and how do they change into one another? The three basic states are ready, running, and blocked. A process is ready when it has obtained every resource it needs except the CPU and could run as soon as it gets the CPU; it is running while it executes on the CPU; and it is blocked when it cannot continue because it is waiting for some event. Ready changes to running through process scheduling (dispatch); running changes to ready when the time slice expires or a higher-priority process appears; running changes to blocked when the process gives up the CPU to wait for an event; and blocked changes to ready when the awaited event occurs.
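
The transitions just described can be written down as a tiny state machine. The sketch below is only an illustration of the answer above, with the transition causes copied from it:

```python
# Allowed transitions in the three-state process model and their typical causes
TRANSITIONS = {
    ("ready",   "running"): "dispatched by the scheduler",
    ("running", "ready"):   "time slice expired or a higher-priority process appeared",
    ("running", "blocked"): "waiting for an event (e.g. I/O completion)",
    ("blocked", "ready"):   "the awaited event occurred",
}

def move(state, new_state):
    """Apply a transition only if the three-state model allows it."""
    cause = TRANSITIONS.get((state, new_state))
    if cause is None:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    print(f"{state} -> {new_state}: {cause}")
    return new_state

s = "ready"
s = move(s, "running")
s = move(s, "blocked")
s = move(s, "ready")
```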

Operating Systems: End-of-Chapter Exercises and Answers

Chapter 1
1. Which of the following is not an operating system? (C)  A. OS/2  B. UCDOS  C. WPS  D. FEDORA
2. Which of the following is not a function of an operating system? (B)  A. CPU management  B. User management  C. Job management  D. File management
3. In a time-sharing system, for a fixed time-slice length, response is faster when (B).  A. memory is larger  B. there are fewer users  C. there are more users  D. memory is smaller
4. The timeliness of a time-sharing operating system refers to (B).  A. turnaround time  B. response time  C. delay time  D. A, B, and C
5. To obtain a system service while programming, a user must go through (D).  A. process scheduling  B. job scheduling  C. keyboard commands  D. a system call
6. The main drawback of batch systems is (C).  A. low CPU utilization  B. no concurrency  C. no interactivity  D. none of the above

Chapter 2
1. If a semaphore's initial value is 2 and its current value is -3, then (C) processes are waiting on it.  A. 1  B. 2  C. 3  D. 5
2. In an operating system, concurrent processes must be synchronized because (C).  A. processes must finish within a bounded time  B. processes are dynamic  C. concurrent processes are asynchronous  D. processes are structured
3. Which of the following operations cause a new process to be created? (C)  I. a user logs in successfully  II. a device is allocated  III. a program is started  A. only I and II  B. only II and III  C. only I and III  D. I, II, and III
4. In a multi-process system, to keep shared variables consistent, processes must enter their critical sections mutually exclusively. A critical section is (D).  A. a buffer  B. a data area  C. a synchronization mechanism  D. a section of program code
5. The essential difference between a process and a program is (B).  A. main memory versus secondary storage  B. dynamic versus static nature  C. sharing versus exclusive use of computer resources  D. sequential versus non-sequential execution of instructions
6. Which of the following state transitions cannot happen? (A)  A. waiting -> running  B. running -> waiting  C. running -> ready  D. waiting -> ready
7. The state from which a process can move to any of three other states is (D).  A. ready  B. blocked  C. finished  D. running
8. Which statement about processes is correct? (A)  A. a process obtains the CPU through scheduling  B. priority is an important basis for scheduling and, once set, can never change  C. in a single-CPU system some process is executing at every moment  D. when a process's request for the CPU cannot be satisfied, its state becomes blocked
9. When a process's time slice expires and it is forced to give up the CPU, its state becomes (C).

Operating System Concepts, 9th Edition: Answers to Selected Practice Exercises

CHAPTER 9 Virtual Memory — Practice Exercises

9.1 Under what circumstances do page faults occur? Describe the actions taken by the operating system when a page fault occurs.
Answer: A page fault occurs when an access is made to a page that has not been brought into main memory. The operating system verifies the memory access, aborting the program if it is invalid. If it is valid, a free frame is located and I/O is requested to read the needed page into the free frame. Upon completion of the I/O, the process table and page table are updated and the instruction is restarted.

9.2 Assume that you have a page-reference string for a process with m frames (initially all empty). The page-reference string has length p, and n distinct page numbers occur in it. Answer these questions for any page-replacement algorithm:
a. What is a lower bound on the number of page faults?
b. What is an upper bound on the number of page faults?
Answer: a. n   b. p

9.3 Consider the page table shown in Figure 9.30 for a system with 12-bit virtual and physical addresses and with 256-byte pages. The list of free page frames is D, E, F (that is, D is at the head of the list, E is second, and F is last). Convert the following virtual addresses to their equivalent physical addresses in hexadecimal. All numbers are given in hexadecimal. (A dash for a page frame indicates that the page is not in memory.)
• 9EF • 111 • 700 • 0FF
Answer:
• 9EF → 0EF
• 111 → 211
• 700 → D00
• 0FF → EFF

9.4 Consider the following page-replacement algorithms. Rank these algorithms on a five-point scale from "bad" to "perfect" according to their page-fault rate. Separate those algorithms that suffer from Belady's anomaly from those that do not.
a. LRU replacement  b. FIFO replacement  c. Optimal replacement  d. Second-chance replacement
Answer:
Rank  Algorithm       Suffers from Belady's anomaly
1     Optimal         no
2     LRU             no
3     Second-chance   yes
4     FIFO            yes

9.5 Discuss the hardware support required to support demand paging.
Answer: For every memory-access operation, the page table needs to be consulted to check whether the corresponding page is resident and whether the program has read or write privileges for accessing it. These checks have to be performed in hardware. A TLB can serve as a cache and improve the performance of the lookup operation.

9.6 An operating system supports a paged virtual memory, using a central processor with a cycle time of 1 microsecond. It costs an additional 1 microsecond to access a page other than the current one. Pages have 1000 words, and the paging device is a drum that rotates at 3000 revolutions per minute and transfers 1 million words per second. The following statistical measurements were obtained from the system:
• 1 percent of all instructions executed accessed a page other than the current page.
• Of the instructions that accessed another page, 80 percent accessed a page already in memory.
• When a new page was required, the replaced page was modified 50 percent of the time.
Calculate the effective instruction time on this system, assuming that the system is running one process only and that the processor is idle during drum transfers.
Answer:
effective access time = 0.99 × (1 μsec) + 0.008 × (2 μsec) + 0.002 × (10,000 μsec + 1,000 μsec) + 0.001 × (10,000 μsec + 1,000 μsec)
                      = (0.99 + 0.016 + 22.0 + 11.0) μsec
                      ≈ 34.0 μsec
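
The arithmetic in the answer to 9.6 can be checked with a few lines of Python; the terms below simply restate the formula above, with all times in microseconds:

```python
# Reproduce the effective-instruction-time arithmetic of exercise 9.6
# (10,000 us ~ drum latency, 1,000 us ~ transferring a 1000-word page).
cpu         = 0.99  * 1                   # 99% of instructions stay on the current page
in_memory   = 0.008 * 2                   # 0.8% touch another page that is already resident
fault_read  = 0.002 * (10_000 + 1_000)    # 0.2% page-fault: read the needed page in
dirty_write = 0.001 * (10_000 + 1_000)    # half of those faults also write the modified victim out

print(cpu + in_memory + fault_read + dirty_write)   # ~34.0 microseconds, matching the answer
```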
9.7 Consider the two-dimensional array A:
    int A[][] = new int[100][100];
where A[0][0] is at location 200 in a paged memory system with pages of size 200. A small process that manipulates the matrix resides in page 0 (locations 0 to 199); thus, every instruction fetch will be from page 0. For three page frames, how many page faults are generated by the following array-initialization loops, using LRU replacement and assuming that page frame 1 contains the process and the other two are initially empty?
a. for (int j = 0; j < 100; j++)
       for (int i = 0; i < 100; i++)
           A[i][j] = 0;
b. for (int i = 0; i < 100; i++)
       for (int j = 0; j < 100; j++)
           A[i][j] = 0;
Answer: a. 5,000   b. 50

9.8 Consider the following page reference string:
1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6.
How many page faults would occur for the following replacement algorithms, assuming one, two, three, four, five, six, or seven frames? Remember that all frames are initially empty, so your first unique pages will all cost one fault each.
• LRU replacement
• FIFO replacement
• Optimal replacement
Answer:
Number of frames   LRU   FIFO   Optimal
1                  20    20     20
2                  18    18     15
3                  15    16     11
4                  10    14     8
5                  8     10     7
6                  7     10     7
7                  7     7      7
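
The fault counts in the table for 9.8 can be reproduced with a short simulation. The sketch below implements only FIFO and LRU (the optimal algorithm needs look-ahead and is omitted):

```python
REFS = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6]

def faults_fifo(refs, nframes):
    frames, faults = [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)            # evict the page loaded earliest
            frames.append(p)
    return faults

def faults_lru(refs, nframes):
    frames, faults = [], 0               # ordered from least to most recently used
    for p in refs:
        if p in frames:
            frames.remove(p)             # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)            # evict the least recently used page
        frames.append(p)
    return faults

for n in range(1, 8):
    print(n, faults_lru(REFS, n), faults_fifo(REFS, n))
# e.g. with 3 frames this prints 15 LRU faults and 16 FIFO faults, matching the table above
```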
9.9 Suppose that you want to use a paging algorithm that requires a reference bit (such as second-chance replacement or the working-set model), but the hardware does not provide one. Sketch how you could simulate a reference bit even if one were not provided by the hardware, or explain why it is not possible to do so. If it is possible, calculate what the cost would be.
Answer: You can use the valid/invalid bit supported in hardware to simulate the reference bit. Initially set the bit to invalid. On first reference a trap to the operating system is generated. The operating system will set a software bit to 1 and reset the valid/invalid bit to valid.

9.10 You have devised a new page-replacement algorithm that you think may be optimal. In some contorted test cases, Belady's anomaly occurs. Is the new algorithm optimal? Explain your answer.
Answer: No. An optimal algorithm will not suffer from Belady's anomaly because—by definition—an optimal algorithm replaces the page that will not be used for the longest time. Belady's anomaly occurs when a page-replacement algorithm evicts a page that will be needed in the immediate future. An optimal algorithm would not have selected such a page.

9.11 Segmentation is similar to paging but uses variable-sized "pages." Define two segment-replacement algorithms based on the FIFO and LRU page-replacement schemes. Remember that since segments are not the same size, the segment that is chosen to be replaced may not be big enough to leave enough consecutive locations for the needed segment. Consider strategies for systems where segments cannot be relocated and those for systems where they can.
Answer:
a. FIFO. Find the first segment large enough to accommodate the incoming segment. If relocation is not possible and no single segment is large enough, select a combination of segments whose memories are contiguous, which are "closest to the first of the list," and which can accommodate the new segment. If relocation is possible, rearrange the memory so that the first N segments large enough for the incoming segment are contiguous in memory. Add any leftover space to the free-space list in both cases.
b. LRU. Select the segment that has not been used for the longest period of time and that is large enough, adding any leftover space to the free-space list. If no single segment is large enough, select a combination of the "oldest" segments that are contiguous in memory (if relocation is not available) and that are large enough. If relocation is available, rearrange the oldest N segments to be contiguous in memory and replace those with the new segment.

9.12 Consider a demand-paged computer system where the degree of multiprogramming is currently fixed at four. The system was recently measured to determine utilization of the CPU and the paging disk. The results are one of the following alternatives. For each case, what is happening? Can the degree of multiprogramming be increased to increase the CPU utilization? Is the paging helping?
a. CPU utilization 13 percent; disk utilization 97 percent
b. CPU utilization 87 percent; disk utilization 3 percent
c. CPU utilization 13 percent; disk utilization 3 percent
Answer:
a. Thrashing is occurring.
b. CPU utilization is sufficiently high to leave things alone; the degree of multiprogramming can be increased.
c. Increase the degree of multiprogramming.

9.13 We have an operating system for a machine that uses base and limit registers, but we have modified the machine to provide a page table. Can the page tables be set up to simulate base and limit registers? How can they be, or why can they not be?
Answer: The page table can be set up to simulate base and limit registers provided that the memory is allocated in fixed-size segments. In this way, the base of a segment can be entered into the page table and the valid/invalid bit used to indicate that portion of the segment as resident in memory. There will be some problem with internal fragmentation.

9.27 Consider a demand-paging system with the following time-measured utilizations: CPU utilization 20%, paging disk 97.7%, other I/O devices 5%. Which (if any) of the following will (probably) improve CPU utilization? Explain your answer.
a. Install a faster CPU.
b. Install a bigger paging disk.
c. Increase the degree of multiprogramming.
d. Decrease the degree of multiprogramming.
e. Install more main memory.
f. Install a faster hard disk or multiple controllers with multiple hard disks.
g. Add prepaging to the page-fetch algorithms.
h. Increase the page size.
Answer: The system obviously is spending most of its time paging, indicating over-allocation of memory. If the level of multiprogramming is reduced, resident processes would page-fault less frequently and CPU utilization would improve. Another way to improve performance would be to get more physical memory or a faster paging drum.
a. Get a faster CPU—No.
b. Get a bigger paging drum—No.
c. Increase the degree of multiprogramming—No.
d. Decrease the degree of multiprogramming—Yes.
e. Install more main memory—Likely to improve CPU utilization, as more pages can remain resident and not require paging to or from the disks.
f. Install a faster hard disk, or multiple controllers with multiple hard disks—Also an improvement: as the disk bottleneck is removed by faster response and more throughput to the disks, the CPU will get more data more quickly.
g. Add prepaging to the page-fetch algorithms—Again, the CPU will get more data faster, so it will be more in use. This is only the case if the paging action is amenable to prefetching (i.e., some of the access is sequential).
h. Increase the page size—Increasing the page size will result in fewer page faults if data is being accessed sequentially. If data access is more or less random, more paging action could ensue because fewer pages can be kept in memory and more data is transferred per page fault. So this change is as likely to decrease utilization as it is to increase it.

10.1 Is disk scheduling, other than FCFS scheduling, useful in a single-user environment? Explain your answer.
Answer: In a single-user environment, the I/O queue usually is empty. Requests generally arrive from a single process for one block or for a sequence of consecutive blocks. In these cases, FCFS is an economical method of disk scheduling. But LOOK is nearly as easy to program and will give much better performance when multiple processes are performing concurrent I/O, such as when a Web browser retrieves data in the background while the operating system is paging and another application is active in the foreground.

10.2 Explain why SSTF scheduling tends to favor middle cylinders over the innermost and outermost cylinders.
Answer: The center of the disk is the location having the smallest average distance to all other tracks, so the disk head tends to move away from the edges of the disk. Here is another way to think of it: the current location of the head divides the cylinders into two groups. If the head is not in the center of the disk and a new request arrives, the new request is more likely to be in the group that includes the center of the disk; thus, the head is more likely to move in that direction.

10.11 Suppose that a disk drive has 5,000 cylinders, numbered 0 to 4999. The drive is currently serving a request at cylinder 143, and the previous request was at cylinder 125. The queue of pending requests, in FIFO order, is
86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130.
Starting from the current head position, what is the total distance (in cylinders) that the disk arm moves to satisfy all the pending requests for each of the following disk-scheduling algorithms?
a. FCFS  b. SSTF  c. SCAN  d. LOOK  e. C-SCAN
Answer:
a. The FCFS schedule is 143, 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130. The total seek distance is 7081.
b. The SSTF schedule is 143, 130, 86, 913, 948, 1022, 1470, 1509, 1750, 1774. The total seek distance is 1745.
c. The SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 130, 86. The total seek distance is 9769.
d. The LOOK schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 130, 86. The total seek distance is 3319.
e. The C-SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 86, 130. The total seek distance is 9813.
f. (Bonus.) The C-LOOK schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 86, 130. The total seek distance is 3363.
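
The seek totals in 10.11 can likewise be verified with a small Python sketch; only FCFS, SSTF, and LOOK are implemented here, with the head position and request queue taken from the exercise:

```python
HEAD, QUEUE = 143, [86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130]

def fcfs(head, queue):
    """Total seek distance when requests are served in arrival order."""
    total = 0
    for cyl in queue:
        total += abs(cyl - head)
        head = cyl
    return total

def sstf(head, queue):
    """Always serve the pending request closest to the current head position."""
    pending, total = list(queue), 0
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

def look(head, queue, direction=+1):
    """Sweep one way serving requests, then reverse; never move past the last request."""
    upper = sorted(c for c in queue if c >= head)
    lower = sorted((c for c in queue if c < head), reverse=True)
    order = upper + lower if direction > 0 else lower + upper
    return fcfs(head, order)

print(fcfs(HEAD, QUEUE), sstf(HEAD, QUEUE), look(HEAD, QUEUE))   # 7081 1745 3319
```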
CHAPTER 12 File-System Implementation — Practice Exercises

12.1 Consider a file currently consisting of 100 blocks. Assume that the file control block (and the index block, in the case of indexed allocation) is already in memory. Calculate how many disk I/O operations are required for contiguous, linked, and indexed (single-level) allocation strategies if, for one block, the following conditions hold. In the contiguous-allocation case, assume that there is no room to grow at the beginning but there is room to grow at the end. Also assume that the block information to be added is stored in memory.
a. The block is added at the beginning.
b. The block is added in the middle.
c. The block is added at the end.
d. The block is removed from the beginning.
e. The block is removed from the middle.
f. The block is removed from the end.
Answer: The results are:
     Contiguous   Linked   Indexed
a.   201          1        1
b.   101          52       1
c.   1            3        1
d.   198          1        0
e.   98           52       0
f.   0            100      0
12.2 What problems could occur if a system allowed a file system to be mounted simultaneously at more than one location?
Answer: There would be multiple paths to the same file, which could confuse users or encourage mistakes (deleting a file with one path deletes the file in all the other paths).

12.3 Why must the bit map for file allocation be kept on mass storage, rather than in main memory?
Answer: In case of a system crash (memory failure), the free-space list would not be lost, as it would be if the bit map had been stored in main memory.

12.4 Consider a system that supports the strategies of contiguous, linked, and indexed allocation. What criteria should be used in deciding which strategy is best utilized for a particular file?
Answer:
• Contiguous—if the file is usually accessed sequentially and is relatively small.
• Linked—if the file is large and usually accessed sequentially.
• Indexed—if the file is large and usually accessed randomly.

12.5 One problem with contiguous allocation is that the user must preallocate enough space for each file. If the file grows to be larger than the space allocated for it, special actions must be taken. One solution to this problem is to define a file structure consisting of an initial contiguous area (of a specified size). If this area is filled, the operating system automatically defines an overflow area that is linked to the initial contiguous area. If the overflow area is filled, another overflow area is allocated. Compare this implementation of a file with the standard contiguous and linked implementations.
Answer: This method requires more overhead than the standard contiguous allocation and less overhead than the standard linked allocation.

12.6 How do caches help improve performance? Why do systems not use more or larger caches if they are so useful?
Answer: Caches allow components of differing speeds to communicate more efficiently by storing data from the slower device, temporarily, in a faster device (the cache). Caches are, almost by definition, more expensive than the device they are caching for, so increasing the number or size of caches would increase system cost.

12.7 Why is it advantageous for the user for an operating system to dynamically allocate its internal tables? What are the penalties to the operating system for doing so?
Answer: Dynamic tables allow more flexibility as system use grows—tables are never exceeded, avoiding artificial use limits. Unfortunately, kernel structures and code are more complicated, so there is more potential for bugs. The use of one resource can take away more system resources (by growing to accommodate the requests) than with static tables.

12.8 Explain how the VFS layer allows an operating system to support multiple types of file systems easily.
Answer: VFS introduces a layer of indirection in the file-system implementation. In many ways, it is similar to object-oriented programming techniques. System calls can be made generically (independent of file-system type). Each file-system type provides its function calls and data structures to the VFS layer. A system call is translated into the proper specific functions for the target file system at the VFS layer. The calling program has no file-system-specific code, and the upper levels of the system-call structures likewise are file-system independent. The translation at the VFS layer turns these generic calls into file-system-specific operations.

Operating Systems: Answers to Exercises from Chapters 1-8

1.1: The main characteristic of a stored-program computer is centralized, sequential, procedural control:
(1) procedural - it imitates the steps of manual operation;
(2) centralized control - everything is managed centrally by the CPU;
(3) sequential - execution is driven by the program counter.

1.2:
a. Characteristics of batch systems: early batch processing relies on a monitor program, so jobs move through the system automatically until all of them have been processed; offline batch processing is characterized by the host and satellite machines operating in parallel.
b. Characteristics of time-sharing systems:
(1) Simultaneity (multi-access): the many online users sharing one computer can all work on their own programs at their own terminals at the same time.
(2) Exclusivity: the time-sharing OS uses time-slice rotation to make one computer serve many terminal users at once, and each user feels as if the computer belonged to them alone; through time-sharing techniques the OS turns one computer into several virtual computers.
(3) Interactivity: the user and the computer hold an interactive "conversation": the user types a command at the terminal, the system returns information on the screen (or printer), and question and answer alternate until the whole task is finished.
c. Why a time-sharing system responds faster: a batch system's job turnaround time is long, whereas a time-sharing system generally uses time-slice rotation; with one computer connected to many terminal devices, the machine serves many terminal users at the same time, guarantees each of them a sufficiently fast response time, and provides interactive sessions.

1.3: The essential difference between a real-time information-processing system and a time-sharing system: the goal a real-time operating system pursues is to respond to external requests within a strict time limit, with high reliability and integrity. Its main characteristic is that resource allocation and scheduling consider timeliness first and efficiency only second; in addition, a real-time OS should have strong fault tolerance. A time-sharing system works as follows: one host is connected to a number of terminals, each used by one user. Users submit command requests interactively; the system accepts each user's commands, serves the requests by time-slice rotation, and shows the results on the terminal interactively, after which the user issues the next command on the basis of the previous result. The time-sharing OS divides CPU time into segments called time slices and serves the terminal users in turn, one time slice at a time, so that each user is unaware that any other user exists. Time-sharing systems are characterized by multi-access, interactivity, (apparent) exclusivity, and timeliness.
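
As a concrete illustration of the time-slice rotation described above, here is a minimal round-robin sketch in Python. The workload and quantum are hypothetical, and all processes are assumed to arrive at time 0.

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin scheduling; return each process's completion time.
    burst_times maps a process name to the CPU time it still needs."""
    queue = deque(burst_times.items())
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # run for at most one time slice
        clock += run
        remaining -= run
        if remaining > 0:
            queue.append((name, remaining))  # slice used up: back to the end of the queue
        else:
            finish[name] = clock
    return finish

# Hypothetical workload: three interactive users, quantum of 2 time units
print(round_robin({"A": 5, "B": 3, "C": 1}, quantum=2))
```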

Chapter 1
1. What are the main goals in designing a modern OS? Convenience, efficiency, extensibility, and openness.
2. In what ways does an OS play its role? (1) The OS serves as the interface between the user and the computer hardware. (2) The OS serves as the manager of the computer system's resources. (3) The OS provides abstractions of the computer's resources.
4. What are the main driving forces behind the emergence and development of multiprogrammed batch systems? The driving forces came from four kinds of social demand and technical development: (1) the continual push to raise the utilization of computer resources; (2) convenience for users; (3) the constant succession of new hardware devices; (4) the continuing development of computer architecture.
7. What is the key problem in implementing a time-sharing system, and how can it be solved? The key problem is that when a user types a command at the terminal, the system must receive it promptly and process it promptly, returning the result within a delay the user can accept. Solution: for prompt reception, a multiplexer card can be installed so that the host can receive data typed at all terminals simultaneously, and each terminal is given a buffer to hold the commands or data the user types; for prompt processing, all user jobs should be kept directly in memory, and each job is given a time slice and allowed to run only within its own slice. In this way every job can run once within a fairly short period.

12. Compare time-sharing systems with real-time systems with respect to interactivity, timeliness, and reliability.
(1) Timeliness: the timeliness requirement of a real-time information-processing system is similar to that of a time-sharing system, both being set by the waiting time people can accept, whereas the timeliness of a real-time control system is set by the start or completion deadlines demanded by the controlled object, generally from seconds down to milliseconds and in some cases below 100 microseconds.
(2) Interactivity: a real-time information-processing system is interactive, but the user's interaction with it is limited to accessing certain dedicated service programs in the system; it does not provide terminal users with services such as data and resource sharing the way a time-sharing system does.
(3) Reliability: a time-sharing system must also be reliable, but by comparison a real-time system demands a much higher degree of reliability, because any error may bring enormous economic losses or even catastrophic consequences; real-time systems therefore usually adopt multiple levels of fault-tolerance measures to protect the safety of the system and of its data.
13. What are the main characteristics of an OS, and which is the most basic? The four basic characteristics are concurrency, sharing, virtualization, and asynchrony; the most basic is concurrency.
14. What are the main functions of processor management, and what are their main tasks? The main functions of processor management are process control, process synchronization, process communication, and processor scheduling. (1) Process control: create processes for jobs, terminate processes that have finished, and control the state transitions of processes as they run. (2) Process synchronization: coordinate the execution of multiple processes (including threads). (3) Process communication: exchange information between cooperating processes. (4) Processor scheduling: (a) job scheduling - select jobs from the backlog queue according to some algorithm and allocate the resources they need to run, memory first of all; (b) process scheduling - select one process from the ready queue according to some algorithm, assign the processor to it, set up its run-time context, and put it into execution.

15. What are the main functions of memory management, and what are their main tasks? The main functions are memory allocation, memory protection, address mapping, and memory extension. Memory allocation: allocate memory to each program. Memory protection: make sure that every user program runs only within its own memory space and that programs do not interfere with one another. Address mapping: translate the logical addresses of an address space into the corresponding physical addresses of memory. Memory extension: support functions such as demand loading and replacement (virtual memory).
16. What are the main functions of device management, and what are its main tasks? The main functions are buffer management, device allocation, device handling, and virtual devices. The main tasks are to satisfy the I/O requests submitted by users, allocate I/O devices to them, raise the utilization of the CPU and of the I/O devices, increase I/O speed, and make I/O devices convenient to use.
17. What are the main functions of file management, and what are its main tasks? The main functions are management of file storage space, directory management, and file read/write management and protection. The main tasks are to manage user files and system files, to make them convenient to use, and to guarantee file safety.
18. What gives the operating system its characteristic of asynchrony? The asynchrony of an OS shows in three respects: first, the asynchrony of processes - processes advance at speeds that cannot be predicted; second, the non-reproducibility of programs - the results of executing a program are sometimes not deterministic; third, the unpredictability of execution times - when each program will run, in what order, and when it will finish are all uncertain.
23. What is the microkernel technique, and what functions does a microkernel usually provide? Moving as many components and functions of the operating system as possible to a higher level, that is, into user mode, and leaving only a kernel that is as small as possible to perform the most basic core functions of the OS, is called the microkernel technique. A microkernel usually provides process and thread management, low-level storage management, and interrupt and trap handling.

Chapter 2
5. Why is the concept of a process introduced in operating systems, and what effect does this have? The process concept is introduced so that programs can execute concurrently in a multiprogramming environment and so that the concurrently executing programs can be controlled and described. Effect: it makes concurrent execution of programs workable.
6. Compare processes and programs with respect to dynamism, concurrency, and independence. (1) Dynamism is the most basic characteristic of a process, shown in its being created, executing when scheduled, pausing when it cannot obtain a resource, and disappearing when it is terminated. A process has a lifetime, whereas a program is merely an ordered collection of instructions, a static entity. (2) Concurrency is an important characteristic of processes and also an important characteristic of the OS. Processes are introduced precisely so that a program can execute concurrently with the programs of other processes, something programs by themselves cannot do. (3) Independence means that a process entity is a basic unit that can run independently and is the system's basic unit for acquiring resources and being scheduled independently. A program for which no process has been created cannot take part in execution as an independent unit.
7. Explain the role of the PCB. Why is the PCB the only sign of a process's existence? The PCB is a part of the process entity and is the most important record-type data structure in the operating system. Its role is to turn a program that cannot run independently in a multiprogramming environment into a basic unit that can run independently and execute concurrently with other processes. The OS controls and manages concurrent processes solely through their PCBs.
8. Give the typical causes of the transitions among the three basic process states.
(1) ready -> running: the process is allocated the CPU; (2) running -> ready: the time slice runs out; (3) running -> blocked: the process issues an I/O request; (4) blocked -> ready: the I/O completes.
13. What is the main work to be done when a process is created? (1) When the OS detects an event requesting the creation of a new process, it calls the process-creation primitive Create(); (2) obtain a free PCB; (3) allocate resources for the new process; (4) initialize the process control block; (5) insert the new process into the ready queue.

14. What is the main work to be done when a process is terminated? (1) Using the identifier of the process to be terminated, find its PCB in the PCB collection and read its state. (2) If the process is executing, stop its execution immediately and set the rescheduling flag so that the processor is rescheduled after the termination. (3) If the process has child processes, terminate all of its descendants as well so that they do not become uncontrollable processes. (4) Return all resources owned by the terminated process to its parent process or to the system. (5) Remove the terminated process's PCB from the queue or list it is in, where it waits for other programs to collect its information.
15. What are the main events that cause a process to be blocked or to be woken up?
16. What two forms of constraint can a process be subject to while running? Give an example of each.
(1) Indirect mutual constraint. Example: there are two processes A and B; if A issues a print request while the system has already allocated the only printer to process B, then A can only block, and only after B releases the printer does A change from blocked back to ready.
(2) Direct mutual constraint. Example: an input process A supplies data to process B through a single buffer. When the buffer is empty, the computing process B blocks because it cannot obtain the data it needs; once A has put data into the buffer, it wakes B up. Conversely, when the buffer is full, A blocks because there is no buffer space in which to place its data, and after B has removed the data from the buffer it wakes A up.
17. Why must a process execute the "entry section" code before entering its critical section and the "exit section" code before leaving it? To let several processes access a critical resource mutually exclusively, a piece of code must be added before the critical section to check whether the critical resource about to be accessed is currently being accessed. If it is not being accessed, the process may enter the critical section and set the "being accessed" flag; if it is being accessed, the process may not enter its critical section. The code that performs this check is the entry section. After leaving the critical section, the process must execute the exit-section code, which restores the "not being accessed" flag so that other processes can access this critical resource again.
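
A minimal Python sketch of the entry-section/exit-section idea, using a lock as the "is the resource busy" flag; the shared counter and thread counts are arbitrary example values:

```python
import threading

counter = 0                      # shared variable that must be updated atomically
lock = threading.Lock()          # the "is this critical resource in use?" flag

def worker(n):
    global counter
    for _ in range(n):
        lock.acquire()           # entry section: wait until the resource is free, then mark it busy
        counter += 1             # critical section: access the shared resource
        lock.release()           # exit section: clear the busy flag so others may enter

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                   # 400000; without the entry/exit protocol, updates could be lost to races
```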

18. What basic rules should a synchronization mechanism follow, and why? The basic rules are: let a process enter when the section is idle; wait when it is busy; bounded waiting; and yield the processor while waiting. The reason is to ensure that processes enter their critical sections mutually exclusively.
23. In the producer-consumer problem, what is the effect on the result if signal(full) or signal(empty) is missing? If signal(full) is missing, then from the first producer onward the value of the semaphore full is never changed; even when the buffer pool is full of products, full is still 0, so when a consumer executes wait(full) it believes the pool is empty, cannot obtain a product, and stays in the waiting state forever. If signal(empty) is missing, then after the producers have filled all n buffers the consumers start taking products (at that moment empty = 0 and full = n); each time a consumer removes a product, empty is not changed, so by the time the pool has been emptied, empty is still 0. Even though the pool now has n empty buffers, a producer that wants to put another product into the pool cannot obtain an empty buffer and is blocked.
24. In the producer-consumer problem, what happens if the two wait operations - the resource wait (wait(empty) in the producer, wait(full) in the consumer) and wait(mutex) - are swapped, or if signal(mutex) and signal(full) are swapped? Swapping the resource wait with wait(mutex) may cause deadlock. Consider the moment when all the buffers in the pool are full: if a producer first executes wait(mutex) and succeeds, then when it goes on to execute wait(empty) it fails and blocks, expecting a consumer to execute signal(empty) to wake it up; but before that happens it cannot execute signal(mutex), so every other producer and every consumer that tries to enter its own critical section by executing wait(mutex) also blocks, and the system easily deadlocks. Swapping signal(mutex) with signal(full) merely changes the order in which the process releases the critical resource and cannot cause deadlock, so those two may be interchanged.
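
The ordering rules discussed in exercises 23 and 24 can be seen in a runnable Python sketch. The buffer size and item counts are arbitrary, and threading.Semaphore stands in for the textbook's record-type semaphores:

```python
import threading, collections

N = 5                                   # buffer capacity
buffer = collections.deque()
mutex = threading.Semaphore(1)          # mutual exclusion on the buffer
empty = threading.Semaphore(N)          # number of free slots
full  = threading.Semaphore(0)          # number of filled slots

def producer(items):
    for item in items:
        empty.acquire()                 # wait(empty) BEFORE wait(mutex), as discussed above
        mutex.acquire()                 # wait(mutex)
        buffer.append(item)
        mutex.release()                 # signal(mutex)
        full.release()                  # signal(full): a product is now available

def consumer(count, out):
    for _ in range(count):
        full.acquire()                  # wait(full) before wait(mutex)
        mutex.acquire()
        out.append(buffer.popleft())
        mutex.release()
        empty.release()                 # signal(empty): a slot was freed

consumed = []
p = threading.Thread(target=producer, args=(range(20),))
c = threading.Thread(target=consumer, args=(20, consumed))
p.start(); c.start(); p.join(); c.join()
print(consumed)                         # all 20 items, in order, with no deadlock
```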

26. Correct the errors in the following solution to the producer-consumer problem:

    producer: begin
        repeat
            ...
            produce an item in nextp;
            wait(mutex);
            wait(full);            /* should be wait(empty), and it should come BEFORE wait(mutex) */
            buffer(in) := nextp;   /* the buffer index must advance: in := (in+1) mod n; */
            signal(mutex);
            /* signal(full) is missing here */
        until false;
    end

    consumer: begin
        repeat
            wait(mutex);
            wait(empty);           /* should be wait(full), and it should come BEFORE wait(mutex) */
            nextc := buffer(out);
            out := out + 1;        /* to wrap around the buffer this should be: out := (out+1) mod n; */
            signal(mutex);
            /* signal(empty) is missing here */
            consume the item in nextc;
        until false;
    end

27. Using record-type semaphores, write an algorithm for the dining-philosophers problem that does not deadlock.

    Var chopstick: array[0..4] of semaphore;   (all semaphores initialized to 1)

    The activity of philosopher i can be described as:
        repeat
            wait(chopstick[i]);
            wait(chopstick[(i+1) mod 5]);
            ...
            eat;
            ...
            signal(chopstick[i]);
            signal(chopstick[(i+1) mod 5]);
            ...
            think;
        until false;

    (Note: as written, if all five philosophers pick up their left chopstick at the same time the system can still deadlock; one common fix is to let odd- and even-numbered philosophers pick up their two chopsticks in opposite order - see the Python sketch below.)

28. In a measurement and control system, the data-acquisition task places the data it collects into a single buffer; the computation task takes the data out of that buffer for processing.
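
Referring back to exercise 27 above, here is a small Python sketch of one common deadlock-free variant, in which odd-numbered philosophers pick up their chopsticks in the opposite order; the round counts and sleep times are arbitrary example values:

```python
import threading, random, time

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    # Deadlock-avoidance tweak: odd philosophers reverse the pickup order,
    # which breaks the circular-wait condition of the naive solution.
    first, second = (left, right) if i % 2 == 0 else (right, left)
    for _ in range(rounds):
        chopstick[first].acquire()
        chopstick[second].acquire()
        meals[i] += 1                        # eat
        chopstick[second].release()
        chopstick[first].release()
        time.sleep(random.random() * 0.001)  # think

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)                                 # every philosopher eats 100 times, with no deadlock
```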
