Chapter 1: Introducing Operating Systems
1. The operating system manages each and every piece of hardware and software.
2. An operating system is a special type of hardware.
3. The Memory Manager, the Interface Manager, the User Manager, and the File Manager are the basis of all operating systems.
4. Network functions were not always an integral part of operating systems.
5. The Memory Manager is in charge of indirect memory, also known as ROM.
6. The high-level portion of the Process Manager is called the Process Scheduler.
7. When launching a program, if the program is in storage, the Storage Manager must calculate its exact location on the disk, pass this information to the File Manager, which retrieves and sends the program on to the Memory Manager, which must find space for it and record its exact location in memory.
8. When a program has finished executing, it must send a finished message back to the Processor Manager.
9. Operating systems with networking capability have a fifth essential manager called the Network Manager that provides a convenient way for users to share resources while controlling users’ access to them.
10. The File Manager is responsible for data files but not program files.
11. The CPU, the central processing unit, is the brains of a computer, with the circuitry to control the interpretation and execution of instructions.
12. Until the mid-1970s, all computers were classified by price alone.
13. The minicomputer only supported one user.
14. The supercomputer was introduced primarily for government applications needing massive and fast number-crunching ability to carry out military operations and weather forecasting.
15. Since the mid-1970s rapid advances in computer technology have blurred the distinguishing characteristics of early machines.
16. The Intel 4004 chip in 1971 had 2,300 transistors while the Pentium II chip twenty years later had 7.5 million, and the Pentium 4 Extreme Edition processor introduced in 2004 had 178 trillion transistors.
17. Card systems date from the earliest computers, which relied on punched cards or tape for input when a job was entered by assembling the cards into a deck and running the entire deck of cards through a card reader as a group.
18. Real-time systems are the fastest and are used in time-critical environments where data must be processed extremely quickly because the output influences immediate decisions.
19. Onboard systems are computers placed inside other products to add features and capabilities.
20. The first bug was a moth trapped in a Harvard computer.
21. Many early programs used convoluted logic that only the original programmer could understand, so it was nearly impossible for anyone else to debug or change the program later on.
22. Only one FORTRAN program could run at a time, then the FORTRAN compiler had to be reloaded into memory.
23. If the control unit has two buffers, the second buffer can be loaded while the first buffer is transmitting its contents to or from the CPU.
24. Few major advances were made in data management during the 1960s.
25. A process requires space in main memory where it resides during its execution although, from time to time, it requires other resources such as data files or I/O devices.
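The double buffering described in item 23 can be illustrated with a minimal Python sketch (illustrative names; a real control unit overlaps the two steps in time, while this sequential simulation only models how the two buffers swap roles):

```python
def process_with_double_buffer(blocks):
    """Simulate double buffering: while buffer `front` is being drained
    by the CPU, the device loads the next block into `back`."""
    front, back = None, None
    output = []
    for next_block in blocks:
        back = next_block          # device fills the second buffer
        if front is not None:
            output.append(front)   # CPU drains the first buffer meanwhile
        front, back = back, None   # the buffers swap roles
    if front is not None:
        output.append(front)       # drain the final buffer
    return output

assert process_with_double_buffer([1, 2, 3]) == [1, 2, 3]
```

In hardware the fill and the transmit happen concurrently; the point of the sketch is only that neither buffer sits idle between swaps.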
1. Which part of the operating system is unique to each operating system?
a. User Command Interface
b. Process Manager
c. Memory Manager
d. File Manager
2. The ____ must receive the electrical impulses from the keyboard, decode the keystrokes to form the command, and send the command to the User Command Interface, where the Processor Manager validates the command.
a. Device Manager
b. File Manager
c. Keyboard Manager
d. Memory Manager
3. ____ include(s) every peripheral unit in the system such as printers, disk drives, CD drives, magnetic tape devices, keyboards, DVD players, modems, and so on.
a. The CPU
b. I/O Devices
d. Secondary components
4. In a computer, the ____ holds the Central Processing Unit, the Arithmetic and Logic Unit, registers, cache, and main memory.
a. parallel system
b. USB interface
5. How many floating-point operations can a supercomputer perform per second?
a. 240 million
b. 2.4 billion
c. 2.4 trillion
d. 24 trillion
6. The most powerful microcomputers used by commercial, educational and government enterprises are called ____.
7. What is the primary distinguishing characteristic of modern computers?
a. memory capacity
b. processor capacity
c. disk space
d. physical size
8. Which operating systems are typically used for a network platform?
a. IRIX, UNICOS
b. Linux, Macintosh, MS-DOS, Windows 2000/XP
c. Linux, NetWare, UNIX, Windows Server 2003
d. IBM OS/390, UNIX
9. Which of the following requires a real-time system?
a. space flights
b. airport traffic control
c. telephone switching
d. All of the above
10. A hybrid system is a combination of the ____ systems.
a. batch and interactive
b. batch and real-time
c. interactive and real-time
d. real-time and general-purpose
11. What type of system is designed to perform one specific function?
12. What is the name for the core part of operating system software?
13. For what period were vacuum tube computers used?
14. What was the primary market for second-generation computers?
a. the government
15. What was one of the main improvements of second-generation computers?
a. job scheduling
b. better debugging
c. improved user interface
d. more structured logic for programs
16. ____ means that several logical records are grouped within one physical record.
17. In second-generation computers, to reduce the discrepancy in speed between the I/O and the CPU, an interface called the ____ was placed between them to perform the function of buffering.
a. control unit
d. buffer manager
18. The most common mechanism for implementing multiprogramming was the introduction of the ____ concept, which is when the CPU is notified of events needing operating systems services.
19. A system with ____ would divide the programs into segments and keep them in secondary storage, bringing each segment into memory only as it was needed.
a. virtual memory
b. shared memory
c. segmented processing
d. passive multiprogramming
20. Which word is used to indicate that a program is permanently held in ROM (read only memory), as opposed to being held in secondary storage?
21. What is a disadvantage of a distributed operating system?
a. user has to monitor which computer is being controlled
b. users have to worry about scheduling each processor rather than dealing with a uniprocessor system
c. more complex processor-scheduling algorithms
22. A typical ____ computer houses devices to perform audio, video, and graphic creation and editing.
23. ____ are self-contained modules (units of software) that provide models of the real world and can be reused in different applications.
24. What is another name for a thread?
a. heavyweight process
b. lightweight process
25. In which of the following are processors placed at remote locations and are connected to each other via telecommunication devices?
c. distributed processing
d. shared processing
Study Guide - Chapter 2: Memory Management: Early Systems
TRUE/FALSE
1. All computers have only a finite amount of memory and if a program doesn’t fit, then either the size of the main memory must be increased or the program must be modified.
2. To overlay is to transfer segments of a program from main memory into secondary storage for execution, so that two or more segments take turns occupying the same memory locations.
3. The first step in loading a job in a single-user system is storing the first memory location of program into the base register (for memory protection).
4. A single-user system supports multiprogramming.
5. The first attempt to allow for multiprogramming was to create fixed partitions.
6. The algorithm used to store jobs into memory requires a few more steps than the one used for a single-user system because the size of the job must be matched with the size of the partition to make sure it fits completely.
7. The fixed partition scheme does not require that the entire program be stored contiguously and in memory from the beginning to the end of its execution.
8. The fixed partition scheme works well if all of the jobs run on the system are of the same size or if the sizes are known ahead of time and don’t vary between reconfigurations.
9. In a fixed partition scheme, large jobs may have a longer turnaround time as they wait for free partitions of sufficient size or may never run.
10. The best-fit allocation method keeps the free/busy lists organized by memory locations, low-order memory to high-order memory.
11. A large job can have problems with a first-fit memory allocation list.
12. The best-fit free list scheme uses memory more efficiently than the first-fit free scheme but it’s slower to implement.
13. The first-fit algorithm assumes that the Memory Manager keeps only one list containing free memory blocks.
14. One of the problems with the best-fit algorithm is that the entire table must be searched before the allocation can be made because the memory blocks are physically stored in sequence according to their location in memory.
15. For a fixed partition system, memory deallocation is quite complex.
16. A null entry in the busy list occurs when a memory block between two other busy memory blocks is returned to the free list.
17. In the relocatable dynamic partitions scheme the Memory Manager relocates programs to gather together all of the empty blocks and compact them to make one block of memory large enough to accommodate some or all of the jobs waiting to get in.
18. During garbage collection memory is allocated.
19. During compaction, the operating system must distinguish between addresses and data values, and the distinctions are not obvious once the program has been loaded into memory.
20. After relocation and compaction, both the free list and the busy list are updated.
21. The bounds register is used to store the highest location in memory accessible by each program.
22. It is always best to perform compaction only when there are jobs waiting to get in.
23. Early memory management schemes are still used in today’s operating systems.
24. The problem of partition intrusion is present in single-user contiguous allocation schemes.
25. Research continues to focus on finding the optimum allocation scheme.
1. Which of the following describes the first memory allocation scheme?
a. Each program to be processed was loaded into secondary storage, then swapped into memory in parts
b. Each program to be processed was partially loaded into memory, then granted more memory as needed
c. Each program to be processed was allocated a portion of memory and could negotiate with other programs to access more memory
d. Each program to be processed was loaded in its entirety into memory and allocated as much contiguous space in memory as it needed
2. How are jobs processed in a single-user system?
d. the longest job is processed first
3. Consider the following algorithm. Which of the following choices should replace the * in Step 6?
1 Store first memory location of program into base register (for memory protection)
2 Set program counter (it keeps track of memory space used by the program) equal to address of first memory location
3 Read first instruction of program
4 Increment program counter by number of bytes in instruction
5 Has the last instruction been reached? if yes, then stop loading program if no, then continue with step 6
6 Is program counter greater than memory size? if yes, then stop loading program if no, *
7 Load instruction in memory
8 Read next instruction of program
9 Go to step 4
a. then continue with step 4
b. then continue with step 5
c. then continue with step 6
d. then continue with step 7
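The loading loop in question 3 can be sketched in Python (a sketch only; it assumes one memory word per instruction, so the program counter advances by 1 in step 4). The step 6 check guards against the program overrunning memory; when the counter is still in range, the instruction is loaded:

```python
def load_program(instructions, memory_size, base=0):
    """Sketch of the single-user loading algorithm from question 3."""
    memory = {}
    counter = base                 # step 2: program counter at first location
    for instr in instructions:     # steps 3 and 8: read each instruction
        counter += 1               # step 4: advance by instruction size (1 here)
        if counter > memory_size:  # step 6: program too big for memory?
            return None            # stop loading the program
        memory[counter] = instr    # step 7: load instruction into memory
    return memory                  # step 5: last instruction reached

assert load_program(["LOAD", "ADD", "STORE"], memory_size=2) is None
assert len(load_program(["LOAD", "ADD"], memory_size=10)) == 2
```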
4. What is another name for a fixed partition?
a. complete partition
b. static partition
c. direct partition
d. sized partition
5. What is the first step in the algorithm to load a job in a fixed partition?
a. Compare job size to size of largest partition
b. Determine job’s requested memory size
c. Set counter to 1
d. No partition available at this time, put job in waiting queue
6. In the partition scheme, the Memory Manager uses a table to keep track of jobs. What are the components of the table?
a. partition size, memory address, status
b. status, access, memory address
c. partition size, status, access
d. partition size, memory address, access, status
7. When does a fixed partition scheme work well?
a. when jobs have the same size
b. when jobs have different sizes
c. when job sizes are not known in advance
d. when all jobs are under 100K
8. What is the name for fragments of free memory between blocks of allocated memory?
a. inefficient fit
b. indirect partitioning
c. external fragmentation
d. internal fragmentation
9. Which memory allocation scheme places jobs in the first partition fitting the requirements?
a. fixed partitioning
b. first-fit memory allocation
c. dynamic fit memory allocation
d. best-fit memory allocation
10. Which memory allocation scheme results in the smallest amount of wasted space?
11. Consider the following space requirements for jobs 1-4 and memory blocks. Assuming a first-fit scheme is used, which job is not able to run?
Jobs: J1 10K, J2 20K, J3 30K, J4 10K
Blocks: B1 30K, B2 15K, B3 50K, B4 20K
a. J1
b. J2
c. J3
d. J4
12. Consider the following space requirements for jobs 1-4 and memory blocks. Assuming a best-fit scheme is used, which job is placed in the last block?
Jobs: J1 10K, J2 20K, J3 30K, J4 10K
Blocks: B1 30K, B2 15K, B3 50K, B4 20K
a. J1
b. J2
c. J3
d. J4
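Questions 11 and 12 above can be checked with a short simulation (a sketch under the usual exercise assumptions: each block holds at most one job, and jobs are placed in the order J1 through J4):

```python
def first_fit(jobs, blocks):
    free = list(blocks)                      # block sizes, in memory order
    placement = {}
    for name, size in jobs:
        for i, b in enumerate(free):
            if b is not None and size <= b:
                placement[name] = i          # first block that fits wins
                free[i] = None               # block is now busy
                break
    return placement

def best_fit(jobs, blocks):
    free = list(blocks)
    placement = {}
    for name, size in jobs:
        fits = [(b - size, i) for i, b in enumerate(free)
                if b is not None and size <= b]
        if fits:
            _, i = min(fits)                 # smallest leftover space wins
            placement[name] = i
            free[i] = None
    return placement

jobs = [("J1", 10), ("J2", 20), ("J3", 30), ("J4", 10)]
blocks = [30, 15, 50, 20]                    # B1, B2, B3, B4

assert "J3" not in first_fit(jobs, blocks)   # under first-fit, J3 never fits
# under best-fit: J1 -> B2, J2 -> B4, J3 -> B1, J4 -> B3 (indices 1, 3, 0, 2)
assert best_fit(jobs, blocks) == {"J1": 1, "J2": 3, "J3": 0, "J4": 2}
```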
13. The following algorithm can be described as ____.
1 Set counter to 1
2 Do while counter <= number of blocks in memory
    If job_size > memory_size(counter)
    Then counter = counter + 1
    Else
        load job into memory_size(counter)
        adjust free/busy memory lists
        go to step 4
  End do
3 Put job in waiting queue
4 Go fetch next job
a. first-fit memory allocation
b. best-fit memory allocation
c. least-fit memory allocation
d. fixed partition memory allocation
14. The following algorithm can be described as ____.
1 Initialize memory_block(0) = 99999
2 Compute initial_memory_waste = memory_block(0) - job_size
3 Initialize subscript = 0
4 Set counter to 1
5 Do while counter <= number of blocks in memory
    If job_size > memory_size(counter)
    Then counter = counter + 1
    Else
        memory_waste = memory_size(counter) - job_size
        If initial_memory_waste > memory_waste
        Then
            subscript = counter
            initial_memory_waste = memory_waste
        counter = counter + 1
  End do
6 If subscript = 0
  Then put job in waiting queue
  Else
      Load job into memory_size(subscript)
      adjust free/busy memory lists
7 Go fetch next job
a. first-fit memory allocation
b. best-fit memory allocation
c. least-fit memory allocation
d. fixed partition memory allocation
15. Assume the Memory Manager receives a request for a block of 200. When the best-fit algorithm is used, what is the beginning address of the block granted by the Memory Manager?
Beginning Address | Memory Block Size
4075 | ###
6785 | ###
7600 | 205
15125 | 230
a. 6785
b. 7600
c. 10250
d. 15125
16. How is memory deallocated in a fixed partition scheme?
a. Memory Manager releases the block and combines it with another free block.
b. Memory Manager immediately gives memory to another program.
c. Memory Manager adds block to free list and removes it from busy list.
d. Memory Manager resets the status of the memory block where the job was stored to "free."
17. In a dynamic partition scheme, how does the Memory Manager deallocate a block that is between two other free blocks?
a. All three are merged into one.
b. All three are moved individually from the busy list to the free list.
c. The block is combined with the larger of the two adjacent blocks.
d. The status of the block is set to free.
18. When memory is deallocated, an entry can be removed from the free list by creating a(n) ____.
a. blank line
b. null entry
c. joined entry
d. empty entry
19. A(n) ____ in the busy list occurs when a memory block between two other busy memory blocks is returned to the free list.
a. blank line
b. null entry
c. joined entry
d. empty entry
20. How does an operating system reclaim fragmented memory space?
a. deallocation
b. redirection
c. compaction
d. reallocation
21. During relocation, the operating system must ____ memory addresses so that they can be adjusted to the new memory location.
a. flag
b. grab
c. redirect
d. compact
22. The ____ contains the value that must be added to each address referenced in the program so it will be able to access the correct memory addresses after relocation.
a. busy list
b. compaction monitor
c. relocation register
d. bounds register
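The register described in question 22 can be illustrated with a tiny sketch (the addresses are hypothetical): after compaction moves a job, every address the program references is adjusted by the value held for that job.

```python
# A job originally loaded at 4000 referenced these absolute addresses.
# After compaction the job starts at 2000, so the adjustment value is -2000.
old_addresses = [4000, 4100, 4255]
relocation_offset = -2000
new_addresses = [addr + relocation_offset for addr in old_addresses]
assert new_addresses == [2000, 2100, 2255]
```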
23. When should compaction be performed?
a. when a certain percentage of memory becomes busy
b. only when there are jobs waiting to get in
c. after a prescribed amount of time has elapsed
d. Any of the above, depending on the system
24. What is the actual memory address for a job that starts at 18K?
a. 1,800
b. 18,000
c. 18,432
d. 180,000
25. In a fixed partition scheme, what is the problem with partitions that are too large?
a. small jobs will have to wait
b. large jobs will have to wait
c. external fragmentation
d. wasted memory
Study Guide - Chapter 3: Memory Management: Virtual Memory
TRUE/FALSE
1. One sector will hold one page of job instructions and fit into one page frame of memory.
2. Paged memory allocation usually results in internal fragmentation, but never external fragmentation.
3. The Job Table (JT) contains two entries for each active job: the size of the job and the memory location where its Page Map Table is stored.
4. To find the address of a given program line, the line number is divided by the page size.
5. Each page is actually stored in a page frame that can be located anywhere in available main memory.
6. To find the exact position of an instruction in memory, after the page number has been calculated, the operating system refers to the job's PMT and finds out which page frame contains the page.
7. The address of the beginning of a page frame is found by multiplying the page frame number by the number of frames.
8. Every time an instruction is executed, or a data value is used, the operating system (or the hardware) must translate the job space address, which is relative, into its physical address, which is absolute.
9. Demand paging was the first widely used scheme that removed the restriction of having the entire job in memory from the beginning to the end of its processing.
10. The key to the successful implementation of demand paging is the use of a direct access memory device that can work directly with the CPU.
11. Demand paging requires that the Page Map Table for each job keep track of each page as it is loaded or removed from main memory.
12. To move in a new page, a resident page must be swapped back into primary storage.
13. Demand paging is a perfect solution to inefficient memory limitation and has no real problems or issues.
14. The first-in-first-out replacement policy is always preferable to the least-recently-used policy.
15. A page interrupt is generated when a new page is brought into memory.
16. When using a FIFO scheme, more memory will always result in better performance.
17. A variation of the LRU page replacement algorithm is known as the clock page replacement policy because it is implemented with a circular queue and uses a pointer to step through the reference bits of the active pages, simulating a clockwise motion.
18. The process of shifting bits to the right and resetting the leftmost bit to 1 when a page is referenced gives a history of each page’s usage.
19. A job’s direct set is the set of pages residing in memory that can be accessed directly without incurring a page fault.
20. Within the Memory Manager the Segment Link Table lists details about each segment (one for each job).
21. The segmented/demand paged memory allocation scheme is a combination of segmentation and demand paging, and it offers the logical benefits of segmentation, as well as the physical benefits of paging.
22. In general, when a job is allocated to the CPU its Page Map Table is loaded into main memory while the Segment Map Tables are loaded only as needed.
23. The use of virtual memory requires cooperation between the Memory Manager (which tracks each page or segment) and the processor hardware (which issues the interrupt and resolves the virtual address).
24. Cache memory is a small high-speed memory unit that a processor can access less rapidly than main memory.
25. Virtual memory in Linux is managed using a three-level table hierarchy, which accommodates both 64- and 32-bit architectures.
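The clock policy described in item 17 can be sketched in Python (a minimal sketch with hypothetical page names): the pointer sweeps the circular list of frames, clearing reference bits as it passes, until it finds a page with bit 0 to replace.

```python
def clock_replace(frames, ref_bits, hand):
    """Advance the clock hand until a page with reference bit 0 is found.
    Pages with bit 1 get a second chance: their bit is cleared in passing."""
    while ref_bits[hand] == 1:
        ref_bits[hand] = 0              # clear the bit, spare the page this pass
        hand = (hand + 1) % len(frames)
    victim = hand                       # first page found with bit 0
    return victim, (hand + 1) % len(frames)

frames = ["A", "B", "C", "D"]
ref_bits = [1, 1, 0, 1]                 # A and B were recently referenced
victim, hand = clock_replace(frames, ref_bits, hand=0)
assert frames[victim] == "C"            # C's bit was 0, so C is replaced
assert ref_bits[:2] == [0, 0]           # A's and B's bits were cleared in passing
```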
1. What is the primary advantage of storing programs in noncontiguous locations?
a. multiple programs can run at the same time
b. every program will be able to run
c. secondary storage is accessed more quickly
d. main memory is used more efficiently
2. How many entries per page are there in the PMT?
a. 0
b. 1
c. 2
d. 5
3. If the page size is 100 lines, what is the displacement for line 214?
a. 0.5
b. 2
c. 14
d. 21400
4. Assume that the Page Map Table below is in effect. The number of lines per page is 400. What is the actual memory location for line 433?
Job Page Number | Page Frame Number
0 | 8
1 | 10
2 | 5
3 | 11
a. 1
b. 33
c. 4000
d. 4033
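Questions 3 and 4 above use the same arithmetic (a sketch; the page map values are taken from question 4's table): the page number is the line number divided by the page size, the displacement is the remainder, and the physical address is the frame number times the page size plus the displacement.

```python
def resolve(line, page_size, page_map):
    page, displacement = divmod(line, page_size)   # which page, offset within it
    frame = page_map[page]                         # PMT lookup: page -> frame
    return frame * page_size + displacement        # physical address

# Question 3: page size 100, line 214 -> page 2, displacement 14
assert divmod(214, 100) == (2, 14)

# Question 4: page size 400, PMT {0: 8, 1: 10, 2: 5, 3: 11}, line 433
# -> page 1, displacement 33, frame 10 -> 10 * 400 + 33 = 4033
assert resolve(433, 400, {0: 8, 1: 10, 2: 5, 3: 11}) == 4033
```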
5. What will happen when a page size is too small?
a. very long PMTs
b. excessive internal fragmentation
c. more difficult to calculate actual position
d. excessive external fragmentation
6. What field(s) must be added to the PMT to support demand paging?
a. field to determine if page contents have been modified
b. field to determine if the page being requested is already in memory
c. field to determine if the page has been referenced recently
d. All of the above
7. What happens in demand paging when a job requires a certain page to be loaded and there is no empty page frame?
a. a resident page must be swapped back into secondary storage
b. the page cannot be loaded and the job will exit
c. the job must wait until a page frame is freed by another job
d. the page will share a page frame with another page from the same job
8. Consider the following page fault handler algorithm. What should replace the * in the algorithm?
1 If there is no free page frame
    Then select page to be swapped out using page removal algorithm
    update job's Page Map Table
    If content of page had been changed
    then *
    End if
End if
2 Use page number from step 3 from the Hardware Instruction Processing Algorithm to get disk address where the requested page is stored
3 Read page into memory
4 Update job's Page Map Table
5 Update Memory Map Table
6 Restart interrupted instruction
a. save page to main memory
b. write page to disk
c. mark page as being changed in PMT
d. choose another page
9. When there is an excessive amount of page swapping between main memory and secondary storage, the operation becomes inefficient. This phenomenon is called ____.
a. excessive demand paging
b. hot swapping
c. thrashing
d. overswapping
10. What is the name of the page replacement policy where the page to remove is the one that has been in memory the longest?
a. TRU
b. LRU
c. LIFO
d. FIFO
11. Assume that four page frames are available and are numbered 1-4. Pages A-D have been loaded into page frames 1-4 in order. Assume that page E is requested. Which page frame will it be loaded into when the FIFO algorithm is used?
a. 1
b. 2
c. 3
d. 4
12. Assume that four page frames are available and are numbered 1-4. Pages A-D have been loaded into page frames 1-4 in order. The program has accessed the pages in the following order: B, D, A, C. Assume that page E is requested. Which page frame will it be loaded into when the LRU algorithm is used?
a. 1
b. 2
c. 3
d. 4
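Questions 11 and 12 above can be traced with a small simulation (a sketch; frames are numbered from 1 as in the questions, and a page never re-accessed counts from its load time):

```python
def fifo_victim(load_order):
    return load_order[0]                      # oldest loaded page is evicted

def lru_victim(load_order, access_order):
    # Evict the page whose most recent use is furthest in the past.
    last_use = {p: i for i, p in enumerate(load_order)}
    for t, p in enumerate(access_order, start=len(load_order)):
        last_use[p] = t                       # later accesses refresh the page
    return min(last_use, key=last_use.get)

frames = {"A": 1, "B": 2, "C": 3, "D": 4}     # pages A-D sit in frames 1-4

# FIFO: A was loaded first, so E replaces A in frame 1.
assert frames[fifo_victim(["A", "B", "C", "D"])] == 1

# LRU with accesses B, D, A, C: B is least recently used, so E goes to frame 2.
assert frames[lru_victim(["A", "B", "C", "D"], ["B", "D", "A", "C"])] == 2
```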
13. If a particular demand paging configuration has 9 page interrupts out of 11 page requests, what is the failure rate?
a. 18%
b. 52%
c. 82%
d. 95%
14. When using the clock replacement policy, a page with a reference bit of ____ is replaced.
a. -1
b. 0
c. 1
d. 5
15. In the PMT, the ____ bit for all pages in memory is 1.
a. status
b. referenced
c. modified
d. frame
16. Consider the following four cases. Which page would the LRU policy be least likely to swap?
Case | Modified | Referenced | Meaning
1 | 0 | 0 | Not modified AND not referenced
2 | 0 | 1 | Not modified BUT was referenced
3 | 1 | 0 | Was modified BUT not referenced
4 | 1 | 1 | Was modified AND was referenced
a. Case 1
b. Case 2
c. Case 3
d. Case 4
17. Which of the following phrases means that during any phase of its execution the program references only a small fraction of its pages?
a. dynamic paging
b. structured programming
c. locality of reference
d. working set
18. To access a location in memory when using segmented memory management, the address is composed of two entries: ____.
a. the segment number and the line number
b. the segment number and the displacement
c. the line number and the displacement
d. the segment number, the line number, and the displacement
19. The segmented/demand paged memory allocation scheme divides each segment into equally sized ____.
a. pages
b. blocks
c. frames
d. sets
20. How many associative registers are there?
a. 2
b. 5
c. 10
d. varies by system
21. ____ gives users the appearance that their programs are being completely loaded in main memory during their entire processing time.
a. Segmenting
b. Virtual memory
c. Shared memory
d. Multithreading
22. What is an advantage of virtual memory management?
a. job's size is no longer restricted to the size of main memory
b. allows an unlimited amount of multiprogramming
c. facilitates dynamic linking of program segments
d. All of the above
23. ____ can be thought of as being an intermediary between main memory and the special-purpose registers, which are the domain of the CPU.
a. Virtual memory
b. Cache memory
c. Paging
d. Segmenting
24. What is the cache hit ratio if the total number of requests is 10 and 6 of those are found in cache memory?
a. 6%
b. 10%
c. 60%
d. 100%
25. What algorithm does Linux use for demand paging?
a. buddy
b. LRU
c. FIFO
d. segmenting
Study Guide - Chapter 4: Processor Management
TRUE/FALSE
1. A process is an inactive unit, such as a file stored on a disk.
2. A program is an active entity that requires a set of resources, including a processor and special registers, to perform its function.
3. The processor is also known as the CPU.
4. A single processor can be shared by several jobs, or several processes, but if, and only if, the operating system has a scheduling policy, as well as a scheduling algorithm, to determine when to stop working on one job and proceed to another.
5. The Processor Manager is a composite of two submanagers: one in charge of job scheduling and the other in charge of program scheduling.
6. After a job has been placed on the READY queue by the Job Scheduler, the Process Scheduler takes over.
7. Most computer programs alternate between CPU cycles and I/O cycles.
8. CPU-bound jobs (such as printing a series of documents) have many brief CPU cycles and long I/O cycles
9. As a job moves through the system it’s always in one of five states; these are called the job status or the process status.
10. From HOLD, the job moves to WAITING when it’s ready to run but is waiting for the CPU.
11. The transition from one job or process status to another is initiated by either the Job Scheduler or the Process Scheduler.
12. Each job is uniquely identified by the user’s identification and a pointer connecting it to its descriptor.
13. The process state contains all of the data about the job needed by the operating system to manage the processing of the job.
14. It is possible to minimize response time by running only interactive jobs and letting the batch jobs wait until the interactive load ceases.
15. The Process Scheduler often uses a timing mechanism and periodically interrupts running processes when a predetermined slice of time has expired.
16. First-come, first-served (FCFS) is a preemptive scheduling algorithm that handles jobs according to their arrival time.
17. If one job monopolizes the system, the extent of its overall effect on system performance depends on the scheduling policy and whether the job is CPU-bound or I/O-bound.
18. Shortest job next (SJN) is a nonpreemptive scheduling algorithm (also known as shortest job first, or SJF) that handles jobs based on the length of their CPU cycle time.
19. When using priority scheduling, priorities are assigned to jobs by the owner of the job (the user).
20. The Shortest remaining time (SRT) algorithm is often used in interactive systems.
21. Context switching is required by all preemptive algorithms.
22. In round robin scheduling, if processing isn’t finished when time expires, the job is preempted and put at the end of the READY queue and its information is saved in its PCB.
23. Multiple-level queues isn’t really a separate scheduling algorithm but works in conjunction with several other schemes.
24. Aging is used to ensure that jobs in lower-level queues will eventually complete their execution.
25. The control program that handles the interruption sequence of events is called the interrupt scheduler.
1. What is the name for a portion of a process that can run independently?
a. thread
b. program
c. miniprocess
d. subprocess
2. What is the high-level scheduler?
a. Process Scheduler
b. Job Scheduler
c. Program Scheduler
d. Thread Scheduler
3. What does the Job Scheduler seek to do when scheduling jobs?
a. run all CPU intensive jobs first
b. run all I/O intensive jobs first
c. balance CPU and I/O intensive jobs
d. run the quickest jobs first
4. The Process Scheduler assigns the CPU to execute the processes of those jobs placed on the ____ queue by the Job Scheduler.
a. waiting
b. next
c. process
d. ready
5. In a highly interactive environment there is a third layer of the Processor Manager called the ____ scheduler.
a. Managing
b. Subprocess
c. middle-level
d. Program
6. What is the initial state for a job?
a. HOLD
b. RUNNING
c. WAITING
d. READY
7. Which transition is managed by the Job Scheduler?
a. READY to RUNNING
b. RUNNING to WAITING
c. RUNNING back to READY
d. HOLD to READY
8. Which transition is sometimes managed by the Process Scheduler and sometimes by the Job Scheduler?
a. WAITING to READY
b. RUNNING to WAITING
c. RUNNING to FINISHED
d. HOLD to READY
9. Which of the following is part of the Process State information maintained in the PCB?
a. Main Memory
b. Register Contents
c. Process Priority
d. All of the above
10. Which section of the PCB is used primarily for performance measurement?
a. Accounting
b. Process State
c. Process Identification
d. Process Status
11. The queues used to manage jobs are formed from ____.
a. jobs
b. processes
c. PCBs
d. record identifiers
12. An I/O request is called a(n) ____ wait in multiprogramming environments.
a. forced
b. natural
c. scheduled
d. indirect
13. How is the First-come, first-served (FCFS) scheduling algorithm implemented?
a. using a FIFO queue
b. using a LIFO queue
c. using a circular queue
d. using a directed graph
14. What is a disadvantage of First Come First Served?
a. I/O-bound jobs are given priority
b. jobs are frequently interrupted
c. CPU-bound jobs are given priority
d. average turnaround times vary widely and are seldom minimized
15. Assume that four jobs A-D require the CPU cycles listed below. Using the SJN algorithm, which job is run first?
Job: A B C D
CPU cycle: 5 2 6 4
a. A
b. B
c. C
d. D
16. Assume that four jobs A-D require the CPU cycles listed below. Using the SJN algorithm, what is the average turnaround time?
Job: A B C D
CPU cycle: 5 2 6 4
a. 5.5
b. 6.8
c. 9.0
d. 11.1
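The average in question 16 above can be verified with a short calculation (a sketch assuming all jobs arrive at time 0, as the question implies): SJN runs jobs in order of CPU cycle, and each job's turnaround equals its completion time.

```python
def sjn_average_turnaround(cycles):
    order = sorted(cycles)            # shortest job next
    finish, total = 0, 0
    for c in order:
        finish += c                   # completion time of this job
        total += finish               # turnaround = completion - arrival (0)
    return total / len(cycles)

# Jobs A-D with CPU cycles 5, 2, 6, 4 run in order B, D, A, C,
# finishing at 2, 6, 11, 17 -> average (2 + 6 + 11 + 17) / 4 = 9.0
assert sjn_average_turnaround([5, 2, 6, 4]) == 9.0
```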
17. Some systems increase the priority of jobs that have been in the system for an unusually long time to expedite their exit. This is known as ____.
a. lagging
b. aging
c. bumping
d. accelerated priority
18. Assume that jobs A-D arrive in the ready queue in quick succession and have the CPU cycle requirements listed below. Using the SRT algorithm, what is the average turnaround time?
Arrival time: 0 1 2 3
Job: A B C D
CPU cycle: 6 3 1 4
a. 2.5
b. 6.25
c. 7.75
d. 9.0
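The preemptive schedule in question 18 above can be traced one time unit at a time (a sketch; it assumes at least one job is ready at every tick, which holds for these arrivals):

```python
def srt_average_turnaround(jobs):
    """jobs: list of (name, arrival, cycle). Simulates SRT tick by tick."""
    remaining = {name: cycle for name, _, cycle in jobs}
    arrival = {name: arr for name, arr, _ in jobs}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        run = min(ready, key=lambda n: remaining[n])   # shortest remaining time
        remaining[run] -= 1                            # run for one time unit
        t += 1
        if remaining[run] == 0:
            del remaining[run]
            finish[run] = t
    return sum(finish[n] - arrival[n] for n, _, _ in jobs) / len(jobs)

# A(0,6) B(1,3) C(2,1) D(3,4): C finishes at 3, B at 5, D at 9, A at 14,
# so the turnarounds are 1, 4, 6, 14 -> average 25 / 4 = 6.25
assert srt_average_turnaround([("A", 0, 6), ("B", 1, 3),
                               ("C", 2, 1), ("D", 3, 4)]) == 6.25
```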
19. Assume jobs A-D arrive in quick succession in the READY queue. Using round robin scheduling, what is the turnaround time for job C?
   Arrival time:  0  1  2  3
   Job:           A  B  C  D
   CPU cycle:     8  4  9  5
   a. 7
   b. 20
   c. 22
   d. 24
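Question 19 does not state the time quantum; assuming a quantum of 4 (an assumption of this sketch, not given in the question), a round robin simulation produces one of the listed choices for each job:

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: list of (name, arrival, cpu_cycle); returns turnaround per job.
    Jobs arriving during a time slice join the READY queue before the
    preempted job rejoins it."""
    jobs = sorted(jobs, key=lambda j: j[1])
    remaining = {n: c for n, a, c in jobs}
    arrival = {n: a for n, a, c in jobs}
    pending = deque(jobs)         # not yet arrived, in arrival order
    ready = deque()
    finish = {}
    t = 0
    while remaining:
        while pending and pending[0][1] <= t:
            ready.append(pending.popleft()[0])
        if not ready:             # CPU idle: jump to the next arrival
            t = pending[0][1]
            continue
        job = ready.popleft()
        run = min(quantum, remaining[job])
        t += run
        remaining[job] -= run
        while pending and pending[0][1] <= t:
            ready.append(pending.popleft()[0])
        if remaining[job] == 0:
            finish[job] = t
            del remaining[job]
        else:
            ready.append(job)     # preempted: back to the end of the queue
    return {n: finish[n] - arrival[n] for n in finish}

turnaround = round_robin([("A", 0, 8), ("B", 1, 4), ("C", 2, 9), ("D", 3, 5)],
                         quantum=4)
# With quantum 4: A=20, B=7, C=24, D=22
```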
20. What is the best time quantum size in round robin scheduling?
   a. it depends on the system
   b. it should be long enough to allow 80 percent of the CPU cycles to run to completion
   c. it should be at least 100 times longer than the time required to perform one context switch operation
   d. All of the above
21. No movement between queues is a very simple policy that rewards those who have ____ jobs.
   a. high-priority
   b. low-priority
   c. CPU-bound
   d. I/O-bound
22. Which multiple-level queue management scheme facilitates I/O-bound jobs and is good in interactive systems?
   a. No movement between queues
   b. Movement between queues
   c. Variable time quantum per queue
   d. Aging
23. Which multiple-level queue management scheme allows for faster turnaround of CPU-bound jobs?
   a. No movement between queues
   b. Movement between queues
   c. Variable time quantum per queue
   d. Aging
24. When the operating system detects a non-recoverable error, which of the following happens first?
   a. state of the interrupted process is saved
   b. type of interrupt is described and stored
   c. interrupt is processed
   d. processor resumes operation
25. What is another name for an internal interrupt?
   a. I/O interrupt
   b. illegal job instruction
   c. illegal operation
   d. synchronous interrupt
Study Guide - Chapter 5: Process Management
TRUE/FALSE
1. Deadlock is a system-wide tangle of resource requests that begins when two or more jobs are put on hold, each waiting for a vital resource to become available.
2. Deadlock does not affect the entire system, but only one process.
3. Deadlock was a serious problem for early batch systems.
4. Deadlocks are most serious in real-time systems.
5. Locking can be done only at the level of the entire database.
6. If locks are not used to preserve data integrity, the updated records in the database might include only some of the data, and their contents would depend on the order in which each process finishes its execution.
7. Deadlocks occur when processes request access to devices of the same type, but never when waiting for access to devices of different types.
8. Most systems have transformed the printer into a sharable device by installing a high-speed device, a disk, between it and the CPU.
9. A livelock is caused by two processes attempting to access a shared device such as a disk.
10. A deadlock is preceded by five conditions that the operating system should recognize and act upon to prevent deadlock from occurring.
11. In a directed graph used to model deadlock, processes are represented using squares.
12. When modeling deadlock, if there’s a cycle in the graph then there’s a deadlock involving the processes and the resources in the cycle.
13. It is easy to design a foolproof deadlock prevention policy.
14. Mutual exclusion should be eliminated from a computer system to avoid deadlock.
15. Resource holding, where a job holds on to one resource while waiting for another one that’s not yet available, could be sidestepped by forcing each job to request, at creation time, every resource it will need to run to completion.
16. In many cases, there exists at least one allocation of resources sequence that will allow jobs to continue without becoming deadlocked.
17. According to the Banker’s Algorithm an unsafe state always leads to deadlock.
18. The operating system must be sure never to satisfy a request that moves it from a safe state to an unsafe one.
19. Although the Banker’s Algorithm has been used to avoid deadlocks in systems with a few resources, it isn’t always practical for most systems.
20. There are several recovery algorithms, but they all have one feature in common: they all require at least one victim, an expendable job, which, when removed from the deadlock, will free the system.
21. Starvation is the result of the liberal allocation of resources.
22. Once starvation has been detected, the system can block new jobs until the starving jobs have been satisfied.
23. In the dining philosophers problem there are five philosophers but only four forks.
24. When recovering from deadlock, jobs close to completion are usually left alone.
25. A problem with the Banker’s Algorithm is that resources aren’t well utilized because the algorithm assumes the worst case and, as a result, keeps vital resources unavailable to guard against unsafe states.
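The safety check behind items 17-19 and 25 (the core of the Banker's Algorithm) can be sketched as follows; the job names and resource counts below are hypothetical:

```python
def is_safe(available, allocation, maximum):
    """Banker's safety check: True if some sequence lets every job finish.
    available: free units per resource type; allocation/maximum: per job."""
    need = {j: [m - a for m, a in zip(maximum[j], allocation[j])]
            for j in allocation}
    work = list(available)
    done = set()
    progress = True
    while progress:
        progress = False
        for j in allocation:
            if j not in done and all(n <= w for n, w in zip(need[j], work)):
                # j can run to completion, then returns everything it holds
                work = [w + a for w, a in zip(work, allocation[j])]
                done.add(j)
                progress = True
    return len(done) == len(allocation)

# Hypothetical safe state: one resource type, 10 units total, 2 free
safe = is_safe([2], {"J1": [4], "J2": [2], "J3": [2]},
                    {"J1": [6], "J2": [3], "J3": [4]})
# Hypothetical unsafe state: 1 unit free cannot satisfy any job's need
risky = is_safe([1], {"J1": [4], "J2": [4]},
                     {"J1": [6], "J2": [7]})
```

The conservatism item 25 describes is visible here: the check assumes every job may demand its full maximum, so it refuses allocations that would often work out fine in practice.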
MULTIPLE CHOICE
1. Interactive systems generally improve the use of resources through ____ resource sharing, but this resource sharing capability also increases the possibility of deadlocks.
   a. interspersed
   b. group
   c. dynamic
   d. static
2. In what type of system are deadlocks most critical?
   a. batch
   b. interactive
   c. real-time
   d. general purpose
3. Consider the case of a home construction company with two application programs, purchasing (P1) and sales (P2), which are active at the same time. They each need to access two files, inventory (F1) and suppliers (F2), to update daily transactions. The following series of events will cause a deadlock. Fill in the missing event in the sequence.
   1. Purchasing (P1) accesses the supplier file (F2).
   2. Sales (P2) accesses the inventory file (F1).
   3. Purchasing (P1) doesn't release the supplier file (F2) but requests the inventory file (F1); P1 is blocked because F1 is being held by P2.
   4. Meanwhile, ____
   a. sales (P2) doesn't release the inventory file (F1) but requests the supplier file (F2)
   b. sales (P2) does release the inventory file (F1) and then requests the supplier file (F2)
   c. purchasing (P1) does release the supplier file (F2), which is then requested by sales (P2)
   d. purchasing (P1) exits
4. Fill in the missing event that causes deadlock in a database. There are two processes (P1 and P2), each of which needs to update two records (R1 and R2), and the following sequence leads to a deadlock:
   1. P1 accesses R1 and locks it.
   2. P2 accesses R2 and locks it.
   3. ____
   4. P2 requests R1, which is locked by P1.
   a. P2 releases R2.
   b. P1 requests R1 again.
   c. P1 requests R2, which is locked by P2.
   d. P2 releases R1.
5. Failure to lock database records before updating them may result in a ____ between processes.
   a. struggle
   b. race
   c. deadlock
   d. livelock
6. Fill in the missing step in the following deadlock situation. Two users from the local board of education are each running a program (P1 and P2), and both programs will eventually need two tape drives to copy files from one tape to another. Only two tape drives are available and they're allocated on an "as requested" basis. Soon the following sequence transpires:
   1. P1 requests tape drive 1 and gets it.
   2. ____
   3. P1 requests tape drive 2 but is blocked.
   4. P2 requests tape drive 1 but is blocked.
   a. P1 requests tape drive 2.
   b. P2 requests tape drive 2 and gets it.
   c. P2 requests tape drive 1 but is blocked.
   d. P1 releases tape drive 1.
7. In modern printing systems a disk accepts output from several users and acts as a temporary storage area for all output until the printer is ready to accept it. This process is called ____.
   a. buffering
   b. lagging
   c. spooling
   d. spoofing
8. How does deadlock occur on a modern printer?
   a. the network connection for the printer overflows with too many requests to use the printer
   b. too many users attempt to access the printer at the same time
   c. the buffer fills up with too many print jobs and the printer cannot decide which one to print
   d. the printer needs all of a job's output before it will begin printing, but the spooling system fills the available disk space with only partially completed output
9. A network that's congested or has filled a large percentage of its I/O buffer space can become deadlocked if it doesn't have ____ to control the flow of messages through the network.
   a. procedures
   b. protocols
   c. policies
   d. rules
10. Fill in the missing event that causes livelock. At an insurance company the system performs many daily transactions. One day the following series of events ties up the system:
   1. Process P1 wishes to show a payment so it issues a command to read the balance, which is stored in cylinder 20 of a disk pack.
   2. ____
   3. P2 gains control of the I/O channel and issues a command to write someone else's payment to a record stored in cylinder 310. If the command is not "locked out," P2 will be put on hold while the control unit moves the arm to cylinder 310.
   4. Because P2 is "on hold," the channel is free to be captured again by P1, which reconfirms its command to "read from cylinder 20."
   5. Since the last command from P2 had forced the arm mechanism to cylinder 310, the disk control unit begins to reposition the arm to cylinder 20 to satisfy P1. The I/O channel would be released because P1 is once again put on hold, so it could be captured by P2, which issues a WRITE command only to discover that the arm mechanism needs to be repositioned.
   a. While the control unit is moving the arm to cylinder 20, P1 is put on hold and the I/O channel is free to process the next I/O request.
   b. P1 discovers that another process has locked the portion of the disk it needs to access.
   c. P2 is put on hold while the control unit moves the arm to satisfy P1's request.
   d. P1 is unable to find the information it needs, so requests a different READ operation for a different cylinder.
11. What is a condition that causes deadlock?
   a. mutual exclusion
   b. resource holding
   c. circular wait
   d. All of the above
12. ____ is the act of allowing only one process to have access to a dedicated resource.
   a. No preemption
   b. Circular wait
   c. Resource holding
   d. Mutual exclusion
13. ____ occurs when two processes do not release control of resources they are using.
   a. No preemption
   b. Circular wait
   c. Resource holding
   d. Mutual exclusion
14. ____ allows a resource to be held by a process as long as it is needed.
   a. No preemption
   b. Circular wait
   c. Resource holding
   d. Mutual exclusion
15. Who developed a technique for modeling deadlock using a directed graph?
   a. Havender
   b. Holt
   c. Dijkstra
   d. Lane & Mooney
16. In a directed graph used to model deadlock, which of the following represents deadlock?
   a. a solid arrow
   b. a dashed arrow
   c. a cycle
   d. any path
17. Assume the following events and actions take place. Which of the following statements is true?
   Event  Action
   1      P1 requests and is allocated the printer R1.
   2      P1 releases the printer R1.
   3      P2 requests and is allocated the disk drive R2.
   4      P2 releases the disk R2.
   5      P3 requests and is allocated the plotter R3.
   6      P3 releases the plotter R3.
   a. there is no deadlock
   b. event 4 caused deadlock
   c. event 5 caused deadlock
   d. event 6 caused deadlock
18. Assume the following events and actions take place. Which of the following statements is true?
   Event  Action
   1      P1 requests and is allocated R1.
   2      P2 requests and is allocated R2.
   3      P3 requests and is allocated R3.
   4      P1 requests R2.
   5      P2 requests R3.
   6      P3 requests R1.
   a. there is no deadlock
   b. event 4 caused deadlock
   c. event 5 caused deadlock
   d. event 6 caused deadlock
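Questions 17-18 can be checked mechanically: build the directed graph (resource-to-process edges for allocations, process-to-resource edges for requests) and look for a cycle. A small sketch, with the edge lists taken from question 18:

```python
def has_cycle(graph):
    """Depth-first search for a cycle in a directed graph.
    graph: dict mapping a node to the list of nodes it points to."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / in progress / done
    color = {}

    def visit(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            state = color.get(succ, WHITE)
            if state == GRAY:             # back edge: cycle found
                return True
            if state == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color.get(n, WHITE) == WHITE and visit(n) for n in list(graph))

# Question 18 after event 5: allocations R->P, requests P->R
g5 = {"R1": ["P1"], "R2": ["P2"], "R3": ["P3"], "P1": ["R2"], "P2": ["R3"]}
# After event 6, P3 requests R1, closing the circle
g6 = dict(g5, P3=["R1"])
# has_cycle(g5) is False; has_cycle(g6) is True, so event 6 caused the deadlock
```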
19. Which of the following is necessary in any computer system?
   a. mutual exclusion
   b. resource holding
   c. no preemption
   d. circular wait
20. What scheme can be used to eliminate circular wait?
   a. "hierarchical ordering"
   b. preemption
   c. saving and restoring job state
   d. requesting all resources before job run
21. Who developed the Banker's Algorithm?
   a. Havender
   b. Holt
   c. Dijkstra
   d. Lane & Mooney
22. What is the first step in reducing a directed graph to eliminate deadlock?
   a. Remove the process that is holding on to the most resources.
   b. Find a process that's waiting only for resource classes that aren't fully allocated.
   c. Find a process that is currently using a resource and not waiting for one.
   d. Find the oldest process and remove it from the graph.
23. What is the simplest deadlock recovery method?
   a. select a nondeadlocked job, preempt the resources it's holding, and allocate them to a deadlocked process so it can resume execution, thus breaking the deadlock
   b. identify which jobs are involved in the deadlock and terminate them one at a time, checking to see if the deadlock is eliminated after each removal
   c. terminate only the jobs involved in the deadlock and ask their users to resubmit them
   d. terminate every job that's active in the system and restart them from the beginning
24. In the "dining philosophers" problem, when can a philosopher pick up a fork?
   a. when there is one available
   b. when there are two available
   c. when no other philosopher is eating
   d. when it is his turn, going in numerical order from one philosopher to the next
25. An algorithm that is used to detect starvation by tracking how long each job has been waiting for resources is the same as ____.
   a. deadlock
   b. aging
   c. preemption
   d. round robin
Study Guide - Chapter 6: Concurrent Processes
TRUE/FALSE
1. Parallel processing is a situation in which two or more processors operate in unison.
2. For multiprocessing systems, the Processor Manager has to coordinate the activity of each processor, as well as synchronize the interaction among the CPUs.
3. Multiprocessors were developed for high-end models of IBM mainframes and VAX midrange computers where the operating system treated additional CPUs as additional resources that could be scheduled for work.
4. The only reason for multiprocessing is to increase computer power.
5. The master/slave configuration is an asymmetric multiprocessing system.
6. In a master/slave system, the master processor is responsible for managing the entire system—all files, devices, memory, and processors.
7. The loosely coupled configuration features several complete computer systems, each with its own memory, I/O devices, CPU, and operating system.
8. In a loosely coupled system, a job may move from one processor to another during execution.
9. The symmetric configuration is best implemented if the processors are all of the same type.
10. In a symmetric configuration, processor scheduling is centralized.
11. The success of process synchronization hinges on the capability of the operating system to make a resource available to other processes while it is being used by one of them.
12. The common element in all synchronization schemes is to allow a process to finish work on a critical region of the program before other processes have access to it.
13. Test-and-set is a single indivisible machine instruction known simply as TS and was introduced by IBM for its multiprocessing System 360/370 computers.
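Item 13's TS instruction can be mimicked in Python for illustration. Python has no hardware test-and-set, so a threading.Lock stands in for the instruction's indivisibility (an assumption of this sketch, not how a real TS instruction works):

```python
import threading

class TestAndSetLock:
    """Busy-waiting lock built on a simulated atomic test-and-set."""
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()   # stands in for hardware atomicity

    def _test_and_set(self):
        # Indivisibly: return the old value and leave the flag set to True
        with self._atomic:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        while self._test_and_set():       # spin while another process holds it
            pass

    def release(self):
        self._flag = False

counter = 0
ts_lock = TestAndSetLock()

def bump():
    global counter
    for _ in range(1000):
        ts_lock.acquire()
        counter += 1                      # critical region
        ts_lock.release()

threads = [threading.Thread(target=bump) for _ in range(2)]
for th in threads: th.start()
for th in threads: th.join()
# counter == 2000
```

The busy waiting here also illustrates why TS can starve a process: waiting processes gain access in an arbitrary order.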
14. The WAIT and SIGNAL operations must be issued at the same time.
15. A semaphore can have a negative value.
16. In parallel computations mutual exclusion is achieved automatically because each operation is handled in order, one at a time.
17. The classic problem of producers and consumers is one in which one process produces some data that another process consumes later.
18. The problem of masters and slaves arises when two types of processes need to access a shared resource such as a file or database.
19. Multiprocessing can refer to one job using several processors to execute sets of instructions in parallel.
20. For many computational purposes, serial processing is sufficient; it’s easy to implement and fast enough for most users.
21. Implied parallelism includes automatic detection by the compiler of instructions that can be performed in parallel.
22. Multiplying two 3x3 matrices requires 45 operations when performed in parallel using three processors.
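Item 22's count is easy to verify: each of the nine entries of a 3x3 product needs 3 multiplications and 2 additions, for 45 operations in total. A one-line check:

```python
def matmul_ops(n):
    """Operations in a naive n x n matrix multiply."""
    mults = n * n * n             # n products per entry, n*n entries
    adds = n * n * (n - 1)        # n-1 additions per entry
    return mults + adds

total = matmul_ops(3)             # 27 multiplications + 18 additions = 45
# With three processors computing one row each, the 45 operations
# split into 15 per processor.
```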
23. Threads share the same resources as the process that created them.
24. Thread Information Blocks contain basic information about a thread such as its ID, state, and a pointer to the process that created it.
25. Augusta Ada Byron is regarded as the world’s first programmer for her work on Charles Babbage’s Analytical Engine in the 1830s.
MULTIPLE CHOICE
1. What is another name for parallel processing?
   a. uniprocessing
   b. multiprocessing
   c. shared processing
   d. divided processing
2. Since the mid-____s, when the costs of CPU hardware began to decline, multiprocessor systems with dozens of CPUs have found their way into business environments.
   a. 1960
   b. 1970
   c. 1980
   d. 1990
3. What is a disadvantage of the master/slave configuration?
   a. reliability is no higher than for a single processor system
   b. can lead to poor use of resources
   c. increases the number of interrupts
   d. All of the above
4. What is a disadvantage of a loosely coupled system?
   a. can be difficult to detect when a processor has failed
   b. prone to catastrophic system failures
   c. jobs take longer to complete as they are rotated among processors
   d. requires a lot of interprocessor communication to manage centralized resources
5. Which configuration is the most difficult to implement?
   a. master/slave
   b. loosely coupled
   c. symmetric configuration
   d. shared load
6. Which of the following is true of a symmetric configuration?
   a. a job runs on only one processor from start to finish
   b. prone to conflicts as several processors try to access the same resource at the same time
   c. each processor may use a different scheduling algorithm
   d. does not balance loads well
7. An error in ____ could lead to a "missed customer" problem.
   a. communication
   b. synchronization
   c. processing
   d. math
8. A ____ of processing must be handled as a unit.
   a. line
   b. segment
   c. critical region
   d. semaphore
9. Lock and key synchronization must take place within a single ____.
   a. instruction
   b. computer
   c. processor
   d. machine cycle
10. A problem with test-and-set is that when many processes are waiting to enter a critical region, ____ could occur because the processes gain access in an arbitrary fashion.
   a. starvation
   b. synchronization
   c. deadlock
   d. an error
11. ____ sets the process's process control block (PCB) to the blocked state and links it to the queue of processes waiting to enter this particular critical region.
   a. TS
   b. SIGNAL
   c. WAIT
   d. STOP
12. What are the two operations identified by Dijkstra to be performed on a semaphore?
   a. P and V
   b. WAIT and SIGNAL
   c. Test-and-set
   d. check and update
13. When using a semaphore, a value of ____ indicates that a critical region is in use.
   a. -100
   b. 0
   c. 100
   d. 9999
14. Operations on semaphore s enforce the concept of mutual exclusion, which is necessary to avoid having two operations attempt to execute at the same time. The name traditionally given to this semaphore in the literature is ____.
   a. phore
   b. exe
   c. signal
   d. mutex
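Questions 11-14 fit together in a few lines of Python: a threading.Semaphore initialized to 1 plays the role of the mutex (1 means the critical region is free, 0 means in use), acquire() is WAIT (P), and release() is SIGNAL (V):

```python
import threading

mutex = threading.Semaphore(1)   # 1: critical region free, 0: in use
counter = 0

def worker(iterations):
    global counter
    for _ in range(iterations):
        mutex.acquire()          # WAIT (P): block if the region is in use
        counter += 1             # critical region
        mutex.release()         # SIGNAL (V): free the region

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for th in threads: th.start()
for th in threads: th.join()
# counter == 40000: no update was lost to a race
```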
15. How many semaphores are used in the producer and consumer problem?
   a. 1
   b. 2
   c. 3
   d. 4
16. Who proposed a solution to the readers and writers problem that did not result in starvation for readers or writers?
   a. Hoare
   b. Courtois
   c. Heymans
   d. Parnas
17. How many semaphores are used in the solution to the readers and writers problem that does not involve starvation?
   a. 0
   b. 1
   c. 2
   d. 3
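A classic sketch of the producer and consumer problem using three semaphores: a mutex guarding the buffer plus counting semaphores for empty and full slots. The buffer capacity and item count below are arbitrary:

```python
import threading
from collections import deque

CAPACITY = 5
buffer = deque()
empty = threading.Semaphore(CAPACITY)  # free slots remaining
full = threading.Semaphore(0)          # filled slots available
mutex = threading.Semaphore(1)         # guards the buffer itself

produced = list(range(20))
consumed = []

def producer():
    for item in produced:
        empty.acquire()                # wait for a free slot
        mutex.acquire()
        buffer.append(item)
        mutex.release()
        full.release()                 # signal: one more item available

def consumer():
    for _ in range(len(produced)):
        full.acquire()                 # wait for an item
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()                # signal: one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
# consumed == produced, in order, with neither overrun nor underrun
```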
18. When a computer evaluates the expression A = 3 * B * C + 4 / (D + E) ** (F – G) in a sequential manner, what is evaluated as a first step?
   a. F – G
   b. 3 * B
   c. C + 4
   d. B * C
19. What type of programming task benefits from implicit parallelism?
   a. matrix multiplication
   b. array processing using a loop
   c. sorting and merging files
   d. All of the above
20. Most current operating systems support the implementation of threads, or ____, which have become part of numerous application packages.
   a. parallel processes
   b. lightweight processes
   c. heavyweight processes
   d. semaphores
21. What is shared by the threads in a process?
   a. processor registers
   b. program counter
   c. data area
   d. status
22. Once a thread is in the ready state, which state can it enter next?
   a. blocked
   b. finished
   c. waiting
   d. running
23. What organization was responsible for the development of the Ada programming language?
   a. Microsoft
   b. Sun Microsystems
   c. Department of Defense
   d. IBM
24. The source code of a Java program is first compiled into an intermediate language called Java ____, which are platform-independent.
   a. beans
   b. nibs
   c. bits
   d. bytecodes
25. What type of process does Linux execute according to FIFO?
   a. non-preemptible "real time" processes
   b. preemptible "real time" processes
   c. "normal" processes
   d. "batch" processes