13 February 2017

Operating Systems Concepts Study Material Questions Answers PDF

Source: Campus Placement Tricks - Useful and helpful for Recruitment tests and exams.

www.matterhere.com - Nareddula Rajeev Reddy (NRR)

Operating System Concepts

1. What is MUTEX ? What is Semaphore ?
Ans:- A mutex is a program object that allows multiple program threads to share the same resource, such as file access, but not simultaneously. When a program is started, a mutex is created with a unique name. After this stage, any thread that needs the resource must lock the mutex against other threads while it is using the resource. The mutex is set to unlock when the data is no longer needed or the routine is finished.
A mutex is owned by a thread/process. So once a thread locks it, then other threads/processes will either spin or block on the mutex. Whereas, semaphore allows one or more threads/processes to share the resource.
Example explaining Mutex and Semaphore:
Mutex: 1. Serial access: one toilet is available, with a key for it. One person will have the key and use the toilet, making the others wait for access; once done, he gives the key to the next person. 2. A mutex is a semaphore with value 1.
Semaphore: 1. Access to N resources: say, for example, four toilets are there, each opened by a common key. At the start the semaphore value is set to 4 (4 toilets are free). Once any user gets in, the semaphore decrements the value; once done, he will increment it, signalling it's FREE.
Mutex: A mutex is the synchronization object used to achieve serialized access, i.e. one by one, to a single resource. E.g.: in an interview, only one person can interview a candidate; after he completes, one more person will come and interview the same candidate.
Semaphore: A semaphore is the synchronization object used to achieve serialized access, i.e. one by one, to N resources. E.g.: in an interview, one person interviews 2 or more candidates sequentially; after he completes, one more person will come and interview the same candidates.
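The toilet analogy above can be sketched with Python's `threading` primitives: a `Lock` standing in for the single-key mutex and a `Semaphore(4)` for the four toilets. The names `toilet_key`, `toilets`, and the user counts are illustrative only.

```python
import threading

toilet_key = threading.Lock()        # mutex: one key, one holder at a time
toilets = threading.Semaphore(4)     # counting semaphore: 4 identical toilets

def use_single_toilet(user):
    with toilet_key:                 # blocks until the key is handed over
        print(f"{user} has the key")

def use_shared_toilets(user):
    with toilets:                    # acquire decrements the count, blocks at 0
        print(f"{user} is inside")   # up to 4 users can be in here at once
    # leaving the block releases, incrementing the count again

use_single_toilet("user0")
threads = [threading.Thread(target=use_shared_toilets, args=(f"user{i}",))
           for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

A `Lock` behaves like a semaphore initialized to 1, as the answer above notes, except that a lock is meant to be released by the same thread that acquired it.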

2. What is the difference between a thread and a process ?
Ans:- A process is an instance of a program running in a computer. It is close in meaning to task, a term used in some operating systems. Like a task, a process is a running program with which a particular set of data is associated so that the process can be kept track of. A thread is code that is to be serially executed within a process. A processor executes threads, not processes, so each application has at least one process, and a process always has at least one thread of execution, known as the primary thread. A process can have multiple threads in addition to the primary thread. Prior to the introduction of multiple threads of execution, applications were all designed to run on a single thread of execution. When a thread begins to execute, it continues until it is killed or until it is interrupted by a thread with higher priority (by a user action or the kernel's thread scheduler). Each thread can run separate sections of code, or multiple threads can execute the same section of code. Threads executing the same block of code maintain separate stacks. Each thread in a process shares that process's global variables and resources.
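The claim that every thread in a process shares that process's global variables can be demonstrated with a minimal Python sketch (the counter value and thread count are arbitrary):

```python
import threading

counter = 0                      # a global shared by all threads in the process
lock = threading.Lock()

def worker():
    global counter
    for _ in range(1000):
        with lock:               # serialize updates to the shared global
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 4000: every thread updated the same variable
```

Separate processes would each get their own copy of `counter`; only threads within one process see each other's writes to it.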

3. What is INODE?
Ans:- Data structures that contain information about the files that are created when unix file systems are created. Each file has an i-node & is identified by an inode number(i-number) in the file system where it resides. inode provides important information on files such as group ownership, access mode(read, write, execute permissions).
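The inode metadata described above (i-number, ownership, access mode) can be read with a stat call. A minimal Python sketch using a throwaway temporary file:

```python
import os
import stat
import tempfile

# Create a scratch file, then read its inode metadata via stat().
fd, path = tempfile.mkstemp()
os.close(fd)

info = os.stat(path)
print("inode number :", info.st_ino)                  # the i-number
print("owner uid/gid:", info.st_uid, info.st_gid)     # ownership
print("access mode  :", stat.filemode(info.st_mode))  # e.g. '-rw-------'
print("link count   :", info.st_nlink)

os.unlink(path)   # dropping the last link frees the inode
```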

4. Explain the working of Virtual Memory.
Ans:- Virtual memory is a technique whereby the system appears to have more memory than it actually does. This is done by time-sharing the physical memory and keeping parts of memory on disk when they are not actively being used.
If your computer lacks the random access memory (RAM) needed to run a program or operation, Windows uses virtual memory to compensate.
Virtual memory combines your computer’s RAM with temporary space on your hard disk. When RAM runs low, virtual memory moves data from RAM to a space called a paging file. Moving data to and from the paging file frees up RAM to complete its work.
The more RAM your computer has, the faster your programs will generally run. If a lack of RAM is slowing your computer, you might be tempted to increase virtual memory to compensate. However, your computer can read data from RAM much more quickly than from a hard disk, so adding RAM is a better solution.

5. How does Windows NT support Multitasking?
Ans:- Pre-emptive multitasking.

6. Explain the Unix Kernel ?
Ans:- The UNIX kernel is the heart of the operating system. The UNIX kernel is loaded first when a UNIX system is booted, and it handles the allocation of devices, CPU, and memory from that point on.

7. What is Concurrency? Explain Deadlock and Starvation with examples.
Ans:- Concurrency: Two events are said to be concurrent if they occur within the same time interval. Two or more tasks executing over the same time interval are said to execute
concurrently. For our purposes, concurrent doesn't necessarily mean at the same exact instant. For example, two tasks may occur concurrently within the same second but with each task executing within different fractions of the second. The first task may execute for the first tenth of the second and pause, the second task may execute for the next tenth of the second and pause, the first task may start again executing in the third tenth of a second, and so on. Each task may alternate executing. However, the length of a second is so short that it appears that both tasks are executing simultaneously.
Deadlock: Two processes are said to be in a deadlock situation if process A is holding resources required by process B while B is holding resources required by A.
Starvation: This mostly happens in time-sharing systems: a process that needs only a small time slot waits for a large process to finish and release its resources, but the large process holds the resources for a long time (almost forever), so the small process keeps waiting. Such a situation is starvation for the small process.
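The deadlock described above, where A holds what B needs while B holds what A needs, is classically avoided by making every process acquire locks in one global order, so a circular wait can never form. A minimal Python sketch (the lock and thread names are illustrative):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

# Deadlock recipe: thread 1 takes A then B while thread 2 takes B then A.
# Avoidance: every thread acquires the locks in one agreed global order.
def ordered_worker(name):
    first, second = sorted([lock_a, lock_b], key=id)  # one global order
    with first:
        with second:
            print(f"{name} holds both locks")

t1 = threading.Thread(target=ordered_worker, args=("P1",))
t2 = threading.Thread(target=ordered_worker, args=("P2",))
t1.start(); t2.start()
t1.join(); t2.join()
```

With the ordering in place both threads always run to completion; removing it and hard-coding opposite acquisition orders can hang the program.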

8. What are your solution strategies for the Dining Philosophers Problem ?
Ans:- A one-line solution is: don't lock any resources if you cannot complete your work with the resources available. A philosopher should not hold a chopstick unless both chopsticks are available. I think this concept underlies all our operating systems.
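One concrete way to realize "never hold a chopstick without making progress" is resource ordering: each philosopher always picks up the lower-numbered chopstick first, which breaks the circular wait. A minimal Python sketch (5 philosophers, 10 meals each; all figures illustrative):

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i):
    # Resource-ordering solution: always pick up the lower-numbered
    # chopstick first, so a circular wait can never form.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(10):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1            # eating

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)   # every philosopher ate all 10 meals, no deadlock
```

If every philosopher instead grabbed the left chopstick first, all five could hold one chopstick and wait forever for the other: exactly the deadlock the ordering prevents.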

9. Explain Memory Partitioning, Paging, Segmentation.
Ans:- Memory partitioning is the way the kernel and user space areas are distributed in memory. A page is the minimum unit of memory that can be swapped into and out of memory. Modern server operating systems support multiple page sizes, which helps tune OS performance depending on the type of application. Segmentation is a way to keep similar objects in one place: for example, stack data in one place (stack segment), binary code in another (text segment), and data in another (data and BSS segments). Linux doesn't have a segment architecture; AIX has a segment architecture.

10. Explain Scheduling.
Ans:- Every operating system uses a mechanism to execute processes. It maintains a run queue, which sorts processes into their execution order while they wait for their turn. In normal cases, operating systems use priority and round-robin mechanisms.
11. Operating System Security.

12. What are the different process states?
Ans:- A process may be in any one of the following states: 1. NEW 2. READY 3. WAIT 4. RUNNING 5. TERMINATED

13. What is Marshalling?
Ans:- The process of gathering data and transforming it into a standard format before it is transmitted over a network so that the data can transcend network boundaries. In order for an object to be moved around a network, it must be converted into a data stream that corresponds with the packet structure of the network transfer protocol. This conversion is known as data marshalling. Data pieces are collected in a message buffer before they are marshaled. When the data is transmitted, the receiving computer converts the marshaled data back into an object.
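Marshalling into a standard wire format can be sketched with Python's `struct` module: packing a small record into network (big-endian) byte order and unmarshalling it back on the receiving side. The field layout and values are invented for illustration.

```python
import struct

# A hypothetical record: a 32-bit id, a 16-bit code, a 2-byte status.
record = (42, 7, b"OK")

# Marshal: flatten the object into a byte stream in network byte order.
# '!' = network (big-endian); 'I' = uint32, 'H' = uint16, '2s' = 2 bytes.
wire = struct.pack("!IH2s", *record)
print(wire.hex())            # 0000002a00074f4b

# Unmarshal: the receiver rebuilds the object from the byte stream.
recovered = struct.unpack("!IH2s", wire)
print(recovered == record)   # True
```

Fixing the byte order in the format string is what lets the data "transcend network boundaries": both machines agree on the layout regardless of their native endianness.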

14. Define and explain COM?
Ans:- COM is a specification(Standards). COM has two aspects:- a) COM specifications provide a definition for what object is. B) COM provides services or blue prints for creation of object and communication between client and server.COM is loaded when the first object of the component is created.

15. Why paging is used ?
Ans:- Paging is a solution to the external fragmentation problem: it permits the logical address space of a process to be noncontiguous, thus allowing a process to be allocated physical memory wherever it is available.

16. Difference - Loading and Linking ?
Ans:- Linking: Resolving unresolved references to code. Loading: Actually loading program to memory (allocating addresses to segments).

17. What is the difference between a MUTEX and a binary semaphore?
Ans: A binary semaphore and a mutex behave very similarly. Semaphores come in different types, such as counting semaphores and binary semaphores. Put another way, a mutex is a semaphore with a count of 1, meaning synchronization is achieved so that threads/processes access the resource one by one. The practical difference is ownership: a mutex is released by the thread that locked it, whereas any thread may signal a semaphore.

18. What is multi tasking, multi programming, multi threading?
Ans:- Multi programming: Multiprogramming is the technique of running several programs at a time using timesharing. It allows a computer to do several things at the same time. Multiprogramming creates logical parallelism. The concept of multiprogramming is that the operating system keeps several jobs in memory simultaneously. The operating system selects a job from the job pool and starts executing it; when that job needs to wait for any I/O operation, the CPU is switched to another job, so the CPU is never idle. Multi tasking: Multitasking is the logical extension of multiprogramming. The concept of multitasking is quite similar to multiprogramming, but the difference is that the switching between jobs occurs so frequently that the users can interact with each program while it is running. This concept is also known as time-sharing. A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared system. Multi threading: An application typically is implemented as a separate process with several threads of control. In some situations a single application may be required to perform several similar tasks; for example, a web server accepts client requests for web pages, images, sound, and so forth. A busy web server may have several clients concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and the amount of time that a client might have to wait for its request to be serviced could be enormous. So it is efficient to have one process that contains multiple threads to serve the same purpose. This approach multithreads the web-server process: rather than creating another process when a request is made, the server creates a separate thread to service the request.
To get the advantages like responsiveness, Resource sharing economy and utilization of multiprocessor architectures multithreading concept can be used.

19. What is fragmentation? Different types of fragmentation?
Ans:- Fragmentation occurs in a dynamic memory allocation system when many of the free blocks are too small to satisfy any request. External Fragmentation: External Fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too much external fragmentation occurs, the amount of usable memory is drastically reduced. Total memory space exists to satisfy a request, but it is not contiguous. Internal Fragmentation: Internal fragmentation is the space wasted inside of allocated memory blocks because of restriction on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than requested memory. This size difference is memory internal to a partition, but not being used.
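Internal fragmentation is easy to quantify. With fixed-size allocation units, say 4 KiB pages (figures chosen for illustration), any request that is not an exact multiple of the unit wastes the tail of its last block:

```python
# Internal fragmentation with fixed 4 KiB pages: a process asking for
# 10,000 bytes gets 3 whole pages and wastes the tail of the last one.
PAGE = 4096
request = 10_000

pages = -(-request // PAGE)          # ceiling division -> 3 pages
allocated = pages * PAGE             # 12,288 bytes actually reserved
internal_frag = allocated - request  # 2,288 bytes wasted inside the block
print(pages, allocated, internal_frag)
```

External fragmentation is the opposite picture: enough total free bytes exist, but scattered in pieces too small to satisfy the request contiguously.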

20. What is Context Switch?
Ans:- Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a context switch. Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers).

21. What is CPU Scheduler?
Ans:- Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. CPU scheduling decisions may take place when a process: 1) Switches from running to waiting state. 2) Switches from running to ready state. 3) Switches from waiting to ready. 4) Terminates. Scheduling under 1 and 4 is non-preemptive. All other scheduling is preemptive.

22. What is Dispatcher?
Ans:- The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves switching context, switching to user mode, and jumping to the proper location in the user program to restart that program. Dispatch latency is the time it takes for the dispatcher to stop one process and start another running.

23. What is hard disk and what is its purpose?
Ans:- Hard disk is the secondary storage device, which holds the data in bulk, and it holds the data on the magnetic medium of the disk. Hard disks have a hard platter that holds the magnetic medium, the magnetic medium can be easily erased and rewritten, and a typical desktop machine will have a hard disk with a capacity of between 10 and 40 gigabytes. Data is stored onto the disk in the form of files.

24. What is DRAM? In which form does it store data?
Ans:- DRAM is not the best, but it's cheap, does the job, and is available almost everywhere you look. DRAM data resides in a cell made of a capacitor and a transistor. The capacitor tends to lose data unless it's recharged every couple of milliseconds, and this recharging tends to slow down the performance of DRAM compared to speedier RAM types.

25. What is cache memory?
Ans:- Cache memory is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. As the microprocessor processes data, it looks first in the cache memory and if it finds the data there (from a previous reading of data), it does not have to do the more time-consuming reading of data from larger memory.

26. What is a Safe State and what is its use in deadlock avoidance?
Ans:- When a process requests an available resource, system must decide if immediate allocation leaves the system in a safe state. System is in safe state if there exists a safe sequence of all processes. Deadlock Avoidance: ensure that a system will never enter an unsafe state.
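The safe-state test is the heart of the Banker's algorithm: the system is safe if some order exists in which every process can obtain its remaining need, run to completion, and release its allocation. A minimal Python sketch using textbook-style figures (all numbers illustrative):

```python
# Safety check: is there an order in which every process can finish?
def is_safe(available, allocation, need):
    work = available[:]                  # resources currently free
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion, then release what it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))   # True: P1, P3, P4, P2, P0 works
```

Deadlock avoidance simply refuses any allocation whose resulting state would make this function return False.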

27. What is a Real-Time System?
Ans:- A real time process is a process that must respond to the events within a certain time period. A real time operating system is an operating system that can run real time processes successfully.

28. What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?
Ans:- Thrashing is computer activity that makes little or no progress, usually because memory or other resources have become exhausted or too limited to perform needed operations. Thrashing is caused by under-allocation of the minimum number of pages required by a process, forcing it to continuously page fault. The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming. It can be eliminated by reducing the level of multiprogramming.

29. What is the difference between Hard and Soft real-time systems?
Ans:- A hard real-time system guarantees that critical tasks complete on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time it takes the operating system to finish any request made of it. A soft real-time system is one where a critical real-time task gets priority over other tasks and retains that priority until it completes. As in hard real-time systems, kernel delays need to be bounded.

30. What is Throughput, Turnaround time, waiting time and Response time?
Ans:- Throughput is the number of processes that complete their execution per time unit. Turnaround time is the amount of time to execute a particular process. Waiting time is the amount of time a process has been waiting in the ready queue. Response time is the amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environment).
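These metrics become concrete with a tiny FCFS example: three processes, all arriving at time 0, with hypothetical burst times.

```python
# FCFS example (hypothetical burst times, all processes arriving at time 0):
bursts = {"P1": 24, "P2": 3, "P3": 3}

clock = 0
waiting, turnaround = {}, {}
for name, burst in bursts.items():     # FCFS: run in arrival order
    waiting[name] = clock              # time spent in the ready queue
    clock += burst
    turnaround[name] = clock           # completion time minus arrival (0)

print(waiting)                         # {'P1': 0, 'P2': 24, 'P3': 27}
print(turnaround)                      # {'P1': 24, 'P2': 27, 'P3': 30}
print(sum(waiting.values()) / 3)       # average waiting time: 17.0
```

For a non-preemptive policy like FCFS with simultaneous arrivals, each process's response time equals its waiting time, and throughput here is 3 processes per 30 time units.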

31. What are the basic functions of an operating system?
Ans:- The operating system controls and coordinates the use of the hardware among the various application programs for various users. The operating system acts as resource allocator and manager: since there are many possibly conflicting requests for resources, the operating system must decide which requests are granted so that the computer system operates efficiently and fairly. The operating system is also a control program that controls user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of I/O devices.

32. What is RAM ?
Ans: RAM is an acronym for random access memory, a type of computer memory that can be accessed randomly; that is, any byte of memory can be accessed without touching the preceding bytes. RAM is the most common type of memory found in computers and other devices, such as printers. There are two basic types of RAM: dynamic RAM (DRAM) and static RAM (SRAM). The two types differ in the technology they use to hold data, dynamic RAM being the more common type. Dynamic RAM needs to be refreshed thousands of times per second. Static RAM does not need to be refreshed, which makes it faster, but it is also more expensive than dynamic RAM. Both types of RAM are volatile, meaning that they lose their contents when the power is turned off. In common usage, the term RAM is synonymous with main memory, the memory available to programs. For example, a computer with 8M RAM has approximately 8 million bytes of memory that programs can use. In contrast, ROM (read-only memory) refers to special memory used to store programs that boot the computer and perform diagnostics. Most personal computers have a small amount of ROM (a few thousand bytes). In fact, both types of memory (ROM and RAM) allow random access. To be precise, therefore, RAM should be referred to as read/write RAM and ROM as read-only RAM.

33. What is a binary semaphore? What is its use?
Ans:- A binary semaphore is one, which takes only 0 and 1 as values. They are used to implement mutual exclusion and synchronize concurrent processes.

34. List Coffman's conditions that lead to a deadlock.
Ans:- 1. Mutual Exclusion: Only one process may use a critical resource at a time. 2. Hold & Wait: A process may be allocated some resources while waiting for others. 3. No Pre-emption: No resource can be forcibly removed from a process holding it. 4. Circular Wait: A closed chain of processes exists such that each process holds at least one resource needed by another process in the chain.

35. What are short, long and medium-term scheduling?
Ans:- Long term scheduler determines which programs are admitted to the system for processing. It controls the degree of multiprogramming. Once admitted, a job becomes a process.
Medium term scheduling is part of the swapping function. This relates to processes that are in a blocked or suspended state. They are swapped out of real-memory until they are ready to execute. The swapping-in decision is based on memory-management criteria.
Short term scheduler, also known as the dispatcher, executes most frequently and makes the finest-grained decision of which process should execute next. This scheduler is invoked whenever an event occurs. It may lead to interruption of one process by preemption.

36. When is a system in safe state?
Ans:- The set of dispatchable processes is in a safe state if there exists at least one temporal order in which all processes can be run to completion without resulting in a deadlock.

37. What is the Translation Lookaside Buffer (TLB)?
Ans:- In a cached system, the base addresses of the last few referenced pages are maintained in registers called the TLB, which aids faster lookup. The TLB contains those page-table entries that have been most recently used. Normally, each virtual memory reference causes two physical memory accesses: one to fetch the appropriate page-table entry, and one to fetch the desired data. With a TLB in between, this is reduced to just one physical memory access in the case of a TLB hit.
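The two-accesses-versus-one effect can be sketched by modelling the TLB as a small dictionary in front of the full page table. The page/frame numbers, TLB size, and FIFO eviction below are illustrative simplifications of real hardware:

```python
# A TLB sketched as a tiny dict in front of the full page table:
# a hit answers in one lookup; a miss falls back to the page table.
page_table = {0: 5, 1: 9, 2: 3, 3: 7}   # page -> frame (hypothetical)
tlb = {}
TLB_SIZE = 2
hits = misses = 0

def translate(page):
    global hits, misses
    if page in tlb:                      # TLB hit: one fast lookup
        hits += 1
    else:                                # TLB miss: walk the page table,
        misses += 1                      # then cache the entry
        if len(tlb) >= TLB_SIZE:
            tlb.pop(next(iter(tlb)))     # evict the oldest entry (FIFO)
        tlb[page] = page_table[page]
    return tlb[page]

for p in [0, 1, 0, 1, 2, 2]:
    translate(p)
print(hits, misses)                      # 3 hits, 3 misses
```

The locality visible in the reference string is exactly what makes a tiny TLB effective: repeated pages hit without touching the page table.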

38. What is cycle stealing?
Ans:- We encounter cycle stealing in the context of Direct Memory Access (DMA). Either the DMA controller can use the data bus when the CPU does not need it, or it may force
the CPU to temporarily suspend operation. The latter technique is called cycle stealing. Note that cycle stealing can be done only at specific break points in an instruction cycle.

39. What is busy waiting?
Ans:- The repeated execution of a loop of code while waiting for an event to occur is called busy-waiting. The CPU is not engaged in any real productive activity during this period, and the process does not progress toward completion.
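A minimal sketch of busy waiting in Python: one thread spins on a flag while the main thread eventually sets it. (In real code a blocking primitive such as `threading.Event` avoids burning CPU like this; the flag name and delay are illustrative.)

```python
import threading
import time

flag_set = False          # the "event" being waited for

def spin_waiter():
    # Busy waiting: re-check the condition in a tight loop,
    # consuming CPU without doing any productive work.
    while not flag_set:
        pass
    print("event observed")

t = threading.Thread(target=spin_waiter)
t.start()
time.sleep(0.1)           # the waiter spins fruitlessly during this time
flag_set = True           # another thread finally signals the event
t.join()
```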

40. Explain the popular multiprocessor thread-scheduling strategies.
Ans:- Load Sharing: Processes are not assigned to a particular processor. A global queue of threads is maintained. Each processor, when idle, selects a thread from this queue. Note that load balancing refers to a scheme where work is allocated to processors on a more permanent basis.
Gang Scheduling: A set of related threads is scheduled to run on a set of processors at the same time, on a 1-to-1 basis. Closely related threads / processes may be scheduled this way to reduce synchronization blocking, and minimize process switching. Group scheduling predated this strategy.
Dedicated processor assignment: Provides implicit scheduling defined by the assignment of threads to processors. For the duration of program execution, each program is allocated a set of processors equal in number to the number of threads in the program. Processors are chosen from the available pool.
Dynamic scheduling: The number of threads in a program can be altered during the course of execution.

41. In loading programs into memory, what is the difference between load-time dynamic
linking and run-time dynamic linking?
Ans:- For load-time dynamic linking: The load module to be loaded is read into memory. Any reference to a target external module causes that module to be loaded, and the references are updated to a relative address from the start base address of the application module.
With run-time dynamic loading: Some of the linking is postponed until actual reference during execution. Then the correct module is loaded and linked.

42. What are demand-paging and pre-paging?
Ans:- With demand paging, a page is brought into memory only when a location on that page is actually referenced during execution. With pre-paging, pages other than the one demanded by a page fault are brought in. The selection of such pages is done based on common access patterns, especially for secondary memory devices.

43. What is SMP?
Ans:- To achieve maximum efficiency and reliability, a mode of operation known as symmetric multiprocessing is used. In essence, with SMP any process or thread can be assigned to any processor.

44. What is process spawning?
Ans:- When the OS creates a process at the explicit request of another process, this action is called process spawning.

45. What is process migration?
Ans:- It is the transfer of a sufficient amount of the state of a process from one machine to the target machine.

46. What is an idle thread?
Ans:- The special thread a dispatcher will execute when no ready thread is found.

47. Explain the meaning of Kernel ?
Ans:- The kernel is the core that provides basic services for all other parts of the operating system. It is the kernel that loads first and remains in the main memory of the computer system. It provides all essential operations and services needed by applications. The kernel takes responsibility for managing memory, tasks, disks, and processes.

48. What is a command interpreter?
Ans:- A command interpreter is a program which reads the instructions given by the user. It then translates these instructions into the context of the operating system followed by the execution. Command interpreter is also known as ‘shell’.

49. What is a daemon?
Ans:- A daemon is a program that runs in the background without user interaction. A daemon runs in a multitasking operating system like UNIX. A daemon is initiated and controlled by special programs known as 'processes'. Usually daemons have the suffix letter 'd'; for instance, 'syslogd', the daemon that handles the system log.

50. Explain the basic functions of process management.
Ans:- A process is an integral part of the operating system. Resources are allocated to processes by the operating system. The functions are:
- Allocation and protection of resources
- Synchronization enabling for all processes
- Process protection

51. What is a named pipe?
Ans:- A named pipe is an extension of the 'pipe' concept in multitasking operating systems. Interprocess communication can be implemented using a named pipe. A traditional pipe is unnamed and persists only as long as the process is executing, whereas a named pipe is system-persistent and outlives a process's running time. It can be removed when no longer required.

52. What is pre-emptive and non-preemptive scheduling?
Ans:- Preemptive scheduling: Preemptive scheduling is priority-driven. The highest-priority ready process should always be the one currently using the CPU.
Non-preemptive scheduling: Once a process enters the running state, it is not removed from the scheduler until it finishes its service time.

53. What is interrupt latency?
Ans:- The time between a device generating an interrupt and the servicing of that interrupt is known as interrupt latency. In many operating systems, devices are serviced soon after the device's interrupt handler executes. Interrupt latency can be affected by interrupt controllers, interrupt masking, and the methods an operating system uses to handle interrupts.

54. What is spin lock?
Ans:- A spin lock is a lock in which a thread simply waits in a loop ('spins'), repeatedly checking until the lock becomes available. A spin lock is a form of busy waiting, as the thread remains active without performing a useful task. Spin locks must be released explicitly, although some locks are released automatically when the thread blocks.

55. What is an operating system? What are the functions of an operating system?
Ans:- An operating system is an interface between hardware and software. OS is responsible for managing and co-ordinating the activities of a computer system.
Functions of an operating system: Every operating system has two main functions –
1. The operating system makes sure that data is saved in the required place on the storage media, that programs are loaded into memory properly, and that the file system keeps the files in order.
2. The OS enables the hardware and software to interact and perform functions like printing, scanning, mouse operations, and web-cam operations. The OS allows application software to interact with the hardware.

56. What is paging? Why is paging used?
Ans:- The OS performs an operation for storing and retrieving data from secondary storage devices for use in main memory; paging is one such memory management scheme. Data is retrieved from storage media by the OS in same-sized blocks called pages. Paging allows the physical address space of a process to be noncontiguous; previously, the whole program had to fit into storage contiguously.
Paging deals with the external fragmentation problem by allowing the logical address space of a process to be noncontiguous, so that the process can be allocated physical memory wherever it is available.
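With fixed-size pages, a logical address splits mechanically into a page number and an offset within the page. A quick sketch assuming a 4 KiB (2^12-byte) page size, address chosen for illustration:

```python
# Splitting a logical address into page number and page offset,
# assuming a 4 KiB (2**12 = 4096-byte) page size.
PAGE_SIZE = 4096          # 12 offset bits
addr = 20_000

page = addr // PAGE_SIZE          # which page: 4
offset = addr % PAGE_SIZE         # where inside it: 3616
print(page, offset)

# The same split expressed with bit operations:
assert page == addr >> 12
assert offset == addr & 0xFFF
```

The page number indexes the page table to find a frame; the offset is carried over unchanged into the physical address.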

57. Difference between a process and a program
Ans:- A program is a set of instructions meant to perform a designated task, whereas a process is the operation that takes those instructions and performs the manipulations specified by the code, called 'execution of instructions'. A process is entirely dependent on a program.
- Processes are separately loadable modules that can execute concurrently, whereas a program describes tasks relating directly to a user's work, such as word processing or running presentation software.

58. What is the meaning of physical memory and virtual memory?
Ans:- Physical memory is the only memory that is directly accessible to the CPU. The CPU reads the instructions stored in physical memory and executes them continuously. The data being operated on is also stored in physical memory in a uniform manner.
Virtual memory is a classification of memory created by using the hard disk to simulate additional RAM, extending the addressable space available to the user. Virtual addresses are mapped onto real addresses.

59. What are the differences between THREAD, PROCESS and TASK?
Ans:- A program in execution is known as a 'process'. A program can have any number of processes. Every process has its own address space.
Threads use the address space of their process. The difference between a thread and a process is that when the CPU switches from one process to another, the current information must be saved in a process descriptor and the information of the new process loaded, whereas switching from one thread to another is simple.
A task is simply a set of instructions loaded into memory. Threads can split themselves into two or more simultaneously running tasks.

60. Difference between NTFS and FAT32
Ans:- The differences are as follows:
NTFS:
- Allows local access to Windows 2000, Windows 2003, and Windows NT with Service Pack 4 and later; other versions may get access to some files.
- Maximum partition size is 2 TB and more.
- Maximum file size is up to 16 TB.
- File and folder encryption is possible.
FAT32:
- Allows local access to Windows 95, Windows 98, Windows ME, Windows 2000, and Windows XP on a local partition.
- Maximum partition size is 2 TB.
- Maximum file size is up to 4 GB.
- File and folder encryption is not possible.

61. Differentiate between RAM and ROM
Ans:- RAM:
- Volatile memory
- Electricity needs to flow continuously
- Program information is stored in RAM
- RAM is read/write memory
- Cost is high
ROM:
- Permanent memory
- Instructions are stored in ROM permanently
- BIOS has the information to boot the system
- ROM is read-only memory
- Access speed is lower

62. What is cache memory? Explain its functions
Ans:- Cache memory is RAM that the CPU can access more quickly than regular RAM. The most recently processed data is stored in cache memory; when the microprocessor starts processing data, it first checks the cache.
The size of each cache block ranges from 1 to 16 bytes. Every location has an index that corresponds to the location of the data to access; this index is known as the address.
The locations also have tags, each containing the index of the datum in main memory that needs to be cached.
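The index/tag scheme described above can be sketched as a tiny direct-mapped cache. The block size, line count, and addresses below are invented for illustration:

```python
# Direct-mapped cache lookup: an address splits into tag, index, offset.
BLOCK = 16          # 16-byte blocks -> 4 offset bits
LINES = 8           # 8 cache lines  -> 3 index bits

def split(addr):
    offset = addr % BLOCK
    index = (addr // BLOCK) % LINES       # which cache line to check
    tag = addr // (BLOCK * LINES)         # identifies the cached block
    return tag, index, offset

cache = [None] * LINES                    # each line remembers one tag

def access(addr):
    tag, index, _ = split(addr)
    hit = cache[index] == tag
    cache[index] = tag                    # fill the line on a miss
    return hit

results = [access(a) for a in (0, 4, 128, 132)]
print(results)      # [False, True, False, True]
```

Addresses 0 and 128 map to the same line with different tags, so the second evicts the first: a conflict miss, the cost of the direct-mapped design.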

63. Differentiate between Complier and Interpreter
- The program's syntax is checked by the compiler, whereas the keywords of the program are checked by the interpreter.
- The complete program is checked at once by the compiler, whereas the interpreter checks it statement by statement in the editor.
- Color coding is provided to the program by the interpreter, which enables self-debugging while authoring a program.
- The interpreter converts each source-code line into machine code and executes it on the fly.
- The compiler takes more time for analyzing and processing the program, whereas the interpreter takes much less time for analyzing and processing it.

64. Describe different job scheduling in operating systems.
Ans:- Job scheduling is the activity of deciding when processes receive the resources they request. First Come First Served: In this scheduling, jobs are served in the order they arrive, so the job that has been waiting longest is served next.
Round Robin Scheduling: A scheduling method, in which every process gets a time slice for running and later it is preempted and the next process gets running. This process is known as time sharing, which provides the effect of all the processes running at the same time.
Shortest Job First: A non-preemptive scheduling method in which the job that will execute in the shortest possible time is chosen next.
Priority Scheduling: A scheduling method in which the highest-priority process is assigned the resource first.
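The round-robin method above can be sketched in a few lines of Python: each process runs for at most one quantum and is then requeued. Burst times and quantum are illustrative.

```python
from collections import deque

# Round-robin sketch: each process runs at most `quantum` ticks,
# then goes to the back of the ready queue.
def round_robin(bursts, quantum):
    queue = deque(bursts.items())
    finish, clock = {}, 0
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)
        clock += run
        if left > run:
            queue.append((name, left - run))   # preempted, requeued
        else:
            finish[name] = clock               # completed at this tick
    return finish

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
```

Short jobs like P3 finish quickly even behind a long job, which is exactly the interactivity time-sharing aims for.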

65. What do you mean by deadlock?
Ans:- Deadlock is a situation in which two or more processes wait for each other to finish their tasks. No progress is made in this situation; it is a standstill resulting from two evenly matched processes waiting in a cyclic configuration.

66. Difference between Primary storage and secondary storage
- Primary memory storages are temporary; where as the secondary storage is permanent.
- Primary memory is expensive and smaller, where as secondary memory is cheaper and larger
- Primary memory storages are faster, where as secondary storages are slower.
- Primary memory storages are connected through data buses to the CPU, whereas the secondary storages are connected through data cables to the CPU.

67. Define Thread.
Ans:- Threads are small processes that are parts of a larger process. A thread is contained inside a process. Different threads in the same process share some resources.

68. What are the advantages of using Thread?
Ans:- Some advantages of using threads are:
- Switching between processes takes longer than switching between threads. - Threads can execute in parallel on a multiprocessor. - Threads can share address spaces.

69. Compare Thread and process.
Threads:
- Share the address space of their process
- Have direct access to the data segment of their process
- Can communicate with other threads of the same process
- Have little switching overhead
- If the main thread is affected, other threads can be affected too
Processes:
- Have their own address space
- Have their own copy of the data segment of the parent process
- Must use IPC to communicate with sibling processes
- Have considerable switching overhead
- A change in a parent process has no effect on its child processes

70. What is Swapping in Operating System ?
Ans:- When you load a file or program, the file is stored in random access memory (RAM). Since RAM is finite, not all files can fit in it; some are kept in a special section of the hard drive called the "swap file". "Swapping" is the act of using this swap file.
Swapping is a mechanism in which a process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution.
