Talk:Scheduling (computing)

Mach threads in Mac OS X
I removed the statement that Mac OS X Mach threads are cooperatively scheduled. This is not true; the linked reference actually states that cooperative threads can be built on top of normal Mach threads. According to the document, there are three scheduling policies:

- the standard policy (THREAD_STANDARD_POLICY), under which threads are scheduled by a system-defined fair algorithm

- the time constraint policy (THREAD_TIME_CONSTRAINT_POLICY), under which threads are scheduled according to real-time constraints, and

- the precedence policy (THREAD_PRECEDENCE_POLICY), which allows a task to specify the importance of a thread relative to the task's other threads. —Preceding unsigned comment added by 92.236.36.82 (talk) 19:53, 2 May 2009 (UTC)

Category
If I had to choose between putting this article into Category:Scheduling (computing) or Category:Scheduling algorithms (the second category is a subcategory of the first), I would choose Category:Scheduling (computing). What do you think? --surueña 13:22, 2005 May 28 (UTC)


 * Agreed. --huwr 00:18, 29 May 2005 (UTC)

Swapping vs. Paging
Could you elaborate why paging in/out is incorrect?


 * Paging is the process of mapping pages of virtual memory to and from physical memory; swapping is the process of moving pages of physical memory to and from secondary storage (disk-based swap files/partitions).
 * According to virtual memory, "Paging is the process of saving inactive virtual memory pages to disk and restoring them to real memory when required." This challenges what you're arguing. The Paging article differentiates "paging" and "swapping", but not in the way you explain. From my understanding of that article, paging in/out covers the swapping in/out definition but can also apply to other kinds of page/disk operations like Memory-mapped I/O. —Preceding unsigned comment added by 66.36.133.101 (talk) 11:19, 6 May 2008 (UTC)
 * I guess, strictly speaking, you could say that only if a memory page cannot be "paged in" (in other words, a page fault occurs due to the page in question not being mapped in the page table), then it gets "swapped in". --Piet Delport 08:47, 25 April 2006 (UTC)

Reply: Without any references, I'm guessing that swapping is applied to whole processes and paging applies to smaller pieces of a process. In other words, if RAM cannot be allocated for a process's Working Set, the process should be medium-term suspended and all of its pages swapped out (or marked as free if they are unchanged from the most recent disk image).

If the sum of all Working Sets fits within RAM, a single page can be paged out and another one paged in. A good paging algorithm places inactive pages on an inactive queue but does not page them out until something needs to be paged in (well, they are still likely to be reused and reusing them is nearly overhead-free.) — Preceding unsigned comment added by 68.183.178.29 (talk) 23:06, 1 February 2012 (UTC)

Software vs. Hardware
What about the schedulers in CPUs? Should that be a separate article, or should it be included here?the1physicist 20:06, 28 May 2006 (UTC)


 * Either here or in the central processing unit article, probably, depending on the content. You don't need to worry too much, though; be bold and add it! :)  The content can always be moved later.  --Piet Delport 22:08, 28 May 2006 (UTC)

Scheduling Criteria
This entire section is an unattributed verbatim quotation from parts of Section 5.2 of the book Operating System Concepts (8th edition) by Silberschatz. —Preceding unsigned comment added by Ctxr (talk • contribs) 08:49, 29 March 2010 (UTC)

And I concur. This is almost the same as junk. Reality differs from the book. — Preceding unsigned comment added by Nmondal (talk • contribs) 05:01, 9 February 2013 (UTC)

Job Shop Problem
Job-shop scheduling is linked to the article job shop. As far as I know, a job shop is a sort of factory, but the job-shop problem is a widespread scheduling problem with which a job shop can be analysed and optimized (and other problems too, for sure).

Todo: Add this
This is a theorist's nightmare: given a single CPU and multiple processes, how does the scheduler run itself? A.k.a. who schedules the scheduler? (As Alan Moore would suggest: who watches the watchmen?) It cannot. This is a classic bootstrapping problem, and apparently no one has given an answer here. We should add this information under non-linear (interrupt-driven) programming. — Preceding unsigned comment added by Nmondal (talk • contribs) 05:08, 9 February 2013 (UTC)

The scheduler determines which process to run based on:
 * Intrinsic properties: running time, space allocated, etc.
 * Extrinsic properties: whether a priority is associated with the program, whether the program is focused, etc.

Separate communication article?
I suggest that the data packet scheduling issues be removed from the scheduling (computing) and scheduling algorithm articles into a separate article on scheduling (communication). Perhaps the remaining scheduling algorithm article should be merged with scheduling (computing), which should then focus on operating systems and multitasking.

Which of all the scheduling disciplines, policies and algorithms listed in the scheduling (computing) article belong to networking, and which belong to operating systems?

Is it possible to clarify what is an algorithm (such as fair queuing), and what is a policy or discipline (such as max-min fairness)?

Mange01 07:25, 21 October 2006 (UTC)


 * Be bold, and take a stab at it! --Piet Delport 02:40, 22 October 2006 (UTC)


 * This article needs to be improved before contemplating a split. The same basic processes are involved in communication and computing scheduling. I think it best to consolidate, improve then look and see if a split is still warranted. As such, I have merged Scheduling algorithm into this article. --Kvng (talk) 19:07, 25 November 2011 (UTC)

Unisys 2200 Operating System CPU Dispatch Algorithm
The pattern used to allocate CPU time is based on spiraling through the levels. The pattern is 1213121412131215... The time on CPU (ignoring intervening interrupt pre-processing) doubles with each step increase. For example, 1 might be 50 ms, 2 would be 100 ms, 3 would be 200 ms, 4 would be 400 ms and 5 would be 800 ms. When a thread is put on the dispatch queue, it is assigned the shortest quantum, corresponding to the 1-level. If it runs until it exceeds its quantum without causing an I/O, the quantum doubles and the level increases by 1, until the quantum and level reach some maximum. An I/O request causes a thread's level to revert to 1. The plan is to permit CPU-bound problems to solve with infrequent, long slurps from the CPU resource whilst I/O-bound problems get frequent, short slurps to start the next I/O. Thus, context switches are minimized whilst I/O resources are kept as busy as possible.

A CPU quantum has traditionally been measured by a clock that runs only when the CPU is in user state (i.e. in Guard Mode). A thread that has not exceeded its quantum can continue to process until it exceeds its quantum or an exigent event (i.e. an interrupt that must be handled for longer than it would take to pre-process an I/O interrupt) occurs. On the Unisys 2200 architecture a context switch is not necessary in order to pre-process an I/O interrupt because the CPU contains a set of registers for user mode and an identical separate set for executive mode. Pre-processing an I/O interrupt entails recording everything necessary to be able to post-process the I/O interrupt: which IOU? which channel? which sub-channel? which device? what status? --72.75.104.219 (talk) 00:06, 29 April 2008 (UTC)

Outdated Reference
In the reference section, the link "Information on the Linux 2.6 scheduler" is outdated, as it only covers the Linux scheduler up to the O(1) scheduler which, as the current article says, was replaced by CFS in 2.6.23. I believe this should be noted next to the link. —Preceding unsigned comment added by 66.36.133.101 (talk) 11:03, 6 May 2008 (UTC)

Missing numbered references
There are no numbered references in this article (and, as far as I looked, there never have been), while there are paragraphs referring to them (for example note #2 in the Operating system scheduler implementations section). Virtlink t•c 16:06, 16 August 2008 (UTC)

Possibly-erroneous citation
Citation 1 (http://www.jgcampbell.com/caos/html/node13.html) says: "Windows 95 introduced preemptive scheduling. However, backwards software compatibility, the great bugbear of all Microsoft and Intel systems, meant that it had to support software using the cooperative method as well. Hence, in 95, and 98 and Me, you cannot really say that the kernel is in full control."

I will still have to do some proper fact-checking. I recall from my assembler class that Windows managed co-operative threads separately as stated above. However, this co-operative process management was in turn managed by the pre-emptive process management system thus implying that the kernel is in fact in full control. Brendan Hide (talk) 20:03, 2 May 2009 (UTC)


 * In Windows 1.x, 2.x and 3.x no Windows apps were directly pre-emptible, but whenever you made a call to a bit of the OS that yielded the CPU, other apps would get a look in. The ::Yield API call would do this, as would MessageBox, GetMessage and some others. Odd things would happen here (memory could move around), so it was certainly not something you did lightly. However, Windows/386 added support for virtual DOS windows, and I believe there may have been some pre-emptive scheduling at that level, in which case it was the Windows apps that were co-operatively scheduled. In these versions of Windows, all Windows applications shared the same memory space, so you have to view them not so much as separate processes as separate, co-operating threads in a single process. Regarding Windows 95-98, I think the issue may be which bits of the user interface were re-entrant. I have forgotten the details, thankfully. SteveLoughran (talk) 17:20, 13 March 2010 (UTC)

The above is correct: on a 386 processor, Windows 3.1x could run multiple DOS virtual machines in a preemptive manner. — Preceding unsigned comment added by 71.163.100.99 (talk) 03:28, 18 December 2018 (UTC)

Plagiarism
The text under Scheduling Criteria is copied word-for-word from Operating System Concepts, Silberschatz, Galvin, and Gagne. Pages 187-188 of the 8th edition of the International edition. --Ultrarob (talk) 02:44, 25 September 2010 (UTC)

Solaris: Does a high number mean high or low priority?
I am confused by the statement in Scheduling_(computing) that "0-59 are reserved for time-shared threads [...] and 160-169 for low priority interrupts". What are low-priority interrupts in this context, anyway? Priority 169 is the uppermost priority and 0 is the lowest priority, or am I mistaken here? Hacklschorsch (talk) 15:20, 30 April 2011 (UTC)

0.2 seconds to switch processes?
It says that Linux 2.4 uses 200 ms (0.2 seconds) to switch between real-time tasks. This sounds like a bit much to me, but I haven't taken a course in operating systems. How many times per second is this kind of switching expected to occur? The text has been here since 5 March 2009 and no one else seems to have reacted, so I guess I'm wrong here. I just don't understand it. --τις (talk) 04:30, 6 December 2011 (UTC)


 * I believe what this paragraph is trying to say is that each active realtime task gets to run for 200 ms and each active nice task runs for 10 ms. Task switches are therefore 5 to 100 per second. That's just my reading. The material is uncited so I can't verify. I've requested a citation. --Kvng (talk)

Scheduling Processes vs. Threads
How does a scheduler deal with a multi-threaded process?

I run a low-priority non-interactive SMP process in Windows. Unfortunately, for good throughput, a certain group of threads must receive equal amounts of CPU time, because the group completes work at the rate of its slowest thread. If one thread is preempted, the Win7 scheduler will REDUCE the priority of that thread in an attempt to minimize thrashing, which is exactly what I don't want to happen, even though it's often the right thing to do.

Even lower-priority processes may be dispatched when, in my situation, they shouldn't be. If my process needs N balanced threads and the scheduler sees that both this process and a lower-priority process can be dispatched by allocating me only (N-1) threads, it will do so. I conclude that the fixed priority settings are being applied only to processes, not to threads.

This is not simple, either in my case or in general, because some threads need priority and others do not and the scheduler doesn't have enough information to do what I want it to do.

Does any OS deal with this dilemma? — Preceding unsigned comment added by 68.183.178.29 (talk) 22:25, 1 February 2012 (UTC)

No definition of the term Quantum as relates to Scheduling?
The term "quantum" is used when discussing scheduling (e.g., see the discussion of Unisys elsewhere on this page), yet I don't see a definition in Wikipedia. It probably doesn't need its own first-class definition and could be covered somehow on this Scheduling page. I've always perceived it as either a synonym for "time slice" or as the duration of a "time slice" (or both, since uses may differ). Is there some reason there's no entry in Quantum (disambiguation) that applies to scheduling? The only computer science disambiguation of Quantum leads to Quantum computing. --Netsettler (talk) 23:34, 10 December 2012 (UTC)

This article is a lot like the load balancing article
Just noticing. Feel free to delete this notice if it's improper. Sometree (talk) 21:34, 11 May 2015 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified 2 external links on Scheduling (computing). Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Corrected formatting/usage for http://sriramk.com/schedulers.html
 * Corrected formatting/usage for http://cn.opensolaris.org/files/solaris_linux_bsd_cmp.pdf

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at ).

Cheers.—cyberbot II  Talk to my owner :Online 22:05, 31 March 2016 (UTC)

partition scheduler
Other Wikipedia articles mention various types of "partition scheduler" -- real-time computing, QNX, ARINC 653, etc. My understanding from the brief description in those articles is that a "partition scheduler" is a category of scheduling algorithm.

I came to this "scheduling" article hoping to find a few more details on what a "partition scheduler" is and how it compares and contrasts to other categories of scheduling algorithm. Alas, I am disappointed that this article fails to even mention the word "partition". --DavidCary (talk) 18:04, 24 January 2017 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified one external link on Scheduling (computing). Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20110811094049/http://www.ibm.com/developerworks/aix/library/au-aix5_cpu/index.html to http://www.ibm.com/developerworks/aix/library/au-aix5_cpu/index.html

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 09:54, 3 September 2017 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified 2 external links on Scheduling (computing). Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20060613130106/http://oreilly.com/catalog/linuxkernel/chapter/ch10.html to http://www.oreilly.com/catalog/linuxkernel/chapter/ch10.html
 * Added archive https://web.archive.org/web/20110811094049/http://www.ibm.com/developerworks/aix/library/au-aix5_cpu/index.html to http://www.ibm.com/developerworks/aix/library/au-aix5_cpu/index.html

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 20:09, 14 September 2017 (UTC)

FCFS program in C
I'm moving this recent anonymous contribution here for discussion. It strikes me as potential WP:OR and WP:NOTHOWTO. If it had some more explanation or had a source, we might want to use it. ~Kvng (talk) 13:01, 11 September 2018 (UTC)

#include <stdio.h>

int main(void) {
    int a[10][2], g[11], w[10], n, i, j;
    float avg = 0, sum = 0;
    printf("Enter the number of process(maximum ten)");
    scanf("%d", &n);
    printf("Enter the arrival and burst time ");
    for (i = 0; i < n; i++)
        scanf("%d %d", &a[i][0], &a[i][1]);
    /* sort the processes by arrival time (selection sort) */
    for (i = 0; i < n; i++) {
        int minimum = i;
        for (j = i + 1; j < n; j++)
            if (a[j][0] < a[minimum][0])
                minimum = j;
        int temp1 = a[i][0];
        int temp2 = a[i][1];
        a[i][0] = a[minimum][0];
        a[i][1] = a[minimum][1];
        a[minimum][0] = temp1;
        a[minimum][1] = temp2;
    }
    /* g[i] is the time process i starts running; note this assumes the CPU
       is never idle between arrivals */
    g[0] = a[0][0];
    for (i = 0; i < n; i++)
        g[i + 1] = g[i] + a[i][1];
    for (i = 0; i < n; i++)
        w[i] = g[i] - a[i][0];
    printf("\nWaiting time of each process is:");
    for (i = 0; i < n; i++) {
        printf("\n %d", w[i]);
        sum = sum + w[i];
    }
    avg = sum / n;
    printf("\naverage waiting time is: %f\n", avg);
    return 0;
}

OUTPUT: Enter the number of process(maximum ten)3 Enter the arrival and burst time 0 1 1 1 2 1 waiting time for each process is: 0 0 0 average waiting time is: 0.000000

"mop"?
In section "Dispatcher" I suspect "mop" is a typo. If not, it needs to be defined. Arthur.Goldberg (talk) 15:59, 8 August 2020 (UTC)

learning
If there's anything anyone needs help with, feel free to msg me. 10CHOSEN1111 (talk) 14:38, 19 June 2023 (UTC)

2601:C5:180:3490:5CC:186F:B970:7809 (talk) 01:39, 13 July 2023 (UTC)