User:Luc4~enwiki/Sandbox1

In computer science, the sporadic server is an algorithm used to schedule aperiodic jobs in priority-driven real-time systems.

Introduction
In real-time systems, periodic jobs are scheduled together with aperiodic and sporadic jobs. Many algorithms can create feasible schedules of periodic and aperiodic jobs, each characterized by specific consumption and replenishment rules (i.e. how the algorithm's execution time is consumed and how it is replenished so that the algorithm can execute aperiodic jobs). The most important feature of a sporadic server is that in any time interval it never demands more execution time than a periodic task $$T_S\left(p_S, e_S\right)$$ with the same parameters, where $$p_S$$ is the period and $$e_S$$ is the maximum execution time. This means that, for the purposes of schedulability analysis, the sporadic server can be treated just like any other periodic task.
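Because the sporadic server bounds its demand like a periodic task, it can simply be added to a standard utilization test. The sketch below illustrates this with the Liu–Layland bound for rate-monotonic scheduling; the function names and task parameters are illustrative, not part of any real API.

```python
def rm_utilization_bound(n):
    # Liu & Layland utilization bound for n tasks under rate-monotonic scheduling
    return n * (2 ** (1.0 / n) - 1)

def is_schedulable(tasks):
    """tasks: list of (period, max_execution_time) pairs.
    The sporadic server is included as an ordinary periodic task T_S(p_S, e_S)."""
    utilization = sum(e / p for p, e in tasks)
    return utilization <= rm_utilization_bound(len(tasks))

# Two periodic tasks plus a sporadic server with period 10 and budget 2,
# treated exactly like a periodic task T_S(10, 2).
tasks = [(20, 4), (50, 5), (10, 2)]
print(is_schedulable(tasks))  # total utilization 0.5 <= bound ~0.78
```

Here the server contributes $$e_S / p_S$$ to the total utilization, no differently from the genuinely periodic tasks.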

Consumption rules
The parameter $$e_S$$ is also called the execution budget: it indicates the time left to the sporadic server for executing aperiodic jobs. In the rules below, $$t_r$$ denotes the latest replenishment time and $$T_H$$ the set of tasks with priority higher than the server's. The budget is consumed at a rate of 1 per time unit whenever either of the following conditions is met:
 * the server is executing;
 * the server has executed since $$t_r$$, has become idle before the current time $$t$$, and $$T_H$$ is idle.