\input texinfo @c -*-texinfo-*-
@c %**start of header
@setfilename sgpem2uman.info
@settitle SGPEMv2 User Manual
@include vers-uman.texi
@c %**end of header
@dircategory SGPEM v2 - A Process Scheduling Simulator
@direntry
* Users: (sgpem2uman)Top
@end direntry
@c % --------------------------------------------------
@copying
This is SGPEMv2 User Manual (version @value{VERSION},
@value{UPDATED}).
Copyright @copyright{} 2005-2007 University of Padova, dept. of Pure
and Applied Mathematics
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.2
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
Texts. A copy of the license is included in the section entitled ``GNU
Free Documentation License''.
@end copying
@c % --------------------------------------------------
@titlepage
@title SGPEMv2 User Manual
@subtitle for version @value{VERSION}, @value{UPDATED}
@author Filippo Paparella (@email{ironpipp@@gmail.com})
@author Paolo Santi (@email{psanti@@studenti.math.unipd.it})
@author Matteo Settenvini (@email{matteo@@member.fsf.org})
@author Marco Trevisan (@email{evenjn@@gmail.com})
@author Djina Verbanac (@email{betalgez@@yahoo.com})
@author Luca Vezzaro (@email{lvezzaro@@studenti.math.unipd.it})
@page
@vskip 0pt plus 1filll
@insertcopying
@end titlepage
@c Output the table of contents at the beginning.
@contents
@c % --------------------------------------------------
@c SGPEMv2 User Manual
@c % --------------------------------------------------
@ifnottex
@node Top, History, , (dir)
@top Learn how to operate SGPEMv2
@insertcopying
@end ifnottex
@menu
* History:: The history of changes to this document.
* Overview of SGPEM:: Description and objectives of SGPEM v2.
* Installation:: Here we explain how to install SGPEM v2,
as well as provide some advice on
useful compilation options.
* Basics:: Things you should know before starting.
* Using SGPEM:: Instructions on how to use SGPEM.
* Extending SGPEM:: Learn how to write new policies and plugins.
* License:: A full copy of the GNU Free Documentation License
under which this manual is licensed.
* Concept index:: Complete index.
@end menu
@c % --------------------------------------------------
@node History, Overview of SGPEM, Top, Top
@unnumbered History
@table @strong
@item 2007, March 5th @r{--- Matteo Settenvini}
Updated subsection ``The Schedulables/Requests tree''
@item 2006, September 12th @r{--- Luca Vezzaro}
Updated section ``From the commandline''
@item 2006, September 9th @r{--- Luca Vezzaro}
Written documentation for section ``The Schedulables/Requests tree''
and section ``The Resources list''
@item 2006, September 8th @r{--- Luca Vezzaro}
Written documentation for section ``Overall view of the main window''
@item 2006, September 8th @r{--- Matteo Settenvini}
Update chapters about building and installation. Rewrite some of the
chapter about extending SGPEMv2 with custom CPU policies, and add a
more complex example. Document interfaces exported to Python.
Quickly describe built-in scheduling policies.
@item 2006, September 7th @r{--- Luca Vezzaro}
First attempt at expanding the manual structure with the
stuff we'll need in the forthcoming beta testing
@item 2006, March 10th @r{--- Djina Verbanac}
Added chapter Writing new policies
@item 2006, March 9th @r{--- Djina Verbanac}
Add chapters Overview of SGPEM and Starting with SGPEM.
@item 2006, January 26th @r{--- Matteo Settenvini}
Add subsection about how to generate code documentation
via Doxygen.
@item 2005, December 11th @r{--- Matteo Settenvini}
Added full license text.
@item 2005, November 8th @r{--- Matteo Settenvini}
First draft of this document.
@end table
@c % --------------------------------------------------
@node Overview of SGPEM, Installation, History, Top
@chapter Overview of SGPEM
@menu
* Description and aims::
* How to read this manual?::
* Reporting Bugs::
* Features::
@end menu
@c % --------------------------------------------------
@node Description and aims, How to read this manual?, Overview of SGPEM, Overview of SGPEM
@section Description and aims
@cindex SGPEM
@cindex description
SGPEM is an Italian acronym, standing for ``@emph{Simulatore della Gestione dei Processi
in un Elaboratore Multiprogrammato}'' (in English, ``@emph{Process
Management Simulator for a Multitasking Computer}'').
It was initially developed for the ``Operating Systems'' course,
part of the Computer Science curriculum at the University of Padova, Italy.
The aim of SGPEM is to provide an easy-to-use environment for
simulating process scheduling policies, and for assigning resources in
a multitasking computer. SGPEMv2 is educational software, and it can
help students better understand how operating systems work.
@c % --------------------------------------------------
@node How to read this manual?, Reporting Bugs, Description and aims, Overview of SGPEM
@section How to read this manual?
@cindex manual
We recommend that you read the manual following the structure that
we laid out for it. You will be gently led through Installation, Configuration and Usage of SGPEMv2.
If you find yourself in trouble reading the manual, please don't hesitate to contact us via
@url{https://mail.gna.org/listinfo/sgpemv2-devel}.
@c % --------------------------------------------------
@node Reporting Bugs, Features, How to read this manual?, Overview of SGPEM
@section Reporting Bugs
@cindex bugs
@cindex reporting
We welcome bug reports and suggestions for any aspect of the SGPEM v2 system: the program in general,
documentation, installation... anything. Please contact us via @url{https://mail.gna.org/listinfo/sgpemv2-devel}.
When reporting a bug, include enough information for us to reproduce the problem. In general:
@itemize
@item
the version number of SGPEM v2.
@item
hardware and operating system name and version.
@item
the content of any file necessary to reproduce the bug.
@item
description of the problem and any erroneous output.
@item
any unusual options you passed to @command{configure}.
@item
anything else you think might be helpful.
@end itemize
If you are ambitious you can try to fix the problem yourself, but we warmly recommend that you read the
Developer Manual first.
@c % --------------------------------------------------
@node Features, , Reporting Bugs, Overview of SGPEM
@section Features
@cindex features
The main features are:
@itemize
@item
You can use SGPEMv2 either from the command line or via a graphical user interface.
For more information see @ref{SGPEM Commands}.
@item
You can schedule threads or processes, and threads can make atomic requests
for one or more resources at each instant of the simulation.
@item
The graphical version displays a Holt graph of the resource allocation.
@item
Statistics are shown at each simulation step, separately for processes and threads.
@item
You can easily jump to different instants of the simulation, to see what happened at
a given moment.
@item
Editing an existing simulation is easy and quick.
@item
Savefiles are by default written in XML, making it easier for external tools to
provide compatibility with SGPEMv2.
@item
You can write your own policies using Python, or easily extend SGPEMv2 with
your own plugins to add more scripting languages.
For more information see @ref{Writing new policies}.
@end itemize
@c % --------------------------------------------------
@node Installation, Basics, Overview of SGPEM, Top
@chapter Installation
@cindex installation
@menu
* Prerequisites:: Programs and libraries needed to
compile and run SGPEM
* Building:: Help for compiling SGPEM on
your platform.
@end menu
@c % --------------------------------------------------
@node Prerequisites, Building, Installation, Installation
@section Prerequisites
@cindex requirements
Some software is needed in order to build and install SGPEM on your
personal computer. Which pieces of software you need installed
depends on whether you are a developer, a user building SGPEM
from sources, or just a user running a binary provided by a packager.
If you find that this section misses something, or lists
the wrong version of a program, please let us know!
@c % ---- new subsection
@subsection Runtime dependencies
To run SGPEMv2, you require:
@table @emph
@item Gtkmm >= 2.8 with Cairo support
The popular C++ wrapper for the even-more-popular GIMP
ToolKit. We use Cairo to draw our custom widgets.
@item Python >= 2.3
We use Python to let the user write her own policies
in a simple and complete language.
@item libXML2 >= 2.6.10
An XML library we use to save and load files to/from disk.
@end table
@c % ---- new subsection
@subsection Building from source
Other than the runtime dependencies, you'll need:
@table @emph
@item SWIG >= 1.3.21
SWIG generates the C++ sources needed to build a module that
Python can use, starting from a simple interface specification.
@end table
@c % ---- new subsection
@subsection Developers
Other than the tools needed by users building from sources,
you'll need:
@table @emph
@item GCC with C++ support
as well as the other standard GNU binutils and tools: make, sed, ld...
GCC version >= 3.4 is highly recommended. Please don't report
compilation problems with any earlier version. There are some
known issues with certain versions of GCC 4.0. @xref{Building}.
@item Automake >= 1.9
We use a single @file{Makefile.am} to avoid
recursive make. Older versions of automake didn't play right
with it. See @url{http://aegis.sourceforge.net/@/auug97.pdf} for
the motivations that led to this choice.
@item Autoconf, libtool, autopoint @dots{}
The standard autotool family.
@item Subversion >= 1.2
If you need to update the sources from our repository, or commit
your changes, you'll need Subversion built with SSL support.
@item Dejagnu >= 1.4
The testsuite framework we use as a platform for running tests.
@end table
@c % --------------------------------------------------
@node Building, , Prerequisites, Installation
@section Building
@cindex compiling
@noindent To ensure a clean build, follow these steps:
@sp 1
@example
@code{cd <the package root directory>}
@code{mkdir =build}
@code{cd =build}
@code{CXXFLAGS="what you want" ../configure --prefix=/usr/local}
@end example
@sp 1
@noindent This will check that you have all the needed software installed.
@noindent Choose good @env{CXXFLAGS} to optimize your build.
For example, on my machine, I would use:
@sp 1
@example
@code{CXXFLAGS="-O3 -pipe -march=pentium4" ../configure --prefix=/usr/local}
@end example
@sp 1
@noindent Being a developer, though, if I had to debug SGPEM, I would
type:
@sp 1
@example
@code{../configure --prefix=`pwd`/../=inst --enable-debug}
@end example
@sp 1
@noindent Please note that the characters around ``pwd'' are backticks, not
normal apostrophes.
@strong{Warning}: at the moment, we are aware that passing
@option{--disable-shared} to configure doesn't work. We'll look into it
sooner or later, but in the meantime just build shared libraries.
@noindent Once you have successfully configured SGPEMv2, just type:
@sp 1
@example
@command{make}
@end example
@sp 1
@noindent Some versions of GCC 4, usually those before the 4.1 series,
present some problems with the newly-added visibility support for DSO
symbols. For example, OpenSuSE 10.0 is known to have such
issues. If you encounter problems during the build or linking stage,
with unresolved symbols in libraries, please re-run
@command{configure} with the @option{--disable-visibility-support}
option. You'll then have to run @command{make clean && make}.
@noindent Upon a successful build, you can install SGPEMv2 just by typing:
@sp 1
@example
@code{su -c "make install"}
@end example
@sp 1
@noindent The root password will be required (of course, if you're
installing with a prefix inside your home directory,
you won't need administrative rights, and plain ``@command{make install}''
will suffice).
See the ``@file{INSTALL}'' file in this folder for an overview of other
(less common) autoconf options.
@subsection Generating API documentation
We added Doxygen support to the project. If you have it installed,
you can simply run @command{make apidox} from the package
top source directory. The documentation will be written to
the @samp{$@{BUILD_DIR@}/docs/API/} directory.
If you'd like to generate nicer inheritance graphs, you just have to
install @command{dot}, part of the @emph{Graphviz} package. If you
didn't have it previously installed, you may need to re-run @command{configure}.
@c % --------------------------------------------------
@node Basics, Using SGPEM, Installation, Top
@chapter Basics
@cindex basics
@menu
* The Scheduler:: Essential background information necessary to
understand how schedulable entities are scheduled.
* Policies:: Everything you'll ever wanted to know about policies
in SGPEM!
@end menu
@c % -------------------------------------------------
@node The Scheduler, Policies, Basics, Basics
@section The Scheduler
@cindex scheduler basics
From the scheduler's point of view, the simulated environment is populated
by processes and resources. Processes are spawned at different instants and
compete for the CPU and other resources until their termination.
Processes have an arrival time, i.e. an instant at which they are spawned,
and a priority.
Our application simulates the scheduling of threads, not the scheduling of
processes. However, process scheduling can be simulated simply by placing
a single thread within each process, and hiding thread details in the GUI.
In SGPEM, a process is essentially just a container of threads. Threads have a
required CPU time, a priority within the process, and an arrival time delta.
The arrival time delta of a thread is relative to the execution time of the parent
process, not to the arrival time of the parent process.
The scheduler's task is to assign the CPU and the other resources to the
processes. Both resources and the CPU are mutually exclusive, meaning that no two
processes may use them at the same time.
A thread may raise requests at any time during its execution: a request has a raising
time delta, which is relative to the execution time of the owner thread, not
to the arrival time of the owner thread.
A request specifies a set of resources and the time they are requested for.
The specified set of resources will be acquired atomically, meaning that either all
of the requested resources are given to the thread, or none of them is.
A thread may raise any number of requests at any instant. Requiring four resources
may be done either atomically, specifying one request with four separate subrequests,
or non-atomically, specifying four requests with one subrequest each. A subrequest
specifies which resource is needed and for how long.
Resources have multiplicity, or places. A resource with two places acts like two
indistinguishable resources.
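The all-or-nothing acquisition described above can be sketched in a few
lines of Python (the language SGPEMv2 policies are written in). This is
only an illustration with made-up names, not the SGPEMv2 API:

```python
# Illustrative sketch only -- these names are invented and are not
# part of the SGPEMv2 API.  A request (a list of subrequests) is
# allocable only if *every* subrequest can be satisfied at once.

def can_allocate(request, free_places):
    """request: list of (resource, duration) subrequests.
    free_places: free places currently left on each resource."""
    needed = {}
    for resource, _duration in request:
        needed[resource] = needed.get(resource, 0) + 1
    return all(free_places.get(r, 0) >= n for r, n in needed.items())

free = {"printer": 1, "disk": 2}
atomic = [("printer", 3), ("disk", 2)]   # one request, two subrequests
print(can_allocate(atomic, free))        # the whole set fits
free["printer"] = 0
print(can_allocate(atomic, free))        # now nothing at all is granted
```

Note that when the check fails, the thread gets none of the resources,
exactly as described above: a partially satisfied request never exists.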
@c % -------------------------------------------------
@node Policies, , The Scheduler, Basics
@section Policies
@cindex policies
@menu
* What is a policy in SGPEM?:: Explains what a SGPEM policy can, should and must do,
and what it can't do. And how.
* What kind of policies are there?:: In SGPEM there are many species of policies. Here
you will explore our zoo.
* Built-in policies:: Here you will find a detailed descriptions of the policies
shipped with the standard distribution of SGPEM.
@end menu
@node What is a policy in SGPEM?, What kind of policies are there?, Policies, Policies
@subsection What is a policy in SGPEM?
@cindex policies basics
A policy is a rule used by the scheduler to decide which thread should run next.
Our scheduler needs two different policies to perform this choice: one is called a
cpu (scheduling) policy and the other is called a resource (scheduling) policy.
@subsubsection CPU Scheduling Policies
The first, from now on called simply "policy", is the rule telling which of the
ready (or running) threads is the best candidate to get the cpu. For example, the
FCFS policy is a rule which tells that, among the ready threads, the one which
asked the CPU first is the best candidate. The Lottery policy is a rule which tells
that, among the ready threads, one chosen at random is the best candidate.
Being the best candidate means getting the CPU and trying to run. However, getting the CPU
does not guarantee being able to run: a thread may need a resource to complete its work,
and mutually exclusive resources may be locked by other threads. In this case
the thread is said to raise a request for some resources, and to be blocked by those
requests.
@subsubsection Resource Scheduling Policies
The second policy is the rule telling, for each resource, which of the raised requests
are allowed to be satisfied, according to the places offered by the resource.
For example, the FIFO resource policy is a rule which tells that, among the raised requests, the ones which
came first are allowed to be allocated. Another example, the Priority policy,
is a rule which, roughly speaking, tells that, among the raised requests, the ones having higher priority
are allowed to be allocated.
SGPEM provides some resource policies, but it does not allow the user to create their own.
Like CPU scheduling policies, resource policies are parametric, although at the moment none
of the included ones is.
Resource policies depend heavily on the mechanism of the scheduler, and since that
mechanism is very complex to understand, it would be wasteful to provide
an extension mechanism for resource policies: a user willing to implement a new resource
scheduling policy is better off understanding and adapting the SGPEM source code.
@subsubsection Policy Parameters
A policy in SGPEM is in general a parametric rule: this means that the user should set some parameters
to actually use the policy. Parameters are either integer, float or string values, which
further specify the behavior of the policy: for example, the round-robin policy needs
the user to choose the length of a time slice. Parametric policies always provide default
values for their parameters, thus the user is not forced to set them manually. (see gui_set_policy)
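What ``parametric with defaults'' means can be sketched in Python
(illustrative only; the class and parameter names below are invented for
the example, they are not the real round-robin plugin interface):

```python
# Illustrative sketch: a parametric policy carries its parameters
# with default values, so the user may, but need not, set them.
# The names here are invented, not taken from SGPEMv2.

class RoundRobinSketch:
    def __init__(self, time_slice=2):    # default value provided
        self.time_slice = time_slice

default = RoundRobinSketch()             # runs with the default
tuned = RoundRobinSketch(time_slice=5)   # user-chosen value
print(default.time_slice, tuned.time_slice)
```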
@node What kind of policies are there?, Built-in policies, What is a policy in SGPEM?, Policies
@subsection What kind of policies are there?
@cindex policies kinds
SGPEM defines four classes of policies, and the scheduler uses different kinds of policies
in different ways. The four kinds are: Simple, Time-sharing, Preemptive, Preemptive and Time-sharing.
@subsubsection Simple policies
Simple policies may change the running thread only when the running one has blocked or has terminated.
When this happens, the policy replaces the running thread
with the best candidate among the set of all the ready threads.
@subsubsection Time-sharing policies
Within SGPEM, a policy is said to be time-sharing when the policy may change the
running thread after it has been running for a full time-slice (or time quantum).
The size of the time-slice is supposed to be fixed; varying the size of the
time-slice during the simulation is possible, although not very useful.
A time-sharing policy is allowed to change the running thread only when it has exhausted
its quantum, or it has blocked, or it has terminated, replacing it with the best candidate
among the set of all the ready or running(*) threads.
(*) At the moment any running thread which has used up its quantum is set to ready, therefore there is no
running thread to choose when a time-sharing policy is used.
@subsubsection Preemptive policies
Within SGPEM, a policy is said to be preemptive (or priority-preemptive, too) when
the policy may change the running thread for priority reasons. A preemptive policy
is allowed to change the running thread at any instant during the simulation, replacing
it with the best candidate among the set of all the ready or running threads.
Note that this meaning of the adjective "preemptive" may not match the one
found in your favourite operating systems reference book.
Actually, our application does not check whether the preemption is done for priority
reasons, so one could, in principle, implement time-shared policies without specifying
a fixed size for the time slice, i.e. without declaring the policy as time-shared.
Time-sharing may be implemented using an internal counter, relying on the fact that
a preemptive policy is consulted at every instant.
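The internal-counter trick just mentioned can be sketched like this
(illustrative Python only; the class and method names are invented and
do not belong to the real policy interface):

```python
# Illustrative sketch: emulating a time slice inside a preemptive
# policy with an internal counter.  It works only because, as noted
# above, a preemptive policy is consulted at every instant.

class CountingSliceSketch:
    def __init__(self, slice_len=3):
        self.slice_len = slice_len
        self.elapsed = 0

    def keep_running(self):
        """Called once per simulated instant."""
        self.elapsed += 1
        if self.elapsed >= self.slice_len:
            self.elapsed = 0      # slice exhausted: switch thread
            return False
        return True

p = CountingSliceSketch(slice_len=3)
print([p.keep_running() for _ in range(6)])
# three-instant slices: True, True, False, True, True, False
```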
@subsubsection Preemptive and Time-sharing policies
These policies are used by the scheduler in roughly the same way as preemptive policies are.
Note that although this distinction is enough to understand most of the common policies,
SGPEM is not that simple (wasn't it simple?).
The actual implementation does not partition the space of policies into four classes: a real
SGPEM policy may in fact dynamically ``change its class'', thus fitting none of the classes previously listed.
To make full use of such policies, advanced users should look directly
at the mechanism itself.
@node Built-in policies, (none), What kind of policies are there?, Policies
@subsection Built-in policies
@cindex built-in policies
@subsubsection CPU scheduling policies
@table @asis
@item FCFS: First come first served
The first thread to arrive at the CPU will run until
it ends. This policy never pre-empts; it is probably
the simplest of them all.
This policy has no options to configure.
@item SJF: Shortest job first
The thread with the shortest required CPU time
will run until it ends. If @samp{Is pre-emptive?}
is set to true (@samp{1}), then when a thread arrives at the CPU
requiring less CPU time than the remaining time of the currently
running thread, the new thread will
pre-empt the running one.
In this case, the policy is also called ``Shortest Remaining
Time Next''.
You can configure whether you want this policy to be pre-emptive
or not.
@item RR: Round Robin
This policy executes a thread for a given amount
of time (the time-slice value), and then puts it
at the end of the queue. It does not pre-empt before
the end of the time slice, since it doesn't take
priority into account. Use ``RR priority'' for that.
You can configure the duration of the time slice.
@item RR priority
No lower priority thread can run if a higher
priority thread exists. If pre-emptive by priority, a
higher-priority thread becoming ready, even in the middle
of a time slice, will pre-empt the running thread. Else,
the time slice will have to end before the higher-priority
thread can run.
You can configure whether this policy is preemptive or not,
and the duration of the time slice.
@item Lottery scheduling
Every time slice, a thread will be selected at random from the ready
queue. This policy does not pre-empt before the
end of the time slice.
@end table
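To make the pre-emptive SJF rule above concrete, here is an
illustrative Python sketch (the thread records and field names are
invented for the example; this is not the SGPEMv2 implementation):

```python
# Illustrative sketch of SJF / Shortest Remaining Time Next:
# pick the candidate with the least remaining CPU time.  With
# pre-emption enabled, the running thread must compete as well.

def shortest_job(ready, running=None, preemptive=False):
    candidates = list(ready)
    if preemptive and running is not None:
        candidates.append(running)
    return min(candidates, key=lambda t: t["remaining"])

ready = [{"name": "A", "remaining": 7}, {"name": "B", "remaining": 2}]
running = {"name": "C", "remaining": 1}
print(shortest_job(ready)["name"])                 # B runs next
print(shortest_job(ready, running, True)["name"])  # C keeps the CPU
```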
@subsubsection Resource scheduling policies
@table @asis
@item First in first out
A resource policy which satisfies earlier requests before later ones.
This policy has no options to configure.
@item Last in first out
A resource policy which allows a newly raised request to be allocated immediately if there are enough free places.
This policy has no options to configure.
@item Higher Priority First
A resource policy which satisfies higher priority requests before lower priority ones.
Note that a thread with priority 0 has a higher priority than a thread with priority 5.
This policy has no options to configure.
@end table
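The inverted priority ordering used by Higher Priority First (a lower
number means a higher priority) can be illustrated with a short Python
snippet; the request records below are invented for the example:

```python
# Illustrative sketch: ordering raised requests for Higher Priority
# First.  A numerically LOWER value means a HIGHER priority, so
# priority 0 is served before priority 5.

requests = [{"thread": "T1", "priority": 5},
            {"thread": "T2", "priority": 0},
            {"thread": "T3", "priority": 3}]
queue = sorted(requests, key=lambda r: r["priority"])
print([r["thread"] for r in queue])    # T2 first, then T3, then T1
```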
@c % --------------------------------------------------
@node Using SGPEM, Extending SGPEM, Basics, Top
@chapter Using SGPEM
@cindex using
@menu
* From the GUI::
* From the commandline::
@end menu
@c % -------------------------------------------------
@node From the GUI, From the commandline, Using SGPEM, Using SGPEM
@section From the GUI
@cindex GUI
@menu
* Overall view of the main window::
* The Schedulables/Requests tree::
* The Resources list::
* The Simulation widget::
* The Holt graph::
* The Preferences dialog::
* Controlling the simulation:: This subsection explains all the means
available to control the simulation workflow.
@end menu
@c % -------------------------------------------------
@node Overall view of the main window, The Schedulables/Requests tree, From the GUI, From the GUI
@subsection Overall view of the main window
@cindex main window
@image{main-window,18cm,13.5cm,Screenshot of the main window during a simulation}
Just below the menus, there's the toolbar. The purpose of most toolbar
buttons is easily understood.
For example, you can instantly change the current scheduling policy by using the menu
just to the right of the ``Scheduling Policy'' toolbar button.
Similarly, you can do the same with a resource allocation policy. The aforementioned
``Scheduling Policy'' and ``Resource Scheduling'' toolbar
buttons can be used to configure the policy's parameters, if there are any.
To know more about the other toolbar buttons, such as ``Pause'', ``Play'' and ``Stop'',
see @ref{Controlling the simulation}.
Normally, the window is split into three sections.
@itemize
@item
The top left section is briefly called
the ``Schedulables tree''; every entity in SGPEMv2, except resources, is shown and
edited in this tree view.
The interface of this widget is straightforward, but in case you need to know more about it,
see @ref{The Schedulables/Requests tree}.
@item
The top right section is the resources list; you can interact with it in the same way you do
with the Schedulables tree. We won't get into the details here, as there is
@ref{The Resources list, a dedicated section} for this widget.
@item
Finally, the bottom section contains the ``Simulation widget'', which displays how the scheduling
is proceeding. This widget is too complex to be described here, so we'll leave that to
@ref{The Simulation widget, its dedicated section}.
@end itemize
Well, in fact that's not all, folks. There's also the ``Holt graph'', which is displayed in a separate window,
so it doesn't steal precious window space from the simulation widget, and also because you may not
need it if you don't use resources and/or requests in your simulation. For more information on this widget, see
@ref{The Holt graph}.
@c % -------------------------------------------------
@node The Schedulables/Requests tree, The Resources list, Overall view of the main window, From the GUI
@subsection The Schedulables/Requests tree
@cindex schedulables tree
This widget is used to add/edit/remove processes, threads and requests.
To perform an operation on it, simply right-click, and a context-sensitive menu
will popup.
Each tree level is dedicated to a specific entity:
@itemize
@item The first level is for @strong{processes}
@item The second level is for @strong{threads}
@item The third level is for @strong{requests}
@end itemize
Right-clicking on any location over the tree will always allow you to add processes, while to
add threads or requests you must select a process or a thread, respectively.
To remove or edit an entity simply select it, and the popup menu will contain the remove
or edit operation specific for that entity.
Note also that adding a process, since version 1.0.1, automatically
adds a ``Main'' thread as well. This is for your convenience: you're
still able to modify or delete it if you want.
These editing functions, however, are only useful while the simulation is stopped. When the simulation
is not in a stopped state, a lot of dynamic information is displayed by the widget.
Let's begin by describing what's the meaning of the colors used to highlight the entities' name:
@itemize
@item @strong{Light Grey} is used for ``future'' processes, threads, requests and subrequests.
``future'' means an entity that does not yet exist in the simulated world, since it will ``arrive''
at a time greater than the current instant
@item @strong{Green} is used for running processes, threads and for allocated requests and subrequests
@item @strong{Yellow} is used for ready processes, threads and for allocable requests and subrequests
@item @strong{Red} is used for blocked processes, threads and for unallocable requests and subrequests
@item @strong{Dark Grey} is used for terminated processes, threads and for exhausted requests and subrequests
@end itemize
Anyway, to improve readability, the state is also written in the second column of the view.@*
The dynamic display for processes and threads simply consists of their ``elapsed time''/``required time''
(between parentheses), and a ``current priority'' field, which is their dynamic priority, and which may change
if the scheduling policy decides so. @*
The format used to display requests is probably a bit less trivial (yes, I'm being sarcastic), but since
a request has no additional information other than its state, it makes sense to condense a request and
its associated subrequests on a single line. @*
So the color of the @strong{at <n>:} represents the state of the
request, the <n> being the instant at which the request is raised.@*
Then there is a series of subrequests, which are displayed as @strong{->} (arrows), followed by a
colored resource name and two numbers separated by a slash. The color of the resource represents
the state of the subrequest, and the numbers between parentheses are its ``elapsed time''/``required time''.
@c % -------------------------------------------------
@node The Resources list, The Simulation widget, The Schedulables/Requests tree, From the GUI
@subsection The Resources list
@cindex resources
You can interact with this widget in the same way you interact with the
@ref{The Schedulables/Requests tree, Schedulables tree}, but since it's a plain
list, not a tree, it's much simpler. As you may have guessed, since a resource has
no elapsed and required time, the numbers between parentheses must be something else.
And you are right! The numbers displayed just after the resource name are
``allocated''/``places'', that is, the number of subrequests for that resource currently allocated
``over'' the number of places of the resource.@*
So let's get to the hot stuff: when the simulation moves from the boring stopped state
to a running or paused state, the subrequest queue is displayed below each resource.
Since a subrequest has no name, the name of the thread owning that subrequest is displayed
instead.@*
As if that wasn't cool enough, the thread name in the queue is colored according to the state of the
subrequest!
@c % -------------------------------------------------
@node The Simulation widget, The Holt graph, The Resources list, From the GUI
@subsection The Simulation widget
@cindex simulation widget
@image{simulation_widget,,,Screenshot of the simulation widget}
The simulation graph, as its name tells, graphically shows the progress of the simulation
over time.@*
It represents the status of the processes at each instant, from the beginning of the simulation
to the current one.@*
The graph can display either processes only, or both processes and threads.
@strong{Watch out:} this graph illustrates the @emph{past}. After each simulation
step has gone by, the corresponding processes'/threads' states are drawn.
The graph is divided into three areas:
@itemize
@item On the left is the list of process (and optionally thread) names
@item From the center to the right lies the graphical area
@item At the bottom is the time ruler
@end itemize
@strong{The Processes/Threads names list}
Here each process is listed in insertion order.@*
If thread visualization is enabled, a list of its threads is shown below every process.
@strong{The graphical area}
It's a rectangular region which contains some horizontal bars.
Each bar corresponds to a process or thread; the processes' bars are fat, the threads'
are thin.
The bars are composed horizontally to show the history of each process and thread.
When the state of a process (or thread) changes, and this is the rule, the corresponding bar
changes color.@*
By default the colors are: green, yellow, red.
@itemize
@item @strong{Green} is used for running processes/threads
@item @strong{Yellow} is used for ready processes/threads (waiting to run)
@item @strong{Red} is used for blocked processes/threads (waiting for a resource)
@end itemize
A bar starts when its process or thread begins, and ends when it dies.@*
The length of the bar corresponds to the lifetime of the process or thread.
@strong{The time ruler}
Below the graphical area there is a time ruler from 0 to the current instant.@*
The last notch on the ruler corresponds to the most recent @emph{past} instant.
The first click on the play button shows only notch 0 and no process bars.@*
After the second click the ruler shows notches 0 and 1, plus, possibly, the squares
corresponding to living processes or threads.@*
Subsequent clicks behave in the same way.
@strong{How to show/hide threads}
With the menu item "Show/Hide Threads" under the "View" menu the user can enable or
disable thread visibility.
@strong{Scaling the graph}
The user can select a scaling mode for the graph.@*
This option is available from a popup menu, by right-clicking in the client area.
The available options are:
@itemize
@item @strong{No scaling} (default mode): the graph isn't scaled at all.
White space can appear to the right of or below the graph, or its
dimensions can even exceed the client area. With the horizontal and vertical
scrollbars the user can view the whole graph surface.
@item @strong{Fit in window}: the graph is resized to make every part of it visible.
A (sometimes big) white space can appear to the right of or below the graph.
@item @strong{Stretch in window}: as above, the graph is resized, but it is also stretched
to cover the whole client area.
@end itemize
Exactly one of these commands is unavailable at any time:
the currently active mode is not offered, because there is no reason to choose it again.
@c % -------------------------------------------------
@node The Holt graph, The Preferences dialog, The Simulation widget, From the GUI
@subsection The Holt graph
@cindex holt
@image{holt_circle,10.2cm,10.4cm,Screenshot of the holt graph}
The graph shows the simulation status at the @emph{current} instant.@*
It represents resources, processes or threads (and their status),
requests for resources and allocations.
If the user chooses to view processes, then one circle per process is displayed;
if she/he chooses to view threads, only one circle per thread (and no process)
is displayed.
Resources are drawn as squares, processes and threads as circles, requests and
allocations as arrows.@*
Two lines are printed inside each resource: the name at the top, and the used/total places at the bottom.@*
Schedulables show their name inside the circle.@*
An arrow from a process (or thread) to a resource is a request from the process for
the resource; an arrow from the resource to the process denotes the allocation of the
resource to the process.
The colors, as usual, are: green, yellow, red.
@itemize
@item @strong{Green} is used for running processes/threads
@item @strong{Yellow} is used for ready processes/threads (waiting to run)
@item @strong{Red} is used for blocked processes/threads (waiting for a resource)
@end itemize
@strong{How to show processes or threads}
With the menu item "Show/Hide Threads" under the "View" menu the user can switch between
process and thread visibility.
@strong{How to show or hide the Holt Window}
For practical reasons, the Holt graph is placed in a separate frame outside the main
application window.@*
The item "Show/Hide Holt graph" of the "View" menu shows or hides
this window. It is also always possible to close it with the standard close button or
the system menu command.
@strong{Changing graph disposition}
The user can select the disposition of the elements in the graph.@*
This option is available from a popup menu, by right-clicking in the client area.
The available options are:
@itemize
@item @strong{Dispose vertical}: items are arranged vertically in two
columns, resources on the left, processes (or threads) on the right.
@item @strong{Dispose horizontal}: items are arranged horizontally in two
rows, resources at the top, processes (or threads) at the bottom.
@item @strong{Dispose circular}: the items are disposed along a circle.
@item @strong{Auto dispose} (default mode): one of the above is selected depending on
the aspect ratio of the window.
@end itemize
Exactly one of these commands is unavailable at any time:
the currently active mode is not offered, because there is no reason to choose it again.
@strong{Changing size and shape}
The user can change the size of the Holt window.@*
As the window changes size, its contents are scaled to fit into
the client area.
If the disposition is set to @emph{"Auto dispose"} mode, then the disposition
can change during the resizing operation, as follows.@*
If the height/width ratio is >= 5/3, the items are arranged vertically in two
columns, resources on the left, processes (or threads) on the right.@*
If the height/width ratio is <= 3/5, the items are arranged horizontally in two
rows, resources at the top, processes (or threads) at the bottom.@*
Otherwise the items are disposed along a circle.
@c % -------------------------------------------------
@node The Preferences dialog, Controlling the simulation, The Holt graph, From the GUI
@subsection The Preferences dialog
@cindex preferences
The preferences window allows the user to set the simulation speed.
The simulation speed is the minimum waiting time between one step and the next; since computing
the next step of the simulation may require the allocation of many resources, the specified
speed can only be guaranteed as a minimum.
The preferences window also allows the user to add and remove the directories where
policies and plugins are found and loaded from.
Changes regarding policies and plugins will be applied at the next run of SGPEM.
Preferences are saved to and loaded from the sgpem.cfg file located in the
installation directory.
Preferences are loaded when the application is started, and saved when the "Close"
button of the dialog is pressed.
@c % -------------------------------------------------
@node Controlling the simulation, (none), The Preferences dialog, From the GUI
@subsection Controlling the simulation
@cindex simulation
The simulation itself is not interactive, so it may be thought of as a recording.
From a mathematical point of view, every simulation has an instant, called
its @strong{end}, after which no significant changes involve the simulated entities.
Our simulator reproduces simulations from the beginning to the end,
and no further.
@subsubsection Simulation reproduction controls
Controls over the simulation reproduction are very similar to those of a
digital audio player.
The "play" button starts the reproduction, the "pause" button pauses it, and
the "stop" button stops it. After the simulation is stopped, the last reproduced
information is left on the screen, as if the simulation were paused. However,
pressing play after having stopped the simulation will start the reproduction from
the beginning of the recording.
@subsubsection Simulation reproduction modes
If the simulation play mode is set to @strong{continuous}, reproduction of the simulation
will continue until the end is reached.
Otherwise the simulation will pause after every single advance in reproduction.
The simulation mode may be selected on the "Simulation" menu.
@subsubsection Caching issues
The content of the simulation itself is calculated on demand, and cached, so
the first reproduction will usually be slightly slower than the following ones.
When a simulation is stopped the cache is @strong{not} erased. The cache is erased
each time the user @strong{modifies} the simulated environment, by adding,
removing or editing any kind of entity, or by changing any policy or any of
its parameters.
This is also the reason why simulations using the lottery policy will sometimes
be reproduced identically.
@c % -------------------------------------------------
@node From the commandline, (none), From the GUI, Using SGPEM
@section From the commandline
@cindex commandline
@menu
* SGPEM Commands:: Here you'll find a set of commands available
from the command line
* SGPEM Output:: Interpretation of the output
@end menu
@c % -------------------------------------------------
@node SGPEM Commands, SGPEM Output, From the commandline, From the commandline
@subsection SGPEM Commands
@cindex commands
SGPEMv2 commands are case-insensitive, and make extensive use of numerical identifiers. This is
annoying, but since there is no restriction on the names of the entities, it is the only way to be sure they're uniquely
identifiable.@*
Use the @command{show} command to obtain the numerical identifiers you need. For most kinds of entities,
identifiers should not be influenced by additions, but they may be affected by removals. Also, policies
are dynamically loaded at startup, so it is highly recommended that you don't make assumptions about the relation
between policies and their identifiers when the application is run several times.@*
A list of the commands, with a detailed description follows:
@table @strong
@item @command{help <string>}
If <string> is a valid command, it prints the usage instructions for that specific command, otherwise prints the
list of supported commands
@item @command{run}
Starts the simulation. It can be continuous or step-by-step depending on the mode configured
with set continuous (default=true).@*
The output of run is a snapshot of the state of the simulation at each instant.@*
The instant 0 represents the initial state, during which no process is running. The scheduler activity begins at instant 1.
@item @command{pause}
Pauses the simulation. The next call to run will continue it.
@item @command{stop}
Stops the simulation. The next call to run will bring the simulation to the first instant and start it.
@item @command{configure <entity>}
Where <entity> may be cpu-policy or resource-policy.@*
This is currently the only way to control the behaviour of policies without modifying their source code.
@item @command{get <attr_name>}
Where <attr_name> may be simulation-tick or continuous.
@item @command{set <attr_name> [=] <value>}
Where <attr_name> may be simulation-tick, continuous, cpu-policy or resource-policy.@*
@strong{simulation-tick} is the time between steps in a continuous simulation, in milliseconds; @strong{continuous}
is a boolean ("true" or "false") indicating whether the simulation should advance continuously or step-by-step.
By default its value is "true".
@item @command{show}
Displays the names of the entities (if available) and other information, each entry prefixed by the entity's numeric identifier.@*
The syntax depends on the entities being displayed:
@itemize
@item @command{show processes | resources | cpu-policies | resource-policies}
@item @command{show threads <process_id>}
With <process_id> being the numeric identifier of the parent process
@item @command{show requests <process_id> <thread_id>}
With <thread_id> being the numeric identifier of a thread of the process identified by <process_id>
@item @command{show subrequests <process_id> <thread_id> <request_id>}
Where the numeric ids follow the same logic as in the previous commands
@item @command{show statistics}
Shows statistics for the whole simulation for the current instant
@end itemize
@item @command{add}
Adds an entity by using a questionnaire-like approach.@*
The syntax depends on the entity being added:
@itemize
@item @command{add process | resource}
@item @command{add thread <process_id>}
With <process_id> being the numeric identifier of the parent process
@item @command{add request <process_id> <thread_id>}
With <thread_id> being the numeric identifier of a thread of the process identified by <process_id>
@item @command{add subrequest <process_id> <thread_id> <request_id>}
Where the numeric ids follow the same logic as in the previous commands
@end itemize
@item @command{remove}
Removes an entity.@*
The syntax depends on the entity being removed:
@itemize
@item @command{remove process | resource <id>}
Where <id> is the process or resource identifier
@item @command{remove thread <process_id> <thread_id>}
With <process_id> being the identifier of the parent process, and <thread_id> the id of the thread to be removed
@item @command{remove request <process_id> <thread_id> <request_id>}
Where the numeric ids follow the same logic as in the previous commands
@item @command{remove subrequest <process_id> <thread_id> <request_id> <subrequest_id>}
Where the numeric ids follow the same logic as in the previous commands
@end itemize
@item @command{save <filename>}
Saves the simulation to file <filename>, which may be a path in a format suitable for the operating system used.
@item @command{load <filename>}
Loads a simulation from file <filename>, which may be a path in a format suitable for the operating system used.
@item @command{quit}
Gently closes the program. You may also use the @kbd{C-d} combination to obtain the same effect, but only from
the "main" command prompt, not inside wizards for adding entities or for configuring policies.
@end table
@c % -------------------------------------------------
@node SGPEM Output, (none), SGPEM Commands, From the commandline
@subsection SGPEM Output
@cindex output
The output of @command{run} is fairly complex.@*
Example:
@smallexample
@verbatim
>>>> 4
READY QUEUE: { Anassimandro ~ }
RESOURCES:
0. forchetta, with 1 places
queue: { [Anassimene] || Pitagora ~ Pitagora }
PROCESSES: state arrival requiring elapsed priority res_id
1. Pitagorici BLOCKED 0 4 0 0
1. Pitagora BLOCKED 0 4 0 0
1.1 forchetta UNALLOCABLE 0 4 0 0
1.2 forchetta UNALLOCABLE 0 4 0 0
2.1 forchetta FUTURE 2 4 0 0
2. Scuola di Mileto >> RUNNING << 3 8 1 0
1. Anassimene >> RUNNING << 0 6 1 0
1.1 forchetta ALLOCATED 0 2 1 0
2. Anassimandro READY 0 2 0 0
1.1 forchetta FUTURE 0 2 0 0
@end verbatim
@end smallexample
The first number (4, in this example) is the current instant of the simulation. @*
Just below there's the ready queue, containing the threads ready to be executed; it'll be up to the scheduling policy
to decide what to do with them.@*
Then there are resources. The number just before their name is their numerical identifier (the one displayed also by
@command{show}). Each resource has its subrequests queue, where the leftmost element is the first in the queue
(since subrequests have no name, the name of the thread issuing it is used).
Elements in the queue are normally separated by a "~",
while a "||" is used to separate allocable subrequests from unallocable ones (allocable ones are to the left of the separator,
unallocable ones to the right).@*
Finally there are processes, threads and requests. The hierarchy is similar to the one used for the
@ref{The Schedulables/Requests tree, schedulables tree}, except that requests are expanded, and only subrequests are shown.
The number used for processes and threads is simply their numerical identifier, as it is for resources.@*
For subrequests there are two numbers separated by a dot: the first is the numerical identifier of the request, the second
is the identifier of the subrequest itself.@*
For these kinds of entities a tabular format is used, and fields are left blank if the
information is not available for an entity. The names of the columns should be self-explanatory.@*
@c % ------------------------------------------------
@node Extending SGPEM, License, Using SGPEM, Top
@chapter Extending SGPEM
@cindex extending
@menu
* Writing new policies:: Steps that must be followed to insert a new policy
* Writing plugins::
@end menu
@c % -------------------------------------------------
@node Writing new policies, Writing plugins, Extending SGPEM, Extending SGPEM
@section Writing new policies
@cindex writing policies
All built-in policies are implemented in Python, but don't worry: you
don't have to be a Python expert to write a new policy. We'll explain
how to write a new policy using a simple example: an FCFS
policy. Then a more complex example will follow: a Round Robin policy
that uses pre-emption by priority.
Now let's get started. All you have to do to create your own policy is
to change the few bold lines of the following example. Also remember
that the name of the class has to be the same as the name of the file
(minus the @code{.py} file extension, of course).
@c % --------- new subsection
@subsection A beginner example: First Come First Served
@example
01 from CPUPolicy import CPUPolicy
02 class fcfs(CPUPolicy) :
03 def __init__(self):
04 pass;
05 def configure(self):
@strong{06 print 'No options to configure for fcfs'}
07 def is_preemptive(self):
@strong{08 return False}
09 def get_time_slice(self):
@strong{10 return -1}
11 def sort_queue(self, event, queue):
@strong{12 cmpf = lambda a, b: \
a.get_arrival_time() + \
a.get_process().get_arrival_time() <= \
b.get_arrival_time() + \
b.get_process().get_arrival_time()
13 self.sort(queue,cmpf)}
@end example
@sp 2
@table @asis
@item body of @code{def configure(self)}: line 06
Configures the policy to its initial values. This is called just before a
simulation starts, and it is responsible for defining
the parameters the policy wants to expose to the user. For example, it may make
the return value of @code{is_preemptive()} configurable, or
register an integer value for the time slice duration.
@item body of @code{def is_preemptive(self):} line 08
It says whether the policy wants to be preemptive, other than by
normal time slice termination (if a positive time slice has been provided).
The possible return values are:
@enumerate
@item
@code{True}: If the policy returns True, it declares that it wants the running
thread to be released if a thread at higher priority is put at the
beginning of the ready threads queue.
This is achieved by putting the current running thread, if there is
one, onto the ready queue. It is up to you, in the
@code{sort_queue()} method, to manage this special case.
@item
@code{False}: The policy always waits for the end of the time slice (or a thread
blocking/termination) before selecting a new running thread, even if it
has greater priority than the current one.
There will never be a running thread in the ready queue passed to
@code{sort_queue()}.
@end enumerate
Please note how the word ``priority'' here has a general meaning: it refers to anything
that can make a thread bubble up the sorted ready queue and come before another. So it's up
to Policy.sort_queue() to give it a precise meaning.
@sp 1
@item body of @code{def get_time_slice(self):} line 10
Returns the length of a time-slice for this policy.
A time-sliced policy should return a positive integer value; a policy
which doesn't use slices should instead
return @code{-1}. You're encouraged to use a user-configurable
parameter via @code{Policy.configure()} if the policy is
time-sliced, to ensure greater flexibility.
@sp 1
@item body of @code{def sort_queue(self, event, queue):} line 12,13
Sorts the queue of ready threads. This method is called by the
scheduler at each step of the simulation to sort the ready threads
queue. It is the core of your policy: when the scheduler has to select
a new thread, it will always try to take the first of the queue. If that one
cannot run for some reason (for example, it immediately blocks), the
second is selected, and so on, until the end of the queue.
Remember that if @code{is_preemptive()} returns True, you may have
a running thread in the queue. See the following example for some tips
about how to manage this case.
Pay attention to the fact that we used the @code{<=} relation at line
@samp{12}, and not a simple @code{<}. This is because
@code{queue.sort()} uses an in-place implementation of quicksort.
@xref{ReadyQueue.sort_queue()}. If your policy behaves strangely,
this may be the cause.
@end table
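To get a feel for how such a comparator orders a queue, here is a minimal,
self-contained Python sketch. The @code{MockProcess} and @code{MockThread}
classes and the insertion sort are stand-ins of ours, not part of the
SGPEMv2 API; only the accessor names mirror the exported interface
described later in this chapter.

@example
# Hypothetical stand-ins for SGPEMv2 objects (NOT the real classes):
# they expose only the accessors used by the fcfs comparator.
class MockProcess:
    def __init__(self, arrival):
        self._arrival = arrival

    def get_arrival_time(self):
        return self._arrival

class MockThread:
    def __init__(self, name, arrival, process):
        self._name = name
        self._arrival = arrival
        self._process = process

    def get_arrival_time(self):
        return self._arrival

    def get_process(self):
        return self._process

# The same less-or-equal relation used by the fcfs policy above:
cmpf = lambda a, b: \
    a.get_arrival_time() + a.get_process().get_arrival_time() <= \
    b.get_arrival_time() + b.get_process().get_arrival_time()

# A stable in-place sort driven by the comparator, standing in for
# the quicksort that SGPEMv2 provides through Policy.sort():
def sort_by(queue, le):
    for i in range(1, len(queue)):
        j = i
        while j > 0 and not le(queue[j - 1], queue[j]):
            queue[j - 1], queue[j] = queue[j], queue[j - 1]
            j -= 1

early = MockThread("early", 1, MockProcess(0))  # absolute arrival: 1
late = MockThread("late", 2, MockProcess(5))    # absolute arrival: 7
queue = [late, early]
sort_by(queue, cmpf)   # "early" now precedes "late"
@end example

Within SGPEMv2 you would of course pass @code{cmpf} to
@code{self.sort(queue, cmpf)} instead of sorting by hand.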
@c % --------- new subsection
@subsection Exposed interface: what you can use
This is a list of exported interfaces that you can use from
your policy script to manipulate SGPEMv2 exported objects.
If you want to see what methods a Python object exports, remember
that you can also use the built-in @code{dir()} Python function.
@c % --- new subsubsection
@anchor{Configuring parameters}
@subsubsection Configuring parameters
TODO: list and describe all methods exposed from PolicyParameters.
In the meantime, see the example below about the RR policy with priority.
@c % --- new subsubsection
@subsubsection Methods for manipulating the ready queue
The parameter @code{queue} passed to @code{CPUPolicy.sort_queue()}
is of type @code{ReadyQueue}. This is a description of the available
methods:
@table @code
@anchor{ReadyQueue.sort_queue()}
@item ReadyQueue.sort_queue(queue, compare_function)
This is the function that actually does the sorting
of the queue for you. You can of course avoid calling this
method and sort the queue by hand (the ``lottery'' policy,
for example, doesn't call it).
It takes two parameters: the first is the queue, and the second is a
compare function. Usually you'll want to use a simple lambda-function
defined in the way you can see in the above and following examples.
Remember that this function will internally use an in-place version of
quicksort, which is a stable sorting algorithm only when employed with
a less-or-equal relation (``@code{<=}'') or a greater-or-equal one
(``@code{>=}''). Otherwise the queue would still be sorted, but two
adjacent threads that have the same value for a given property could
be swapped. This may be undesirable with certain policies, and
could lead to unexpected results, so be careful.
@item ReadyQueue.size()
Returns the number of elements in the queue.
@item ReadyQueue.get_item_at(position)
Returns the thread contained at the given position of the queue, where
@code{0} means the front, and @code{queue.size() - 1} means the last
element (the back) of the queue. Trying to access an element outside
the range [0, queue size) will raise an exception.
@item ReadyQueue.bubble_to_front(position)
Moves the item at the given position up in the queue until
it reaches the front, preserving the order of the other threads.
Trying to access an element outside the range [0, queue size) will
throw an exception at you.
@item ReadyQueue.swap(position_a, position_b)
Swaps the element in position a with the element in position b.
This is used mainly by the internal quicksort implementation, but
you may want to employ it directly in some cases, too.
As you may have already guessed, trying to access an element
outside of the queue will raise an exception.
@end table
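To experiment with these semantics outside the simulator, the following
Python sketch mimics the documented behaviour with an ordinary list.
@code{MiniQueue} is a hypothetical stand-in written for this manual, not
the real @code{ReadyQueue} class:

@example
class MiniQueue:
    """Hypothetical stand-in mimicking ReadyQueue's documented methods."""

    def __init__(self, items):
        self._items = list(items)

    def size(self):
        return len(self._items)

    def get_item_at(self, position):
        # Accessing outside [0, queue size) raises an exception,
        # just as the real class is documented to do.
        if not 0 <= position < len(self._items):
            raise IndexError("position outside [0, queue size)")
        return self._items[position]

    def bubble_to_front(self, position):
        # Move the item to the front, preserving the others' order.
        item = self.get_item_at(position)
        del self._items[position]
        self._items.insert(0, item)

    def swap(self, position_a, position_b):
        a, b = self.get_item_at(position_a), self.get_item_at(position_b)
        self._items[position_a], self._items[position_b] = b, a

q = MiniQueue(["A", "B", "C", "D"])
q.bubble_to_front(2)   # queue becomes C, A, B, D (A, B, D keep their order)
q.swap(0, 3)           # queue becomes D, A, B, C
@end example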
@c % --- new subsubsection
@subsubsection Properties of schedulable entities
All schedulables, both threads and processes, implement the following methods:
@table @code
@item get_arrival_time()
Returns the time a schedulable arrives to the CPU. For a thread, it is
relative to the time its parent process is spawned. For a process, it
is an absolute time value.
So, a thread will arrive to the CPU after @code{get_arrival_time() +
get_process().get_arrival_time()} units.
@item get_elapsed_time()
Returns for how many time units a schedulable has been running up until now.
@item get_last_acquisition()
Returns the last time a schedulable has been selected for scheduling (that
is, to become the running one).
@item get_last_release()
Returns the last time a schedulable stopped being scheduled as the
running one and was preempted. Note that this also happens every time
a time-slice ends.
@item get_base_priority()
Returns the priority a schedulable has been spawned with.
@item get_current_priority()
Returns the current priority. It is usually given by
@code{get_base_priority() + priority_push}. See below.
@item set_priority_push(new_value = 0)
Sets the priority push to change the base priority of a
schedulable. It is the only method available that changes
the state of a schedulable.
@item get_total_cpu_time()
Returns the time a schedulable will run before terminating.
@item get_state()
Returns a string describing the state of a schedulable. It can be:
@enumerate
@item ``future''
@item ``ready''
@item ``running''
@item ``blocked''
@item ``terminated''
@end enumerate
@item get_name()
Returns a string with the name the user gave to the schedulable.
@end table
@sp 2
Class @code{Thread} has another method, which is @code{get_process()}. It
returns the parent process. Class @code{Process} behaves similarly by
providing a @code{get_threads()} method that returns a list of its
children threads.
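As an illustration of how these accessors can be combined in a comparator,
here is a hedged Python sketch ordering threads by their remaining CPU time,
i.e. @code{get_total_cpu_time() - get_elapsed_time()}. The @code{FakeThread}
class is ours; only the two accessor names come from the table above:

@example
class FakeThread:
    """Hypothetical object exposing two of the accessors listed above."""

    def __init__(self, name, total, elapsed):
        self._name = name
        self._total = total
        self._elapsed = elapsed

    def get_total_cpu_time(self):
        return self._total

    def get_elapsed_time(self):
        return self._elapsed

# Order by remaining CPU time; note the <= relation, as recommended
# above for use with the simulator's in-place quicksort.
by_remaining = lambda a, b: \
    (a.get_total_cpu_time() - a.get_elapsed_time()) <= \
    (b.get_total_cpu_time() - b.get_elapsed_time())

t1 = FakeThread("t1", 10, 2)   # 8 units left
t2 = FakeThread("t2", 6, 5)    # 1 unit left
first = t1 if by_remaining(t1, t2) else t2   # t2 would run first
@end example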
@c % --------- new subsection
@subsection A more complete example: Round Robin with priority
Now, let's see a more interesting (and a little more complex) example:
a Round Robin by priority policy that can optionally also work with
pre-emption by priority.
@sp 2
@example
00 from CPUPolicy import CPUPolicy
01
02 class rr_priority(CPUPolicy) :
03 """Round Robin scheduling policy that takes priority in account.
04
05 No lower priority thread can run if a higher
06 priority thread exists. If pre-emptive by priority, a
07 higher-priority thread becoming ready even in the middle
08 of a time slice will pre-empt the running thread. Else,
09 the time slice will have to end before the former can run."""
10
11 def __init__(self):
12 pass;
13
14 def configure(self):
15 param = self.get_parameters()
16 param.register_int("Time slice", 1, 10000, True, 2)
17 param.register_int("Is preemptive?", 0, 1, True, 1)
18
19 def is_preemptive(self):
20 value = self.get_parameters().get_int("Is preemptive?")
21 if value == 0:
22 return False
23 else:
24 return True
25
26 def get_time_slice(self):
27 return self.get_parameters().get_int("Time slice")
28
29 def sort_queue(self, queue):
30 by_ltime = lambda a, b: \
31 a.get_last_acquisition() <= \
32 b.get_last_acquisition()
33 by_prio = lambda a, b: \
34 a.get_current_priority() <= \
35 b.get_current_priority()
36
37 self.sort(queue,by_ltime)
38 self.sort(queue,by_prio)
39
40 # manage preemption: see if we've a running thread
41 # in the ready queue, and if it can still run
42 if self.is_preemptive() == True:
43 higher_prio = queue.get_item_at(0).get_current_priority()
44 i = 0
45 while i < queue.size():
46 sched = queue.get_item_at(i)
47 priority = sched.get_current_priority()
48 if(priority != higher_prio):
49 break
50 if sched.get_state() == "running":
51 queue.bubble_to_front(i)
52 i += 1
@end example
We've also added a description of the class immediately
following the class declaration (lines @samp{03-09}). This is what is
returned as the policy description in the frontend. You may want to
document your policies in the same way too.
Now, let's see the most complex parts together:
@table @code
@item configure()
There are three types of parameters you can register in the value
returned by @code{self.get_parameters()}, and they are integer
parameters, float parameters and strings. Usually boolean values can
be simulated by registering an integer parameter limited to the
interval [0, 1]. @xref{Configuring parameters}, for the exposed interface.
@item is_preemptive()
TODO: write me
@item sort_queue()
Here there are quite a lot of things going on, so let's tackle them
one by one.
At line @samp{30} we create a lambda-function that says to sort the queue
by last acquisition time, so that threads that have been scheduled
recently end up at the back of the queue (which is exactly what a
Round Robin policy should do).
Then, at line @samp{33}, we create another lambda-function, this time
because we want to sort the queue by priority, too.
Having done this, we let quicksort do the hard job at lines @samp{37-38}.
Since we may have pre-emption enabled, we may have a running thread on
the ready queue (if one exists at the current instant). But what
happens if the running thread was put in the queue, and we just sorted it?
Unfortunately, having the greatest last acquisition time, the running thread would end
at the back of the queue, thus never being selected to run for more
than a single time unit if the queue is non-empty and there are other
threads with the same priority!
The solution is to check if there is a thread with state ``running''
at the beginning of the queue, among those that have the same
priority. If there is one, we make it bubble to the front of the queue.
This is the explanation for lines @samp{42-52}.
@end table
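To see the preemption loop of lines @samp{42-52} in isolation, here is a
plain-Python rendition where threads are hypothetical (name, priority,
state) triples and bubbling is done with list operations instead of
@code{bubble_to_front()}:

@example
# The queue as it looks after the two sorts at lines 37-38:
# already ordered by priority, running thread not yet at the front.
queue = [("A", 5, "ready"), ("B", 5, "running"), ("C", 3, "ready")]

higher_prio = queue[0][1]
i = 0
while i < len(queue):
    name, priority, state = queue[i]
    if priority != higher_prio:
        break                          # only scan the top-priority group
    if state == "running":
        queue.insert(0, queue.pop(i))  # bubble it to the front
    i += 1

# queue is now B, A, C: the running thread leads its priority group
@end example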
@c % -------------------------------------------------
@node Writing plugins, (none), Writing new policies, Extending SGPEM
@section Writing plugins
@cindex plugins
Writing plugins for SGPEMv2 goes beyond the scope of this manual.
@xref{Top, , Writing your own plugins, sgpem2dman, SGPEMv2 Developer Manual},
for information on how to extend it with a plugin of yours.
@c % -------------------------------------------------
@c include license text
@node License, Concept index, Extending SGPEM, Top
@include fdl.texi
@c % --------------------------------------------------
@node Concept index, (none), License, Top
@unnumbered Index
@printindex cp
@bye