\input texinfo @c -*-texinfo-*-
@c %**start of header
@setfilename sgpem2uman.info
@settitle SGPEMv2 User Manual
@include vers-uman.texi
@c %**end of header

@dircategory SGPEM v2 - A Process Scheduling Simulator
@direntry
* Users: (sgpem2uman)Top
@end direntry

@c % --------------------------------------------------
@copying
This is SGPEMv2 User Manual (version @value{VERSION}, @value{UPDATED}).

Copyright @copyright{} 2005-2006 University of Padova, dept. of Pure and Applied Mathematics

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled ``GNU Free Documentation License''.
@end copying

@c % --------------------------------------------------
@titlepage
@title SGPEMv2 User Manual
@subtitle for version @value{VERSION}, @value{UPDATED}
@author Filippo Paparella (@email{ironpipp@@gmail.com})
@author Paolo Santi (@email{psanti@@studenti.math.unipd.it})
@author Matteo Settenvini (@email{matteo@@member.fsf.org})
@author Marco Trevisan (@email{evenjn@@gmail.com})
@author Djina Verbanac (@email{betalgez@@yahoo.com})
@author Luca Vezzaro (@email{lvezzaro@@studenti.math.unipd.it})
@page
@vskip 0pt plus 1filll
@insertcopying
@end titlepage

@c Output the table of contents at the beginning.
@contents

@c % --------------------------------------------------
@c SGPEMv2 User Manual
@c % --------------------------------------------------
@ifnottex
@node Top, History, , (dir)
@top Learn how to operate SGPEMv2
@insertcopying
@end ifnottex

@menu
* History:: The history of changes to this document.
* Overview of SGPEM:: Description and objectives of SGPEM v2.
* Installation:: Here we explain how to install SGPEM v2, as well as provide some advice on useful compilation options.
* Basics:: Things you should know before starting.
* Using SGPEM:: Instructions on how to use SGPEM.
* Extending SGPEM:: Learn how to write new policies and plugins.
* License:: A full copy of the GNU Free Documentation License under which this manual is distributed.
* Concept index:: Complete index.
@end menu

@c % --------------------------------------------------
@node History, Overview of SGPEM, Top, Top
@unnumbered History

@table @strong
@item 2006, September 9th @r{--- Luca Vezzaro}
Wrote documentation for section ``The Schedulables/Requests tree'' and section ``The Resources list''
@item 2006, September 8th @r{--- Luca Vezzaro}
Wrote documentation for section ``Overall view of the main window''
@item 2006, September 8th @r{--- Matteo Settenvini}
Updated chapters about building and installation. Rewrote some of the chapter about extending SGPEMv2 with custom CPU policies, and added a more complex example. Documented interfaces exported to Python. Quickly described built-in scheduling policies.
@item 2006, September 7th @r{--- Luca Vezzaro}
First attempt at expanding the manual structure with the material we'll need in the forthcoming beta testing
@item 2006, March 10th @r{--- Djina Verbanac}
Added chapter ``Writing new policies''
@item 2006, March 9th @r{--- Djina Verbanac}
Added chapters ``Overview of SGPEM'' and ``Starting with SGPEM''.
@item 2006, January 26th @r{--- Matteo Settenvini}
Added subsection about how to generate code documentation via Doxygen.
@item 2005, December 11th @r{--- Matteo Settenvini}
Added full license text.
@item 2005, November 8th @r{--- Matteo Settenvini}
First draft of this document.
@end table

@c % --------------------------------------------------
@node Overview of SGPEM, Installation, History, Top
@chapter Overview of SGPEM

@menu
* Description and aims::
* How to read this manual?::
* Reporting Bugs::
* Features::
@end menu

@c % --------------------------------------------------
@node Description and aims, How to read this manual?, Overview of SGPEM, Overview of SGPEM
@section Description and aims
@cindex SGPEM
@cindex description

SGPEM is an Italian acronym, standing for ``@emph{Simulatore della Gestione dei Processi in un Elaboratore Multiprogrammato}'' (in English, ``@emph{Process Management Simulator for a Multitasking Computer}''). It was initially developed for use in the ``Operating Systems'' course, part of the Computer Science curriculum of the University of Padova, Italy.

The aim of SGPEM is to provide an easy-to-use environment for simulating process scheduling policies, and for assigning resources in a multitasking computer. SGPEMv2 is educational software, and it can help students better understand how operating systems work.

@c % --------------------------------------------------
@node How to read this manual?, Reporting Bugs, Description and aims, Overview of SGPEM
@section How to read this manual?
@cindex manual

We recommend that you read the manual following the structure that we laid out for it. You will be gently led through Installation, Configuration and Usage of SGPEMv2. If you find yourself in trouble reading the manual, please don't hesitate to contact us at @email{swe@@thgnet.it}.

@c % --------------------------------------------------
@node Reporting Bugs, Features, How to read this manual?, Overview of SGPEM
@section Reporting Bugs
@cindex bugs
@cindex reporting

We welcome bug reports and suggestions for any aspect of the SGPEM v2 system: the program in general, documentation, installation... anything. Please email us at @email{swe@@thgnet.it}.
When reporting a bug, include enough information for us to reproduce the problem. In general:

@itemize
@item the version number of SGPEM v2.
@item hardware and operating system name and version.
@item the content of any file necessary to reproduce the bug.
@item a description of the problem and any erroneous output.
@item any unusual option you gave to @command{configure}.
@item anything else you think might be helpful.
@end itemize

If you are ambitious you can try to fix the problem yourself, but we warmly recommend that you read the Developer Manual first.

@c % --------------------------------------------------
@node Features, , Reporting Bugs, Overview of SGPEM
@section Features
@cindex features

The main features are:

@itemize
@item For now you can use only prompt commands to start the simulation and change some parameters. For more information see @ref{SGPEM Commands}.
@item You can use the program from your own shell or, if you prefer, through the minimal GUI that SGPEM offers at this moment.
@item The output of the simulation is textual, and you can see it in the main GUI window or in your terminal window.
@item The policy in use is First Come First Served.
@item You can write your own policies. For more information see @ref{Writing new policies}.
@end itemize

@c % --------------------------------------------------
@node Installation, Basics, Overview of SGPEM, Top
@chapter Installation
@cindex installation

@menu
* Prerequisites:: Programs and libraries needed to compile and run SGPEM.
* Building:: Help for compiling SGPEM on your platform.
@end menu

@c % --------------------------------------------------
@node Prerequisites, Building, Installation, Installation
@section Prerequisites
@cindex requirements

Some software is needed in order to build and install SGPEM on your personal computer.
Which pieces of software you need installed depends on whether you are a developer, a user building from sources, or a user running a binary provided by a packager. If you find that this section misses something, or lists the wrong version of a program, please let us know!

@c % ---- new subsection
@subsection Runtime dependencies

To run SGPEMv2, you need:

@table @emph
@item Gtkmm >= 2.8 with Cairo support
The popular C++ wrapper for the even-more-popular GIMP ToolKit. We use Cairo to draw our custom widgets.
@item Python >= 2.3
We use Python to let the user write her own policies in a simple and complete language.
@item libXML2 >= 2.6.10
An XML library we use to save and load files to/from disk.
@end table

@c % ---- new subsection
@subsection Building from source

In addition to the runtime dependencies, you'll need:

@table @emph
@item SWIG >= 1.3.21
SWIG generates the C++ sources needed to build a module that Python can use, starting from a simple interface specification.
@end table

@c % ---- new subsection
@subsection Developers

In addition to the tools needed by users building from sources, you'll need:

@table @emph
@item GCC with C++ support
as well as the other standard GNU binutils and tools: make, sed, ld... GCC version >= 3.4 is highly recommended. Please don't report compiling-related problems with any previous version. There are some known issues with certain versions of GCC 4.0. @xref{Building}.
@item Automake >= 1.9
We use a single @file{Makefile.am} to avoid recursive make. Older versions of automake didn't play well with it. See @url{http://aegis.sourceforge.net/@/auug97.pdf} for the motivations that led to this choice.
@item Autoconf, libtool, autopoint @dots{}
The standard autotools family.
@item Subversion >= 1.2
If you need to update the sources from our repository, or commit your changes, you'll need Subversion built with SSL support.
@item Dejagnu >= 1.4
The testsuite framework we use as a platform for running tests.
@end table

@c % --------------------------------------------------
@node Building, , Prerequisites, Installation
@section Building
@cindex compiling

@noindent To ensure a clean build, follow these steps:
@sp 1
@example
@code{cd }
@code{mkdir =build}
@code{cd =build}
@code{CXXFLAGS="what you want" ../configure --prefix=/usr/local}
@end example
@sp 1
@noindent This will check that you have all the needed software installed.

@noindent Choose good @env{CXXFLAGS} to optimize your build. For example, on my machine, I would use:
@sp 1
@example
@code{CXXFLAGS="-O3 -pipe -march=pentium4" ../configure --prefix=/usr/local}
@end example
@sp 1
@noindent Being a developer, though, if I had to debug SGPEM, I would type:
@sp 1
@example
@code{../configure --prefix=`pwd`/../=inst --enable-debug}
@end example
@sp 1
@noindent Please note that the characters around ``pwd'' are backticks, not normal apostrophes.

@strong{Warning}: at the moment, we are aware that passing @option{--disable-shared} to @command{configure} doesn't work. We'll look into it sooner or later, but in the meantime just build shared libraries.

@noindent Once SGPEMv2 is successfully configured, just type:
@sp 1
@example
@command{make}
@end example
@sp 1
@noindent Some versions of GCC 4, usually those before the 4.1 series, present some problems with the newly-added visibility support for DSO symbols. For example, OpenSuSE 10.0 is known to have such issues. If you encounter problems during the build, at the linking stage, about unresolved symbols in libraries, please re-run @command{configure} with the @option{--disable-visibility-support} option. You'll then have to run @command{make clean && make}.
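If you hit the GCC 4 visibility problem just described, the workaround boils down to a few commands. This is a sketch assuming the @file{=build} directory layout from the earlier examples; adjust the prefix to your own setup:

```shell
# Re-run configure without visibility support, then rebuild from scratch.
cd =build
../configure --prefix=/usr/local --disable-visibility-support
make clean && make
```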
@noindent Upon a successful build, you can install SGPEMv2 just by typing:
@sp 1
@example
@code{su -c "make install"}
@end example
@sp 1
@noindent The root password will be required (of course, if you're installing with a prefix inside your home directory, you won't need administrative rights, and just ``@command{make install}'' will suffice).

See the ``@file{INSTALL}'' file in this folder for an overview of other (less common) autoconf options.

@subsection Generating API documentation

We added Doxygen support to the project. If you have it installed, you can simply run @command{make apidox} from the package top source directory. The documentation will be written to the @samp{$@{BUILD_DIR@}/docs/API/} directory.

If you'd like to generate nicer inheritance graphs, you just have to install @command{dot}, part of the @emph{Graphviz} package. If you didn't have it previously installed, you may need to re-run @command{configure}.

@c % --------------------------------------------------
@node Basics, Using SGPEM, Installation, Top
@chapter Basics
@cindex basics

@menu
* The Scheduler:: Essential background information necessary to understand how schedulable entities are scheduled.
* Policies:: Everything you ever wanted to know about policies in SGPEM!
@end menu

@c % -------------------------------------------------
@node The Scheduler, Policies, Basics, Basics
@section The Scheduler
@cindex scheduler basics

From the scheduler's point of view, the simulated environment is populated by processes and resources. Processes are spawned at different instants and compete for the CPU and other resources until their termination. Processes have an arrival time, i.e.@: an instant at which they are spawned, and a priority.

Our application simulates the scheduling of threads, not the scheduling of processes. However, process scheduling can be simulated simply by placing a single thread within each process and hiding thread details in the GUI.
In SGPEM, a process is essentially just a container of threads. Threads have a required CPU time, a priority within the process, and an arrival time delta. The arrival time delta of a thread is relative to the execution time of the parent process, not to the arrival time of the parent process.

The scheduler's task is to assign the CPU and the other resources to the processes. Both resources and the CPU are mutually exclusive, meaning that no two processes may use them at the same time.

A thread may raise requests at any time during its execution: a request has a raising time delta, which is relative to the execution time of the owner thread, not to the arrival time of the owner thread. A request specifies a set of resources and the time they are requested for. The specified set of resources is acquired atomically, meaning that either all of the requested resources are given to the thread, or none of them is.

A thread may raise any number of requests at any instant. Requesting four resources may be done either atomically, specifying one request with four separate subrequests, or non-atomically, specifying four requests with one subrequest each. A subrequest specifies which resource is requested and for how long.

Resources have multiplicity, or places. A resource with two places acts like two indistinguishable resources.

@c % -------------------------------------------------
@node Policies, , The Scheduler, Basics
@section Policies
@cindex policies

@menu
* What is a policy in SGPEM?:: Explains what a SGPEM policy can, should and must do, and what it can't do. And how.
* What kind of policies are there?:: In SGPEM there are many species of policies. Here you will explore our zoo.
* Built-in policies:: Here you will find detailed descriptions of the policies shipped with the standard distribution of SGPEM.
@end menu

@node What is a policy in SGPEM?, What kind of policies are there?, Policies, Policies
@subsection What is a policy in SGPEM?
@cindex policies basics

A policy is a rule used by the scheduler to decide which thread should run next. Our scheduler needs two different policies to perform this choice: one is called a CPU (scheduling) policy, and the other is called a resource (scheduling) policy.

@subsubsection CPU Scheduling Policies

The first, from now on called simply ``policy'', is the rule telling which of the ready (or running) threads is the best candidate to get the CPU. For example, the FCFS policy is a rule which says that, among the ready threads, the one which asked for the CPU first is the best candidate. The Lottery policy is a rule which says that, among the ready threads, one chosen at random is the best candidate.

Being the best candidate means getting the CPU and trying to run; however, getting the CPU does not guarantee being able to run: a thread may need a resource to complete its work, and mutually exclusive resources may be locked by other threads. In this event a thread is said to raise a request for some resources, and to be blocked by those requests.

@subsubsection Resource Scheduling Policies

The second policy is the rule telling, for each resource, which of the raised requests are allowed to be satisfied, according to the places offered by the resource. For example, the FIFO resource policy is a rule which says that, among the raised requests, the ones which came first are allowed to be allocated. As another example, the Priority resource policy is a rule which, roughly speaking, says that, among the raised requests, the ones having higher priority are allowed to be allocated.

SGPEM provides some resource policies, but it does not allow the user to create new ones. Like CPU scheduling policies, resource policies are parametric, although at the moment none of the included ones is.
Resource policies are largely dependent on the mechanism of the scheduler, and since that mechanism is very complex to understand, it would be wasteful to provide an extension mechanism for resource policies: a user wishing to implement a new resource scheduling policy is better off understanding and adapting the SGPEM source code.

@subsubsection Policy Parameters

A policy in SGPEM is in general a parametric rule: this means that the user may set some parameters to actually use the policy. Parameters are integer, float or string values which further specify the behavior of the policy: for example, the round-robin policy needs the user to choose the length of a time slice. Parametric policies always provide default values for their parameters, so the user is not forced to set them manually.

@node What kind of policies are there?, Built-in policies, What is a policy in SGPEM?, Policies
@subsection What kind of policies are there?
@cindex policies kinds

SGPEM defines four classes of policies, and the scheduler uses different kinds of policies in different ways. The four kinds are: Simple, Time-sharing, Preemptive, and Preemptive and Time-sharing.

@subsubsection Simple policies

Simple policies may change the running thread only when the running one has blocked or has terminated. When that happens, a simple policy replaces it with the best candidate among the set of all the ready threads.

@subsubsection Time-sharing policies

Within SGPEM, a policy is said to be time-sharing when the policy may change the running thread after it has been running for a full time slice (or time quantum). The size of the time slice is supposed to be fixed; varying the size of the time slice during the simulation is possible, although not very useful.
A time-sharing policy is allowed to change the running thread only when it has exhausted its quantum, or it has blocked, or it has terminated, replacing it with the best candidate among the set of all the ready or running@footnote{At the moment, any running thread which has used up its quantum is set to ready, therefore there is never a running thread to choose from when a time-sharing policy is used.} threads.

@subsubsection Preemptive policies

Within SGPEM, a policy is said to be preemptive (or priority-preemptive) when the policy may change the running thread for priority reasons. A preemptive policy is allowed to change the running thread at any instant during the simulation, replacing it with the best candidate among the set of all the ready or running threads. Note that this meaning of the adjective ``preemptive'' may not match the one found in your favourite operating systems reference book.

Actually, our application does not check whether the preemption is done for priority reasons, so one could, in principle, implement time-sharing policies without specifying a fixed size for the time slice, i.e.@: without declaring the policy as time-sharing. Time-sharing may be implemented using an internal counter, relying on the fact that a preemptive policy is called at every instant.

@subsubsection Preemptive and Time-sharing policies

These policies are used by the scheduler in roughly the same way as preemptive policies are.

Note that although this distinction is enough to understand most of the common policies, SGPEM is not that simple (wasn't it simple?). The actual implementation does not partition the space of policies into four classes: a real SGPEM policy may in fact dynamically ``change its class'', thus fitting none of the classes listed above. To make use of such full-blown policies, advanced users should look directly at the mechanism itself.
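To make the classification concrete, here is a minimal Python sketch of how the four kinds answer the two questions the scheduler asks a policy. The class names and the quantum value of 20 are invented for illustration; the actual policy interface is described in @ref{Writing new policies}:

```python
# Illustrative only: how the four policy kinds map onto the two
# queries the scheduler asks a policy. Names and values are made up.

class SimplePolicy:
    """Acts only when the running thread blocks or terminates."""
    def is_preemptive(self): return False
    def get_time_slice(self): return -1      # -1 = no time slice

class TimeSharingPolicy:
    """Not priority-preemptive, but runs each thread for a fixed quantum."""
    def is_preemptive(self): return False
    def get_time_slice(self): return 20      # a fixed quantum of 20 time units

class PreemptivePolicy:
    """May replace the running thread at any instant, for priority reasons."""
    def is_preemptive(self): return True
    def get_time_slice(self): return -1

class PreemptiveTimeSharingPolicy:
    """Both priority-preemptive and quantum-driven (e.g. RR with priority)."""
    def is_preemptive(self): return True
    def get_time_slice(self): return 20
```

As the text notes, this partition is only conceptual: a real SGPEM policy may return different values at different instants, effectively changing its class during a simulation.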
@node Built-in policies, , What kind of policies are there?, Policies
@subsection Built-in policies
@cindex built-in policies

@subsubsection CPU scheduling policies

@table @asis
@item FCFS: First come first served
The first thread to arrive at the CPU will run until it ends. This policy never pre-empts; it is probably the simplest of them all. This policy has no options to configure, either.
@item SJF: Shortest job first
The thread with the shortest required CPU time will run until it ends. If @samp{Is pre-emptive?} is set to true (@samp{1}), a newly arrived thread requiring less CPU time than the remaining time of the currently running thread will pre-empt the latter. In this case, the policy is also called ``Shortest Remaining Time Next''. You can configure whether you want this policy to be pre-emptive or not.
@item RR: Round Robin
This policy executes a thread for a given amount of time (the time-slice value), and then puts it at the end of the queue. It does not pre-empt before the end of the time slice, since it doesn't take priority into account. Use ``RR priority'' for that. You can configure the duration of the time slice.
@item RR priority
No lower-priority thread can run if a higher-priority thread exists. If pre-emptive by priority, a higher-priority thread becoming ready, even in the middle of a time slice, will pre-empt the running thread. Otherwise, the time slice will have to end before the higher-priority thread can run. You can configure whether this policy is pre-emptive or not, and the duration of the time slice.
@item Lottery scheduling
Every time slice, a thread is selected at random from the ready queue. This policy does not pre-empt before the end of the time slice.
@end table

@subsubsection Resource scheduling policies

@table @asis
@item First in first out
A resource policy which satisfies earlier requests before later ones. This policy has no options to configure.
@item Last in first out
A resource policy which allows a request to be immediately allocated if there is enough space. This policy has no options to configure.
@item Higher Priority First
A resource policy which satisfies higher-priority requests before lower-priority ones. Note that a thread with priority 0 has a higher priority than a thread with priority 5. This policy has no options to configure.
@end table

@c % --------------------------------------------------
@node Using SGPEM, Extending SGPEM, Basics, Top
@chapter Using SGPEM
@cindex using

@menu
* From the GUI::
* From the commandline::
@end menu

@c % -------------------------------------------------
@node From the GUI, From the commandline, Using SGPEM, Using SGPEM
@section From the GUI
@cindex GUI

@menu
* Overall view of the main window::
* The Schedulables/Requests tree::
* The Resources list::
* The Simulation widget::
* The Holt graph::
* The Preferences dialog::
* Controlling the simulation:: Explains all the means available to control the simulation workflow.
@end menu

@c % -------------------------------------------------
@node Overall view of the main window, The Schedulables/Requests tree, From the GUI, From the GUI
@subsection Overall view of the main window
@cindex main window

@image{main-window,18cm,13.5cm,Screenshot of the main window during a simulation}

Just below the menus, there's the toolbar. The purpose of most toolbar buttons is easily understood. For example, you can instantly change the current scheduling policy by using the menu just to the right of the ``Scheduling Policy'' toolbar button, and you can do the same with the resource allocation policy. The aforementioned ``Scheduling Policy'' and ``Resource Scheduling'' toolbar buttons can be used to configure the policy's parameters, if there are any. To know more about the other toolbar buttons, such as ``Pause'', ``Play'' and ``Stop'', see @ref{Controlling the simulation}.
Normally, the window is split into three sections.

@itemize
@item The top left section is briefly called the ``Schedulables tree'': every entity in SGPEMv2, except resources, is shown and edited in this tree view. The interface of this widget is straightforward, but in case you need to know more about it, see @ref{The Schedulables/Requests tree}.
@item The top right section is the resources list; you can interact with it in the same way you do with the Schedulables tree. We won't get into the details here, as there is @ref{The Resources list, a dedicated section} for this widget.
@item Finally, the bottom section contains the ``Simulation widget'', which displays how the scheduling is proceeding. This widget is too complex to be described here, so we'll leave that to @ref{The Simulation widget, its dedicated section}.
@end itemize

Well, in fact that's not all, folks. There's also the ``Holt graph'', which is displayed in a separate window, so it doesn't steal precious window space from the simulation widget, and also because you may not need it if you don't use resources and/or requests in your simulation. For more information on this widget, see @ref{The Holt graph}.

@c % -------------------------------------------------
@node The Schedulables/Requests tree, The Resources list, Overall view of the main window, From the GUI
@subsection The Schedulables/Requests tree
@cindex schedulables tree

This widget is used to add/edit/remove processes, threads and requests. To perform an operation on it, simply right-click, and a context-sensitive menu will pop up.

Each tree level is dedicated to a specific entity:
@itemize
@item The first level is for @strong{processes}
@item The second level is for @strong{threads}
@item The third level is for @strong{requests}
@end itemize

Right-clicking on any location over the tree will always allow you to add processes, while to add threads or requests you must select a process or a thread, respectively.
To remove or edit an entity, simply select it, and the popup menu will contain the remove or edit operation specific to that entity. Note that these operations are only available while the simulation is stopped.

While the simulation is not in a stopped state, a lot of dynamic information is displayed by the widget. Let's begin by describing the meaning of the colors used to highlight the entities' names:
@itemize
@item @strong{Light Grey} is used for ``future'' processes, threads, requests and subrequests. ``Future'' means an entity that would not yet exist in the real world, since it will ``arrive'' at a time greater than the current instant
@item @strong{Green} is used for running processes and threads, and for allocated requests and subrequests
@item @strong{Yellow} is used for ready processes and threads, and for allocable requests and subrequests
@item @strong{Red} is used for blocked processes and threads, and for unallocable requests and subrequests
@item @strong{Dark Grey} is used for terminated processes and threads, and for exhausted requests and subrequests
@end itemize

The dynamic display for processes and threads simply consists of their ``elapsed time'' (the time they've been given the processor) and their ``current priority'', i.e.@: their dynamic priority, which may change if the scheduling policy decides so.

Probably the format used to display requests is a bit less trivial (yes, I'm sarcastic), but since a request has no additional information other than its state, it makes sense to condense a request and its associated subrequests on a single line. @*
So the color of the @strong{at @var{time}:} label represents the state of the request, @var{time} being the instant at which the request is raised.@*
Then there is a series of subrequests, which are displayed as colored @strong{->} (arrows), each followed by a resource name and two numbers separated by a slash.
The color of the arrow represents the state of the subrequest, and the two numbers are its ``elapsed time''/``required time''.

@c % -------------------------------------------------
@node The Resources list, The Simulation widget, The Schedulables/Requests tree, From the GUI
@subsection The Resources list
@cindex resources

You can interact with this widget in the same way you interact with the @ref{The Schedulables/Requests tree, Schedulables tree}, but since it's a plain list, not a tree, it's much simpler.

So let's get to the hot stuff: when the simulation moves from the boring stopped state to a running or paused state, the request queue is displayed below each resource. Since a request has no name, the name of the thread owning that request is displayed instead.@*
As if that wasn't cool enough, the thread name in the queue is colored according to the state of the request!

@c % -------------------------------------------------
@node The Simulation widget, The Holt graph, The Resources list, From the GUI
@subsection The Simulation widget
@cindex simulation widget

@c % -------------------------------------------------
@node The Holt graph, The Preferences dialog, The Simulation widget, From the GUI
@subsection The Holt graph
@cindex holt

@c % -------------------------------------------------
@node The Preferences dialog, Controlling the simulation, The Holt graph, From the GUI
@subsection The Preferences dialog
@cindex preferences

The preferences window allows the user to set the simulation speed. The simulation speed is the minimum waiting time between one step and the next; since computing the next step of the simulation may require the allocation of many resources, the specified speed can only be guaranteed as a minimum.

The preferences window also allows the user to add and remove the directories where policies and plugins are found and loaded from.
Changes regarding policies and plugins will be applied at the next run of SGPEM.

Preferences are saved to and loaded from the @file{sgpem.cfg} file located in the installation directory. Preferences are loaded when the application is started, and saved when the ``Close'' button of the dialog is pressed.

@c % -------------------------------------------------
@node Controlling the simulation, , The Preferences dialog, From the GUI
@subsection Controlling the simulation
@cindex simulation

@c % -------------------------------------------------
@node From the commandline, , From the GUI, Using SGPEM
@section From the commandline
@cindex commandline

@menu
* SGPEM Commands:: Here you'll find the set of commands available from the command line
* SGPEM Output:: Interpretation of the output
@end menu

@c % -------------------------------------------------
@node SGPEM Commands, SGPEM Output, From the commandline, From the commandline
@subsection SGPEM Commands
@cindex commands

@table @strong
@item @command{help @var{command}}
If @var{command} is a valid command, prints the usage instructions for that specific command.
@item @command{run}
Advances the simulation by one or more steps, depending on the actual state and on the value set with @command{setmode}.
@item @command{pause}
Pauses the simulation. It is useful only when the advancement mode is continuous. Calling @command{run} again will cause the simulation to restart from the current simulation step.
@item @command{stop}
Stops the simulation.
@item @command{setmode @var{value}}
Changes the way the simulation progresses. If the input value is 0 (false), the simulation will advance a single time step for each call to @command{run}. If the input value is 1 (true), the simulation will advance continuously, waiting the time defined with @command{settimer} between each step, until all processes terminate, or some error occurs.
@item @command{getmode}
Prints the simulation advancement mode: 0 if step-by-step, 1 if continuous.
@item @command{settimer} @var{value}
Defines how a single time unit is to be interpreted when the simulation advancement mode is continuous. The input value is in milliseconds, and it must be in the range [0, 10000].
@item @command{gettimer}
Prints the current value of the timer.
@item @command{reset}
Resets the simulation: it erases the state of the simulation and takes care of removing any residual or temporary data, to ensure the simulation reaches a clean and stable state.
@item @command{jumpto} @var{instant}
Causes the simulation to jump to the given time unit.
@item @command{getpolicy}
Prints the current policy.
@item @command{getpolicyattributes}
Prints the names and the values of the policy's attributes.
@end table

@c % -------------------------------------------------
@node SGPEM Output, (none), SGPEM Commands, From the commandline
@subsection SGPEM Output
@cindex output

You can see the textual output of the simulation in your console window or in the GUI window provided with SGPEM v2. The output of @command{run} gives you one or more rows, each one representing the state of the schedulable entities. The possible states are: @emph{RUNNING}, @emph{READY}, @emph{BLOCKED}, @emph{FUTURE} or @emph{TERMINATED}. Each row begins with the number of the instant described by the following list of states:

@itemize
@item instant 0 - represents the INITIAL STATE, during which no process is running.
@item instant 1 - the scheduler activity begins.
@end itemize

Each schedulable entity is represented by its name followed by its priority enclosed in round parentheses.
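To make the row format concrete, here is a small illustrative Python sketch, not part of SGPEM itself: the schedulable names, priorities and the exact spacing are hypothetical, and only the general shape (the instant number, then a "name (priority) STATE" entry per schedulable) follows the description above.

```python
# Illustrative sketch only: not SGPEM code. Names, priorities and
# spacing are hypothetical; the shape follows the description above.

def format_row(instant, schedulables):
    """Render one output row for the given instant.

    'schedulables' is a list of (name, priority, state) tuples."""
    cells = ["%s (%d) %s" % (name, priority, state)
             for name, priority, state in schedulables]
    return "%d: %s" % (instant, "  ".join(cells))

if __name__ == "__main__":
    # Instant 0 is the initial state: no process is running yet.
    print(format_row(0, [("shell", 5, "FUTURE"), ("init", 3, "FUTURE")]))
    # From instant 1 onwards the scheduler is active.
    print(format_row(1, [("shell", 5, "RUNNING"), ("init", 3, "READY")]))
```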
@c % ------------------------------------------------
@node Extending SGPEM, License, Using SGPEM, Top
@chapter Extending SGPEM
@cindex extending

@menu
* Writing new policies:: Steps that must be followed to insert a new policy
* Writing plugins::
@end menu

@c % -------------------------------------------------
@node Writing new policies, Writing plugins, Extending SGPEM, Extending SGPEM
@section Writing new policies
@cindex writing policies

All built-in policies are implemented in Python, but don't worry: you don't have to be a Python expert to write a new policy. We'll explain how to write a new policy using a simple example: the FCFS policy. Then a more complex example will follow: a Round Robin policy that uses pre-emption by priority.

Now let's get started: all you have to do to create your own policy is to change the few bold lines of the following example. Also remember that the name of the class has to be the same as the name of the file (minus the @code{.py} file extension, of course).

@c % --------- new subsection
@subsection A beginner example: First Come First Served

@example
01 from CPUPolicy import CPUPolicy
02 class fcfs(CPUPolicy) :
03   def __init__(self):
04     pass
05   def configure(self):
@strong{06     print 'No options to configure for fcfs'}
07   def is_preemptive(self):
@strong{08     return False}
09   def get_time_slice(self):
@strong{10     return -1}
11   def sort_queue(self, event, queue):
@strong{12     cmpf = lambda a, b: \
         a.get_schedulable().get_arrival_time() <= \
         b.get_schedulable().get_arrival_time()
13     self.sort(queue,cmpf)}
@end example

@sp 2
@table @asis
@item body of @code{def configure(self)}: line 06
Configures the policy with initial values. This is called just before a simulation starts, and it is responsible for defining the parameters the policy wants to expose to the user. For example, it may make the value returned by @code{is_preemptive()} configurable, or register an integer parameter for the time slice duration.
@item body of @code{def is_preemptive(self):} line 08
Says whether the policy wants to be preemptive by priority, in addition to normal time slice termination (if a positive time slice has been provided). The possible return values are:
@enumerate
@item @code{True}: the policy declares that it wants the running thread to be released if a thread at higher priority is put at the beginning of the ready threads queue. This is achieved by putting the current running thread, if there is one, onto the ready queue. It is up to you, in the @code{sort_queue()} method, to manage this special case.
@item @code{False}: the policy always waits for the end of the time slice (or a thread blocking/termination) before selecting a new running thread, even if a ready thread has greater priority than the current one. There will never be a running thread in the ready queue passed to @code{sort_queue()}.
@end enumerate
Please note how the word ``priority'' here has a general meaning: it indicates any thread that can bubble up the sorted ready queue and come before another. So it's up to @code{Policy.sort_queue()} to give it a precise meaning.
@sp 1
@item body of @code{def get_time_slice(self):} line 10
Returns how long a time slice is for this policy. A time-sliced policy should return a positive integer value; a policy which doesn't use slices should instead return @code{-1}. If the policy is time-sliced, you're encouraged to make the duration a user-configurable parameter via @code{Policy.configure()}, to ensure greater flexibility.
@sp 1
@item body of @code{def sort_queue(self, event, queue):} line 12,13
Sorts the queue of ready threads. This method is called by the scheduler at each step of the simulation to sort the ready threads queue. It is the core of your policy: when the scheduler has to select a new thread, it will always try to take the first in the queue. If it cannot run for some reason (for example, it immediately blocks), the second is selected, and so on, until the end of the queue.
Remember that if @code{is_preemptive()} returns True, you may have a running thread in the queue. See the following example for some tips about how to manage this case.

Pay attention to the fact that we used the @code{<=} relation at line @samp{12}, and not a simple @code{<}. This is because @code{queue.sort()} uses an in-place implementation of quicksort. @xref{ReadyQueue.sort_queue()}. If your policy behaves strangely, this may be the cause.
@end table

@c % --------- new subsection
@subsection Exposed interface: what you can use

This is a list of exported interfaces that you can use from your policy script to manipulate SGPEMv2 exported objects. If you want to see what methods a Python object exports, remember that you can also use the built-in @code{dir()} Python function.

@c % --- new subsubsection
@anchor{Configuring parameters}
@subsubsection Configuring parameters

TODO: list and describe all methods exposed from PolicyParameters. In the meantime, see the example below about the RR policy with priority.

@c % --- new subsubsection
@subsubsection Methods for manipulating the ready queue

The parameter @code{queue} passed to @code{CPUPolicy.sort_queue()} is of type @code{ReadyQueue}. This is a description of the available methods:

@table @code
@anchor{ReadyQueue.sort_queue()}
@item ReadyQueue.sort_queue(queue, compare_function)
This is the function that actually does the sorting of the queue for you. You can of course avoid calling this method and sort the queue by hand (the ``lottery'' policy, for example, doesn't call it). It takes two parameters: the first is the queue, and the second is a compare function. Usually you'll want to use a simple lambda-function defined in the way you can see in the above and following examples. Remember that this function will internally use an in-place version of quicksort, which is a stable sorting algorithm only when employed with a less-or-equal relation (``@code{<=}'') or a greater-or-equal one (``@code{>=}'').
Otherwise the queue would still be sorted, but two adjacent threads that have the same value for a given property could be swapped. This might be undesirable with certain policies, and could lead to unexpected results, so be careful.
@item ReadyQueue.size()
Returns the number of elements in the queue.
@item ReadyQueue.get_item_at(position)
Returns the thread contained at the given position of the queue, where @code{0} means the front, and @code{queue.size() - 1} means the last element (the back) of the queue. Trying to access an element outside the range [0, queue size) will raise an exception.
@item ReadyQueue.bubble_to_front(position)
Moves the item at the given position up in the queue until it reaches the front, preserving the order of the other threads. Trying to access an element outside the range [0, queue size) will throw an exception at you.
@item ReadyQueue.swap(position_a, position_b)
Swaps the element at position a with the element at position b. This is used mainly by the internal quicksort implementation, but you may want to employ it directly in some cases, too. As you may have already guessed, trying to access an element outside of the queue will raise an exception.
@end table

@c % --- new subsubsection
@subsubsection Properties of schedulable entities

All schedulables, both threads and processes, implement the following methods:

@table @code
@item get_arrival_time()
Returns the time a schedulable arrives at the CPU. For a thread, it is relative to the time its parent process is spawned. For a process, it is an absolute time value. So, a thread will arrive at the CPU after @code{get_arrival_time() + get_process().get_arrival_time()} units.
@item get_elapsed_time()
Returns for how many time units a schedulable has been running up until now.
@item get_last_acquisition()
Returns the last time a schedulable has been selected for scheduling (that is, to become the running one).
@item get_last_release()
Returns the last time a schedulable stopped being the running one and was preempted. Note that this also happens every time a time slice ends.
@item get_base_priority()
Returns the priority a schedulable has been spawned with.
@item get_current_priority()
Returns the current priority. It is usually given by @code{get_base_priority() + priority_push}. See below.
@item set_priority_push(new_value = 0)
Sets the priority push used to alter the base priority of a schedulable. It is the only available method that changes the state of a schedulable.
@item get_total_cpu_time()
Returns the time a schedulable will run before terminating.
@item get_state()
Returns a string describing the state of a schedulable. It can be:
@enumerate
@item ``future''
@item ``ready''
@item ``running''
@item ``blocked''
@item ``terminated''
@end enumerate
@item get_name()
Returns a string with the name the user gave to the schedulable.
@end table
@sp 2
Class @code{Thread} has another method, @code{get_process()}, which returns the parent process. Class @code{Process} behaves similarly by providing a @code{get_threads()} method that returns a list of child threads.

@c % --------- new subsection
@subsection A more complete example: Round Robin with priority

Now, let's see a more interesting (and a little more complex) example: a Round Robin by priority policy that can optionally also work with pre-emption by priority.
@sp 2
@example
00 from CPUPolicy import CPUPolicy
01
02 class rr_priority(CPUPolicy) :
03   """Round Robin scheduling policy that takes priority into account.
04
05   No lower priority thread can run if a higher
06   priority thread exists. If pre-emptive by priority, a
07   higher-priority thread becoming ready even in the middle
08   of a time slice will pre-empt the running thread.
Else,
09   the time slice will have to end before the former can run."""
10
11   def __init__(self):
12     pass
13
14   def configure(self):
15     param = self.get_parameters()
16     param.register_int("Time slice", 1, 10000, True, 2)
17     param.register_int("Is preemptive?", 0, 1, True, 1)
18
19   def is_preemptive(self):
20     value = self.get_parameters().get_int("Is preemptive?")
21     if value == 0:
22       return False
23     else:
24       return True
25
26   def get_time_slice(self):
27     return self.get_parameters().get_int("Time slice")
28
29   def sort_queue(self, queue):
30     by_ltime = lambda a, b: \
31       a.get_last_acquisition() <= \
32       b.get_last_acquisition()
33     by_prio = lambda a, b: \
34       a.get_current_priority() <= \
35       b.get_current_priority()
36
37     self.sort(queue,by_ltime)
38     self.sort(queue,by_prio)
39
40     # manage preemption: see if we have a running thread
41     # in the ready queue, and if it can still run
42     if self.is_preemptive() == True:
43       higher_prio = queue.get_item_at(0).get_current_priority()
44       i = 0
45       while i < queue.size():
46         sched = queue.get_item_at(i)
47         priority = sched.get_current_priority()
48         if(priority != higher_prio):
49           break
50         if sched.get_state() == "running":
51           queue.bubble_to_front(i)
52         i += 1
@end example

We've also added a description of the class immediately following the class declaration (lines @samp{03-09}). This is what is returned as the policy description in the frontend. You may want to document your policies in the same way, too.

Now, let's see the most complex parts together:

@table @code
@item configure()
There are three types of parameters you can register in the object returned by @code{self.get_parameters()}: integer parameters, float parameters and strings. Boolean values can usually be simulated by registering an integer parameter limited to the interval [0, 1]. @xref{Configuring parameters}, for the exposed interface.
@item is_preemptive()
TODO: write me
@item sort_queue()
There are quite a lot of things going on here, so let's tackle them one by one. At line @samp{30} we create a lambda-function that says to sort the queue by last acquisition time, so that threads that have been acquired recently end up at the back of the queue (which is exactly what a Round Robin policy should do). Then, at line @samp{33}, we create another lambda-function, this time because we want to sort the queue by priority, too. Having done this, we let quicksort do the hard job at lines @samp{37-38}.

Since we may have pre-emption enabled, we may have a running thread in the ready queue (if one exists at the current instant). But what happens if the running thread was put in the queue, and we just sorted it? Unfortunately, having the greatest last acquisition time, the running thread would end up at the back of the queue, thus never being selected to run for more than a single time unit if the queue is non-empty and there are other threads with the same priority! The solution is to check if there is a thread with state ``running'' at the beginning of the queue, among those that have the same priority. If there's one, we make it bubble to the front of the queue. This is the explanation for lines @samp{42-52}.
@end table

@c % -------------------------------------------------
@node Writing plugins, (none), Writing new policies, Extending SGPEM
@section Writing plugins
@cindex plugins

Writing plugins for SGPEMv2 is outside the scope of this manual. @xref{Top, , Writing your own plugins, sgpemv2dman, SGPEMv2 Developer Manual}, for information on how to extend SGPEMv2 with a plugin of yours.

@c % -------------------------------------------------
@c include license text
@node License, Concept index, Extending SGPEM, Top
@include fdl.texi

@c % --------------------------------------------------
@node Concept index, (none), License, Top
@unnumbered Index

@printindex cp

@bye