<!-- Copyright (C) 2003 Red Hat, Inc.                                -->
<!-- This material may be distributed only subject to the terms      -->
<!-- and conditions set forth in the Open Publication License, v1.0  -->
<!-- or later (the latest version is presently available at          -->
<!-- http://www.opencontent.org/openpub/).                           -->
<!-- Distribution of the work or derivative of the work in any       -->
<!-- standard (paper) book form is prohibited unless prior           -->
<!-- permission is obtained from the copyright holder.               -->
<HTML
><HEAD
><TITLE
>Mutexes</TITLE
><meta name="MSSmartTagsPreventParsing" content="TRUE">
<META
NAME="GENERATOR"
CONTENT="Modular DocBook HTML Stylesheet Version 1.76b+
"><LINK
REL="HOME"
TITLE="eCos Reference Manual"
HREF="ecos-ref.html"><LINK
REL="UP"
TITLE="The eCos Kernel"
HREF="kernel.html"><LINK
REL="PREVIOUS"
TITLE="Alarms"
HREF="kernel-alarms.html"><LINK
REL="NEXT"
TITLE="Condition Variables"
HREF="kernel-condition-variables.html"></HEAD
><BODY
CLASS="REFENTRY"
BGCOLOR="#FFFFFF"
TEXT="#000000"
LINK="#0000FF"
VLINK="#840084"
ALINK="#0000FF"
><DIV
CLASS="NAVHEADER"
><TABLE
SUMMARY="Header navigation table"
WIDTH="100%"
BORDER="0"
CELLPADDING="0"
CELLSPACING="0"
><TR
><TH
COLSPAN="3"
ALIGN="center"
>eCos Reference Manual</TH
></TR
><TR
><TD
WIDTH="10%"
ALIGN="left"
VALIGN="bottom"
><A
HREF="kernel-alarms.html"
ACCESSKEY="P"
>Prev</A
></TD
><TD
WIDTH="80%"
ALIGN="center"
VALIGN="bottom"
></TD
><TD
WIDTH="10%"
ALIGN="right"
VALIGN="bottom"
><A
HREF="kernel-condition-variables.html"
ACCESSKEY="N"
>Next</A
></TD
></TR
></TABLE
><HR
ALIGN="LEFT"
WIDTH="100%"></DIV
><H1
><A
NAME="KERNEL-MUTEXES">Mutexes</H1
><DIV
CLASS="REFNAMEDIV"
><A
NAME="AEN1098"
></A
><H2
>Name</H2
>cyg_mutex_init, cyg_mutex_destroy, cyg_mutex_lock, cyg_mutex_trylock, cyg_mutex_unlock, cyg_mutex_release, cyg_mutex_set_ceiling, cyg_mutex_set_protocol&nbsp;--&nbsp;Synchronization primitive</DIV
><DIV
CLASS="REFSYNOPSISDIV"
><A
NAME="AEN1108"><H2
>Synopsis</H2
><DIV
CLASS="FUNCSYNOPSIS"
><A
NAME="AEN1109"><P
></P
><TABLE
BORDER="5"
BGCOLOR="#E0E0F0"
WIDTH="70%"
><TR
><TD
><PRE
CLASS="FUNCSYNOPSISINFO"
>#include &lt;cyg/kernel/kapi.h&gt;
        </PRE
></TD
></TR
></TABLE
><P
><CODE
><CODE
CLASS="FUNCDEF"
>void cyg_mutex_init</CODE
>(cyg_mutex_t* mutex);</CODE
></P
><P
><CODE
><CODE
CLASS="FUNCDEF"
>void cyg_mutex_destroy</CODE
>(cyg_mutex_t* mutex);</CODE
></P
><P
><CODE
><CODE
CLASS="FUNCDEF"
>cyg_bool_t cyg_mutex_lock</CODE
>(cyg_mutex_t* mutex);</CODE
></P
><P
><CODE
><CODE
CLASS="FUNCDEF"
>cyg_bool_t cyg_mutex_trylock</CODE
>(cyg_mutex_t* mutex);</CODE
></P
><P
><CODE
><CODE
CLASS="FUNCDEF"
>void cyg_mutex_unlock</CODE
>(cyg_mutex_t* mutex);</CODE
></P
><P
><CODE
><CODE
CLASS="FUNCDEF"
>void cyg_mutex_release</CODE
>(cyg_mutex_t* mutex);</CODE
></P
><P
><CODE
><CODE
CLASS="FUNCDEF"
>void cyg_mutex_set_ceiling</CODE
>(cyg_mutex_t* mutex, cyg_priority_t priority);</CODE
></P
><P
><CODE
><CODE
CLASS="FUNCDEF"
>void cyg_mutex_set_protocol</CODE
>(cyg_mutex_t* mutex, enum cyg_mutex_protocol protocol);</CODE
></P
><P
></P
></DIV
></DIV
><DIV
CLASS="REFSECT1"
><A
NAME="KERNEL-MUTEXES-DESCRIPTION"
></A
><H2
>Description</H2
><P
>The purpose of mutexes is to let threads share resources safely. If
two or more threads attempt to manipulate a data structure with no
locking between them then the system may run for quite some time
without apparent problems, but sooner or later the data structure will
become inconsistent and the application will start behaving strangely
and is quite likely to crash. The same can apply even when
manipulating a single variable or some other resource. For example,
consider:
      </P
><TABLE
BORDER="5"
BGCOLOR="#E0E0F0"
WIDTH="70%"
><TR
><TD
><PRE
CLASS="PROGRAMLISTING"
>static volatile int counter = 0;
 
void
process_event(void)
{
    &#8230;
 
    counter++;
}</PRE
></TD
></TR
></TABLE
><P
>Assume that after a certain period of time <TT
CLASS="VARNAME"
>counter</TT
>
has a value of 42, and two threads A and B running at the same
priority call <TT
CLASS="FUNCTION"
>process_event</TT
>. Typically thread A
will read the value of <TT
CLASS="VARNAME"
>counter</TT
> into a register,
increment this register to 43, and write this updated value back to
memory. Thread B will do the same, so usually
<TT
CLASS="VARNAME"
>counter</TT
> will end up with a value of 44. However if
thread A is timesliced after reading the old value 42 but before
writing back 43, thread B will still read back the old value and will
also write back 43. The net result is that the counter only gets
incremented once, not twice, which depending on the application may
prove disastrous.
      </P
><P
>Sections of code like the above which involve manipulating shared data
are generally known as critical regions. Code should claim a lock
before entering a critical region and release the lock when leaving.
Mutexes provide an appropriate synchronization primitive for this.
      </P
><TABLE
BORDER="5"
BGCOLOR="#E0E0F0"
WIDTH="70%"
><TR
><TD
><PRE
CLASS="PROGRAMLISTING"
>static volatile int counter = 0;
static cyg_mutex_t  lock;
 
void
process_event(void)
{
    &#8230;
 
    cyg_mutex_lock(&amp;lock);
    counter++;
    cyg_mutex_unlock(&amp;lock);
}
      </PRE
></TD
></TR
></TABLE
><P
>A mutex must be initialized before it can be used, by calling
<TT
CLASS="FUNCTION"
>cyg_mutex_init</TT
>. This takes a pointer to a
<SPAN
CLASS="STRUCTNAME"
>cyg_mutex_t</SPAN
> data structure which is typically
statically allocated, and may be part of a larger data structure. If a
mutex is no longer required and there are no threads waiting on it
then <TT
CLASS="FUNCTION"
>cyg_mutex_destroy</TT
> can be used.
      </P
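><P
>For example, the following sketch shows typical initialization and,
where appropriate, destruction. The <TT
CLASS="FUNCTION"
>module_init</TT
> and <TT
CLASS="FUNCTION"
>module_fini</TT
> routines are purely illustrative and not part of the eCos API.
      </P
><TABLE
BORDER="5"
BGCOLOR="#E0E0F0"
WIDTH="70%"
><TR
><TD
><PRE
CLASS="PROGRAMLISTING"
>#include &lt;cyg/kernel/kapi.h&gt;
 
static cyg_mutex_t lock;          /* statically allocated mutex */
 
void
module_init(void)                 /* illustrative start-up hook */
{
    cyg_mutex_init(&amp;lock);        /* must run before any lock call */
}
 
void
module_fini(void)                 /* illustrative shutdown hook */
{
    /* only valid while no thread is waiting on the mutex */
    cyg_mutex_destroy(&amp;lock);
}</PRE
></TD
></TR
></TABLE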
><P
>The main functions for using a mutex are
<TT
CLASS="FUNCTION"
>cyg_mutex_lock</TT
> and
<TT
CLASS="FUNCTION"
>cyg_mutex_unlock</TT
>. In normal operation
<TT
CLASS="FUNCTION"
>cyg_mutex_lock</TT
> will return success after claiming
the mutex lock, blocking if another thread currently owns the mutex.
However the lock operation may fail if other code calls
<TT
CLASS="FUNCTION"
>cyg_mutex_release</TT
> or
<TT
CLASS="FUNCTION"
>cyg_thread_release</TT
>, so if these functions may get
used then it is important to check the return value. The current owner
of a mutex should call <TT
CLASS="FUNCTION"
>cyg_mutex_unlock</TT
> when a
lock is no longer required. This operation must be performed by the
owner, not by another thread.
      </P
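><P
>For instance, if <TT
CLASS="FUNCTION"
>cyg_mutex_release</TT
> or <TT
CLASS="FUNCTION"
>cyg_thread_release</TT
> may be used elsewhere in the system, the earlier counter example
could claim the lock as in the following sketch, where <TT
CLASS="FUNCTION"
>handle_broken_wait</TT
> is an illustrative placeholder for whatever recovery is appropriate.
      </P
><TABLE
BORDER="5"
BGCOLOR="#E0E0F0"
WIDTH="70%"
><TR
><TD
><PRE
CLASS="PROGRAMLISTING"
>if (cyg_mutex_lock(&amp;lock)) {
    /* the lock is held: safe to touch the shared data */
    counter++;
    cyg_mutex_unlock(&amp;lock);
} else {
    /* the wait was broken by cyg_mutex_release() or
       cyg_thread_release(); the mutex is not owned here */
    handle_broken_wait();         /* illustrative recovery path */
}</PRE
></TD
></TR
></TABLE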
><P
><TT
CLASS="FUNCTION"
>cyg_mutex_trylock</TT
> is a variant of
<TT
CLASS="FUNCTION"
>cyg_mutex_lock</TT
> that will always return
immediately, returning success or failure as appropriate. This
function is rarely useful. Typical code locks a mutex just before
entering a critical region, so if the lock cannot be claimed then
there may be nothing else for the current thread to do. Use of this
function may also cause a form of priority inversion if the owner
runs at a lower priority, because the priority inheritance code
will not be triggered. Instead the current thread continues running,
preventing the owner from getting any cpu time, completing the
critical region, and releasing the mutex.
      </P
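><P
>When <TT
CLASS="FUNCTION"
>cyg_mutex_trylock</TT
> is appropriate, it is typically used along the lines of the
following sketch; <TT
CLASS="FUNCTION"
>do_unrelated_work</TT
> is an illustrative placeholder for work that does not touch the
shared data.
      </P
><TABLE
BORDER="5"
BGCOLOR="#E0E0F0"
WIDTH="70%"
><TR
><TD
><PRE
CLASS="PROGRAMLISTING"
>if (cyg_mutex_trylock(&amp;lock)) {
    /* claimed the lock without blocking */
    counter++;
    cyg_mutex_unlock(&amp;lock);
} else {
    /* another thread owns the mutex; do something else instead
       of blocking, and try again later */
    do_unrelated_work();          /* illustrative placeholder */
}</PRE
></TD
></TR
></TABLE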
><P
><TT
CLASS="FUNCTION"
>cyg_mutex_release</TT
> can be used to wake up all
threads that are currently blocked inside a call to
<TT
CLASS="FUNCTION"
>cyg_mutex_lock</TT
> for a specific mutex. These lock
calls will return failure. The current mutex owner is not affected.
      </P
></DIV
><DIV
CLASS="REFSECT1"
><A
NAME="KERNEL-MUTEXES-PRIORITY-INVERSION"
></A
><H2
>Priority Inversion</H2
><P
>The use of mutexes gives rise to a problem known as priority
inversion. In a typical scenario this requires three threads A, B, and
C, running at high, medium and low priority respectively. Thread A and
thread B are temporarily blocked waiting for some event, so thread C
gets a chance to run, needs to enter a critical region, and locks
a mutex. At this point threads A and B are woken up - the exact order
does not matter. Thread A needs to claim the same mutex but has to
wait until C has left the critical region and can release the mutex.
Meanwhile thread B works on something completely different and can
continue running without problems. Because thread C is running at a lower
priority than B it will not get a chance to run until B blocks for
some reason, and hence thread A cannot run either. The overall effect
is that a high-priority thread A cannot proceed because of a lower
priority thread B, and priority inversion has occurred.
      </P
><P
>In simple applications it may be possible to arrange the code such
that priority inversion cannot occur, for example by ensuring that a
given mutex is never shared by threads running at different priority
levels. However this may not always be possible even at the
application level. In addition mutexes may be used internally by
underlying code, for example the memory allocation package, so careful
analysis of the whole system would be needed to be sure that priority
inversion cannot occur. Instead it is common practice to use one of
two techniques: priority ceilings and priority inheritance.
      </P
><P
>Priority ceilings involve associating a priority with each mutex.
Usually this will match the highest priority thread that will ever
lock the mutex. When a thread running at a lower priority makes a
successful call to <TT
CLASS="FUNCTION"
>cyg_mutex_lock</TT
> or
<TT
CLASS="FUNCTION"
>cyg_mutex_trylock</TT
> its priority will be boosted to
that of the mutex. For example, given the previous example the
priority associated with the mutex would be that of thread A, so for
as long as it owns the mutex thread C will run in preference to thread
B. When C releases the mutex its priority drops to the normal value
again, allowing A to run and claim the mutex. Setting the
priority for a mutex involves a call to
<TT
CLASS="FUNCTION"
>cyg_mutex_set_ceiling</TT
>, which is typically called
during initialization. It is possible to change the ceiling
dynamically but this will only affect subsequent lock operations, not
the current owner of the mutex.
      </P
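><P
>As a sketch, setting a ceiling during initialization might look like
the following. The priority value used here is purely illustrative; it
should correspond to the highest-priority thread (thread A in the
scenario above) that will ever lock the mutex.
      </P
><TABLE
BORDER="5"
BGCOLOR="#E0E0F0"
WIDTH="70%"
><TR
><TD
><PRE
CLASS="PROGRAMLISTING"
>static cyg_mutex_t shared_lock;
 
void
init_locks(void)                  /* illustrative initialization hook */
{
    cyg_mutex_init(&amp;shared_lock);
    /* boost any lower-priority owner to priority 4 while it
       holds the mutex; 4 is an example value only */
    cyg_mutex_set_ceiling(&amp;shared_lock, 4);
}</PRE
></TD
></TR
></TABLE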
><P
>Priority ceilings are very suitable for simple applications, where for
every thread in the system it is possible to work out which mutexes
will be accessed. For more complicated applications this may prove
difficult, especially if thread priorities change at run-time. An
additional problem occurs for any mutexes outside the application, for
example used internally within eCos packages. A typical eCos package
will be unaware of the details of the various threads in the system,
so it will have no way of setting suitable ceilings for its internal
mutexes. If those mutexes are not exported to application code then
using priority ceilings may not be viable. The kernel does provide a
configuration option
<TT
CLASS="VARNAME"
>CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_DEFAULT_PRIORITY</TT
>
that can be used to set the default priority ceiling for all mutexes,
which may prove sufficient.
      </P
><P
>The alternative approach is to use priority inheritance: if a thread
calls <TT
CLASS="FUNCTION"
>cyg_mutex_lock</TT
> for a mutex that is
currently owned by a lower-priority thread, then the owner will have
its priority raised to that of the current thread. Often this is more
efficient than priority ceilings because priority boosting only
happens when necessary, not for every lock operation, and the required
priority is determined at run-time rather than by static analysis.
However there are complications when multiple threads running at
different priorities try to lock a single mutex, or when the current
owner of a mutex then tries to lock additional mutexes, and this makes
the implementation significantly more complicated than priority
ceilings.
      </P
><P
>There are a number of configuration options associated with priority
inversion. First, if after careful analysis it is known that priority
inversion cannot arise then the component
<TT
CLASS="FUNCTION"
>CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL</TT
>
can be disabled. More commonly this component will be enabled, and one
of either
<TT
CLASS="VARNAME"
>CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_INHERIT</TT
>
or
<TT
CLASS="VARNAME"
>CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_CEILING</TT
>
will be selected, so that one of the two protocols is available for
all mutexes. It is possible to select multiple protocols, so that some
mutexes can have priority ceilings while others use priority
inheritance or no priority inversion protection at all. Obviously this
flexibility will add to the code size and to the cost of mutex
operations. The default for all mutexes will be controlled by
<TT
CLASS="VARNAME"
>CYGSEM_KERNEL_SYNCH_MUTEX_PRIORITY_INVERSION_PROTOCOL_DEFAULT</TT
>,
and can be changed at run-time using
<TT
CLASS="FUNCTION"
>cyg_mutex_set_protocol</TT
>.
      </P
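><P
>As an illustration, a mutex can be switched to priority inheritance at
run-time roughly as follows. This sketch assumes that the
<TT
CLASS="VARNAME"
>_INHERIT</TT
> protocol is enabled in the configuration and that the
<SPAN
CLASS="STRUCTNAME"
>cyg_mutex_protocol</SPAN
> enumeration provides a <TT
CLASS="VARNAME"
>CYG_MUTEX_INHERIT</TT
> constant for it.
      </P
><TABLE
BORDER="5"
BGCOLOR="#E0E0F0"
WIDTH="70%"
><TR
><TD
><PRE
CLASS="PROGRAMLISTING"
>static cyg_mutex_t lock;
 
void
init_lock(void)                   /* illustrative initialization hook */
{
    cyg_mutex_init(&amp;lock);
    /* override the configured default protocol for this one mutex;
       CYG_MUTEX_INHERIT is assumed to be provided by the enabled
       _INHERIT protocol option */
    cyg_mutex_set_protocol(&amp;lock, CYG_MUTEX_INHERIT);
}</PRE
></TD
></TR
></TABLE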
><P
>Priority inversion problems can also occur with other synchronization
primitives such as semaphores. For example there could be a situation
where a high-priority thread A is waiting on a semaphore, a
low-priority thread C needs to do just a little bit more work before
posting the semaphore, but a medium priority thread B is running and
preventing C from making progress. However a semaphore does not have
the concept of an owner, so there is no way for the system to know
that it is thread C which would next post to the semaphore. Hence
there is no way for the system to boost the priority of C
automatically and prevent the priority inversion. Instead situations
like this have to be detected by application developers and
appropriate precautions have to be taken, for example making sure that
all the threads run at suitable priorities at all times.
      </P
><DIV
CLASS="WARNING"
><P
></P
><TABLE
CLASS="WARNING"
BORDER="1"
WIDTH="100%"
><TR
><TD
ALIGN="CENTER"
><B
>Warning</B
></TD
></TR
><TR
><TD
ALIGN="LEFT"
><P
>The current implementation of priority inheritance within the eCos
kernel does not handle certain exceptional circumstances completely
correctly. Problems will only arise if a thread owns one mutex,
then attempts to claim another mutex, and there are other threads
attempting to lock these same mutexes. Although the system will
continue running, the current owners of the various mutexes involved
may not run at the priority they should. This situation never arises
in typical code because a mutex will only be locked for a small
critical region, and there is no need to manipulate other shared resources
inside this region. A more complicated implementation of priority
inheritance is possible but would add significant overhead and certain
operations would no longer be deterministic.
      </P
></TD
></TR
></TABLE
></DIV
><DIV
CLASS="WARNING"
><P
></P
><TABLE
CLASS="WARNING"
BORDER="1"
WIDTH="100%"
><TR
><TD
ALIGN="CENTER"
><B
>Warning</B
></TD
></TR
><TR
><TD
ALIGN="LEFT"
><P
>Support for priority ceilings and priority inheritance is not
implemented for all schedulers. In particular neither priority
ceilings nor priority inheritance are currently available for the
bitmap scheduler.
      </P
></TD
></TR
></TABLE
></DIV
></DIV
><DIV
CLASS="REFSECT1"
><A
NAME="KERNEL-MUTEXES-ALTERNATIVES"
></A
><H2
>Alternatives</H2
><P
>In nearly all circumstances, if two or more threads need to share some
data then protecting this data with a mutex is the correct thing to
do. Mutexes are the only primitive that combines a locking mechanism
and protection against priority inversion problems. However this
functionality is achieved at a cost, and in exceptional circumstances
such as an application's most critical inner loop it may be desirable
to use some other means of locking.
      </P
><P
>When a critical region is very very small it is possible to lock the
scheduler, thus ensuring that no other thread can run until the
scheduler is unlocked again. This is achieved with calls to <A
HREF="kernel-schedcontrol.html"
><TT
CLASS="FUNCTION"
>cyg_scheduler_lock</TT
></A
>
and <TT
CLASS="FUNCTION"
>cyg_scheduler_unlock</TT
>. If the critical region
is sufficiently small then this can actually improve both performance
and dispatch latency because <TT
CLASS="FUNCTION"
>cyg_mutex_lock</TT
> also
locks the scheduler for a brief period of time. This approach will not
work on SMP systems because another thread may already be running on a
different processor and accessing the critical region.
      </P
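><P
>Using the earlier counter example, the scheduler-locked variant is
sketched below. No mutex is needed, but the region between the two
calls must be kept as short as possible.
      </P
><TABLE
BORDER="5"
BGCOLOR="#E0E0F0"
WIDTH="70%"
><TR
><TD
><PRE
CLASS="PROGRAMLISTING"
>static volatile int counter = 0;
 
void
process_event(void)
{
    cyg_scheduler_lock();         /* no other thread can run ... */
    counter++;
    cyg_scheduler_unlock();       /* ... until the scheduler is unlocked */
}</PRE
></TD
></TR
></TABLE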
><P
>Another way of avoiding the use of mutexes is to make sure that all
threads that access a particular critical region run at the same
priority and configure the system with timeslicing disabled
(<TT
CLASS="VARNAME"
>CYGSEM_KERNEL_SCHED_TIMESLICE</TT
>). Without
timeslicing a thread can only be preempted by a higher-priority one,
or if it performs some operation that can block. This approach
requires that none of the operations in the critical region can block,
so for example it is not legal to call
<TT
CLASS="FUNCTION"
>cyg_semaphore_wait</TT
>. It is also vulnerable to
any changes in the configuration or to the various thread priorities:
any such changes may now have unexpected side effects. It will not
work on SMP systems.
      </P
></DIV
><DIV
CLASS="REFSECT1"
><A
NAME="KERNEL-MUTEXES-RECURSIVE"
></A
><H2
>Recursive Mutexes</H2
><P
>The implementation of mutexes within the eCos kernel does not support
recursive locks. If a thread has locked a mutex and then attempts to
lock the mutex again, typically as a result of some recursive call in
a complicated call graph, then either an assertion failure will be
reported or the thread will deadlock. This behaviour is deliberate.
When a thread has just locked a mutex associated with some data
structure, it can assume that that data structure is in a consistent
state. Before unlocking the mutex again it must ensure that the data
structure is again in a consistent state. Recursive mutexes allow a
thread to make arbitrary changes to a data structure, then in a
recursive call lock the mutex again while the data structure is still
inconsistent. The net result is that code can no longer make any
assumptions about data structure consistency, which defeats the
purpose of using mutexes.
      </P
></DIV
><DIV
CLASS="REFSECT1"
><A
NAME="KERNEL-MUTEXES-CONTEXT"
></A
><H2
>Valid contexts</H2
><P
><TT
CLASS="FUNCTION"
>cyg_mutex_init</TT
>,
<TT
CLASS="FUNCTION"
>cyg_mutex_set_ceiling</TT
> and
<TT
CLASS="FUNCTION"
>cyg_mutex_set_protocol</TT
> are normally called during
initialization but may also be called from thread context. The
remaining functions should only be called from thread context. Mutexes
serve as a mutual exclusion mechanism between threads, and cannot be
used to synchronize between threads and the interrupt handling
subsystem. If a critical region is shared between a thread and a DSR
then it must be protected using <A
HREF="kernel-schedcontrol.html"
><TT
CLASS="FUNCTION"
>cyg_scheduler_lock</TT
></A
>
and <TT
CLASS="FUNCTION"
>cyg_scheduler_unlock</TT
>. If a critical region is
shared between a thread and an ISR, it must be protected by disabling
or masking interrupts. Obviously these operations must be used with
care because they can affect dispatch and interrupt latencies.
      </P
></DIV
><DIV
CLASS="NAVFOOTER"
><HR
ALIGN="LEFT"
WIDTH="100%"><TABLE
SUMMARY="Footer navigation table"
WIDTH="100%"
BORDER="0"
CELLPADDING="0"
CELLSPACING="0"
><TR
><TD
WIDTH="33%"
ALIGN="left"
VALIGN="top"
><A
HREF="kernel-alarms.html"
ACCESSKEY="P"
>Prev</A
></TD
><TD
WIDTH="34%"
ALIGN="center"
VALIGN="top"
><A
HREF="ecos-ref.html"
ACCESSKEY="H"
>Home</A
></TD
><TD
WIDTH="33%"
ALIGN="right"
VALIGN="top"
><A
HREF="kernel-condition-variables.html"
ACCESSKEY="N"
>Next</A
></TD
></TR
><TR
><TD
WIDTH="33%"
ALIGN="left"
VALIGN="top"
>Alarms</TD
><TD
WIDTH="34%"
ALIGN="center"
VALIGN="top"
><A
HREF="kernel.html"
ACCESSKEY="U"
>Up</A
></TD
><TD
WIDTH="33%"
ALIGN="right"
VALIGN="top"
>Condition Variables</TD
></TR
></TABLE
></DIV
></BODY
></HTML