Simulated product dispenser

Dependencies:   HTS221

Fork of mbed-cloud-workshop-connect-HTS221 by Jim Carver


atomic-queue.h

// ----------------------------------------------------------------------------
// Copyright 2015-2017 ARM Ltd.
//
// SPDX-License-Identifier: Apache-2.0
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// ----------------------------------------------------------------------------

/**
 * @file atomic-queue.h
 * @brief A linked-list queue based on atomic access
 * @details This queue is designed explicitly for use in contexts where atomic
 * access is desirable. Using an atomic queue comes with three major benefits:
 *
 * 1. It allows the queue to be used in contexts where some RTOS primitives
 *    (mutexes) cannot be used, such as interrupt context.
 * 2. On many platforms, critical sections are very expensive, while atomics
 *    are not. Thus, there is a significant performance benefit to using atomic
 *    primitives wherever possible.
 * 3. Atomic operations have the least effect on all other execution contexts
 *    on the device. Critical sections have performance and responsiveness side
 *    effects. Mutexes can disrupt the execution of other threads. Atomics do
 *    not affect the execution of other contexts and are immune to priority
 *    inversion.
 *
 * In short, atomics are the most cooperative way of building a queue.
 *
 * Theory of Operation:
 * The queue is intended to be multi-writer/multi-reader. Multi-writer
 * semantics have been fully validated, but multi-reader semantics still
 * require additional validation. It is recommended that the atomic queue be
 * treated as multi-writer/single-reader until this validation is complete.
 *
 * Assumptions:
 * The queue MUST own all memory currently in the queue. Any modification
 * to a queue element that is already enqueued can result in undefined
 * behaviour.
 * Because of this, the queue expects queue elements to be pool-allocated prior
 * to insertion and freed after extraction.
 *
 * To mitigate the possibility of a double-insert, the queue elements are
 * populated with a "lock" field. This is used to indicate when the element is
 * in use, to ensure that a parallel thread of execution cannot accidentally
 * reuse an element that has already been inserted.
 *
 * *NB:* Element locks are unnecessary when the atomic queue is used
 * exclusively with pool-allocated queue elements.
 *
 * Queue Organization:
 * The queue is a singly linked list. The list pointer is the tail pointer of
 * the queue. The tail pointer points to the last element in the queue. To find
 * the head of the queue, the dequeue mechanism traverses the list, starting at
 * the tail of the queue, until it finds the last list element, which is the
 * head of the queue. Each queue element contains:
 * * Next pointer
 * * Lock element
 * * Data (void* by default, custom element possible)
 *
 * Element Insertion:
 * To insert an element:
 * Do:
 * * Read the tail pointer (load exclusive).
 * * Write the tail pointer into the next pointer of the new element.
 * * Write the tail pointer with the address of the new element
 *   (store exclusive).
 * Until the store is successful.
 *
 * If a different, higher-priority thread executes between the read and the
 * final write, the new element would be left with a stale next pointer.
 * This is why load-exclusive and store-exclusive are used. These are ARMv7
 * instructions that allow a given thread of execution to recognize whether it
 * has been interrupted: if another thread of execution has intervened since
 * the load, the store will fail and the loop retries.
 *
 * Element Extraction:
 * Extracting an element is much more complex than element insertion due to the
 * need to traverse the queue.
 *
 * 1. Read the tail pointer.
 * 2. If the tail pointer is NULL, return NULL.
 * 3. Set the current element to the tail pointer.
 * 4. Read the current element's next pointer (load exclusive).
 * 5. If the next pointer is NULL, start over (go to 1).
 * 6. Load the next element.
 * 7. Check the value of the next element's next pointer.
 * 8. If it is non-NULL, set the current element pointer to the next element
 *    and go to 4.
 * 9. Otherwise, set the current element's next pointer to NULL
 *    (store exclusive).
 * 10. Return the next element.
 *
 * There is the potential for another thread of execution to interrupt the
 * search for the head of the queue. This should cause the dequeue mechanism to
 * find a NULL next pointer in the current element. However, this may depend on
 * other circumstances in the system, which might break the behaviour of the
 * queue. Until this is fully analyzed, the queue should be treated as
 * single-reader/multi-writer.
 */
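The insertion loop described above can be sketched in plain C. This is an illustration only, not the library's ARMv7 LDREX/STREX implementation: the retry is expressed with the GCC/Clang `__sync_bool_compare_and_swap` builtin, and the `aq`, `aq_elem`, and `aq_push_sketch` names are stand-ins invented for the example.

```c
#include <stddef.h>

/* Hypothetical stand-ins for the real queue types, for illustration only. */
struct aq_elem { struct aq_elem *next; void *data; };
struct aq { struct aq_elem *tail; };

/* Sketch of the insertion loop: read the tail, link the new element to it,
 * then publish the new element as the tail; retry if another context moved
 * the tail pointer between the read and the write. */
static void aq_push_sketch(struct aq *q, struct aq_elem *e)
{
    struct aq_elem *old_tail;
    do {
        old_tail = q->tail;   /* read the tail pointer                */
        e->next = old_tail;   /* write it into the new element's next */
        /* publish the new element; fails if the tail changed meanwhile */
    } while (!__sync_bool_compare_and_swap(&q->tail, old_tail, e));
}
```

On ARMv7 the same loop would use LDREX/STREX directly, which fails the store on any intervening exclusive access rather than comparing values.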

#ifndef __ATOMIC_QUEUE_H__
#define __ATOMIC_QUEUE_H__

#ifdef __cplusplus
extern "C" {
#endif

#include <stdint.h>


#ifndef ATOMIC_QUEUE_CUSTOM_ELEMENT
struct atomic_queue_element {
    struct atomic_queue_element * volatile next;
    uintptr_t lock;
    void * data;
};
#endif

struct atomic_queue {
    struct atomic_queue_element * volatile tail;
};

enum aq_failure_codes {
    ATOMIC_QUEUE_SUCCESS = 0,
    ATOMIC_QUEUE_NULL_QUEUE,
    ATOMIC_QUEUE_NULL_ELEMENT,
    ATOMIC_QUEUE_DUPLICATE_ELEMENT
};

/**
 * \brief Add an element to the tail of the queue
 *
 * Since the queue only maintains a tail pointer, this simply inserts the new
 * element before the tail pointer.
 *
 * Element Insertion:
 * To insert an element:
 * Do:
 * * Read the tail pointer (load exclusive).
 * * Write the tail pointer into the next pointer of the new element.
 * * Write the tail pointer with the address of the new element
 *   (store exclusive).
 * Until the store is successful.
 *
 * If a different, higher-priority thread executes between the read and the
 * final write, the new element would be left with a stale next pointer.
 * This is why load-exclusive and store-exclusive are used. These are ARMv7
 * instructions that allow a given thread of execution to recognize whether it
 * has been interrupted: if another thread of execution has intervened since
 * the load, the store will fail and the loop retries.
 *
 * @param[in,out] q The queue structure to operate on
 * @param[in] e The element to add to the queue
 */
int aq_push_tail(struct atomic_queue * q, struct atomic_queue_element * e);
/**
 * \brief Get an element from the head of the queue
 *
 * This function iterates over the queue and removes an element from the head
 * when it finds the head. This is slower than maintaining a head pointer, but
 * it is necessary to ensure that a pop is completely atomic.
 *
 * Element Extraction:
 * Extracting an element is much more complex than element insertion due to the
 * need to traverse the queue.
 *
 * 1. Read the tail pointer.
 * 2. If the tail pointer is NULL, return NULL.
 * 3. Set the current element to the tail pointer.
 * 4. Read the current element's next pointer (load exclusive).
 * 5. If the next pointer is NULL, start over (go to 1).
 * 6. Load the next element.
 * 7. Check the value of the next element's next pointer.
 * 8. If it is non-NULL, set the current element pointer to the next element
 *    and go to 4.
 * 9. Otherwise, set the current element's next pointer to NULL
 *    (store exclusive).
 * 10. Return the next element.
 *
 * There is the potential for another thread of execution to interrupt the
 * search for the head of the queue. This should cause the dequeue mechanism to
 * find a NULL next pointer in the current element. However, this may depend on
 * other circumstances in the system, which might break the behaviour of the
 * queue. Until this is fully analyzed, the queue should be treated as
 * single-reader/multi-writer.
 *
 * @param[in,out] q The queue to pop from
 * @return The popped element, or NULL if the queue was empty
 */
struct atomic_queue_element * aq_pop_head(struct atomic_queue * q);
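The head-finding traversal can be sketched as below. This is a single-reader simplification with plain loads and stores, omitting the exclusive-access retries the documentation describes; the `aq`/`aq_elem` types and `aq_pop_sketch` name are stand-ins invented for the example.

```c
#include <stddef.h>

/* Hypothetical stand-ins for the real queue types, for illustration only. */
struct aq_elem { struct aq_elem *next; void *data; };
struct aq { struct aq_elem *tail; };

/* Sketch of extraction: the head is the element whose next pointer is
 * NULL; walk from the tail toward it, then unlink it from its
 * predecessor. The real multi-writer-safe code does steps 4 and 9 with
 * load-exclusive/store-exclusive and restarts when interrupted. */
static struct aq_elem *aq_pop_sketch(struct aq *q)
{
    struct aq_elem *cur = q->tail;      /* steps 1-3: start at the tail */
    if (cur == NULL)
        return NULL;                    /* step 2: empty queue          */
    if (cur->next == NULL) {            /* tail is also the head        */
        q->tail = NULL;
        return cur;
    }
    while (cur->next->next != NULL)     /* steps 4-8: walk toward head  */
        cur = cur->next;
    struct aq_elem *head = cur->next;   /* step 6: the last element     */
    cur->next = NULL;                   /* step 9: unlink it            */
    return head;                        /* step 10: return the head     */
}
```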
/**
 * Check if there are any elements in the queue
 *
 * Note that there is no guarantee that a queue which is not empty when this
 * API is called will not become empty before aq_pop_head is called.
 *
 * @param[in] q The queue to check
 * @retval non-zero when the queue is empty
 * @retval 0 when the queue is not empty
 */
int aq_empty(struct atomic_queue * q);
/**
 * Iterates over the queue and counts the elements in the queue
 *
 * The value returned by this function may be invalid by the time it returns.
 * Do not depend on this value except in a critical section.
 *
 * @param[in] q The queue to count
 * @return The number of elements in the queue
 */
unsigned aq_count(struct atomic_queue * q);

/**
 * Initialize an atomic queue element.
 *
 * WARNING: Only call this function once per element; calling it again on an
 * element that is in use may result in undefined behaviour.
 *
 * @param[in] e The element to initialize
 */
void aq_initialize_element(struct atomic_queue_element* e);

/**
 * Take an element (this acquires the element lock)
 *
 * @param[in] e The element to take
 */
int aq_element_take(struct atomic_queue_element * e);

/**
 * Release an element (this releases the element lock)
 *
 * @param[in] e The element to release
 */
int aq_element_release(struct atomic_queue_element * e);
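The take/release pair amounts to a compare-and-set on the element's `lock` field, so a second take fails until the element is released. A minimal sketch, using a hypothetical element type and the GCC/Clang `__sync_bool_compare_and_swap` builtin rather than the library's own primitives:

```c
#include <stdint.h>

/* Hypothetical element carrying only the "lock" field described above. */
struct aq_elem_l { uintptr_t lock; };

/* Acquire the lock: succeeds (returns 1) only if it was free (0). */
static int take_sketch(struct aq_elem_l *e)
{
    return __sync_bool_compare_and_swap(&e->lock, (uintptr_t)0, (uintptr_t)1);
}

/* Release the lock: succeeds (returns 1) only if it was held (1). */
static int release_sketch(struct aq_elem_l *e)
{
    return __sync_bool_compare_and_swap(&e->lock, (uintptr_t)1, (uintptr_t)0);
}
```

A failed take tells the caller the element is already enqueued or in use, which is how the double-insert described earlier is prevented.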

/**
 * Atomic Compare and Set
 *
 * Take a pointer to a uintptr_t and compare its current value to oldval. If it
 * matches, try to write newval to the pointer target. Fail if interrupted.
 *
 * @param[in,out] ptr A pointer to the target of the atomic compare and set
 * @param[in]     oldval A value to compare with the target of ptr
 * @param[in]     newval A new value to store to the target of ptr
 * @retval 1 if newval was stored
 * @retval 0 if oldval did not match *ptr, or if the store was interrupted
 */
int aq_atomic_cas_uintptr(uintptr_t *ptr, uintptr_t oldval, uintptr_t newval);
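On a hosted toolchain, one plausible way to realize this contract is the GCC/Clang `__atomic_compare_exchange_n` builtin; this is an assumption for illustration, not the library's ARMv7 LDREX/STREX implementation, and `cas_uintptr_sketch` is a name invented here:

```c
#include <stdint.h>

/* Sketch of the compare-and-set contract: store newval only if *ptr
 * still equals oldval; report whether the store happened. */
static int cas_uintptr_sketch(uintptr_t *ptr, uintptr_t oldval, uintptr_t newval)
{
    /* Strong compare-exchange: no spurious failure, sequentially
     * consistent ordering on both the success and failure paths. */
    return __atomic_compare_exchange_n(ptr, &oldval, newval,
                                       0 /* strong */,
                                       __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}
```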

/**
 * Atomic increment
 *
 * Atomically add inc to the value pointed to by ptr. This is just a
 * passthrough to __sync_add_and_fetch on platforms where it is supported.
 *
 * @param[in,out] ptr A pointer to the target of the increment
 * @param[in]     inc A value by which to increment *ptr
 * @return The new value of *ptr
 */
int32_t aq_atomic_inc_int32(int32_t *ptr, int32_t inc);
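Since the documentation names `__sync_add_and_fetch` as the underlying primitive, the passthrough case can be sketched directly; the `inc_sketch` name is invented for the example:

```c
#include <stdint.h>

/* Passthrough sketch: __sync_add_and_fetch atomically adds inc to *ptr
 * and returns the value *after* the addition (add-and-fetch, not
 * fetch-and-add). */
static int32_t inc_sketch(int32_t *ptr, int32_t inc)
{
    return __sync_add_and_fetch(ptr, inc);
}
```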


#ifdef __cplusplus
}
#endif

#endif // __ATOMIC_QUEUE_H__