Important changes to repositories hosted on mbed.com
Mbed hosted mercurial repositories are deprecated and are due to be permanently deleted in July 2026.
To keep a copy of this software download the repository Zip archive or clone locally using Mercurial.
It is also possible to export all your personal repositories from the account settings page.
Dependencies: FXAS21002 FXOS8700Q
atomic-queue.h
00001 // ---------------------------------------------------------------------------- 00002 // Copyright 2015-2017 ARM Ltd. 00003 // 00004 // SPDX-License-Identifier: Apache-2.0 00005 // 00006 // Licensed under the Apache License, Version 2.0 (the "License"); 00007 // you may not use this file except in compliance with the License. 00008 // You may obtain a copy of the License at 00009 // 00010 // http://www.apache.org/licenses/LICENSE-2.0 00011 // 00012 // Unless required by applicable law or agreed to in writing, software 00013 // distributed under the License is distributed on an "AS IS" BASIS, 00014 // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 00015 // See the License for the specific language governing permissions and 00016 // limitations under the License. 00017 // ---------------------------------------------------------------------------- 00018 00019 /** 00020 * @file atomic-queue.h 00021 * @brief A linked-list queue based on atomic access 00022 * @details This queue is designed explicitly for use in contexts where atomic 00023 * access is desirable. Using an atomic queue comes with three major benefits: 00024 * 00025 * 1. This allows the queue to be used in contexts where some RTOS primitives 00026 * (mutexes) cannot be used, such as interrupt context. 00027 * 2. On many platforms, critical sections are very expensive, while atomics 00028 * are not. Thus, there is a significant performance benefit to using atomic 00029 * primitives wherever possible. 00030 * 3. Atomic operations have the least effect on all other execution contexts 00031 * on the device. Critical sections have performance and responsiveness side 00032 * effects. Mutexes can disrupt the execution of other threads. Atomics do 00033 * not affect the execution of other contexts and are immune to priority 00034 * inversion. 00035 * 00036 * In short, the atomics are the most cooperative way of building a queue. 00037 * 00038 * Theory of Operation: 00039 * The queue is intended to be multi-writer/multi-reader. Multi-writer 00040 * semantics have been fully validated, but multi-reader semantics still 00041 * require additional validation. It is recommended that the atomic queue be 00042 * treated as multi-writer/single-reader until this validation is complete. 00043 * 00044 * Assumptions: 00045 * The queue MUST own all memory currently in the queue. Any modification 00046 * to a queue element that is already enqueued can result in undefined 00047 * behaviour. 00048 * Because of this, the queue expects queue elements to be pool-allocated prior 00049 * to insertion and freed after extraction. 00050 * 00051 * To mitigate the possibility of a double-insert, the queue elements are 00052 * populated with a "lock" field. This is used to indicate when the element is 00053 * in use, to ensure that a parallel thread of execution cannot accidentally 00054 * reuse an element that has already been inserted. 00055 * 00056 * *NB:* Element locks are unnecessary when the atomic queue is used 00057 * exclusively with pool allocated queue elements. 00058 * 00059 * Queue Organization: 00060 * The queue is a singly linked list. The list pointer is the tail pointer of 00061 * the queue. The tail pointer points to the last element in the queue. To find 00062 * the head of the queue, the dequeue mechanism traverses the list, starting at 00063 * the tail of the queue until it finds the last list element, which is the 00064 * head of the queue. Each queue element contains: 00065 * * Next pointer 00066 * * Lock element 00067 * * Data (void* by default, custom element possible) 00068 * 00069 * Element Insertion: 00070 * To insert an element: 00071 * Do: 00072 * * Read the tail pointer (load exclusive). 00073 * * Write the tail pointer into the next pointer of the new element. 00074 * * Write the tail pointer with the address of the new element 00075 * (store exclusive). 00076 * Until the store is successful. 00077 * 00078 * If a different thread of higher priority executed between the read and the 00079 * last write, it will cause the new element to have the wrong next pointer. 00080 * This is why the load-exclusive and store-exclusive are used. These are ARMv7 00081 * instructions that allow a given thread of execution to recognize whether it 00082 * has been interrupted. If another thread has set the exclusive bit, then the 00083 * store will fail. 00084 * 00085 * Element Extraction: 00086 * Extracting an element is much more complex than element insertion due to the 00087 * need to traverse the queue. 00088 * 00089 * 1. Read the tail pointer 00090 * 2. If the tail pointer is NULL, return NULL. 00091 * 3. Set the current element to the tail pointer. 00092 * 4. Read the current element's next pointer (load exclusive). 00093 * 5. If the next pointer was NULL, start over (go to 1.) 00094 * 6. Load the next element. 00095 * 7. Check the value of the next element's next pointer 00096 * 8. If it is non-NULL, set the current element pointer to the next element and go to 4. 00097 * 9. Otherwise, set the current element's next pointer to NULL (store exclusive). 00098 * 10. Return the next element. 00099 * 00100 * There is the potential for another thread of execution to interrupt the 00101 * search for the head of the queue. This should cause the dequeue mechanism to 00102 * find a NULL next pointer in the current element. However, this may depend on 00103 * other circumstances in the system, which might break the behaviour of the 00104 * queue. Until this is fully analyzed, the queue should be treated as single- 00105 * reader/multi-writer. 00106 */ 00107 00108 #ifndef __ATOMIC_QUEUE_H__ 00109 #define __ATOMIC_QUEUE_H__ 00110 00111 #ifdef __cplusplus 00112 extern "C" { 00113 #endif 00114 00115 #include <stdint.h> 00116 00117 00118 #ifndef ATOMIC_QUEUE_CUSTOM_ELEMENT 00119 struct atomic_queue_element { 00120 struct atomic_queue_element *volatile next; 00121 uintptr_t lock; 00122 void *data; 00123 }; 00124 #endif 00125 00126 struct atomic_queue { 00127 struct atomic_queue_element *volatile tail; 00128 }; 00129 00130 enum aq_failure_codes { 00131 ATOMIC_QUEUE_SUCCESS = 0, 00132 ATOMIC_QUEUE_NULL_QUEUE, 00133 ATOMIC_QUEUE_NULL_ELEMENT, 00134 ATOMIC_QUEUE_DUPLICATE_ELEMENT, 00135 ATOMIC_QUEUE_INVALID_CONTEXT 00136 }; 00137 00138 /* Use this value as the argument of aq_element_take when a context isn't needed */ 00139 #define ATOMIC_QUEUE_NO_CONTEXT ((void*)-1) 00140 00141 /** 00142 * \brief Add an element to the tail of the queue 00143 * 00144 * Since the queue only maintains a tail pointer, this simply inserts the new element before the tail pointer 00145 * 00146 * Element Insertion: 00147 * To insert an element: 00148 * Do: 00149 * * Read the tail pointer (load exclusive). 00150 * * Write the tail pointer into the next pointer of the new element. 00151 * * Write the tail pointer with the address of the new element 00152 * (store exclusive). 00153 * Until the store is successful. 00154 * 00155 * If a different thread of higher priority executed between the read and the 00156 * last write, it will cause the new element to have the wrong next pointer. 00157 * This is why the load-exclusive and store-exclusive are used. These are ARMv7 00158 * instructions that allow a given thread of execution to recognize whether it 00159 * has been interrupted. If another thread has set the exclusive bit, then the 00160 * store will fail. 00161 * 00162 * @param[in,out] q the queue structure to operate on 00163 * @param[in] e The element to add to the queue 00164 */ 00165 int aq_push_tail(struct atomic_queue *q, struct atomic_queue_element *e); 00166 /** 00167 * \brief Get an element from the head of the queue 00168 * 00169 * This function iterates over the queue and removes an element from the head when it finds the head. This is slower 00170 * than maintaining a head pointer, but it is necessary to ensure that a pop is completely atomic. 00171 * 00172 * Element Extraction: 00173 * Extracting an element is much more complex than element insertion due to the 00174 * need to traverse the queue. 00175 * 00176 * 1. Read the tail pointer 00177 * 2. If the tail pointer is NULL, return NULL. 00178 * 3. Set the current element to the tail pointer. 00179 * 4. Read the current element's next pointer (load exclusive). 00180 * 5. If the next pointer was NULL, start over (go to 1.) 00181 * 6. Load the next element. 00182 * 7. Check the value of the next element's next pointer 00183 * 8. If it is non-NULL, set the current element pointer to the next element and go to 4. 00184 * 9. Otherwise, set the current element's next pointer to NULL (store exclusive). 00185 * 10. Return the next element. 00186 * 00187 * There is the potential for another thread of execution to interrupt the 00188 * search for the head of the queue. This should cause the dequeue mechanism to 00189 * find a NULL next pointer in the current element. However, this may depend on 00190 * other circumstances in the system, which might break the behaviour of the 00191 * queue. Until this is fully analyzed, the queue should be treated as single- 00192 * reader/multi-writer. 00193 * 00194 * @param[in,out] q The queue to pop from 00195 * @return The popped element or NULL if the queue was empty 00196 */ 00197 struct atomic_queue_element *aq_pop_head(struct atomic_queue *q); 00198 /** 00199 * Check if there are any elements in the queue 00200 * 00201 * Note that there is no guarantee that a queue which is not empty when this API is called will not be become empty 00202 * before aq_pop_head is called 00203 * 00204 * @retval non-zero when the queue is empty 00205 * @retval 0 when the queue is not empty 00206 */ 00207 int aq_empty(struct atomic_queue *q); 00208 /** 00209 * Iterates over the queue and counts the elements in the queue 00210 * 00211 * The value returned by this function may be invalid by the time it returns. Do not depend on this value except in 00212 * a critical section. 00213 * 00214 * @return the number of elements in the queue 00215 */ 00216 unsigned aq_count(struct atomic_queue *q); 00217 00218 /** 00219 * Initialize an atomic queue element. 00220 * 00221 * WARNING: Only call this function one time per element, or it may result in undefined behaviour. 00222 * 00223 * @param[in] element Element to initialize 00224 */ 00225 void aq_initialize_element(struct atomic_queue_element *e); 00226 00227 /** 00228 * Take an element (this acquires the element lock) 00229 * 00230 * @param[in] element Element to take 00231 * @param[in] ctx The element context. If a context is not needed, use ATOMIC_QUEUE_NO_CONTEXT. 00232 * If a context is needed, pass a non-NULL pointer. 00233 * 00234 * @retval ATOMIC_QUEUE_SUCCESS for success 00235 * @retval ATOMIC_QUEUE_NULL_ELEMENT if the element is NULL 00236 * @retval ATOMIC_QUEUE_INVALID_CONTEXT if the context is NULL 00237 */ 00238 int aq_element_take(struct atomic_queue_element *e, void *ctx); 00239 00240 /** 00241 * Release an element (this releases the element lock) 00242 * 00243 * @param[in] element Element to release 00244 * @param[out] ctx The element context will be written to this pointer 00245 * 00246 * @retval ATOMIC_QUEUE_SUCCESS for success 00247 * @retval ATOMIC_QUEUE_NULL_ELEMENT if the element is NULL 00248 * @retval ATOMIC_QUEUE_INVALID_CONTEXT if the pointer to the context is NULL 00249 */ 00250 int aq_element_release(struct atomic_queue_element *e, void **ctx); 00251 00252 /** 00253 * Atomic Compare and Set 00254 * 00255 * Take a pointer to a uintptr_t, compare its current value to oldval. If it is 00256 * as expected, try to write newval to the pointer target. Fail if interrupted. 00257 * 00258 * @param[in,out] ptr A pointer to the target of the atomic compare and set 00259 * @param[in] oldval A value to compare with the target of ptr. 00260 * @param[in] newval A new value to store to the target of ptr. 00261 * @retval 1 if newval was stored 00262 * @retval 0 if oldval did not match *ptr, or if the store was interrupted. 00263 */ 00264 int aq_atomic_cas_uintptr(uintptr_t *ptr, uintptr_t oldval, uintptr_t newval); 00265 00266 /** 00267 * Atomic increment 00268 * 00269 * Increment the value pointed to by ptr and increment it by inc atomically 00270 * This is just a passthrough to __sync_add_and_fetch on platforms where it 00271 * is supported. 00272 * 00273 * @param[in,out] ptr A pointer to the target of the increment 00274 * @param[in] inc A value by which to increment *ptr 00275 * @return the new value of *ptr 00276 */ 00277 int32_t aq_atomic_inc_int32(int32_t *ptr, int32_t inc); 00278 00279 00280 #ifdef __cplusplus 00281 } 00282 #endif 00283 00284 #endif // __ATOMIC_QUEUE_H__
Generated on Tue Jul 12 2022 20:20:57 by
