Port of TI's CC3100 Websock camera demo. Uses FreeRTOS and mbedTLS, plus parts of the Arducam code for the OV5642 and OV2640 cameras; the MT9D111 can also be used. Work in progress: be warned that some parts may be a bit flaky. This build is for the Seeed Arch Max only; for a Cortex-M3, see the CM3 demo that uses the OV5642 Arducam Mini.

Dependencies:   mbed

Committer: dflet
Date: Tue Sep 15 16:45:04 2015 +0000
Revision: 22:f9b5e0b80bf2
Parent: 0:50cedd586816
Removed some debug.

Who changed what in which revision?

User    Revision    Line number    New contents of line
dflet 0:50cedd586816 1 /*
dflet 0:50cedd586816 2 FreeRTOS V8.2.1 - Copyright (C) 2015 Real Time Engineers Ltd.
dflet 0:50cedd586816 3 All rights reserved
dflet 0:50cedd586816 4
dflet 0:50cedd586816 5 VISIT http://www.FreeRTOS.org TO ENSURE YOU ARE USING THE LATEST VERSION.
dflet 0:50cedd586816 6
dflet 0:50cedd586816 7 This file is part of the FreeRTOS distribution.
dflet 0:50cedd586816 8
dflet 0:50cedd586816 9 FreeRTOS is free software; you can redistribute it and/or modify it under
dflet 0:50cedd586816 10 the terms of the GNU General Public License (version 2) as published by the
dflet 0:50cedd586816 11 Free Software Foundation >>!AND MODIFIED BY!<< the FreeRTOS exception.
dflet 0:50cedd586816 12
dflet 0:50cedd586816 13 ***************************************************************************
dflet 0:50cedd586816 14 >>! NOTE: The modification to the GPL is included to allow you to !<<
dflet 0:50cedd586816 15 >>! distribute a combined work that includes FreeRTOS without being !<<
dflet 0:50cedd586816 16 >>! obliged to provide the source code for proprietary components !<<
dflet 0:50cedd586816 17 >>! outside of the FreeRTOS kernel. !<<
dflet 0:50cedd586816 18 ***************************************************************************
dflet 0:50cedd586816 19
dflet 0:50cedd586816 20 FreeRTOS is distributed in the hope that it will be useful, but WITHOUT ANY
dflet 0:50cedd586816 21 WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
dflet 0:50cedd586816 22 FOR A PARTICULAR PURPOSE. Full license text is available on the following
dflet 0:50cedd586816 23 link: http://www.freertos.org/a00114.html
dflet 0:50cedd586816 24
dflet 0:50cedd586816 25 ***************************************************************************
dflet 0:50cedd586816 26 * *
dflet 0:50cedd586816 27 * FreeRTOS provides completely free yet professionally developed, *
dflet 0:50cedd586816 28 * robust, strictly quality controlled, supported, and cross *
dflet 0:50cedd586816 29 * platform software that is more than just the market leader, it *
dflet 0:50cedd586816 30 * is the industry's de facto standard. *
dflet 0:50cedd586816 31 * *
dflet 0:50cedd586816 32 * Help yourself get started quickly while simultaneously helping *
dflet 0:50cedd586816 33 * to support the FreeRTOS project by purchasing a FreeRTOS *
dflet 0:50cedd586816 34 * tutorial book, reference manual, or both: *
dflet 0:50cedd586816 35 * http://www.FreeRTOS.org/Documentation *
dflet 0:50cedd586816 36 * *
dflet 0:50cedd586816 37 ***************************************************************************
dflet 0:50cedd586816 38
dflet 0:50cedd586816 39 http://www.FreeRTOS.org/FAQHelp.html - Having a problem? Start by reading
dflet 0:50cedd586816 40 the FAQ page "My application does not run, what could be wrong?". Have you
dflet 0:50cedd586816 41 defined configASSERT()?
dflet 0:50cedd586816 42
dflet 0:50cedd586816 43 http://www.FreeRTOS.org/support - In return for receiving this top quality
dflet 0:50cedd586816 44 embedded software for free we request you assist our global community by
dflet 0:50cedd586816 45 participating in the support forum.
dflet 0:50cedd586816 46
dflet 0:50cedd586816 47 http://www.FreeRTOS.org/training - Investing in training allows your team to
dflet 0:50cedd586816 48 be as productive as possible as early as possible. Now you can receive
dflet 0:50cedd586816 49 FreeRTOS training directly from Richard Barry, CEO of Real Time Engineers
dflet 0:50cedd586816 50 Ltd, and the world's leading authority on the world's leading RTOS.
dflet 0:50cedd586816 51
dflet 0:50cedd586816 52 http://www.FreeRTOS.org/plus - A selection of FreeRTOS ecosystem products,
dflet 0:50cedd586816 53 including FreeRTOS+Trace - an indispensable productivity tool, a DOS
dflet 0:50cedd586816 54 compatible FAT file system, and our tiny thread aware UDP/IP stack.
dflet 0:50cedd586816 55
dflet 0:50cedd586816 56 http://www.FreeRTOS.org/labs - Where new FreeRTOS products go to incubate.
dflet 0:50cedd586816 57 Come and try FreeRTOS+TCP, our new open source TCP/IP stack for FreeRTOS.
dflet 0:50cedd586816 58
dflet 0:50cedd586816 59 http://www.OpenRTOS.com - Real Time Engineers ltd. license FreeRTOS to High
dflet 0:50cedd586816 60 Integrity Systems ltd. to sell under the OpenRTOS brand. Low cost OpenRTOS
dflet 0:50cedd586816 61 licenses offer ticketed support, indemnification and commercial middleware.
dflet 0:50cedd586816 62
dflet 0:50cedd586816 63 http://www.SafeRTOS.com - High Integrity Systems also provide a safety
dflet 0:50cedd586816 64 engineered and independently SIL3 certified version for use in safety and
dflet 0:50cedd586816 65 mission critical applications that require provable dependability.
dflet 0:50cedd586816 66
dflet 0:50cedd586816 67 1 tab == 4 spaces!
dflet 0:50cedd586816 68 */
dflet 0:50cedd586816 69
dflet 0:50cedd586816 70 #include <stdlib.h>
dflet 0:50cedd586816 71 #include <string.h>
dflet 0:50cedd586816 72
dflet 0:50cedd586816 73 /* Defining MPU_WRAPPERS_INCLUDED_FROM_API_FILE prevents task.h from redefining
dflet 0:50cedd586816 74 all the API functions to use the MPU wrappers. That should only be done when
dflet 0:50cedd586816 75 task.h is included from an application file. */
dflet 0:50cedd586816 76 #define MPU_WRAPPERS_INCLUDED_FROM_API_FILE
dflet 0:50cedd586816 77
dflet 0:50cedd586816 78 #include "FreeRTOS.h"
dflet 0:50cedd586816 79 #include "task.h"
dflet 0:50cedd586816 80 #include "queue.h"
dflet 0:50cedd586816 81
dflet 0:50cedd586816 82 #if ( configUSE_CO_ROUTINES == 1 )
dflet 0:50cedd586816 83 #include "croutine.h"
dflet 0:50cedd586816 84 #endif
dflet 0:50cedd586816 85
dflet 0:50cedd586816 86 /* Lint e961 and e750 are suppressed as a MISRA exception justified because the
dflet 0:50cedd586816 87 MPU ports require MPU_WRAPPERS_INCLUDED_FROM_API_FILE to be defined for the
dflet 0:50cedd586816 88 header files above, but not in this file, in order to generate the correct
dflet 0:50cedd586816 89 privileged Vs unprivileged linkage and placement. */
dflet 0:50cedd586816 90 #undef MPU_WRAPPERS_INCLUDED_FROM_API_FILE /*lint !e961 !e750. */
dflet 0:50cedd586816 91
dflet 0:50cedd586816 92
dflet 0:50cedd586816 93 /* Constants used with the xRxLock and xTxLock structure members. */
dflet 0:50cedd586816 94 #define queueUNLOCKED ( ( BaseType_t ) -1 )
dflet 0:50cedd586816 95 #define queueLOCKED_UNMODIFIED ( ( BaseType_t ) 0 )
dflet 0:50cedd586816 96
dflet 0:50cedd586816 97 /* When the Queue_t structure is used to represent a base queue its pcHead and
dflet 0:50cedd586816 98 pcTail members are used as pointers into the queue storage area. When the
dflet 0:50cedd586816 99 Queue_t structure is used to represent a mutex pcHead and pcTail pointers are
dflet 0:50cedd586816 100 not necessary, and the pcHead pointer is set to NULL to indicate that the
dflet 0:50cedd586816 101 pcTail pointer actually points to the mutex holder (if any). Map alternative
dflet 0:50cedd586816 102 names to the pcHead and pcTail structure members to ensure the readability of
dflet 0:50cedd586816 103 the code is maintained despite this dual use of two structure members. An
dflet 0:50cedd586816 104 alternative implementation would be to use a union, but use of a union is
dflet 0:50cedd586816 105 against the coding standard (although an exception to the standard has been
dflet 0:50cedd586816 106 permitted where the dual use also significantly changes the type of the
dflet 0:50cedd586816 107 structure member). */
dflet 0:50cedd586816 108 #define pxMutexHolder pcTail
dflet 0:50cedd586816 109 #define uxQueueType pcHead
dflet 0:50cedd586816 110 #define queueQUEUE_IS_MUTEX NULL
dflet 0:50cedd586816 111
dflet 0:50cedd586816 112 /* Semaphores do not actually store or copy data, so have an item size of
dflet 0:50cedd586816 113 zero. */
dflet 0:50cedd586816 114 #define queueSEMAPHORE_QUEUE_ITEM_LENGTH ( ( UBaseType_t ) 0 )
dflet 0:50cedd586816 115 #define queueMUTEX_GIVE_BLOCK_TIME ( ( TickType_t ) 0U )
dflet 0:50cedd586816 116
dflet 0:50cedd586816 117 #if( configUSE_PREEMPTION == 0 )
dflet 0:50cedd586816 118 /* If the cooperative scheduler is being used then a yield should not be
dflet 0:50cedd586816 119 performed just because a higher priority task has been woken. */
dflet 0:50cedd586816 120 #define queueYIELD_IF_USING_PREEMPTION()
dflet 0:50cedd586816 121 #else
dflet 0:50cedd586816 122 #define queueYIELD_IF_USING_PREEMPTION() portYIELD_WITHIN_API()
dflet 0:50cedd586816 123 #endif
dflet 0:50cedd586816 124
dflet 0:50cedd586816 125 /*
dflet 0:50cedd586816 126 * Definition of the queue used by the scheduler.
dflet 0:50cedd586816 127 * Items are queued by copy, not reference. See the following link for the
dflet 0:50cedd586816 128 * rationale: http://www.freertos.org/Embedded-RTOS-Queues.html
dflet 0:50cedd586816 129 */
dflet 0:50cedd586816 130 typedef struct QueueDefinition
dflet 0:50cedd586816 131 {
dflet 0:50cedd586816 132 int8_t *pcHead; /*< Points to the beginning of the queue storage area. */
dflet 0:50cedd586816 133 int8_t *pcTail; /*< Points to the byte at the end of the queue storage area. One more byte is allocated than necessary to store the queue items, this is used as a marker. */
dflet 0:50cedd586816 134 int8_t *pcWriteTo; /*< Points to the next free place in the storage area. */
dflet 0:50cedd586816 135
dflet 0:50cedd586816 136 union /* Use of a union is an exception to the coding standard to ensure two mutually exclusive structure members don't appear simultaneously (wasting RAM). */
dflet 0:50cedd586816 137 {
dflet 0:50cedd586816 138 int8_t *pcReadFrom; /*< Points to the last place that a queued item was read from when the structure is used as a queue. */
dflet 0:50cedd586816 139 UBaseType_t uxRecursiveCallCount;/*< Maintains a count of the number of times a recursive mutex has been recursively 'taken' when the structure is used as a mutex. */
dflet 0:50cedd586816 140 } u;
dflet 0:50cedd586816 141
dflet 0:50cedd586816 142 List_t xTasksWaitingToSend; /*< List of tasks that are blocked waiting to post onto this queue. Stored in priority order. */
dflet 0:50cedd586816 143 List_t xTasksWaitingToReceive; /*< List of tasks that are blocked waiting to read from this queue. Stored in priority order. */
dflet 0:50cedd586816 144
dflet 0:50cedd586816 145 volatile UBaseType_t uxMessagesWaiting;/*< The number of items currently in the queue. */
dflet 0:50cedd586816 146 UBaseType_t uxLength; /*< The length of the queue defined as the number of items it will hold, not the number of bytes. */
dflet 0:50cedd586816 147 UBaseType_t uxItemSize; /*< The size of each item that the queue will hold. */
dflet 0:50cedd586816 148
dflet 0:50cedd586816 149 volatile BaseType_t xRxLock; /*< Stores the number of items received from the queue (removed from the queue) while the queue was locked. Set to queueUNLOCKED when the queue is not locked. */
dflet 0:50cedd586816 150 volatile BaseType_t xTxLock; /*< Stores the number of items transmitted to the queue (added to the queue) while the queue was locked. Set to queueUNLOCKED when the queue is not locked. */
dflet 0:50cedd586816 151
dflet 0:50cedd586816 152 #if ( configUSE_TRACE_FACILITY == 1 )
dflet 0:50cedd586816 153 UBaseType_t uxQueueNumber;
dflet 0:50cedd586816 154 uint8_t ucQueueType;
dflet 0:50cedd586816 155 #endif
dflet 0:50cedd586816 156
dflet 0:50cedd586816 157 #if ( configUSE_QUEUE_SETS == 1 )
dflet 0:50cedd586816 158 struct QueueDefinition *pxQueueSetContainer;
dflet 0:50cedd586816 159 #endif
dflet 0:50cedd586816 160
dflet 0:50cedd586816 161 } xQUEUE;
dflet 0:50cedd586816 162
dflet 0:50cedd586816 163 /* The old xQUEUE name is maintained above then typedefed to the new Queue_t
dflet 0:50cedd586816 164 name below to enable the use of older kernel aware debuggers. */
dflet 0:50cedd586816 165 typedef xQUEUE Queue_t;
dflet 0:50cedd586816 166
dflet 0:50cedd586816 167 /*-----------------------------------------------------------*/
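Since items are queued by copy rather than by reference, a sender's local variable can be reused as soon as the send call returns. A minimal sketch of that by-copy behaviour (the Message_t type and task below are illustrative, not part of this file):

    #include <stdint.h>
    #include "FreeRTOS.h"
    #include "queue.h"

    typedef struct
    {
        uint32_t ulValue;
    } Message_t;

    static QueueHandle_t xMsgQueue;

    static void prvProducerTask( void *pvParameters )
    {
    Message_t xMsg = { 0 };

        ( void ) pvParameters;

        for( ;; )
        {
            xMsg.ulValue++;

            /* The kernel copies xMsg into the queue storage area, so the
            local variable can be modified again immediately. */
            ( void ) xQueueSend( xMsgQueue, &xMsg, portMAX_DELAY );
        }
    }
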
dflet 0:50cedd586816 168
dflet 0:50cedd586816 169 /*
dflet 0:50cedd586816 170 * The queue registry is just a means for kernel aware debuggers to locate
dflet 0:50cedd586816 171 * queue structures. It has no other purpose so is an optional component.
dflet 0:50cedd586816 172 */
dflet 0:50cedd586816 173 #if ( configQUEUE_REGISTRY_SIZE > 0 )
dflet 0:50cedd586816 174
dflet 0:50cedd586816 175 /* The type stored within the queue registry array. This allows a name
dflet 0:50cedd586816 176 to be assigned to each queue making kernel aware debugging a little
dflet 0:50cedd586816 177 more user friendly. */
dflet 0:50cedd586816 178 typedef struct QUEUE_REGISTRY_ITEM
dflet 0:50cedd586816 179 {
dflet 0:50cedd586816 180 const char *pcQueueName; /*lint !e971 Unqualified char types are allowed for strings and single characters only. */
dflet 0:50cedd586816 181 QueueHandle_t xHandle;
dflet 0:50cedd586816 182 } xQueueRegistryItem;
dflet 0:50cedd586816 183
dflet 0:50cedd586816 184 /* The old xQueueRegistryItem name is maintained above then typedefed to the
dflet 0:50cedd586816 185 new QueueRegistryItem_t name below to enable the use of older kernel aware
dflet 0:50cedd586816 186 debuggers. */
dflet 0:50cedd586816 187 typedef xQueueRegistryItem QueueRegistryItem_t;
dflet 0:50cedd586816 188
dflet 0:50cedd586816 189 /* The queue registry is simply an array of QueueRegistryItem_t structures.
dflet 0:50cedd586816 190 The pcQueueName member of a structure being NULL is indicative of the
dflet 0:50cedd586816 191 array position being vacant. */
dflet 0:50cedd586816 192 QueueRegistryItem_t xQueueRegistry[ configQUEUE_REGISTRY_SIZE ];
dflet 0:50cedd586816 193
dflet 0:50cedd586816 194 #endif /* configQUEUE_REGISTRY_SIZE */
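When configQUEUE_REGISTRY_SIZE is greater than 0, an application can make a queue visible to a kernel aware debugger by adding it to this registry, for example (sketch only, names illustrative):

    QueueHandle_t xRxQueue = xQueueCreate( 10, sizeof( uint32_t ) );

    if( xRxQueue != NULL )
    {
        /* The registry just stores the name/handle pair for the debugger;
        it has no effect on the queue's behaviour. */
        vQueueAddToRegistry( xRxQueue, "RxQueue" );
    }
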
dflet 0:50cedd586816 195
dflet 0:50cedd586816 196 /*
dflet 0:50cedd586816 197 * Unlocks a queue locked by a call to prvLockQueue. Locking a queue does not
dflet 0:50cedd586816 198 * prevent an ISR from adding or removing items to the queue, but does prevent
dflet 0:50cedd586816 199 * an ISR from removing tasks from the queue event lists. If an ISR finds a
dflet 0:50cedd586816 200 * queue is locked it will instead increment the appropriate queue lock count
dflet 0:50cedd586816 201 * to indicate that a task may require unblocking. When the queue in unlocked
dflet 0:50cedd586816 202 * these lock counts are inspected, and the appropriate action taken.
dflet 0:50cedd586816 203 */
dflet 0:50cedd586816 204 static void prvUnlockQueue( Queue_t * const pxQueue ) PRIVILEGED_FUNCTION;
dflet 0:50cedd586816 205
dflet 0:50cedd586816 206 /*
dflet 0:50cedd586816 207 * Uses a critical section to determine if there is any data in a queue.
dflet 0:50cedd586816 208 *
dflet 0:50cedd586816 209 * @return pdTRUE if the queue contains no items, otherwise pdFALSE.
dflet 0:50cedd586816 210 */
dflet 0:50cedd586816 211 static BaseType_t prvIsQueueEmpty( const Queue_t *pxQueue ) PRIVILEGED_FUNCTION;
dflet 0:50cedd586816 212
dflet 0:50cedd586816 213 /*
dflet 0:50cedd586816 214 * Uses a critical section to determine if there is any space in a queue.
dflet 0:50cedd586816 215 *
dflet 0:50cedd586816 216 * @return pdTRUE if there is no space, otherwise pdFALSE.
dflet 0:50cedd586816 217 */
dflet 0:50cedd586816 218 static BaseType_t prvIsQueueFull( const Queue_t *pxQueue ) PRIVILEGED_FUNCTION;
dflet 0:50cedd586816 219
dflet 0:50cedd586816 220 /*
dflet 0:50cedd586816 221 * Copies an item into the queue, either at the front of the queue or the
dflet 0:50cedd586816 222 * back of the queue.
dflet 0:50cedd586816 223 */
dflet 0:50cedd586816 224 static BaseType_t prvCopyDataToQueue( Queue_t * const pxQueue, const void *pvItemToQueue, const BaseType_t xPosition ) PRIVILEGED_FUNCTION;
dflet 0:50cedd586816 225
dflet 0:50cedd586816 226 /*
dflet 0:50cedd586816 227 * Copies an item out of a queue.
dflet 0:50cedd586816 228 */
dflet 0:50cedd586816 229 static void prvCopyDataFromQueue( Queue_t * const pxQueue, void * const pvBuffer ) PRIVILEGED_FUNCTION;
dflet 0:50cedd586816 230
dflet 0:50cedd586816 231 #if ( configUSE_QUEUE_SETS == 1 )
dflet 0:50cedd586816 232 /*
dflet 0:50cedd586816 233 * Checks to see if a queue is a member of a queue set, and if so, notifies
dflet 0:50cedd586816 234 * the queue set that the queue contains data.
dflet 0:50cedd586816 235 */
dflet 0:50cedd586816 236 static BaseType_t prvNotifyQueueSetContainer( const Queue_t * const pxQueue, const BaseType_t xCopyPosition ) PRIVILEGED_FUNCTION;
dflet 0:50cedd586816 237 #endif
dflet 0:50cedd586816 238
dflet 0:50cedd586816 239 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 240
dflet 0:50cedd586816 241 /*
dflet 0:50cedd586816 242 * Macro to mark a queue as locked. Locking a queue prevents an ISR from
dflet 0:50cedd586816 243 * accessing the queue event lists.
dflet 0:50cedd586816 244 */
dflet 0:50cedd586816 245 #define prvLockQueue( pxQueue ) \
dflet 0:50cedd586816 246 taskENTER_CRITICAL(); \
dflet 0:50cedd586816 247 { \
dflet 0:50cedd586816 248 if( ( pxQueue )->xRxLock == queueUNLOCKED ) \
dflet 0:50cedd586816 249 { \
dflet 0:50cedd586816 250 ( pxQueue )->xRxLock = queueLOCKED_UNMODIFIED; \
dflet 0:50cedd586816 251 } \
dflet 0:50cedd586816 252 if( ( pxQueue )->xTxLock == queueUNLOCKED ) \
dflet 0:50cedd586816 253 { \
dflet 0:50cedd586816 254 ( pxQueue )->xTxLock = queueLOCKED_UNMODIFIED; \
dflet 0:50cedd586816 255 } \
dflet 0:50cedd586816 256 } \
dflet 0:50cedd586816 257 taskEXIT_CRITICAL()
dflet 0:50cedd586816 258 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 259
dflet 0:50cedd586816 260 BaseType_t xQueueGenericReset( QueueHandle_t xQueue, BaseType_t xNewQueue )
dflet 0:50cedd586816 261 {
dflet 0:50cedd586816 262 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
dflet 0:50cedd586816 263
dflet 0:50cedd586816 264 configASSERT( pxQueue );
dflet 0:50cedd586816 265
dflet 0:50cedd586816 266 taskENTER_CRITICAL();
dflet 0:50cedd586816 267 {
dflet 0:50cedd586816 268 pxQueue->pcTail = pxQueue->pcHead + ( pxQueue->uxLength * pxQueue->uxItemSize );
dflet 0:50cedd586816 269 pxQueue->uxMessagesWaiting = ( UBaseType_t ) 0U;
dflet 0:50cedd586816 270 pxQueue->pcWriteTo = pxQueue->pcHead;
dflet 0:50cedd586816 271 pxQueue->u.pcReadFrom = pxQueue->pcHead + ( ( pxQueue->uxLength - ( UBaseType_t ) 1U ) * pxQueue->uxItemSize );
dflet 0:50cedd586816 272 pxQueue->xRxLock = queueUNLOCKED;
dflet 0:50cedd586816 273 pxQueue->xTxLock = queueUNLOCKED;
dflet 0:50cedd586816 274
dflet 0:50cedd586816 275 if( xNewQueue == pdFALSE )
dflet 0:50cedd586816 276 {
dflet 0:50cedd586816 277 /* If there are tasks blocked waiting to read from the queue, then
dflet 0:50cedd586816 278 the tasks will remain blocked as after this function exits the queue
dflet 0:50cedd586816 279 will still be empty. If there are tasks blocked waiting to write to
dflet 0:50cedd586816 280 the queue, then one should be unblocked as after this function exits
dflet 0:50cedd586816 281 it will be possible to write to it. */
dflet 0:50cedd586816 282 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
dflet 0:50cedd586816 283 {
dflet 0:50cedd586816 284 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) == pdTRUE )
dflet 0:50cedd586816 285 {
dflet 0:50cedd586816 286 queueYIELD_IF_USING_PREEMPTION();
dflet 0:50cedd586816 287 }
dflet 0:50cedd586816 288 else
dflet 0:50cedd586816 289 {
dflet 0:50cedd586816 290 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 291 }
dflet 0:50cedd586816 292 }
dflet 0:50cedd586816 293 else
dflet 0:50cedd586816 294 {
dflet 0:50cedd586816 295 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 296 }
dflet 0:50cedd586816 297 }
dflet 0:50cedd586816 298 else
dflet 0:50cedd586816 299 {
dflet 0:50cedd586816 300 /* Ensure the event queues start in the correct state. */
dflet 0:50cedd586816 301 vListInitialise( &( pxQueue->xTasksWaitingToSend ) );
dflet 0:50cedd586816 302 vListInitialise( &( pxQueue->xTasksWaitingToReceive ) );
dflet 0:50cedd586816 303 }
dflet 0:50cedd586816 304 }
dflet 0:50cedd586816 305 taskEXIT_CRITICAL();
dflet 0:50cedd586816 306
dflet 0:50cedd586816 307 /* A value is returned for calling semantic consistency with previous
dflet 0:50cedd586816 308 versions. */
dflet 0:50cedd586816 309 return pdPASS;
dflet 0:50cedd586816 310 }
dflet 0:50cedd586816 311 /*-----------------------------------------------------------*/
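Applications normally reach this function through the xQueueReset() macro, which passes pdFALSE for xNewQueue so that tasks already blocked on the queue are treated as described in the comment above. A one-line sketch (reusing the illustrative xRxQueue from the registry sketch):

    /* Discard anything left in xRxQueue. Tasks blocked waiting to send
    (if any) are unblocked because space becomes available. */
    ( void ) xQueueReset( xRxQueue );
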
dflet 0:50cedd586816 312
dflet 0:50cedd586816 313 QueueHandle_t xQueueGenericCreate( const UBaseType_t uxQueueLength, const UBaseType_t uxItemSize, const uint8_t ucQueueType )
dflet 0:50cedd586816 314 {
dflet 0:50cedd586816 315 Queue_t *pxNewQueue;
dflet 0:50cedd586816 316 size_t xQueueSizeInBytes;
dflet 0:50cedd586816 317 QueueHandle_t xReturn = NULL;
dflet 0:50cedd586816 318 int8_t *pcAllocatedBuffer;
dflet 0:50cedd586816 319
dflet 0:50cedd586816 320 /* Remove compiler warnings about unused parameters should
dflet 0:50cedd586816 321 configUSE_TRACE_FACILITY not be set to 1. */
dflet 0:50cedd586816 322 ( void ) ucQueueType;
dflet 0:50cedd586816 323
dflet 0:50cedd586816 324 configASSERT( uxQueueLength > ( UBaseType_t ) 0 );
dflet 0:50cedd586816 325
dflet 0:50cedd586816 326 if( uxItemSize == ( UBaseType_t ) 0 )
dflet 0:50cedd586816 327 {
dflet 0:50cedd586816 328 /* There is not going to be a queue storage area. */
dflet 0:50cedd586816 329 xQueueSizeInBytes = ( size_t ) 0;
dflet 0:50cedd586816 330 }
dflet 0:50cedd586816 331 else
dflet 0:50cedd586816 332 {
dflet 0:50cedd586816 333 /* The queue is one byte longer than asked for to make wrap checking
dflet 0:50cedd586816 334 easier/faster. */
dflet 0:50cedd586816 335 xQueueSizeInBytes = ( size_t ) ( uxQueueLength * uxItemSize ) + ( size_t ) 1; /*lint !e961 MISRA exception as the casts are only redundant for some ports. */
dflet 0:50cedd586816 336 }
dflet 0:50cedd586816 337
dflet 0:50cedd586816 338 /* Allocate the new queue structure and storage area. */
dflet 0:50cedd586816 339 pcAllocatedBuffer = ( int8_t * ) pvPortMalloc( sizeof( Queue_t ) + xQueueSizeInBytes );
dflet 0:50cedd586816 340
dflet 0:50cedd586816 341 if( pcAllocatedBuffer != NULL )
dflet 0:50cedd586816 342 {
dflet 0:50cedd586816 343 pxNewQueue = ( Queue_t * ) pcAllocatedBuffer; /*lint !e826 MISRA The buffer cannot be too small because it was dimensioned by sizeof( Queue_t ) + xQueueSizeInBytes. */
dflet 0:50cedd586816 344
dflet 0:50cedd586816 345 if( uxItemSize == ( UBaseType_t ) 0 )
dflet 0:50cedd586816 346 {
dflet 0:50cedd586816 347 /* No RAM was allocated for the queue storage area, but pcHead
dflet 0:50cedd586816 348 cannot be set to NULL because NULL is used as a key to say the queue
dflet 0:50cedd586816 349 is used as a mutex. Therefore just set pcHead to point to the queue
dflet 0:50cedd586816 350 as a benign value that is known to be within the memory map. */
dflet 0:50cedd586816 351 pxNewQueue->pcHead = ( int8_t * ) pxNewQueue;
dflet 0:50cedd586816 352 }
dflet 0:50cedd586816 353 else
dflet 0:50cedd586816 354 {
dflet 0:50cedd586816 355 /* Jump past the queue structure to find the location of the queue
dflet 0:50cedd586816 356 storage area - adding the padding bytes to get a better alignment. */
dflet 0:50cedd586816 357 pxNewQueue->pcHead = pcAllocatedBuffer + sizeof( Queue_t );
dflet 0:50cedd586816 358 }
dflet 0:50cedd586816 359
dflet 0:50cedd586816 360 /* Initialise the queue members as described above where the queue type
dflet 0:50cedd586816 361 is defined. */
dflet 0:50cedd586816 362 pxNewQueue->uxLength = uxQueueLength;
dflet 0:50cedd586816 363 pxNewQueue->uxItemSize = uxItemSize;
dflet 0:50cedd586816 364 ( void ) xQueueGenericReset( pxNewQueue, pdTRUE );
dflet 0:50cedd586816 365
dflet 0:50cedd586816 366 #if ( configUSE_TRACE_FACILITY == 1 )
dflet 0:50cedd586816 367 {
dflet 0:50cedd586816 368 pxNewQueue->ucQueueType = ucQueueType;
dflet 0:50cedd586816 369 }
dflet 0:50cedd586816 370 #endif /* configUSE_TRACE_FACILITY */
dflet 0:50cedd586816 371
dflet 0:50cedd586816 372 #if( configUSE_QUEUE_SETS == 1 )
dflet 0:50cedd586816 373 {
dflet 0:50cedd586816 374 pxNewQueue->pxQueueSetContainer = NULL;
dflet 0:50cedd586816 375 }
dflet 0:50cedd586816 376 #endif /* configUSE_QUEUE_SETS */
dflet 0:50cedd586816 377
dflet 0:50cedd586816 378 traceQUEUE_CREATE( pxNewQueue );
dflet 0:50cedd586816 379 xReturn = pxNewQueue;
dflet 0:50cedd586816 380 }
dflet 0:50cedd586816 381 else
dflet 0:50cedd586816 382 {
dflet 0:50cedd586816 383 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 384 }
dflet 0:50cedd586816 385
dflet 0:50cedd586816 386 configASSERT( xReturn );
dflet 0:50cedd586816 387
dflet 0:50cedd586816 388 return xReturn;
dflet 0:50cedd586816 389 }
dflet 0:50cedd586816 390 /*-----------------------------------------------------------*/
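At the application level this function sits behind the xQueueCreate() macro, which supplies queueQUEUE_TYPE_BASE as the queue type. A hedged usage sketch:

    /* One allocation covers the Queue_t structure plus storage for ten
    uint32_t items. */
    QueueHandle_t xQueue = xQueueCreate( 10, sizeof( uint32_t ) );

    if( xQueue == NULL )
    {
        /* There was not enough FreeRTOS heap for the structure plus the
        storage area. */
    }
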
dflet 0:50cedd586816 391
dflet 0:50cedd586816 392 #if ( configUSE_MUTEXES == 1 )
dflet 0:50cedd586816 393
dflet 0:50cedd586816 394 QueueHandle_t xQueueCreateMutex( const uint8_t ucQueueType )
dflet 0:50cedd586816 395 {
dflet 0:50cedd586816 396 Queue_t *pxNewQueue;
dflet 0:50cedd586816 397
dflet 0:50cedd586816 398 /* Prevent compiler warnings about unused parameters if
dflet 0:50cedd586816 399 configUSE_TRACE_FACILITY does not equal 1. */
dflet 0:50cedd586816 400 ( void ) ucQueueType;
dflet 0:50cedd586816 401
dflet 0:50cedd586816 402 /* Allocate the new queue structure. */
dflet 0:50cedd586816 403 pxNewQueue = ( Queue_t * ) pvPortMalloc( sizeof( Queue_t ) );
dflet 0:50cedd586816 404 if( pxNewQueue != NULL )
dflet 0:50cedd586816 405 {
dflet 0:50cedd586816 406 /* Information required for priority inheritance. */
dflet 0:50cedd586816 407 pxNewQueue->pxMutexHolder = NULL;
dflet 0:50cedd586816 408 pxNewQueue->uxQueueType = queueQUEUE_IS_MUTEX;
dflet 0:50cedd586816 409
dflet 0:50cedd586816 410 /* When a queue is used as a mutex no data is actually copied into
dflet 0:50cedd586816 411 or out of the queue. */
dflet 0:50cedd586816 412 pxNewQueue->pcWriteTo = NULL;
dflet 0:50cedd586816 413 pxNewQueue->u.pcReadFrom = NULL;
dflet 0:50cedd586816 414
dflet 0:50cedd586816 415 /* Each mutex has a length of 1 (like a binary semaphore) and
dflet 0:50cedd586816 416 an item size of 0 as nothing is actually copied into or out
dflet 0:50cedd586816 417 of the mutex. */
dflet 0:50cedd586816 418 pxNewQueue->uxMessagesWaiting = ( UBaseType_t ) 0U;
dflet 0:50cedd586816 419 pxNewQueue->uxLength = ( UBaseType_t ) 1U;
dflet 0:50cedd586816 420 pxNewQueue->uxItemSize = ( UBaseType_t ) 0U;
dflet 0:50cedd586816 421 pxNewQueue->xRxLock = queueUNLOCKED;
dflet 0:50cedd586816 422 pxNewQueue->xTxLock = queueUNLOCKED;
dflet 0:50cedd586816 423
dflet 0:50cedd586816 424 #if ( configUSE_TRACE_FACILITY == 1 )
dflet 0:50cedd586816 425 {
dflet 0:50cedd586816 426 pxNewQueue->ucQueueType = ucQueueType;
dflet 0:50cedd586816 427 }
dflet 0:50cedd586816 428 #endif
dflet 0:50cedd586816 429
dflet 0:50cedd586816 430 #if ( configUSE_QUEUE_SETS == 1 )
dflet 0:50cedd586816 431 {
dflet 0:50cedd586816 432 pxNewQueue->pxQueueSetContainer = NULL;
dflet 0:50cedd586816 433 }
dflet 0:50cedd586816 434 #endif
dflet 0:50cedd586816 435
dflet 0:50cedd586816 436 /* Ensure the event queues start with the correct state. */
dflet 0:50cedd586816 437 vListInitialise( &( pxNewQueue->xTasksWaitingToSend ) );
dflet 0:50cedd586816 438 vListInitialise( &( pxNewQueue->xTasksWaitingToReceive ) );
dflet 0:50cedd586816 439
dflet 0:50cedd586816 440 traceCREATE_MUTEX( pxNewQueue );
dflet 0:50cedd586816 441
dflet 0:50cedd586816 442 /* Start with the semaphore in the expected state. */
dflet 0:50cedd586816 443 ( void ) xQueueGenericSend( pxNewQueue, NULL, ( TickType_t ) 0U, queueSEND_TO_BACK );
dflet 0:50cedd586816 444 }
dflet 0:50cedd586816 445 else
dflet 0:50cedd586816 446 {
dflet 0:50cedd586816 447 traceCREATE_MUTEX_FAILED();
dflet 0:50cedd586816 448 }
dflet 0:50cedd586816 449
dflet 0:50cedd586816 450 configASSERT( pxNewQueue );
dflet 0:50cedd586816 451 return pxNewQueue;
dflet 0:50cedd586816 452 }
dflet 0:50cedd586816 453
dflet 0:50cedd586816 454 #endif /* configUSE_MUTEXES */
dflet 0:50cedd586816 455 /*-----------------------------------------------------------*/
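This is the function behind the xSemaphoreCreateMutex() macro from semphr.h. A typical take/give pairing, sketched with an illustrative shared resource:

    #include "semphr.h"

    SemaphoreHandle_t xMutex = xSemaphoreCreateMutex();

    if( xMutex != NULL )
    {
        if( xSemaphoreTake( xMutex, pdMS_TO_TICKS( 10 ) ) == pdPASS )
        {
            /* Access the shared resource here, then release the mutex so
            other tasks can take it. */
            ( void ) xSemaphoreGive( xMutex );
        }
    }
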
dflet 0:50cedd586816 456
dflet 0:50cedd586816 457 #if ( ( configUSE_MUTEXES == 1 ) && ( INCLUDE_xSemaphoreGetMutexHolder == 1 ) )
dflet 0:50cedd586816 458
dflet 0:50cedd586816 459 void* xQueueGetMutexHolder( QueueHandle_t xSemaphore )
dflet 0:50cedd586816 460 {
dflet 0:50cedd586816 461 void *pxReturn;
dflet 0:50cedd586816 462
dflet 0:50cedd586816 463 /* This function is called by xSemaphoreGetMutexHolder(), and should not
dflet 0:50cedd586816 464 be called directly. Note: This is a good way of determining if the
dflet 0:50cedd586816 465 calling task is the mutex holder, but not a good way of determining the
dflet 0:50cedd586816 466 identity of the mutex holder, as the holder may change between the
dflet 0:50cedd586816 467 following critical section exiting and the function returning. */
dflet 0:50cedd586816 468 taskENTER_CRITICAL();
dflet 0:50cedd586816 469 {
dflet 0:50cedd586816 470 if( ( ( Queue_t * ) xSemaphore )->uxQueueType == queueQUEUE_IS_MUTEX )
dflet 0:50cedd586816 471 {
dflet 0:50cedd586816 472 pxReturn = ( void * ) ( ( Queue_t * ) xSemaphore )->pxMutexHolder;
dflet 0:50cedd586816 473 }
dflet 0:50cedd586816 474 else
dflet 0:50cedd586816 475 {
dflet 0:50cedd586816 476 pxReturn = NULL;
dflet 0:50cedd586816 477 }
dflet 0:50cedd586816 478 }
dflet 0:50cedd586816 479 taskEXIT_CRITICAL();
dflet 0:50cedd586816 480
dflet 0:50cedd586816 481 return pxReturn;
dflet 0:50cedd586816 482 } /*lint !e818 xSemaphore cannot be a pointer to const because it is a typedef. */
dflet 0:50cedd586816 483
dflet 0:50cedd586816 484 #endif
dflet 0:50cedd586816 485 /*-----------------------------------------------------------*/
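As the comment above explains, the dependable use of this call is asking whether the calling task itself is the holder (INCLUDE_xSemaphoreGetMutexHolder must be 1). Sketch, reusing the illustrative xMutex from above:

    if( xSemaphoreGetMutexHolder( xMutex ) == xTaskGetCurrentTaskHandle() )
    {
        /* This task holds the mutex, and that cannot change until this
        task itself gives the mutex back. */
    }
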
dflet 0:50cedd586816 486
dflet 0:50cedd586816 487 #if ( configUSE_RECURSIVE_MUTEXES == 1 )
dflet 0:50cedd586816 488
dflet 0:50cedd586816 489 BaseType_t xQueueGiveMutexRecursive( QueueHandle_t xMutex )
dflet 0:50cedd586816 490 {
dflet 0:50cedd586816 491 BaseType_t xReturn;
dflet 0:50cedd586816 492 Queue_t * const pxMutex = ( Queue_t * ) xMutex;
dflet 0:50cedd586816 493
dflet 0:50cedd586816 494 configASSERT( pxMutex );
dflet 0:50cedd586816 495
dflet 0:50cedd586816 496 /* If this is the task that holds the mutex then pxMutexHolder will not
dflet 0:50cedd586816 497 change outside of this task. If this task does not hold the mutex then
dflet 0:50cedd586816 498 pxMutexHolder can never coincidentally equal the task's handle, and as
dflet 0:50cedd586816 499 this is the only condition we are interested in it does not matter if
dflet 0:50cedd586816 500 pxMutexHolder is accessed simultaneously by another task. Therefore no
dflet 0:50cedd586816 501 mutual exclusion is required to test the pxMutexHolder variable. */
dflet 0:50cedd586816 502 if( pxMutex->pxMutexHolder == ( void * ) xTaskGetCurrentTaskHandle() ) /*lint !e961 Not a redundant cast as TaskHandle_t is a typedef. */
dflet 0:50cedd586816 503 {
dflet 0:50cedd586816 504 traceGIVE_MUTEX_RECURSIVE( pxMutex );
dflet 0:50cedd586816 505
dflet 0:50cedd586816 506 /* uxRecursiveCallCount cannot be zero if pxMutexHolder is equal to
dflet 0:50cedd586816 507 the task handle, therefore no underflow check is required. Also,
dflet 0:50cedd586816 508 uxRecursiveCallCount is only modified by the mutex holder, and as
dflet 0:50cedd586816 509 there can only be one, no mutual exclusion is required to modify the
dflet 0:50cedd586816 510 uxRecursiveCallCount member. */
dflet 0:50cedd586816 511 ( pxMutex->u.uxRecursiveCallCount )--;
dflet 0:50cedd586816 512
dflet 0:50cedd586816 513 /* Have we unwound the call count? */
dflet 0:50cedd586816 514 if( pxMutex->u.uxRecursiveCallCount == ( UBaseType_t ) 0 )
dflet 0:50cedd586816 515 {
dflet 0:50cedd586816 516 /* Return the mutex. This will automatically unblock any other
dflet 0:50cedd586816 517 task that might be waiting to access the mutex. */
dflet 0:50cedd586816 518 ( void ) xQueueGenericSend( pxMutex, NULL, queueMUTEX_GIVE_BLOCK_TIME, queueSEND_TO_BACK );
dflet 0:50cedd586816 519 }
dflet 0:50cedd586816 520 else
dflet 0:50cedd586816 521 {
dflet 0:50cedd586816 522 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 523 }
dflet 0:50cedd586816 524
dflet 0:50cedd586816 525 xReturn = pdPASS;
dflet 0:50cedd586816 526 }
dflet 0:50cedd586816 527 else
dflet 0:50cedd586816 528 {
dflet 0:50cedd586816 529 /* The mutex cannot be given because the calling task is not the
dflet 0:50cedd586816 530 holder. */
dflet 0:50cedd586816 531 xReturn = pdFAIL;
dflet 0:50cedd586816 532
dflet 0:50cedd586816 533 traceGIVE_MUTEX_RECURSIVE_FAILED( pxMutex );
dflet 0:50cedd586816 534 }
dflet 0:50cedd586816 535
dflet 0:50cedd586816 536 return xReturn;
dflet 0:50cedd586816 537 }
dflet 0:50cedd586816 538
dflet 0:50cedd586816 539 #endif /* configUSE_RECURSIVE_MUTEXES */
dflet 0:50cedd586816 540 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 541
dflet 0:50cedd586816 542 #if ( configUSE_RECURSIVE_MUTEXES == 1 )
dflet 0:50cedd586816 543
dflet 0:50cedd586816 544 BaseType_t xQueueTakeMutexRecursive( QueueHandle_t xMutex, TickType_t xTicksToWait )
dflet 0:50cedd586816 545 {
dflet 0:50cedd586816 546 BaseType_t xReturn;
dflet 0:50cedd586816 547 Queue_t * const pxMutex = ( Queue_t * ) xMutex;
dflet 0:50cedd586816 548
dflet 0:50cedd586816 549 configASSERT( pxMutex );
dflet 0:50cedd586816 550
dflet 0:50cedd586816 551 /* Comments regarding mutual exclusion as per those within
dflet 0:50cedd586816 552 xQueueGiveMutexRecursive(). */
dflet 0:50cedd586816 553
dflet 0:50cedd586816 554 traceTAKE_MUTEX_RECURSIVE( pxMutex );
dflet 0:50cedd586816 555
dflet 0:50cedd586816 556 if( pxMutex->pxMutexHolder == ( void * ) xTaskGetCurrentTaskHandle() ) /*lint !e961 Cast is not redundant as TaskHandle_t is a typedef. */
dflet 0:50cedd586816 557 {
dflet 0:50cedd586816 558 ( pxMutex->u.uxRecursiveCallCount )++;
dflet 0:50cedd586816 559 xReturn = pdPASS;
dflet 0:50cedd586816 560 }
dflet 0:50cedd586816 561 else
dflet 0:50cedd586816 562 {
dflet 0:50cedd586816 563 xReturn = xQueueGenericReceive( pxMutex, NULL, xTicksToWait, pdFALSE );
dflet 0:50cedd586816 564
dflet 0:50cedd586816 565 /* pdPASS will only be returned if the mutex was successfully
dflet 0:50cedd586816 566 obtained. The calling task may have entered the Blocked state
dflet 0:50cedd586816 567 before reaching here. */
dflet 0:50cedd586816 568 if( xReturn == pdPASS )
dflet 0:50cedd586816 569 {
dflet 0:50cedd586816 570 ( pxMutex->u.uxRecursiveCallCount )++;
dflet 0:50cedd586816 571 }
dflet 0:50cedd586816 572 else
dflet 0:50cedd586816 573 {
dflet 0:50cedd586816 574 traceTAKE_MUTEX_RECURSIVE_FAILED( pxMutex );
dflet 0:50cedd586816 575 }
dflet 0:50cedd586816 576 }
dflet 0:50cedd586816 577
dflet 0:50cedd586816 578 return xReturn;
dflet 0:50cedd586816 579 }
dflet 0:50cedd586816 580
dflet 0:50cedd586816 581 #endif /* configUSE_RECURSIVE_MUTEXES */
dflet 0:50cedd586816 582 /*-----------------------------------------------------------*/
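These two functions sit behind xSemaphoreGiveRecursive() and xSemaphoreTakeRecursive(). Every successful take bumps uxRecursiveCallCount and must be balanced by a give before other tasks can obtain the mutex, as in this sketch:

    SemaphoreHandle_t xRecMutex = xSemaphoreCreateRecursiveMutex();

    if( xSemaphoreTakeRecursive( xRecMutex, portMAX_DELAY ) == pdPASS )
    {
        /* A second take from the holding task just increments the call
        count rather than blocking. */
        ( void ) xSemaphoreTakeRecursive( xRecMutex, 0 );

        /* Two takes, so two gives are needed before the mutex is free. */
        ( void ) xSemaphoreGiveRecursive( xRecMutex );
        ( void ) xSemaphoreGiveRecursive( xRecMutex );
    }
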
dflet 0:50cedd586816 583
dflet 0:50cedd586816 584 #if ( configUSE_COUNTING_SEMAPHORES == 1 )
dflet 0:50cedd586816 585
dflet 0:50cedd586816 586 QueueHandle_t xQueueCreateCountingSemaphore( const UBaseType_t uxMaxCount, const UBaseType_t uxInitialCount )
dflet 0:50cedd586816 587 {
dflet 0:50cedd586816 588 QueueHandle_t xHandle;
dflet 0:50cedd586816 589
dflet 0:50cedd586816 590 configASSERT( uxMaxCount != 0 );
dflet 0:50cedd586816 591 configASSERT( uxInitialCount <= uxMaxCount );
dflet 0:50cedd586816 592
dflet 0:50cedd586816 593 xHandle = xQueueGenericCreate( uxMaxCount, queueSEMAPHORE_QUEUE_ITEM_LENGTH, queueQUEUE_TYPE_COUNTING_SEMAPHORE );
dflet 0:50cedd586816 594
dflet 0:50cedd586816 595 if( xHandle != NULL )
dflet 0:50cedd586816 596 {
dflet 0:50cedd586816 597 ( ( Queue_t * ) xHandle )->uxMessagesWaiting = uxInitialCount;
dflet 0:50cedd586816 598
dflet 0:50cedd586816 599 traceCREATE_COUNTING_SEMAPHORE();
dflet 0:50cedd586816 600 }
dflet 0:50cedd586816 601 else
dflet 0:50cedd586816 602 {
dflet 0:50cedd586816 603 traceCREATE_COUNTING_SEMAPHORE_FAILED();
dflet 0:50cedd586816 604 }
dflet 0:50cedd586816 605
dflet 0:50cedd586816 606 configASSERT( xHandle );
dflet 0:50cedd586816 607 return xHandle;
dflet 0:50cedd586816 608 }
dflet 0:50cedd586816 609
dflet 0:50cedd586816 610 #endif /* configUSE_COUNTING_SEMAPHORES */
dflet 0:50cedd586816 611 /*-----------------------------------------------------------*/
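Reached through the xSemaphoreCreateCounting() macro; because uxItemSize is zero the count lives entirely in uxMessagesWaiting. Sketch:

    /* Up to five 'gives' can be banked; start with none available. */
    SemaphoreHandle_t xCountSem = xSemaphoreCreateCounting( 5, 0 );

    if( xCountSem != NULL )
    {
        ( void ) xSemaphoreGive( xCountSem );                /* Count is now 1. */
        ( void ) xSemaphoreTake( xCountSem, portMAX_DELAY ); /* Count back to 0. */
    }
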
dflet 0:50cedd586816 612
dflet 0:50cedd586816 613 BaseType_t xQueueGenericSend( QueueHandle_t xQueue, const void * const pvItemToQueue, TickType_t xTicksToWait, const BaseType_t xCopyPosition )
dflet 0:50cedd586816 614 {
dflet 0:50cedd586816 615 BaseType_t xEntryTimeSet = pdFALSE, xYieldRequired;
dflet 0:50cedd586816 616 TimeOut_t xTimeOut;
dflet 0:50cedd586816 617 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
dflet 0:50cedd586816 618
dflet 0:50cedd586816 619 configASSERT( pxQueue );
dflet 0:50cedd586816 620 configASSERT( !( ( pvItemToQueue == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
dflet 0:50cedd586816 621 configASSERT( !( ( xCopyPosition == queueOVERWRITE ) && ( pxQueue->uxLength != 1 ) ) );
dflet 0:50cedd586816 622 #if ( ( INCLUDE_xTaskGetSchedulerState == 1 ) || ( configUSE_TIMERS == 1 ) )
dflet 0:50cedd586816 623 {
dflet 0:50cedd586816 624 configASSERT( !( ( xTaskGetSchedulerState() == taskSCHEDULER_SUSPENDED ) && ( xTicksToWait != 0 ) ) );
dflet 0:50cedd586816 625 }
dflet 0:50cedd586816 626 #endif
dflet 0:50cedd586816 627
dflet 0:50cedd586816 628
dflet 0:50cedd586816 629 /* This function relaxes the coding standard somewhat to allow return
dflet 0:50cedd586816 630 statements within the function itself. This is done in the interest
dflet 0:50cedd586816 631 of execution time efficiency. */
dflet 0:50cedd586816 632 for( ;; )
dflet 0:50cedd586816 633 {
dflet 0:50cedd586816 634 taskENTER_CRITICAL();
dflet 0:50cedd586816 635 {
dflet 0:50cedd586816 636 /* Is there room on the queue now? The running task must be the
dflet 0:50cedd586816 637 highest priority task wanting to access the queue. If the head item
dflet 0:50cedd586816 638 in the queue is to be overwritten then it does not matter if the
dflet 0:50cedd586816 639 queue is full. */
dflet 0:50cedd586816 640 if( ( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) || ( xCopyPosition == queueOVERWRITE ) )
dflet 0:50cedd586816 641 {
dflet 0:50cedd586816 642 traceQUEUE_SEND( pxQueue );
dflet 0:50cedd586816 643 xYieldRequired = prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition );
dflet 0:50cedd586816 644
dflet 0:50cedd586816 645 #if ( configUSE_QUEUE_SETS == 1 )
dflet 0:50cedd586816 646 {
dflet 0:50cedd586816 647 if( pxQueue->pxQueueSetContainer != NULL )
dflet 0:50cedd586816 648 {
dflet 0:50cedd586816 649 if( prvNotifyQueueSetContainer( pxQueue, xCopyPosition ) == pdTRUE )
dflet 0:50cedd586816 650 {
dflet 0:50cedd586816 651 /* The queue is a member of a queue set, and posting
dflet 0:50cedd586816 652 to the queue set caused a higher priority task to
dflet 0:50cedd586816 653 unblock. A context switch is required. */
dflet 0:50cedd586816 654 queueYIELD_IF_USING_PREEMPTION();
dflet 0:50cedd586816 655 }
dflet 0:50cedd586816 656 else
dflet 0:50cedd586816 657 {
dflet 0:50cedd586816 658 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 659 }
dflet 0:50cedd586816 660 }
dflet 0:50cedd586816 661 else
dflet 0:50cedd586816 662 {
dflet 0:50cedd586816 663 /* If there was a task waiting for data to arrive on the
dflet 0:50cedd586816 664 queue then unblock it now. */
dflet 0:50cedd586816 665 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
dflet 0:50cedd586816 666 {
dflet 0:50cedd586816 667 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) == pdTRUE )
dflet 0:50cedd586816 668 {
dflet 0:50cedd586816 669 /* The unblocked task has a priority higher than
dflet 0:50cedd586816 670 our own so yield immediately. Yes it is ok to
dflet 0:50cedd586816 671 do this from within the critical section - the
dflet 0:50cedd586816 672 kernel takes care of that. */
dflet 0:50cedd586816 673 queueYIELD_IF_USING_PREEMPTION();
dflet 0:50cedd586816 674 }
dflet 0:50cedd586816 675 else
dflet 0:50cedd586816 676 {
dflet 0:50cedd586816 677 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 678 }
dflet 0:50cedd586816 679 }
dflet 0:50cedd586816 680 else if( xYieldRequired != pdFALSE )
dflet 0:50cedd586816 681 {
dflet 0:50cedd586816 682 /* This path is a special case that will only get
dflet 0:50cedd586816 683 executed if the task was holding multiple mutexes
dflet 0:50cedd586816 684 and the mutexes were given back in an order that is
dflet 0:50cedd586816 685 different to that in which they were taken. */
dflet 0:50cedd586816 686 queueYIELD_IF_USING_PREEMPTION();
dflet 0:50cedd586816 687 }
dflet 0:50cedd586816 688 else
dflet 0:50cedd586816 689 {
dflet 0:50cedd586816 690 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 691 }
dflet 0:50cedd586816 692 }
dflet 0:50cedd586816 693 }
dflet 0:50cedd586816 694 #else /* configUSE_QUEUE_SETS */
dflet 0:50cedd586816 695 {
dflet 0:50cedd586816 696 /* If there was a task waiting for data to arrive on the
dflet 0:50cedd586816 697 queue then unblock it now. */
dflet 0:50cedd586816 698 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
dflet 0:50cedd586816 699 {
dflet 0:50cedd586816 700 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) == pdTRUE )
dflet 0:50cedd586816 701 {
dflet 0:50cedd586816 702 /* The unblocked task has a priority higher than
dflet 0:50cedd586816 703 our own so yield immediately. Yes it is ok to do
dflet 0:50cedd586816 704 this from within the critical section - the kernel
dflet 0:50cedd586816 705 takes care of that. */
dflet 0:50cedd586816 706 queueYIELD_IF_USING_PREEMPTION();
dflet 0:50cedd586816 707 }
dflet 0:50cedd586816 708 else
dflet 0:50cedd586816 709 {
dflet 0:50cedd586816 710 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 711 }
dflet 0:50cedd586816 712 }
dflet 0:50cedd586816 713 else if( xYieldRequired != pdFALSE )
dflet 0:50cedd586816 714 {
dflet 0:50cedd586816 715 /* This path is a special case that will only get
dflet 0:50cedd586816 716 executed if the task was holding multiple mutexes and
dflet 0:50cedd586816 717 the mutexes were given back in an order that is
dflet 0:50cedd586816 718 different to that in which they were taken. */
dflet 0:50cedd586816 719 queueYIELD_IF_USING_PREEMPTION();
dflet 0:50cedd586816 720 }
dflet 0:50cedd586816 721 else
dflet 0:50cedd586816 722 {
dflet 0:50cedd586816 723 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 724 }
dflet 0:50cedd586816 725 }
dflet 0:50cedd586816 726 #endif /* configUSE_QUEUE_SETS */
dflet 0:50cedd586816 727
dflet 0:50cedd586816 728 taskEXIT_CRITICAL();
dflet 0:50cedd586816 729 return pdPASS;
dflet 0:50cedd586816 730 }
dflet 0:50cedd586816 731 else
dflet 0:50cedd586816 732 {
dflet 0:50cedd586816 733 if( xTicksToWait == ( TickType_t ) 0 )
dflet 0:50cedd586816 734 {
dflet 0:50cedd586816 735 /* The queue was full and no block time is specified (or
dflet 0:50cedd586816 736 the block time has expired) so leave now. */
dflet 0:50cedd586816 737 taskEXIT_CRITICAL();
dflet 0:50cedd586816 738
dflet 0:50cedd586816 739 /* Return to the original privilege level before exiting
dflet 0:50cedd586816 740 the function. */
dflet 0:50cedd586816 741 traceQUEUE_SEND_FAILED( pxQueue );
dflet 0:50cedd586816 742 return errQUEUE_FULL;
dflet 0:50cedd586816 743 }
dflet 0:50cedd586816 744 else if( xEntryTimeSet == pdFALSE )
dflet 0:50cedd586816 745 {
dflet 0:50cedd586816 746 /* The queue was full and a block time was specified so
dflet 0:50cedd586816 747 configure the timeout structure. */
dflet 0:50cedd586816 748 vTaskSetTimeOutState( &xTimeOut );
dflet 0:50cedd586816 749 xEntryTimeSet = pdTRUE;
dflet 0:50cedd586816 750 }
dflet 0:50cedd586816 751 else
dflet 0:50cedd586816 752 {
dflet 0:50cedd586816 753 /* Entry time was already set. */
dflet 0:50cedd586816 754 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 755 }
dflet 0:50cedd586816 756 }
dflet 0:50cedd586816 757 }
dflet 0:50cedd586816 758 taskEXIT_CRITICAL();
dflet 0:50cedd586816 759
dflet 0:50cedd586816 760 /* Interrupts and other tasks can send to and receive from the queue
dflet 0:50cedd586816 761 now the critical section has been exited. */
dflet 0:50cedd586816 762
dflet 0:50cedd586816 763 vTaskSuspendAll();
dflet 0:50cedd586816 764 prvLockQueue( pxQueue );
dflet 0:50cedd586816 765
dflet 0:50cedd586816 766 /* Update the timeout state to see if it has expired yet. */
dflet 0:50cedd586816 767 if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE )
dflet 0:50cedd586816 768 {
dflet 0:50cedd586816 769 if( prvIsQueueFull( pxQueue ) != pdFALSE )
dflet 0:50cedd586816 770 {
dflet 0:50cedd586816 771 traceBLOCKING_ON_QUEUE_SEND( pxQueue );
dflet 0:50cedd586816 772 vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToSend ), xTicksToWait );
dflet 0:50cedd586816 773
dflet 0:50cedd586816 774 /* Unlocking the queue means queue events can affect the
dflet 0:50cedd586816 775 event list. It is possible that interrupts occurring now
dflet 0:50cedd586816 776 remove this task from the event list again - but as the
dflet 0:50cedd586816 777 scheduler is suspended the task will go onto the pending
dflet 0:50cedd586816 778 ready list instead of the actual ready list. */
dflet 0:50cedd586816 779 prvUnlockQueue( pxQueue );
dflet 0:50cedd586816 780
dflet 0:50cedd586816 781 /* Resuming the scheduler will move tasks from the pending
dflet 0:50cedd586816 782 ready list into the ready list - so it is feasible that this
dflet 0:50cedd586816 783 task is already in a ready list before it yields - in which
dflet 0:50cedd586816 784 case the yield will not cause a context switch unless there
dflet 0:50cedd586816 785 is also a higher priority task in the pending ready list. */
dflet 0:50cedd586816 786 if( xTaskResumeAll() == pdFALSE )
dflet 0:50cedd586816 787 {
dflet 0:50cedd586816 788 portYIELD_WITHIN_API();
dflet 0:50cedd586816 789 }
dflet 0:50cedd586816 790 }
dflet 0:50cedd586816 791 else
dflet 0:50cedd586816 792 {
dflet 0:50cedd586816 793 /* Try again. */
dflet 0:50cedd586816 794 prvUnlockQueue( pxQueue );
dflet 0:50cedd586816 795 ( void ) xTaskResumeAll();
dflet 0:50cedd586816 796 }
dflet 0:50cedd586816 797 }
dflet 0:50cedd586816 798 else
dflet 0:50cedd586816 799 {
dflet 0:50cedd586816 800 /* The timeout has expired. */
dflet 0:50cedd586816 801 prvUnlockQueue( pxQueue );
dflet 0:50cedd586816 802 ( void ) xTaskResumeAll();
dflet 0:50cedd586816 803
dflet 0:50cedd586816 804 /* Return to the original privilege level before exiting the
dflet 0:50cedd586816 805 function. */
dflet 0:50cedd586816 806 traceQUEUE_SEND_FAILED( pxQueue );
dflet 0:50cedd586816 807 return errQUEUE_FULL;
dflet 0:50cedd586816 808 }
dflet 0:50cedd586816 809 }
dflet 0:50cedd586816 810 }
dflet 0:50cedd586816 811 /*-----------------------------------------------------------*/
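Applications normally call this via xQueueSend(), xQueueSendToBack(), xQueueSendToFront() or xQueueOverwrite(). A sketch that handles the errQUEUE_FULL path (reusing the illustrative xQueue created above):

    uint32_t ulValue = 123;

    /* Wait up to 100 ticks for space to become available. */
    if( xQueueSend( xQueue, &ulValue, ( TickType_t ) 100 ) != pdPASS )
    {
        /* errQUEUE_FULL: the queue remained full for the whole block
        time. */
    }
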
dflet 0:50cedd586816 812
dflet 0:50cedd586816 813 #if ( configUSE_ALTERNATIVE_API == 1 )
dflet 0:50cedd586816 814
dflet 0:50cedd586816 815 BaseType_t xQueueAltGenericSend( QueueHandle_t xQueue, const void * const pvItemToQueue, TickType_t xTicksToWait, BaseType_t xCopyPosition )
dflet 0:50cedd586816 816 {
dflet 0:50cedd586816 817 BaseType_t xEntryTimeSet = pdFALSE;
dflet 0:50cedd586816 818 TimeOut_t xTimeOut;
dflet 0:50cedd586816 819 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
dflet 0:50cedd586816 820
dflet 0:50cedd586816 821 configASSERT( pxQueue );
dflet 0:50cedd586816 822 configASSERT( !( ( pvItemToQueue == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
dflet 0:50cedd586816 823
dflet 0:50cedd586816 824 for( ;; )
dflet 0:50cedd586816 825 {
dflet 0:50cedd586816 826 taskENTER_CRITICAL();
dflet 0:50cedd586816 827 {
dflet 0:50cedd586816 828 /* Is there room on the queue now? To be running we must be
dflet 0:50cedd586816 829 the highest priority task wanting to access the queue. */
dflet 0:50cedd586816 830 if( pxQueue->uxMessagesWaiting < pxQueue->uxLength )
dflet 0:50cedd586816 831 {
dflet 0:50cedd586816 832 traceQUEUE_SEND( pxQueue );
dflet 0:50cedd586816 833 prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition );
dflet 0:50cedd586816 834
dflet 0:50cedd586816 835 /* If there was a task waiting for data to arrive on the
dflet 0:50cedd586816 836 queue then unblock it now. */
dflet 0:50cedd586816 837 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
dflet 0:50cedd586816 838 {
dflet 0:50cedd586816 839 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) == pdTRUE )
dflet 0:50cedd586816 840 {
dflet 0:50cedd586816 841 /* The unblocked task has a priority higher than
dflet 0:50cedd586816 842 our own so yield immediately. */
dflet 0:50cedd586816 843 portYIELD_WITHIN_API();
dflet 0:50cedd586816 844 }
dflet 0:50cedd586816 845 else
dflet 0:50cedd586816 846 {
dflet 0:50cedd586816 847 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 848 }
dflet 0:50cedd586816 849 }
dflet 0:50cedd586816 850 else
dflet 0:50cedd586816 851 {
dflet 0:50cedd586816 852 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 853 }
dflet 0:50cedd586816 854
dflet 0:50cedd586816 855 taskEXIT_CRITICAL();
dflet 0:50cedd586816 856 return pdPASS;
dflet 0:50cedd586816 857 }
dflet 0:50cedd586816 858 else
dflet 0:50cedd586816 859 {
dflet 0:50cedd586816 860 if( xTicksToWait == ( TickType_t ) 0 )
dflet 0:50cedd586816 861 {
dflet 0:50cedd586816 862 taskEXIT_CRITICAL();
dflet 0:50cedd586816 863 return errQUEUE_FULL;
dflet 0:50cedd586816 864 }
dflet 0:50cedd586816 865 else if( xEntryTimeSet == pdFALSE )
dflet 0:50cedd586816 866 {
dflet 0:50cedd586816 867 vTaskSetTimeOutState( &xTimeOut );
dflet 0:50cedd586816 868 xEntryTimeSet = pdTRUE;
dflet 0:50cedd586816 869 }
dflet 0:50cedd586816 870 }
dflet 0:50cedd586816 871 }
dflet 0:50cedd586816 872 taskEXIT_CRITICAL();
dflet 0:50cedd586816 873
dflet 0:50cedd586816 874 taskENTER_CRITICAL();
dflet 0:50cedd586816 875 {
dflet 0:50cedd586816 876 if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE )
dflet 0:50cedd586816 877 {
dflet 0:50cedd586816 878 if( prvIsQueueFull( pxQueue ) != pdFALSE )
dflet 0:50cedd586816 879 {
dflet 0:50cedd586816 880 traceBLOCKING_ON_QUEUE_SEND( pxQueue );
dflet 0:50cedd586816 881 vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToSend ), xTicksToWait );
dflet 0:50cedd586816 882 portYIELD_WITHIN_API();
dflet 0:50cedd586816 883 }
dflet 0:50cedd586816 884 else
dflet 0:50cedd586816 885 {
dflet 0:50cedd586816 886 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 887 }
dflet 0:50cedd586816 888 }
dflet 0:50cedd586816 889 else
dflet 0:50cedd586816 890 {
dflet 0:50cedd586816 891 taskEXIT_CRITICAL();
dflet 0:50cedd586816 892 traceQUEUE_SEND_FAILED( pxQueue );
dflet 0:50cedd586816 893 return errQUEUE_FULL;
dflet 0:50cedd586816 894 }
dflet 0:50cedd586816 895 }
dflet 0:50cedd586816 896 taskEXIT_CRITICAL();
dflet 0:50cedd586816 897 }
dflet 0:50cedd586816 898 }
dflet 0:50cedd586816 899
dflet 0:50cedd586816 900 #endif /* configUSE_ALTERNATIVE_API */
dflet 0:50cedd586816 901 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 902
dflet 0:50cedd586816 903 #if ( configUSE_ALTERNATIVE_API == 1 )
dflet 0:50cedd586816 904
dflet 0:50cedd586816 905 BaseType_t xQueueAltGenericReceive( QueueHandle_t xQueue, void * const pvBuffer, TickType_t xTicksToWait, BaseType_t xJustPeeking )
dflet 0:50cedd586816 906 {
dflet 0:50cedd586816 907 BaseType_t xEntryTimeSet = pdFALSE;
dflet 0:50cedd586816 908 TimeOut_t xTimeOut;
dflet 0:50cedd586816 909 int8_t *pcOriginalReadPosition;
dflet 0:50cedd586816 910 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
dflet 0:50cedd586816 911
dflet 0:50cedd586816 912 configASSERT( pxQueue );
dflet 0:50cedd586816 913 configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
dflet 0:50cedd586816 914
dflet 0:50cedd586816 915 for( ;; )
dflet 0:50cedd586816 916 {
dflet 0:50cedd586816 917 taskENTER_CRITICAL();
dflet 0:50cedd586816 918 {
dflet 0:50cedd586816 919 if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
dflet 0:50cedd586816 920 {
dflet 0:50cedd586816 921 /* Remember our read position in case we are just peeking. */
dflet 0:50cedd586816 922 pcOriginalReadPosition = pxQueue->u.pcReadFrom;
dflet 0:50cedd586816 923
dflet 0:50cedd586816 924 prvCopyDataFromQueue( pxQueue, pvBuffer );
dflet 0:50cedd586816 925
dflet 0:50cedd586816 926 if( xJustPeeking == pdFALSE )
dflet 0:50cedd586816 927 {
dflet 0:50cedd586816 928 traceQUEUE_RECEIVE( pxQueue );
dflet 0:50cedd586816 929
dflet 0:50cedd586816 930 /* Data is actually being removed (not just peeked). */
dflet 0:50cedd586816 931 --( pxQueue->uxMessagesWaiting );
dflet 0:50cedd586816 932
dflet 0:50cedd586816 933 #if ( configUSE_MUTEXES == 1 )
dflet 0:50cedd586816 934 {
dflet 0:50cedd586816 935 if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
dflet 0:50cedd586816 936 {
dflet 0:50cedd586816 937 /* Record the information required to implement
dflet 0:50cedd586816 938 priority inheritance should it become necessary. */
dflet 0:50cedd586816 939 pxQueue->pxMutexHolder = ( int8_t * ) xTaskGetCurrentTaskHandle();
dflet 0:50cedd586816 940 }
dflet 0:50cedd586816 941 else
dflet 0:50cedd586816 942 {
dflet 0:50cedd586816 943 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 944 }
dflet 0:50cedd586816 945 }
dflet 0:50cedd586816 946 #endif
dflet 0:50cedd586816 947
dflet 0:50cedd586816 948 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
dflet 0:50cedd586816 949 {
dflet 0:50cedd586816 950 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) == pdTRUE )
dflet 0:50cedd586816 951 {
dflet 0:50cedd586816 952 portYIELD_WITHIN_API();
dflet 0:50cedd586816 953 }
dflet 0:50cedd586816 954 else
dflet 0:50cedd586816 955 {
dflet 0:50cedd586816 956 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 957 }
dflet 0:50cedd586816 958 }
dflet 0:50cedd586816 959 }
dflet 0:50cedd586816 960 else
dflet 0:50cedd586816 961 {
dflet 0:50cedd586816 962 traceQUEUE_PEEK( pxQueue );
dflet 0:50cedd586816 963
dflet 0:50cedd586816 964 /* The data is not being removed, so reset our read
dflet 0:50cedd586816 965 pointer. */
dflet 0:50cedd586816 966 pxQueue->u.pcReadFrom = pcOriginalReadPosition;
dflet 0:50cedd586816 967
dflet 0:50cedd586816 968 /* The data is being left in the queue, so see if there are
dflet 0:50cedd586816 969 any other tasks waiting for the data. */
dflet 0:50cedd586816 970 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
dflet 0:50cedd586816 971 {
dflet 0:50cedd586816 972 /* Tasks that are removed from the event list will get added to
dflet 0:50cedd586816 973 the pending ready list as the scheduler is still suspended. */
dflet 0:50cedd586816 974 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
dflet 0:50cedd586816 975 {
dflet 0:50cedd586816 976 /* The task waiting has a higher priority than this task. */
dflet 0:50cedd586816 977 portYIELD_WITHIN_API();
dflet 0:50cedd586816 978 }
dflet 0:50cedd586816 979 else
dflet 0:50cedd586816 980 {
dflet 0:50cedd586816 981 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 982 }
dflet 0:50cedd586816 983 }
dflet 0:50cedd586816 984 else
dflet 0:50cedd586816 985 {
dflet 0:50cedd586816 986 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 987 }
dflet 0:50cedd586816 988 }
dflet 0:50cedd586816 989
dflet 0:50cedd586816 990 taskEXIT_CRITICAL();
dflet 0:50cedd586816 991 return pdPASS;
dflet 0:50cedd586816 992 }
dflet 0:50cedd586816 993 else
dflet 0:50cedd586816 994 {
dflet 0:50cedd586816 995 if( xTicksToWait == ( TickType_t ) 0 )
dflet 0:50cedd586816 996 {
dflet 0:50cedd586816 997 taskEXIT_CRITICAL();
dflet 0:50cedd586816 998 traceQUEUE_RECEIVE_FAILED( pxQueue );
dflet 0:50cedd586816 999 return errQUEUE_EMPTY;
dflet 0:50cedd586816 1000 }
dflet 0:50cedd586816 1001 else if( xEntryTimeSet == pdFALSE )
dflet 0:50cedd586816 1002 {
dflet 0:50cedd586816 1003 vTaskSetTimeOutState( &xTimeOut );
dflet 0:50cedd586816 1004 xEntryTimeSet = pdTRUE;
dflet 0:50cedd586816 1005 }
dflet 0:50cedd586816 1006 }
dflet 0:50cedd586816 1007 }
dflet 0:50cedd586816 1008 taskEXIT_CRITICAL();
dflet 0:50cedd586816 1009
dflet 0:50cedd586816 1010 taskENTER_CRITICAL();
dflet 0:50cedd586816 1011 {
dflet 0:50cedd586816 1012 if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE )
dflet 0:50cedd586816 1013 {
dflet 0:50cedd586816 1014 if( prvIsQueueEmpty( pxQueue ) != pdFALSE )
dflet 0:50cedd586816 1015 {
dflet 0:50cedd586816 1016 traceBLOCKING_ON_QUEUE_RECEIVE( pxQueue );
dflet 0:50cedd586816 1017
dflet 0:50cedd586816 1018 #if ( configUSE_MUTEXES == 1 )
dflet 0:50cedd586816 1019 {
dflet 0:50cedd586816 1020 if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
dflet 0:50cedd586816 1021 {
dflet 0:50cedd586816 1022 taskENTER_CRITICAL();
dflet 0:50cedd586816 1023 {
dflet 0:50cedd586816 1024 vTaskPriorityInherit( ( void * ) pxQueue->pxMutexHolder );
dflet 0:50cedd586816 1025 }
dflet 0:50cedd586816 1026 taskEXIT_CRITICAL();
dflet 0:50cedd586816 1027 }
dflet 0:50cedd586816 1028 else
dflet 0:50cedd586816 1029 {
dflet 0:50cedd586816 1030 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1031 }
dflet 0:50cedd586816 1032 }
dflet 0:50cedd586816 1033 #endif
dflet 0:50cedd586816 1034
dflet 0:50cedd586816 1035 vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToReceive ), xTicksToWait );
dflet 0:50cedd586816 1036 portYIELD_WITHIN_API();
dflet 0:50cedd586816 1037 }
dflet 0:50cedd586816 1038 else
dflet 0:50cedd586816 1039 {
dflet 0:50cedd586816 1040 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1041 }
dflet 0:50cedd586816 1042 }
dflet 0:50cedd586816 1043 else
dflet 0:50cedd586816 1044 {
dflet 0:50cedd586816 1045 taskEXIT_CRITICAL();
dflet 0:50cedd586816 1046 traceQUEUE_RECEIVE_FAILED( pxQueue );
dflet 0:50cedd586816 1047 return errQUEUE_EMPTY;
dflet 0:50cedd586816 1048 }
dflet 0:50cedd586816 1049 }
dflet 0:50cedd586816 1050 taskEXIT_CRITICAL();
dflet 0:50cedd586816 1051 }
dflet 0:50cedd586816 1052 }
dflet 0:50cedd586816 1053
dflet 0:50cedd586816 1054
dflet 0:50cedd586816 1055 #endif /* configUSE_ALTERNATIVE_API */
dflet 0:50cedd586816 1056 /*-----------------------------------------------------------*/
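/* Editor's note (not in the original source): the #endif above closes the
deprecated 'alternative' receive implementation, which uses longer critical
sections in place of the queue-lock scheme used by xQueueGenericReceive()
further below. A minimal calling sketch, assuming configUSE_ALTERNATIVE_API
is set to 1 in FreeRTOSConfig.h; xMsgQueue and Msg_t are hypothetical.

Msg_t xMsg;

    if( xQueueAltReceive( xMsgQueue, &xMsg, pdMS_TO_TICKS( 100 ) ) == pdPASS )
    {
        a message arrived within 100ms and was removed from the queue
    }
*/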
dflet 0:50cedd586816 1057
dflet 0:50cedd586816 1058 BaseType_t xQueueGenericSendFromISR( QueueHandle_t xQueue, const void * const pvItemToQueue, BaseType_t * const pxHigherPriorityTaskWoken, const BaseType_t xCopyPosition )
dflet 0:50cedd586816 1059 {
dflet 0:50cedd586816 1060 BaseType_t xReturn;
dflet 0:50cedd586816 1061 UBaseType_t uxSavedInterruptStatus;
dflet 0:50cedd586816 1062 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
dflet 0:50cedd586816 1063
dflet 0:50cedd586816 1064 configASSERT( pxQueue );
dflet 0:50cedd586816 1065 configASSERT( !( ( pvItemToQueue == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
dflet 0:50cedd586816 1066 configASSERT( !( ( xCopyPosition == queueOVERWRITE ) && ( pxQueue->uxLength != 1 ) ) );
dflet 0:50cedd586816 1067
dflet 0:50cedd586816 1068 /* RTOS ports that support interrupt nesting have the concept of a maximum
dflet 0:50cedd586816 1069 system call (or maximum API call) interrupt priority. Interrupts that are
dflet 0:50cedd586816 1070 above the maximum system call priority are kept permanently enabled, even
dflet 0:50cedd586816 1071 when the RTOS kernel is in a critical section, but cannot make any calls to
dflet 0:50cedd586816 1072 FreeRTOS API functions. If configASSERT() is defined in FreeRTOSConfig.h
dflet 0:50cedd586816 1073 then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion
dflet 0:50cedd586816 1074 failure if a FreeRTOS API function is called from an interrupt that has been
dflet 0:50cedd586816 1075 assigned a priority above the configured maximum system call priority.
dflet 0:50cedd586816 1076 Only FreeRTOS functions that end in FromISR can be called from interrupts
dflet 0:50cedd586816 1077 that have been assigned a priority at or (logically) below the maximum
dflet 0:50cedd586816 1078 system call interrupt priority. FreeRTOS maintains a separate interrupt
dflet 0:50cedd586816 1079 safe API to ensure interrupt entry is as fast and as simple as possible.
dflet 0:50cedd586816 1080 More information (albeit Cortex-M specific) is provided on the following
dflet 0:50cedd586816 1081 link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */
dflet 0:50cedd586816 1082 portASSERT_IF_INTERRUPT_PRIORITY_INVALID();
dflet 0:50cedd586816 1083
dflet 0:50cedd586816 1084 /* Similar to xQueueGenericSend, except without blocking if there is no room
dflet 0:50cedd586816 1085 in the queue. Also don't directly wake a task that was blocked on a queue
dflet 0:50cedd586816 1086 read, instead return a flag to say whether a context switch is required or
dflet 0:50cedd586816 1087 not (i.e. has a task with a higher priority than us been woken by this
dflet 0:50cedd586816 1088 post). */
dflet 0:50cedd586816 1089 uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
dflet 0:50cedd586816 1090 {
dflet 0:50cedd586816 1091 if( ( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) || ( xCopyPosition == queueOVERWRITE ) )
dflet 0:50cedd586816 1092 {
dflet 0:50cedd586816 1093 traceQUEUE_SEND_FROM_ISR( pxQueue );
dflet 0:50cedd586816 1094
dflet 0:50cedd586816 1095 /* Semaphores use xQueueGiveFromISR(), so pxQueue will not be a
dflet 0:50cedd586816 1096 semaphore or mutex. That means prvCopyDataToQueue() cannot result
dflet 0:50cedd586816 1097 in a task disinheriting a priority and prvCopyDataToQueue() can be
dflet 0:50cedd586816 1098 called here even though the disinherit function does not check if
dflet 0:50cedd586816 1099 the scheduler is suspended before accessing the ready lists. */
dflet 0:50cedd586816 1100 ( void ) prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition );
dflet 0:50cedd586816 1101
dflet 0:50cedd586816 1102 /* The event list is not altered if the queue is locked. This will
dflet 0:50cedd586816 1103 be done when the queue is unlocked later. */
dflet 0:50cedd586816 1104 if( pxQueue->xTxLock == queueUNLOCKED )
dflet 0:50cedd586816 1105 {
dflet 0:50cedd586816 1106 #if ( configUSE_QUEUE_SETS == 1 )
dflet 0:50cedd586816 1107 {
dflet 0:50cedd586816 1108 if( pxQueue->pxQueueSetContainer != NULL )
dflet 0:50cedd586816 1109 {
dflet 0:50cedd586816 1110 if( prvNotifyQueueSetContainer( pxQueue, xCopyPosition ) == pdTRUE )
dflet 0:50cedd586816 1111 {
dflet 0:50cedd586816 1112 /* The queue is a member of a queue set, and posting
dflet 0:50cedd586816 1113 to the queue set caused a higher priority task to
dflet 0:50cedd586816 1114 unblock. A context switch is required. */
dflet 0:50cedd586816 1115 if( pxHigherPriorityTaskWoken != NULL )
dflet 0:50cedd586816 1116 {
dflet 0:50cedd586816 1117 *pxHigherPriorityTaskWoken = pdTRUE;
dflet 0:50cedd586816 1118 }
dflet 0:50cedd586816 1119 else
dflet 0:50cedd586816 1120 {
dflet 0:50cedd586816 1121 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1122 }
dflet 0:50cedd586816 1123 }
dflet 0:50cedd586816 1124 else
dflet 0:50cedd586816 1125 {
dflet 0:50cedd586816 1126 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1127 }
dflet 0:50cedd586816 1128 }
dflet 0:50cedd586816 1129 else
dflet 0:50cedd586816 1130 {
dflet 0:50cedd586816 1131 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
dflet 0:50cedd586816 1132 {
dflet 0:50cedd586816 1133 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
dflet 0:50cedd586816 1134 {
dflet 0:50cedd586816 1135 /* The task waiting has a higher priority so
dflet 0:50cedd586816 1136 record that a context switch is required. */
dflet 0:50cedd586816 1137 if( pxHigherPriorityTaskWoken != NULL )
dflet 0:50cedd586816 1138 {
dflet 0:50cedd586816 1139 *pxHigherPriorityTaskWoken = pdTRUE;
dflet 0:50cedd586816 1140 }
dflet 0:50cedd586816 1141 else
dflet 0:50cedd586816 1142 {
dflet 0:50cedd586816 1143 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1144 }
dflet 0:50cedd586816 1145 }
dflet 0:50cedd586816 1146 else
dflet 0:50cedd586816 1147 {
dflet 0:50cedd586816 1148 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1149 }
dflet 0:50cedd586816 1150 }
dflet 0:50cedd586816 1151 else
dflet 0:50cedd586816 1152 {
dflet 0:50cedd586816 1153 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1154 }
dflet 0:50cedd586816 1155 }
dflet 0:50cedd586816 1156 }
dflet 0:50cedd586816 1157 #else /* configUSE_QUEUE_SETS */
dflet 0:50cedd586816 1158 {
dflet 0:50cedd586816 1159 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
dflet 0:50cedd586816 1160 {
dflet 0:50cedd586816 1161 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
dflet 0:50cedd586816 1162 {
dflet 0:50cedd586816 1163 /* The task waiting has a higher priority so record that a
dflet 0:50cedd586816 1164 context switch is required. */
dflet 0:50cedd586816 1165 if( pxHigherPriorityTaskWoken != NULL )
dflet 0:50cedd586816 1166 {
dflet 0:50cedd586816 1167 *pxHigherPriorityTaskWoken = pdTRUE;
dflet 0:50cedd586816 1168 }
dflet 0:50cedd586816 1169 else
dflet 0:50cedd586816 1170 {
dflet 0:50cedd586816 1171 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1172 }
dflet 0:50cedd586816 1173 }
dflet 0:50cedd586816 1174 else
dflet 0:50cedd586816 1175 {
dflet 0:50cedd586816 1176 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1177 }
dflet 0:50cedd586816 1178 }
dflet 0:50cedd586816 1179 else
dflet 0:50cedd586816 1180 {
dflet 0:50cedd586816 1181 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1182 }
dflet 0:50cedd586816 1183 }
dflet 0:50cedd586816 1184 #endif /* configUSE_QUEUE_SETS */
dflet 0:50cedd586816 1185 }
dflet 0:50cedd586816 1186 else
dflet 0:50cedd586816 1187 {
dflet 0:50cedd586816 1188 /* Increment the lock count so the task that unlocks the queue
dflet 0:50cedd586816 1189 knows that data was posted while it was locked. */
dflet 0:50cedd586816 1190 ++( pxQueue->xTxLock );
dflet 0:50cedd586816 1191 }
dflet 0:50cedd586816 1192
dflet 0:50cedd586816 1193 xReturn = pdPASS;
dflet 0:50cedd586816 1194 }
dflet 0:50cedd586816 1195 else
dflet 0:50cedd586816 1196 {
dflet 0:50cedd586816 1197 traceQUEUE_SEND_FROM_ISR_FAILED( pxQueue );
dflet 0:50cedd586816 1198 xReturn = errQUEUE_FULL;
dflet 0:50cedd586816 1199 }
dflet 0:50cedd586816 1200 }
dflet 0:50cedd586816 1201 portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );
dflet 0:50cedd586816 1202
dflet 0:50cedd586816 1203 return xReturn;
dflet 0:50cedd586816 1204 }
dflet 0:50cedd586816 1205 /*-----------------------------------------------------------*/
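/* Editor's sketch (not in the original source): the usual way to reach
xQueueGenericSendFromISR() above is through the xQueueSendFromISR() macro.
The ISR name, xRxQueue and prvReadRxByte() are hypothetical; the woken-flag
idiom and portYIELD_FROM_ISR() (on ports that provide it) are standard
FreeRTOS usage.

void vUARTRxISR( void )
{
uint8_t ucIn = prvReadRxByte();
BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    never block inside an ISR - this call returns errQUEUE_FULL instead
    ( void ) xQueueSendFromISR( xRxQueue, &ucIn, &xHigherPriorityTaskWoken );

    request a context switch on exit if a higher priority task was woken
    portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}
*/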
dflet 0:50cedd586816 1206
dflet 0:50cedd586816 1207 BaseType_t xQueueGiveFromISR( QueueHandle_t xQueue, BaseType_t * const pxHigherPriorityTaskWoken )
dflet 0:50cedd586816 1208 {
dflet 0:50cedd586816 1209 BaseType_t xReturn;
dflet 0:50cedd586816 1210 UBaseType_t uxSavedInterruptStatus;
dflet 0:50cedd586816 1211 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
dflet 0:50cedd586816 1212
dflet 0:50cedd586816 1213 /* Similar to xQueueGenericSendFromISR() but used with semaphores where the
dflet 0:50cedd586816 1214 item size is 0. Don't directly wake a task that was blocked on a queue
dflet 0:50cedd586816 1215 read, instead return a flag to say whether a context switch is required or
dflet 0:50cedd586816 1216 not (i.e. has a task with a higher priority than us been woken by this
dflet 0:50cedd586816 1217 post). */
dflet 0:50cedd586816 1218
dflet 0:50cedd586816 1219 configASSERT( pxQueue );
dflet 0:50cedd586816 1220
dflet 0:50cedd586816 1221 /* xQueueGenericSendFromISR() should be used instead of xQueueGiveFromISR()
dflet 0:50cedd586816 1222 if the item size is not 0. */
dflet 0:50cedd586816 1223 configASSERT( pxQueue->uxItemSize == 0 );
dflet 0:50cedd586816 1224
dflet 0:50cedd586816 1225 /* Normally a mutex would not be given from an interrupt, and doing so is
dflet 0:50cedd586816 1226 definitely wrong if there is a mutex holder, as priority inheritance makes no
dflet 0:50cedd586816 1227 sense for an interrupt, only for tasks. */
dflet 0:50cedd586816 1228 configASSERT( !( ( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX ) && ( pxQueue->pxMutexHolder != NULL ) ) );
dflet 0:50cedd586816 1229
dflet 0:50cedd586816 1230 /* RTOS ports that support interrupt nesting have the concept of a maximum
dflet 0:50cedd586816 1231 system call (or maximum API call) interrupt priority. Interrupts that are
dflet 0:50cedd586816 1232 above the maximum system call priority are kept permanently enabled, even
dflet 0:50cedd586816 1233 when the RTOS kernel is in a critical section, but cannot make any calls to
dflet 0:50cedd586816 1234 FreeRTOS API functions. If configASSERT() is defined in FreeRTOSConfig.h
dflet 0:50cedd586816 1235 then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion
dflet 0:50cedd586816 1236 failure if a FreeRTOS API function is called from an interrupt that has been
dflet 0:50cedd586816 1237 assigned a priority above the configured maximum system call priority.
dflet 0:50cedd586816 1238 Only FreeRTOS functions that end in FromISR can be called from interrupts
dflet 0:50cedd586816 1239 that have been assigned a priority at or (logically) below the maximum
dflet 0:50cedd586816 1240 system call interrupt priority. FreeRTOS maintains a separate interrupt
dflet 0:50cedd586816 1241 safe API to ensure interrupt entry is as fast and as simple as possible.
dflet 0:50cedd586816 1242 More information (albeit Cortex-M specific) is provided on the following
dflet 0:50cedd586816 1243 link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */
dflet 0:50cedd586816 1244 portASSERT_IF_INTERRUPT_PRIORITY_INVALID();
dflet 0:50cedd586816 1245
dflet 0:50cedd586816 1246 uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
dflet 0:50cedd586816 1247 {
dflet 0:50cedd586816 1248 /* When the queue is used to implement a semaphore no data is ever
dflet 0:50cedd586816 1249 moved through the queue but it is still valid to see if the queue 'has
dflet 0:50cedd586816 1250 space'. */
dflet 0:50cedd586816 1251 if( pxQueue->uxMessagesWaiting < pxQueue->uxLength )
dflet 0:50cedd586816 1252 {
dflet 0:50cedd586816 1253 traceQUEUE_SEND_FROM_ISR( pxQueue );
dflet 0:50cedd586816 1254
dflet 0:50cedd586816 1255 /* A task can only have an inherited priority if it is a mutex
dflet 0:50cedd586816 1256 holder - and if there is a mutex holder then the mutex cannot be
dflet 0:50cedd586816 1257 given from an ISR. As this is the ISR version of the function it
dflet 0:50cedd586816 1258 can be assumed there is no mutex holder and no need to determine if
dflet 0:50cedd586816 1259 priority disinheritance is needed. Simply increase the count of
dflet 0:50cedd586816 1260 messages (semaphores) available. */
dflet 0:50cedd586816 1261 ++( pxQueue->uxMessagesWaiting );
dflet 0:50cedd586816 1262
dflet 0:50cedd586816 1263 /* The event list is not altered if the queue is locked. This will
dflet 0:50cedd586816 1264 be done when the queue is unlocked later. */
dflet 0:50cedd586816 1265 if( pxQueue->xTxLock == queueUNLOCKED )
dflet 0:50cedd586816 1266 {
dflet 0:50cedd586816 1267 #if ( configUSE_QUEUE_SETS == 1 )
dflet 0:50cedd586816 1268 {
dflet 0:50cedd586816 1269 if( pxQueue->pxQueueSetContainer != NULL )
dflet 0:50cedd586816 1270 {
dflet 0:50cedd586816 1271 if( prvNotifyQueueSetContainer( pxQueue, queueSEND_TO_BACK ) == pdTRUE )
dflet 0:50cedd586816 1272 {
dflet 0:50cedd586816 1273 /* The semaphore is a member of a queue set, and
dflet 0:50cedd586816 1274 posting to the queue set caused a higher priority
dflet 0:50cedd586816 1275 task to unblock. A context switch is required. */
dflet 0:50cedd586816 1276 if( pxHigherPriorityTaskWoken != NULL )
dflet 0:50cedd586816 1277 {
dflet 0:50cedd586816 1278 *pxHigherPriorityTaskWoken = pdTRUE;
dflet 0:50cedd586816 1279 }
dflet 0:50cedd586816 1280 else
dflet 0:50cedd586816 1281 {
dflet 0:50cedd586816 1282 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1283 }
dflet 0:50cedd586816 1284 }
dflet 0:50cedd586816 1285 else
dflet 0:50cedd586816 1286 {
dflet 0:50cedd586816 1287 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1288 }
dflet 0:50cedd586816 1289 }
dflet 0:50cedd586816 1290 else
dflet 0:50cedd586816 1291 {
dflet 0:50cedd586816 1292 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
dflet 0:50cedd586816 1293 {
dflet 0:50cedd586816 1294 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
dflet 0:50cedd586816 1295 {
dflet 0:50cedd586816 1296 /* The task waiting has a higher priority so
dflet 0:50cedd586816 1297 record that a context switch is required. */
dflet 0:50cedd586816 1298 if( pxHigherPriorityTaskWoken != NULL )
dflet 0:50cedd586816 1299 {
dflet 0:50cedd586816 1300 *pxHigherPriorityTaskWoken = pdTRUE;
dflet 0:50cedd586816 1301 }
dflet 0:50cedd586816 1302 else
dflet 0:50cedd586816 1303 {
dflet 0:50cedd586816 1304 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1305 }
dflet 0:50cedd586816 1306 }
dflet 0:50cedd586816 1307 else
dflet 0:50cedd586816 1308 {
dflet 0:50cedd586816 1309 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1310 }
dflet 0:50cedd586816 1311 }
dflet 0:50cedd586816 1312 else
dflet 0:50cedd586816 1313 {
dflet 0:50cedd586816 1314 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1315 }
dflet 0:50cedd586816 1316 }
dflet 0:50cedd586816 1317 }
dflet 0:50cedd586816 1318 #else /* configUSE_QUEUE_SETS */
dflet 0:50cedd586816 1319 {
dflet 0:50cedd586816 1320 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
dflet 0:50cedd586816 1321 {
dflet 0:50cedd586816 1322 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
dflet 0:50cedd586816 1323 {
dflet 0:50cedd586816 1324 /* The task waiting has a higher priority so record that a
dflet 0:50cedd586816 1325 context switch is required. */
dflet 0:50cedd586816 1326 if( pxHigherPriorityTaskWoken != NULL )
dflet 0:50cedd586816 1327 {
dflet 0:50cedd586816 1328 *pxHigherPriorityTaskWoken = pdTRUE;
dflet 0:50cedd586816 1329 }
dflet 0:50cedd586816 1330 else
dflet 0:50cedd586816 1331 {
dflet 0:50cedd586816 1332 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1333 }
dflet 0:50cedd586816 1334 }
dflet 0:50cedd586816 1335 else
dflet 0:50cedd586816 1336 {
dflet 0:50cedd586816 1337 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1338 }
dflet 0:50cedd586816 1339 }
dflet 0:50cedd586816 1340 else
dflet 0:50cedd586816 1341 {
dflet 0:50cedd586816 1342 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1343 }
dflet 0:50cedd586816 1344 }
dflet 0:50cedd586816 1345 #endif /* configUSE_QUEUE_SETS */
dflet 0:50cedd586816 1346 }
dflet 0:50cedd586816 1347 else
dflet 0:50cedd586816 1348 {
dflet 0:50cedd586816 1349 /* Increment the lock count so the task that unlocks the queue
dflet 0:50cedd586816 1350 knows that data was posted while it was locked. */
dflet 0:50cedd586816 1351 ++( pxQueue->xTxLock );
dflet 0:50cedd586816 1352 }
dflet 0:50cedd586816 1353
dflet 0:50cedd586816 1354 xReturn = pdPASS;
dflet 0:50cedd586816 1355 }
dflet 0:50cedd586816 1356 else
dflet 0:50cedd586816 1357 {
dflet 0:50cedd586816 1358 traceQUEUE_SEND_FROM_ISR_FAILED( pxQueue );
dflet 0:50cedd586816 1359 xReturn = errQUEUE_FULL;
dflet 0:50cedd586816 1360 }
dflet 0:50cedd586816 1361 }
dflet 0:50cedd586816 1362 portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );
dflet 0:50cedd586816 1363
dflet 0:50cedd586816 1364 return xReturn;
dflet 0:50cedd586816 1365 }
dflet 0:50cedd586816 1366 /*-----------------------------------------------------------*/
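/* Editor's sketch (not in the original source): xQueueGiveFromISR() above is
normally reached through the xSemaphoreGiveFromISR() wrapper. A deferred
interrupt pattern, assuming a hypothetical binary semaphore xWakeSemaphore
created elsewhere with xSemaphoreCreateBinary().

void vTimerISR( void )
{
BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    xSemaphoreGiveFromISR( xWakeSemaphore, &xHigherPriorityTaskWoken );
    portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}

A handler task blocked on xSemaphoreTake( xWakeSemaphore, portMAX_DELAY )
then runs as soon as the ISR completes.
*/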
dflet 0:50cedd586816 1367
dflet 0:50cedd586816 1368 BaseType_t xQueueGenericReceive( QueueHandle_t xQueue, void * const pvBuffer, TickType_t xTicksToWait, const BaseType_t xJustPeeking )
dflet 0:50cedd586816 1369 {
dflet 0:50cedd586816 1370 BaseType_t xEntryTimeSet = pdFALSE;
dflet 0:50cedd586816 1371 TimeOut_t xTimeOut;
dflet 0:50cedd586816 1372 int8_t *pcOriginalReadPosition;
dflet 0:50cedd586816 1373 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
dflet 0:50cedd586816 1374
dflet 0:50cedd586816 1375 configASSERT( pxQueue );
dflet 0:50cedd586816 1376 configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
dflet 0:50cedd586816 1377 #if ( ( INCLUDE_xTaskGetSchedulerState == 1 ) || ( configUSE_TIMERS == 1 ) )
dflet 0:50cedd586816 1378 {
dflet 0:50cedd586816 1379 configASSERT( !( ( xTaskGetSchedulerState() == taskSCHEDULER_SUSPENDED ) && ( xTicksToWait != 0 ) ) );
dflet 0:50cedd586816 1380 }
dflet 0:50cedd586816 1381 #endif
dflet 0:50cedd586816 1382
dflet 0:50cedd586816 1383 /* This function relaxes the coding standard somewhat to allow return
dflet 0:50cedd586816 1384 statements within the function itself. This is done in the interest
dflet 0:50cedd586816 1385 of execution time efficiency. */
dflet 0:50cedd586816 1386
dflet 0:50cedd586816 1387 for( ;; )
dflet 0:50cedd586816 1388 {
dflet 0:50cedd586816 1389 taskENTER_CRITICAL();
dflet 0:50cedd586816 1390 {
dflet 0:50cedd586816 1391 /* Is there data in the queue now? To be running the calling task
dflet 0:50cedd586816 1392 must be the highest priority task wanting to access the queue. */
dflet 0:50cedd586816 1393 if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
dflet 0:50cedd586816 1394 {
dflet 0:50cedd586816 1395 /* Remember the read position in case the queue is only being
dflet 0:50cedd586816 1396 peeked. */
dflet 0:50cedd586816 1397 pcOriginalReadPosition = pxQueue->u.pcReadFrom;
dflet 0:50cedd586816 1398
dflet 0:50cedd586816 1399 prvCopyDataFromQueue( pxQueue, pvBuffer );
dflet 0:50cedd586816 1400
dflet 0:50cedd586816 1401 if( xJustPeeking == pdFALSE )
dflet 0:50cedd586816 1402 {
dflet 0:50cedd586816 1403 traceQUEUE_RECEIVE( pxQueue );
dflet 0:50cedd586816 1404
dflet 0:50cedd586816 1405 /* Actually removing data, not just peeking. */
dflet 0:50cedd586816 1406 --( pxQueue->uxMessagesWaiting );
dflet 0:50cedd586816 1407
dflet 0:50cedd586816 1408 #if ( configUSE_MUTEXES == 1 )
dflet 0:50cedd586816 1409 {
dflet 0:50cedd586816 1410 if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
dflet 0:50cedd586816 1411 {
dflet 0:50cedd586816 1412 /* Record the information required to implement
dflet 0:50cedd586816 1413 priority inheritance should it become necessary. */
dflet 0:50cedd586816 1414 pxQueue->pxMutexHolder = ( int8_t * ) pvTaskIncrementMutexHeldCount(); /*lint !e961 Cast is not redundant as TaskHandle_t is a typedef. */
dflet 0:50cedd586816 1415 }
dflet 0:50cedd586816 1416 else
dflet 0:50cedd586816 1417 {
dflet 0:50cedd586816 1418 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1419 }
dflet 0:50cedd586816 1420 }
dflet 0:50cedd586816 1421 #endif /* configUSE_MUTEXES */
dflet 0:50cedd586816 1422
dflet 0:50cedd586816 1423 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
dflet 0:50cedd586816 1424 {
dflet 0:50cedd586816 1425 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) == pdTRUE )
dflet 0:50cedd586816 1426 {
dflet 0:50cedd586816 1427 queueYIELD_IF_USING_PREEMPTION();
dflet 0:50cedd586816 1428 }
dflet 0:50cedd586816 1429 else
dflet 0:50cedd586816 1430 {
dflet 0:50cedd586816 1431 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1432 }
dflet 0:50cedd586816 1433 }
dflet 0:50cedd586816 1434 else
dflet 0:50cedd586816 1435 {
dflet 0:50cedd586816 1436 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1437 }
dflet 0:50cedd586816 1438 }
dflet 0:50cedd586816 1439 else
dflet 0:50cedd586816 1440 {
dflet 0:50cedd586816 1441 traceQUEUE_PEEK( pxQueue );
dflet 0:50cedd586816 1442
dflet 0:50cedd586816 1443 /* The data is not being removed, so reset the read
dflet 0:50cedd586816 1444 pointer. */
dflet 0:50cedd586816 1445 pxQueue->u.pcReadFrom = pcOriginalReadPosition;
dflet 0:50cedd586816 1446
dflet 0:50cedd586816 1447 /* The data is being left in the queue, so see if there are
dflet 0:50cedd586816 1448 any other tasks waiting for the data. */
dflet 0:50cedd586816 1449 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
dflet 0:50cedd586816 1450 {
dflet 0:50cedd586816 1451 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
dflet 0:50cedd586816 1452 {
dflet 0:50cedd586816 1453 /* The task waiting has a higher priority than this task. */
dflet 0:50cedd586816 1454 queueYIELD_IF_USING_PREEMPTION();
dflet 0:50cedd586816 1455 }
dflet 0:50cedd586816 1456 else
dflet 0:50cedd586816 1457 {
dflet 0:50cedd586816 1458 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1459 }
dflet 0:50cedd586816 1460 }
dflet 0:50cedd586816 1461 else
dflet 0:50cedd586816 1462 {
dflet 0:50cedd586816 1463 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1464 }
dflet 0:50cedd586816 1465 }
dflet 0:50cedd586816 1466
dflet 0:50cedd586816 1467 taskEXIT_CRITICAL();
dflet 0:50cedd586816 1468 return pdPASS;
dflet 0:50cedd586816 1469 }
dflet 0:50cedd586816 1470 else
dflet 0:50cedd586816 1471 {
dflet 0:50cedd586816 1472 if( xTicksToWait == ( TickType_t ) 0 )
dflet 0:50cedd586816 1473 {
dflet 0:50cedd586816 1474 /* The queue was empty and no block time is specified (or
dflet 0:50cedd586816 1475 the block time has expired) so leave now. */
dflet 0:50cedd586816 1476 taskEXIT_CRITICAL();
dflet 0:50cedd586816 1477 traceQUEUE_RECEIVE_FAILED( pxQueue );
dflet 0:50cedd586816 1478 return errQUEUE_EMPTY;
dflet 0:50cedd586816 1479 }
dflet 0:50cedd586816 1480 else if( xEntryTimeSet == pdFALSE )
dflet 0:50cedd586816 1481 {
dflet 0:50cedd586816 1482 /* The queue was empty and a block time was specified so
dflet 0:50cedd586816 1483 configure the timeout structure. */
dflet 0:50cedd586816 1484 vTaskSetTimeOutState( &xTimeOut );
dflet 0:50cedd586816 1485 xEntryTimeSet = pdTRUE;
dflet 0:50cedd586816 1486 }
dflet 0:50cedd586816 1487 else
dflet 0:50cedd586816 1488 {
dflet 0:50cedd586816 1489 /* Entry time was already set. */
dflet 0:50cedd586816 1490 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1491 }
dflet 0:50cedd586816 1492 }
dflet 0:50cedd586816 1493 }
dflet 0:50cedd586816 1494 taskEXIT_CRITICAL();
dflet 0:50cedd586816 1495
dflet 0:50cedd586816 1496 /* Interrupts and other tasks can send to and receive from the queue
dflet 0:50cedd586816 1497 now the critical section has been exited. */
dflet 0:50cedd586816 1498
dflet 0:50cedd586816 1499 vTaskSuspendAll();
dflet 0:50cedd586816 1500 prvLockQueue( pxQueue );
dflet 0:50cedd586816 1501
dflet 0:50cedd586816 1502 /* Update the timeout state to see if it has expired yet. */
dflet 0:50cedd586816 1503 if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE )
dflet 0:50cedd586816 1504 {
dflet 0:50cedd586816 1505 if( prvIsQueueEmpty( pxQueue ) != pdFALSE )
dflet 0:50cedd586816 1506 {
dflet 0:50cedd586816 1507 traceBLOCKING_ON_QUEUE_RECEIVE( pxQueue );
dflet 0:50cedd586816 1508
dflet 0:50cedd586816 1509 #if ( configUSE_MUTEXES == 1 )
dflet 0:50cedd586816 1510 {
dflet 0:50cedd586816 1511 if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
dflet 0:50cedd586816 1512 {
dflet 0:50cedd586816 1513 taskENTER_CRITICAL();
dflet 0:50cedd586816 1514 {
dflet 0:50cedd586816 1515 vTaskPriorityInherit( ( void * ) pxQueue->pxMutexHolder );
dflet 0:50cedd586816 1516 }
dflet 0:50cedd586816 1517 taskEXIT_CRITICAL();
dflet 0:50cedd586816 1518 }
dflet 0:50cedd586816 1519 else
dflet 0:50cedd586816 1520 {
dflet 0:50cedd586816 1521 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1522 }
dflet 0:50cedd586816 1523 }
dflet 0:50cedd586816 1524 #endif
dflet 0:50cedd586816 1525
dflet 0:50cedd586816 1526 vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToReceive ), xTicksToWait );
dflet 0:50cedd586816 1527 prvUnlockQueue( pxQueue );
dflet 0:50cedd586816 1528 if( xTaskResumeAll() == pdFALSE )
dflet 0:50cedd586816 1529 {
dflet 0:50cedd586816 1530 portYIELD_WITHIN_API();
dflet 0:50cedd586816 1531 }
dflet 0:50cedd586816 1532 else
dflet 0:50cedd586816 1533 {
dflet 0:50cedd586816 1534 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1535 }
dflet 0:50cedd586816 1536 }
dflet 0:50cedd586816 1537 else
dflet 0:50cedd586816 1538 {
dflet 0:50cedd586816 1539 /* Try again. */
dflet 0:50cedd586816 1540 prvUnlockQueue( pxQueue );
dflet 0:50cedd586816 1541 ( void ) xTaskResumeAll();
dflet 0:50cedd586816 1542 }
dflet 0:50cedd586816 1543 }
dflet 0:50cedd586816 1544 else
dflet 0:50cedd586816 1545 {
dflet 0:50cedd586816 1546 prvUnlockQueue( pxQueue );
dflet 0:50cedd586816 1547 ( void ) xTaskResumeAll();
dflet 0:50cedd586816 1548 traceQUEUE_RECEIVE_FAILED( pxQueue );
dflet 0:50cedd586816 1549 return errQUEUE_EMPTY;
dflet 0:50cedd586816 1550 }
dflet 0:50cedd586816 1551 }
dflet 0:50cedd586816 1552 }
dflet 0:50cedd586816 1553 /*-----------------------------------------------------------*/
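/* Editor's sketch (not in the original source): the xJustPeeking parameter
above is what separates the xQueueReceive() and xQueuePeek() macros. With a
hypothetical xDataQueue holding uint32_t items:

uint32_t ulValue;

xQueuePeek( xDataQueue, &ulValue, pdMS_TO_TICKS( 10 ) );
    - copies the oldest item into ulValue but leaves it on the queue
    - (xJustPeeking == pdTRUE, so the read pointer is rewound above)

xQueueReceive( xDataQueue, &ulValue, portMAX_DELAY );
    - copies the oldest item and removes it from the queue
    - (xJustPeeking == pdFALSE)
*/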
dflet 0:50cedd586816 1554
dflet 0:50cedd586816 1555 BaseType_t xQueueReceiveFromISR( QueueHandle_t xQueue, void * const pvBuffer, BaseType_t * const pxHigherPriorityTaskWoken )
dflet 0:50cedd586816 1556 {
dflet 0:50cedd586816 1557 BaseType_t xReturn;
dflet 0:50cedd586816 1558 UBaseType_t uxSavedInterruptStatus;
dflet 0:50cedd586816 1559 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
dflet 0:50cedd586816 1560
dflet 0:50cedd586816 1561 configASSERT( pxQueue );
dflet 0:50cedd586816 1562 configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
dflet 0:50cedd586816 1563
dflet 0:50cedd586816 1564 /* RTOS ports that support interrupt nesting have the concept of a maximum
dflet 0:50cedd586816 1565 system call (or maximum API call) interrupt priority. Interrupts that are
dflet 0:50cedd586816 1566 above the maximum system call priority are kept permanently enabled, even
dflet 0:50cedd586816 1567 when the RTOS kernel is in a critical section, but cannot make any calls to
dflet 0:50cedd586816 1568 FreeRTOS API functions. If configASSERT() is defined in FreeRTOSConfig.h
dflet 0:50cedd586816 1569 then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion
dflet 0:50cedd586816 1570 failure if a FreeRTOS API function is called from an interrupt that has been
dflet 0:50cedd586816 1571 assigned a priority above the configured maximum system call priority.
dflet 0:50cedd586816 1572 Only FreeRTOS functions that end in FromISR can be called from interrupts
dflet 0:50cedd586816 1573 that have been assigned a priority at or (logically) below the maximum
dflet 0:50cedd586816 1574 system call interrupt priority. FreeRTOS maintains a separate interrupt
dflet 0:50cedd586816 1575 safe API to ensure interrupt entry is as fast and as simple as possible.
dflet 0:50cedd586816 1576 More information (albeit Cortex-M specific) is provided on the following
dflet 0:50cedd586816 1577 link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */
dflet 0:50cedd586816 1578 portASSERT_IF_INTERRUPT_PRIORITY_INVALID();
dflet 0:50cedd586816 1579
dflet 0:50cedd586816 1580 uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
dflet 0:50cedd586816 1581 {
dflet 0:50cedd586816 1582 /* Cannot block in an ISR, so check there is data available. */
dflet 0:50cedd586816 1583 if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
dflet 0:50cedd586816 1584 {
dflet 0:50cedd586816 1585 traceQUEUE_RECEIVE_FROM_ISR( pxQueue );
dflet 0:50cedd586816 1586
dflet 0:50cedd586816 1587 prvCopyDataFromQueue( pxQueue, pvBuffer );
dflet 0:50cedd586816 1588 --( pxQueue->uxMessagesWaiting );
dflet 0:50cedd586816 1589
dflet 0:50cedd586816 1590 /* If the queue is locked the event list will not be modified.
dflet 0:50cedd586816 1591 Instead update the lock count so the task that unlocks the queue
dflet 0:50cedd586816 1592 will know that an ISR has removed data while the queue was
dflet 0:50cedd586816 1593 locked. */
dflet 0:50cedd586816 1594 if( pxQueue->xRxLock == queueUNLOCKED )
dflet 0:50cedd586816 1595 {
dflet 0:50cedd586816 1596 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
dflet 0:50cedd586816 1597 {
dflet 0:50cedd586816 1598 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
dflet 0:50cedd586816 1599 {
dflet 0:50cedd586816 1600 /* The task waiting has a higher priority than us so
dflet 0:50cedd586816 1601 force a context switch. */
dflet 0:50cedd586816 1602 if( pxHigherPriorityTaskWoken != NULL )
dflet 0:50cedd586816 1603 {
dflet 0:50cedd586816 1604 *pxHigherPriorityTaskWoken = pdTRUE;
dflet 0:50cedd586816 1605 }
dflet 0:50cedd586816 1606 else
dflet 0:50cedd586816 1607 {
dflet 0:50cedd586816 1608 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1609 }
dflet 0:50cedd586816 1610 }
dflet 0:50cedd586816 1611 else
dflet 0:50cedd586816 1612 {
dflet 0:50cedd586816 1613 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1614 }
dflet 0:50cedd586816 1615 }
dflet 0:50cedd586816 1616 else
dflet 0:50cedd586816 1617 {
dflet 0:50cedd586816 1618 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1619 }
dflet 0:50cedd586816 1620 }
dflet 0:50cedd586816 1621 else
dflet 0:50cedd586816 1622 {
dflet 0:50cedd586816 1623 /* Increment the lock count so the task that unlocks the queue
dflet 0:50cedd586816 1624 knows that data was removed while it was locked. */
dflet 0:50cedd586816 1625 ++( pxQueue->xRxLock );
dflet 0:50cedd586816 1626 }
dflet 0:50cedd586816 1627
dflet 0:50cedd586816 1628 xReturn = pdPASS;
dflet 0:50cedd586816 1629 }
dflet 0:50cedd586816 1630 else
dflet 0:50cedd586816 1631 {
dflet 0:50cedd586816 1632 xReturn = pdFAIL;
dflet 0:50cedd586816 1633 traceQUEUE_RECEIVE_FROM_ISR_FAILED( pxQueue );
dflet 0:50cedd586816 1634 }
dflet 0:50cedd586816 1635 }
dflet 0:50cedd586816 1636 portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );
dflet 0:50cedd586816 1637
dflet 0:50cedd586816 1638 return xReturn;
dflet 0:50cedd586816 1639 }
dflet 0:50cedd586816 1640 /*-----------------------------------------------------------*/
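/* Editor's sketch (not in the original source): because the ISR version
above cannot block, it simply returns pdFAIL when the queue is empty, which
makes a drain loop safe. xCmdQueue and the ISR name are hypothetical.

void vDMACompleteISR( void )
{
uint8_t ucCmd;
BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    while( xQueueReceiveFromISR( xCmdQueue, &ucCmd, &xHigherPriorityTaskWoken ) == pdPASS )
    {
        process ucCmd here
    }

    portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}
*/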
dflet 0:50cedd586816 1641
dflet 0:50cedd586816 1642 BaseType_t xQueuePeekFromISR( QueueHandle_t xQueue, void * const pvBuffer )
dflet 0:50cedd586816 1643 {
dflet 0:50cedd586816 1644 BaseType_t xReturn;
dflet 0:50cedd586816 1645 UBaseType_t uxSavedInterruptStatus;
dflet 0:50cedd586816 1646 int8_t *pcOriginalReadPosition;
dflet 0:50cedd586816 1647 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
dflet 0:50cedd586816 1648
dflet 0:50cedd586816 1649 configASSERT( pxQueue );
dflet 0:50cedd586816 1650 configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
dflet 0:50cedd586816 1651 configASSERT( pxQueue->uxItemSize != 0 ); /* Can't peek a semaphore. */
dflet 0:50cedd586816 1652
dflet 0:50cedd586816 1653 /* RTOS ports that support interrupt nesting have the concept of a maximum
dflet 0:50cedd586816 1654 system call (or maximum API call) interrupt priority. Interrupts that are
dflet 0:50cedd586816 1655 above the maximum system call priority are kept permanently enabled, even
dflet 0:50cedd586816 1656 when the RTOS kernel is in a critical section, but cannot make any calls to
dflet 0:50cedd586816 1657 FreeRTOS API functions. If configASSERT() is defined in FreeRTOSConfig.h
dflet 0:50cedd586816 1658 then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion
dflet 0:50cedd586816 1659 failure if a FreeRTOS API function is called from an interrupt that has been
dflet 0:50cedd586816 1660 assigned a priority above the configured maximum system call priority.
dflet 0:50cedd586816 1661 Only FreeRTOS functions that end in FromISR can be called from interrupts
dflet 0:50cedd586816 1662 that have been assigned a priority at or (logically) below the maximum
dflet 0:50cedd586816 1663 system call interrupt priority. FreeRTOS maintains a separate interrupt
dflet 0:50cedd586816 1664 safe API to ensure interrupt entry is as fast and as simple as possible.
dflet 0:50cedd586816 1665 More information (albeit Cortex-M specific) is provided on the following
dflet 0:50cedd586816 1666 link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */
dflet 0:50cedd586816 1667 portASSERT_IF_INTERRUPT_PRIORITY_INVALID();
dflet 0:50cedd586816 1668
dflet 0:50cedd586816 1669 uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
dflet 0:50cedd586816 1670 {
dflet 0:50cedd586816 1671 /* Cannot block in an ISR, so check there is data available. */
dflet 0:50cedd586816 1672 if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
dflet 0:50cedd586816 1673 {
dflet 0:50cedd586816 1674 traceQUEUE_PEEK_FROM_ISR( pxQueue );
dflet 0:50cedd586816 1675
dflet 0:50cedd586816 1676 /* Remember the read position so it can be reset as nothing is
dflet 0:50cedd586816 1677 actually being removed from the queue. */
dflet 0:50cedd586816 1678 pcOriginalReadPosition = pxQueue->u.pcReadFrom;
dflet 0:50cedd586816 1679 prvCopyDataFromQueue( pxQueue, pvBuffer );
dflet 0:50cedd586816 1680 pxQueue->u.pcReadFrom = pcOriginalReadPosition;
dflet 0:50cedd586816 1681
dflet 0:50cedd586816 1682 xReturn = pdPASS;
dflet 0:50cedd586816 1683 }
dflet 0:50cedd586816 1684 else
dflet 0:50cedd586816 1685 {
dflet 0:50cedd586816 1686 xReturn = pdFAIL;
dflet 0:50cedd586816 1687 traceQUEUE_PEEK_FROM_ISR_FAILED( pxQueue );
dflet 0:50cedd586816 1688 }
dflet 0:50cedd586816 1689 }
dflet 0:50cedd586816 1690 portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );
dflet 0:50cedd586816 1691
dflet 0:50cedd586816 1692 return xReturn;
dflet 0:50cedd586816 1693 }
dflet 0:50cedd586816 1694 /*-----------------------------------------------------------*/
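/* Editor's sketch (not in the original source): xQueuePeekFromISR() takes no
woken-task parameter because nothing is removed from the queue, so no task
waiting to send can become unblocked. With a hypothetical xStateQueue
holding State_t items:

State_t xState;

    if( xQueuePeekFromISR( xStateQueue, &xState ) == pdPASS )
    {
        xState now holds a copy of the oldest item; the queue is unchanged
    }
*/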
dflet 0:50cedd586816 1695
dflet 0:50cedd586816 1696 UBaseType_t uxQueueMessagesWaiting( const QueueHandle_t xQueue )
dflet 0:50cedd586816 1697 {
dflet 0:50cedd586816 1698 UBaseType_t uxReturn;
dflet 0:50cedd586816 1699
dflet 0:50cedd586816 1700 configASSERT( xQueue );
dflet 0:50cedd586816 1701
dflet 0:50cedd586816 1702 taskENTER_CRITICAL();
dflet 0:50cedd586816 1703 {
dflet 0:50cedd586816 1704 uxReturn = ( ( Queue_t * ) xQueue )->uxMessagesWaiting;
dflet 0:50cedd586816 1705 }
dflet 0:50cedd586816 1706 taskEXIT_CRITICAL();
dflet 0:50cedd586816 1707
dflet 0:50cedd586816 1708 return uxReturn;
dflet 0:50cedd586816 1709 } /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */
dflet 0:50cedd586816 1710 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 1711
dflet 0:50cedd586816 1712 UBaseType_t uxQueueSpacesAvailable( const QueueHandle_t xQueue )
dflet 0:50cedd586816 1713 {
dflet 0:50cedd586816 1714 UBaseType_t uxReturn;
dflet 0:50cedd586816 1715 Queue_t *pxQueue;
dflet 0:50cedd586816 1716
dflet 0:50cedd586816 1717 pxQueue = ( Queue_t * ) xQueue;
dflet 0:50cedd586816 1718 configASSERT( pxQueue );
dflet 0:50cedd586816 1719
dflet 0:50cedd586816 1720 taskENTER_CRITICAL();
dflet 0:50cedd586816 1721 {
dflet 0:50cedd586816 1722 uxReturn = pxQueue->uxLength - pxQueue->uxMessagesWaiting;
dflet 0:50cedd586816 1723 }
dflet 0:50cedd586816 1724 taskEXIT_CRITICAL();
dflet 0:50cedd586816 1725
dflet 0:50cedd586816 1726 return uxReturn;
dflet 0:50cedd586816 1727 } /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */
dflet 0:50cedd586816 1728 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 1729
dflet 0:50cedd586816 1730 UBaseType_t uxQueueMessagesWaitingFromISR( const QueueHandle_t xQueue )
dflet 0:50cedd586816 1731 {
dflet 0:50cedd586816 1732 UBaseType_t uxReturn;
dflet 0:50cedd586816 1733
dflet 0:50cedd586816 1734 configASSERT( xQueue );
dflet 0:50cedd586816 1735
dflet 0:50cedd586816 1736 uxReturn = ( ( Queue_t * ) xQueue )->uxMessagesWaiting;
dflet 0:50cedd586816 1737
dflet 0:50cedd586816 1738 return uxReturn;
dflet 0:50cedd586816 1739 } /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */
dflet 0:50cedd586816 1740 /*-----------------------------------------------------------*/
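/* Editor's sketch (not in the original source): the accessors above are
often used for load monitoring. For a hypothetical queue created with
xQueueCreate( 16, sizeof( Msg_t ) ):

UBaseType_t uxUsed = uxQueueMessagesWaiting( xMsgQueue );
UBaseType_t uxFree = uxQueueSpacesAvailable( xMsgQueue );

Each call takes its own critical section, so each value is individually
consistent, but the two snapshots can be stale relative to each other if
the queue is accessed between the calls. From an ISR only
uxQueueMessagesWaitingFromISR() may be used.
*/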
dflet 0:50cedd586816 1741
dflet 0:50cedd586816 1742 void vQueueDelete( QueueHandle_t xQueue )
dflet 0:50cedd586816 1743 {
dflet 0:50cedd586816 1744 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
dflet 0:50cedd586816 1745
dflet 0:50cedd586816 1746 configASSERT( pxQueue );
dflet 0:50cedd586816 1747
dflet 0:50cedd586816 1748 traceQUEUE_DELETE( pxQueue );
dflet 0:50cedd586816 1749 #if ( configQUEUE_REGISTRY_SIZE > 0 )
dflet 0:50cedd586816 1750 {
dflet 0:50cedd586816 1751 vQueueUnregisterQueue( pxQueue );
dflet 0:50cedd586816 1752 }
dflet 0:50cedd586816 1753 #endif
dflet 0:50cedd586816 1754 vPortFree( pxQueue );
dflet 0:50cedd586816 1755 }
dflet 0:50cedd586816 1756 /*-----------------------------------------------------------*/
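/* Editor's note (not in the original source): vQueueDelete() above frees the
storage allocated by xQueueCreate(), so the handle must not be used again.
It is the application's responsibility to ensure no tasks are blocked on
the queue when it is deleted - the kernel does not check. A minimal sketch
with a hypothetical handle:

    vQueueDelete( xMsgQueue );
    xMsgQueue = NULL;
*/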
dflet 0:50cedd586816 1757
dflet 0:50cedd586816 1758 #if ( configUSE_TRACE_FACILITY == 1 )
dflet 0:50cedd586816 1759
dflet 0:50cedd586816 1760 UBaseType_t uxQueueGetQueueNumber( QueueHandle_t xQueue )
dflet 0:50cedd586816 1761 {
dflet 0:50cedd586816 1762 return ( ( Queue_t * ) xQueue )->uxQueueNumber;
dflet 0:50cedd586816 1763 }
dflet 0:50cedd586816 1764
dflet 0:50cedd586816 1765 #endif /* configUSE_TRACE_FACILITY */
dflet 0:50cedd586816 1766 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 1767
dflet 0:50cedd586816 1768 #if ( configUSE_TRACE_FACILITY == 1 )
dflet 0:50cedd586816 1769
dflet 0:50cedd586816 1770 void vQueueSetQueueNumber( QueueHandle_t xQueue, UBaseType_t uxQueueNumber )
dflet 0:50cedd586816 1771 {
dflet 0:50cedd586816 1772 ( ( Queue_t * ) xQueue )->uxQueueNumber = uxQueueNumber;
dflet 0:50cedd586816 1773 }
dflet 0:50cedd586816 1774
dflet 0:50cedd586816 1775 #endif /* configUSE_TRACE_FACILITY */
dflet 0:50cedd586816 1776 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 1777
dflet 0:50cedd586816 1778 #if ( configUSE_TRACE_FACILITY == 1 )
dflet 0:50cedd586816 1779
dflet 0:50cedd586816 1780 uint8_t ucQueueGetQueueType( QueueHandle_t xQueue )
dflet 0:50cedd586816 1781 {
dflet 0:50cedd586816 1782 return ( ( Queue_t * ) xQueue )->ucQueueType;
dflet 0:50cedd586816 1783 }
dflet 0:50cedd586816 1784
dflet 0:50cedd586816 1785 #endif /* configUSE_TRACE_FACILITY */
dflet 0:50cedd586816 1786 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 1787
dflet 0:50cedd586816 1788 static BaseType_t prvCopyDataToQueue( Queue_t * const pxQueue, const void *pvItemToQueue, const BaseType_t xPosition )
dflet 0:50cedd586816 1789 {
dflet 0:50cedd586816 1790 BaseType_t xReturn = pdFALSE;
dflet 0:50cedd586816 1791
dflet 0:50cedd586816 1792 if( pxQueue->uxItemSize == ( UBaseType_t ) 0 )
dflet 0:50cedd586816 1793 {
dflet 0:50cedd586816 1794 #if ( configUSE_MUTEXES == 1 )
dflet 0:50cedd586816 1795 {
dflet 0:50cedd586816 1796 if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
dflet 0:50cedd586816 1797 {
dflet 0:50cedd586816 1798 /* The mutex is no longer being held. */
dflet 0:50cedd586816 1799 xReturn = xTaskPriorityDisinherit( ( void * ) pxQueue->pxMutexHolder );
dflet 0:50cedd586816 1800 pxQueue->pxMutexHolder = NULL;
dflet 0:50cedd586816 1801 }
dflet 0:50cedd586816 1802 else
dflet 0:50cedd586816 1803 {
dflet 0:50cedd586816 1804 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1805 }
dflet 0:50cedd586816 1806 }
dflet 0:50cedd586816 1807 #endif /* configUSE_MUTEXES */
dflet 0:50cedd586816 1808 }
dflet 0:50cedd586816 1809 else if( xPosition == queueSEND_TO_BACK )
dflet 0:50cedd586816 1810 {
dflet 0:50cedd586816 1811 ( void ) memcpy( ( void * ) pxQueue->pcWriteTo, pvItemToQueue, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 !e418 MISRA exception as the casts are only redundant for some ports, plus previous logic ensures a null pointer can only be passed to memcpy() if the copy size is 0. */
dflet 0:50cedd586816 1812 pxQueue->pcWriteTo += pxQueue->uxItemSize;
dflet 0:50cedd586816 1813 if( pxQueue->pcWriteTo >= pxQueue->pcTail ) /*lint !e946 MISRA exception justified as comparison of pointers is the cleanest solution. */
dflet 0:50cedd586816 1814 {
dflet 0:50cedd586816 1815 pxQueue->pcWriteTo = pxQueue->pcHead;
dflet 0:50cedd586816 1816 }
dflet 0:50cedd586816 1817 else
dflet 0:50cedd586816 1818 {
dflet 0:50cedd586816 1819 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1820 }
dflet 0:50cedd586816 1821 }
dflet 0:50cedd586816 1822 else
dflet 0:50cedd586816 1823 {
dflet 0:50cedd586816 1824 ( void ) memcpy( ( void * ) pxQueue->u.pcReadFrom, pvItemToQueue, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 MISRA exception as the casts are only redundant for some ports. */
dflet 0:50cedd586816 1825 pxQueue->u.pcReadFrom -= pxQueue->uxItemSize;
dflet 0:50cedd586816 1826 if( pxQueue->u.pcReadFrom < pxQueue->pcHead ) /*lint !e946 MISRA exception justified as comparison of pointers is the cleanest solution. */
dflet 0:50cedd586816 1827 {
dflet 0:50cedd586816 1828 pxQueue->u.pcReadFrom = ( pxQueue->pcTail - pxQueue->uxItemSize );
dflet 0:50cedd586816 1829 }
dflet 0:50cedd586816 1830 else
dflet 0:50cedd586816 1831 {
dflet 0:50cedd586816 1832 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1833 }
dflet 0:50cedd586816 1834
dflet 0:50cedd586816 1835 if( xPosition == queueOVERWRITE )
dflet 0:50cedd586816 1836 {
dflet 0:50cedd586816 1837 if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
dflet 0:50cedd586816 1838 {
dflet 0:50cedd586816 1839 /* An item is not being added but overwritten, so subtract
dflet 0:50cedd586816 1840 one from the recorded number of items in the queue; when the
dflet 0:50cedd586816 1841 count is incremented again below, the recorded total remains
dflet 0:50cedd586816 1842 correct. */
dflet 0:50cedd586816 1843 --( pxQueue->uxMessagesWaiting );
dflet 0:50cedd586816 1844 }
dflet 0:50cedd586816 1845 else
dflet 0:50cedd586816 1846 {
dflet 0:50cedd586816 1847 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1848 }
dflet 0:50cedd586816 1849 }
dflet 0:50cedd586816 1850 else
dflet 0:50cedd586816 1851 {
dflet 0:50cedd586816 1852 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1853 }
dflet 0:50cedd586816 1854 }
dflet 0:50cedd586816 1855
dflet 0:50cedd586816 1856 ++( pxQueue->uxMessagesWaiting );
dflet 0:50cedd586816 1857
dflet 0:50cedd586816 1858 return xReturn;
dflet 0:50cedd586816 1859 }
dflet 0:50cedd586816 1860 /*-----------------------------------------------------------*/
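/* Editor's sketch (not in the original source): the queueOVERWRITE branch
above backs the xQueueOverwrite() macro, intended for length-1 "mailbox"
queues (the configASSERT() in the send functions enforces uxLength == 1).
Assuming a hypothetical mailbox created with
xQueueCreate( 1, sizeof( uint32_t ) ):

uint32_t ulReading = 123;

    xQueueOverwrite( xMailbox, &ulReading );
    xQueueOverwrite( xMailbox, &ulReading );

The second call succeeds even though the queue is full: the message count
is decremented then re-incremented, so the queue still reports one item.
*/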
dflet 0:50cedd586816 1861
dflet 0:50cedd586816 1862 static void prvCopyDataFromQueue( Queue_t * const pxQueue, void * const pvBuffer )
dflet 0:50cedd586816 1863 {
dflet 0:50cedd586816 1864 if( pxQueue->uxItemSize != ( UBaseType_t ) 0 )
dflet 0:50cedd586816 1865 {
dflet 0:50cedd586816 1866 pxQueue->u.pcReadFrom += pxQueue->uxItemSize;
dflet 0:50cedd586816 1867 if( pxQueue->u.pcReadFrom >= pxQueue->pcTail ) /*lint !e946 MISRA exception justified as use of the relational operator is the cleanest solution. */
dflet 0:50cedd586816 1868 {
dflet 0:50cedd586816 1869 pxQueue->u.pcReadFrom = pxQueue->pcHead;
dflet 0:50cedd586816 1870 }
dflet 0:50cedd586816 1871 else
dflet 0:50cedd586816 1872 {
dflet 0:50cedd586816 1873 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1874 }
dflet 0:50cedd586816 1875 ( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 !e418 MISRA exception as the casts are only redundant for some ports. Also previous logic ensures a null pointer can only be passed to memcpy() when the count is 0. */
dflet 0:50cedd586816 1876 }
dflet 0:50cedd586816 1877 }
dflet 0:50cedd586816 1878 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 1879
dflet 0:50cedd586816 1880 static void prvUnlockQueue( Queue_t * const pxQueue )
dflet 0:50cedd586816 1881 {
dflet 0:50cedd586816 1882 /* THIS FUNCTION MUST BE CALLED WITH THE SCHEDULER SUSPENDED. */
dflet 0:50cedd586816 1883
dflet 0:50cedd586816 1884 /* The lock counts contain the number of extra data items placed on or
dflet 0:50cedd586816 1885 removed from the queue while the queue was locked. When a queue is
dflet 0:50cedd586816 1886 locked items can be added or removed, but the event lists cannot be
dflet 0:50cedd586816 1887 updated. */
dflet 0:50cedd586816 1888 taskENTER_CRITICAL();
dflet 0:50cedd586816 1889 {
dflet 0:50cedd586816 1890 /* See if data was added to the queue while it was locked. */
dflet 0:50cedd586816 1891 while( pxQueue->xTxLock > queueLOCKED_UNMODIFIED )
dflet 0:50cedd586816 1892 {
dflet 0:50cedd586816 1893 /* Data was posted while the queue was locked. Are any tasks
dflet 0:50cedd586816 1894 blocked waiting for data to become available? */
dflet 0:50cedd586816 1895 #if ( configUSE_QUEUE_SETS == 1 )
dflet 0:50cedd586816 1896 {
dflet 0:50cedd586816 1897 if( pxQueue->pxQueueSetContainer != NULL )
dflet 0:50cedd586816 1898 {
dflet 0:50cedd586816 1899 if( prvNotifyQueueSetContainer( pxQueue, queueSEND_TO_BACK ) == pdTRUE )
dflet 0:50cedd586816 1900 {
dflet 0:50cedd586816 1901 /* The queue is a member of a queue set, and posting to
dflet 0:50cedd586816 1902 the queue set caused a higher priority task to unblock.
dflet 0:50cedd586816 1903 A context switch is required. */
dflet 0:50cedd586816 1904 vTaskMissedYield();
dflet 0:50cedd586816 1905 }
dflet 0:50cedd586816 1906 else
dflet 0:50cedd586816 1907 {
dflet 0:50cedd586816 1908 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1909 }
dflet 0:50cedd586816 1910 }
dflet 0:50cedd586816 1911 else
dflet 0:50cedd586816 1912 {
dflet 0:50cedd586816 1913 /* Tasks that are removed from the event list will get added to
dflet 0:50cedd586816 1914 the pending ready list as the scheduler is still suspended. */
dflet 0:50cedd586816 1915 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
dflet 0:50cedd586816 1916 {
dflet 0:50cedd586816 1917 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
dflet 0:50cedd586816 1918 {
dflet 0:50cedd586816 1919 /* The task waiting has a higher priority so record that a
dflet 0:50cedd586816 1920 context switch is required. */
dflet 0:50cedd586816 1921 vTaskMissedYield();
dflet 0:50cedd586816 1922 }
dflet 0:50cedd586816 1923 else
dflet 0:50cedd586816 1924 {
dflet 0:50cedd586816 1925 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1926 }
dflet 0:50cedd586816 1927 }
dflet 0:50cedd586816 1928 else
dflet 0:50cedd586816 1929 {
dflet 0:50cedd586816 1930 break;
dflet 0:50cedd586816 1931 }
dflet 0:50cedd586816 1932 }
dflet 0:50cedd586816 1933 }
dflet 0:50cedd586816 1934 #else /* configUSE_QUEUE_SETS */
dflet 0:50cedd586816 1935 {
dflet 0:50cedd586816 1936 /* Tasks that are removed from the event list will get added to
dflet 0:50cedd586816 1937 the pending ready list as the scheduler is still suspended. */
dflet 0:50cedd586816 1938 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
dflet 0:50cedd586816 1939 {
dflet 0:50cedd586816 1940 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
dflet 0:50cedd586816 1941 {
dflet 0:50cedd586816 1942 /* The task waiting has a higher priority so record that a
dflet 0:50cedd586816 1943 context switch is required. */
dflet 0:50cedd586816 1944 vTaskMissedYield();
dflet 0:50cedd586816 1945 }
dflet 0:50cedd586816 1946 else
dflet 0:50cedd586816 1947 {
dflet 0:50cedd586816 1948 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1949 }
dflet 0:50cedd586816 1950 }
dflet 0:50cedd586816 1951 else
dflet 0:50cedd586816 1952 {
dflet 0:50cedd586816 1953 break;
dflet 0:50cedd586816 1954 }
dflet 0:50cedd586816 1955 }
dflet 0:50cedd586816 1956 #endif /* configUSE_QUEUE_SETS */
dflet 0:50cedd586816 1957
dflet 0:50cedd586816 1958 --( pxQueue->xTxLock );
dflet 0:50cedd586816 1959 }
dflet 0:50cedd586816 1960
dflet 0:50cedd586816 1961 pxQueue->xTxLock = queueUNLOCKED;
dflet 0:50cedd586816 1962 }
dflet 0:50cedd586816 1963 taskEXIT_CRITICAL();
dflet 0:50cedd586816 1964
dflet 0:50cedd586816 1965 /* Do the same for the Rx lock. */
dflet 0:50cedd586816 1966 taskENTER_CRITICAL();
dflet 0:50cedd586816 1967 {
dflet 0:50cedd586816 1968 while( pxQueue->xRxLock > queueLOCKED_UNMODIFIED )
dflet 0:50cedd586816 1969 {
dflet 0:50cedd586816 1970 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
dflet 0:50cedd586816 1971 {
dflet 0:50cedd586816 1972 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
dflet 0:50cedd586816 1973 {
dflet 0:50cedd586816 1974 vTaskMissedYield();
dflet 0:50cedd586816 1975 }
dflet 0:50cedd586816 1976 else
dflet 0:50cedd586816 1977 {
dflet 0:50cedd586816 1978 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 1979 }
dflet 0:50cedd586816 1980
dflet 0:50cedd586816 1981 --( pxQueue->xRxLock );
dflet 0:50cedd586816 1982 }
dflet 0:50cedd586816 1983 else
dflet 0:50cedd586816 1984 {
dflet 0:50cedd586816 1985 break;
dflet 0:50cedd586816 1986 }
dflet 0:50cedd586816 1987 }
dflet 0:50cedd586816 1988
dflet 0:50cedd586816 1989 pxQueue->xRxLock = queueUNLOCKED;
dflet 0:50cedd586816 1990 }
dflet 0:50cedd586816 1991 taskEXIT_CRITICAL();
dflet 0:50cedd586816 1992 }
dflet 0:50cedd586816 1993 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 1994
dflet 0:50cedd586816 1995 static BaseType_t prvIsQueueEmpty( const Queue_t *pxQueue )
dflet 0:50cedd586816 1996 {
dflet 0:50cedd586816 1997 BaseType_t xReturn;
dflet 0:50cedd586816 1998
dflet 0:50cedd586816 1999 taskENTER_CRITICAL();
dflet 0:50cedd586816 2000 {
dflet 0:50cedd586816 2001 if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0 )
dflet 0:50cedd586816 2002 {
dflet 0:50cedd586816 2003 xReturn = pdTRUE;
dflet 0:50cedd586816 2004 }
dflet 0:50cedd586816 2005 else
dflet 0:50cedd586816 2006 {
dflet 0:50cedd586816 2007 xReturn = pdFALSE;
dflet 0:50cedd586816 2008 }
dflet 0:50cedd586816 2009 }
dflet 0:50cedd586816 2010 taskEXIT_CRITICAL();
dflet 0:50cedd586816 2011
dflet 0:50cedd586816 2012 return xReturn;
dflet 0:50cedd586816 2013 }
dflet 0:50cedd586816 2014 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 2015
dflet 0:50cedd586816 2016 BaseType_t xQueueIsQueueEmptyFromISR( const QueueHandle_t xQueue )
dflet 0:50cedd586816 2017 {
dflet 0:50cedd586816 2018 BaseType_t xReturn;
dflet 0:50cedd586816 2019
dflet 0:50cedd586816 2020 configASSERT( xQueue );
dflet 0:50cedd586816 2021 if( ( ( Queue_t * ) xQueue )->uxMessagesWaiting == ( UBaseType_t ) 0 )
dflet 0:50cedd586816 2022 {
dflet 0:50cedd586816 2023 xReturn = pdTRUE;
dflet 0:50cedd586816 2024 }
dflet 0:50cedd586816 2025 else
dflet 0:50cedd586816 2026 {
dflet 0:50cedd586816 2027 xReturn = pdFALSE;
dflet 0:50cedd586816 2028 }
dflet 0:50cedd586816 2029
dflet 0:50cedd586816 2030 return xReturn;
dflet 0:50cedd586816 2031 } /*lint !e818 xQueue could not be pointer to const because it is a typedef. */
dflet 0:50cedd586816 2032 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 2033
dflet 0:50cedd586816 2034 static BaseType_t prvIsQueueFull( const Queue_t *pxQueue )
dflet 0:50cedd586816 2035 {
dflet 0:50cedd586816 2036 BaseType_t xReturn;
dflet 0:50cedd586816 2037
dflet 0:50cedd586816 2038 taskENTER_CRITICAL();
dflet 0:50cedd586816 2039 {
dflet 0:50cedd586816 2040 if( pxQueue->uxMessagesWaiting == pxQueue->uxLength )
dflet 0:50cedd586816 2041 {
dflet 0:50cedd586816 2042 xReturn = pdTRUE;
dflet 0:50cedd586816 2043 }
dflet 0:50cedd586816 2044 else
dflet 0:50cedd586816 2045 {
dflet 0:50cedd586816 2046 xReturn = pdFALSE;
dflet 0:50cedd586816 2047 }
dflet 0:50cedd586816 2048 }
dflet 0:50cedd586816 2049 taskEXIT_CRITICAL();
dflet 0:50cedd586816 2050
dflet 0:50cedd586816 2051 return xReturn;
dflet 0:50cedd586816 2052 }
dflet 0:50cedd586816 2053 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 2054
dflet 0:50cedd586816 2055 BaseType_t xQueueIsQueueFullFromISR( const QueueHandle_t xQueue )
dflet 0:50cedd586816 2056 {
dflet 0:50cedd586816 2057 BaseType_t xReturn;
dflet 0:50cedd586816 2058
dflet 0:50cedd586816 2059 configASSERT( xQueue );
dflet 0:50cedd586816 2060 if( ( ( Queue_t * ) xQueue )->uxMessagesWaiting == ( ( Queue_t * ) xQueue )->uxLength )
dflet 0:50cedd586816 2061 {
dflet 0:50cedd586816 2062 xReturn = pdTRUE;
dflet 0:50cedd586816 2063 }
dflet 0:50cedd586816 2064 else
dflet 0:50cedd586816 2065 {
dflet 0:50cedd586816 2066 xReturn = pdFALSE;
dflet 0:50cedd586816 2067 }
dflet 0:50cedd586816 2068
dflet 0:50cedd586816 2069 return xReturn;
dflet 0:50cedd586816 2070 } /*lint !e818 xQueue could not be pointer to const because it is a typedef. */
dflet 0:50cedd586816 2071 /*-----------------------------------------------------------*/
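/* Editor's sketch (not in the original source): the FromISR query functions
above read uxMessagesWaiting without masking interrupts, so the result is
only a snapshot. Typical use is gating work inside an ISR, here with a
hypothetical xLogQueue:

    if( xQueueIsQueueFullFromISR( xLogQueue ) == pdFALSE )
    {
        it is reasonable to attempt xQueueSendFromISR() here, though the
        send can still fail if the queue fills in the meantime
    }
*/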
dflet 0:50cedd586816 2072
dflet 0:50cedd586816 2073 #if ( configUSE_CO_ROUTINES == 1 )
dflet 0:50cedd586816 2074
dflet 0:50cedd586816 2075 BaseType_t xQueueCRSend( QueueHandle_t xQueue, const void *pvItemToQueue, TickType_t xTicksToWait )
dflet 0:50cedd586816 2076 {
dflet 0:50cedd586816 2077 BaseType_t xReturn;
dflet 0:50cedd586816 2078 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
dflet 0:50cedd586816 2079
dflet 0:50cedd586816 2080 /* If the queue is already full we may have to block. A critical section
dflet 0:50cedd586816 2081 is required to prevent an interrupt removing something from the queue
dflet 0:50cedd586816 2082 between the check to see if the queue is full and blocking on the queue. */
dflet 0:50cedd586816 2083 portDISABLE_INTERRUPTS();
dflet 0:50cedd586816 2084 {
dflet 0:50cedd586816 2085 if( prvIsQueueFull( pxQueue ) != pdFALSE )
dflet 0:50cedd586816 2086 {
dflet 0:50cedd586816 2087 /* The queue is full - do we want to block or just leave without
dflet 0:50cedd586816 2088 posting? */
dflet 0:50cedd586816 2089 if( xTicksToWait > ( TickType_t ) 0 )
dflet 0:50cedd586816 2090 {
dflet 0:50cedd586816 2091 /* As this is called from a co-routine we cannot block directly;
dflet 0:50cedd586816 2092 instead return a value indicating that we need to block. */
dflet 0:50cedd586816 2093 vCoRoutineAddToDelayedList( xTicksToWait, &( pxQueue->xTasksWaitingToSend ) );
dflet 0:50cedd586816 2094 portENABLE_INTERRUPTS();
dflet 0:50cedd586816 2095 return errQUEUE_BLOCKED;
dflet 0:50cedd586816 2096 }
dflet 0:50cedd586816 2097 else
dflet 0:50cedd586816 2098 {
dflet 0:50cedd586816 2099 portENABLE_INTERRUPTS();
dflet 0:50cedd586816 2100 return errQUEUE_FULL;
dflet 0:50cedd586816 2101 }
dflet 0:50cedd586816 2102 }
dflet 0:50cedd586816 2103 }
dflet 0:50cedd586816 2104 portENABLE_INTERRUPTS();
dflet 0:50cedd586816 2105
dflet 0:50cedd586816 2106 portDISABLE_INTERRUPTS();
dflet 0:50cedd586816 2107 {
dflet 0:50cedd586816 2108 if( pxQueue->uxMessagesWaiting < pxQueue->uxLength )
dflet 0:50cedd586816 2109 {
dflet 0:50cedd586816 2110 /* There is room in the queue, copy the data into the queue. */
dflet 0:50cedd586816 2111 prvCopyDataToQueue( pxQueue, pvItemToQueue, queueSEND_TO_BACK );
dflet 0:50cedd586816 2112 xReturn = pdPASS;
dflet 0:50cedd586816 2113
dflet 0:50cedd586816 2114 /* Were any co-routines waiting for data to become available? */
dflet 0:50cedd586816 2115 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
dflet 0:50cedd586816 2116 {
dflet 0:50cedd586816 2117 /* In this instance the co-routine could be placed directly
dflet 0:50cedd586816 2118 into the ready list as we are within a critical section.
dflet 0:50cedd586816 2119 Instead the same pending ready list mechanism is used as if
dflet 0:50cedd586816 2120 the event were caused from within an interrupt. */
dflet 0:50cedd586816 2121 if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
dflet 0:50cedd586816 2122 {
dflet 0:50cedd586816 2123 /* The co-routine waiting has a higher priority so record
dflet 0:50cedd586816 2124 that a yield might be appropriate. */
dflet 0:50cedd586816 2125 xReturn = errQUEUE_YIELD;
dflet 0:50cedd586816 2126 }
dflet 0:50cedd586816 2127 else
dflet 0:50cedd586816 2128 {
dflet 0:50cedd586816 2129 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2130 }
dflet 0:50cedd586816 2131 }
dflet 0:50cedd586816 2132 else
dflet 0:50cedd586816 2133 {
dflet 0:50cedd586816 2134 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2135 }
dflet 0:50cedd586816 2136 }
dflet 0:50cedd586816 2137 else
dflet 0:50cedd586816 2138 {
dflet 0:50cedd586816 2139 xReturn = errQUEUE_FULL;
dflet 0:50cedd586816 2140 }
dflet 0:50cedd586816 2141 }
dflet 0:50cedd586816 2142 portENABLE_INTERRUPTS();
dflet 0:50cedd586816 2143
dflet 0:50cedd586816 2144 return xReturn;
dflet 0:50cedd586816 2145 }
dflet 0:50cedd586816 2146
dflet 0:50cedd586816 2147 #endif /* configUSE_CO_ROUTINES */
dflet 0:50cedd586816 2148 /*-----------------------------------------------------------*/
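/* Editorial note: a minimal sketch, not part of the original file, of the
   public wrapper around xQueueCRSend(): the crQUEUE_SEND() macro, which
   must appear between crSTART() and crEND() inside a co-routine.  The
   queue handle and function name are illustrative assumptions.  Variables
   that must survive the blocking point have to be static, because a
   co-routine's stack is not preserved across a block. */
#if 0 /* illustrative sketch only - not compiled */
#include "FreeRTOS.h"
#include "queue.h"
#include "croutine.h"

extern QueueHandle_t xCoRoutineQueue;   /* assumed: item size sizeof( char ) */

static void vProducerCoRoutine( CoRoutineHandle_t xHandle, UBaseType_t uxIndex )
{
static BaseType_t xResult;
static const char cDataToSend = 'a';

    ( void ) uxIndex;

    crSTART( xHandle );

    for( ;; )
    {
        /* Wait up to 10 ticks for space.  When the queue is full the macro
        yields back to the scheduler via the errQUEUE_BLOCKED path above. */
        crQUEUE_SEND( xHandle, xCoRoutineQueue, &cDataToSend, 10, &xResult );

        if( xResult != pdPASS )
        {
            /* The item could not be posted within the timeout. */
        }
    }

    crEND();
}
#endif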
dflet 0:50cedd586816 2149
dflet 0:50cedd586816 2150 #if ( configUSE_CO_ROUTINES == 1 )
dflet 0:50cedd586816 2151
dflet 0:50cedd586816 2152 BaseType_t xQueueCRReceive( QueueHandle_t xQueue, void *pvBuffer, TickType_t xTicksToWait )
dflet 0:50cedd586816 2153 {
dflet 0:50cedd586816 2154 BaseType_t xReturn;
dflet 0:50cedd586816 2155 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
dflet 0:50cedd586816 2156
dflet 0:50cedd586816 2157 /* If the queue is already empty we may have to block. A critical section
dflet 0:50cedd586816 2158 is required to prevent an interrupt adding something to the queue
dflet 0:50cedd586816 2159 between the check to see if the queue is empty and blocking on the queue. */
dflet 0:50cedd586816 2160 portDISABLE_INTERRUPTS();
dflet 0:50cedd586816 2161 {
dflet 0:50cedd586816 2162 if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0 )
dflet 0:50cedd586816 2163 {
dflet 0:50cedd586816 2164 /* There are no messages in the queue, do we want to block or just
dflet 0:50cedd586816 2165 leave with nothing? */
dflet 0:50cedd586816 2166 if( xTicksToWait > ( TickType_t ) 0 )
dflet 0:50cedd586816 2167 {
dflet 0:50cedd586816 2168 /* As this is a co-routine we cannot block directly, but return
dflet 0:50cedd586816 2169 indicating that we need to block. */
dflet 0:50cedd586816 2170 vCoRoutineAddToDelayedList( xTicksToWait, &( pxQueue->xTasksWaitingToReceive ) );
dflet 0:50cedd586816 2171 portENABLE_INTERRUPTS();
dflet 0:50cedd586816 2172 return errQUEUE_BLOCKED;
dflet 0:50cedd586816 2173 }
dflet 0:50cedd586816 2174 else
dflet 0:50cedd586816 2175 {
dflet 0:50cedd586816 2176 portENABLE_INTERRUPTS();
dflet 0:50cedd586816 2177 return errQUEUE_FULL; /* Note: the queue is actually empty here, but the co-routine API reuses the errQUEUE_FULL constant rather than defining a separate 'empty' code. */
dflet 0:50cedd586816 2178 }
dflet 0:50cedd586816 2179 }
dflet 0:50cedd586816 2180 else
dflet 0:50cedd586816 2181 {
dflet 0:50cedd586816 2182 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2183 }
dflet 0:50cedd586816 2184 }
dflet 0:50cedd586816 2185 portENABLE_INTERRUPTS();
dflet 0:50cedd586816 2186
dflet 0:50cedd586816 2187 portDISABLE_INTERRUPTS();
dflet 0:50cedd586816 2188 {
dflet 0:50cedd586816 2189 if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
dflet 0:50cedd586816 2190 {
dflet 0:50cedd586816 2191 /* Data is available from the queue. */
dflet 0:50cedd586816 2192 pxQueue->u.pcReadFrom += pxQueue->uxItemSize;
dflet 0:50cedd586816 2193 if( pxQueue->u.pcReadFrom >= pxQueue->pcTail )
dflet 0:50cedd586816 2194 {
dflet 0:50cedd586816 2195 pxQueue->u.pcReadFrom = pxQueue->pcHead;
dflet 0:50cedd586816 2196 }
dflet 0:50cedd586816 2197 else
dflet 0:50cedd586816 2198 {
dflet 0:50cedd586816 2199 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2200 }
dflet 0:50cedd586816 2201 --( pxQueue->uxMessagesWaiting );
dflet 0:50cedd586816 2202 ( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( unsigned ) pxQueue->uxItemSize );
dflet 0:50cedd586816 2203
dflet 0:50cedd586816 2204 xReturn = pdPASS;
dflet 0:50cedd586816 2205
dflet 0:50cedd586816 2206 /* Were any co-routines waiting for space to become available? */
dflet 0:50cedd586816 2207 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
dflet 0:50cedd586816 2208 {
dflet 0:50cedd586816 2209 /* In this instance the co-routine could be placed directly
dflet 0:50cedd586816 2210 into the ready list as we are within a critical section.
dflet 0:50cedd586816 2211 Instead the same pending ready list mechanism is used as if
dflet 0:50cedd586816 2212 the event were caused from within an interrupt. */
dflet 0:50cedd586816 2213 if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
dflet 0:50cedd586816 2214 {
dflet 0:50cedd586816 2215 xReturn = errQUEUE_YIELD;
dflet 0:50cedd586816 2216 }
dflet 0:50cedd586816 2217 else
dflet 0:50cedd586816 2218 {
dflet 0:50cedd586816 2219 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2220 }
dflet 0:50cedd586816 2221 }
dflet 0:50cedd586816 2222 else
dflet 0:50cedd586816 2223 {
dflet 0:50cedd586816 2224 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2225 }
dflet 0:50cedd586816 2226 }
dflet 0:50cedd586816 2227 else
dflet 0:50cedd586816 2228 {
dflet 0:50cedd586816 2229 xReturn = pdFAIL;
dflet 0:50cedd586816 2230 }
dflet 0:50cedd586816 2231 }
dflet 0:50cedd586816 2232 portENABLE_INTERRUPTS();
dflet 0:50cedd586816 2233
dflet 0:50cedd586816 2234 return xReturn;
dflet 0:50cedd586816 2235 }
dflet 0:50cedd586816 2236
dflet 0:50cedd586816 2237 #endif /* configUSE_CO_ROUTINES */
dflet 0:50cedd586816 2238 /*-----------------------------------------------------------*/
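/* Editorial note: the receiving counterpart, again an illustrative sketch
   rather than part of the original file.  As noted above, the timeout path
   of xQueueCRReceive() reuses the errQUEUE_FULL constant even though the
   queue is empty, so portable code tests the result against pdPASS rather
   than against a particular error code.  Names are assumptions. */
#if 0 /* illustrative sketch only - not compiled */
#include "FreeRTOS.h"
#include "queue.h"
#include "croutine.h"

extern QueueHandle_t xCoRoutineQueue;   /* assumed: item size sizeof( char ) */

static void vConsumerCoRoutine( CoRoutineHandle_t xHandle, UBaseType_t uxIndex )
{
static BaseType_t xResult;
static char cReceived;

    ( void ) uxIndex;

    crSTART( xHandle );

    for( ;; )
    {
        /* Wait up to 10 ticks for an item to arrive. */
        crQUEUE_RECEIVE( xHandle, xCoRoutineQueue, &cReceived, 10, &xResult );

        if( xResult == pdPASS )
        {
            /* cReceived now holds the oldest item from the queue. */
        }
    }

    crEND();
}
#endif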
dflet 0:50cedd586816 2239
dflet 0:50cedd586816 2240 #if ( configUSE_CO_ROUTINES == 1 )
dflet 0:50cedd586816 2241
dflet 0:50cedd586816 2242 BaseType_t xQueueCRSendFromISR( QueueHandle_t xQueue, const void *pvItemToQueue, BaseType_t xCoRoutinePreviouslyWoken )
dflet 0:50cedd586816 2243 {
dflet 0:50cedd586816 2244 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
dflet 0:50cedd586816 2245
dflet 0:50cedd586816 2246 /* Cannot block within an ISR so if there is no space on the queue then
dflet 0:50cedd586816 2247 exit without doing anything. */
dflet 0:50cedd586816 2248 if( pxQueue->uxMessagesWaiting < pxQueue->uxLength )
dflet 0:50cedd586816 2249 {
dflet 0:50cedd586816 2250 prvCopyDataToQueue( pxQueue, pvItemToQueue, queueSEND_TO_BACK );
dflet 0:50cedd586816 2251
dflet 0:50cedd586816 2252 /* We only want to wake one co-routine per ISR, so check that a
dflet 0:50cedd586816 2253 co-routine has not already been woken. */
dflet 0:50cedd586816 2254 if( xCoRoutinePreviouslyWoken == pdFALSE )
dflet 0:50cedd586816 2255 {
dflet 0:50cedd586816 2256 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
dflet 0:50cedd586816 2257 {
dflet 0:50cedd586816 2258 if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
dflet 0:50cedd586816 2259 {
dflet 0:50cedd586816 2260 return pdTRUE;
dflet 0:50cedd586816 2261 }
dflet 0:50cedd586816 2262 else
dflet 0:50cedd586816 2263 {
dflet 0:50cedd586816 2264 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2265 }
dflet 0:50cedd586816 2266 }
dflet 0:50cedd586816 2267 else
dflet 0:50cedd586816 2268 {
dflet 0:50cedd586816 2269 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2270 }
dflet 0:50cedd586816 2271 }
dflet 0:50cedd586816 2272 else
dflet 0:50cedd586816 2273 {
dflet 0:50cedd586816 2274 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2275 }
dflet 0:50cedd586816 2276 }
dflet 0:50cedd586816 2277 else
dflet 0:50cedd586816 2278 {
dflet 0:50cedd586816 2279 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2280 }
dflet 0:50cedd586816 2281
dflet 0:50cedd586816 2282 return xCoRoutinePreviouslyWoken;
dflet 0:50cedd586816 2283 }
dflet 0:50cedd586816 2284
dflet 0:50cedd586816 2285 #endif /* configUSE_CO_ROUTINES */
dflet 0:50cedd586816 2286 /*-----------------------------------------------------------*/
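/* Editorial note: a short sketch, not part of the original file, of the
   public wrapper for xQueueCRSendFromISR(): crQUEUE_SEND_FROM_ISR().  The
   woken flag is threaded back through successive posts so at most one
   co-routine is woken per interrupt, exactly as the comment in the
   function above describes.  The UART macros and queue handle are
   illustrative assumptions. */
#if 0 /* illustrative sketch only - not compiled */
#include "FreeRTOS.h"
#include "queue.h"
#include "croutine.h"

extern QueueHandle_t xCommsRxQueue;     /* assumed: item size sizeof( char ) */

void vUART_RxISR( void )
{
char cRxedChar;
BaseType_t xCRWokenByPost = pdFALSE;

    /* Drain the hardware FIFO; passing the flag back in limits wakeups
    to one co-routine for the whole interrupt. */
    while( UART_RX_DATA_AVAILABLE() )   /* assumed hardware macro */
    {
        cRxedChar = UART_RX_REGISTER;   /* assumed hardware macro */
        xCRWokenByPost = crQUEUE_SEND_FROM_ISR( xCommsRxQueue, &cRxedChar, xCRWokenByPost );
    }
}
#endif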
dflet 0:50cedd586816 2287
dflet 0:50cedd586816 2288 #if ( configUSE_CO_ROUTINES == 1 )
dflet 0:50cedd586816 2289
dflet 0:50cedd586816 2290 BaseType_t xQueueCRReceiveFromISR( QueueHandle_t xQueue, void *pvBuffer, BaseType_t *pxCoRoutineWoken )
dflet 0:50cedd586816 2291 {
dflet 0:50cedd586816 2292 BaseType_t xReturn;
dflet 0:50cedd586816 2293 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
dflet 0:50cedd586816 2294
dflet 0:50cedd586816 2295 /* We cannot block from an ISR, so check there is data available. If
dflet 0:50cedd586816 2296 not then just leave without doing anything. */
dflet 0:50cedd586816 2297 if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
dflet 0:50cedd586816 2298 {
dflet 0:50cedd586816 2299 /* Copy the data from the queue. */
dflet 0:50cedd586816 2300 pxQueue->u.pcReadFrom += pxQueue->uxItemSize;
dflet 0:50cedd586816 2301 if( pxQueue->u.pcReadFrom >= pxQueue->pcTail )
dflet 0:50cedd586816 2302 {
dflet 0:50cedd586816 2303 pxQueue->u.pcReadFrom = pxQueue->pcHead;
dflet 0:50cedd586816 2304 }
dflet 0:50cedd586816 2305 else
dflet 0:50cedd586816 2306 {
dflet 0:50cedd586816 2307 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2308 }
dflet 0:50cedd586816 2309 --( pxQueue->uxMessagesWaiting );
dflet 0:50cedd586816 2310 ( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( unsigned ) pxQueue->uxItemSize );
dflet 0:50cedd586816 2311
dflet 0:50cedd586816 2312 if( ( *pxCoRoutineWoken ) == pdFALSE )
dflet 0:50cedd586816 2313 {
dflet 0:50cedd586816 2314 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
dflet 0:50cedd586816 2315 {
dflet 0:50cedd586816 2316 if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
dflet 0:50cedd586816 2317 {
dflet 0:50cedd586816 2318 *pxCoRoutineWoken = pdTRUE;
dflet 0:50cedd586816 2319 }
dflet 0:50cedd586816 2320 else
dflet 0:50cedd586816 2321 {
dflet 0:50cedd586816 2322 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2323 }
dflet 0:50cedd586816 2324 }
dflet 0:50cedd586816 2325 else
dflet 0:50cedd586816 2326 {
dflet 0:50cedd586816 2327 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2328 }
dflet 0:50cedd586816 2329 }
dflet 0:50cedd586816 2330 else
dflet 0:50cedd586816 2331 {
dflet 0:50cedd586816 2332 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2333 }
dflet 0:50cedd586816 2334
dflet 0:50cedd586816 2335 xReturn = pdPASS;
dflet 0:50cedd586816 2336 }
dflet 0:50cedd586816 2337 else
dflet 0:50cedd586816 2338 {
dflet 0:50cedd586816 2339 xReturn = pdFAIL;
dflet 0:50cedd586816 2340 }
dflet 0:50cedd586816 2341
dflet 0:50cedd586816 2342 return xReturn;
dflet 0:50cedd586816 2343 }
dflet 0:50cedd586816 2344
dflet 0:50cedd586816 2345 #endif /* configUSE_CO_ROUTINES */
dflet 0:50cedd586816 2346 /*-----------------------------------------------------------*/
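/* Editorial note: the ISR-side receive counterpart, an illustrative
   sketch rather than part of the original file.  A transmit interrupt
   pulls queued characters through crQUEUE_RECEIVE_FROM_ISR() until the
   queue is drained.  The UART macros and queue handle are assumptions. */
#if 0 /* illustrative sketch only - not compiled */
#include "FreeRTOS.h"
#include "queue.h"
#include "croutine.h"

extern QueueHandle_t xCommsTxQueue;     /* assumed: item size sizeof( char ) */

void vUART_TxISR( void )
{
char cCharToTx;
BaseType_t xCRWokenByPost = pdFALSE;

    /* Refill the transmit register while queued characters remain. */
    while( UART_TX_REGISTER_EMPTY() )   /* assumed hardware macro */
    {
        if( crQUEUE_RECEIVE_FROM_ISR( xCommsTxQueue, &cCharToTx, &xCRWokenByPost ) == pdPASS )
        {
            UART_TX_REGISTER = cCharToTx;   /* assumed hardware macro */
        }
        else
        {
            break; /* Nothing left to send. */
        }
    }
}
#endif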
dflet 0:50cedd586816 2347
dflet 0:50cedd586816 2348 #if ( configQUEUE_REGISTRY_SIZE > 0 )
dflet 0:50cedd586816 2349
dflet 0:50cedd586816 2350 void vQueueAddToRegistry( QueueHandle_t xQueue, const char *pcQueueName ) /*lint !e971 Unqualified char types are allowed for strings and single characters only. */
dflet 0:50cedd586816 2351 {
dflet 0:50cedd586816 2352 UBaseType_t ux;
dflet 0:50cedd586816 2353
dflet 0:50cedd586816 2354 /* See if there is an empty space in the registry. A NULL name denotes
dflet 0:50cedd586816 2355 a free slot. */
dflet 0:50cedd586816 2356 for( ux = ( UBaseType_t ) 0U; ux < ( UBaseType_t ) configQUEUE_REGISTRY_SIZE; ux++ )
dflet 0:50cedd586816 2357 {
dflet 0:50cedd586816 2358 if( xQueueRegistry[ ux ].pcQueueName == NULL )
dflet 0:50cedd586816 2359 {
dflet 0:50cedd586816 2360 /* Store the information on this queue. */
dflet 0:50cedd586816 2361 xQueueRegistry[ ux ].pcQueueName = pcQueueName;
dflet 0:50cedd586816 2362 xQueueRegistry[ ux ].xHandle = xQueue;
dflet 0:50cedd586816 2363
dflet 0:50cedd586816 2364 traceQUEUE_REGISTRY_ADD( xQueue, pcQueueName );
dflet 0:50cedd586816 2365 break;
dflet 0:50cedd586816 2366 }
dflet 0:50cedd586816 2367 else
dflet 0:50cedd586816 2368 {
dflet 0:50cedd586816 2369 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2370 }
dflet 0:50cedd586816 2371 }
dflet 0:50cedd586816 2372 }
dflet 0:50cedd586816 2373
dflet 0:50cedd586816 2374 #endif /* configQUEUE_REGISTRY_SIZE */
dflet 0:50cedd586816 2375 /*-----------------------------------------------------------*/
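/* Editorial note: a minimal usage sketch, not part of the original file.
   The registry only aids kernel-aware debuggers; it has no effect on
   scheduling.  Because vQueueAddToRegistry() stores the name pointer
   rather than copying the string, the name must stay valid for as long as
   the queue is registered - a string literal is the usual choice.  Names
   below are illustrative. */
#if 0 /* illustrative sketch only - not compiled */
#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"

void vCreateAndRegisterQueue( void )
{
QueueHandle_t xSensorQueue = xQueueCreate( 10, sizeof( uint32_t ) );

    if( xSensorQueue != NULL )
    {
        vQueueAddToRegistry( xSensorQueue, "SensorQ" );
    }
}
#endif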
dflet 0:50cedd586816 2376
dflet 0:50cedd586816 2377 #if ( configQUEUE_REGISTRY_SIZE > 0 )
dflet 0:50cedd586816 2378
dflet 0:50cedd586816 2379 void vQueueUnregisterQueue( QueueHandle_t xQueue )
dflet 0:50cedd586816 2380 {
dflet 0:50cedd586816 2381 UBaseType_t ux;
dflet 0:50cedd586816 2382
dflet 0:50cedd586816 2383 /* See if the handle of the queue being unregistered is actually in the
dflet 0:50cedd586816 2384 registry. */
dflet 0:50cedd586816 2385 for( ux = ( UBaseType_t ) 0U; ux < ( UBaseType_t ) configQUEUE_REGISTRY_SIZE; ux++ )
dflet 0:50cedd586816 2386 {
dflet 0:50cedd586816 2387 if( xQueueRegistry[ ux ].xHandle == xQueue )
dflet 0:50cedd586816 2388 {
dflet 0:50cedd586816 2389 /* Set the name to NULL to show that this slot is free again. */
dflet 0:50cedd586816 2390 xQueueRegistry[ ux ].pcQueueName = NULL;
dflet 0:50cedd586816 2391 break;
dflet 0:50cedd586816 2392 }
dflet 0:50cedd586816 2393 else
dflet 0:50cedd586816 2394 {
dflet 0:50cedd586816 2395 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2396 }
dflet 0:50cedd586816 2397 }
dflet 0:50cedd586816 2398
dflet 0:50cedd586816 2399 } /*lint !e818 xQueue could not be pointer to const because it is a typedef. */
dflet 0:50cedd586816 2400
dflet 0:50cedd586816 2401 #endif /* configQUEUE_REGISTRY_SIZE */
dflet 0:50cedd586816 2402 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 2403
dflet 0:50cedd586816 2404 #if ( configUSE_TIMERS == 1 )
dflet 0:50cedd586816 2405
dflet 0:50cedd586816 2406 void vQueueWaitForMessageRestricted( QueueHandle_t xQueue, TickType_t xTicksToWait )
dflet 0:50cedd586816 2407 {
dflet 0:50cedd586816 2408 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
dflet 0:50cedd586816 2409
dflet 0:50cedd586816 2410 /* This function should not be called by application code, hence the
dflet 0:50cedd586816 2411 'Restricted' in its name. It is not part of the public API. It is
dflet 0:50cedd586816 2412 designed for use by kernel code, and has special calling requirements.
dflet 0:50cedd586816 2413 It can result in vListInsert() being called on a list that can only
dflet 0:50cedd586816 2414 possibly ever have one item in it, so the list will be fast, but even
dflet 0:50cedd586816 2415 so it should be called with the scheduler locked and not from a critical
dflet 0:50cedd586816 2416 section. */
dflet 0:50cedd586816 2417
dflet 0:50cedd586816 2418 /* Only do anything if there are no messages in the queue. This function
dflet 0:50cedd586816 2419 will not actually cause the task to block, just place it on a blocked
dflet 0:50cedd586816 2420 list. It will not block until the scheduler is unlocked - at which
dflet 0:50cedd586816 2421 time a yield will be performed. If an item is added to the queue while
dflet 0:50cedd586816 2422 the queue is locked, and the calling task blocks on the queue, then the
dflet 0:50cedd586816 2423 calling task will be immediately unblocked when the queue is unlocked. */
dflet 0:50cedd586816 2424 prvLockQueue( pxQueue );
dflet 0:50cedd586816 2425 if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0U )
dflet 0:50cedd586816 2426 {
dflet 0:50cedd586816 2427 /* There is nothing in the queue, block for the specified period. */
dflet 0:50cedd586816 2428 vTaskPlaceOnEventListRestricted( &( pxQueue->xTasksWaitingToReceive ), xTicksToWait );
dflet 0:50cedd586816 2429 }
dflet 0:50cedd586816 2430 else
dflet 0:50cedd586816 2431 {
dflet 0:50cedd586816 2432 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2433 }
dflet 0:50cedd586816 2434 prvUnlockQueue( pxQueue );
dflet 0:50cedd586816 2435 }
dflet 0:50cedd586816 2436
dflet 0:50cedd586816 2437 #endif /* configUSE_TIMERS */
dflet 0:50cedd586816 2438 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 2439
dflet 0:50cedd586816 2440 #if ( configUSE_QUEUE_SETS == 1 )
dflet 0:50cedd586816 2441
dflet 0:50cedd586816 2442 QueueSetHandle_t xQueueCreateSet( const UBaseType_t uxEventQueueLength )
dflet 0:50cedd586816 2443 {
dflet 0:50cedd586816 2444 QueueSetHandle_t pxQueue;
dflet 0:50cedd586816 2445
dflet 0:50cedd586816 2446 pxQueue = xQueueGenericCreate( uxEventQueueLength, sizeof( Queue_t * ), queueQUEUE_TYPE_SET );
dflet 0:50cedd586816 2447
dflet 0:50cedd586816 2448 return pxQueue;
dflet 0:50cedd586816 2449 }
dflet 0:50cedd586816 2450
dflet 0:50cedd586816 2451 #endif /* configUSE_QUEUE_SETS */
dflet 0:50cedd586816 2452 /*-----------------------------------------------------------*/
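/* Editorial note: an illustrative sketch, not part of the original file.
   The event queue length passed to xQueueCreateSet() must be at least the
   sum of the lengths of the member queues (counting 1 per binary
   semaphore), because every item posted to a member also posts one handle
   into the set.  Names are assumptions. */
#if 0 /* illustrative sketch only - not compiled */
#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"

#define Q1_LEN 5
#define Q2_LEN 3

QueueHandle_t xQueue1, xQueue2;
QueueSetHandle_t xEventSet;

void vCreateSet( void )
{
    xQueue1 = xQueueCreate( Q1_LEN, sizeof( uint32_t ) );
    xQueue2 = xQueueCreate( Q2_LEN, sizeof( uint32_t ) );

    /* One slot in the set for every item the members can hold. */
    xEventSet = xQueueCreateSet( Q1_LEN + Q2_LEN );
}
#endif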
dflet 0:50cedd586816 2453
dflet 0:50cedd586816 2454 #if ( configUSE_QUEUE_SETS == 1 )
dflet 0:50cedd586816 2455
dflet 0:50cedd586816 2456 BaseType_t xQueueAddToSet( QueueSetMemberHandle_t xQueueOrSemaphore, QueueSetHandle_t xQueueSet )
dflet 0:50cedd586816 2457 {
dflet 0:50cedd586816 2458 BaseType_t xReturn;
dflet 0:50cedd586816 2459
dflet 0:50cedd586816 2460 taskENTER_CRITICAL();
dflet 0:50cedd586816 2461 {
dflet 0:50cedd586816 2462 if( ( ( Queue_t * ) xQueueOrSemaphore )->pxQueueSetContainer != NULL )
dflet 0:50cedd586816 2463 {
dflet 0:50cedd586816 2464 /* Cannot add a queue/semaphore to more than one queue set. */
dflet 0:50cedd586816 2465 xReturn = pdFAIL;
dflet 0:50cedd586816 2466 }
dflet 0:50cedd586816 2467 else if( ( ( Queue_t * ) xQueueOrSemaphore )->uxMessagesWaiting != ( UBaseType_t ) 0 )
dflet 0:50cedd586816 2468 {
dflet 0:50cedd586816 2469 /* Cannot add a queue/semaphore to a queue set if there are already
dflet 0:50cedd586816 2470 items in the queue/semaphore. */
dflet 0:50cedd586816 2471 xReturn = pdFAIL;
dflet 0:50cedd586816 2472 }
dflet 0:50cedd586816 2473 else
dflet 0:50cedd586816 2474 {
dflet 0:50cedd586816 2475 ( ( Queue_t * ) xQueueOrSemaphore )->pxQueueSetContainer = xQueueSet;
dflet 0:50cedd586816 2476 xReturn = pdPASS;
dflet 0:50cedd586816 2477 }
dflet 0:50cedd586816 2478 }
dflet 0:50cedd586816 2479 taskEXIT_CRITICAL();
dflet 0:50cedd586816 2480
dflet 0:50cedd586816 2481 return xReturn;
dflet 0:50cedd586816 2482 }
dflet 0:50cedd586816 2483
dflet 0:50cedd586816 2484 #endif /* configUSE_QUEUE_SETS */
dflet 0:50cedd586816 2485 /*-----------------------------------------------------------*/
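/* Editorial note: a short sketch, not part of the original file,
   continuing the example above.  As the pdFAIL branches in
   xQueueAddToSet() enforce, a member must be empty and not already in
   another set when it is added, so membership is normally established
   immediately after creation, before the scheduler starts using the
   queues. */
#if 0 /* illustrative sketch only - not compiled */
extern QueueHandle_t xQueue1, xQueue2;
extern QueueSetHandle_t xEventSet;

void vPopulateSet( void )
{
    configASSERT( xQueueAddToSet( xQueue1, xEventSet ) == pdPASS );
    configASSERT( xQueueAddToSet( xQueue2, xEventSet ) == pdPASS );
}
#endif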
dflet 0:50cedd586816 2486
dflet 0:50cedd586816 2487 #if ( configUSE_QUEUE_SETS == 1 )
dflet 0:50cedd586816 2488
dflet 0:50cedd586816 2489 BaseType_t xQueueRemoveFromSet( QueueSetMemberHandle_t xQueueOrSemaphore, QueueSetHandle_t xQueueSet )
dflet 0:50cedd586816 2490 {
dflet 0:50cedd586816 2491 BaseType_t xReturn;
dflet 0:50cedd586816 2492 Queue_t * const pxQueueOrSemaphore = ( Queue_t * ) xQueueOrSemaphore;
dflet 0:50cedd586816 2493
dflet 0:50cedd586816 2494 if( pxQueueOrSemaphore->pxQueueSetContainer != xQueueSet )
dflet 0:50cedd586816 2495 {
dflet 0:50cedd586816 2496 /* The queue was not a member of the set. */
dflet 0:50cedd586816 2497 xReturn = pdFAIL;
dflet 0:50cedd586816 2498 }
dflet 0:50cedd586816 2499 else if( pxQueueOrSemaphore->uxMessagesWaiting != ( UBaseType_t ) 0 )
dflet 0:50cedd586816 2500 {
dflet 0:50cedd586816 2501 /* It is dangerous to remove a queue from a set when the queue is
dflet 0:50cedd586816 2502 not empty because the queue set will still hold pending events for
dflet 0:50cedd586816 2503 the queue. */
dflet 0:50cedd586816 2504 xReturn = pdFAIL;
dflet 0:50cedd586816 2505 }
dflet 0:50cedd586816 2506 else
dflet 0:50cedd586816 2507 {
dflet 0:50cedd586816 2508 taskENTER_CRITICAL();
dflet 0:50cedd586816 2509 {
dflet 0:50cedd586816 2510 /* The queue is no longer contained in the set. */
dflet 0:50cedd586816 2511 pxQueueOrSemaphore->pxQueueSetContainer = NULL;
dflet 0:50cedd586816 2512 }
dflet 0:50cedd586816 2513 taskEXIT_CRITICAL();
dflet 0:50cedd586816 2514 xReturn = pdPASS;
dflet 0:50cedd586816 2515 }
dflet 0:50cedd586816 2516
dflet 0:50cedd586816 2517 return xReturn;
dflet 0:50cedd586816 2518 } /*lint !e818 xQueueSet could not be declared as pointing to const as it is a typedef. */
dflet 0:50cedd586816 2519
dflet 0:50cedd586816 2520 #endif /* configUSE_QUEUE_SETS */
dflet 0:50cedd586816 2521 /*-----------------------------------------------------------*/
dflet 0:50cedd586816 2522
dflet 0:50cedd586816 2523 #if ( configUSE_QUEUE_SETS == 1 )
dflet 0:50cedd586816 2524
dflet 0:50cedd586816 2525 QueueSetMemberHandle_t xQueueSelectFromSet( QueueSetHandle_t xQueueSet, TickType_t const xTicksToWait )
dflet 0:50cedd586816 2526 {
dflet 0:50cedd586816 2527 QueueSetMemberHandle_t xReturn = NULL;
dflet 0:50cedd586816 2528
dflet 0:50cedd586816 2529 ( void ) xQueueGenericReceive( ( QueueHandle_t ) xQueueSet, &xReturn, xTicksToWait, pdFALSE ); /*lint !e961 Casting from one typedef to another is not redundant. */
dflet 0:50cedd586816 2530 return xReturn;
dflet 0:50cedd586816 2531 }
dflet 0:50cedd586816 2532
dflet 0:50cedd586816 2533 #endif /* configUSE_QUEUE_SETS */
dflet 0:50cedd586816 2534 /*-----------------------------------------------------------*/
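/* Editorial note: an illustrative sketch, not part of the original file,
   of the usual xQueueSelectFromSet() pattern.  The returned handle
   identifies which member holds data, so the subsequent receive can safely
   use a zero block time.  Task and handle names are assumptions. */
#if 0 /* illustrative sketch only - not compiled */
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

extern QueueHandle_t xQueue1, xQueue2;
extern QueueSetHandle_t xEventSet;

static void vEventHandlerTask( void *pvParameters )
{
QueueSetMemberHandle_t xActivated;
uint32_t ulValue;

    ( void ) pvParameters;

    for( ;; )
    {
        /* Block until any member of the set contains data. */
        xActivated = xQueueSelectFromSet( xEventSet, portMAX_DELAY );

        if( xActivated == ( QueueSetMemberHandle_t ) xQueue1 )
        {
            ( void ) xQueueReceive( xQueue1, &ulValue, 0 );
        }
        else if( xActivated == ( QueueSetMemberHandle_t ) xQueue2 )
        {
            ( void ) xQueueReceive( xQueue2, &ulValue, 0 );
        }
    }
}
#endif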
dflet 0:50cedd586816 2535
dflet 0:50cedd586816 2536 #if ( configUSE_QUEUE_SETS == 1 )
dflet 0:50cedd586816 2537
dflet 0:50cedd586816 2538 QueueSetMemberHandle_t xQueueSelectFromSetFromISR( QueueSetHandle_t xQueueSet )
dflet 0:50cedd586816 2539 {
dflet 0:50cedd586816 2540 QueueSetMemberHandle_t xReturn = NULL;
dflet 0:50cedd586816 2541
dflet 0:50cedd586816 2542 ( void ) xQueueReceiveFromISR( ( QueueHandle_t ) xQueueSet, &xReturn, NULL ); /*lint !e961 Casting from one typedef to another is not redundant. */
dflet 0:50cedd586816 2543 return xReturn;
dflet 0:50cedd586816 2544 }
dflet 0:50cedd586816 2545
dflet 0:50cedd586816 2546 #endif /* configUSE_QUEUE_SETS */
dflet 0:50cedd586816 2547 /*-----------------------------------------------------------*/
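/* Editorial note: a brief sketch, not part of the original file.  Unlike
   the task-level version, the ISR variant cannot block: it returns NULL
   immediately when no member holds data.  portYIELD_FROM_ISR() is
   port-specific; handle and ISR names are assumptions. */
#if 0 /* illustrative sketch only - not compiled */
#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"

extern QueueSetHandle_t xEventSet;

void vSomeISR( void )
{
QueueSetMemberHandle_t xActivated;
uint32_t ulValue;
BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    xActivated = xQueueSelectFromSetFromISR( xEventSet );
    if( xActivated != NULL )
    {
        ( void ) xQueueReceiveFromISR( ( QueueHandle_t ) xActivated, &ulValue, &xHigherPriorityTaskWoken );
    }

    portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}
#endif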
dflet 0:50cedd586816 2548
dflet 0:50cedd586816 2549 #if ( configUSE_QUEUE_SETS == 1 )
dflet 0:50cedd586816 2550
dflet 0:50cedd586816 2551 static BaseType_t prvNotifyQueueSetContainer( const Queue_t * const pxQueue, const BaseType_t xCopyPosition )
dflet 0:50cedd586816 2552 {
dflet 0:50cedd586816 2553 Queue_t *pxQueueSetContainer = pxQueue->pxQueueSetContainer;
dflet 0:50cedd586816 2554 BaseType_t xReturn = pdFALSE;
dflet 0:50cedd586816 2555
dflet 0:50cedd586816 2556 /* This function must be called from a critical section. */
dflet 0:50cedd586816 2557
dflet 0:50cedd586816 2558 configASSERT( pxQueueSetContainer );
dflet 0:50cedd586816 2559 configASSERT( pxQueueSetContainer->uxMessagesWaiting < pxQueueSetContainer->uxLength );
dflet 0:50cedd586816 2560
dflet 0:50cedd586816 2561 if( pxQueueSetContainer->uxMessagesWaiting < pxQueueSetContainer->uxLength )
dflet 0:50cedd586816 2562 {
dflet 0:50cedd586816 2563 traceQUEUE_SEND( pxQueueSetContainer );
dflet 0:50cedd586816 2564
dflet 0:50cedd586816 2565 /* The data copied is the handle of the queue that contains data. */
dflet 0:50cedd586816 2566 xReturn = prvCopyDataToQueue( pxQueueSetContainer, &pxQueue, xCopyPosition );
dflet 0:50cedd586816 2567
dflet 0:50cedd586816 2568 if( pxQueueSetContainer->xTxLock == queueUNLOCKED )
dflet 0:50cedd586816 2569 {
dflet 0:50cedd586816 2570 if( listLIST_IS_EMPTY( &( pxQueueSetContainer->xTasksWaitingToReceive ) ) == pdFALSE )
dflet 0:50cedd586816 2571 {
dflet 0:50cedd586816 2572 if( xTaskRemoveFromEventList( &( pxQueueSetContainer->xTasksWaitingToReceive ) ) != pdFALSE )
dflet 0:50cedd586816 2573 {
dflet 0:50cedd586816 2574 /* The task waiting has a higher priority. */
dflet 0:50cedd586816 2575 xReturn = pdTRUE;
dflet 0:50cedd586816 2576 }
dflet 0:50cedd586816 2577 else
dflet 0:50cedd586816 2578 {
dflet 0:50cedd586816 2579 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2580 }
dflet 0:50cedd586816 2581 }
dflet 0:50cedd586816 2582 else
dflet 0:50cedd586816 2583 {
dflet 0:50cedd586816 2584 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2585 }
dflet 0:50cedd586816 2586 }
dflet 0:50cedd586816 2587 else
dflet 0:50cedd586816 2588 {
dflet 0:50cedd586816 2589 ( pxQueueSetContainer->xTxLock )++;
dflet 0:50cedd586816 2590 }
dflet 0:50cedd586816 2591 }
dflet 0:50cedd586816 2592 else
dflet 0:50cedd586816 2593 {
dflet 0:50cedd586816 2594 mtCOVERAGE_TEST_MARKER();
dflet 0:50cedd586816 2595 }
dflet 0:50cedd586816 2596
dflet 0:50cedd586816 2597 return xReturn;
dflet 0:50cedd586816 2598 }
dflet 0:50cedd586816 2599
dflet 0:50cedd586816 2600 #endif /* configUSE_QUEUE_SETS */
dflet 0:50cedd586816 2601
dflet 0:50cedd586816 2602
dflet 0:50cedd586816 2603
dflet 0:50cedd586816 2604
dflet 0:50cedd586816 2605
dflet 0:50cedd586816 2606
dflet 0:50cedd586816 2607
dflet 0:50cedd586816 2608
dflet 0:50cedd586816 2609
dflet 0:50cedd586816 2610
dflet 0:50cedd586816 2611
dflet 0:50cedd586816 2612
dflet 0:50cedd586816 2613