Francisco Paez / freertos-cm3

Dependents:   mbed_lpc1768_freertos_lib

Committer:
fep
Date:
Wed May 31 02:36:43 2017 +0000
Revision:
0:5ff20db10a96
FreeRTOS v9.0.0 for ARM Cortex-M3 based boards.

/*
    FreeRTOS V9.0.0 - Copyright (C) 2016 Real Time Engineers Ltd.
    All rights reserved

    VISIT http://www.FreeRTOS.org TO ENSURE YOU ARE USING THE LATEST VERSION.

    This file is part of the FreeRTOS distribution.

    FreeRTOS is free software; you can redistribute it and/or modify it under
    the terms of the GNU General Public License (version 2) as published by the
    Free Software Foundation >>>> AND MODIFIED BY <<<< the FreeRTOS exception.

    ***************************************************************************
    >>! NOTE: The modification to the GPL is included to allow you to     !<<
    >>! distribute a combined work that includes FreeRTOS without being   !<<
    >>! obliged to provide the source code for proprietary components     !<<
    >>! outside of the FreeRTOS kernel.                                   !<<
    ***************************************************************************

    FreeRTOS is distributed in the hope that it will be useful, but WITHOUT ANY
    WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
    FOR A PARTICULAR PURPOSE.  Full license text is available on the following
    link: http://www.freertos.org/a00114.html

    ***************************************************************************
     *                                                                       *
     *    FreeRTOS provides completely free yet professionally developed,    *
     *    robust, strictly quality controlled, supported, and cross          *
     *    platform software that is more than just the market leader, it     *
     *    is the industry's de facto standard.                               *
     *                                                                       *
     *    Help yourself get started quickly while simultaneously helping     *
     *    to support the FreeRTOS project by purchasing a FreeRTOS           *
     *    tutorial book, reference manual, or both:                          *
     *    http://www.FreeRTOS.org/Documentation                              *
     *                                                                       *
    ***************************************************************************

    http://www.FreeRTOS.org/FAQHelp.html - Having a problem?  Start by reading
    the FAQ page "My application does not run, what could be wrong?".  Have you
    defined configASSERT()?

    http://www.FreeRTOS.org/support - In return for receiving this top quality
    embedded software for free we request you assist our global community by
    participating in the support forum.

    http://www.FreeRTOS.org/training - Investing in training allows your team to
    be as productive as possible as early as possible.  Now you can receive
    FreeRTOS training directly from Richard Barry, CEO of Real Time Engineers
    Ltd, and the world's leading authority on the world's leading RTOS.

    http://www.FreeRTOS.org/plus - A selection of FreeRTOS ecosystem products,
    including FreeRTOS+Trace - an indispensable productivity tool, a DOS
    compatible FAT file system, and our tiny thread aware UDP/IP stack.

    http://www.FreeRTOS.org/labs - Where new FreeRTOS products go to incubate.
    Come and try FreeRTOS+TCP, our new open source TCP/IP stack for FreeRTOS.

    http://www.OpenRTOS.com - Real Time Engineers ltd. license FreeRTOS to High
    Integrity Systems ltd. to sell under the OpenRTOS brand.  Low cost OpenRTOS
    licenses offer ticketed support, indemnification and commercial middleware.

    http://www.SafeRTOS.com - High Integrity Systems also provide a safety
    engineered and independently SIL3 certified version for use in safety and
    mission critical applications that require provable dependability.

    1 tab == 4 spaces!
*/

#include <stdlib.h>
#include <string.h>

/* Defining MPU_WRAPPERS_INCLUDED_FROM_API_FILE prevents task.h from redefining
all the API functions to use the MPU wrappers.  That should only be done when
task.h is included from an application file. */
#define MPU_WRAPPERS_INCLUDED_FROM_API_FILE

#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

#if ( configUSE_CO_ROUTINES == 1 )
    #include "croutine.h"
#endif

/* Lint e961 and e750 are suppressed as a MISRA exception justified because the
MPU ports require MPU_WRAPPERS_INCLUDED_FROM_API_FILE to be defined for the
header files above, but not in this file, in order to generate the correct
privileged Vs unprivileged linkage and placement. */
#undef MPU_WRAPPERS_INCLUDED_FROM_API_FILE /*lint !e961 !e750. */

/* Constants used with the cRxLock and cTxLock structure members. */
#define queueUNLOCKED                   ( ( int8_t ) -1 )
#define queueLOCKED_UNMODIFIED          ( ( int8_t ) 0 )

/* When the Queue_t structure is used to represent a base queue its pcHead and
pcTail members are used as pointers into the queue storage area.  When the
Queue_t structure is used to represent a mutex pcHead and pcTail pointers are
not necessary, and the pcHead pointer is set to NULL to indicate that the
pcTail pointer actually points to the mutex holder (if any).  Map alternative
names to the pcHead and pcTail structure members to ensure the readability of
the code is maintained despite this dual use of two structure members.  An
alternative implementation would be to use a union, but use of a union is
against the coding standard (although an exception to the standard has been
permitted where the dual use also significantly changes the type of the
structure member). */
#define pxMutexHolder                   pcTail
#define uxQueueType                     pcHead
#define queueQUEUE_IS_MUTEX             NULL

/* Semaphores do not actually store or copy data, so have an item size of
zero. */
#define queueSEMAPHORE_QUEUE_ITEM_LENGTH    ( ( UBaseType_t ) 0 )
#define queueMUTEX_GIVE_BLOCK_TIME          ( ( TickType_t ) 0U )

#if( configUSE_PREEMPTION == 0 )
    /* If the cooperative scheduler is being used then a yield should not be
    performed just because a higher priority task has been woken. */
    #define queueYIELD_IF_USING_PREEMPTION()
#else
    #define queueYIELD_IF_USING_PREEMPTION() portYIELD_WITHIN_API()
#endif

/*
 * Definition of the queue used by the scheduler.
 * Items are queued by copy, not reference.  See the following link for the
 * rationale: http://www.freertos.org/Embedded-RTOS-Queues.html
 */
typedef struct QueueDefinition
{
    int8_t *pcHead;                 /*< Points to the beginning of the queue storage area. */
    int8_t *pcTail;                 /*< Points to the byte at the end of the queue storage area.  One more byte is allocated than necessary to store the queue items; this is used as a marker. */
    int8_t *pcWriteTo;              /*< Points to the next free place in the storage area. */

    union                           /* Use of a union is an exception to the coding standard to ensure two mutually exclusive structure members don't appear simultaneously (wasting RAM). */
    {
        int8_t *pcReadFrom;         /*< Points to the last place that a queued item was read from when the structure is used as a queue. */
        UBaseType_t uxRecursiveCallCount;/*< Maintains a count of the number of times a recursive mutex has been recursively 'taken' when the structure is used as a mutex. */
    } u;

    List_t xTasksWaitingToSend;     /*< List of tasks that are blocked waiting to post onto this queue.  Stored in priority order. */
    List_t xTasksWaitingToReceive;  /*< List of tasks that are blocked waiting to read from this queue.  Stored in priority order. */

    volatile UBaseType_t uxMessagesWaiting;/*< The number of items currently in the queue. */
    UBaseType_t uxLength;           /*< The length of the queue defined as the number of items it will hold, not the number of bytes. */
    UBaseType_t uxItemSize;         /*< The size of each item that the queue will hold. */

    volatile int8_t cRxLock;        /*< Stores the number of items received from the queue (removed from the queue) while the queue was locked.  Set to queueUNLOCKED when the queue is not locked. */
    volatile int8_t cTxLock;        /*< Stores the number of items transmitted to the queue (added to the queue) while the queue was locked.  Set to queueUNLOCKED when the queue is not locked. */

    #if( ( configSUPPORT_STATIC_ALLOCATION == 1 ) && ( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) )
        uint8_t ucStaticallyAllocated;  /*< Set to pdTRUE if the memory used by the queue was statically allocated to ensure no attempt is made to free the memory. */
    #endif

    #if ( configUSE_QUEUE_SETS == 1 )
        struct QueueDefinition *pxQueueSetContainer;
    #endif

    #if ( configUSE_TRACE_FACILITY == 1 )
        UBaseType_t uxQueueNumber;
        uint8_t ucQueueType;
    #endif

} xQUEUE;

/* The old xQUEUE name is maintained above then typedefed to the new Queue_t
name below to enable the use of older kernel aware debuggers. */
typedef xQUEUE Queue_t;

/*-----------------------------------------------------------*/

/*
 * The queue registry is just a means for kernel aware debuggers to locate
 * queue structures.  It has no other purpose so is an optional component.
 */
#if ( configQUEUE_REGISTRY_SIZE > 0 )

    /* The type stored within the queue registry array.  This allows a name
    to be assigned to each queue making kernel aware debugging a little
    more user friendly. */
    typedef struct QUEUE_REGISTRY_ITEM
    {
        const char *pcQueueName; /*lint !e971 Unqualified char types are allowed for strings and single characters only. */
        QueueHandle_t xHandle;
    } xQueueRegistryItem;

    /* The old xQueueRegistryItem name is maintained above then typedefed to
    the new QueueRegistryItem_t name below to enable the use of older kernel
    aware debuggers. */
    typedef xQueueRegistryItem QueueRegistryItem_t;

    /* The queue registry is simply an array of QueueRegistryItem_t structures.
    The pcQueueName member of a structure being NULL is indicative of the
    array position being vacant. */
    PRIVILEGED_DATA QueueRegistryItem_t xQueueRegistry[ configQUEUE_REGISTRY_SIZE ];

#endif /* configQUEUE_REGISTRY_SIZE */

/*
 * Unlocks a queue locked by a call to prvLockQueue.  Locking a queue does not
 * prevent an ISR from adding or removing items to the queue, but does prevent
 * an ISR from removing tasks from the queue event lists.  If an ISR finds a
 * queue is locked it will instead increment the appropriate queue lock count
 * to indicate that a task may require unblocking.  When the queue is unlocked
 * these lock counts are inspected, and the appropriate action taken.
 */
static void prvUnlockQueue( Queue_t * const pxQueue ) PRIVILEGED_FUNCTION;

/*
 * Uses a critical section to determine if there is any data in a queue.
 *
 * @return pdTRUE if the queue contains no items, otherwise pdFALSE.
 */
static BaseType_t prvIsQueueEmpty( const Queue_t *pxQueue ) PRIVILEGED_FUNCTION;

/*
 * Uses a critical section to determine if there is any space in a queue.
 *
 * @return pdTRUE if there is no space, otherwise pdFALSE.
 */
static BaseType_t prvIsQueueFull( const Queue_t *pxQueue ) PRIVILEGED_FUNCTION;

/*
 * Copies an item into the queue, either at the front of the queue or the
 * back of the queue.
 */
static BaseType_t prvCopyDataToQueue( Queue_t * const pxQueue, const void *pvItemToQueue, const BaseType_t xPosition ) PRIVILEGED_FUNCTION;

/*
 * Copies an item out of a queue.
 */
static void prvCopyDataFromQueue( Queue_t * const pxQueue, void * const pvBuffer ) PRIVILEGED_FUNCTION;

#if ( configUSE_QUEUE_SETS == 1 )
    /*
     * Checks to see if a queue is a member of a queue set, and if so, notifies
     * the queue set that the queue contains data.
     */
    static BaseType_t prvNotifyQueueSetContainer( const Queue_t * const pxQueue, const BaseType_t xCopyPosition ) PRIVILEGED_FUNCTION;
#endif

/*
 * Called after a Queue_t structure has been allocated either statically or
 * dynamically to fill in the structure's members.
 */
static void prvInitialiseNewQueue( const UBaseType_t uxQueueLength, const UBaseType_t uxItemSize, uint8_t *pucQueueStorage, const uint8_t ucQueueType, Queue_t *pxNewQueue ) PRIVILEGED_FUNCTION;

/*
 * Mutexes are a special type of queue.  When a mutex is created, first the
 * queue is created, then prvInitialiseMutex() is called to configure the queue
 * as a mutex.
 */
#if( configUSE_MUTEXES == 1 )
    static void prvInitialiseMutex( Queue_t *pxNewQueue ) PRIVILEGED_FUNCTION;
#endif

/*-----------------------------------------------------------*/

/*
 * Macro to mark a queue as locked.  Locking a queue prevents an ISR from
 * accessing the queue event lists.
 */
#define prvLockQueue( pxQueue )                                \
    taskENTER_CRITICAL();                                      \
    {                                                          \
        if( ( pxQueue )->cRxLock == queueUNLOCKED )            \
        {                                                      \
            ( pxQueue )->cRxLock = queueLOCKED_UNMODIFIED;     \
        }                                                      \
        if( ( pxQueue )->cTxLock == queueUNLOCKED )            \
        {                                                      \
            ( pxQueue )->cTxLock = queueLOCKED_UNMODIFIED;     \
        }                                                      \
    }                                                          \
    taskEXIT_CRITICAL()
/*-----------------------------------------------------------*/
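The lock-count protocol described above (an ISR increments a lock count instead of touching the event lists, and the deferred wake-ups are paid off when the queue is unlocked) can be modelled in isolation. This is a hypothetical standalone sketch, not kernel code: `model_lock`, `model_isr_send` and `model_unlock` are illustration-only names, and the kernel's real logic lives in `prvLockQueue()`, `prvUnlockQueue()` and the `...FromISR()` send paths.

```c
#include <assert.h>

#define queueUNLOCKED           ( ( signed char ) -1 )
#define queueLOCKED_UNMODIFIED  ( ( signed char ) 0 )

/* Hypothetical model of the cTxLock protocol: while a task holds the queue
locked, an ISR may not touch the event lists, so it only counts how many
wake-ups are owed.  Unlocking later pays that debt. */
static signed char cTxLock = queueUNLOCKED;

static void model_lock( void )
{
    if( cTxLock == queueUNLOCKED )
    {
        cTxLock = queueLOCKED_UNMODIFIED;
    }
}

/* Returns 1 if the ISR may unblock a waiting task directly, 0 if the wake-up
was deferred because the queue is locked. */
static int model_isr_send( void )
{
    if( cTxLock == queueUNLOCKED )
    {
        return 1;
    }
    ++cTxLock;
    return 0;
}

/* Returns the number of deferred wake-ups that must now be processed. */
static int model_unlock( void )
{
    int pending = ( cTxLock > 0 ) ? ( int ) cTxLock : 0;
    cTxLock = queueUNLOCKED;
    return pending;
}
```

In the real kernel the count lives per direction (cRxLock and cTxLock) so reads and writes deferred during the locked window are replayed independently.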

BaseType_t xQueueGenericReset( QueueHandle_t xQueue, BaseType_t xNewQueue )
{
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

    configASSERT( pxQueue );

    taskENTER_CRITICAL();
    {
        pxQueue->pcTail = pxQueue->pcHead + ( pxQueue->uxLength * pxQueue->uxItemSize );
        pxQueue->uxMessagesWaiting = ( UBaseType_t ) 0U;
        pxQueue->pcWriteTo = pxQueue->pcHead;
        pxQueue->u.pcReadFrom = pxQueue->pcHead + ( ( pxQueue->uxLength - ( UBaseType_t ) 1U ) * pxQueue->uxItemSize );
        pxQueue->cRxLock = queueUNLOCKED;
        pxQueue->cTxLock = queueUNLOCKED;

        if( xNewQueue == pdFALSE )
        {
            /* If there are tasks blocked waiting to read from the queue, then
            the tasks will remain blocked as after this function exits the queue
            will still be empty.  If there are tasks blocked waiting to write to
            the queue, then one should be unblocked as after this function exits
            it will be possible to write to it. */
            if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
            {
                if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
                {
                    queueYIELD_IF_USING_PREEMPTION();
                }
                else
                {
                    mtCOVERAGE_TEST_MARKER();
                }
            }
            else
            {
                mtCOVERAGE_TEST_MARKER();
            }
        }
        else
        {
            /* Ensure the event queues start in the correct state. */
            vListInitialise( &( pxQueue->xTasksWaitingToSend ) );
            vListInitialise( &( pxQueue->xTasksWaitingToReceive ) );
        }
    }
    taskEXIT_CRITICAL();

    /* A value is returned for calling semantic consistency with previous
    versions. */
    return pdPASS;
}
/*-----------------------------------------------------------*/
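The pointer arithmetic performed inside the critical section of xQueueGenericReset() can be checked in isolation. A standalone model under stated assumptions: `MiniQueue` and `mini_reset` are illustration-only names, not kernel symbols, and only the storage-pointer setup is mirrored.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustration-only mirror of the reset arithmetic in xQueueGenericReset(). */
typedef struct
{
    int8_t *pcHead, *pcTail, *pcWriteTo, *pcReadFrom;
    size_t uxLength, uxItemSize;
} MiniQueue;

static void mini_reset( MiniQueue *q, int8_t *pucStorage, size_t uxLength, size_t uxItemSize )
{
    q->uxLength = uxLength;
    q->uxItemSize = uxItemSize;
    q->pcHead = pucStorage;

    /* pcTail marks one byte past the last item slot. */
    q->pcTail = pucStorage + ( uxLength * uxItemSize );

    /* The next write goes to the head of the storage area... */
    q->pcWriteTo = pucStorage;

    /* ...while pcReadFrom starts at the LAST slot: it is advanced before use,
    so the first read wraps around to pcHead. */
    q->pcReadFrom = pucStorage + ( ( uxLength - 1 ) * uxItemSize );
}
```

For a queue of five 4-byte items the storage area is 20 bytes, pcWriteTo starts at the head, and pcReadFrom starts 16 bytes in (the last slot).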

#if( configSUPPORT_STATIC_ALLOCATION == 1 )

    QueueHandle_t xQueueGenericCreateStatic( const UBaseType_t uxQueueLength, const UBaseType_t uxItemSize, uint8_t *pucQueueStorage, StaticQueue_t *pxStaticQueue, const uint8_t ucQueueType )
    {
    Queue_t *pxNewQueue;

        configASSERT( uxQueueLength > ( UBaseType_t ) 0 );

        /* The StaticQueue_t structure and the queue storage area must be
        supplied. */
        configASSERT( pxStaticQueue != NULL );

        /* A queue storage area should be provided if the item size is not 0,
        and should not be provided if the item size is 0. */
        configASSERT( !( ( pucQueueStorage != NULL ) && ( uxItemSize == 0 ) ) );
        configASSERT( !( ( pucQueueStorage == NULL ) && ( uxItemSize != 0 ) ) );

        #if( configASSERT_DEFINED == 1 )
        {
            /* Sanity check that the size of the structure used to declare a
            variable of type StaticQueue_t or StaticSemaphore_t equals the size
            of the real queue and semaphore structures. */
            volatile size_t xSize = sizeof( StaticQueue_t );
            configASSERT( xSize == sizeof( Queue_t ) );
        }
        #endif /* configASSERT_DEFINED */

        /* The address of a statically allocated queue was passed in, use it.
        The address of a statically allocated storage area was also passed in
        but is already set. */
        pxNewQueue = ( Queue_t * ) pxStaticQueue; /*lint !e740 Unusual cast is ok as the structures are designed to have the same alignment, and the size is checked by an assert. */

        if( pxNewQueue != NULL )
        {
            #if( configSUPPORT_DYNAMIC_ALLOCATION == 1 )
            {
                /* Queues can be allocated either statically or dynamically, so
                note this queue was allocated statically in case the queue is
                later deleted. */
                pxNewQueue->ucStaticallyAllocated = pdTRUE;
            }
            #endif /* configSUPPORT_DYNAMIC_ALLOCATION */

            prvInitialiseNewQueue( uxQueueLength, uxItemSize, pucQueueStorage, ucQueueType, pxNewQueue );
        }

        return pxNewQueue;
    }

#endif /* configSUPPORT_STATIC_ALLOCATION */
/*-----------------------------------------------------------*/

#if( configSUPPORT_DYNAMIC_ALLOCATION == 1 )

    QueueHandle_t xQueueGenericCreate( const UBaseType_t uxQueueLength, const UBaseType_t uxItemSize, const uint8_t ucQueueType )
    {
    Queue_t *pxNewQueue;
    size_t xQueueSizeInBytes;
    uint8_t *pucQueueStorage;

        configASSERT( uxQueueLength > ( UBaseType_t ) 0 );

        if( uxItemSize == ( UBaseType_t ) 0 )
        {
            /* There is not going to be a queue storage area. */
            xQueueSizeInBytes = ( size_t ) 0;
        }
        else
        {
            /* Allocate enough space to hold the maximum number of items that
            can be in the queue at any time. */
            xQueueSizeInBytes = ( size_t ) ( uxQueueLength * uxItemSize ); /*lint !e961 MISRA exception as the casts are only redundant for some ports. */
        }

        pxNewQueue = ( Queue_t * ) pvPortMalloc( sizeof( Queue_t ) + xQueueSizeInBytes );

        if( pxNewQueue != NULL )
        {
            /* Jump past the queue structure to find the location of the queue
            storage area. */
            pucQueueStorage = ( ( uint8_t * ) pxNewQueue ) + sizeof( Queue_t );

            #if( configSUPPORT_STATIC_ALLOCATION == 1 )
            {
                /* Queues can be created either statically or dynamically, so
                note this queue was created dynamically in case it is later
                deleted. */
                pxNewQueue->ucStaticallyAllocated = pdFALSE;
            }
            #endif /* configSUPPORT_STATIC_ALLOCATION */

            prvInitialiseNewQueue( uxQueueLength, uxItemSize, pucQueueStorage, ucQueueType, pxNewQueue );
        }

        return pxNewQueue;
    }

#endif /* configSUPPORT_DYNAMIC_ALLOCATION */
/*-----------------------------------------------------------*/
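xQueueGenericCreate() is normally reached through the xQueueCreate() macro from queue.h. A minimal usage sketch, assuming a FreeRTOS project with a running scheduler; the queue length, item type and timeouts below are illustrative, not prescribed by this file:

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"

/* Illustrative example: an 8-deep queue of uint32_t values. */
void vExampleQueueUsage( void )
{
QueueHandle_t xQueue;
uint32_t ulValue = 42, ulReceived;

    xQueue = xQueueCreate( 8, sizeof( uint32_t ) );

    if( xQueue != NULL )
    {
        /* Items are queued by copy, not reference: the value of ulValue is
        copied into the queue storage area allocated above. */
        ( void ) xQueueSend( xQueue, &ulValue, pdMS_TO_TICKS( 10 ) );

        /* The receiving side gets its own copy back. */
        ( void ) xQueueReceive( xQueue, &ulReceived, pdMS_TO_TICKS( 10 ) );
    }
}
```

Because the item storage is carved out of the same pvPortMalloc() block as the Queue_t structure itself, a single allocation covers both the control structure and uxQueueLength * uxItemSize bytes of storage.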

static void prvInitialiseNewQueue( const UBaseType_t uxQueueLength, const UBaseType_t uxItemSize, uint8_t *pucQueueStorage, const uint8_t ucQueueType, Queue_t *pxNewQueue )
{
    /* Remove compiler warnings about unused parameters should
    configUSE_TRACE_FACILITY not be set to 1. */
    ( void ) ucQueueType;

    if( uxItemSize == ( UBaseType_t ) 0 )
    {
        /* No RAM was allocated for the queue storage area, but pcHead cannot
        be set to NULL because NULL is used as a key to say the queue is used
        as a mutex.  Therefore just set pcHead to point to the queue as a
        benign value that is known to be within the memory map. */
        pxNewQueue->pcHead = ( int8_t * ) pxNewQueue;
    }
    else
    {
        /* Set the head to the start of the queue storage area. */
        pxNewQueue->pcHead = ( int8_t * ) pucQueueStorage;
    }

    /* Initialise the queue members as described where the queue type is
    defined. */
    pxNewQueue->uxLength = uxQueueLength;
    pxNewQueue->uxItemSize = uxItemSize;
    ( void ) xQueueGenericReset( pxNewQueue, pdTRUE );

    #if ( configUSE_TRACE_FACILITY == 1 )
    {
        pxNewQueue->ucQueueType = ucQueueType;
    }
    #endif /* configUSE_TRACE_FACILITY */

    #if( configUSE_QUEUE_SETS == 1 )
    {
        pxNewQueue->pxQueueSetContainer = NULL;
    }
    #endif /* configUSE_QUEUE_SETS */

    traceQUEUE_CREATE( pxNewQueue );
}
/*-----------------------------------------------------------*/

#if( configUSE_MUTEXES == 1 )

    static void prvInitialiseMutex( Queue_t *pxNewQueue )
    {
        if( pxNewQueue != NULL )
        {
            /* The queue create function will set all the queue structure
            members correctly for a generic queue, but this function is
            creating a mutex.  Overwrite those members that need to be set
            differently - in particular the information required for priority
            inheritance. */
            pxNewQueue->pxMutexHolder = NULL;
            pxNewQueue->uxQueueType = queueQUEUE_IS_MUTEX;

            /* In case this is a recursive mutex. */
            pxNewQueue->u.uxRecursiveCallCount = 0;

            traceCREATE_MUTEX( pxNewQueue );

            /* Start with the semaphore in the expected state. */
            ( void ) xQueueGenericSend( pxNewQueue, NULL, ( TickType_t ) 0U, queueSEND_TO_BACK );
        }
        else
        {
            traceCREATE_MUTEX_FAILED();
        }
    }

#endif /* configUSE_MUTEXES */
/*-----------------------------------------------------------*/

#if( ( configUSE_MUTEXES == 1 ) && ( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) )

    QueueHandle_t xQueueCreateMutex( const uint8_t ucQueueType )
    {
    Queue_t *pxNewQueue;
    const UBaseType_t uxMutexLength = ( UBaseType_t ) 1, uxMutexSize = ( UBaseType_t ) 0;

        pxNewQueue = ( Queue_t * ) xQueueGenericCreate( uxMutexLength, uxMutexSize, ucQueueType );
        prvInitialiseMutex( pxNewQueue );

        return pxNewQueue;
    }

#endif /* configUSE_MUTEXES */
/*-----------------------------------------------------------*/
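xQueueCreateMutex() is normally reached through the xSemaphoreCreateMutex() macro from semphr.h, which is why a mutex is "a queue of length 1 with item size 0" here. A hedged usage sketch, assuming a FreeRTOS project with configUSE_MUTEXES set to 1; the 100 ms timeout and function name are illustrative:

```c
#include "FreeRTOS.h"
#include "semphr.h"

static SemaphoreHandle_t xMutex = NULL;

/* Illustrative task-level function guarding a shared resource. */
void vProtectedAccess( void )
{
    /* Typically created once, before the scheduler starts. */
    if( xMutex == NULL )
    {
        xMutex = xSemaphoreCreateMutex();
    }

    if( ( xMutex != NULL ) && ( xSemaphoreTake( xMutex, pdMS_TO_TICKS( 100 ) ) == pdTRUE ) )
    {
        /* ...access the shared resource here... */

        ( void ) xSemaphoreGive( xMutex );
    }
}
```

Note the creation path above ends with an xQueueGenericSend() so the mutex starts in the "available" state, which is why the first xSemaphoreTake() succeeds immediately.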

#if( ( configUSE_MUTEXES == 1 ) && ( configSUPPORT_STATIC_ALLOCATION == 1 ) )

    QueueHandle_t xQueueCreateMutexStatic( const uint8_t ucQueueType, StaticQueue_t *pxStaticQueue )
    {
    Queue_t *pxNewQueue;
    const UBaseType_t uxMutexLength = ( UBaseType_t ) 1, uxMutexSize = ( UBaseType_t ) 0;

        /* Prevent compiler warnings about unused parameters if
        configUSE_TRACE_FACILITY does not equal 1. */
        ( void ) ucQueueType;

        pxNewQueue = ( Queue_t * ) xQueueGenericCreateStatic( uxMutexLength, uxMutexSize, NULL, pxStaticQueue, ucQueueType );
        prvInitialiseMutex( pxNewQueue );

        return pxNewQueue;
    }

#endif /* configUSE_MUTEXES */
/*-----------------------------------------------------------*/
fep 0:5ff20db10a96 539
#if ( ( configUSE_MUTEXES == 1 ) && ( INCLUDE_xSemaphoreGetMutexHolder == 1 ) )

    void* xQueueGetMutexHolder( QueueHandle_t xSemaphore )
    {
    void *pxReturn;

        /* This function is called by xSemaphoreGetMutexHolder(), and should not
        be called directly.  Note:  This is a good way of determining if the
        calling task is the mutex holder, but not a good way of determining the
        identity of the mutex holder, as the holder may change between the
        following critical section exiting and the function returning. */
        taskENTER_CRITICAL();
        {
            if( ( ( Queue_t * ) xSemaphore )->uxQueueType == queueQUEUE_IS_MUTEX )
            {
                pxReturn = ( void * ) ( ( Queue_t * ) xSemaphore )->pxMutexHolder;
            }
            else
            {
                pxReturn = NULL;
            }
        }
        taskEXIT_CRITICAL();

        return pxReturn;
    } /*lint !e818 xSemaphore cannot be a pointer to const because it is a typedef. */

#endif /* ( configUSE_MUTEXES == 1 ) && ( INCLUDE_xSemaphoreGetMutexHolder == 1 ) */
/*-----------------------------------------------------------*/

#if ( configUSE_RECURSIVE_MUTEXES == 1 )

    BaseType_t xQueueGiveMutexRecursive( QueueHandle_t xMutex )
    {
    BaseType_t xReturn;
    Queue_t * const pxMutex = ( Queue_t * ) xMutex;

        configASSERT( pxMutex );

        /* If this is the task that holds the mutex then pxMutexHolder will not
        change outside of this task.  If this task does not hold the mutex then
        pxMutexHolder can never coincidentally equal the task's handle, and as
        this is the only condition we are interested in it does not matter if
        pxMutexHolder is accessed simultaneously by another task.  Therefore no
        mutual exclusion is required to test the pxMutexHolder variable. */
        if( pxMutex->pxMutexHolder == ( void * ) xTaskGetCurrentTaskHandle() ) /*lint !e961 Not a redundant cast as TaskHandle_t is a typedef. */
        {
            traceGIVE_MUTEX_RECURSIVE( pxMutex );

            /* uxRecursiveCallCount cannot be zero if pxMutexHolder is equal to
            the task handle, therefore no underflow check is required.  Also,
            uxRecursiveCallCount is only modified by the mutex holder, and as
            there can only be one, no mutual exclusion is required to modify the
            uxRecursiveCallCount member. */
            ( pxMutex->u.uxRecursiveCallCount )--;

            /* Has the recursive call count unwound to 0? */
            if( pxMutex->u.uxRecursiveCallCount == ( UBaseType_t ) 0 )
            {
                /* Return the mutex.  This will automatically unblock any other
                task that might be waiting to access the mutex. */
                ( void ) xQueueGenericSend( pxMutex, NULL, queueMUTEX_GIVE_BLOCK_TIME, queueSEND_TO_BACK );
            }
            else
            {
                mtCOVERAGE_TEST_MARKER();
            }

            xReturn = pdPASS;
        }
        else
        {
            /* The mutex cannot be given because the calling task is not the
            holder. */
            xReturn = pdFAIL;

            traceGIVE_MUTEX_RECURSIVE_FAILED( pxMutex );
        }

        return xReturn;
    }

#endif /* configUSE_RECURSIVE_MUTEXES */
/*-----------------------------------------------------------*/

#if ( configUSE_RECURSIVE_MUTEXES == 1 )

    BaseType_t xQueueTakeMutexRecursive( QueueHandle_t xMutex, TickType_t xTicksToWait )
    {
    BaseType_t xReturn;
    Queue_t * const pxMutex = ( Queue_t * ) xMutex;

        configASSERT( pxMutex );

        /* Comments regarding mutual exclusion as per those within
        xQueueGiveMutexRecursive(). */

        traceTAKE_MUTEX_RECURSIVE( pxMutex );

        if( pxMutex->pxMutexHolder == ( void * ) xTaskGetCurrentTaskHandle() ) /*lint !e961 Cast is not redundant as TaskHandle_t is a typedef. */
        {
            ( pxMutex->u.uxRecursiveCallCount )++;
            xReturn = pdPASS;
        }
        else
        {
            xReturn = xQueueGenericReceive( pxMutex, NULL, xTicksToWait, pdFALSE );

            /* pdPASS will only be returned if the mutex was successfully
            obtained.  The calling task may have entered the Blocked state
            before reaching here. */
            if( xReturn != pdFAIL )
            {
                ( pxMutex->u.uxRecursiveCallCount )++;
            }
            else
            {
                traceTAKE_MUTEX_RECURSIVE_FAILED( pxMutex );
            }
        }

        return xReturn;
    }

#endif /* configUSE_RECURSIVE_MUTEXES */
/*-----------------------------------------------------------*/

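The two recursive-mutex functions above reduce to a small holder/count state machine. The following is a toy model of that logic, not FreeRTOS API; the type and function names (`RecursiveMutexModel`, `rmutex_take`, `rmutex_give`) are invented for illustration, and blocking is not modeled — taking a mutex held by another task simply fails here, where the real `xQueueTakeMutexRecursive()` would wait up to `xTicksToWait` ticks.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the holder/count logic in xQueueTakeMutexRecursive() and
   xQueueGiveMutexRecursive().  All names here are illustrative only. */
typedef struct { void *pxMutexHolder; unsigned uxRecursiveCallCount; } RecursiveMutexModel;

static int rmutex_take( RecursiveMutexModel *m, void *xTask )
{
    if( m->pxMutexHolder == xTask )
    {
        m->uxRecursiveCallCount++;      /* already the holder: just nest */
        return 1;
    }
    if( m->pxMutexHolder == NULL )
    {
        m->pxMutexHolder = xTask;       /* free: take it */
        m->uxRecursiveCallCount = 1;
        return 1;
    }
    return 0;                           /* held by another task (real code would block) */
}

static int rmutex_give( RecursiveMutexModel *m, void *xTask )
{
    if( m->pxMutexHolder != xTask )
    {
        return 0;                       /* only the holder may give */
    }
    if( --m->uxRecursiveCallCount == 0 )
    {
        m->pxMutexHolder = NULL;        /* fully unwound: release the mutex */
    }
    return 1;
}
```

The model makes the key invariant visible: the mutex is only truly released when the recursion count unwinds to zero, which is why the real code can skip an underflow check for the holder.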
#if( ( configUSE_COUNTING_SEMAPHORES == 1 ) && ( configSUPPORT_STATIC_ALLOCATION == 1 ) )

    QueueHandle_t xQueueCreateCountingSemaphoreStatic( const UBaseType_t uxMaxCount, const UBaseType_t uxInitialCount, StaticQueue_t *pxStaticQueue )
    {
    QueueHandle_t xHandle;

        configASSERT( uxMaxCount != 0 );
        configASSERT( uxInitialCount <= uxMaxCount );

        xHandle = xQueueGenericCreateStatic( uxMaxCount, queueSEMAPHORE_QUEUE_ITEM_LENGTH, NULL, pxStaticQueue, queueQUEUE_TYPE_COUNTING_SEMAPHORE );

        if( xHandle != NULL )
        {
            ( ( Queue_t * ) xHandle )->uxMessagesWaiting = uxInitialCount;

            traceCREATE_COUNTING_SEMAPHORE();
        }
        else
        {
            traceCREATE_COUNTING_SEMAPHORE_FAILED();
        }

        return xHandle;
    }

#endif /* ( ( configUSE_COUNTING_SEMAPHORES == 1 ) && ( configSUPPORT_STATIC_ALLOCATION == 1 ) ) */
/*-----------------------------------------------------------*/

#if( ( configUSE_COUNTING_SEMAPHORES == 1 ) && ( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) )

    QueueHandle_t xQueueCreateCountingSemaphore( const UBaseType_t uxMaxCount, const UBaseType_t uxInitialCount )
    {
    QueueHandle_t xHandle;

        configASSERT( uxMaxCount != 0 );
        configASSERT( uxInitialCount <= uxMaxCount );

        xHandle = xQueueGenericCreate( uxMaxCount, queueSEMAPHORE_QUEUE_ITEM_LENGTH, queueQUEUE_TYPE_COUNTING_SEMAPHORE );

        if( xHandle != NULL )
        {
            ( ( Queue_t * ) xHandle )->uxMessagesWaiting = uxInitialCount;

            traceCREATE_COUNTING_SEMAPHORE();
        }
        else
        {
            traceCREATE_COUNTING_SEMAPHORE_FAILED();
        }

        return xHandle;
    }

#endif /* ( ( configUSE_COUNTING_SEMAPHORES == 1 ) && ( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) ) */
/*-----------------------------------------------------------*/

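As the two creation functions above show, a counting semaphore is just a queue of zero-size items: `uxLength` plays the role of `uxMaxCount` and `uxMessagesWaiting` is patched to `uxInitialCount` straight after creation. A toy model (not FreeRTOS API; the `csem_*` names are invented for illustration, and blocking is not modeled):

```c
#include <assert.h>

/* Toy model: the count lives in uxMessagesWaiting and is bounded by
   uxLength, exactly as in the queue structure the real code reuses. */
typedef struct { unsigned uxMessagesWaiting, uxLength; } CountingSemModel;

static CountingSemModel csem_create( unsigned uxMaxCount, unsigned uxInitialCount )
{
    CountingSemModel s = { uxInitialCount, uxMaxCount };   /* count starts at uxInitialCount */
    return s;
}

static int csem_give( CountingSemModel *s )     /* post: like a non-blocking give */
{
    if( s->uxMessagesWaiting >= s->uxLength )
    {
        return 0;                               /* count already at uxMaxCount */
    }
    s->uxMessagesWaiting++;
    return 1;
}

static int csem_take( CountingSemModel *s )     /* non-blocking take */
{
    if( s->uxMessagesWaiting == 0 )
    {
        return 0;                               /* nothing available */
    }
    s->uxMessagesWaiting--;
    return 1;
}
```

This is why `configASSERT( uxInitialCount <= uxMaxCount )` matters: an initial count above the queue length would make the semaphore "fuller than full" from the start.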
BaseType_t xQueueGenericSend( QueueHandle_t xQueue, const void * const pvItemToQueue, TickType_t xTicksToWait, const BaseType_t xCopyPosition )
{
BaseType_t xEntryTimeSet = pdFALSE, xYieldRequired;
TimeOut_t xTimeOut;
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

    configASSERT( pxQueue );
    configASSERT( !( ( pvItemToQueue == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
    configASSERT( !( ( xCopyPosition == queueOVERWRITE ) && ( pxQueue->uxLength != 1 ) ) );
    #if ( ( INCLUDE_xTaskGetSchedulerState == 1 ) || ( configUSE_TIMERS == 1 ) )
    {
        configASSERT( !( ( xTaskGetSchedulerState() == taskSCHEDULER_SUSPENDED ) && ( xTicksToWait != 0 ) ) );
    }
    #endif

    /* This function relaxes the coding standard somewhat to allow return
    statements within the function itself.  This is done in the interest
    of execution time efficiency. */
    for( ;; )
    {
        taskENTER_CRITICAL();
        {
            /* Is there room on the queue now?  The running task must be the
            highest priority task wanting to access the queue.  If the head item
            in the queue is to be overwritten then it does not matter if the
            queue is full. */
            if( ( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) || ( xCopyPosition == queueOVERWRITE ) )
            {
                traceQUEUE_SEND( pxQueue );
                xYieldRequired = prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition );

                #if ( configUSE_QUEUE_SETS == 1 )
                {
                    if( pxQueue->pxQueueSetContainer != NULL )
                    {
                        if( prvNotifyQueueSetContainer( pxQueue, xCopyPosition ) != pdFALSE )
                        {
                            /* The queue is a member of a queue set, and posting
                            to the queue set caused a higher priority task to
                            unblock.  A context switch is required. */
                            queueYIELD_IF_USING_PREEMPTION();
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    else
                    {
                        /* If there was a task waiting for data to arrive on the
                        queue then unblock it now. */
                        if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
                        {
                            if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
                            {
                                /* The unblocked task has a priority higher than
                                our own so yield immediately.  Yes it is ok to
                                do this from within the critical section - the
                                kernel takes care of that. */
                                queueYIELD_IF_USING_PREEMPTION();
                            }
                            else
                            {
                                mtCOVERAGE_TEST_MARKER();
                            }
                        }
                        else if( xYieldRequired != pdFALSE )
                        {
                            /* This path is a special case that will only get
                            executed if the task was holding multiple mutexes
                            and the mutexes were given back in an order that is
                            different to that in which they were taken. */
                            queueYIELD_IF_USING_PREEMPTION();
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                }
                #else /* configUSE_QUEUE_SETS */
                {
                    /* If there was a task waiting for data to arrive on the
                    queue then unblock it now. */
                    if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
                    {
                        if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
                        {
                            /* The unblocked task has a priority higher than
                            our own so yield immediately.  Yes it is ok to do
                            this from within the critical section - the kernel
                            takes care of that. */
                            queueYIELD_IF_USING_PREEMPTION();
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    else if( xYieldRequired != pdFALSE )
                    {
                        /* This path is a special case that will only get
                        executed if the task was holding multiple mutexes and
                        the mutexes were given back in an order that is
                        different to that in which they were taken. */
                        queueYIELD_IF_USING_PREEMPTION();
                    }
                    else
                    {
                        mtCOVERAGE_TEST_MARKER();
                    }
                }
                #endif /* configUSE_QUEUE_SETS */

                taskEXIT_CRITICAL();
                return pdPASS;
            }
            else
            {
                if( xTicksToWait == ( TickType_t ) 0 )
                {
                    /* The queue was full and no block time is specified (or
                    the block time has expired) so leave now. */
                    taskEXIT_CRITICAL();

                    traceQUEUE_SEND_FAILED( pxQueue );
                    return errQUEUE_FULL;
                }
                else if( xEntryTimeSet == pdFALSE )
                {
                    /* The queue was full and a block time was specified so
                    configure the timeout structure. */
                    vTaskSetTimeOutState( &xTimeOut );
                    xEntryTimeSet = pdTRUE;
                }
                else
                {
                    /* Entry time was already set. */
                    mtCOVERAGE_TEST_MARKER();
                }
            }
        }
        taskEXIT_CRITICAL();

        /* Interrupts and other tasks can send to and receive from the queue
        now the critical section has been exited. */

        vTaskSuspendAll();
        prvLockQueue( pxQueue );

        /* Update the timeout state to see if it has expired yet. */
        if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE )
        {
            if( prvIsQueueFull( pxQueue ) != pdFALSE )
            {
                traceBLOCKING_ON_QUEUE_SEND( pxQueue );
                vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToSend ), xTicksToWait );

                /* Unlocking the queue means queue events can affect the
                event list.  It is possible that interrupts occurring now
                remove this task from the event list again - but as the
                scheduler is suspended the task will go onto the pending
                ready list instead of the actual ready list. */
                prvUnlockQueue( pxQueue );

                /* Resuming the scheduler will move tasks from the pending
                ready list into the ready list - so it is feasible that this
                task is already in a ready list before it yields - in which
                case the yield will not cause a context switch unless there
                is also a higher priority task in the pending ready list. */
                if( xTaskResumeAll() == pdFALSE )
                {
                    portYIELD_WITHIN_API();
                }
            }
            else
            {
                /* Try again. */
                prvUnlockQueue( pxQueue );
                ( void ) xTaskResumeAll();
            }
        }
        else
        {
            /* The timeout has expired. */
            prvUnlockQueue( pxQueue );
            ( void ) xTaskResumeAll();

            traceQUEUE_SEND_FAILED( pxQueue );
            return errQUEUE_FULL;
        }
    }
}
/*-----------------------------------------------------------*/

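The send loop above has three caller-visible outcomes: success when there is room (or `queueOVERWRITE` is used), an immediate `errQUEUE_FULL` when the queue is full and `xTicksToWait` is zero, and `errQUEUE_FULL` after the timeout expires. The following toy model (not FreeRTOS API; `QueueModel`, `qmodel_send` and the `model*` constants are invented for illustration) captures only those outcomes, ignoring copy position, blocking and context switches:

```c
#include <assert.h>

enum { modelPASS = 1, modelQUEUE_FULL = 0 };    /* stand-ins for pdPASS / errQUEUE_FULL */

/* Toy model of xQueueGenericSend()'s result, under the simplifying
   assumption that a blocked sender's timeout always expires. */
typedef struct { unsigned uxMessagesWaiting, uxLength; } QueueModel;

static int qmodel_send( QueueModel *q, unsigned xTicksToWait )
{
    if( q->uxMessagesWaiting < q->uxLength )
    {
        q->uxMessagesWaiting++;                 /* room: copy the item in */
        return modelPASS;
    }
    if( xTicksToWait == 0 )
    {
        return modelQUEUE_FULL;                 /* full, no block time: leave now */
    }
    /* A real send would block on xTasksWaitingToSend until space appears
       or xTicksToWait expires; only the expiry path is modeled here. */
    return modelQUEUE_FULL;
}
```

Note the asymmetry the real code preserves: a zero block time never suspends the caller, which is why it is the only form legal while the scheduler is suspended (see the `configASSERT` on `taskSCHEDULER_SUSPENDED` above).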
BaseType_t xQueueGenericSendFromISR( QueueHandle_t xQueue, const void * const pvItemToQueue, BaseType_t * const pxHigherPriorityTaskWoken, const BaseType_t xCopyPosition )
{
BaseType_t xReturn;
UBaseType_t uxSavedInterruptStatus;
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

    configASSERT( pxQueue );
    configASSERT( !( ( pvItemToQueue == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
    configASSERT( !( ( xCopyPosition == queueOVERWRITE ) && ( pxQueue->uxLength != 1 ) ) );

    /* RTOS ports that support interrupt nesting have the concept of a maximum
    system call (or maximum API call) interrupt priority.  Interrupts that are
    above the maximum system call priority are kept permanently enabled, even
    when the RTOS kernel is in a critical section, but cannot make any calls to
    FreeRTOS API functions.  If configASSERT() is defined in FreeRTOSConfig.h
    then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion
    failure if a FreeRTOS API function is called from an interrupt that has been
    assigned a priority above the configured maximum system call priority.
    Only FreeRTOS functions that end in FromISR can be called from interrupts
    that have been assigned a priority at or (logically) below the maximum
    system call interrupt priority.  FreeRTOS maintains a separate interrupt
    safe API to ensure interrupt entry is as fast and as simple as possible.
    More information (albeit Cortex-M specific) is provided on the following
    link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */
    portASSERT_IF_INTERRUPT_PRIORITY_INVALID();

    /* Similar to xQueueGenericSend, except without blocking if there is no room
    in the queue.  Also don't directly wake a task that was blocked on a queue
    read, instead return a flag to say whether a context switch is required or
    not (i.e. has a task with a higher priority than us been woken by this
    post). */
    uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
    {
        if( ( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) || ( xCopyPosition == queueOVERWRITE ) )
        {
            const int8_t cTxLock = pxQueue->cTxLock;

            traceQUEUE_SEND_FROM_ISR( pxQueue );

            /* Semaphores use xQueueGiveFromISR(), so pxQueue will not be a
            semaphore or mutex.  That means prvCopyDataToQueue() cannot result
            in a task disinheriting a priority and prvCopyDataToQueue() can be
            called here even though the disinherit function does not check if
            the scheduler is suspended before accessing the ready lists. */
            ( void ) prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition );

            /* The event list is not altered if the queue is locked.  This will
            be done when the queue is unlocked later. */
            if( cTxLock == queueUNLOCKED )
            {
                #if ( configUSE_QUEUE_SETS == 1 )
                {
                    if( pxQueue->pxQueueSetContainer != NULL )
                    {
                        if( prvNotifyQueueSetContainer( pxQueue, xCopyPosition ) != pdFALSE )
                        {
                            /* The queue is a member of a queue set, and posting
                            to the queue set caused a higher priority task to
                            unblock.  A context switch is required. */
                            if( pxHigherPriorityTaskWoken != NULL )
                            {
                                *pxHigherPriorityTaskWoken = pdTRUE;
                            }
                            else
                            {
                                mtCOVERAGE_TEST_MARKER();
                            }
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    else
                    {
                        if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
                        {
                            if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
                            {
                                /* The task waiting has a higher priority so
                                record that a context switch is required. */
                                if( pxHigherPriorityTaskWoken != NULL )
                                {
                                    *pxHigherPriorityTaskWoken = pdTRUE;
                                }
                                else
                                {
                                    mtCOVERAGE_TEST_MARKER();
                                }
                            }
                            else
                            {
                                mtCOVERAGE_TEST_MARKER();
                            }
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                }
                #else /* configUSE_QUEUE_SETS */
                {
                    if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
                    {
                        if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
                        {
                            /* The task waiting has a higher priority so record that a
                            context switch is required. */
                            if( pxHigherPriorityTaskWoken != NULL )
                            {
                                *pxHigherPriorityTaskWoken = pdTRUE;
                            }
                            else
                            {
                                mtCOVERAGE_TEST_MARKER();
                            }
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    else
                    {
                        mtCOVERAGE_TEST_MARKER();
                    }
                }
                #endif /* configUSE_QUEUE_SETS */
            }
            else
            {
                /* Increment the lock count so the task that unlocks the queue
                knows that data was posted while it was locked. */
                pxQueue->cTxLock = ( int8_t ) ( cTxLock + 1 );
            }

            xReturn = pdPASS;
        }
        else
        {
            traceQUEUE_SEND_FROM_ISR_FAILED( pxQueue );
            xReturn = errQUEUE_FULL;
        }
    }
    portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );

    return xReturn;
}
/*-----------------------------------------------------------*/

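The `cTxLock` branch above is the heart of the FromISR path: while a task holds the queue locked, the interrupt must not touch the event lists, so it only increments `cTxLock`; `prvUnlockQueue()` later performs one deferred wakeup per counted post. A toy model of that handshake (not FreeRTOS API; the `TxLockModel` type and `txlock_*` names are invented for illustration, and `modelUNLOCKED` mirrors `queueUNLOCKED`):

```c
#include <assert.h>

#define modelUNLOCKED ( -1 )    /* stand-in for queueUNLOCKED */

/* Toy model of the cTxLock deferral between an ISR post and the task-side
   unlock; "wakeups" counts tasks that would be removed from the event list. */
typedef struct { int cTxLock; int wakeups; } TxLockModel;

static void txlock_lock( TxLockModel *q )       /* like prvLockQueue() */
{
    q->cTxLock = 0;                             /* locked, no posts counted yet */
}

static void txlock_isr_post( TxLockModel *q )   /* ISR side of the handshake */
{
    if( q->cTxLock == modelUNLOCKED )
    {
        q->wakeups++;                           /* unlocked: wake a reader now */
    }
    else
    {
        q->cTxLock++;                           /* locked: defer the wakeup */
    }
}

static void txlock_unlock( TxLockModel *q )     /* like prvUnlockQueue() */
{
    while( q->cTxLock > 0 )
    {
        q->wakeups++;                           /* replay each deferred post */
        q->cTxLock--;
    }
    q->cTxLock = modelUNLOCKED;
}
```

This is why an ISR post is always constant-time: the potentially longer event-list walk is pushed onto the task that holds the lock.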
BaseType_t xQueueGiveFromISR( QueueHandle_t xQueue, BaseType_t * const pxHigherPriorityTaskWoken )
{
BaseType_t xReturn;
UBaseType_t uxSavedInterruptStatus;
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

    /* Similar to xQueueGenericSendFromISR() but used with semaphores where the
    item size is 0.  Don't directly wake a task that was blocked on a queue
    read, instead return a flag to say whether a context switch is required or
    not (i.e. has a task with a higher priority than us been woken by this
    post). */

    configASSERT( pxQueue );

    /* xQueueGenericSendFromISR() should be used instead of xQueueGiveFromISR()
    if the item size is not 0. */
    configASSERT( pxQueue->uxItemSize == 0 );

    /* Normally a mutex would not be given from an interrupt, especially if
    there is a mutex holder, as priority inheritance makes no sense for an
    interrupt, only tasks. */
    configASSERT( !( ( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX ) && ( pxQueue->pxMutexHolder != NULL ) ) );

    /* RTOS ports that support interrupt nesting have the concept of a maximum
    system call (or maximum API call) interrupt priority.  Interrupts that are
    above the maximum system call priority are kept permanently enabled, even
    when the RTOS kernel is in a critical section, but cannot make any calls to
    FreeRTOS API functions.  If configASSERT() is defined in FreeRTOSConfig.h
    then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion
    failure if a FreeRTOS API function is called from an interrupt that has been
    assigned a priority above the configured maximum system call priority.
    Only FreeRTOS functions that end in FromISR can be called from interrupts
    that have been assigned a priority at or (logically) below the maximum
    system call interrupt priority.  FreeRTOS maintains a separate interrupt
    safe API to ensure interrupt entry is as fast and as simple as possible.
    More information (albeit Cortex-M specific) is provided on the following
    link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */
    portASSERT_IF_INTERRUPT_PRIORITY_INVALID();

    uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
    {
        const UBaseType_t uxMessagesWaiting = pxQueue->uxMessagesWaiting;

        /* When the queue is used to implement a semaphore no data is ever
        moved through the queue but it is still valid to see if the queue 'has
        space'. */
        if( uxMessagesWaiting < pxQueue->uxLength )
        {
            const int8_t cTxLock = pxQueue->cTxLock;

            traceQUEUE_SEND_FROM_ISR( pxQueue );

            /* A task can only have an inherited priority if it is a mutex
            holder - and if there is a mutex holder then the mutex cannot be
            given from an ISR.  As this is the ISR version of the function it
            can be assumed there is no mutex holder and no need to determine if
            priority disinheritance is needed.  Simply increase the count of
            messages (semaphores) available. */
            pxQueue->uxMessagesWaiting = uxMessagesWaiting + 1;

            /* The event list is not altered if the queue is locked.  This will
            be done when the queue is unlocked later. */
            if( cTxLock == queueUNLOCKED )
            {
                #if ( configUSE_QUEUE_SETS == 1 )
                {
                    if( pxQueue->pxQueueSetContainer != NULL )
                    {
                        if( prvNotifyQueueSetContainer( pxQueue, queueSEND_TO_BACK ) != pdFALSE )
                        {
                            /* The semaphore is a member of a queue set, and
                            posting to the queue set caused a higher priority
                            task to unblock.  A context switch is required. */
                            if( pxHigherPriorityTaskWoken != NULL )
                            {
                                *pxHigherPriorityTaskWoken = pdTRUE;
                            }
                            else
                            {
                                mtCOVERAGE_TEST_MARKER();
                            }
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    else
                    {
                        if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
                        {
                            if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
                            {
                                /* The task waiting has a higher priority so
                                record that a context switch is required. */
fep 0:5ff20db10a96 1167 if( pxHigherPriorityTaskWoken != NULL )
fep 0:5ff20db10a96 1168 {
fep 0:5ff20db10a96 1169 *pxHigherPriorityTaskWoken = pdTRUE;
fep 0:5ff20db10a96 1170 }
fep 0:5ff20db10a96 1171 else
fep 0:5ff20db10a96 1172 {
fep 0:5ff20db10a96 1173 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1174 }
fep 0:5ff20db10a96 1175 }
fep 0:5ff20db10a96 1176 else
fep 0:5ff20db10a96 1177 {
fep 0:5ff20db10a96 1178 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1179 }
fep 0:5ff20db10a96 1180 }
fep 0:5ff20db10a96 1181 else
fep 0:5ff20db10a96 1182 {
fep 0:5ff20db10a96 1183 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1184 }
fep 0:5ff20db10a96 1185 }
fep 0:5ff20db10a96 1186 }
fep 0:5ff20db10a96 1187 #else /* configUSE_QUEUE_SETS */
fep 0:5ff20db10a96 1188 {
fep 0:5ff20db10a96 1189 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
fep 0:5ff20db10a96 1190 {
fep 0:5ff20db10a96 1191 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
fep 0:5ff20db10a96 1192 {
fep 0:5ff20db10a96 1193 /* The task waiting has a higher priority so record that a
fep 0:5ff20db10a96 1194 context switch is required. */
fep 0:5ff20db10a96 1195 if( pxHigherPriorityTaskWoken != NULL )
fep 0:5ff20db10a96 1196 {
fep 0:5ff20db10a96 1197 *pxHigherPriorityTaskWoken = pdTRUE;
fep 0:5ff20db10a96 1198 }
fep 0:5ff20db10a96 1199 else
fep 0:5ff20db10a96 1200 {
fep 0:5ff20db10a96 1201 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1202 }
fep 0:5ff20db10a96 1203 }
fep 0:5ff20db10a96 1204 else
fep 0:5ff20db10a96 1205 {
fep 0:5ff20db10a96 1206 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1207 }
fep 0:5ff20db10a96 1208 }
fep 0:5ff20db10a96 1209 else
fep 0:5ff20db10a96 1210 {
fep 0:5ff20db10a96 1211 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1212 }
fep 0:5ff20db10a96 1213 }
fep 0:5ff20db10a96 1214 #endif /* configUSE_QUEUE_SETS */
fep 0:5ff20db10a96 1215 }
fep 0:5ff20db10a96 1216 else
fep 0:5ff20db10a96 1217 {
fep 0:5ff20db10a96 1218 /* Increment the lock count so the task that unlocks the queue
fep 0:5ff20db10a96 1219 knows that data was posted while it was locked. */
fep 0:5ff20db10a96 1220 pxQueue->cTxLock = ( int8_t ) ( cTxLock + 1 );
fep 0:5ff20db10a96 1221 }
fep 0:5ff20db10a96 1222
fep 0:5ff20db10a96 1223 xReturn = pdPASS;
fep 0:5ff20db10a96 1224 }
fep 0:5ff20db10a96 1225 else
fep 0:5ff20db10a96 1226 {
fep 0:5ff20db10a96 1227 traceQUEUE_SEND_FROM_ISR_FAILED( pxQueue );
fep 0:5ff20db10a96 1228 xReturn = errQUEUE_FULL;
fep 0:5ff20db10a96 1229 }
fep 0:5ff20db10a96 1230 }
fep 0:5ff20db10a96 1231 portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );
fep 0:5ff20db10a96 1232
fep 0:5ff20db10a96 1233 return xReturn;
fep 0:5ff20db10a96 1234 }
fep 0:5ff20db10a96 1235 /*-----------------------------------------------------------*/
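The give-from-ISR path above reduces to: if there is space, increment the message count, then either wake a waiting receiver immediately (queue unlocked) or record the post in the Tx lock count for `prvUnlockQueue()` to process later. A kernel-free sketch of that decision tree (all names here are illustrative models, not FreeRTOS API):

```c
#include <assert.h>
#include <stdbool.h>

#define QUEUE_UNLOCKED ( -1 )   /* mirrors queueUNLOCKED */

/* Simplified stand-in for the queue fields the ISR path touches. */
typedef struct {
    unsigned waiting;           /* uxMessagesWaiting */
    unsigned length;            /* uxLength */
    int      tx_lock;           /* cTxLock: QUEUE_UNLOCKED or a post count */
    bool     receiver_blocked;  /* non-empty xTasksWaitingToReceive */
} model_queue_t;

/* Returns true on success (space available), mimicking pdPASS vs errQUEUE_FULL. */
static bool model_give_from_isr( model_queue_t *q, bool *woken )
{
    if( q->waiting >= q->length )
    {
        return false;               /* errQUEUE_FULL */
    }
    q->waiting++;
    if( q->tx_lock == QUEUE_UNLOCKED )
    {
        if( q->receiver_blocked )
        {
            *woken = true;          /* *pxHigherPriorityTaskWoken = pdTRUE */
        }
    }
    else
    {
        q->tx_lock++;               /* deferred: handled in prvUnlockQueue() */
    }
    return true;
}
```

Note how a post to a locked queue touches no event list at all; the only side effect is the bumped lock count.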
fep 0:5ff20db10a96 1236
fep 0:5ff20db10a96 1237 BaseType_t xQueueGenericReceive( QueueHandle_t xQueue, void * const pvBuffer, TickType_t xTicksToWait, const BaseType_t xJustPeeking )
fep 0:5ff20db10a96 1238 {
fep 0:5ff20db10a96 1239 BaseType_t xEntryTimeSet = pdFALSE;
fep 0:5ff20db10a96 1240 TimeOut_t xTimeOut;
fep 0:5ff20db10a96 1241 int8_t *pcOriginalReadPosition;
fep 0:5ff20db10a96 1242 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
fep 0:5ff20db10a96 1243
fep 0:5ff20db10a96 1244 configASSERT( pxQueue );
fep 0:5ff20db10a96 1245 configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
fep 0:5ff20db10a96 1246 #if ( ( INCLUDE_xTaskGetSchedulerState == 1 ) || ( configUSE_TIMERS == 1 ) )
fep 0:5ff20db10a96 1247 {
fep 0:5ff20db10a96 1248 configASSERT( !( ( xTaskGetSchedulerState() == taskSCHEDULER_SUSPENDED ) && ( xTicksToWait != 0 ) ) );
fep 0:5ff20db10a96 1249 }
fep 0:5ff20db10a96 1250 #endif
fep 0:5ff20db10a96 1251
fep 0:5ff20db10a96 1252 /* This function relaxes the coding standard somewhat to allow return
fep 0:5ff20db10a96 1253 statements within the function itself. This is done in the interest
fep 0:5ff20db10a96 1254 of execution time efficiency. */
fep 0:5ff20db10a96 1255
fep 0:5ff20db10a96 1256 for( ;; )
fep 0:5ff20db10a96 1257 {
fep 0:5ff20db10a96 1258 taskENTER_CRITICAL();
fep 0:5ff20db10a96 1259 {
fep 0:5ff20db10a96 1260 const UBaseType_t uxMessagesWaiting = pxQueue->uxMessagesWaiting;
fep 0:5ff20db10a96 1261
fep 0:5ff20db10a96 1262 /* Is there data in the queue now? To be running the calling task
fep 0:5ff20db10a96 1263 must be the highest priority task wanting to access the queue. */
fep 0:5ff20db10a96 1264 if( uxMessagesWaiting > ( UBaseType_t ) 0 )
fep 0:5ff20db10a96 1265 {
fep 0:5ff20db10a96 1266 /* Remember the read position in case the queue is only being
fep 0:5ff20db10a96 1267 peeked. */
fep 0:5ff20db10a96 1268 pcOriginalReadPosition = pxQueue->u.pcReadFrom;
fep 0:5ff20db10a96 1269
fep 0:5ff20db10a96 1270 prvCopyDataFromQueue( pxQueue, pvBuffer );
fep 0:5ff20db10a96 1271
fep 0:5ff20db10a96 1272 if( xJustPeeking == pdFALSE )
fep 0:5ff20db10a96 1273 {
fep 0:5ff20db10a96 1274 traceQUEUE_RECEIVE( pxQueue );
fep 0:5ff20db10a96 1275
fep 0:5ff20db10a96 1276 /* Actually removing data, not just peeking. */
fep 0:5ff20db10a96 1277 pxQueue->uxMessagesWaiting = uxMessagesWaiting - 1;
fep 0:5ff20db10a96 1278
fep 0:5ff20db10a96 1279 #if ( configUSE_MUTEXES == 1 )
fep 0:5ff20db10a96 1280 {
fep 0:5ff20db10a96 1281 if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
fep 0:5ff20db10a96 1282 {
fep 0:5ff20db10a96 1283 /* Record the information required to implement
fep 0:5ff20db10a96 1284 priority inheritance should it become necessary. */
fep 0:5ff20db10a96 1285 pxQueue->pxMutexHolder = ( int8_t * ) pvTaskIncrementMutexHeldCount(); /*lint !e961 Cast is not redundant as TaskHandle_t is a typedef. */
fep 0:5ff20db10a96 1286 }
fep 0:5ff20db10a96 1287 else
fep 0:5ff20db10a96 1288 {
fep 0:5ff20db10a96 1289 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1290 }
fep 0:5ff20db10a96 1291 }
fep 0:5ff20db10a96 1292 #endif /* configUSE_MUTEXES */
fep 0:5ff20db10a96 1293
fep 0:5ff20db10a96 1294 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
fep 0:5ff20db10a96 1295 {
fep 0:5ff20db10a96 1296 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
fep 0:5ff20db10a96 1297 {
fep 0:5ff20db10a96 1298 queueYIELD_IF_USING_PREEMPTION();
fep 0:5ff20db10a96 1299 }
fep 0:5ff20db10a96 1300 else
fep 0:5ff20db10a96 1301 {
fep 0:5ff20db10a96 1302 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1303 }
fep 0:5ff20db10a96 1304 }
fep 0:5ff20db10a96 1305 else
fep 0:5ff20db10a96 1306 {
fep 0:5ff20db10a96 1307 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1308 }
fep 0:5ff20db10a96 1309 }
fep 0:5ff20db10a96 1310 else
fep 0:5ff20db10a96 1311 {
fep 0:5ff20db10a96 1312 traceQUEUE_PEEK( pxQueue );
fep 0:5ff20db10a96 1313
fep 0:5ff20db10a96 1314 /* The data is not being removed, so reset the read
fep 0:5ff20db10a96 1315 pointer. */
fep 0:5ff20db10a96 1316 pxQueue->u.pcReadFrom = pcOriginalReadPosition;
fep 0:5ff20db10a96 1317
fep 0:5ff20db10a96 1318 /* The data is being left in the queue, so see if there are
fep 0:5ff20db10a96 1319 any other tasks waiting for the data. */
fep 0:5ff20db10a96 1320 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
fep 0:5ff20db10a96 1321 {
fep 0:5ff20db10a96 1322 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
fep 0:5ff20db10a96 1323 {
fep 0:5ff20db10a96 1324 /* The task waiting has a higher priority than this task. */
fep 0:5ff20db10a96 1325 queueYIELD_IF_USING_PREEMPTION();
fep 0:5ff20db10a96 1326 }
fep 0:5ff20db10a96 1327 else
fep 0:5ff20db10a96 1328 {
fep 0:5ff20db10a96 1329 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1330 }
fep 0:5ff20db10a96 1331 }
fep 0:5ff20db10a96 1332 else
fep 0:5ff20db10a96 1333 {
fep 0:5ff20db10a96 1334 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1335 }
fep 0:5ff20db10a96 1336 }
fep 0:5ff20db10a96 1337
fep 0:5ff20db10a96 1338 taskEXIT_CRITICAL();
fep 0:5ff20db10a96 1339 return pdPASS;
fep 0:5ff20db10a96 1340 }
fep 0:5ff20db10a96 1341 else
fep 0:5ff20db10a96 1342 {
fep 0:5ff20db10a96 1343 if( xTicksToWait == ( TickType_t ) 0 )
fep 0:5ff20db10a96 1344 {
fep 0:5ff20db10a96 1345 /* The queue was empty and no block time is specified (or
fep 0:5ff20db10a96 1346 the block time has expired) so leave now. */
fep 0:5ff20db10a96 1347 taskEXIT_CRITICAL();
fep 0:5ff20db10a96 1348 traceQUEUE_RECEIVE_FAILED( pxQueue );
fep 0:5ff20db10a96 1349 return errQUEUE_EMPTY;
fep 0:5ff20db10a96 1350 }
fep 0:5ff20db10a96 1351 else if( xEntryTimeSet == pdFALSE )
fep 0:5ff20db10a96 1352 {
fep 0:5ff20db10a96 1353 /* The queue was empty and a block time was specified so
fep 0:5ff20db10a96 1354 configure the timeout structure. */
fep 0:5ff20db10a96 1355 vTaskSetTimeOutState( &xTimeOut );
fep 0:5ff20db10a96 1356 xEntryTimeSet = pdTRUE;
fep 0:5ff20db10a96 1357 }
fep 0:5ff20db10a96 1358 else
fep 0:5ff20db10a96 1359 {
fep 0:5ff20db10a96 1360 /* Entry time was already set. */
fep 0:5ff20db10a96 1361 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1362 }
fep 0:5ff20db10a96 1363 }
fep 0:5ff20db10a96 1364 }
fep 0:5ff20db10a96 1365 taskEXIT_CRITICAL();
fep 0:5ff20db10a96 1366
fep 0:5ff20db10a96 1367 /* Interrupts and other tasks can send to and receive from the queue
fep 0:5ff20db10a96 1368 now the critical section has been exited. */
fep 0:5ff20db10a96 1369
fep 0:5ff20db10a96 1370 vTaskSuspendAll();
fep 0:5ff20db10a96 1371 prvLockQueue( pxQueue );
fep 0:5ff20db10a96 1372
fep 0:5ff20db10a96 1373 /* Update the timeout state to see if it has expired yet. */
fep 0:5ff20db10a96 1374 if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE )
fep 0:5ff20db10a96 1375 {
fep 0:5ff20db10a96 1376 if( prvIsQueueEmpty( pxQueue ) != pdFALSE )
fep 0:5ff20db10a96 1377 {
fep 0:5ff20db10a96 1378 traceBLOCKING_ON_QUEUE_RECEIVE( pxQueue );
fep 0:5ff20db10a96 1379
fep 0:5ff20db10a96 1380 #if ( configUSE_MUTEXES == 1 )
fep 0:5ff20db10a96 1381 {
fep 0:5ff20db10a96 1382 if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
fep 0:5ff20db10a96 1383 {
fep 0:5ff20db10a96 1384 taskENTER_CRITICAL();
fep 0:5ff20db10a96 1385 {
fep 0:5ff20db10a96 1386 vTaskPriorityInherit( ( void * ) pxQueue->pxMutexHolder );
fep 0:5ff20db10a96 1387 }
fep 0:5ff20db10a96 1388 taskEXIT_CRITICAL();
fep 0:5ff20db10a96 1389 }
fep 0:5ff20db10a96 1390 else
fep 0:5ff20db10a96 1391 {
fep 0:5ff20db10a96 1392 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1393 }
fep 0:5ff20db10a96 1394 }
fep 0:5ff20db10a96 1395 #endif
fep 0:5ff20db10a96 1396
fep 0:5ff20db10a96 1397 vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToReceive ), xTicksToWait );
fep 0:5ff20db10a96 1398 prvUnlockQueue( pxQueue );
fep 0:5ff20db10a96 1399 if( xTaskResumeAll() == pdFALSE )
fep 0:5ff20db10a96 1400 {
fep 0:5ff20db10a96 1401 portYIELD_WITHIN_API();
fep 0:5ff20db10a96 1402 }
fep 0:5ff20db10a96 1403 else
fep 0:5ff20db10a96 1404 {
fep 0:5ff20db10a96 1405 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1406 }
fep 0:5ff20db10a96 1407 }
fep 0:5ff20db10a96 1408 else
fep 0:5ff20db10a96 1409 {
fep 0:5ff20db10a96 1410 /* Try again. */
fep 0:5ff20db10a96 1411 prvUnlockQueue( pxQueue );
fep 0:5ff20db10a96 1412 ( void ) xTaskResumeAll();
fep 0:5ff20db10a96 1413 }
fep 0:5ff20db10a96 1414 }
fep 0:5ff20db10a96 1415 else
fep 0:5ff20db10a96 1416 {
fep 0:5ff20db10a96 1417 prvUnlockQueue( pxQueue );
fep 0:5ff20db10a96 1418 ( void ) xTaskResumeAll();
fep 0:5ff20db10a96 1419
fep 0:5ff20db10a96 1420 if( prvIsQueueEmpty( pxQueue ) != pdFALSE )
fep 0:5ff20db10a96 1421 {
fep 0:5ff20db10a96 1422 traceQUEUE_RECEIVE_FAILED( pxQueue );
fep 0:5ff20db10a96 1423 return errQUEUE_EMPTY;
fep 0:5ff20db10a96 1424 }
fep 0:5ff20db10a96 1425 else
fep 0:5ff20db10a96 1426 {
fep 0:5ff20db10a96 1427 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1428 }
fep 0:5ff20db10a96 1429 }
fep 0:5ff20db10a96 1430 }
fep 0:5ff20db10a96 1431 }
fep 0:5ff20db10a96 1432 /*-----------------------------------------------------------*/
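The peek-versus-receive distinction above hinges on one trick: the read pointer is advanced and the data copied out either way, but a peek saves `pcReadFrom` first and restores it afterwards, leaving both the pointer and the message count untouched. A self-contained toy circular buffer illustrating that arithmetic (illustrative code, not the kernel's types):

```c
#include <assert.h>
#include <string.h>

#define ITEM  sizeof( int )
#define LEN   3

typedef struct {
    char head[ LEN * ITEM ];
    char *read;        /* plays the role of u.pcReadFrom */
    char *write;       /* plays the role of pcWriteTo */
    unsigned waiting;  /* plays the role of uxMessagesWaiting */
} cq_t;

/* As in the kernel, the read pointer starts on the LAST slot so that the
   pre-increment in the copy-out step lands on the first item. */
static void cq_init( cq_t *q )
{
    q->read = q->head + ( LEN - 1 ) * ITEM;
    q->write = q->head;
    q->waiting = 0;
}

static void cq_send( cq_t *q, int v )
{
    memcpy( q->write, &v, ITEM );
    q->write += ITEM;
    if( q->write >= q->head + LEN * ITEM )
    {
        q->write = q->head;    /* wrap, as prvCopyDataToQueue() does */
    }
    q->waiting++;
}

/* just_peeking != 0 restores the saved position and leaves the count alone. */
static int cq_get( cq_t *q, int just_peeking )
{
    char *orig = q->read;      /* pcOriginalReadPosition */
    int v;
    q->read += ITEM;
    if( q->read >= q->head + LEN * ITEM )
    {
        q->read = q->head;
    }
    memcpy( &v, q->read, ITEM );
    if( just_peeking )
    {
        q->read = orig;
    }
    else
    {
        q->waiting--;
    }
    return v;
}
```

A peek followed by a receive therefore returns the same item twice, which is exactly the behaviour `xQueuePeek()` / `xQueueReceive()` users observe.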
fep 0:5ff20db10a96 1433
fep 0:5ff20db10a96 1434 BaseType_t xQueueReceiveFromISR( QueueHandle_t xQueue, void * const pvBuffer, BaseType_t * const pxHigherPriorityTaskWoken )
fep 0:5ff20db10a96 1435 {
fep 0:5ff20db10a96 1436 BaseType_t xReturn;
fep 0:5ff20db10a96 1437 UBaseType_t uxSavedInterruptStatus;
fep 0:5ff20db10a96 1438 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
fep 0:5ff20db10a96 1439
fep 0:5ff20db10a96 1440 configASSERT( pxQueue );
fep 0:5ff20db10a96 1441 configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
fep 0:5ff20db10a96 1442
fep 0:5ff20db10a96 1443 /* RTOS ports that support interrupt nesting have the concept of a maximum
fep 0:5ff20db10a96 1444 system call (or maximum API call) interrupt priority. Interrupts that are
fep 0:5ff20db10a96 1445 above the maximum system call priority are kept permanently enabled, even
fep 0:5ff20db10a96 1446 when the RTOS kernel is in a critical section, but cannot make any calls to
fep 0:5ff20db10a96 1447 FreeRTOS API functions. If configASSERT() is defined in FreeRTOSConfig.h
fep 0:5ff20db10a96 1448 then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion
fep 0:5ff20db10a96 1449 failure if a FreeRTOS API function is called from an interrupt that has been
fep 0:5ff20db10a96 1450 assigned a priority above the configured maximum system call priority.
fep 0:5ff20db10a96 1451 Only FreeRTOS functions that end in FromISR can be called from interrupts
fep 0:5ff20db10a96 1452 that have been assigned a priority at or (logically) below the maximum
fep 0:5ff20db10a96 1453 system call interrupt priority. FreeRTOS maintains a separate interrupt
fep 0:5ff20db10a96 1454 safe API to ensure interrupt entry is as fast and as simple as possible.
fep 0:5ff20db10a96 1455 More information (albeit Cortex-M specific) is provided on the following
fep 0:5ff20db10a96 1456 link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */
fep 0:5ff20db10a96 1457 portASSERT_IF_INTERRUPT_PRIORITY_INVALID();
fep 0:5ff20db10a96 1458
fep 0:5ff20db10a96 1459 uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
fep 0:5ff20db10a96 1460 {
fep 0:5ff20db10a96 1461 const UBaseType_t uxMessagesWaiting = pxQueue->uxMessagesWaiting;
fep 0:5ff20db10a96 1462
fep 0:5ff20db10a96 1463 /* Cannot block in an ISR, so check there is data available. */
fep 0:5ff20db10a96 1464 if( uxMessagesWaiting > ( UBaseType_t ) 0 )
fep 0:5ff20db10a96 1465 {
fep 0:5ff20db10a96 1466 const int8_t cRxLock = pxQueue->cRxLock;
fep 0:5ff20db10a96 1467
fep 0:5ff20db10a96 1468 traceQUEUE_RECEIVE_FROM_ISR( pxQueue );
fep 0:5ff20db10a96 1469
fep 0:5ff20db10a96 1470 prvCopyDataFromQueue( pxQueue, pvBuffer );
fep 0:5ff20db10a96 1471 pxQueue->uxMessagesWaiting = uxMessagesWaiting - 1;
fep 0:5ff20db10a96 1472
fep 0:5ff20db10a96 1473 /* If the queue is locked the event list will not be modified.
fep 0:5ff20db10a96 1474 Instead update the lock count so the task that unlocks the queue
fep 0:5ff20db10a96 1475 will know that an ISR has removed data while the queue was
fep 0:5ff20db10a96 1476 locked. */
fep 0:5ff20db10a96 1477 if( cRxLock == queueUNLOCKED )
fep 0:5ff20db10a96 1478 {
fep 0:5ff20db10a96 1479 if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
fep 0:5ff20db10a96 1480 {
fep 0:5ff20db10a96 1481 if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
fep 0:5ff20db10a96 1482 {
fep 0:5ff20db10a96 1483 /* The task waiting has a higher priority than us so
fep 0:5ff20db10a96 1484 force a context switch. */
fep 0:5ff20db10a96 1485 if( pxHigherPriorityTaskWoken != NULL )
fep 0:5ff20db10a96 1486 {
fep 0:5ff20db10a96 1487 *pxHigherPriorityTaskWoken = pdTRUE;
fep 0:5ff20db10a96 1488 }
fep 0:5ff20db10a96 1489 else
fep 0:5ff20db10a96 1490 {
fep 0:5ff20db10a96 1491 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1492 }
fep 0:5ff20db10a96 1493 }
fep 0:5ff20db10a96 1494 else
fep 0:5ff20db10a96 1495 {
fep 0:5ff20db10a96 1496 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1497 }
fep 0:5ff20db10a96 1498 }
fep 0:5ff20db10a96 1499 else
fep 0:5ff20db10a96 1500 {
fep 0:5ff20db10a96 1501 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1502 }
fep 0:5ff20db10a96 1503 }
fep 0:5ff20db10a96 1504 else
fep 0:5ff20db10a96 1505 {
fep 0:5ff20db10a96 1506 /* Increment the lock count so the task that unlocks the queue
fep 0:5ff20db10a96 1507 knows that data was removed while it was locked. */
fep 0:5ff20db10a96 1508 pxQueue->cRxLock = ( int8_t ) ( cRxLock + 1 );
fep 0:5ff20db10a96 1509 }
fep 0:5ff20db10a96 1510
fep 0:5ff20db10a96 1511 xReturn = pdPASS;
fep 0:5ff20db10a96 1512 }
fep 0:5ff20db10a96 1513 else
fep 0:5ff20db10a96 1514 {
fep 0:5ff20db10a96 1515 xReturn = pdFAIL;
fep 0:5ff20db10a96 1516 traceQUEUE_RECEIVE_FROM_ISR_FAILED( pxQueue );
fep 0:5ff20db10a96 1517 }
fep 0:5ff20db10a96 1518 }
fep 0:5ff20db10a96 1519 portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );
fep 0:5ff20db10a96 1520
fep 0:5ff20db10a96 1521 return xReturn;
fep 0:5ff20db10a96 1522 }
fep 0:5ff20db10a96 1523 /*-----------------------------------------------------------*/
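The receive-from-ISR path is the mirror image of the give path: data is taken unconditionally if present, but waking a blocked sender is deferred via `cRxLock` whenever the queue is locked. A kernel-free model of just that branch (names are illustrative, not FreeRTOS API):

```c
#include <assert.h>
#include <stdbool.h>

#define RXQ_UNLOCKED ( -1 )     /* mirrors queueUNLOCKED */

typedef struct {
    unsigned waiting;           /* uxMessagesWaiting */
    int      rx_lock;           /* cRxLock */
    bool     sender_blocked;    /* non-empty xTasksWaitingToSend */
} rx_model_t;

/* Returns true when an item was taken, mimicking pdPASS vs pdFAIL. */
static bool model_receive_from_isr( rx_model_t *q, bool *woken )
{
    if( q->waiting == 0 )
    {
        return false;           /* cannot block in an ISR: fail immediately */
    }
    q->waiting--;
    if( q->rx_lock == RXQ_UNLOCKED )
    {
        if( q->sender_blocked )
        {
            *woken = true;      /* a sender now has space to post into */
        }
    }
    else
    {
        q->rx_lock++;           /* deferred until prvUnlockQueue() */
    }
    return true;
}
```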
fep 0:5ff20db10a96 1524
fep 0:5ff20db10a96 1525 BaseType_t xQueuePeekFromISR( QueueHandle_t xQueue, void * const pvBuffer )
fep 0:5ff20db10a96 1526 {
fep 0:5ff20db10a96 1527 BaseType_t xReturn;
fep 0:5ff20db10a96 1528 UBaseType_t uxSavedInterruptStatus;
fep 0:5ff20db10a96 1529 int8_t *pcOriginalReadPosition;
fep 0:5ff20db10a96 1530 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
fep 0:5ff20db10a96 1531
fep 0:5ff20db10a96 1532 configASSERT( pxQueue );
fep 0:5ff20db10a96 1533 configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
fep 0:5ff20db10a96 1534 configASSERT( pxQueue->uxItemSize != 0 ); /* Can't peek a semaphore. */
fep 0:5ff20db10a96 1535
fep 0:5ff20db10a96 1536 /* RTOS ports that support interrupt nesting have the concept of a maximum
fep 0:5ff20db10a96 1537 system call (or maximum API call) interrupt priority. Interrupts that are
fep 0:5ff20db10a96 1538 above the maximum system call priority are kept permanently enabled, even
fep 0:5ff20db10a96 1539 when the RTOS kernel is in a critical section, but cannot make any calls to
fep 0:5ff20db10a96 1540 FreeRTOS API functions. If configASSERT() is defined in FreeRTOSConfig.h
fep 0:5ff20db10a96 1541 then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion
fep 0:5ff20db10a96 1542 failure if a FreeRTOS API function is called from an interrupt that has been
fep 0:5ff20db10a96 1543 assigned a priority above the configured maximum system call priority.
fep 0:5ff20db10a96 1544 Only FreeRTOS functions that end in FromISR can be called from interrupts
fep 0:5ff20db10a96 1545 that have been assigned a priority at or (logically) below the maximum
fep 0:5ff20db10a96 1546 system call interrupt priority. FreeRTOS maintains a separate interrupt
fep 0:5ff20db10a96 1547 safe API to ensure interrupt entry is as fast and as simple as possible.
fep 0:5ff20db10a96 1548 More information (albeit Cortex-M specific) is provided on the following
fep 0:5ff20db10a96 1549 link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */
fep 0:5ff20db10a96 1550 portASSERT_IF_INTERRUPT_PRIORITY_INVALID();
fep 0:5ff20db10a96 1551
fep 0:5ff20db10a96 1552 uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
fep 0:5ff20db10a96 1553 {
fep 0:5ff20db10a96 1554 /* Cannot block in an ISR, so check there is data available. */
fep 0:5ff20db10a96 1555 if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
fep 0:5ff20db10a96 1556 {
fep 0:5ff20db10a96 1557 traceQUEUE_PEEK_FROM_ISR( pxQueue );
fep 0:5ff20db10a96 1558
fep 0:5ff20db10a96 1559 /* Remember the read position so it can be reset as nothing is
fep 0:5ff20db10a96 1560 actually being removed from the queue. */
fep 0:5ff20db10a96 1561 pcOriginalReadPosition = pxQueue->u.pcReadFrom;
fep 0:5ff20db10a96 1562 prvCopyDataFromQueue( pxQueue, pvBuffer );
fep 0:5ff20db10a96 1563 pxQueue->u.pcReadFrom = pcOriginalReadPosition;
fep 0:5ff20db10a96 1564
fep 0:5ff20db10a96 1565 xReturn = pdPASS;
fep 0:5ff20db10a96 1566 }
fep 0:5ff20db10a96 1567 else
fep 0:5ff20db10a96 1568 {
fep 0:5ff20db10a96 1569 xReturn = pdFAIL;
fep 0:5ff20db10a96 1570 traceQUEUE_PEEK_FROM_ISR_FAILED( pxQueue );
fep 0:5ff20db10a96 1571 }
fep 0:5ff20db10a96 1572 }
fep 0:5ff20db10a96 1573 portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );
fep 0:5ff20db10a96 1574
fep 0:5ff20db10a96 1575 return xReturn;
fep 0:5ff20db10a96 1576 }
fep 0:5ff20db10a96 1577 /*-----------------------------------------------------------*/
fep 0:5ff20db10a96 1578
fep 0:5ff20db10a96 1579 UBaseType_t uxQueueMessagesWaiting( const QueueHandle_t xQueue )
fep 0:5ff20db10a96 1580 {
fep 0:5ff20db10a96 1581 UBaseType_t uxReturn;
fep 0:5ff20db10a96 1582
fep 0:5ff20db10a96 1583 configASSERT( xQueue );
fep 0:5ff20db10a96 1584
fep 0:5ff20db10a96 1585 taskENTER_CRITICAL();
fep 0:5ff20db10a96 1586 {
fep 0:5ff20db10a96 1587 uxReturn = ( ( Queue_t * ) xQueue )->uxMessagesWaiting;
fep 0:5ff20db10a96 1588 }
fep 0:5ff20db10a96 1589 taskEXIT_CRITICAL();
fep 0:5ff20db10a96 1590
fep 0:5ff20db10a96 1591 return uxReturn;
fep 0:5ff20db10a96 1592 } /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */
fep 0:5ff20db10a96 1593 /*-----------------------------------------------------------*/
fep 0:5ff20db10a96 1594
fep 0:5ff20db10a96 1595 UBaseType_t uxQueueSpacesAvailable( const QueueHandle_t xQueue )
fep 0:5ff20db10a96 1596 {
fep 0:5ff20db10a96 1597 UBaseType_t uxReturn;
fep 0:5ff20db10a96 1598 Queue_t *pxQueue;
fep 0:5ff20db10a96 1599
fep 0:5ff20db10a96 1600 pxQueue = ( Queue_t * ) xQueue;
fep 0:5ff20db10a96 1601 configASSERT( pxQueue );
fep 0:5ff20db10a96 1602
fep 0:5ff20db10a96 1603 taskENTER_CRITICAL();
fep 0:5ff20db10a96 1604 {
fep 0:5ff20db10a96 1605 uxReturn = pxQueue->uxLength - pxQueue->uxMessagesWaiting;
fep 0:5ff20db10a96 1606 }
fep 0:5ff20db10a96 1607 taskEXIT_CRITICAL();
fep 0:5ff20db10a96 1608
fep 0:5ff20db10a96 1609 return uxReturn;
fep 0:5ff20db10a96 1610 } /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */
fep 0:5ff20db10a96 1611 /*-----------------------------------------------------------*/
fep 0:5ff20db10a96 1612
fep 0:5ff20db10a96 1613 UBaseType_t uxQueueMessagesWaitingFromISR( const QueueHandle_t xQueue )
fep 0:5ff20db10a96 1614 {
fep 0:5ff20db10a96 1615 UBaseType_t uxReturn;
fep 0:5ff20db10a96 1616
fep 0:5ff20db10a96 1617 configASSERT( xQueue );
fep 0:5ff20db10a96 1618
fep 0:5ff20db10a96 1619 uxReturn = ( ( Queue_t * ) xQueue )->uxMessagesWaiting;
fep 0:5ff20db10a96 1620
fep 0:5ff20db10a96 1621 return uxReturn;
fep 0:5ff20db10a96 1622 } /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */
fep 0:5ff20db10a96 1623 /*-----------------------------------------------------------*/
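The three query functions above are thin guarded reads; the useful invariant they expose is that free space and queued messages always sum to the queue length, as `uxQueueSpacesAvailable()` computes directly. A one-line illustration (not kernel code):

```c
#include <assert.h>

/* spaces + messages waiting == queue length at any instant. */
static unsigned spaces_available( unsigned length, unsigned waiting )
{
    return length - waiting;    /* uxLength - uxMessagesWaiting */
}
```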
fep 0:5ff20db10a96 1624
fep 0:5ff20db10a96 1625 void vQueueDelete( QueueHandle_t xQueue )
fep 0:5ff20db10a96 1626 {
fep 0:5ff20db10a96 1627 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
fep 0:5ff20db10a96 1628
fep 0:5ff20db10a96 1629 configASSERT( pxQueue );
fep 0:5ff20db10a96 1630 traceQUEUE_DELETE( pxQueue );
fep 0:5ff20db10a96 1631
fep 0:5ff20db10a96 1632 #if ( configQUEUE_REGISTRY_SIZE > 0 )
fep 0:5ff20db10a96 1633 {
fep 0:5ff20db10a96 1634 vQueueUnregisterQueue( pxQueue );
fep 0:5ff20db10a96 1635 }
fep 0:5ff20db10a96 1636 #endif
fep 0:5ff20db10a96 1637
fep 0:5ff20db10a96 1638 #if( ( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) && ( configSUPPORT_STATIC_ALLOCATION == 0 ) )
fep 0:5ff20db10a96 1639 {
fep 0:5ff20db10a96 1640 /* The queue can only have been allocated dynamically - free it
fep 0:5ff20db10a96 1641 again. */
fep 0:5ff20db10a96 1642 vPortFree( pxQueue );
fep 0:5ff20db10a96 1643 }
fep 0:5ff20db10a96 1644 #elif( ( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) && ( configSUPPORT_STATIC_ALLOCATION == 1 ) )
fep 0:5ff20db10a96 1645 {
fep 0:5ff20db10a96 1646 /* The queue could have been allocated statically or dynamically, so
fep 0:5ff20db10a96 1647 check before attempting to free the memory. */
fep 0:5ff20db10a96 1648 if( pxQueue->ucStaticallyAllocated == ( uint8_t ) pdFALSE )
fep 0:5ff20db10a96 1649 {
fep 0:5ff20db10a96 1650 vPortFree( pxQueue );
fep 0:5ff20db10a96 1651 }
fep 0:5ff20db10a96 1652 else
fep 0:5ff20db10a96 1653 {
fep 0:5ff20db10a96 1654 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1655 }
fep 0:5ff20db10a96 1656 }
fep 0:5ff20db10a96 1657 #else
fep 0:5ff20db10a96 1658 {
fep 0:5ff20db10a96 1659 /* The queue must have been statically allocated, so is not going to be
fep 0:5ff20db10a96 1660 deleted. Avoid compiler warnings about the unused parameter. */
fep 0:5ff20db10a96 1661 ( void ) pxQueue;
fep 0:5ff20db10a96 1662 }
fep 0:5ff20db10a96 1663 #endif /* configSUPPORT_DYNAMIC_ALLOCATION */
fep 0:5ff20db10a96 1664 }
fep 0:5ff20db10a96 1665 /*-----------------------------------------------------------*/
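When both allocation schemes are compiled in, `vQueueDelete()` consults the `ucStaticallyAllocated` flag and frees only control blocks that came from `pvPortMalloc()`. A trivial model of that branch (illustrative, not the real structure):

```c
#include <assert.h>
#include <stdbool.h>

typedef struct {
    bool statically_allocated;  /* plays the role of ucStaticallyAllocated */
} del_model_t;

/* Returns true when the delete path would call vPortFree(). */
static bool model_delete_frees( const del_model_t *q )
{
    /* if( pxQueue->ucStaticallyAllocated == ( uint8_t ) pdFALSE )
           vPortFree( pxQueue ); */
    return !q->statically_allocated;
}
```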
fep 0:5ff20db10a96 1666
fep 0:5ff20db10a96 1667 #if ( configUSE_TRACE_FACILITY == 1 )
fep 0:5ff20db10a96 1668
fep 0:5ff20db10a96 1669 UBaseType_t uxQueueGetQueueNumber( QueueHandle_t xQueue )
fep 0:5ff20db10a96 1670 {
fep 0:5ff20db10a96 1671 return ( ( Queue_t * ) xQueue )->uxQueueNumber;
fep 0:5ff20db10a96 1672 }
fep 0:5ff20db10a96 1673
fep 0:5ff20db10a96 1674 #endif /* configUSE_TRACE_FACILITY */
fep 0:5ff20db10a96 1675 /*-----------------------------------------------------------*/
fep 0:5ff20db10a96 1676
fep 0:5ff20db10a96 1677 #if ( configUSE_TRACE_FACILITY == 1 )
fep 0:5ff20db10a96 1678
fep 0:5ff20db10a96 1679 void vQueueSetQueueNumber( QueueHandle_t xQueue, UBaseType_t uxQueueNumber )
fep 0:5ff20db10a96 1680 {
fep 0:5ff20db10a96 1681 ( ( Queue_t * ) xQueue )->uxQueueNumber = uxQueueNumber;
fep 0:5ff20db10a96 1682 }
fep 0:5ff20db10a96 1683
fep 0:5ff20db10a96 1684 #endif /* configUSE_TRACE_FACILITY */
fep 0:5ff20db10a96 1685 /*-----------------------------------------------------------*/
fep 0:5ff20db10a96 1686
fep 0:5ff20db10a96 1687 #if ( configUSE_TRACE_FACILITY == 1 )
fep 0:5ff20db10a96 1688
fep 0:5ff20db10a96 1689 uint8_t ucQueueGetQueueType( QueueHandle_t xQueue )
fep 0:5ff20db10a96 1690 {
fep 0:5ff20db10a96 1691 return ( ( Queue_t * ) xQueue )->ucQueueType;
fep 0:5ff20db10a96 1692 }
fep 0:5ff20db10a96 1693
fep 0:5ff20db10a96 1694 #endif /* configUSE_TRACE_FACILITY */
fep 0:5ff20db10a96 1695 /*-----------------------------------------------------------*/
fep 0:5ff20db10a96 1696
fep 0:5ff20db10a96 1697 static BaseType_t prvCopyDataToQueue( Queue_t * const pxQueue, const void *pvItemToQueue, const BaseType_t xPosition )
fep 0:5ff20db10a96 1698 {
fep 0:5ff20db10a96 1699 BaseType_t xReturn = pdFALSE;
fep 0:5ff20db10a96 1700 UBaseType_t uxMessagesWaiting;
fep 0:5ff20db10a96 1701
fep 0:5ff20db10a96 1702 /* This function is called from a critical section. */
fep 0:5ff20db10a96 1703
fep 0:5ff20db10a96 1704 uxMessagesWaiting = pxQueue->uxMessagesWaiting;
fep 0:5ff20db10a96 1705
fep 0:5ff20db10a96 1706 if( pxQueue->uxItemSize == ( UBaseType_t ) 0 )
fep 0:5ff20db10a96 1707 {
fep 0:5ff20db10a96 1708 #if ( configUSE_MUTEXES == 1 )
fep 0:5ff20db10a96 1709 {
fep 0:5ff20db10a96 1710 if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
fep 0:5ff20db10a96 1711 {
fep 0:5ff20db10a96 1712 /* The mutex is no longer being held. */
fep 0:5ff20db10a96 1713 xReturn = xTaskPriorityDisinherit( ( void * ) pxQueue->pxMutexHolder );
fep 0:5ff20db10a96 1714 pxQueue->pxMutexHolder = NULL;
fep 0:5ff20db10a96 1715 }
fep 0:5ff20db10a96 1716 else
fep 0:5ff20db10a96 1717 {
fep 0:5ff20db10a96 1718 mtCOVERAGE_TEST_MARKER();
fep 0:5ff20db10a96 1719 }
fep 0:5ff20db10a96 1720 }
fep 0:5ff20db10a96 1721 #endif /* configUSE_MUTEXES */
fep 0:5ff20db10a96 1722 }
fep 0:5ff20db10a96 1723 else if( xPosition == queueSEND_TO_BACK )
fep 0:5ff20db10a96 1724 {
fep 0:5ff20db10a96 1725 ( void ) memcpy( ( void * ) pxQueue->pcWriteTo, pvItemToQueue, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 !e418 MISRA exception as the casts are only redundant for some ports, plus previous logic ensures a null pointer can only be passed to memcpy() if the copy size is 0. */
fep 0:5ff20db10a96 1726 pxQueue->pcWriteTo += pxQueue->uxItemSize;
		if( pxQueue->pcWriteTo >= pxQueue->pcTail ) /*lint !e946 MISRA exception justified as comparison of pointers is the cleanest solution. */
		{
			pxQueue->pcWriteTo = pxQueue->pcHead;
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}
	}
	else
	{
		( void ) memcpy( ( void * ) pxQueue->u.pcReadFrom, pvItemToQueue, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 MISRA exception as the casts are only redundant for some ports. */
		pxQueue->u.pcReadFrom -= pxQueue->uxItemSize;
		if( pxQueue->u.pcReadFrom < pxQueue->pcHead ) /*lint !e946 MISRA exception justified as comparison of pointers is the cleanest solution. */
		{
			pxQueue->u.pcReadFrom = ( pxQueue->pcTail - pxQueue->uxItemSize );
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}

		if( xPosition == queueOVERWRITE )
		{
			if( uxMessagesWaiting > ( UBaseType_t ) 0 )
			{
				/* An item is not being added but overwritten, so subtract
				one from the recorded number of items in the queue so when
				one is added again below the number of recorded items remains
				correct. */
				--uxMessagesWaiting;
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}
	}

	pxQueue->uxMessagesWaiting = uxMessagesWaiting + 1;

	return xReturn;
}
/*-----------------------------------------------------------*/

static void prvCopyDataFromQueue( Queue_t * const pxQueue, void * const pvBuffer )
{
	if( pxQueue->uxItemSize != ( UBaseType_t ) 0 )
	{
		pxQueue->u.pcReadFrom += pxQueue->uxItemSize;
		if( pxQueue->u.pcReadFrom >= pxQueue->pcTail ) /*lint !e946 MISRA exception justified as use of the relational operator is the cleanest solution. */
		{
			pxQueue->u.pcReadFrom = pxQueue->pcHead;
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}
		( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 !e418 MISRA exception as the casts are only redundant for some ports.  Also previous logic ensures a null pointer can only be passed to memcpy() when the count is 0. */
	}
}
/*-----------------------------------------------------------*/

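prvCopyDataFromQueue() above pre-increments the read pointer and wraps it back to pcHead once it reaches pcTail, so the storage area behaves as a ring of fixed-size items. The following is a minimal, self-contained sketch of just that wrap-around logic — the `MiniQueue_t` type and names are hypothetical illustrations, not part of FreeRTOS:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical, self-contained model of the wrap-around read pointer used by
prvCopyDataFromQueue(): advance first, wrap when the pointer reaches the end
of the storage area, then copy the item out. */
typedef struct
{
	uint8_t *pcHead;     /* Start of the queue storage area. */
	uint8_t *pcTail;     /* One byte past the end of the storage area. */
	uint8_t *pcReadFrom; /* Points at the LAST item read, so reads pre-increment. */
	size_t uxItemSize;   /* Size of each queued item in bytes. */
} MiniQueue_t;

static void prvMiniCopyDataFromQueue( MiniQueue_t *pxQueue, void *pvBuffer )
{
	/* Advance past the previously read item... */
	pxQueue->pcReadFrom += pxQueue->uxItemSize;

	/* ...wrapping back to the start of the storage if the end was reached... */
	if( pxQueue->pcReadFrom >= pxQueue->pcTail )
	{
		pxQueue->pcReadFrom = pxQueue->pcHead;
	}

	/* ...then copy the item out of the queue storage. */
	memcpy( pvBuffer, pxQueue->pcReadFrom, pxQueue->uxItemSize );
}
```

Note that the order matters: the pointer is advanced and wrapped before the memcpy(), which is why pcReadFrom always points at the last item read rather than the next one to be read.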
static void prvUnlockQueue( Queue_t * const pxQueue )
{
	/* THIS FUNCTION MUST BE CALLED WITH THE SCHEDULER SUSPENDED. */

	/* The lock counts contain the number of extra data items placed on or
	removed from the queue while the queue was locked.  While a queue is
	locked, items can be added or removed, but the event lists cannot be
	updated. */
	taskENTER_CRITICAL();
	{
		int8_t cTxLock = pxQueue->cTxLock;

		/* See if data was added to the queue while it was locked. */
		while( cTxLock > queueLOCKED_UNMODIFIED )
		{
			/* Data was posted while the queue was locked.  Are any tasks
			blocked waiting for data to become available? */
			#if ( configUSE_QUEUE_SETS == 1 )
			{
				if( pxQueue->pxQueueSetContainer != NULL )
				{
					if( prvNotifyQueueSetContainer( pxQueue, queueSEND_TO_BACK ) != pdFALSE )
					{
						/* The queue is a member of a queue set, and posting to
						the queue set caused a higher priority task to unblock.
						A context switch is required. */
						vTaskMissedYield();
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					/* Tasks that are removed from the event list will get
					added to the pending ready list as the scheduler is still
					suspended. */
					if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
					{
						if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
						{
							/* The task waiting has a higher priority so record
							that a context switch is required. */
							vTaskMissedYield();
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
					else
					{
						break;
					}
				}
			}
			#else /* configUSE_QUEUE_SETS */
			{
				/* Tasks that are removed from the event list will get added to
				the pending ready list as the scheduler is still suspended. */
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
				{
					if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
					{
						/* The task waiting has a higher priority so record that
						a context switch is required. */
						vTaskMissedYield();
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					break;
				}
			}
			#endif /* configUSE_QUEUE_SETS */

			--cTxLock;
		}

		pxQueue->cTxLock = queueUNLOCKED;
	}
	taskEXIT_CRITICAL();

	/* Do the same for the Rx lock. */
	taskENTER_CRITICAL();
	{
		int8_t cRxLock = pxQueue->cRxLock;

		while( cRxLock > queueLOCKED_UNMODIFIED )
		{
			if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
			{
				if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
				{
					vTaskMissedYield();
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}

				--cRxLock;
			}
			else
			{
				break;
			}
		}

		pxQueue->cRxLock = queueUNLOCKED;
	}
	taskEXIT_CRITICAL();
}
/*-----------------------------------------------------------*/

static BaseType_t prvIsQueueEmpty( const Queue_t *pxQueue )
{
BaseType_t xReturn;

	taskENTER_CRITICAL();
	{
		if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0 )
		{
			xReturn = pdTRUE;
		}
		else
		{
			xReturn = pdFALSE;
		}
	}
	taskEXIT_CRITICAL();

	return xReturn;
}
/*-----------------------------------------------------------*/

BaseType_t xQueueIsQueueEmptyFromISR( const QueueHandle_t xQueue )
{
BaseType_t xReturn;

	configASSERT( xQueue );
	if( ( ( Queue_t * ) xQueue )->uxMessagesWaiting == ( UBaseType_t ) 0 )
	{
		xReturn = pdTRUE;
	}
	else
	{
		xReturn = pdFALSE;
	}

	return xReturn;
} /*lint !e818 xQueue could not be pointer to const because it is a typedef. */
/*-----------------------------------------------------------*/

static BaseType_t prvIsQueueFull( const Queue_t *pxQueue )
{
BaseType_t xReturn;

	taskENTER_CRITICAL();
	{
		if( pxQueue->uxMessagesWaiting == pxQueue->uxLength )
		{
			xReturn = pdTRUE;
		}
		else
		{
			xReturn = pdFALSE;
		}
	}
	taskEXIT_CRITICAL();

	return xReturn;
}
/*-----------------------------------------------------------*/

BaseType_t xQueueIsQueueFullFromISR( const QueueHandle_t xQueue )
{
BaseType_t xReturn;

	configASSERT( xQueue );
	if( ( ( Queue_t * ) xQueue )->uxMessagesWaiting == ( ( Queue_t * ) xQueue )->uxLength )
	{
		xReturn = pdTRUE;
	}
	else
	{
		xReturn = pdFALSE;
	}

	return xReturn;
} /*lint !e818 xQueue could not be pointer to const because it is a typedef. */
/*-----------------------------------------------------------*/

#if ( configUSE_CO_ROUTINES == 1 )

	BaseType_t xQueueCRSend( QueueHandle_t xQueue, const void *pvItemToQueue, TickType_t xTicksToWait )
	{
	BaseType_t xReturn;
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		/* If the queue is already full we may have to block.  A critical section
		is required to prevent an interrupt removing something from the queue
		between the check to see if the queue is full and blocking on the queue. */
		portDISABLE_INTERRUPTS();
		{
			if( prvIsQueueFull( pxQueue ) != pdFALSE )
			{
				/* The queue is full - do we want to block or just leave without
				posting? */
				if( xTicksToWait > ( TickType_t ) 0 )
				{
					/* As this is called from a coroutine we cannot block directly, but
					return indicating that we need to block. */
					vCoRoutineAddToDelayedList( xTicksToWait, &( pxQueue->xTasksWaitingToSend ) );
					portENABLE_INTERRUPTS();
					return errQUEUE_BLOCKED;
				}
				else
				{
					portENABLE_INTERRUPTS();
					return errQUEUE_FULL;
				}
			}
		}
		portENABLE_INTERRUPTS();

		portDISABLE_INTERRUPTS();
		{
			if( pxQueue->uxMessagesWaiting < pxQueue->uxLength )
			{
				/* There is room in the queue, copy the data into the queue. */
				prvCopyDataToQueue( pxQueue, pvItemToQueue, queueSEND_TO_BACK );
				xReturn = pdPASS;

				/* Were any co-routines waiting for data to become available? */
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
				{
					/* In this instance the co-routine could be placed directly
					into the ready list as we are within a critical section.
					Instead the same pending ready list mechanism is used as if
					the event were caused from within an interrupt. */
					if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
					{
						/* The co-routine waiting has a higher priority so record
						that a yield might be appropriate. */
						xReturn = errQUEUE_YIELD;
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				xReturn = errQUEUE_FULL;
			}
		}
		portENABLE_INTERRUPTS();

		return xReturn;
	}

#endif /* configUSE_CO_ROUTINES */
/*-----------------------------------------------------------*/

#if ( configUSE_CO_ROUTINES == 1 )

	BaseType_t xQueueCRReceive( QueueHandle_t xQueue, void *pvBuffer, TickType_t xTicksToWait )
	{
	BaseType_t xReturn;
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		/* If the queue is already empty we may have to block.  A critical section
		is required to prevent an interrupt adding something to the queue
		between the check to see if the queue is empty and blocking on the queue. */
		portDISABLE_INTERRUPTS();
		{
			if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0 )
			{
				/* There are no messages in the queue, do we want to block or just
				leave with nothing? */
				if( xTicksToWait > ( TickType_t ) 0 )
				{
					/* As this is a co-routine we cannot block directly, but return
					indicating that we need to block. */
					vCoRoutineAddToDelayedList( xTicksToWait, &( pxQueue->xTasksWaitingToReceive ) );
					portENABLE_INTERRUPTS();
					return errQUEUE_BLOCKED;
				}
				else
				{
					portENABLE_INTERRUPTS();
					return errQUEUE_FULL;
				}
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}
		portENABLE_INTERRUPTS();

		portDISABLE_INTERRUPTS();
		{
			if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
			{
				/* Data is available from the queue. */
				pxQueue->u.pcReadFrom += pxQueue->uxItemSize;
				if( pxQueue->u.pcReadFrom >= pxQueue->pcTail )
				{
					pxQueue->u.pcReadFrom = pxQueue->pcHead;
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
				--( pxQueue->uxMessagesWaiting );
				( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( unsigned ) pxQueue->uxItemSize );

				xReturn = pdPASS;

				/* Were any co-routines waiting for space to become available? */
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
				{
					/* In this instance the co-routine could be placed directly
					into the ready list as we are within a critical section.
					Instead the same pending ready list mechanism is used as if
					the event were caused from within an interrupt. */
					if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
					{
						xReturn = errQUEUE_YIELD;
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				xReturn = pdFAIL;
			}
		}
		portENABLE_INTERRUPTS();

		return xReturn;
	}

#endif /* configUSE_CO_ROUTINES */
/*-----------------------------------------------------------*/

#if ( configUSE_CO_ROUTINES == 1 )

	BaseType_t xQueueCRSendFromISR( QueueHandle_t xQueue, const void *pvItemToQueue, BaseType_t xCoRoutinePreviouslyWoken )
	{
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		/* Cannot block within an ISR so if there is no space on the queue then
		exit without doing anything. */
		if( pxQueue->uxMessagesWaiting < pxQueue->uxLength )
		{
			prvCopyDataToQueue( pxQueue, pvItemToQueue, queueSEND_TO_BACK );

			/* We only want to wake one co-routine per ISR, so check that a
			co-routine has not already been woken. */
			if( xCoRoutinePreviouslyWoken == pdFALSE )
			{
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
				{
					if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
					{
						return pdTRUE;
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}

		return xCoRoutinePreviouslyWoken;
	}

#endif /* configUSE_CO_ROUTINES */
/*-----------------------------------------------------------*/

#if ( configUSE_CO_ROUTINES == 1 )

	BaseType_t xQueueCRReceiveFromISR( QueueHandle_t xQueue, void *pvBuffer, BaseType_t *pxCoRoutineWoken )
	{
	BaseType_t xReturn;
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		/* We cannot block from an ISR, so check there is data available.  If
		not then just leave without doing anything. */
		if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
		{
			/* Copy the data from the queue. */
			pxQueue->u.pcReadFrom += pxQueue->uxItemSize;
			if( pxQueue->u.pcReadFrom >= pxQueue->pcTail )
			{
				pxQueue->u.pcReadFrom = pxQueue->pcHead;
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
			--( pxQueue->uxMessagesWaiting );
			( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( unsigned ) pxQueue->uxItemSize );

			if( ( *pxCoRoutineWoken ) == pdFALSE )
			{
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
				{
					if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
					{
						*pxCoRoutineWoken = pdTRUE;
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}

			xReturn = pdPASS;
		}
		else
		{
			xReturn = pdFAIL;
		}

		return xReturn;
	}

#endif /* configUSE_CO_ROUTINES */
/*-----------------------------------------------------------*/

#if ( configQUEUE_REGISTRY_SIZE > 0 )

	void vQueueAddToRegistry( QueueHandle_t xQueue, const char *pcQueueName ) /*lint !e971 Unqualified char types are allowed for strings and single characters only. */
	{
	UBaseType_t ux;

		/* See if there is an empty space in the registry.  A NULL name denotes
		a free slot. */
		for( ux = ( UBaseType_t ) 0U; ux < ( UBaseType_t ) configQUEUE_REGISTRY_SIZE; ux++ )
		{
			if( xQueueRegistry[ ux ].pcQueueName == NULL )
			{
				/* Store the information on this queue. */
				xQueueRegistry[ ux ].pcQueueName = pcQueueName;
				xQueueRegistry[ ux ].xHandle = xQueue;

				traceQUEUE_REGISTRY_ADD( xQueue, pcQueueName );
				break;
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}
	}

#endif /* configQUEUE_REGISTRY_SIZE */
/*-----------------------------------------------------------*/

#if ( configQUEUE_REGISTRY_SIZE > 0 )

	const char *pcQueueGetName( QueueHandle_t xQueue ) /*lint !e971 Unqualified char types are allowed for strings and single characters only. */
	{
	UBaseType_t ux;
	const char *pcReturn = NULL; /*lint !e971 Unqualified char types are allowed for strings and single characters only. */

		/* Note there is nothing here to protect against another task adding or
		removing entries from the registry while it is being searched. */
		for( ux = ( UBaseType_t ) 0U; ux < ( UBaseType_t ) configQUEUE_REGISTRY_SIZE; ux++ )
		{
			if( xQueueRegistry[ ux ].xHandle == xQueue )
			{
				pcReturn = xQueueRegistry[ ux ].pcQueueName;
				break;
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}

		return pcReturn;
	}

#endif /* configQUEUE_REGISTRY_SIZE */
/*-----------------------------------------------------------*/

#if ( configQUEUE_REGISTRY_SIZE > 0 )

	void vQueueUnregisterQueue( QueueHandle_t xQueue )
	{
	UBaseType_t ux;

		/* See if the handle of the queue being unregistered is actually in the
		registry. */
		for( ux = ( UBaseType_t ) 0U; ux < ( UBaseType_t ) configQUEUE_REGISTRY_SIZE; ux++ )
		{
			if( xQueueRegistry[ ux ].xHandle == xQueue )
			{
				/* Set the name to NULL to show that this slot is free again. */
				xQueueRegistry[ ux ].pcQueueName = NULL;

				/* Set the handle to NULL to ensure the same queue handle cannot
				appear in the registry twice if it is added, removed, then
				added again. */
				xQueueRegistry[ ux ].xHandle = ( QueueHandle_t ) 0;
				break;
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}

	} /*lint !e818 xQueue could not be pointer to const because it is a typedef. */

#endif /* configQUEUE_REGISTRY_SIZE */
/*-----------------------------------------------------------*/

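The three registry functions above share one scheme: a fixed-size array scanned linearly, with a NULL pcQueueName marking a free slot, and both fields cleared on unregistration so a stale handle cannot match twice. A standalone sketch of that scheme — the `Mini*` names are hypothetical, and the locking and trace macros of the real registry are omitted:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, self-contained model of the queue registry: a fixed-size
array in which a NULL pcQueueName marks a free slot. */
#define miniREGISTRY_SIZE 4

typedef struct
{
	void *xHandle;
	const char *pcQueueName;
} MiniRegistryItem_t;

static MiniRegistryItem_t xMiniRegistry[ miniREGISTRY_SIZE ];

static void vMiniAddToRegistry( void *xHandle, const char *pcName )
{
size_t ux;

	/* Take the first free slot, if any; silently do nothing when full. */
	for( ux = 0; ux < miniREGISTRY_SIZE; ux++ )
	{
		if( xMiniRegistry[ ux ].pcQueueName == NULL )
		{
			xMiniRegistry[ ux ].pcQueueName = pcName;
			xMiniRegistry[ ux ].xHandle = xHandle;
			break;
		}
	}
}

static const char *pcMiniGetName( void *xHandle )
{
size_t ux;

	for( ux = 0; ux < miniREGISTRY_SIZE; ux++ )
	{
		if( xMiniRegistry[ ux ].xHandle == xHandle )
		{
			return xMiniRegistry[ ux ].pcQueueName;
		}
	}
	return NULL;
}

static void vMiniUnregister( void *xHandle )
{
size_t ux;

	for( ux = 0; ux < miniREGISTRY_SIZE; ux++ )
	{
		if( xMiniRegistry[ ux ].xHandle == xHandle )
		{
			/* Clear both fields so a stale handle cannot match later. */
			xMiniRegistry[ ux ].pcQueueName = NULL;
			xMiniRegistry[ ux ].xHandle = NULL;
			break;
		}
	}
}
```

As the note in pcQueueGetName() says, nothing here protects the array against concurrent modification; in the kernel the registry is intended primarily for use by debuggers while the target is halted.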
#if ( configUSE_TIMERS == 1 )

	void vQueueWaitForMessageRestricted( QueueHandle_t xQueue, TickType_t xTicksToWait, const BaseType_t xWaitIndefinitely )
	{
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		/* This function should not be called by application code hence the
		'Restricted' in its name.  It is not part of the public API.  It is
		designed for use by kernel code, and has special calling requirements.
		It can result in vListInsert() being called on a list that can only
		possibly ever have one item in it, so the list will be fast, but even
		so it should be called with the scheduler locked and not from a critical
		section. */

		/* Only do anything if there are no messages in the queue.  This function
		will not actually cause the task to block, just place it on a blocked
		list.  It will not block until the scheduler is unlocked - at which
		time a yield will be performed.  If an item is added to the queue while
		the queue is locked, and the calling task blocks on the queue, then the
		calling task will be immediately unblocked when the queue is unlocked. */
		prvLockQueue( pxQueue );
		if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0U )
		{
			/* There is nothing in the queue, block for the specified period. */
			vTaskPlaceOnEventListRestricted( &( pxQueue->xTasksWaitingToReceive ), xTicksToWait, xWaitIndefinitely );
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}
		prvUnlockQueue( pxQueue );
	}

#endif /* configUSE_TIMERS */
/*-----------------------------------------------------------*/

#if( ( configUSE_QUEUE_SETS == 1 ) && ( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) )

	QueueSetHandle_t xQueueCreateSet( const UBaseType_t uxEventQueueLength )
	{
	QueueSetHandle_t pxQueue;

		pxQueue = xQueueGenericCreate( uxEventQueueLength, sizeof( Queue_t * ), queueQUEUE_TYPE_SET );

		return pxQueue;
	}

#endif /* configUSE_QUEUE_SETS */
/*-----------------------------------------------------------*/

#if ( configUSE_QUEUE_SETS == 1 )

	BaseType_t xQueueAddToSet( QueueSetMemberHandle_t xQueueOrSemaphore, QueueSetHandle_t xQueueSet )
	{
	BaseType_t xReturn;

		taskENTER_CRITICAL();
		{
			if( ( ( Queue_t * ) xQueueOrSemaphore )->pxQueueSetContainer != NULL )
			{
				/* Cannot add a queue/semaphore to more than one queue set. */
				xReturn = pdFAIL;
			}
			else if( ( ( Queue_t * ) xQueueOrSemaphore )->uxMessagesWaiting != ( UBaseType_t ) 0 )
			{
				/* Cannot add a queue/semaphore to a queue set if there are already
				items in the queue/semaphore. */
				xReturn = pdFAIL;
			}
			else
			{
				( ( Queue_t * ) xQueueOrSemaphore )->pxQueueSetContainer = xQueueSet;
				xReturn = pdPASS;
			}
		}
		taskEXIT_CRITICAL();

		return xReturn;
	}

#endif /* configUSE_QUEUE_SETS */
/*-----------------------------------------------------------*/

#if ( configUSE_QUEUE_SETS == 1 )

	BaseType_t xQueueRemoveFromSet( QueueSetMemberHandle_t xQueueOrSemaphore, QueueSetHandle_t xQueueSet )
	{
	BaseType_t xReturn;
	Queue_t * const pxQueueOrSemaphore = ( Queue_t * ) xQueueOrSemaphore;

		if( pxQueueOrSemaphore->pxQueueSetContainer != xQueueSet )
		{
			/* The queue was not a member of the set. */
			xReturn = pdFAIL;
		}
		else if( pxQueueOrSemaphore->uxMessagesWaiting != ( UBaseType_t ) 0 )
		{
			/* It is dangerous to remove a queue from a set when the queue is
			not empty because the queue set will still hold pending events for
			the queue. */
			xReturn = pdFAIL;
		}
		else
		{
			taskENTER_CRITICAL();
			{
				/* The queue is no longer contained in the set. */
				pxQueueOrSemaphore->pxQueueSetContainer = NULL;
			}
			taskEXIT_CRITICAL();
			xReturn = pdPASS;
		}

		return xReturn;
	} /*lint !e818 xQueueSet could not be declared as pointing to const as it is a typedef. */

#endif /* configUSE_QUEUE_SETS */
/*-----------------------------------------------------------*/
fep 0:5ff20db10a96 2474
#if ( configUSE_QUEUE_SETS == 1 )

	QueueSetMemberHandle_t xQueueSelectFromSet( QueueSetHandle_t xQueueSet, TickType_t const xTicksToWait )
	{
	QueueSetMemberHandle_t xReturn = NULL;

		( void ) xQueueGenericReceive( ( QueueHandle_t ) xQueueSet, &xReturn, xTicksToWait, pdFALSE ); /*lint !e961 Casting from one typedef to another is not redundant. */
		return xReturn;
	}

#endif /* configUSE_QUEUE_SETS */
/*-----------------------------------------------------------*/
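The queue-set functions above are typically used together from a reader task. The following is a minimal usage sketch, not part of this file: the names `xQueue1`, `xSemaphore`, `xSet`, `vReaderTask` and the lengths chosen are illustrative assumptions, and the code presumes a FreeRTOS build with `configUSE_QUEUE_SETS` set to 1 and `FreeRTOS.h`, `queue.h` and `semphr.h` included.

```c
/* Hypothetical reader task demonstrating the queue-set API sketched above.
All names and sizes are illustrative, not taken from this source file. */
void vReaderTask( void *pvParameters )
{
QueueHandle_t xQueue1 = xQueueCreate( 10, sizeof( uint32_t ) );
SemaphoreHandle_t xSemaphore = xSemaphoreCreateBinary();
/* The set must be able to hold one event per item its members can hold:
here 10 queue items plus 1 semaphore "give". */
QueueSetHandle_t xSet = xQueueCreateSet( 11 );
QueueSetMemberHandle_t xActivated;
uint32_t ulValue;

	( void ) pvParameters;

	/* Members must be empty when added - see xQueueAddToSet. */
	xQueueAddToSet( xQueue1, xSet );
	xQueueAddToSet( xSemaphore, xSet );

	for( ;; )
	{
		/* Block until at least one member of the set has data, then
		read from whichever member was returned. */
		xActivated = xQueueSelectFromSet( xSet, portMAX_DELAY );

		if( xActivated == ( QueueSetMemberHandle_t ) xQueue1 )
		{
			xQueueReceive( xQueue1, &ulValue, 0 );
		}
		else if( xActivated == ( QueueSetMemberHandle_t ) xSemaphore )
		{
			xSemaphoreTake( xSemaphore, 0 );
		}
	}
}
```

Note the pattern: the handle returned by `xQueueSelectFromSet()` identifies which member holds data, and the subsequent receive/take uses a zero block time because the event guarantees the read will succeed.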
fep 0:5ff20db10a96 2487
#if ( configUSE_QUEUE_SETS == 1 )

	QueueSetMemberHandle_t xQueueSelectFromSetFromISR( QueueSetHandle_t xQueueSet )
	{
	QueueSetMemberHandle_t xReturn = NULL;

		( void ) xQueueReceiveFromISR( ( QueueHandle_t ) xQueueSet, &xReturn, NULL ); /*lint !e961 Casting from one typedef to another is not redundant. */
		return xReturn;
	}

#endif /* configUSE_QUEUE_SETS */
/*-----------------------------------------------------------*/
fep 0:5ff20db10a96 2500
#if ( configUSE_QUEUE_SETS == 1 )

	static BaseType_t prvNotifyQueueSetContainer( const Queue_t * const pxQueue, const BaseType_t xCopyPosition )
	{
	Queue_t *pxQueueSetContainer = pxQueue->pxQueueSetContainer;
	BaseType_t xReturn = pdFALSE;

		/* This function must be called from a critical section. */

		configASSERT( pxQueueSetContainer );
		configASSERT( pxQueueSetContainer->uxMessagesWaiting < pxQueueSetContainer->uxLength );

		if( pxQueueSetContainer->uxMessagesWaiting < pxQueueSetContainer->uxLength )
		{
			const int8_t cTxLock = pxQueueSetContainer->cTxLock;

			traceQUEUE_SEND( pxQueueSetContainer );

			/* The data copied is the handle of the queue that contains data. */
			xReturn = prvCopyDataToQueue( pxQueueSetContainer, &pxQueue, xCopyPosition );

			if( cTxLock == queueUNLOCKED )
			{
				if( listLIST_IS_EMPTY( &( pxQueueSetContainer->xTasksWaitingToReceive ) ) == pdFALSE )
				{
					if( xTaskRemoveFromEventList( &( pxQueueSetContainer->xTasksWaitingToReceive ) ) != pdFALSE )
					{
						/* The task waiting has a higher priority. */
						xReturn = pdTRUE;
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				pxQueueSetContainer->cTxLock = ( int8_t ) ( cTxLock + 1 );
			}
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}

		return xReturn;
	}

#endif /* configUSE_QUEUE_SETS */