David Fletcher / Mbed 2 deprecated cc3100_Test_mqtt_CM4F

Dependencies:   mbed

Committer: dflet
Date: Thu Sep 03 14:07:01 2015 +0000
Revision: 0:1e7b5dd9edb4
First commit, it's been hanging around for a while. Updated SPI mode change 1 to 0.

/*
    FreeRTOS V8.2.1 - Copyright (C) 2015 Real Time Engineers Ltd.
    All rights reserved

    VISIT http://www.FreeRTOS.org TO ENSURE YOU ARE USING THE LATEST VERSION.

    This file is part of the FreeRTOS distribution.

    FreeRTOS is free software; you can redistribute it and/or modify it under
    the terms of the GNU General Public License (version 2) as published by the
    Free Software Foundation >>!AND MODIFIED BY!<< the FreeRTOS exception.

    ***************************************************************************
    >>!   NOTE: The modification to the GPL is included to allow you to     !<<
    >>!   distribute a combined work that includes FreeRTOS without being   !<<
    >>!   obliged to provide the source code for proprietary components     !<<
    >>!   outside of the FreeRTOS kernel.                                   !<<
    ***************************************************************************

    FreeRTOS is distributed in the hope that it will be useful, but WITHOUT ANY
    WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
    FOR A PARTICULAR PURPOSE.  Full license text is available on the following
    link: http://www.freertos.org/a00114.html

    ***************************************************************************
    *                                                                       *
    *    FreeRTOS provides completely free yet professionally developed,    *
    *    robust, strictly quality controlled, supported, and cross          *
    *    platform software that is more than just the market leader, it     *
    *    is the industry's de facto standard.                               *
    *                                                                       *
    *    Help yourself get started quickly while simultaneously helping     *
    *    to support the FreeRTOS project by purchasing a FreeRTOS           *
    *    tutorial book, reference manual, or both:                          *
    *    http://www.FreeRTOS.org/Documentation                              *
    *                                                                       *
    ***************************************************************************

    http://www.FreeRTOS.org/FAQHelp.html - Having a problem?  Start by reading
    the FAQ page "My application does not run, what could be wrong?".  Have you
    defined configASSERT()?

    http://www.FreeRTOS.org/support - In return for receiving this top quality
    embedded software for free we request you assist our global community by
    participating in the support forum.

    http://www.FreeRTOS.org/training - Investing in training allows your team
    to be as productive as possible as early as possible.  Now you can receive
    FreeRTOS training directly from Richard Barry, CEO of Real Time Engineers
    Ltd, and the world's leading authority on the world's leading RTOS.

    http://www.FreeRTOS.org/plus - A selection of FreeRTOS ecosystem products,
    including FreeRTOS+Trace - an indispensable productivity tool, a DOS
    compatible FAT file system, and our tiny thread aware UDP/IP stack.

    http://www.FreeRTOS.org/labs - Where new FreeRTOS products go to incubate.
    Come and try FreeRTOS+TCP, our new open source TCP/IP stack for FreeRTOS.

    http://www.OpenRTOS.com - Real Time Engineers ltd. license FreeRTOS to High
    Integrity Systems ltd. to sell under the OpenRTOS brand.  Low cost OpenRTOS
    licenses offer ticketed support, indemnification and commercial middleware.

    http://www.SafeRTOS.com - High Integrity Systems also provide a safety
    engineered and independently SIL3 certified version for use in safety and
    mission critical applications that require provable dependability.

    1 tab == 4 spaces!
*/

#include <stdlib.h>
#include <string.h>

/* Defining MPU_WRAPPERS_INCLUDED_FROM_API_FILE prevents task.h from redefining
all the API functions to use the MPU wrappers.  That should only be done when
task.h is included from an application file. */
#define MPU_WRAPPERS_INCLUDED_FROM_API_FILE

#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

#if ( configUSE_CO_ROUTINES == 1 )
    #include "croutine.h"
#endif

/* Lint e961 and e750 are suppressed as a MISRA exception justified because the
MPU ports require MPU_WRAPPERS_INCLUDED_FROM_API_FILE to be defined for the
header files above, but not in this file, in order to generate the correct
privileged vs unprivileged linkage and placement. */
#undef MPU_WRAPPERS_INCLUDED_FROM_API_FILE /*lint !e961 !e750. */


/* Constants used with the xRxLock and xTxLock structure members. */
#define queueUNLOCKED                   ( ( BaseType_t ) -1 )
#define queueLOCKED_UNMODIFIED          ( ( BaseType_t ) 0 )

/* When the Queue_t structure is used to represent a base queue its pcHead and
pcTail members are used as pointers into the queue storage area.  When the
Queue_t structure is used to represent a mutex the pcHead and pcTail pointers
are not necessary, and the pcHead pointer is set to NULL to indicate that the
pcTail pointer actually points to the mutex holder (if any).  Map alternative
names to the pcHead and pcTail structure members to ensure the readability of
the code is maintained despite this dual use of two structure members.  An
alternative implementation would be to use a union, but use of a union is
against the coding standard (although an exception to the standard has been
permitted where the dual use also significantly changes the type of the
structure member). */
#define pxMutexHolder                   pcTail
#define uxQueueType                     pcHead
#define queueQUEUE_IS_MUTEX             NULL

/* Semaphores do not actually store or copy data, so have an item size of
zero. */
#define queueSEMAPHORE_QUEUE_ITEM_LENGTH    ( ( UBaseType_t ) 0 )
#define queueMUTEX_GIVE_BLOCK_TIME          ( ( TickType_t ) 0U )

#if( configUSE_PREEMPTION == 0 )
    /* If the cooperative scheduler is being used then a yield should not be
    performed just because a higher priority task has been woken. */
    #define queueYIELD_IF_USING_PREEMPTION()
#else
    #define queueYIELD_IF_USING_PREEMPTION() portYIELD_WITHIN_API()
#endif

/*
 * Definition of the queue used by the scheduler.
 * Items are queued by copy, not reference.  See the following link for the
 * rationale: http://www.freertos.org/Embedded-RTOS-Queues.html
 */
typedef struct QueueDefinition
{
    int8_t *pcHead;                 /*< Points to the beginning of the queue storage area. */
    int8_t *pcTail;                 /*< Points to the byte at the end of the queue storage area.  One more byte is allocated than necessary to store the queue items; this byte is used as a marker. */
    int8_t *pcWriteTo;              /*< Points to the next free place in the storage area. */

    union                           /* Use of a union is an exception to the coding standard to ensure two mutually exclusive structure members don't appear simultaneously (wasting RAM). */
    {
        int8_t *pcReadFrom;         /*< Points to the last place that a queued item was read from when the structure is used as a queue. */
        UBaseType_t uxRecursiveCallCount; /*< Maintains a count of the number of times a recursive mutex has been recursively 'taken' when the structure is used as a mutex. */
    } u;

    List_t xTasksWaitingToSend;     /*< List of tasks that are blocked waiting to post onto this queue.  Stored in priority order. */
    List_t xTasksWaitingToReceive;  /*< List of tasks that are blocked waiting to read from this queue.  Stored in priority order. */

    volatile UBaseType_t uxMessagesWaiting; /*< The number of items currently in the queue. */
    UBaseType_t uxLength;           /*< The length of the queue, defined as the number of items it will hold, not the number of bytes. */
    UBaseType_t uxItemSize;         /*< The size of each item that the queue will hold. */

    volatile BaseType_t xRxLock;    /*< Stores the number of items received from the queue (removed from the queue) while the queue was locked.  Set to queueUNLOCKED when the queue is not locked. */
    volatile BaseType_t xTxLock;    /*< Stores the number of items transmitted to the queue (added to the queue) while the queue was locked.  Set to queueUNLOCKED when the queue is not locked. */

    #if ( configUSE_TRACE_FACILITY == 1 )
        UBaseType_t uxQueueNumber;
        uint8_t ucQueueType;
    #endif

    #if ( configUSE_QUEUE_SETS == 1 )
        struct QueueDefinition *pxQueueSetContainer;
    #endif

} xQUEUE;

/* The old xQUEUE name is maintained above then typedefed to the new Queue_t
name below to enable the use of older kernel aware debuggers. */
typedef xQUEUE Queue_t;

/*-----------------------------------------------------------*/
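The "queued by copy, not reference" behaviour the structure comment describes can be sketched with a plain, standalone ring buffer. This is a hypothetical miniature (the `MiniQueue`, `mini_send`, and `mini_receive` names are not part of FreeRTOS): each send `memcpy()`s the item into dedicated storage, so the sender may reuse or destroy its own buffer immediately after sending.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical queue-by-copy model: storage holds `length` slots of
   `itemSize` bytes each; head/tail are item indices that wrap around. */
typedef struct
{
    unsigned char *storage;     /* length * itemSize bytes of backing store */
    size_t itemSize, length;    /* size of one item, capacity in items */
    size_t head, tail, count;   /* read index, write index, items held */
} MiniQueue;

static int mini_send( MiniQueue *q, const void *item )
{
    if( q->count == q->length ) return 0;               /* queue full */
    memcpy( q->storage + q->tail * q->itemSize, item, q->itemSize );
    q->tail = ( q->tail + 1 ) % q->length;              /* wrap around */
    q->count++;
    return 1;
}

static int mini_receive( MiniQueue *q, void *buffer )
{
    if( q->count == 0 ) return 0;                       /* queue empty */
    memcpy( buffer, q->storage + q->head * q->itemSize, q->itemSize );
    q->head = ( q->head + 1 ) % q->length;
    q->count--;
    return 1;
}
```

Because the item is copied in, mutating the sender's variable after `mini_send` does not affect what the receiver later reads; the real kernel relies on the same property.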

/*
 * The queue registry is just a means for kernel aware debuggers to locate
 * queue structures.  It has no other purpose so is an optional component.
 */
#if ( configQUEUE_REGISTRY_SIZE > 0 )

    /* The type stored within the queue registry array.  This allows a name
    to be assigned to each queue, making kernel aware debugging a little
    more user friendly. */
    typedef struct QUEUE_REGISTRY_ITEM
    {
        const char *pcQueueName; /*lint !e971 Unqualified char types are allowed for strings and single characters only. */
        QueueHandle_t xHandle;
    } xQueueRegistryItem;

    /* The old xQueueRegistryItem name is maintained above then typedefed to
    the new QueueRegistryItem_t name below to enable the use of older kernel
    aware debuggers. */
    typedef xQueueRegistryItem QueueRegistryItem_t;

    /* The queue registry is simply an array of QueueRegistryItem_t structures.
    The pcQueueName member of a structure being NULL indicates that the
    array position is vacant. */
    QueueRegistryItem_t xQueueRegistry[ configQUEUE_REGISTRY_SIZE ];

#endif /* configQUEUE_REGISTRY_SIZE */

/*
 * Unlocks a queue locked by a call to prvLockQueue.  Locking a queue does not
 * prevent an ISR from adding or removing items to the queue, but does prevent
 * an ISR from removing tasks from the queue event lists.  If an ISR finds a
 * queue is locked it will instead increment the appropriate queue lock count
 * to indicate that a task may require unblocking.  When the queue is unlocked
 * these lock counts are inspected, and the appropriate action taken.
 */
static void prvUnlockQueue( Queue_t * const pxQueue ) PRIVILEGED_FUNCTION;

/*
 * Uses a critical section to determine if there is any data in a queue.
 *
 * @return pdTRUE if the queue contains no items, otherwise pdFALSE.
 */
static BaseType_t prvIsQueueEmpty( const Queue_t *pxQueue ) PRIVILEGED_FUNCTION;

/*
 * Uses a critical section to determine if there is any space in a queue.
 *
 * @return pdTRUE if there is no space, otherwise pdFALSE.
 */
static BaseType_t prvIsQueueFull( const Queue_t *pxQueue ) PRIVILEGED_FUNCTION;

/*
 * Copies an item into the queue, either at the front of the queue or the
 * back of the queue.
 */
static BaseType_t prvCopyDataToQueue( Queue_t * const pxQueue, const void *pvItemToQueue, const BaseType_t xPosition ) PRIVILEGED_FUNCTION;

/*
 * Copies an item out of a queue.
 */
static void prvCopyDataFromQueue( Queue_t * const pxQueue, void * const pvBuffer ) PRIVILEGED_FUNCTION;

#if ( configUSE_QUEUE_SETS == 1 )
    /*
     * Checks to see if a queue is a member of a queue set, and if so, notifies
     * the queue set that the queue contains data.
     */
    static BaseType_t prvNotifyQueueSetContainer( const Queue_t * const pxQueue, const BaseType_t xCopyPosition ) PRIVILEGED_FUNCTION;
#endif

/*-----------------------------------------------------------*/

/*
 * Macro to mark a queue as locked.  Locking a queue prevents an ISR from
 * accessing the queue event lists.
 */
#define prvLockQueue( pxQueue )                             \
    taskENTER_CRITICAL();                                   \
    {                                                       \
        if( ( pxQueue )->xRxLock == queueUNLOCKED )         \
        {                                                   \
            ( pxQueue )->xRxLock = queueLOCKED_UNMODIFIED;  \
        }                                                   \
        if( ( pxQueue )->xTxLock == queueUNLOCKED )         \
        {                                                   \
            ( pxQueue )->xTxLock = queueLOCKED_UNMODIFIED;  \
        }                                                   \
    }                                                       \
    taskEXIT_CRITICAL()
/*-----------------------------------------------------------*/
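The lock-count protocol that prvLockQueue and prvUnlockQueue cooperate on can be modelled standalone. This is a hypothetical sketch (the `LockModel` type and `model_*` functions are illustrative, not kernel code): -1 means unlocked; while the queue is "locked", a send from an ISR merely bumps the count instead of touching the task-owned event lists, and unlocking replays the deferred wakeups.

```c
/* Hypothetical model of the xTxLock protocol described above. */
#define MODEL_UNLOCKED            ( -1 )
#define MODEL_LOCKED_UNMODIFIED   ( 0 )

typedef struct
{
    int txLock;          /* MODEL_UNLOCKED, or count of deferred ISR posts */
    int pendingWakeups;  /* wakeups performed when the queue was unlocked */
} LockModel;

static void model_lock( LockModel *q )
{
    if( q->txLock == MODEL_UNLOCKED ) q->txLock = MODEL_LOCKED_UNMODIFIED;
}

/* What an ISR send would do: if the queue is locked, record the post
   rather than manipulating the event lists. */
static void model_send_from_isr( LockModel *q )
{
    if( q->txLock != MODEL_UNLOCKED ) q->txLock++;
}

/* Unlocking inspects the count, performs the deferred wakeups, then
   returns the queue to the unlocked state. */
static void model_unlock( LockModel *q )
{
    while( q->txLock > MODEL_LOCKED_UNMODIFIED )
    {
        q->pendingWakeups++;
        q->txLock--;
    }
    q->txLock = MODEL_UNLOCKED;
}
```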

BaseType_t xQueueGenericReset( QueueHandle_t xQueue, BaseType_t xNewQueue )
{
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

    configASSERT( pxQueue );

    taskENTER_CRITICAL();
    {
        pxQueue->pcTail = pxQueue->pcHead + ( pxQueue->uxLength * pxQueue->uxItemSize );
        pxQueue->uxMessagesWaiting = ( UBaseType_t ) 0U;
        pxQueue->pcWriteTo = pxQueue->pcHead;
        pxQueue->u.pcReadFrom = pxQueue->pcHead + ( ( pxQueue->uxLength - ( UBaseType_t ) 1U ) * pxQueue->uxItemSize );
        pxQueue->xRxLock = queueUNLOCKED;
        pxQueue->xTxLock = queueUNLOCKED;

        if( xNewQueue == pdFALSE )
        {
            /* If there are tasks blocked waiting to read from the queue, then
            the tasks will remain blocked, as after this function exits the
            queue will still be empty.  If there are tasks blocked waiting to
            write to the queue, then one should be unblocked, as after this
            function exits it will be possible to write to it. */
            if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
            {
                if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) == pdTRUE )
                {
                    queueYIELD_IF_USING_PREEMPTION();
                }
                else
                {
                    mtCOVERAGE_TEST_MARKER();
                }
            }
            else
            {
                mtCOVERAGE_TEST_MARKER();
            }
        }
        else
        {
            /* Ensure the event queues start in the correct state. */
            vListInitialise( &( pxQueue->xTasksWaitingToSend ) );
            vListInitialise( &( pxQueue->xTasksWaitingToReceive ) );
        }
    }
    taskEXIT_CRITICAL();

    /* A value is returned for calling semantic consistency with previous
    versions. */
    return pdPASS;
}
/*-----------------------------------------------------------*/

QueueHandle_t xQueueGenericCreate( const UBaseType_t uxQueueLength, const UBaseType_t uxItemSize, const uint8_t ucQueueType )
{
Queue_t *pxNewQueue;
size_t xQueueSizeInBytes;
QueueHandle_t xReturn = NULL;
int8_t *pcAllocatedBuffer;

    /* Remove compiler warnings about unused parameters should
    configUSE_TRACE_FACILITY not be set to 1. */
    ( void ) ucQueueType;

    configASSERT( uxQueueLength > ( UBaseType_t ) 0 );

    if( uxItemSize == ( UBaseType_t ) 0 )
    {
        /* There is not going to be a queue storage area. */
        xQueueSizeInBytes = ( size_t ) 0;
    }
    else
    {
        /* The queue is one byte longer than asked for to make wrap checking
        easier/faster. */
        xQueueSizeInBytes = ( size_t ) ( uxQueueLength * uxItemSize ) + ( size_t ) 1; /*lint !e961 MISRA exception as the casts are only redundant for some ports. */
    }

    /* Allocate the new queue structure and storage area. */
    pcAllocatedBuffer = ( int8_t * ) pvPortMalloc( sizeof( Queue_t ) + xQueueSizeInBytes );

    if( pcAllocatedBuffer != NULL )
    {
        pxNewQueue = ( Queue_t * ) pcAllocatedBuffer; /*lint !e826 MISRA The buffer cannot be too small because it was dimensioned by sizeof( Queue_t ) + xQueueSizeInBytes. */

        if( uxItemSize == ( UBaseType_t ) 0 )
        {
            /* No RAM was allocated for the queue storage area, but pcHead
            cannot be set to NULL because NULL is used as a key to say the
            queue is used as a mutex.  Therefore just set pcHead to point to
            the queue as a benign value that is known to be within the
            memory map. */
            pxNewQueue->pcHead = ( int8_t * ) pxNewQueue;
        }
        else
        {
            /* Jump past the queue structure to find the location of the queue
            storage area. */
            pxNewQueue->pcHead = pcAllocatedBuffer + sizeof( Queue_t );
        }

        /* Initialise the queue members as described above, where the queue
        type is defined. */
        pxNewQueue->uxLength = uxQueueLength;
        pxNewQueue->uxItemSize = uxItemSize;
        ( void ) xQueueGenericReset( pxNewQueue, pdTRUE );

        #if ( configUSE_TRACE_FACILITY == 1 )
        {
            pxNewQueue->ucQueueType = ucQueueType;
        }
        #endif /* configUSE_TRACE_FACILITY */

        #if( configUSE_QUEUE_SETS == 1 )
        {
            pxNewQueue->pxQueueSetContainer = NULL;
        }
        #endif /* configUSE_QUEUE_SETS */

        traceQUEUE_CREATE( pxNewQueue );
        xReturn = pxNewQueue;
    }
    else
    {
        mtCOVERAGE_TEST_MARKER();
    }

    configASSERT( xReturn );

    return xReturn;
}
/*-----------------------------------------------------------*/
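xQueueGenericCreate makes a single allocation that holds both the Queue_t header and the storage area, with pcHead pointing just past the header (or at the structure itself when the item size is zero, since NULL is reserved to mark a mutex). A minimal standalone sketch of that layout, using hypothetical names (`MiniHeader`, `mini_create`) rather than the kernel's:

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-in for Queue_t: only the members needed here. */
typedef struct
{
    unsigned char *pcHead;
    unsigned char *pcTail;
    size_t uxLength, uxItemSize;
} MiniHeader;

/* One malloc covers header + storage (+1 byte, mirroring the real code's
   wrap-check marker byte).  pcHead points just past the header. */
static MiniHeader *mini_create( size_t length, size_t itemSize )
{
    size_t storageBytes = ( itemSize == 0 ) ? 0 : ( length * itemSize ) + 1;
    unsigned char *buf = malloc( sizeof( MiniHeader ) + storageBytes );
    if( buf == NULL ) return NULL;

    MiniHeader *q = ( MiniHeader * ) buf;
    q->uxLength = length;
    q->uxItemSize = itemSize;
    /* Item size zero: no storage area, so point pcHead at the structure
       itself - a benign non-NULL value, as NULL would mark a mutex. */
    q->pcHead = ( itemSize == 0 ) ? buf : buf + sizeof( MiniHeader );
    q->pcTail = q->pcHead + ( length * itemSize );
    return q;
}
```

The single allocation means one `vPortFree()` (here, `free()`) releases both header and storage, and the storage is guaranteed adjacent to the header it belongs to.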

#if ( configUSE_MUTEXES == 1 )

    QueueHandle_t xQueueCreateMutex( const uint8_t ucQueueType )
    {
    Queue_t *pxNewQueue;

        /* Prevent compiler warnings about unused parameters if
        configUSE_TRACE_FACILITY does not equal 1. */
        ( void ) ucQueueType;

        /* Allocate the new queue structure. */
        pxNewQueue = ( Queue_t * ) pvPortMalloc( sizeof( Queue_t ) );
        if( pxNewQueue != NULL )
        {
            /* Information required for priority inheritance. */
            pxNewQueue->pxMutexHolder = NULL;
            pxNewQueue->uxQueueType = queueQUEUE_IS_MUTEX;

            /* When a queue is used as a mutex no data is actually copied into
            or out of the queue. */
            pxNewQueue->pcWriteTo = NULL;
            pxNewQueue->u.pcReadFrom = NULL;

            /* Each mutex has a length of 1 (like a binary semaphore) and
            an item size of 0, as nothing is actually copied into or out
            of the mutex. */
            pxNewQueue->uxMessagesWaiting = ( UBaseType_t ) 0U;
            pxNewQueue->uxLength = ( UBaseType_t ) 1U;
            pxNewQueue->uxItemSize = ( UBaseType_t ) 0U;
            pxNewQueue->xRxLock = queueUNLOCKED;
            pxNewQueue->xTxLock = queueUNLOCKED;

            #if ( configUSE_TRACE_FACILITY == 1 )
            {
                pxNewQueue->ucQueueType = ucQueueType;
            }
            #endif

            #if ( configUSE_QUEUE_SETS == 1 )
            {
                pxNewQueue->pxQueueSetContainer = NULL;
            }
            #endif

            /* Ensure the event queues start with the correct state. */
            vListInitialise( &( pxNewQueue->xTasksWaitingToSend ) );
            vListInitialise( &( pxNewQueue->xTasksWaitingToReceive ) );

            traceCREATE_MUTEX( pxNewQueue );

            /* Start with the semaphore in the expected state. */
            ( void ) xQueueGenericSend( pxNewQueue, NULL, ( TickType_t ) 0U, queueSEND_TO_BACK );
        }
        else
        {
            traceCREATE_MUTEX_FAILED();
        }

        configASSERT( pxNewQueue );
        return pxNewQueue;
    }

#endif /* configUSE_MUTEXES */
/*-----------------------------------------------------------*/

#if ( ( configUSE_MUTEXES == 1 ) && ( INCLUDE_xSemaphoreGetMutexHolder == 1 ) )

    void* xQueueGetMutexHolder( QueueHandle_t xSemaphore )
    {
    void *pxReturn;

        /* This function is called by xSemaphoreGetMutexHolder(), and should
        not be called directly.  Note: this is a good way of determining if
        the calling task is the mutex holder, but not a good way of
        determining the identity of the mutex holder, as the holder may change
        between the following critical section exiting and the function
        returning. */
        taskENTER_CRITICAL();
        {
            if( ( ( Queue_t * ) xSemaphore )->uxQueueType == queueQUEUE_IS_MUTEX )
            {
                pxReturn = ( void * ) ( ( Queue_t * ) xSemaphore )->pxMutexHolder;
            }
            else
            {
                pxReturn = NULL;
            }
        }
        taskEXIT_CRITICAL();

        return pxReturn;
    } /*lint !e818 xSemaphore cannot be a pointer to const because it is a typedef. */

#endif
/*-----------------------------------------------------------*/

#if ( configUSE_RECURSIVE_MUTEXES == 1 )

    BaseType_t xQueueGiveMutexRecursive( QueueHandle_t xMutex )
    {
    BaseType_t xReturn;
    Queue_t * const pxMutex = ( Queue_t * ) xMutex;

        configASSERT( pxMutex );

        /* If this is the task that holds the mutex then pxMutexHolder will
        not change outside of this task.  If this task does not hold the mutex
        then pxMutexHolder can never coincidentally equal the task's handle,
        and as this is the only condition we are interested in it does not
        matter if pxMutexHolder is accessed simultaneously by another task.
        Therefore no mutual exclusion is required to test the pxMutexHolder
        variable. */
        if( pxMutex->pxMutexHolder == ( void * ) xTaskGetCurrentTaskHandle() ) /*lint !e961 Not a redundant cast as TaskHandle_t is a typedef. */
        {
            traceGIVE_MUTEX_RECURSIVE( pxMutex );

            /* uxRecursiveCallCount cannot be zero if pxMutexHolder is equal
            to the task handle, therefore no underflow check is required.
            Also, uxRecursiveCallCount is only modified by the mutex holder,
            and as there can only be one, no mutual exclusion is required to
            modify the uxRecursiveCallCount member. */
            ( pxMutex->u.uxRecursiveCallCount )--;

            /* Has the recursive call count unwound to zero? */
            if( pxMutex->u.uxRecursiveCallCount == ( UBaseType_t ) 0 )
            {
                /* Return the mutex.  This will automatically unblock any
                other task that might be waiting to access the mutex. */
                ( void ) xQueueGenericSend( pxMutex, NULL, queueMUTEX_GIVE_BLOCK_TIME, queueSEND_TO_BACK );
            }
            else
            {
                mtCOVERAGE_TEST_MARKER();
            }

            xReturn = pdPASS;
        }
        else
        {
            /* The mutex cannot be given because the calling task is not the
            holder. */
            xReturn = pdFAIL;

            traceGIVE_MUTEX_RECURSIVE_FAILED( pxMutex );
        }

        return xReturn;
    }

#endif /* configUSE_RECURSIVE_MUTEXES */
/*-----------------------------------------------------------*/

#if ( configUSE_RECURSIVE_MUTEXES == 1 )

BaseType_t xQueueTakeMutexRecursive( QueueHandle_t xMutex, TickType_t xTicksToWait )
{
    BaseType_t xReturn;
    Queue_t * const pxMutex = ( Queue_t * ) xMutex;

    configASSERT( pxMutex );

    /* Comments regarding mutual exclusion as per those within
    xQueueGiveMutexRecursive(). */

    traceTAKE_MUTEX_RECURSIVE( pxMutex );

    if( pxMutex->pxMutexHolder == ( void * ) xTaskGetCurrentTaskHandle() ) /*lint !e961 Cast is not redundant as TaskHandle_t is a typedef. */
    {
        ( pxMutex->u.uxRecursiveCallCount )++;
        xReturn = pdPASS;
    }
    else
    {
        xReturn = xQueueGenericReceive( pxMutex, NULL, xTicksToWait, pdFALSE );

        /* pdPASS will only be returned if the mutex was successfully
        obtained.  The calling task may have entered the Blocked state
        before reaching here. */
        if( xReturn == pdPASS )
        {
            ( pxMutex->u.uxRecursiveCallCount )++;
        }
        else
        {
            traceTAKE_MUTEX_RECURSIVE_FAILED( pxMutex );
        }
    }

    return xReturn;
}

#endif /* configUSE_RECURSIVE_MUTEXES */
/*-----------------------------------------------------------*/

#if ( configUSE_COUNTING_SEMAPHORES == 1 )

QueueHandle_t xQueueCreateCountingSemaphore( const UBaseType_t uxMaxCount, const UBaseType_t uxInitialCount )
{
    QueueHandle_t xHandle;

    configASSERT( uxMaxCount != 0 );
    configASSERT( uxInitialCount <= uxMaxCount );

    xHandle = xQueueGenericCreate( uxMaxCount, queueSEMAPHORE_QUEUE_ITEM_LENGTH, queueQUEUE_TYPE_COUNTING_SEMAPHORE );

    if( xHandle != NULL )
    {
        ( ( Queue_t * ) xHandle )->uxMessagesWaiting = uxInitialCount;

        traceCREATE_COUNTING_SEMAPHORE();
    }
    else
    {
        traceCREATE_COUNTING_SEMAPHORE_FAILED();
    }

    configASSERT( xHandle );
    return xHandle;
}

#endif /* configUSE_COUNTING_SEMAPHORES */
/*-----------------------------------------------------------*/

BaseType_t xQueueGenericSend( QueueHandle_t xQueue, const void * const pvItemToQueue, TickType_t xTicksToWait, const BaseType_t xCopyPosition )
{
    BaseType_t xEntryTimeSet = pdFALSE, xYieldRequired;
    TimeOut_t xTimeOut;
    Queue_t * const pxQueue = ( Queue_t * ) xQueue;

    configASSERT( pxQueue );
    configASSERT( !( ( pvItemToQueue == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
    configASSERT( !( ( xCopyPosition == queueOVERWRITE ) && ( pxQueue->uxLength != 1 ) ) );
    #if ( ( INCLUDE_xTaskGetSchedulerState == 1 ) || ( configUSE_TIMERS == 1 ) )
    {
        configASSERT( !( ( xTaskGetSchedulerState() == taskSCHEDULER_SUSPENDED ) && ( xTicksToWait != 0 ) ) );
    }
    #endif

    /* This function relaxes the coding standard somewhat to allow return
    statements within the function itself.  This is done in the interest
    of execution time efficiency. */
    for( ;; )
    {
        taskENTER_CRITICAL();
        {
            /* Is there room on the queue now?  The running task must be the
            highest priority task wanting to access the queue.  If the head
            item in the queue is to be overwritten then it does not matter if
            the queue is full. */
            if( ( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) || ( xCopyPosition == queueOVERWRITE ) )
            {
                traceQUEUE_SEND( pxQueue );
                xYieldRequired = prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition );

                #if ( configUSE_QUEUE_SETS == 1 )
                {
                    if( pxQueue->pxQueueSetContainer != NULL )
                    {
                        if( prvNotifyQueueSetContainer( pxQueue, xCopyPosition ) == pdTRUE )
                        {
                            /* The queue is a member of a queue set, and posting
                            to the queue set caused a higher priority task to
                            unblock.  A context switch is required. */
                            queueYIELD_IF_USING_PREEMPTION();
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    else
                    {
                        /* If there was a task waiting for data to arrive on the
                        queue then unblock it now. */
                        if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
                        {
                            if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) == pdTRUE )
                            {
                                /* The unblocked task has a priority higher than
                                our own, so yield immediately.  It is ok to do
                                this from within the critical section - the
                                kernel takes care of that. */
                                queueYIELD_IF_USING_PREEMPTION();
                            }
                            else
                            {
                                mtCOVERAGE_TEST_MARKER();
                            }
                        }
                        else if( xYieldRequired != pdFALSE )
                        {
                            /* This path is a special case that will only get
                            executed if the task was holding multiple mutexes
                            and the mutexes were given back in an order that is
                            different to that in which they were taken. */
                            queueYIELD_IF_USING_PREEMPTION();
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                }
                #else /* configUSE_QUEUE_SETS */
                {
                    /* If there was a task waiting for data to arrive on the
                    queue then unblock it now. */
                    if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
                    {
                        if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) == pdTRUE )
                        {
                            /* The unblocked task has a priority higher than
                            our own, so yield immediately.  It is ok to do
                            this from within the critical section - the kernel
                            takes care of that. */
                            queueYIELD_IF_USING_PREEMPTION();
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    else if( xYieldRequired != pdFALSE )
                    {
                        /* This path is a special case that will only get
                        executed if the task was holding multiple mutexes and
                        the mutexes were given back in an order that is
                        different to that in which they were taken. */
                        queueYIELD_IF_USING_PREEMPTION();
                    }
                    else
                    {
                        mtCOVERAGE_TEST_MARKER();
                    }
                }
                #endif /* configUSE_QUEUE_SETS */

                taskEXIT_CRITICAL();
                return pdPASS;
            }
            else
            {
                if( xTicksToWait == ( TickType_t ) 0 )
                {
                    /* The queue was full and no block time is specified (or
                    the block time has expired) so leave now. */
                    taskEXIT_CRITICAL();

                    /* Return to the original privilege level before exiting
                    the function. */
                    traceQUEUE_SEND_FAILED( pxQueue );
                    return errQUEUE_FULL;
                }
                else if( xEntryTimeSet == pdFALSE )
                {
                    /* The queue was full and a block time was specified so
                    configure the timeout structure. */
                    vTaskSetTimeOutState( &xTimeOut );
                    xEntryTimeSet = pdTRUE;
                }
                else
                {
                    /* Entry time was already set. */
                    mtCOVERAGE_TEST_MARKER();
                }
            }
        }
        taskEXIT_CRITICAL();

        /* Interrupts and other tasks can send to and receive from the queue
        now the critical section has been exited. */

        vTaskSuspendAll();
        prvLockQueue( pxQueue );

        /* Update the timeout state to see if it has expired yet. */
        if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE )
        {
            if( prvIsQueueFull( pxQueue ) != pdFALSE )
            {
                traceBLOCKING_ON_QUEUE_SEND( pxQueue );
                vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToSend ), xTicksToWait );

                /* Unlocking the queue means queue events can affect the
                event list.  It is possible that interrupts occurring now
                remove this task from the event list again - but as the
                scheduler is suspended the task will go onto the pending
                ready list instead of the actual ready list. */
                prvUnlockQueue( pxQueue );

                /* Resuming the scheduler will move tasks from the pending
                ready list into the ready list - so it is feasible that this
                task is already in a ready list before it yields - in which
                case the yield will not cause a context switch unless there
                is also a higher priority task in the pending ready list. */
                if( xTaskResumeAll() == pdFALSE )
                {
                    portYIELD_WITHIN_API();
                }
            }
            else
            {
                /* Try again. */
                prvUnlockQueue( pxQueue );
                ( void ) xTaskResumeAll();
            }
        }
        else
        {
            /* The timeout has expired. */
            prvUnlockQueue( pxQueue );
            ( void ) xTaskResumeAll();

            /* Return to the original privilege level before exiting the
            function. */
            traceQUEUE_SEND_FAILED( pxQueue );
            return errQUEUE_FULL;
        }
    }
}
/*-----------------------------------------------------------*/

#if ( configUSE_ALTERNATIVE_API == 1 )

BaseType_t xQueueAltGenericSend( QueueHandle_t xQueue, const void * const pvItemToQueue, TickType_t xTicksToWait, BaseType_t xCopyPosition )
{
    BaseType_t xEntryTimeSet = pdFALSE;
    TimeOut_t xTimeOut;
    Queue_t * const pxQueue = ( Queue_t * ) xQueue;

    configASSERT( pxQueue );
    configASSERT( !( ( pvItemToQueue == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );

    for( ;; )
    {
        taskENTER_CRITICAL();
        {
            /* Is there room on the queue now?  To be running we must be
            the highest priority task wanting to access the queue. */
            if( pxQueue->uxMessagesWaiting < pxQueue->uxLength )
            {
                traceQUEUE_SEND( pxQueue );
                prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition );

                /* If there was a task waiting for data to arrive on the
                queue then unblock it now. */
                if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
                {
                    if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) == pdTRUE )
                    {
                        /* The unblocked task has a priority higher than
                        our own, so yield immediately. */
                        portYIELD_WITHIN_API();
                    }
                    else
                    {
                        mtCOVERAGE_TEST_MARKER();
                    }
                }
                else
                {
                    mtCOVERAGE_TEST_MARKER();
                }

                taskEXIT_CRITICAL();
                return pdPASS;
            }
            else
            {
                if( xTicksToWait == ( TickType_t ) 0 )
                {
                    taskEXIT_CRITICAL();
                    return errQUEUE_FULL;
                }
                else if( xEntryTimeSet == pdFALSE )
                {
                    vTaskSetTimeOutState( &xTimeOut );
                    xEntryTimeSet = pdTRUE;
                }
            }
        }
        taskEXIT_CRITICAL();

        taskENTER_CRITICAL();
        {
            if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE )
            {
                if( prvIsQueueFull( pxQueue ) != pdFALSE )
                {
                    traceBLOCKING_ON_QUEUE_SEND( pxQueue );
                    vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToSend ), xTicksToWait );
                    portYIELD_WITHIN_API();
                }
                else
                {
                    mtCOVERAGE_TEST_MARKER();
                }
            }
            else
            {
                taskEXIT_CRITICAL();
                traceQUEUE_SEND_FAILED( pxQueue );
                return errQUEUE_FULL;
            }
        }
        taskEXIT_CRITICAL();
    }
}

#endif /* configUSE_ALTERNATIVE_API */
/*-----------------------------------------------------------*/

#if ( configUSE_ALTERNATIVE_API == 1 )

BaseType_t xQueueAltGenericReceive( QueueHandle_t xQueue, void * const pvBuffer, TickType_t xTicksToWait, BaseType_t xJustPeeking )
{
    BaseType_t xEntryTimeSet = pdFALSE;
    TimeOut_t xTimeOut;
    int8_t *pcOriginalReadPosition;
    Queue_t * const pxQueue = ( Queue_t * ) xQueue;

    configASSERT( pxQueue );
    configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );

    for( ;; )
    {
        taskENTER_CRITICAL();
        {
            if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
            {
                /* Remember our read position in case we are just peeking. */
                pcOriginalReadPosition = pxQueue->u.pcReadFrom;

                prvCopyDataFromQueue( pxQueue, pvBuffer );

                if( xJustPeeking == pdFALSE )
                {
                    traceQUEUE_RECEIVE( pxQueue );

                    /* Data is actually being removed (not just peeked). */
                    --( pxQueue->uxMessagesWaiting );

                    #if ( configUSE_MUTEXES == 1 )
                    {
                        if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
                        {
                            /* Record the information required to implement
                            priority inheritance should it become necessary. */
                            pxQueue->pxMutexHolder = ( int8_t * ) xTaskGetCurrentTaskHandle();
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    #endif

                    if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
                    {
                        if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) == pdTRUE )
                        {
                            portYIELD_WITHIN_API();
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                }
                else
                {
                    traceQUEUE_PEEK( pxQueue );

                    /* The data is not being removed, so reset our read
                    pointer. */
                    pxQueue->u.pcReadFrom = pcOriginalReadPosition;

                    /* The data is being left in the queue, so see if there are
                    any other tasks waiting for the data. */
                    if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
                    {
                        /* Tasks that are removed from the event list will get
                        added to the pending ready list as the scheduler is
                        still suspended. */
                        if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
                        {
                            /* The task waiting has a higher priority than this
                            task. */
                            portYIELD_WITHIN_API();
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    else
                    {
                        mtCOVERAGE_TEST_MARKER();
                    }
                }

                taskEXIT_CRITICAL();
                return pdPASS;
            }
            else
            {
                if( xTicksToWait == ( TickType_t ) 0 )
                {
                    taskEXIT_CRITICAL();
                    traceQUEUE_RECEIVE_FAILED( pxQueue );
                    return errQUEUE_EMPTY;
                }
                else if( xEntryTimeSet == pdFALSE )
                {
                    vTaskSetTimeOutState( &xTimeOut );
                    xEntryTimeSet = pdTRUE;
                }
            }
        }
        taskEXIT_CRITICAL();

        taskENTER_CRITICAL();
        {
            if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE )
            {
                if( prvIsQueueEmpty( pxQueue ) != pdFALSE )
                {
                    traceBLOCKING_ON_QUEUE_RECEIVE( pxQueue );

                    #if ( configUSE_MUTEXES == 1 )
                    {
                        if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
                        {
                            taskENTER_CRITICAL();
                            {
                                vTaskPriorityInherit( ( void * ) pxQueue->pxMutexHolder );
                            }
                            taskEXIT_CRITICAL();
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    #endif

                    vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToReceive ), xTicksToWait );
                    portYIELD_WITHIN_API();
                }
                else
                {
                    mtCOVERAGE_TEST_MARKER();
                }
            }
            else
            {
                taskEXIT_CRITICAL();
                traceQUEUE_RECEIVE_FAILED( pxQueue );
                return errQUEUE_EMPTY;
            }
        }
        taskEXIT_CRITICAL();
    }
}

#endif /* configUSE_ALTERNATIVE_API */
/*-----------------------------------------------------------*/

BaseType_t xQueueGenericSendFromISR( QueueHandle_t xQueue, const void * const pvItemToQueue, BaseType_t * const pxHigherPriorityTaskWoken, const BaseType_t xCopyPosition )
{
    BaseType_t xReturn;
    UBaseType_t uxSavedInterruptStatus;
    Queue_t * const pxQueue = ( Queue_t * ) xQueue;

    configASSERT( pxQueue );
    configASSERT( !( ( pvItemToQueue == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
    configASSERT( !( ( xCopyPosition == queueOVERWRITE ) && ( pxQueue->uxLength != 1 ) ) );

    /* RTOS ports that support interrupt nesting have the concept of a maximum
    system call (or maximum API call) interrupt priority.  Interrupts that are
    above the maximum system call priority are kept permanently enabled, even
    when the RTOS kernel is in a critical section, but cannot make any calls to
    FreeRTOS API functions.  If configASSERT() is defined in FreeRTOSConfig.h
    then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion
    failure if a FreeRTOS API function is called from an interrupt that has been
    assigned a priority above the configured maximum system call priority.
    Only FreeRTOS functions that end in FromISR can be called from interrupts
    that have been assigned a priority at or (logically) below the maximum
    system call interrupt priority.  FreeRTOS maintains a separate interrupt
    safe API to ensure interrupt entry is as fast and as simple as possible.
    More information (albeit Cortex-M specific) is provided on the following
    link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */
    portASSERT_IF_INTERRUPT_PRIORITY_INVALID();

    /* Similar to xQueueGenericSend, except without blocking if there is no
    room in the queue.  Also don't directly wake a task that was blocked on a
    queue read; instead return a flag to say whether a context switch is
    required or not (i.e. has a task with a higher priority than us been woken
    by this post). */
    uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
    {
        if( ( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) || ( xCopyPosition == queueOVERWRITE ) )
        {
            traceQUEUE_SEND_FROM_ISR( pxQueue );

            /* Semaphores use xQueueGiveFromISR(), so pxQueue will not be a
            semaphore or mutex.  That means prvCopyDataToQueue() cannot result
            in a task disinheriting a priority, and prvCopyDataToQueue() can be
            called here even though the disinherit function does not check if
            the scheduler is suspended before accessing the ready lists. */
            ( void ) prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition );

            /* The event list is not altered if the queue is locked.  This will
            be done when the queue is unlocked later. */
            if( pxQueue->xTxLock == queueUNLOCKED )
            {
                #if ( configUSE_QUEUE_SETS == 1 )
                {
                    if( pxQueue->pxQueueSetContainer != NULL )
                    {
                        if( prvNotifyQueueSetContainer( pxQueue, xCopyPosition ) == pdTRUE )
                        {
                            /* The queue is a member of a queue set, and posting
                            to the queue set caused a higher priority task to
                            unblock.  A context switch is required. */
                            if( pxHigherPriorityTaskWoken != NULL )
                            {
                                *pxHigherPriorityTaskWoken = pdTRUE;
                            }
                            else
                            {
                                mtCOVERAGE_TEST_MARKER();
                            }
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    else
                    {
                        if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
                        {
                            if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
                            {
                                /* The task waiting has a higher priority, so
                                record that a context switch is required. */
                                if( pxHigherPriorityTaskWoken != NULL )
                                {
                                    *pxHigherPriorityTaskWoken = pdTRUE;
                                }
                                else
                                {
                                    mtCOVERAGE_TEST_MARKER();
                                }
                            }
                            else
                            {
                                mtCOVERAGE_TEST_MARKER();
                            }
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                }
                #else /* configUSE_QUEUE_SETS */
                {
                    if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
                    {
                        if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
                        {
                            /* The task waiting has a higher priority, so record
                            that a context switch is required. */
                            if( pxHigherPriorityTaskWoken != NULL )
                            {
                                *pxHigherPriorityTaskWoken = pdTRUE;
                            }
                            else
                            {
                                mtCOVERAGE_TEST_MARKER();
                            }
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    else
                    {
                        mtCOVERAGE_TEST_MARKER();
                    }
                }
                #endif /* configUSE_QUEUE_SETS */
            }
            else
            {
                /* Increment the lock count so the task that unlocks the queue
                knows that data was posted while it was locked. */
                ++( pxQueue->xTxLock );
            }

            xReturn = pdPASS;
        }
        else
        {
            traceQUEUE_SEND_FROM_ISR_FAILED( pxQueue );
            xReturn = errQUEUE_FULL;
        }
dflet 0:1e7b5dd9edb4 1200 }
dflet 0:1e7b5dd9edb4 1201 portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );
dflet 0:1e7b5dd9edb4 1202
dflet 0:1e7b5dd9edb4 1203 return xReturn;
dflet 0:1e7b5dd9edb4 1204 }
dflet 0:1e7b5dd9edb4 1205 /*-----------------------------------------------------------*/

BaseType_t xQueueGiveFromISR( QueueHandle_t xQueue, BaseType_t * const pxHigherPriorityTaskWoken )
{
BaseType_t xReturn;
UBaseType_t uxSavedInterruptStatus;
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

    /* Similar to xQueueGenericSendFromISR() but used with semaphores where the
    item size is 0.  Don't directly wake a task that was blocked on a queue
    read; instead return a flag to say whether a context switch is required
    (i.e. whether this post has woken a task with a higher priority than the
    currently running task). */

    configASSERT( pxQueue );

    /* xQueueGenericSendFromISR() should be used instead of xQueueGiveFromISR()
    if the item size is not 0. */
    configASSERT( pxQueue->uxItemSize == 0 );

    /* Normally a mutex would not be given from an interrupt, and doing so is
    definitely wrong if there is a mutex holder, as priority inheritance makes
    no sense for an interrupt, only for tasks. */
    configASSERT( !( ( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX ) && ( pxQueue->pxMutexHolder != NULL ) ) );

    /* RTOS ports that support interrupt nesting have the concept of a maximum
    system call (or maximum API call) interrupt priority.  Interrupts that are
    above the maximum system call priority are kept permanently enabled, even
    when the RTOS kernel is in a critical section, but cannot make any calls to
    FreeRTOS API functions.  If configASSERT() is defined in FreeRTOSConfig.h
    then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion
    failure if a FreeRTOS API function is called from an interrupt that has been
    assigned a priority above the configured maximum system call priority.
    Only FreeRTOS functions that end in FromISR can be called from interrupts
    that have been assigned a priority at or (logically) below the maximum
    system call interrupt priority.  FreeRTOS maintains a separate interrupt
    safe API to ensure interrupt entry is as fast and as simple as possible.
    More information (albeit Cortex-M specific) is provided on the following
    link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */
    portASSERT_IF_INTERRUPT_PRIORITY_INVALID();

    uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
    {
        /* When the queue is used to implement a semaphore no data is ever
        moved through the queue but it is still valid to see if the queue 'has
        space'. */
        if( pxQueue->uxMessagesWaiting < pxQueue->uxLength )
        {
            traceQUEUE_SEND_FROM_ISR( pxQueue );

            /* A task can only have an inherited priority if it is a mutex
            holder - and if there is a mutex holder then the mutex cannot be
            given from an ISR.  As this is the ISR version of the function it
            can be assumed there is no mutex holder and no need to determine if
            priority disinheritance is needed.  Simply increase the count of
            messages (semaphores) available. */
            ++( pxQueue->uxMessagesWaiting );

            /* The event list is not altered if the queue is locked.  This will
            be done when the queue is unlocked later. */
            if( pxQueue->xTxLock == queueUNLOCKED )
            {
                #if ( configUSE_QUEUE_SETS == 1 )
                {
                    if( pxQueue->pxQueueSetContainer != NULL )
                    {
                        if( prvNotifyQueueSetContainer( pxQueue, queueSEND_TO_BACK ) == pdTRUE )
                        {
                            /* The semaphore is a member of a queue set, and
                            posting to the queue set caused a higher priority
                            task to unblock.  A context switch is required. */
                            if( pxHigherPriorityTaskWoken != NULL )
                            {
                                *pxHigherPriorityTaskWoken = pdTRUE;
                            }
                            else
                            {
                                mtCOVERAGE_TEST_MARKER();
                            }
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    else
                    {
                        if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
                        {
                            if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
                            {
                                /* The task waiting has a higher priority so
                                record that a context switch is required. */
                                if( pxHigherPriorityTaskWoken != NULL )
                                {
                                    *pxHigherPriorityTaskWoken = pdTRUE;
                                }
                                else
                                {
                                    mtCOVERAGE_TEST_MARKER();
                                }
                            }
                            else
                            {
                                mtCOVERAGE_TEST_MARKER();
                            }
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                }
                #else /* configUSE_QUEUE_SETS */
                {
                    if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
                    {
                        if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
                        {
                            /* The task waiting has a higher priority so record that a
                            context switch is required. */
                            if( pxHigherPriorityTaskWoken != NULL )
                            {
                                *pxHigherPriorityTaskWoken = pdTRUE;
                            }
                            else
                            {
                                mtCOVERAGE_TEST_MARKER();
                            }
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    else
                    {
                        mtCOVERAGE_TEST_MARKER();
                    }
                }
                #endif /* configUSE_QUEUE_SETS */
            }
            else
            {
                /* Increment the lock count so the task that unlocks the queue
                knows that data was posted while it was locked. */
                ++( pxQueue->xTxLock );
            }

            xReturn = pdPASS;
        }
        else
        {
            traceQUEUE_SEND_FROM_ISR_FAILED( pxQueue );
            xReturn = errQUEUE_FULL;
        }
    }
    portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );

    return xReturn;
}
/*-----------------------------------------------------------*/

BaseType_t xQueueGenericReceive( QueueHandle_t xQueue, void * const pvBuffer, TickType_t xTicksToWait, const BaseType_t xJustPeeking )
{
BaseType_t xEntryTimeSet = pdFALSE;
TimeOut_t xTimeOut;
int8_t *pcOriginalReadPosition;
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

    configASSERT( pxQueue );
    configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
    #if ( ( INCLUDE_xTaskGetSchedulerState == 1 ) || ( configUSE_TIMERS == 1 ) )
    {
        configASSERT( !( ( xTaskGetSchedulerState() == taskSCHEDULER_SUSPENDED ) && ( xTicksToWait != 0 ) ) );
    }
    #endif

    /* This function relaxes the coding standard somewhat to allow return
    statements within the function itself.  This is done in the interest
    of execution time efficiency. */

    for( ;; )
    {
        taskENTER_CRITICAL();
        {
            /* Is there data in the queue now?  To be running the calling task
            must be the highest priority task wanting to access the queue. */
            if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
            {
                /* Remember the read position in case the queue is only being
                peeked. */
                pcOriginalReadPosition = pxQueue->u.pcReadFrom;

                prvCopyDataFromQueue( pxQueue, pvBuffer );

                if( xJustPeeking == pdFALSE )
                {
                    traceQUEUE_RECEIVE( pxQueue );

                    /* Actually removing data, not just peeking. */
                    --( pxQueue->uxMessagesWaiting );

                    #if ( configUSE_MUTEXES == 1 )
                    {
                        if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
                        {
                            /* Record the information required to implement
                            priority inheritance should it become necessary. */
                            pxQueue->pxMutexHolder = ( int8_t * ) pvTaskIncrementMutexHeldCount(); /*lint !e961 Cast is not redundant as TaskHandle_t is a typedef. */
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    #endif /* configUSE_MUTEXES */

                    if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
                    {
                        if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) == pdTRUE )
                        {
                            queueYIELD_IF_USING_PREEMPTION();
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    else
                    {
                        mtCOVERAGE_TEST_MARKER();
                    }
                }
                else
                {
                    traceQUEUE_PEEK( pxQueue );

                    /* The data is not being removed, so reset the read
                    pointer. */
                    pxQueue->u.pcReadFrom = pcOriginalReadPosition;

                    /* The data is being left in the queue, so see if there are
                    any other tasks waiting for the data. */
                    if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
                    {
                        if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
                        {
                            /* The task waiting has a higher priority than this task. */
                            queueYIELD_IF_USING_PREEMPTION();
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    else
                    {
                        mtCOVERAGE_TEST_MARKER();
                    }
                }

                taskEXIT_CRITICAL();
                return pdPASS;
            }
            else
            {
                if( xTicksToWait == ( TickType_t ) 0 )
                {
                    /* The queue was empty and no block time is specified (or
                    the block time has expired) so leave now. */
                    taskEXIT_CRITICAL();
                    traceQUEUE_RECEIVE_FAILED( pxQueue );
                    return errQUEUE_EMPTY;
                }
                else if( xEntryTimeSet == pdFALSE )
                {
                    /* The queue was empty and a block time was specified so
                    configure the timeout structure. */
                    vTaskSetTimeOutState( &xTimeOut );
                    xEntryTimeSet = pdTRUE;
                }
                else
                {
                    /* Entry time was already set. */
                    mtCOVERAGE_TEST_MARKER();
                }
            }
        }
        taskEXIT_CRITICAL();

        /* Interrupts and other tasks can send to and receive from the queue
        now the critical section has been exited. */

        vTaskSuspendAll();
        prvLockQueue( pxQueue );

        /* Update the timeout state to see if it has expired yet. */
        if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE )
        {
            if( prvIsQueueEmpty( pxQueue ) != pdFALSE )
            {
                traceBLOCKING_ON_QUEUE_RECEIVE( pxQueue );

                #if ( configUSE_MUTEXES == 1 )
                {
                    if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
                    {
                        taskENTER_CRITICAL();
                        {
                            vTaskPriorityInherit( ( void * ) pxQueue->pxMutexHolder );
                        }
                        taskEXIT_CRITICAL();
                    }
                    else
                    {
                        mtCOVERAGE_TEST_MARKER();
                    }
                }
                #endif

                vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToReceive ), xTicksToWait );
                prvUnlockQueue( pxQueue );
                if( xTaskResumeAll() == pdFALSE )
                {
                    portYIELD_WITHIN_API();
                }
                else
                {
                    mtCOVERAGE_TEST_MARKER();
                }
            }
            else
            {
                /* Try again. */
                prvUnlockQueue( pxQueue );
                ( void ) xTaskResumeAll();
            }
        }
        else
        {
            prvUnlockQueue( pxQueue );
            ( void ) xTaskResumeAll();
            traceQUEUE_RECEIVE_FAILED( pxQueue );
            return errQUEUE_EMPTY;
        }
    }
}
/*-----------------------------------------------------------*/

BaseType_t xQueueReceiveFromISR( QueueHandle_t xQueue, void * const pvBuffer, BaseType_t * const pxHigherPriorityTaskWoken )
{
BaseType_t xReturn;
UBaseType_t uxSavedInterruptStatus;
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

    configASSERT( pxQueue );
    configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );

    /* RTOS ports that support interrupt nesting have the concept of a maximum
    system call (or maximum API call) interrupt priority.  Interrupts that are
    above the maximum system call priority are kept permanently enabled, even
    when the RTOS kernel is in a critical section, but cannot make any calls to
    FreeRTOS API functions.  If configASSERT() is defined in FreeRTOSConfig.h
    then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion
    failure if a FreeRTOS API function is called from an interrupt that has been
    assigned a priority above the configured maximum system call priority.
    Only FreeRTOS functions that end in FromISR can be called from interrupts
    that have been assigned a priority at or (logically) below the maximum
    system call interrupt priority.  FreeRTOS maintains a separate interrupt
    safe API to ensure interrupt entry is as fast and as simple as possible.
    More information (albeit Cortex-M specific) is provided on the following
    link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */
    portASSERT_IF_INTERRUPT_PRIORITY_INVALID();

    uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
    {
        /* Cannot block in an ISR, so check there is data available. */
        if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
        {
            traceQUEUE_RECEIVE_FROM_ISR( pxQueue );

            prvCopyDataFromQueue( pxQueue, pvBuffer );
            --( pxQueue->uxMessagesWaiting );

            /* If the queue is locked the event list will not be modified.
            Instead update the lock count so the task that unlocks the queue
            will know that an ISR has removed data while the queue was
            locked. */
            if( pxQueue->xRxLock == queueUNLOCKED )
            {
                if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
                {
                    if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
                    {
                        /* The task waiting has a higher priority than us so
                        force a context switch. */
                        if( pxHigherPriorityTaskWoken != NULL )
                        {
                            *pxHigherPriorityTaskWoken = pdTRUE;
                        }
                        else
                        {
                            mtCOVERAGE_TEST_MARKER();
                        }
                    }
                    else
                    {
                        mtCOVERAGE_TEST_MARKER();
                    }
                }
                else
                {
                    mtCOVERAGE_TEST_MARKER();
                }
            }
            else
            {
                /* Increment the lock count so the task that unlocks the queue
                knows that data was removed while it was locked. */
                ++( pxQueue->xRxLock );
            }

            xReturn = pdPASS;
        }
        else
        {
            xReturn = pdFAIL;
            traceQUEUE_RECEIVE_FROM_ISR_FAILED( pxQueue );
        }
    }
    portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );

    return xReturn;
}
/*-----------------------------------------------------------*/

BaseType_t xQueuePeekFromISR( QueueHandle_t xQueue, void * const pvBuffer )
{
BaseType_t xReturn;
UBaseType_t uxSavedInterruptStatus;
int8_t *pcOriginalReadPosition;
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

    configASSERT( pxQueue );
    configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
    configASSERT( pxQueue->uxItemSize != 0 ); /* Can't peek a semaphore. */

    /* RTOS ports that support interrupt nesting have the concept of a maximum
    system call (or maximum API call) interrupt priority.  Interrupts that are
    above the maximum system call priority are kept permanently enabled, even
    when the RTOS kernel is in a critical section, but cannot make any calls to
    FreeRTOS API functions.  If configASSERT() is defined in FreeRTOSConfig.h
    then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion
    failure if a FreeRTOS API function is called from an interrupt that has been
    assigned a priority above the configured maximum system call priority.
    Only FreeRTOS functions that end in FromISR can be called from interrupts
    that have been assigned a priority at or (logically) below the maximum
    system call interrupt priority.  FreeRTOS maintains a separate interrupt
    safe API to ensure interrupt entry is as fast and as simple as possible.
    More information (albeit Cortex-M specific) is provided on the following
    link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */
    portASSERT_IF_INTERRUPT_PRIORITY_INVALID();

    uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
    {
        /* Cannot block in an ISR, so check there is data available. */
        if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
        {
            traceQUEUE_PEEK_FROM_ISR( pxQueue );

            /* Remember the read position so it can be reset as nothing is
            actually being removed from the queue. */
            pcOriginalReadPosition = pxQueue->u.pcReadFrom;
            prvCopyDataFromQueue( pxQueue, pvBuffer );
            pxQueue->u.pcReadFrom = pcOriginalReadPosition;

            xReturn = pdPASS;
        }
        else
        {
            xReturn = pdFAIL;
            traceQUEUE_PEEK_FROM_ISR_FAILED( pxQueue );
        }
    }
    portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );

    return xReturn;
}
/*-----------------------------------------------------------*/

UBaseType_t uxQueueMessagesWaiting( const QueueHandle_t xQueue )
{
UBaseType_t uxReturn;

    configASSERT( xQueue );

    taskENTER_CRITICAL();
    {
        uxReturn = ( ( Queue_t * ) xQueue )->uxMessagesWaiting;
    }
    taskEXIT_CRITICAL();

    return uxReturn;
} /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */
/*-----------------------------------------------------------*/

UBaseType_t uxQueueSpacesAvailable( const QueueHandle_t xQueue )
{
UBaseType_t uxReturn;
Queue_t *pxQueue;

    pxQueue = ( Queue_t * ) xQueue;
    configASSERT( pxQueue );

    taskENTER_CRITICAL();
    {
        uxReturn = pxQueue->uxLength - pxQueue->uxMessagesWaiting;
    }
    taskEXIT_CRITICAL();

    return uxReturn;
} /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */
/*-----------------------------------------------------------*/

UBaseType_t uxQueueMessagesWaitingFromISR( const QueueHandle_t xQueue )
{
UBaseType_t uxReturn;

    configASSERT( xQueue );

    uxReturn = ( ( Queue_t * ) xQueue )->uxMessagesWaiting;

    return uxReturn;
} /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */
/*-----------------------------------------------------------*/
dflet 0:1e7b5dd9edb4 1741
dflet 0:1e7b5dd9edb4 1742 void vQueueDelete( QueueHandle_t xQueue )
dflet 0:1e7b5dd9edb4 1743 {
dflet 0:1e7b5dd9edb4 1744 Queue_t * const pxQueue = ( Queue_t * ) xQueue;
dflet 0:1e7b5dd9edb4 1745
dflet 0:1e7b5dd9edb4 1746 configASSERT( pxQueue );
dflet 0:1e7b5dd9edb4 1747
dflet 0:1e7b5dd9edb4 1748 traceQUEUE_DELETE( pxQueue );
dflet 0:1e7b5dd9edb4 1749 #if ( configQUEUE_REGISTRY_SIZE > 0 )
dflet 0:1e7b5dd9edb4 1750 {
dflet 0:1e7b5dd9edb4 1751 vQueueUnregisterQueue( pxQueue );
dflet 0:1e7b5dd9edb4 1752 }
dflet 0:1e7b5dd9edb4 1753 #endif
dflet 0:1e7b5dd9edb4 1754 vPortFree( pxQueue );
dflet 0:1e7b5dd9edb4 1755 }
dflet 0:1e7b5dd9edb4 1756 /*-----------------------------------------------------------*/
dflet 0:1e7b5dd9edb4 1757
dflet 0:1e7b5dd9edb4 1758 #if ( configUSE_TRACE_FACILITY == 1 )
dflet 0:1e7b5dd9edb4 1759
dflet 0:1e7b5dd9edb4 1760 UBaseType_t uxQueueGetQueueNumber( QueueHandle_t xQueue )
dflet 0:1e7b5dd9edb4 1761 {
dflet 0:1e7b5dd9edb4 1762 return ( ( Queue_t * ) xQueue )->uxQueueNumber;
dflet 0:1e7b5dd9edb4 1763 }
dflet 0:1e7b5dd9edb4 1764
dflet 0:1e7b5dd9edb4 1765 #endif /* configUSE_TRACE_FACILITY */
dflet 0:1e7b5dd9edb4 1766 /*-----------------------------------------------------------*/
dflet 0:1e7b5dd9edb4 1767
dflet 0:1e7b5dd9edb4 1768 #if ( configUSE_TRACE_FACILITY == 1 )
dflet 0:1e7b5dd9edb4 1769
dflet 0:1e7b5dd9edb4 1770 void vQueueSetQueueNumber( QueueHandle_t xQueue, UBaseType_t uxQueueNumber )
dflet 0:1e7b5dd9edb4 1771 {
dflet 0:1e7b5dd9edb4 1772 ( ( Queue_t * ) xQueue )->uxQueueNumber = uxQueueNumber;
dflet 0:1e7b5dd9edb4 1773 }
dflet 0:1e7b5dd9edb4 1774
dflet 0:1e7b5dd9edb4 1775 #endif /* configUSE_TRACE_FACILITY */
dflet 0:1e7b5dd9edb4 1776 /*-----------------------------------------------------------*/
dflet 0:1e7b5dd9edb4 1777
dflet 0:1e7b5dd9edb4 1778 #if ( configUSE_TRACE_FACILITY == 1 )
dflet 0:1e7b5dd9edb4 1779
dflet 0:1e7b5dd9edb4 1780 uint8_t ucQueueGetQueueType( QueueHandle_t xQueue )
dflet 0:1e7b5dd9edb4 1781 {
dflet 0:1e7b5dd9edb4 1782 return ( ( Queue_t * ) xQueue )->ucQueueType;
dflet 0:1e7b5dd9edb4 1783 }
dflet 0:1e7b5dd9edb4 1784
dflet 0:1e7b5dd9edb4 1785 #endif /* configUSE_TRACE_FACILITY */
dflet 0:1e7b5dd9edb4 1786 /*-----------------------------------------------------------*/

static BaseType_t prvCopyDataToQueue( Queue_t * const pxQueue, const void *pvItemToQueue, const BaseType_t xPosition )
{
BaseType_t xReturn = pdFALSE;

	if( pxQueue->uxItemSize == ( UBaseType_t ) 0 )
	{
		#if ( configUSE_MUTEXES == 1 )
		{
			if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
			{
				/* The mutex is no longer being held. */
				xReturn = xTaskPriorityDisinherit( ( void * ) pxQueue->pxMutexHolder );
				pxQueue->pxMutexHolder = NULL;
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}
		#endif /* configUSE_MUTEXES */
	}
	else if( xPosition == queueSEND_TO_BACK )
	{
		( void ) memcpy( ( void * ) pxQueue->pcWriteTo, pvItemToQueue, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 !e418 MISRA exception as the casts are only redundant for some ports, plus previous logic ensures a null pointer can only be passed to memcpy() if the copy size is 0. */
		pxQueue->pcWriteTo += pxQueue->uxItemSize;
		if( pxQueue->pcWriteTo >= pxQueue->pcTail ) /*lint !e946 MISRA exception justified as comparison of pointers is the cleanest solution. */
		{
			pxQueue->pcWriteTo = pxQueue->pcHead;
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}
	}
	else
	{
		( void ) memcpy( ( void * ) pxQueue->u.pcReadFrom, pvItemToQueue, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 MISRA exception as the casts are only redundant for some ports. */
		pxQueue->u.pcReadFrom -= pxQueue->uxItemSize;
		if( pxQueue->u.pcReadFrom < pxQueue->pcHead ) /*lint !e946 MISRA exception justified as comparison of pointers is the cleanest solution. */
		{
			pxQueue->u.pcReadFrom = ( pxQueue->pcTail - pxQueue->uxItemSize );
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}

		if( xPosition == queueOVERWRITE )
		{
			if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
			{
				/* An item is not being added but overwritten, so subtract
				one from the recorded number of items in the queue so when
				one is added again below the number of recorded items remains
				correct. */
				--( pxQueue->uxMessagesWaiting );
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}
	}

	++( pxQueue->uxMessagesWaiting );

	return xReturn;
}
/*-----------------------------------------------------------*/
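The wrap-around arithmetic in prvCopyDataToQueue can be exercised in isolation. The sketch below is a simplified standalone model of the queueSEND_TO_BACK path only; the type and function names (`ModelQueue_t`, `vModelCopyToBack`) are illustrative and are not part of the kernel:

```c
#include <assert.h>
#include <string.h>

/* Simplified model of the queue storage area: a byte ring holding
   3 items of sizeof( int ) bytes each.  Hypothetical names, not the
   kernel's Queue_t. */
typedef struct
{
	unsigned char ucStorage[ 3 * sizeof( int ) ];
	unsigned char *pcHead;		/* first byte of the storage area */
	unsigned char *pcTail;		/* one byte past the last item */
	unsigned char *pcWriteTo;	/* next free slot */
	size_t uxItemSize;
} ModelQueue_t;

/* Mirrors the queueSEND_TO_BACK branch: copy the item, advance the
   write pointer, and wrap it back to the head once it reaches the
   tail - exactly the pointer comparison used in prvCopyDataToQueue. */
static void vModelCopyToBack( ModelQueue_t *pxQ, const void *pvItem )
{
	( void ) memcpy( ( void * ) pxQ->pcWriteTo, pvItem, pxQ->uxItemSize );
	pxQ->pcWriteTo += pxQ->uxItemSize;
	if( pxQ->pcWriteTo >= pxQ->pcTail )
	{
		pxQ->pcWriteTo = pxQ->pcHead;	/* wrap around */
	}
}
```

After as many copies as the ring has slots, the write pointer is back at the head, which is why the caller must track the item count (uxMessagesWaiting) separately to tell full from empty.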

static void prvCopyDataFromQueue( Queue_t * const pxQueue, void * const pvBuffer )
{
	if( pxQueue->uxItemSize != ( UBaseType_t ) 0 )
	{
		pxQueue->u.pcReadFrom += pxQueue->uxItemSize;
		if( pxQueue->u.pcReadFrom >= pxQueue->pcTail ) /*lint !e946 MISRA exception justified as use of the relational operator is the cleanest solution. */
		{
			pxQueue->u.pcReadFrom = pxQueue->pcHead;
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}
		( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 !e418 MISRA exception as the casts are only redundant for some ports.  Also previous logic ensures a null pointer can only be passed to memcpy() when the count is 0. */
	}
}
/*-----------------------------------------------------------*/

static void prvUnlockQueue( Queue_t * const pxQueue )
{
	/* THIS FUNCTION MUST BE CALLED WITH THE SCHEDULER SUSPENDED. */

	/* The lock counts contain the number of extra data items placed or
	removed from the queue while the queue was locked.  When a queue is
	locked items can be added or removed, but the event lists cannot be
	updated. */
	taskENTER_CRITICAL();
	{
		/* See if data was added to the queue while it was locked. */
		while( pxQueue->xTxLock > queueLOCKED_UNMODIFIED )
		{
			/* Data was posted while the queue was locked.  Are any tasks
			blocked waiting for data to become available? */
			#if ( configUSE_QUEUE_SETS == 1 )
			{
				if( pxQueue->pxQueueSetContainer != NULL )
				{
					if( prvNotifyQueueSetContainer( pxQueue, queueSEND_TO_BACK ) == pdTRUE )
					{
						/* The queue is a member of a queue set, and posting to
						the queue set caused a higher priority task to unblock.
						A context switch is required. */
						vTaskMissedYield();
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					/* Tasks that are removed from the event list will get added to
					the pending ready list as the scheduler is still suspended. */
					if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
					{
						if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
						{
							/* The task waiting has a higher priority so record that a
							context switch is required. */
							vTaskMissedYield();
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
					else
					{
						break;
					}
				}
			}
			#else /* configUSE_QUEUE_SETS */
			{
				/* Tasks that are removed from the event list will get added to
				the pending ready list as the scheduler is still suspended. */
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
				{
					if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
					{
						/* The task waiting has a higher priority so record that a
						context switch is required. */
						vTaskMissedYield();
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					break;
				}
			}
			#endif /* configUSE_QUEUE_SETS */

			--( pxQueue->xTxLock );
		}

		pxQueue->xTxLock = queueUNLOCKED;
	}
	taskEXIT_CRITICAL();

	/* Do the same for the Rx lock. */
	taskENTER_CRITICAL();
	{
		while( pxQueue->xRxLock > queueLOCKED_UNMODIFIED )
		{
			if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
			{
				if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
				{
					vTaskMissedYield();
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}

				--( pxQueue->xRxLock );
			}
			else
			{
				break;
			}
		}

		pxQueue->xRxLock = queueUNLOCKED;
	}
	taskEXIT_CRITICAL();
}
/*-----------------------------------------------------------*/
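The lock-count mechanism prvUnlockQueue implements (count events while the queue is locked, replay them on unlock) can be modelled without the kernel. The sketch below is illustrative only; the names (`ModelLock_t`, `vModelPost`, `vModelUnlock`) are invented, and the "wake one waiting task" step is stood in for by a counter:

```c
#include <assert.h>

#define modelUNLOCKED			( -1 )	/* mirrors queueUNLOCKED */
#define modelLOCKED_UNMODIFIED	( 0 )	/* mirrors queueLOCKED_UNMODIFIED */

typedef struct
{
	int iTxLock;			/* modelUNLOCKED, or number of posts made while locked */
	int iPendingWakeups;	/* stands in for processing the event list */
} ModelLock_t;

/* Posting while the queue is locked just bumps the lock count - the
   event lists are deliberately left alone, as in the real kernel. */
static void vModelPost( ModelLock_t *pxL )
{
	if( pxL->iTxLock != modelUNLOCKED )
	{
		++( pxL->iTxLock );
	}
}

/* Unlocking replays every post that happened while locked (one
   potential wakeup per post), then returns to the unlocked state. */
static void vModelUnlock( ModelLock_t *pxL )
{
	while( pxL->iTxLock > modelLOCKED_UNMODIFIED )
	{
		++( pxL->iPendingWakeups );	/* would wake one waiting task */
		--( pxL->iTxLock );
	}

	pxL->iTxLock = modelUNLOCKED;
}
```

The real function additionally breaks out of the loop as soon as no task is left to wake, and records any required context switch via vTaskMissedYield() because the scheduler is suspended.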

static BaseType_t prvIsQueueEmpty( const Queue_t *pxQueue )
{
BaseType_t xReturn;

	taskENTER_CRITICAL();
	{
		if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0 )
		{
			xReturn = pdTRUE;
		}
		else
		{
			xReturn = pdFALSE;
		}
	}
	taskEXIT_CRITICAL();

	return xReturn;
}
/*-----------------------------------------------------------*/

BaseType_t xQueueIsQueueEmptyFromISR( const QueueHandle_t xQueue )
{
BaseType_t xReturn;

	configASSERT( xQueue );
	if( ( ( Queue_t * ) xQueue )->uxMessagesWaiting == ( UBaseType_t ) 0 )
	{
		xReturn = pdTRUE;
	}
	else
	{
		xReturn = pdFALSE;
	}

	return xReturn;
} /*lint !e818 xQueue could not be pointer to const because it is a typedef. */
/*-----------------------------------------------------------*/

static BaseType_t prvIsQueueFull( const Queue_t *pxQueue )
{
BaseType_t xReturn;

	taskENTER_CRITICAL();
	{
		if( pxQueue->uxMessagesWaiting == pxQueue->uxLength )
		{
			xReturn = pdTRUE;
		}
		else
		{
			xReturn = pdFALSE;
		}
	}
	taskEXIT_CRITICAL();

	return xReturn;
}
/*-----------------------------------------------------------*/

BaseType_t xQueueIsQueueFullFromISR( const QueueHandle_t xQueue )
{
BaseType_t xReturn;

	configASSERT( xQueue );
	if( ( ( Queue_t * ) xQueue )->uxMessagesWaiting == ( ( Queue_t * ) xQueue )->uxLength )
	{
		xReturn = pdTRUE;
	}
	else
	{
		xReturn = pdFALSE;
	}

	return xReturn;
} /*lint !e818 xQueue could not be pointer to const because it is a typedef. */
/*-----------------------------------------------------------*/

#if ( configUSE_CO_ROUTINES == 1 )

	BaseType_t xQueueCRSend( QueueHandle_t xQueue, const void *pvItemToQueue, TickType_t xTicksToWait )
	{
	BaseType_t xReturn;
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		/* If the queue is already full we may have to block.  A critical section
		is required to prevent an interrupt removing something from the queue
		between the check to see if the queue is full and blocking on the queue. */
		portDISABLE_INTERRUPTS();
		{
			if( prvIsQueueFull( pxQueue ) != pdFALSE )
			{
				/* The queue is full - do we want to block or just leave without
				posting? */
				if( xTicksToWait > ( TickType_t ) 0 )
				{
					/* As this is called from a co-routine we cannot block directly, but
					return indicating that we need to block. */
					vCoRoutineAddToDelayedList( xTicksToWait, &( pxQueue->xTasksWaitingToSend ) );
					portENABLE_INTERRUPTS();
					return errQUEUE_BLOCKED;
				}
				else
				{
					portENABLE_INTERRUPTS();
					return errQUEUE_FULL;
				}
			}
		}
		portENABLE_INTERRUPTS();

		portDISABLE_INTERRUPTS();
		{
			if( pxQueue->uxMessagesWaiting < pxQueue->uxLength )
			{
				/* There is room in the queue, copy the data into the queue. */
				prvCopyDataToQueue( pxQueue, pvItemToQueue, queueSEND_TO_BACK );
				xReturn = pdPASS;

				/* Were any co-routines waiting for data to become available? */
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
				{
					/* In this instance the co-routine could be placed directly
					into the ready list as we are within a critical section.
					Instead the same pending ready list mechanism is used as if
					the event were caused from within an interrupt. */
					if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
					{
						/* The co-routine waiting has a higher priority so record
						that a yield might be appropriate. */
						xReturn = errQUEUE_YIELD;
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				xReturn = errQUEUE_FULL;
			}
		}
		portENABLE_INTERRUPTS();

		return xReturn;
	}

#endif /* configUSE_CO_ROUTINES */
/*-----------------------------------------------------------*/

#if ( configUSE_CO_ROUTINES == 1 )

	BaseType_t xQueueCRReceive( QueueHandle_t xQueue, void *pvBuffer, TickType_t xTicksToWait )
	{
	BaseType_t xReturn;
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		/* If the queue is already empty we may have to block.  A critical section
		is required to prevent an interrupt adding something to the queue
		between the check to see if the queue is empty and blocking on the queue. */
		portDISABLE_INTERRUPTS();
		{
			if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0 )
			{
				/* There are no messages in the queue, do we want to block or just
				leave with nothing? */
				if( xTicksToWait > ( TickType_t ) 0 )
				{
					/* As this is a co-routine we cannot block directly, but return
					indicating that we need to block. */
					vCoRoutineAddToDelayedList( xTicksToWait, &( pxQueue->xTasksWaitingToReceive ) );
					portENABLE_INTERRUPTS();
					return errQUEUE_BLOCKED;
				}
				else
				{
					portENABLE_INTERRUPTS();
					return errQUEUE_FULL;
				}
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}
		portENABLE_INTERRUPTS();

		portDISABLE_INTERRUPTS();
		{
			if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
			{
				/* Data is available from the queue. */
				pxQueue->u.pcReadFrom += pxQueue->uxItemSize;
				if( pxQueue->u.pcReadFrom >= pxQueue->pcTail )
				{
					pxQueue->u.pcReadFrom = pxQueue->pcHead;
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
				--( pxQueue->uxMessagesWaiting );
				( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( unsigned ) pxQueue->uxItemSize );

				xReturn = pdPASS;

				/* Were any co-routines waiting for space to become available? */
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
				{
					/* In this instance the co-routine could be placed directly
					into the ready list as we are within a critical section.
					Instead the same pending ready list mechanism is used as if
					the event were caused from within an interrupt. */
					if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
					{
						xReturn = errQUEUE_YIELD;
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				xReturn = pdFAIL;
			}
		}
		portENABLE_INTERRUPTS();

		return xReturn;
	}

#endif /* configUSE_CO_ROUTINES */
/*-----------------------------------------------------------*/

#if ( configUSE_CO_ROUTINES == 1 )

	BaseType_t xQueueCRSendFromISR( QueueHandle_t xQueue, const void *pvItemToQueue, BaseType_t xCoRoutinePreviouslyWoken )
	{
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		/* Cannot block within an ISR so if there is no space on the queue then
		exit without doing anything. */
		if( pxQueue->uxMessagesWaiting < pxQueue->uxLength )
		{
			prvCopyDataToQueue( pxQueue, pvItemToQueue, queueSEND_TO_BACK );

			/* We only want to wake one co-routine per ISR, so check that a
			co-routine has not already been woken. */
			if( xCoRoutinePreviouslyWoken == pdFALSE )
			{
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
				{
					if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
					{
						return pdTRUE;
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}

		return xCoRoutinePreviouslyWoken;
	}

#endif /* configUSE_CO_ROUTINES */
/*-----------------------------------------------------------*/

#if ( configUSE_CO_ROUTINES == 1 )

	BaseType_t xQueueCRReceiveFromISR( QueueHandle_t xQueue, void *pvBuffer, BaseType_t *pxCoRoutineWoken )
	{
	BaseType_t xReturn;
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		/* We cannot block from an ISR, so check there is data available.  If
		not then just leave without doing anything. */
		if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
		{
			/* Copy the data from the queue. */
			pxQueue->u.pcReadFrom += pxQueue->uxItemSize;
			if( pxQueue->u.pcReadFrom >= pxQueue->pcTail )
			{
				pxQueue->u.pcReadFrom = pxQueue->pcHead;
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
			--( pxQueue->uxMessagesWaiting );
			( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( unsigned ) pxQueue->uxItemSize );

			if( ( *pxCoRoutineWoken ) == pdFALSE )
			{
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
				{
					if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
					{
						*pxCoRoutineWoken = pdTRUE;
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}

			xReturn = pdPASS;
		}
		else
		{
			xReturn = pdFAIL;
		}

		return xReturn;
	}

#endif /* configUSE_CO_ROUTINES */
/*-----------------------------------------------------------*/

#if ( configQUEUE_REGISTRY_SIZE > 0 )

	void vQueueAddToRegistry( QueueHandle_t xQueue, const char *pcQueueName ) /*lint !e971 Unqualified char types are allowed for strings and single characters only. */
	{
	UBaseType_t ux;

		/* See if there is an empty space in the registry.  A NULL name denotes
		a free slot. */
		for( ux = ( UBaseType_t ) 0U; ux < ( UBaseType_t ) configQUEUE_REGISTRY_SIZE; ux++ )
		{
			if( xQueueRegistry[ ux ].pcQueueName == NULL )
			{
				/* Store the information on this queue. */
				xQueueRegistry[ ux ].pcQueueName = pcQueueName;
				xQueueRegistry[ ux ].xHandle = xQueue;

				traceQUEUE_REGISTRY_ADD( xQueue, pcQueueName );
				break;
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}
	}

#endif /* configQUEUE_REGISTRY_SIZE */
/*-----------------------------------------------------------*/
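The registry's NULL-name-marks-a-free-slot convention used by vQueueAddToRegistry and vQueueUnregisterQueue can be shown in isolation. The sketch below is a standalone model with invented names (`ModelRegistryItem_t`, `iModelRegister`), not the kernel's registry:

```c
#include <assert.h>
#include <stddef.h>

#define modelREGISTRY_SIZE 4

typedef struct
{
	const char *pcName;		/* NULL means the slot is free */
	void *pvHandle;
} ModelRegistryItem_t;

/* Static storage zero-initialises every pcName to NULL, so all slots
   start out free - the same property the kernel registry relies on. */
static ModelRegistryItem_t xModelRegistry[ modelREGISTRY_SIZE ];

/* Claim the first free slot, scanning for a NULL name exactly as
   vQueueAddToRegistry does.  Returns 0 on success, -1 when full. */
static int iModelRegister( void *pvHandle, const char *pcName )
{
	size_t ux;

	for( ux = 0; ux < modelREGISTRY_SIZE; ux++ )
	{
		if( xModelRegistry[ ux ].pcName == NULL )
		{
			xModelRegistry[ ux ].pcName = pcName;
			xModelRegistry[ ux ].pvHandle = pvHandle;
			return 0;
		}
	}

	return -1;
}
```

Unregistering is the inverse: find the matching handle and set its name back to NULL, which is all vQueueUnregisterQueue does.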

#if ( configQUEUE_REGISTRY_SIZE > 0 )

	void vQueueUnregisterQueue( QueueHandle_t xQueue )
	{
	UBaseType_t ux;

		/* See if the handle of the queue being unregistered is actually in the
		registry. */
		for( ux = ( UBaseType_t ) 0U; ux < ( UBaseType_t ) configQUEUE_REGISTRY_SIZE; ux++ )
		{
			if( xQueueRegistry[ ux ].xHandle == xQueue )
			{
				/* Set the name to NULL to show that this slot is free again. */
				xQueueRegistry[ ux ].pcQueueName = NULL;
				break;
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}

	} /*lint !e818 xQueue could not be pointer to const because it is a typedef. */

#endif /* configQUEUE_REGISTRY_SIZE */
/*-----------------------------------------------------------*/
dflet 0:1e7b5dd9edb4 2403
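/*
 * Usage sketch (illustrative only, not part of this kernel file): how
 * application code might pair vQueueAddToRegistry() and
 * vQueueUnregisterQueue().  The queue name "RxQueue" and the queue length
 * are assumptions made up for the example.
 *
 *     QueueHandle_t xRxQueue = xQueueCreate( 10, sizeof( uint32_t ) );
 *
 *     if( xRxQueue != NULL )
 *     {
 *         // Make the queue visible by name to RTOS-aware debuggers.
 *         vQueueAddToRegistry( xRxQueue, "RxQueue" );
 *     }
 *
 *     // ... use the queue ...
 *
 *     // Free the registry slot before (or when) deleting the queue.
 *     vQueueUnregisterQueue( xRxQueue );
 *     vQueueDelete( xRxQueue );
 */
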
#if ( configUSE_TIMERS == 1 )

	void vQueueWaitForMessageRestricted( QueueHandle_t xQueue, TickType_t xTicksToWait )
	{
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		/* This function should not be called by application code, hence the
		'Restricted' in its name.  It is not part of the public API.  It is
		designed for use by kernel code, and has special calling requirements.
		It can result in vListInsert() being called on a list that can only
		possibly ever have one item in it, so the list will be fast, but even
		so it should be called with the scheduler locked and not from a critical
		section. */

		/* Only do anything if there are no messages in the queue.  This function
		will not actually cause the task to block, just place it on a blocked
		list.  It will not block until the scheduler is unlocked - at which
		time a yield will be performed.  If an item is added to the queue while
		the queue is locked, and the calling task blocks on the queue, then the
		calling task will be immediately unblocked when the queue is unlocked. */
		prvLockQueue( pxQueue );
		if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0U )
		{
			/* There is nothing in the queue, block for the specified period. */
			vTaskPlaceOnEventListRestricted( &( pxQueue->xTasksWaitingToReceive ), xTicksToWait );
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}
		prvUnlockQueue( pxQueue );
	}

#endif /* configUSE_TIMERS */
/*-----------------------------------------------------------*/

#if ( configUSE_QUEUE_SETS == 1 )

	QueueSetHandle_t xQueueCreateSet( const UBaseType_t uxEventQueueLength )
	{
	QueueSetHandle_t pxQueue;

		pxQueue = xQueueGenericCreate( uxEventQueueLength, sizeof( Queue_t * ), queueQUEUE_TYPE_SET );

		return pxQueue;
	}

#endif /* configUSE_QUEUE_SETS */
/*-----------------------------------------------------------*/

#if ( configUSE_QUEUE_SETS == 1 )

	BaseType_t xQueueAddToSet( QueueSetMemberHandle_t xQueueOrSemaphore, QueueSetHandle_t xQueueSet )
	{
	BaseType_t xReturn;

		taskENTER_CRITICAL();
		{
			if( ( ( Queue_t * ) xQueueOrSemaphore )->pxQueueSetContainer != NULL )
			{
				/* Cannot add a queue/semaphore to more than one queue set. */
				xReturn = pdFAIL;
			}
			else if( ( ( Queue_t * ) xQueueOrSemaphore )->uxMessagesWaiting != ( UBaseType_t ) 0 )
			{
				/* Cannot add a queue/semaphore to a queue set if there are already
				items in the queue/semaphore. */
				xReturn = pdFAIL;
			}
			else
			{
				( ( Queue_t * ) xQueueOrSemaphore )->pxQueueSetContainer = xQueueSet;
				xReturn = pdPASS;
			}
		}
		taskEXIT_CRITICAL();

		return xReturn;
	}

#endif /* configUSE_QUEUE_SETS */
/*-----------------------------------------------------------*/

#if ( configUSE_QUEUE_SETS == 1 )

	BaseType_t xQueueRemoveFromSet( QueueSetMemberHandle_t xQueueOrSemaphore, QueueSetHandle_t xQueueSet )
	{
	BaseType_t xReturn;
	Queue_t * const pxQueueOrSemaphore = ( Queue_t * ) xQueueOrSemaphore;

		if( pxQueueOrSemaphore->pxQueueSetContainer != xQueueSet )
		{
			/* The queue was not a member of the set. */
			xReturn = pdFAIL;
		}
		else if( pxQueueOrSemaphore->uxMessagesWaiting != ( UBaseType_t ) 0 )
		{
			/* It is dangerous to remove a queue from a set when the queue is
			not empty because the queue set will still hold pending events for
			the queue. */
			xReturn = pdFAIL;
		}
		else
		{
			taskENTER_CRITICAL();
			{
				/* The queue is no longer contained in the set. */
				pxQueueOrSemaphore->pxQueueSetContainer = NULL;
			}
			taskEXIT_CRITICAL();
			xReturn = pdPASS;
		}

		return xReturn;
	} /*lint !e818 xQueueSet could not be declared as pointing to const as it is a typedef. */

#endif /* configUSE_QUEUE_SETS */
/*-----------------------------------------------------------*/

#if ( configUSE_QUEUE_SETS == 1 )

	QueueSetMemberHandle_t xQueueSelectFromSet( QueueSetHandle_t xQueueSet, TickType_t const xTicksToWait )
	{
	QueueSetMemberHandle_t xReturn = NULL;

		( void ) xQueueGenericReceive( ( QueueHandle_t ) xQueueSet, &xReturn, xTicksToWait, pdFALSE ); /*lint !e961 Casting from one typedef to another is not redundant. */
		return xReturn;
	}

#endif /* configUSE_QUEUE_SETS */
/*-----------------------------------------------------------*/

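/*
 * Usage sketch (illustrative only, not part of this kernel file): a task
 * blocking on a queue set that groups a queue and a binary semaphore.  The
 * lengths and handle names are assumptions made up for the example.  The
 * set is sized to the sum of its members' lengths so it can never overflow.
 *
 *     QueueSetHandle_t xSet = xQueueCreateSet( 10 + 1 );
 *     QueueHandle_t xQueue = xQueueCreate( 10, sizeof( uint32_t ) );
 *     SemaphoreHandle_t xSem = xSemaphoreCreateBinary();
 *
 *     // Members must be empty when added, or xQueueAddToSet() returns pdFAIL.
 *     xQueueAddToSet( ( QueueSetMemberHandle_t ) xQueue, xSet );
 *     xQueueAddToSet( ( QueueSetMemberHandle_t ) xSem, xSet );
 *
 *     for( ;; )
 *     {
 *         QueueSetMemberHandle_t xActive = xQueueSelectFromSet( xSet, portMAX_DELAY );
 *
 *         if( xActive == ( QueueSetMemberHandle_t ) xQueue )
 *         {
 *             uint32_t ulValue;
 *             // A zero block time is safe - the select guarantees data is waiting.
 *             xQueueReceive( xQueue, &ulValue, 0 );
 *         }
 *         else if( xActive == ( QueueSetMemberHandle_t ) xSem )
 *         {
 *             xSemaphoreTake( xSem, 0 );
 *         }
 *     }
 */
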
#if ( configUSE_QUEUE_SETS == 1 )

	QueueSetMemberHandle_t xQueueSelectFromSetFromISR( QueueSetHandle_t xQueueSet )
	{
	QueueSetMemberHandle_t xReturn = NULL;

		( void ) xQueueReceiveFromISR( ( QueueHandle_t ) xQueueSet, &xReturn, NULL ); /*lint !e961 Casting from one typedef to another is not redundant. */
		return xReturn;
	}

#endif /* configUSE_QUEUE_SETS */
/*-----------------------------------------------------------*/

#if ( configUSE_QUEUE_SETS == 1 )

	static BaseType_t prvNotifyQueueSetContainer( const Queue_t * const pxQueue, const BaseType_t xCopyPosition )
	{
	Queue_t *pxQueueSetContainer = pxQueue->pxQueueSetContainer;
	BaseType_t xReturn = pdFALSE;

		/* This function must be called from a critical section. */

		configASSERT( pxQueueSetContainer );
		configASSERT( pxQueueSetContainer->uxMessagesWaiting < pxQueueSetContainer->uxLength );

		if( pxQueueSetContainer->uxMessagesWaiting < pxQueueSetContainer->uxLength )
		{
			traceQUEUE_SEND( pxQueueSetContainer );

			/* The data copied is the handle of the queue that contains data. */
			xReturn = prvCopyDataToQueue( pxQueueSetContainer, &pxQueue, xCopyPosition );

			if( pxQueueSetContainer->xTxLock == queueUNLOCKED )
			{
				if( listLIST_IS_EMPTY( &( pxQueueSetContainer->xTasksWaitingToReceive ) ) == pdFALSE )
				{
					if( xTaskRemoveFromEventList( &( pxQueueSetContainer->xTasksWaitingToReceive ) ) != pdFALSE )
					{
						/* The task waiting has a higher priority. */
						xReturn = pdTRUE;
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				( pxQueueSetContainer->xTxLock )++;
			}
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}

		return xReturn;
	}

#endif /* configUSE_QUEUE_SETS */