Update revision to use TI's mqtt and Freertos.

Dependencies:   mbed client server

Fork of cc3100_Test_mqtt_CM3 by David Fletcher

Committer: dflet
Date: Thu Sep 03 14:02:37 2015 +0000
Revision: 3:a8c249046181 ("SPI Mode change 1 to 0")

/*
    FreeRTOS V8.2.1 - Copyright (C) 2015 Real Time Engineers Ltd.
    All rights reserved

    VISIT http://www.FreeRTOS.org TO ENSURE YOU ARE USING THE LATEST VERSION.

    This file is part of the FreeRTOS distribution.

    FreeRTOS is free software; you can redistribute it and/or modify it under
    the terms of the GNU General Public License (version 2) as published by the
    Free Software Foundation >>!AND MODIFIED BY!<< the FreeRTOS exception.

    ***************************************************************************
    >>!   NOTE: The modification to the GPL is included to allow you to     !<<
    >>!   distribute a combined work that includes FreeRTOS without being   !<<
    >>!   obliged to provide the source code for proprietary components     !<<
    >>!   outside of the FreeRTOS kernel.                                   !<<
    ***************************************************************************

    FreeRTOS is distributed in the hope that it will be useful, but WITHOUT ANY
    WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
    FOR A PARTICULAR PURPOSE.  Full license text is available on the following
    link: http://www.freertos.org/a00114.html

    ***************************************************************************
     *                                                                       *
     *    FreeRTOS provides completely free yet professionally developed,    *
     *    robust, strictly quality controlled, supported, and cross          *
     *    platform software that is more than just the market leader, it     *
     *    is the industry's de facto standard.                               *
     *                                                                       *
     *    Help yourself get started quickly while simultaneously helping     *
     *    to support the FreeRTOS project by purchasing a FreeRTOS           *
     *    tutorial book, reference manual, or both:                          *
     *    http://www.FreeRTOS.org/Documentation                              *
     *                                                                       *
    ***************************************************************************

    http://www.FreeRTOS.org/FAQHelp.html - Having a problem?  Start by reading
    the FAQ page "My application does not run, what could be wrong?".  Have you
    defined configASSERT()?

    http://www.FreeRTOS.org/support - In return for receiving this top quality
    embedded software for free we request you assist our global community by
    participating in the support forum.

    http://www.FreeRTOS.org/training - Investing in training allows your team to
    be as productive as possible as early as possible.  Now you can receive
    FreeRTOS training directly from Richard Barry, CEO of Real Time Engineers
    Ltd, and the world's leading authority on the world's leading RTOS.

    http://www.FreeRTOS.org/plus - A selection of FreeRTOS ecosystem products,
    including FreeRTOS+Trace - an indispensable productivity tool, a DOS
    compatible FAT file system, and our tiny thread aware UDP/IP stack.

    http://www.FreeRTOS.org/labs - Where new FreeRTOS products go to incubate.
    Come and try FreeRTOS+TCP, our new open source TCP/IP stack for FreeRTOS.

    http://www.OpenRTOS.com - Real Time Engineers ltd. license FreeRTOS to High
    Integrity Systems ltd. to sell under the OpenRTOS brand.  Low cost OpenRTOS
    licenses offer ticketed support, indemnification and commercial middleware.

    http://www.SafeRTOS.com - High Integrity Systems also provide a safety
    engineered and independently SIL3 certified version for use in safety and
    mission critical applications that require provable dependability.

    1 tab == 4 spaces!
*/

#include <stdlib.h>
#include <string.h>

/* Defining MPU_WRAPPERS_INCLUDED_FROM_API_FILE prevents task.h from redefining
all the API functions to use the MPU wrappers.  That should only be done when
task.h is included from an application file. */
#define MPU_WRAPPERS_INCLUDED_FROM_API_FILE

#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

#if ( configUSE_CO_ROUTINES == 1 )
	#include "croutine.h"
#endif

/* Lint e961 and e750 are suppressed as a MISRA exception justified because the
MPU ports require MPU_WRAPPERS_INCLUDED_FROM_API_FILE to be defined for the
header files above, but not in this file, in order to generate the correct
privileged Vs unprivileged linkage and placement. */
#undef MPU_WRAPPERS_INCLUDED_FROM_API_FILE /*lint !e961 !e750. */


/* Constants used with the xRxLock and xTxLock structure members. */
#define queueUNLOCKED					( ( BaseType_t ) -1 )
#define queueLOCKED_UNMODIFIED			( ( BaseType_t ) 0 )

/* When the Queue_t structure is used to represent a base queue its pcHead and
pcTail members are used as pointers into the queue storage area.  When the
Queue_t structure is used to represent a mutex pcHead and pcTail pointers are
not necessary, and the pcHead pointer is set to NULL to indicate that the
pcTail pointer actually points to the mutex holder (if any).  Map alternative
names to the pcHead and pcTail structure members to ensure the readability of
the code is maintained despite this dual use of two structure members.  An
alternative implementation would be to use a union, but use of a union is
against the coding standard (although an exception to the standard has been
permitted where the dual use also significantly changes the type of the
structure member). */
#define pxMutexHolder					pcTail
#define uxQueueType						pcHead
#define queueQUEUE_IS_MUTEX				NULL

/* Semaphores do not actually store or copy data, so have an item size of
zero. */
#define queueSEMAPHORE_QUEUE_ITEM_LENGTH ( ( UBaseType_t ) 0 )
#define queueMUTEX_GIVE_BLOCK_TIME		 ( ( TickType_t ) 0U )

#if( configUSE_PREEMPTION == 0 )
	/* If the cooperative scheduler is being used then a yield should not be
	performed just because a higher priority task has been woken. */
	#define queueYIELD_IF_USING_PREEMPTION()
#else
	#define queueYIELD_IF_USING_PREEMPTION() portYIELD_WITHIN_API()
#endif

/*
 * Definition of the queue used by the scheduler.
 * Items are queued by copy, not reference.  See the following link for the
 * rationale: http://www.freertos.org/Embedded-RTOS-Queues.html
 */
typedef struct QueueDefinition
{
	int8_t *pcHead;					/*< Points to the beginning of the queue storage area. */
	int8_t *pcTail;					/*< Points to the byte at the end of the queue storage area.  One more byte is allocated than necessary to store the queue items; this byte is used as a marker. */
	int8_t *pcWriteTo;				/*< Points to the next free place in the storage area. */

	union							/* Use of a union is an exception to the coding standard to ensure two mutually exclusive structure members don't appear simultaneously (wasting RAM). */
	{
		int8_t *pcReadFrom;			/*< Points to the last place that a queued item was read from when the structure is used as a queue. */
		UBaseType_t uxRecursiveCallCount;/*< Maintains a count of the number of times a recursive mutex has been recursively 'taken' when the structure is used as a mutex. */
	} u;

	List_t xTasksWaitingToSend;		/*< List of tasks that are blocked waiting to post onto this queue.  Stored in priority order. */
	List_t xTasksWaitingToReceive;	/*< List of tasks that are blocked waiting to read from this queue.  Stored in priority order. */

	volatile UBaseType_t uxMessagesWaiting;/*< The number of items currently in the queue. */
	UBaseType_t uxLength;			/*< The length of the queue defined as the number of items it will hold, not the number of bytes. */
	UBaseType_t uxItemSize;			/*< The size of each item that the queue will hold. */

	volatile BaseType_t xRxLock;	/*< Stores the number of items received from the queue (removed from the queue) while the queue was locked.  Set to queueUNLOCKED when the queue is not locked. */
	volatile BaseType_t xTxLock;	/*< Stores the number of items transmitted to the queue (added to the queue) while the queue was locked.  Set to queueUNLOCKED when the queue is not locked. */

	#if ( configUSE_TRACE_FACILITY == 1 )
		UBaseType_t uxQueueNumber;
		uint8_t ucQueueType;
	#endif

	#if ( configUSE_QUEUE_SETS == 1 )
		struct QueueDefinition *pxQueueSetContainer;
	#endif

} xQUEUE;

/* The old xQUEUE name is maintained above then typedefed to the new Queue_t
name below to enable the use of older kernel aware debuggers. */
typedef xQUEUE Queue_t;

/*-----------------------------------------------------------*/

/*
 * The queue registry is just a means for kernel aware debuggers to locate
 * queue structures.  It has no other purpose so is an optional component.
 */
#if ( configQUEUE_REGISTRY_SIZE > 0 )

	/* The type stored within the queue registry array.  This allows a name
	to be assigned to each queue making kernel aware debugging a little
	more user friendly. */
	typedef struct QUEUE_REGISTRY_ITEM
	{
		const char *pcQueueName; /*lint !e971 Unqualified char types are allowed for strings and single characters only. */
		QueueHandle_t xHandle;
	} xQueueRegistryItem;

	/* The old xQueueRegistryItem name is maintained above then typedefed to
	the new QueueRegistryItem_t name below to enable the use of older kernel
	aware debuggers. */
	typedef xQueueRegistryItem QueueRegistryItem_t;

	/* The queue registry is simply an array of QueueRegistryItem_t structures.
	The pcQueueName member of a structure being NULL is indicative of the
	array position being vacant. */
	QueueRegistryItem_t xQueueRegistry[ configQUEUE_REGISTRY_SIZE ];

#endif /* configQUEUE_REGISTRY_SIZE */

/*
 * Unlocks a queue locked by a call to prvLockQueue.  Locking a queue does not
 * prevent an ISR from adding or removing items to the queue, but does prevent
 * an ISR from removing tasks from the queue event lists.  If an ISR finds a
 * queue is locked it will instead increment the appropriate queue lock count
 * to indicate that a task may require unblocking.  When the queue is unlocked
 * these lock counts are inspected, and the appropriate action taken.
 */
static void prvUnlockQueue( Queue_t * const pxQueue ) PRIVILEGED_FUNCTION;

/*
 * Uses a critical section to determine if there is any data in a queue.
 *
 * @return pdTRUE if the queue contains no items, otherwise pdFALSE.
 */
static BaseType_t prvIsQueueEmpty( const Queue_t *pxQueue ) PRIVILEGED_FUNCTION;

/*
 * Uses a critical section to determine if there is any space in a queue.
 *
 * @return pdTRUE if there is no space, otherwise pdFALSE.
 */
static BaseType_t prvIsQueueFull( const Queue_t *pxQueue ) PRIVILEGED_FUNCTION;

/*
 * Copies an item into the queue, either at the front of the queue or the
 * back of the queue.
 */
static BaseType_t prvCopyDataToQueue( Queue_t * const pxQueue, const void *pvItemToQueue, const BaseType_t xPosition ) PRIVILEGED_FUNCTION;

/*
 * Copies an item out of a queue.
 */
static void prvCopyDataFromQueue( Queue_t * const pxQueue, void * const pvBuffer ) PRIVILEGED_FUNCTION;

#if ( configUSE_QUEUE_SETS == 1 )
	/*
	 * Checks to see if a queue is a member of a queue set, and if so, notifies
	 * the queue set that the queue contains data.
	 */
	static BaseType_t prvNotifyQueueSetContainer( const Queue_t * const pxQueue, const BaseType_t xCopyPosition ) PRIVILEGED_FUNCTION;
#endif

/*-----------------------------------------------------------*/

/*
 * Macro to mark a queue as locked.  Locking a queue prevents an ISR from
 * accessing the queue event lists.
 */
#define prvLockQueue( pxQueue )								\
	taskENTER_CRITICAL();									\
	{														\
		if( ( pxQueue )->xRxLock == queueUNLOCKED )			\
		{													\
			( pxQueue )->xRxLock = queueLOCKED_UNMODIFIED;	\
		}													\
		if( ( pxQueue )->xTxLock == queueUNLOCKED )			\
		{													\
			( pxQueue )->xTxLock = queueLOCKED_UNMODIFIED;	\
		}													\
	}														\
	taskEXIT_CRITICAL()
/*-----------------------------------------------------------*/
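The lock protocol above is easy to miss in the macro soup, so here is a hedged, stand-alone sketch of the idea (no FreeRTOS headers; `txLock`, `lockQueue`, `isrSendWhileLocked`, `unlockQueue` and `deferredWakeups` are hypothetical stand-ins, and the real code keeps a separate count per direction and wakes real tasks rather than bumping a counter):

```c
#include <assert.h>

#define UNLOCKED          ( -1 )
#define LOCKED_UNMODIFIED ( 0 )

static int txLock = UNLOCKED;
static int deferredWakeups;

/* Equivalent of prvLockQueue() for the transmit side. */
static void lockQueue( void )
{
	if( txLock == UNLOCKED ) txLock = LOCKED_UNMODIFIED;
}

/* What an ISR does when it posts to a locked queue: record the post in the
lock count instead of touching the event lists. */
static void isrSendWhileLocked( void )
{
	if( txLock != UNLOCKED ) txLock++;
}

/* Equivalent of prvUnlockQueue(): perform the wake-ups deferred while the
queue was locked, then mark the queue unlocked again. */
static void unlockQueue( void )
{
	while( txLock > LOCKED_UNMODIFIED )
	{
		deferredWakeups++;	/* stands in for xTaskRemoveFromEventList() */
		txLock--;
	}
	txLock = UNLOCKED;
}
```

The point of the `-1`/`0` split is that `queueUNLOCKED` (-1) can never be confused with a valid count, so a single signed variable encodes both "unlocked" and "locked with N pending posts".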

BaseType_t xQueueGenericReset( QueueHandle_t xQueue, BaseType_t xNewQueue )
{
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

	configASSERT( pxQueue );

	taskENTER_CRITICAL();
	{
		pxQueue->pcTail = pxQueue->pcHead + ( pxQueue->uxLength * pxQueue->uxItemSize );
		pxQueue->uxMessagesWaiting = ( UBaseType_t ) 0U;
		pxQueue->pcWriteTo = pxQueue->pcHead;
		pxQueue->u.pcReadFrom = pxQueue->pcHead + ( ( pxQueue->uxLength - ( UBaseType_t ) 1U ) * pxQueue->uxItemSize );
		pxQueue->xRxLock = queueUNLOCKED;
		pxQueue->xTxLock = queueUNLOCKED;

		if( xNewQueue == pdFALSE )
		{
			/* If there are tasks blocked waiting to read from the queue, then
			the tasks will remain blocked as after this function exits the queue
			will still be empty.  If there are tasks blocked waiting to write to
			the queue, then one should be unblocked as after this function exits
			it will be possible to write to it. */
			if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
			{
				if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) == pdTRUE )
				{
					queueYIELD_IF_USING_PREEMPTION();
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}
		else
		{
			/* Ensure the event queues start in the correct state. */
			vListInitialise( &( pxQueue->xTasksWaitingToSend ) );
			vListInitialise( &( pxQueue->xTasksWaitingToReceive ) );
		}
	}
	taskEXIT_CRITICAL();

	/* A value is returned for calling semantic consistency with previous
	versions. */
	return pdPASS;
}
/*-----------------------------------------------------------*/
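The pointer set-up in xQueueGenericReset() is subtle: `u.pcReadFrom` is parked at the *last* item slot because the read path advances the pointer before copying, so the first read wraps to `pcHead`. A hedged, stand-alone sketch of that arithmetic (no FreeRTOS headers; `head`, `tail`, `writeTo`, `readFrom`, `sendToBack` and `receive` are hypothetical stand-ins for the Queue_t members and copy routines):

```c
#include <assert.h>
#include <string.h>

#define QLEN  3
#define ISIZE sizeof( int )

static char storage[ QLEN * ISIZE ];
static char *head, *tail, *writeTo, *readFrom;
static unsigned waiting;

/* Mirror of the pointer set-up in xQueueGenericReset(). */
static void reset( void )
{
	head = storage;
	tail = storage + ( QLEN * ISIZE );			/* one past the last item */
	writeTo = head;								/* next write lands at the front */
	readFrom = head + ( ( QLEN - 1 ) * ISIZE );	/* advance-then-copy wraps to head */
	waiting = 0;
}

/* Copy an item in by value and wrap the write pointer, in the style of
prvCopyDataToQueue() for queueSEND_TO_BACK. */
static void sendToBack( const int *item )
{
	memcpy( writeTo, item, ISIZE );
	writeTo += ISIZE;
	if( writeTo >= tail ) writeTo = head;
	waiting++;
}

/* Advance the read pointer first, then copy out, in the style of
prvCopyDataFromQueue().  This is why reset() parks readFrom at the last slot. */
static void receive( int *buf )
{
	readFrom += ISIZE;
	if( readFrom >= tail ) readFrom = head;
	memcpy( buf, readFrom, ISIZE );
	waiting--;
}
```

Filling the queue and draining it returns items in FIFO order, which confirms items are queued by copy and the wrap points line up.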

QueueHandle_t xQueueGenericCreate( const UBaseType_t uxQueueLength, const UBaseType_t uxItemSize, const uint8_t ucQueueType )
{
Queue_t *pxNewQueue;
size_t xQueueSizeInBytes;
QueueHandle_t xReturn = NULL;
int8_t *pcAllocatedBuffer;

	/* Remove compiler warnings about unused parameters should
	configUSE_TRACE_FACILITY not be set to 1. */
	( void ) ucQueueType;

	configASSERT( uxQueueLength > ( UBaseType_t ) 0 );

	if( uxItemSize == ( UBaseType_t ) 0 )
	{
		/* There is not going to be a queue storage area. */
		xQueueSizeInBytes = ( size_t ) 0;
	}
	else
	{
		/* The queue is one byte longer than asked for to make wrap checking
		easier/faster. */
		xQueueSizeInBytes = ( size_t ) ( uxQueueLength * uxItemSize ) + ( size_t ) 1; /*lint !e961 MISRA exception as the casts are only redundant for some ports. */
	}

	/* Allocate the new queue structure and storage area. */
	pcAllocatedBuffer = ( int8_t * ) pvPortMalloc( sizeof( Queue_t ) + xQueueSizeInBytes );

	if( pcAllocatedBuffer != NULL )
	{
		pxNewQueue = ( Queue_t * ) pcAllocatedBuffer; /*lint !e826 MISRA The buffer cannot be too small because it was dimensioned by sizeof( Queue_t ) + xQueueSizeInBytes. */

		if( uxItemSize == ( UBaseType_t ) 0 )
		{
			/* No RAM was allocated for the queue storage area, but pcHead
			cannot be set to NULL because NULL is used as a key to say the queue
			is used as a mutex.  Therefore just set pcHead to point to the queue
			as a benign value that is known to be within the memory map. */
			pxNewQueue->pcHead = ( int8_t * ) pxNewQueue;
		}
		else
		{
			/* Jump past the queue structure to find the location of the queue
			storage area - adding the padding bytes to get a better alignment. */
			pxNewQueue->pcHead = pcAllocatedBuffer + sizeof( Queue_t );
		}

		/* Initialise the queue members as described above where the queue type
		is defined. */
		pxNewQueue->uxLength = uxQueueLength;
		pxNewQueue->uxItemSize = uxItemSize;
		( void ) xQueueGenericReset( pxNewQueue, pdTRUE );

		#if ( configUSE_TRACE_FACILITY == 1 )
		{
			pxNewQueue->ucQueueType = ucQueueType;
		}
		#endif /* configUSE_TRACE_FACILITY */

		#if( configUSE_QUEUE_SETS == 1 )
		{
			pxNewQueue->pxQueueSetContainer = NULL;
		}
		#endif /* configUSE_QUEUE_SETS */

		traceQUEUE_CREATE( pxNewQueue );
		xReturn = pxNewQueue;
	}
	else
	{
		mtCOVERAGE_TEST_MARKER();
	}

	configASSERT( xReturn );

	return xReturn;
}
/*-----------------------------------------------------------*/
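xQueueGenericCreate() makes a single allocation that holds the Queue_t header immediately followed by the storage area. A hedged, stand-alone sketch of that layout (no FreeRTOS headers; `MiniQueue` and `createQueue` are hypothetical stand-ins for Queue_t and the creation logic, and plain `malloc` stands in for pvPortMalloc):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* MiniQueue is a hypothetical stand-in for the Queue_t header. */
typedef struct
{
	uint8_t *head;
	size_t length;
	size_t itemSize;
} MiniQueue;

static MiniQueue *createQueue( size_t length, size_t itemSize )
{
	/* Storage is one byte longer than length * itemSize, matching the
	wrap-check trick in xQueueGenericCreate(); zero bytes for semaphore-style
	queues that store nothing. */
	size_t storageBytes = ( itemSize == 0 ) ? 0 : ( length * itemSize ) + 1;
	uint8_t *raw = malloc( sizeof( MiniQueue ) + storageBytes );
	MiniQueue *q;

	if( raw == NULL ) return NULL;

	q = ( MiniQueue * ) raw;
	/* itemSize 0: head must stay non-NULL because NULL marks the mutex case,
	so point it at the structure itself, a benign in-map value. */
	q->head = ( itemSize == 0 ) ? raw : raw + sizeof( MiniQueue );
	q->length = length;
	q->itemSize = itemSize;
	return q;
}
```

One allocation instead of two keeps heap overhead and fragmentation down on small targets, which is presumably why the kernel folds the header and storage together.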

#if ( configUSE_MUTEXES == 1 )

	QueueHandle_t xQueueCreateMutex( const uint8_t ucQueueType )
	{
	Queue_t *pxNewQueue;

		/* Prevent compiler warnings about unused parameters if
		configUSE_TRACE_FACILITY does not equal 1. */
		( void ) ucQueueType;

		/* Allocate the new queue structure. */
		pxNewQueue = ( Queue_t * ) pvPortMalloc( sizeof( Queue_t ) );
		if( pxNewQueue != NULL )
		{
			/* Information required for priority inheritance. */
			pxNewQueue->pxMutexHolder = NULL;
			pxNewQueue->uxQueueType = queueQUEUE_IS_MUTEX;

			/* When a queue is used as a mutex no data is actually copied into
			or out of the queue. */
			pxNewQueue->pcWriteTo = NULL;
			pxNewQueue->u.pcReadFrom = NULL;

			/* Each mutex has a length of 1 (like a binary semaphore) and
			an item size of 0 as nothing is actually copied into or out
			of the mutex. */
			pxNewQueue->uxMessagesWaiting = ( UBaseType_t ) 0U;
			pxNewQueue->uxLength = ( UBaseType_t ) 1U;
			pxNewQueue->uxItemSize = ( UBaseType_t ) 0U;
			pxNewQueue->xRxLock = queueUNLOCKED;
			pxNewQueue->xTxLock = queueUNLOCKED;

			#if ( configUSE_TRACE_FACILITY == 1 )
			{
				pxNewQueue->ucQueueType = ucQueueType;
			}
			#endif

			#if ( configUSE_QUEUE_SETS == 1 )
			{
				pxNewQueue->pxQueueSetContainer = NULL;
			}
			#endif

			/* Ensure the event queues start with the correct state. */
			vListInitialise( &( pxNewQueue->xTasksWaitingToSend ) );
			vListInitialise( &( pxNewQueue->xTasksWaitingToReceive ) );

			traceCREATE_MUTEX( pxNewQueue );

			/* Start with the semaphore in the expected state. */
			( void ) xQueueGenericSend( pxNewQueue, NULL, ( TickType_t ) 0U, queueSEND_TO_BACK );
		}
		else
		{
			traceCREATE_MUTEX_FAILED();
		}

		configASSERT( pxNewQueue );
		return pxNewQueue;
	}

#endif /* configUSE_MUTEXES */
/*-----------------------------------------------------------*/

#if ( ( configUSE_MUTEXES == 1 ) && ( INCLUDE_xSemaphoreGetMutexHolder == 1 ) )

	void* xQueueGetMutexHolder( QueueHandle_t xSemaphore )
	{
	void *pxReturn;

		/* This function is called by xSemaphoreGetMutexHolder(), and should not
		be called directly.  Note:  This is a good way of determining if the
		calling task is the mutex holder, but not a good way of determining the
		identity of the mutex holder, as the holder may change between the
		following critical section exiting and the function returning. */
		taskENTER_CRITICAL();
		{
			if( ( ( Queue_t * ) xSemaphore )->uxQueueType == queueQUEUE_IS_MUTEX )
			{
				pxReturn = ( void * ) ( ( Queue_t * ) xSemaphore )->pxMutexHolder;
			}
			else
			{
				pxReturn = NULL;
			}
		}
		taskEXIT_CRITICAL();

		return pxReturn;
	} /*lint !e818 xSemaphore cannot be a pointer to const because it is a typedef. */

#endif
/*-----------------------------------------------------------*/

#if ( configUSE_RECURSIVE_MUTEXES == 1 )

	BaseType_t xQueueGiveMutexRecursive( QueueHandle_t xMutex )
	{
	BaseType_t xReturn;
	Queue_t * const pxMutex = ( Queue_t * ) xMutex;

		configASSERT( pxMutex );

		/* If this is the task that holds the mutex then pxMutexHolder will not
		change outside of this task.  If this task does not hold the mutex then
		pxMutexHolder can never coincidentally equal the task's handle, and as
		this is the only condition we are interested in it does not matter if
		pxMutexHolder is accessed simultaneously by another task.  Therefore no
		mutual exclusion is required to test the pxMutexHolder variable. */
		if( pxMutex->pxMutexHolder == ( void * ) xTaskGetCurrentTaskHandle() ) /*lint !e961 Not a redundant cast as TaskHandle_t is a typedef. */
		{
			traceGIVE_MUTEX_RECURSIVE( pxMutex );

			/* uxRecursiveCallCount cannot be zero if pxMutexHolder is equal to
			the task handle, therefore no underflow check is required.  Also,
			uxRecursiveCallCount is only modified by the mutex holder, and as
			there can only be one, no mutual exclusion is required to modify the
			uxRecursiveCallCount member. */
			( pxMutex->u.uxRecursiveCallCount )--;

			/* Have we unwound the call count? */
			if( pxMutex->u.uxRecursiveCallCount == ( UBaseType_t ) 0 )
			{
				/* Return the mutex.  This will automatically unblock any other
				task that might be waiting to access the mutex. */
				( void ) xQueueGenericSend( pxMutex, NULL, queueMUTEX_GIVE_BLOCK_TIME, queueSEND_TO_BACK );
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}

			xReturn = pdPASS;
		}
		else
		{
			/* The mutex cannot be given because the calling task is not the
			holder. */
			xReturn = pdFAIL;

			traceGIVE_MUTEX_RECURSIVE_FAILED( pxMutex );
		}

		return xReturn;
	}

#endif /* configUSE_RECURSIVE_MUTEXES */
/*-----------------------------------------------------------*/

#if ( configUSE_RECURSIVE_MUTEXES == 1 )

	BaseType_t xQueueTakeMutexRecursive( QueueHandle_t xMutex, TickType_t xTicksToWait )
	{
	BaseType_t xReturn;
	Queue_t * const pxMutex = ( Queue_t * ) xMutex;

		configASSERT( pxMutex );

		/* Comments regarding mutual exclusion as per those within
		xQueueGiveMutexRecursive(). */

		traceTAKE_MUTEX_RECURSIVE( pxMutex );

		if( pxMutex->pxMutexHolder == ( void * ) xTaskGetCurrentTaskHandle() ) /*lint !e961 Cast is not redundant as TaskHandle_t is a typedef. */
		{
			( pxMutex->u.uxRecursiveCallCount )++;
			xReturn = pdPASS;
		}
		else
		{
			xReturn = xQueueGenericReceive( pxMutex, NULL, xTicksToWait, pdFALSE );

			/* pdPASS will only be returned if the mutex was successfully
			obtained.  The calling task may have entered the Blocked state
			before reaching here. */
			if( xReturn == pdPASS )
			{
				( pxMutex->u.uxRecursiveCallCount )++;
			}
			else
			{
				traceTAKE_MUTEX_RECURSIVE_FAILED( pxMutex );
			}
		}

		return xReturn;
	}

#endif /* configUSE_RECURSIVE_MUTEXES */
/*-----------------------------------------------------------*/
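
/*
 * Usage sketch: the two recursive mutex functions above are normally reached
 * through the xSemaphoreCreateRecursiveMutex() / xSemaphoreTakeRecursive() /
 * xSemaphoreGiveRecursive() macros in semphr.h.  The holder must balance
 * every take with a give before the mutex is actually released.  Illustrative
 * only; the handle name and timeout values below are assumptions, not part of
 * this file.
 *
 *     SemaphoreHandle_t xMutex = xSemaphoreCreateRecursiveMutex();
 *
 *     if( xSemaphoreTakeRecursive( xMutex, pdMS_TO_TICKS( 10 ) ) == pdPASS )
 *     {
 *         // A nested take by the holding task succeeds immediately and
 *         // just increments uxRecursiveCallCount.
 *         xSemaphoreTakeRecursive( xMutex, 0 );
 *
 *         xSemaphoreGiveRecursive( xMutex ); // count 2 -> 1, still held
 *         xSemaphoreGiveRecursive( xMutex ); // count 1 -> 0, released
 *     }
 */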

#if ( configUSE_COUNTING_SEMAPHORES == 1 )

	QueueHandle_t xQueueCreateCountingSemaphore( const UBaseType_t uxMaxCount, const UBaseType_t uxInitialCount )
	{
	QueueHandle_t xHandle;

		configASSERT( uxMaxCount != 0 );
		configASSERT( uxInitialCount <= uxMaxCount );

		xHandle = xQueueGenericCreate( uxMaxCount, queueSEMAPHORE_QUEUE_ITEM_LENGTH, queueQUEUE_TYPE_COUNTING_SEMAPHORE );

		if( xHandle != NULL )
		{
			/* A counting semaphore is a queue of zero-size items, so the
			count is held in uxMessagesWaiting.  Set it to the initial
			count. */
			( ( Queue_t * ) xHandle )->uxMessagesWaiting = uxInitialCount;

			traceCREATE_COUNTING_SEMAPHORE();
		}
		else
		{
			traceCREATE_COUNTING_SEMAPHORE_FAILED();
		}

		configASSERT( xHandle );
		return xHandle;
	}

#endif /* configUSE_COUNTING_SEMAPHORES */
/*-----------------------------------------------------------*/
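
/*
 * Usage sketch: xQueueCreateCountingSemaphore() is normally reached through
 * the xSemaphoreCreateCounting() macro in semphr.h.  Illustrative only; the
 * counts shown are assumptions, not part of this file.
 *
 *     // Up to 5 events can be latched; none are pending initially.
 *     SemaphoreHandle_t xSem = xSemaphoreCreateCounting( 5, 0 );
 *
 *     if( xSem != NULL )
 *     {
 *         xSemaphoreGive( xSem );                 // count 0 -> 1
 *         xSemaphoreTake( xSem, portMAX_DELAY );  // count 1 -> 0
 *     }
 */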

BaseType_t xQueueGenericSend( QueueHandle_t xQueue, const void * const pvItemToQueue, TickType_t xTicksToWait, const BaseType_t xCopyPosition )
{
BaseType_t xEntryTimeSet = pdFALSE, xYieldRequired;
TimeOut_t xTimeOut;
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

	configASSERT( pxQueue );
	configASSERT( !( ( pvItemToQueue == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
	configASSERT( !( ( xCopyPosition == queueOVERWRITE ) && ( pxQueue->uxLength != 1 ) ) );
	#if ( ( INCLUDE_xTaskGetSchedulerState == 1 ) || ( configUSE_TIMERS == 1 ) )
	{
		configASSERT( !( ( xTaskGetSchedulerState() == taskSCHEDULER_SUSPENDED ) && ( xTicksToWait != 0 ) ) );
	}
	#endif

	/* This function relaxes the coding standard somewhat to allow return
	statements within the function itself.  This is done in the interest
	of execution time efficiency. */
	for( ;; )
	{
		taskENTER_CRITICAL();
		{
			/* Is there room on the queue now?  The running task must be the
			highest priority task wanting to access the queue.  If the head
			item in the queue is to be overwritten then it does not matter if
			the queue is full. */
			if( ( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) || ( xCopyPosition == queueOVERWRITE ) )
			{
				traceQUEUE_SEND( pxQueue );
				xYieldRequired = prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition );

				#if ( configUSE_QUEUE_SETS == 1 )
				{
					if( pxQueue->pxQueueSetContainer != NULL )
					{
						if( prvNotifyQueueSetContainer( pxQueue, xCopyPosition ) == pdTRUE )
						{
							/* The queue is a member of a queue set, and
							posting to the queue set caused a higher priority
							task to unblock.  A context switch is required. */
							queueYIELD_IF_USING_PREEMPTION();
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
					else
					{
						/* If there was a task waiting for data to arrive on
						the queue then unblock it now. */
						if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
						{
							if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) == pdTRUE )
							{
								/* The unblocked task has a priority higher
								than our own so yield immediately.  Yes it is
								ok to do this from within the critical
								section - the kernel takes care of that. */
								queueYIELD_IF_USING_PREEMPTION();
							}
							else
							{
								mtCOVERAGE_TEST_MARKER();
							}
						}
						else if( xYieldRequired != pdFALSE )
						{
							/* This path is a special case that will only get
							executed if the task was holding multiple mutexes
							and the mutexes were given back in an order that
							is different to that in which they were taken. */
							queueYIELD_IF_USING_PREEMPTION();
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
				}
				#else /* configUSE_QUEUE_SETS */
				{
					/* If there was a task waiting for data to arrive on the
					queue then unblock it now. */
					if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
					{
						if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) == pdTRUE )
						{
							/* The unblocked task has a priority higher than
							our own so yield immediately.  Yes it is ok to do
							this from within the critical section - the kernel
							takes care of that. */
							queueYIELD_IF_USING_PREEMPTION();
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
					else if( xYieldRequired != pdFALSE )
					{
						/* This path is a special case that will only get
						executed if the task was holding multiple mutexes and
						the mutexes were given back in an order that is
						different to that in which they were taken. */
						queueYIELD_IF_USING_PREEMPTION();
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				#endif /* configUSE_QUEUE_SETS */

				taskEXIT_CRITICAL();
				return pdPASS;
			}
			else
			{
				if( xTicksToWait == ( TickType_t ) 0 )
				{
					/* The queue was full and no block time is specified (or
					the block time has expired) so leave now. */
					taskEXIT_CRITICAL();

					/* Return to the original privilege level before exiting
					the function. */
					traceQUEUE_SEND_FAILED( pxQueue );
					return errQUEUE_FULL;
				}
				else if( xEntryTimeSet == pdFALSE )
				{
					/* The queue was full and a block time was specified so
					configure the timeout structure. */
					vTaskSetTimeOutState( &xTimeOut );
					xEntryTimeSet = pdTRUE;
				}
				else
				{
					/* Entry time was already set. */
					mtCOVERAGE_TEST_MARKER();
				}
			}
		}
		taskEXIT_CRITICAL();

		/* Interrupts and other tasks can send to and receive from the queue
		now the critical section has been exited. */

		vTaskSuspendAll();
		prvLockQueue( pxQueue );

		/* Update the timeout state to see if it has expired yet. */
		if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE )
		{
			if( prvIsQueueFull( pxQueue ) != pdFALSE )
			{
				traceBLOCKING_ON_QUEUE_SEND( pxQueue );
				vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToSend ), xTicksToWait );

				/* Unlocking the queue means queue events can affect the
				event list.  It is possible that interrupts occurring now
				remove this task from the event list again - but as the
				scheduler is suspended the task will go onto the pending
				ready list instead of the actual ready list. */
				prvUnlockQueue( pxQueue );

				/* Resuming the scheduler will move tasks from the pending
				ready list into the ready list - so it is feasible that this
				task is already in a ready list before it yields - in which
				case the yield will not cause a context switch unless there
				is also a higher priority task in the pending ready list. */
				if( xTaskResumeAll() == pdFALSE )
				{
					portYIELD_WITHIN_API();
				}
			}
			else
			{
				/* Try again. */
				prvUnlockQueue( pxQueue );
				( void ) xTaskResumeAll();
			}
		}
		else
		{
			/* The timeout has expired. */
			prvUnlockQueue( pxQueue );
			( void ) xTaskResumeAll();

			/* Return to the original privilege level before exiting the
			function. */
			traceQUEUE_SEND_FAILED( pxQueue );
			return errQUEUE_FULL;
		}
	}
}
/*-----------------------------------------------------------*/
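
/*
 * Usage sketch: application code normally reaches xQueueGenericSend() through
 * the xQueueSend() / xQueueSendToBack() / xQueueSendToFront() /
 * xQueueOverwrite() macros in queue.h, which fix the xCopyPosition argument.
 * Illustrative only; the message type, queue length and timeout are
 * assumptions, not part of this file.
 *
 *     typedef struct { uint8_t ucId; uint32_t ulValue; } Msg_t;
 *
 *     QueueHandle_t xMsgQueue = xQueueCreate( 8, sizeof( Msg_t ) );
 *     Msg_t xMsg = { 1, 42 };
 *
 *     // Block for up to 100ms if the queue is full.
 *     if( xQueueSend( xMsgQueue, &xMsg, pdMS_TO_TICKS( 100 ) ) != pdPASS )
 *     {
 *         // errQUEUE_FULL: the block time expired with no space available.
 *     }
 */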

#if ( configUSE_ALTERNATIVE_API == 1 )

	BaseType_t xQueueAltGenericSend( QueueHandle_t xQueue, const void * const pvItemToQueue, TickType_t xTicksToWait, BaseType_t xCopyPosition )
	{
	BaseType_t xEntryTimeSet = pdFALSE;
	TimeOut_t xTimeOut;
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		configASSERT( pxQueue );
		configASSERT( !( ( pvItemToQueue == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );

		for( ;; )
		{
			taskENTER_CRITICAL();
			{
				/* Is there room on the queue now?  To be running we must be
				the highest priority task wanting to access the queue. */
				if( pxQueue->uxMessagesWaiting < pxQueue->uxLength )
				{
					traceQUEUE_SEND( pxQueue );
					prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition );

					/* If there was a task waiting for data to arrive on the
					queue then unblock it now. */
					if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
					{
						if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) == pdTRUE )
						{
							/* The unblocked task has a priority higher than
							our own so yield immediately. */
							portYIELD_WITHIN_API();
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}

					taskEXIT_CRITICAL();
					return pdPASS;
				}
				else
				{
					if( xTicksToWait == ( TickType_t ) 0 )
					{
						taskEXIT_CRITICAL();
						return errQUEUE_FULL;
					}
					else if( xEntryTimeSet == pdFALSE )
					{
						vTaskSetTimeOutState( &xTimeOut );
						xEntryTimeSet = pdTRUE;
					}
				}
			}
			taskEXIT_CRITICAL();

			taskENTER_CRITICAL();
			{
				if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE )
				{
					if( prvIsQueueFull( pxQueue ) != pdFALSE )
					{
						traceBLOCKING_ON_QUEUE_SEND( pxQueue );
						vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToSend ), xTicksToWait );
						portYIELD_WITHIN_API();
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					taskEXIT_CRITICAL();
					traceQUEUE_SEND_FAILED( pxQueue );
					return errQUEUE_FULL;
				}
			}
			taskEXIT_CRITICAL();
		}
	}

#endif /* configUSE_ALTERNATIVE_API */
/*-----------------------------------------------------------*/

#if ( configUSE_ALTERNATIVE_API == 1 )

	BaseType_t xQueueAltGenericReceive( QueueHandle_t xQueue, void * const pvBuffer, TickType_t xTicksToWait, BaseType_t xJustPeeking )
	{
	BaseType_t xEntryTimeSet = pdFALSE;
	TimeOut_t xTimeOut;
	int8_t *pcOriginalReadPosition;
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		configASSERT( pxQueue );
		configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );

		for( ;; )
		{
			taskENTER_CRITICAL();
			{
				if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
				{
					/* Remember our read position in case we are just
					peeking. */
					pcOriginalReadPosition = pxQueue->u.pcReadFrom;

					prvCopyDataFromQueue( pxQueue, pvBuffer );

					if( xJustPeeking == pdFALSE )
					{
						traceQUEUE_RECEIVE( pxQueue );

						/* Data is actually being removed (not just peeked). */
						--( pxQueue->uxMessagesWaiting );

						#if ( configUSE_MUTEXES == 1 )
						{
							if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
							{
								/* Record the information required to
								implement priority inheritance should it
								become necessary. */
								pxQueue->pxMutexHolder = ( int8_t * ) xTaskGetCurrentTaskHandle();
							}
							else
							{
								mtCOVERAGE_TEST_MARKER();
							}
						}
						#endif

						if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
						{
							if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) == pdTRUE )
							{
								portYIELD_WITHIN_API();
							}
							else
							{
								mtCOVERAGE_TEST_MARKER();
							}
						}
					}
					else
					{
						traceQUEUE_PEEK( pxQueue );

						/* The data is not being removed, so reset our read
						pointer. */
						pxQueue->u.pcReadFrom = pcOriginalReadPosition;

						/* The data is being left in the queue, so see if
						there are any other tasks waiting for the data. */
						if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
						{
							/* Tasks that are removed from the event list will
							get added to the pending ready list as the
							scheduler is still suspended. */
							if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
							{
								/* The task waiting has a higher priority than
								this task. */
								portYIELD_WITHIN_API();
							}
							else
							{
								mtCOVERAGE_TEST_MARKER();
							}
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}

					taskEXIT_CRITICAL();
					return pdPASS;
				}
				else
				{
					if( xTicksToWait == ( TickType_t ) 0 )
					{
						taskEXIT_CRITICAL();
						traceQUEUE_RECEIVE_FAILED( pxQueue );
						return errQUEUE_EMPTY;
					}
					else if( xEntryTimeSet == pdFALSE )
					{
						vTaskSetTimeOutState( &xTimeOut );
						xEntryTimeSet = pdTRUE;
					}
				}
			}
			taskEXIT_CRITICAL();

			taskENTER_CRITICAL();
			{
				if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE )
				{
					if( prvIsQueueEmpty( pxQueue ) != pdFALSE )
					{
						traceBLOCKING_ON_QUEUE_RECEIVE( pxQueue );

						#if ( configUSE_MUTEXES == 1 )
						{
							if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
							{
								taskENTER_CRITICAL();
								{
									vTaskPriorityInherit( ( void * ) pxQueue->pxMutexHolder );
								}
								taskEXIT_CRITICAL();
							}
							else
							{
								mtCOVERAGE_TEST_MARKER();
							}
						}
						#endif

						vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToReceive ), xTicksToWait );
						portYIELD_WITHIN_API();
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					taskEXIT_CRITICAL();
					traceQUEUE_RECEIVE_FAILED( pxQueue );
					return errQUEUE_EMPTY;
				}
			}
			taskEXIT_CRITICAL();
		}
	}

#endif /* configUSE_ALTERNATIVE_API */
/*-----------------------------------------------------------*/
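
/*
 * Usage sketch for the interrupt-safe send below: never call xQueueSend()
 * from an ISR; use xQueueSendFromISR() (a queue.h macro over
 * xQueueGenericSendFromISR()) and honour the woken flag on exit.
 * Illustrative only; the handler name, queue handle and data source are
 * assumptions, not part of this file.
 *
 *     void vExampleIRQHandler( void )
 *     {
 *     BaseType_t xHigherPriorityTaskWoken = pdFALSE;
 *     uint32_t ulData = 0; // e.g. read from a peripheral register
 *
 *         xQueueSendFromISR( xISRQueue, &ulData, &xHigherPriorityTaskWoken );
 *
 *         // Request a context switch on interrupt exit if posting to the
 *         // queue unblocked a task of higher priority than the one that
 *         // was interrupted.
 *         portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
 *     }
 */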

BaseType_t xQueueGenericSendFromISR( QueueHandle_t xQueue, const void * const pvItemToQueue, BaseType_t * const pxHigherPriorityTaskWoken, const BaseType_t xCopyPosition )
{
BaseType_t xReturn;
UBaseType_t uxSavedInterruptStatus;
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

	configASSERT( pxQueue );
	configASSERT( !( ( pvItemToQueue == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
	configASSERT( !( ( xCopyPosition == queueOVERWRITE ) && ( pxQueue->uxLength != 1 ) ) );

	/* RTOS ports that support interrupt nesting have the concept of a maximum
	system call (or maximum API call) interrupt priority.  Interrupts that are
	above the maximum system call priority are kept permanently enabled, even
	when the RTOS kernel is in a critical section, but cannot make any calls to
	FreeRTOS API functions.  If configASSERT() is defined in FreeRTOSConfig.h
	then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion
	failure if a FreeRTOS API function is called from an interrupt that has
	been assigned a priority above the configured maximum system call priority.
	Only FreeRTOS functions that end in FromISR can be called from interrupts
	that have been assigned a priority at or (logically) below the maximum
	system call interrupt priority.  FreeRTOS maintains a separate interrupt
	safe API to ensure interrupt entry is as fast and as simple as possible.
	More information (albeit Cortex-M specific) is provided on the following
	link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */
	portASSERT_IF_INTERRUPT_PRIORITY_INVALID();

	/* Similar to xQueueGenericSend, except without blocking if there is no
	room in the queue.  Also don't directly wake a task that was blocked on a
	queue read, instead return a flag to say whether a context switch is
	required or not (i.e. has a task with a higher priority than us been woken
	by this post). */
	uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
	{
		if( ( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) || ( xCopyPosition == queueOVERWRITE ) )
		{
			traceQUEUE_SEND_FROM_ISR( pxQueue );

			/* Semaphores use xQueueGiveFromISR(), so pxQueue will not be a
			semaphore or mutex.  That means prvCopyDataToQueue() cannot result
			in a task disinheriting a priority and prvCopyDataToQueue() can be
			called here even though the disinherit function does not check if
			the scheduler is suspended before accessing the ready lists. */
			( void ) prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition );

			/* The event list is not altered if the queue is locked.  This
			will be done when the queue is unlocked later. */
			if( pxQueue->xTxLock == queueUNLOCKED )
			{
				#if ( configUSE_QUEUE_SETS == 1 )
				{
					if( pxQueue->pxQueueSetContainer != NULL )
					{
						if( prvNotifyQueueSetContainer( pxQueue, xCopyPosition ) == pdTRUE )
						{
							/* The queue is a member of a queue set, and
							posting to the queue set caused a higher priority
							task to unblock.  A context switch is required. */
							if( pxHigherPriorityTaskWoken != NULL )
							{
								*pxHigherPriorityTaskWoken = pdTRUE;
							}
							else
							{
								mtCOVERAGE_TEST_MARKER();
							}
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
					else
					{
						if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
						{
							if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
							{
								/* The task waiting has a higher priority so
								record that a context switch is required. */
								if( pxHigherPriorityTaskWoken != NULL )
								{
									*pxHigherPriorityTaskWoken = pdTRUE;
								}
								else
								{
									mtCOVERAGE_TEST_MARKER();
								}
							}
							else
							{
								mtCOVERAGE_TEST_MARKER();
							}
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
				}
				#else /* configUSE_QUEUE_SETS */
				{
					if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
					{
						if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
						{
							/* The task waiting has a higher priority so
							record that a context switch is required. */
							if( pxHigherPriorityTaskWoken != NULL )
							{
								*pxHigherPriorityTaskWoken = pdTRUE;
							}
							else
							{
								mtCOVERAGE_TEST_MARKER();
							}
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				#endif /* configUSE_QUEUE_SETS */
			}
			else
			{
				/* Increment the lock count so the task that unlocks the queue
				knows that data was posted while it was locked. */
				++( pxQueue->xTxLock );
			}

			xReturn = pdPASS;
		}
		else
		{
			traceQUEUE_SEND_FROM_ISR_FAILED( pxQueue );
			xReturn = errQUEUE_FULL;
dflet 3:a8c249046181 1199 }
dflet 3:a8c249046181 1200 }
dflet 3:a8c249046181 1201 portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );
dflet 3:a8c249046181 1202
dflet 3:a8c249046181 1203 return xReturn;
dflet 3:a8c249046181 1204 }
dflet 3:a8c249046181 1205 /*-----------------------------------------------------------*/
dflet 3:a8c249046181 1206
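The send-from-ISR path above is typically reached through the xQueueSendFromISR() macro, which calls xQueueGenericSendFromISR() with queueSEND_TO_BACK. A minimal caller sketch, assuming a queue created elsewhere with xQueueCreate(); the handler name, xRxQueue, and cUART_ReadByte() are illustrative, not part of this file:

```c
/* Hypothetical UART receive interrupt: post each received byte to a
queue that a task drains.  Assumes xRxQueue was created at init time. */
void vUARTRxInterruptHandler( void )
{
BaseType_t xHigherPriorityTaskWoken = pdFALSE;
char cRxByte;

	cRxByte = cUART_ReadByte();   /* illustrative hardware access */

	/* Never blocks; returns errQUEUE_FULL if the queue is full. */
	xQueueSendFromISR( xRxQueue, &cRxByte, &xHigherPriorityTaskWoken );

	/* If posting unblocked a task with a higher priority than the one
	interrupted, request a context switch on interrupt exit. */
	portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}
```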
BaseType_t xQueueGiveFromISR( QueueHandle_t xQueue, BaseType_t * const pxHigherPriorityTaskWoken )
{
BaseType_t xReturn;
UBaseType_t uxSavedInterruptStatus;
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

	/* Similar to xQueueGenericSendFromISR() but used with semaphores where the
	item size is 0.  Don't directly wake a task that was blocked on a queue
	read, instead return a flag to say whether a context switch is required or
	not (i.e. has a task with a higher priority than us been woken by this
	post). */

	configASSERT( pxQueue );

	/* xQueueGenericSendFromISR() should be used instead of xQueueGiveFromISR()
	if the item size is not 0. */
	configASSERT( pxQueue->uxItemSize == 0 );

	/* Normally a mutex would not be given from an interrupt, and doing so is
	definitely wrong if there is a mutex holder, as priority inheritance makes
	no sense for interrupts, only tasks. */
	configASSERT( !( ( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX ) && ( pxQueue->pxMutexHolder != NULL ) ) );

	/* RTOS ports that support interrupt nesting have the concept of a maximum
	system call (or maximum API call) interrupt priority.  Interrupts that are
	above the maximum system call priority are kept permanently enabled, even
	when the RTOS kernel is in a critical section, but cannot make any calls to
	FreeRTOS API functions.  If configASSERT() is defined in FreeRTOSConfig.h
	then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion
	failure if a FreeRTOS API function is called from an interrupt that has been
	assigned a priority above the configured maximum system call priority.
	Only FreeRTOS functions that end in FromISR can be called from interrupts
	that have been assigned a priority at or (logically) below the maximum
	system call interrupt priority.  FreeRTOS maintains a separate interrupt
	safe API to ensure interrupt entry is as fast and as simple as possible.
	More information (albeit Cortex-M specific) is provided on the following
	link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */
	portASSERT_IF_INTERRUPT_PRIORITY_INVALID();

	uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
	{
		/* When the queue is used to implement a semaphore no data is ever
		moved through the queue but it is still valid to see if the queue 'has
		space'. */
		if( pxQueue->uxMessagesWaiting < pxQueue->uxLength )
		{
			traceQUEUE_SEND_FROM_ISR( pxQueue );

			/* A task can only have an inherited priority if it is a mutex
			holder - and if there is a mutex holder then the mutex cannot be
			given from an ISR.  As this is the ISR version of the function it
			can be assumed there is no mutex holder and no need to determine if
			priority disinheritance is needed.  Simply increase the count of
			messages (semaphores) available. */
			++( pxQueue->uxMessagesWaiting );

			/* The event list is not altered if the queue is locked.  This will
			be done when the queue is unlocked later. */
			if( pxQueue->xTxLock == queueUNLOCKED )
			{
				#if ( configUSE_QUEUE_SETS == 1 )
				{
					if( pxQueue->pxQueueSetContainer != NULL )
					{
						if( prvNotifyQueueSetContainer( pxQueue, queueSEND_TO_BACK ) == pdTRUE )
						{
							/* The semaphore is a member of a queue set, and
							posting to the queue set caused a higher priority
							task to unblock.  A context switch is required. */
							if( pxHigherPriorityTaskWoken != NULL )
							{
								*pxHigherPriorityTaskWoken = pdTRUE;
							}
							else
							{
								mtCOVERAGE_TEST_MARKER();
							}
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
					else
					{
						if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
						{
							if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
							{
								/* The task waiting has a higher priority so
								record that a context switch is required. */
								if( pxHigherPriorityTaskWoken != NULL )
								{
									*pxHigherPriorityTaskWoken = pdTRUE;
								}
								else
								{
									mtCOVERAGE_TEST_MARKER();
								}
							}
							else
							{
								mtCOVERAGE_TEST_MARKER();
							}
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
				}
				#else /* configUSE_QUEUE_SETS */
				{
					if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
					{
						if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
						{
							/* The task waiting has a higher priority so record that a
							context switch is required. */
							if( pxHigherPriorityTaskWoken != NULL )
							{
								*pxHigherPriorityTaskWoken = pdTRUE;
							}
							else
							{
								mtCOVERAGE_TEST_MARKER();
							}
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				#endif /* configUSE_QUEUE_SETS */
			}
			else
			{
				/* Increment the lock count so the task that unlocks the queue
				knows that data was posted while it was locked. */
				++( pxQueue->xTxLock );
			}

			xReturn = pdPASS;
		}
		else
		{
			traceQUEUE_SEND_FROM_ISR_FAILED( pxQueue );
			xReturn = errQUEUE_FULL;
		}
	}
	portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );

	return xReturn;
}
/*-----------------------------------------------------------*/

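In application code xQueueGiveFromISR() is normally reached through the semaphore API rather than called directly. A sketch, assuming a binary semaphore created elsewhere with xSemaphoreCreateBinary(); the handler name and xBinarySemaphore are illustrative:

```c
/* Hypothetical interrupt handler: gives a binary semaphore to signal a
deferred-processing task.  xSemaphoreGiveFromISR() is a thin wrapper
over xQueueGiveFromISR() - item size 0, count incremented only. */
void vExampleInterruptHandler( void )
{
BaseType_t xHigherPriorityTaskWoken = pdFALSE;

	xSemaphoreGiveFromISR( xBinarySemaphore, &xHigherPriorityTaskWoken );

	/* pdTRUE here means a task of higher priority than the interrupted
	one is now ready, so yield as the ISR exits. */
	portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}
```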
BaseType_t xQueueGenericReceive( QueueHandle_t xQueue, void * const pvBuffer, TickType_t xTicksToWait, const BaseType_t xJustPeeking )
{
BaseType_t xEntryTimeSet = pdFALSE;
TimeOut_t xTimeOut;
int8_t *pcOriginalReadPosition;
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

	configASSERT( pxQueue );
	configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
	#if ( ( INCLUDE_xTaskGetSchedulerState == 1 ) || ( configUSE_TIMERS == 1 ) )
	{
		configASSERT( !( ( xTaskGetSchedulerState() == taskSCHEDULER_SUSPENDED ) && ( xTicksToWait != 0 ) ) );
	}
	#endif

	/* This function relaxes the coding standard somewhat to allow return
	statements within the function itself.  This is done in the interest
	of execution time efficiency. */

	for( ;; )
	{
		taskENTER_CRITICAL();
		{
			/* Is there data in the queue now?  To be running the calling task
			must be the highest priority task wanting to access the queue. */
			if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
			{
				/* Remember the read position in case the queue is only being
				peeked. */
				pcOriginalReadPosition = pxQueue->u.pcReadFrom;

				prvCopyDataFromQueue( pxQueue, pvBuffer );

				if( xJustPeeking == pdFALSE )
				{
					traceQUEUE_RECEIVE( pxQueue );

					/* Actually removing data, not just peeking. */
					--( pxQueue->uxMessagesWaiting );

					#if ( configUSE_MUTEXES == 1 )
					{
						if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
						{
							/* Record the information required to implement
							priority inheritance should it become necessary. */
							pxQueue->pxMutexHolder = ( int8_t * ) pvTaskIncrementMutexHeldCount(); /*lint !e961 Cast is not redundant as TaskHandle_t is a typedef. */
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
					#endif /* configUSE_MUTEXES */

					if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
					{
						if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) == pdTRUE )
						{
							queueYIELD_IF_USING_PREEMPTION();
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					traceQUEUE_PEEK( pxQueue );

					/* The data is not being removed, so reset the read
					pointer. */
					pxQueue->u.pcReadFrom = pcOriginalReadPosition;

					/* The data is being left in the queue, so see if there are
					any other tasks waiting for the data. */
					if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
					{
						if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
						{
							/* The task waiting has a higher priority than this task. */
							queueYIELD_IF_USING_PREEMPTION();
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}

				taskEXIT_CRITICAL();
				return pdPASS;
			}
			else
			{
				if( xTicksToWait == ( TickType_t ) 0 )
				{
					/* The queue was empty and no block time is specified (or
					the block time has expired) so leave now. */
					taskEXIT_CRITICAL();
					traceQUEUE_RECEIVE_FAILED( pxQueue );
					return errQUEUE_EMPTY;
				}
				else if( xEntryTimeSet == pdFALSE )
				{
					/* The queue was empty and a block time was specified so
					configure the timeout structure. */
					vTaskSetTimeOutState( &xTimeOut );
					xEntryTimeSet = pdTRUE;
				}
				else
				{
					/* Entry time was already set. */
					mtCOVERAGE_TEST_MARKER();
				}
			}
		}
		taskEXIT_CRITICAL();

		/* Interrupts and other tasks can send to and receive from the queue
		now the critical section has been exited. */

		vTaskSuspendAll();
		prvLockQueue( pxQueue );

		/* Update the timeout state to see if it has expired yet. */
		if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE )
		{
			if( prvIsQueueEmpty( pxQueue ) != pdFALSE )
			{
				traceBLOCKING_ON_QUEUE_RECEIVE( pxQueue );

				#if ( configUSE_MUTEXES == 1 )
				{
					if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
					{
						taskENTER_CRITICAL();
						{
							vTaskPriorityInherit( ( void * ) pxQueue->pxMutexHolder );
						}
						taskEXIT_CRITICAL();
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				#endif

				vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToReceive ), xTicksToWait );
				prvUnlockQueue( pxQueue );
				if( xTaskResumeAll() == pdFALSE )
				{
					portYIELD_WITHIN_API();
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				/* Try again. */
				prvUnlockQueue( pxQueue );
				( void ) xTaskResumeAll();
			}
		}
		else
		{
			prvUnlockQueue( pxQueue );
			( void ) xTaskResumeAll();
			traceQUEUE_RECEIVE_FAILED( pxQueue );
			return errQUEUE_EMPTY;
		}
	}
}
/*-----------------------------------------------------------*/

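The peek-vs-receive distinction above comes down to the read pointer: a peek copies the item out and then restores u.pcReadFrom, while a receive leaves the pointer advanced and decrements uxMessagesWaiting. A minimal stand-alone model of that idea, not kernel code - the MiniQueue_t type and function names are invented for illustration:

```c
#include <assert.h>

/* Toy model of the queue storage: a circular buffer of one-byte items
with a read index and an item count, mirroring u.pcReadFrom and
uxMessagesWaiting in the real Queue_t. */
typedef struct
{
	char cStorage[ 4 ];   /* room for four one-byte items */
	int iReadFrom;        /* index of the last item read */
	int iWaiting;         /* number of items currently queued */
} MiniQueue_t;

/* Copy the next item out, advancing the read index first - the same
order prvCopyDataFromQueue() uses. */
static char prvCopyOut( MiniQueue_t *pxQ )
{
	pxQ->iReadFrom = ( pxQ->iReadFrom + 1 ) % 4;
	return pxQ->cStorage[ pxQ->iReadFrom ];
}

/* Peek: save the read index, copy, then restore it so the item is
still "in" the queue and the count is untouched. */
char cMiniPeek( MiniQueue_t *pxQ )
{
	int iOriginalReadPosition = pxQ->iReadFrom;
	char cItem = prvCopyOut( pxQ );
	pxQ->iReadFrom = iOriginalReadPosition;   /* data left in place */
	return cItem;
}

/* Receive: copy and decrement the count, leaving the read index
advanced - the item is consumed. */
char cMiniReceive( MiniQueue_t *pxQ )
{
	char cItem = prvCopyOut( pxQ );
	--( pxQ->iWaiting );
	return cItem;
}
```

Peeking twice therefore returns the same item, while two receives return consecutive items, which is exactly the behaviour the save/restore of pcOriginalReadPosition buys in xQueueGenericReceive().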
BaseType_t xQueueReceiveFromISR( QueueHandle_t xQueue, void * const pvBuffer, BaseType_t * const pxHigherPriorityTaskWoken )
{
BaseType_t xReturn;
UBaseType_t uxSavedInterruptStatus;
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

	configASSERT( pxQueue );
	configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );

	/* RTOS ports that support interrupt nesting have the concept of a maximum
	system call (or maximum API call) interrupt priority.  Interrupts that are
	above the maximum system call priority are kept permanently enabled, even
	when the RTOS kernel is in a critical section, but cannot make any calls to
	FreeRTOS API functions.  If configASSERT() is defined in FreeRTOSConfig.h
	then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion
	failure if a FreeRTOS API function is called from an interrupt that has been
	assigned a priority above the configured maximum system call priority.
	Only FreeRTOS functions that end in FromISR can be called from interrupts
	that have been assigned a priority at or (logically) below the maximum
	system call interrupt priority.  FreeRTOS maintains a separate interrupt
	safe API to ensure interrupt entry is as fast and as simple as possible.
	More information (albeit Cortex-M specific) is provided on the following
	link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */
	portASSERT_IF_INTERRUPT_PRIORITY_INVALID();

	uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
	{
		/* Cannot block in an ISR, so check there is data available. */
		if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
		{
			traceQUEUE_RECEIVE_FROM_ISR( pxQueue );

			prvCopyDataFromQueue( pxQueue, pvBuffer );
			--( pxQueue->uxMessagesWaiting );

			/* If the queue is locked the event list will not be modified.
			Instead update the lock count so the task that unlocks the queue
			will know that an ISR has removed data while the queue was
			locked. */
			if( pxQueue->xRxLock == queueUNLOCKED )
			{
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
				{
					if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
					{
						/* The task waiting has a higher priority than us so
						force a context switch. */
						if( pxHigherPriorityTaskWoken != NULL )
						{
							*pxHigherPriorityTaskWoken = pdTRUE;
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				/* Increment the lock count so the task that unlocks the queue
				knows that data was removed while it was locked. */
				++( pxQueue->xRxLock );
			}

			xReturn = pdPASS;
		}
		else
		{
			xReturn = pdFAIL;
			traceQUEUE_RECEIVE_FROM_ISR_FAILED( pxQueue );
		}
	}
	portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );

	return xReturn;
}
/*-----------------------------------------------------------*/

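Because xQueueReceiveFromISR() returns pdFAIL immediately when the queue is empty, an ISR can drain all pending items in a loop without ever blocking. A sketch under invented names - xCommandQueue, Command_t and vStartNextTransfer() are illustrative:

```c
/* Hypothetical transfer-complete interrupt: pull queued commands posted
by tasks and start the next hardware transfer for each. */
void vTransferCompleteHandler( void )
{
BaseType_t xHigherPriorityTaskWoken = pdFALSE;
Command_t xCommand;

	/* Loop until the queue is empty; each successful receive consumes
	one item, exactly as in the function above. */
	while( xQueueReceiveFromISR( xCommandQueue, &xCommand, &xHigherPriorityTaskWoken ) == pdPASS )
	{
		vStartNextTransfer( &xCommand );   /* illustrative */
	}

	portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}
```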
BaseType_t xQueuePeekFromISR( QueueHandle_t xQueue, void * const pvBuffer )
{
BaseType_t xReturn;
UBaseType_t uxSavedInterruptStatus;
int8_t *pcOriginalReadPosition;
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

	configASSERT( pxQueue );
	configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) );
	configASSERT( pxQueue->uxItemSize != 0 ); /* Can't peek a semaphore. */

	/* RTOS ports that support interrupt nesting have the concept of a maximum
	system call (or maximum API call) interrupt priority.  Interrupts that are
	above the maximum system call priority are kept permanently enabled, even
	when the RTOS kernel is in a critical section, but cannot make any calls to
	FreeRTOS API functions.  If configASSERT() is defined in FreeRTOSConfig.h
	then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion
	failure if a FreeRTOS API function is called from an interrupt that has been
	assigned a priority above the configured maximum system call priority.
	Only FreeRTOS functions that end in FromISR can be called from interrupts
	that have been assigned a priority at or (logically) below the maximum
	system call interrupt priority.  FreeRTOS maintains a separate interrupt
	safe API to ensure interrupt entry is as fast and as simple as possible.
	More information (albeit Cortex-M specific) is provided on the following
	link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */
	portASSERT_IF_INTERRUPT_PRIORITY_INVALID();

	uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
	{
		/* Cannot block in an ISR, so check there is data available. */
		if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
		{
			traceQUEUE_PEEK_FROM_ISR( pxQueue );

			/* Remember the read position so it can be reset as nothing is
			actually being removed from the queue. */
			pcOriginalReadPosition = pxQueue->u.pcReadFrom;
			prvCopyDataFromQueue( pxQueue, pvBuffer );
			pxQueue->u.pcReadFrom = pcOriginalReadPosition;

			xReturn = pdPASS;
		}
		else
		{
			xReturn = pdFAIL;
			traceQUEUE_PEEK_FROM_ISR_FAILED( pxQueue );
		}
	}
	portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );

	return xReturn;
}
/*-----------------------------------------------------------*/

dflet 3:a8c249046181 1696 UBaseType_t uxQueueMessagesWaiting( const QueueHandle_t xQueue )
dflet 3:a8c249046181 1697 {
dflet 3:a8c249046181 1698 UBaseType_t uxReturn;
dflet 3:a8c249046181 1699
dflet 3:a8c249046181 1700 configASSERT( xQueue );
dflet 3:a8c249046181 1701
dflet 3:a8c249046181 1702 taskENTER_CRITICAL();
dflet 3:a8c249046181 1703 {
dflet 3:a8c249046181 1704 uxReturn = ( ( Queue_t * ) xQueue )->uxMessagesWaiting;
dflet 3:a8c249046181 1705 }
dflet 3:a8c249046181 1706 taskEXIT_CRITICAL();
dflet 3:a8c249046181 1707
dflet 3:a8c249046181 1708 return uxReturn;
dflet 3:a8c249046181 1709 } /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */
dflet 3:a8c249046181 1710 /*-----------------------------------------------------------*/
dflet 3:a8c249046181 1711
dflet 3:a8c249046181 1712 UBaseType_t uxQueueSpacesAvailable( const QueueHandle_t xQueue )
dflet 3:a8c249046181 1713 {
dflet 3:a8c249046181 1714 UBaseType_t uxReturn;
dflet 3:a8c249046181 1715 Queue_t *pxQueue;
dflet 3:a8c249046181 1716
dflet 3:a8c249046181 1717 pxQueue = ( Queue_t * ) xQueue;
dflet 3:a8c249046181 1718 configASSERT( pxQueue );

	taskENTER_CRITICAL();
	{
		uxReturn = pxQueue->uxLength - pxQueue->uxMessagesWaiting;
	}
	taskEXIT_CRITICAL();

	return uxReturn;
} /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */
/*-----------------------------------------------------------*/

UBaseType_t uxQueueMessagesWaitingFromISR( const QueueHandle_t xQueue )
{
UBaseType_t uxReturn;

	configASSERT( xQueue );

	uxReturn = ( ( Queue_t * ) xQueue )->uxMessagesWaiting;

	return uxReturn;
} /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */
/*-----------------------------------------------------------*/

void vQueueDelete( QueueHandle_t xQueue )
{
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

	configASSERT( pxQueue );

	traceQUEUE_DELETE( pxQueue );
	#if ( configQUEUE_REGISTRY_SIZE > 0 )
	{
		vQueueUnregisterQueue( pxQueue );
	}
	#endif
	vPortFree( pxQueue );
}
/*-----------------------------------------------------------*/

#if ( configUSE_TRACE_FACILITY == 1 )

	UBaseType_t uxQueueGetQueueNumber( QueueHandle_t xQueue )
	{
		return ( ( Queue_t * ) xQueue )->uxQueueNumber;
	}

#endif /* configUSE_TRACE_FACILITY */
/*-----------------------------------------------------------*/

#if ( configUSE_TRACE_FACILITY == 1 )

	void vQueueSetQueueNumber( QueueHandle_t xQueue, UBaseType_t uxQueueNumber )
	{
		( ( Queue_t * ) xQueue )->uxQueueNumber = uxQueueNumber;
	}

#endif /* configUSE_TRACE_FACILITY */
/*-----------------------------------------------------------*/

#if ( configUSE_TRACE_FACILITY == 1 )

	uint8_t ucQueueGetQueueType( QueueHandle_t xQueue )
	{
		return ( ( Queue_t * ) xQueue )->ucQueueType;
	}

#endif /* configUSE_TRACE_FACILITY */
/*-----------------------------------------------------------*/

static BaseType_t prvCopyDataToQueue( Queue_t * const pxQueue, const void *pvItemToQueue, const BaseType_t xPosition )
{
BaseType_t xReturn = pdFALSE;

	if( pxQueue->uxItemSize == ( UBaseType_t ) 0 )
	{
		#if ( configUSE_MUTEXES == 1 )
		{
			if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX )
			{
				/* The mutex is no longer being held. */
				xReturn = xTaskPriorityDisinherit( ( void * ) pxQueue->pxMutexHolder );
				pxQueue->pxMutexHolder = NULL;
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}
		#endif /* configUSE_MUTEXES */
	}
	else if( xPosition == queueSEND_TO_BACK )
	{
		( void ) memcpy( ( void * ) pxQueue->pcWriteTo, pvItemToQueue, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 !e418 MISRA exception as the casts are only redundant for some ports, plus previous logic ensures a null pointer can only be passed to memcpy() if the copy size is 0. */
		pxQueue->pcWriteTo += pxQueue->uxItemSize;
		if( pxQueue->pcWriteTo >= pxQueue->pcTail ) /*lint !e946 MISRA exception justified as comparison of pointers is the cleanest solution. */
		{
			pxQueue->pcWriteTo = pxQueue->pcHead;
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}
	}
	else
	{
		( void ) memcpy( ( void * ) pxQueue->u.pcReadFrom, pvItemToQueue, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 MISRA exception as the casts are only redundant for some ports. */
		pxQueue->u.pcReadFrom -= pxQueue->uxItemSize;
		if( pxQueue->u.pcReadFrom < pxQueue->pcHead ) /*lint !e946 MISRA exception justified as comparison of pointers is the cleanest solution. */
		{
			pxQueue->u.pcReadFrom = ( pxQueue->pcTail - pxQueue->uxItemSize );
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}

		if( xPosition == queueOVERWRITE )
		{
			if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
			{
				/* An item is not being added but overwritten, so subtract
				one from the recorded number of items in the queue so when
				one is added again below the number of recorded items remains
				correct. */
				--( pxQueue->uxMessagesWaiting );
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}
	}

	++( pxQueue->uxMessagesWaiting );

	return xReturn;
}
/*-----------------------------------------------------------*/

static void prvCopyDataFromQueue( Queue_t * const pxQueue, void * const pvBuffer )
{
	if( pxQueue->uxItemSize != ( UBaseType_t ) 0 )
	{
		pxQueue->u.pcReadFrom += pxQueue->uxItemSize;
		if( pxQueue->u.pcReadFrom >= pxQueue->pcTail ) /*lint !e946 MISRA exception justified as use of the relational operator is the cleanest solution. */
		{
			pxQueue->u.pcReadFrom = pxQueue->pcHead;
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}
		( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 !e418 MISRA exception as the casts are only redundant for some ports.  Also previous logic ensures a null pointer can only be passed to memcpy() when the count is 0. */
	}
}
/*-----------------------------------------------------------*/

static void prvUnlockQueue( Queue_t * const pxQueue )
{
	/* THIS FUNCTION MUST BE CALLED WITH THE SCHEDULER SUSPENDED. */

	/* The lock counts contain the number of extra data items placed or
	removed from the queue while the queue was locked.  When a queue is
	locked items can be added or removed, but the event lists cannot be
	updated. */
	taskENTER_CRITICAL();
	{
		/* See if data was added to the queue while it was locked. */
		while( pxQueue->xTxLock > queueLOCKED_UNMODIFIED )
		{
			/* Data was posted while the queue was locked.  Are any tasks
			blocked waiting for data to become available? */
			#if ( configUSE_QUEUE_SETS == 1 )
			{
				if( pxQueue->pxQueueSetContainer != NULL )
				{
					if( prvNotifyQueueSetContainer( pxQueue, queueSEND_TO_BACK ) == pdTRUE )
					{
						/* The queue is a member of a queue set, and posting to
						the queue set caused a higher priority task to unblock.
						A context switch is required. */
						vTaskMissedYield();
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					/* Tasks that are removed from the event list will get added to
					the pending ready list as the scheduler is still suspended. */
					if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
					{
						if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
						{
							/* The task waiting has a higher priority so record that a
							context switch is required. */
							vTaskMissedYield();
						}
						else
						{
							mtCOVERAGE_TEST_MARKER();
						}
					}
					else
					{
						break;
					}
				}
			}
			#else /* configUSE_QUEUE_SETS */
			{
				/* Tasks that are removed from the event list will get added to
				the pending ready list as the scheduler is still suspended. */
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
				{
					if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
					{
						/* The task waiting has a higher priority so record that a
						context switch is required. */
						vTaskMissedYield();
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					break;
				}
			}
			#endif /* configUSE_QUEUE_SETS */

			--( pxQueue->xTxLock );
		}

		pxQueue->xTxLock = queueUNLOCKED;
	}
	taskEXIT_CRITICAL();

	/* Do the same for the Rx lock. */
	taskENTER_CRITICAL();
	{
		while( pxQueue->xRxLock > queueLOCKED_UNMODIFIED )
		{
			if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
			{
				if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
				{
					vTaskMissedYield();
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}

				--( pxQueue->xRxLock );
			}
			else
			{
				break;
			}
		}

		pxQueue->xRxLock = queueUNLOCKED;
	}
	taskEXIT_CRITICAL();
}
/*-----------------------------------------------------------*/

static BaseType_t prvIsQueueEmpty( const Queue_t *pxQueue )
{
BaseType_t xReturn;

	taskENTER_CRITICAL();
	{
		if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0 )
		{
			xReturn = pdTRUE;
		}
		else
		{
			xReturn = pdFALSE;
		}
	}
	taskEXIT_CRITICAL();

	return xReturn;
}
/*-----------------------------------------------------------*/

BaseType_t xQueueIsQueueEmptyFromISR( const QueueHandle_t xQueue )
{
BaseType_t xReturn;

	configASSERT( xQueue );
	if( ( ( Queue_t * ) xQueue )->uxMessagesWaiting == ( UBaseType_t ) 0 )
	{
		xReturn = pdTRUE;
	}
	else
	{
		xReturn = pdFALSE;
	}

	return xReturn;
} /*lint !e818 xQueue could not be pointer to const because it is a typedef. */
/*-----------------------------------------------------------*/

static BaseType_t prvIsQueueFull( const Queue_t *pxQueue )
{
BaseType_t xReturn;

	taskENTER_CRITICAL();
	{
		if( pxQueue->uxMessagesWaiting == pxQueue->uxLength )
		{
			xReturn = pdTRUE;
		}
		else
		{
			xReturn = pdFALSE;
		}
	}
	taskEXIT_CRITICAL();

	return xReturn;
}
/*-----------------------------------------------------------*/

BaseType_t xQueueIsQueueFullFromISR( const QueueHandle_t xQueue )
{
BaseType_t xReturn;

	configASSERT( xQueue );
	if( ( ( Queue_t * ) xQueue )->uxMessagesWaiting == ( ( Queue_t * ) xQueue )->uxLength )
	{
		xReturn = pdTRUE;
	}
	else
	{
		xReturn = pdFALSE;
	}

	return xReturn;
} /*lint !e818 xQueue could not be pointer to const because it is a typedef. */
/*-----------------------------------------------------------*/

#if ( configUSE_CO_ROUTINES == 1 )

	BaseType_t xQueueCRSend( QueueHandle_t xQueue, const void *pvItemToQueue, TickType_t xTicksToWait )
	{
	BaseType_t xReturn;
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		/* If the queue is already full we may have to block.  A critical section
		is required to prevent an interrupt removing something from the queue
		between the check to see if the queue is full and blocking on the queue. */
		portDISABLE_INTERRUPTS();
		{
			if( prvIsQueueFull( pxQueue ) != pdFALSE )
			{
				/* The queue is full - do we want to block or just leave without
				posting? */
				if( xTicksToWait > ( TickType_t ) 0 )
				{
					/* As this is called from a coroutine we cannot block directly, but
					return indicating that we need to block. */
					vCoRoutineAddToDelayedList( xTicksToWait, &( pxQueue->xTasksWaitingToSend ) );
					portENABLE_INTERRUPTS();
					return errQUEUE_BLOCKED;
				}
				else
				{
					portENABLE_INTERRUPTS();
					return errQUEUE_FULL;
				}
			}
		}
		portENABLE_INTERRUPTS();

		portDISABLE_INTERRUPTS();
		{
			if( pxQueue->uxMessagesWaiting < pxQueue->uxLength )
			{
				/* There is room in the queue, copy the data into the queue. */
				prvCopyDataToQueue( pxQueue, pvItemToQueue, queueSEND_TO_BACK );
				xReturn = pdPASS;

				/* Were any co-routines waiting for data to become available? */
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
				{
					/* In this instance the co-routine could be placed directly
					into the ready list as we are within a critical section.
					Instead the same pending ready list mechanism is used as if
					the event were caused from within an interrupt. */
					if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
					{
						/* The co-routine waiting has a higher priority so record
						that a yield might be appropriate. */
						xReturn = errQUEUE_YIELD;
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				xReturn = errQUEUE_FULL;
			}
		}
		portENABLE_INTERRUPTS();

		return xReturn;
	}

#endif /* configUSE_CO_ROUTINES */
/*-----------------------------------------------------------*/

#if ( configUSE_CO_ROUTINES == 1 )

	BaseType_t xQueueCRReceive( QueueHandle_t xQueue, void *pvBuffer, TickType_t xTicksToWait )
	{
	BaseType_t xReturn;
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		/* If the queue is already empty we may have to block.  A critical section
		is required to prevent an interrupt adding something to the queue
		between the check to see if the queue is empty and blocking on the queue. */
		portDISABLE_INTERRUPTS();
		{
			if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0 )
			{
				/* There are no messages in the queue, do we want to block or just
				leave with nothing? */
				if( xTicksToWait > ( TickType_t ) 0 )
				{
					/* As this is a co-routine we cannot block directly, but return
					indicating that we need to block. */
					vCoRoutineAddToDelayedList( xTicksToWait, &( pxQueue->xTasksWaitingToReceive ) );
					portENABLE_INTERRUPTS();
					return errQUEUE_BLOCKED;
				}
				else
				{
					portENABLE_INTERRUPTS();
					return errQUEUE_FULL;
				}
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}
		portENABLE_INTERRUPTS();

		portDISABLE_INTERRUPTS();
		{
			if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
			{
				/* Data is available from the queue. */
				pxQueue->u.pcReadFrom += pxQueue->uxItemSize;
				if( pxQueue->u.pcReadFrom >= pxQueue->pcTail )
				{
					pxQueue->u.pcReadFrom = pxQueue->pcHead;
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
				--( pxQueue->uxMessagesWaiting );
				( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( unsigned ) pxQueue->uxItemSize );

				xReturn = pdPASS;

				/* Were any co-routines waiting for space to become available? */
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
				{
					/* In this instance the co-routine could be placed directly
					into the ready list as we are within a critical section.
					Instead the same pending ready list mechanism is used as if
					the event were caused from within an interrupt. */
					if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
					{
						xReturn = errQUEUE_YIELD;
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				xReturn = pdFAIL;
			}
		}
		portENABLE_INTERRUPTS();

		return xReturn;
	}

#endif /* configUSE_CO_ROUTINES */
/*-----------------------------------------------------------*/

#if ( configUSE_CO_ROUTINES == 1 )

	BaseType_t xQueueCRSendFromISR( QueueHandle_t xQueue, const void *pvItemToQueue, BaseType_t xCoRoutinePreviouslyWoken )
	{
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		/* Cannot block within an ISR so if there is no space on the queue then
		exit without doing anything. */
		if( pxQueue->uxMessagesWaiting < pxQueue->uxLength )
		{
			prvCopyDataToQueue( pxQueue, pvItemToQueue, queueSEND_TO_BACK );

			/* We only want to wake one co-routine per ISR, so check that a
			co-routine has not already been woken. */
			if( xCoRoutinePreviouslyWoken == pdFALSE )
			{
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
				{
					if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
					{
						return pdTRUE;
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}

		return xCoRoutinePreviouslyWoken;
	}

#endif /* configUSE_CO_ROUTINES */
/*-----------------------------------------------------------*/

#if ( configUSE_CO_ROUTINES == 1 )

	BaseType_t xQueueCRReceiveFromISR( QueueHandle_t xQueue, void *pvBuffer, BaseType_t *pxCoRoutineWoken )
	{
	BaseType_t xReturn;
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		/* We cannot block from an ISR, so check there is data available.  If
		not then just leave without doing anything. */
		if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 )
		{
			/* Copy the data from the queue. */
			pxQueue->u.pcReadFrom += pxQueue->uxItemSize;
			if( pxQueue->u.pcReadFrom >= pxQueue->pcTail )
			{
				pxQueue->u.pcReadFrom = pxQueue->pcHead;
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
			--( pxQueue->uxMessagesWaiting );
			( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( unsigned ) pxQueue->uxItemSize );

			if( ( *pxCoRoutineWoken ) == pdFALSE )
			{
				if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE )
				{
					if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
					{
						*pxCoRoutineWoken = pdTRUE;
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}

			xReturn = pdPASS;
		}
		else
		{
			xReturn = pdFAIL;
		}

		return xReturn;
	}

#endif /* configUSE_CO_ROUTINES */
/*-----------------------------------------------------------*/

#if ( configQUEUE_REGISTRY_SIZE > 0 )

	void vQueueAddToRegistry( QueueHandle_t xQueue, const char *pcQueueName ) /*lint !e971 Unqualified char types are allowed for strings and single characters only. */
	{
	UBaseType_t ux;

		/* See if there is an empty space in the registry.  A NULL name denotes
		a free slot. */
		for( ux = ( UBaseType_t ) 0U; ux < ( UBaseType_t ) configQUEUE_REGISTRY_SIZE; ux++ )
		{
			if( xQueueRegistry[ ux ].pcQueueName == NULL )
			{
				/* Store the information on this queue. */
				xQueueRegistry[ ux ].pcQueueName = pcQueueName;
				xQueueRegistry[ ux ].xHandle = xQueue;

				traceQUEUE_REGISTRY_ADD( xQueue, pcQueueName );
				break;
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}
	}

#endif /* configQUEUE_REGISTRY_SIZE */
/*-----------------------------------------------------------*/
dflet 3:a8c249046181 2376
dflet 3:a8c249046181 2377 #if ( configQUEUE_REGISTRY_SIZE > 0 )
dflet 3:a8c249046181 2378
dflet 3:a8c249046181 2379 void vQueueUnregisterQueue( QueueHandle_t xQueue )
dflet 3:a8c249046181 2380 {
dflet 3:a8c249046181 2381 UBaseType_t ux;
dflet 3:a8c249046181 2382
dflet 3:a8c249046181 2383 /* See if the handle of the queue being unregistered in actually in the
dflet 3:a8c249046181 2384 registry. */
dflet 3:a8c249046181 2385 for( ux = ( UBaseType_t ) 0U; ux < ( UBaseType_t ) configQUEUE_REGISTRY_SIZE; ux++ )
dflet 3:a8c249046181 2386 {
dflet 3:a8c249046181 2387 if( xQueueRegistry[ ux ].xHandle == xQueue )
dflet 3:a8c249046181 2388 {
dflet 3:a8c249046181 2389 /* Set the name to NULL to show that this slot if free again. */
				xQueueRegistry[ ux ].pcQueueName = NULL;
				break;
			}
			else
			{
				mtCOVERAGE_TEST_MARKER();
			}
		}

	} /*lint !e818 xQueue could not be pointer to const because it is a typedef. */

#endif /* configQUEUE_REGISTRY_SIZE */
/*-----------------------------------------------------------*/
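The registry functions above are normally reached through the public `vQueueAddToRegistry()` API so that a kernel-aware debugger can display queues by name. A minimal usage sketch follows; it is not part of this file, and the queue name, length, and item type are illustrative assumptions.

```c
/* Example (application code, not kernel code): register a queue so a
kernel-aware debugger can show it by name.  Requires
configQUEUE_REGISTRY_SIZE > 0 in FreeRTOSConfig.h.  The name string must
remain valid while the registry entry exists, so a literal is used. */
#include "FreeRTOS.h"
#include "queue.h"

void vCreateDebuggableQueue( void )
{
QueueHandle_t xQueue = xQueueCreate( 10, sizeof( uint32_t ) );

	if( xQueue != NULL )
	{
		vQueueAddToRegistry( xQueue, "RxQueue" );

		/* ...later, before deleting the queue, remove it from the
		registry so the slot is freed (see vQueueUnregisterQueue()). */
		vQueueUnregisterQueue( xQueue );
	}
}
```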

#if ( configUSE_TIMERS == 1 )

	void vQueueWaitForMessageRestricted( QueueHandle_t xQueue, TickType_t xTicksToWait )
	{
	Queue_t * const pxQueue = ( Queue_t * ) xQueue;

		/* This function should not be called by application code hence the
		'Restricted' in its name.  It is not part of the public API.  It is
		designed for use by kernel code, and has special calling requirements.
		It can result in vListInsert() being called on a list that can only
		possibly ever have one item in it, so the list will be fast, but even
		so it should be called with the scheduler locked and not from a critical
		section. */

		/* Only do anything if there are no messages in the queue.  This function
		will not actually cause the task to block, just place it on a blocked
		list.  It will not block until the scheduler is unlocked - at which
		time a yield will be performed.  If an item is added to the queue while
		the queue is locked, and the calling task blocks on the queue, then the
		calling task will be immediately unblocked when the queue is unlocked. */
		prvLockQueue( pxQueue );
		if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0U )
		{
			/* There is nothing in the queue, block for the specified period. */
			vTaskPlaceOnEventListRestricted( &( pxQueue->xTasksWaitingToReceive ), xTicksToWait );
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}
		prvUnlockQueue( pxQueue );
	}

#endif /* configUSE_TIMERS */
/*-----------------------------------------------------------*/

#if ( configUSE_QUEUE_SETS == 1 )

	QueueSetHandle_t xQueueCreateSet( const UBaseType_t uxEventQueueLength )
	{
	QueueSetHandle_t pxQueue;

		pxQueue = xQueueGenericCreate( uxEventQueueLength, sizeof( Queue_t * ), queueQUEUE_TYPE_SET );

		return pxQueue;
	}

#endif /* configUSE_QUEUE_SETS */
/*-----------------------------------------------------------*/

#if ( configUSE_QUEUE_SETS == 1 )

	BaseType_t xQueueAddToSet( QueueSetMemberHandle_t xQueueOrSemaphore, QueueSetHandle_t xQueueSet )
	{
	BaseType_t xReturn;

		taskENTER_CRITICAL();
		{
			if( ( ( Queue_t * ) xQueueOrSemaphore )->pxQueueSetContainer != NULL )
			{
				/* Cannot add a queue/semaphore to more than one queue set. */
				xReturn = pdFAIL;
			}
			else if( ( ( Queue_t * ) xQueueOrSemaphore )->uxMessagesWaiting != ( UBaseType_t ) 0 )
			{
				/* Cannot add a queue/semaphore to a queue set if there are already
				items in the queue/semaphore. */
				xReturn = pdFAIL;
			}
			else
			{
				( ( Queue_t * ) xQueueOrSemaphore )->pxQueueSetContainer = xQueueSet;
				xReturn = pdPASS;
			}
		}
		taskEXIT_CRITICAL();

		return xReturn;
	}

#endif /* configUSE_QUEUE_SETS */
/*-----------------------------------------------------------*/

#if ( configUSE_QUEUE_SETS == 1 )

	BaseType_t xQueueRemoveFromSet( QueueSetMemberHandle_t xQueueOrSemaphore, QueueSetHandle_t xQueueSet )
	{
	BaseType_t xReturn;
	Queue_t * const pxQueueOrSemaphore = ( Queue_t * ) xQueueOrSemaphore;

		if( pxQueueOrSemaphore->pxQueueSetContainer != xQueueSet )
		{
			/* The queue was not a member of the set. */
			xReturn = pdFAIL;
		}
		else if( pxQueueOrSemaphore->uxMessagesWaiting != ( UBaseType_t ) 0 )
		{
			/* It is dangerous to remove a queue from a set when the queue is
			not empty because the queue set will still hold pending events for
			the queue. */
			xReturn = pdFAIL;
		}
		else
		{
			taskENTER_CRITICAL();
			{
				/* The queue is no longer contained in the set. */
				pxQueueOrSemaphore->pxQueueSetContainer = NULL;
			}
			taskEXIT_CRITICAL();
			xReturn = pdPASS;
		}

		return xReturn;
	} /*lint !e818 xQueueSet could not be declared as pointing to const as it is a typedef. */

#endif /* configUSE_QUEUE_SETS */
/*-----------------------------------------------------------*/

#if ( configUSE_QUEUE_SETS == 1 )

	QueueSetMemberHandle_t xQueueSelectFromSet( QueueSetHandle_t xQueueSet, TickType_t const xTicksToWait )
	{
	QueueSetMemberHandle_t xReturn = NULL;

		( void ) xQueueGenericReceive( ( QueueHandle_t ) xQueueSet, &xReturn, xTicksToWait, pdFALSE ); /*lint !e961 Casting from one typedef to another is not redundant. */
		return xReturn;
	}

#endif /* configUSE_QUEUE_SETS */
/*-----------------------------------------------------------*/

#if ( configUSE_QUEUE_SETS == 1 )

	QueueSetMemberHandle_t xQueueSelectFromSetFromISR( QueueSetHandle_t xQueueSet )
	{
	QueueSetMemberHandle_t xReturn = NULL;

		( void ) xQueueReceiveFromISR( ( QueueHandle_t ) xQueueSet, &xReturn, NULL ); /*lint !e961 Casting from one typedef to another is not redundant. */
		return xReturn;
	}

#endif /* configUSE_QUEUE_SETS */
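Taken together, xQueueCreateSet(), xQueueAddToSet() and xQueueSelectFromSet() let one task block on several queues at once. A minimal usage sketch follows; it is application code, not part of this file, and the task name, queue lengths, and item type are illustrative assumptions.

```c
/* Example (application code, not kernel code): block on two queues via a
queue set.  Requires configUSE_QUEUE_SETS == 1 in FreeRTOSConfig.h. */
#include "FreeRTOS.h"
#include "queue.h"
#include "task.h"

void vMultiQueueReaderTask( void *pvParameters )
{
/* The set must be sized for the combined length of all its members. */
QueueSetHandle_t xSet = xQueueCreateSet( 10 + 10 );
QueueHandle_t xQueue1 = xQueueCreate( 10, sizeof( uint32_t ) );
QueueHandle_t xQueue2 = xQueueCreate( 10, sizeof( uint32_t ) );
QueueSetMemberHandle_t xActivatedMember;
uint32_t ulReceived;

	( void ) pvParameters;

	/* Members must be empty when added - see xQueueAddToSet() above. */
	( void ) xQueueAddToSet( xQueue1, xSet );
	( void ) xQueueAddToSet( xQueue2, xSet );

	for( ;; )
	{
		/* Returns the handle of the member that contains data, or NULL
		if the timeout expired. */
		xActivatedMember = xQueueSelectFromSet( xSet, portMAX_DELAY );

		if( xActivatedMember == xQueue1 )
		{
			/* Will not block - the set reported data available. */
			( void ) xQueueReceive( xQueue1, &ulReceived, 0 );
		}
		else if( xActivatedMember == xQueue2 )
		{
			( void ) xQueueReceive( xQueue2, &ulReceived, 0 );
		}
	}
}
```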
/*-----------------------------------------------------------*/

#if ( configUSE_QUEUE_SETS == 1 )

	static BaseType_t prvNotifyQueueSetContainer( const Queue_t * const pxQueue, const BaseType_t xCopyPosition )
	{
	Queue_t *pxQueueSetContainer = pxQueue->pxQueueSetContainer;
	BaseType_t xReturn = pdFALSE;
		/* This function must be called from a critical section. */

		configASSERT( pxQueueSetContainer );
		configASSERT( pxQueueSetContainer->uxMessagesWaiting < pxQueueSetContainer->uxLength );

		if( pxQueueSetContainer->uxMessagesWaiting < pxQueueSetContainer->uxLength )
		{
			traceQUEUE_SEND( pxQueueSetContainer );

			/* The data copied is the handle of the queue that contains data. */
			xReturn = prvCopyDataToQueue( pxQueueSetContainer, &pxQueue, xCopyPosition );

			if( pxQueueSetContainer->xTxLock == queueUNLOCKED )
			{
				if( listLIST_IS_EMPTY( &( pxQueueSetContainer->xTasksWaitingToReceive ) ) == pdFALSE )
				{
					if( xTaskRemoveFromEventList( &( pxQueueSetContainer->xTasksWaitingToReceive ) ) != pdFALSE )
					{
						/* The task waiting has a higher priority. */
						xReturn = pdTRUE;
					}
					else
					{
						mtCOVERAGE_TEST_MARKER();
					}
				}
				else
				{
					mtCOVERAGE_TEST_MARKER();
				}
			}
			else
			{
				( pxQueueSetContainer->xTxLock )++;
			}
		}
		else
		{
			mtCOVERAGE_TEST_MARKER();
		}

		return xReturn;
	}

#endif /* configUSE_QUEUE_SETS */