Important changes to repositories hosted on mbed.com
Mbed-hosted Mercurial repositories are deprecated and are due to be permanently deleted in July 2026.
To keep a copy of this software, download the repository Zip archive or clone it locally using Mercurial.
It is also possible to export all your personal repositories from the account settings page.
Dependents: frdm_k64f_freertos_lib
src/queue.c@0:62cd296ba2a7, 2017-05-31 (annotated)
- Committer: fep
- Date: Wed May 31 02:27:10 2017 +0000
- Revision: 0:62cd296ba2a7
FreeRTOS v9.0.0 for Cortex-M4F (FRDM-K64F and others...)
Who changed what in which revision?
| User | Revision | Line number | New contents of line |
|---|---|---|---|
| fep | 0:62cd296ba2a7 | 1 | /* |
| fep | 0:62cd296ba2a7 | 2 | FreeRTOS V9.0.0 - Copyright (C) 2016 Real Time Engineers Ltd. |
| fep | 0:62cd296ba2a7 | 3 | All rights reserved |
| fep | 0:62cd296ba2a7 | 4 | |
| fep | 0:62cd296ba2a7 | 5 | VISIT http://www.FreeRTOS.org TO ENSURE YOU ARE USING THE LATEST VERSION. |
| fep | 0:62cd296ba2a7 | 6 | |
| fep | 0:62cd296ba2a7 | 7 | This file is part of the FreeRTOS distribution. |
| fep | 0:62cd296ba2a7 | 8 | |
| fep | 0:62cd296ba2a7 | 9 | FreeRTOS is free software; you can redistribute it and/or modify it under |
| fep | 0:62cd296ba2a7 | 10 | the terms of the GNU General Public License (version 2) as published by the |
| fep | 0:62cd296ba2a7 | 11 | Free Software Foundation >>>> AND MODIFIED BY <<<< the FreeRTOS exception. |
| fep | 0:62cd296ba2a7 | 12 | |
| fep | 0:62cd296ba2a7 | 13 | *************************************************************************** |
| fep | 0:62cd296ba2a7 | 14 | >>! NOTE: The modification to the GPL is included to allow you to !<< |
| fep | 0:62cd296ba2a7 | 15 | >>! distribute a combined work that includes FreeRTOS without being !<< |
| fep | 0:62cd296ba2a7 | 16 | >>! obliged to provide the source code for proprietary components !<< |
| fep | 0:62cd296ba2a7 | 17 | >>! outside of the FreeRTOS kernel. !<< |
| fep | 0:62cd296ba2a7 | 18 | *************************************************************************** |
| fep | 0:62cd296ba2a7 | 19 | |
| fep | 0:62cd296ba2a7 | 20 | FreeRTOS is distributed in the hope that it will be useful, but WITHOUT ANY |
| fep | 0:62cd296ba2a7 | 21 | WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS |
| fep | 0:62cd296ba2a7 | 22 | FOR A PARTICULAR PURPOSE. Full license text is available on the following |
| fep | 0:62cd296ba2a7 | 23 | link: http://www.freertos.org/a00114.html |
| fep | 0:62cd296ba2a7 | 24 | |
| fep | 0:62cd296ba2a7 | 25 | *************************************************************************** |
| fep | 0:62cd296ba2a7 | 26 | * * |
| fep | 0:62cd296ba2a7 | 27 | * FreeRTOS provides completely free yet professionally developed, * |
| fep | 0:62cd296ba2a7 | 28 | * robust, strictly quality controlled, supported, and cross * |
| fep | 0:62cd296ba2a7 | 29 | * platform software that is more than just the market leader, it * |
| fep | 0:62cd296ba2a7 | 30 | * is the industry's de facto standard. * |
| fep | 0:62cd296ba2a7 | 31 | * * |
| fep | 0:62cd296ba2a7 | 32 | * Help yourself get started quickly while simultaneously helping * |
| fep | 0:62cd296ba2a7 | 33 | * to support the FreeRTOS project by purchasing a FreeRTOS * |
| fep | 0:62cd296ba2a7 | 34 | * tutorial book, reference manual, or both: * |
| fep | 0:62cd296ba2a7 | 35 | * http://www.FreeRTOS.org/Documentation * |
| fep | 0:62cd296ba2a7 | 36 | * * |
| fep | 0:62cd296ba2a7 | 37 | *************************************************************************** |
| fep | 0:62cd296ba2a7 | 38 | |
| fep | 0:62cd296ba2a7 | 39 | http://www.FreeRTOS.org/FAQHelp.html - Having a problem? Start by reading |
| fep | 0:62cd296ba2a7 | 40 | the FAQ page "My application does not run, what could be wrong?". Have you |
| fep | 0:62cd296ba2a7 | 41 | defined configASSERT()? |
| fep | 0:62cd296ba2a7 | 42 | |
| fep | 0:62cd296ba2a7 | 43 | http://www.FreeRTOS.org/support - In return for receiving this top quality |
| fep | 0:62cd296ba2a7 | 44 | embedded software for free we request you assist our global community by |
| fep | 0:62cd296ba2a7 | 45 | participating in the support forum. |
| fep | 0:62cd296ba2a7 | 46 | |
| fep | 0:62cd296ba2a7 | 47 | http://www.FreeRTOS.org/training - Investing in training allows your team to |
| fep | 0:62cd296ba2a7 | 48 | be as productive as possible as early as possible. Now you can receive |
| fep | 0:62cd296ba2a7 | 49 | FreeRTOS training directly from Richard Barry, CEO of Real Time Engineers |
| fep | 0:62cd296ba2a7 | 50 | Ltd, and the world's leading authority on the world's leading RTOS. |
| fep | 0:62cd296ba2a7 | 51 | |
| fep | 0:62cd296ba2a7 | 52 | http://www.FreeRTOS.org/plus - A selection of FreeRTOS ecosystem products, |
| fep | 0:62cd296ba2a7 | 53 | including FreeRTOS+Trace - an indispensable productivity tool, a DOS |
| fep | 0:62cd296ba2a7 | 54 | compatible FAT file system, and our tiny thread aware UDP/IP stack. |
| fep | 0:62cd296ba2a7 | 55 | |
| fep | 0:62cd296ba2a7 | 56 | http://www.FreeRTOS.org/labs - Where new FreeRTOS products go to incubate. |
| fep | 0:62cd296ba2a7 | 57 | Come and try FreeRTOS+TCP, our new open source TCP/IP stack for FreeRTOS. |
| fep | 0:62cd296ba2a7 | 58 | |
| fep | 0:62cd296ba2a7 | 59 | http://www.OpenRTOS.com - Real Time Engineers ltd. license FreeRTOS to High |
| fep | 0:62cd296ba2a7 | 60 | Integrity Systems ltd. to sell under the OpenRTOS brand. Low cost OpenRTOS |
| fep | 0:62cd296ba2a7 | 61 | licenses offer ticketed support, indemnification and commercial middleware. |
| fep | 0:62cd296ba2a7 | 62 | |
| fep | 0:62cd296ba2a7 | 63 | http://www.SafeRTOS.com - High Integrity Systems also provide a safety |
| fep | 0:62cd296ba2a7 | 64 | engineered and independently SIL3 certified version for use in safety and |
| fep | 0:62cd296ba2a7 | 65 | mission critical applications that require provable dependability. |
| fep | 0:62cd296ba2a7 | 66 | |
| fep | 0:62cd296ba2a7 | 67 | 1 tab == 4 spaces! |
| fep | 0:62cd296ba2a7 | 68 | */ |
| fep | 0:62cd296ba2a7 | 69 | |
| fep | 0:62cd296ba2a7 | 70 | #include <stdlib.h> |
| fep | 0:62cd296ba2a7 | 71 | #include <string.h> |
| fep | 0:62cd296ba2a7 | 72 | |
| fep | 0:62cd296ba2a7 | 73 | /* Defining MPU_WRAPPERS_INCLUDED_FROM_API_FILE prevents task.h from redefining |
| fep | 0:62cd296ba2a7 | 74 | all the API functions to use the MPU wrappers. That should only be done when |
| fep | 0:62cd296ba2a7 | 75 | task.h is included from an application file. */ |
| fep | 0:62cd296ba2a7 | 76 | #define MPU_WRAPPERS_INCLUDED_FROM_API_FILE |
| fep | 0:62cd296ba2a7 | 77 | |
| fep | 0:62cd296ba2a7 | 78 | #include "FreeRTOS.h" |
| fep | 0:62cd296ba2a7 | 79 | #include "task.h" |
| fep | 0:62cd296ba2a7 | 80 | #include "queue.h" |
| fep | 0:62cd296ba2a7 | 81 | |
| fep | 0:62cd296ba2a7 | 82 | #if ( configUSE_CO_ROUTINES == 1 ) |
| fep | 0:62cd296ba2a7 | 83 | #include "croutine.h" |
| fep | 0:62cd296ba2a7 | 84 | #endif |
| fep | 0:62cd296ba2a7 | 85 | |
| fep | 0:62cd296ba2a7 | 86 | /* Lint e961 and e750 are suppressed as a MISRA exception justified because the |
| fep | 0:62cd296ba2a7 | 87 | MPU ports require MPU_WRAPPERS_INCLUDED_FROM_API_FILE to be defined for the |
| fep | 0:62cd296ba2a7 | 88 | header files above, but not in this file, in order to generate the correct |
| fep | 0:62cd296ba2a7 | 89 | privileged vs unprivileged linkage and placement. */ |
| fep | 0:62cd296ba2a7 | 90 | #undef MPU_WRAPPERS_INCLUDED_FROM_API_FILE /*lint !e961 !e750. */ |
| fep | 0:62cd296ba2a7 | 91 | |
| fep | 0:62cd296ba2a7 | 92 | |
| fep | 0:62cd296ba2a7 | 93 | /* Constants used with the cRxLock and cTxLock structure members. */ |
| fep | 0:62cd296ba2a7 | 94 | #define queueUNLOCKED ( ( int8_t ) -1 ) |
| fep | 0:62cd296ba2a7 | 95 | #define queueLOCKED_UNMODIFIED ( ( int8_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 96 | |
| fep | 0:62cd296ba2a7 | 97 | /* When the Queue_t structure is used to represent a base queue its pcHead and |
| fep | 0:62cd296ba2a7 | 98 | pcTail members are used as pointers into the queue storage area. When the |
| fep | 0:62cd296ba2a7 | 99 | Queue_t structure is used to represent a mutex, pcHead and pcTail pointers are |
| fep | 0:62cd296ba2a7 | 100 | not necessary, and the pcHead pointer is set to NULL to indicate that the |
| fep | 0:62cd296ba2a7 | 101 | pcTail pointer actually points to the mutex holder (if any). Map alternative |
| fep | 0:62cd296ba2a7 | 102 | names to the pcHead and pcTail structure members to ensure the readability of |
| fep | 0:62cd296ba2a7 | 103 | the code is maintained despite this dual use of two structure members. An |
| fep | 0:62cd296ba2a7 | 104 | alternative implementation would be to use a union, but use of a union is |
| fep | 0:62cd296ba2a7 | 105 | against the coding standard (although an exception to the standard has been |
| fep | 0:62cd296ba2a7 | 106 | permitted where the dual use also significantly changes the type of the |
| fep | 0:62cd296ba2a7 | 107 | structure member). */ |
| fep | 0:62cd296ba2a7 | 108 | #define pxMutexHolder pcTail |
| fep | 0:62cd296ba2a7 | 109 | #define uxQueueType pcHead |
| fep | 0:62cd296ba2a7 | 110 | #define queueQUEUE_IS_MUTEX NULL |
| fep | 0:62cd296ba2a7 | 111 | |
| fep | 0:62cd296ba2a7 | 112 | /* Semaphores do not actually store or copy data, so have an item size of |
| fep | 0:62cd296ba2a7 | 113 | zero. */ |
| fep | 0:62cd296ba2a7 | 114 | #define queueSEMAPHORE_QUEUE_ITEM_LENGTH ( ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 115 | #define queueMUTEX_GIVE_BLOCK_TIME ( ( TickType_t ) 0U ) |
| fep | 0:62cd296ba2a7 | 116 | |
| fep | 0:62cd296ba2a7 | 117 | #if( configUSE_PREEMPTION == 0 ) |
| fep | 0:62cd296ba2a7 | 118 | /* If the cooperative scheduler is being used then a yield should not be |
| fep | 0:62cd296ba2a7 | 119 | performed just because a higher priority task has been woken. */ |
| fep | 0:62cd296ba2a7 | 120 | #define queueYIELD_IF_USING_PREEMPTION() |
| fep | 0:62cd296ba2a7 | 121 | #else |
| fep | 0:62cd296ba2a7 | 122 | #define queueYIELD_IF_USING_PREEMPTION() portYIELD_WITHIN_API() |
| fep | 0:62cd296ba2a7 | 123 | #endif |
| fep | 0:62cd296ba2a7 | 124 | |
| fep | 0:62cd296ba2a7 | 125 | /* |
| fep | 0:62cd296ba2a7 | 126 | * Definition of the queue used by the scheduler. |
| fep | 0:62cd296ba2a7 | 127 | * Items are queued by copy, not reference. See the following link for the |
| fep | 0:62cd296ba2a7 | 128 | * rationale: http://www.freertos.org/Embedded-RTOS-Queues.html |
| fep | 0:62cd296ba2a7 | 129 | */ |
| fep | 0:62cd296ba2a7 | 130 | typedef struct QueueDefinition |
| fep | 0:62cd296ba2a7 | 131 | { |
| fep | 0:62cd296ba2a7 | 132 | int8_t *pcHead; /*< Points to the beginning of the queue storage area. */ |
| fep | 0:62cd296ba2a7 | 133 | int8_t *pcTail; /*< Points to the byte at the end of the queue storage area. One more byte is allocated than necessary to store the queue items; this is used as a marker. */ |
| fep | 0:62cd296ba2a7 | 134 | int8_t *pcWriteTo; /*< Points to the next free place in the storage area. */ |
| fep | 0:62cd296ba2a7 | 135 | |
| fep | 0:62cd296ba2a7 | 136 | union /* Use of a union is an exception to the coding standard to ensure two mutually exclusive structure members don't appear simultaneously (wasting RAM). */ |
| fep | 0:62cd296ba2a7 | 137 | { |
| fep | 0:62cd296ba2a7 | 138 | int8_t *pcReadFrom; /*< Points to the last place that a queued item was read from when the structure is used as a queue. */ |
| fep | 0:62cd296ba2a7 | 139 | UBaseType_t uxRecursiveCallCount;/*< Maintains a count of the number of times a recursive mutex has been recursively 'taken' when the structure is used as a mutex. */ |
| fep | 0:62cd296ba2a7 | 140 | } u; |
| fep | 0:62cd296ba2a7 | 141 | |
| fep | 0:62cd296ba2a7 | 142 | List_t xTasksWaitingToSend; /*< List of tasks that are blocked waiting to post onto this queue. Stored in priority order. */ |
| fep | 0:62cd296ba2a7 | 143 | List_t xTasksWaitingToReceive; /*< List of tasks that are blocked waiting to read from this queue. Stored in priority order. */ |
| fep | 0:62cd296ba2a7 | 144 | |
| fep | 0:62cd296ba2a7 | 145 | volatile UBaseType_t uxMessagesWaiting;/*< The number of items currently in the queue. */ |
| fep | 0:62cd296ba2a7 | 146 | UBaseType_t uxLength; /*< The length of the queue defined as the number of items it will hold, not the number of bytes. */ |
| fep | 0:62cd296ba2a7 | 147 | UBaseType_t uxItemSize; /*< The size of each item that the queue will hold. */ |
| fep | 0:62cd296ba2a7 | 148 | |
| fep | 0:62cd296ba2a7 | 149 | volatile int8_t cRxLock; /*< Stores the number of items received from the queue (removed from the queue) while the queue was locked. Set to queueUNLOCKED when the queue is not locked. */ |
| fep | 0:62cd296ba2a7 | 150 | volatile int8_t cTxLock; /*< Stores the number of items transmitted to the queue (added to the queue) while the queue was locked. Set to queueUNLOCKED when the queue is not locked. */ |
| fep | 0:62cd296ba2a7 | 151 | |
| fep | 0:62cd296ba2a7 | 152 | #if( ( configSUPPORT_STATIC_ALLOCATION == 1 ) && ( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) ) |
| fep | 0:62cd296ba2a7 | 153 | uint8_t ucStaticallyAllocated; /*< Set to pdTRUE if the memory used by the queue was statically allocated to ensure no attempt is made to free the memory. */ |
| fep | 0:62cd296ba2a7 | 154 | #endif |
| fep | 0:62cd296ba2a7 | 155 | |
| fep | 0:62cd296ba2a7 | 156 | #if ( configUSE_QUEUE_SETS == 1 ) |
| fep | 0:62cd296ba2a7 | 157 | struct QueueDefinition *pxQueueSetContainer; |
| fep | 0:62cd296ba2a7 | 158 | #endif |
| fep | 0:62cd296ba2a7 | 159 | |
| fep | 0:62cd296ba2a7 | 160 | #if ( configUSE_TRACE_FACILITY == 1 ) |
| fep | 0:62cd296ba2a7 | 161 | UBaseType_t uxQueueNumber; |
| fep | 0:62cd296ba2a7 | 162 | uint8_t ucQueueType; |
| fep | 0:62cd296ba2a7 | 163 | #endif |
| fep | 0:62cd296ba2a7 | 164 | |
| fep | 0:62cd296ba2a7 | 165 | } xQUEUE; |
| fep | 0:62cd296ba2a7 | 166 | |
| fep | 0:62cd296ba2a7 | 167 | /* The old xQUEUE name is maintained above then typedefed to the new Queue_t |
| fep | 0:62cd296ba2a7 | 168 | name below to enable the use of older kernel aware debuggers. */ |
| fep | 0:62cd296ba2a7 | 169 | typedef xQUEUE Queue_t; |
| fep | 0:62cd296ba2a7 | 170 | |
| fep | 0:62cd296ba2a7 | 171 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 172 | |
| fep | 0:62cd296ba2a7 | 173 | /* |
| fep | 0:62cd296ba2a7 | 174 | * The queue registry is just a means for kernel aware debuggers to locate |
| fep | 0:62cd296ba2a7 | 175 | * queue structures. It has no other purpose so is an optional component. |
| fep | 0:62cd296ba2a7 | 176 | */ |
| fep | 0:62cd296ba2a7 | 177 | #if ( configQUEUE_REGISTRY_SIZE > 0 ) |
| fep | 0:62cd296ba2a7 | 178 | |
| fep | 0:62cd296ba2a7 | 179 | /* The type stored within the queue registry array. This allows a name |
| fep | 0:62cd296ba2a7 | 180 | to be assigned to each queue making kernel aware debugging a little |
| fep | 0:62cd296ba2a7 | 181 | more user friendly. */ |
| fep | 0:62cd296ba2a7 | 182 | typedef struct QUEUE_REGISTRY_ITEM |
| fep | 0:62cd296ba2a7 | 183 | { |
| fep | 0:62cd296ba2a7 | 184 | const char *pcQueueName; /*lint !e971 Unqualified char types are allowed for strings and single characters only. */ |
| fep | 0:62cd296ba2a7 | 185 | QueueHandle_t xHandle; |
| fep | 0:62cd296ba2a7 | 186 | } xQueueRegistryItem; |
| fep | 0:62cd296ba2a7 | 187 | |
| fep | 0:62cd296ba2a7 | 188 | /* The old xQueueRegistryItem name is maintained above then typedefed to the |
| fep | 0:62cd296ba2a7 | 189 | new QueueRegistryItem_t name below to enable the use of older kernel aware |
| fep | 0:62cd296ba2a7 | 190 | debuggers. */ |
| fep | 0:62cd296ba2a7 | 191 | typedef xQueueRegistryItem QueueRegistryItem_t; |
| fep | 0:62cd296ba2a7 | 192 | |
| fep | 0:62cd296ba2a7 | 193 | /* The queue registry is simply an array of QueueRegistryItem_t structures. |
| fep | 0:62cd296ba2a7 | 194 | The pcQueueName member of a structure being NULL is indicative of the |
| fep | 0:62cd296ba2a7 | 195 | array position being vacant. */ |
| fep | 0:62cd296ba2a7 | 196 | PRIVILEGED_DATA QueueRegistryItem_t xQueueRegistry[ configQUEUE_REGISTRY_SIZE ]; |
| fep | 0:62cd296ba2a7 | 197 | |
| fep | 0:62cd296ba2a7 | 198 | #endif /* configQUEUE_REGISTRY_SIZE */ |
| fep | 0:62cd296ba2a7 | 199 | |
| fep | 0:62cd296ba2a7 | 200 | /* |
| fep | 0:62cd296ba2a7 | 201 | * Unlocks a queue locked by a call to prvLockQueue. Locking a queue does not |
| fep | 0:62cd296ba2a7 | 202 | * prevent an ISR from adding items to or removing items from the queue, but does prevent |
| fep | 0:62cd296ba2a7 | 203 | * an ISR from removing tasks from the queue event lists. If an ISR finds a |
| fep | 0:62cd296ba2a7 | 204 | * queue is locked it will instead increment the appropriate queue lock count |
| fep | 0:62cd296ba2a7 | 205 | * to indicate that a task may require unblocking. When the queue is unlocked |
| fep | 0:62cd296ba2a7 | 206 | * these lock counts are inspected, and the appropriate action taken. |
| fep | 0:62cd296ba2a7 | 207 | */ |
| fep | 0:62cd296ba2a7 | 208 | static void prvUnlockQueue( Queue_t * const pxQueue ) PRIVILEGED_FUNCTION; |
| fep | 0:62cd296ba2a7 | 209 | |
| fep | 0:62cd296ba2a7 | 210 | /* |
| fep | 0:62cd296ba2a7 | 211 | * Uses a critical section to determine if there is any data in a queue. |
| fep | 0:62cd296ba2a7 | 212 | * |
| fep | 0:62cd296ba2a7 | 213 | * @return pdTRUE if the queue contains no items, otherwise pdFALSE. |
| fep | 0:62cd296ba2a7 | 214 | */ |
| fep | 0:62cd296ba2a7 | 215 | static BaseType_t prvIsQueueEmpty( const Queue_t *pxQueue ) PRIVILEGED_FUNCTION; |
| fep | 0:62cd296ba2a7 | 216 | |
| fep | 0:62cd296ba2a7 | 217 | /* |
| fep | 0:62cd296ba2a7 | 218 | * Uses a critical section to determine if there is any space in a queue. |
| fep | 0:62cd296ba2a7 | 219 | * |
| fep | 0:62cd296ba2a7 | 220 | * @return pdTRUE if there is no space, otherwise pdFALSE. |
| fep | 0:62cd296ba2a7 | 221 | */ |
| fep | 0:62cd296ba2a7 | 222 | static BaseType_t prvIsQueueFull( const Queue_t *pxQueue ) PRIVILEGED_FUNCTION; |
| fep | 0:62cd296ba2a7 | 223 | |
| fep | 0:62cd296ba2a7 | 224 | /* |
| fep | 0:62cd296ba2a7 | 225 | * Copies an item into the queue, either at the front of the queue or the |
| fep | 0:62cd296ba2a7 | 226 | * back of the queue. |
| fep | 0:62cd296ba2a7 | 227 | */ |
| fep | 0:62cd296ba2a7 | 228 | static BaseType_t prvCopyDataToQueue( Queue_t * const pxQueue, const void *pvItemToQueue, const BaseType_t xPosition ) PRIVILEGED_FUNCTION; |
| fep | 0:62cd296ba2a7 | 229 | |
| fep | 0:62cd296ba2a7 | 230 | /* |
| fep | 0:62cd296ba2a7 | 231 | * Copies an item out of a queue. |
| fep | 0:62cd296ba2a7 | 232 | */ |
| fep | 0:62cd296ba2a7 | 233 | static void prvCopyDataFromQueue( Queue_t * const pxQueue, void * const pvBuffer ) PRIVILEGED_FUNCTION; |
| fep | 0:62cd296ba2a7 | 234 | |
| fep | 0:62cd296ba2a7 | 235 | #if ( configUSE_QUEUE_SETS == 1 ) |
| fep | 0:62cd296ba2a7 | 236 | /* |
| fep | 0:62cd296ba2a7 | 237 | * Checks to see if a queue is a member of a queue set, and if so, notifies |
| fep | 0:62cd296ba2a7 | 238 | * the queue set that the queue contains data. |
| fep | 0:62cd296ba2a7 | 239 | */ |
| fep | 0:62cd296ba2a7 | 240 | static BaseType_t prvNotifyQueueSetContainer( const Queue_t * const pxQueue, const BaseType_t xCopyPosition ) PRIVILEGED_FUNCTION; |
| fep | 0:62cd296ba2a7 | 241 | #endif |
| fep | 0:62cd296ba2a7 | 242 | |
| fep | 0:62cd296ba2a7 | 243 | /* |
| fep | 0:62cd296ba2a7 | 244 | * Called after a Queue_t structure has been allocated either statically or |
| fep | 0:62cd296ba2a7 | 245 | * dynamically to fill in the structure's members. |
| fep | 0:62cd296ba2a7 | 246 | */ |
| fep | 0:62cd296ba2a7 | 247 | static void prvInitialiseNewQueue( const UBaseType_t uxQueueLength, const UBaseType_t uxItemSize, uint8_t *pucQueueStorage, const uint8_t ucQueueType, Queue_t *pxNewQueue ) PRIVILEGED_FUNCTION; |
| fep | 0:62cd296ba2a7 | 248 | |
| fep | 0:62cd296ba2a7 | 249 | /* |
| fep | 0:62cd296ba2a7 | 250 | * Mutexes are a special type of queue. When a mutex is created, first the |
| fep | 0:62cd296ba2a7 | 251 | * queue is created, then prvInitialiseMutex() is called to configure the queue |
| fep | 0:62cd296ba2a7 | 252 | * as a mutex. |
| fep | 0:62cd296ba2a7 | 253 | */ |
| fep | 0:62cd296ba2a7 | 254 | #if( configUSE_MUTEXES == 1 ) |
| fep | 0:62cd296ba2a7 | 255 | static void prvInitialiseMutex( Queue_t *pxNewQueue ) PRIVILEGED_FUNCTION; |
| fep | 0:62cd296ba2a7 | 256 | #endif |
| fep | 0:62cd296ba2a7 | 257 | |
| fep | 0:62cd296ba2a7 | 258 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 259 | |
| fep | 0:62cd296ba2a7 | 260 | /* |
| fep | 0:62cd296ba2a7 | 261 | * Macro to mark a queue as locked. Locking a queue prevents an ISR from |
| fep | 0:62cd296ba2a7 | 262 | * accessing the queue event lists. |
| fep | 0:62cd296ba2a7 | 263 | */ |
| fep | 0:62cd296ba2a7 | 264 | #define prvLockQueue( pxQueue ) \ |
| fep | 0:62cd296ba2a7 | 265 | taskENTER_CRITICAL(); \ |
| fep | 0:62cd296ba2a7 | 266 | { \ |
| fep | 0:62cd296ba2a7 | 267 | if( ( pxQueue )->cRxLock == queueUNLOCKED ) \ |
| fep | 0:62cd296ba2a7 | 268 | { \ |
| fep | 0:62cd296ba2a7 | 269 | ( pxQueue )->cRxLock = queueLOCKED_UNMODIFIED; \ |
| fep | 0:62cd296ba2a7 | 270 | } \ |
| fep | 0:62cd296ba2a7 | 271 | if( ( pxQueue )->cTxLock == queueUNLOCKED ) \ |
| fep | 0:62cd296ba2a7 | 272 | { \ |
| fep | 0:62cd296ba2a7 | 273 | ( pxQueue )->cTxLock = queueLOCKED_UNMODIFIED; \ |
| fep | 0:62cd296ba2a7 | 274 | } \ |
| fep | 0:62cd296ba2a7 | 275 | } \ |
| fep | 0:62cd296ba2a7 | 276 | taskEXIT_CRITICAL() |
| fep | 0:62cd296ba2a7 | 277 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 278 | |
| fep | 0:62cd296ba2a7 | 279 | BaseType_t xQueueGenericReset( QueueHandle_t xQueue, BaseType_t xNewQueue ) |
| fep | 0:62cd296ba2a7 | 280 | { |
| fep | 0:62cd296ba2a7 | 281 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
| fep | 0:62cd296ba2a7 | 282 | |
| fep | 0:62cd296ba2a7 | 283 | configASSERT( pxQueue ); |
| fep | 0:62cd296ba2a7 | 284 | |
| fep | 0:62cd296ba2a7 | 285 | taskENTER_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 286 | { |
| fep | 0:62cd296ba2a7 | 287 | pxQueue->pcTail = pxQueue->pcHead + ( pxQueue->uxLength * pxQueue->uxItemSize ); |
| fep | 0:62cd296ba2a7 | 288 | pxQueue->uxMessagesWaiting = ( UBaseType_t ) 0U; |
| fep | 0:62cd296ba2a7 | 289 | pxQueue->pcWriteTo = pxQueue->pcHead; |
| fep | 0:62cd296ba2a7 | 290 | pxQueue->u.pcReadFrom = pxQueue->pcHead + ( ( pxQueue->uxLength - ( UBaseType_t ) 1U ) * pxQueue->uxItemSize ); |
| fep | 0:62cd296ba2a7 | 291 | pxQueue->cRxLock = queueUNLOCKED; |
| fep | 0:62cd296ba2a7 | 292 | pxQueue->cTxLock = queueUNLOCKED; |
| fep | 0:62cd296ba2a7 | 293 | |
| fep | 0:62cd296ba2a7 | 294 | if( xNewQueue == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 295 | { |
| fep | 0:62cd296ba2a7 | 296 | /* If there are tasks blocked waiting to read from the queue, then |
| fep | 0:62cd296ba2a7 | 297 | the tasks will remain blocked as after this function exits the queue |
| fep | 0:62cd296ba2a7 | 298 | will still be empty. If there are tasks blocked waiting to write to |
| fep | 0:62cd296ba2a7 | 299 | the queue, then one should be unblocked as after this function exits |
| fep | 0:62cd296ba2a7 | 300 | it will be possible to write to it. */ |
| fep | 0:62cd296ba2a7 | 301 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 302 | { |
| fep | 0:62cd296ba2a7 | 303 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 304 | { |
| fep | 0:62cd296ba2a7 | 305 | queueYIELD_IF_USING_PREEMPTION(); |
| fep | 0:62cd296ba2a7 | 306 | } |
| fep | 0:62cd296ba2a7 | 307 | else |
| fep | 0:62cd296ba2a7 | 308 | { |
| fep | 0:62cd296ba2a7 | 309 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 310 | } |
| fep | 0:62cd296ba2a7 | 311 | } |
| fep | 0:62cd296ba2a7 | 312 | else |
| fep | 0:62cd296ba2a7 | 313 | { |
| fep | 0:62cd296ba2a7 | 314 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 315 | } |
| fep | 0:62cd296ba2a7 | 316 | } |
| fep | 0:62cd296ba2a7 | 317 | else |
| fep | 0:62cd296ba2a7 | 318 | { |
| fep | 0:62cd296ba2a7 | 319 | /* Ensure the event queues start in the correct state. */ |
| fep | 0:62cd296ba2a7 | 320 | vListInitialise( &( pxQueue->xTasksWaitingToSend ) ); |
| fep | 0:62cd296ba2a7 | 321 | vListInitialise( &( pxQueue->xTasksWaitingToReceive ) ); |
| fep | 0:62cd296ba2a7 | 322 | } |
| fep | 0:62cd296ba2a7 | 323 | } |
| fep | 0:62cd296ba2a7 | 324 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 325 | |
| fep | 0:62cd296ba2a7 | 326 | /* A value is returned for calling semantic consistency with previous |
| fep | 0:62cd296ba2a7 | 327 | versions. */ |
| fep | 0:62cd296ba2a7 | 328 | return pdPASS; |
| fep | 0:62cd296ba2a7 | 329 | } |
| fep | 0:62cd296ba2a7 | 330 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 331 | |
| fep | 0:62cd296ba2a7 | 332 | #if( configSUPPORT_STATIC_ALLOCATION == 1 ) |
| fep | 0:62cd296ba2a7 | 333 | |
| fep | 0:62cd296ba2a7 | 334 | QueueHandle_t xQueueGenericCreateStatic( const UBaseType_t uxQueueLength, const UBaseType_t uxItemSize, uint8_t *pucQueueStorage, StaticQueue_t *pxStaticQueue, const uint8_t ucQueueType ) |
| fep | 0:62cd296ba2a7 | 335 | { |
| fep | 0:62cd296ba2a7 | 336 | Queue_t *pxNewQueue; |
| fep | 0:62cd296ba2a7 | 337 | |
| fep | 0:62cd296ba2a7 | 338 | configASSERT( uxQueueLength > ( UBaseType_t ) 0 ); |
| fep | 0:62cd296ba2a7 | 339 | |
| fep | 0:62cd296ba2a7 | 340 | /* The StaticQueue_t structure and the queue storage area must be |
| fep | 0:62cd296ba2a7 | 341 | supplied. */ |
| fep | 0:62cd296ba2a7 | 342 | configASSERT( pxStaticQueue != NULL ); |
| fep | 0:62cd296ba2a7 | 343 | |
| fep | 0:62cd296ba2a7 | 344 | /* A queue storage area should be provided if the item size is not 0, and |
| fep | 0:62cd296ba2a7 | 345 | should not be provided if the item size is 0. */ |
| fep | 0:62cd296ba2a7 | 346 | configASSERT( !( ( pucQueueStorage != NULL ) && ( uxItemSize == 0 ) ) ); |
| fep | 0:62cd296ba2a7 | 347 | configASSERT( !( ( pucQueueStorage == NULL ) && ( uxItemSize != 0 ) ) ); |
| fep | 0:62cd296ba2a7 | 348 | |
| fep | 0:62cd296ba2a7 | 349 | #if( configASSERT_DEFINED == 1 ) |
| fep | 0:62cd296ba2a7 | 350 | { |
| fep | 0:62cd296ba2a7 | 351 | /* Sanity check that the size of the structure used to declare a |
| fep | 0:62cd296ba2a7 | 352 | variable of type StaticQueue_t or StaticSemaphore_t equals the size of |
| fep | 0:62cd296ba2a7 | 353 | the real queue and semaphore structures. */ |
| fep | 0:62cd296ba2a7 | 354 | volatile size_t xSize = sizeof( StaticQueue_t ); |
| fep | 0:62cd296ba2a7 | 355 | configASSERT( xSize == sizeof( Queue_t ) ); |
| fep | 0:62cd296ba2a7 | 356 | } |
| fep | 0:62cd296ba2a7 | 357 | #endif /* configASSERT_DEFINED */ |
| fep | 0:62cd296ba2a7 | 358 | |
| fep | 0:62cd296ba2a7 | 359 | /* The address of a statically allocated queue was passed in, use it. |
| fep | 0:62cd296ba2a7 | 360 | The address of a statically allocated storage area was also passed in |
| fep | 0:62cd296ba2a7 | 361 | but is already set. */ |
| fep | 0:62cd296ba2a7 | 362 | pxNewQueue = ( Queue_t * ) pxStaticQueue; /*lint !e740 Unusual cast is ok as the structures are designed to have the same alignment, and the size is checked by an assert. */ |
| fep | 0:62cd296ba2a7 | 363 | |
| fep | 0:62cd296ba2a7 | 364 | if( pxNewQueue != NULL ) |
| fep | 0:62cd296ba2a7 | 365 | { |
| fep | 0:62cd296ba2a7 | 366 | #if( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) |
| fep | 0:62cd296ba2a7 | 367 | { |
| fep | 0:62cd296ba2a7 | 368 | /* Queues can be allocated either statically or dynamically, so |
| fep | 0:62cd296ba2a7 | 369 | note this queue was allocated statically in case the queue is |
| fep | 0:62cd296ba2a7 | 370 | later deleted. */ |
| fep | 0:62cd296ba2a7 | 371 | pxNewQueue->ucStaticallyAllocated = pdTRUE; |
| fep | 0:62cd296ba2a7 | 372 | } |
| fep | 0:62cd296ba2a7 | 373 | #endif /* configSUPPORT_DYNAMIC_ALLOCATION */ |
| fep | 0:62cd296ba2a7 | 374 | |
| fep | 0:62cd296ba2a7 | 375 | prvInitialiseNewQueue( uxQueueLength, uxItemSize, pucQueueStorage, ucQueueType, pxNewQueue ); |
| fep | 0:62cd296ba2a7 | 376 | } |
| fep | 0:62cd296ba2a7 | 377 | |
| fep | 0:62cd296ba2a7 | 378 | return pxNewQueue; |
| fep | 0:62cd296ba2a7 | 379 | } |
| fep | 0:62cd296ba2a7 | 380 | |
| fep | 0:62cd296ba2a7 | 381 | #endif /* configSUPPORT_STATIC_ALLOCATION */ |
| fep | 0:62cd296ba2a7 | 382 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 383 | |
| fep | 0:62cd296ba2a7 | 384 | #if( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) |
| fep | 0:62cd296ba2a7 | 385 | |
| fep | 0:62cd296ba2a7 | 386 | QueueHandle_t xQueueGenericCreate( const UBaseType_t uxQueueLength, const UBaseType_t uxItemSize, const uint8_t ucQueueType ) |
| fep | 0:62cd296ba2a7 | 387 | { |
| fep | 0:62cd296ba2a7 | 388 | Queue_t *pxNewQueue; |
| fep | 0:62cd296ba2a7 | 389 | size_t xQueueSizeInBytes; |
| fep | 0:62cd296ba2a7 | 390 | uint8_t *pucQueueStorage; |
| fep | 0:62cd296ba2a7 | 391 | |
| fep | 0:62cd296ba2a7 | 392 | configASSERT( uxQueueLength > ( UBaseType_t ) 0 ); |
| fep | 0:62cd296ba2a7 | 393 | |
| fep | 0:62cd296ba2a7 | 394 | if( uxItemSize == ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 395 | { |
| fep | 0:62cd296ba2a7 | 396 | /* There is not going to be a queue storage area. */ |
| fep | 0:62cd296ba2a7 | 397 | xQueueSizeInBytes = ( size_t ) 0; |
| fep | 0:62cd296ba2a7 | 398 | } |
| fep | 0:62cd296ba2a7 | 399 | else |
| fep | 0:62cd296ba2a7 | 400 | { |
| fep | 0:62cd296ba2a7 | 401 | /* Allocate enough space to hold the maximum number of items that |
| fep | 0:62cd296ba2a7 | 402 | can be in the queue at any time. */ |
| fep | 0:62cd296ba2a7 | 403 | xQueueSizeInBytes = ( size_t ) ( uxQueueLength * uxItemSize ); /*lint !e961 MISRA exception as the casts are only redundant for some ports. */ |
| fep | 0:62cd296ba2a7 | 404 | } |
| fep | 0:62cd296ba2a7 | 405 | |
| fep | 0:62cd296ba2a7 | 406 | pxNewQueue = ( Queue_t * ) pvPortMalloc( sizeof( Queue_t ) + xQueueSizeInBytes ); |
| fep | 0:62cd296ba2a7 | 407 | |
| fep | 0:62cd296ba2a7 | 408 | if( pxNewQueue != NULL ) |
| fep | 0:62cd296ba2a7 | 409 | { |
| fep | 0:62cd296ba2a7 | 410 | /* Jump past the queue structure to find the location of the queue |
| fep | 0:62cd296ba2a7 | 411 | storage area. */ |
| fep | 0:62cd296ba2a7 | 412 | pucQueueStorage = ( ( uint8_t * ) pxNewQueue ) + sizeof( Queue_t ); |
| fep | 0:62cd296ba2a7 | 413 | |
| fep | 0:62cd296ba2a7 | 414 | #if( configSUPPORT_STATIC_ALLOCATION == 1 ) |
| fep | 0:62cd296ba2a7 | 415 | { |
| fep | 0:62cd296ba2a7 | 416 | /* Queues can be created either statically or dynamically, so |
| fep | 0:62cd296ba2a7 | 417 | note this queue was created dynamically in case it is later |
| fep | 0:62cd296ba2a7 | 418 | deleted. */ |
| fep | 0:62cd296ba2a7 | 419 | pxNewQueue->ucStaticallyAllocated = pdFALSE; |
| fep | 0:62cd296ba2a7 | 420 | } |
| fep | 0:62cd296ba2a7 | 421 | #endif /* configSUPPORT_STATIC_ALLOCATION */ |
| fep | 0:62cd296ba2a7 | 422 | |
| fep | 0:62cd296ba2a7 | 423 | prvInitialiseNewQueue( uxQueueLength, uxItemSize, pucQueueStorage, ucQueueType, pxNewQueue ); |
| fep | 0:62cd296ba2a7 | 424 | } |
| fep | 0:62cd296ba2a7 | 425 | |
| fep | 0:62cd296ba2a7 | 426 | return pxNewQueue; |
| fep | 0:62cd296ba2a7 | 427 | } |
| fep | 0:62cd296ba2a7 | 428 | |
| fep | 0:62cd296ba2a7 | 429 | #endif /* configSUPPORT_DYNAMIC_ALLOCATION */ |
| fep | 0:62cd296ba2a7 | 430 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 431 | |
| fep | 0:62cd296ba2a7 | 432 | static void prvInitialiseNewQueue( const UBaseType_t uxQueueLength, const UBaseType_t uxItemSize, uint8_t *pucQueueStorage, const uint8_t ucQueueType, Queue_t *pxNewQueue ) |
| fep | 0:62cd296ba2a7 | 433 | { |
| fep | 0:62cd296ba2a7 | 434 | /* Remove compiler warnings about unused parameters should |
| fep | 0:62cd296ba2a7 | 435 | configUSE_TRACE_FACILITY not be set to 1. */ |
| fep | 0:62cd296ba2a7 | 436 | ( void ) ucQueueType; |
| fep | 0:62cd296ba2a7 | 437 | |
| fep | 0:62cd296ba2a7 | 438 | if( uxItemSize == ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 439 | { |
| fep | 0:62cd296ba2a7 | 440 | /* No RAM was allocated for the queue storage area, but pcHead cannot |
| fep | 0:62cd296ba2a7 | 441 | be set to NULL because NULL is used as a key to say the queue is used as |
| fep | 0:62cd296ba2a7 | 442 | a mutex. Therefore just set pcHead to point to the queue as a benign |
| fep | 0:62cd296ba2a7 | 443 | value that is known to be within the memory map. */ |
| fep | 0:62cd296ba2a7 | 444 | pxNewQueue->pcHead = ( int8_t * ) pxNewQueue; |
| fep | 0:62cd296ba2a7 | 445 | } |
| fep | 0:62cd296ba2a7 | 446 | else |
| fep | 0:62cd296ba2a7 | 447 | { |
| fep | 0:62cd296ba2a7 | 448 | /* Set the head to the start of the queue storage area. */ |
| fep | 0:62cd296ba2a7 | 449 | pxNewQueue->pcHead = ( int8_t * ) pucQueueStorage; |
| fep | 0:62cd296ba2a7 | 450 | } |
| fep | 0:62cd296ba2a7 | 451 | |
| fep | 0:62cd296ba2a7 | 452 | /* Initialise the queue members as described where the queue type is |
| fep | 0:62cd296ba2a7 | 453 | defined. */ |
| fep | 0:62cd296ba2a7 | 454 | pxNewQueue->uxLength = uxQueueLength; |
| fep | 0:62cd296ba2a7 | 455 | pxNewQueue->uxItemSize = uxItemSize; |
| fep | 0:62cd296ba2a7 | 456 | ( void ) xQueueGenericReset( pxNewQueue, pdTRUE ); |
| fep | 0:62cd296ba2a7 | 457 | |
| fep | 0:62cd296ba2a7 | 458 | #if ( configUSE_TRACE_FACILITY == 1 ) |
| fep | 0:62cd296ba2a7 | 459 | { |
| fep | 0:62cd296ba2a7 | 460 | pxNewQueue->ucQueueType = ucQueueType; |
| fep | 0:62cd296ba2a7 | 461 | } |
| fep | 0:62cd296ba2a7 | 462 | #endif /* configUSE_TRACE_FACILITY */ |
| fep | 0:62cd296ba2a7 | 463 | |
| fep | 0:62cd296ba2a7 | 464 | #if( configUSE_QUEUE_SETS == 1 ) |
| fep | 0:62cd296ba2a7 | 465 | { |
| fep | 0:62cd296ba2a7 | 466 | pxNewQueue->pxQueueSetContainer = NULL; |
| fep | 0:62cd296ba2a7 | 467 | } |
| fep | 0:62cd296ba2a7 | 468 | #endif /* configUSE_QUEUE_SETS */ |
| fep | 0:62cd296ba2a7 | 469 | |
| fep | 0:62cd296ba2a7 | 470 | traceQUEUE_CREATE( pxNewQueue ); |
| fep | 0:62cd296ba2a7 | 471 | } |
| fep | 0:62cd296ba2a7 | 472 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 473 | |
| fep | 0:62cd296ba2a7 | 474 | #if( configUSE_MUTEXES == 1 ) |
| fep | 0:62cd296ba2a7 | 475 | |
| fep | 0:62cd296ba2a7 | 476 | static void prvInitialiseMutex( Queue_t *pxNewQueue ) |
| fep | 0:62cd296ba2a7 | 477 | { |
| fep | 0:62cd296ba2a7 | 478 | if( pxNewQueue != NULL ) |
| fep | 0:62cd296ba2a7 | 479 | { |
| fep | 0:62cd296ba2a7 | 480 | /* The queue create function will set all the queue structure members |
| fep | 0:62cd296ba2a7 | 481 | correctly for a generic queue, but this function is creating a |
| fep | 0:62cd296ba2a7 | 482 | mutex. Overwrite those members that need to be set differently - |
| fep | 0:62cd296ba2a7 | 483 | in particular the information required for priority inheritance. */ |
| fep | 0:62cd296ba2a7 | 484 | pxNewQueue->pxMutexHolder = NULL; |
| fep | 0:62cd296ba2a7 | 485 | pxNewQueue->uxQueueType = queueQUEUE_IS_MUTEX; |
| fep | 0:62cd296ba2a7 | 486 | |
| fep | 0:62cd296ba2a7 | 487 | /* In case this is a recursive mutex. */ |
| fep | 0:62cd296ba2a7 | 488 | pxNewQueue->u.uxRecursiveCallCount = 0; |
| fep | 0:62cd296ba2a7 | 489 | |
| fep | 0:62cd296ba2a7 | 490 | traceCREATE_MUTEX( pxNewQueue ); |
| fep | 0:62cd296ba2a7 | 491 | |
| fep | 0:62cd296ba2a7 | 492 | /* Start with the semaphore in the expected state. */ |
| fep | 0:62cd296ba2a7 | 493 | ( void ) xQueueGenericSend( pxNewQueue, NULL, ( TickType_t ) 0U, queueSEND_TO_BACK ); |
| fep | 0:62cd296ba2a7 | 494 | } |
| fep | 0:62cd296ba2a7 | 495 | else |
| fep | 0:62cd296ba2a7 | 496 | { |
| fep | 0:62cd296ba2a7 | 497 | traceCREATE_MUTEX_FAILED(); |
| fep | 0:62cd296ba2a7 | 498 | } |
| fep | 0:62cd296ba2a7 | 499 | } |
| fep | 0:62cd296ba2a7 | 500 | |
| fep | 0:62cd296ba2a7 | 501 | #endif /* configUSE_MUTEXES */ |
| fep | 0:62cd296ba2a7 | 502 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 503 | |
| fep | 0:62cd296ba2a7 | 504 | #if( ( configUSE_MUTEXES == 1 ) && ( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) ) |
| fep | 0:62cd296ba2a7 | 505 | |
| fep | 0:62cd296ba2a7 | 506 | QueueHandle_t xQueueCreateMutex( const uint8_t ucQueueType ) |
| fep | 0:62cd296ba2a7 | 507 | { |
| fep | 0:62cd296ba2a7 | 508 | Queue_t *pxNewQueue; |
| fep | 0:62cd296ba2a7 | 509 | const UBaseType_t uxMutexLength = ( UBaseType_t ) 1, uxMutexSize = ( UBaseType_t ) 0; |
| fep | 0:62cd296ba2a7 | 510 | |
| fep | 0:62cd296ba2a7 | 511 | pxNewQueue = ( Queue_t * ) xQueueGenericCreate( uxMutexLength, uxMutexSize, ucQueueType ); |
| fep | 0:62cd296ba2a7 | 512 | prvInitialiseMutex( pxNewQueue ); |
| fep | 0:62cd296ba2a7 | 513 | |
| fep | 0:62cd296ba2a7 | 514 | return pxNewQueue; |
| fep | 0:62cd296ba2a7 | 515 | } |
| fep | 0:62cd296ba2a7 | 516 | |
| fep | 0:62cd296ba2a7 | 517 | #endif /* configUSE_MUTEXES && configSUPPORT_DYNAMIC_ALLOCATION */ |
| fep | 0:62cd296ba2a7 | 518 | /*-----------------------------------------------------------*/ |
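As the creation functions above show, a FreeRTOS mutex is a queue of length 1 with zero-sized items: the single "message" is the availability token, and `prvInitialiseMutex()` sends it once so the first take succeeds. A standalone model of that token idea (plain C; `MiniMutex_t` and its functions are hypothetical names, not FreeRTOS APIs):

```c
#include <assert.h>

/* Standalone model, not FreeRTOS code: a mutex is a length-1 queue of
   zero-sized items.  A message count of 1 means the token is available;
   taking the mutex removes the token, giving it puts the token back. */
typedef struct
{
    unsigned uxMessagesWaiting;  /* 0 or 1: is the token present? */
} MiniMutex_t;

static void prvInitialiseMiniMutex( MiniMutex_t *pxMutex )
{
    /* Start with the semaphore in the expected state: one initial
       "give" so the first take succeeds, as prvInitialiseMutex() does
       with its xQueueGenericSend() call. */
    pxMutex->uxMessagesWaiting = 1;
}

static int xMiniMutexTake( MiniMutex_t *pxMutex )
{
    if( pxMutex->uxMessagesWaiting == 1 )
    {
        pxMutex->uxMessagesWaiting = 0;
        return 1;  /* pdPASS */
    }
    return 0;      /* pdFAIL: the real kernel would block here */
}

static int xMiniMutexGive( MiniMutex_t *pxMutex )
{
    if( pxMutex->uxMessagesWaiting == 0 )
    {
        pxMutex->uxMessagesWaiting = 1;
        return 1;
    }
    return 0;
}
```

The real implementation adds what this model omits: the holder's handle for priority inheritance, and blocking with a timeout when the token is absent.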
| fep | 0:62cd296ba2a7 | 519 | |
| fep | 0:62cd296ba2a7 | 520 | #if( ( configUSE_MUTEXES == 1 ) && ( configSUPPORT_STATIC_ALLOCATION == 1 ) ) |
| fep | 0:62cd296ba2a7 | 521 | |
| fep | 0:62cd296ba2a7 | 522 | QueueHandle_t xQueueCreateMutexStatic( const uint8_t ucQueueType, StaticQueue_t *pxStaticQueue ) |
| fep | 0:62cd296ba2a7 | 523 | { |
| fep | 0:62cd296ba2a7 | 524 | Queue_t *pxNewQueue; |
| fep | 0:62cd296ba2a7 | 525 | const UBaseType_t uxMutexLength = ( UBaseType_t ) 1, uxMutexSize = ( UBaseType_t ) 0; |
| fep | 0:62cd296ba2a7 | 526 | |
| fep | 0:62cd296ba2a7 | 527 | /* Prevent compiler warnings about unused parameters if |
| fep | 0:62cd296ba2a7 | 528 | configUSE_TRACE_FACILITY does not equal 1. */ |
| fep | 0:62cd296ba2a7 | 529 | ( void ) ucQueueType; |
| fep | 0:62cd296ba2a7 | 530 | |
| fep | 0:62cd296ba2a7 | 531 | pxNewQueue = ( Queue_t * ) xQueueGenericCreateStatic( uxMutexLength, uxMutexSize, NULL, pxStaticQueue, ucQueueType ); |
| fep | 0:62cd296ba2a7 | 532 | prvInitialiseMutex( pxNewQueue ); |
| fep | 0:62cd296ba2a7 | 533 | |
| fep | 0:62cd296ba2a7 | 534 | return pxNewQueue; |
| fep | 0:62cd296ba2a7 | 535 | } |
| fep | 0:62cd296ba2a7 | 536 | |
| fep | 0:62cd296ba2a7 | 537 | #endif /* configUSE_MUTEXES && configSUPPORT_STATIC_ALLOCATION */ |
| fep | 0:62cd296ba2a7 | 538 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 539 | |
| fep | 0:62cd296ba2a7 | 540 | #if ( ( configUSE_MUTEXES == 1 ) && ( INCLUDE_xSemaphoreGetMutexHolder == 1 ) ) |
| fep | 0:62cd296ba2a7 | 541 | |
| fep | 0:62cd296ba2a7 | 542 | void* xQueueGetMutexHolder( QueueHandle_t xSemaphore ) |
| fep | 0:62cd296ba2a7 | 543 | { |
| fep | 0:62cd296ba2a7 | 544 | void *pxReturn; |
| fep | 0:62cd296ba2a7 | 545 | |
| fep | 0:62cd296ba2a7 | 546 | /* This function is called by xSemaphoreGetMutexHolder(), and should not |
| fep | 0:62cd296ba2a7 | 547 | be called directly. Note: This is a good way of determining if the |
| fep | 0:62cd296ba2a7 | 548 | calling task is the mutex holder, but not a good way of determining the |
| fep | 0:62cd296ba2a7 | 549 | identity of the mutex holder, as the holder may change between the |
| fep | 0:62cd296ba2a7 | 550 | following critical section exiting and the function returning. */ |
| fep | 0:62cd296ba2a7 | 551 | taskENTER_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 552 | { |
| fep | 0:62cd296ba2a7 | 553 | if( ( ( Queue_t * ) xSemaphore )->uxQueueType == queueQUEUE_IS_MUTEX ) |
| fep | 0:62cd296ba2a7 | 554 | { |
| fep | 0:62cd296ba2a7 | 555 | pxReturn = ( void * ) ( ( Queue_t * ) xSemaphore )->pxMutexHolder; |
| fep | 0:62cd296ba2a7 | 556 | } |
| fep | 0:62cd296ba2a7 | 557 | else |
| fep | 0:62cd296ba2a7 | 558 | { |
| fep | 0:62cd296ba2a7 | 559 | pxReturn = NULL; |
| fep | 0:62cd296ba2a7 | 560 | } |
| fep | 0:62cd296ba2a7 | 561 | } |
| fep | 0:62cd296ba2a7 | 562 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 563 | |
| fep | 0:62cd296ba2a7 | 564 | return pxReturn; |
| fep | 0:62cd296ba2a7 | 565 | } /*lint !e818 xSemaphore cannot be a pointer to const because it is a typedef. */ |
| fep | 0:62cd296ba2a7 | 566 | |
| fep | 0:62cd296ba2a7 | 567 | #endif |
| fep | 0:62cd296ba2a7 | 568 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 569 | |
| fep | 0:62cd296ba2a7 | 570 | #if ( configUSE_RECURSIVE_MUTEXES == 1 ) |
| fep | 0:62cd296ba2a7 | 571 | |
| fep | 0:62cd296ba2a7 | 572 | BaseType_t xQueueGiveMutexRecursive( QueueHandle_t xMutex ) |
| fep | 0:62cd296ba2a7 | 573 | { |
| fep | 0:62cd296ba2a7 | 574 | BaseType_t xReturn; |
| fep | 0:62cd296ba2a7 | 575 | Queue_t * const pxMutex = ( Queue_t * ) xMutex; |
| fep | 0:62cd296ba2a7 | 576 | |
| fep | 0:62cd296ba2a7 | 577 | configASSERT( pxMutex ); |
| fep | 0:62cd296ba2a7 | 578 | |
| fep | 0:62cd296ba2a7 | 579 | /* If this is the task that holds the mutex then pxMutexHolder will not |
| fep | 0:62cd296ba2a7 | 580 | change outside of this task. If this task does not hold the mutex then |
| fep | 0:62cd296ba2a7 | 581 | pxMutexHolder can never coincidentally equal the task's handle, and as |
| fep | 0:62cd296ba2a7 | 582 | this is the only condition we are interested in it does not matter if |
| fep | 0:62cd296ba2a7 | 583 | pxMutexHolder is accessed simultaneously by another task. Therefore no |
| fep | 0:62cd296ba2a7 | 584 | mutual exclusion is required to test the pxMutexHolder variable. */ |
| fep | 0:62cd296ba2a7 | 585 | if( pxMutex->pxMutexHolder == ( void * ) xTaskGetCurrentTaskHandle() ) /*lint !e961 Not a redundant cast as TaskHandle_t is a typedef. */ |
| fep | 0:62cd296ba2a7 | 586 | { |
| fep | 0:62cd296ba2a7 | 587 | traceGIVE_MUTEX_RECURSIVE( pxMutex ); |
| fep | 0:62cd296ba2a7 | 588 | |
| fep | 0:62cd296ba2a7 | 589 | /* uxRecursiveCallCount cannot be zero if pxMutexHolder is equal to |
| fep | 0:62cd296ba2a7 | 590 | the task handle, therefore no underflow check is required. Also, |
| fep | 0:62cd296ba2a7 | 591 | uxRecursiveCallCount is only modified by the mutex holder, and as |
| fep | 0:62cd296ba2a7 | 592 | there can only be one, no mutual exclusion is required to modify the |
| fep | 0:62cd296ba2a7 | 593 | uxRecursiveCallCount member. */ |
| fep | 0:62cd296ba2a7 | 594 | ( pxMutex->u.uxRecursiveCallCount )--; |
| fep | 0:62cd296ba2a7 | 595 | |
| fep | 0:62cd296ba2a7 | 596 | /* Has the recursive call count unwound to 0? */ |
| fep | 0:62cd296ba2a7 | 597 | if( pxMutex->u.uxRecursiveCallCount == ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 598 | { |
| fep | 0:62cd296ba2a7 | 599 | /* Return the mutex. This will automatically unblock any other |
| fep | 0:62cd296ba2a7 | 600 | task that might be waiting to access the mutex. */ |
| fep | 0:62cd296ba2a7 | 601 | ( void ) xQueueGenericSend( pxMutex, NULL, queueMUTEX_GIVE_BLOCK_TIME, queueSEND_TO_BACK ); |
| fep | 0:62cd296ba2a7 | 602 | } |
| fep | 0:62cd296ba2a7 | 603 | else |
| fep | 0:62cd296ba2a7 | 604 | { |
| fep | 0:62cd296ba2a7 | 605 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 606 | } |
| fep | 0:62cd296ba2a7 | 607 | |
| fep | 0:62cd296ba2a7 | 608 | xReturn = pdPASS; |
| fep | 0:62cd296ba2a7 | 609 | } |
| fep | 0:62cd296ba2a7 | 610 | else |
| fep | 0:62cd296ba2a7 | 611 | { |
| fep | 0:62cd296ba2a7 | 612 | /* The mutex cannot be given because the calling task is not the |
| fep | 0:62cd296ba2a7 | 613 | holder. */ |
| fep | 0:62cd296ba2a7 | 614 | xReturn = pdFAIL; |
| fep | 0:62cd296ba2a7 | 615 | |
| fep | 0:62cd296ba2a7 | 616 | traceGIVE_MUTEX_RECURSIVE_FAILED( pxMutex ); |
| fep | 0:62cd296ba2a7 | 617 | } |
| fep | 0:62cd296ba2a7 | 618 | |
| fep | 0:62cd296ba2a7 | 619 | return xReturn; |
| fep | 0:62cd296ba2a7 | 620 | } |
| fep | 0:62cd296ba2a7 | 621 | |
| fep | 0:62cd296ba2a7 | 622 | #endif /* configUSE_RECURSIVE_MUTEXES */ |
| fep | 0:62cd296ba2a7 | 623 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 624 | |
| fep | 0:62cd296ba2a7 | 625 | #if ( configUSE_RECURSIVE_MUTEXES == 1 ) |
| fep | 0:62cd296ba2a7 | 626 | |
| fep | 0:62cd296ba2a7 | 627 | BaseType_t xQueueTakeMutexRecursive( QueueHandle_t xMutex, TickType_t xTicksToWait ) |
| fep | 0:62cd296ba2a7 | 628 | { |
| fep | 0:62cd296ba2a7 | 629 | BaseType_t xReturn; |
| fep | 0:62cd296ba2a7 | 630 | Queue_t * const pxMutex = ( Queue_t * ) xMutex; |
| fep | 0:62cd296ba2a7 | 631 | |
| fep | 0:62cd296ba2a7 | 632 | configASSERT( pxMutex ); |
| fep | 0:62cd296ba2a7 | 633 | |
| fep | 0:62cd296ba2a7 | 634 | /* Comments regarding mutual exclusion as per those within |
| fep | 0:62cd296ba2a7 | 635 | xQueueGiveMutexRecursive(). */ |
| fep | 0:62cd296ba2a7 | 636 | |
| fep | 0:62cd296ba2a7 | 637 | traceTAKE_MUTEX_RECURSIVE( pxMutex ); |
| fep | 0:62cd296ba2a7 | 638 | |
| fep | 0:62cd296ba2a7 | 639 | if( pxMutex->pxMutexHolder == ( void * ) xTaskGetCurrentTaskHandle() ) /*lint !e961 Cast is not redundant as TaskHandle_t is a typedef. */ |
| fep | 0:62cd296ba2a7 | 640 | { |
| fep | 0:62cd296ba2a7 | 641 | ( pxMutex->u.uxRecursiveCallCount )++; |
| fep | 0:62cd296ba2a7 | 642 | xReturn = pdPASS; |
| fep | 0:62cd296ba2a7 | 643 | } |
| fep | 0:62cd296ba2a7 | 644 | else |
| fep | 0:62cd296ba2a7 | 645 | { |
| fep | 0:62cd296ba2a7 | 646 | xReturn = xQueueGenericReceive( pxMutex, NULL, xTicksToWait, pdFALSE ); |
| fep | 0:62cd296ba2a7 | 647 | |
| fep | 0:62cd296ba2a7 | 648 | /* pdPASS will only be returned if the mutex was successfully |
| fep | 0:62cd296ba2a7 | 649 | obtained. The calling task may have entered the Blocked state |
| fep | 0:62cd296ba2a7 | 650 | before reaching here. */ |
| fep | 0:62cd296ba2a7 | 651 | if( xReturn != pdFAIL ) |
| fep | 0:62cd296ba2a7 | 652 | { |
| fep | 0:62cd296ba2a7 | 653 | ( pxMutex->u.uxRecursiveCallCount )++; |
| fep | 0:62cd296ba2a7 | 654 | } |
| fep | 0:62cd296ba2a7 | 655 | else |
| fep | 0:62cd296ba2a7 | 656 | { |
| fep | 0:62cd296ba2a7 | 657 | traceTAKE_MUTEX_RECURSIVE_FAILED( pxMutex ); |
| fep | 0:62cd296ba2a7 | 658 | } |
| fep | 0:62cd296ba2a7 | 659 | } |
| fep | 0:62cd296ba2a7 | 660 | |
| fep | 0:62cd296ba2a7 | 661 | return xReturn; |
| fep | 0:62cd296ba2a7 | 662 | } |
| fep | 0:62cd296ba2a7 | 663 | |
| fep | 0:62cd296ba2a7 | 664 | #endif /* configUSE_RECURSIVE_MUTEXES */ |
| fep | 0:62cd296ba2a7 | 665 | /*-----------------------------------------------------------*/ |
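The recursive take/give pair above hinges on `uxRecursiveCallCount`: only the holder may nest takes, and the mutex is actually returned only when the count unwinds to zero. A standalone model of that bookkeeping (plain C; the `Mini` names are hypothetical, and task handles are modelled as opaque pointers):

```c
#include <assert.h>
#include <stddef.h>

/* Standalone model, not FreeRTOS code.  Mirrors the logic of
   xQueueTakeMutexRecursive() / xQueueGiveMutexRecursive(): the holder
   may re-take without blocking, and the mutex is released only when
   every take has been matched by a give. */
typedef struct
{
    void *pxHolder;                 /* NULL when the mutex is free */
    unsigned uxRecursiveCallCount;
} MiniRecursiveMutex_t;

static int xMiniTakeRecursive( MiniRecursiveMutex_t *pxMutex, void *pxTask )
{
    if( pxMutex->pxHolder == pxTask )
    {
        pxMutex->uxRecursiveCallCount++;   /* nested take by the holder */
        return 1;
    }
    if( pxMutex->pxHolder == NULL )
    {
        pxMutex->pxHolder = pxTask;        /* first take */
        pxMutex->uxRecursiveCallCount = 1;
        return 1;
    }
    return 0;                              /* held by another task */
}

static int xMiniGiveRecursive( MiniRecursiveMutex_t *pxMutex, void *pxTask )
{
    if( pxMutex->pxHolder != pxTask )
    {
        return 0;                          /* caller is not the holder */
    }
    pxMutex->uxRecursiveCallCount--;
    if( pxMutex->uxRecursiveCallCount == 0 )
    {
        pxMutex->pxHolder = NULL;          /* fully unwound: release it */
    }
    return 1;
}
```

As the comments in the real code note, no mutual exclusion is needed around the count because only the single holder ever modifies it.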
| fep | 0:62cd296ba2a7 | 666 | |
| fep | 0:62cd296ba2a7 | 667 | #if( ( configUSE_COUNTING_SEMAPHORES == 1 ) && ( configSUPPORT_STATIC_ALLOCATION == 1 ) ) |
| fep | 0:62cd296ba2a7 | 668 | |
| fep | 0:62cd296ba2a7 | 669 | QueueHandle_t xQueueCreateCountingSemaphoreStatic( const UBaseType_t uxMaxCount, const UBaseType_t uxInitialCount, StaticQueue_t *pxStaticQueue ) |
| fep | 0:62cd296ba2a7 | 670 | { |
| fep | 0:62cd296ba2a7 | 671 | QueueHandle_t xHandle; |
| fep | 0:62cd296ba2a7 | 672 | |
| fep | 0:62cd296ba2a7 | 673 | configASSERT( uxMaxCount != 0 ); |
| fep | 0:62cd296ba2a7 | 674 | configASSERT( uxInitialCount <= uxMaxCount ); |
| fep | 0:62cd296ba2a7 | 675 | |
| fep | 0:62cd296ba2a7 | 676 | xHandle = xQueueGenericCreateStatic( uxMaxCount, queueSEMAPHORE_QUEUE_ITEM_LENGTH, NULL, pxStaticQueue, queueQUEUE_TYPE_COUNTING_SEMAPHORE ); |
| fep | 0:62cd296ba2a7 | 677 | |
| fep | 0:62cd296ba2a7 | 678 | if( xHandle != NULL ) |
| fep | 0:62cd296ba2a7 | 679 | { |
| fep | 0:62cd296ba2a7 | 680 | ( ( Queue_t * ) xHandle )->uxMessagesWaiting = uxInitialCount; |
| fep | 0:62cd296ba2a7 | 681 | |
| fep | 0:62cd296ba2a7 | 682 | traceCREATE_COUNTING_SEMAPHORE(); |
| fep | 0:62cd296ba2a7 | 683 | } |
| fep | 0:62cd296ba2a7 | 684 | else |
| fep | 0:62cd296ba2a7 | 685 | { |
| fep | 0:62cd296ba2a7 | 686 | traceCREATE_COUNTING_SEMAPHORE_FAILED(); |
| fep | 0:62cd296ba2a7 | 687 | } |
| fep | 0:62cd296ba2a7 | 688 | |
| fep | 0:62cd296ba2a7 | 689 | return xHandle; |
| fep | 0:62cd296ba2a7 | 690 | } |
| fep | 0:62cd296ba2a7 | 691 | |
| fep | 0:62cd296ba2a7 | 692 | #endif /* ( ( configUSE_COUNTING_SEMAPHORES == 1 ) && ( configSUPPORT_STATIC_ALLOCATION == 1 ) ) */ |
| fep | 0:62cd296ba2a7 | 693 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 694 | |
| fep | 0:62cd296ba2a7 | 695 | #if( ( configUSE_COUNTING_SEMAPHORES == 1 ) && ( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) ) |
| fep | 0:62cd296ba2a7 | 696 | |
| fep | 0:62cd296ba2a7 | 697 | QueueHandle_t xQueueCreateCountingSemaphore( const UBaseType_t uxMaxCount, const UBaseType_t uxInitialCount ) |
| fep | 0:62cd296ba2a7 | 698 | { |
| fep | 0:62cd296ba2a7 | 699 | QueueHandle_t xHandle; |
| fep | 0:62cd296ba2a7 | 700 | |
| fep | 0:62cd296ba2a7 | 701 | configASSERT( uxMaxCount != 0 ); |
| fep | 0:62cd296ba2a7 | 702 | configASSERT( uxInitialCount <= uxMaxCount ); |
| fep | 0:62cd296ba2a7 | 703 | |
| fep | 0:62cd296ba2a7 | 704 | xHandle = xQueueGenericCreate( uxMaxCount, queueSEMAPHORE_QUEUE_ITEM_LENGTH, queueQUEUE_TYPE_COUNTING_SEMAPHORE ); |
| fep | 0:62cd296ba2a7 | 705 | |
| fep | 0:62cd296ba2a7 | 706 | if( xHandle != NULL ) |
| fep | 0:62cd296ba2a7 | 707 | { |
| fep | 0:62cd296ba2a7 | 708 | ( ( Queue_t * ) xHandle )->uxMessagesWaiting = uxInitialCount; |
| fep | 0:62cd296ba2a7 | 709 | |
| fep | 0:62cd296ba2a7 | 710 | traceCREATE_COUNTING_SEMAPHORE(); |
| fep | 0:62cd296ba2a7 | 711 | } |
| fep | 0:62cd296ba2a7 | 712 | else |
| fep | 0:62cd296ba2a7 | 713 | { |
| fep | 0:62cd296ba2a7 | 714 | traceCREATE_COUNTING_SEMAPHORE_FAILED(); |
| fep | 0:62cd296ba2a7 | 715 | } |
| fep | 0:62cd296ba2a7 | 716 | |
| fep | 0:62cd296ba2a7 | 717 | return xHandle; |
| fep | 0:62cd296ba2a7 | 718 | } |
| fep | 0:62cd296ba2a7 | 719 | |
| fep | 0:62cd296ba2a7 | 720 | #endif /* ( ( configUSE_COUNTING_SEMAPHORES == 1 ) && ( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) ) */ |
| fep | 0:62cd296ba2a7 | 721 | /*-----------------------------------------------------------*/ |
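Both counting-semaphore creation functions above build a queue of `uxMaxCount` zero-sized items and then write `uxInitialCount` straight into `uxMessagesWaiting`: the semaphore count simply *is* the number of messages waiting. A standalone model of that equivalence (plain C; the `Mini` names are hypothetical, not FreeRTOS APIs):

```c
#include <assert.h>

/* Standalone model, not FreeRTOS code: a counting semaphore is a queue
   of zero-sized items, so its count is simply uxMessagesWaiting, which
   xQueueCreateCountingSemaphore() preloads with uxInitialCount. */
typedef struct
{
    unsigned uxMaxCount;
    unsigned uxMessagesWaiting;
} MiniCountingSem_t;

static int xMiniSemCreate( MiniCountingSem_t *pxSem, unsigned uxMaxCount, unsigned uxInitialCount )
{
    if( ( uxMaxCount == 0 ) || ( uxInitialCount > uxMaxCount ) )
    {
        return 0;  /* the real code configASSERT()s on these conditions */
    }
    pxSem->uxMaxCount = uxMaxCount;
    pxSem->uxMessagesWaiting = uxInitialCount;
    return 1;
}

static int xMiniSemGive( MiniCountingSem_t *pxSem )
{
    if( pxSem->uxMessagesWaiting < pxSem->uxMaxCount )
    {
        pxSem->uxMessagesWaiting++;  /* "send" a zero-sized item */
        return 1;
    }
    return 0;  /* queue full: count already at its maximum */
}

static int xMiniSemTake( MiniCountingSem_t *pxSem )
{
    if( pxSem->uxMessagesWaiting > 0 )
    {
        pxSem->uxMessagesWaiting--;  /* "receive" a zero-sized item */
        return 1;
    }
    return 0;  /* queue empty: the real kernel would block here */
}
```

Because the items are zero-sized, no data is ever copied on give or take; only the count moves, which is what makes semaphore operations cheap.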
| fep | 0:62cd296ba2a7 | 722 | |
| fep | 0:62cd296ba2a7 | 723 | BaseType_t xQueueGenericSend( QueueHandle_t xQueue, const void * const pvItemToQueue, TickType_t xTicksToWait, const BaseType_t xCopyPosition ) |
| fep | 0:62cd296ba2a7 | 724 | { |
| fep | 0:62cd296ba2a7 | 725 | BaseType_t xEntryTimeSet = pdFALSE, xYieldRequired; |
| fep | 0:62cd296ba2a7 | 726 | TimeOut_t xTimeOut; |
| fep | 0:62cd296ba2a7 | 727 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
| fep | 0:62cd296ba2a7 | 728 | |
| fep | 0:62cd296ba2a7 | 729 | configASSERT( pxQueue ); |
| fep | 0:62cd296ba2a7 | 730 | configASSERT( !( ( pvItemToQueue == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) ); |
| fep | 0:62cd296ba2a7 | 731 | configASSERT( !( ( xCopyPosition == queueOVERWRITE ) && ( pxQueue->uxLength != 1 ) ) ); |
| fep | 0:62cd296ba2a7 | 732 | #if ( ( INCLUDE_xTaskGetSchedulerState == 1 ) || ( configUSE_TIMERS == 1 ) ) |
| fep | 0:62cd296ba2a7 | 733 | { |
| fep | 0:62cd296ba2a7 | 734 | configASSERT( !( ( xTaskGetSchedulerState() == taskSCHEDULER_SUSPENDED ) && ( xTicksToWait != 0 ) ) ); |
| fep | 0:62cd296ba2a7 | 735 | } |
| fep | 0:62cd296ba2a7 | 736 | #endif |
| fep | 0:62cd296ba2a7 | 737 | |
| fep | 0:62cd296ba2a7 | 738 | |
| fep | 0:62cd296ba2a7 | 739 | /* This function relaxes the coding standard somewhat to allow return |
| fep | 0:62cd296ba2a7 | 740 | statements within the function itself. This is done in the interest |
| fep | 0:62cd296ba2a7 | 741 | of execution time efficiency. */ |
| fep | 0:62cd296ba2a7 | 742 | for( ;; ) |
| fep | 0:62cd296ba2a7 | 743 | { |
| fep | 0:62cd296ba2a7 | 744 | taskENTER_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 745 | { |
| fep | 0:62cd296ba2a7 | 746 | /* Is there room on the queue now? The running task must be the |
| fep | 0:62cd296ba2a7 | 747 | highest priority task wanting to access the queue. If the head item |
| fep | 0:62cd296ba2a7 | 748 | in the queue is to be overwritten then it does not matter if the |
| fep | 0:62cd296ba2a7 | 749 | queue is full. */ |
| fep | 0:62cd296ba2a7 | 750 | if( ( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) || ( xCopyPosition == queueOVERWRITE ) ) |
| fep | 0:62cd296ba2a7 | 751 | { |
| fep | 0:62cd296ba2a7 | 752 | traceQUEUE_SEND( pxQueue ); |
| fep | 0:62cd296ba2a7 | 753 | xYieldRequired = prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition ); |
| fep | 0:62cd296ba2a7 | 754 | |
| fep | 0:62cd296ba2a7 | 755 | #if ( configUSE_QUEUE_SETS == 1 ) |
| fep | 0:62cd296ba2a7 | 756 | { |
| fep | 0:62cd296ba2a7 | 757 | if( pxQueue->pxQueueSetContainer != NULL ) |
| fep | 0:62cd296ba2a7 | 758 | { |
| fep | 0:62cd296ba2a7 | 759 | if( prvNotifyQueueSetContainer( pxQueue, xCopyPosition ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 760 | { |
| fep | 0:62cd296ba2a7 | 761 | /* The queue is a member of a queue set, and posting |
| fep | 0:62cd296ba2a7 | 762 | to the queue set caused a higher priority task to |
| fep | 0:62cd296ba2a7 | 763 | unblock. A context switch is required. */ |
| fep | 0:62cd296ba2a7 | 764 | queueYIELD_IF_USING_PREEMPTION(); |
| fep | 0:62cd296ba2a7 | 765 | } |
| fep | 0:62cd296ba2a7 | 766 | else |
| fep | 0:62cd296ba2a7 | 767 | { |
| fep | 0:62cd296ba2a7 | 768 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 769 | } |
| fep | 0:62cd296ba2a7 | 770 | } |
| fep | 0:62cd296ba2a7 | 771 | else |
| fep | 0:62cd296ba2a7 | 772 | { |
| fep | 0:62cd296ba2a7 | 773 | /* If there was a task waiting for data to arrive on the |
| fep | 0:62cd296ba2a7 | 774 | queue then unblock it now. */ |
| fep | 0:62cd296ba2a7 | 775 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 776 | { |
| fep | 0:62cd296ba2a7 | 777 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 778 | { |
| fep | 0:62cd296ba2a7 | 779 | /* The unblocked task has a priority higher than |
| fep | 0:62cd296ba2a7 | 780 | our own so yield immediately. Yes it is ok to |
| fep | 0:62cd296ba2a7 | 781 | do this from within the critical section - the |
| fep | 0:62cd296ba2a7 | 782 | kernel takes care of that. */ |
| fep | 0:62cd296ba2a7 | 783 | queueYIELD_IF_USING_PREEMPTION(); |
| fep | 0:62cd296ba2a7 | 784 | } |
| fep | 0:62cd296ba2a7 | 785 | else |
| fep | 0:62cd296ba2a7 | 786 | { |
| fep | 0:62cd296ba2a7 | 787 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 788 | } |
| fep | 0:62cd296ba2a7 | 789 | } |
| fep | 0:62cd296ba2a7 | 790 | else if( xYieldRequired != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 791 | { |
| fep | 0:62cd296ba2a7 | 792 | /* This path is a special case that will only get |
| fep | 0:62cd296ba2a7 | 793 | executed if the task was holding multiple mutexes |
| fep | 0:62cd296ba2a7 | 794 | and the mutexes were given back in an order that is |
| fep | 0:62cd296ba2a7 | 795 | different to that in which they were taken. */ |
| fep | 0:62cd296ba2a7 | 796 | queueYIELD_IF_USING_PREEMPTION(); |
| fep | 0:62cd296ba2a7 | 797 | } |
| fep | 0:62cd296ba2a7 | 798 | else |
| fep | 0:62cd296ba2a7 | 799 | { |
| fep | 0:62cd296ba2a7 | 800 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 801 | } |
| fep | 0:62cd296ba2a7 | 802 | } |
| fep | 0:62cd296ba2a7 | 803 | } |
| fep | 0:62cd296ba2a7 | 804 | #else /* configUSE_QUEUE_SETS */ |
| fep | 0:62cd296ba2a7 | 805 | { |
| fep | 0:62cd296ba2a7 | 806 | /* If there was a task waiting for data to arrive on the |
| fep | 0:62cd296ba2a7 | 807 | queue then unblock it now. */ |
| fep | 0:62cd296ba2a7 | 808 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 809 | { |
| fep | 0:62cd296ba2a7 | 810 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 811 | { |
| fep | 0:62cd296ba2a7 | 812 | /* The unblocked task has a priority higher than |
| fep | 0:62cd296ba2a7 | 813 | our own so yield immediately. Yes it is ok to do |
| fep | 0:62cd296ba2a7 | 814 | this from within the critical section - the kernel |
| fep | 0:62cd296ba2a7 | 815 | takes care of that. */ |
| fep | 0:62cd296ba2a7 | 816 | queueYIELD_IF_USING_PREEMPTION(); |
| fep | 0:62cd296ba2a7 | 817 | } |
| fep | 0:62cd296ba2a7 | 818 | else |
| fep | 0:62cd296ba2a7 | 819 | { |
| fep | 0:62cd296ba2a7 | 820 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 821 | } |
| fep | 0:62cd296ba2a7 | 822 | } |
| fep | 0:62cd296ba2a7 | 823 | else if( xYieldRequired != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 824 | { |
| fep | 0:62cd296ba2a7 | 825 | /* This path is a special case that will only get |
| fep | 0:62cd296ba2a7 | 826 | executed if the task was holding multiple mutexes and |
| fep | 0:62cd296ba2a7 | 827 | the mutexes were given back in an order that is |
| fep | 0:62cd296ba2a7 | 828 | different to that in which they were taken. */ |
| fep | 0:62cd296ba2a7 | 829 | queueYIELD_IF_USING_PREEMPTION(); |
| fep | 0:62cd296ba2a7 | 830 | } |
| fep | 0:62cd296ba2a7 | 831 | else |
| fep | 0:62cd296ba2a7 | 832 | { |
| fep | 0:62cd296ba2a7 | 833 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 834 | } |
| fep | 0:62cd296ba2a7 | 835 | } |
| fep | 0:62cd296ba2a7 | 836 | #endif /* configUSE_QUEUE_SETS */ |
| fep | 0:62cd296ba2a7 | 837 | |
| fep | 0:62cd296ba2a7 | 838 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 839 | return pdPASS; |
| fep | 0:62cd296ba2a7 | 840 | } |
| fep | 0:62cd296ba2a7 | 841 | else |
| fep | 0:62cd296ba2a7 | 842 | { |
| fep | 0:62cd296ba2a7 | 843 | if( xTicksToWait == ( TickType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 844 | { |
| fep | 0:62cd296ba2a7 | 845 | /* The queue was full and no block time is specified (or |
| fep | 0:62cd296ba2a7 | 846 | the block time has expired) so leave now. */ |
| fep | 0:62cd296ba2a7 | 847 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 848 | |
| fep | 0:62cd296ba2a7 | 849 | /* Return to the original privilege level before exiting |
| fep | 0:62cd296ba2a7 | 850 | the function. */ |
| fep | 0:62cd296ba2a7 | 851 | traceQUEUE_SEND_FAILED( pxQueue ); |
| fep | 0:62cd296ba2a7 | 852 | return errQUEUE_FULL; |
| fep | 0:62cd296ba2a7 | 853 | } |
| fep | 0:62cd296ba2a7 | 854 | else if( xEntryTimeSet == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 855 | { |
| fep | 0:62cd296ba2a7 | 856 | /* The queue was full and a block time was specified so |
| fep | 0:62cd296ba2a7 | 857 | configure the timeout structure. */ |
| fep | 0:62cd296ba2a7 | 858 | vTaskSetTimeOutState( &xTimeOut ); |
| fep | 0:62cd296ba2a7 | 859 | xEntryTimeSet = pdTRUE; |
| fep | 0:62cd296ba2a7 | 860 | } |
| fep | 0:62cd296ba2a7 | 861 | else |
| fep | 0:62cd296ba2a7 | 862 | { |
| fep | 0:62cd296ba2a7 | 863 | /* Entry time was already set. */ |
| fep | 0:62cd296ba2a7 | 864 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 865 | } |
| fep | 0:62cd296ba2a7 | 866 | } |
| fep | 0:62cd296ba2a7 | 867 | } |
| fep | 0:62cd296ba2a7 | 868 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 869 | |
| fep | 0:62cd296ba2a7 | 870 | /* Interrupts and other tasks can send to and receive from the queue |
| fep | 0:62cd296ba2a7 | 871 | now the critical section has been exited. */ |
| fep | 0:62cd296ba2a7 | 872 | |
| fep | 0:62cd296ba2a7 | 873 | vTaskSuspendAll(); |
| fep | 0:62cd296ba2a7 | 874 | prvLockQueue( pxQueue ); |
| fep | 0:62cd296ba2a7 | 875 | |
| fep | 0:62cd296ba2a7 | 876 | /* Update the timeout state to see if it has expired yet. */ |
| fep | 0:62cd296ba2a7 | 877 | if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 878 | { |
| fep | 0:62cd296ba2a7 | 879 | if( prvIsQueueFull( pxQueue ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 880 | { |
| fep | 0:62cd296ba2a7 | 881 | traceBLOCKING_ON_QUEUE_SEND( pxQueue ); |
| fep | 0:62cd296ba2a7 | 882 | vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToSend ), xTicksToWait ); |
| fep | 0:62cd296ba2a7 | 883 | |
| fep | 0:62cd296ba2a7 | 884 | /* Unlocking the queue means queue events can affect the |
| fep | 0:62cd296ba2a7 | 885 | event list. It is possible that interrupts occurring now |
| fep | 0:62cd296ba2a7 | 886 | remove this task from the event list again - but as the |
| fep | 0:62cd296ba2a7 | 887 | scheduler is suspended the task will go onto the pending |
| fep | 0:62cd296ba2a7 | 888 | ready list instead of the actual ready list. */ |
| fep | 0:62cd296ba2a7 | 889 | prvUnlockQueue( pxQueue ); |
| fep | 0:62cd296ba2a7 | 890 | |
| fep | 0:62cd296ba2a7 | 891 | /* Resuming the scheduler will move tasks from the pending |
| fep | 0:62cd296ba2a7 | 892 | ready list into the ready list - so it is feasible that this |
| fep | 0:62cd296ba2a7 | 893 | task is already in a ready list before it yields - in which |
| fep | 0:62cd296ba2a7 | 894 | case the yield will not cause a context switch unless there |
| fep | 0:62cd296ba2a7 | 895 | is also a higher priority task in the pending ready list. */ |
| fep | 0:62cd296ba2a7 | 896 | if( xTaskResumeAll() == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 897 | { |
| fep | 0:62cd296ba2a7 | 898 | portYIELD_WITHIN_API(); |
| fep | 0:62cd296ba2a7 | 899 | } |
| fep | 0:62cd296ba2a7 | 900 | } |
| fep | 0:62cd296ba2a7 | 901 | else |
| fep | 0:62cd296ba2a7 | 902 | { |
| fep | 0:62cd296ba2a7 | 903 | /* Try again. */ |
| fep | 0:62cd296ba2a7 | 904 | prvUnlockQueue( pxQueue ); |
| fep | 0:62cd296ba2a7 | 905 | ( void ) xTaskResumeAll(); |
| fep | 0:62cd296ba2a7 | 906 | } |
| fep | 0:62cd296ba2a7 | 907 | } |
| fep | 0:62cd296ba2a7 | 908 | else |
| fep | 0:62cd296ba2a7 | 909 | { |
| fep | 0:62cd296ba2a7 | 910 | /* The timeout has expired. */ |
| fep | 0:62cd296ba2a7 | 911 | prvUnlockQueue( pxQueue ); |
| fep | 0:62cd296ba2a7 | 912 | ( void ) xTaskResumeAll(); |
| fep | 0:62cd296ba2a7 | 913 | |
| fep | 0:62cd296ba2a7 | 914 | traceQUEUE_SEND_FAILED( pxQueue ); |
| fep | 0:62cd296ba2a7 | 915 | return errQUEUE_FULL; |
| fep | 0:62cd296ba2a7 | 916 | } |
| fep | 0:62cd296ba2a7 | 917 | } |
| fep | 0:62cd296ba2a7 | 918 | } |
| fep | 0:62cd296ba2a7 | 919 | /*-----------------------------------------------------------*/ |
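`xQueueGenericSend()` above accepts a copy position: send-to-back appends, send-to-front prepends, and `queueOVERWRITE` replaces the item of a length-1 queue even when it is full (which is why the full-queue check is bypassed for it). A standalone sketch of those three behaviours on a small ring of `int` items (plain C; `MiniRing_t` and the `mini*` names are hypothetical, and blocking/timeouts are reduced to an immediate failure return):

```c
#include <assert.h>

/* Standalone sketch, not FreeRTOS code: the three copy positions
   accepted by xQueueGenericSend(), modelled on a fixed ring of ints. */
#define MINI_LEN 4

typedef struct
{
    int xItems[ MINI_LEN ];
    unsigned uxHead;     /* index of the oldest item */
    unsigned uxCount;    /* number of items currently stored */
    unsigned uxLength;   /* logical capacity; 1 when overwrite is used */
} MiniRing_t;

enum { miniSEND_TO_BACK, miniSEND_TO_FRONT, miniOVERWRITE };

static int xMiniSend( MiniRing_t *pxQ, int xValue, int xCopyPosition )
{
    /* A full queue only rejects the send when not overwriting, matching
       the ( uxMessagesWaiting < uxLength ) || ( queueOVERWRITE ) test. */
    if( ( pxQ->uxCount >= pxQ->uxLength ) && ( xCopyPosition != miniOVERWRITE ) )
    {
        return 0;  /* errQUEUE_FULL: the real code would block or time out */
    }

    if( xCopyPosition == miniSEND_TO_FRONT )
    {
        pxQ->uxHead = ( pxQ->uxHead + MINI_LEN - 1 ) % MINI_LEN;
        pxQ->xItems[ pxQ->uxHead ] = xValue;       /* prepend */
        pxQ->uxCount++;
    }
    else if( xCopyPosition == miniOVERWRITE )
    {
        pxQ->xItems[ pxQ->uxHead ] = xValue;       /* replace in place */
        if( pxQ->uxCount == 0 ) pxQ->uxCount = 1;  /* count unchanged if full */
    }
    else  /* miniSEND_TO_BACK */
    {
        pxQ->xItems[ ( pxQ->uxHead + pxQ->uxCount ) % MINI_LEN ] = xValue;
        pxQ->uxCount++;
    }
    return 1;  /* pdPASS */
}

static int xMiniReceive( MiniRing_t *pxQ, int *pxValue )
{
    if( pxQ->uxCount == 0 ) return 0;
    *pxValue = pxQ->xItems[ pxQ->uxHead ];
    pxQ->uxHead = ( pxQ->uxHead + 1 ) % MINI_LEN;
    pxQ->uxCount--;
    return 1;
}
```

The real function asserts that `queueOVERWRITE` is only used on length-1 queues, which keeps "replace the head item" and "replace the only item" the same operation.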
| fep | 0:62cd296ba2a7 | 920 | |
| fep | 0:62cd296ba2a7 | 921 | BaseType_t xQueueGenericSendFromISR( QueueHandle_t xQueue, const void * const pvItemToQueue, BaseType_t * const pxHigherPriorityTaskWoken, const BaseType_t xCopyPosition ) |
| fep | 0:62cd296ba2a7 | 922 | { |
| fep | 0:62cd296ba2a7 | 923 | BaseType_t xReturn; |
| fep | 0:62cd296ba2a7 | 924 | UBaseType_t uxSavedInterruptStatus; |
| fep | 0:62cd296ba2a7 | 925 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
| fep | 0:62cd296ba2a7 | 926 | |
| fep | 0:62cd296ba2a7 | 927 | configASSERT( pxQueue ); |
| fep | 0:62cd296ba2a7 | 928 | configASSERT( !( ( pvItemToQueue == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) ); |
| fep | 0:62cd296ba2a7 | 929 | configASSERT( !( ( xCopyPosition == queueOVERWRITE ) && ( pxQueue->uxLength != 1 ) ) ); |
| fep | 0:62cd296ba2a7 | 930 | |
| fep | 0:62cd296ba2a7 | 931 | /* RTOS ports that support interrupt nesting have the concept of a maximum |
| fep | 0:62cd296ba2a7 | 932 | system call (or maximum API call) interrupt priority. Interrupts that are |
| fep | 0:62cd296ba2a7 | 933 | above the maximum system call priority are kept permanently enabled, even |
| fep | 0:62cd296ba2a7 | 934 | when the RTOS kernel is in a critical section, but cannot make any calls to |
| fep | 0:62cd296ba2a7 | 935 | FreeRTOS API functions. If configASSERT() is defined in FreeRTOSConfig.h |
| fep | 0:62cd296ba2a7 | 936 | then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion |
| fep | 0:62cd296ba2a7 | 937 | failure if a FreeRTOS API function is called from an interrupt that has been |
| fep | 0:62cd296ba2a7 | 938 | assigned a priority above the configured maximum system call priority. |
| fep | 0:62cd296ba2a7 | 939 | Only FreeRTOS functions that end in FromISR can be called from interrupts |
| fep | 0:62cd296ba2a7 | 940 | that have been assigned a priority at or (logically) below the maximum |
| fep | 0:62cd296ba2a7 | 941 | system call interrupt priority. FreeRTOS maintains a separate interrupt |
| fep | 0:62cd296ba2a7 | 942 | safe API to ensure interrupt entry is as fast and as simple as possible. |
| fep | 0:62cd296ba2a7 | 943 | More information (albeit Cortex-M specific) is provided on the following |
| fep | 0:62cd296ba2a7 | 944 | link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */ |
| fep | 0:62cd296ba2a7 | 945 | portASSERT_IF_INTERRUPT_PRIORITY_INVALID(); |
| fep | 0:62cd296ba2a7 | 946 | |
| fep | 0:62cd296ba2a7 | 947 | /* Similar to xQueueGenericSend, except without blocking if there is no room |
| fep | 0:62cd296ba2a7 | 948 | in the queue. Also don't directly wake a task that was blocked on a queue |
| fep | 0:62cd296ba2a7 | 949 | read, instead return a flag to say whether a context switch is required or |
| fep | 0:62cd296ba2a7 | 950 | not (i.e. has a task with a higher priority than us been woken by this |
| fep | 0:62cd296ba2a7 | 951 | post). */ |
| fep | 0:62cd296ba2a7 | 952 | uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR(); |
| fep | 0:62cd296ba2a7 | 953 | { |
| fep | 0:62cd296ba2a7 | 954 | if( ( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) || ( xCopyPosition == queueOVERWRITE ) ) |
| fep | 0:62cd296ba2a7 | 955 | { |
| fep | 0:62cd296ba2a7 | 956 | const int8_t cTxLock = pxQueue->cTxLock; |
| fep | 0:62cd296ba2a7 | 957 | |
| fep | 0:62cd296ba2a7 | 958 | traceQUEUE_SEND_FROM_ISR( pxQueue ); |
| fep | 0:62cd296ba2a7 | 959 | |
| fep | 0:62cd296ba2a7 | 960 | /* Semaphores use xQueueGiveFromISR(), so pxQueue will not be a |
| fep | 0:62cd296ba2a7 | 961 | semaphore or mutex. That means prvCopyDataToQueue() cannot result |
| fep | 0:62cd296ba2a7 | 962 | in a task disinheriting a priority and prvCopyDataToQueue() can be |
| fep | 0:62cd296ba2a7 | 963 | called here even though the disinherit function does not check if |
| fep | 0:62cd296ba2a7 | 964 | the scheduler is suspended before accessing the ready lists. */ |
| fep | 0:62cd296ba2a7 | 965 | ( void ) prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition ); |
| fep | 0:62cd296ba2a7 | 966 | |
| fep | 0:62cd296ba2a7 | 967 | /* The event list is not altered if the queue is locked. This will |
| fep | 0:62cd296ba2a7 | 968 | be done when the queue is unlocked later. */ |
| fep | 0:62cd296ba2a7 | 969 | if( cTxLock == queueUNLOCKED ) |
| fep | 0:62cd296ba2a7 | 970 | { |
| fep | 0:62cd296ba2a7 | 971 | #if ( configUSE_QUEUE_SETS == 1 ) |
| fep | 0:62cd296ba2a7 | 972 | { |
| fep | 0:62cd296ba2a7 | 973 | if( pxQueue->pxQueueSetContainer != NULL ) |
| fep | 0:62cd296ba2a7 | 974 | { |
| fep | 0:62cd296ba2a7 | 975 | if( prvNotifyQueueSetContainer( pxQueue, xCopyPosition ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 976 | { |
| fep | 0:62cd296ba2a7 | 977 | /* The queue is a member of a queue set, and posting |
| fep | 0:62cd296ba2a7 | 978 | to the queue set caused a higher priority task to |
| fep | 0:62cd296ba2a7 | 979 | unblock. A context switch is required. */ |
| fep | 0:62cd296ba2a7 | 980 | if( pxHigherPriorityTaskWoken != NULL ) |
| fep | 0:62cd296ba2a7 | 981 | { |
| fep | 0:62cd296ba2a7 | 982 | *pxHigherPriorityTaskWoken = pdTRUE; |
| fep | 0:62cd296ba2a7 | 983 | } |
| fep | 0:62cd296ba2a7 | 984 | else |
| fep | 0:62cd296ba2a7 | 985 | { |
| fep | 0:62cd296ba2a7 | 986 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 987 | } |
| fep | 0:62cd296ba2a7 | 988 | } |
| fep | 0:62cd296ba2a7 | 989 | else |
| fep | 0:62cd296ba2a7 | 990 | { |
| fep | 0:62cd296ba2a7 | 991 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 992 | } |
| fep | 0:62cd296ba2a7 | 993 | } |
| fep | 0:62cd296ba2a7 | 994 | else |
| fep | 0:62cd296ba2a7 | 995 | { |
| fep | 0:62cd296ba2a7 | 996 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 997 | { |
| fep | 0:62cd296ba2a7 | 998 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 999 | { |
| fep | 0:62cd296ba2a7 | 1000 | /* The task waiting has a higher priority so |
| fep | 0:62cd296ba2a7 | 1001 | record that a context switch is required. */ |
| fep | 0:62cd296ba2a7 | 1002 | if( pxHigherPriorityTaskWoken != NULL ) |
| fep | 0:62cd296ba2a7 | 1003 | { |
| fep | 0:62cd296ba2a7 | 1004 | *pxHigherPriorityTaskWoken = pdTRUE; |
| fep | 0:62cd296ba2a7 | 1005 | } |
| fep | 0:62cd296ba2a7 | 1006 | else |
| fep | 0:62cd296ba2a7 | 1007 | { |
| fep | 0:62cd296ba2a7 | 1008 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1009 | } |
| fep | 0:62cd296ba2a7 | 1010 | } |
| fep | 0:62cd296ba2a7 | 1011 | else |
| fep | 0:62cd296ba2a7 | 1012 | { |
| fep | 0:62cd296ba2a7 | 1013 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1014 | } |
| fep | 0:62cd296ba2a7 | 1015 | } |
| fep | 0:62cd296ba2a7 | 1016 | else |
| fep | 0:62cd296ba2a7 | 1017 | { |
| fep | 0:62cd296ba2a7 | 1018 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1019 | } |
| fep | 0:62cd296ba2a7 | 1020 | } |
| fep | 0:62cd296ba2a7 | 1021 | } |
| fep | 0:62cd296ba2a7 | 1022 | #else /* configUSE_QUEUE_SETS */ |
| fep | 0:62cd296ba2a7 | 1023 | { |
| fep | 0:62cd296ba2a7 | 1024 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1025 | { |
| fep | 0:62cd296ba2a7 | 1026 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1027 | { |
| fep | 0:62cd296ba2a7 | 1028 | /* The task waiting has a higher priority so record that a |
| fep | 0:62cd296ba2a7 | 1029 | context switch is required. */ |
| fep | 0:62cd296ba2a7 | 1030 | if( pxHigherPriorityTaskWoken != NULL ) |
| fep | 0:62cd296ba2a7 | 1031 | { |
| fep | 0:62cd296ba2a7 | 1032 | *pxHigherPriorityTaskWoken = pdTRUE; |
| fep | 0:62cd296ba2a7 | 1033 | } |
| fep | 0:62cd296ba2a7 | 1034 | else |
| fep | 0:62cd296ba2a7 | 1035 | { |
| fep | 0:62cd296ba2a7 | 1036 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1037 | } |
| fep | 0:62cd296ba2a7 | 1038 | } |
| fep | 0:62cd296ba2a7 | 1039 | else |
| fep | 0:62cd296ba2a7 | 1040 | { |
| fep | 0:62cd296ba2a7 | 1041 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1042 | } |
| fep | 0:62cd296ba2a7 | 1043 | } |
| fep | 0:62cd296ba2a7 | 1044 | else |
| fep | 0:62cd296ba2a7 | 1045 | { |
| fep | 0:62cd296ba2a7 | 1046 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1047 | } |
| fep | 0:62cd296ba2a7 | 1048 | } |
| fep | 0:62cd296ba2a7 | 1049 | #endif /* configUSE_QUEUE_SETS */ |
| fep | 0:62cd296ba2a7 | 1050 | } |
| fep | 0:62cd296ba2a7 | 1051 | else |
| fep | 0:62cd296ba2a7 | 1052 | { |
| fep | 0:62cd296ba2a7 | 1053 | /* Increment the lock count so the task that unlocks the queue |
| fep | 0:62cd296ba2a7 | 1054 | knows that data was posted while it was locked. */ |
| fep | 0:62cd296ba2a7 | 1055 | pxQueue->cTxLock = ( int8_t ) ( cTxLock + 1 ); |
| fep | 0:62cd296ba2a7 | 1056 | } |
| fep | 0:62cd296ba2a7 | 1057 | |
| fep | 0:62cd296ba2a7 | 1058 | xReturn = pdPASS; |
| fep | 0:62cd296ba2a7 | 1059 | } |
| fep | 0:62cd296ba2a7 | 1060 | else |
| fep | 0:62cd296ba2a7 | 1061 | { |
| fep | 0:62cd296ba2a7 | 1062 | traceQUEUE_SEND_FROM_ISR_FAILED( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1063 | xReturn = errQUEUE_FULL; |
| fep | 0:62cd296ba2a7 | 1064 | } |
| fep | 0:62cd296ba2a7 | 1065 | } |
| fep | 0:62cd296ba2a7 | 1066 | portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus ); |
| fep | 0:62cd296ba2a7 | 1067 | |
| fep | 0:62cd296ba2a7 | 1068 | return xReturn; |
| fep | 0:62cd296ba2a7 | 1069 | } |
| fep | 0:62cd296ba2a7 | 1070 | /*-----------------------------------------------------------*/ |
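Editor's note: a minimal usage sketch of the function above, as it would typically be called through the `xQueueSendFromISR()` macro from an interrupt handler. The queue handle, ISR name, and byte value are assumptions for illustration; only the FreeRTOS calls are real. The ISR never blocks: if the queue is full the call simply returns `errQUEUE_FULL`.

```c
#include <stdint.h>

#include "FreeRTOS.h"
#include "queue.h"

/* Assumed to have been created elsewhere, e.g. with
xQueueCreate( 32, sizeof( uint8_t ) ). */
static QueueHandle_t xUartRxQueue;

/* Hypothetical UART receive interrupt handler. */
void UART_RX_IRQHandler( void )
{
BaseType_t xHigherPriorityTaskWoken = pdFALSE;
uint8_t ucByte = ( uint8_t ) 0x55; /* Would normally be read from hardware. */

	/* Post the byte without blocking; on failure the return value is
	errQUEUE_FULL rather than the ISR waiting for space. */
	( void ) xQueueSendFromISR( xUartRxQueue, &ucByte, &xHigherPriorityTaskWoken );

	/* If posting unblocked a task of higher priority than the one the
	interrupt pre-empted, request a context switch on interrupt exit. */
	portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}
```

Note the `pxHigherPriorityTaskWoken` flag is only ever set to `pdTRUE` by the function, never cleared, so it must be initialised to `pdFALSE` before the call.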
| fep | 0:62cd296ba2a7 | 1071 | |
| fep | 0:62cd296ba2a7 | 1072 | BaseType_t xQueueGiveFromISR( QueueHandle_t xQueue, BaseType_t * const pxHigherPriorityTaskWoken ) |
| fep | 0:62cd296ba2a7 | 1073 | { |
| fep | 0:62cd296ba2a7 | 1074 | BaseType_t xReturn; |
| fep | 0:62cd296ba2a7 | 1075 | UBaseType_t uxSavedInterruptStatus; |
| fep | 0:62cd296ba2a7 | 1076 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
| fep | 0:62cd296ba2a7 | 1077 | |
| fep | 0:62cd296ba2a7 | 1078 | /* Similar to xQueueGenericSendFromISR() but used with semaphores where the |
| fep | 0:62cd296ba2a7 | 1079 | item size is 0. Don't directly wake a task that was blocked on a queue |
| fep | 0:62cd296ba2a7 | 1080 | read, instead return a flag to say whether a context switch is required or |
| fep | 0:62cd296ba2a7 | 1081 | not (i.e. has a task with a higher priority than us been woken by this |
| fep | 0:62cd296ba2a7 | 1082 | post). */ |
| fep | 0:62cd296ba2a7 | 1083 | |
| fep | 0:62cd296ba2a7 | 1084 | configASSERT( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1085 | |
| fep | 0:62cd296ba2a7 | 1086 | /* xQueueGenericSendFromISR() should be used instead of xQueueGiveFromISR() |
| fep | 0:62cd296ba2a7 | 1087 | if the item size is not 0. */ |
| fep | 0:62cd296ba2a7 | 1088 | configASSERT( pxQueue->uxItemSize == 0 ); |
| fep | 0:62cd296ba2a7 | 1089 | |
| fep | 0:62cd296ba2a7 | 1090 | /* Normally a mutex would not be given from an interrupt, especially if |
| fep | 0:62cd296ba2a7 | 1091 | there is a mutex holder, as priority inheritance makes no sense for an |
| fep | 0:62cd296ba2a7 | 1092 | interrupt, only for tasks. */ |
| fep | 0:62cd296ba2a7 | 1093 | configASSERT( !( ( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX ) && ( pxQueue->pxMutexHolder != NULL ) ) ); |
| fep | 0:62cd296ba2a7 | 1094 | |
| fep | 0:62cd296ba2a7 | 1095 | /* RTOS ports that support interrupt nesting have the concept of a maximum |
| fep | 0:62cd296ba2a7 | 1096 | system call (or maximum API call) interrupt priority. Interrupts that are |
| fep | 0:62cd296ba2a7 | 1097 | above the maximum system call priority are kept permanently enabled, even |
| fep | 0:62cd296ba2a7 | 1098 | when the RTOS kernel is in a critical section, but cannot make any calls to |
| fep | 0:62cd296ba2a7 | 1099 | FreeRTOS API functions. If configASSERT() is defined in FreeRTOSConfig.h |
| fep | 0:62cd296ba2a7 | 1100 | then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion |
| fep | 0:62cd296ba2a7 | 1101 | failure if a FreeRTOS API function is called from an interrupt that has been |
| fep | 0:62cd296ba2a7 | 1102 | assigned a priority above the configured maximum system call priority. |
| fep | 0:62cd296ba2a7 | 1103 | Only FreeRTOS functions that end in FromISR can be called from interrupts |
| fep | 0:62cd296ba2a7 | 1104 | that have been assigned a priority at or (logically) below the maximum |
| fep | 0:62cd296ba2a7 | 1105 | system call interrupt priority. FreeRTOS maintains a separate interrupt |
| fep | 0:62cd296ba2a7 | 1106 | safe API to ensure interrupt entry is as fast and as simple as possible. |
| fep | 0:62cd296ba2a7 | 1107 | More information (albeit Cortex-M specific) is provided on the following |
| fep | 0:62cd296ba2a7 | 1108 | link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */ |
| fep | 0:62cd296ba2a7 | 1109 | portASSERT_IF_INTERRUPT_PRIORITY_INVALID(); |
| fep | 0:62cd296ba2a7 | 1110 | |
| fep | 0:62cd296ba2a7 | 1111 | uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR(); |
| fep | 0:62cd296ba2a7 | 1112 | { |
| fep | 0:62cd296ba2a7 | 1113 | const UBaseType_t uxMessagesWaiting = pxQueue->uxMessagesWaiting; |
| fep | 0:62cd296ba2a7 | 1114 | |
| fep | 0:62cd296ba2a7 | 1115 | /* When the queue is used to implement a semaphore no data is ever |
| fep | 0:62cd296ba2a7 | 1116 | moved through the queue but it is still valid to see if the queue 'has |
| fep | 0:62cd296ba2a7 | 1117 | space'. */ |
| fep | 0:62cd296ba2a7 | 1118 | if( uxMessagesWaiting < pxQueue->uxLength ) |
| fep | 0:62cd296ba2a7 | 1119 | { |
| fep | 0:62cd296ba2a7 | 1120 | const int8_t cTxLock = pxQueue->cTxLock; |
| fep | 0:62cd296ba2a7 | 1121 | |
| fep | 0:62cd296ba2a7 | 1122 | traceQUEUE_SEND_FROM_ISR( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1123 | |
| fep | 0:62cd296ba2a7 | 1124 | /* A task can only have an inherited priority if it is a mutex |
| fep | 0:62cd296ba2a7 | 1125 | holder - and if there is a mutex holder then the mutex cannot be |
| fep | 0:62cd296ba2a7 | 1126 | given from an ISR. As this is the ISR version of the function it |
| fep | 0:62cd296ba2a7 | 1127 | can be assumed there is no mutex holder and no need to determine if |
| fep | 0:62cd296ba2a7 | 1128 | priority disinheritance is needed. Simply increase the count of |
| fep | 0:62cd296ba2a7 | 1129 | messages (semaphores) available. */ |
| fep | 0:62cd296ba2a7 | 1130 | pxQueue->uxMessagesWaiting = uxMessagesWaiting + 1; |
| fep | 0:62cd296ba2a7 | 1131 | |
| fep | 0:62cd296ba2a7 | 1132 | /* The event list is not altered if the queue is locked. This will |
| fep | 0:62cd296ba2a7 | 1133 | be done when the queue is unlocked later. */ |
| fep | 0:62cd296ba2a7 | 1134 | if( cTxLock == queueUNLOCKED ) |
| fep | 0:62cd296ba2a7 | 1135 | { |
| fep | 0:62cd296ba2a7 | 1136 | #if ( configUSE_QUEUE_SETS == 1 ) |
| fep | 0:62cd296ba2a7 | 1137 | { |
| fep | 0:62cd296ba2a7 | 1138 | if( pxQueue->pxQueueSetContainer != NULL ) |
| fep | 0:62cd296ba2a7 | 1139 | { |
| fep | 0:62cd296ba2a7 | 1140 | if( prvNotifyQueueSetContainer( pxQueue, queueSEND_TO_BACK ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1141 | { |
| fep | 0:62cd296ba2a7 | 1142 | /* The semaphore is a member of a queue set, and |
| fep | 0:62cd296ba2a7 | 1143 | posting to the queue set caused a higher priority |
| fep | 0:62cd296ba2a7 | 1144 | task to unblock. A context switch is required. */ |
| fep | 0:62cd296ba2a7 | 1145 | if( pxHigherPriorityTaskWoken != NULL ) |
| fep | 0:62cd296ba2a7 | 1146 | { |
| fep | 0:62cd296ba2a7 | 1147 | *pxHigherPriorityTaskWoken = pdTRUE; |
| fep | 0:62cd296ba2a7 | 1148 | } |
| fep | 0:62cd296ba2a7 | 1149 | else |
| fep | 0:62cd296ba2a7 | 1150 | { |
| fep | 0:62cd296ba2a7 | 1151 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1152 | } |
| fep | 0:62cd296ba2a7 | 1153 | } |
| fep | 0:62cd296ba2a7 | 1154 | else |
| fep | 0:62cd296ba2a7 | 1155 | { |
| fep | 0:62cd296ba2a7 | 1156 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1157 | } |
| fep | 0:62cd296ba2a7 | 1158 | } |
| fep | 0:62cd296ba2a7 | 1159 | else |
| fep | 0:62cd296ba2a7 | 1160 | { |
| fep | 0:62cd296ba2a7 | 1161 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1162 | { |
| fep | 0:62cd296ba2a7 | 1163 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1164 | { |
| fep | 0:62cd296ba2a7 | 1165 | /* The task waiting has a higher priority so |
| fep | 0:62cd296ba2a7 | 1166 | record that a context switch is required. */ |
| fep | 0:62cd296ba2a7 | 1167 | if( pxHigherPriorityTaskWoken != NULL ) |
| fep | 0:62cd296ba2a7 | 1168 | { |
| fep | 0:62cd296ba2a7 | 1169 | *pxHigherPriorityTaskWoken = pdTRUE; |
| fep | 0:62cd296ba2a7 | 1170 | } |
| fep | 0:62cd296ba2a7 | 1171 | else |
| fep | 0:62cd296ba2a7 | 1172 | { |
| fep | 0:62cd296ba2a7 | 1173 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1174 | } |
| fep | 0:62cd296ba2a7 | 1175 | } |
| fep | 0:62cd296ba2a7 | 1176 | else |
| fep | 0:62cd296ba2a7 | 1177 | { |
| fep | 0:62cd296ba2a7 | 1178 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1179 | } |
| fep | 0:62cd296ba2a7 | 1180 | } |
| fep | 0:62cd296ba2a7 | 1181 | else |
| fep | 0:62cd296ba2a7 | 1182 | { |
| fep | 0:62cd296ba2a7 | 1183 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1184 | } |
| fep | 0:62cd296ba2a7 | 1185 | } |
| fep | 0:62cd296ba2a7 | 1186 | } |
| fep | 0:62cd296ba2a7 | 1187 | #else /* configUSE_QUEUE_SETS */ |
| fep | 0:62cd296ba2a7 | 1188 | { |
| fep | 0:62cd296ba2a7 | 1189 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1190 | { |
| fep | 0:62cd296ba2a7 | 1191 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1192 | { |
| fep | 0:62cd296ba2a7 | 1193 | /* The task waiting has a higher priority so record that a |
| fep | 0:62cd296ba2a7 | 1194 | context switch is required. */ |
| fep | 0:62cd296ba2a7 | 1195 | if( pxHigherPriorityTaskWoken != NULL ) |
| fep | 0:62cd296ba2a7 | 1196 | { |
| fep | 0:62cd296ba2a7 | 1197 | *pxHigherPriorityTaskWoken = pdTRUE; |
| fep | 0:62cd296ba2a7 | 1198 | } |
| fep | 0:62cd296ba2a7 | 1199 | else |
| fep | 0:62cd296ba2a7 | 1200 | { |
| fep | 0:62cd296ba2a7 | 1201 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1202 | } |
| fep | 0:62cd296ba2a7 | 1203 | } |
| fep | 0:62cd296ba2a7 | 1204 | else |
| fep | 0:62cd296ba2a7 | 1205 | { |
| fep | 0:62cd296ba2a7 | 1206 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1207 | } |
| fep | 0:62cd296ba2a7 | 1208 | } |
| fep | 0:62cd296ba2a7 | 1209 | else |
| fep | 0:62cd296ba2a7 | 1210 | { |
| fep | 0:62cd296ba2a7 | 1211 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1212 | } |
| fep | 0:62cd296ba2a7 | 1213 | } |
| fep | 0:62cd296ba2a7 | 1214 | #endif /* configUSE_QUEUE_SETS */ |
| fep | 0:62cd296ba2a7 | 1215 | } |
| fep | 0:62cd296ba2a7 | 1216 | else |
| fep | 0:62cd296ba2a7 | 1217 | { |
| fep | 0:62cd296ba2a7 | 1218 | /* Increment the lock count so the task that unlocks the queue |
| fep | 0:62cd296ba2a7 | 1219 | knows that data was posted while it was locked. */ |
| fep | 0:62cd296ba2a7 | 1220 | pxQueue->cTxLock = ( int8_t ) ( cTxLock + 1 ); |
| fep | 0:62cd296ba2a7 | 1221 | } |
| fep | 0:62cd296ba2a7 | 1222 | |
| fep | 0:62cd296ba2a7 | 1223 | xReturn = pdPASS; |
| fep | 0:62cd296ba2a7 | 1224 | } |
| fep | 0:62cd296ba2a7 | 1225 | else |
| fep | 0:62cd296ba2a7 | 1226 | { |
| fep | 0:62cd296ba2a7 | 1227 | traceQUEUE_SEND_FROM_ISR_FAILED( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1228 | xReturn = errQUEUE_FULL; |
| fep | 0:62cd296ba2a7 | 1229 | } |
| fep | 0:62cd296ba2a7 | 1230 | } |
| fep | 0:62cd296ba2a7 | 1231 | portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus ); |
| fep | 0:62cd296ba2a7 | 1232 | |
| fep | 0:62cd296ba2a7 | 1233 | return xReturn; |
| fep | 0:62cd296ba2a7 | 1234 | } |
| fep | 0:62cd296ba2a7 | 1235 | /*-----------------------------------------------------------*/ |
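Editor's note: a sketch of the intended caller of `xQueueGiveFromISR()`, which applications normally reach through the `xSemaphoreGiveFromISR()` macro (the item size is zero, so no data is copied and only the count of available "messages" is incremented). The semaphore handle and ISR name are assumptions for illustration.

```c
#include "FreeRTOS.h"
#include "semphr.h"

/* Assumed to have been created elsewhere with xSemaphoreCreateBinary(). */
static SemaphoreHandle_t xEventSemaphore;

/* Hypothetical timer interrupt handler signalling a deferred-processing task. */
void TIMER_IRQHandler( void )
{
BaseType_t xHigherPriorityTaskWoken = pdFALSE;

	/* xSemaphoreGiveFromISR() resolves to xQueueGiveFromISR().  Per the
	asserts above, this must not be used on a mutex that has a holder, as
	priority inheritance cannot apply to an interrupt. */
	( void ) xSemaphoreGiveFromISR( xEventSemaphore, &xHigherPriorityTaskWoken );

	/* Yield on exit if giving the semaphore woke a higher priority task. */
	portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}
```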
| fep | 0:62cd296ba2a7 | 1236 | |
| fep | 0:62cd296ba2a7 | 1237 | BaseType_t xQueueGenericReceive( QueueHandle_t xQueue, void * const pvBuffer, TickType_t xTicksToWait, const BaseType_t xJustPeeking ) |
| fep | 0:62cd296ba2a7 | 1238 | { |
| fep | 0:62cd296ba2a7 | 1239 | BaseType_t xEntryTimeSet = pdFALSE; |
| fep | 0:62cd296ba2a7 | 1240 | TimeOut_t xTimeOut; |
| fep | 0:62cd296ba2a7 | 1241 | int8_t *pcOriginalReadPosition; |
| fep | 0:62cd296ba2a7 | 1242 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
| fep | 0:62cd296ba2a7 | 1243 | |
| fep | 0:62cd296ba2a7 | 1244 | configASSERT( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1245 | configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) ); |
| fep | 0:62cd296ba2a7 | 1246 | #if ( ( INCLUDE_xTaskGetSchedulerState == 1 ) || ( configUSE_TIMERS == 1 ) ) |
| fep | 0:62cd296ba2a7 | 1247 | { |
| fep | 0:62cd296ba2a7 | 1248 | configASSERT( !( ( xTaskGetSchedulerState() == taskSCHEDULER_SUSPENDED ) && ( xTicksToWait != 0 ) ) ); |
| fep | 0:62cd296ba2a7 | 1249 | } |
| fep | 0:62cd296ba2a7 | 1250 | #endif |
| fep | 0:62cd296ba2a7 | 1251 | |
| fep | 0:62cd296ba2a7 | 1252 | /* This function relaxes the coding standard somewhat to allow return |
| fep | 0:62cd296ba2a7 | 1253 | statements within the function itself. This is done in the interest |
| fep | 0:62cd296ba2a7 | 1254 | of execution time efficiency. */ |
| fep | 0:62cd296ba2a7 | 1255 | |
| fep | 0:62cd296ba2a7 | 1256 | for( ;; ) |
| fep | 0:62cd296ba2a7 | 1257 | { |
| fep | 0:62cd296ba2a7 | 1258 | taskENTER_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1259 | { |
| fep | 0:62cd296ba2a7 | 1260 | const UBaseType_t uxMessagesWaiting = pxQueue->uxMessagesWaiting; |
| fep | 0:62cd296ba2a7 | 1261 | |
| fep | 0:62cd296ba2a7 | 1262 | /* Is there data in the queue now? To be running the calling task |
| fep | 0:62cd296ba2a7 | 1263 | must be the highest priority task wanting to access the queue. */ |
| fep | 0:62cd296ba2a7 | 1264 | if( uxMessagesWaiting > ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 1265 | { |
| fep | 0:62cd296ba2a7 | 1266 | /* Remember the read position in case the queue is only being |
| fep | 0:62cd296ba2a7 | 1267 | peeked. */ |
| fep | 0:62cd296ba2a7 | 1268 | pcOriginalReadPosition = pxQueue->u.pcReadFrom; |
| fep | 0:62cd296ba2a7 | 1269 | |
| fep | 0:62cd296ba2a7 | 1270 | prvCopyDataFromQueue( pxQueue, pvBuffer ); |
| fep | 0:62cd296ba2a7 | 1271 | |
| fep | 0:62cd296ba2a7 | 1272 | if( xJustPeeking == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1273 | { |
| fep | 0:62cd296ba2a7 | 1274 | traceQUEUE_RECEIVE( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1275 | |
| fep | 0:62cd296ba2a7 | 1276 | /* Actually removing data, not just peeking. */ |
| fep | 0:62cd296ba2a7 | 1277 | pxQueue->uxMessagesWaiting = uxMessagesWaiting - 1; |
| fep | 0:62cd296ba2a7 | 1278 | |
| fep | 0:62cd296ba2a7 | 1279 | #if ( configUSE_MUTEXES == 1 ) |
| fep | 0:62cd296ba2a7 | 1280 | { |
| fep | 0:62cd296ba2a7 | 1281 | if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX ) |
| fep | 0:62cd296ba2a7 | 1282 | { |
| fep | 0:62cd296ba2a7 | 1283 | /* Record the information required to implement |
| fep | 0:62cd296ba2a7 | 1284 | priority inheritance should it become necessary. */ |
| fep | 0:62cd296ba2a7 | 1285 | pxQueue->pxMutexHolder = ( int8_t * ) pvTaskIncrementMutexHeldCount(); /*lint !e961 Cast is not redundant as TaskHandle_t is a typedef. */ |
| fep | 0:62cd296ba2a7 | 1286 | } |
| fep | 0:62cd296ba2a7 | 1287 | else |
| fep | 0:62cd296ba2a7 | 1288 | { |
| fep | 0:62cd296ba2a7 | 1289 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1290 | } |
| fep | 0:62cd296ba2a7 | 1291 | } |
| fep | 0:62cd296ba2a7 | 1292 | #endif /* configUSE_MUTEXES */ |
| fep | 0:62cd296ba2a7 | 1293 | |
| fep | 0:62cd296ba2a7 | 1294 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1295 | { |
| fep | 0:62cd296ba2a7 | 1296 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1297 | { |
| fep | 0:62cd296ba2a7 | 1298 | queueYIELD_IF_USING_PREEMPTION(); |
| fep | 0:62cd296ba2a7 | 1299 | } |
| fep | 0:62cd296ba2a7 | 1300 | else |
| fep | 0:62cd296ba2a7 | 1301 | { |
| fep | 0:62cd296ba2a7 | 1302 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1303 | } |
| fep | 0:62cd296ba2a7 | 1304 | } |
| fep | 0:62cd296ba2a7 | 1305 | else |
| fep | 0:62cd296ba2a7 | 1306 | { |
| fep | 0:62cd296ba2a7 | 1307 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1308 | } |
| fep | 0:62cd296ba2a7 | 1309 | } |
| fep | 0:62cd296ba2a7 | 1310 | else |
| fep | 0:62cd296ba2a7 | 1311 | { |
| fep | 0:62cd296ba2a7 | 1312 | traceQUEUE_PEEK( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1313 | |
| fep | 0:62cd296ba2a7 | 1314 | /* The data is not being removed, so reset the read |
| fep | 0:62cd296ba2a7 | 1315 | pointer. */ |
| fep | 0:62cd296ba2a7 | 1316 | pxQueue->u.pcReadFrom = pcOriginalReadPosition; |
| fep | 0:62cd296ba2a7 | 1317 | |
| fep | 0:62cd296ba2a7 | 1318 | /* The data is being left in the queue, so see if there are |
| fep | 0:62cd296ba2a7 | 1319 | any other tasks waiting for the data. */ |
| fep | 0:62cd296ba2a7 | 1320 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1321 | { |
| fep | 0:62cd296ba2a7 | 1322 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1323 | { |
| fep | 0:62cd296ba2a7 | 1324 | /* The task waiting has a higher priority than this task. */ |
| fep | 0:62cd296ba2a7 | 1325 | queueYIELD_IF_USING_PREEMPTION(); |
| fep | 0:62cd296ba2a7 | 1326 | } |
| fep | 0:62cd296ba2a7 | 1327 | else |
| fep | 0:62cd296ba2a7 | 1328 | { |
| fep | 0:62cd296ba2a7 | 1329 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1330 | } |
| fep | 0:62cd296ba2a7 | 1331 | } |
| fep | 0:62cd296ba2a7 | 1332 | else |
| fep | 0:62cd296ba2a7 | 1333 | { |
| fep | 0:62cd296ba2a7 | 1334 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1335 | } |
| fep | 0:62cd296ba2a7 | 1336 | } |
| fep | 0:62cd296ba2a7 | 1337 | |
| fep | 0:62cd296ba2a7 | 1338 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1339 | return pdPASS; |
| fep | 0:62cd296ba2a7 | 1340 | } |
| fep | 0:62cd296ba2a7 | 1341 | else |
| fep | 0:62cd296ba2a7 | 1342 | { |
| fep | 0:62cd296ba2a7 | 1343 | if( xTicksToWait == ( TickType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 1344 | { |
| fep | 0:62cd296ba2a7 | 1345 | /* The queue was empty and no block time is specified (or |
| fep | 0:62cd296ba2a7 | 1346 | the block time has expired) so leave now. */ |
| fep | 0:62cd296ba2a7 | 1347 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1348 | traceQUEUE_RECEIVE_FAILED( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1349 | return errQUEUE_EMPTY; |
| fep | 0:62cd296ba2a7 | 1350 | } |
| fep | 0:62cd296ba2a7 | 1351 | else if( xEntryTimeSet == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1352 | { |
| fep | 0:62cd296ba2a7 | 1353 | /* The queue was empty and a block time was specified so |
| fep | 0:62cd296ba2a7 | 1354 | configure the timeout structure. */ |
| fep | 0:62cd296ba2a7 | 1355 | vTaskSetTimeOutState( &xTimeOut ); |
| fep | 0:62cd296ba2a7 | 1356 | xEntryTimeSet = pdTRUE; |
| fep | 0:62cd296ba2a7 | 1357 | } |
| fep | 0:62cd296ba2a7 | 1358 | else |
| fep | 0:62cd296ba2a7 | 1359 | { |
| fep | 0:62cd296ba2a7 | 1360 | /* Entry time was already set. */ |
| fep | 0:62cd296ba2a7 | 1361 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1362 | } |
| fep | 0:62cd296ba2a7 | 1363 | } |
| fep | 0:62cd296ba2a7 | 1364 | } |
| fep | 0:62cd296ba2a7 | 1365 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1366 | |
| fep | 0:62cd296ba2a7 | 1367 | /* Interrupts and other tasks can send to and receive from the queue |
| fep | 0:62cd296ba2a7 | 1368 | now the critical section has been exited. */ |
| fep | 0:62cd296ba2a7 | 1369 | |
| fep | 0:62cd296ba2a7 | 1370 | vTaskSuspendAll(); |
| fep | 0:62cd296ba2a7 | 1371 | prvLockQueue( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1372 | |
| fep | 0:62cd296ba2a7 | 1373 | /* Update the timeout state to see if it has expired yet. */ |
| fep | 0:62cd296ba2a7 | 1374 | if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1375 | { |
| fep | 0:62cd296ba2a7 | 1376 | if( prvIsQueueEmpty( pxQueue ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1377 | { |
| fep | 0:62cd296ba2a7 | 1378 | traceBLOCKING_ON_QUEUE_RECEIVE( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1379 | |
| fep | 0:62cd296ba2a7 | 1380 | #if ( configUSE_MUTEXES == 1 ) |
| fep | 0:62cd296ba2a7 | 1381 | { |
| fep | 0:62cd296ba2a7 | 1382 | if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX ) |
| fep | 0:62cd296ba2a7 | 1383 | { |
| fep | 0:62cd296ba2a7 | 1384 | taskENTER_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1385 | { |
| fep | 0:62cd296ba2a7 | 1386 | vTaskPriorityInherit( ( void * ) pxQueue->pxMutexHolder ); |
| fep | 0:62cd296ba2a7 | 1387 | } |
| fep | 0:62cd296ba2a7 | 1388 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1389 | } |
| fep | 0:62cd296ba2a7 | 1390 | else |
| fep | 0:62cd296ba2a7 | 1391 | { |
| fep | 0:62cd296ba2a7 | 1392 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1393 | } |
| fep | 0:62cd296ba2a7 | 1394 | } |
| fep | 0:62cd296ba2a7 | 1395 | #endif |
| fep | 0:62cd296ba2a7 | 1396 | |
| fep | 0:62cd296ba2a7 | 1397 | vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToReceive ), xTicksToWait ); |
| fep | 0:62cd296ba2a7 | 1398 | prvUnlockQueue( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1399 | if( xTaskResumeAll() == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1400 | { |
| fep | 0:62cd296ba2a7 | 1401 | portYIELD_WITHIN_API(); |
| fep | 0:62cd296ba2a7 | 1402 | } |
| fep | 0:62cd296ba2a7 | 1403 | else |
| fep | 0:62cd296ba2a7 | 1404 | { |
| fep | 0:62cd296ba2a7 | 1405 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1406 | } |
| fep | 0:62cd296ba2a7 | 1407 | } |
| fep | 0:62cd296ba2a7 | 1408 | else |
| fep | 0:62cd296ba2a7 | 1409 | { |
| fep | 0:62cd296ba2a7 | 1410 | /* Try again. */ |
| fep | 0:62cd296ba2a7 | 1411 | prvUnlockQueue( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1412 | ( void ) xTaskResumeAll(); |
| fep | 0:62cd296ba2a7 | 1413 | } |
| fep | 0:62cd296ba2a7 | 1414 | } |
| fep | 0:62cd296ba2a7 | 1415 | else |
| fep | 0:62cd296ba2a7 | 1416 | { |
| fep | 0:62cd296ba2a7 | 1417 | prvUnlockQueue( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1418 | ( void ) xTaskResumeAll(); |
| fep | 0:62cd296ba2a7 | 1419 | |
| fep | 0:62cd296ba2a7 | 1420 | if( prvIsQueueEmpty( pxQueue ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1421 | { |
| fep | 0:62cd296ba2a7 | 1422 | traceQUEUE_RECEIVE_FAILED( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1423 | return errQUEUE_EMPTY; |
| fep | 0:62cd296ba2a7 | 1424 | } |
| fep | 0:62cd296ba2a7 | 1425 | else |
| fep | 0:62cd296ba2a7 | 1426 | { |
| fep | 0:62cd296ba2a7 | 1427 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1428 | } |
| fep | 0:62cd296ba2a7 | 1429 | } |
| fep | 0:62cd296ba2a7 | 1430 | } |
| fep | 0:62cd296ba2a7 | 1431 | } |
| fep | 0:62cd296ba2a7 | 1432 | /*-----------------------------------------------------------*/ |
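Editor's note: a sketch of task-side usage of `xQueueGenericReceive()` via its two public wrapper macros: `xQueuePeek()` (the `xJustPeeking == pdTRUE` path, which restores the read pointer and leaves the item on the queue) and `xQueueReceive()` (which removes it, blocking up to the given number of ticks). The queue handle and task name are assumptions for illustration.

```c
#include <stdint.h>

#include "FreeRTOS.h"
#include "queue.h"
#include "task.h"

/* Assumed to have been created elsewhere, e.g. with
xQueueCreate( 8, sizeof( uint32_t ) ). */
static QueueHandle_t xDataQueue;

/* Hypothetical consumer task. */
void vConsumerTask( void *pvParameters )
{
uint32_t ulValue;

	( void ) pvParameters;

	for( ;; )
	{
		/* Peek without blocking: the item, if any, stays on the queue
		and remains visible to other readers. */
		if( xQueuePeek( xDataQueue, &ulValue, 0 ) == pdPASS )
		{
			/* ulValue was inspected but not removed. */
		}

		/* Receive with a timeout: blocks for up to 100 ms if the queue
		is empty, and returns errQUEUE_EMPTY if the timeout expires. */
		if( xQueueReceive( xDataQueue, &ulValue, pdMS_TO_TICKS( 100 ) ) == pdPASS )
		{
			/* Process ulValue; it has been removed from the queue. */
		}
	}
}
```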
| fep | 0:62cd296ba2a7 | 1433 | |
| fep | 0:62cd296ba2a7 | 1434 | BaseType_t xQueueReceiveFromISR( QueueHandle_t xQueue, void * const pvBuffer, BaseType_t * const pxHigherPriorityTaskWoken ) |
| fep | 0:62cd296ba2a7 | 1435 | { |
| fep | 0:62cd296ba2a7 | 1436 | BaseType_t xReturn; |
| fep | 0:62cd296ba2a7 | 1437 | UBaseType_t uxSavedInterruptStatus; |
| fep | 0:62cd296ba2a7 | 1438 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
| fep | 0:62cd296ba2a7 | 1439 | |
| fep | 0:62cd296ba2a7 | 1440 | configASSERT( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1441 | configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) ); |
| fep | 0:62cd296ba2a7 | 1442 | |
| fep | 0:62cd296ba2a7 | 1443 | /* RTOS ports that support interrupt nesting have the concept of a maximum |
| fep | 0:62cd296ba2a7 | 1444 | system call (or maximum API call) interrupt priority. Interrupts that are |
| fep | 0:62cd296ba2a7 | 1445 | above the maximum system call priority are kept permanently enabled, even |
| fep | 0:62cd296ba2a7 | 1446 | when the RTOS kernel is in a critical section, but cannot make any calls to |
| fep | 0:62cd296ba2a7 | 1447 | FreeRTOS API functions. If configASSERT() is defined in FreeRTOSConfig.h |
| fep | 0:62cd296ba2a7 | 1448 | then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion |
| fep | 0:62cd296ba2a7 | 1449 | failure if a FreeRTOS API function is called from an interrupt that has been |
| fep | 0:62cd296ba2a7 | 1450 | assigned a priority above the configured maximum system call priority. |
| fep | 0:62cd296ba2a7 | 1451 | Only FreeRTOS functions that end in FromISR can be called from interrupts |
| fep | 0:62cd296ba2a7 | 1452 | that have been assigned a priority at or (logically) below the maximum |
| fep | 0:62cd296ba2a7 | 1453 | system call interrupt priority. FreeRTOS maintains a separate interrupt |
| fep | 0:62cd296ba2a7 | 1454 | safe API to ensure interrupt entry is as fast and as simple as possible. |
| fep | 0:62cd296ba2a7 | 1455 | More information (albeit Cortex-M specific) is provided on the following |
| fep | 0:62cd296ba2a7 | 1456 | link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */ |
| fep | 0:62cd296ba2a7 | 1457 | portASSERT_IF_INTERRUPT_PRIORITY_INVALID(); |
| fep | 0:62cd296ba2a7 | 1458 | |
| fep | 0:62cd296ba2a7 | 1459 | uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR(); |
| fep | 0:62cd296ba2a7 | 1460 | { |
| fep | 0:62cd296ba2a7 | 1461 | const UBaseType_t uxMessagesWaiting = pxQueue->uxMessagesWaiting; |
| fep | 0:62cd296ba2a7 | 1462 | |
| fep | 0:62cd296ba2a7 | 1463 | /* Cannot block in an ISR, so check there is data available. */ |
| fep | 0:62cd296ba2a7 | 1464 | if( uxMessagesWaiting > ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 1465 | { |
| fep | 0:62cd296ba2a7 | 1466 | const int8_t cRxLock = pxQueue->cRxLock; |
| fep | 0:62cd296ba2a7 | 1467 | |
| fep | 0:62cd296ba2a7 | 1468 | traceQUEUE_RECEIVE_FROM_ISR( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1469 | |
| fep | 0:62cd296ba2a7 | 1470 | prvCopyDataFromQueue( pxQueue, pvBuffer ); |
| fep | 0:62cd296ba2a7 | 1471 | pxQueue->uxMessagesWaiting = uxMessagesWaiting - 1; |
| fep | 0:62cd296ba2a7 | 1472 | |
| fep | 0:62cd296ba2a7 | 1473 | /* If the queue is locked the event list will not be modified. |
| fep | 0:62cd296ba2a7 | 1474 | Instead update the lock count so the task that unlocks the queue |
| fep | 0:62cd296ba2a7 | 1475 | will know that an ISR has removed data while the queue was |
| fep | 0:62cd296ba2a7 | 1476 | locked. */ |
| fep | 0:62cd296ba2a7 | 1477 | if( cRxLock == queueUNLOCKED ) |
| fep | 0:62cd296ba2a7 | 1478 | { |
| fep | 0:62cd296ba2a7 | 1479 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1480 | { |
| fep | 0:62cd296ba2a7 | 1481 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1482 | { |
| fep | 0:62cd296ba2a7 | 1483 | /* The task waiting has a higher priority than us, so |
| fep | 0:62cd296ba2a7 | 1484 | force a context switch. */ |
| fep | 0:62cd296ba2a7 | 1485 | if( pxHigherPriorityTaskWoken != NULL ) |
| fep | 0:62cd296ba2a7 | 1486 | { |
| fep | 0:62cd296ba2a7 | 1487 | *pxHigherPriorityTaskWoken = pdTRUE; |
| fep | 0:62cd296ba2a7 | 1488 | } |
| fep | 0:62cd296ba2a7 | 1489 | else |
| fep | 0:62cd296ba2a7 | 1490 | { |
| fep | 0:62cd296ba2a7 | 1491 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1492 | } |
| fep | 0:62cd296ba2a7 | 1493 | } |
| fep | 0:62cd296ba2a7 | 1494 | else |
| fep | 0:62cd296ba2a7 | 1495 | { |
| fep | 0:62cd296ba2a7 | 1496 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1497 | } |
| fep | 0:62cd296ba2a7 | 1498 | } |
| fep | 0:62cd296ba2a7 | 1499 | else |
| fep | 0:62cd296ba2a7 | 1500 | { |
| fep | 0:62cd296ba2a7 | 1501 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1502 | } |
| fep | 0:62cd296ba2a7 | 1503 | } |
| fep | 0:62cd296ba2a7 | 1504 | else |
| fep | 0:62cd296ba2a7 | 1505 | { |
| fep | 0:62cd296ba2a7 | 1506 | /* Increment the lock count so the task that unlocks the queue |
| fep | 0:62cd296ba2a7 | 1507 | knows that data was removed while it was locked. */ |
| fep | 0:62cd296ba2a7 | 1508 | pxQueue->cRxLock = ( int8_t ) ( cRxLock + 1 ); |
| fep | 0:62cd296ba2a7 | 1509 | } |
| fep | 0:62cd296ba2a7 | 1510 | |
| fep | 0:62cd296ba2a7 | 1511 | xReturn = pdPASS; |
| fep | 0:62cd296ba2a7 | 1512 | } |
| fep | 0:62cd296ba2a7 | 1513 | else |
| fep | 0:62cd296ba2a7 | 1514 | { |
| fep | 0:62cd296ba2a7 | 1515 | xReturn = pdFAIL; |
| fep | 0:62cd296ba2a7 | 1516 | traceQUEUE_RECEIVE_FROM_ISR_FAILED( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1517 | } |
| fep | 0:62cd296ba2a7 | 1518 | } |
| fep | 0:62cd296ba2a7 | 1519 | portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus ); |
| fep | 0:62cd296ba2a7 | 1520 | |
| fep | 0:62cd296ba2a7 | 1521 | return xReturn; |
| fep | 0:62cd296ba2a7 | 1522 | } |
| fep | 0:62cd296ba2a7 | 1523 | /*-----------------------------------------------------------*/ |
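A typical use of `xQueueReceiveFromISR()` is draining data inside an interrupt handler, then requesting a context switch on interrupt exit only if a higher priority task was woken. The sketch below is illustrative only: the handler name, the queue handle `xRxQueue`, and byte-sized items are assumptions, and it needs the FreeRTOS headers and a real port to compile.

```c
/* Hypothetical ISR that drains bytes from a queue.  xRxQueue is an assumed
   QueueHandle_t, created elsewhere with xQueueCreate( 64, sizeof( uint8_t ) ). */
void vExampleIRQHandler( void )
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    uint8_t ucByte;

    /* Cannot block in an ISR, so keep receiving only while data is available. */
    while( xQueueReceiveFromISR( xRxQueue, &ucByte, &xHigherPriorityTaskWoken ) == pdPASS )
    {
        /* Process ucByte here. */
    }

    /* Perform a context switch on interrupt exit if receiving the data
       unblocked a task with a priority above that of the interrupted task. */
    portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}
```

As the comment block in the function above requires, such an ISR must be assigned a priority at or (logically) below configMAX_SYSCALL_INTERRUPT_PRIORITY.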
| fep | 0:62cd296ba2a7 | 1524 | |
| fep | 0:62cd296ba2a7 | 1525 | BaseType_t xQueuePeekFromISR( QueueHandle_t xQueue, void * const pvBuffer ) |
| fep | 0:62cd296ba2a7 | 1526 | { |
| fep | 0:62cd296ba2a7 | 1527 | BaseType_t xReturn; |
| fep | 0:62cd296ba2a7 | 1528 | UBaseType_t uxSavedInterruptStatus; |
| fep | 0:62cd296ba2a7 | 1529 | int8_t *pcOriginalReadPosition; |
| fep | 0:62cd296ba2a7 | 1530 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
| fep | 0:62cd296ba2a7 | 1531 | |
| fep | 0:62cd296ba2a7 | 1532 | configASSERT( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1533 | configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) ); |
| fep | 0:62cd296ba2a7 | 1534 | configASSERT( pxQueue->uxItemSize != 0 ); /* Can't peek a semaphore. */ |
| fep | 0:62cd296ba2a7 | 1535 | |
| fep | 0:62cd296ba2a7 | 1536 | /* RTOS ports that support interrupt nesting have the concept of a maximum |
| fep | 0:62cd296ba2a7 | 1537 | system call (or maximum API call) interrupt priority. Interrupts that are |
| fep | 0:62cd296ba2a7 | 1538 | above the maximum system call priority are kept permanently enabled, even |
| fep | 0:62cd296ba2a7 | 1539 | when the RTOS kernel is in a critical section, but cannot make any calls to |
| fep | 0:62cd296ba2a7 | 1540 | FreeRTOS API functions. If configASSERT() is defined in FreeRTOSConfig.h |
| fep | 0:62cd296ba2a7 | 1541 | then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion |
| fep | 0:62cd296ba2a7 | 1542 | failure if a FreeRTOS API function is called from an interrupt that has been |
| fep | 0:62cd296ba2a7 | 1543 | assigned a priority above the configured maximum system call priority. |
| fep | 0:62cd296ba2a7 | 1544 | Only FreeRTOS functions that end in FromISR can be called from interrupts |
| fep | 0:62cd296ba2a7 | 1545 | that have been assigned a priority at or (logically) below the maximum |
| fep | 0:62cd296ba2a7 | 1546 | system call interrupt priority. FreeRTOS maintains a separate interrupt |
| fep | 0:62cd296ba2a7 | 1547 | safe API to ensure interrupt entry is as fast and as simple as possible. |
| fep | 0:62cd296ba2a7 | 1548 | More information (albeit Cortex-M specific) is provided on the following |
| fep | 0:62cd296ba2a7 | 1549 | link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */ |
| fep | 0:62cd296ba2a7 | 1550 | portASSERT_IF_INTERRUPT_PRIORITY_INVALID(); |
| fep | 0:62cd296ba2a7 | 1551 | |
| fep | 0:62cd296ba2a7 | 1552 | uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR(); |
| fep | 0:62cd296ba2a7 | 1553 | { |
| fep | 0:62cd296ba2a7 | 1554 | /* Cannot block in an ISR, so check there is data available. */ |
| fep | 0:62cd296ba2a7 | 1555 | if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 1556 | { |
| fep | 0:62cd296ba2a7 | 1557 | traceQUEUE_PEEK_FROM_ISR( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1558 | |
| fep | 0:62cd296ba2a7 | 1559 | /* Remember the read position so it can be reset as nothing is |
| fep | 0:62cd296ba2a7 | 1560 | actually being removed from the queue. */ |
| fep | 0:62cd296ba2a7 | 1561 | pcOriginalReadPosition = pxQueue->u.pcReadFrom; |
| fep | 0:62cd296ba2a7 | 1562 | prvCopyDataFromQueue( pxQueue, pvBuffer ); |
| fep | 0:62cd296ba2a7 | 1563 | pxQueue->u.pcReadFrom = pcOriginalReadPosition; |
| fep | 0:62cd296ba2a7 | 1564 | |
| fep | 0:62cd296ba2a7 | 1565 | xReturn = pdPASS; |
| fep | 0:62cd296ba2a7 | 1566 | } |
| fep | 0:62cd296ba2a7 | 1567 | else |
| fep | 0:62cd296ba2a7 | 1568 | { |
| fep | 0:62cd296ba2a7 | 1569 | xReturn = pdFAIL; |
| fep | 0:62cd296ba2a7 | 1570 | traceQUEUE_PEEK_FROM_ISR_FAILED( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1571 | } |
| fep | 0:62cd296ba2a7 | 1572 | } |
| fep | 0:62cd296ba2a7 | 1573 | portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus ); |
| fep | 0:62cd296ba2a7 | 1574 | |
| fep | 0:62cd296ba2a7 | 1575 | return xReturn; |
| fep | 0:62cd296ba2a7 | 1576 | } |
| fep | 0:62cd296ba2a7 | 1577 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 1578 | |
| fep | 0:62cd296ba2a7 | 1579 | UBaseType_t uxQueueMessagesWaiting( const QueueHandle_t xQueue ) |
| fep | 0:62cd296ba2a7 | 1580 | { |
| fep | 0:62cd296ba2a7 | 1581 | UBaseType_t uxReturn; |
| fep | 0:62cd296ba2a7 | 1582 | |
| fep | 0:62cd296ba2a7 | 1583 | configASSERT( xQueue ); |
| fep | 0:62cd296ba2a7 | 1584 | |
| fep | 0:62cd296ba2a7 | 1585 | taskENTER_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1586 | { |
| fep | 0:62cd296ba2a7 | 1587 | uxReturn = ( ( Queue_t * ) xQueue )->uxMessagesWaiting; |
| fep | 0:62cd296ba2a7 | 1588 | } |
| fep | 0:62cd296ba2a7 | 1589 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1590 | |
| fep | 0:62cd296ba2a7 | 1591 | return uxReturn; |
| fep | 0:62cd296ba2a7 | 1592 | } /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */ |
| fep | 0:62cd296ba2a7 | 1593 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 1594 | |
| fep | 0:62cd296ba2a7 | 1595 | UBaseType_t uxQueueSpacesAvailable( const QueueHandle_t xQueue ) |
| fep | 0:62cd296ba2a7 | 1596 | { |
| fep | 0:62cd296ba2a7 | 1597 | UBaseType_t uxReturn; |
| fep | 0:62cd296ba2a7 | 1598 | Queue_t *pxQueue; |
| fep | 0:62cd296ba2a7 | 1599 | |
| fep | 0:62cd296ba2a7 | 1600 | pxQueue = ( Queue_t * ) xQueue; |
| fep | 0:62cd296ba2a7 | 1601 | configASSERT( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1602 | |
| fep | 0:62cd296ba2a7 | 1603 | taskENTER_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1604 | { |
| fep | 0:62cd296ba2a7 | 1605 | uxReturn = pxQueue->uxLength - pxQueue->uxMessagesWaiting; |
| fep | 0:62cd296ba2a7 | 1606 | } |
| fep | 0:62cd296ba2a7 | 1607 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1608 | |
| fep | 0:62cd296ba2a7 | 1609 | return uxReturn; |
| fep | 0:62cd296ba2a7 | 1610 | } /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */ |
| fep | 0:62cd296ba2a7 | 1611 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 1612 | |
| fep | 0:62cd296ba2a7 | 1613 | UBaseType_t uxQueueMessagesWaitingFromISR( const QueueHandle_t xQueue ) |
| fep | 0:62cd296ba2a7 | 1614 | { |
| fep | 0:62cd296ba2a7 | 1615 | UBaseType_t uxReturn; |
| fep | 0:62cd296ba2a7 | 1616 | |
| fep | 0:62cd296ba2a7 | 1617 | configASSERT( xQueue ); |
| fep | 0:62cd296ba2a7 | 1618 | |
| fep | 0:62cd296ba2a7 | 1619 | uxReturn = ( ( Queue_t * ) xQueue )->uxMessagesWaiting; |
| fep | 0:62cd296ba2a7 | 1620 | |
| fep | 0:62cd296ba2a7 | 1621 | return uxReturn; |
| fep | 0:62cd296ba2a7 | 1622 | } /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */ |
| fep | 0:62cd296ba2a7 | 1623 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 1624 | |
| fep | 0:62cd296ba2a7 | 1625 | void vQueueDelete( QueueHandle_t xQueue ) |
| fep | 0:62cd296ba2a7 | 1626 | { |
| fep | 0:62cd296ba2a7 | 1627 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
| fep | 0:62cd296ba2a7 | 1628 | |
| fep | 0:62cd296ba2a7 | 1629 | configASSERT( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1630 | traceQUEUE_DELETE( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1631 | |
| fep | 0:62cd296ba2a7 | 1632 | #if ( configQUEUE_REGISTRY_SIZE > 0 ) |
| fep | 0:62cd296ba2a7 | 1633 | { |
| fep | 0:62cd296ba2a7 | 1634 | vQueueUnregisterQueue( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1635 | } |
| fep | 0:62cd296ba2a7 | 1636 | #endif |
| fep | 0:62cd296ba2a7 | 1637 | |
| fep | 0:62cd296ba2a7 | 1638 | #if( ( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) && ( configSUPPORT_STATIC_ALLOCATION == 0 ) ) |
| fep | 0:62cd296ba2a7 | 1639 | { |
| fep | 0:62cd296ba2a7 | 1640 | /* The queue can only have been allocated dynamically - free it |
| fep | 0:62cd296ba2a7 | 1641 | again. */ |
| fep | 0:62cd296ba2a7 | 1642 | vPortFree( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1643 | } |
| fep | 0:62cd296ba2a7 | 1644 | #elif( ( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) && ( configSUPPORT_STATIC_ALLOCATION == 1 ) ) |
| fep | 0:62cd296ba2a7 | 1645 | { |
| fep | 0:62cd296ba2a7 | 1646 | /* The queue could have been allocated statically or dynamically, so |
| fep | 0:62cd296ba2a7 | 1647 | check before attempting to free the memory. */ |
| fep | 0:62cd296ba2a7 | 1648 | if( pxQueue->ucStaticallyAllocated == ( uint8_t ) pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1649 | { |
| fep | 0:62cd296ba2a7 | 1650 | vPortFree( pxQueue ); |
| fep | 0:62cd296ba2a7 | 1651 | } |
| fep | 0:62cd296ba2a7 | 1652 | else |
| fep | 0:62cd296ba2a7 | 1653 | { |
| fep | 0:62cd296ba2a7 | 1654 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1655 | } |
| fep | 0:62cd296ba2a7 | 1656 | } |
| fep | 0:62cd296ba2a7 | 1657 | #else |
| fep | 0:62cd296ba2a7 | 1658 | { |
| fep | 0:62cd296ba2a7 | 1659 | /* The queue must have been statically allocated, so is not going to be |
| fep | 0:62cd296ba2a7 | 1660 | deleted. Avoid compiler warnings about the unused parameter. */ |
| fep | 0:62cd296ba2a7 | 1661 | ( void ) pxQueue; |
| fep | 0:62cd296ba2a7 | 1662 | } |
| fep | 0:62cd296ba2a7 | 1663 | #endif /* configSUPPORT_DYNAMIC_ALLOCATION */ |
| fep | 0:62cd296ba2a7 | 1664 | } |
| fep | 0:62cd296ba2a7 | 1665 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 1666 | |
| fep | 0:62cd296ba2a7 | 1667 | #if ( configUSE_TRACE_FACILITY == 1 ) |
| fep | 0:62cd296ba2a7 | 1668 | |
| fep | 0:62cd296ba2a7 | 1669 | UBaseType_t uxQueueGetQueueNumber( QueueHandle_t xQueue ) |
| fep | 0:62cd296ba2a7 | 1670 | { |
| fep | 0:62cd296ba2a7 | 1671 | return ( ( Queue_t * ) xQueue )->uxQueueNumber; |
| fep | 0:62cd296ba2a7 | 1672 | } |
| fep | 0:62cd296ba2a7 | 1673 | |
| fep | 0:62cd296ba2a7 | 1674 | #endif /* configUSE_TRACE_FACILITY */ |
| fep | 0:62cd296ba2a7 | 1675 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 1676 | |
| fep | 0:62cd296ba2a7 | 1677 | #if ( configUSE_TRACE_FACILITY == 1 ) |
| fep | 0:62cd296ba2a7 | 1678 | |
| fep | 0:62cd296ba2a7 | 1679 | void vQueueSetQueueNumber( QueueHandle_t xQueue, UBaseType_t uxQueueNumber ) |
| fep | 0:62cd296ba2a7 | 1680 | { |
| fep | 0:62cd296ba2a7 | 1681 | ( ( Queue_t * ) xQueue )->uxQueueNumber = uxQueueNumber; |
| fep | 0:62cd296ba2a7 | 1682 | } |
| fep | 0:62cd296ba2a7 | 1683 | |
| fep | 0:62cd296ba2a7 | 1684 | #endif /* configUSE_TRACE_FACILITY */ |
| fep | 0:62cd296ba2a7 | 1685 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 1686 | |
| fep | 0:62cd296ba2a7 | 1687 | #if ( configUSE_TRACE_FACILITY == 1 ) |
| fep | 0:62cd296ba2a7 | 1688 | |
| fep | 0:62cd296ba2a7 | 1689 | uint8_t ucQueueGetQueueType( QueueHandle_t xQueue ) |
| fep | 0:62cd296ba2a7 | 1690 | { |
| fep | 0:62cd296ba2a7 | 1691 | return ( ( Queue_t * ) xQueue )->ucQueueType; |
| fep | 0:62cd296ba2a7 | 1692 | } |
| fep | 0:62cd296ba2a7 | 1693 | |
| fep | 0:62cd296ba2a7 | 1694 | #endif /* configUSE_TRACE_FACILITY */ |
| fep | 0:62cd296ba2a7 | 1695 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 1696 | |
| fep | 0:62cd296ba2a7 | 1697 | static BaseType_t prvCopyDataToQueue( Queue_t * const pxQueue, const void *pvItemToQueue, const BaseType_t xPosition ) |
| fep | 0:62cd296ba2a7 | 1698 | { |
| fep | 0:62cd296ba2a7 | 1699 | BaseType_t xReturn = pdFALSE; |
| fep | 0:62cd296ba2a7 | 1700 | UBaseType_t uxMessagesWaiting; |
| fep | 0:62cd296ba2a7 | 1701 | |
| fep | 0:62cd296ba2a7 | 1702 | /* This function is called from a critical section. */ |
| fep | 0:62cd296ba2a7 | 1703 | |
| fep | 0:62cd296ba2a7 | 1704 | uxMessagesWaiting = pxQueue->uxMessagesWaiting; |
| fep | 0:62cd296ba2a7 | 1705 | |
| fep | 0:62cd296ba2a7 | 1706 | if( pxQueue->uxItemSize == ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 1707 | { |
| fep | 0:62cd296ba2a7 | 1708 | #if ( configUSE_MUTEXES == 1 ) |
| fep | 0:62cd296ba2a7 | 1709 | { |
| fep | 0:62cd296ba2a7 | 1710 | if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX ) |
| fep | 0:62cd296ba2a7 | 1711 | { |
| fep | 0:62cd296ba2a7 | 1712 | /* The mutex is no longer being held. */ |
| fep | 0:62cd296ba2a7 | 1713 | xReturn = xTaskPriorityDisinherit( ( void * ) pxQueue->pxMutexHolder ); |
| fep | 0:62cd296ba2a7 | 1714 | pxQueue->pxMutexHolder = NULL; |
| fep | 0:62cd296ba2a7 | 1715 | } |
| fep | 0:62cd296ba2a7 | 1716 | else |
| fep | 0:62cd296ba2a7 | 1717 | { |
| fep | 0:62cd296ba2a7 | 1718 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1719 | } |
| fep | 0:62cd296ba2a7 | 1720 | } |
| fep | 0:62cd296ba2a7 | 1721 | #endif /* configUSE_MUTEXES */ |
| fep | 0:62cd296ba2a7 | 1722 | } |
| fep | 0:62cd296ba2a7 | 1723 | else if( xPosition == queueSEND_TO_BACK ) |
| fep | 0:62cd296ba2a7 | 1724 | { |
| fep | 0:62cd296ba2a7 | 1725 | ( void ) memcpy( ( void * ) pxQueue->pcWriteTo, pvItemToQueue, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 !e418 MISRA exception as the casts are only redundant for some ports, plus previous logic ensures a null pointer can only be passed to memcpy() if the copy size is 0. */ |
| fep | 0:62cd296ba2a7 | 1726 | pxQueue->pcWriteTo += pxQueue->uxItemSize; |
| fep | 0:62cd296ba2a7 | 1727 | if( pxQueue->pcWriteTo >= pxQueue->pcTail ) /*lint !e946 MISRA exception justified as comparison of pointers is the cleanest solution. */ |
| fep | 0:62cd296ba2a7 | 1728 | { |
| fep | 0:62cd296ba2a7 | 1729 | pxQueue->pcWriteTo = pxQueue->pcHead; |
| fep | 0:62cd296ba2a7 | 1730 | } |
| fep | 0:62cd296ba2a7 | 1731 | else |
| fep | 0:62cd296ba2a7 | 1732 | { |
| fep | 0:62cd296ba2a7 | 1733 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1734 | } |
| fep | 0:62cd296ba2a7 | 1735 | } |
| fep | 0:62cd296ba2a7 | 1736 | else |
| fep | 0:62cd296ba2a7 | 1737 | { |
| fep | 0:62cd296ba2a7 | 1738 | ( void ) memcpy( ( void * ) pxQueue->u.pcReadFrom, pvItemToQueue, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 MISRA exception as the casts are only redundant for some ports. */ |
| fep | 0:62cd296ba2a7 | 1739 | pxQueue->u.pcReadFrom -= pxQueue->uxItemSize; |
| fep | 0:62cd296ba2a7 | 1740 | if( pxQueue->u.pcReadFrom < pxQueue->pcHead ) /*lint !e946 MISRA exception justified as comparison of pointers is the cleanest solution. */ |
| fep | 0:62cd296ba2a7 | 1741 | { |
| fep | 0:62cd296ba2a7 | 1742 | pxQueue->u.pcReadFrom = ( pxQueue->pcTail - pxQueue->uxItemSize ); |
| fep | 0:62cd296ba2a7 | 1743 | } |
| fep | 0:62cd296ba2a7 | 1744 | else |
| fep | 0:62cd296ba2a7 | 1745 | { |
| fep | 0:62cd296ba2a7 | 1746 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1747 | } |
| fep | 0:62cd296ba2a7 | 1748 | |
| fep | 0:62cd296ba2a7 | 1749 | if( xPosition == queueOVERWRITE ) |
| fep | 0:62cd296ba2a7 | 1750 | { |
| fep | 0:62cd296ba2a7 | 1751 | if( uxMessagesWaiting > ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 1752 | { |
| fep | 0:62cd296ba2a7 | 1753 | /* An item is being overwritten rather than added, so |
| fep | 0:62cd296ba2a7 | 1754 | subtract one from the recorded number of items in the |
| fep | 0:62cd296ba2a7 | 1755 | queue; it is incremented again below, leaving the |
| fep | 0:62cd296ba2a7 | 1756 | recorded total correct. */ |
| fep | 0:62cd296ba2a7 | 1757 | --uxMessagesWaiting; |
| fep | 0:62cd296ba2a7 | 1758 | } |
| fep | 0:62cd296ba2a7 | 1759 | else |
| fep | 0:62cd296ba2a7 | 1760 | { |
| fep | 0:62cd296ba2a7 | 1761 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1762 | } |
| fep | 0:62cd296ba2a7 | 1763 | } |
| fep | 0:62cd296ba2a7 | 1764 | else |
| fep | 0:62cd296ba2a7 | 1765 | { |
| fep | 0:62cd296ba2a7 | 1766 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1767 | } |
| fep | 0:62cd296ba2a7 | 1768 | } |
| fep | 0:62cd296ba2a7 | 1769 | |
| fep | 0:62cd296ba2a7 | 1770 | pxQueue->uxMessagesWaiting = uxMessagesWaiting + 1; |
| fep | 0:62cd296ba2a7 | 1771 | |
| fep | 0:62cd296ba2a7 | 1772 | return xReturn; |
| fep | 0:62cd296ba2a7 | 1773 | } |
| fep | 0:62cd296ba2a7 | 1774 | /*-----------------------------------------------------------*/ |
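The queueSEND_TO_BACK branch of prvCopyDataToQueue() above is a plain circular-buffer write: copy the item to the write pointer, advance the pointer by one item size, and wrap back to the head once it reaches the tail. The following is a minimal self-contained sketch of just that wrap logic; the type and field names are simplified stand-ins for the Queue_t members, not the real FreeRTOS definitions.

```c
#include <stdint.h>
#include <string.h>

/* Simplified model of the Queue_t storage pointers used by the
   send-to-back copy path in queue.c. */
typedef struct
{
    uint8_t *pcHead;    /* First byte of the storage area. */
    uint8_t *pcTail;    /* One byte past the last item slot. */
    uint8_t *pcWriteTo; /* Next free slot to write into. */
    size_t   uxItemSize;
} MiniQueue_t;

static void vMiniCopyToBack( MiniQueue_t *pxQ, const void *pvItem )
{
    /* Copy the item into the next free slot. */
    memcpy( pxQ->pcWriteTo, pvItem, pxQ->uxItemSize );
    pxQ->pcWriteTo += pxQ->uxItemSize;

    /* Wrap exactly as queue.c does: reaching pcTail means the pointer has
       moved past the last slot, so it returns to the head. */
    if( pxQ->pcWriteTo >= pxQ->pcTail )
    {
        pxQ->pcWriteTo = pxQ->pcHead;
    }
}
```

The send-to-front path in the real code is the mirror image: it writes at `u.pcReadFrom`, moves that pointer backwards, and wraps from below `pcHead` to the last slot before `pcTail`.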
| fep | 0:62cd296ba2a7 | 1775 | |
| fep | 0:62cd296ba2a7 | 1776 | static void prvCopyDataFromQueue( Queue_t * const pxQueue, void * const pvBuffer ) |
| fep | 0:62cd296ba2a7 | 1777 | { |
| fep | 0:62cd296ba2a7 | 1778 | if( pxQueue->uxItemSize != ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 1779 | { |
| fep | 0:62cd296ba2a7 | 1780 | pxQueue->u.pcReadFrom += pxQueue->uxItemSize; |
| fep | 0:62cd296ba2a7 | 1781 | if( pxQueue->u.pcReadFrom >= pxQueue->pcTail ) /*lint !e946 MISRA exception justified as use of the relational operator is the cleanest solution. */ |
| fep | 0:62cd296ba2a7 | 1782 | { |
| fep | 0:62cd296ba2a7 | 1783 | pxQueue->u.pcReadFrom = pxQueue->pcHead; |
| fep | 0:62cd296ba2a7 | 1784 | } |
| fep | 0:62cd296ba2a7 | 1785 | else |
| fep | 0:62cd296ba2a7 | 1786 | { |
| fep | 0:62cd296ba2a7 | 1787 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1788 | } |
| fep | 0:62cd296ba2a7 | 1789 | ( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 !e418 MISRA exception as the casts are only redundant for some ports. Also previous logic ensures a null pointer can only be passed to memcpy() when the count is 0. */ |
| fep | 0:62cd296ba2a7 | 1790 | } |
| fep | 0:62cd296ba2a7 | 1791 | } |
| fep | 0:62cd296ba2a7 | 1792 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 1793 | |
| fep | 0:62cd296ba2a7 | 1794 | static void prvUnlockQueue( Queue_t * const pxQueue ) |
| fep | 0:62cd296ba2a7 | 1795 | { |
| fep | 0:62cd296ba2a7 | 1796 | /* THIS FUNCTION MUST BE CALLED WITH THE SCHEDULER SUSPENDED. */ |
| fep | 0:62cd296ba2a7 | 1797 | |
| fep | 0:62cd296ba2a7 | 1798 | /* The lock counts contain the number of extra data items placed or |
| fep | 0:62cd296ba2a7 | 1799 | removed from the queue while the queue was locked. When a queue is |
| fep | 0:62cd296ba2a7 | 1800 | locked items can be added or removed, but the event lists cannot be |
| fep | 0:62cd296ba2a7 | 1801 | updated. */ |
| fep | 0:62cd296ba2a7 | 1802 | taskENTER_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1803 | { |
| fep | 0:62cd296ba2a7 | 1804 | int8_t cTxLock = pxQueue->cTxLock; |
| fep | 0:62cd296ba2a7 | 1805 | |
| fep | 0:62cd296ba2a7 | 1806 | /* See if data was added to the queue while it was locked. */ |
| fep | 0:62cd296ba2a7 | 1807 | while( cTxLock > queueLOCKED_UNMODIFIED ) |
| fep | 0:62cd296ba2a7 | 1808 | { |
| fep | 0:62cd296ba2a7 | 1809 | /* Data was posted while the queue was locked. Are any tasks |
| fep | 0:62cd296ba2a7 | 1810 | blocked waiting for data to become available? */ |
| fep | 0:62cd296ba2a7 | 1811 | #if ( configUSE_QUEUE_SETS == 1 ) |
| fep | 0:62cd296ba2a7 | 1812 | { |
| fep | 0:62cd296ba2a7 | 1813 | if( pxQueue->pxQueueSetContainer != NULL ) |
| fep | 0:62cd296ba2a7 | 1814 | { |
| fep | 0:62cd296ba2a7 | 1815 | if( prvNotifyQueueSetContainer( pxQueue, queueSEND_TO_BACK ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1816 | { |
| fep | 0:62cd296ba2a7 | 1817 | /* The queue is a member of a queue set, and posting to |
| fep | 0:62cd296ba2a7 | 1818 | the queue set caused a higher priority task to unblock. |
| fep | 0:62cd296ba2a7 | 1819 | A context switch is required. */ |
| fep | 0:62cd296ba2a7 | 1820 | vTaskMissedYield(); |
| fep | 0:62cd296ba2a7 | 1821 | } |
| fep | 0:62cd296ba2a7 | 1822 | else |
| fep | 0:62cd296ba2a7 | 1823 | { |
| fep | 0:62cd296ba2a7 | 1824 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1825 | } |
| fep | 0:62cd296ba2a7 | 1826 | } |
| fep | 0:62cd296ba2a7 | 1827 | else |
| fep | 0:62cd296ba2a7 | 1828 | { |
| fep | 0:62cd296ba2a7 | 1829 | /* Tasks that are removed from the event list will get |
| fep | 0:62cd296ba2a7 | 1830 | added to the pending ready list as the scheduler is still |
| fep | 0:62cd296ba2a7 | 1831 | suspended. */ |
| fep | 0:62cd296ba2a7 | 1832 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1833 | { |
| fep | 0:62cd296ba2a7 | 1834 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1835 | { |
| fep | 0:62cd296ba2a7 | 1836 | /* The task waiting has a higher priority so record that a |
| fep | 0:62cd296ba2a7 | 1837 | context switch is required. */ |
| fep | 0:62cd296ba2a7 | 1838 | vTaskMissedYield(); |
| fep | 0:62cd296ba2a7 | 1839 | } |
| fep | 0:62cd296ba2a7 | 1840 | else |
| fep | 0:62cd296ba2a7 | 1841 | { |
| fep | 0:62cd296ba2a7 | 1842 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1843 | } |
| fep | 0:62cd296ba2a7 | 1844 | } |
| fep | 0:62cd296ba2a7 | 1845 | else |
| fep | 0:62cd296ba2a7 | 1846 | { |
| fep | 0:62cd296ba2a7 | 1847 | break; |
| fep | 0:62cd296ba2a7 | 1848 | } |
| fep | 0:62cd296ba2a7 | 1849 | } |
| fep | 0:62cd296ba2a7 | 1850 | } |
| fep | 0:62cd296ba2a7 | 1851 | #else /* configUSE_QUEUE_SETS */ |
| fep | 0:62cd296ba2a7 | 1852 | { |
| fep | 0:62cd296ba2a7 | 1853 | /* Tasks that are removed from the event list will get added to |
| fep | 0:62cd296ba2a7 | 1854 | the pending ready list as the scheduler is still suspended. */ |
| fep | 0:62cd296ba2a7 | 1855 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1856 | { |
| fep | 0:62cd296ba2a7 | 1857 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1858 | { |
| fep | 0:62cd296ba2a7 | 1859 | /* The task waiting has a higher priority so record that |
| fep | 0:62cd296ba2a7 | 1860 | a context switch is required. */ |
| fep | 0:62cd296ba2a7 | 1861 | vTaskMissedYield(); |
| fep | 0:62cd296ba2a7 | 1862 | } |
| fep | 0:62cd296ba2a7 | 1863 | else |
| fep | 0:62cd296ba2a7 | 1864 | { |
| fep | 0:62cd296ba2a7 | 1865 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1866 | } |
| fep | 0:62cd296ba2a7 | 1867 | } |
| fep | 0:62cd296ba2a7 | 1868 | else |
| fep | 0:62cd296ba2a7 | 1869 | { |
| fep | 0:62cd296ba2a7 | 1870 | break; |
| fep | 0:62cd296ba2a7 | 1871 | } |
| fep | 0:62cd296ba2a7 | 1872 | } |
| fep | 0:62cd296ba2a7 | 1873 | #endif /* configUSE_QUEUE_SETS */ |
| fep | 0:62cd296ba2a7 | 1874 | |
| fep | 0:62cd296ba2a7 | 1875 | --cTxLock; |
| fep | 0:62cd296ba2a7 | 1876 | } |
| fep | 0:62cd296ba2a7 | 1877 | |
| fep | 0:62cd296ba2a7 | 1878 | pxQueue->cTxLock = queueUNLOCKED; |
| fep | 0:62cd296ba2a7 | 1879 | } |
| fep | 0:62cd296ba2a7 | 1880 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1881 | |
| fep | 0:62cd296ba2a7 | 1882 | /* Do the same for the Rx lock. */ |
| fep | 0:62cd296ba2a7 | 1883 | taskENTER_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1884 | { |
| fep | 0:62cd296ba2a7 | 1885 | int8_t cRxLock = pxQueue->cRxLock; |
| fep | 0:62cd296ba2a7 | 1886 | |
| fep | 0:62cd296ba2a7 | 1887 | while( cRxLock > queueLOCKED_UNMODIFIED ) |
| fep | 0:62cd296ba2a7 | 1888 | { |
| fep | 0:62cd296ba2a7 | 1889 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1890 | { |
| fep | 0:62cd296ba2a7 | 1891 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 1892 | { |
| fep | 0:62cd296ba2a7 | 1893 | vTaskMissedYield(); |
| fep | 0:62cd296ba2a7 | 1894 | } |
| fep | 0:62cd296ba2a7 | 1895 | else |
| fep | 0:62cd296ba2a7 | 1896 | { |
| fep | 0:62cd296ba2a7 | 1897 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 1898 | } |
| fep | 0:62cd296ba2a7 | 1899 | |
| fep | 0:62cd296ba2a7 | 1900 | --cRxLock; |
| fep | 0:62cd296ba2a7 | 1901 | } |
| fep | 0:62cd296ba2a7 | 1902 | else |
| fep | 0:62cd296ba2a7 | 1903 | { |
| fep | 0:62cd296ba2a7 | 1904 | break; |
| fep | 0:62cd296ba2a7 | 1905 | } |
| fep | 0:62cd296ba2a7 | 1906 | } |
| fep | 0:62cd296ba2a7 | 1907 | |
| fep | 0:62cd296ba2a7 | 1908 | pxQueue->cRxLock = queueUNLOCKED; |
| fep | 0:62cd296ba2a7 | 1909 | } |
| fep | 0:62cd296ba2a7 | 1910 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1911 | } |
| fep | 0:62cd296ba2a7 | 1912 | /*-----------------------------------------------------------*/ |
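The lock-count mechanism that prvUnlockQueue() drains can be modelled in a few lines of plain C: while the queue is locked, each ISR post bumps the Tx lock count instead of touching the event list; on unlock, one waiting receiver is woken per bump, stopping early if no one is left to wake, and the count is reset. The sketch below is a self-contained model under simplifying assumptions (no queue sets, and an integer waiter count standing in for the xTasksWaitingToReceive event list); the type and function names are made up for illustration.

```c
#include <stdint.h>

#define queueUNLOCKED           ( ( int8_t ) -1 )
#define queueLOCKED_UNMODIFIED  ( ( int8_t ) 0 )

/* Toy model of the queue's Tx lock bookkeeping. */
typedef struct
{
    int8_t cTxLock;           /* queueLOCKED_UNMODIFIED while locked, +1 per ISR post. */
    int    iWaitingReceivers; /* Stand-in for the waiting-to-receive event list. */
    int    iWokenReceivers;   /* How many receivers the unlock woke. */
} LockModel_t;

/* What xQueueSendFromISR() does to the count when the queue is locked. */
static void vModelPostFromISRWhileLocked( LockModel_t *pxM )
{
    pxM->cTxLock = ( int8_t ) ( pxM->cTxLock + 1 );
}

/* What the Tx half of prvUnlockQueue() does on unlock. */
static void vModelUnlock( LockModel_t *pxM )
{
    int8_t cTxLock = pxM->cTxLock;

    /* Wake one receiver per item posted while the queue was locked. */
    while( cTxLock > queueLOCKED_UNMODIFIED )
    {
        if( pxM->iWaitingReceivers > 0 )
        {
            pxM->iWaitingReceivers--;
            pxM->iWokenReceivers++;
        }
        else
        {
            break; /* No tasks left to wake. */
        }

        --cTxLock;
    }

    pxM->cTxLock = queueUNLOCKED;
}
```

The real function additionally records any needed context switch with vTaskMissedYield() and repeats the same drain for the Rx lock against the waiting-to-send list.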
| fep | 0:62cd296ba2a7 | 1913 | |
| fep | 0:62cd296ba2a7 | 1914 | static BaseType_t prvIsQueueEmpty( const Queue_t *pxQueue ) |
| fep | 0:62cd296ba2a7 | 1915 | { |
| fep | 0:62cd296ba2a7 | 1916 | BaseType_t xReturn; |
| fep | 0:62cd296ba2a7 | 1917 | |
| fep | 0:62cd296ba2a7 | 1918 | taskENTER_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1919 | { |
| fep | 0:62cd296ba2a7 | 1920 | if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 1921 | { |
| fep | 0:62cd296ba2a7 | 1922 | xReturn = pdTRUE; |
| fep | 0:62cd296ba2a7 | 1923 | } |
| fep | 0:62cd296ba2a7 | 1924 | else |
| fep | 0:62cd296ba2a7 | 1925 | { |
| fep | 0:62cd296ba2a7 | 1926 | xReturn = pdFALSE; |
| fep | 0:62cd296ba2a7 | 1927 | } |
| fep | 0:62cd296ba2a7 | 1928 | } |
| fep | 0:62cd296ba2a7 | 1929 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1930 | |
| fep | 0:62cd296ba2a7 | 1931 | return xReturn; |
| fep | 0:62cd296ba2a7 | 1932 | } |
| fep | 0:62cd296ba2a7 | 1933 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 1934 | |
| fep | 0:62cd296ba2a7 | 1935 | BaseType_t xQueueIsQueueEmptyFromISR( const QueueHandle_t xQueue ) |
| fep | 0:62cd296ba2a7 | 1936 | { |
| fep | 0:62cd296ba2a7 | 1937 | BaseType_t xReturn; |
| fep | 0:62cd296ba2a7 | 1938 | |
| fep | 0:62cd296ba2a7 | 1939 | configASSERT( xQueue ); |
| fep | 0:62cd296ba2a7 | 1940 | if( ( ( Queue_t * ) xQueue )->uxMessagesWaiting == ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 1941 | { |
| fep | 0:62cd296ba2a7 | 1942 | xReturn = pdTRUE; |
| fep | 0:62cd296ba2a7 | 1943 | } |
| fep | 0:62cd296ba2a7 | 1944 | else |
| fep | 0:62cd296ba2a7 | 1945 | { |
| fep | 0:62cd296ba2a7 | 1946 | xReturn = pdFALSE; |
| fep | 0:62cd296ba2a7 | 1947 | } |
| fep | 0:62cd296ba2a7 | 1948 | |
| fep | 0:62cd296ba2a7 | 1949 | return xReturn; |
| fep | 0:62cd296ba2a7 | 1950 | } /*lint !e818 xQueue could not be pointer to const because it is a typedef. */ |
| fep | 0:62cd296ba2a7 | 1951 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 1952 | |
| fep | 0:62cd296ba2a7 | 1953 | static BaseType_t prvIsQueueFull( const Queue_t *pxQueue ) |
| fep | 0:62cd296ba2a7 | 1954 | { |
| fep | 0:62cd296ba2a7 | 1955 | BaseType_t xReturn; |
| fep | 0:62cd296ba2a7 | 1956 | |
| fep | 0:62cd296ba2a7 | 1957 | taskENTER_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1958 | { |
| fep | 0:62cd296ba2a7 | 1959 | if( pxQueue->uxMessagesWaiting == pxQueue->uxLength ) |
| fep | 0:62cd296ba2a7 | 1960 | { |
| fep | 0:62cd296ba2a7 | 1961 | xReturn = pdTRUE; |
| fep | 0:62cd296ba2a7 | 1962 | } |
| fep | 0:62cd296ba2a7 | 1963 | else |
| fep | 0:62cd296ba2a7 | 1964 | { |
| fep | 0:62cd296ba2a7 | 1965 | xReturn = pdFALSE; |
| fep | 0:62cd296ba2a7 | 1966 | } |
| fep | 0:62cd296ba2a7 | 1967 | } |
| fep | 0:62cd296ba2a7 | 1968 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 1969 | |
| fep | 0:62cd296ba2a7 | 1970 | return xReturn; |
| fep | 0:62cd296ba2a7 | 1971 | } |
| fep | 0:62cd296ba2a7 | 1972 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 1973 | |
| fep | 0:62cd296ba2a7 | 1974 | BaseType_t xQueueIsQueueFullFromISR( const QueueHandle_t xQueue ) |
| fep | 0:62cd296ba2a7 | 1975 | { |
| fep | 0:62cd296ba2a7 | 1976 | BaseType_t xReturn; |
| fep | 0:62cd296ba2a7 | 1977 | |
| fep | 0:62cd296ba2a7 | 1978 | configASSERT( xQueue ); |
| fep | 0:62cd296ba2a7 | 1979 | if( ( ( Queue_t * ) xQueue )->uxMessagesWaiting == ( ( Queue_t * ) xQueue )->uxLength ) |
| fep | 0:62cd296ba2a7 | 1980 | { |
| fep | 0:62cd296ba2a7 | 1981 | xReturn = pdTRUE; |
| fep | 0:62cd296ba2a7 | 1982 | } |
| fep | 0:62cd296ba2a7 | 1983 | else |
| fep | 0:62cd296ba2a7 | 1984 | { |
| fep | 0:62cd296ba2a7 | 1985 | xReturn = pdFALSE; |
| fep | 0:62cd296ba2a7 | 1986 | } |
| fep | 0:62cd296ba2a7 | 1987 | |
| fep | 0:62cd296ba2a7 | 1988 | return xReturn; |
| fep | 0:62cd296ba2a7 | 1989 | } /*lint !e818 xQueue could not be pointer to const because it is a typedef. */ |
| fep | 0:62cd296ba2a7 | 1990 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 1991 | |
| fep | 0:62cd296ba2a7 | 1992 | #if ( configUSE_CO_ROUTINES == 1 ) |
| fep | 0:62cd296ba2a7 | 1993 | |
| fep | 0:62cd296ba2a7 | 1994 | BaseType_t xQueueCRSend( QueueHandle_t xQueue, const void *pvItemToQueue, TickType_t xTicksToWait ) |
| fep | 0:62cd296ba2a7 | 1995 | { |
| fep | 0:62cd296ba2a7 | 1996 | BaseType_t xReturn; |
| fep | 0:62cd296ba2a7 | 1997 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
| fep | 0:62cd296ba2a7 | 1998 | |
| fep | 0:62cd296ba2a7 | 1999 | /* If the queue is already full we may have to block. A critical section |
| fep | 0:62cd296ba2a7 | 2000 | is required to prevent an interrupt removing something from the queue |
| fep | 0:62cd296ba2a7 | 2001 | between the check to see if the queue is full and blocking on the queue. */ |
| fep | 0:62cd296ba2a7 | 2002 | portDISABLE_INTERRUPTS(); |
| fep | 0:62cd296ba2a7 | 2003 | { |
| fep | 0:62cd296ba2a7 | 2004 | if( prvIsQueueFull( pxQueue ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 2005 | { |
| fep | 0:62cd296ba2a7 | 2006 | /* The queue is full - do we want to block or just leave without |
| fep | 0:62cd296ba2a7 | 2007 | posting? */ |
| fep | 0:62cd296ba2a7 | 2008 | if( xTicksToWait > ( TickType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 2009 | { |
| fep | 0:62cd296ba2a7 | 2010 | /* As this is called from a co-routine we cannot block |
| fep | 0:62cd296ba2a7 | 2011 | directly; instead return to indicate that a block is required. */ |
| fep | 0:62cd296ba2a7 | 2012 | vCoRoutineAddToDelayedList( xTicksToWait, &( pxQueue->xTasksWaitingToSend ) ); |
| fep | 0:62cd296ba2a7 | 2013 | portENABLE_INTERRUPTS(); |
| fep | 0:62cd296ba2a7 | 2014 | return errQUEUE_BLOCKED; |
| fep | 0:62cd296ba2a7 | 2015 | } |
| fep | 0:62cd296ba2a7 | 2016 | else |
| fep | 0:62cd296ba2a7 | 2017 | { |
| fep | 0:62cd296ba2a7 | 2018 | portENABLE_INTERRUPTS(); |
| fep | 0:62cd296ba2a7 | 2019 | return errQUEUE_FULL; |
| fep | 0:62cd296ba2a7 | 2020 | } |
| fep | 0:62cd296ba2a7 | 2021 | } |
| fep | 0:62cd296ba2a7 | 2022 | } |
| fep | 0:62cd296ba2a7 | 2023 | portENABLE_INTERRUPTS(); |
| fep | 0:62cd296ba2a7 | 2024 | |
| fep | 0:62cd296ba2a7 | 2025 | portDISABLE_INTERRUPTS(); |
| fep | 0:62cd296ba2a7 | 2026 | { |
| fep | 0:62cd296ba2a7 | 2027 | if( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) |
| fep | 0:62cd296ba2a7 | 2028 | { |
| fep | 0:62cd296ba2a7 | 2029 | /* There is room in the queue, copy the data into the queue. */ |
| fep | 0:62cd296ba2a7 | 2030 | prvCopyDataToQueue( pxQueue, pvItemToQueue, queueSEND_TO_BACK ); |
| fep | 0:62cd296ba2a7 | 2031 | xReturn = pdPASS; |
| fep | 0:62cd296ba2a7 | 2032 | |
| fep | 0:62cd296ba2a7 | 2033 | /* Were any co-routines waiting for data to become available? */ |
| fep | 0:62cd296ba2a7 | 2034 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 2035 | { |
| fep | 0:62cd296ba2a7 | 2036 | /* In this instance the co-routine could be placed directly |
| fep | 0:62cd296ba2a7 | 2037 | into the ready list as we are within a critical section. |
| fep | 0:62cd296ba2a7 | 2038 | Instead the same pending ready list mechanism is used as if |
| fep | 0:62cd296ba2a7 | 2039 | the event were caused from within an interrupt. */ |
| fep | 0:62cd296ba2a7 | 2040 | if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 2041 | { |
| fep | 0:62cd296ba2a7 | 2042 | /* The co-routine waiting has a higher priority so record |
| fep | 0:62cd296ba2a7 | 2043 | that a yield might be appropriate. */ |
| fep | 0:62cd296ba2a7 | 2044 | xReturn = errQUEUE_YIELD; |
| fep | 0:62cd296ba2a7 | 2045 | } |
| fep | 0:62cd296ba2a7 | 2046 | else |
| fep | 0:62cd296ba2a7 | 2047 | { |
| fep | 0:62cd296ba2a7 | 2048 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2049 | } |
| fep | 0:62cd296ba2a7 | 2050 | } |
| fep | 0:62cd296ba2a7 | 2051 | else |
| fep | 0:62cd296ba2a7 | 2052 | { |
| fep | 0:62cd296ba2a7 | 2053 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2054 | } |
| fep | 0:62cd296ba2a7 | 2055 | } |
| fep | 0:62cd296ba2a7 | 2056 | else |
| fep | 0:62cd296ba2a7 | 2057 | { |
| fep | 0:62cd296ba2a7 | 2058 | xReturn = errQUEUE_FULL; |
| fep | 0:62cd296ba2a7 | 2059 | } |
| fep | 0:62cd296ba2a7 | 2060 | } |
| fep | 0:62cd296ba2a7 | 2061 | portENABLE_INTERRUPTS(); |
| fep | 0:62cd296ba2a7 | 2062 | |
| fep | 0:62cd296ba2a7 | 2063 | return xReturn; |
| fep | 0:62cd296ba2a7 | 2064 | } |
| fep | 0:62cd296ba2a7 | 2065 | |
| fep | 0:62cd296ba2a7 | 2066 | #endif /* configUSE_CO_ROUTINES */ |
| fep | 0:62cd296ba2a7 | 2067 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 2068 | |
| fep | 0:62cd296ba2a7 | 2069 | #if ( configUSE_CO_ROUTINES == 1 ) |
| fep | 0:62cd296ba2a7 | 2070 | |
| fep | 0:62cd296ba2a7 | 2071 | BaseType_t xQueueCRReceive( QueueHandle_t xQueue, void *pvBuffer, TickType_t xTicksToWait ) |
| fep | 0:62cd296ba2a7 | 2072 | { |
| fep | 0:62cd296ba2a7 | 2073 | BaseType_t xReturn; |
| fep | 0:62cd296ba2a7 | 2074 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
| fep | 0:62cd296ba2a7 | 2075 | |
| fep | 0:62cd296ba2a7 | 2076 | /* If the queue is already empty we may have to block. A critical section |
| fep | 0:62cd296ba2a7 | 2077 | is required to prevent an interrupt adding something to the queue |
| fep | 0:62cd296ba2a7 | 2078 | between the check to see if the queue is empty and blocking on the queue. */ |
| fep | 0:62cd296ba2a7 | 2079 | portDISABLE_INTERRUPTS(); |
| fep | 0:62cd296ba2a7 | 2080 | { |
| fep | 0:62cd296ba2a7 | 2081 | if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 2082 | { |
| fep | 0:62cd296ba2a7 | 2083 | /* There are no messages in the queue, do we want to block or just |
| fep | 0:62cd296ba2a7 | 2084 | leave with nothing? */ |
| fep | 0:62cd296ba2a7 | 2085 | if( xTicksToWait > ( TickType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 2086 | { |
| fep | 0:62cd296ba2a7 | 2087 | /* As this is a co-routine we cannot block directly, but return |
| fep | 0:62cd296ba2a7 | 2088 | indicating that we need to block. */ |
| fep | 0:62cd296ba2a7 | 2089 | vCoRoutineAddToDelayedList( xTicksToWait, &( pxQueue->xTasksWaitingToReceive ) ); |
| fep | 0:62cd296ba2a7 | 2090 | portENABLE_INTERRUPTS(); |
| fep | 0:62cd296ba2a7 | 2091 | return errQUEUE_BLOCKED; |
| fep | 0:62cd296ba2a7 | 2092 | } |
| fep | 0:62cd296ba2a7 | 2093 | else |
| fep | 0:62cd296ba2a7 | 2094 | { |
| fep | 0:62cd296ba2a7 | 2095 | portENABLE_INTERRUPTS(); |
| fep | 0:62cd296ba2a7 | 2096 | return errQUEUE_EMPTY; |
| fep | 0:62cd296ba2a7 | 2097 | } |
| fep | 0:62cd296ba2a7 | 2098 | } |
| fep | 0:62cd296ba2a7 | 2099 | else |
| fep | 0:62cd296ba2a7 | 2100 | { |
| fep | 0:62cd296ba2a7 | 2101 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2102 | } |
| fep | 0:62cd296ba2a7 | 2103 | } |
| fep | 0:62cd296ba2a7 | 2104 | portENABLE_INTERRUPTS(); |
| fep | 0:62cd296ba2a7 | 2105 | |
| fep | 0:62cd296ba2a7 | 2106 | portDISABLE_INTERRUPTS(); |
| fep | 0:62cd296ba2a7 | 2107 | { |
| fep | 0:62cd296ba2a7 | 2108 | if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 2109 | { |
| fep | 0:62cd296ba2a7 | 2110 | /* Data is available from the queue. */ |
| fep | 0:62cd296ba2a7 | 2111 | pxQueue->u.pcReadFrom += pxQueue->uxItemSize; |
| fep | 0:62cd296ba2a7 | 2112 | if( pxQueue->u.pcReadFrom >= pxQueue->pcTail ) |
| fep | 0:62cd296ba2a7 | 2113 | { |
| fep | 0:62cd296ba2a7 | 2114 | pxQueue->u.pcReadFrom = pxQueue->pcHead; |
| fep | 0:62cd296ba2a7 | 2115 | } |
| fep | 0:62cd296ba2a7 | 2116 | else |
| fep | 0:62cd296ba2a7 | 2117 | { |
| fep | 0:62cd296ba2a7 | 2118 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2119 | } |
| fep | 0:62cd296ba2a7 | 2120 | --( pxQueue->uxMessagesWaiting ); |
| fep | 0:62cd296ba2a7 | 2121 | ( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( unsigned ) pxQueue->uxItemSize ); |
| fep | 0:62cd296ba2a7 | 2122 | |
| fep | 0:62cd296ba2a7 | 2123 | xReturn = pdPASS; |
| fep | 0:62cd296ba2a7 | 2124 | |
| fep | 0:62cd296ba2a7 | 2125 | /* Were any co-routines waiting for space to become available? */ |
| fep | 0:62cd296ba2a7 | 2126 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 2127 | { |
| fep | 0:62cd296ba2a7 | 2128 | /* In this instance the co-routine could be placed directly |
| fep | 0:62cd296ba2a7 | 2129 | into the ready list as we are within a critical section. |
| fep | 0:62cd296ba2a7 | 2130 | Instead the same pending ready list mechanism is used as if |
| fep | 0:62cd296ba2a7 | 2131 | the event were caused from within an interrupt. */ |
| fep | 0:62cd296ba2a7 | 2132 | if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 2133 | { |
| fep | 0:62cd296ba2a7 | 2134 | xReturn = errQUEUE_YIELD; |
| fep | 0:62cd296ba2a7 | 2135 | } |
| fep | 0:62cd296ba2a7 | 2136 | else |
| fep | 0:62cd296ba2a7 | 2137 | { |
| fep | 0:62cd296ba2a7 | 2138 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2139 | } |
| fep | 0:62cd296ba2a7 | 2140 | } |
| fep | 0:62cd296ba2a7 | 2141 | else |
| fep | 0:62cd296ba2a7 | 2142 | { |
| fep | 0:62cd296ba2a7 | 2143 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2144 | } |
| fep | 0:62cd296ba2a7 | 2145 | } |
| fep | 0:62cd296ba2a7 | 2146 | else |
| fep | 0:62cd296ba2a7 | 2147 | { |
| fep | 0:62cd296ba2a7 | 2148 | xReturn = pdFAIL; |
| fep | 0:62cd296ba2a7 | 2149 | } |
| fep | 0:62cd296ba2a7 | 2150 | } |
| fep | 0:62cd296ba2a7 | 2151 | portENABLE_INTERRUPTS(); |
| fep | 0:62cd296ba2a7 | 2152 | |
| fep | 0:62cd296ba2a7 | 2153 | return xReturn; |
| fep | 0:62cd296ba2a7 | 2154 | } |
| fep | 0:62cd296ba2a7 | 2155 | |
| fep | 0:62cd296ba2a7 | 2156 | #endif /* configUSE_CO_ROUTINES */ |
| fep | 0:62cd296ba2a7 | 2157 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 2158 | |
| fep | 0:62cd296ba2a7 | 2159 | #if ( configUSE_CO_ROUTINES == 1 ) |
| fep | 0:62cd296ba2a7 | 2160 | |
| fep | 0:62cd296ba2a7 | 2161 | BaseType_t xQueueCRSendFromISR( QueueHandle_t xQueue, const void *pvItemToQueue, BaseType_t xCoRoutinePreviouslyWoken ) |
| fep | 0:62cd296ba2a7 | 2162 | { |
| fep | 0:62cd296ba2a7 | 2163 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
| fep | 0:62cd296ba2a7 | 2164 | |
| fep | 0:62cd296ba2a7 | 2165 | /* Cannot block within an ISR so if there is no space on the queue then |
| fep | 0:62cd296ba2a7 | 2166 | exit without doing anything. */ |
| fep | 0:62cd296ba2a7 | 2167 | if( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) |
| fep | 0:62cd296ba2a7 | 2168 | { |
| fep | 0:62cd296ba2a7 | 2169 | prvCopyDataToQueue( pxQueue, pvItemToQueue, queueSEND_TO_BACK ); |
| fep | 0:62cd296ba2a7 | 2170 | |
| fep | 0:62cd296ba2a7 | 2171 | /* We only want to wake one co-routine per ISR, so check that a |
| fep | 0:62cd296ba2a7 | 2172 | co-routine has not already been woken. */ |
| fep | 0:62cd296ba2a7 | 2173 | if( xCoRoutinePreviouslyWoken == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 2174 | { |
| fep | 0:62cd296ba2a7 | 2175 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 2176 | { |
| fep | 0:62cd296ba2a7 | 2177 | if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 2178 | { |
| fep | 0:62cd296ba2a7 | 2179 | return pdTRUE; |
| fep | 0:62cd296ba2a7 | 2180 | } |
| fep | 0:62cd296ba2a7 | 2181 | else |
| fep | 0:62cd296ba2a7 | 2182 | { |
| fep | 0:62cd296ba2a7 | 2183 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2184 | } |
| fep | 0:62cd296ba2a7 | 2185 | } |
| fep | 0:62cd296ba2a7 | 2186 | else |
| fep | 0:62cd296ba2a7 | 2187 | { |
| fep | 0:62cd296ba2a7 | 2188 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2189 | } |
| fep | 0:62cd296ba2a7 | 2190 | } |
| fep | 0:62cd296ba2a7 | 2191 | else |
| fep | 0:62cd296ba2a7 | 2192 | { |
| fep | 0:62cd296ba2a7 | 2193 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2194 | } |
| fep | 0:62cd296ba2a7 | 2195 | } |
| fep | 0:62cd296ba2a7 | 2196 | else |
| fep | 0:62cd296ba2a7 | 2197 | { |
| fep | 0:62cd296ba2a7 | 2198 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2199 | } |
| fep | 0:62cd296ba2a7 | 2200 | |
| fep | 0:62cd296ba2a7 | 2201 | return xCoRoutinePreviouslyWoken; |
| fep | 0:62cd296ba2a7 | 2202 | } |
| fep | 0:62cd296ba2a7 | 2203 | |
| fep | 0:62cd296ba2a7 | 2204 | #endif /* configUSE_CO_ROUTINES */ |
| fep | 0:62cd296ba2a7 | 2205 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 2206 | |
| fep | 0:62cd296ba2a7 | 2207 | #if ( configUSE_CO_ROUTINES == 1 ) |
| fep | 0:62cd296ba2a7 | 2208 | |
| fep | 0:62cd296ba2a7 | 2209 | BaseType_t xQueueCRReceiveFromISR( QueueHandle_t xQueue, void *pvBuffer, BaseType_t *pxCoRoutineWoken ) |
| fep | 0:62cd296ba2a7 | 2210 | { |
| fep | 0:62cd296ba2a7 | 2211 | BaseType_t xReturn; |
| fep | 0:62cd296ba2a7 | 2212 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
| fep | 0:62cd296ba2a7 | 2213 | |
| fep | 0:62cd296ba2a7 | 2214 | /* We cannot block from an ISR, so check there is data available. If |
| fep | 0:62cd296ba2a7 | 2215 | not then just leave without doing anything. */ |
| fep | 0:62cd296ba2a7 | 2216 | if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 2217 | { |
| fep | 0:62cd296ba2a7 | 2218 | /* Copy the data from the queue. */ |
| fep | 0:62cd296ba2a7 | 2219 | pxQueue->u.pcReadFrom += pxQueue->uxItemSize; |
| fep | 0:62cd296ba2a7 | 2220 | if( pxQueue->u.pcReadFrom >= pxQueue->pcTail ) |
| fep | 0:62cd296ba2a7 | 2221 | { |
| fep | 0:62cd296ba2a7 | 2222 | pxQueue->u.pcReadFrom = pxQueue->pcHead; |
| fep | 0:62cd296ba2a7 | 2223 | } |
| fep | 0:62cd296ba2a7 | 2224 | else |
| fep | 0:62cd296ba2a7 | 2225 | { |
| fep | 0:62cd296ba2a7 | 2226 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2227 | } |
| fep | 0:62cd296ba2a7 | 2228 | --( pxQueue->uxMessagesWaiting ); |
| fep | 0:62cd296ba2a7 | 2229 | ( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( unsigned ) pxQueue->uxItemSize ); |
| fep | 0:62cd296ba2a7 | 2230 | |
| fep | 0:62cd296ba2a7 | 2231 | if( ( *pxCoRoutineWoken ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 2232 | { |
| fep | 0:62cd296ba2a7 | 2233 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 2234 | { |
| fep | 0:62cd296ba2a7 | 2235 | if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 2236 | { |
| fep | 0:62cd296ba2a7 | 2237 | *pxCoRoutineWoken = pdTRUE; |
| fep | 0:62cd296ba2a7 | 2238 | } |
| fep | 0:62cd296ba2a7 | 2239 | else |
| fep | 0:62cd296ba2a7 | 2240 | { |
| fep | 0:62cd296ba2a7 | 2241 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2242 | } |
| fep | 0:62cd296ba2a7 | 2243 | } |
| fep | 0:62cd296ba2a7 | 2244 | else |
| fep | 0:62cd296ba2a7 | 2245 | { |
| fep | 0:62cd296ba2a7 | 2246 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2247 | } |
| fep | 0:62cd296ba2a7 | 2248 | } |
| fep | 0:62cd296ba2a7 | 2249 | else |
| fep | 0:62cd296ba2a7 | 2250 | { |
| fep | 0:62cd296ba2a7 | 2251 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2252 | } |
| fep | 0:62cd296ba2a7 | 2253 | |
| fep | 0:62cd296ba2a7 | 2254 | xReturn = pdPASS; |
| fep | 0:62cd296ba2a7 | 2255 | } |
| fep | 0:62cd296ba2a7 | 2256 | else |
| fep | 0:62cd296ba2a7 | 2257 | { |
| fep | 0:62cd296ba2a7 | 2258 | xReturn = pdFAIL; |
| fep | 0:62cd296ba2a7 | 2259 | } |
| fep | 0:62cd296ba2a7 | 2260 | |
| fep | 0:62cd296ba2a7 | 2261 | return xReturn; |
| fep | 0:62cd296ba2a7 | 2262 | } |
| fep | 0:62cd296ba2a7 | 2263 | |
| fep | 0:62cd296ba2a7 | 2264 | #endif /* configUSE_CO_ROUTINES */ |
| fep | 0:62cd296ba2a7 | 2265 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 2266 | |
| fep | 0:62cd296ba2a7 | 2267 | #if ( configQUEUE_REGISTRY_SIZE > 0 ) |
| fep | 0:62cd296ba2a7 | 2268 | |
| fep | 0:62cd296ba2a7 | 2269 | void vQueueAddToRegistry( QueueHandle_t xQueue, const char *pcQueueName ) /*lint !e971 Unqualified char types are allowed for strings and single characters only. */ |
| fep | 0:62cd296ba2a7 | 2270 | { |
| fep | 0:62cd296ba2a7 | 2271 | UBaseType_t ux; |
| fep | 0:62cd296ba2a7 | 2272 | |
| fep | 0:62cd296ba2a7 | 2273 | /* See if there is an empty space in the registry. A NULL name denotes |
| fep | 0:62cd296ba2a7 | 2274 | a free slot. */ |
| fep | 0:62cd296ba2a7 | 2275 | for( ux = ( UBaseType_t ) 0U; ux < ( UBaseType_t ) configQUEUE_REGISTRY_SIZE; ux++ ) |
| fep | 0:62cd296ba2a7 | 2276 | { |
| fep | 0:62cd296ba2a7 | 2277 | if( xQueueRegistry[ ux ].pcQueueName == NULL ) |
| fep | 0:62cd296ba2a7 | 2278 | { |
| fep | 0:62cd296ba2a7 | 2279 | /* Store the information on this queue. */ |
| fep | 0:62cd296ba2a7 | 2280 | xQueueRegistry[ ux ].pcQueueName = pcQueueName; |
| fep | 0:62cd296ba2a7 | 2281 | xQueueRegistry[ ux ].xHandle = xQueue; |
| fep | 0:62cd296ba2a7 | 2282 | |
| fep | 0:62cd296ba2a7 | 2283 | traceQUEUE_REGISTRY_ADD( xQueue, pcQueueName ); |
| fep | 0:62cd296ba2a7 | 2284 | break; |
| fep | 0:62cd296ba2a7 | 2285 | } |
| fep | 0:62cd296ba2a7 | 2286 | else |
| fep | 0:62cd296ba2a7 | 2287 | { |
| fep | 0:62cd296ba2a7 | 2288 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2289 | } |
| fep | 0:62cd296ba2a7 | 2290 | } |
| fep | 0:62cd296ba2a7 | 2291 | } |
| fep | 0:62cd296ba2a7 | 2292 | |
| fep | 0:62cd296ba2a7 | 2293 | #endif /* configQUEUE_REGISTRY_SIZE */ |
| fep | 0:62cd296ba2a7 | 2294 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 2295 | |
| fep | 0:62cd296ba2a7 | 2296 | #if ( configQUEUE_REGISTRY_SIZE > 0 ) |
| fep | 0:62cd296ba2a7 | 2297 | |
| fep | 0:62cd296ba2a7 | 2298 | const char *pcQueueGetName( QueueHandle_t xQueue ) /*lint !e971 Unqualified char types are allowed for strings and single characters only. */ |
| fep | 0:62cd296ba2a7 | 2299 | { |
| fep | 0:62cd296ba2a7 | 2300 | UBaseType_t ux; |
| fep | 0:62cd296ba2a7 | 2301 | const char *pcReturn = NULL; /*lint !e971 Unqualified char types are allowed for strings and single characters only. */ |
| fep | 0:62cd296ba2a7 | 2302 | |
| fep | 0:62cd296ba2a7 | 2303 | /* Note there is nothing here to protect against another task adding or |
| fep | 0:62cd296ba2a7 | 2304 | removing entries from the registry while it is being searched. */ |
| fep | 0:62cd296ba2a7 | 2305 | for( ux = ( UBaseType_t ) 0U; ux < ( UBaseType_t ) configQUEUE_REGISTRY_SIZE; ux++ ) |
| fep | 0:62cd296ba2a7 | 2306 | { |
| fep | 0:62cd296ba2a7 | 2307 | if( xQueueRegistry[ ux ].xHandle == xQueue ) |
| fep | 0:62cd296ba2a7 | 2308 | { |
| fep | 0:62cd296ba2a7 | 2309 | pcReturn = xQueueRegistry[ ux ].pcQueueName; |
| fep | 0:62cd296ba2a7 | 2310 | break; |
| fep | 0:62cd296ba2a7 | 2311 | } |
| fep | 0:62cd296ba2a7 | 2312 | else |
| fep | 0:62cd296ba2a7 | 2313 | { |
| fep | 0:62cd296ba2a7 | 2314 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2315 | } |
| fep | 0:62cd296ba2a7 | 2316 | } |
| fep | 0:62cd296ba2a7 | 2317 | |
| fep | 0:62cd296ba2a7 | 2318 | return pcReturn; |
| fep | 0:62cd296ba2a7 | 2319 | } |
| fep | 0:62cd296ba2a7 | 2320 | |
| fep | 0:62cd296ba2a7 | 2321 | #endif /* configQUEUE_REGISTRY_SIZE */ |
| fep | 0:62cd296ba2a7 | 2322 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 2323 | |
| fep | 0:62cd296ba2a7 | 2324 | #if ( configQUEUE_REGISTRY_SIZE > 0 ) |
| fep | 0:62cd296ba2a7 | 2325 | |
| fep | 0:62cd296ba2a7 | 2326 | void vQueueUnregisterQueue( QueueHandle_t xQueue ) |
| fep | 0:62cd296ba2a7 | 2327 | { |
| fep | 0:62cd296ba2a7 | 2328 | UBaseType_t ux; |
| fep | 0:62cd296ba2a7 | 2329 | |
| fep | 0:62cd296ba2a7 | 2330 | /* See if the handle of the queue being unregistered is actually in the |
| fep | 0:62cd296ba2a7 | 2331 | registry. */ |
| fep | 0:62cd296ba2a7 | 2332 | for( ux = ( UBaseType_t ) 0U; ux < ( UBaseType_t ) configQUEUE_REGISTRY_SIZE; ux++ ) |
| fep | 0:62cd296ba2a7 | 2333 | { |
| fep | 0:62cd296ba2a7 | 2334 | if( xQueueRegistry[ ux ].xHandle == xQueue ) |
| fep | 0:62cd296ba2a7 | 2335 | { |
| fep | 0:62cd296ba2a7 | 2336 | /* Set the name to NULL to show that this slot is free again. */ |
| fep | 0:62cd296ba2a7 | 2337 | xQueueRegistry[ ux ].pcQueueName = NULL; |
| fep | 0:62cd296ba2a7 | 2338 | |
| fep | 0:62cd296ba2a7 | 2339 | /* Set the handle to NULL to ensure the same queue handle cannot |
| fep | 0:62cd296ba2a7 | 2340 | appear in the registry twice if it is added, removed, then |
| fep | 0:62cd296ba2a7 | 2341 | added again. */ |
| fep | 0:62cd296ba2a7 | 2342 | xQueueRegistry[ ux ].xHandle = ( QueueHandle_t ) 0; |
| fep | 0:62cd296ba2a7 | 2343 | break; |
| fep | 0:62cd296ba2a7 | 2344 | } |
| fep | 0:62cd296ba2a7 | 2345 | else |
| fep | 0:62cd296ba2a7 | 2346 | { |
| fep | 0:62cd296ba2a7 | 2347 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2348 | } |
| fep | 0:62cd296ba2a7 | 2349 | } |
| fep | 0:62cd296ba2a7 | 2350 | |
| fep | 0:62cd296ba2a7 | 2351 | } /*lint !e818 xQueue could not be pointer to const because it is a typedef. */ |
| fep | 0:62cd296ba2a7 | 2352 | |
| fep | 0:62cd296ba2a7 | 2353 | #endif /* configQUEUE_REGISTRY_SIZE */ |
| fep | 0:62cd296ba2a7 | 2354 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 2355 | |
| fep | 0:62cd296ba2a7 | 2356 | #if ( configUSE_TIMERS == 1 ) |
| fep | 0:62cd296ba2a7 | 2357 | |
| fep | 0:62cd296ba2a7 | 2358 | void vQueueWaitForMessageRestricted( QueueHandle_t xQueue, TickType_t xTicksToWait, const BaseType_t xWaitIndefinitely ) |
| fep | 0:62cd296ba2a7 | 2359 | { |
| fep | 0:62cd296ba2a7 | 2360 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
| fep | 0:62cd296ba2a7 | 2361 | |
| fep | 0:62cd296ba2a7 | 2362 | /* This function should not be called by application code hence the |
| fep | 0:62cd296ba2a7 | 2363 | 'Restricted' in its name. It is not part of the public API. It is |
| fep | 0:62cd296ba2a7 | 2364 | designed for use by kernel code, and has special calling requirements. |
| fep | 0:62cd296ba2a7 | 2365 | It can result in vListInsert() being called on a list that can only |
| fep | 0:62cd296ba2a7 | 2366 | possibly ever have one item in it, so the list will be fast, but even |
| fep | 0:62cd296ba2a7 | 2367 | so it should be called with the scheduler locked and not from a critical |
| fep | 0:62cd296ba2a7 | 2368 | section. */ |
| fep | 0:62cd296ba2a7 | 2369 | |
| fep | 0:62cd296ba2a7 | 2370 | /* Only do anything if there are no messages in the queue. This function |
| fep | 0:62cd296ba2a7 | 2371 | will not actually cause the task to block, just place it on a blocked |
| fep | 0:62cd296ba2a7 | 2372 | list. It will not block until the scheduler is unlocked - at which |
| fep | 0:62cd296ba2a7 | 2373 | time a yield will be performed. If an item is added to the queue while |
| fep | 0:62cd296ba2a7 | 2374 | the queue is locked, and the calling task blocks on the queue, then the |
| fep | 0:62cd296ba2a7 | 2375 | calling task will be immediately unblocked when the queue is unlocked. */ |
| fep | 0:62cd296ba2a7 | 2376 | prvLockQueue( pxQueue ); |
| fep | 0:62cd296ba2a7 | 2377 | if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0U ) |
| fep | 0:62cd296ba2a7 | 2378 | { |
| fep | 0:62cd296ba2a7 | 2379 | /* There is nothing in the queue, block for the specified period. */ |
| fep | 0:62cd296ba2a7 | 2380 | vTaskPlaceOnEventListRestricted( &( pxQueue->xTasksWaitingToReceive ), xTicksToWait, xWaitIndefinitely ); |
| fep | 0:62cd296ba2a7 | 2381 | } |
| fep | 0:62cd296ba2a7 | 2382 | else |
| fep | 0:62cd296ba2a7 | 2383 | { |
| fep | 0:62cd296ba2a7 | 2384 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2385 | } |
| fep | 0:62cd296ba2a7 | 2386 | prvUnlockQueue( pxQueue ); |
| fep | 0:62cd296ba2a7 | 2387 | } |
| fep | 0:62cd296ba2a7 | 2388 | |
| fep | 0:62cd296ba2a7 | 2389 | #endif /* configUSE_TIMERS */ |
| fep | 0:62cd296ba2a7 | 2390 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 2391 | |
| fep | 0:62cd296ba2a7 | 2392 | #if( ( configUSE_QUEUE_SETS == 1 ) && ( configSUPPORT_DYNAMIC_ALLOCATION == 1 ) ) |
| fep | 0:62cd296ba2a7 | 2393 | |
| fep | 0:62cd296ba2a7 | 2394 | QueueSetHandle_t xQueueCreateSet( const UBaseType_t uxEventQueueLength ) |
| fep | 0:62cd296ba2a7 | 2395 | { |
| fep | 0:62cd296ba2a7 | 2396 | QueueSetHandle_t pxQueue; |
| fep | 0:62cd296ba2a7 | 2397 | |
| fep | 0:62cd296ba2a7 | 2398 | pxQueue = xQueueGenericCreate( uxEventQueueLength, sizeof( Queue_t * ), queueQUEUE_TYPE_SET ); |
| fep | 0:62cd296ba2a7 | 2399 | |
| fep | 0:62cd296ba2a7 | 2400 | return pxQueue; |
| fep | 0:62cd296ba2a7 | 2401 | } |
| fep | 0:62cd296ba2a7 | 2402 | |
| fep | 0:62cd296ba2a7 | 2403 | #endif /* configUSE_QUEUE_SETS */ |
| fep | 0:62cd296ba2a7 | 2404 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 2405 | |
| fep | 0:62cd296ba2a7 | 2406 | #if ( configUSE_QUEUE_SETS == 1 ) |
| fep | 0:62cd296ba2a7 | 2407 | |
| fep | 0:62cd296ba2a7 | 2408 | BaseType_t xQueueAddToSet( QueueSetMemberHandle_t xQueueOrSemaphore, QueueSetHandle_t xQueueSet ) |
| fep | 0:62cd296ba2a7 | 2409 | { |
| fep | 0:62cd296ba2a7 | 2410 | BaseType_t xReturn; |
| fep | 0:62cd296ba2a7 | 2411 | |
| fep | 0:62cd296ba2a7 | 2412 | taskENTER_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 2413 | { |
| fep | 0:62cd296ba2a7 | 2414 | if( ( ( Queue_t * ) xQueueOrSemaphore )->pxQueueSetContainer != NULL ) |
| fep | 0:62cd296ba2a7 | 2415 | { |
| fep | 0:62cd296ba2a7 | 2416 | /* Cannot add a queue/semaphore to more than one queue set. */ |
| fep | 0:62cd296ba2a7 | 2417 | xReturn = pdFAIL; |
| fep | 0:62cd296ba2a7 | 2418 | } |
| fep | 0:62cd296ba2a7 | 2419 | else if( ( ( Queue_t * ) xQueueOrSemaphore )->uxMessagesWaiting != ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 2420 | { |
| fep | 0:62cd296ba2a7 | 2421 | /* Cannot add a queue/semaphore to a queue set if there are already |
| fep | 0:62cd296ba2a7 | 2422 | items in the queue/semaphore. */ |
| fep | 0:62cd296ba2a7 | 2423 | xReturn = pdFAIL; |
| fep | 0:62cd296ba2a7 | 2424 | } |
| fep | 0:62cd296ba2a7 | 2425 | else |
| fep | 0:62cd296ba2a7 | 2426 | { |
| fep | 0:62cd296ba2a7 | 2427 | ( ( Queue_t * ) xQueueOrSemaphore )->pxQueueSetContainer = xQueueSet; |
| fep | 0:62cd296ba2a7 | 2428 | xReturn = pdPASS; |
| fep | 0:62cd296ba2a7 | 2429 | } |
| fep | 0:62cd296ba2a7 | 2430 | } |
| fep | 0:62cd296ba2a7 | 2431 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 2432 | |
| fep | 0:62cd296ba2a7 | 2433 | return xReturn; |
| fep | 0:62cd296ba2a7 | 2434 | } |
| fep | 0:62cd296ba2a7 | 2435 | |
| fep | 0:62cd296ba2a7 | 2436 | #endif /* configUSE_QUEUE_SETS */ |
| fep | 0:62cd296ba2a7 | 2437 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 2438 | |
| fep | 0:62cd296ba2a7 | 2439 | #if ( configUSE_QUEUE_SETS == 1 ) |
| fep | 0:62cd296ba2a7 | 2440 | |
| fep | 0:62cd296ba2a7 | 2441 | BaseType_t xQueueRemoveFromSet( QueueSetMemberHandle_t xQueueOrSemaphore, QueueSetHandle_t xQueueSet ) |
| fep | 0:62cd296ba2a7 | 2442 | { |
| fep | 0:62cd296ba2a7 | 2443 | BaseType_t xReturn; |
| fep | 0:62cd296ba2a7 | 2444 | Queue_t * const pxQueueOrSemaphore = ( Queue_t * ) xQueueOrSemaphore; |
| fep | 0:62cd296ba2a7 | 2445 | |
| fep | 0:62cd296ba2a7 | 2446 | if( pxQueueOrSemaphore->pxQueueSetContainer != xQueueSet ) |
| fep | 0:62cd296ba2a7 | 2447 | { |
| fep | 0:62cd296ba2a7 | 2448 | /* The queue was not a member of the set. */ |
| fep | 0:62cd296ba2a7 | 2449 | xReturn = pdFAIL; |
| fep | 0:62cd296ba2a7 | 2450 | } |
| fep | 0:62cd296ba2a7 | 2451 | else if( pxQueueOrSemaphore->uxMessagesWaiting != ( UBaseType_t ) 0 ) |
| fep | 0:62cd296ba2a7 | 2452 | { |
| fep | 0:62cd296ba2a7 | 2453 | /* It is dangerous to remove a queue from a set when the queue is |
| fep | 0:62cd296ba2a7 | 2454 | not empty because the queue set will still hold pending events for |
| fep | 0:62cd296ba2a7 | 2455 | the queue. */ |
| fep | 0:62cd296ba2a7 | 2456 | xReturn = pdFAIL; |
| fep | 0:62cd296ba2a7 | 2457 | } |
| fep | 0:62cd296ba2a7 | 2458 | else |
| fep | 0:62cd296ba2a7 | 2459 | { |
| fep | 0:62cd296ba2a7 | 2460 | taskENTER_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 2461 | { |
| fep | 0:62cd296ba2a7 | 2462 | /* The queue is no longer contained in the set. */ |
| fep | 0:62cd296ba2a7 | 2463 | pxQueueOrSemaphore->pxQueueSetContainer = NULL; |
| fep | 0:62cd296ba2a7 | 2464 | } |
| fep | 0:62cd296ba2a7 | 2465 | taskEXIT_CRITICAL(); |
| fep | 0:62cd296ba2a7 | 2466 | xReturn = pdPASS; |
| fep | 0:62cd296ba2a7 | 2467 | } |
| fep | 0:62cd296ba2a7 | 2468 | |
| fep | 0:62cd296ba2a7 | 2469 | return xReturn; |
| fep | 0:62cd296ba2a7 | 2470 | } /*lint !e818 xQueueSet could not be declared as pointing to const as it is a typedef. */ |
| fep | 0:62cd296ba2a7 | 2471 | |
| fep | 0:62cd296ba2a7 | 2472 | #endif /* configUSE_QUEUE_SETS */ |
| fep | 0:62cd296ba2a7 | 2473 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 2474 | |
| fep | 0:62cd296ba2a7 | 2475 | #if ( configUSE_QUEUE_SETS == 1 ) |
| fep | 0:62cd296ba2a7 | 2476 | |
| fep | 0:62cd296ba2a7 | 2477 | QueueSetMemberHandle_t xQueueSelectFromSet( QueueSetHandle_t xQueueSet, TickType_t const xTicksToWait ) |
| fep | 0:62cd296ba2a7 | 2478 | { |
| fep | 0:62cd296ba2a7 | 2479 | QueueSetMemberHandle_t xReturn = NULL; |
| fep | 0:62cd296ba2a7 | 2480 | |
| fep | 0:62cd296ba2a7 | 2481 | ( void ) xQueueGenericReceive( ( QueueHandle_t ) xQueueSet, &xReturn, xTicksToWait, pdFALSE ); /*lint !e961 Casting from one typedef to another is not redundant. */ |
| fep | 0:62cd296ba2a7 | 2482 | return xReturn; |
| fep | 0:62cd296ba2a7 | 2483 | } |
| fep | 0:62cd296ba2a7 | 2484 | |
| fep | 0:62cd296ba2a7 | 2485 | #endif /* configUSE_QUEUE_SETS */ |
| fep | 0:62cd296ba2a7 | 2486 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 2487 | |
| fep | 0:62cd296ba2a7 | 2488 | #if ( configUSE_QUEUE_SETS == 1 ) |
| fep | 0:62cd296ba2a7 | 2489 | |
| fep | 0:62cd296ba2a7 | 2490 | QueueSetMemberHandle_t xQueueSelectFromSetFromISR( QueueSetHandle_t xQueueSet ) |
| fep | 0:62cd296ba2a7 | 2491 | { |
| fep | 0:62cd296ba2a7 | 2492 | QueueSetMemberHandle_t xReturn = NULL; |
| fep | 0:62cd296ba2a7 | 2493 | |
| fep | 0:62cd296ba2a7 | 2494 | ( void ) xQueueReceiveFromISR( ( QueueHandle_t ) xQueueSet, &xReturn, NULL ); /*lint !e961 Casting from one typedef to another is not redundant. */ |
| fep | 0:62cd296ba2a7 | 2495 | return xReturn; |
| fep | 0:62cd296ba2a7 | 2496 | } |
| fep | 0:62cd296ba2a7 | 2497 | |
| fep | 0:62cd296ba2a7 | 2498 | #endif /* configUSE_QUEUE_SETS */ |
| fep | 0:62cd296ba2a7 | 2499 | /*-----------------------------------------------------------*/ |
| fep | 0:62cd296ba2a7 | 2500 | |
| fep | 0:62cd296ba2a7 | 2501 | #if ( configUSE_QUEUE_SETS == 1 ) |
| fep | 0:62cd296ba2a7 | 2502 | |
| fep | 0:62cd296ba2a7 | 2503 | static BaseType_t prvNotifyQueueSetContainer( const Queue_t * const pxQueue, const BaseType_t xCopyPosition ) |
| fep | 0:62cd296ba2a7 | 2504 | { |
| fep | 0:62cd296ba2a7 | 2505 | Queue_t *pxQueueSetContainer = pxQueue->pxQueueSetContainer; |
| fep | 0:62cd296ba2a7 | 2506 | BaseType_t xReturn = pdFALSE; |
| fep | 0:62cd296ba2a7 | 2507 | |
| fep | 0:62cd296ba2a7 | 2508 | /* This function must be called from a critical section. */ |
| fep | 0:62cd296ba2a7 | 2509 | |
| fep | 0:62cd296ba2a7 | 2510 | configASSERT( pxQueueSetContainer ); |
| fep | 0:62cd296ba2a7 | 2511 | configASSERT( pxQueueSetContainer->uxMessagesWaiting < pxQueueSetContainer->uxLength ); |
| fep | 0:62cd296ba2a7 | 2512 | |
| fep | 0:62cd296ba2a7 | 2513 | if( pxQueueSetContainer->uxMessagesWaiting < pxQueueSetContainer->uxLength ) |
| fep | 0:62cd296ba2a7 | 2514 | { |
| fep | 0:62cd296ba2a7 | 2515 | const int8_t cTxLock = pxQueueSetContainer->cTxLock; |
| fep | 0:62cd296ba2a7 | 2516 | |
| fep | 0:62cd296ba2a7 | 2517 | traceQUEUE_SEND( pxQueueSetContainer ); |
| fep | 0:62cd296ba2a7 | 2518 | |
| fep | 0:62cd296ba2a7 | 2519 | /* The data copied is the handle of the queue that contains data. */ |
| fep | 0:62cd296ba2a7 | 2520 | xReturn = prvCopyDataToQueue( pxQueueSetContainer, &pxQueue, xCopyPosition ); |
| fep | 0:62cd296ba2a7 | 2521 | |
| fep | 0:62cd296ba2a7 | 2522 | if( cTxLock == queueUNLOCKED ) |
| fep | 0:62cd296ba2a7 | 2523 | { |
| fep | 0:62cd296ba2a7 | 2524 | if( listLIST_IS_EMPTY( &( pxQueueSetContainer->xTasksWaitingToReceive ) ) == pdFALSE ) |
| fep | 0:62cd296ba2a7 | 2525 | { |
| fep | 0:62cd296ba2a7 | 2526 | if( xTaskRemoveFromEventList( &( pxQueueSetContainer->xTasksWaitingToReceive ) ) != pdFALSE ) |
| fep | 0:62cd296ba2a7 | 2527 | { |
| fep | 0:62cd296ba2a7 | 2528 | /* The task waiting has a higher priority. */ |
| fep | 0:62cd296ba2a7 | 2529 | xReturn = pdTRUE; |
| fep | 0:62cd296ba2a7 | 2530 | } |
| fep | 0:62cd296ba2a7 | 2531 | else |
| fep | 0:62cd296ba2a7 | 2532 | { |
| fep | 0:62cd296ba2a7 | 2533 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2534 | } |
| fep | 0:62cd296ba2a7 | 2535 | } |
| fep | 0:62cd296ba2a7 | 2536 | else |
| fep | 0:62cd296ba2a7 | 2537 | { |
| fep | 0:62cd296ba2a7 | 2538 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2539 | } |
| fep | 0:62cd296ba2a7 | 2540 | } |
| fep | 0:62cd296ba2a7 | 2541 | else |
| fep | 0:62cd296ba2a7 | 2542 | { |
| fep | 0:62cd296ba2a7 | 2543 | pxQueueSetContainer->cTxLock = ( int8_t ) ( cTxLock + 1 ); |
| fep | 0:62cd296ba2a7 | 2544 | } |
| fep | 0:62cd296ba2a7 | 2545 | } |
| fep | 0:62cd296ba2a7 | 2546 | else |
| fep | 0:62cd296ba2a7 | 2547 | { |
| fep | 0:62cd296ba2a7 | 2548 | mtCOVERAGE_TEST_MARKER(); |
| fep | 0:62cd296ba2a7 | 2549 | } |
| fep | 0:62cd296ba2a7 | 2550 | |
| fep | 0:62cd296ba2a7 | 2551 | return xReturn; |
| fep | 0:62cd296ba2a7 | 2552 | } |
| fep | 0:62cd296ba2a7 | 2553 | |
| fep | 0:62cd296ba2a7 | 2554 | #endif /* configUSE_QUEUE_SETS */ |
| fep | 0:62cd296ba2a7 | 2555 | |
| fep | 0:62cd296ba2a7 | 2556 | |
| fep | 0:62cd296ba2a7 | 2557 | |
| fep | 0:62cd296ba2a7 | 2558 | |
| fep | 0:62cd296ba2a7 | 2559 | |
| fep | 0:62cd296ba2a7 | 2560 | |
| fep | 0:62cd296ba2a7 | 2561 | |
| fep | 0:62cd296ba2a7 | 2562 | |
| fep | 0:62cd296ba2a7 | 2563 | |
| fep | 0:62cd296ba2a7 | 2564 | |
| fep | 0:62cd296ba2a7 | 2565 | |
| fep | 0:62cd296ba2a7 | 2566 | |
| fep | 0:62cd296ba2a7 | 2567 | |