FreeRTOS v_8.2.1 for LPC1768
Dependents: frtos_v_8_bluetooth frtos_v_8_pololu frtos_v_8_Final
source/queue.c@1:2f4de0d9dc8b, 2018-12-10 (annotated)
- Committer: JoaoJardim
- Date: Mon Dec 10 10:04:09 2018 +0000
- Revision: 1:2f4de0d9dc8b
- Parent: 0:91ad48ad5687
Same implementation as freertos_bluetooth at this time, but with FreeRTOS v_8.2.1
Who changed what in which revision?
User | Revision | Line number | New contents of line |
---|---|---|---|
dflet | 0:91ad48ad5687 | 1 | /* |
dflet | 0:91ad48ad5687 | 2 | FreeRTOS V8.2.1 - Copyright (C) 2015 Real Time Engineers Ltd. |
dflet | 0:91ad48ad5687 | 3 | All rights reserved |
dflet | 0:91ad48ad5687 | 4 | |
dflet | 0:91ad48ad5687 | 5 | VISIT http://www.FreeRTOS.org TO ENSURE YOU ARE USING THE LATEST VERSION. |
dflet | 0:91ad48ad5687 | 6 | |
dflet | 0:91ad48ad5687 | 7 | This file is part of the FreeRTOS distribution. |
dflet | 0:91ad48ad5687 | 8 | |
dflet | 0:91ad48ad5687 | 9 | FreeRTOS is free software; you can redistribute it and/or modify it under |
dflet | 0:91ad48ad5687 | 10 | the terms of the GNU General Public License (version 2) as published by the |
dflet | 0:91ad48ad5687 | 11 | Free Software Foundation >>!AND MODIFIED BY!<< the FreeRTOS exception. |
dflet | 0:91ad48ad5687 | 12 | |
dflet | 0:91ad48ad5687 | 13 | *************************************************************************** |
dflet | 0:91ad48ad5687 | 14 | >>! NOTE: The modification to the GPL is included to allow you to !<< |
dflet | 0:91ad48ad5687 | 15 | >>! distribute a combined work that includes FreeRTOS without being !<< |
dflet | 0:91ad48ad5687 | 16 | >>! obliged to provide the source code for proprietary components !<< |
dflet | 0:91ad48ad5687 | 17 | >>! outside of the FreeRTOS kernel. !<< |
dflet | 0:91ad48ad5687 | 18 | *************************************************************************** |
dflet | 0:91ad48ad5687 | 19 | |
dflet | 0:91ad48ad5687 | 20 | FreeRTOS is distributed in the hope that it will be useful, but WITHOUT ANY |
dflet | 0:91ad48ad5687 | 21 | WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS |
dflet | 0:91ad48ad5687 | 22 | FOR A PARTICULAR PURPOSE. Full license text is available on the following |
dflet | 0:91ad48ad5687 | 23 | link: http://www.freertos.org/a00114.html |
dflet | 0:91ad48ad5687 | 24 | |
dflet | 0:91ad48ad5687 | 25 | *************************************************************************** |
dflet | 0:91ad48ad5687 | 26 | * * |
dflet | 0:91ad48ad5687 | 27 | * FreeRTOS provides completely free yet professionally developed, * |
dflet | 0:91ad48ad5687 | 28 | * robust, strictly quality controlled, supported, and cross * |
dflet | 0:91ad48ad5687 | 29 | * platform software that is more than just the market leader, it * |
dflet | 0:91ad48ad5687 | 30 | * is the industry's de facto standard. * |
dflet | 0:91ad48ad5687 | 31 | * * |
dflet | 0:91ad48ad5687 | 32 | * Help yourself get started quickly while simultaneously helping * |
dflet | 0:91ad48ad5687 | 33 | * to support the FreeRTOS project by purchasing a FreeRTOS * |
dflet | 0:91ad48ad5687 | 34 | * tutorial book, reference manual, or both: * |
dflet | 0:91ad48ad5687 | 35 | * http://www.FreeRTOS.org/Documentation * |
dflet | 0:91ad48ad5687 | 36 | * * |
dflet | 0:91ad48ad5687 | 37 | *************************************************************************** |
dflet | 0:91ad48ad5687 | 38 | |
dflet | 0:91ad48ad5687 | 39 | http://www.FreeRTOS.org/FAQHelp.html - Having a problem? Start by reading |
dflet | 0:91ad48ad5687 | 40 | the FAQ page "My application does not run, what could be wrong?". Have you |
dflet | 0:91ad48ad5687 | 41 | defined configASSERT()? |
dflet | 0:91ad48ad5687 | 42 | |
dflet | 0:91ad48ad5687 | 43 | http://www.FreeRTOS.org/support - In return for receiving this top quality |
dflet | 0:91ad48ad5687 | 44 | embedded software for free we request you assist our global community by |
dflet | 0:91ad48ad5687 | 45 | participating in the support forum. |
dflet | 0:91ad48ad5687 | 46 | |
dflet | 0:91ad48ad5687 | 47 | http://www.FreeRTOS.org/training - Investing in training allows your team to |
dflet | 0:91ad48ad5687 | 48 | be as productive as possible as early as possible. Now you can receive |
dflet | 0:91ad48ad5687 | 49 | FreeRTOS training directly from Richard Barry, CEO of Real Time Engineers |
dflet | 0:91ad48ad5687 | 50 | Ltd, and the world's leading authority on the world's leading RTOS. |
dflet | 0:91ad48ad5687 | 51 | |
dflet | 0:91ad48ad5687 | 52 | http://www.FreeRTOS.org/plus - A selection of FreeRTOS ecosystem products, |
dflet | 0:91ad48ad5687 | 53 | including FreeRTOS+Trace - an indispensable productivity tool, a DOS |
dflet | 0:91ad48ad5687 | 54 | compatible FAT file system, and our tiny thread aware UDP/IP stack. |
dflet | 0:91ad48ad5687 | 55 | |
dflet | 0:91ad48ad5687 | 56 | http://www.FreeRTOS.org/labs - Where new FreeRTOS products go to incubate. |
dflet | 0:91ad48ad5687 | 57 | Come and try FreeRTOS+TCP, our new open source TCP/IP stack for FreeRTOS. |
dflet | 0:91ad48ad5687 | 58 | |
dflet | 0:91ad48ad5687 | 59 | http://www.OpenRTOS.com - Real Time Engineers ltd. license FreeRTOS to High |
dflet | 0:91ad48ad5687 | 60 | Integrity Systems ltd. to sell under the OpenRTOS brand. Low cost OpenRTOS |
dflet | 0:91ad48ad5687 | 61 | licenses offer ticketed support, indemnification and commercial middleware. |
dflet | 0:91ad48ad5687 | 62 | |
dflet | 0:91ad48ad5687 | 63 | http://www.SafeRTOS.com - High Integrity Systems also provide a safety |
dflet | 0:91ad48ad5687 | 64 | engineered and independently SIL3 certified version for use in safety and |
dflet | 0:91ad48ad5687 | 65 | mission critical applications that require provable dependability. |
dflet | 0:91ad48ad5687 | 66 | |
dflet | 0:91ad48ad5687 | 67 | 1 tab == 4 spaces! |
dflet | 0:91ad48ad5687 | 68 | */ |
dflet | 0:91ad48ad5687 | 69 | |
dflet | 0:91ad48ad5687 | 70 | #include <stdlib.h> |
dflet | 0:91ad48ad5687 | 71 | #include <string.h> |
dflet | 0:91ad48ad5687 | 72 | |
dflet | 0:91ad48ad5687 | 73 | /* Defining MPU_WRAPPERS_INCLUDED_FROM_API_FILE prevents task.h from redefining |
dflet | 0:91ad48ad5687 | 74 | all the API functions to use the MPU wrappers. That should only be done when |
dflet | 0:91ad48ad5687 | 75 | task.h is included from an application file. */ |
dflet | 0:91ad48ad5687 | 76 | #define MPU_WRAPPERS_INCLUDED_FROM_API_FILE |
dflet | 0:91ad48ad5687 | 77 | |
dflet | 0:91ad48ad5687 | 78 | #include "FreeRTOS.h" |
dflet | 0:91ad48ad5687 | 79 | #include "task.h" |
dflet | 0:91ad48ad5687 | 80 | #include "queue.h" |
dflet | 0:91ad48ad5687 | 81 | |
dflet | 0:91ad48ad5687 | 82 | #if ( configUSE_CO_ROUTINES == 1 ) |
dflet | 0:91ad48ad5687 | 83 | #include "croutine.h" |
dflet | 0:91ad48ad5687 | 84 | #endif |
dflet | 0:91ad48ad5687 | 85 | |
dflet | 0:91ad48ad5687 | 86 | /* Lint e961 and e750 are suppressed as a MISRA exception justified because the |
dflet | 0:91ad48ad5687 | 87 | MPU ports require MPU_WRAPPERS_INCLUDED_FROM_API_FILE to be defined for the |
dflet | 0:91ad48ad5687 | 88 | header files above, but not in this file, in order to generate the correct |
dflet | 0:91ad48ad5687 | 89 | privileged Vs unprivileged linkage and placement. */ |
dflet | 0:91ad48ad5687 | 90 | #undef MPU_WRAPPERS_INCLUDED_FROM_API_FILE /*lint !e961 !e750. */ |
dflet | 0:91ad48ad5687 | 91 | |
dflet | 0:91ad48ad5687 | 92 | |
dflet | 0:91ad48ad5687 | 93 | /* Constants used with the xRxLock and xTxLock structure members. */ |
dflet | 0:91ad48ad5687 | 94 | #define queueUNLOCKED ( ( BaseType_t ) -1 ) |
dflet | 0:91ad48ad5687 | 95 | #define queueLOCKED_UNMODIFIED ( ( BaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 96 | |
dflet | 0:91ad48ad5687 | 97 | /* When the Queue_t structure is used to represent a base queue its pcHead and |
dflet | 0:91ad48ad5687 | 98 | pcTail members are used as pointers into the queue storage area. When the |
dflet | 0:91ad48ad5687 | 99 | Queue_t structure is used to represent a mutex pcHead and pcTail pointers are |
dflet | 0:91ad48ad5687 | 100 | not necessary, and the pcHead pointer is set to NULL to indicate that the |
dflet | 0:91ad48ad5687 | 101 | pcTail pointer actually points to the mutex holder (if any). Map alternative |
dflet | 0:91ad48ad5687 | 102 | names to the pcHead and pcTail structure members to ensure the readability of |
dflet | 0:91ad48ad5687 | 103 | the code is maintained despite this dual use of two structure members. An |
dflet | 0:91ad48ad5687 | 104 | alternative implementation would be to use a union, but use of a union is |
dflet | 0:91ad48ad5687 | 105 | against the coding standard (although an exception to the standard has been |
dflet | 0:91ad48ad5687 | 106 | permitted where the dual use also significantly changes the type of the |
dflet | 0:91ad48ad5687 | 107 | structure member). */ |
dflet | 0:91ad48ad5687 | 108 | #define pxMutexHolder pcTail |
dflet | 0:91ad48ad5687 | 109 | #define uxQueueType pcHead |
dflet | 0:91ad48ad5687 | 110 | #define queueQUEUE_IS_MUTEX NULL |
dflet | 0:91ad48ad5687 | 111 | |
dflet | 0:91ad48ad5687 | 112 | /* Semaphores do not actually store or copy data, so have an item size of |
dflet | 0:91ad48ad5687 | 113 | zero. */ |
dflet | 0:91ad48ad5687 | 114 | #define queueSEMAPHORE_QUEUE_ITEM_LENGTH ( ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 115 | #define queueMUTEX_GIVE_BLOCK_TIME ( ( TickType_t ) 0U ) |
dflet | 0:91ad48ad5687 | 116 | |
dflet | 0:91ad48ad5687 | 117 | #if( configUSE_PREEMPTION == 0 ) |
dflet | 0:91ad48ad5687 | 118 | /* If the cooperative scheduler is being used then a yield should not be |
dflet | 0:91ad48ad5687 | 119 | performed just because a higher priority task has been woken. */ |
dflet | 0:91ad48ad5687 | 120 | #define queueYIELD_IF_USING_PREEMPTION() |
dflet | 0:91ad48ad5687 | 121 | #else |
dflet | 0:91ad48ad5687 | 122 | #define queueYIELD_IF_USING_PREEMPTION() portYIELD_WITHIN_API() |
dflet | 0:91ad48ad5687 | 123 | #endif |
dflet | 0:91ad48ad5687 | 124 | |
dflet | 0:91ad48ad5687 | 125 | /* |
dflet | 0:91ad48ad5687 | 126 | * Definition of the queue used by the scheduler. |
dflet | 0:91ad48ad5687 | 127 | * Items are queued by copy, not reference. See the following link for the |
dflet | 0:91ad48ad5687 | 128 | * rationale: http://www.freertos.org/Embedded-RTOS-Queues.html |
dflet | 0:91ad48ad5687 | 129 | */ |
dflet | 0:91ad48ad5687 | 130 | typedef struct QueueDefinition |
dflet | 0:91ad48ad5687 | 131 | { |
dflet | 0:91ad48ad5687 | 132 | int8_t *pcHead; /*< Points to the beginning of the queue storage area. */ |
dflet | 0:91ad48ad5687 | 133 | int8_t *pcTail; /*< Points to the byte at the end of the queue storage area. One more byte is allocated than necessary to store the queue items; this is used as a marker. */ |
dflet | 0:91ad48ad5687 | 134 | int8_t *pcWriteTo; /*< Points to the next free place in the storage area. */ |
dflet | 0:91ad48ad5687 | 135 | |
dflet | 0:91ad48ad5687 | 136 | union /* Use of a union is an exception to the coding standard to ensure two mutually exclusive structure members don't appear simultaneously (wasting RAM). */ |
dflet | 0:91ad48ad5687 | 137 | { |
dflet | 0:91ad48ad5687 | 138 | int8_t *pcReadFrom; /*< Points to the last place that a queued item was read from when the structure is used as a queue. */ |
dflet | 0:91ad48ad5687 | 139 | UBaseType_t uxRecursiveCallCount;/*< Maintains a count of the number of times a recursive mutex has been recursively 'taken' when the structure is used as a mutex. */ |
dflet | 0:91ad48ad5687 | 140 | } u; |
dflet | 0:91ad48ad5687 | 141 | |
dflet | 0:91ad48ad5687 | 142 | List_t xTasksWaitingToSend; /*< List of tasks that are blocked waiting to post onto this queue. Stored in priority order. */ |
dflet | 0:91ad48ad5687 | 143 | List_t xTasksWaitingToReceive; /*< List of tasks that are blocked waiting to read from this queue. Stored in priority order. */ |
dflet | 0:91ad48ad5687 | 144 | |
dflet | 0:91ad48ad5687 | 145 | volatile UBaseType_t uxMessagesWaiting;/*< The number of items currently in the queue. */ |
dflet | 0:91ad48ad5687 | 146 | UBaseType_t uxLength; /*< The length of the queue defined as the number of items it will hold, not the number of bytes. */ |
dflet | 0:91ad48ad5687 | 147 | UBaseType_t uxItemSize; /*< The size of each item that the queue will hold. */ |
dflet | 0:91ad48ad5687 | 148 | |
dflet | 0:91ad48ad5687 | 149 | volatile BaseType_t xRxLock; /*< Stores the number of items received from the queue (removed from the queue) while the queue was locked. Set to queueUNLOCKED when the queue is not locked. */ |
dflet | 0:91ad48ad5687 | 150 | volatile BaseType_t xTxLock; /*< Stores the number of items transmitted to the queue (added to the queue) while the queue was locked. Set to queueUNLOCKED when the queue is not locked. */ |
dflet | 0:91ad48ad5687 | 151 | |
dflet | 0:91ad48ad5687 | 152 | #if ( configUSE_TRACE_FACILITY == 1 ) |
dflet | 0:91ad48ad5687 | 153 | UBaseType_t uxQueueNumber; |
dflet | 0:91ad48ad5687 | 154 | uint8_t ucQueueType; |
dflet | 0:91ad48ad5687 | 155 | #endif |
dflet | 0:91ad48ad5687 | 156 | |
dflet | 0:91ad48ad5687 | 157 | #if ( configUSE_QUEUE_SETS == 1 ) |
dflet | 0:91ad48ad5687 | 158 | struct QueueDefinition *pxQueueSetContainer; |
dflet | 0:91ad48ad5687 | 159 | #endif |
dflet | 0:91ad48ad5687 | 160 | |
dflet | 0:91ad48ad5687 | 161 | } xQUEUE; |
dflet | 0:91ad48ad5687 | 162 | |
dflet | 0:91ad48ad5687 | 163 | /* The old xQUEUE name is maintained above then typedefed to the new Queue_t |
dflet | 0:91ad48ad5687 | 164 | name below to enable the use of older kernel aware debuggers. */ |
dflet | 0:91ad48ad5687 | 165 | typedef xQUEUE Queue_t; |
dflet | 0:91ad48ad5687 | 166 | |
dflet | 0:91ad48ad5687 | 167 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 168 | |
dflet | 0:91ad48ad5687 | 169 | /* |
dflet | 0:91ad48ad5687 | 170 | * The queue registry is just a means for kernel aware debuggers to locate |
dflet | 0:91ad48ad5687 | 171 | * queue structures. It has no other purpose so is an optional component. |
dflet | 0:91ad48ad5687 | 172 | */ |
dflet | 0:91ad48ad5687 | 173 | #if ( configQUEUE_REGISTRY_SIZE > 0 ) |
dflet | 0:91ad48ad5687 | 174 | |
dflet | 0:91ad48ad5687 | 175 | /* The type stored within the queue registry array. This allows a name |
dflet | 0:91ad48ad5687 | 176 | to be assigned to each queue making kernel aware debugging a little |
dflet | 0:91ad48ad5687 | 177 | more user friendly. */ |
dflet | 0:91ad48ad5687 | 178 | typedef struct QUEUE_REGISTRY_ITEM |
dflet | 0:91ad48ad5687 | 179 | { |
dflet | 0:91ad48ad5687 | 180 | const char *pcQueueName; /*lint !e971 Unqualified char types are allowed for strings and single characters only. */ |
dflet | 0:91ad48ad5687 | 181 | QueueHandle_t xHandle; |
dflet | 0:91ad48ad5687 | 182 | } xQueueRegistryItem; |
dflet | 0:91ad48ad5687 | 183 | |
dflet | 0:91ad48ad5687 | 184 | /* The old xQueueRegistryItem name is maintained above then typedefed to the |
dflet | 0:91ad48ad5687 | 185 | new QueueRegistryItem_t name below to enable the use of older kernel aware |
dflet | 0:91ad48ad5687 | 186 | debuggers. */ |
dflet | 0:91ad48ad5687 | 187 | typedef xQueueRegistryItem QueueRegistryItem_t; |
dflet | 0:91ad48ad5687 | 188 | |
dflet | 0:91ad48ad5687 | 189 | /* The queue registry is simply an array of QueueRegistryItem_t structures. |
dflet | 0:91ad48ad5687 | 190 | The pcQueueName member of a structure being NULL is indicative of the |
dflet | 0:91ad48ad5687 | 191 | array position being vacant. */ |
dflet | 0:91ad48ad5687 | 192 | QueueRegistryItem_t xQueueRegistry[ configQUEUE_REGISTRY_SIZE ]; |
dflet | 0:91ad48ad5687 | 193 | |
dflet | 0:91ad48ad5687 | 194 | #endif /* configQUEUE_REGISTRY_SIZE */ |
dflet | 0:91ad48ad5687 | 195 | |
dflet | 0:91ad48ad5687 | 196 | /* |
dflet | 0:91ad48ad5687 | 197 | * Unlocks a queue locked by a call to prvLockQueue. Locking a queue does not |
dflet | 0:91ad48ad5687 | 198 | * prevent an ISR from adding or removing items to the queue, but does prevent |
dflet | 0:91ad48ad5687 | 199 | * an ISR from removing tasks from the queue event lists. If an ISR finds a |
dflet | 0:91ad48ad5687 | 200 | * queue is locked it will instead increment the appropriate queue lock count |
dflet | 0:91ad48ad5687 | 201 | to indicate that a task may require unblocking. When the queue is unlocked |
dflet | 0:91ad48ad5687 | 202 | * these lock counts are inspected, and the appropriate action taken. |
dflet | 0:91ad48ad5687 | 203 | */ |
dflet | 0:91ad48ad5687 | 204 | static void prvUnlockQueue( Queue_t * const pxQueue ) PRIVILEGED_FUNCTION; |
dflet | 0:91ad48ad5687 | 205 | |
dflet | 0:91ad48ad5687 | 206 | /* |
dflet | 0:91ad48ad5687 | 207 | * Uses a critical section to determine if there is any data in a queue. |
dflet | 0:91ad48ad5687 | 208 | * |
dflet | 0:91ad48ad5687 | 209 | * @return pdTRUE if the queue contains no items, otherwise pdFALSE. |
dflet | 0:91ad48ad5687 | 210 | */ |
dflet | 0:91ad48ad5687 | 211 | static BaseType_t prvIsQueueEmpty( const Queue_t *pxQueue ) PRIVILEGED_FUNCTION; |
dflet | 0:91ad48ad5687 | 212 | |
dflet | 0:91ad48ad5687 | 213 | /* |
dflet | 0:91ad48ad5687 | 214 | * Uses a critical section to determine if there is any space in a queue. |
dflet | 0:91ad48ad5687 | 215 | * |
dflet | 0:91ad48ad5687 | 216 | * @return pdTRUE if there is no space, otherwise pdFALSE. |
dflet | 0:91ad48ad5687 | 217 | */ |
dflet | 0:91ad48ad5687 | 218 | static BaseType_t prvIsQueueFull( const Queue_t *pxQueue ) PRIVILEGED_FUNCTION; |
dflet | 0:91ad48ad5687 | 219 | |
dflet | 0:91ad48ad5687 | 220 | /* |
dflet | 0:91ad48ad5687 | 221 | * Copies an item into the queue, either at the front of the queue or the |
dflet | 0:91ad48ad5687 | 222 | * back of the queue. |
dflet | 0:91ad48ad5687 | 223 | */ |
dflet | 0:91ad48ad5687 | 224 | static BaseType_t prvCopyDataToQueue( Queue_t * const pxQueue, const void *pvItemToQueue, const BaseType_t xPosition ) PRIVILEGED_FUNCTION; |
dflet | 0:91ad48ad5687 | 225 | |
dflet | 0:91ad48ad5687 | 226 | /* |
dflet | 0:91ad48ad5687 | 227 | * Copies an item out of a queue. |
dflet | 0:91ad48ad5687 | 228 | */ |
dflet | 0:91ad48ad5687 | 229 | static void prvCopyDataFromQueue( Queue_t * const pxQueue, void * const pvBuffer ) PRIVILEGED_FUNCTION; |
dflet | 0:91ad48ad5687 | 230 | |
dflet | 0:91ad48ad5687 | 231 | #if ( configUSE_QUEUE_SETS == 1 ) |
dflet | 0:91ad48ad5687 | 232 | /* |
dflet | 0:91ad48ad5687 | 233 | * Checks to see if a queue is a member of a queue set, and if so, notifies |
dflet | 0:91ad48ad5687 | 234 | * the queue set that the queue contains data. |
dflet | 0:91ad48ad5687 | 235 | */ |
dflet | 0:91ad48ad5687 | 236 | static BaseType_t prvNotifyQueueSetContainer( const Queue_t * const pxQueue, const BaseType_t xCopyPosition ) PRIVILEGED_FUNCTION; |
dflet | 0:91ad48ad5687 | 237 | #endif |
dflet | 0:91ad48ad5687 | 238 | |
dflet | 0:91ad48ad5687 | 239 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 240 | |
dflet | 0:91ad48ad5687 | 241 | /* |
dflet | 0:91ad48ad5687 | 242 | * Macro to mark a queue as locked. Locking a queue prevents an ISR from |
dflet | 0:91ad48ad5687 | 243 | * accessing the queue event lists. |
dflet | 0:91ad48ad5687 | 244 | */ |
dflet | 0:91ad48ad5687 | 245 | #define prvLockQueue( pxQueue ) \ |
dflet | 0:91ad48ad5687 | 246 | taskENTER_CRITICAL(); \ |
dflet | 0:91ad48ad5687 | 247 | { \ |
dflet | 0:91ad48ad5687 | 248 | if( ( pxQueue )->xRxLock == queueUNLOCKED ) \ |
dflet | 0:91ad48ad5687 | 249 | { \ |
dflet | 0:91ad48ad5687 | 250 | ( pxQueue )->xRxLock = queueLOCKED_UNMODIFIED; \ |
dflet | 0:91ad48ad5687 | 251 | } \ |
dflet | 0:91ad48ad5687 | 252 | if( ( pxQueue )->xTxLock == queueUNLOCKED ) \ |
dflet | 0:91ad48ad5687 | 253 | { \ |
dflet | 0:91ad48ad5687 | 254 | ( pxQueue )->xTxLock = queueLOCKED_UNMODIFIED; \ |
dflet | 0:91ad48ad5687 | 255 | } \ |
dflet | 0:91ad48ad5687 | 256 | } \ |
dflet | 0:91ad48ad5687 | 257 | taskEXIT_CRITICAL() |
dflet | 0:91ad48ad5687 | 258 | /*-----------------------------------------------------------*/ |
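The lock/unlock protocol that `prvLockQueue` and `prvUnlockQueue` implement can be modelled in plain C. This is an editor's sketch, not kernel code: the `prvModel*` names and the single `xTxLock`/`xTasksWoken` pair are hypothetical stand-ins, kept free of FreeRTOS headers so the counting logic is visible on its own. While the queue is locked, an "ISR" merely increments the lock count; the unlock step replays each increment as a deferred wake-up.

```c
#include <assert.h>

/* Stand-ins for the kernel constants defined earlier in this file. */
typedef long BaseType_t;
#define queueUNLOCKED           ( ( BaseType_t ) -1 )
#define queueLOCKED_UNMODIFIED  ( ( BaseType_t ) 0 )

static BaseType_t xTxLock = queueUNLOCKED; /* Mirrors Queue_t::xTxLock. */
static int xTasksWoken = 0;                /* Counts wake-up events delivered. */

static void prvModelLock( void )
{
    /* As in prvLockQueue: only move UNLOCKED -> LOCKED_UNMODIFIED. */
    if( xTxLock == queueUNLOCKED )
    {
        xTxLock = queueLOCKED_UNMODIFIED;
    }
}

static void prvModelSendFromISR( void )
{
    if( xTxLock == queueUNLOCKED )
    {
        xTasksWoken++; /* Queue unlocked: a waiting task can be woken now. */
    }
    else
    {
        xTxLock++;     /* Queue locked: just record that a wake may be due. */
    }
}

static void prvModelUnlock( void )
{
    /* As described for prvUnlockQueue: inspect the lock count and take the
       appropriate action for every item added while the queue was locked. */
    while( xTxLock > queueLOCKED_UNMODIFIED )
    {
        xTasksWoken++;
        xTxLock--;
    }
    xTxLock = queueUNLOCKED;
}
```

After `prvModelLock()`, two "ISR sends" leave `xTasksWoken` at 0 and the lock count at 2; `prvModelUnlock()` then delivers both deferred wake-ups and restores `queueUNLOCKED`.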
dflet | 0:91ad48ad5687 | 259 | |
dflet | 0:91ad48ad5687 | 260 | BaseType_t xQueueGenericReset( QueueHandle_t xQueue, BaseType_t xNewQueue ) |
dflet | 0:91ad48ad5687 | 261 | { |
dflet | 0:91ad48ad5687 | 262 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
dflet | 0:91ad48ad5687 | 263 | |
dflet | 0:91ad48ad5687 | 264 | configASSERT( pxQueue ); |
dflet | 0:91ad48ad5687 | 265 | |
dflet | 0:91ad48ad5687 | 266 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 267 | { |
dflet | 0:91ad48ad5687 | 268 | pxQueue->pcTail = pxQueue->pcHead + ( pxQueue->uxLength * pxQueue->uxItemSize ); |
dflet | 0:91ad48ad5687 | 269 | pxQueue->uxMessagesWaiting = ( UBaseType_t ) 0U; |
dflet | 0:91ad48ad5687 | 270 | pxQueue->pcWriteTo = pxQueue->pcHead; |
dflet | 0:91ad48ad5687 | 271 | pxQueue->u.pcReadFrom = pxQueue->pcHead + ( ( pxQueue->uxLength - ( UBaseType_t ) 1U ) * pxQueue->uxItemSize ); |
dflet | 0:91ad48ad5687 | 272 | pxQueue->xRxLock = queueUNLOCKED; |
dflet | 0:91ad48ad5687 | 273 | pxQueue->xTxLock = queueUNLOCKED; |
dflet | 0:91ad48ad5687 | 274 | |
dflet | 0:91ad48ad5687 | 275 | if( xNewQueue == pdFALSE ) |
dflet | 0:91ad48ad5687 | 276 | { |
dflet | 0:91ad48ad5687 | 277 | /* If there are tasks blocked waiting to read from the queue, then |
dflet | 0:91ad48ad5687 | 278 | the tasks will remain blocked as after this function exits the queue |
dflet | 0:91ad48ad5687 | 279 | will still be empty. If there are tasks blocked waiting to write to |
dflet | 0:91ad48ad5687 | 280 | the queue, then one should be unblocked as after this function exits |
dflet | 0:91ad48ad5687 | 281 | it will be possible to write to it. */ |
dflet | 0:91ad48ad5687 | 282 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 283 | { |
dflet | 0:91ad48ad5687 | 284 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) == pdTRUE ) |
dflet | 0:91ad48ad5687 | 285 | { |
dflet | 0:91ad48ad5687 | 286 | queueYIELD_IF_USING_PREEMPTION(); |
dflet | 0:91ad48ad5687 | 287 | } |
dflet | 0:91ad48ad5687 | 288 | else |
dflet | 0:91ad48ad5687 | 289 | { |
dflet | 0:91ad48ad5687 | 290 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 291 | } |
dflet | 0:91ad48ad5687 | 292 | } |
dflet | 0:91ad48ad5687 | 293 | else |
dflet | 0:91ad48ad5687 | 294 | { |
dflet | 0:91ad48ad5687 | 295 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 296 | } |
dflet | 0:91ad48ad5687 | 297 | } |
dflet | 0:91ad48ad5687 | 298 | else |
dflet | 0:91ad48ad5687 | 299 | { |
dflet | 0:91ad48ad5687 | 300 | /* Ensure the event queues start in the correct state. */ |
dflet | 0:91ad48ad5687 | 301 | vListInitialise( &( pxQueue->xTasksWaitingToSend ) ); |
dflet | 0:91ad48ad5687 | 302 | vListInitialise( &( pxQueue->xTasksWaitingToReceive ) ); |
dflet | 0:91ad48ad5687 | 303 | } |
dflet | 0:91ad48ad5687 | 304 | } |
dflet | 0:91ad48ad5687 | 305 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 306 | |
dflet | 0:91ad48ad5687 | 307 | /* A value is returned for calling semantic consistency with previous |
dflet | 0:91ad48ad5687 | 308 | versions. */ |
dflet | 0:91ad48ad5687 | 309 | return pdPASS; |
dflet | 0:91ad48ad5687 | 310 | } |
dflet | 0:91ad48ad5687 | 311 | /*-----------------------------------------------------------*/ |
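The pointer arithmetic in `xQueueGenericReset` can be checked with a cut-down model. This is an editor's illustration with made-up numbers (4 items of 8 bytes); `ModelQueue_t` and `prvModelReset` are hypothetical names, not kernel types. The notable detail is that `pcReadFrom` starts at the *last* item slot, because the kernel advances it (with wrap-around) before each read.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical cut-down queue: just the members xQueueGenericReset touches. */
typedef struct
{
    uint8_t *pcHead;
    uint8_t *pcTail;
    uint8_t *pcWriteTo;
    uint8_t *pcReadFrom;
    size_t uxLength;
    size_t uxItemSize;
} ModelQueue_t;

static void prvModelReset( ModelQueue_t *pxQ )
{
    /* pcTail marks the byte just past the last item slot. */
    pxQ->pcTail = pxQ->pcHead + ( pxQ->uxLength * pxQ->uxItemSize );

    /* Writes start at the head of the storage area... */
    pxQ->pcWriteTo = pxQ->pcHead;

    /* ...while pcReadFrom starts at the LAST item slot, so the first
       pre-read increment wraps it back to pcHead. */
    pxQ->pcReadFrom = pxQ->pcHead + ( ( pxQ->uxLength - 1 ) * pxQ->uxItemSize );
}
```

For a 4 x 8-byte queue this puts `pcTail` 32 bytes past `pcHead` and `pcReadFrom` 24 bytes past it, matching the expressions in the function above.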
dflet | 0:91ad48ad5687 | 312 | |
dflet | 0:91ad48ad5687 | 313 | QueueHandle_t xQueueGenericCreate( const UBaseType_t uxQueueLength, const UBaseType_t uxItemSize, const uint8_t ucQueueType ) |
dflet | 0:91ad48ad5687 | 314 | { |
dflet | 0:91ad48ad5687 | 315 | Queue_t *pxNewQueue; |
dflet | 0:91ad48ad5687 | 316 | size_t xQueueSizeInBytes; |
dflet | 0:91ad48ad5687 | 317 | QueueHandle_t xReturn = NULL; |
dflet | 0:91ad48ad5687 | 318 | int8_t *pcAllocatedBuffer; |
dflet | 0:91ad48ad5687 | 319 | |
dflet | 0:91ad48ad5687 | 320 | /* Remove compiler warnings about unused parameters should |
dflet | 0:91ad48ad5687 | 321 | configUSE_TRACE_FACILITY not be set to 1. */ |
dflet | 0:91ad48ad5687 | 322 | ( void ) ucQueueType; |
dflet | 0:91ad48ad5687 | 323 | |
dflet | 0:91ad48ad5687 | 324 | configASSERT( uxQueueLength > ( UBaseType_t ) 0 ); |
dflet | 0:91ad48ad5687 | 325 | |
dflet | 0:91ad48ad5687 | 326 | if( uxItemSize == ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 327 | { |
dflet | 0:91ad48ad5687 | 328 | /* There is not going to be a queue storage area. */ |
dflet | 0:91ad48ad5687 | 329 | xQueueSizeInBytes = ( size_t ) 0; |
dflet | 0:91ad48ad5687 | 330 | } |
dflet | 0:91ad48ad5687 | 331 | else |
dflet | 0:91ad48ad5687 | 332 | { |
dflet | 0:91ad48ad5687 | 333 | /* The queue is one byte longer than asked for to make wrap checking |
dflet | 0:91ad48ad5687 | 334 | easier/faster. */ |
dflet | 0:91ad48ad5687 | 335 | xQueueSizeInBytes = ( size_t ) ( uxQueueLength * uxItemSize ) + ( size_t ) 1; /*lint !e961 MISRA exception as the casts are only redundant for some ports. */ |
dflet | 0:91ad48ad5687 | 336 | } |
dflet | 0:91ad48ad5687 | 337 | |
dflet | 0:91ad48ad5687 | 338 | /* Allocate the new queue structure and storage area. */ |
dflet | 0:91ad48ad5687 | 339 | pcAllocatedBuffer = ( int8_t * ) pvPortMalloc( sizeof( Queue_t ) + xQueueSizeInBytes ); |
dflet | 0:91ad48ad5687 | 340 | |
dflet | 0:91ad48ad5687 | 341 | if( pcAllocatedBuffer != NULL ) |
dflet | 0:91ad48ad5687 | 342 | { |
dflet | 0:91ad48ad5687 | 343 | pxNewQueue = ( Queue_t * ) pcAllocatedBuffer; /*lint !e826 MISRA The buffer cannot be too small because it was dimensioned by sizeof( Queue_t ) + xQueueSizeInBytes. */ |
dflet | 0:91ad48ad5687 | 344 | |
dflet | 0:91ad48ad5687 | 345 | if( uxItemSize == ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 346 | { |
dflet | 0:91ad48ad5687 | 347 | /* No RAM was allocated for the queue storage area, but pcHead |
dflet | 0:91ad48ad5687 | 348 | cannot be set to NULL because NULL is used as a key to say the queue |
dflet | 0:91ad48ad5687 | 349 | is used as a mutex. Therefore just set pcHead to point to the queue |
dflet | 0:91ad48ad5687 | 350 | as a benign value that is known to be within the memory map. */ |
dflet | 0:91ad48ad5687 | 351 | pxNewQueue->pcHead = ( int8_t * ) pxNewQueue; |
dflet | 0:91ad48ad5687 | 352 | } |
dflet | 0:91ad48ad5687 | 353 | else |
dflet | 0:91ad48ad5687 | 354 | { |
dflet | 0:91ad48ad5687 | 355 | /* Jump past the queue structure to find the location of the queue |
dflet | 0:91ad48ad5687 | 356 | storage area, which sits immediately after the Queue_t structure. */ |
dflet | 0:91ad48ad5687 | 357 | pxNewQueue->pcHead = pcAllocatedBuffer + sizeof( Queue_t ); |
dflet | 0:91ad48ad5687 | 358 | } |
dflet | 0:91ad48ad5687 | 359 | |
dflet | 0:91ad48ad5687 | 360 | /* Initialise the queue members as described above where the queue type |
dflet | 0:91ad48ad5687 | 361 | is defined. */ |
dflet | 0:91ad48ad5687 | 362 | pxNewQueue->uxLength = uxQueueLength; |
dflet | 0:91ad48ad5687 | 363 | pxNewQueue->uxItemSize = uxItemSize; |
dflet | 0:91ad48ad5687 | 364 | ( void ) xQueueGenericReset( pxNewQueue, pdTRUE ); |
dflet | 0:91ad48ad5687 | 365 | |
dflet | 0:91ad48ad5687 | 366 | #if ( configUSE_TRACE_FACILITY == 1 ) |
dflet | 0:91ad48ad5687 | 367 | { |
dflet | 0:91ad48ad5687 | 368 | pxNewQueue->ucQueueType = ucQueueType; |
dflet | 0:91ad48ad5687 | 369 | } |
dflet | 0:91ad48ad5687 | 370 | #endif /* configUSE_TRACE_FACILITY */ |
dflet | 0:91ad48ad5687 | 371 | |
dflet | 0:91ad48ad5687 | 372 | #if( configUSE_QUEUE_SETS == 1 ) |
dflet | 0:91ad48ad5687 | 373 | { |
dflet | 0:91ad48ad5687 | 374 | pxNewQueue->pxQueueSetContainer = NULL; |
dflet | 0:91ad48ad5687 | 375 | } |
dflet | 0:91ad48ad5687 | 376 | #endif /* configUSE_QUEUE_SETS */ |
dflet | 0:91ad48ad5687 | 377 | |
dflet | 0:91ad48ad5687 | 378 | traceQUEUE_CREATE( pxNewQueue ); |
dflet | 0:91ad48ad5687 | 379 | xReturn = pxNewQueue; |
dflet | 0:91ad48ad5687 | 380 | } |
dflet | 0:91ad48ad5687 | 381 | else |
dflet | 0:91ad48ad5687 | 382 | { |
dflet | 0:91ad48ad5687 | 383 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 384 | } |
dflet | 0:91ad48ad5687 | 385 | |
dflet | 0:91ad48ad5687 | 386 | configASSERT( xReturn ); |
dflet | 0:91ad48ad5687 | 387 | |
dflet | 0:91ad48ad5687 | 388 | return xReturn; |
dflet | 0:91ad48ad5687 | 389 | } |
dflet | 0:91ad48ad5687 | 390 | /*-----------------------------------------------------------*/ |
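`xQueueGenericCreate` makes one allocation that holds the `Queue_t` structure, the storage area, and the single spare wrap-marker byte. A plain-C model of that layout (editor's sketch; `ModelQueue_t` and `prvModelCreate` are illustrative names, and standard `malloc` stands in for `pvPortMalloc`):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical cut-down queue structure for layout purposes only. */
typedef struct
{
    size_t uxLength;
    size_t uxItemSize;
    uint8_t *pcHead;
} ModelQueue_t;

static ModelQueue_t *prvModelCreate( size_t uxLength, size_t uxItemSize )
{
    /* Zero-item-size queues (semaphores/mutexes) need no storage area;
       otherwise allocate length * itemsize plus the one spare byte. */
    size_t xStorageBytes = ( uxItemSize == 0 ) ? 0
                         : ( uxLength * uxItemSize ) + 1;

    uint8_t *pcBuffer = ( uint8_t * ) malloc( sizeof( ModelQueue_t ) + xStorageBytes );
    if( pcBuffer == NULL )
    {
        return NULL;
    }

    ModelQueue_t *pxQ = ( ModelQueue_t * ) pcBuffer;
    pxQ->uxLength = uxLength;
    pxQ->uxItemSize = uxItemSize;

    /* With no storage area, pcHead points at the structure itself as a
       benign non-NULL value (NULL is reserved to mean "is a mutex");
       otherwise the storage area follows the structure in the same block. */
    pxQ->pcHead = ( uxItemSize == 0 ) ? ( uint8_t * ) pxQ
                                      : pcBuffer + sizeof( ModelQueue_t );
    return pxQ;
}
```

Creating a 5 x 4-byte queue places `pcHead` exactly `sizeof( ModelQueue_t )` bytes into the block, while an item-less queue gets `pcHead` pointing back at the structure, mirroring the two branches in the function above.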
dflet | 0:91ad48ad5687 | 391 | |
dflet | 0:91ad48ad5687 | 392 | #if ( configUSE_MUTEXES == 1 ) |
dflet | 0:91ad48ad5687 | 393 | |
dflet | 0:91ad48ad5687 | 394 | QueueHandle_t xQueueCreateMutex( const uint8_t ucQueueType ) |
dflet | 0:91ad48ad5687 | 395 | { |
dflet | 0:91ad48ad5687 | 396 | Queue_t *pxNewQueue; |
dflet | 0:91ad48ad5687 | 397 | |
dflet | 0:91ad48ad5687 | 398 | /* Prevent compiler warnings about unused parameters if |
dflet | 0:91ad48ad5687 | 399 | configUSE_TRACE_FACILITY does not equal 1. */ |
dflet | 0:91ad48ad5687 | 400 | ( void ) ucQueueType; |
dflet | 0:91ad48ad5687 | 401 | |
dflet | 0:91ad48ad5687 | 402 | /* Allocate the new queue structure. */ |
dflet | 0:91ad48ad5687 | 403 | pxNewQueue = ( Queue_t * ) pvPortMalloc( sizeof( Queue_t ) ); |
dflet | 0:91ad48ad5687 | 404 | if( pxNewQueue != NULL ) |
dflet | 0:91ad48ad5687 | 405 | { |
dflet | 0:91ad48ad5687 | 406 | /* Information required for priority inheritance. */ |
dflet | 0:91ad48ad5687 | 407 | pxNewQueue->pxMutexHolder = NULL; |
dflet | 0:91ad48ad5687 | 408 | pxNewQueue->uxQueueType = queueQUEUE_IS_MUTEX; |
dflet | 0:91ad48ad5687 | 409 | |
dflet | 0:91ad48ad5687 | 410 | /* When a queue is used as a mutex no data is actually copied into |
dflet | 0:91ad48ad5687 | 411 | or out of the queue. */ |
dflet | 0:91ad48ad5687 | 412 | pxNewQueue->pcWriteTo = NULL; |
dflet | 0:91ad48ad5687 | 413 | pxNewQueue->u.pcReadFrom = NULL; |
dflet | 0:91ad48ad5687 | 414 | |
dflet | 0:91ad48ad5687 | 415 | /* Each mutex has a length of 1 (like a binary semaphore) and |
dflet | 0:91ad48ad5687 | 416 | an item size of 0 as nothing is actually copied into or out |
dflet | 0:91ad48ad5687 | 417 | of the mutex. */ |
dflet | 0:91ad48ad5687 | 418 | pxNewQueue->uxMessagesWaiting = ( UBaseType_t ) 0U; |
dflet | 0:91ad48ad5687 | 419 | pxNewQueue->uxLength = ( UBaseType_t ) 1U; |
dflet | 0:91ad48ad5687 | 420 | pxNewQueue->uxItemSize = ( UBaseType_t ) 0U; |
dflet | 0:91ad48ad5687 | 421 | pxNewQueue->xRxLock = queueUNLOCKED; |
dflet | 0:91ad48ad5687 | 422 | pxNewQueue->xTxLock = queueUNLOCKED; |
dflet | 0:91ad48ad5687 | 423 | |
dflet | 0:91ad48ad5687 | 424 | #if ( configUSE_TRACE_FACILITY == 1 ) |
dflet | 0:91ad48ad5687 | 425 | { |
dflet | 0:91ad48ad5687 | 426 | pxNewQueue->ucQueueType = ucQueueType; |
dflet | 0:91ad48ad5687 | 427 | } |
dflet | 0:91ad48ad5687 | 428 | #endif |
dflet | 0:91ad48ad5687 | 429 | |
dflet | 0:91ad48ad5687 | 430 | #if ( configUSE_QUEUE_SETS == 1 ) |
dflet | 0:91ad48ad5687 | 431 | { |
dflet | 0:91ad48ad5687 | 432 | pxNewQueue->pxQueueSetContainer = NULL; |
dflet | 0:91ad48ad5687 | 433 | } |
dflet | 0:91ad48ad5687 | 434 | #endif |
dflet | 0:91ad48ad5687 | 435 | |
dflet | 0:91ad48ad5687 | 436 | /* Ensure the event queues start with the correct state. */ |
dflet | 0:91ad48ad5687 | 437 | vListInitialise( &( pxNewQueue->xTasksWaitingToSend ) ); |
dflet | 0:91ad48ad5687 | 438 | vListInitialise( &( pxNewQueue->xTasksWaitingToReceive ) ); |
dflet | 0:91ad48ad5687 | 439 | |
dflet | 0:91ad48ad5687 | 440 | traceCREATE_MUTEX( pxNewQueue ); |
dflet | 0:91ad48ad5687 | 441 | |
dflet | 0:91ad48ad5687 | 442 | /* Start with the semaphore in the expected state. */ |
dflet | 0:91ad48ad5687 | 443 | ( void ) xQueueGenericSend( pxNewQueue, NULL, ( TickType_t ) 0U, queueSEND_TO_BACK ); |
dflet | 0:91ad48ad5687 | 444 | } |
dflet | 0:91ad48ad5687 | 445 | else |
dflet | 0:91ad48ad5687 | 446 | { |
dflet | 0:91ad48ad5687 | 447 | traceCREATE_MUTEX_FAILED(); |
dflet | 0:91ad48ad5687 | 448 | } |
dflet | 0:91ad48ad5687 | 449 | |
dflet | 0:91ad48ad5687 | 450 | configASSERT( pxNewQueue ); |
dflet | 0:91ad48ad5687 | 451 | return pxNewQueue; |
dflet | 0:91ad48ad5687 | 452 | } |
dflet | 0:91ad48ad5687 | 453 | |
dflet | 0:91ad48ad5687 | 454 | #endif /* configUSE_MUTEXES */ |
dflet | 0:91ad48ad5687 | 455 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 456 | |
dflet | 0:91ad48ad5687 | 457 | #if ( ( configUSE_MUTEXES == 1 ) && ( INCLUDE_xSemaphoreGetMutexHolder == 1 ) ) |
dflet | 0:91ad48ad5687 | 458 | |
dflet | 0:91ad48ad5687 | 459 | void* xQueueGetMutexHolder( QueueHandle_t xSemaphore ) |
dflet | 0:91ad48ad5687 | 460 | { |
dflet | 0:91ad48ad5687 | 461 | void *pxReturn; |
dflet | 0:91ad48ad5687 | 462 | |
dflet | 0:91ad48ad5687 | 463 | /* This function is called by xSemaphoreGetMutexHolder(), and should not |
dflet | 0:91ad48ad5687 | 464 | be called directly. Note: This is a good way of determining if the |
dflet | 0:91ad48ad5687 | 465 | calling task is the mutex holder, but not a good way of determining the |
dflet | 0:91ad48ad5687 | 466 | identity of the mutex holder, as the holder may change between the |
dflet | 0:91ad48ad5687 | 467 | following critical section exiting and the function returning. */ |
dflet | 0:91ad48ad5687 | 468 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 469 | { |
dflet | 0:91ad48ad5687 | 470 | if( ( ( Queue_t * ) xSemaphore )->uxQueueType == queueQUEUE_IS_MUTEX ) |
dflet | 0:91ad48ad5687 | 471 | { |
dflet | 0:91ad48ad5687 | 472 | pxReturn = ( void * ) ( ( Queue_t * ) xSemaphore )->pxMutexHolder; |
dflet | 0:91ad48ad5687 | 473 | } |
dflet | 0:91ad48ad5687 | 474 | else |
dflet | 0:91ad48ad5687 | 475 | { |
dflet | 0:91ad48ad5687 | 476 | pxReturn = NULL; |
dflet | 0:91ad48ad5687 | 477 | } |
dflet | 0:91ad48ad5687 | 478 | } |
dflet | 0:91ad48ad5687 | 479 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 480 | |
dflet | 0:91ad48ad5687 | 481 | return pxReturn; |
dflet | 0:91ad48ad5687 | 482 | } /*lint !e818 xSemaphore cannot be a pointer to const because it is a typedef. */ |
dflet | 0:91ad48ad5687 | 483 | |
dflet | 0:91ad48ad5687 | 484 | #endif |
dflet | 0:91ad48ad5687 | 485 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 486 | |
dflet | 0:91ad48ad5687 | 487 | #if ( configUSE_RECURSIVE_MUTEXES == 1 ) |
dflet | 0:91ad48ad5687 | 488 | |
dflet | 0:91ad48ad5687 | 489 | BaseType_t xQueueGiveMutexRecursive( QueueHandle_t xMutex ) |
dflet | 0:91ad48ad5687 | 490 | { |
dflet | 0:91ad48ad5687 | 491 | BaseType_t xReturn; |
dflet | 0:91ad48ad5687 | 492 | Queue_t * const pxMutex = ( Queue_t * ) xMutex; |
dflet | 0:91ad48ad5687 | 493 | |
dflet | 0:91ad48ad5687 | 494 | configASSERT( pxMutex ); |
dflet | 0:91ad48ad5687 | 495 | |
dflet | 0:91ad48ad5687 | 496 | /* If this is the task that holds the mutex then pxMutexHolder will not |
dflet | 0:91ad48ad5687 | 497 | change outside of this task. If this task does not hold the mutex then |
dflet | 0:91ad48ad5687 | 498 | pxMutexHolder can never coincidentally equal the task's handle, and as |
dflet | 0:91ad48ad5687 | 499 | this is the only condition we are interested in it does not matter if |
dflet | 0:91ad48ad5687 | 500 | pxMutexHolder is accessed simultaneously by another task. Therefore no |
dflet | 0:91ad48ad5687 | 501 | mutual exclusion is required to test the pxMutexHolder variable. */ |
dflet | 0:91ad48ad5687 | 502 | if( pxMutex->pxMutexHolder == ( void * ) xTaskGetCurrentTaskHandle() ) /*lint !e961 Not a redundant cast as TaskHandle_t is a typedef. */ |
dflet | 0:91ad48ad5687 | 503 | { |
dflet | 0:91ad48ad5687 | 504 | traceGIVE_MUTEX_RECURSIVE( pxMutex ); |
dflet | 0:91ad48ad5687 | 505 | |
dflet | 0:91ad48ad5687 | 506 | /* uxRecursiveCallCount cannot be zero if pxMutexHolder is equal to |
dflet | 0:91ad48ad5687 | 507 | the task handle, therefore no underflow check is required. Also, |
dflet | 0:91ad48ad5687 | 508 | uxRecursiveCallCount is only modified by the mutex holder, and as |
dflet | 0:91ad48ad5687 | 509 | there can only be one, no mutual exclusion is required to modify the |
dflet | 0:91ad48ad5687 | 510 | uxRecursiveCallCount member. */ |
dflet | 0:91ad48ad5687 | 511 | ( pxMutex->u.uxRecursiveCallCount )--; |
dflet | 0:91ad48ad5687 | 512 | |
dflet | 0:91ad48ad5687 | 513 | /* Have we unwound the call count? */ |
dflet | 0:91ad48ad5687 | 514 | if( pxMutex->u.uxRecursiveCallCount == ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 515 | { |
dflet | 0:91ad48ad5687 | 516 | /* Return the mutex. This will automatically unblock any other |
dflet | 0:91ad48ad5687 | 517 | task that might be waiting to access the mutex. */ |
dflet | 0:91ad48ad5687 | 518 | ( void ) xQueueGenericSend( pxMutex, NULL, queueMUTEX_GIVE_BLOCK_TIME, queueSEND_TO_BACK ); |
dflet | 0:91ad48ad5687 | 519 | } |
dflet | 0:91ad48ad5687 | 520 | else |
dflet | 0:91ad48ad5687 | 521 | { |
dflet | 0:91ad48ad5687 | 522 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 523 | } |
dflet | 0:91ad48ad5687 | 524 | |
dflet | 0:91ad48ad5687 | 525 | xReturn = pdPASS; |
dflet | 0:91ad48ad5687 | 526 | } |
dflet | 0:91ad48ad5687 | 527 | else |
dflet | 0:91ad48ad5687 | 528 | { |
dflet | 0:91ad48ad5687 | 529 | /* The mutex cannot be given because the calling task is not the |
dflet | 0:91ad48ad5687 | 530 | holder. */ |
dflet | 0:91ad48ad5687 | 531 | xReturn = pdFAIL; |
dflet | 0:91ad48ad5687 | 532 | |
dflet | 0:91ad48ad5687 | 533 | traceGIVE_MUTEX_RECURSIVE_FAILED( pxMutex ); |
dflet | 0:91ad48ad5687 | 534 | } |
dflet | 0:91ad48ad5687 | 535 | |
dflet | 0:91ad48ad5687 | 536 | return xReturn; |
dflet | 0:91ad48ad5687 | 537 | } |
dflet | 0:91ad48ad5687 | 538 | |
dflet | 0:91ad48ad5687 | 539 | #endif /* configUSE_RECURSIVE_MUTEXES */ |
dflet | 0:91ad48ad5687 | 540 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 541 | |
dflet | 0:91ad48ad5687 | 542 | #if ( configUSE_RECURSIVE_MUTEXES == 1 ) |
dflet | 0:91ad48ad5687 | 543 | |
dflet | 0:91ad48ad5687 | 544 | BaseType_t xQueueTakeMutexRecursive( QueueHandle_t xMutex, TickType_t xTicksToWait ) |
dflet | 0:91ad48ad5687 | 545 | { |
dflet | 0:91ad48ad5687 | 546 | BaseType_t xReturn; |
dflet | 0:91ad48ad5687 | 547 | Queue_t * const pxMutex = ( Queue_t * ) xMutex; |
dflet | 0:91ad48ad5687 | 548 | |
dflet | 0:91ad48ad5687 | 549 | configASSERT( pxMutex ); |
dflet | 0:91ad48ad5687 | 550 | |
dflet | 0:91ad48ad5687 | 551 | /* Comments regarding mutual exclusion as per those within |
dflet | 0:91ad48ad5687 | 552 | xQueueGiveMutexRecursive(). */ |
dflet | 0:91ad48ad5687 | 553 | |
dflet | 0:91ad48ad5687 | 554 | traceTAKE_MUTEX_RECURSIVE( pxMutex ); |
dflet | 0:91ad48ad5687 | 555 | |
dflet | 0:91ad48ad5687 | 556 | if( pxMutex->pxMutexHolder == ( void * ) xTaskGetCurrentTaskHandle() ) /*lint !e961 Cast is not redundant as TaskHandle_t is a typedef. */ |
dflet | 0:91ad48ad5687 | 557 | { |
dflet | 0:91ad48ad5687 | 558 | ( pxMutex->u.uxRecursiveCallCount )++; |
dflet | 0:91ad48ad5687 | 559 | xReturn = pdPASS; |
dflet | 0:91ad48ad5687 | 560 | } |
dflet | 0:91ad48ad5687 | 561 | else |
dflet | 0:91ad48ad5687 | 562 | { |
dflet | 0:91ad48ad5687 | 563 | xReturn = xQueueGenericReceive( pxMutex, NULL, xTicksToWait, pdFALSE ); |
dflet | 0:91ad48ad5687 | 564 | |
dflet | 0:91ad48ad5687 | 565 | /* pdPASS will only be returned if the mutex was successfully |
dflet | 0:91ad48ad5687 | 566 | obtained. The calling task may have entered the Blocked state |
dflet | 0:91ad48ad5687 | 567 | before reaching here. */ |
dflet | 0:91ad48ad5687 | 568 | if( xReturn == pdPASS ) |
dflet | 0:91ad48ad5687 | 569 | { |
dflet | 0:91ad48ad5687 | 570 | ( pxMutex->u.uxRecursiveCallCount )++; |
dflet | 0:91ad48ad5687 | 571 | } |
dflet | 0:91ad48ad5687 | 572 | else |
dflet | 0:91ad48ad5687 | 573 | { |
dflet | 0:91ad48ad5687 | 574 | traceTAKE_MUTEX_RECURSIVE_FAILED( pxMutex ); |
dflet | 0:91ad48ad5687 | 575 | } |
dflet | 0:91ad48ad5687 | 576 | } |
dflet | 0:91ad48ad5687 | 577 | |
dflet | 0:91ad48ad5687 | 578 | return xReturn; |
dflet | 0:91ad48ad5687 | 579 | } |
dflet | 0:91ad48ad5687 | 580 | |
dflet | 0:91ad48ad5687 | 581 | #endif /* configUSE_RECURSIVE_MUTEXES */ |
dflet | 0:91ad48ad5687 | 582 | /*-----------------------------------------------------------*/ |
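The give/take pair above maintains uxRecursiveCallCount: a nested take by the holder only bumps the count, and the underlying queue send that actually releases the mutex happens only when a give unwinds the count back to zero. A plain-C model of that counting logic (hypothetical names, not the FreeRTOS API; a real take would block instead of failing):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of a recursive mutex's bookkeeping: the
   holder's task handle plus a nesting count, as maintained by
   xQueueTakeMutexRecursive()/xQueueGiveMutexRecursive(). */
typedef struct {
    void *holder;              /* NULL when the mutex is free */
    unsigned recursive_count;  /* nesting depth for the holder */
} rec_mutex_model;

/* Take: if the caller already holds it, just bump the count.
   Otherwise acquire only if free. */
int model_take(rec_mutex_model *m, void *task)
{
    if (m->holder == task) {
        m->recursive_count++;
        return 1;                    /* pdPASS */
    }
    if (m->holder == NULL) {
        m->holder = task;
        m->recursive_count = 1;
        return 1;
    }
    return 0;                        /* would block in the kernel */
}

/* Give: only the holder may give; the mutex is released only when
   the count unwinds to zero, mirroring the code above. */
int model_give(rec_mutex_model *m, void *task)
{
    if (m->holder != task) {
        return 0;                    /* pdFAIL */
    }
    if (--m->recursive_count == 0) {
        m->holder = NULL;            /* the mutex is returned */
    }
    return 1;                        /* pdPASS */
}
```

The comments in the kernel code explain why no critical section is needed here: only the holder ever modifies the count, and a non-holder comparing pxMutexHolder against its own handle can never get a false positive.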
dflet | 0:91ad48ad5687 | 583 | |
dflet | 0:91ad48ad5687 | 584 | #if ( configUSE_COUNTING_SEMAPHORES == 1 ) |
dflet | 0:91ad48ad5687 | 585 | |
dflet | 0:91ad48ad5687 | 586 | QueueHandle_t xQueueCreateCountingSemaphore( const UBaseType_t uxMaxCount, const UBaseType_t uxInitialCount ) |
dflet | 0:91ad48ad5687 | 587 | { |
dflet | 0:91ad48ad5687 | 588 | QueueHandle_t xHandle; |
dflet | 0:91ad48ad5687 | 589 | |
dflet | 0:91ad48ad5687 | 590 | configASSERT( uxMaxCount != 0 ); |
dflet | 0:91ad48ad5687 | 591 | configASSERT( uxInitialCount <= uxMaxCount ); |
dflet | 0:91ad48ad5687 | 592 | |
dflet | 0:91ad48ad5687 | 593 | xHandle = xQueueGenericCreate( uxMaxCount, queueSEMAPHORE_QUEUE_ITEM_LENGTH, queueQUEUE_TYPE_COUNTING_SEMAPHORE ); |
dflet | 0:91ad48ad5687 | 594 | |
dflet | 0:91ad48ad5687 | 595 | if( xHandle != NULL ) |
dflet | 0:91ad48ad5687 | 596 | { |
dflet | 0:91ad48ad5687 | 597 | ( ( Queue_t * ) xHandle )->uxMessagesWaiting = uxInitialCount; |
dflet | 0:91ad48ad5687 | 598 | |
dflet | 0:91ad48ad5687 | 599 | traceCREATE_COUNTING_SEMAPHORE(); |
dflet | 0:91ad48ad5687 | 600 | } |
dflet | 0:91ad48ad5687 | 601 | else |
dflet | 0:91ad48ad5687 | 602 | { |
dflet | 0:91ad48ad5687 | 603 | traceCREATE_COUNTING_SEMAPHORE_FAILED(); |
dflet | 0:91ad48ad5687 | 604 | } |
dflet | 0:91ad48ad5687 | 605 | |
dflet | 0:91ad48ad5687 | 606 | configASSERT( xHandle ); |
dflet | 0:91ad48ad5687 | 607 | return xHandle; |
dflet | 0:91ad48ad5687 | 608 | } |
dflet | 0:91ad48ad5687 | 609 | |
dflet | 0:91ad48ad5687 | 610 | #endif /* configUSE_COUNTING_SEMAPHORES */ |
dflet | 0:91ad48ad5687 | 611 | /*-----------------------------------------------------------*/ |
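xQueueCreateCountingSemaphore() above builds the semaphore as a queue of uxMaxCount zero-sized items and then sets uxMessagesWaiting directly to seed the initial count. A plain-C sketch of that count arithmetic (hypothetical names; the real give/take also handle blocking and unblocking waiters):

```c
#include <assert.h>

/* Hypothetical model of a counting semaphore built on a queue of
   zero-sized items: the count is uxMessagesWaiting, bounded by the
   queue length uxMaxCount. */
typedef struct {
    unsigned count;
    unsigned max_count;
} counting_sem_model;

/* Mirrors the configASSERT()s and seeding in the creation code:
   a zero maximum or an initial count above the maximum is invalid. */
int model_sem_init(counting_sem_model *s, unsigned max_count,
                   unsigned initial)
{
    if (max_count == 0 || initial > max_count) {
        return 0;
    }
    s->count = initial;
    s->max_count = max_count;
    return 1;
}

int model_sem_give(counting_sem_model *s)
{
    if (s->count >= s->max_count) {
        return 0;   /* queue full: errQUEUE_FULL */
    }
    s->count++;
    return 1;
}

int model_sem_take(counting_sem_model *s)
{
    if (s->count == 0) {
        return 0;   /* would block or time out */
    }
    s->count--;
    return 1;
}
```

Writing uxMessagesWaiting directly, rather than performing uxInitialCount sends, is safe only at creation time because no other task can yet reference the handle.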
dflet | 0:91ad48ad5687 | 612 | |
dflet | 0:91ad48ad5687 | 613 | BaseType_t xQueueGenericSend( QueueHandle_t xQueue, const void * const pvItemToQueue, TickType_t xTicksToWait, const BaseType_t xCopyPosition ) |
dflet | 0:91ad48ad5687 | 614 | { |
dflet | 0:91ad48ad5687 | 615 | BaseType_t xEntryTimeSet = pdFALSE, xYieldRequired; |
dflet | 0:91ad48ad5687 | 616 | TimeOut_t xTimeOut; |
dflet | 0:91ad48ad5687 | 617 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
dflet | 0:91ad48ad5687 | 618 | |
dflet | 0:91ad48ad5687 | 619 | configASSERT( pxQueue ); |
dflet | 0:91ad48ad5687 | 620 | configASSERT( !( ( pvItemToQueue == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) ); |
dflet | 0:91ad48ad5687 | 621 | configASSERT( !( ( xCopyPosition == queueOVERWRITE ) && ( pxQueue->uxLength != 1 ) ) ); |
dflet | 0:91ad48ad5687 | 622 | #if ( ( INCLUDE_xTaskGetSchedulerState == 1 ) || ( configUSE_TIMERS == 1 ) ) |
dflet | 0:91ad48ad5687 | 623 | { |
dflet | 0:91ad48ad5687 | 624 | configASSERT( !( ( xTaskGetSchedulerState() == taskSCHEDULER_SUSPENDED ) && ( xTicksToWait != 0 ) ) ); |
dflet | 0:91ad48ad5687 | 625 | } |
dflet | 0:91ad48ad5687 | 626 | #endif |
dflet | 0:91ad48ad5687 | 627 | |
dflet | 0:91ad48ad5687 | 628 | |
dflet | 0:91ad48ad5687 | 629 | /* This function relaxes the coding standard somewhat to allow return |
dflet | 0:91ad48ad5687 | 630 | statements within the function itself. This is done in the interest |
dflet | 0:91ad48ad5687 | 631 | of execution time efficiency. */ |
dflet | 0:91ad48ad5687 | 632 | for( ;; ) |
dflet | 0:91ad48ad5687 | 633 | { |
dflet | 0:91ad48ad5687 | 634 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 635 | { |
dflet | 0:91ad48ad5687 | 636 | /* Is there room on the queue now? The running task must be the |
dflet | 0:91ad48ad5687 | 637 | highest priority task wanting to access the queue. If the head item |
dflet | 0:91ad48ad5687 | 638 | in the queue is to be overwritten then it does not matter if the |
dflet | 0:91ad48ad5687 | 639 | queue is full. */ |
dflet | 0:91ad48ad5687 | 640 | if( ( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) || ( xCopyPosition == queueOVERWRITE ) ) |
dflet | 0:91ad48ad5687 | 641 | { |
dflet | 0:91ad48ad5687 | 642 | traceQUEUE_SEND( pxQueue ); |
dflet | 0:91ad48ad5687 | 643 | xYieldRequired = prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition ); |
dflet | 0:91ad48ad5687 | 644 | |
dflet | 0:91ad48ad5687 | 645 | #if ( configUSE_QUEUE_SETS == 1 ) |
dflet | 0:91ad48ad5687 | 646 | { |
dflet | 0:91ad48ad5687 | 647 | if( pxQueue->pxQueueSetContainer != NULL ) |
dflet | 0:91ad48ad5687 | 648 | { |
dflet | 0:91ad48ad5687 | 649 | if( prvNotifyQueueSetContainer( pxQueue, xCopyPosition ) == pdTRUE ) |
dflet | 0:91ad48ad5687 | 650 | { |
dflet | 0:91ad48ad5687 | 651 | /* The queue is a member of a queue set, and posting |
dflet | 0:91ad48ad5687 | 652 | to the queue set caused a higher priority task to |
dflet | 0:91ad48ad5687 | 653 | unblock. A context switch is required. */ |
dflet | 0:91ad48ad5687 | 654 | queueYIELD_IF_USING_PREEMPTION(); |
dflet | 0:91ad48ad5687 | 655 | } |
dflet | 0:91ad48ad5687 | 656 | else |
dflet | 0:91ad48ad5687 | 657 | { |
dflet | 0:91ad48ad5687 | 658 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 659 | } |
dflet | 0:91ad48ad5687 | 660 | } |
dflet | 0:91ad48ad5687 | 661 | else |
dflet | 0:91ad48ad5687 | 662 | { |
dflet | 0:91ad48ad5687 | 663 | /* If there was a task waiting for data to arrive on the |
dflet | 0:91ad48ad5687 | 664 | queue then unblock it now. */ |
dflet | 0:91ad48ad5687 | 665 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 666 | { |
dflet | 0:91ad48ad5687 | 667 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) == pdTRUE ) |
dflet | 0:91ad48ad5687 | 668 | { |
dflet | 0:91ad48ad5687 | 669 | /* The unblocked task has a priority higher than |
dflet | 0:91ad48ad5687 | 670 | our own so yield immediately. Yes it is ok to |
dflet | 0:91ad48ad5687 | 671 | do this from within the critical section - the |
dflet | 0:91ad48ad5687 | 672 | kernel takes care of that. */ |
dflet | 0:91ad48ad5687 | 673 | queueYIELD_IF_USING_PREEMPTION(); |
dflet | 0:91ad48ad5687 | 674 | } |
dflet | 0:91ad48ad5687 | 675 | else |
dflet | 0:91ad48ad5687 | 676 | { |
dflet | 0:91ad48ad5687 | 677 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 678 | } |
dflet | 0:91ad48ad5687 | 679 | } |
dflet | 0:91ad48ad5687 | 680 | else if( xYieldRequired != pdFALSE ) |
dflet | 0:91ad48ad5687 | 681 | { |
dflet | 0:91ad48ad5687 | 682 | /* This path is a special case that will only get |
dflet | 0:91ad48ad5687 | 683 | executed if the task was holding multiple mutexes |
dflet | 0:91ad48ad5687 | 684 | and the mutexes were given back in an order that is |
dflet | 0:91ad48ad5687 | 685 | different to that in which they were taken. */ |
dflet | 0:91ad48ad5687 | 686 | queueYIELD_IF_USING_PREEMPTION(); |
dflet | 0:91ad48ad5687 | 687 | } |
dflet | 0:91ad48ad5687 | 688 | else |
dflet | 0:91ad48ad5687 | 689 | { |
dflet | 0:91ad48ad5687 | 690 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 691 | } |
dflet | 0:91ad48ad5687 | 692 | } |
dflet | 0:91ad48ad5687 | 693 | } |
dflet | 0:91ad48ad5687 | 694 | #else /* configUSE_QUEUE_SETS */ |
dflet | 0:91ad48ad5687 | 695 | { |
dflet | 0:91ad48ad5687 | 696 | /* If there was a task waiting for data to arrive on the |
dflet | 0:91ad48ad5687 | 697 | queue then unblock it now. */ |
dflet | 0:91ad48ad5687 | 698 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 699 | { |
dflet | 0:91ad48ad5687 | 700 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) == pdTRUE ) |
dflet | 0:91ad48ad5687 | 701 | { |
dflet | 0:91ad48ad5687 | 702 | /* The unblocked task has a priority higher than |
dflet | 0:91ad48ad5687 | 703 | our own so yield immediately. Yes it is ok to do |
dflet | 0:91ad48ad5687 | 704 | this from within the critical section - the kernel |
dflet | 0:91ad48ad5687 | 705 | takes care of that. */ |
dflet | 0:91ad48ad5687 | 706 | queueYIELD_IF_USING_PREEMPTION(); |
dflet | 0:91ad48ad5687 | 707 | } |
dflet | 0:91ad48ad5687 | 708 | else |
dflet | 0:91ad48ad5687 | 709 | { |
dflet | 0:91ad48ad5687 | 710 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 711 | } |
dflet | 0:91ad48ad5687 | 712 | } |
dflet | 0:91ad48ad5687 | 713 | else if( xYieldRequired != pdFALSE ) |
dflet | 0:91ad48ad5687 | 714 | { |
dflet | 0:91ad48ad5687 | 715 | /* This path is a special case that will only get |
dflet | 0:91ad48ad5687 | 716 | executed if the task was holding multiple mutexes and |
dflet | 0:91ad48ad5687 | 717 | the mutexes were given back in an order that is |
dflet | 0:91ad48ad5687 | 718 | different to that in which they were taken. */ |
dflet | 0:91ad48ad5687 | 719 | queueYIELD_IF_USING_PREEMPTION(); |
dflet | 0:91ad48ad5687 | 720 | } |
dflet | 0:91ad48ad5687 | 721 | else |
dflet | 0:91ad48ad5687 | 722 | { |
dflet | 0:91ad48ad5687 | 723 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 724 | } |
dflet | 0:91ad48ad5687 | 725 | } |
dflet | 0:91ad48ad5687 | 726 | #endif /* configUSE_QUEUE_SETS */ |
dflet | 0:91ad48ad5687 | 727 | |
dflet | 0:91ad48ad5687 | 728 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 729 | return pdPASS; |
dflet | 0:91ad48ad5687 | 730 | } |
dflet | 0:91ad48ad5687 | 731 | else |
dflet | 0:91ad48ad5687 | 732 | { |
dflet | 0:91ad48ad5687 | 733 | if( xTicksToWait == ( TickType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 734 | { |
dflet | 0:91ad48ad5687 | 735 | /* The queue was full and no block time is specified (or |
dflet | 0:91ad48ad5687 | 736 | the block time has expired) so leave now. */ |
dflet | 0:91ad48ad5687 | 737 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 738 | |
dflet | 0:91ad48ad5687 | 739 | /* Return to the original privilege level before exiting |
dflet | 0:91ad48ad5687 | 740 | the function. */ |
dflet | 0:91ad48ad5687 | 741 | traceQUEUE_SEND_FAILED( pxQueue ); |
dflet | 0:91ad48ad5687 | 742 | return errQUEUE_FULL; |
dflet | 0:91ad48ad5687 | 743 | } |
dflet | 0:91ad48ad5687 | 744 | else if( xEntryTimeSet == pdFALSE ) |
dflet | 0:91ad48ad5687 | 745 | { |
dflet | 0:91ad48ad5687 | 746 | /* The queue was full and a block time was specified so |
dflet | 0:91ad48ad5687 | 747 | configure the timeout structure. */ |
dflet | 0:91ad48ad5687 | 748 | vTaskSetTimeOutState( &xTimeOut ); |
dflet | 0:91ad48ad5687 | 749 | xEntryTimeSet = pdTRUE; |
dflet | 0:91ad48ad5687 | 750 | } |
dflet | 0:91ad48ad5687 | 751 | else |
dflet | 0:91ad48ad5687 | 752 | { |
dflet | 0:91ad48ad5687 | 753 | /* Entry time was already set. */ |
dflet | 0:91ad48ad5687 | 754 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 755 | } |
dflet | 0:91ad48ad5687 | 756 | } |
dflet | 0:91ad48ad5687 | 757 | } |
dflet | 0:91ad48ad5687 | 758 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 759 | |
dflet | 0:91ad48ad5687 | 760 | /* Interrupts and other tasks can send to and receive from the queue |
dflet | 0:91ad48ad5687 | 761 | now the critical section has been exited. */ |
dflet | 0:91ad48ad5687 | 762 | |
dflet | 0:91ad48ad5687 | 763 | vTaskSuspendAll(); |
dflet | 0:91ad48ad5687 | 764 | prvLockQueue( pxQueue ); |
dflet | 0:91ad48ad5687 | 765 | |
dflet | 0:91ad48ad5687 | 766 | /* Update the timeout state to see if it has expired yet. */ |
dflet | 0:91ad48ad5687 | 767 | if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 768 | { |
dflet | 0:91ad48ad5687 | 769 | if( prvIsQueueFull( pxQueue ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 770 | { |
dflet | 0:91ad48ad5687 | 771 | traceBLOCKING_ON_QUEUE_SEND( pxQueue ); |
dflet | 0:91ad48ad5687 | 772 | vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToSend ), xTicksToWait ); |
dflet | 0:91ad48ad5687 | 773 | |
dflet | 0:91ad48ad5687 | 774 | /* Unlocking the queue means queue events can affect the |
dflet | 0:91ad48ad5687 | 775 | event list. It is possible that interrupts occurring now |
dflet | 0:91ad48ad5687 | 776 | remove this task from the event list again - but as the |
dflet | 0:91ad48ad5687 | 777 | scheduler is suspended the task will go onto the pending |
dflet | 0:91ad48ad5687 | 778 | ready list instead of the actual ready list. */ |
dflet | 0:91ad48ad5687 | 779 | prvUnlockQueue( pxQueue ); |
dflet | 0:91ad48ad5687 | 780 | |
dflet | 0:91ad48ad5687 | 781 | /* Resuming the scheduler will move tasks from the pending |
dflet | 0:91ad48ad5687 | 782 | ready list into the ready list - so it is feasible that this |
dflet | 0:91ad48ad5687 | 783 | task is already in a ready list before it yields - in which |
dflet | 0:91ad48ad5687 | 784 | case the yield will not cause a context switch unless there |
dflet | 0:91ad48ad5687 | 785 | is also a higher priority task in the pending ready list. */ |
dflet | 0:91ad48ad5687 | 786 | if( xTaskResumeAll() == pdFALSE ) |
dflet | 0:91ad48ad5687 | 787 | { |
dflet | 0:91ad48ad5687 | 788 | portYIELD_WITHIN_API(); |
dflet | 0:91ad48ad5687 | 789 | } |
dflet | 0:91ad48ad5687 | 790 | } |
dflet | 0:91ad48ad5687 | 791 | else |
dflet | 0:91ad48ad5687 | 792 | { |
dflet | 0:91ad48ad5687 | 793 | /* Try again. */ |
dflet | 0:91ad48ad5687 | 794 | prvUnlockQueue( pxQueue ); |
dflet | 0:91ad48ad5687 | 795 | ( void ) xTaskResumeAll(); |
dflet | 0:91ad48ad5687 | 796 | } |
dflet | 0:91ad48ad5687 | 797 | } |
dflet | 0:91ad48ad5687 | 798 | else |
dflet | 0:91ad48ad5687 | 799 | { |
dflet | 0:91ad48ad5687 | 800 | /* The timeout has expired. */ |
dflet | 0:91ad48ad5687 | 801 | prvUnlockQueue( pxQueue ); |
dflet | 0:91ad48ad5687 | 802 | ( void ) xTaskResumeAll(); |
dflet | 0:91ad48ad5687 | 803 | |
dflet | 0:91ad48ad5687 | 804 | /* Return to the original privilege level before exiting the |
dflet | 0:91ad48ad5687 | 805 | function. */ |
dflet | 0:91ad48ad5687 | 806 | traceQUEUE_SEND_FAILED( pxQueue ); |
dflet | 0:91ad48ad5687 | 807 | return errQUEUE_FULL; |
dflet | 0:91ad48ad5687 | 808 | } |
dflet | 0:91ad48ad5687 | 809 | } |
dflet | 0:91ad48ad5687 | 810 | } |
dflet | 0:91ad48ad5687 | 811 | /*-----------------------------------------------------------*/ |
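The admission test at the top of xQueueGenericSend() decides whether the item is copied in immediately: a send is accepted when there is room, or unconditionally when queueOVERWRITE is used (which the configASSERT() earlier restricts to length-one queues); otherwise the caller either returns errQUEUE_FULL at once or sets up the timeout and blocks. A plain-C sketch of just that predicate (hypothetical constants modelled on, but not taken from, the FreeRTOS headers):

```c
#include <assert.h>

/* Hypothetical stand-ins for the queue.h copy-position constants. */
enum { MODEL_SEND_TO_BACK = 0, MODEL_OVERWRITE = 2 };

/* Hypothetical model of the queue fields the admission test reads. */
typedef struct {
    unsigned messages_waiting;   /* uxMessagesWaiting */
    unsigned length;             /* uxLength */
} queue_model;

/* Returns 1 (send proceeds now, pdPASS) when there is room or the
   head item is to be overwritten; returns 0 when the caller must
   block or give up with errQUEUE_FULL. */
int model_can_send(const queue_model *q, int copy_position)
{
    return (q->messages_waiting < q->length) ||
           (copy_position == MODEL_OVERWRITE);
}
```

The rest of the function is the retry loop around this test: leave the critical section, lock the queue, check the timeout, and either block on xTasksWaitingToSend or loop back and test again.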
dflet | 0:91ad48ad5687 | 812 | |
dflet | 0:91ad48ad5687 | 813 | #if ( configUSE_ALTERNATIVE_API == 1 ) |
dflet | 0:91ad48ad5687 | 814 | |
dflet | 0:91ad48ad5687 | 815 | BaseType_t xQueueAltGenericSend( QueueHandle_t xQueue, const void * const pvItemToQueue, TickType_t xTicksToWait, BaseType_t xCopyPosition ) |
dflet | 0:91ad48ad5687 | 816 | { |
dflet | 0:91ad48ad5687 | 817 | BaseType_t xEntryTimeSet = pdFALSE; |
dflet | 0:91ad48ad5687 | 818 | TimeOut_t xTimeOut; |
dflet | 0:91ad48ad5687 | 819 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
dflet | 0:91ad48ad5687 | 820 | |
dflet | 0:91ad48ad5687 | 821 | configASSERT( pxQueue ); |
dflet | 0:91ad48ad5687 | 822 | configASSERT( !( ( pvItemToQueue == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) ); |
dflet | 0:91ad48ad5687 | 823 | |
dflet | 0:91ad48ad5687 | 824 | for( ;; ) |
dflet | 0:91ad48ad5687 | 825 | { |
dflet | 0:91ad48ad5687 | 826 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 827 | { |
dflet | 0:91ad48ad5687 | 828 | /* Is there room on the queue now? To be running we must be |
dflet | 0:91ad48ad5687 | 829 | the highest priority task wanting to access the queue. */ |
dflet | 0:91ad48ad5687 | 830 | if( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) |
dflet | 0:91ad48ad5687 | 831 | { |
dflet | 0:91ad48ad5687 | 832 | traceQUEUE_SEND( pxQueue ); |
dflet | 0:91ad48ad5687 | 833 | prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition ); |
dflet | 0:91ad48ad5687 | 834 | |
dflet | 0:91ad48ad5687 | 835 | /* If there was a task waiting for data to arrive on the |
dflet | 0:91ad48ad5687 | 836 | queue then unblock it now. */ |
dflet | 0:91ad48ad5687 | 837 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 838 | { |
dflet | 0:91ad48ad5687 | 839 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) == pdTRUE ) |
dflet | 0:91ad48ad5687 | 840 | { |
dflet | 0:91ad48ad5687 | 841 | /* The unblocked task has a priority higher than |
dflet | 0:91ad48ad5687 | 842 | our own so yield immediately. */ |
dflet | 0:91ad48ad5687 | 843 | portYIELD_WITHIN_API(); |
dflet | 0:91ad48ad5687 | 844 | } |
dflet | 0:91ad48ad5687 | 845 | else |
dflet | 0:91ad48ad5687 | 846 | { |
dflet | 0:91ad48ad5687 | 847 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 848 | } |
dflet | 0:91ad48ad5687 | 849 | } |
dflet | 0:91ad48ad5687 | 850 | else |
dflet | 0:91ad48ad5687 | 851 | { |
dflet | 0:91ad48ad5687 | 852 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 853 | } |
dflet | 0:91ad48ad5687 | 854 | |
dflet | 0:91ad48ad5687 | 855 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 856 | return pdPASS; |
dflet | 0:91ad48ad5687 | 857 | } |
dflet | 0:91ad48ad5687 | 858 | else |
dflet | 0:91ad48ad5687 | 859 | { |
dflet | 0:91ad48ad5687 | 860 | if( xTicksToWait == ( TickType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 861 | { |
dflet | 0:91ad48ad5687 | 862 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 863 | return errQUEUE_FULL; |
dflet | 0:91ad48ad5687 | 864 | } |
dflet | 0:91ad48ad5687 | 865 | else if( xEntryTimeSet == pdFALSE ) |
dflet | 0:91ad48ad5687 | 866 | { |
dflet | 0:91ad48ad5687 | 867 | vTaskSetTimeOutState( &xTimeOut ); |
dflet | 0:91ad48ad5687 | 868 | xEntryTimeSet = pdTRUE; |
dflet | 0:91ad48ad5687 | 869 | } |
dflet | 0:91ad48ad5687 | 870 | } |
dflet | 0:91ad48ad5687 | 871 | } |
dflet | 0:91ad48ad5687 | 872 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 873 | |
dflet | 0:91ad48ad5687 | 874 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 875 | { |
dflet | 0:91ad48ad5687 | 876 | if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 877 | { |
dflet | 0:91ad48ad5687 | 878 | if( prvIsQueueFull( pxQueue ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 879 | { |
dflet | 0:91ad48ad5687 | 880 | traceBLOCKING_ON_QUEUE_SEND( pxQueue ); |
dflet | 0:91ad48ad5687 | 881 | vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToSend ), xTicksToWait ); |
dflet | 0:91ad48ad5687 | 882 | portYIELD_WITHIN_API(); |
dflet | 0:91ad48ad5687 | 883 | } |
dflet | 0:91ad48ad5687 | 884 | else |
dflet | 0:91ad48ad5687 | 885 | { |
dflet | 0:91ad48ad5687 | 886 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 887 | } |
dflet | 0:91ad48ad5687 | 888 | } |
dflet | 0:91ad48ad5687 | 889 | else |
dflet | 0:91ad48ad5687 | 890 | { |
dflet | 0:91ad48ad5687 | 891 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 892 | traceQUEUE_SEND_FAILED( pxQueue ); |
dflet | 0:91ad48ad5687 | 893 | return errQUEUE_FULL; |
dflet | 0:91ad48ad5687 | 894 | } |
dflet | 0:91ad48ad5687 | 895 | } |
dflet | 0:91ad48ad5687 | 896 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 897 | } |
dflet | 0:91ad48ad5687 | 898 | } |
dflet | 0:91ad48ad5687 | 899 | |
dflet | 0:91ad48ad5687 | 900 | #endif /* configUSE_ALTERNATIVE_API */ |
dflet | 0:91ad48ad5687 | 901 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 902 | |
dflet | 0:91ad48ad5687 | 903 | #if ( configUSE_ALTERNATIVE_API == 1 ) |
dflet | 0:91ad48ad5687 | 904 | |
dflet | 0:91ad48ad5687 | 905 | BaseType_t xQueueAltGenericReceive( QueueHandle_t xQueue, void * const pvBuffer, TickType_t xTicksToWait, BaseType_t xJustPeeking ) |
dflet | 0:91ad48ad5687 | 906 | { |
dflet | 0:91ad48ad5687 | 907 | BaseType_t xEntryTimeSet = pdFALSE; |
dflet | 0:91ad48ad5687 | 908 | TimeOut_t xTimeOut; |
dflet | 0:91ad48ad5687 | 909 | int8_t *pcOriginalReadPosition; |
dflet | 0:91ad48ad5687 | 910 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
dflet | 0:91ad48ad5687 | 911 | |
dflet | 0:91ad48ad5687 | 912 | configASSERT( pxQueue ); |
dflet | 0:91ad48ad5687 | 913 | configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) ); |
dflet | 0:91ad48ad5687 | 914 | |
dflet | 0:91ad48ad5687 | 915 | for( ;; ) |
dflet | 0:91ad48ad5687 | 916 | { |
dflet | 0:91ad48ad5687 | 917 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 918 | { |
dflet | 0:91ad48ad5687 | 919 | if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 920 | { |
dflet | 0:91ad48ad5687 | 921 | /* Remember our read position in case we are just peeking. */ |
dflet | 0:91ad48ad5687 | 922 | pcOriginalReadPosition = pxQueue->u.pcReadFrom; |
dflet | 0:91ad48ad5687 | 923 | |
dflet | 0:91ad48ad5687 | 924 | prvCopyDataFromQueue( pxQueue, pvBuffer ); |
dflet | 0:91ad48ad5687 | 925 | |
dflet | 0:91ad48ad5687 | 926 | if( xJustPeeking == pdFALSE ) |
dflet | 0:91ad48ad5687 | 927 | { |
dflet | 0:91ad48ad5687 | 928 | traceQUEUE_RECEIVE( pxQueue ); |
dflet | 0:91ad48ad5687 | 929 | |
dflet | 0:91ad48ad5687 | 930 | /* Data is actually being removed (not just peeked). */ |
dflet | 0:91ad48ad5687 | 931 | --( pxQueue->uxMessagesWaiting ); |
dflet | 0:91ad48ad5687 | 932 | |
dflet | 0:91ad48ad5687 | 933 | #if ( configUSE_MUTEXES == 1 ) |
dflet | 0:91ad48ad5687 | 934 | { |
dflet | 0:91ad48ad5687 | 935 | if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX ) |
dflet | 0:91ad48ad5687 | 936 | { |
dflet | 0:91ad48ad5687 | 937 | /* Record the information required to implement |
dflet | 0:91ad48ad5687 | 938 | priority inheritance should it become necessary. */ |
dflet | 0:91ad48ad5687 | 939 | pxQueue->pxMutexHolder = ( int8_t * ) xTaskGetCurrentTaskHandle(); |
dflet | 0:91ad48ad5687 | 940 | } |
dflet | 0:91ad48ad5687 | 941 | else |
dflet | 0:91ad48ad5687 | 942 | { |
dflet | 0:91ad48ad5687 | 943 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 944 | } |
dflet | 0:91ad48ad5687 | 945 | } |
dflet | 0:91ad48ad5687 | 946 | #endif |
dflet | 0:91ad48ad5687 | 947 | |
dflet | 0:91ad48ad5687 | 948 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 949 | { |
dflet | 0:91ad48ad5687 | 950 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) == pdTRUE ) |
dflet | 0:91ad48ad5687 | 951 | { |
dflet | 0:91ad48ad5687 | 952 | portYIELD_WITHIN_API(); |
dflet | 0:91ad48ad5687 | 953 | } |
dflet | 0:91ad48ad5687 | 954 | else |
dflet | 0:91ad48ad5687 | 955 | { |
dflet | 0:91ad48ad5687 | 956 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 957 | } |
dflet | 0:91ad48ad5687 | 958 | } |
dflet | 0:91ad48ad5687 | 959 | } |
dflet | 0:91ad48ad5687 | 960 | else |
dflet | 0:91ad48ad5687 | 961 | { |
dflet | 0:91ad48ad5687 | 962 | traceQUEUE_PEEK( pxQueue ); |
dflet | 0:91ad48ad5687 | 963 | |
dflet | 0:91ad48ad5687 | 964 | /* The data is not being removed, so reset our read |
dflet | 0:91ad48ad5687 | 965 | pointer. */ |
dflet | 0:91ad48ad5687 | 966 | pxQueue->u.pcReadFrom = pcOriginalReadPosition; |
dflet | 0:91ad48ad5687 | 967 | |
dflet | 0:91ad48ad5687 | 968 | /* The data is being left in the queue, so see if there are |
dflet | 0:91ad48ad5687 | 969 | any other tasks waiting for the data. */ |
dflet | 0:91ad48ad5687 | 970 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 971 | { |
dflet | 0:91ad48ad5687 | 972 | /* Tasks that are removed from the event list will get added to |
dflet | 0:91ad48ad5687 | 973 | the pending ready list as the scheduler is still suspended. */ |
dflet | 0:91ad48ad5687 | 974 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 975 | { |
dflet | 0:91ad48ad5687 | 976 | /* The task waiting has a higher priority than this task. */ |
dflet | 0:91ad48ad5687 | 977 | portYIELD_WITHIN_API(); |
dflet | 0:91ad48ad5687 | 978 | } |
dflet | 0:91ad48ad5687 | 979 | else |
dflet | 0:91ad48ad5687 | 980 | { |
dflet | 0:91ad48ad5687 | 981 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 982 | } |
dflet | 0:91ad48ad5687 | 983 | } |
dflet | 0:91ad48ad5687 | 984 | else |
dflet | 0:91ad48ad5687 | 985 | { |
dflet | 0:91ad48ad5687 | 986 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 987 | } |
dflet | 0:91ad48ad5687 | 988 | } |
dflet | 0:91ad48ad5687 | 989 | |
dflet | 0:91ad48ad5687 | 990 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 991 | return pdPASS; |
dflet | 0:91ad48ad5687 | 992 | } |
dflet | 0:91ad48ad5687 | 993 | else |
dflet | 0:91ad48ad5687 | 994 | { |
dflet | 0:91ad48ad5687 | 995 | if( xTicksToWait == ( TickType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 996 | { |
dflet | 0:91ad48ad5687 | 997 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 998 | traceQUEUE_RECEIVE_FAILED( pxQueue ); |
dflet | 0:91ad48ad5687 | 999 | return errQUEUE_EMPTY; |
dflet | 0:91ad48ad5687 | 1000 | } |
dflet | 0:91ad48ad5687 | 1001 | else if( xEntryTimeSet == pdFALSE ) |
dflet | 0:91ad48ad5687 | 1002 | { |
dflet | 0:91ad48ad5687 | 1003 | vTaskSetTimeOutState( &xTimeOut ); |
dflet | 0:91ad48ad5687 | 1004 | xEntryTimeSet = pdTRUE; |
dflet | 0:91ad48ad5687 | 1005 | } |
dflet | 0:91ad48ad5687 | 1006 | } |
dflet | 0:91ad48ad5687 | 1007 | } |
dflet | 0:91ad48ad5687 | 1008 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1009 | |
dflet | 0:91ad48ad5687 | 1010 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1011 | { |
dflet | 0:91ad48ad5687 | 1012 | if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 1013 | { |
dflet | 0:91ad48ad5687 | 1014 | if( prvIsQueueEmpty( pxQueue ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 1015 | { |
dflet | 0:91ad48ad5687 | 1016 | traceBLOCKING_ON_QUEUE_RECEIVE( pxQueue ); |
dflet | 0:91ad48ad5687 | 1017 | |
dflet | 0:91ad48ad5687 | 1018 | #if ( configUSE_MUTEXES == 1 ) |
dflet | 0:91ad48ad5687 | 1019 | { |
dflet | 0:91ad48ad5687 | 1020 | if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX ) |
dflet | 0:91ad48ad5687 | 1021 | { |
dflet | 0:91ad48ad5687 | 1022 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1023 | { |
dflet | 0:91ad48ad5687 | 1024 | vTaskPriorityInherit( ( void * ) pxQueue->pxMutexHolder ); |
dflet | 0:91ad48ad5687 | 1025 | } |
dflet | 0:91ad48ad5687 | 1026 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1027 | } |
dflet | 0:91ad48ad5687 | 1028 | else |
dflet | 0:91ad48ad5687 | 1029 | { |
dflet | 0:91ad48ad5687 | 1030 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1031 | } |
dflet | 0:91ad48ad5687 | 1032 | } |
dflet | 0:91ad48ad5687 | 1033 | #endif |
dflet | 0:91ad48ad5687 | 1034 | |
dflet | 0:91ad48ad5687 | 1035 | vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToReceive ), xTicksToWait ); |
dflet | 0:91ad48ad5687 | 1036 | portYIELD_WITHIN_API(); |
dflet | 0:91ad48ad5687 | 1037 | } |
dflet | 0:91ad48ad5687 | 1038 | else |
dflet | 0:91ad48ad5687 | 1039 | { |
dflet | 0:91ad48ad5687 | 1040 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1041 | } |
dflet | 0:91ad48ad5687 | 1042 | } |
dflet | 0:91ad48ad5687 | 1043 | else |
dflet | 0:91ad48ad5687 | 1044 | { |
dflet | 0:91ad48ad5687 | 1045 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1046 | traceQUEUE_RECEIVE_FAILED( pxQueue ); |
dflet | 0:91ad48ad5687 | 1047 | return errQUEUE_EMPTY; |
dflet | 0:91ad48ad5687 | 1048 | } |
dflet | 0:91ad48ad5687 | 1049 | } |
dflet | 0:91ad48ad5687 | 1050 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1051 | } |
dflet | 0:91ad48ad5687 | 1052 | } |
dflet | 0:91ad48ad5687 | 1053 | |
dflet | 0:91ad48ad5687 | 1054 | |
dflet | 0:91ad48ad5687 | 1055 | #endif /* configUSE_ALTERNATIVE_API */ |
dflet | 0:91ad48ad5687 | 1056 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 1057 | |
dflet | 0:91ad48ad5687 | 1058 | BaseType_t xQueueGenericSendFromISR( QueueHandle_t xQueue, const void * const pvItemToQueue, BaseType_t * const pxHigherPriorityTaskWoken, const BaseType_t xCopyPosition ) |
dflet | 0:91ad48ad5687 | 1059 | { |
dflet | 0:91ad48ad5687 | 1060 | BaseType_t xReturn; |
dflet | 0:91ad48ad5687 | 1061 | UBaseType_t uxSavedInterruptStatus; |
dflet | 0:91ad48ad5687 | 1062 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
dflet | 0:91ad48ad5687 | 1063 | |
dflet | 0:91ad48ad5687 | 1064 | configASSERT( pxQueue ); |
dflet | 0:91ad48ad5687 | 1065 | configASSERT( !( ( pvItemToQueue == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) ); |
dflet | 0:91ad48ad5687 | 1066 | configASSERT( !( ( xCopyPosition == queueOVERWRITE ) && ( pxQueue->uxLength != 1 ) ) ); |
dflet | 0:91ad48ad5687 | 1067 | |
dflet | 0:91ad48ad5687 | 1068 | /* RTOS ports that support interrupt nesting have the concept of a maximum |
dflet | 0:91ad48ad5687 | 1069 | system call (or maximum API call) interrupt priority. Interrupts that are |
dflet | 0:91ad48ad5687 | 1070 | above the maximum system call priority are kept permanently enabled, even |
dflet | 0:91ad48ad5687 | 1071 | when the RTOS kernel is in a critical section, but cannot make any calls to |
dflet | 0:91ad48ad5687 | 1072 | FreeRTOS API functions. If configASSERT() is defined in FreeRTOSConfig.h |
dflet | 0:91ad48ad5687 | 1073 | then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion |
dflet | 0:91ad48ad5687 | 1074 | failure if a FreeRTOS API function is called from an interrupt that has been |
dflet | 0:91ad48ad5687 | 1075 | assigned a priority above the configured maximum system call priority. |
dflet | 0:91ad48ad5687 | 1076 | Only FreeRTOS functions that end in FromISR can be called from interrupts |
dflet | 0:91ad48ad5687 | 1077 | that have been assigned a priority at or (logically) below the maximum |
dflet | 0:91ad48ad5687 | 1078 | system call interrupt priority. FreeRTOS maintains a separate interrupt |
dflet | 0:91ad48ad5687 | 1079 | safe API to ensure interrupt entry is as fast and as simple as possible. |
dflet | 0:91ad48ad5687 | 1080 | More information (albeit Cortex-M specific) is provided on the following |
dflet | 0:91ad48ad5687 | 1081 | link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */ |
dflet | 0:91ad48ad5687 | 1082 | portASSERT_IF_INTERRUPT_PRIORITY_INVALID(); |
dflet | 0:91ad48ad5687 | 1083 | |
dflet | 0:91ad48ad5687 | 1084 | /* Similar to xQueueGenericSend, except without blocking if there is no room |
dflet | 0:91ad48ad5687 | 1085 | in the queue. Also don't directly wake a task that was blocked on a queue |
dflet | 0:91ad48ad5687 | 1086 | read, instead return a flag to say whether a context switch is required or |
dflet | 0:91ad48ad5687 | 1087 | not (i.e. has a task with a higher priority than us been woken by this |
dflet | 0:91ad48ad5687 | 1088 | post). */ |
dflet | 0:91ad48ad5687 | 1089 | uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR(); |
dflet | 0:91ad48ad5687 | 1090 | { |
dflet | 0:91ad48ad5687 | 1091 | if( ( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) || ( xCopyPosition == queueOVERWRITE ) ) |
dflet | 0:91ad48ad5687 | 1092 | { |
dflet | 0:91ad48ad5687 | 1093 | traceQUEUE_SEND_FROM_ISR( pxQueue ); |
dflet | 0:91ad48ad5687 | 1094 | |
dflet | 0:91ad48ad5687 | 1095 | /* Semaphores use xQueueGiveFromISR(), so pxQueue will not be a |
dflet | 0:91ad48ad5687 | 1096 | semaphore or mutex. That means prvCopyDataToQueue() cannot result |
dflet | 0:91ad48ad5687 | 1097 | in a task disinheriting a priority and prvCopyDataToQueue() can be |
dflet | 0:91ad48ad5687 | 1098 | called here even though the disinherit function does not check if |
dflet | 0:91ad48ad5687 | 1099 | the scheduler is suspended before accessing the ready lists. */ |
dflet | 0:91ad48ad5687 | 1100 | ( void ) prvCopyDataToQueue( pxQueue, pvItemToQueue, xCopyPosition ); |
dflet | 0:91ad48ad5687 | 1101 | |
dflet | 0:91ad48ad5687 | 1102 | /* The event list is not altered if the queue is locked. This will |
dflet | 0:91ad48ad5687 | 1103 | be done when the queue is unlocked later. */ |
dflet | 0:91ad48ad5687 | 1104 | if( pxQueue->xTxLock == queueUNLOCKED ) |
dflet | 0:91ad48ad5687 | 1105 | { |
dflet | 0:91ad48ad5687 | 1106 | #if ( configUSE_QUEUE_SETS == 1 ) |
dflet | 0:91ad48ad5687 | 1107 | { |
dflet | 0:91ad48ad5687 | 1108 | if( pxQueue->pxQueueSetContainer != NULL ) |
dflet | 0:91ad48ad5687 | 1109 | { |
dflet | 0:91ad48ad5687 | 1110 | if( prvNotifyQueueSetContainer( pxQueue, xCopyPosition ) == pdTRUE ) |
dflet | 0:91ad48ad5687 | 1111 | { |
dflet | 0:91ad48ad5687 | 1112 | /* The queue is a member of a queue set, and posting |
dflet | 0:91ad48ad5687 | 1113 | to the queue set caused a higher priority task to |
dflet | 0:91ad48ad5687 | 1114 | unblock. A context switch is required. */ |
dflet | 0:91ad48ad5687 | 1115 | if( pxHigherPriorityTaskWoken != NULL ) |
dflet | 0:91ad48ad5687 | 1116 | { |
dflet | 0:91ad48ad5687 | 1117 | *pxHigherPriorityTaskWoken = pdTRUE; |
dflet | 0:91ad48ad5687 | 1118 | } |
dflet | 0:91ad48ad5687 | 1119 | else |
dflet | 0:91ad48ad5687 | 1120 | { |
dflet | 0:91ad48ad5687 | 1121 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1122 | } |
dflet | 0:91ad48ad5687 | 1123 | } |
dflet | 0:91ad48ad5687 | 1124 | else |
dflet | 0:91ad48ad5687 | 1125 | { |
dflet | 0:91ad48ad5687 | 1126 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1127 | } |
dflet | 0:91ad48ad5687 | 1128 | } |
dflet | 0:91ad48ad5687 | 1129 | else |
dflet | 0:91ad48ad5687 | 1130 | { |
dflet | 0:91ad48ad5687 | 1131 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 1132 | { |
dflet | 0:91ad48ad5687 | 1133 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 1134 | { |
dflet | 0:91ad48ad5687 | 1135 | /* The task waiting has a higher priority so |
dflet | 0:91ad48ad5687 | 1136 | record that a context switch is required. */ |
dflet | 0:91ad48ad5687 | 1137 | if( pxHigherPriorityTaskWoken != NULL ) |
dflet | 0:91ad48ad5687 | 1138 | { |
dflet | 0:91ad48ad5687 | 1139 | *pxHigherPriorityTaskWoken = pdTRUE; |
dflet | 0:91ad48ad5687 | 1140 | } |
dflet | 0:91ad48ad5687 | 1141 | else |
dflet | 0:91ad48ad5687 | 1142 | { |
dflet | 0:91ad48ad5687 | 1143 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1144 | } |
dflet | 0:91ad48ad5687 | 1145 | } |
dflet | 0:91ad48ad5687 | 1146 | else |
dflet | 0:91ad48ad5687 | 1147 | { |
dflet | 0:91ad48ad5687 | 1148 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1149 | } |
dflet | 0:91ad48ad5687 | 1150 | } |
dflet | 0:91ad48ad5687 | 1151 | else |
dflet | 0:91ad48ad5687 | 1152 | { |
dflet | 0:91ad48ad5687 | 1153 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1154 | } |
dflet | 0:91ad48ad5687 | 1155 | } |
dflet | 0:91ad48ad5687 | 1156 | } |
dflet | 0:91ad48ad5687 | 1157 | #else /* configUSE_QUEUE_SETS */ |
dflet | 0:91ad48ad5687 | 1158 | { |
dflet | 0:91ad48ad5687 | 1159 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 1160 | { |
dflet | 0:91ad48ad5687 | 1161 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 1162 | { |
dflet | 0:91ad48ad5687 | 1163 | /* The task waiting has a higher priority so record that a |
dflet | 0:91ad48ad5687 | 1164 | context switch is required. */ |
dflet | 0:91ad48ad5687 | 1165 | if( pxHigherPriorityTaskWoken != NULL ) |
dflet | 0:91ad48ad5687 | 1166 | { |
dflet | 0:91ad48ad5687 | 1167 | *pxHigherPriorityTaskWoken = pdTRUE; |
dflet | 0:91ad48ad5687 | 1168 | } |
dflet | 0:91ad48ad5687 | 1169 | else |
dflet | 0:91ad48ad5687 | 1170 | { |
dflet | 0:91ad48ad5687 | 1171 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1172 | } |
dflet | 0:91ad48ad5687 | 1173 | } |
dflet | 0:91ad48ad5687 | 1174 | else |
dflet | 0:91ad48ad5687 | 1175 | { |
dflet | 0:91ad48ad5687 | 1176 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1177 | } |
dflet | 0:91ad48ad5687 | 1178 | } |
dflet | 0:91ad48ad5687 | 1179 | else |
dflet | 0:91ad48ad5687 | 1180 | { |
dflet | 0:91ad48ad5687 | 1181 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1182 | } |
dflet | 0:91ad48ad5687 | 1183 | } |
dflet | 0:91ad48ad5687 | 1184 | #endif /* configUSE_QUEUE_SETS */ |
dflet | 0:91ad48ad5687 | 1185 | } |
dflet | 0:91ad48ad5687 | 1186 | else |
dflet | 0:91ad48ad5687 | 1187 | { |
dflet | 0:91ad48ad5687 | 1188 | /* Increment the lock count so the task that unlocks the queue |
dflet | 0:91ad48ad5687 | 1189 | knows that data was posted while it was locked. */ |
dflet | 0:91ad48ad5687 | 1190 | ++( pxQueue->xTxLock ); |
dflet | 0:91ad48ad5687 | 1191 | } |
dflet | 0:91ad48ad5687 | 1192 | |
dflet | 0:91ad48ad5687 | 1193 | xReturn = pdPASS; |
dflet | 0:91ad48ad5687 | 1194 | } |
dflet | 0:91ad48ad5687 | 1195 | else |
dflet | 0:91ad48ad5687 | 1196 | { |
dflet | 0:91ad48ad5687 | 1197 | traceQUEUE_SEND_FROM_ISR_FAILED( pxQueue ); |
dflet | 0:91ad48ad5687 | 1198 | xReturn = errQUEUE_FULL; |
dflet | 0:91ad48ad5687 | 1199 | } |
dflet | 0:91ad48ad5687 | 1200 | } |
dflet | 0:91ad48ad5687 | 1201 | portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus ); |
dflet | 0:91ad48ad5687 | 1202 | |
dflet | 0:91ad48ad5687 | 1203 | return xReturn; |
dflet | 0:91ad48ad5687 | 1204 | } |
dflet | 0:91ad48ad5687 | 1205 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 1206 | |
dflet | 0:91ad48ad5687 | 1207 | BaseType_t xQueueGiveFromISR( QueueHandle_t xQueue, BaseType_t * const pxHigherPriorityTaskWoken ) |
dflet | 0:91ad48ad5687 | 1208 | { |
dflet | 0:91ad48ad5687 | 1209 | BaseType_t xReturn; |
dflet | 0:91ad48ad5687 | 1210 | UBaseType_t uxSavedInterruptStatus; |
dflet | 0:91ad48ad5687 | 1211 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
dflet | 0:91ad48ad5687 | 1212 | |
dflet | 0:91ad48ad5687 | 1213 | /* Similar to xQueueGenericSendFromISR() but used with semaphores where the |
dflet | 0:91ad48ad5687 | 1214 | item size is 0. Don't directly wake a task that was blocked on a queue |
dflet | 0:91ad48ad5687 | 1215 | read, instead return a flag to say whether a context switch is required or |
dflet | 0:91ad48ad5687 | 1216 | not (i.e. has a task with a higher priority than us been woken by this |
dflet | 0:91ad48ad5687 | 1217 | post). */ |
dflet | 0:91ad48ad5687 | 1218 | |
dflet | 0:91ad48ad5687 | 1219 | configASSERT( pxQueue ); |
dflet | 0:91ad48ad5687 | 1220 | |
dflet | 0:91ad48ad5687 | 1221 | /* xQueueGenericSendFromISR() should be used instead of xQueueGiveFromISR() |
dflet | 0:91ad48ad5687 | 1222 | if the item size is not 0. */ |
dflet | 0:91ad48ad5687 | 1223 | configASSERT( pxQueue->uxItemSize == 0 ); |
dflet | 0:91ad48ad5687 | 1224 | |
dflet | 0:91ad48ad5687 | 1225 | /* Normally a mutex would not be given from an interrupt, and doing so is |
dflet | 0:91ad48ad5687 | 1226 | definitely wrong if there is a mutex holder, as priority inheritance makes |
dflet | 0:91ad48ad5687 | 1227 | no sense for an interrupt, only for tasks. */ |
dflet | 0:91ad48ad5687 | 1228 | configASSERT( !( ( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX ) && ( pxQueue->pxMutexHolder != NULL ) ) ); |
dflet | 0:91ad48ad5687 | 1229 | |
dflet | 0:91ad48ad5687 | 1230 | /* RTOS ports that support interrupt nesting have the concept of a maximum |
dflet | 0:91ad48ad5687 | 1231 | system call (or maximum API call) interrupt priority. Interrupts that are |
dflet | 0:91ad48ad5687 | 1232 | above the maximum system call priority are kept permanently enabled, even |
dflet | 0:91ad48ad5687 | 1233 | when the RTOS kernel is in a critical section, but cannot make any calls to |
dflet | 0:91ad48ad5687 | 1234 | FreeRTOS API functions. If configASSERT() is defined in FreeRTOSConfig.h |
dflet | 0:91ad48ad5687 | 1235 | then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion |
dflet | 0:91ad48ad5687 | 1236 | failure if a FreeRTOS API function is called from an interrupt that has been |
dflet | 0:91ad48ad5687 | 1237 | assigned a priority above the configured maximum system call priority. |
dflet | 0:91ad48ad5687 | 1238 | Only FreeRTOS functions that end in FromISR can be called from interrupts |
dflet | 0:91ad48ad5687 | 1239 | that have been assigned a priority at or (logically) below the maximum |
dflet | 0:91ad48ad5687 | 1240 | system call interrupt priority. FreeRTOS maintains a separate interrupt |
dflet | 0:91ad48ad5687 | 1241 | safe API to ensure interrupt entry is as fast and as simple as possible. |
dflet | 0:91ad48ad5687 | 1242 | More information (albeit Cortex-M specific) is provided on the following |
dflet | 0:91ad48ad5687 | 1243 | link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */ |
dflet | 0:91ad48ad5687 | 1244 | portASSERT_IF_INTERRUPT_PRIORITY_INVALID(); |
dflet | 0:91ad48ad5687 | 1245 | |
dflet | 0:91ad48ad5687 | 1246 | uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR(); |
dflet | 0:91ad48ad5687 | 1247 | { |
dflet | 0:91ad48ad5687 | 1248 | /* When the queue is used to implement a semaphore, no data is ever |
dflet | 0:91ad48ad5687 | 1249 | moved through the queue, but it is still valid to see if the queue 'has |
dflet | 0:91ad48ad5687 | 1250 | space'. */ |
dflet | 0:91ad48ad5687 | 1251 | if( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) |
dflet | 0:91ad48ad5687 | 1252 | { |
dflet | 0:91ad48ad5687 | 1253 | traceQUEUE_SEND_FROM_ISR( pxQueue ); |
dflet | 0:91ad48ad5687 | 1254 | |
dflet | 0:91ad48ad5687 | 1255 | /* A task can only have an inherited priority if it is a mutex |
dflet | 0:91ad48ad5687 | 1256 | holder - and if there is a mutex holder then the mutex cannot be |
dflet | 0:91ad48ad5687 | 1257 | given from an ISR. As this is the ISR version of the function it |
dflet | 0:91ad48ad5687 | 1258 | can be assumed there is no mutex holder and no need to determine if |
dflet | 0:91ad48ad5687 | 1259 | priority disinheritance is needed. Simply increase the count of |
dflet | 0:91ad48ad5687 | 1260 | messages (semaphores) available. */ |
dflet | 0:91ad48ad5687 | 1261 | ++( pxQueue->uxMessagesWaiting ); |
dflet | 0:91ad48ad5687 | 1262 | |
dflet | 0:91ad48ad5687 | 1263 | /* The event list is not altered if the queue is locked. This will |
dflet | 0:91ad48ad5687 | 1264 | be done when the queue is unlocked later. */ |
dflet | 0:91ad48ad5687 | 1265 | if( pxQueue->xTxLock == queueUNLOCKED ) |
dflet | 0:91ad48ad5687 | 1266 | { |
dflet | 0:91ad48ad5687 | 1267 | #if ( configUSE_QUEUE_SETS == 1 ) |
dflet | 0:91ad48ad5687 | 1268 | { |
dflet | 0:91ad48ad5687 | 1269 | if( pxQueue->pxQueueSetContainer != NULL ) |
dflet | 0:91ad48ad5687 | 1270 | { |
dflet | 0:91ad48ad5687 | 1271 | if( prvNotifyQueueSetContainer( pxQueue, queueSEND_TO_BACK ) == pdTRUE ) |
dflet | 0:91ad48ad5687 | 1272 | { |
dflet | 0:91ad48ad5687 | 1273 | /* The semaphore is a member of a queue set, and |
dflet | 0:91ad48ad5687 | 1274 | posting to the queue set caused a higher priority |
dflet | 0:91ad48ad5687 | 1275 | task to unblock. A context switch is required. */ |
dflet | 0:91ad48ad5687 | 1276 | if( pxHigherPriorityTaskWoken != NULL ) |
dflet | 0:91ad48ad5687 | 1277 | { |
dflet | 0:91ad48ad5687 | 1278 | *pxHigherPriorityTaskWoken = pdTRUE; |
dflet | 0:91ad48ad5687 | 1279 | } |
dflet | 0:91ad48ad5687 | 1280 | else |
dflet | 0:91ad48ad5687 | 1281 | { |
dflet | 0:91ad48ad5687 | 1282 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1283 | } |
dflet | 0:91ad48ad5687 | 1284 | } |
dflet | 0:91ad48ad5687 | 1285 | else |
dflet | 0:91ad48ad5687 | 1286 | { |
dflet | 0:91ad48ad5687 | 1287 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1288 | } |
dflet | 0:91ad48ad5687 | 1289 | } |
dflet | 0:91ad48ad5687 | 1290 | else |
dflet | 0:91ad48ad5687 | 1291 | { |
dflet | 0:91ad48ad5687 | 1292 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 1293 | { |
dflet | 0:91ad48ad5687 | 1294 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 1295 | { |
dflet | 0:91ad48ad5687 | 1296 | /* The task waiting has a higher priority so |
dflet | 0:91ad48ad5687 | 1297 | record that a context switch is required. */ |
dflet | 0:91ad48ad5687 | 1298 | if( pxHigherPriorityTaskWoken != NULL ) |
dflet | 0:91ad48ad5687 | 1299 | { |
dflet | 0:91ad48ad5687 | 1300 | *pxHigherPriorityTaskWoken = pdTRUE; |
dflet | 0:91ad48ad5687 | 1301 | } |
dflet | 0:91ad48ad5687 | 1302 | else |
dflet | 0:91ad48ad5687 | 1303 | { |
dflet | 0:91ad48ad5687 | 1304 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1305 | } |
dflet | 0:91ad48ad5687 | 1306 | } |
dflet | 0:91ad48ad5687 | 1307 | else |
dflet | 0:91ad48ad5687 | 1308 | { |
dflet | 0:91ad48ad5687 | 1309 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1310 | } |
dflet | 0:91ad48ad5687 | 1311 | } |
dflet | 0:91ad48ad5687 | 1312 | else |
dflet | 0:91ad48ad5687 | 1313 | { |
dflet | 0:91ad48ad5687 | 1314 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1315 | } |
dflet | 0:91ad48ad5687 | 1316 | } |
dflet | 0:91ad48ad5687 | 1317 | } |
dflet | 0:91ad48ad5687 | 1318 | #else /* configUSE_QUEUE_SETS */ |
dflet | 0:91ad48ad5687 | 1319 | { |
dflet | 0:91ad48ad5687 | 1320 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 1321 | { |
dflet | 0:91ad48ad5687 | 1322 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 1323 | { |
dflet | 0:91ad48ad5687 | 1324 | /* The task waiting has a higher priority so record that a |
dflet | 0:91ad48ad5687 | 1325 | context switch is required. */ |
dflet | 0:91ad48ad5687 | 1326 | if( pxHigherPriorityTaskWoken != NULL ) |
dflet | 0:91ad48ad5687 | 1327 | { |
dflet | 0:91ad48ad5687 | 1328 | *pxHigherPriorityTaskWoken = pdTRUE; |
dflet | 0:91ad48ad5687 | 1329 | } |
dflet | 0:91ad48ad5687 | 1330 | else |
dflet | 0:91ad48ad5687 | 1331 | { |
dflet | 0:91ad48ad5687 | 1332 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1333 | } |
dflet | 0:91ad48ad5687 | 1334 | } |
dflet | 0:91ad48ad5687 | 1335 | else |
dflet | 0:91ad48ad5687 | 1336 | { |
dflet | 0:91ad48ad5687 | 1337 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1338 | } |
dflet | 0:91ad48ad5687 | 1339 | } |
dflet | 0:91ad48ad5687 | 1340 | else |
dflet | 0:91ad48ad5687 | 1341 | { |
dflet | 0:91ad48ad5687 | 1342 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1343 | } |
dflet | 0:91ad48ad5687 | 1344 | } |
dflet | 0:91ad48ad5687 | 1345 | #endif /* configUSE_QUEUE_SETS */ |
dflet | 0:91ad48ad5687 | 1346 | } |
dflet | 0:91ad48ad5687 | 1347 | else |
dflet | 0:91ad48ad5687 | 1348 | { |
dflet | 0:91ad48ad5687 | 1349 | /* Increment the lock count so the task that unlocks the queue |
dflet | 0:91ad48ad5687 | 1350 | knows that data was posted while it was locked. */ |
dflet | 0:91ad48ad5687 | 1351 | ++( pxQueue->xTxLock ); |
dflet | 0:91ad48ad5687 | 1352 | } |
dflet | 0:91ad48ad5687 | 1353 | |
dflet | 0:91ad48ad5687 | 1354 | xReturn = pdPASS; |
dflet | 0:91ad48ad5687 | 1355 | } |
dflet | 0:91ad48ad5687 | 1356 | else |
dflet | 0:91ad48ad5687 | 1357 | { |
dflet | 0:91ad48ad5687 | 1358 | traceQUEUE_SEND_FROM_ISR_FAILED( pxQueue ); |
dflet | 0:91ad48ad5687 | 1359 | xReturn = errQUEUE_FULL; |
dflet | 0:91ad48ad5687 | 1360 | } |
dflet | 0:91ad48ad5687 | 1361 | } |
dflet | 0:91ad48ad5687 | 1362 | portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus ); |
dflet | 0:91ad48ad5687 | 1363 | |
dflet | 0:91ad48ad5687 | 1364 | return xReturn; |
dflet | 0:91ad48ad5687 | 1365 | } |
dflet | 0:91ad48ad5687 | 1366 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 1367 | |
dflet | 0:91ad48ad5687 | 1368 | BaseType_t xQueueGenericReceive( QueueHandle_t xQueue, void * const pvBuffer, TickType_t xTicksToWait, const BaseType_t xJustPeeking ) |
dflet | 0:91ad48ad5687 | 1369 | { |
dflet | 0:91ad48ad5687 | 1370 | BaseType_t xEntryTimeSet = pdFALSE; |
dflet | 0:91ad48ad5687 | 1371 | TimeOut_t xTimeOut; |
dflet | 0:91ad48ad5687 | 1372 | int8_t *pcOriginalReadPosition; |
dflet | 0:91ad48ad5687 | 1373 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
dflet | 0:91ad48ad5687 | 1374 | |
dflet | 0:91ad48ad5687 | 1375 | configASSERT( pxQueue ); |
dflet | 0:91ad48ad5687 | 1376 | configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) ); |
dflet | 0:91ad48ad5687 | 1377 | #if ( ( INCLUDE_xTaskGetSchedulerState == 1 ) || ( configUSE_TIMERS == 1 ) ) |
dflet | 0:91ad48ad5687 | 1378 | { |
dflet | 0:91ad48ad5687 | 1379 | configASSERT( !( ( xTaskGetSchedulerState() == taskSCHEDULER_SUSPENDED ) && ( xTicksToWait != 0 ) ) ); |
dflet | 0:91ad48ad5687 | 1380 | } |
dflet | 0:91ad48ad5687 | 1381 | #endif |
dflet | 0:91ad48ad5687 | 1382 | |
dflet | 0:91ad48ad5687 | 1383 | /* This function relaxes the coding standard somewhat to allow return |
dflet | 0:91ad48ad5687 | 1384 | statements within the function itself. This is done in the interest |
dflet | 0:91ad48ad5687 | 1385 | of execution time efficiency. */ |
dflet | 0:91ad48ad5687 | 1386 | |
dflet | 0:91ad48ad5687 | 1387 | for( ;; ) |
dflet | 0:91ad48ad5687 | 1388 | { |
dflet | 0:91ad48ad5687 | 1389 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1390 | { |
dflet | 0:91ad48ad5687 | 1391 | /* Is there data in the queue now? To be running the calling task |
dflet | 0:91ad48ad5687 | 1392 | must be the highest priority task wanting to access the queue. */ |
dflet | 0:91ad48ad5687 | 1393 | if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 1394 | { |
dflet | 0:91ad48ad5687 | 1395 | /* Remember the read position in case the queue is only being |
dflet | 0:91ad48ad5687 | 1396 | peeked. */ |
dflet | 0:91ad48ad5687 | 1397 | pcOriginalReadPosition = pxQueue->u.pcReadFrom; |
dflet | 0:91ad48ad5687 | 1398 | |
dflet | 0:91ad48ad5687 | 1399 | prvCopyDataFromQueue( pxQueue, pvBuffer ); |
dflet | 0:91ad48ad5687 | 1400 | |
dflet | 0:91ad48ad5687 | 1401 | if( xJustPeeking == pdFALSE ) |
dflet | 0:91ad48ad5687 | 1402 | { |
dflet | 0:91ad48ad5687 | 1403 | traceQUEUE_RECEIVE( pxQueue ); |
dflet | 0:91ad48ad5687 | 1404 | |
dflet | 0:91ad48ad5687 | 1405 | /* Actually removing data, not just peeking. */ |
dflet | 0:91ad48ad5687 | 1406 | --( pxQueue->uxMessagesWaiting ); |
dflet | 0:91ad48ad5687 | 1407 | |
dflet | 0:91ad48ad5687 | 1408 | #if ( configUSE_MUTEXES == 1 ) |
dflet | 0:91ad48ad5687 | 1409 | { |
dflet | 0:91ad48ad5687 | 1410 | if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX ) |
dflet | 0:91ad48ad5687 | 1411 | { |
dflet | 0:91ad48ad5687 | 1412 | /* Record the information required to implement |
dflet | 0:91ad48ad5687 | 1413 | priority inheritance should it become necessary. */ |
dflet | 0:91ad48ad5687 | 1414 | pxQueue->pxMutexHolder = ( int8_t * ) pvTaskIncrementMutexHeldCount(); /*lint !e961 Cast is not redundant as TaskHandle_t is a typedef. */ |
dflet | 0:91ad48ad5687 | 1415 | } |
dflet | 0:91ad48ad5687 | 1416 | else |
dflet | 0:91ad48ad5687 | 1417 | { |
dflet | 0:91ad48ad5687 | 1418 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1419 | } |
dflet | 0:91ad48ad5687 | 1420 | } |
dflet | 0:91ad48ad5687 | 1421 | #endif /* configUSE_MUTEXES */ |
dflet | 0:91ad48ad5687 | 1422 | |
dflet | 0:91ad48ad5687 | 1423 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 1424 | { |
dflet | 0:91ad48ad5687 | 1425 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) == pdTRUE ) |
dflet | 0:91ad48ad5687 | 1426 | { |
dflet | 0:91ad48ad5687 | 1427 | queueYIELD_IF_USING_PREEMPTION(); |
dflet | 0:91ad48ad5687 | 1428 | } |
dflet | 0:91ad48ad5687 | 1429 | else |
dflet | 0:91ad48ad5687 | 1430 | { |
dflet | 0:91ad48ad5687 | 1431 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1432 | } |
dflet | 0:91ad48ad5687 | 1433 | } |
dflet | 0:91ad48ad5687 | 1434 | else |
dflet | 0:91ad48ad5687 | 1435 | { |
dflet | 0:91ad48ad5687 | 1436 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1437 | } |
dflet | 0:91ad48ad5687 | 1438 | } |
dflet | 0:91ad48ad5687 | 1439 | else |
dflet | 0:91ad48ad5687 | 1440 | { |
dflet | 0:91ad48ad5687 | 1441 | traceQUEUE_PEEK( pxQueue ); |
dflet | 0:91ad48ad5687 | 1442 | |
dflet | 0:91ad48ad5687 | 1443 | /* The data is not being removed, so reset the read |
dflet | 0:91ad48ad5687 | 1444 | pointer. */ |
dflet | 0:91ad48ad5687 | 1445 | pxQueue->u.pcReadFrom = pcOriginalReadPosition; |
dflet | 0:91ad48ad5687 | 1446 | |
dflet | 0:91ad48ad5687 | 1447 | /* The data is being left in the queue, so see if there are |
dflet | 0:91ad48ad5687 | 1448 | any other tasks waiting for the data. */ |
dflet | 0:91ad48ad5687 | 1449 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 1450 | { |
dflet | 0:91ad48ad5687 | 1451 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 1452 | { |
dflet | 0:91ad48ad5687 | 1453 | /* The task waiting has a higher priority than this task. */ |
dflet | 0:91ad48ad5687 | 1454 | queueYIELD_IF_USING_PREEMPTION(); |
dflet | 0:91ad48ad5687 | 1455 | } |
dflet | 0:91ad48ad5687 | 1456 | else |
dflet | 0:91ad48ad5687 | 1457 | { |
dflet | 0:91ad48ad5687 | 1458 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1459 | } |
dflet | 0:91ad48ad5687 | 1460 | } |
dflet | 0:91ad48ad5687 | 1461 | else |
dflet | 0:91ad48ad5687 | 1462 | { |
dflet | 0:91ad48ad5687 | 1463 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1464 | } |
dflet | 0:91ad48ad5687 | 1465 | } |
dflet | 0:91ad48ad5687 | 1466 | |
dflet | 0:91ad48ad5687 | 1467 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1468 | return pdPASS; |
dflet | 0:91ad48ad5687 | 1469 | } |
dflet | 0:91ad48ad5687 | 1470 | else |
dflet | 0:91ad48ad5687 | 1471 | { |
dflet | 0:91ad48ad5687 | 1472 | if( xTicksToWait == ( TickType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 1473 | { |
dflet | 0:91ad48ad5687 | 1474 | /* The queue was empty and no block time is specified (or |
dflet | 0:91ad48ad5687 | 1475 | the block time has expired) so leave now. */ |
dflet | 0:91ad48ad5687 | 1476 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1477 | traceQUEUE_RECEIVE_FAILED( pxQueue ); |
dflet | 0:91ad48ad5687 | 1478 | return errQUEUE_EMPTY; |
dflet | 0:91ad48ad5687 | 1479 | } |
dflet | 0:91ad48ad5687 | 1480 | else if( xEntryTimeSet == pdFALSE ) |
dflet | 0:91ad48ad5687 | 1481 | { |
dflet | 0:91ad48ad5687 | 1482 | /* The queue was empty and a block time was specified so |
dflet | 0:91ad48ad5687 | 1483 | configure the timeout structure. */ |
dflet | 0:91ad48ad5687 | 1484 | vTaskSetTimeOutState( &xTimeOut ); |
dflet | 0:91ad48ad5687 | 1485 | xEntryTimeSet = pdTRUE; |
dflet | 0:91ad48ad5687 | 1486 | } |
dflet | 0:91ad48ad5687 | 1487 | else |
dflet | 0:91ad48ad5687 | 1488 | { |
dflet | 0:91ad48ad5687 | 1489 | /* Entry time was already set. */ |
dflet | 0:91ad48ad5687 | 1490 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1491 | } |
dflet | 0:91ad48ad5687 | 1492 | } |
dflet | 0:91ad48ad5687 | 1493 | } |
dflet | 0:91ad48ad5687 | 1494 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1495 | |
dflet | 0:91ad48ad5687 | 1496 | /* Interrupts and other tasks can send to and receive from the queue |
dflet | 0:91ad48ad5687 | 1497 | now the critical section has been exited. */ |
dflet | 0:91ad48ad5687 | 1498 | |
dflet | 0:91ad48ad5687 | 1499 | vTaskSuspendAll(); |
dflet | 0:91ad48ad5687 | 1500 | prvLockQueue( pxQueue ); |
dflet | 0:91ad48ad5687 | 1501 | |
dflet | 0:91ad48ad5687 | 1502 | /* Update the timeout state to see if it has expired yet. */ |
dflet | 0:91ad48ad5687 | 1503 | if( xTaskCheckForTimeOut( &xTimeOut, &xTicksToWait ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 1504 | { |
dflet | 0:91ad48ad5687 | 1505 | if( prvIsQueueEmpty( pxQueue ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 1506 | { |
dflet | 0:91ad48ad5687 | 1507 | traceBLOCKING_ON_QUEUE_RECEIVE( pxQueue ); |
dflet | 0:91ad48ad5687 | 1508 | |
dflet | 0:91ad48ad5687 | 1509 | #if ( configUSE_MUTEXES == 1 ) |
dflet | 0:91ad48ad5687 | 1510 | { |
dflet | 0:91ad48ad5687 | 1511 | if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX ) |
dflet | 0:91ad48ad5687 | 1512 | { |
dflet | 0:91ad48ad5687 | 1513 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1514 | { |
dflet | 0:91ad48ad5687 | 1515 | vTaskPriorityInherit( ( void * ) pxQueue->pxMutexHolder ); |
dflet | 0:91ad48ad5687 | 1516 | } |
dflet | 0:91ad48ad5687 | 1517 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1518 | } |
dflet | 0:91ad48ad5687 | 1519 | else |
dflet | 0:91ad48ad5687 | 1520 | { |
dflet | 0:91ad48ad5687 | 1521 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1522 | } |
dflet | 0:91ad48ad5687 | 1523 | } |
dflet | 0:91ad48ad5687 | 1524 | #endif |
dflet | 0:91ad48ad5687 | 1525 | |
dflet | 0:91ad48ad5687 | 1526 | vTaskPlaceOnEventList( &( pxQueue->xTasksWaitingToReceive ), xTicksToWait ); |
dflet | 0:91ad48ad5687 | 1527 | prvUnlockQueue( pxQueue ); |
dflet | 0:91ad48ad5687 | 1528 | if( xTaskResumeAll() == pdFALSE ) |
dflet | 0:91ad48ad5687 | 1529 | { |
dflet | 0:91ad48ad5687 | 1530 | portYIELD_WITHIN_API(); |
dflet | 0:91ad48ad5687 | 1531 | } |
dflet | 0:91ad48ad5687 | 1532 | else |
dflet | 0:91ad48ad5687 | 1533 | { |
dflet | 0:91ad48ad5687 | 1534 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1535 | } |
dflet | 0:91ad48ad5687 | 1536 | } |
dflet | 0:91ad48ad5687 | 1537 | else |
dflet | 0:91ad48ad5687 | 1538 | { |
dflet | 0:91ad48ad5687 | 1539 | /* Try again. */ |
dflet | 0:91ad48ad5687 | 1540 | prvUnlockQueue( pxQueue ); |
dflet | 0:91ad48ad5687 | 1541 | ( void ) xTaskResumeAll(); |
dflet | 0:91ad48ad5687 | 1542 | } |
dflet | 0:91ad48ad5687 | 1543 | } |
dflet | 0:91ad48ad5687 | 1544 | else |
dflet | 0:91ad48ad5687 | 1545 | { |
dflet | 0:91ad48ad5687 | 1546 | prvUnlockQueue( pxQueue ); |
dflet | 0:91ad48ad5687 | 1547 | ( void ) xTaskResumeAll(); |
dflet | 0:91ad48ad5687 | 1548 | traceQUEUE_RECEIVE_FAILED( pxQueue ); |
dflet | 0:91ad48ad5687 | 1549 | return errQUEUE_EMPTY; |
dflet | 0:91ad48ad5687 | 1550 | } |
dflet | 0:91ad48ad5687 | 1551 | } |
dflet | 0:91ad48ad5687 | 1552 | } |
dflet | 0:91ad48ad5687 | 1553 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 1554 | |
dflet | 0:91ad48ad5687 | 1555 | BaseType_t xQueueReceiveFromISR( QueueHandle_t xQueue, void * const pvBuffer, BaseType_t * const pxHigherPriorityTaskWoken ) |
dflet | 0:91ad48ad5687 | 1556 | { |
dflet | 0:91ad48ad5687 | 1557 | BaseType_t xReturn; |
dflet | 0:91ad48ad5687 | 1558 | UBaseType_t uxSavedInterruptStatus; |
dflet | 0:91ad48ad5687 | 1559 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
dflet | 0:91ad48ad5687 | 1560 | |
dflet | 0:91ad48ad5687 | 1561 | configASSERT( pxQueue ); |
dflet | 0:91ad48ad5687 | 1562 | configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) ); |
dflet | 0:91ad48ad5687 | 1563 | |
dflet | 0:91ad48ad5687 | 1564 | /* RTOS ports that support interrupt nesting have the concept of a maximum |
dflet | 0:91ad48ad5687 | 1565 | system call (or maximum API call) interrupt priority. Interrupts that are |
dflet | 0:91ad48ad5687 | 1566 | above the maximum system call priority are kept permanently enabled, even |
dflet | 0:91ad48ad5687 | 1567 | when the RTOS kernel is in a critical section, but cannot make any calls to |
dflet | 0:91ad48ad5687 | 1568 | FreeRTOS API functions. If configASSERT() is defined in FreeRTOSConfig.h |
dflet | 0:91ad48ad5687 | 1569 | then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion |
dflet | 0:91ad48ad5687 | 1570 | failure if a FreeRTOS API function is called from an interrupt that has been |
dflet | 0:91ad48ad5687 | 1571 | assigned a priority above the configured maximum system call priority. |
dflet | 0:91ad48ad5687 | 1572 | Only FreeRTOS functions that end in FromISR can be called from interrupts |
dflet | 0:91ad48ad5687 | 1573 | that have been assigned a priority at or (logically) below the maximum |
dflet | 0:91ad48ad5687 | 1574 | system call interrupt priority. FreeRTOS maintains a separate interrupt |
dflet | 0:91ad48ad5687 | 1575 | safe API to ensure interrupt entry is as fast and as simple as possible. |
dflet | 0:91ad48ad5687 | 1576 | More information (albeit Cortex-M specific) is provided on the following |
dflet | 0:91ad48ad5687 | 1577 | link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */ |
dflet | 0:91ad48ad5687 | 1578 | portASSERT_IF_INTERRUPT_PRIORITY_INVALID(); |
dflet | 0:91ad48ad5687 | 1579 | |
dflet | 0:91ad48ad5687 | 1580 | uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR(); |
dflet | 0:91ad48ad5687 | 1581 | { |
dflet | 0:91ad48ad5687 | 1582 | /* Cannot block in an ISR, so check there is data available. */ |
dflet | 0:91ad48ad5687 | 1583 | if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 1584 | { |
dflet | 0:91ad48ad5687 | 1585 | traceQUEUE_RECEIVE_FROM_ISR( pxQueue ); |
dflet | 0:91ad48ad5687 | 1586 | |
dflet | 0:91ad48ad5687 | 1587 | prvCopyDataFromQueue( pxQueue, pvBuffer ); |
dflet | 0:91ad48ad5687 | 1588 | --( pxQueue->uxMessagesWaiting ); |
dflet | 0:91ad48ad5687 | 1589 | |
dflet | 0:91ad48ad5687 | 1590 | /* If the queue is locked the event list will not be modified. |
dflet | 0:91ad48ad5687 | 1591 | Instead update the lock count so the task that unlocks the queue |
dflet | 0:91ad48ad5687 | 1592 | will know that an ISR has removed data while the queue was |
dflet | 0:91ad48ad5687 | 1593 | locked. */ |
dflet | 0:91ad48ad5687 | 1594 | if( pxQueue->xRxLock == queueUNLOCKED ) |
dflet | 0:91ad48ad5687 | 1595 | { |
dflet | 0:91ad48ad5687 | 1596 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 1597 | { |
dflet | 0:91ad48ad5687 | 1598 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 1599 | { |
dflet | 0:91ad48ad5687 | 1600 | /* The task waiting has a higher priority than us so |
dflet | 0:91ad48ad5687 | 1601 | force a context switch. */ |
dflet | 0:91ad48ad5687 | 1602 | if( pxHigherPriorityTaskWoken != NULL ) |
dflet | 0:91ad48ad5687 | 1603 | { |
dflet | 0:91ad48ad5687 | 1604 | *pxHigherPriorityTaskWoken = pdTRUE; |
dflet | 0:91ad48ad5687 | 1605 | } |
dflet | 0:91ad48ad5687 | 1606 | else |
dflet | 0:91ad48ad5687 | 1607 | { |
dflet | 0:91ad48ad5687 | 1608 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1609 | } |
dflet | 0:91ad48ad5687 | 1610 | } |
dflet | 0:91ad48ad5687 | 1611 | else |
dflet | 0:91ad48ad5687 | 1612 | { |
dflet | 0:91ad48ad5687 | 1613 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1614 | } |
dflet | 0:91ad48ad5687 | 1615 | } |
dflet | 0:91ad48ad5687 | 1616 | else |
dflet | 0:91ad48ad5687 | 1617 | { |
dflet | 0:91ad48ad5687 | 1618 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1619 | } |
dflet | 0:91ad48ad5687 | 1620 | } |
dflet | 0:91ad48ad5687 | 1621 | else |
dflet | 0:91ad48ad5687 | 1622 | { |
dflet | 0:91ad48ad5687 | 1623 | /* Increment the lock count so the task that unlocks the queue |
dflet | 0:91ad48ad5687 | 1624 | knows that data was removed while it was locked. */ |
dflet | 0:91ad48ad5687 | 1625 | ++( pxQueue->xRxLock ); |
dflet | 0:91ad48ad5687 | 1626 | } |
dflet | 0:91ad48ad5687 | 1627 | |
dflet | 0:91ad48ad5687 | 1628 | xReturn = pdPASS; |
dflet | 0:91ad48ad5687 | 1629 | } |
dflet | 0:91ad48ad5687 | 1630 | else |
dflet | 0:91ad48ad5687 | 1631 | { |
dflet | 0:91ad48ad5687 | 1632 | xReturn = pdFAIL; |
dflet | 0:91ad48ad5687 | 1633 | traceQUEUE_RECEIVE_FROM_ISR_FAILED( pxQueue ); |
dflet | 0:91ad48ad5687 | 1634 | } |
dflet | 0:91ad48ad5687 | 1635 | } |
dflet | 0:91ad48ad5687 | 1636 | portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus ); |
dflet | 0:91ad48ad5687 | 1637 | |
dflet | 0:91ad48ad5687 | 1638 | return xReturn; |
dflet | 0:91ad48ad5687 | 1639 | } |
dflet | 0:91ad48ad5687 | 1640 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 1641 | |
dflet | 0:91ad48ad5687 | 1642 | BaseType_t xQueuePeekFromISR( QueueHandle_t xQueue, void * const pvBuffer ) |
dflet | 0:91ad48ad5687 | 1643 | { |
dflet | 0:91ad48ad5687 | 1644 | BaseType_t xReturn; |
dflet | 0:91ad48ad5687 | 1645 | UBaseType_t uxSavedInterruptStatus; |
dflet | 0:91ad48ad5687 | 1646 | int8_t *pcOriginalReadPosition; |
dflet | 0:91ad48ad5687 | 1647 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
dflet | 0:91ad48ad5687 | 1648 | |
dflet | 0:91ad48ad5687 | 1649 | configASSERT( pxQueue ); |
dflet | 0:91ad48ad5687 | 1650 | configASSERT( !( ( pvBuffer == NULL ) && ( pxQueue->uxItemSize != ( UBaseType_t ) 0U ) ) ); |
dflet | 0:91ad48ad5687 | 1651 | configASSERT( pxQueue->uxItemSize != 0 ); /* Can't peek a semaphore. */ |
dflet | 0:91ad48ad5687 | 1652 | |
dflet | 0:91ad48ad5687 | 1653 | /* RTOS ports that support interrupt nesting have the concept of a maximum |
dflet | 0:91ad48ad5687 | 1654 | system call (or maximum API call) interrupt priority. Interrupts that are |
dflet | 0:91ad48ad5687 | 1655 | above the maximum system call priority are kept permanently enabled, even |
dflet | 0:91ad48ad5687 | 1656 | when the RTOS kernel is in a critical section, but cannot make any calls to |
dflet | 0:91ad48ad5687 | 1657 | FreeRTOS API functions. If configASSERT() is defined in FreeRTOSConfig.h |
dflet | 0:91ad48ad5687 | 1658 | then portASSERT_IF_INTERRUPT_PRIORITY_INVALID() will result in an assertion |
dflet | 0:91ad48ad5687 | 1659 | failure if a FreeRTOS API function is called from an interrupt that has been |
dflet | 0:91ad48ad5687 | 1660 | assigned a priority above the configured maximum system call priority. |
dflet | 0:91ad48ad5687 | 1661 | Only FreeRTOS functions that end in FromISR can be called from interrupts |
dflet | 0:91ad48ad5687 | 1662 | that have been assigned a priority at or (logically) below the maximum |
dflet | 0:91ad48ad5687 | 1663 | system call interrupt priority. FreeRTOS maintains a separate interrupt |
dflet | 0:91ad48ad5687 | 1664 | safe API to ensure interrupt entry is as fast and as simple as possible. |
dflet | 0:91ad48ad5687 | 1665 | More information (albeit Cortex-M specific) is provided on the following |
dflet | 0:91ad48ad5687 | 1666 | link: http://www.freertos.org/RTOS-Cortex-M3-M4.html */ |
dflet | 0:91ad48ad5687 | 1667 | portASSERT_IF_INTERRUPT_PRIORITY_INVALID(); |
dflet | 0:91ad48ad5687 | 1668 | |
dflet | 0:91ad48ad5687 | 1669 | uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR(); |
dflet | 0:91ad48ad5687 | 1670 | { |
dflet | 0:91ad48ad5687 | 1671 | /* Cannot block in an ISR, so check there is data available. */ |
dflet | 0:91ad48ad5687 | 1672 | if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 1673 | { |
dflet | 0:91ad48ad5687 | 1674 | traceQUEUE_PEEK_FROM_ISR( pxQueue ); |
dflet | 0:91ad48ad5687 | 1675 | |
dflet | 0:91ad48ad5687 | 1676 | /* Remember the read position so it can be reset as nothing is |
dflet | 0:91ad48ad5687 | 1677 | actually being removed from the queue. */ |
dflet | 0:91ad48ad5687 | 1678 | pcOriginalReadPosition = pxQueue->u.pcReadFrom; |
dflet | 0:91ad48ad5687 | 1679 | prvCopyDataFromQueue( pxQueue, pvBuffer ); |
dflet | 0:91ad48ad5687 | 1680 | pxQueue->u.pcReadFrom = pcOriginalReadPosition; |
dflet | 0:91ad48ad5687 | 1681 | |
dflet | 0:91ad48ad5687 | 1682 | xReturn = pdPASS; |
dflet | 0:91ad48ad5687 | 1683 | } |
dflet | 0:91ad48ad5687 | 1684 | else |
dflet | 0:91ad48ad5687 | 1685 | { |
dflet | 0:91ad48ad5687 | 1686 | xReturn = pdFAIL; |
dflet | 0:91ad48ad5687 | 1687 | traceQUEUE_PEEK_FROM_ISR_FAILED( pxQueue ); |
dflet | 0:91ad48ad5687 | 1688 | } |
dflet | 0:91ad48ad5687 | 1689 | } |
dflet | 0:91ad48ad5687 | 1690 | portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus ); |
dflet | 0:91ad48ad5687 | 1691 | |
dflet | 0:91ad48ad5687 | 1692 | return xReturn; |
dflet | 0:91ad48ad5687 | 1693 | } |
dflet | 0:91ad48ad5687 | 1694 | /*-----------------------------------------------------------*/ |
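The peek implementation above relies on one trick: the copy-out routine advances the read pointer, so a peek simply saves the pointer before the copy and restores it afterwards, leaving the item in place. The following is a minimal standalone sketch of that mechanism on a plain ring buffer. It is illustrative only, not FreeRTOS code; `MiniQueue`, `miniCopyOut`, `miniReceive` and `miniPeek` are invented names that mirror the roles of `Queue_t`, `prvCopyDataFromQueue`, receive, and peek.

```c
#include <assert.h>
#include <string.h>

#define ITEM_SIZE 4
#define QUEUE_LEN 3

typedef struct {
    unsigned char storage[QUEUE_LEN * ITEM_SIZE];
    unsigned char *readFrom; /* slot BEFORE the next item; pre-incremented, as in queue.c */
    unsigned count;
} MiniQueue;

/* Mirrors prvCopyDataFromQueue: advance the read pointer, wrap, copy out. */
static void miniCopyOut(MiniQueue *q, void *buf)
{
    q->readFrom += ITEM_SIZE;
    if (q->readFrom >= q->storage + sizeof q->storage)
        q->readFrom = q->storage; /* wrap back to the head of the storage */
    memcpy(buf, q->readFrom, ITEM_SIZE);
}

/* Receive consumes the item: copy out, then drop the count. */
static void miniReceive(MiniQueue *q, void *buf)
{
    miniCopyOut(q, buf);
    q->count--;
}

/* Peek does the same copy but restores the read pointer, so the
   item stays in the queue and the count is untouched. */
static void miniPeek(MiniQueue *q, void *buf)
{
    unsigned char *saved = q->readFrom; /* remember the read position */
    miniCopyOut(q, buf);
    q->readFrom = saved;                /* nothing was consumed */
}
```

A peek followed by a receive returns the same item; only the receive removes it.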
dflet | 0:91ad48ad5687 | 1695 | |
dflet | 0:91ad48ad5687 | 1696 | UBaseType_t uxQueueMessagesWaiting( const QueueHandle_t xQueue ) |
dflet | 0:91ad48ad5687 | 1697 | { |
dflet | 0:91ad48ad5687 | 1698 | UBaseType_t uxReturn; |
dflet | 0:91ad48ad5687 | 1699 | |
dflet | 0:91ad48ad5687 | 1700 | configASSERT( xQueue ); |
dflet | 0:91ad48ad5687 | 1701 | |
dflet | 0:91ad48ad5687 | 1702 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1703 | { |
dflet | 0:91ad48ad5687 | 1704 | uxReturn = ( ( Queue_t * ) xQueue )->uxMessagesWaiting; |
dflet | 0:91ad48ad5687 | 1705 | } |
dflet | 0:91ad48ad5687 | 1706 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1707 | |
dflet | 0:91ad48ad5687 | 1708 | return uxReturn; |
dflet | 0:91ad48ad5687 | 1709 | } /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */ |
dflet | 0:91ad48ad5687 | 1710 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 1711 | |
dflet | 0:91ad48ad5687 | 1712 | UBaseType_t uxQueueSpacesAvailable( const QueueHandle_t xQueue ) |
dflet | 0:91ad48ad5687 | 1713 | { |
dflet | 0:91ad48ad5687 | 1714 | UBaseType_t uxReturn; |
dflet | 0:91ad48ad5687 | 1715 | Queue_t *pxQueue; |
dflet | 0:91ad48ad5687 | 1716 | |
dflet | 0:91ad48ad5687 | 1717 | pxQueue = ( Queue_t * ) xQueue; |
dflet | 0:91ad48ad5687 | 1718 | configASSERT( pxQueue ); |
dflet | 0:91ad48ad5687 | 1719 | |
dflet | 0:91ad48ad5687 | 1720 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1721 | { |
dflet | 0:91ad48ad5687 | 1722 | uxReturn = pxQueue->uxLength - pxQueue->uxMessagesWaiting; |
dflet | 0:91ad48ad5687 | 1723 | } |
dflet | 0:91ad48ad5687 | 1724 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1725 | |
dflet | 0:91ad48ad5687 | 1726 | return uxReturn; |
dflet | 0:91ad48ad5687 | 1727 | } /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */ |
dflet | 0:91ad48ad5687 | 1728 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 1729 | |
dflet | 0:91ad48ad5687 | 1730 | UBaseType_t uxQueueMessagesWaitingFromISR( const QueueHandle_t xQueue ) |
dflet | 0:91ad48ad5687 | 1731 | { |
dflet | 0:91ad48ad5687 | 1732 | UBaseType_t uxReturn; |
dflet | 0:91ad48ad5687 | 1733 | |
dflet | 0:91ad48ad5687 | 1734 | configASSERT( xQueue ); |
dflet | 0:91ad48ad5687 | 1735 | |
dflet | 0:91ad48ad5687 | 1736 | uxReturn = ( ( Queue_t * ) xQueue )->uxMessagesWaiting; |
dflet | 0:91ad48ad5687 | 1737 | |
dflet | 0:91ad48ad5687 | 1738 | return uxReturn; |
dflet | 0:91ad48ad5687 | 1739 | } /*lint !e818 Pointer cannot be declared const as xQueue is a typedef not pointer. */ |
dflet | 0:91ad48ad5687 | 1740 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 1741 | |
dflet | 0:91ad48ad5687 | 1742 | void vQueueDelete( QueueHandle_t xQueue ) |
dflet | 0:91ad48ad5687 | 1743 | { |
dflet | 0:91ad48ad5687 | 1744 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
dflet | 0:91ad48ad5687 | 1745 | |
dflet | 0:91ad48ad5687 | 1746 | configASSERT( pxQueue ); |
dflet | 0:91ad48ad5687 | 1747 | |
dflet | 0:91ad48ad5687 | 1748 | traceQUEUE_DELETE( pxQueue ); |
dflet | 0:91ad48ad5687 | 1749 | #if ( configQUEUE_REGISTRY_SIZE > 0 ) |
dflet | 0:91ad48ad5687 | 1750 | { |
dflet | 0:91ad48ad5687 | 1751 | vQueueUnregisterQueue( pxQueue ); |
dflet | 0:91ad48ad5687 | 1752 | } |
dflet | 0:91ad48ad5687 | 1753 | #endif |
dflet | 0:91ad48ad5687 | 1754 | vPortFree( pxQueue ); |
dflet | 0:91ad48ad5687 | 1755 | } |
dflet | 0:91ad48ad5687 | 1756 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 1757 | |
dflet | 0:91ad48ad5687 | 1758 | #if ( configUSE_TRACE_FACILITY == 1 ) |
dflet | 0:91ad48ad5687 | 1759 | |
dflet | 0:91ad48ad5687 | 1760 | UBaseType_t uxQueueGetQueueNumber( QueueHandle_t xQueue ) |
dflet | 0:91ad48ad5687 | 1761 | { |
dflet | 0:91ad48ad5687 | 1762 | return ( ( Queue_t * ) xQueue )->uxQueueNumber; |
dflet | 0:91ad48ad5687 | 1763 | } |
dflet | 0:91ad48ad5687 | 1764 | |
dflet | 0:91ad48ad5687 | 1765 | #endif /* configUSE_TRACE_FACILITY */ |
dflet | 0:91ad48ad5687 | 1766 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 1767 | |
dflet | 0:91ad48ad5687 | 1768 | #if ( configUSE_TRACE_FACILITY == 1 ) |
dflet | 0:91ad48ad5687 | 1769 | |
dflet | 0:91ad48ad5687 | 1770 | void vQueueSetQueueNumber( QueueHandle_t xQueue, UBaseType_t uxQueueNumber ) |
dflet | 0:91ad48ad5687 | 1771 | { |
dflet | 0:91ad48ad5687 | 1772 | ( ( Queue_t * ) xQueue )->uxQueueNumber = uxQueueNumber; |
dflet | 0:91ad48ad5687 | 1773 | } |
dflet | 0:91ad48ad5687 | 1774 | |
dflet | 0:91ad48ad5687 | 1775 | #endif /* configUSE_TRACE_FACILITY */ |
dflet | 0:91ad48ad5687 | 1776 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 1777 | |
dflet | 0:91ad48ad5687 | 1778 | #if ( configUSE_TRACE_FACILITY == 1 ) |
dflet | 0:91ad48ad5687 | 1779 | |
dflet | 0:91ad48ad5687 | 1780 | uint8_t ucQueueGetQueueType( QueueHandle_t xQueue ) |
dflet | 0:91ad48ad5687 | 1781 | { |
dflet | 0:91ad48ad5687 | 1782 | return ( ( Queue_t * ) xQueue )->ucQueueType; |
dflet | 0:91ad48ad5687 | 1783 | } |
dflet | 0:91ad48ad5687 | 1784 | |
dflet | 0:91ad48ad5687 | 1785 | #endif /* configUSE_TRACE_FACILITY */ |
dflet | 0:91ad48ad5687 | 1786 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 1787 | |
dflet | 0:91ad48ad5687 | 1788 | static BaseType_t prvCopyDataToQueue( Queue_t * const pxQueue, const void *pvItemToQueue, const BaseType_t xPosition ) |
dflet | 0:91ad48ad5687 | 1789 | { |
dflet | 0:91ad48ad5687 | 1790 | BaseType_t xReturn = pdFALSE; |
dflet | 0:91ad48ad5687 | 1791 | |
dflet | 0:91ad48ad5687 | 1792 | if( pxQueue->uxItemSize == ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 1793 | { |
dflet | 0:91ad48ad5687 | 1794 | #if ( configUSE_MUTEXES == 1 ) |
dflet | 0:91ad48ad5687 | 1795 | { |
dflet | 0:91ad48ad5687 | 1796 | if( pxQueue->uxQueueType == queueQUEUE_IS_MUTEX ) |
dflet | 0:91ad48ad5687 | 1797 | { |
dflet | 0:91ad48ad5687 | 1798 | /* The mutex is no longer being held. */ |
dflet | 0:91ad48ad5687 | 1799 | xReturn = xTaskPriorityDisinherit( ( void * ) pxQueue->pxMutexHolder ); |
dflet | 0:91ad48ad5687 | 1800 | pxQueue->pxMutexHolder = NULL; |
dflet | 0:91ad48ad5687 | 1801 | } |
dflet | 0:91ad48ad5687 | 1802 | else |
dflet | 0:91ad48ad5687 | 1803 | { |
dflet | 0:91ad48ad5687 | 1804 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1805 | } |
dflet | 0:91ad48ad5687 | 1806 | } |
dflet | 0:91ad48ad5687 | 1807 | #endif /* configUSE_MUTEXES */ |
dflet | 0:91ad48ad5687 | 1808 | } |
dflet | 0:91ad48ad5687 | 1809 | else if( xPosition == queueSEND_TO_BACK ) |
dflet | 0:91ad48ad5687 | 1810 | { |
dflet | 0:91ad48ad5687 | 1811 | ( void ) memcpy( ( void * ) pxQueue->pcWriteTo, pvItemToQueue, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 !e418 MISRA exception as the casts are only redundant for some ports, plus previous logic ensures a null pointer can only be passed to memcpy() if the copy size is 0. */ |
dflet | 0:91ad48ad5687 | 1812 | pxQueue->pcWriteTo += pxQueue->uxItemSize; |
dflet | 0:91ad48ad5687 | 1813 | if( pxQueue->pcWriteTo >= pxQueue->pcTail ) /*lint !e946 MISRA exception justified as comparison of pointers is the cleanest solution. */ |
dflet | 0:91ad48ad5687 | 1814 | { |
dflet | 0:91ad48ad5687 | 1815 | pxQueue->pcWriteTo = pxQueue->pcHead; |
dflet | 0:91ad48ad5687 | 1816 | } |
dflet | 0:91ad48ad5687 | 1817 | else |
dflet | 0:91ad48ad5687 | 1818 | { |
dflet | 0:91ad48ad5687 | 1819 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1820 | } |
dflet | 0:91ad48ad5687 | 1821 | } |
dflet | 0:91ad48ad5687 | 1822 | else |
dflet | 0:91ad48ad5687 | 1823 | { |
dflet | 0:91ad48ad5687 | 1824 | ( void ) memcpy( ( void * ) pxQueue->u.pcReadFrom, pvItemToQueue, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 MISRA exception as the casts are only redundant for some ports. */ |
dflet | 0:91ad48ad5687 | 1825 | pxQueue->u.pcReadFrom -= pxQueue->uxItemSize; |
dflet | 0:91ad48ad5687 | 1826 | if( pxQueue->u.pcReadFrom < pxQueue->pcHead ) /*lint !e946 MISRA exception justified as comparison of pointers is the cleanest solution. */ |
dflet | 0:91ad48ad5687 | 1827 | { |
dflet | 0:91ad48ad5687 | 1828 | pxQueue->u.pcReadFrom = ( pxQueue->pcTail - pxQueue->uxItemSize ); |
dflet | 0:91ad48ad5687 | 1829 | } |
dflet | 0:91ad48ad5687 | 1830 | else |
dflet | 0:91ad48ad5687 | 1831 | { |
dflet | 0:91ad48ad5687 | 1832 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1833 | } |
dflet | 0:91ad48ad5687 | 1834 | |
dflet | 0:91ad48ad5687 | 1835 | if( xPosition == queueOVERWRITE ) |
dflet | 0:91ad48ad5687 | 1836 | { |
dflet | 0:91ad48ad5687 | 1837 | if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 1838 | { |
dflet | 0:91ad48ad5687 | 1839 | /* An item is not being added but overwritten, so subtract |
dflet | 0:91ad48ad5687 | 1840 | one from the recorded number of items in the queue so when |
dflet | 0:91ad48ad5687 | 1841 | one is added again below the number of recorded items remains |
dflet | 0:91ad48ad5687 | 1842 | correct. */ |
dflet | 0:91ad48ad5687 | 1843 | --( pxQueue->uxMessagesWaiting ); |
dflet | 0:91ad48ad5687 | 1844 | } |
dflet | 0:91ad48ad5687 | 1845 | else |
dflet | 0:91ad48ad5687 | 1846 | { |
dflet | 0:91ad48ad5687 | 1847 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1848 | } |
dflet | 0:91ad48ad5687 | 1849 | } |
dflet | 0:91ad48ad5687 | 1850 | else |
dflet | 0:91ad48ad5687 | 1851 | { |
dflet | 0:91ad48ad5687 | 1852 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1853 | } |
dflet | 0:91ad48ad5687 | 1854 | } |
dflet | 0:91ad48ad5687 | 1855 | |
dflet | 0:91ad48ad5687 | 1856 | ++( pxQueue->uxMessagesWaiting ); |
dflet | 0:91ad48ad5687 | 1857 | |
dflet | 0:91ad48ad5687 | 1858 | return xReturn; |
dflet | 0:91ad48ad5687 | 1859 | } |
dflet | 0:91ad48ad5687 | 1860 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 1861 | |
dflet | 0:91ad48ad5687 | 1862 | static void prvCopyDataFromQueue( Queue_t * const pxQueue, void * const pvBuffer ) |
dflet | 0:91ad48ad5687 | 1863 | { |
dflet | 0:91ad48ad5687 | 1864 | if( pxQueue->uxItemSize != ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 1865 | { |
dflet | 0:91ad48ad5687 | 1866 | pxQueue->u.pcReadFrom += pxQueue->uxItemSize; |
dflet | 0:91ad48ad5687 | 1867 | if( pxQueue->u.pcReadFrom >= pxQueue->pcTail ) /*lint !e946 MISRA exception justified as use of the relational operator is the cleanest solution. */ |
dflet | 0:91ad48ad5687 | 1868 | { |
dflet | 0:91ad48ad5687 | 1869 | pxQueue->u.pcReadFrom = pxQueue->pcHead; |
dflet | 0:91ad48ad5687 | 1870 | } |
dflet | 0:91ad48ad5687 | 1871 | else |
dflet | 0:91ad48ad5687 | 1872 | { |
dflet | 0:91ad48ad5687 | 1873 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1874 | } |
dflet | 0:91ad48ad5687 | 1875 | ( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( size_t ) pxQueue->uxItemSize ); /*lint !e961 !e418 MISRA exception as the casts are only redundant for some ports. Also previous logic ensures a null pointer can only be passed to memcpy() when the count is 0. */ |
dflet | 0:91ad48ad5687 | 1876 | } |
dflet | 0:91ad48ad5687 | 1877 | } |
dflet | 0:91ad48ad5687 | 1878 | /*-----------------------------------------------------------*/ |
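`prvCopyDataToQueue` and `prvCopyDataFromQueue` implement the two wrap directions of the circular buffer: send-to-back copies at `pcWriteTo` and advances forward, wrapping to `pcHead`; send-to-front copies at `pcReadFrom` and retreats backward, wrapping to the last slot before `pcTail`. A minimal standalone sketch of both directions follows. It is not FreeRTOS code; `MiniQ`, `sendToBack`, `sendToFront` and `receive` are illustrative names for the same pointer arithmetic.

```c
#include <assert.h>
#include <string.h>

#define ITEM 4
#define LEN  3

typedef struct {
    unsigned char head[LEN * ITEM]; /* start of the storage area */
    unsigned char *writeTo;         /* next free slot at the back */
    unsigned char *readFrom;        /* slot before the next item at the front */
    unsigned count;
} MiniQ;

/* Mirrors the queueSEND_TO_BACK branch: copy, advance, wrap to head. */
static void sendToBack(MiniQ *q, const void *item)
{
    memcpy(q->writeTo, item, ITEM);
    q->writeTo += ITEM;
    if (q->writeTo >= q->head + sizeof q->head)
        q->writeTo = q->head; /* passed the tail: wrap to the start */
    q->count++;
}

/* Mirrors the send-to-front branch: copy at readFrom, retreat,
   wrap back to the last slot (tail minus one item). */
static void sendToFront(MiniQ *q, const void *item)
{
    memcpy(q->readFrom, item, ITEM);
    q->readFrom -= ITEM;
    if (q->readFrom < q->head)
        q->readFrom = q->head + sizeof q->head - ITEM;
    q->count++;
}

/* Mirrors prvCopyDataFromQueue plus the count decrement. */
static void receive(MiniQ *q, void *buf)
{
    q->readFrom += ITEM;
    if (q->readFrom >= q->head + sizeof q->head)
        q->readFrom = q->head;
    memcpy(buf, q->readFrom, ITEM);
    q->count--;
}
```

Because the front insert writes at `readFrom` and then retreats, a subsequent receive advances straight back onto it, so the front item is delivered before anything queued at the back.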
dflet | 0:91ad48ad5687 | 1879 | |
dflet | 0:91ad48ad5687 | 1880 | static void prvUnlockQueue( Queue_t * const pxQueue ) |
dflet | 0:91ad48ad5687 | 1881 | { |
dflet | 0:91ad48ad5687 | 1882 | /* THIS FUNCTION MUST BE CALLED WITH THE SCHEDULER SUSPENDED. */ |
dflet | 0:91ad48ad5687 | 1883 | |
dflet | 0:91ad48ad5687 | 1884 | /* The lock counts contain the number of extra data items placed on or |
dflet | 0:91ad48ad5687 | 1885 | removed from the queue while the queue was locked. While a queue is |
dflet | 0:91ad48ad5687 | 1886 | locked, items can still be added or removed, but the event lists cannot |
dflet | 0:91ad48ad5687 | 1887 | be updated. */ |
dflet | 0:91ad48ad5687 | 1888 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1889 | { |
dflet | 0:91ad48ad5687 | 1890 | /* See if data was added to the queue while it was locked. */ |
dflet | 0:91ad48ad5687 | 1891 | while( pxQueue->xTxLock > queueLOCKED_UNMODIFIED ) |
dflet | 0:91ad48ad5687 | 1892 | { |
dflet | 0:91ad48ad5687 | 1893 | /* Data was posted while the queue was locked. Are any tasks |
dflet | 0:91ad48ad5687 | 1894 | blocked waiting for data to become available? */ |
dflet | 0:91ad48ad5687 | 1895 | #if ( configUSE_QUEUE_SETS == 1 ) |
dflet | 0:91ad48ad5687 | 1896 | { |
dflet | 0:91ad48ad5687 | 1897 | if( pxQueue->pxQueueSetContainer != NULL ) |
dflet | 0:91ad48ad5687 | 1898 | { |
dflet | 0:91ad48ad5687 | 1899 | if( prvNotifyQueueSetContainer( pxQueue, queueSEND_TO_BACK ) == pdTRUE ) |
dflet | 0:91ad48ad5687 | 1900 | { |
dflet | 0:91ad48ad5687 | 1901 | /* The queue is a member of a queue set, and posting to |
dflet | 0:91ad48ad5687 | 1902 | the queue set caused a higher priority task to unblock. |
dflet | 0:91ad48ad5687 | 1903 | A context switch is required. */ |
dflet | 0:91ad48ad5687 | 1904 | vTaskMissedYield(); |
dflet | 0:91ad48ad5687 | 1905 | } |
dflet | 0:91ad48ad5687 | 1906 | else |
dflet | 0:91ad48ad5687 | 1907 | { |
dflet | 0:91ad48ad5687 | 1908 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1909 | } |
dflet | 0:91ad48ad5687 | 1910 | } |
dflet | 0:91ad48ad5687 | 1911 | else |
dflet | 0:91ad48ad5687 | 1912 | { |
dflet | 0:91ad48ad5687 | 1913 | /* Tasks that are removed from the event list will get added to |
dflet | 0:91ad48ad5687 | 1914 | the pending ready list as the scheduler is still suspended. */ |
dflet | 0:91ad48ad5687 | 1915 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 1916 | { |
dflet | 0:91ad48ad5687 | 1917 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 1918 | { |
dflet | 0:91ad48ad5687 | 1919 | /* The task waiting has a higher priority so record that a |
dflet | 0:91ad48ad5687 | 1920 | context switch is required. */ |
dflet | 0:91ad48ad5687 | 1921 | vTaskMissedYield(); |
dflet | 0:91ad48ad5687 | 1922 | } |
dflet | 0:91ad48ad5687 | 1923 | else |
dflet | 0:91ad48ad5687 | 1924 | { |
dflet | 0:91ad48ad5687 | 1925 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1926 | } |
dflet | 0:91ad48ad5687 | 1927 | } |
dflet | 0:91ad48ad5687 | 1928 | else |
dflet | 0:91ad48ad5687 | 1929 | { |
dflet | 0:91ad48ad5687 | 1930 | break; |
dflet | 0:91ad48ad5687 | 1931 | } |
dflet | 0:91ad48ad5687 | 1932 | } |
dflet | 0:91ad48ad5687 | 1933 | } |
dflet | 0:91ad48ad5687 | 1934 | #else /* configUSE_QUEUE_SETS */ |
dflet | 0:91ad48ad5687 | 1935 | { |
dflet | 0:91ad48ad5687 | 1936 | /* Tasks that are removed from the event list will get added to |
dflet | 0:91ad48ad5687 | 1937 | the pending ready list as the scheduler is still suspended. */ |
dflet | 0:91ad48ad5687 | 1938 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 1939 | { |
dflet | 0:91ad48ad5687 | 1940 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 1941 | { |
dflet | 0:91ad48ad5687 | 1942 | /* The task waiting has a higher priority so record that a |
dflet | 0:91ad48ad5687 | 1943 | context switch is required. */ |
dflet | 0:91ad48ad5687 | 1944 | vTaskMissedYield(); |
dflet | 0:91ad48ad5687 | 1945 | } |
dflet | 0:91ad48ad5687 | 1946 | else |
dflet | 0:91ad48ad5687 | 1947 | { |
dflet | 0:91ad48ad5687 | 1948 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1949 | } |
dflet | 0:91ad48ad5687 | 1950 | } |
dflet | 0:91ad48ad5687 | 1951 | else |
dflet | 0:91ad48ad5687 | 1952 | { |
dflet | 0:91ad48ad5687 | 1953 | break; |
dflet | 0:91ad48ad5687 | 1954 | } |
dflet | 0:91ad48ad5687 | 1955 | } |
dflet | 0:91ad48ad5687 | 1956 | #endif /* configUSE_QUEUE_SETS */ |
dflet | 0:91ad48ad5687 | 1957 | |
dflet | 0:91ad48ad5687 | 1958 | --( pxQueue->xTxLock ); |
dflet | 0:91ad48ad5687 | 1959 | } |
dflet | 0:91ad48ad5687 | 1960 | |
dflet | 0:91ad48ad5687 | 1961 | pxQueue->xTxLock = queueUNLOCKED; |
dflet | 0:91ad48ad5687 | 1962 | } |
dflet | 0:91ad48ad5687 | 1963 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1964 | |
dflet | 0:91ad48ad5687 | 1965 | /* Do the same for the Rx lock. */ |
dflet | 0:91ad48ad5687 | 1966 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1967 | { |
dflet | 0:91ad48ad5687 | 1968 | while( pxQueue->xRxLock > queueLOCKED_UNMODIFIED ) |
dflet | 0:91ad48ad5687 | 1969 | { |
dflet | 0:91ad48ad5687 | 1970 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 1971 | { |
dflet | 0:91ad48ad5687 | 1972 | if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 1973 | { |
dflet | 0:91ad48ad5687 | 1974 | vTaskMissedYield(); |
dflet | 0:91ad48ad5687 | 1975 | } |
dflet | 0:91ad48ad5687 | 1976 | else |
dflet | 0:91ad48ad5687 | 1977 | { |
dflet | 0:91ad48ad5687 | 1978 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 1979 | } |
dflet | 0:91ad48ad5687 | 1980 | |
dflet | 0:91ad48ad5687 | 1981 | --( pxQueue->xRxLock ); |
dflet | 0:91ad48ad5687 | 1982 | } |
dflet | 0:91ad48ad5687 | 1983 | else |
dflet | 0:91ad48ad5687 | 1984 | { |
dflet | 0:91ad48ad5687 | 1985 | break; |
dflet | 0:91ad48ad5687 | 1986 | } |
dflet | 0:91ad48ad5687 | 1987 | } |
dflet | 0:91ad48ad5687 | 1988 | |
dflet | 0:91ad48ad5687 | 1989 | pxQueue->xRxLock = queueUNLOCKED; |
dflet | 0:91ad48ad5687 | 1990 | } |
dflet | 0:91ad48ad5687 | 1991 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 1992 | } |
dflet | 0:91ad48ad5687 | 1993 | /*-----------------------------------------------------------*/ |
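The lock-count bookkeeping that `prvUnlockQueue` drains can be reduced to a small model: while the queue is locked an ISR may still post items but must not touch the event lists, so it bumps the lock count instead, and unlocking replays one wakeup per bump, stopping early if no task is waiting. The sketch below is an illustrative recreation, not FreeRTOS code; `LockDemo` and its fields stand in for `xTxLock`, the `xTasksWaitingToReceive` list, and `xTaskRemoveFromEventList()`.

```c
#include <assert.h>

enum { UNLOCKED = -1, LOCKED_UNMODIFIED = 0 };

typedef struct {
    int txLock;  /* UNLOCKED, or the number of posts made while locked */
    int waiters; /* tasks blocked waiting to receive */
    int wakeups; /* wakeups actually delivered */
} LockDemo;

static void lockQueue(LockDemo *d) { d->txLock = LOCKED_UNMODIFIED; }

/* What an ISR send does when it finds the queue locked: record the
   post in the lock count instead of touching the event list. */
static void sendFromIsrWhileLocked(LockDemo *d) { d->txLock++; }

/* Mirrors the Tx half of prvUnlockQueue: deliver one deferred wakeup
   per recorded post, breaking out early if nobody is waiting. */
static void unlockQueue(LockDemo *d)
{
    while (d->txLock > LOCKED_UNMODIFIED) {
        if (d->waiters > 0) {
            d->waiters--;
            d->wakeups++; /* stands in for xTaskRemoveFromEventList() */
        } else {
            break;        /* no one left to wake; stop draining */
        }
        d->txLock--;
    }
    d->txLock = UNLOCKED;
}
```

The early `break` matches the real code: once the waiting list is empty there is no point walking down the remaining lock count, and the count is simply reset to unlocked.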
dflet | 0:91ad48ad5687 | 1994 | |
dflet | 0:91ad48ad5687 | 1995 | static BaseType_t prvIsQueueEmpty( const Queue_t *pxQueue ) |
dflet | 0:91ad48ad5687 | 1996 | { |
dflet | 0:91ad48ad5687 | 1997 | BaseType_t xReturn; |
dflet | 0:91ad48ad5687 | 1998 | |
dflet | 0:91ad48ad5687 | 1999 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 2000 | { |
dflet | 0:91ad48ad5687 | 2001 | if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 2002 | { |
dflet | 0:91ad48ad5687 | 2003 | xReturn = pdTRUE; |
dflet | 0:91ad48ad5687 | 2004 | } |
dflet | 0:91ad48ad5687 | 2005 | else |
dflet | 0:91ad48ad5687 | 2006 | { |
dflet | 0:91ad48ad5687 | 2007 | xReturn = pdFALSE; |
dflet | 0:91ad48ad5687 | 2008 | } |
dflet | 0:91ad48ad5687 | 2009 | } |
dflet | 0:91ad48ad5687 | 2010 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 2011 | |
dflet | 0:91ad48ad5687 | 2012 | return xReturn; |
dflet | 0:91ad48ad5687 | 2013 | } |
dflet | 0:91ad48ad5687 | 2014 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 2015 | |
dflet | 0:91ad48ad5687 | 2016 | BaseType_t xQueueIsQueueEmptyFromISR( const QueueHandle_t xQueue ) |
dflet | 0:91ad48ad5687 | 2017 | { |
dflet | 0:91ad48ad5687 | 2018 | BaseType_t xReturn; |
dflet | 0:91ad48ad5687 | 2019 | |
dflet | 0:91ad48ad5687 | 2020 | configASSERT( xQueue ); |
dflet | 0:91ad48ad5687 | 2021 | if( ( ( Queue_t * ) xQueue )->uxMessagesWaiting == ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 2022 | { |
dflet | 0:91ad48ad5687 | 2023 | xReturn = pdTRUE; |
dflet | 0:91ad48ad5687 | 2024 | } |
dflet | 0:91ad48ad5687 | 2025 | else |
dflet | 0:91ad48ad5687 | 2026 | { |
dflet | 0:91ad48ad5687 | 2027 | xReturn = pdFALSE; |
dflet | 0:91ad48ad5687 | 2028 | } |
dflet | 0:91ad48ad5687 | 2029 | |
dflet | 0:91ad48ad5687 | 2030 | return xReturn; |
dflet | 0:91ad48ad5687 | 2031 | } /*lint !e818 xQueue could not be pointer to const because it is a typedef. */ |
dflet | 0:91ad48ad5687 | 2032 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 2033 | |
dflet | 0:91ad48ad5687 | 2034 | static BaseType_t prvIsQueueFull( const Queue_t *pxQueue ) |
dflet | 0:91ad48ad5687 | 2035 | { |
dflet | 0:91ad48ad5687 | 2036 | BaseType_t xReturn; |
dflet | 0:91ad48ad5687 | 2037 | |
dflet | 0:91ad48ad5687 | 2038 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 2039 | { |
dflet | 0:91ad48ad5687 | 2040 | if( pxQueue->uxMessagesWaiting == pxQueue->uxLength ) |
dflet | 0:91ad48ad5687 | 2041 | { |
dflet | 0:91ad48ad5687 | 2042 | xReturn = pdTRUE; |
dflet | 0:91ad48ad5687 | 2043 | } |
dflet | 0:91ad48ad5687 | 2044 | else |
dflet | 0:91ad48ad5687 | 2045 | { |
dflet | 0:91ad48ad5687 | 2046 | xReturn = pdFALSE; |
dflet | 0:91ad48ad5687 | 2047 | } |
dflet | 0:91ad48ad5687 | 2048 | } |
dflet | 0:91ad48ad5687 | 2049 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 2050 | |
dflet | 0:91ad48ad5687 | 2051 | return xReturn; |
dflet | 0:91ad48ad5687 | 2052 | } |
dflet | 0:91ad48ad5687 | 2053 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 2054 | |
dflet | 0:91ad48ad5687 | 2055 | BaseType_t xQueueIsQueueFullFromISR( const QueueHandle_t xQueue ) |
dflet | 0:91ad48ad5687 | 2056 | { |
dflet | 0:91ad48ad5687 | 2057 | BaseType_t xReturn; |
dflet | 0:91ad48ad5687 | 2058 | |
dflet | 0:91ad48ad5687 | 2059 | configASSERT( xQueue ); |
dflet | 0:91ad48ad5687 | 2060 | if( ( ( Queue_t * ) xQueue )->uxMessagesWaiting == ( ( Queue_t * ) xQueue )->uxLength ) |
dflet | 0:91ad48ad5687 | 2061 | { |
dflet | 0:91ad48ad5687 | 2062 | xReturn = pdTRUE; |
dflet | 0:91ad48ad5687 | 2063 | } |
dflet | 0:91ad48ad5687 | 2064 | else |
dflet | 0:91ad48ad5687 | 2065 | { |
dflet | 0:91ad48ad5687 | 2066 | xReturn = pdFALSE; |
dflet | 0:91ad48ad5687 | 2067 | } |
dflet | 0:91ad48ad5687 | 2068 | |
dflet | 0:91ad48ad5687 | 2069 | return xReturn; |
dflet | 0:91ad48ad5687 | 2070 | } /*lint !e818 xQueue could not be pointer to const because it is a typedef. */ |
dflet | 0:91ad48ad5687 | 2071 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 2072 | |
dflet | 0:91ad48ad5687 | 2073 | #if ( configUSE_CO_ROUTINES == 1 ) |
dflet | 0:91ad48ad5687 | 2074 | |
dflet | 0:91ad48ad5687 | 2075 | BaseType_t xQueueCRSend( QueueHandle_t xQueue, const void *pvItemToQueue, TickType_t xTicksToWait ) |
dflet | 0:91ad48ad5687 | 2076 | { |
dflet | 0:91ad48ad5687 | 2077 | BaseType_t xReturn; |
dflet | 0:91ad48ad5687 | 2078 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
dflet | 0:91ad48ad5687 | 2079 | |
dflet | 0:91ad48ad5687 | 2080 | /* If the queue is already full we may have to block. A critical section |
dflet | 0:91ad48ad5687 | 2081 | is required to prevent an interrupt removing something from the queue |
dflet | 0:91ad48ad5687 | 2082 | between the check to see if the queue is full and blocking on the queue. */ |
dflet | 0:91ad48ad5687 | 2083 | portDISABLE_INTERRUPTS(); |
dflet | 0:91ad48ad5687 | 2084 | { |
dflet | 0:91ad48ad5687 | 2085 | if( prvIsQueueFull( pxQueue ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 2086 | { |
dflet | 0:91ad48ad5687 | 2087 | /* The queue is full - do we want to block or just leave without |
dflet | 0:91ad48ad5687 | 2088 | posting? */ |
dflet | 0:91ad48ad5687 | 2089 | if( xTicksToWait > ( TickType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 2090 | { |
dflet | 0:91ad48ad5687 | 2091 | /* As this is called from a coroutine we cannot block directly, but |
dflet | 0:91ad48ad5687 | 2092 | return indicating that we need to block. */ |
dflet | 0:91ad48ad5687 | 2093 | vCoRoutineAddToDelayedList( xTicksToWait, &( pxQueue->xTasksWaitingToSend ) ); |
dflet | 0:91ad48ad5687 | 2094 | portENABLE_INTERRUPTS(); |
dflet | 0:91ad48ad5687 | 2095 | return errQUEUE_BLOCKED; |
dflet | 0:91ad48ad5687 | 2096 | } |
dflet | 0:91ad48ad5687 | 2097 | else |
dflet | 0:91ad48ad5687 | 2098 | { |
dflet | 0:91ad48ad5687 | 2099 | portENABLE_INTERRUPTS(); |
dflet | 0:91ad48ad5687 | 2100 | return errQUEUE_FULL; |
dflet | 0:91ad48ad5687 | 2101 | } |
dflet | 0:91ad48ad5687 | 2102 | } |
dflet | 0:91ad48ad5687 | 2103 | } |
dflet | 0:91ad48ad5687 | 2104 | portENABLE_INTERRUPTS(); |
dflet | 0:91ad48ad5687 | 2105 | |
dflet | 0:91ad48ad5687 | 2106 | portDISABLE_INTERRUPTS(); |
dflet | 0:91ad48ad5687 | 2107 | { |
dflet | 0:91ad48ad5687 | 2108 | if( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) |
dflet | 0:91ad48ad5687 | 2109 | { |
dflet | 0:91ad48ad5687 | 2110 | /* There is room in the queue, copy the data into the queue. */ |
dflet | 0:91ad48ad5687 | 2111 | prvCopyDataToQueue( pxQueue, pvItemToQueue, queueSEND_TO_BACK ); |
dflet | 0:91ad48ad5687 | 2112 | xReturn = pdPASS; |
dflet | 0:91ad48ad5687 | 2113 | |
dflet | 0:91ad48ad5687 | 2114 | /* Were any co-routines waiting for data to become available? */ |
dflet | 0:91ad48ad5687 | 2115 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 2116 | { |
dflet | 0:91ad48ad5687 | 2117 | /* In this instance the co-routine could be placed directly |
dflet | 0:91ad48ad5687 | 2118 | into the ready list as we are within a critical section. |
dflet | 0:91ad48ad5687 | 2119 | Instead the same pending ready list mechanism is used as if |
dflet | 0:91ad48ad5687 | 2120 | the event were caused from within an interrupt. */ |
dflet | 0:91ad48ad5687 | 2121 | if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 2122 | { |
dflet | 0:91ad48ad5687 | 2123 | /* The co-routine waiting has a higher priority so record |
dflet | 0:91ad48ad5687 | 2124 | that a yield might be appropriate. */ |
dflet | 0:91ad48ad5687 | 2125 | xReturn = errQUEUE_YIELD; |
dflet | 0:91ad48ad5687 | 2126 | } |
dflet | 0:91ad48ad5687 | 2127 | else |
dflet | 0:91ad48ad5687 | 2128 | { |
dflet | 0:91ad48ad5687 | 2129 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2130 | } |
dflet | 0:91ad48ad5687 | 2131 | } |
dflet | 0:91ad48ad5687 | 2132 | else |
dflet | 0:91ad48ad5687 | 2133 | { |
dflet | 0:91ad48ad5687 | 2134 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2135 | } |
dflet | 0:91ad48ad5687 | 2136 | } |
dflet | 0:91ad48ad5687 | 2137 | else |
dflet | 0:91ad48ad5687 | 2138 | { |
dflet | 0:91ad48ad5687 | 2139 | xReturn = errQUEUE_FULL; |
dflet | 0:91ad48ad5687 | 2140 | } |
dflet | 0:91ad48ad5687 | 2141 | } |
dflet | 0:91ad48ad5687 | 2142 | portENABLE_INTERRUPTS(); |
dflet | 0:91ad48ad5687 | 2143 | |
dflet | 0:91ad48ad5687 | 2144 | return xReturn; |
dflet | 0:91ad48ad5687 | 2145 | } |
dflet | 0:91ad48ad5687 | 2146 | |
dflet | 0:91ad48ad5687 | 2147 | #endif /* configUSE_CO_ROUTINES */ |
dflet | 0:91ad48ad5687 | 2148 | /*-----------------------------------------------------------*/ |
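xQueueCRSend above splits the send into two interrupt-disabled phases: first decide whether to block (`errQUEUE_BLOCKED`) or bail out (`errQUEUE_FULL`) when the queue is full, then re-check and copy, possibly returning `errQUEUE_YIELD` if a waiting receiver was made ready. A minimal plain-C sketch of that return-code protocol, with no FreeRTOS dependency (all names below are hypothetical, and the critical sections are elided):

```c
#include <assert.h>

/* Hypothetical return codes mirroring the co-routine send protocol. */
typedef enum { Q_PASS, Q_FULL, Q_BLOCKED, Q_YIELD } q_status_t;

typedef struct {
    unsigned waiting;   /* items currently queued              */
    unsigned length;    /* capacity                            */
    int rx_waiters;     /* co-routines blocked waiting to receive */
} toy_queue_t;

/* Phase one: a full queue either blocks (ticks > 0) or fails outright. */
q_status_t toy_cr_send(toy_queue_t *q, unsigned ticks_to_wait)
{
    if (q->waiting == q->length)
        return ticks_to_wait > 0 ? Q_BLOCKED : Q_FULL;

    /* Phase two: room exists, post the item... */
    q->waiting++;

    /* ...and report that a yield may be needed when a receiver was
       waiting (the real code only does so if it has higher priority). */
    if (q->rx_waiters > 0) {
        q->rx_waiters--;
        return Q_YIELD;
    }
    return Q_PASS;
}
```

In the real kernel the `Q_BLOCKED` case also places the co-routine on the queue's event list before returning, which is why interrupts stay disabled across the check-and-enlist step.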
dflet | 0:91ad48ad5687 | 2149 | |
dflet | 0:91ad48ad5687 | 2150 | #if ( configUSE_CO_ROUTINES == 1 ) |
dflet | 0:91ad48ad5687 | 2151 | |
dflet | 0:91ad48ad5687 | 2152 | BaseType_t xQueueCRReceive( QueueHandle_t xQueue, void *pvBuffer, TickType_t xTicksToWait ) |
dflet | 0:91ad48ad5687 | 2153 | { |
dflet | 0:91ad48ad5687 | 2154 | BaseType_t xReturn; |
dflet | 0:91ad48ad5687 | 2155 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
dflet | 0:91ad48ad5687 | 2156 | |
dflet | 0:91ad48ad5687 | 2157 | /* If the queue is already empty we may have to block. A critical section |
dflet | 0:91ad48ad5687 | 2158 | is required to prevent an interrupt adding something to the queue |
dflet | 0:91ad48ad5687 | 2159 | between the check to see if the queue is empty and blocking on the queue. */ |
dflet | 0:91ad48ad5687 | 2160 | portDISABLE_INTERRUPTS(); |
dflet | 0:91ad48ad5687 | 2161 | { |
dflet | 0:91ad48ad5687 | 2162 | if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 2163 | { |
dflet | 0:91ad48ad5687 | 2164 | /* There are no messages in the queue, do we want to block or just |
dflet | 0:91ad48ad5687 | 2165 | leave with nothing? */ |
dflet | 0:91ad48ad5687 | 2166 | if( xTicksToWait > ( TickType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 2167 | { |
dflet | 0:91ad48ad5687 | 2168 | /* As this is a co-routine we cannot block directly, but return |
dflet | 0:91ad48ad5687 | 2169 | indicating that we need to block. */ |
dflet | 0:91ad48ad5687 | 2170 | vCoRoutineAddToDelayedList( xTicksToWait, &( pxQueue->xTasksWaitingToReceive ) ); |
dflet | 0:91ad48ad5687 | 2171 | portENABLE_INTERRUPTS(); |
dflet | 0:91ad48ad5687 | 2172 | return errQUEUE_BLOCKED; |
dflet | 0:91ad48ad5687 | 2173 | } |
dflet | 0:91ad48ad5687 | 2174 | else |
dflet | 0:91ad48ad5687 | 2175 | { |
dflet | 0:91ad48ad5687 | 2176 | portENABLE_INTERRUPTS(); |
dflet | 0:91ad48ad5687 | 2177 | return errQUEUE_FULL; |
dflet | 0:91ad48ad5687 | 2178 | } |
dflet | 0:91ad48ad5687 | 2179 | } |
dflet | 0:91ad48ad5687 | 2180 | else |
dflet | 0:91ad48ad5687 | 2181 | { |
dflet | 0:91ad48ad5687 | 2182 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2183 | } |
dflet | 0:91ad48ad5687 | 2184 | } |
dflet | 0:91ad48ad5687 | 2185 | portENABLE_INTERRUPTS(); |
dflet | 0:91ad48ad5687 | 2186 | |
dflet | 0:91ad48ad5687 | 2187 | portDISABLE_INTERRUPTS(); |
dflet | 0:91ad48ad5687 | 2188 | { |
dflet | 0:91ad48ad5687 | 2189 | if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 2190 | { |
dflet | 0:91ad48ad5687 | 2191 | /* Data is available from the queue. */ |
dflet | 0:91ad48ad5687 | 2192 | pxQueue->u.pcReadFrom += pxQueue->uxItemSize; |
dflet | 0:91ad48ad5687 | 2193 | if( pxQueue->u.pcReadFrom >= pxQueue->pcTail ) |
dflet | 0:91ad48ad5687 | 2194 | { |
dflet | 0:91ad48ad5687 | 2195 | pxQueue->u.pcReadFrom = pxQueue->pcHead; |
dflet | 0:91ad48ad5687 | 2196 | } |
dflet | 0:91ad48ad5687 | 2197 | else |
dflet | 0:91ad48ad5687 | 2198 | { |
dflet | 0:91ad48ad5687 | 2199 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2200 | } |
dflet | 0:91ad48ad5687 | 2201 | --( pxQueue->uxMessagesWaiting ); |
dflet | 0:91ad48ad5687 | 2202 | ( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( unsigned ) pxQueue->uxItemSize ); |
dflet | 0:91ad48ad5687 | 2203 | |
dflet | 0:91ad48ad5687 | 2204 | xReturn = pdPASS; |
dflet | 0:91ad48ad5687 | 2205 | |
dflet | 0:91ad48ad5687 | 2206 | /* Were any co-routines waiting for space to become available? */ |
dflet | 0:91ad48ad5687 | 2207 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 2208 | { |
dflet | 0:91ad48ad5687 | 2209 | /* In this instance the co-routine could be placed directly |
dflet | 0:91ad48ad5687 | 2210 | into the ready list as we are within a critical section. |
dflet | 0:91ad48ad5687 | 2211 | Instead the same pending ready list mechanism is used as if |
dflet | 0:91ad48ad5687 | 2212 | the event were caused from within an interrupt. */ |
dflet | 0:91ad48ad5687 | 2213 | if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 2214 | { |
dflet | 0:91ad48ad5687 | 2215 | xReturn = errQUEUE_YIELD; |
dflet | 0:91ad48ad5687 | 2216 | } |
dflet | 0:91ad48ad5687 | 2217 | else |
dflet | 0:91ad48ad5687 | 2218 | { |
dflet | 0:91ad48ad5687 | 2219 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2220 | } |
dflet | 0:91ad48ad5687 | 2221 | } |
dflet | 0:91ad48ad5687 | 2222 | else |
dflet | 0:91ad48ad5687 | 2223 | { |
dflet | 0:91ad48ad5687 | 2224 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2225 | } |
dflet | 0:91ad48ad5687 | 2226 | } |
dflet | 0:91ad48ad5687 | 2227 | else |
dflet | 0:91ad48ad5687 | 2228 | { |
dflet | 0:91ad48ad5687 | 2229 | xReturn = pdFAIL; |
dflet | 0:91ad48ad5687 | 2230 | } |
dflet | 0:91ad48ad5687 | 2231 | } |
dflet | 0:91ad48ad5687 | 2232 | portENABLE_INTERRUPTS(); |
dflet | 0:91ad48ad5687 | 2233 | |
dflet | 0:91ad48ad5687 | 2234 | return xReturn; |
dflet | 0:91ad48ad5687 | 2235 | } |
dflet | 0:91ad48ad5687 | 2236 | |
dflet | 0:91ad48ad5687 | 2237 | #endif /* configUSE_CO_ROUTINES */ |
dflet | 0:91ad48ad5687 | 2238 | /*-----------------------------------------------------------*/ |
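The receive path above advances `pcReadFrom` *before* copying: the pointer is bumped by one item size, wrapped back to `pcHead` when it reaches `pcTail`, and only then dereferenced, so between calls it always points at the most recently consumed slot. A self-contained sketch of that pre-increment-then-wrap scheme (hypothetical names, not the FreeRTOS types):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical mini ring buffer mirroring the queue's storage layout:
   head = start of storage, tail = one past the last slot,
   read = last slot read (pre-incremented before each copy). */
typedef struct {
    char storage[3 * sizeof(int)];   /* room for 3 int-sized items */
    char *head, *tail, *read, *write;
    size_t item_size;
    unsigned count;
} mini_ring_t;

void mini_ring_init(mini_ring_t *r)
{
    r->head = r->storage;
    r->tail = r->storage + sizeof(r->storage);
    r->write = r->head;
    /* Start read at the *last* slot so the first pre-increment lands
       on the first slot - the same trick queue.c uses at creation. */
    r->read = r->tail - sizeof(int);
    r->item_size = sizeof(int);
    r->count = 0;
}

void mini_ring_send(mini_ring_t *r, const int *item)
{
    memcpy(r->write, item, r->item_size);
    r->write += r->item_size;
    if (r->write >= r->tail) r->write = r->head;
    r->count++;
}

void mini_ring_receive(mini_ring_t *r, int *out)
{
    /* Pre-increment, wrap, then copy - as in xQueueCRReceive. */
    r->read += r->item_size;
    if (r->read >= r->tail) r->read = r->head;
    r->count--;
    memcpy(out, r->read, r->item_size);
}
```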
dflet | 0:91ad48ad5687 | 2239 | |
dflet | 0:91ad48ad5687 | 2240 | #if ( configUSE_CO_ROUTINES == 1 ) |
dflet | 0:91ad48ad5687 | 2241 | |
dflet | 0:91ad48ad5687 | 2242 | BaseType_t xQueueCRSendFromISR( QueueHandle_t xQueue, const void *pvItemToQueue, BaseType_t xCoRoutinePreviouslyWoken ) |
dflet | 0:91ad48ad5687 | 2243 | { |
dflet | 0:91ad48ad5687 | 2244 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
dflet | 0:91ad48ad5687 | 2245 | |
dflet | 0:91ad48ad5687 | 2246 | /* Cannot block within an ISR so if there is no space on the queue then |
dflet | 0:91ad48ad5687 | 2247 | exit without doing anything. */ |
dflet | 0:91ad48ad5687 | 2248 | if( pxQueue->uxMessagesWaiting < pxQueue->uxLength ) |
dflet | 0:91ad48ad5687 | 2249 | { |
dflet | 0:91ad48ad5687 | 2250 | prvCopyDataToQueue( pxQueue, pvItemToQueue, queueSEND_TO_BACK ); |
dflet | 0:91ad48ad5687 | 2251 | |
dflet | 0:91ad48ad5687 | 2252 | /* We only want to wake one co-routine per ISR, so check that a |
dflet | 0:91ad48ad5687 | 2253 | co-routine has not already been woken. */ |
dflet | 0:91ad48ad5687 | 2254 | if( xCoRoutinePreviouslyWoken == pdFALSE ) |
dflet | 0:91ad48ad5687 | 2255 | { |
dflet | 0:91ad48ad5687 | 2256 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 2257 | { |
dflet | 0:91ad48ad5687 | 2258 | if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 2259 | { |
dflet | 0:91ad48ad5687 | 2260 | return pdTRUE; |
dflet | 0:91ad48ad5687 | 2261 | } |
dflet | 0:91ad48ad5687 | 2262 | else |
dflet | 0:91ad48ad5687 | 2263 | { |
dflet | 0:91ad48ad5687 | 2264 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2265 | } |
dflet | 0:91ad48ad5687 | 2266 | } |
dflet | 0:91ad48ad5687 | 2267 | else |
dflet | 0:91ad48ad5687 | 2268 | { |
dflet | 0:91ad48ad5687 | 2269 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2270 | } |
dflet | 0:91ad48ad5687 | 2271 | } |
dflet | 0:91ad48ad5687 | 2272 | else |
dflet | 0:91ad48ad5687 | 2273 | { |
dflet | 0:91ad48ad5687 | 2274 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2275 | } |
dflet | 0:91ad48ad5687 | 2276 | } |
dflet | 0:91ad48ad5687 | 2277 | else |
dflet | 0:91ad48ad5687 | 2278 | { |
dflet | 0:91ad48ad5687 | 2279 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2280 | } |
dflet | 0:91ad48ad5687 | 2281 | |
dflet | 0:91ad48ad5687 | 2282 | return xCoRoutinePreviouslyWoken; |
dflet | 0:91ad48ad5687 | 2283 | } |
dflet | 0:91ad48ad5687 | 2284 | |
dflet | 0:91ad48ad5687 | 2285 | #endif /* configUSE_CO_ROUTINES */ |
dflet | 0:91ad48ad5687 | 2286 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 2287 | |
dflet | 0:91ad48ad5687 | 2288 | #if ( configUSE_CO_ROUTINES == 1 ) |
dflet | 0:91ad48ad5687 | 2289 | |
dflet | 0:91ad48ad5687 | 2290 | BaseType_t xQueueCRReceiveFromISR( QueueHandle_t xQueue, void *pvBuffer, BaseType_t *pxCoRoutineWoken ) |
dflet | 0:91ad48ad5687 | 2291 | { |
dflet | 0:91ad48ad5687 | 2292 | BaseType_t xReturn; |
dflet | 0:91ad48ad5687 | 2293 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
dflet | 0:91ad48ad5687 | 2294 | |
dflet | 0:91ad48ad5687 | 2295 | /* We cannot block from an ISR, so check there is data available. If |
dflet | 0:91ad48ad5687 | 2296 | not then just leave without doing anything. */ |
dflet | 0:91ad48ad5687 | 2297 | if( pxQueue->uxMessagesWaiting > ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 2298 | { |
dflet | 0:91ad48ad5687 | 2299 | /* Copy the data from the queue. */ |
dflet | 0:91ad48ad5687 | 2300 | pxQueue->u.pcReadFrom += pxQueue->uxItemSize; |
dflet | 0:91ad48ad5687 | 2301 | if( pxQueue->u.pcReadFrom >= pxQueue->pcTail ) |
dflet | 0:91ad48ad5687 | 2302 | { |
dflet | 0:91ad48ad5687 | 2303 | pxQueue->u.pcReadFrom = pxQueue->pcHead; |
dflet | 0:91ad48ad5687 | 2304 | } |
dflet | 0:91ad48ad5687 | 2305 | else |
dflet | 0:91ad48ad5687 | 2306 | { |
dflet | 0:91ad48ad5687 | 2307 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2308 | } |
dflet | 0:91ad48ad5687 | 2309 | --( pxQueue->uxMessagesWaiting ); |
dflet | 0:91ad48ad5687 | 2310 | ( void ) memcpy( ( void * ) pvBuffer, ( void * ) pxQueue->u.pcReadFrom, ( unsigned ) pxQueue->uxItemSize ); |
dflet | 0:91ad48ad5687 | 2311 | |
dflet | 0:91ad48ad5687 | 2312 | if( ( *pxCoRoutineWoken ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 2313 | { |
dflet | 0:91ad48ad5687 | 2314 | if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToSend ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 2315 | { |
dflet | 0:91ad48ad5687 | 2316 | if( xCoRoutineRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 2317 | { |
dflet | 0:91ad48ad5687 | 2318 | *pxCoRoutineWoken = pdTRUE; |
dflet | 0:91ad48ad5687 | 2319 | } |
dflet | 0:91ad48ad5687 | 2320 | else |
dflet | 0:91ad48ad5687 | 2321 | { |
dflet | 0:91ad48ad5687 | 2322 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2323 | } |
dflet | 0:91ad48ad5687 | 2324 | } |
dflet | 0:91ad48ad5687 | 2325 | else |
dflet | 0:91ad48ad5687 | 2326 | { |
dflet | 0:91ad48ad5687 | 2327 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2328 | } |
dflet | 0:91ad48ad5687 | 2329 | } |
dflet | 0:91ad48ad5687 | 2330 | else |
dflet | 0:91ad48ad5687 | 2331 | { |
dflet | 0:91ad48ad5687 | 2332 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2333 | } |
dflet | 0:91ad48ad5687 | 2334 | |
dflet | 0:91ad48ad5687 | 2335 | xReturn = pdPASS; |
dflet | 0:91ad48ad5687 | 2336 | } |
dflet | 0:91ad48ad5687 | 2337 | else |
dflet | 0:91ad48ad5687 | 2338 | { |
dflet | 0:91ad48ad5687 | 2339 | xReturn = pdFAIL; |
dflet | 0:91ad48ad5687 | 2340 | } |
dflet | 0:91ad48ad5687 | 2341 | |
dflet | 0:91ad48ad5687 | 2342 | return xReturn; |
dflet | 0:91ad48ad5687 | 2343 | } |
dflet | 0:91ad48ad5687 | 2344 | |
dflet | 0:91ad48ad5687 | 2345 | #endif /* configUSE_CO_ROUTINES */ |
dflet | 0:91ad48ad5687 | 2346 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 2347 | |
dflet | 0:91ad48ad5687 | 2348 | #if ( configQUEUE_REGISTRY_SIZE > 0 ) |
dflet | 0:91ad48ad5687 | 2349 | |
dflet | 0:91ad48ad5687 | 2350 | void vQueueAddToRegistry( QueueHandle_t xQueue, const char *pcQueueName ) /*lint !e971 Unqualified char types are allowed for strings and single characters only. */ |
dflet | 0:91ad48ad5687 | 2351 | { |
dflet | 0:91ad48ad5687 | 2352 | UBaseType_t ux; |
dflet | 0:91ad48ad5687 | 2353 | |
dflet | 0:91ad48ad5687 | 2354 | /* See if there is an empty space in the registry. A NULL name denotes |
dflet | 0:91ad48ad5687 | 2355 | a free slot. */ |
dflet | 0:91ad48ad5687 | 2356 | for( ux = ( UBaseType_t ) 0U; ux < ( UBaseType_t ) configQUEUE_REGISTRY_SIZE; ux++ ) |
dflet | 0:91ad48ad5687 | 2357 | { |
dflet | 0:91ad48ad5687 | 2358 | if( xQueueRegistry[ ux ].pcQueueName == NULL ) |
dflet | 0:91ad48ad5687 | 2359 | { |
dflet | 0:91ad48ad5687 | 2360 | /* Store the information on this queue. */ |
dflet | 0:91ad48ad5687 | 2361 | xQueueRegistry[ ux ].pcQueueName = pcQueueName; |
dflet | 0:91ad48ad5687 | 2362 | xQueueRegistry[ ux ].xHandle = xQueue; |
dflet | 0:91ad48ad5687 | 2363 | |
dflet | 0:91ad48ad5687 | 2364 | traceQUEUE_REGISTRY_ADD( xQueue, pcQueueName ); |
dflet | 0:91ad48ad5687 | 2365 | break; |
dflet | 0:91ad48ad5687 | 2366 | } |
dflet | 0:91ad48ad5687 | 2367 | else |
dflet | 0:91ad48ad5687 | 2368 | { |
dflet | 0:91ad48ad5687 | 2369 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2370 | } |
dflet | 0:91ad48ad5687 | 2371 | } |
dflet | 0:91ad48ad5687 | 2372 | } |
dflet | 0:91ad48ad5687 | 2373 | |
dflet | 0:91ad48ad5687 | 2374 | #endif /* configQUEUE_REGISTRY_SIZE */ |
dflet | 0:91ad48ad5687 | 2375 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 2376 | |
dflet | 0:91ad48ad5687 | 2377 | #if ( configQUEUE_REGISTRY_SIZE > 0 ) |
dflet | 0:91ad48ad5687 | 2378 | |
dflet | 0:91ad48ad5687 | 2379 | void vQueueUnregisterQueue( QueueHandle_t xQueue ) |
dflet | 0:91ad48ad5687 | 2380 | { |
dflet | 0:91ad48ad5687 | 2381 | UBaseType_t ux; |
dflet | 0:91ad48ad5687 | 2382 | |
dflet | 0:91ad48ad5687 | 2383 | /* See if the handle of the queue being unregistered is actually in the |
dflet | 0:91ad48ad5687 | 2384 | registry. */ |
dflet | 0:91ad48ad5687 | 2385 | for( ux = ( UBaseType_t ) 0U; ux < ( UBaseType_t ) configQUEUE_REGISTRY_SIZE; ux++ ) |
dflet | 0:91ad48ad5687 | 2386 | { |
dflet | 0:91ad48ad5687 | 2387 | if( xQueueRegistry[ ux ].xHandle == xQueue ) |
dflet | 0:91ad48ad5687 | 2388 | { |
dflet | 0:91ad48ad5687 | 2389 | /* Set the name to NULL to show that this slot is free again. */ |
dflet | 0:91ad48ad5687 | 2390 | xQueueRegistry[ ux ].pcQueueName = NULL; |
dflet | 0:91ad48ad5687 | 2391 | break; |
dflet | 0:91ad48ad5687 | 2392 | } |
dflet | 0:91ad48ad5687 | 2393 | else |
dflet | 0:91ad48ad5687 | 2394 | { |
dflet | 0:91ad48ad5687 | 2395 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2396 | } |
dflet | 0:91ad48ad5687 | 2397 | } |
dflet | 0:91ad48ad5687 | 2398 | |
dflet | 0:91ad48ad5687 | 2399 | } /*lint !e818 xQueue could not be pointer to const because it is a typedef. */ |
dflet | 0:91ad48ad5687 | 2400 | |
dflet | 0:91ad48ad5687 | 2401 | #endif /* configQUEUE_REGISTRY_SIZE */ |
dflet | 0:91ad48ad5687 | 2402 | /*-----------------------------------------------------------*/ |
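Both registry functions above are linear scans over a fixed-size array in which a NULL `pcQueueName` marks a free slot; adding silently does nothing when the registry is full. A self-contained sketch of that scheme (hypothetical names; note the remove here also checks the name, which guards against matching a stale handle left in a freed slot, a refinement over the listing above):

```c
#include <assert.h>
#include <stddef.h>

#define REGISTRY_SIZE 4   /* stand-in for configQUEUE_REGISTRY_SIZE */

typedef struct {
    const char *name;     /* NULL marks a free slot */
    void *handle;
} reg_entry_t;

static reg_entry_t registry[REGISTRY_SIZE];

/* Claim the first free slot; silently a no-op when the registry is
   full, just like vQueueAddToRegistry. */
void reg_add(void *handle, const char *name)
{
    for (size_t i = 0; i < REGISTRY_SIZE; i++) {
        if (registry[i].name == NULL) {
            registry[i].name = name;
            registry[i].handle = handle;
            break;
        }
    }
}

/* Free the slot holding this handle by NULLing its name. */
void reg_remove(void *handle)
{
    for (size_t i = 0; i < REGISTRY_SIZE; i++) {
        if (registry[i].handle == handle && registry[i].name != NULL) {
            registry[i].name = NULL;
            break;
        }
    }
}
```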
dflet | 0:91ad48ad5687 | 2403 | |
dflet | 0:91ad48ad5687 | 2404 | #if ( configUSE_TIMERS == 1 ) |
dflet | 0:91ad48ad5687 | 2405 | |
dflet | 0:91ad48ad5687 | 2406 | void vQueueWaitForMessageRestricted( QueueHandle_t xQueue, TickType_t xTicksToWait ) |
dflet | 0:91ad48ad5687 | 2407 | { |
dflet | 0:91ad48ad5687 | 2408 | Queue_t * const pxQueue = ( Queue_t * ) xQueue; |
dflet | 0:91ad48ad5687 | 2409 | |
dflet | 0:91ad48ad5687 | 2410 | /* This function should not be called by application code hence the |
dflet | 0:91ad48ad5687 | 2411 | 'Restricted' in its name. It is not part of the public API. It is |
dflet | 0:91ad48ad5687 | 2412 | designed for use by kernel code, and has special calling requirements. |
dflet | 0:91ad48ad5687 | 2413 | It can result in vListInsert() being called on a list that can only |
dflet | 0:91ad48ad5687 | 2414 | possibly ever have one item in it, so the list will be fast, but even |
dflet | 0:91ad48ad5687 | 2415 | so it should be called with the scheduler locked and not from a critical |
dflet | 0:91ad48ad5687 | 2416 | section. */ |
dflet | 0:91ad48ad5687 | 2417 | |
dflet | 0:91ad48ad5687 | 2418 | /* Only do anything if there are no messages in the queue. This function |
dflet | 0:91ad48ad5687 | 2419 | will not actually cause the task to block, just place it on a blocked |
dflet | 0:91ad48ad5687 | 2420 | list. It will not block until the scheduler is unlocked - at which |
dflet | 0:91ad48ad5687 | 2421 | time a yield will be performed. If an item is added to the queue while |
dflet | 0:91ad48ad5687 | 2422 | the queue is locked, and the calling task blocks on the queue, then the |
dflet | 0:91ad48ad5687 | 2423 | calling task will be immediately unblocked when the queue is unlocked. */ |
dflet | 0:91ad48ad5687 | 2424 | prvLockQueue( pxQueue ); |
dflet | 0:91ad48ad5687 | 2425 | if( pxQueue->uxMessagesWaiting == ( UBaseType_t ) 0U ) |
dflet | 0:91ad48ad5687 | 2426 | { |
dflet | 0:91ad48ad5687 | 2427 | /* There is nothing in the queue, block for the specified period. */ |
dflet | 0:91ad48ad5687 | 2428 | vTaskPlaceOnEventListRestricted( &( pxQueue->xTasksWaitingToReceive ), xTicksToWait ); |
dflet | 0:91ad48ad5687 | 2429 | } |
dflet | 0:91ad48ad5687 | 2430 | else |
dflet | 0:91ad48ad5687 | 2431 | { |
dflet | 0:91ad48ad5687 | 2432 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2433 | } |
dflet | 0:91ad48ad5687 | 2434 | prvUnlockQueue( pxQueue ); |
dflet | 0:91ad48ad5687 | 2435 | } |
dflet | 0:91ad48ad5687 | 2436 | |
dflet | 0:91ad48ad5687 | 2437 | #endif /* configUSE_TIMERS */ |
dflet | 0:91ad48ad5687 | 2438 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 2439 | |
dflet | 0:91ad48ad5687 | 2440 | #if ( configUSE_QUEUE_SETS == 1 ) |
dflet | 0:91ad48ad5687 | 2441 | |
dflet | 0:91ad48ad5687 | 2442 | QueueSetHandle_t xQueueCreateSet( const UBaseType_t uxEventQueueLength ) |
dflet | 0:91ad48ad5687 | 2443 | { |
dflet | 0:91ad48ad5687 | 2444 | QueueSetHandle_t pxQueue; |
dflet | 0:91ad48ad5687 | 2445 | |
dflet | 0:91ad48ad5687 | 2446 | pxQueue = xQueueGenericCreate( uxEventQueueLength, sizeof( Queue_t * ), queueQUEUE_TYPE_SET ); |
dflet | 0:91ad48ad5687 | 2447 | |
dflet | 0:91ad48ad5687 | 2448 | return pxQueue; |
dflet | 0:91ad48ad5687 | 2449 | } |
dflet | 0:91ad48ad5687 | 2450 | |
dflet | 0:91ad48ad5687 | 2451 | #endif /* configUSE_QUEUE_SETS */ |
dflet | 0:91ad48ad5687 | 2452 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 2453 | |
dflet | 0:91ad48ad5687 | 2454 | #if ( configUSE_QUEUE_SETS == 1 ) |
dflet | 0:91ad48ad5687 | 2455 | |
dflet | 0:91ad48ad5687 | 2456 | BaseType_t xQueueAddToSet( QueueSetMemberHandle_t xQueueOrSemaphore, QueueSetHandle_t xQueueSet ) |
dflet | 0:91ad48ad5687 | 2457 | { |
dflet | 0:91ad48ad5687 | 2458 | BaseType_t xReturn; |
dflet | 0:91ad48ad5687 | 2459 | |
dflet | 0:91ad48ad5687 | 2460 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 2461 | { |
dflet | 0:91ad48ad5687 | 2462 | if( ( ( Queue_t * ) xQueueOrSemaphore )->pxQueueSetContainer != NULL ) |
dflet | 0:91ad48ad5687 | 2463 | { |
dflet | 0:91ad48ad5687 | 2464 | /* Cannot add a queue/semaphore to more than one queue set. */ |
dflet | 0:91ad48ad5687 | 2465 | xReturn = pdFAIL; |
dflet | 0:91ad48ad5687 | 2466 | } |
dflet | 0:91ad48ad5687 | 2467 | else if( ( ( Queue_t * ) xQueueOrSemaphore )->uxMessagesWaiting != ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 2468 | { |
dflet | 0:91ad48ad5687 | 2469 | /* Cannot add a queue/semaphore to a queue set if there are already |
dflet | 0:91ad48ad5687 | 2470 | items in the queue/semaphore. */ |
dflet | 0:91ad48ad5687 | 2471 | xReturn = pdFAIL; |
dflet | 0:91ad48ad5687 | 2472 | } |
dflet | 0:91ad48ad5687 | 2473 | else |
dflet | 0:91ad48ad5687 | 2474 | { |
dflet | 0:91ad48ad5687 | 2475 | ( ( Queue_t * ) xQueueOrSemaphore )->pxQueueSetContainer = xQueueSet; |
dflet | 0:91ad48ad5687 | 2476 | xReturn = pdPASS; |
dflet | 0:91ad48ad5687 | 2477 | } |
dflet | 0:91ad48ad5687 | 2478 | } |
dflet | 0:91ad48ad5687 | 2479 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 2480 | |
dflet | 0:91ad48ad5687 | 2481 | return xReturn; |
dflet | 0:91ad48ad5687 | 2482 | } |
dflet | 0:91ad48ad5687 | 2483 | |
dflet | 0:91ad48ad5687 | 2484 | #endif /* configUSE_QUEUE_SETS */ |
dflet | 0:91ad48ad5687 | 2485 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 2486 | |
dflet | 0:91ad48ad5687 | 2487 | #if ( configUSE_QUEUE_SETS == 1 ) |
dflet | 0:91ad48ad5687 | 2488 | |
dflet | 0:91ad48ad5687 | 2489 | BaseType_t xQueueRemoveFromSet( QueueSetMemberHandle_t xQueueOrSemaphore, QueueSetHandle_t xQueueSet ) |
dflet | 0:91ad48ad5687 | 2490 | { |
dflet | 0:91ad48ad5687 | 2491 | BaseType_t xReturn; |
dflet | 0:91ad48ad5687 | 2492 | Queue_t * const pxQueueOrSemaphore = ( Queue_t * ) xQueueOrSemaphore; |
dflet | 0:91ad48ad5687 | 2493 | |
dflet | 0:91ad48ad5687 | 2494 | if( pxQueueOrSemaphore->pxQueueSetContainer != xQueueSet ) |
dflet | 0:91ad48ad5687 | 2495 | { |
dflet | 0:91ad48ad5687 | 2496 | /* The queue was not a member of the set. */ |
dflet | 0:91ad48ad5687 | 2497 | xReturn = pdFAIL; |
dflet | 0:91ad48ad5687 | 2498 | } |
dflet | 0:91ad48ad5687 | 2499 | else if( pxQueueOrSemaphore->uxMessagesWaiting != ( UBaseType_t ) 0 ) |
dflet | 0:91ad48ad5687 | 2500 | { |
dflet | 0:91ad48ad5687 | 2501 | /* It is dangerous to remove a queue from a set when the queue is |
dflet | 0:91ad48ad5687 | 2502 | not empty because the queue set will still hold pending events for |
dflet | 0:91ad48ad5687 | 2503 | the queue. */ |
dflet | 0:91ad48ad5687 | 2504 | xReturn = pdFAIL; |
dflet | 0:91ad48ad5687 | 2505 | } |
dflet | 0:91ad48ad5687 | 2506 | else |
dflet | 0:91ad48ad5687 | 2507 | { |
dflet | 0:91ad48ad5687 | 2508 | taskENTER_CRITICAL(); |
dflet | 0:91ad48ad5687 | 2509 | { |
dflet | 0:91ad48ad5687 | 2510 | /* The queue is no longer contained in the set. */ |
dflet | 0:91ad48ad5687 | 2511 | pxQueueOrSemaphore->pxQueueSetContainer = NULL; |
dflet | 0:91ad48ad5687 | 2512 | } |
dflet | 0:91ad48ad5687 | 2513 | taskEXIT_CRITICAL(); |
dflet | 0:91ad48ad5687 | 2514 | xReturn = pdPASS; |
dflet | 0:91ad48ad5687 | 2515 | } |
dflet | 0:91ad48ad5687 | 2516 | |
dflet | 0:91ad48ad5687 | 2517 | return xReturn; |
dflet | 0:91ad48ad5687 | 2518 | } /*lint !e818 xQueueSet could not be declared as pointing to const as it is a typedef. */ |
dflet | 0:91ad48ad5687 | 2519 | |
dflet | 0:91ad48ad5687 | 2520 | #endif /* configUSE_QUEUE_SETS */ |
dflet | 0:91ad48ad5687 | 2521 | /*-----------------------------------------------------------*/ |
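Taken together, xQueueAddToSet and xQueueRemoveFromSet enforce three invariants: a member belongs to at most one set, it must be empty when added, and it must be empty and actually in the given set when removed (otherwise the set would keep pending events for a queue it no longer contains). A minimal sketch of just those checks (hypothetical types, not the FreeRTOS API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct toy_set toy_set_t;
struct toy_set { int dummy; };

typedef struct {
    unsigned waiting;      /* messages currently queued */
    toy_set_t *container;  /* owning set, or NULL       */
} toy_member_t;

/* Mirrors xQueueAddToSet: reject if already owned or non-empty. */
bool set_add(toy_member_t *m, toy_set_t *s)
{
    if (m->container != NULL) return false;
    if (m->waiting != 0) return false;
    m->container = s;
    return true;
}

/* Mirrors xQueueRemoveFromSet: reject if not in this set or non-empty,
   since pending events for the member would be stranded in the set. */
bool set_remove(toy_member_t *m, toy_set_t *s)
{
    if (m->container != s) return false;
    if (m->waiting != 0) return false;
    m->container = NULL;
    return true;
}
```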
dflet | 0:91ad48ad5687 | 2522 | |
dflet | 0:91ad48ad5687 | 2523 | #if ( configUSE_QUEUE_SETS == 1 ) |
dflet | 0:91ad48ad5687 | 2524 | |
dflet | 0:91ad48ad5687 | 2525 | QueueSetMemberHandle_t xQueueSelectFromSet( QueueSetHandle_t xQueueSet, TickType_t const xTicksToWait ) |
dflet | 0:91ad48ad5687 | 2526 | { |
dflet | 0:91ad48ad5687 | 2527 | QueueSetMemberHandle_t xReturn = NULL; |
dflet | 0:91ad48ad5687 | 2528 | |
dflet | 0:91ad48ad5687 | 2529 | ( void ) xQueueGenericReceive( ( QueueHandle_t ) xQueueSet, &xReturn, xTicksToWait, pdFALSE ); /*lint !e961 Casting from one typedef to another is not redundant. */ |
dflet | 0:91ad48ad5687 | 2530 | return xReturn; |
dflet | 0:91ad48ad5687 | 2531 | } |
dflet | 0:91ad48ad5687 | 2532 | |
dflet | 0:91ad48ad5687 | 2533 | #endif /* configUSE_QUEUE_SETS */ |
dflet | 0:91ad48ad5687 | 2534 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 2535 | |
dflet | 0:91ad48ad5687 | 2536 | #if ( configUSE_QUEUE_SETS == 1 ) |
dflet | 0:91ad48ad5687 | 2537 | |
dflet | 0:91ad48ad5687 | 2538 | QueueSetMemberHandle_t xQueueSelectFromSetFromISR( QueueSetHandle_t xQueueSet ) |
dflet | 0:91ad48ad5687 | 2539 | { |
dflet | 0:91ad48ad5687 | 2540 | QueueSetMemberHandle_t xReturn = NULL; |
dflet | 0:91ad48ad5687 | 2541 | |
dflet | 0:91ad48ad5687 | 2542 | ( void ) xQueueReceiveFromISR( ( QueueHandle_t ) xQueueSet, &xReturn, NULL ); /*lint !e961 Casting from one typedef to another is not redundant. */ |
dflet | 0:91ad48ad5687 | 2543 | return xReturn; |
dflet | 0:91ad48ad5687 | 2544 | } |
dflet | 0:91ad48ad5687 | 2545 | |
dflet | 0:91ad48ad5687 | 2546 | #endif /* configUSE_QUEUE_SETS */ |
dflet | 0:91ad48ad5687 | 2547 | /*-----------------------------------------------------------*/ |
dflet | 0:91ad48ad5687 | 2548 | |
dflet | 0:91ad48ad5687 | 2549 | #if ( configUSE_QUEUE_SETS == 1 ) |
dflet | 0:91ad48ad5687 | 2550 | |
dflet | 0:91ad48ad5687 | 2551 | static BaseType_t prvNotifyQueueSetContainer( const Queue_t * const pxQueue, const BaseType_t xCopyPosition ) |
dflet | 0:91ad48ad5687 | 2552 | { |
dflet | 0:91ad48ad5687 | 2553 | Queue_t *pxQueueSetContainer = pxQueue->pxQueueSetContainer; |
dflet | 0:91ad48ad5687 | 2554 | BaseType_t xReturn = pdFALSE; |
dflet | 0:91ad48ad5687 | 2555 | |
dflet | 0:91ad48ad5687 | 2556 | /* This function must be called from a critical section. */ |
dflet | 0:91ad48ad5687 | 2557 | |
dflet | 0:91ad48ad5687 | 2558 | configASSERT( pxQueueSetContainer ); |
dflet | 0:91ad48ad5687 | 2559 | configASSERT( pxQueueSetContainer->uxMessagesWaiting < pxQueueSetContainer->uxLength ); |
dflet | 0:91ad48ad5687 | 2560 | |
dflet | 0:91ad48ad5687 | 2561 | if( pxQueueSetContainer->uxMessagesWaiting < pxQueueSetContainer->uxLength ) |
dflet | 0:91ad48ad5687 | 2562 | { |
dflet | 0:91ad48ad5687 | 2563 | traceQUEUE_SEND( pxQueueSetContainer ); |
dflet | 0:91ad48ad5687 | 2564 | |
dflet | 0:91ad48ad5687 | 2565 | /* The data copied is the handle of the queue that contains data. */ |
dflet | 0:91ad48ad5687 | 2566 | xReturn = prvCopyDataToQueue( pxQueueSetContainer, &pxQueue, xCopyPosition ); |
dflet | 0:91ad48ad5687 | 2567 | |
dflet | 0:91ad48ad5687 | 2568 | if( pxQueueSetContainer->xTxLock == queueUNLOCKED ) |
dflet | 0:91ad48ad5687 | 2569 | { |
dflet | 0:91ad48ad5687 | 2570 | if( listLIST_IS_EMPTY( &( pxQueueSetContainer->xTasksWaitingToReceive ) ) == pdFALSE ) |
dflet | 0:91ad48ad5687 | 2571 | { |
dflet | 0:91ad48ad5687 | 2572 | if( xTaskRemoveFromEventList( &( pxQueueSetContainer->xTasksWaitingToReceive ) ) != pdFALSE ) |
dflet | 0:91ad48ad5687 | 2573 | { |
dflet | 0:91ad48ad5687 | 2574 | /* The task waiting has a higher priority. */ |
dflet | 0:91ad48ad5687 | 2575 | xReturn = pdTRUE; |
dflet | 0:91ad48ad5687 | 2576 | } |
dflet | 0:91ad48ad5687 | 2577 | else |
dflet | 0:91ad48ad5687 | 2578 | { |
dflet | 0:91ad48ad5687 | 2579 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2580 | } |
dflet | 0:91ad48ad5687 | 2581 | } |
dflet | 0:91ad48ad5687 | 2582 | else |
dflet | 0:91ad48ad5687 | 2583 | { |
dflet | 0:91ad48ad5687 | 2584 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2585 | } |
dflet | 0:91ad48ad5687 | 2586 | } |
dflet | 0:91ad48ad5687 | 2587 | else |
dflet | 0:91ad48ad5687 | 2588 | { |
dflet | 0:91ad48ad5687 | 2589 | ( pxQueueSetContainer->xTxLock )++; |
dflet | 0:91ad48ad5687 | 2590 | } |
dflet | 0:91ad48ad5687 | 2591 | } |
dflet | 0:91ad48ad5687 | 2592 | else |
dflet | 0:91ad48ad5687 | 2593 | { |
dflet | 0:91ad48ad5687 | 2594 | mtCOVERAGE_TEST_MARKER(); |
dflet | 0:91ad48ad5687 | 2595 | } |
dflet | 0:91ad48ad5687 | 2596 | |
dflet | 0:91ad48ad5687 | 2597 | return xReturn; |
dflet | 0:91ad48ad5687 | 2598 | } |
dflet | 0:91ad48ad5687 | 2599 | |
dflet | 0:91ad48ad5687 | 2600 | #endif /* configUSE_QUEUE_SETS */ |
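prvNotifyQueueSetContainer above only wakes a waiting task while the container queue's transmit lock is `queueUNLOCKED`; otherwise it just increments `xTxLock` so the wakeup can be replayed when the queue is unlocked. A self-contained sketch of that deferred-wakeup counter (hypothetical names; in the kernel the replay happens inside prvUnlockQueue):

```c
#include <assert.h>

#define UNLOCKED (-1)   /* stand-in for queueUNLOCKED */

typedef struct {
    int tx_lock;        /* UNLOCKED, or count of deferred wakeups */
    int woken;          /* tasks actually made ready              */
    int waiters;        /* tasks blocked waiting to receive       */
} toy_lockq_t;

/* On send: wake a waiter immediately if unlocked, otherwise record
   the event in the lock counter for later. */
void notify(toy_lockq_t *q)
{
    if (q->tx_lock == UNLOCKED) {
        if (q->waiters > 0) { q->waiters--; q->woken++; }
    } else {
        q->tx_lock++;
    }
}

/* On unlock: replay every deferred event, then mark unlocked. */
void unlock(toy_lockq_t *q)
{
    while (q->tx_lock > 0) {
        q->tx_lock--;
        if (q->waiters > 0) { q->waiters--; q->woken++; }
    }
    q->tx_lock = UNLOCKED;
}
```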
dflet | 0:91ad48ad5687 | 2601 | |
dflet | 0:91ad48ad5687 | 2602 | |
dflet | 0:91ad48ad5687 | 2603 | |
dflet | 0:91ad48ad5687 | 2604 | |
dflet | 0:91ad48ad5687 | 2605 | |
dflet | 0:91ad48ad5687 | 2606 | |
dflet | 0:91ad48ad5687 | 2607 | |
dflet | 0:91ad48ad5687 | 2608 | |
dflet | 0:91ad48ad5687 | 2609 | |
dflet | 0:91ad48ad5687 | 2610 | |
dflet | 0:91ad48ad5687 | 2611 | |
dflet | 0:91ad48ad5687 | 2612 | |
dflet | 0:91ad48ad5687 | 2613 |