Official mbed lwIP library (version 1.4.0)

Dependents:   LwIPNetworking NetServicesMin EthernetInterface EthernetInterface_RSF ... more

Legacy Networking Libraries

This is an mbed 2 networking library. In mbed OS 5, lwIP is integrated with the built-in networking interfaces, and the networking libraries have been revised to better support additional network stacks and thread safety.

This library is based on the code of lwIP v1.4.0

Copyright (c) 2001, 2002 Swedish Institute of Computer Science.
All rights reserved. 

Redistribution and use in source and binary forms, with or without modification, 
are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
   derived from this software without specific prior written permission. 

THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED 
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT 
SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, 
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT 
OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS 
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN 
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 
IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY 
OF SUCH DAMAGE.
Committer: mbed_official
Date: Mon Mar 14 16:15:36 2016 +0000
Revision: 20:08f08bfc3f3d
Parent: 18:2dd57fc0af78

Synchronized with git revision fec574a5ed6db26aca1b13992ff271bf527d4a0d

Full URL: https://github.com/mbedmicro/mbed/commit/fec574a5ed6db26aca1b13992ff271bf527d4a0d/

Increased allocated netbufs to handle DTLS handshakes

/**
 * @file
 * Dynamic memory manager
 *
 * This is a lightweight replacement for the standard C library malloc().
 *
 * If you want to use the standard C library malloc() instead, define
 * MEM_LIBC_MALLOC to 1 in your lwipopts.h
 *
 * To let mem_malloc() use pools (prevents fragmentation and is much faster than
 * a heap but might waste some memory), define MEM_USE_POOLS to 1, define
 * MEM_USE_CUSTOM_POOLS to 1 and create a file "lwippools.h" that includes a list
 * of pools like this (more pools can be added between _START and _END):
 *
 * Define three pools with sizes 256, 512, and 1512 bytes
 * LWIP_MALLOC_MEMPOOL_START
 * LWIP_MALLOC_MEMPOOL(20, 256)
 * LWIP_MALLOC_MEMPOOL(10, 512)
 * LWIP_MALLOC_MEMPOOL(5, 1512)
 * LWIP_MALLOC_MEMPOOL_END
 */
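As a standalone illustration of the smallest-fit pool selection described above, the following sketch (hypothetical pool sizes taken from the lwippools.h example; not part of lwIP, and ignoring the helper-header overhead that the real mem_malloc() adds) picks the first pool whose element size can hold a request:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical pool element sizes, mirroring the lwippools.h example above. */
static const size_t pool_sizes[] = { 256, 512, 1512 };
#define NUM_POOLS (sizeof(pool_sizes) / sizeof(pool_sizes[0]))

/* Return the index of the smallest pool that can hold 'size' bytes,
 * or -1 if no pool is big enough (the real code asserts in that case). */
static int pick_pool(size_t size)
{
    size_t i;
    for (i = 0; i < NUM_POOLS; i++) {
        if (size <= pool_sizes[i]) {
            return (int)i;
        }
    }
    return -1;
}
```

With these sizes, a 300-byte request lands in the 512-byte pool, and anything above 1512 bytes fails outright, which is why pool sizing has to anticipate the largest single allocation (see the DTLS netbuf change in this revision).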
mbed_official 0:51ac1d130fd4 22
mbed_official 0:51ac1d130fd4 23 /*
mbed_official 0:51ac1d130fd4 24 * Copyright (c) 2001-2004 Swedish Institute of Computer Science.
mbed_official 0:51ac1d130fd4 25 * All rights reserved.
mbed_official 0:51ac1d130fd4 26 *
mbed_official 0:51ac1d130fd4 27 * Redistribution and use in source and binary forms, with or without modification,
mbed_official 0:51ac1d130fd4 28 * are permitted provided that the following conditions are met:
mbed_official 0:51ac1d130fd4 29 *
mbed_official 0:51ac1d130fd4 30 * 1. Redistributions of source code must retain the above copyright notice,
mbed_official 0:51ac1d130fd4 31 * this list of conditions and the following disclaimer.
mbed_official 0:51ac1d130fd4 32 * 2. Redistributions in binary form must reproduce the above copyright notice,
mbed_official 0:51ac1d130fd4 33 * this list of conditions and the following disclaimer in the documentation
mbed_official 0:51ac1d130fd4 34 * and/or other materials provided with the distribution.
mbed_official 0:51ac1d130fd4 35 * 3. The name of the author may not be used to endorse or promote products
mbed_official 0:51ac1d130fd4 36 * derived from this software without specific prior written permission.
mbed_official 0:51ac1d130fd4 37 *
mbed_official 0:51ac1d130fd4 38 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
mbed_official 0:51ac1d130fd4 39 * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
mbed_official 0:51ac1d130fd4 40 * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
mbed_official 0:51ac1d130fd4 41 * SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
mbed_official 0:51ac1d130fd4 42 * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
mbed_official 0:51ac1d130fd4 43 * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
mbed_official 0:51ac1d130fd4 44 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
mbed_official 0:51ac1d130fd4 45 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
mbed_official 0:51ac1d130fd4 46 * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
mbed_official 0:51ac1d130fd4 47 * OF SUCH DAMAGE.
mbed_official 0:51ac1d130fd4 48 *
mbed_official 0:51ac1d130fd4 49 * This file is part of the lwIP TCP/IP stack.
mbed_official 0:51ac1d130fd4 50 *
mbed_official 0:51ac1d130fd4 51 * Author: Adam Dunkels <adam@sics.se>
mbed_official 0:51ac1d130fd4 52 * Simon Goldschmidt
mbed_official 0:51ac1d130fd4 53 *
mbed_official 0:51ac1d130fd4 54 */
mbed_official 0:51ac1d130fd4 55
mbed_official 0:51ac1d130fd4 56 #include "lwip/opt.h"
mbed_official 0:51ac1d130fd4 57
mbed_official 0:51ac1d130fd4 58 #if !MEM_LIBC_MALLOC /* don't build if not configured for use in lwipopts.h */
mbed_official 0:51ac1d130fd4 59
mbed_official 0:51ac1d130fd4 60 #include "lwip/def.h"
mbed_official 0:51ac1d130fd4 61 #include "lwip/mem.h"
mbed_official 0:51ac1d130fd4 62 #include "lwip/sys.h"
mbed_official 0:51ac1d130fd4 63 #include "lwip/stats.h"
mbed_official 0:51ac1d130fd4 64 #include "lwip/err.h"
mbed_official 0:51ac1d130fd4 65
mbed_official 0:51ac1d130fd4 66 #include <string.h>
mbed_official 0:51ac1d130fd4 67
#if MEM_USE_POOLS
/* lwIP heap implemented with different sized pools */

/**
 * Allocate memory: determine the smallest pool that is big enough
 * to contain an element of 'size' and get an element from that pool.
 *
 * @param size the size in bytes of the memory needed
 * @return a pointer to the allocated memory or NULL if the pool is empty
 */
void *
mem_malloc(mem_size_t size)
{
  struct memp_malloc_helper *element;
  memp_t poolnr;
  mem_size_t required_size = size + sizeof(struct memp_malloc_helper);

  for (poolnr = MEMP_POOL_FIRST; poolnr <= MEMP_POOL_LAST; poolnr = (memp_t)(poolnr + 1)) {
#if MEM_USE_POOLS_TRY_BIGGER_POOL
again:
#endif /* MEM_USE_POOLS_TRY_BIGGER_POOL */
    /* is this pool big enough to hold an element of the required size
       plus a struct memp_malloc_helper that saves the pool this element came from? */
    if (required_size <= memp_sizes[poolnr]) {
      break;
    }
  }
  if (poolnr > MEMP_POOL_LAST) {
    LWIP_ASSERT("mem_malloc(): no pool is that big!", 0);
    return NULL;
  }
  element = (struct memp_malloc_helper*)memp_malloc(poolnr);
  if (element == NULL) {
    /* No need to DEBUGF or ASSERT: This error is already
       taken care of in memp.c */
#if MEM_USE_POOLS_TRY_BIGGER_POOL
    /** Try a bigger pool if this one is empty! */
    if (poolnr < MEMP_POOL_LAST) {
      poolnr++;
      goto again;
    }
#endif /* MEM_USE_POOLS_TRY_BIGGER_POOL */
    return NULL;
  }

  /* save the pool number this element came from */
  element->poolnr = poolnr;
  /* and return a pointer to the memory directly after the struct memp_malloc_helper */
  element++;

  return element;
}

/**
 * Free memory previously allocated by mem_malloc. Loads the pool number
 * and calls memp_free with that pool number to put the element back into
 * its pool
 *
 * @param rmem the memory element to free
 */
void
mem_free(void *rmem)
{
  struct memp_malloc_helper *hmem = (struct memp_malloc_helper*)rmem;

  LWIP_ASSERT("rmem != NULL", (rmem != NULL));
  LWIP_ASSERT("rmem == MEM_ALIGN(rmem)", (rmem == LWIP_MEM_ALIGN(rmem)));

  /* get the original struct memp_malloc_helper */
  hmem--;

  LWIP_ASSERT("hmem != NULL", (hmem != NULL));
  LWIP_ASSERT("hmem == MEM_ALIGN(hmem)", (hmem == LWIP_MEM_ALIGN(hmem)));
  LWIP_ASSERT("hmem->poolnr < MEMP_MAX", (hmem->poolnr < MEMP_MAX));

  /* and put it in the pool we saved earlier */
  memp_free(hmem->poolnr, hmem);
}

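The helper-header trick used by this pool variant (mem_malloc() returns the memory directly after a struct memp_malloc_helper via `element++`, and mem_free() steps back to it with `hmem--`) can be sketched on its own. `wrap_malloc`, `wrap_free` and `struct header` below are hypothetical names, and libc malloc stands in for memp_malloc():

```c
#include <assert.h>
#include <stdlib.h>
#include <stddef.h>

/* Hypothetical stand-in for struct memp_malloc_helper: a small header
 * stored directly before the payload, remembering where the block came from. */
struct header {
    int poolnr;
};

/* Allocate 'size' payload bytes plus the header; hand back a pointer to
 * the memory directly after the header, as mem_malloc() does (element++). */
static void *wrap_malloc(size_t size, int poolnr)
{
    struct header *h = (struct header *)malloc(sizeof(*h) + size);
    if (h == NULL) {
        return NULL;
    }
    h->poolnr = poolnr;
    return h + 1;
}

/* Step back to the header (as mem_free() does with hmem--), read the
 * saved pool number and release the whole block. */
static int wrap_free(void *p)
{
    struct header *h = (struct header *)p - 1;
    int poolnr = h->poolnr;
    free(h);
    return poolnr;
}
```

The caller only ever sees the payload pointer; the bookkeeping travels with the allocation at a cost of one header per block.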
#else /* MEM_USE_POOLS */
/* lwIP replacement for your libc malloc() */

/**
 * The heap is made up as a list of structs of this type.
 * This does not have to be aligned since for getting its size,
 * we only use the macro SIZEOF_STRUCT_MEM, which automatically aligns.
 */
struct mem {
  /** index (-> ram[next]) of the next struct */
  mem_size_t next;
  /** index (-> ram[prev]) of the previous struct */
  mem_size_t prev;
  /** 1: this area is used; 0: this area is unused */
  u8_t used;
};

/** All allocated blocks will be MIN_SIZE bytes big, at least!
 * MIN_SIZE can be overridden to suit your needs. Smaller values save space,
 * larger values help keep many tiny blocks from fragmenting the RAM too much. */
#ifndef MIN_SIZE
#define MIN_SIZE             12
#endif /* MIN_SIZE */
/* some alignment macros: we define them here for better source code layout */
#define MIN_SIZE_ALIGNED     LWIP_MEM_ALIGN_SIZE(MIN_SIZE)
#define SIZEOF_STRUCT_MEM    LWIP_MEM_ALIGN_SIZE(sizeof(struct mem))
#define MEM_SIZE_ALIGNED     LWIP_MEM_ALIGN_SIZE(MEM_SIZE)

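LWIP_MEM_ALIGN_SIZE rounds a size up to the next multiple of MEM_ALIGNMENT, which is what makes MIN_SIZE_ALIGNED, SIZEOF_STRUCT_MEM and MEM_SIZE_ALIGNED safe to use as block offsets. A minimal standalone sketch of that rounding (assuming MEM_ALIGNMENT is a power of two; the value 4 here is only a common lwipopts.h choice, not a fixed lwIP default):

```c
#include <assert.h>
#include <stddef.h>

/* MEM_ALIGNMENT is normally set in lwipopts.h; 4 is a common choice. */
#define MEM_ALIGNMENT 4

/* Round 'size' up to the next multiple of MEM_ALIGNMENT, the same
 * bit trick LWIP_MEM_ALIGN_SIZE uses (requires a power-of-two alignment). */
static size_t align_size(size_t size)
{
    return (size + MEM_ALIGNMENT - 1) & ~(size_t)(MEM_ALIGNMENT - 1);
}
```

So a 13-byte request occupies 16 bytes of heap payload, and every block boundary stays aligned without per-allocation checks.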
/** If you want to relocate the heap to external memory, simply define
 * LWIP_RAM_HEAP_POINTER as a void-pointer to that location.
 * If so, make sure the memory at that location is big enough (see below on
 * how that space is calculated). */
#ifndef LWIP_RAM_HEAP_POINTER

#if defined(TARGET_LPC4088) || defined(TARGET_LPC4088_DM)
#  if defined (__ICCARM__)
#    define ETHMEM_SECTION
#  elif defined(TOOLCHAIN_GCC_CR)
#    define ETHMEM_SECTION __attribute__((section(".data.$RamPeriph32")))
#  else
#    define ETHMEM_SECTION __attribute__((section("AHBSRAM1"),aligned))
#  endif
#elif defined(TARGET_LPC1768)
#  define ETHMEM_SECTION __attribute__((section("AHBSRAM0")))
#else
#  define ETHMEM_SECTION
#endif

/** the heap. we need one struct mem at the end and some room for alignment */
u8_t ram_heap[MEM_SIZE_ALIGNED + (2*SIZEOF_STRUCT_MEM) + MEM_ALIGNMENT] ETHMEM_SECTION;
#define LWIP_RAM_HEAP_POINTER ram_heap
#endif /* LWIP_RAM_HEAP_POINTER */

/** pointer to the heap (ram_heap): for alignment, ram is now a pointer instead of an array */
static u8_t *ram;
/** the last entry, always unused! */
static struct mem *ram_end;
/** pointer to the lowest free block, this is used for faster search */
static struct mem *lfree;

/** concurrent access protection */
static sys_mutex_t mem_mutex;

#if LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT

static volatile u8_t mem_free_count;

/* Allow mem_free from other (e.g. interrupt) context */
#define LWIP_MEM_FREE_DECL_PROTECT()  SYS_ARCH_DECL_PROTECT(lev_free)
#define LWIP_MEM_FREE_PROTECT()       SYS_ARCH_PROTECT(lev_free)
#define LWIP_MEM_FREE_UNPROTECT()     SYS_ARCH_UNPROTECT(lev_free)
#define LWIP_MEM_ALLOC_DECL_PROTECT() SYS_ARCH_DECL_PROTECT(lev_alloc)
#define LWIP_MEM_ALLOC_PROTECT()      SYS_ARCH_PROTECT(lev_alloc)
#define LWIP_MEM_ALLOC_UNPROTECT()    SYS_ARCH_UNPROTECT(lev_alloc)

#else /* LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT */

/* Protect the heap only by using the mutex */
#define LWIP_MEM_FREE_DECL_PROTECT()
#define LWIP_MEM_FREE_PROTECT()    sys_mutex_lock(&mem_mutex)
#define LWIP_MEM_FREE_UNPROTECT()  sys_mutex_unlock(&mem_mutex)
/* mem_malloc is protected using the mutex AND LWIP_MEM_ALLOC_PROTECT */
#define LWIP_MEM_ALLOC_DECL_PROTECT()
#define LWIP_MEM_ALLOC_PROTECT()
#define LWIP_MEM_ALLOC_UNPROTECT()

#endif /* LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT */


/**
 * "Plug holes" by combining adjacent empty struct mems.
 * After this function is through, there should not exist
 * one empty struct mem pointing to another empty struct mem.
 *
 * @param mem this points to a struct mem which just has been freed
 * @internal this function is only called by mem_free() and mem_trim()
 *
 * This assumes access to the heap is protected by the calling function
 * already.
 */
static void
plug_holes(struct mem *mem)
{
  struct mem *nmem;
  struct mem *pmem;

  LWIP_ASSERT("plug_holes: mem >= ram", (u8_t *)mem >= ram);
  LWIP_ASSERT("plug_holes: mem < ram_end", (u8_t *)mem < (u8_t *)ram_end);
  LWIP_ASSERT("plug_holes: mem->used == 0", mem->used == 0);

  /* plug hole forward */
  LWIP_ASSERT("plug_holes: mem->next <= MEM_SIZE_ALIGNED", mem->next <= MEM_SIZE_ALIGNED);

  nmem = (struct mem *)(void *)&ram[mem->next];
  if (mem != nmem && nmem->used == 0 && (u8_t *)nmem != (u8_t *)ram_end) {
    /* if mem->next is unused and not end of ram, combine mem and mem->next */
    if (lfree == nmem) {
      lfree = mem;
    }
    mem->next = nmem->next;
    ((struct mem *)(void *)&ram[nmem->next])->prev = (mem_size_t)((u8_t *)mem - ram);
  }

  /* plug hole backward */
  pmem = (struct mem *)(void *)&ram[mem->prev];
  if (pmem != mem && pmem->used == 0) {
    /* if mem->prev is unused, combine mem and mem->prev */
    if (lfree == mem) {
      lfree = pmem;
    }
    pmem->next = mem->next;
    ((struct mem *)(void *)&ram[mem->next])->prev = (mem_size_t)((u8_t *)pmem - ram);
  }
}

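The index-linked coalescing that plug_holes() performs can be shown with a toy heap. The model below is hypothetical (headers linked by array index rather than by byte offset into ram[], and all names invented); it reproduces only the forward-plugging branch, where a freed block absorbs a free successor and re-links the successor's successor:

```c
#include <assert.h>

/* Toy model of the lwIP heap: headers linked by index, the way
 * struct mem blocks link by byte offset into ram[]. */
struct blk {
    int next;   /* index of the next block */
    int prev;   /* index of the previous block */
    int used;   /* 1: in use, 0: free */
};

#define NBLK 8
#define END  NBLK   /* sentinel index, playing the role of ram_end */

static struct blk blks[NBLK];

/* Forward-plug: if block i and its successor are both free, absorb the
 * successor into i, as plug_holes() does for 'mem' and 'nmem'. */
static void coalesce_forward(int i)
{
    int n = blks[i].next;
    if (n != END && blks[n].used == 0) {
        blks[i].next = blks[n].next;
        if (blks[i].next != END) {
            blks[blks[i].next].prev = i;   /* keep the back-link consistent */
        }
    }
}

/* Build a tiny heap (free, free, used), plug the hole at block 0 and
 * return its new 'next' index. */
static int demo_plug(void)
{
    blks[0].next = 1;   blks[0].prev = 0; blks[0].used = 0;
    blks[1].next = 2;   blks[1].prev = 0; blks[1].used = 0;
    blks[2].next = END; blks[2].prev = 1; blks[2].used = 1;
    coalesce_forward(0);
    return blks[0].next;   /* block 1 has been absorbed */
}
```

After the merge, block 0 spans what used to be blocks 0 and 1, and the used block's `prev` points back at it, which is the invariant plug_holes() maintains: no free block ever points at another free block.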
/**
 * Zero the heap and initialize start, end and lowest-free
 */
void
mem_init(void)
{
  struct mem *mem;

  LWIP_ASSERT("Sanity check alignment",
    (SIZEOF_STRUCT_MEM & (MEM_ALIGNMENT-1)) == 0);

  /* align the heap */
  ram = (u8_t *)LWIP_MEM_ALIGN(LWIP_RAM_HEAP_POINTER);
  /* initialize the start of the heap */
  mem = (struct mem *)(void *)ram;
  mem->next = MEM_SIZE_ALIGNED;
  mem->prev = 0;
  mem->used = 0;
  /* initialize the end of the heap */
  ram_end = (struct mem *)(void *)&ram[MEM_SIZE_ALIGNED];
  ram_end->used = 1;
  ram_end->next = MEM_SIZE_ALIGNED;
  ram_end->prev = MEM_SIZE_ALIGNED;

  /* initialize the lowest-free pointer to the start of the heap */
  lfree = (struct mem *)(void *)ram;

  MEM_STATS_AVAIL(avail, MEM_SIZE_ALIGNED);

  if (sys_mutex_new(&mem_mutex) != ERR_OK) {
    LWIP_ASSERT("failed to create mem_mutex", 0);
  }
}

/**
 * Put a struct mem back on the heap
 *
 * @param rmem is the data portion of a struct mem as returned by a previous
 *             call to mem_malloc()
 */
void
mem_free(void *rmem)
{
  struct mem *mem;
  LWIP_MEM_FREE_DECL_PROTECT();

  if (rmem == NULL) {
    LWIP_DEBUGF(MEM_DEBUG | LWIP_DBG_TRACE | LWIP_DBG_LEVEL_SERIOUS, ("mem_free(p == NULL) was called.\n"));
    return;
  }
  LWIP_ASSERT("mem_free: sanity check alignment", (((mem_ptr_t)rmem) & (MEM_ALIGNMENT-1)) == 0);

  LWIP_ASSERT("mem_free: legal memory", (u8_t *)rmem >= (u8_t *)ram &&
    (u8_t *)rmem < (u8_t *)ram_end);

  if ((u8_t *)rmem < (u8_t *)ram || (u8_t *)rmem >= (u8_t *)ram_end) {
    SYS_ARCH_DECL_PROTECT(lev);
    LWIP_DEBUGF(MEM_DEBUG | LWIP_DBG_LEVEL_SEVERE, ("mem_free: illegal memory\n"));
    /* protect mem stats from concurrent access */
    SYS_ARCH_PROTECT(lev);
    MEM_STATS_INC(illegal);
    SYS_ARCH_UNPROTECT(lev);
    return;
  }
  /* protect the heap from concurrent access */
  LWIP_MEM_FREE_PROTECT();
  /* Get the corresponding struct mem ... */
  mem = (struct mem *)(void *)((u8_t *)rmem - SIZEOF_STRUCT_MEM);
  /* ... which has to be in a used state ... */
  LWIP_ASSERT("mem_free: mem->used", mem->used);
  /* ... and is now unused. */
  mem->used = 0;

  if (mem < lfree) {
    /* the newly freed struct is now the lowest */
    lfree = mem;
  }

  MEM_STATS_DEC_USED(used, mem->next - (mem_size_t)(((u8_t *)mem - ram)));

  /* finally, see if prev or next are free also */
  plug_holes(mem);
#if LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT
  mem_free_count = 1;
#endif /* LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT */
  LWIP_MEM_FREE_UNPROTECT();
}

/**
 * Shrink memory returned by mem_malloc().
 *
 * @param rmem pointer to memory allocated by mem_malloc that is to be shrunk
 * @param newsize required size after shrinking (needs to be smaller than or
 *                equal to the previous size)
 * @return for compatibility reasons: is always == rmem, at the moment
 *         or NULL if newsize is > old size, in which case rmem is NOT touched
 *         or freed!
 */
void *
mem_trim(void *rmem, mem_size_t newsize)
{
  mem_size_t size;
  mem_size_t ptr, ptr2;
  struct mem *mem, *mem2;
  /* use the FREE_PROTECT here: it protects with sem OR SYS_ARCH_PROTECT */
  LWIP_MEM_FREE_DECL_PROTECT();

  /* Expand the size of the allocated memory region so that we can
     adjust for alignment. */
  newsize = LWIP_MEM_ALIGN_SIZE(newsize);

  if (newsize < MIN_SIZE_ALIGNED) {
    /* every data block must be at least MIN_SIZE_ALIGNED long */
    newsize = MIN_SIZE_ALIGNED;
  }

  if (newsize > MEM_SIZE_ALIGNED) {
    return NULL;
  }

  LWIP_ASSERT("mem_trim: legal memory", (u8_t *)rmem >= (u8_t *)ram &&
    (u8_t *)rmem < (u8_t *)ram_end);

  if ((u8_t *)rmem < (u8_t *)ram || (u8_t *)rmem >= (u8_t *)ram_end) {
    SYS_ARCH_DECL_PROTECT(lev);
    LWIP_DEBUGF(MEM_DEBUG | LWIP_DBG_LEVEL_SEVERE, ("mem_trim: illegal memory\n"));
    /* protect mem stats from concurrent access */
    SYS_ARCH_PROTECT(lev);
    MEM_STATS_INC(illegal);
    SYS_ARCH_UNPROTECT(lev);
    return rmem;
  }
  /* Get the corresponding struct mem ... */
  mem = (struct mem *)(void *)((u8_t *)rmem - SIZEOF_STRUCT_MEM);
  /* ... and its offset pointer */
  ptr = (mem_size_t)((u8_t *)mem - ram);

  size = mem->next - ptr - SIZEOF_STRUCT_MEM;
  LWIP_ASSERT("mem_trim can only shrink memory", newsize <= size);
  if (newsize > size) {
    /* not supported */
    return NULL;
  }
  if (newsize == size) {
    /* No change in size, simply return */
    return rmem;
  }

  /* protect the heap from concurrent access */
  LWIP_MEM_FREE_PROTECT();

  mem2 = (struct mem *)(void *)&ram[mem->next];
  if (mem2->used == 0) {
    /* The next struct is unused, we can simply move it a little */
    mem_size_t next;
    /* remember the old next pointer */
    next = mem2->next;
    /* create new struct mem which is moved directly after the shrunk mem */
    ptr2 = ptr + SIZEOF_STRUCT_MEM + newsize;
    if (lfree == mem2) {
      lfree = (struct mem *)(void *)&ram[ptr2];
    }
    mem2 = (struct mem *)(void *)&ram[ptr2];
    mem2->used = 0;
    /* restore the next pointer */
    mem2->next = next;
    /* link it back to mem */
    mem2->prev = ptr;
    /* link mem to it */
    mem->next = ptr2;
    /* last thing to restore linked list: as we have moved mem2,
     * let 'mem2->next->prev' point to mem2 again. but only if mem2->next is not
     * the end of the heap */
    if (mem2->next != MEM_SIZE_ALIGNED) {
      ((struct mem *)(void *)&ram[mem2->next])->prev = ptr2;
    }
    MEM_STATS_DEC_USED(used, (size - newsize));
    /* no need to plug holes, we've already done that */
  } else if (newsize + SIZEOF_STRUCT_MEM + MIN_SIZE_ALIGNED <= size) {
    /* Next struct is used but there's room for another struct mem with
     * at least MIN_SIZE_ALIGNED of data.
     * Old size ('size') must be big enough to contain at least 'newsize' plus a struct mem
     * ('SIZEOF_STRUCT_MEM') with some data ('MIN_SIZE_ALIGNED').
     * @todo we could leave out MIN_SIZE_ALIGNED. We would create an empty
     *       region that couldn't hold data, but when mem->next gets freed,
     *       the 2 regions would be combined, resulting in more free memory */
    ptr2 = ptr + SIZEOF_STRUCT_MEM + newsize;
    mem2 = (struct mem *)(void *)&ram[ptr2];
    if (mem2 < lfree) {
      lfree = mem2;
    }
    mem2->used = 0;
    mem2->next = mem->next;
    mem2->prev = ptr;
    mem->next = ptr2;
    if (mem2->next != MEM_SIZE_ALIGNED) {
      ((struct mem *)(void *)&ram[mem2->next])->prev = ptr2;
    }
    MEM_STATS_DEC_USED(used, (size - newsize));
    /* the original mem->next is used, so no need to plug holes! */
  }
  /* else {
    next struct mem is used but size between mem and mem2 is not big enough
    to create another struct mem
    -> don't do anything.
    -> the remaining space stays unused since it is too small
  } */
#if LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT
mbed_official 0:51ac1d130fd4 490 mem_free_count = 1;
mbed_official 0:51ac1d130fd4 491 #endif /* LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT */
mbed_official 0:51ac1d130fd4 492 LWIP_MEM_FREE_UNPROTECT();
mbed_official 0:51ac1d130fd4 493 return rmem;
mbed_official 0:51ac1d130fd4 494 }

/**
 * Adam's mem_malloc() plus solution for bug #17922
 * Allocate a block of memory with a minimum of 'size' bytes.
 *
 * @param size is the minimum size of the requested block in bytes.
 * @return pointer to allocated memory or NULL if no free memory was found.
 *
 * Note that the returned value will always be aligned (as defined by MEM_ALIGNMENT).
 */
void *
mem_malloc(mem_size_t size)
{
  mem_size_t ptr, ptr2;
  struct mem *mem, *mem2;
#if LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT
  u8_t local_mem_free_count = 0;
#endif /* LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT */
  LWIP_MEM_ALLOC_DECL_PROTECT();

  if (size == 0) {
    return NULL;
  }

  /* Expand the size of the allocated memory region so that we can
     adjust for alignment. */
  size = LWIP_MEM_ALIGN_SIZE(size);

  if(size < MIN_SIZE_ALIGNED) {
    /* every data block must be at least MIN_SIZE_ALIGNED long */
    size = MIN_SIZE_ALIGNED;
  }

  if (size > MEM_SIZE_ALIGNED) {
    return NULL;
  }

  /* protect the heap from concurrent access */
  sys_mutex_lock(&mem_mutex);
  LWIP_MEM_ALLOC_PROTECT();
#if LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT
  /* run as long as a mem_free disturbed mem_malloc */
  do {
    local_mem_free_count = 0;
#endif /* LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT */

    /* Scan through the heap searching for a free block that is big enough,
     * beginning with the lowest free block.
     */
    for (ptr = (mem_size_t)((u8_t *)lfree - ram); ptr < MEM_SIZE_ALIGNED - size;
         ptr = ((struct mem *)(void *)&ram[ptr])->next) {
      mem = (struct mem *)(void *)&ram[ptr];
#if LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT
      mem_free_count = 0;
      LWIP_MEM_ALLOC_UNPROTECT();
      /* allow mem_free to run */
      LWIP_MEM_ALLOC_PROTECT();
      if (mem_free_count != 0) {
        local_mem_free_count = mem_free_count;
      }
      mem_free_count = 0;
#endif /* LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT */

      if ((!mem->used) &&
          (mem->next - (ptr + SIZEOF_STRUCT_MEM)) >= size) {
        /* mem is not used and at least perfect fit is possible:
         * mem->next - (ptr + SIZEOF_STRUCT_MEM) gives us the 'user data size' of mem */

        if (mem->next - (ptr + SIZEOF_STRUCT_MEM) >= (size + SIZEOF_STRUCT_MEM + MIN_SIZE_ALIGNED)) {
          /* (in addition to the above, we test if another struct mem (SIZEOF_STRUCT_MEM) containing
           * at least MIN_SIZE_ALIGNED of data also fits in the 'user data space' of 'mem')
           * -> split large block, create empty remainder,
           * remainder must be large enough to contain MIN_SIZE_ALIGNED data: if
           * mem->next - (ptr + (2*SIZEOF_STRUCT_MEM)) == size,
           * struct mem would fit in but no data between mem2 and mem2->next
           * @todo we could leave out MIN_SIZE_ALIGNED. We would create an empty
           *       region that couldn't hold data, but when mem->next gets freed,
           *       the 2 regions would be combined, resulting in more free memory
           */
          ptr2 = ptr + SIZEOF_STRUCT_MEM + size;
          /* create mem2 struct */
          mem2 = (struct mem *)(void *)&ram[ptr2];
          mem2->used = 0;
          mem2->next = mem->next;
          mem2->prev = ptr;
          /* and insert it between mem and mem->next */
          mem->next = ptr2;
          mem->used = 1;

          if (mem2->next != MEM_SIZE_ALIGNED) {
            ((struct mem *)(void *)&ram[mem2->next])->prev = ptr2;
          }
          MEM_STATS_INC_USED(used, (size + SIZEOF_STRUCT_MEM));
        } else {
          /* (a mem2 struct does not fit into the user data space of mem, and mem->next will
           * always be used at this point: if not, we would have 2 unused structs in a row and
           * plug_holes should have taken care of this).
           * -> near fit or exact fit: do not split, no mem2 creation
           * also can't move mem->next directly behind mem, since mem->next
           * will always be used at this point!
           */
          mem->used = 1;
          MEM_STATS_INC_USED(used, mem->next - (mem_size_t)((u8_t *)mem - ram));
        }

        if (mem == lfree) {
          /* Find next free block after mem and update lowest free pointer */
          while (lfree->used && lfree != ram_end) {
            LWIP_MEM_ALLOC_UNPROTECT();
            /* prevent high interrupt latency... */
            LWIP_MEM_ALLOC_PROTECT();
            lfree = (struct mem *)(void *)&ram[lfree->next];
          }
          LWIP_ASSERT("mem_malloc: !lfree->used", ((lfree == ram_end) || (!lfree->used)));
        }
        LWIP_MEM_ALLOC_UNPROTECT();
        sys_mutex_unlock(&mem_mutex);
        LWIP_ASSERT("mem_malloc: allocated memory not above ram_end.",
         (mem_ptr_t)mem + SIZEOF_STRUCT_MEM + size <= (mem_ptr_t)ram_end);
        LWIP_ASSERT("mem_malloc: allocated memory properly aligned.",
         ((mem_ptr_t)mem + SIZEOF_STRUCT_MEM) % MEM_ALIGNMENT == 0);
        LWIP_ASSERT("mem_malloc: sanity check alignment",
          (((mem_ptr_t)mem) & (MEM_ALIGNMENT-1)) == 0);

        return (u8_t *)mem + SIZEOF_STRUCT_MEM;
      }
    }
#if LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT
    /* if we got interrupted by a mem_free, try again */
  } while(local_mem_free_count != 0);
#endif /* LWIP_ALLOW_MEM_FREE_FROM_OTHER_CONTEXT */
  LWIP_DEBUGF(MEM_DEBUG | LWIP_DBG_LEVEL_SERIOUS, ("mem_malloc: could not allocate %"S16_F" bytes\n", (s16_t)size));
  MEM_STATS_INC(err);
  LWIP_MEM_ALLOC_UNPROTECT();
  sys_mutex_unlock(&mem_mutex);
  return NULL;
}

#endif /* MEM_USE_POOLS */
/**
 * Contiguously allocates enough space for count objects that are size bytes
 * of memory each and returns a pointer to the allocated memory.
 *
 * The allocated memory is filled with bytes of value zero.
 *
 * @param count number of objects to allocate
 * @param size size of the objects to allocate
 * @return pointer to allocated memory / NULL pointer if there is an error
 */
void *mem_calloc(mem_size_t count, mem_size_t size)
{
  void *p;

  /* allocate 'count' objects of size 'size' */
  p = mem_malloc(count * size);
  if (p) {
    /* zero the memory */
    memset(p, 0, count * size);
  }
  return p;
}

#endif /* !MEM_LIBC_MALLOC */