#ifdef MALLOC_PROVIDED
int _dummy_mallocr = 1;
#else
/* ---------- To make a malloc.h, start cutting here ------------ */

/*
  A version of malloc/free/realloc written by Doug Lea and released to the
  public domain.  Send questions/comments/complaints/performance data
  to dl@cs.oswego.edu

* VERSION 2.6.4  Thu Nov 28 07:54:55 1996  Doug Lea  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at
           ftp://g.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and tunable.
  Consistent balance across these factors results in a good general-purpose
  allocator. For a high-level description, see
     http://g.oswego.edu/dl/html/malloc.html

* Synopsis of public routines

  (Much fuller descriptions are contained in the program documentation below.)

  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
     if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p, or no effect if p is null.
  realloc(Void_t* p, size_t n);
     Return a pointer to a chunk of size n that contains the same data
     as does chunk p up to the minimum of (n, p's size) bytes, or null
     if no space is available. The returned pointer may or may not be
     the same as p. If p is null, equivalent to malloc.  Unless the
     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
     size argument of zero (re)allocates a minimum-sized chunk.
  memalign(size_t alignment, size_t n);
     Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of
     two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
     size of the system (or as near to this as can be figured out from
     all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
     round up n to nearest pagesize.
  calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
  malloc_trim(size_t pad);
     Release all but pad bytes of freed top-most memory back
     to the system. Return 1 if successful, else 0.
  malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
     chunk p. This may or may not report more bytes than were requested,
     due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics on stderr.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
  mallopt(int parameter_number, int parameter_value)
     Changes one of the tunable parameters described below. Returns
     1 if successful in changing the parameter, else 0.
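
  For illustration (an example added to this summary, not part of the
  original documentation), a typical calling sequence is:

    char* p = (char*) malloc(100);    allocate at least 100 usable bytes
    p = (char*) realloc(p, 200);      grow; the block may move
    free(p);                          release the chunk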

* Vital statistics:

  Alignment:                            8-byte
       8 byte alignment is currently hardwired into the design.  This
       seems to suffice for all current machines and C compilers.

  Assumed pointer representation:       4 or 8 bytes
       Code for 8-byte pointers is untested by me but has worked
       reliably by Wolfram Gloger, who contributed most of the
       changes supporting this.

  Assumed size_t  representation:       4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.

  Minimum overhead per allocated chunk: 4 or 8 bytes
       Each malloced chunk has a hidden overhead of 4 bytes holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
       ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field
       and 8 (16) bytes for free list pointers. Thus, the minimum
       allocatable size is 16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

  Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
                          8-byte size_t: 2^63 - 16 bytes

       It is assumed that (possibly signed) size_t bit values suffice to
       represent chunk sizes. `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type. To be conservative, values that would appear
       as negative numbers are avoided.
       Requests for sizes with a negative sign bit will return a
       minimum-sized chunk.

  Maximum overhead wastage per allocated chunk: normally 15 bytes

       Alignment demands, plus the minimum allocatable size restriction
       make the normal worst-case wastage 15 bytes (i.e., up to 15
       more bytes will be allocated than were requested in malloc), with
       two exceptions:
         1. Because requests for zero bytes allocate non-zero space,
            the worst case wastage for a request of zero bytes is 24 bytes.
         2. For requests >= mmap_threshold that are serviced via
            mmap(), the worst case wastage is 8 bytes plus the remainder
            from a system page (the minimal mmap unit); typically 4096 bytes.

* Limitations

    Here are some features that are NOT currently supported

    * No user-definable hooks for callbacks and the like.
    * No automated mechanism for fully checking that all accesses
      to malloced memory stay within their bounds.
    * No support for compaction.

* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below. It has been tested most extensively on Solaris and
    Linux. It is also reported to work on WIN32 platforms.
    People have also reported adapting this malloc for use in
    stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C.  Among other
    consequences, it uses a lot of macros.  Because of this, to be at
    all usable, this code should be compiled using an optimizing compiler
    (for example gcc -O2) that can simplify expressions and control
    paths.

  __STD_C                  (default: derived from C compiler defines)
     Nonzero if using ANSI-standard C compiler, a C++ compiler, or
     a C compiler sufficiently close to ANSI to get away with it.
  DEBUG                    (default: NOT defined)
     Define to enable debugging. Adds fairly extensive assertion-based
     checking to help track down memory errors, but noticeably slows down
     execution.
  SEPARATE_OBJECTS         (default: NOT defined)
     Define this to compile into separate .o files.  You must then
     compile malloc.c several times, defining a DEFINE_* macro each
     time.  The list of DEFINE_* macros appears below.
  MALLOC_LOCK              (default: NOT defined)
  MALLOC_UNLOCK            (default: NOT defined)
     Define these to C expressions which are run to lock and unlock
     the malloc data structures.  Calls may be nested; that is,
     MALLOC_LOCK may be called more than once before the corresponding
     MALLOC_UNLOCK calls.  MALLOC_LOCK must avoid waiting for a lock
     that it already holds.
  MALLOC_ALIGNMENT          (default: NOT defined)
     Define this to 16 if you need 16-byte alignment instead of the
     normal default of 8-byte alignment.
  SIZE_T_SMALLER_THAN_LONG (default: NOT defined)
     Define this when the platform you are compiling for has
     sizeof(long) > sizeof(size_t).
     The option causes some extra code to be generated to handle operations
     that use size_t operands and have long results.
  REALLOC_ZERO_BYTES_FREES (default: NOT defined)
     Define this if you think that realloc(p, 0) should be equivalent
     to free(p). Otherwise, since malloc returns a unique pointer for
     malloc(0), so does realloc(p, 0).
  HAVE_MEMCPY               (default: defined)
     Define if you are not otherwise using ANSI STD C, but still
     have memcpy and memset in your C library and want to use them.
     Otherwise, simple internal versions are supplied.
  USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
     Define as 1 if you want the C library versions of memset and
     memcpy called in realloc and calloc (otherwise macro versions are used).
     At least on some platforms, the simple macro versions usually
     outperform libc versions.
  HAVE_MMAP                 (default: defined as 1)
     Define to non-zero to optionally make malloc() use mmap() to
     allocate very large blocks.
  HAVE_MREMAP                 (default: defined as 0 unless Linux libc set)
     Define to non-zero to optionally make realloc() use mremap() to
     reallocate very large blocks.
  malloc_getpagesize        (default: derived from system #includes)
     Either a constant or routine call returning the system page size.
  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
     Optionally define if you are on a system with a /usr/include/malloc.h
     that declares struct mallinfo. It is not at all necessary to
     define this even if you do, but will ensure consistency.
  INTERNAL_SIZE_T           (default: size_t)
     Define to a 32-bit type (probably `unsigned int') if you are on a
     64-bit machine, yet do not want or need to allow malloc requests of
     greater than 2^31 to be handled. This saves space, especially for
     very small chunks.
  INTERNAL_LINUX_C_LIB      (default: NOT defined)
     Defined only when compiled as part of Linux libc.
     Also note that there is some odd internal name-mangling via defines
     (for example, internally, `malloc' is named `mALLOc') needed
     when compiling in this case. These look funny but don't otherwise
     affect anything.
  INTERNAL_NEWLIB           (default: NOT defined)
     Defined only when compiled as part of the Cygnus newlib
     distribution.
  WIN32                     (default: undefined)
     Define this on MS win (95, nt) platforms to compile in sbrk emulation.
  LACKS_UNISTD_H            (default: undefined)
     Define this if your system does not have a <unistd.h>.
  MORECORE                  (default: sbrk)
     The name of the routine to call to obtain more memory from the system.
  MORECORE_FAILURE          (default: -1)
     The value returned upon failure of MORECORE.
  MORECORE_CLEARS           (default 1)
     True (1) if the routine mapped to MORECORE zeroes out memory (which
     holds for sbrk).
  DEFAULT_TRIM_THRESHOLD
  DEFAULT_TOP_PAD
  DEFAULT_MMAP_THRESHOLD
  DEFAULT_MMAP_MAX
     Default values of tunable parameters (described in detail below)
     controlling interaction with host system routines (sbrk, mmap, etc).
     These values may also be changed dynamically via mallopt(). The
     preset defaults are those that give best performance for typical
     programs/systems.
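
  As an illustration only (these exact command lines are assumptions,
  not part of this distribution), a POSIX build might use

    cc -O2 -DHAVE_MMAP=1 -DDEFAULT_TRIM_THRESHOLD='(256L*1024L)' -c mallocr.c

  while an embedded target lacking <unistd.h> might instead compile with

    cc -O2 -DLACKS_UNISTD_H -DMORECORE=my_sbrk -DHAVE_MMAP=0 -c mallocr.c

  where my_sbrk is a user-supplied routine with the same contract as sbrk.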


*/




/* Preliminaries */

#ifndef __STD_C
#ifdef __STDC__
#define __STD_C     1
#else
#if __cplusplus
#define __STD_C     1
#else
#define __STD_C     0
#endif /*__cplusplus*/
#endif /*__STDC__*/
#endif /*__STD_C*/

#ifndef Void_t
#if __STD_C
#define Void_t      void
#else
#define Void_t      char
#endif
#endif /*Void_t*/

#if __STD_C
#include <stddef.h>   /* for size_t */
#else
#include <sys/types.h>
#endif

#ifdef __cplusplus
extern "C" {
#endif

#include <stdio.h>    /* needed for malloc_stats */


/*
  Compile-time options
*/


/*

  Special defines for Cygnus newlib distribution.

 */

#ifdef INTERNAL_NEWLIB

#include <sys/config.h>

/*
  In newlib, all the publicly visible routines take a reentrancy
  pointer.  We don't currently do anything much with it, but we do
  pass it to the lock routine.
 */

#include <reent.h>

#define POINTER_UINT unsigned _POINTER_INT
#define SEPARATE_OBJECTS
#define HAVE_MMAP 0
#define MORECORE(size) _sbrk_r(reent_ptr, (size))
#define MORECORE_CLEARS 0
#define MALLOC_LOCK __malloc_lock(reent_ptr)
#define MALLOC_UNLOCK __malloc_unlock(reent_ptr)

#ifdef __CYGWIN__
# undef _WIN32
# undef WIN32
#endif

#ifndef _WIN32
#ifdef SMALL_MEMORY
#define malloc_getpagesize (128)
#else
#define malloc_getpagesize (4096)
#endif
#endif

#if __STD_C
extern void __malloc_lock(struct _reent *);
extern void __malloc_unlock(struct _reent *);
#else
extern void __malloc_lock();
extern void __malloc_unlock();
#endif

#if __STD_C
#define RARG struct _reent *reent_ptr,
#define RONEARG struct _reent *reent_ptr
#else
#define RARG reent_ptr
#define RONEARG reent_ptr
#define RDECL struct _reent *reent_ptr;
#endif

#define RCALL reent_ptr,
#define RONECALL reent_ptr

#else /* ! INTERNAL_NEWLIB */

#define POINTER_UINT unsigned long
#define RARG
#define RONEARG
#define RDECL
#define RCALL
#define RONECALL

#endif /* ! INTERNAL_NEWLIB */
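
/*
   Illustrative expansion (an added note, not in the original source):
   under INTERNAL_NEWLIB, the declaration

     Void_t* mALLOc(RARG size_t bytes);

   becomes, via RARG above and the name mapping further below,

     void* _malloc_r(struct _reent *reent_ptr, size_t bytes);

   whereas without INTERNAL_NEWLIB the R* macros expand to nothing and
   the plain ANSI signature void* malloc(size_t bytes) remains.
*/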

/*
    Debugging:

    Because freed chunks may be overwritten with link fields, this
    malloc will often die when freed memory is overwritten by user
    programs.  This can be very effective (albeit in an annoying way)
    in helping track down dangling pointers.

    If you compile with -DDEBUG, a number of assertion checks are
    enabled that will catch more memory errors. You probably won't be
    able to make much sense of the actual assertion errors, but they
    should help you locate incorrectly overwritten memory.  The
    checking is fairly extensive, and will slow down execution
    noticeably. Calling malloc_stats or mallinfo with DEBUG set will
    attempt to check every non-mmapped allocated and free chunk in the
    course of computing the summaries. (By nature, mmapped regions
    cannot be checked very much automatically.)

    Setting DEBUG may also be helpful if you are trying to modify
    this code. The assertions in the check routines spell out in more
    detail the assumptions and invariants underlying the algorithms.

*/

#if DEBUG
#include <assert.h>
#else
#define assert(x) ((void)0)
#endif
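
/* For instance (an illustrative command, not part of this file), a
   debugging build could be produced with:  gcc -O2 -DDEBUG -c mallocr.c */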


/*
  SEPARATE_OBJECTS should be defined if you want each function to go
  into a separate .o file.  You must then compile malloc.c once per
  function, defining the appropriate DEFINE_ macro.  See below for the
  list of macros.
 */

#ifndef SEPARATE_OBJECTS
#define DEFINE_MALLOC
#define DEFINE_FREE
#define DEFINE_REALLOC
#define DEFINE_CALLOC
#define DEFINE_CFREE
#define DEFINE_MEMALIGN
#define DEFINE_VALLOC
#define DEFINE_PVALLOC
#define DEFINE_MALLINFO
#define DEFINE_MALLOC_STATS
#define DEFINE_MALLOC_USABLE_SIZE
#define DEFINE_MALLOPT

#define STATIC static
#else
#define STATIC
#endif
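
/*
   Example build recipe (assumed, not shipped with this file): with
   SEPARATE_OBJECTS, each entry point is compiled separately, e.g.

     gcc -O2 -DSEPARATE_OBJECTS -DDEFINE_MALLOC -c mallocr.c -o malloc.o
     gcc -O2 -DSEPARATE_OBJECTS -DDEFINE_FREE   -c mallocr.c -o free.o

   and so on, once per DEFINE_* macro listed above.
*/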

/*
   Define MALLOC_LOCK and MALLOC_UNLOCK to C expressions to run to
   lock and unlock the malloc data structures.  MALLOC_LOCK may be
   called recursively.
 */

#ifndef MALLOC_LOCK
#define MALLOC_LOCK
#endif

#ifndef MALLOC_UNLOCK
#define MALLOC_UNLOCK
#endif

/*
  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  of chunk sizes. On a 64-bit machine, you can reduce malloc
  overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
  at the expense of not being able to handle requests greater than
  2^31. This limitation is hardly ever a concern; you are encouraged
  to set this. However, the default version is the same as size_t.
*/

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif

/*
  Following is needed on implementations whereby long > size_t.
  The problem is caused because the code performs subtractions of
  size_t values and stores the result in long values.  In the case
  where long > size_t and the first value is actually less than
  the second value, the resultant value is positive.  For example,
  (long)(x - y) where x = 0 and y is 1 ends up being 0x00000000FFFFFFFF
  which is 2^32 - 1 instead of 0xFFFFFFFFFFFFFFFF.  This is due to the
  fact that assignment from unsigned to signed won't sign extend.
*/

#ifdef SIZE_T_SMALLER_THAN_LONG
#define long_sub_size_t(x, y) ( (x < y) ? -((long)(y - x)) : (x - y) )
#else
#define long_sub_size_t(x, y) ( (long)(x - y) )
#endif
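
/*
   Worked example (illustrative): with 32-bit size_t and 64-bit long,
   x = 0 and y = 1 give x - y == 0xFFFFFFFF.  A plain (long)(x - y)
   is then 4294967295 rather than -1, since unsigned-to-signed
   assignment does not sign-extend; the SIZE_T_SMALLER_THAN_LONG
   variant compares first and negates explicitly, yielding -1.
*/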

/*
  REALLOC_ZERO_BYTES_FREES should be set if a call to
  realloc with zero bytes should be the same as a call to free.
  Some people think it should. Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).
*/


/*   #define REALLOC_ZERO_BYTES_FREES */


/*
  WIN32 causes an emulation of sbrk to be compiled in.
  mmap-based options are not currently supported in WIN32.
*/

/* #define WIN32 */
#ifdef WIN32
#define MORECORE wsbrk
#define HAVE_MMAP 0
#endif


/*
  HAVE_MEMCPY should be defined if you are not otherwise using
  ANSI STD C, but still have memcpy and memset in your C library
  and want to use them in calloc and realloc. Otherwise simple
  macro versions are defined here.

  USE_MEMCPY should be defined as 1 if you actually want to
  have memset and memcpy called. People report that the macro
  versions are often enough faster than libc versions on many
  systems that it is better to use them.

*/

#define HAVE_MEMCPY

#ifndef USE_MEMCPY
#ifdef HAVE_MEMCPY
#define USE_MEMCPY 1
#else
#define USE_MEMCPY 0
#endif
#endif

#if (__STD_C || defined(HAVE_MEMCPY))

#if __STD_C
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
#else
Void_t* memset();
Void_t* memcpy();
#endif
#endif

#if USE_MEMCPY

/* The following macros are only invoked with (2n+1)-multiples of
   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
   for fast inline execution when n is small. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
  if(mzsz <= 9*sizeof(mzsz)) {                                                \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
                                     *mz++ = 0;                               \
        if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
                                     *mz++ = 0; }}}                           \
                                     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
                                     *mz   = 0;                               \
  } else memset((charp), 0, mzsz);                                            \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
  if(mcsz <= 9*sizeof(mcsz)) {                                                \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
        if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++; }}}                 \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst   = *mcsrc  ;                     \
  } else memcpy(dest, src, mcsz);                                             \
} while(0)

#else /* !USE_MEMCPY */

/* Use Duff's device for good zeroing/copying performance. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mzp++ = 0;                                             \
    case 7:           *mzp++ = 0;                                             \
    case 6:           *mzp++ = 0;                                             \
    case 5:           *mzp++ = 0;                                             \
    case 4:           *mzp++ = 0;                                             \
    case 3:           *mzp++ = 0;                                             \
    case 2:           *mzp++ = 0;                                             \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
  }                                                                           \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
    case 7:           *mcdst++ = *mcsrc++;                                    \
    case 6:           *mcdst++ = *mcsrc++;                                    \
    case 5:           *mcdst++ = *mcsrc++;                                    \
    case 4:           *mcdst++ = *mcsrc++;                                    \
    case 3:           *mcdst++ = *mcsrc++;                                    \
    case 2:           *mcdst++ = *mcsrc++;                                    \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
  }                                                                           \
} while(0)

#endif


/*
  Define HAVE_MMAP to optionally make malloc() use mmap() to
  allocate very large blocks.  These will be returned to the
  operating system immediately after a free().
*/

#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif

/*
  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks.  This is currently only possible on Linux with
  kernel versions newer than 1.3.77.
*/

#ifndef HAVE_MREMAP
#ifdef INTERNAL_LINUX_C_LIB
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif
#endif

#if HAVE_MMAP

#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif

#endif /* HAVE_MMAP */

/*
  Access to system page size. To the extent possible, this malloc
  manages memory from the system in page-size units.

  The following mechanics for getpagesize were adapted from
  bsd/gnu getpagesize.h
*/

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif

#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      include <sys/param.h>
#      ifdef EXEC_PAGESIZE
#        define malloc_getpagesize EXEC_PAGESIZE
#      else
#        ifdef NBPG
#          ifndef CLSIZE
#            define malloc_getpagesize NBPG
#          else
#            define malloc_getpagesize (NBPG * CLSIZE)
#          endif
#        else
#          ifdef NBPC
#            define malloc_getpagesize NBPC
#          else
#            ifdef PAGESIZE
#              define malloc_getpagesize PAGESIZE
#            else
#              define malloc_getpagesize (4096) /* just guess */
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif



/*

  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing the same kind of
  information you can get from malloc_stats. It should work on
  any SVID/XPG compliant system that has a /usr/include/malloc.h
  defining struct mallinfo. (If you'd like to install such a thing
  yourself, cut out the preliminary declarations as described above
  and below and save them in a malloc.h file. But there's no
  compelling reason to bother to do this.)

  The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
  bunch of fields, most of which are not even meaningful in this
  version of malloc. Some of these fields are instead filled by
  mallinfo() with other numbers that might possibly be of interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
  version is declared below.  These must be precisely the same for
  mallinfo() to work.

*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#if HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
  int arena;    /* total space allocated from system */
  int ordblks;  /* number of non-inuse chunks */
  int smblks;   /* unused -- always zero */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* total space in mmapped regions */
  int usmblks;  /* unused -- always zero */
  int fsmblks;  /* unused -- always zero */
  int uordblks; /* total allocated space */
  int fordblks; /* total non-inuse space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};
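
/*
   Usage sketch (illustrative; mallinfo returns the struct by value):

     struct mallinfo mi = mallinfo();
     fprintf(stderr, "allocated: %d  free: %d\n", mi.uordblks, mi.fordblks);
*/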

/* SVID2/XPG mallopt options */

#define M_MXFAST  1    /* UNUSED in this malloc */
#define M_NLBLKS  2    /* UNUSED in this malloc */
#define M_GRAIN   3    /* UNUSED in this malloc */
#define M_KEEP    4    /* UNUSED in this malloc */

#endif

/* mallopt options that actually do something */

#define M_TRIM_THRESHOLD    -1
#define M_TOP_PAD           -2
#define M_MMAP_THRESHOLD    -3
#define M_MMAP_MAX          -4
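
/*
   Example calls (illustrative values, not recommendations):

     mallopt(M_TRIM_THRESHOLD, 256*1024);   trim only above 256K; returns 1
     mallopt(M_MMAP_MAX, 0);                disable all use of mmap
*/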



#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128L * 1024L)
#endif

/*
    M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
      to keep before releasing via malloc_trim in free().

      Automatic trimming is mainly useful in long-lived programs.
      Because trimming via sbrk can be slow on some systems, and can
      sometimes be wasteful (in cases where programs immediately
      afterward allocate more large chunks) the value should be high
      enough so that your overall system performance would improve by
      releasing.

      The trim threshold and the mmap control parameters (see below)
      can be traded off with one another. Trimming and mmapping are
      two different ways of releasing unused memory back to the
      system. Between these two, it is often possible to keep
      system-level demands of a long-lived program down to a bare
      minimum. For example, in one test suite of sessions measuring
      the XF86 X server on Linux, using a trim threshold of 128K and a
      mmap threshold of 192K led to near-minimal long term resource
      consumption.

      If you are using this malloc in a long-lived program, it should
      pay to experiment with these values.  As a rough guide, you
      might set to a value close to the average size of a process
      (program) running on your system.  Releasing this much memory
      would allow such a process to run in memory.  Generally, it's
      worth it to tune for trimming rather than memory mapping when a
      program undergoes phases where several large chunks are
      allocated and released in ways that can reuse each other's
      storage, perhaps mixed with phases where there are no such
      chunks at all.  And in well-behaved long-lived programs,
      controlling release of large blocks via trimming versus mapping
      is usually faster.

      However, in most programs, these parameters serve mainly as
      protection against the system-level effects of carrying around
      massive amounts of unneeded memory. Since frequent calls to
      sbrk, mmap, and munmap otherwise degrade performance, the default
      parameters are set to relatively high values that serve only as
      safeguards.

      The default trim value is high enough to cause trimming only in
      fairly extreme (by current memory consumption standards) cases.
      It must be greater than page size to have any useful effect.  To
      disable trimming completely, you can set it to (unsigned long)(-1).


*/


#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD        (0)
#endif

/*
    M_TOP_PAD is the amount of extra `padding' space to allocate or
      retain whenever sbrk is called. It is used in two ways internally:

      * When sbrk is called to extend the top of the arena to satisfy
        a new malloc request, this much padding is added to the sbrk
        request.

      * When malloc_trim is called automatically from free(),
        it is used as the `pad' argument.

      In both cases, the actual amount of padding is rounded
      so that the end of the arena is always a system page boundary.

      The main reason for using padding is to avoid calling sbrk so
      often. Having even a small pad greatly reduces the likelihood
      that nearly every malloc request during program start-up (or
      after trimming) will invoke sbrk, which needlessly wastes
      time.

      Automatic rounding-up to page-size units is normally sufficient
      to avoid measurable overhead, so the default is 0.  However, in
      systems where sbrk is relatively slow, it can pay to increase
      this value, at the expense of carrying around more memory than
      the program needs.

*/


#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

/*

    M_MMAP_THRESHOLD is the request size threshold for using mmap()
      to service a request. Requests of at least this size that cannot
      be allocated using already-existing space will be serviced via mmap.
      (If enough normal freed space already exists it is used instead.)

      Using mmap segregates relatively large chunks of memory so that
      they can be individually obtained and released from the host
      system. A request serviced through mmap is never reused by any
      other request (at least not directly; the system may just so
      happen to remap successive requests to the same locations).

      Segregating space in this way has the benefit that mmapped space
      can ALWAYS be individually released back to the system, which
      helps keep the system level memory demands of a long-lived
      program low. Mapped memory can never become `locked' between
      other chunks, as can happen with normally allocated chunks, which
      means that even trimming via malloc_trim would not release them.

      However, it has the disadvantages that:

         1. The space cannot be reclaimed, consolidated, and then
            used to service later requests, as happens with normal chunks.
         2. It can lead to more wastage because of mmap page alignment
            requirements.
         3. It causes malloc performance to be more dependent on host
            system memory management support routines which may vary in
            implementation quality and may impose arbitrary
            limitations. Generally, servicing a request via normal
            malloc steps is faster than going through a system's mmap.

      All together, these considerations should lead you to use mmap
      only for relatively large requests.


*/



#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX       (64)
#else
#define DEFAULT_MMAP_MAX       (0)
#endif
#endif

/*
    M_MMAP_MAX is the maximum number of requests to simultaneously
      service using mmap. This parameter exists because:

         1. Some systems have a limited number of internal tables for
            use by mmap.
         2. In most systems, overreliance on mmap can degrade overall
            performance.
         3. If a program allocates many large regions, it is probably
            better off using normal sbrk-based allocation routines that
            can reclaim and reallocate normal heap memory. Using a
            small value allows transition into this mode after the
            first few allocations.

      Setting to 0 disables all use of mmap.  If HAVE_MMAP is not set,
      the default value is 0, and attempts to set it to non-zero values
      in mallopt will fail.
*/




/*

  Special defines for linux libc

  Except when compiled using these special defines for Linux libc
  using weak aliases, this malloc is NOT designed to work in
  multithreaded applications.  No semaphores or other concurrency
  control are provided to ensure that multiple malloc or free calls
  don't run at the same time, which could be disastrous. A single
  semaphore could be used across malloc, realloc, and free (which is
  essentially the effect of the linux weak alias approach). It would
  be hard to obtain finer granularity.

*/


#ifdef INTERNAL_LINUX_C_LIB

#if __STD_C

Void_t * __default_morecore_init (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;

#else

Void_t * __default_morecore_init ();
Void_t *(*__morecore)() = __default_morecore_init;

#endif

#define MORECORE (*__morecore)
#define MORECORE_FAILURE 0
#define MORECORE_CLEARS 1

#else /* INTERNAL_LINUX_C_LIB */

#ifndef INTERNAL_NEWLIB
#if __STD_C
extern Void_t*     sbrk(ptrdiff_t);
#else
extern Void_t*     sbrk();
#endif
#endif

#ifndef MORECORE
#define MORECORE sbrk
#endif

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE -1
#endif

#ifndef MORECORE_CLEARS
#define MORECORE_CLEARS 1
#endif

#endif /* INTERNAL_LINUX_C_LIB */

#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)

#define cALLOc          __libc_calloc
#define fREe            __libc_free
#define mALLOc          __libc_malloc
#define mEMALIGn        __libc_memalign
#define rEALLOc         __libc_realloc
#define vALLOc          __libc_valloc
#define pvALLOc         __libc_pvalloc
#define mALLINFo        __libc_mallinfo
#define mALLOPt         __libc_mallopt

#pragma weak calloc = __libc_calloc
#pragma weak free = __libc_free
#pragma weak cfree = __libc_free
#pragma weak malloc = __libc_malloc
#pragma weak memalign = __libc_memalign
#pragma weak realloc = __libc_realloc
#pragma weak valloc = __libc_valloc
#pragma weak pvalloc = __libc_pvalloc
#pragma weak mallinfo = __libc_mallinfo
#pragma weak mallopt = __libc_mallopt

#else

#ifdef INTERNAL_NEWLIB

#define cALLOc          _calloc_r
#define fREe            _free_r
#define mALLOc          _malloc_r
#define mEMALIGn        _memalign_r
#define rEALLOc         _realloc_r
#define vALLOc          _valloc_r
#define pvALLOc         _pvalloc_r
#define mALLINFo        _mallinfo_r
#define mALLOPt         _mallopt_r

#define malloc_stats                    _malloc_stats_r
#define malloc_trim                     _malloc_trim_r
#define malloc_usable_size              _malloc_usable_size_r

#define malloc_update_mallinfo          __malloc_update_mallinfo

#define malloc_av_                      __malloc_av_
#define malloc_current_mallinfo         __malloc_current_mallinfo
#define malloc_max_sbrked_mem           __malloc_max_sbrked_mem
#define malloc_max_total_mem            __malloc_max_total_mem
#define malloc_sbrk_base                __malloc_sbrk_base
#define malloc_top_pad                  __malloc_top_pad
#define malloc_trim_threshold           __malloc_trim_threshold

#else /* ! INTERNAL_NEWLIB */

#define cALLOc          calloc
#define fREe            free
#define mALLOc          malloc
#define mEMALIGn        memalign
#define rEALLOc         realloc
#define vALLOc          valloc
#define pvALLOc         pvalloc
#define mALLINFo        mallinfo
#define mALLOPt         mallopt

#endif /* ! INTERNAL_NEWLIB */
#endif

/* Public routines */

#if __STD_C

Void_t* mALLOc(RARG size_t);
void    fREe(RARG Void_t*);
Void_t* rEALLOc(RARG Void_t*, size_t);
Void_t* mEMALIGn(RARG size_t, size_t);
Void_t* vALLOc(RARG size_t);
Void_t* pvALLOc(RARG size_t);
Void_t* cALLOc(RARG size_t, size_t);
void    cfree(Void_t*);
int     malloc_trim(RARG size_t);
size_t  malloc_usable_size(RARG Void_t*);
void    malloc_stats(RONEARG);
int     mALLOPt(RARG int, int);
struct mallinfo mALLINFo(RONEARG);
#else
Void_t* mALLOc();
void    fREe();
Void_t* rEALLOc();
Void_t* mEMALIGn();
Void_t* vALLOc();
Void_t* pvALLOc();
Void_t* cALLOc();
void    cfree();
int     malloc_trim();
size_t  malloc_usable_size();
void    malloc_stats();
int     mALLOPt();
struct mallinfo mALLINFo();
#endif


#ifdef __cplusplus
};  /* end of extern "C" */
#endif

/* ---------- To make a malloc.h, end cutting here ------------ */


/*
  Emulation of sbrk for WIN32
  All code within the ifdef WIN32 is untested by me.
*/


#ifdef WIN32

#define AlignPage(add) (((add) + (malloc_getpagesize-1)) & ~(malloc_getpagesize-1))

/* reserve 64MB to ensure large contiguous space */
#define RESERVED_SIZE (1024*1024*64)
#define NEXT_SIZE (2048*1024)
#define TOP_MEMORY ((unsigned long)2*1024*1024*1024)

struct GmListElement;
typedef struct GmListElement GmListElement;

struct GmListElement
{
        GmListElement* next;
        void* base;
};

static GmListElement* head = 0;
static unsigned int gNextAddress = 0;
static unsigned int gAddressBase = 0;
static unsigned int gAllocatedSize = 0;

static
GmListElement* makeGmListElement (void* bas)
{
        GmListElement* this;
        this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
        ASSERT (this);
        if (this)
        {
                this->base = bas;
                this->next = head;
                head = this;
        }
        return this;
}

void gcleanup ()
{
        BOOL rval;
        ASSERT ( (head == NULL) || (head->base == (void*)gAddressBase));
        if (gAddressBase && (gNextAddress - gAddressBase))
        {
                rval = VirtualFree ((void*)gAddressBase,
                                                        gNextAddress - gAddressBase,
                                                        MEM_DECOMMIT);
        ASSERT (rval);
        }
        while (head)
        {
                GmListElement* next = head->next;
                rval = VirtualFree (head->base, 0, MEM_RELEASE);
                ASSERT (rval);
                LocalFree (head);
                head = next;
        }
}

static
void* findRegion (void* start_address, unsigned long size)
{
        MEMORY_BASIC_INFORMATION info;
        while ((unsigned long)start_address < TOP_MEMORY)
        {
                VirtualQuery (start_address, &info, sizeof (info));
                if (info.State != MEM_FREE)
                        start_address = (char*)info.BaseAddress + info.RegionSize;
                else if (info.RegionSize >= size)
                        return start_address;
                else
                        start_address = (char*)info.BaseAddress + info.RegionSize;
        }
        return NULL;
}


void* wsbrk (long size)
{
        void* tmp;
        if (size > 0)
        {
                if (gAddressBase == 0)
                {
                        gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
                        gNextAddress = gAddressBase =
                                (unsigned int)VirtualAlloc (NULL, gAllocatedSize,
                                                                                        MEM_RESERVE, PAGE_NOACCESS);
                } else if (AlignPage (gNextAddress + size) > (gAddressBase + gAllocatedSize))
                {
                        long new_size = max (NEXT_SIZE, AlignPage (size));
                        void* new_address = (void*)(gAddressBase+gAllocatedSize);
                        do
                        {
                                new_address = findRegion (new_address, new_size);

                                if (new_address == 0)
                                        return (void*)-1;

                                gAddressBase = gNextAddress =
                                        (unsigned int)VirtualAlloc (new_address, new_size,
                                                                                                MEM_RESERVE, PAGE_NOACCESS);
                                // repeat in case of race condition
                                // The region that we found has been snagged
                                // by another thread
                        }
                        while (gAddressBase == 0);

                        ASSERT (new_address == (void*)gAddressBase);

                        gAllocatedSize = new_size;

                        if (!makeGmListElement ((void*)gAddressBase))
                                return (void*)-1;
                }
                if ((size + gNextAddress) > AlignPage (gNextAddress))
                {
                        void* res;
                        res = VirtualAlloc ((void*)AlignPage (gNextAddress),
                                                                (size + gNextAddress -
                                                                 AlignPage (gNextAddress)),
                                                                MEM_COMMIT, PAGE_READWRITE);
                        if (res == 0)
                                return (void*)-1;
                }
                tmp = (void*)gNextAddress;
                gNextAddress = (unsigned int)tmp + size;
                return tmp;
        }
        else if (size < 0)
        {
                unsigned int alignedGoal = AlignPage (gNextAddress + size);
                /* Trim by releasing the virtual memory */
                if (alignedGoal >= gAddressBase)
                {
                        VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
                                                 MEM_DECOMMIT);
                        gNextAddress = gNextAddress + size;
                        return (void*)gNextAddress;
                }
                else
                {
                        VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
                                                 MEM_DECOMMIT);
                        gNextAddress = gAddressBase;
                        return (void*)-1;
                }
        }
        else
        {
                return (void*)gNextAddress;
        }
}

#endif
1235
 
1236
 
1237
 
1238
/*
1239
  Type declarations
1240
*/
1241
 
1242
 
1243
struct malloc_chunk
1244
{
1245
  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1246
  INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
1247
  struct malloc_chunk* fd;   /* double links -- used only if free. */
1248
  struct malloc_chunk* bk;
1249
};
1250
 
1251
typedef struct malloc_chunk* mchunkptr;
1252
 
1253
/*
1254
 
1255
   malloc_chunk details:
1256
 
1257
    (The following includes lightly edited explanations by Colin Plumb.)
1258
 
1259
    Chunks of memory are maintained using a `boundary tag' method as
1260
    described in e.g., Knuth or Standish.  (See the paper by Paul
1261
    Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1262
    survey of such techniques.)  Sizes of free chunks are stored both
1263
    in the front of each chunk and at the end.  This makes
1264
    consolidating fragmented chunks into bigger chunks very fast.  The
1265
    size fields also hold bits representing whether chunks are free or
1266
    in use.
1267
 
1268
    An allocated chunk looks like this:
1269
 
1270
 
1271
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1272
            |             Size of previous chunk, if allocated            | |
1273
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1274
            |             Size of chunk, in bytes                         |P|
1275
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1276
            |             User data starts here...                          .
1277
            .                                                               .
1278
            .             (malloc_usable_space() bytes)                     .
1279
            .                                                               |
1280
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1281
            |             Size of chunk                                     |
1282
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1283
 
1284
 
1285
    Where "chunk" is the front of the chunk for the purpose of most of
1286
    the malloc code, but "mem" is the pointer that is returned to the
1287
    user.  "Nextchunk" is the beginning of the next contiguous chunk.
1288
 
1289
    Chunks always begin on even word boundries, so the mem portion
1290
    (which is returned to the user) is also on an even word boundary, and
1291
    thus double-word aligned.
1292
 
1293
    Free chunks are stored in circular doubly-linked lists, and look like this:
1294
 
1295
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1296
            |             Size of previous chunk                            |
1297
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1298
    `head:' |             Size of chunk, in bytes                         |P|
1299
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1300
            |             Forward pointer to next chunk in list             |
1301
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1302
            |             Back pointer to previous chunk in list            |
1303
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1304
            |             Unused space (may be 0 bytes long)                .
1305
            .                                                               .
1306
            .                                                               |
1307
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1308
    `foot:' |             Size of chunk, in bytes                           |
1309
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1310
 
1311
    The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1312
    chunk size (which is always a multiple of two words), is an in-use
1313
    bit for the *previous* chunk.  If that bit is *clear*, then the
1314
    word before the current chunk size contains the previous chunk
1315
    size, and can be used to find the front of the previous chunk.
1316
    (The very first chunk allocated always has this bit set,
1317
    preventing access to non-existent (or non-owned) memory.)
1318
 
1319
    Note that the `foot' of the current chunk is actually represented
1320
    as the prev_size of the NEXT chunk. (This makes it easier to
1321
    deal with alignments etc).
1322
 
1323
    The two exceptions to all this are
1324
 
1325
     1. The special chunk `top', which doesn't bother using the
1326
        trailing size field since there is no
1327
        next contiguous chunk that would have to index off it. (After
1328
        initialization, `top' is forced to always exist.  If it would
1329
        become less than MINSIZE bytes long, it is replenished via
1330
        malloc_extend_top.)
1331
 
1332
     2. Chunks allocated via mmap, which have the second-lowest-order
1333
        bit (IS_MMAPPED) set in their size fields.  Because they are
1334
        never merged or traversed from any other chunk, they have no
1335
        foot size or inuse information.
1336
 
1337
    Available chunks are kept in any of several places (all declared below):
1338
 
1339
    * `av': An array of chunks serving as bin headers for consolidated
1340
       chunks. Each bin is doubly linked.  The bins are approximately
1341
       proportionally (log) spaced.  There are a lot of these bins
1342
       (128). This may look excessive, but works very well in
1343
       practice.  All procedures maintain the invariant that no
1344
       consolidated chunk physically borders another one. Chunks in
1345
       bins are kept in size order, with ties going to the
1346
       approximately least recently used chunk.
1347
 
1348
       The chunks in each bin are maintained in decreasing sorted order by
1349
       size.  This is irrelevant for the small bins, which all contain
1350
       the same-sized chunks, but facilitates best-fit allocation for
1351
       larger chunks. (These lists are just sequential. Keeping them in
1352
       order almost never requires enough traversal to warrant using
1353
       fancier ordered data structures.)  Chunks of the same size are
1354
       linked with the most recently freed at the front, and allocations
1355
       are taken from the back.  This results in LRU or FIFO allocation
1356
       order, which tends to give each chunk an equal opportunity to be
1357
       consolidated with adjacent freed chunks, resulting in larger free
1358
       chunks and less fragmentation.
1359
 
1360
    * `top': The top-most available chunk (i.e., the one bordering the
1361
       end of available memory) is treated specially. It is never
1362
       included in any bin, is used only if no other chunk is
1363
       available, and is released back to the system if it is very
1364
       large (see M_TRIM_THRESHOLD).
1365
 
1366
    * `last_remainder': A bin holding only the remainder of the
1367
       most recently split (non-top) chunk. This bin is checked
1368
       before other non-fitting chunks, so as to provide better
1369
       locality for runs of sequentially allocated chunks.
1370
 
1371
    *  Implicitly, through the host system's memory mapping tables.
1372
       If supported, requests greater than a threshold are usually
1373
       serviced via calls to mmap, and then later released via munmap.
1374
 
1375
*/



/*  sizes, alignments */

#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGN           8
#define MALLOC_ALIGNMENT       (SIZE_SZ < 4 ? 8 : (SIZE_SZ + SIZE_SZ))
#else
#define MALLOC_ALIGN           MALLOC_ALIGNMENT
#endif
#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
#define MINSIZE                (sizeof(struct malloc_chunk))

/* conversion from malloc headers to user pointers, and back */

#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
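
/*
  Worked example (assuming SIZE_SZ == 4): the user pointer sits
  2*SIZE_SZ == 8 bytes past the start of the chunk, just beyond the
  prev_size and size fields.  So for a chunk at address 0x1000:

    chunk2mem((mchunkptr)0x1000) == (Void_t*)0x1008
    mem2chunk((Void_t*)0x1008)   == (mchunkptr)0x1000

  The two macros are exact inverses; free() and realloc() rely on this
  to recover the chunk header from a bare user pointer.
*/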

/* pad request bytes into a usable size */

#define request2size(req) \
 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
  (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? ((MINSIZE + MALLOC_ALIGN_MASK) & ~(MALLOC_ALIGN_MASK)) : \
   (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
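
/*
  Worked example (assuming SIZE_SZ == 4 and MALLOC_ALIGNMENT == 8, so
  MALLOC_ALIGN_MASK == 7 and MINSIZE == 16):

    request2size(1)  -> (1 + 4 + 7) & ~7  =  8, below the MINSIZE test,
                        so it becomes (16 + 7) & ~7 = 16
    request2size(13) -> (13 + 4 + 7) & ~7 = 24
    request2size(24) -> (24 + 4 + 7) & ~7 = 32

  i.e. every usable size carries SIZE_SZ bytes of header overhead and
  is rounded up to a multiple of the alignment, but never below MINSIZE.
*/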

/* Check if m has acceptable alignment */

#define aligned_OK(m)    (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)


/*
  Physical chunk operations
*/

/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */

#define PREV_INUSE 0x1

/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */

#define IS_MMAPPED 0x2

/* Bits to mask off when extracting size */

#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)

/* Ptr to next physical malloc_chunk. */

#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))

/* Ptr to previous physical malloc_chunk */

#define prev_chunk(p)\
   ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))

/* Treat space at ptr + offset as a chunk */

#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))


/*
  Dealing with use bits
*/

/* extract p's inuse bit */

#define inuse(p)\
((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)

/* extract inuse bit of previous chunk */

#define prev_inuse(p)  ((p)->size & PREV_INUSE)

/* check for mmap()'ed chunk */

#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)

/* set/clear chunk as in use without otherwise disturbing */

#define set_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE

#define clear_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)

/* check/set/clear inuse bits in known places */

#define inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
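
/*
  Sketch of the indirection these macros hide (illustrative only): a
  chunk's own inuse bit is the PREV_INUSE bit in the header of the NEXT
  physical chunk, while the low bit of p->size describes p's
  predecessor, not p itself.  Written out longhand, inuse(p) amounts to:

    static int chunk_in_use(mchunkptr p)
    {
      mchunkptr nxt = (mchunkptr)((char *)p + (p->size & ~PREV_INUSE));
      return nxt->size & PREV_INUSE;
    }
*/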


/*
  Dealing with size fields
*/

/* Get size, ignoring use bits */

#define chunksize(p)          ((p)->size & ~(SIZE_BITS))

/* Set size at head, without disturbing its use bit */

#define set_head_size(p, s)   ((p)->size = (((p)->size & PREV_INUSE) | (s)))

/* Set size/use ignoring previous bits in header */

#define set_head(p, s)        ((p)->size = (s))

/* Set size at footer (only when chunk is not in use) */

#define set_foot(p, s)   (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))


/*
   Bins

    The bins, `av_', are an array of pairs of pointers serving as the
    heads of (initially empty) doubly-linked lists of chunks, laid out
    in a way so that each pair can be treated as if it were in a
    malloc_chunk. (This way, the fd/bk offsets for linking bin heads
    and chunks are the same).

    Bins for sizes < 512 bytes contain chunks of all the same size, spaced
    8 bytes apart. Larger bins are approximately logarithmically
    spaced. (See the table below.) The `av_' array is never mentioned
    directly in the code, but instead via bin access macros.

    Bin layout:

    64 bins of size       8
    32 bins of size      64
    16 bins of size     512
     8 bins of size    4096
     4 bins of size   32768
     2 bins of size  262144
     1 bin  of size what's left

    There is actually a little bit of slop in the numbers in bin_index
    for the sake of speed. This makes no difference elsewhere.

    The special chunks `top' and `last_remainder' get their own bins
    (this is implemented via yet more trickery with the av_ array),
    although `top' is never properly linked to its bin since it is
    always handled specially.

*/

#ifdef SEPARATE_OBJECTS
#define av_ malloc_av_
#endif

#define NAV             128   /* number of bins */

typedef struct malloc_chunk* mbinptr;

/* access macros */

#define bin_at(i)      ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
#define next_bin(b)    ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
#define prev_bin(b)    ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
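
/*
  Why bin_at works (sketch; assumes the usual malloc_chunk layout with
  no padding and pointers the same width as INTERNAL_SIZE_T): bin i's
  pair of pointers lives at &av_[2*i + 2], and subtracting 2*SIZE_SZ
  makes that address look like a chunk whose fd/bk fields land exactly
  on the pair:

    bin_at(i)      == (mbinptr)((char*)&av_[2*i + 2] - 2*SIZE_SZ)
    bin_at(i)->fd  aliases av_[2*i + 2]   (head of the bin's list)
    bin_at(i)->bk  aliases av_[2*i + 3]   (tail of the bin's list)

  so the same fd/bk link code serves both real chunks and bin headers.
*/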

/*
   The first 2 bins are never indexed. The corresponding av_ cells are instead
   used for bookkeeping. This is not to save space, but to simplify
   indexing, maintain locality, and avoid some initialization tests.
*/

#define top            (bin_at(0)->fd)   /* The topmost chunk */
#define last_remainder (bin_at(1))       /* remainder from last split */


/*
   Because top initially points to its own bin with initial
   zero size, thus forcing extension on the first malloc request,
   we avoid having any special code in malloc to check whether
   it even exists yet. But we still need to check in malloc_extend_top.
*/

#define initial_top    ((mchunkptr)(bin_at(0)))

/* Helper macro to initialize bins */

#define IAV(i)  bin_at(i), bin_at(i)

#ifdef DEFINE_MALLOC
STATIC mbinptr av_[NAV * 2 + 2] = {
 0, 0,
 IAV(0),   IAV(1),   IAV(2),   IAV(3),   IAV(4),   IAV(5),   IAV(6),   IAV(7),
 IAV(8),   IAV(9),   IAV(10),  IAV(11),  IAV(12),  IAV(13),  IAV(14),  IAV(15),
 IAV(16),  IAV(17),  IAV(18),  IAV(19),  IAV(20),  IAV(21),  IAV(22),  IAV(23),
 IAV(24),  IAV(25),  IAV(26),  IAV(27),  IAV(28),  IAV(29),  IAV(30),  IAV(31),
 IAV(32),  IAV(33),  IAV(34),  IAV(35),  IAV(36),  IAV(37),  IAV(38),  IAV(39),
 IAV(40),  IAV(41),  IAV(42),  IAV(43),  IAV(44),  IAV(45),  IAV(46),  IAV(47),
 IAV(48),  IAV(49),  IAV(50),  IAV(51),  IAV(52),  IAV(53),  IAV(54),  IAV(55),
 IAV(56),  IAV(57),  IAV(58),  IAV(59),  IAV(60),  IAV(61),  IAV(62),  IAV(63),
 IAV(64),  IAV(65),  IAV(66),  IAV(67),  IAV(68),  IAV(69),  IAV(70),  IAV(71),
 IAV(72),  IAV(73),  IAV(74),  IAV(75),  IAV(76),  IAV(77),  IAV(78),  IAV(79),
 IAV(80),  IAV(81),  IAV(82),  IAV(83),  IAV(84),  IAV(85),  IAV(86),  IAV(87),
 IAV(88),  IAV(89),  IAV(90),  IAV(91),  IAV(92),  IAV(93),  IAV(94),  IAV(95),
 IAV(96),  IAV(97),  IAV(98),  IAV(99),  IAV(100), IAV(101), IAV(102), IAV(103),
 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
};
#else
extern mbinptr av_[NAV * 2 + 2];
#endif


/* field-extraction macros */

#define first(b) ((b)->fd)
#define last(b)  ((b)->bk)

/*
  Indexing into bins
*/

#define bin_index(sz)                                                          \
(((((unsigned long)(sz)) >> 9) ==    0) ?       (((unsigned long)(sz)) >>  3): \
 ((((unsigned long)(sz)) >> 9) <=    4) ?  56 + (((unsigned long)(sz)) >>  6): \
 ((((unsigned long)(sz)) >> 9) <=   20) ?  91 + (((unsigned long)(sz)) >>  9): \
 ((((unsigned long)(sz)) >> 9) <=   84) ? 110 + (((unsigned long)(sz)) >> 12): \
 ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15): \
 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
                                          126)
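
/*
  Worked examples of bin_index (sizes are already padded chunk sizes):

    bin_index(40)    -> 40 >> 9 == 0,     so 40 >> 3          =   5  (small bin)
    bin_index(600)   -> 600 >> 9 == 1,    so 56 + (600 >> 6)  =  65
    bin_index(5000)  -> 5000 >> 9 == 9,   so 91 + (5000 >> 9) = 100
    bin_index(40000) -> 40000 >> 9 == 78, so 110 + (40000 >> 12) = 119

  The "slop" mentioned above is why the boundaries are not exact
  powers of two; it trades a few spare bins for cheap shifts.
*/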
/*
  Bins for chunks < 512 are all spaced SMALLBIN_WIDTH bytes apart, and hold
  identically sized chunks. This is exploited in malloc.
*/

#define MAX_SMALLBIN_SIZE   512
#define SMALLBIN_WIDTH        8
#define SMALLBIN_WIDTH_BITS   3
#define MAX_SMALLBIN        ((MAX_SMALLBIN_SIZE / SMALLBIN_WIDTH) - 1)

#define smallbin_index(sz)  (((unsigned long)(sz)) >> SMALLBIN_WIDTH_BITS)

/*
   Requests are `small' if both the corresponding and the next bin are small
*/

#define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)


/*
    To help compensate for the large number of bins, a one-level index
    structure is used for bin-by-bin searching.  `binblocks' is a
    one-word bitvector recording whether groups of BINBLOCKWIDTH bins
    have any (possibly) non-empty bins, so they can be skipped over
    all at once during traversals. The bits are NOT always
    cleared as soon as all bins in a block are empty, but instead only
    when all are noticed to be empty during traversal in malloc.
*/

#define BINBLOCKWIDTH     4   /* bins per block */

#define binblocks      (bin_at(0)->size) /* bitvector of nonempty blocks */

/* bin<->block macros */

#define idx2binblock(ix)    ((unsigned long)1 << (ix / BINBLOCKWIDTH))
#define mark_binblock(ii)   (binblocks |= idx2binblock(ii))
#define clear_binblock(ii)  (binblocks &= ~(idx2binblock(ii)))
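
/*
  Worked example: with BINBLOCKWIDTH == 4, bins 0..3 share bit 0 of
  binblocks, bins 4..7 share bit 1, and so on.  So

    idx2binblock(9) == 1UL << (9 / 4) == 0x4

  and mark_binblock(9) sets bit 2 when a chunk enters any of bins 8..11.
  Per the note above, clear_binblock is applied lazily: only once a
  malloc traversal has actually seen all of bins 8..11 empty.
*/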


/*  Other static bookkeeping data */

#ifdef SEPARATE_OBJECTS
#define trim_threshold          malloc_trim_threshold
#define top_pad                 malloc_top_pad
#define n_mmaps_max             malloc_n_mmaps_max
#define mmap_threshold          malloc_mmap_threshold
#define sbrk_base               malloc_sbrk_base
#define max_sbrked_mem          malloc_max_sbrked_mem
#define max_total_mem           malloc_max_total_mem
#define current_mallinfo        malloc_current_mallinfo
#define n_mmaps                 malloc_n_mmaps
#define max_n_mmaps             malloc_max_n_mmaps
#define mmapped_mem             malloc_mmapped_mem
#define max_mmapped_mem         malloc_max_mmapped_mem
#endif

/* variables holding tunable values */

#ifdef DEFINE_MALLOC

STATIC unsigned long trim_threshold   = DEFAULT_TRIM_THRESHOLD;
STATIC unsigned long top_pad          = DEFAULT_TOP_PAD;
#if HAVE_MMAP
STATIC unsigned int  n_mmaps_max      = DEFAULT_MMAP_MAX;
STATIC unsigned long mmap_threshold   = DEFAULT_MMAP_THRESHOLD;
#endif

/* The first value returned from sbrk */
STATIC char* sbrk_base = (char*)(-1);

/* The maximum memory obtained from system via sbrk */
STATIC unsigned long max_sbrked_mem = 0;

/* The maximum via either sbrk or mmap */
STATIC unsigned long max_total_mem = 0;

/* internal working copy of mallinfo */
STATIC struct mallinfo current_mallinfo = {  0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };

#if HAVE_MMAP

/* Tracking mmaps */

STATIC unsigned int n_mmaps = 0;
STATIC unsigned int max_n_mmaps = 0;
STATIC unsigned long mmapped_mem = 0;
STATIC unsigned long max_mmapped_mem = 0;

#endif

#else /* ! DEFINE_MALLOC */

extern unsigned long trim_threshold;
extern unsigned long top_pad;
#if HAVE_MMAP
extern unsigned int  n_mmaps_max;
extern unsigned long mmap_threshold;
#endif
extern char* sbrk_base;
extern unsigned long max_sbrked_mem;
extern unsigned long max_total_mem;
extern struct mallinfo current_mallinfo;
#if HAVE_MMAP
extern unsigned int n_mmaps;
extern unsigned int max_n_mmaps;
extern unsigned long mmapped_mem;
extern unsigned long max_mmapped_mem;
#endif

#endif /* ! DEFINE_MALLOC */

/* The total memory obtained from system via sbrk */
#define sbrked_mem  (current_mallinfo.arena)


/*
  Debugging support
*/

#if DEBUG

/*
  These routines make a number of assertions about the states
  of data structures that should be true at all times. If any
  are not true, it's very likely that a user program has somehow
  trashed memory. (It's also possible that there is a coding error
  in malloc. In which case, please report it!)
*/

#if __STD_C
static void do_check_chunk(mchunkptr p)
#else
static void do_check_chunk(p) mchunkptr p;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;

  /* No checkable chunk is mmapped */
  assert(!chunk_is_mmapped(p));

  /* Check for legal address ... */
  assert((char*)p >= sbrk_base);
  if (p != top)
    assert((char*)p + sz <= (char*)top);
  else
    assert((char*)p + sz <= sbrk_base + sbrked_mem);

}


#if __STD_C
static void do_check_free_chunk(mchunkptr p)
#else
static void do_check_free_chunk(p) mchunkptr p;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
  mchunkptr next = chunk_at_offset(p, sz);

  do_check_chunk(p);

  /* Check whether it claims to be free ... */
  assert(!inuse(p));

  /* Unless a special marker, must have OK fields */
  if ((long)sz >= (long)MINSIZE)
  {
    assert((sz & MALLOC_ALIGN_MASK) == 0);
    assert(aligned_OK(chunk2mem(p)));
    /* ... matching footer field */
    assert(next->prev_size == sz);
    /* ... and is fully consolidated */
    assert(prev_inuse(p));
    assert (next == top || inuse(next));

    /* ... and has minimally sane links */
    assert(p->fd->bk == p);
    assert(p->bk->fd == p);
  }
  else /* markers are always of size SIZE_SZ */
    assert(sz == SIZE_SZ);
}

#if __STD_C
static void do_check_inuse_chunk(mchunkptr p)
#else
static void do_check_inuse_chunk(p) mchunkptr p;
#endif
{
  mchunkptr next = next_chunk(p);
  do_check_chunk(p);

  /* Check whether it claims to be in use ... */
  assert(inuse(p));

  /* ... and is surrounded by OK chunks.
    Since more things can be checked with free chunks than inuse ones,
    if an inuse chunk borders them and debug is on, it's worth doing them.
  */
  if (!prev_inuse(p))
  {
    mchunkptr prv = prev_chunk(p);
    assert(next_chunk(prv) == p);
    do_check_free_chunk(prv);
  }
  if (next == top)
  {
    assert(prev_inuse(next));
    assert(chunksize(next) >= MINSIZE);
  }
  else if (!inuse(next))
    do_check_free_chunk(next);

}

#if __STD_C
static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
#else
static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
  long room = long_sub_size_t(sz, s);

  do_check_inuse_chunk(p);

  /* Legal size ... */
  assert((long)sz >= (long)MINSIZE);
  assert((sz & MALLOC_ALIGN_MASK) == 0);
  assert(room >= 0);
  assert(room < (long)MINSIZE);

  /* ... and alignment */
  assert(aligned_OK(chunk2mem(p)));

  /* ... and was allocated at front of an available chunk */
  assert(prev_inuse(p));

}


#define check_free_chunk(P)  do_check_free_chunk(P)
#define check_inuse_chunk(P) do_check_inuse_chunk(P)
#define check_chunk(P) do_check_chunk(P)
#define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
#else
#define check_free_chunk(P)
#define check_inuse_chunk(P)
#define check_chunk(P)
#define check_malloced_chunk(P,N)
#endif


/*
  Macro-based internal utilities
*/


/*
  Linking chunks in bin lists.
  Call these only with variables, not arbitrary expressions, as arguments.
*/

/*
  Place chunk p of size s in its bin, in size order,
  putting it ahead of others of same size.
*/


#define frontlink(P, S, IDX, BK, FD)                                          \
{                                                                             \
  if (S < MAX_SMALLBIN_SIZE)                                                  \
  {                                                                           \
    IDX = smallbin_index(S);                                                  \
    mark_binblock(IDX);                                                       \
    BK = bin_at(IDX);                                                         \
    FD = BK->fd;                                                              \
    P->bk = BK;                                                               \
    P->fd = FD;                                                               \
    FD->bk = BK->fd = P;                                                      \
  }                                                                           \
  else                                                                        \
  {                                                                           \
    IDX = bin_index(S);                                                       \
    BK = bin_at(IDX);                                                         \
    FD = BK->fd;                                                              \
    if (FD == BK) mark_binblock(IDX);                                         \
    else                                                                      \
    {                                                                         \
      while (FD != BK && S < chunksize(FD)) FD = FD->fd;                      \
      BK = FD->bk;                                                            \
    }                                                                         \
    P->bk = BK;                                                               \
    P->fd = FD;                                                               \
    FD->bk = BK->fd = P;                                                      \
  }                                                                           \
}


/* take a chunk off a list */

#define unlink(P, BK, FD)                                                     \
{                                                                             \
  BK = P->bk;                                                                 \
  FD = P->fd;                                                                 \
  FD->bk = BK;                                                                \
  BK->fd = FD;                                                                \
}

/* Place p as the last remainder */

#define link_last_remainder(P)                                                \
{                                                                             \
  last_remainder->fd = last_remainder->bk =  P;                               \
  P->fd = P->bk = last_remainder;                                             \
}

/* Clear the last_remainder bin */

#define clear_last_remainder \
  (last_remainder->fd = last_remainder->bk = last_remainder)
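
/*
  Typical call pattern (illustrative; mirrors the call sites in malloc
  and free below).  Because these are statement macros that assign
  through their arguments and evaluate S more than once, all arguments
  must be plain variables, as the note above warns:

    mchunkptr bck, fwd;
    int idx;
    INTERNAL_SIZE_T sz = chunksize(p);

    unlink(p, bck, fwd);                 first splice p out of its bin
    frontlink(p, sz, idx, bck, fwd);     then file it back, in size order
*/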


/* Routines dealing with mmap(). */

#if HAVE_MMAP

#ifdef DEFINE_MALLOC

#if __STD_C
static mchunkptr mmap_chunk(size_t size)
#else
static mchunkptr mmap_chunk(size) size_t size;
#endif
{
  size_t page_mask = malloc_getpagesize - 1;
  mchunkptr p;

#ifndef MAP_ANONYMOUS
  static int fd = -1;
#endif

  if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */

  /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
   * there is no following chunk whose prev_size field could be used.
   */
  size = (size + SIZE_SZ + page_mask) & ~page_mask;

#ifdef MAP_ANONYMOUS
  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
                      MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
#else /* !MAP_ANONYMOUS */
  if (fd < 0)
  {
    fd = open("/dev/zero", O_RDWR);
    if(fd < 0) return 0;
  }
  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
#endif

  if(p == (mchunkptr)-1) return 0;

  n_mmaps++;
  if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;

  /* We demand that eight bytes into a page must be 8-byte aligned. */
  assert(aligned_OK(chunk2mem(p)));

  /* The offset to the start of the mmapped region is stored
   * in the prev_size field of the chunk; normally it is zero,
   * but that can be changed in memalign().
   */
  p->prev_size = 0;
  set_head(p, size|IS_MMAPPED);

  mmapped_mem += size;
  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
    max_mmapped_mem = mmapped_mem;
  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;
  return p;
}

#endif /* DEFINE_MALLOC */

#ifdef SEPARATE_OBJECTS
#define munmap_chunk malloc_munmap_chunk
#endif

#ifdef DEFINE_FREE

#if __STD_C
STATIC void munmap_chunk(mchunkptr p)
#else
STATIC void munmap_chunk(p) mchunkptr p;
#endif
{
  INTERNAL_SIZE_T size = chunksize(p);
  int ret;

  assert (chunk_is_mmapped(p));
  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
  assert((n_mmaps > 0));
  assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);

  n_mmaps--;
  mmapped_mem -= (size + p->prev_size);

  ret = munmap((char *)p - p->prev_size, size + p->prev_size);

  /* munmap returns non-zero on failure */
  assert(ret == 0);
}

#else /* ! DEFINE_FREE */

#if __STD_C
extern void munmap_chunk(mchunkptr);
#else
extern void munmap_chunk();
#endif

#endif /* ! DEFINE_FREE */

#if HAVE_MREMAP

#ifdef DEFINE_REALLOC

#if __STD_C
static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
#else
static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
#endif
{
  size_t page_mask = malloc_getpagesize - 1;
  INTERNAL_SIZE_T offset = p->prev_size;
  INTERNAL_SIZE_T size = chunksize(p);
  char *cp;

  assert (chunk_is_mmapped(p));
  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
  assert((n_mmaps > 0));
  assert(((size + offset) & (malloc_getpagesize-1)) == 0);

  /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
  new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;

  cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);

  if (cp == (char *)-1) return 0;

  p = (mchunkptr)(cp + offset);

  assert(aligned_OK(chunk2mem(p)));

  assert((p->prev_size == offset));
  set_head(p, (new_size - offset)|IS_MMAPPED);

  mmapped_mem -= size + offset;
  mmapped_mem += new_size;
  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
    max_mmapped_mem = mmapped_mem;
  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;
  return p;
}

#endif /* DEFINE_REALLOC */

#endif /* HAVE_MREMAP */

#endif /* HAVE_MMAP */




#ifdef DEFINE_MALLOC

/*
  Extend the top-most chunk by obtaining memory from system.
  Main interface to sbrk (but see also malloc_trim).
*/

#if __STD_C
static void malloc_extend_top(RARG INTERNAL_SIZE_T nb)
#else
static void malloc_extend_top(RARG nb) RDECL INTERNAL_SIZE_T nb;
#endif
{
  char*     brk;                  /* return value from sbrk */
  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
  INTERNAL_SIZE_T correction;     /* bytes for 2nd sbrk call */
  char*     new_brk;              /* return of 2nd sbrk call */
  INTERNAL_SIZE_T top_size;       /* new size of top chunk */

  mchunkptr old_top     = top;  /* Record state of old top */
  INTERNAL_SIZE_T old_top_size = chunksize(old_top);
  char*     old_end      = (char*)(chunk_at_offset(old_top, old_top_size));

  /* Pad request with top_pad plus minimal overhead */

  INTERNAL_SIZE_T    sbrk_size     = nb + top_pad + MINSIZE;
  unsigned long pagesz    = malloc_getpagesize;

  /* If not the first time through, round to preserve page boundary */
  /* Otherwise, we need to correct to a page size below anyway. */
  /* (We also correct below if an intervening foreign sbrk call.) */

  if (sbrk_base != (char*)(-1))
    sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);

  brk = (char*)(MORECORE (sbrk_size));

  /* Fail if sbrk failed or if a foreign sbrk call killed our space */
  if (brk == (char*)(MORECORE_FAILURE) ||
      (brk < old_end && old_top != initial_top))
    return;

  sbrked_mem += sbrk_size;

  if (brk == old_end) /* can just add bytes to current top */
  {
    top_size = sbrk_size + old_top_size;
    set_head(top, top_size | PREV_INUSE);
  }
  else
  {
    if (sbrk_base == (char*)(-1))  /* First time through. Record base */
      sbrk_base = brk;
    else  /* Someone else called sbrk().  Count those bytes as sbrked_mem. */
      sbrked_mem += brk - (char*)old_end;

    /* Guarantee alignment of first new chunk made from this space */
    front_misalign = (POINTER_UINT)chunk2mem(brk) & MALLOC_ALIGN_MASK;
    if (front_misalign > 0)
    {
      correction = (MALLOC_ALIGNMENT) - front_misalign;
      brk += correction;
    }
    else
      correction = 0;

    /* Guarantee the next brk will be at a page boundary */
    correction += pagesz - ((POINTER_UINT)(brk + sbrk_size) & (pagesz - 1));

    /* Allocate correction */
    new_brk = (char*)(MORECORE (correction));
    if (new_brk == (char*)(MORECORE_FAILURE)) return;

    sbrked_mem += correction;

    top = (mchunkptr)brk;
    top_size = new_brk - brk + correction;
    set_head(top, top_size | PREV_INUSE);

    if (old_top != initial_top)
    {

      /* There must have been an intervening foreign sbrk call. */
      /* A double fencepost is necessary to prevent consolidation */

      /* If not enough space to do this, then user did something very wrong */
      if (old_top_size < MINSIZE)
      {
        set_head(top, PREV_INUSE); /* will force null return from malloc */
        return;
      }

      /* Also keep size a multiple of MALLOC_ALIGNMENT */
      old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
      set_head_size(old_top, old_top_size);
      chunk_at_offset(old_top, old_top_size          )->size =
        SIZE_SZ|PREV_INUSE;
      chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
        SIZE_SZ|PREV_INUSE;
      /* If possible, release the rest. */
      if (old_top_size >= MINSIZE)
        fREe(RCALL chunk2mem(old_top));
    }
  }

  if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
    max_sbrked_mem = sbrked_mem;
#if HAVE_MMAP
  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;
#else
  if ((unsigned long)(sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = sbrked_mem;
#endif

  /* We always land on a page boundary */
  assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
}

#endif /* DEFINE_MALLOC */


/* Main public routines */

#ifdef DEFINE_MALLOC

/*
  Malloc Algorithm:

    The requested size is first converted into a usable form, `nb'.
    This currently means to add 4 bytes overhead plus possibly more to
    obtain 8-byte alignment and/or to obtain a size of at least
    MINSIZE (currently 16 bytes), the smallest allocatable size.
    (All fits are considered `exact' if they are within MINSIZE bytes.)

    From there, the first of the following steps to succeed is taken:

      1. The bin corresponding to the request size is scanned, and if
         a chunk of exactly the right size is found, it is taken.

      2. The most recently remaindered chunk is used if it is big
         enough.  This is a form of (roving) first fit, used only in
         the absence of exact fits. Runs of consecutive requests use
         the remainder of the chunk used for the previous such request
         whenever possible. This limited use of a first-fit style
         allocation strategy tends to give contiguous chunks
         coextensive lifetimes, which improves locality and can reduce
         fragmentation in the long run.

      3. Other bins are scanned in increasing size order, using a
         chunk big enough to fulfill the request, and splitting off
         any remainder.  This search is strictly by best-fit; i.e.,
         the smallest (with ties going to approximately the least
         recently used) chunk that fits is selected.

      4. If large enough, the chunk bordering the end of memory
         (`top') is split off. (This use of `top' is in accord with
         the best-fit search rule.  In effect, `top' is treated as
         larger (and thus less well fitting) than any other available
         chunk since it can be extended to be as large as necessary
         (up to system limitations).)

      5. If the request size meets the mmap threshold and the
         system supports mmap, and there are few enough currently
         allocated mmapped regions, and a call to mmap succeeds,
         the request is allocated via direct memory mapping.

      6. Otherwise, the top of memory is extended by
         obtaining more space from the system (normally using sbrk,
         but definable to anything else via the MORECORE macro).
         Memory is gathered from the system (in system page-sized
         units) in a way that allows chunks obtained across different
         sbrk calls to be consolidated, but does not require
         contiguous memory. Thus, it should be safe to intersperse
         mallocs with other sbrk calls.


      All allocations are made from the `lowest' part of any found
      chunk. (The implementation invariant is that prev_inuse is
      always true of any allocated chunk; i.e., that each allocated
      chunk borders either a previously allocated and still in-use chunk,
      or the base of its memory arena.)

*/

#if __STD_C
Void_t* mALLOc(RARG size_t bytes)
#else
Void_t* mALLOc(RARG bytes) RDECL size_t bytes;
#endif
{
#ifdef MALLOC_PROVIDED

  return malloc (bytes);

#else

  mchunkptr victim;                  /* inspected/selected chunk */
  INTERNAL_SIZE_T victim_size;       /* its size */
  int       idx;                     /* index for bin traversal */
  mbinptr   bin;                     /* associated bin */
  mchunkptr remainder;               /* remainder from a split */
  long      remainder_size;          /* its size */
  int       remainder_index;         /* its bin index */
  unsigned long block;               /* block traverser bit */
  int       startidx;                /* first bin of a traversed block */
  mchunkptr fwd;                     /* misc temp for linking */
  mchunkptr bck;                     /* misc temp for linking */
  mbinptr q;                         /* misc temp */

  INTERNAL_SIZE_T nb  = request2size(bytes);  /* padded request size */

  MALLOC_LOCK;

  /* Check for exact match in a bin */

  if (is_small_request(nb))  /* Faster version for small requests */
  {
    idx = smallbin_index(nb);

    /* No traversal or size check necessary for small bins.  */

    q = bin_at(idx);
    victim = last(q);

#if MALLOC_ALIGN != 16
    /* Also scan the next one, since it would have a remainder < MINSIZE */
    if (victim == q)
    {
      q = next_bin(q);
      victim = last(q);
    }
#endif
    if (victim != q)
    {
      victim_size = chunksize(victim);
      unlink(victim, bck, fwd);
      set_inuse_bit_at_offset(victim, victim_size);
      check_malloced_chunk(victim, nb);
      MALLOC_UNLOCK;
      return chunk2mem(victim);
    }

    idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */

  }
  else
  {
    idx = bin_index(nb);
    bin = bin_at(idx);

    for (victim = last(bin); victim != bin; victim = victim->bk)
    {
      victim_size = chunksize(victim);
      remainder_size = long_sub_size_t(victim_size, nb);

      if (remainder_size >= (long)MINSIZE) /* too big */
      {
        --idx; /* adjust to rescan below after checking last remainder */
        break;
      }

      else if (remainder_size >= 0) /* exact fit */
      {
        unlink(victim, bck, fwd);
        set_inuse_bit_at_offset(victim, victim_size);
        check_malloced_chunk(victim, nb);
        MALLOC_UNLOCK;
        return chunk2mem(victim);
      }
    }

    ++idx;

  }

  /* Try to use the last split-off remainder */

  if ( (victim = last_remainder->fd) != last_remainder)
  {
    victim_size = chunksize(victim);
    remainder_size = long_sub_size_t(victim_size, nb);

    if (remainder_size >= (long)MINSIZE) /* re-split */
    {
      remainder = chunk_at_offset(victim, nb);
      set_head(victim, nb | PREV_INUSE);
      link_last_remainder(remainder);
      set_head(remainder, remainder_size | PREV_INUSE);
      set_foot(remainder, remainder_size);
      check_malloced_chunk(victim, nb);
      MALLOC_UNLOCK;
      return chunk2mem(victim);
    }

    clear_last_remainder;

    if (remainder_size >= 0)  /* exhaust */
    {
      set_inuse_bit_at_offset(victim, victim_size);
      check_malloced_chunk(victim, nb);
      MALLOC_UNLOCK;
      return chunk2mem(victim);
    }

    /* Else place in bin */

    frontlink(victim, victim_size, remainder_index, bck, fwd);
  }

  /*
     If there are any possibly nonempty big-enough blocks,
     search for best fitting chunk by scanning bins in blockwidth units.
  */

  if ( (block = idx2binblock(idx)) <= binblocks)
  {

    /* Get to the first marked block */

    if ( (block & binblocks) == 0)
    {
      /* force to an even block boundary */
      idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
      block <<= 1;
      while ((block & binblocks) == 0)
      {
        idx += BINBLOCKWIDTH;
        block <<= 1;
      }
    }

    /* For each possibly nonempty block ... */
    for (;;)
    {
      startidx = idx;          /* (track incomplete blocks) */
      q = bin = bin_at(idx);

      /* For each bin in this block ... */
      do
      {
        /* Find and use first big enough chunk ... */

        for (victim = last(bin); victim != bin; victim = victim->bk)
        {
          victim_size = chunksize(victim);
          remainder_size = long_sub_size_t(victim_size, nb);

          if (remainder_size >= (long)MINSIZE) /* split */
          {
            remainder = chunk_at_offset(victim, nb);
            set_head(victim, nb | PREV_INUSE);
            unlink(victim, bck, fwd);
            link_last_remainder(remainder);
            set_head(remainder, remainder_size | PREV_INUSE);
            set_foot(remainder, remainder_size);
            check_malloced_chunk(victim, nb);
            MALLOC_UNLOCK;
            return chunk2mem(victim);
          }

          else if (remainder_size >= 0)  /* take */
          {
            set_inuse_bit_at_offset(victim, victim_size);
            unlink(victim, bck, fwd);
            check_malloced_chunk(victim, nb);
            MALLOC_UNLOCK;
            return chunk2mem(victim);
          }

        }

        bin = next_bin(bin);

#if MALLOC_ALIGN == 16
        if (idx < MAX_SMALLBIN)
        {
          bin = next_bin(bin);
          ++idx;
        }
#endif
      } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);

      /* Clear out the block bit. */

      do   /* Possibly backtrack to try to clear a partial block */
      {
        if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
        {
          binblocks &= ~block;
          break;
        }
        --startidx;
        q = prev_bin(q);
      } while (first(q) == q);

      /* Get to the next possibly nonempty block */

      if ( (block <<= 1) <= binblocks && (block != 0) )
      {
        while ((block & binblocks) == 0)
        {
          idx += BINBLOCKWIDTH;
          block <<= 1;
        }
      }
      else
        break;
    }
  }


  /* Try to use top chunk */

  /* Require that there be a remainder, ensuring top always exists */
  remainder_size = long_sub_size_t(chunksize(top), nb);
  if (chunksize(top) < nb || remainder_size < (long)MINSIZE)
  {

#if HAVE_MMAP
    /* If big and would otherwise need to extend, try to use mmap instead */
    if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
        (victim = mmap_chunk(nb)) != 0)
    {
      MALLOC_UNLOCK;
      return chunk2mem(victim);
    }
#endif

    /* Try to extend */
    malloc_extend_top(RCALL nb);
    remainder_size = long_sub_size_t(chunksize(top), nb);
    if (chunksize(top) < nb || remainder_size < (long)MINSIZE)
    {
      MALLOC_UNLOCK;
      return 0; /* propagate failure */
    }
  }

  victim = top;
  set_head(victim, nb | PREV_INUSE);
  top = chunk_at_offset(victim, nb);
  set_head(top, remainder_size | PREV_INUSE);
  check_malloced_chunk(victim, nb);
  MALLOC_UNLOCK;
  return chunk2mem(victim);

#endif /* MALLOC_PROVIDED */
}

#endif /* DEFINE_MALLOC */

#ifdef DEFINE_FREE

/*

  free() algorithm:

    cases:

       1. free(0) has no effect.

       2. If the chunk was allocated via mmap, it is released via munmap().

       3. If a returned chunk borders the current high end of memory,
          it is consolidated into the top, and if the total unused
          topmost memory exceeds the trim threshold, malloc_trim is
          called.

       4. Other chunks are consolidated as they arrive, and
          placed in corresponding bins. (This includes the case of
          consolidating with the current `last_remainder').

*/


#if __STD_C
void fREe(RARG Void_t* mem)
#else
void fREe(RARG mem) RDECL Void_t* mem;
#endif
{
#ifdef MALLOC_PROVIDED

  free (mem);

#else

  mchunkptr p;         /* chunk corresponding to mem */
  INTERNAL_SIZE_T hd;  /* its head field */
  INTERNAL_SIZE_T sz;  /* its size */
  int       idx;       /* its bin index */
  mchunkptr next;      /* next contiguous chunk */
  INTERNAL_SIZE_T nextsz; /* its size */
  INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
  mchunkptr bck;       /* misc temp for linking */
  mchunkptr fwd;       /* misc temp for linking */
  int       islr;      /* track whether merging with last_remainder */

  if (mem == 0)                              /* free(0) has no effect */
    return;

  MALLOC_LOCK;

  p = mem2chunk(mem);
  hd = p->size;

#if HAVE_MMAP
  if (hd & IS_MMAPPED)                       /* release mmapped memory. */
  {
    munmap_chunk(p);
    MALLOC_UNLOCK;
    return;
  }
#endif

  check_inuse_chunk(p);

  sz = hd & ~PREV_INUSE;
  next = chunk_at_offset(p, sz);
  nextsz = chunksize(next);

  if (next == top)                            /* merge with top */
  {
    sz += nextsz;

    if (!(hd & PREV_INUSE))                    /* consolidate backward */
    {
      prevsz = p->prev_size;
      p = chunk_at_offset(p, -prevsz);
      sz += prevsz;
      unlink(p, bck, fwd);
    }

    set_head(p, sz | PREV_INUSE);
    top = p;
    if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
      malloc_trim(RCALL top_pad);
    MALLOC_UNLOCK;
    return;
  }

  set_head(next, nextsz);                    /* clear inuse bit */

  islr = 0;

  if (!(hd & PREV_INUSE))                    /* consolidate backward */
  {
    prevsz = p->prev_size;
    p = chunk_at_offset(p, -prevsz);
    sz += prevsz;

    if (p->fd == last_remainder)             /* keep as last_remainder */
      islr = 1;
    else
      unlink(p, bck, fwd);
  }

  if (!(inuse_bit_at_offset(next, nextsz)))   /* consolidate forward */
  {
    sz += nextsz;

    if (!islr && next->fd == last_remainder)  /* re-insert last_remainder */
    {
      islr = 1;
      link_last_remainder(p);
    }
    else
      unlink(next, bck, fwd);
  }


  set_head(p, sz | PREV_INUSE);
  set_foot(p, sz);
  if (!islr)
    frontlink(p, sz, idx, bck, fwd);

  MALLOC_UNLOCK;

#endif /* MALLOC_PROVIDED */
}

#endif /* DEFINE_FREE */

#ifdef DEFINE_REALLOC

/*

  Realloc algorithm:

    Chunks that were obtained via mmap cannot be extended or shrunk
    unless HAVE_MREMAP is defined, in which case mremap is used.
    Otherwise, if their reallocation is for additional space, they are
    copied.  If for less, they are just left alone.

    Otherwise, if the reallocation is for additional space, and the
    chunk can be extended, it is, else a malloc-copy-free sequence is
    taken.  There are several different ways that a chunk could be
    extended. All are tried:

       * Extending forward into the following adjacent free chunk.
       * Shifting backwards, joining preceding adjacent space.
       * Both shifting backwards and extending forward.
       * Extending into newly sbrked space.

    Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
    size argument of zero (re)allocates a minimum-sized chunk.

    If the reallocation is for less space, and the new request is for
    a `small' (<512 bytes) size, then the newly unused space is lopped
    off and freed.

    The old unix realloc convention of allowing the last-free'd chunk
    to be used as an argument to realloc is no longer supported.
    I don't know of any programs still relying on this feature,
    and allowing it would also allow too many other incorrect
    usages of realloc to be sensible.

*/


#if __STD_C
Void_t* rEALLOc(RARG Void_t* oldmem, size_t bytes)
#else
Void_t* rEALLOc(RARG oldmem, bytes) RDECL Void_t* oldmem; size_t bytes;
#endif
{
#ifdef MALLOC_PROVIDED

  return realloc (oldmem, bytes);

#else

  INTERNAL_SIZE_T    nb;      /* padded request size */

  mchunkptr oldp;             /* chunk corresponding to oldmem */
  INTERNAL_SIZE_T    oldsize; /* its size */

  mchunkptr newp;             /* chunk to return */
  INTERNAL_SIZE_T    newsize; /* its size */
  Void_t*   newmem;           /* corresponding user mem */

  mchunkptr next;             /* next contiguous chunk after oldp */
  INTERNAL_SIZE_T  nextsize;  /* its size */

  mchunkptr prev;             /* previous contiguous chunk before oldp */
  INTERNAL_SIZE_T  prevsize;  /* its size */

  mchunkptr remainder;        /* holds split off extra space from newp */
  INTERNAL_SIZE_T  remainder_size;   /* its size */

  mchunkptr bck;              /* misc temp for linking */
  mchunkptr fwd;              /* misc temp for linking */

#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) { fREe(RCALL oldmem); return 0; }
#endif


  /* realloc of null is supposed to be same as malloc */
  if (oldmem == 0) return mALLOc(RCALL bytes);

  MALLOC_LOCK;

  newp    = oldp    = mem2chunk(oldmem);
  newsize = oldsize = chunksize(oldp);


  nb = request2size(bytes);

#if HAVE_MMAP
  if (chunk_is_mmapped(oldp))
  {
#if HAVE_MREMAP
    newp = mremap_chunk(oldp, nb);
    if(newp)
    {
      MALLOC_UNLOCK;
      return chunk2mem(newp);
    }
#endif
    /* Note the extra SIZE_SZ overhead. */
    if(oldsize - SIZE_SZ >= nb)
    {
      MALLOC_UNLOCK;
      return oldmem; /* do nothing */
    }
    /* Must alloc, copy, free. */
    newmem = mALLOc(RCALL bytes);
    if (newmem == 0)
    {
      MALLOC_UNLOCK;
      return 0; /* propagate failure */
    }
    MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
    munmap_chunk(oldp);
    MALLOC_UNLOCK;
    return newmem;
  }
#endif

  check_inuse_chunk(oldp);

  if ((long)(oldsize) < (long)(nb))
  {

    /* Try expanding forward */

    next = chunk_at_offset(oldp, oldsize);
    if (next == top || !inuse(next))
    {
      nextsize = chunksize(next);

      /* Forward into top only if a remainder */
      if (next == top)
      {
        if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
        {
          newsize += nextsize;
          top = chunk_at_offset(oldp, nb);
          set_head(top, (newsize - nb) | PREV_INUSE);
          set_head_size(oldp, nb);
          MALLOC_UNLOCK;
          return chunk2mem(oldp);
        }
      }

      /* Forward into next chunk */
      else if (((long)(nextsize + newsize) >= (long)(nb)))
      {
        unlink(next, bck, fwd);
        newsize  += nextsize;
        goto split;
      }
    }
    else
    {
      next = 0;
      nextsize = 0;
    }

    /* Try shifting backwards. */

    if (!prev_inuse(oldp))
    {
      prev = prev_chunk(oldp);
      prevsize = chunksize(prev);

      /* try forward + backward first to save a later consolidation */

      if (next != 0)
      {
        /* into top */
        if (next == top)
        {
          if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
          {
            unlink(prev, bck, fwd);
            newp = prev;
            newsize += prevsize + nextsize;
            newmem = chunk2mem(newp);
            MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
            top = chunk_at_offset(newp, nb);
            set_head(top, (newsize - nb) | PREV_INUSE);
            set_head_size(newp, nb);
            MALLOC_UNLOCK;
            return newmem;
          }
        }

        /* into next chunk */
        else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
        {
          unlink(next, bck, fwd);
          unlink(prev, bck, fwd);
          newp = prev;
          newsize += nextsize + prevsize;
          newmem = chunk2mem(newp);
          MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
          goto split;
        }
      }

      /* backward only */
      if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
      {
        unlink(prev, bck, fwd);
        newp = prev;
        newsize += prevsize;
        newmem = chunk2mem(newp);
        MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
        goto split;
      }
    }

    /* Must allocate */

    newmem = mALLOc (RCALL bytes);

    if (newmem == 0)  /* propagate failure */
    {
      MALLOC_UNLOCK;
      return 0;
    }

    /* Avoid copy if newp is next chunk after oldp. */
    /* (This can only happen when new chunk is sbrk'ed.) */

    if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
    {
      newsize += chunksize(newp);
      newp = oldp;
      goto split;
    }

    /* Otherwise copy, free, and exit */
    MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
    fREe(RCALL oldmem);
    MALLOC_UNLOCK;
    return newmem;
  }


 split:  /* split off extra room in old or expanded chunk */

  remainder_size = long_sub_size_t(newsize, nb);

  if (remainder_size >= (long)MINSIZE) /* split off remainder */
  {
    remainder = chunk_at_offset(newp, nb);
    set_head_size(newp, nb);
    set_head(remainder, remainder_size | PREV_INUSE);
    set_inuse_bit_at_offset(remainder, remainder_size);
    fREe(RCALL chunk2mem(remainder)); /* let free() deal with it */
  }
  else
  {
    set_head_size(newp, newsize);
    set_inuse_bit_at_offset(newp, newsize);
  }

  check_inuse_chunk(newp);
  MALLOC_UNLOCK;
  return chunk2mem(newp);

#endif /* MALLOC_PROVIDED */
}

#endif /* DEFINE_REALLOC */
2965
 
2966
#ifdef DEFINE_MEMALIGN
2967
 
2968
/*
2969
 
2970
  memalign algorithm:
2971
 
2972
    memalign requests more than enough space from malloc, finds a spot
2973
    within that chunk that meets the alignment request, and then
2974
    possibly frees the leading and trailing space.
2975
 
2976
    The alignment argument must be a power of two. This property is not
2977
    checked by memalign, so misuse may result in random runtime errors.
2978
 
2979
    8-byte alignment is guaranteed by normal malloc calls, so don't
2980
    bother calling memalign with an argument of 8 or less.
2981
 
2982
    Overreliance on memalign is a sure way to fragment space.
2983
 
2984
*/
2985
 
2986
 
2987
#if __STD_C
2988
Void_t* mEMALIGn(RARG size_t alignment, size_t bytes)
2989
#else
2990
Void_t* mEMALIGn(RARG alignment, bytes) RDECL size_t alignment; size_t bytes;
2991
#endif
2992
{
2993
  INTERNAL_SIZE_T    nb;      /* padded  request size */
2994
  char*     m;                /* memory returned by malloc call */
2995
  mchunkptr p;                /* corresponding chunk */
2996
  char*     brk;              /* alignment point within p */
2997
  mchunkptr newp;             /* chunk to return */
2998
  INTERNAL_SIZE_T  newsize;   /* its size */
2999
  INTERNAL_SIZE_T  leadsize;  /* leading space befor alignment point */
3000
  mchunkptr remainder;        /* spare room at end to split off */
3001
  long      remainder_size;   /* its size */
3002
 
3003
  /* If need less alignment than we give anyway, just relay to malloc */
3004
 
3005
  if (alignment <= MALLOC_ALIGNMENT) return mALLOc(RCALL bytes);
3006
 
3007
  /* Otherwise, ensure that it is at least a minimum chunk size */
3008
 
3009
  if (alignment <  MINSIZE) alignment = MINSIZE;
3010
 
3011
  /* Call malloc with worst case padding to hit alignment. */
3012
 
3013
  nb = request2size(bytes);
3014
  m  = (char*)(mALLOc(RCALL nb + alignment + MINSIZE));
3015
 
3016
  if (m == 0) return 0; /* propagate failure */
3017
 
3018
  MALLOC_LOCK;
3019
 
3020
  p = mem2chunk(m);
3021
 
3022
  if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
3023
  {
3024
#if HAVE_MMAP
3025
    if(chunk_is_mmapped(p))
3026
    {
3027
      MALLOC_UNLOCK;
3028
      return chunk2mem(p); /* nothing more to do */
3029
    }
3030
#endif
3031
  }
3032
  else /* misaligned */
3033
  {
3034
    /*
3035
      Find an aligned spot inside chunk.
3036
      Since we need to give back leading space in a chunk of at
3037
      least MINSIZE, if the first calculation places us at
3038
      a spot with less than MINSIZE leader, we can move to the
3039
      next aligned spot -- we've allocated enough total room so that
3040
      this is always possible.
3041
    */

    brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -alignment);
    if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk = brk + alignment;

    newp = (mchunkptr)brk;
    leadsize = brk - (char*)(p);
    newsize = chunksize(p) - leadsize;

#if HAVE_MMAP
    if(chunk_is_mmapped(p))
    {
      newp->prev_size = p->prev_size + leadsize;
      set_head(newp, newsize|IS_MMAPPED);
      MALLOC_UNLOCK;
      return chunk2mem(newp);
    }
#endif

    /* give back leader, use the rest */

    set_head(newp, newsize | PREV_INUSE);
    set_inuse_bit_at_offset(newp, newsize);
    set_head_size(p, leadsize);
    fREe(RCALL chunk2mem(p));
    p = newp;

    assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
  }

  /* Also give back spare room at the end */

  remainder_size = long_sub_size_t(chunksize(p), nb);

  if (remainder_size >= (long)MINSIZE)
  {
    remainder = chunk_at_offset(p, nb);
    set_head(remainder, remainder_size | PREV_INUSE);
    set_head_size(p, nb);
    fREe(RCALL chunk2mem(remainder));
  }

  check_inuse_chunk(p);
  MALLOC_UNLOCK;
  return chunk2mem(p);

}

#endif /* DEFINE_MEMALIGN */

#ifdef DEFINE_VALLOC

/*
    valloc just invokes memalign with an alignment argument equal
    to the page size of the system (or as near to this as can
    be figured out from all the includes/defines above.)
*/

#if __STD_C
Void_t* vALLOc(RARG size_t bytes)
#else
Void_t* vALLOc(RARG bytes) RDECL size_t bytes;
#endif
{
  return mEMALIGn (RCALL malloc_getpagesize, bytes);
}

#endif /* DEFINE_VALLOC */

#ifdef DEFINE_PVALLOC

/*
  pvalloc just invokes valloc for the nearest pagesize
  that will accommodate the request
*/


#if __STD_C
Void_t* pvALLOc(RARG size_t bytes)
#else
Void_t* pvALLOc(RARG bytes) RDECL size_t bytes;
#endif
{
  size_t pagesize = malloc_getpagesize;
  return mEMALIGn (RCALL pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
}
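
/*
  Worked example of the round-up (not part of the original source;
  the page size is hypothetical): if pagesize = 4096 and bytes = 5000,
  then (5000 + 4095) & ~4095 = 9095 & ~4095 = 8192, i.e. two full
  pages, which is then passed to memalign with page alignment.
*/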

#endif /* DEFINE_PVALLOC */

#ifdef DEFINE_CALLOC

/*

  calloc calls malloc, then zeroes out the allocated chunk.

*/
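
/*
  Illustrative equivalence sketch (not part of the original source;
  calloc_sketch is a hypothetical name).  Apart from the
  MORECORE_CLEARS and mmap shortcuts below, the behaviour is:

    #include <stdlib.h>
    #include <string.h>

    void *calloc_sketch (size_t n, size_t elem_size)
    {
      size_t sz = n * elem_size;   // as below, overflow is unchecked
      void *mem = malloc(sz);
      if (mem != 0)
        memset(mem, 0, sz);        // zero the whole allocation
      return mem;
    }
*/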

#if __STD_C
Void_t* cALLOc(RARG size_t n, size_t elem_size)
#else
Void_t* cALLOc(RARG n, elem_size) RDECL size_t n; size_t elem_size;
#endif
{
  mchunkptr p;
  INTERNAL_SIZE_T csz;

  INTERNAL_SIZE_T sz = n * elem_size;

#if MORECORE_CLEARS
  mchunkptr oldtop;
  INTERNAL_SIZE_T oldtopsize;
#endif
  Void_t* mem;

  /* check if expand_top was called, in which case we don't need to clear */
#if MORECORE_CLEARS
  MALLOC_LOCK;
  oldtop = top;
  oldtopsize = chunksize(top);
#endif

  mem = mALLOc (RCALL sz);

  if (mem == 0)
  {
#if MORECORE_CLEARS
    MALLOC_UNLOCK;
#endif
    return 0;
  }
  else
  {
    p = mem2chunk(mem);

    /* Two optional cases in which clearing is not necessary */

#if HAVE_MMAP
    if (chunk_is_mmapped(p))
    {
#if MORECORE_CLEARS
      MALLOC_UNLOCK;
#endif
      return mem;
    }
#endif

    csz = chunksize(p);

#if MORECORE_CLEARS
    if (p == oldtop && csz > oldtopsize)
    {
      /* clear only the bytes from non-freshly-sbrked memory */
      csz = oldtopsize;
    }
    MALLOC_UNLOCK;
#endif

    MALLOC_ZERO(mem, csz - SIZE_SZ);
    return mem;
  }
}

#endif /* DEFINE_CALLOC */

#if defined(DEFINE_CFREE) && !defined(__CYGWIN__)

/*

  cfree just calls free. It is needed/defined on some systems
  that pair it with calloc, presumably for odd historical reasons.

*/

#if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
#if !defined(INTERNAL_NEWLIB) || !defined(_REENT_ONLY)
#if __STD_C
void cfree(Void_t *mem)
#else
void cfree(mem) Void_t *mem;
#endif
{
#ifdef INTERNAL_NEWLIB
  fREe(_REENT, mem);
#else
  fREe(mem);
#endif
}
#endif
#endif

#endif /* DEFINE_CFREE */

#ifdef DEFINE_FREE

/*

    Malloc_trim gives memory back to the system (via negative
    arguments to sbrk) if there is unused memory at the `high' end of
    the malloc pool. You can call this after freeing large blocks of
    memory to potentially reduce the system-level memory requirements
    of a program. However, it cannot guarantee to reduce memory usage.
    Under some allocation patterns, some large free blocks of memory
    will be locked between two used chunks, so they cannot be given
    back to the system.

    The `pad' argument to malloc_trim represents the amount of free
    trailing space to leave untrimmed. If this argument is zero,
    only the minimum amount of memory to maintain internal data
    structures will be left (one page or less). Non-zero arguments
    can be supplied to maintain enough trailing space to service
    future expected allocations without having to re-obtain memory
    from the system.

    Malloc_trim returns 1 if it actually released any memory, else 0.

*/
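
/*
  Illustrative usage sketch (not part of the original source; sizes
  and the header choice are hypothetical):

    #include <malloc.h>
    #include <stdlib.h>

    void example (void)
    {
      char *big = malloc(1024 * 1024);
      free(big);
      (void) malloc_trim(0);  // 1 if memory was returned, else 0
    }

  Worked numbers for the computation below (assuming pagesz = 4096,
  pad = 0, MINSIZE = 16, top_size = 1048576):
    extra = ((1048576 - 0 - 16 + 4095) / 4096 - 1) * 4096
          = (256 - 1) * 4096 = 1044480,
  i.e. all but one page (plus the MINSIZE tail) is offered back to
  the system via a negative sbrk.
*/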

#if __STD_C
int malloc_trim(RARG size_t pad)
#else
int malloc_trim(RARG pad) RDECL size_t pad;
#endif
{
  long  top_size;        /* Amount of top-most memory */
  long  extra;           /* Amount to release */
  char* current_brk;     /* address returned by pre-check sbrk call */
  char* new_brk;         /* address returned by negative sbrk call */

  unsigned long pagesz = malloc_getpagesize;

  MALLOC_LOCK;

  top_size = chunksize(top);
  extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;

  if (extra < (long)pagesz)  /* Not enough memory to release */
  {
    MALLOC_UNLOCK;
    return 0;
  }

  else
  {
    /* Test to make sure no one else called sbrk */
    current_brk = (char*)(MORECORE (0));
    if (current_brk != (char*)(top) + top_size)
    {
      MALLOC_UNLOCK;
      return 0;     /* Apparently we don't own memory; must fail */
    }

    else
    {
      new_brk = (char*)(MORECORE (-extra));

      if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
      {
        /* Try to figure out what we have */
        current_brk = (char*)(MORECORE (0));
        top_size = current_brk - (char*)top;
        if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
        {
          sbrked_mem = current_brk - sbrk_base;
          set_head(top, top_size | PREV_INUSE);
        }
        check_chunk(top);
        MALLOC_UNLOCK;
        return 0;
      }

      else
      {
        /* Success. Adjust top accordingly. */
        set_head(top, (top_size - extra) | PREV_INUSE);
        sbrked_mem -= extra;
        check_chunk(top);
        MALLOC_UNLOCK;
        return 1;
      }
    }
  }
}

#endif /* DEFINE_FREE */

#ifdef DEFINE_MALLOC_USABLE_SIZE

/*
  malloc_usable_size:

    This routine tells you how many bytes you can actually use in an
    allocated chunk, which may be more than you requested (although
    often not). You can use this many bytes without worrying about
    overwriting other allocated objects. Not a particularly great
    programming practice, but still sometimes useful.

*/
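
/*
  Illustrative usage sketch (not part of the original source; names,
  sizes, and the header choice are hypothetical):

    #include <malloc.h>
    #include <stdlib.h>

    void example (void)
    {
      char *p = malloc(100);
      if (p != 0)
      {
        size_t usable = malloc_usable_size(p);  // at least 100 here
        // p[0] .. p[usable - 1] may be used without overwriting
        // other allocated objects
        free(p);
      }
    }
*/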

#if __STD_C
size_t malloc_usable_size(RARG Void_t* mem)
#else
size_t malloc_usable_size(RARG mem) RDECL Void_t* mem;
#endif
{
  mchunkptr p;
  if (mem == 0)
    return 0;
  else
  {
    p = mem2chunk(mem);
    if(!chunk_is_mmapped(p))
    {
      if (!inuse(p)) return 0;
#if DEBUG
      MALLOC_LOCK;
      check_inuse_chunk(p);
      MALLOC_UNLOCK;
#endif
      return chunksize(p) - SIZE_SZ;
    }
    return chunksize(p) - 2*SIZE_SZ;
  }
}

#endif /* DEFINE_MALLOC_USABLE_SIZE */

#ifdef DEFINE_MALLINFO

/* Utility to update current_mallinfo for malloc_stats and mallinfo() */

STATIC void malloc_update_mallinfo()
{
  int i;
  mbinptr b;
  mchunkptr p;
#if DEBUG
  mchunkptr q;
#endif

  INTERNAL_SIZE_T avail = chunksize(top);
  int   navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;

  for (i = 1; i < NAV; ++i)
  {
    b = bin_at(i);
    for (p = last(b); p != b; p = p->bk)
    {
#if DEBUG
      check_free_chunk(p);
      for (q = next_chunk(p);
           q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
           q = next_chunk(q))
        check_inuse_chunk(q);
#endif
      avail += chunksize(p);
      navail++;
    }
  }

  current_mallinfo.ordblks = navail;
  current_mallinfo.uordblks = sbrked_mem - avail;
  current_mallinfo.fordblks = avail;
#if HAVE_MMAP
  current_mallinfo.hblks = n_mmaps;
  current_mallinfo.hblkhd = mmapped_mem;
#endif
  current_mallinfo.keepcost = chunksize(top);

}

#else /* ! DEFINE_MALLINFO */

#if __STD_C
extern void malloc_update_mallinfo(void);
#else
extern void malloc_update_mallinfo();
#endif

#endif /* ! DEFINE_MALLINFO */

#ifdef DEFINE_MALLOC_STATS

/*

  malloc_stats:

    Prints on stderr the amount of space obtained from the system (both
    via sbrk and mmap), the maximum amount (which may be more than
    current if malloc_trim and/or munmap got called), the maximum
    number of simultaneous mmap regions used, and the current number
    of bytes allocated via malloc (or realloc, etc) but not yet
    freed. (Note that this is the number of bytes allocated, not the
    number requested. It will be larger than the number requested
    because of alignment and bookkeeping overhead.)

*/
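
/*
  Illustrative usage sketch (not part of the original source): print
  the summary after some activity.  The labelled lines written to
  stderr are the ones produced by the fprintf calls below.

    #include <malloc.h>
    #include <stdlib.h>

    int main (void)
    {
      void *a = malloc(4096);
      void *b = malloc(4096);
      free(b);
      malloc_stats();   // max system bytes / system bytes / in use bytes
      free(a);
      return 0;
    }
*/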

#if __STD_C
void malloc_stats(RONEARG)
#else
void malloc_stats(RONEARG) RDECL
#endif
{
  unsigned long local_max_total_mem;
  int local_sbrked_mem;
  struct mallinfo local_mallinfo;
#if HAVE_MMAP
  unsigned long local_mmapped_mem, local_max_n_mmaps;
#endif
  FILE *fp;

  MALLOC_LOCK;
  malloc_update_mallinfo();
  local_max_total_mem = max_total_mem;
  local_sbrked_mem = sbrked_mem;
  local_mallinfo = current_mallinfo;
#if HAVE_MMAP
  local_mmapped_mem = mmapped_mem;
  local_max_n_mmaps = max_n_mmaps;
#endif
  MALLOC_UNLOCK;

#ifdef INTERNAL_NEWLIB
  fp = _stderr_r(reent_ptr);
#define fprintf fiprintf
#else
  fp = stderr;
#endif

  fprintf(fp, "max system bytes = %10u\n",
          (unsigned int)(local_max_total_mem));
#if HAVE_MMAP
  fprintf(fp, "system bytes     = %10u\n",
          (unsigned int)(local_sbrked_mem + local_mmapped_mem));
  fprintf(fp, "in use bytes     = %10u\n",
          (unsigned int)(local_mallinfo.uordblks + local_mmapped_mem));
#else
  fprintf(fp, "system bytes     = %10u\n",
          (unsigned int)local_sbrked_mem);
  fprintf(fp, "in use bytes     = %10u\n",
          (unsigned int)local_mallinfo.uordblks);
#endif
#if HAVE_MMAP
  fprintf(fp, "max mmap regions = %10u\n",
          (unsigned int)local_max_n_mmaps);
#endif
}

#endif /* DEFINE_MALLOC_STATS */

#ifdef DEFINE_MALLINFO

/*
  mallinfo returns a copy of updated current mallinfo.
*/

#if __STD_C
struct mallinfo mALLINFo(RONEARG)
#else
struct mallinfo mALLINFo(RONEARG) RDECL
#endif
{
  struct mallinfo ret;

  MALLOC_LOCK;
  malloc_update_mallinfo();
  ret = current_mallinfo;
  MALLOC_UNLOCK;
  return ret;
}
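
/*
  Illustrative usage sketch (not part of the original source; the
  header choice is hypothetical).  The fields read here are the ones
  filled in by malloc_update_mallinfo above:

    #include <malloc.h>
    #include <stdio.h>

    void example (void)
    {
      struct mallinfo mi = mallinfo();
      printf("free chunks      = %d\n", mi.ordblks);
      printf("bytes in use     = %d\n", mi.uordblks);
      printf("bytes free       = %d\n", mi.fordblks);
      printf("trimmable (top)  = %d\n", mi.keepcost);
    }
*/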
3512
 
3513
#endif /* DEFINE_MALLINFO */
3514
 
3515
#ifdef DEFINE_MALLOPT
3516
 
3517
/*
3518
  mallopt:
3519
 
3520
    mallopt is the general SVID/XPG interface to tunable parameters.
3521
    The format is to provide a (parameter-number, parameter-value) pair.
3522
    mallopt then sets the corresponding parameter to the argument
3523
    value if it can (i.e., so long as the value is meaningful),
3524
    and returns 1 if successful else 0.
3525
 
3526
    See descriptions of tunable parameters above.
3527
 
3528
*/
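
/*
  Illustrative usage sketch (not part of the original source; the
  parameter values and header choice are hypothetical).
  M_TRIM_THRESHOLD and M_TOP_PAD are the symbolic parameter numbers
  handled in the switch below:

    #include <malloc.h>

    void example (void)
    {
      int ok;
      ok = mallopt(M_TRIM_THRESHOLD, 256 * 1024);  // 1 on success
      ok = mallopt(M_TOP_PAD, 64 * 1024);
      ok = mallopt(-1, 0);                         // unknown: returns 0
      (void) ok;
    }
*/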

#if __STD_C
int mALLOPt(RARG int param_number, int value)
#else
int mALLOPt(RARG param_number, value) RDECL int param_number; int value;
#endif
{
  MALLOC_LOCK;
  switch(param_number)
  {
    case M_TRIM_THRESHOLD:
      trim_threshold = value; MALLOC_UNLOCK; return 1;
    case M_TOP_PAD:
      top_pad = value; MALLOC_UNLOCK; return 1;
    case M_MMAP_THRESHOLD:
#if HAVE_MMAP
      mmap_threshold = value;
#endif
      MALLOC_UNLOCK;
      return 1;
    case M_MMAP_MAX:
#if HAVE_MMAP
      n_mmaps_max = value; MALLOC_UNLOCK; return 1;
#else
      MALLOC_UNLOCK; return value == 0;
#endif

    default:
      MALLOC_UNLOCK;
      return 0;
  }
}

#endif /* DEFINE_MALLOPT */

/*

History:

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Lu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from H.J. Lu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base is not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
         (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
          from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
          with gcc & native cc (hp, dec only) allowing
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
         structure of old version, but most details differ.)

*/
#endif
