This post explores how Rust manages memory through Box, Vec, and the global allocator, uncovering what really happens under the hood.

Box and Vec Memory Allocation

Before diving into implementation details, let’s first understand how Box and Vec internally use allocators.

Box

Box is the simplest smart pointer in Rust. It allocates a value on the heap while making ownership explicit and safe.

Under the hood, though, Box is more complex than the Box<T> we usually write. Here’s the actual definition:

#[lang = "owned_box"]
#[fundamental]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_insignificant_dtor]
#[doc(search_unbox)]
// The declaration of the `Box` struct must be kept in sync with the
// compiler or ICEs will happen.
pub struct Box<
    T: ?Sized,
    #[unstable(feature = "allocator_api", issue = "32838")] A: Allocator = Global,
>(Unique<T>, A);

Box has two fields:

Unique<T>

  • T: ?Sized: Allows Box to hold dynamically sized types like Box<dyn Trait> or Box<[u8]>.
  • Unique<T>: A wrapper around a non-null pointer that guarantees uniqueness, ensuring the Box owns its allocation exclusively.

A

  • A: Allocator = Global: Represents the allocator used to manage the memory.
  • By default this is Global, which adds no runtime overhead because it’s a zero-sized type (ZST).

How Box::new Works

At first glance, Box::new looks like an ordinary function:

#[cfg(not(no_global_oom_handling))]
#[inline(always)]
#[stable(feature = "rust1", since = "1.0.0")]
#[must_use]
#[rustc_diagnostic_item = "box_new"]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
pub fn new(x: T) -> Self {
    return box_new(x);
}

But Box::new doesn’t perform allocation itself—it calls the compiler intrinsic box_new.

#[rustc_intrinsic]
#[unstable(feature = "liballoc_internals", issue = "none")]
pub fn box_new<T>(x: T) -> Box<T>;

This intrinsic is lowered by the compiler into a call to exchange_malloc, which is the real allocation entry point:

#[cfg(not(no_global_oom_handling))]
#[lang = "exchange_malloc"]
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn exchange_malloc(size: usize, align: usize) -> *mut u8 {
    let layout = unsafe { Layout::from_size_align_unchecked(size, align) };
    match Global.allocate(layout) {
        Ok(ptr) => ptr.as_mut_ptr(),
        Err(_) => handle_alloc_error(layout),
    }
}

Flow: Box::new → box_new → exchange_malloc

When you compile:

fn main() {
    Box::new(true);
}

What looks like a simple Box::new(true) in Rust is actually lowered by the compiler into a raw call to exchange_malloc.

box_example1.png

The binary output above clearly shows that Box::new(true) is translated into a call to exchange_malloc — this is where the real allocation takes place.

How Box is Dropped

When a Box is dropped, the compiler eventually invokes the special lang item drop_in_place. This function is just a placeholder — the compiler replaces it with real drop glue for the type being dropped:

#[stable(feature = "drop_in_place", since = "1.8.0")]
#[lang = "drop_in_place"]
#[allow(unconditional_recursion)]
#[rustc_diagnostic_item = "ptr_drop_in_place"]
pub unsafe fn drop_in_place<T: PointeeSized>(to_drop: *mut T) {
    // Code here does not matter - this is replaced by the
    // real drop glue by the compiler.

    // SAFETY: see comment above
    unsafe { drop_in_place(to_drop) }
}

At first glance, this looks like infinite recursion. But the trick is:

  • The function body is just a stub.
  • The #[lang = "drop_in_place"] attribute tells the compiler to handle it specially.
  • During compilation, the compiler generates real code that knows how to properly drop values of type T.

box_example2.png

For Box, this placeholder gets replaced with its actual Drop implementation:

#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl<#[may_dangle] T: ?Sized, A: Allocator> Drop for Box<T, A> {
    #[inline]
    fn drop(&mut self) {
        // the T in the Box is dropped by the compiler before the destructor is run

        let ptr = self.0;

        unsafe {
            let layout = Layout::for_value_raw(ptr.as_ptr());
            if layout.size() != 0 {
                self.1.deallocate(From::from(ptr.cast()), layout);
            }
        }
    }
}

In other words:

  1. The compiler inserts drop glue to drop the inner value T.
  2. Then, the Box destructor calls the allocator’s deallocate method.
  3. With the default allocator (Global), this eventually frees the memory back to the system.

Vec

Vec is a growable, heap-allocated, contiguous array type. Unlike arrays or slices, it owns its buffer and can resize itself as elements are pushed or popped.

Here’s the actual definition:

#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_diagnostic_item = "Vec"]
#[rustc_insignificant_dtor]
pub struct Vec<T, #[unstable(feature = "allocator_api", issue = "32838")] A: Allocator = Global> {
    buf: RawVec<T, A>,
    len: usize,
}

Breaking Down the Fields

buf: RawVec<T, A>

  • The underlying buffer that manages the heap allocation.
  • RawVec stores a pointer to the allocated memory and keeps track of the current capacity.
  • It is generic over an allocator A (default: Global), which makes Vec compatible with custom allocators.

len: usize

  • The number of initialized elements in the vector.
  • Must always be less than or equal to the buffer’s capacity.
  • When elements are pushed or popped, len changes, but the buffer itself is not necessarily reallocated immediately.

How Vec::new Works

#[inline]
#[rustc_const_stable(feature = "const_vec_new", since = "1.39.0")]
#[rustc_diagnostic_item = "vec_new"]
#[stable(feature = "rust1", since = "1.0.0")]
#[must_use]
pub const fn new() -> Self {
    Vec { buf: RawVec::new(), len: 0 }
}

Under the hood, Vec::new constructs a RawVec, which in turn wraps RawVecInner:

#[allow(missing_debug_implementations)]
pub(crate) struct RawVec<T, A: Allocator = Global> {
    inner: RawVecInner<A>,
    _marker: PhantomData<T>,
}

impl<T> RawVec<T, Global> {
    #[must_use]
    pub(crate) const fn new() -> Self {
        Self::new_in(Global)
    }
    // ...
}

impl<T, A: Allocator> RawVec<T, A> {
    #[inline]
    pub(crate) const fn new_in(alloc: A) -> Self {
        Self { inner: RawVecInner::new_in(alloc, Alignment::of::<T>()), _marker: PhantomData }
    }
    // ...
}

RawVec has two fields:

  • inner: RawVecInner<A> – stores the pointer and capacity, and uses the allocator A (usually Global) to manage memory.
  • _marker: PhantomData<T> – a zero-sized type that informs the compiler that RawVec logically owns values of type T. This ensures correct drop-check behavior and variance rules, even though no T is stored directly.

About RawVecInner

The real allocation logic lives inside RawVecInner:

#[allow(missing_debug_implementations)]
struct RawVecInner<A: Allocator = Global> {
    ptr: Unique<u8>,
    cap: Cap,
    alloc: A,
}

impl<A: Allocator> RawVecInner<A> {
    #[inline]
    const fn new_in(alloc: A, align: Alignment) -> Self {
        let ptr = Unique::from_non_null(NonNull::without_provenance(align.as_nonzero()));
        // `cap: 0` means "unallocated". zero-sized types are ignored.
        Self { ptr, cap: ZERO_CAP, alloc }
    }
}

It stores:

  • ptr – raw pointer to the allocated buffer.
  • cap – the current capacity.
  • alloc – the allocator instance that owns the memory.

In short: RawVec is the interface, while RawVecInner is the engine that handles allocation, growth, and deallocation.

When created with new_in, no memory is allocated immediately. Instead, it sets up a valid, aligned dummy pointer with capacity = 0. Real allocation happens only when the vector actually needs space.

How Vec::push Works

Appending with push looks simple, but involves a clever growth strategy:

#[cfg(not(no_global_oom_handling))]
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_confusables("push_back", "put", "append")]
#[track_caller]
pub fn push(&mut self, value: T) {
    // Inform codegen that the length does not change across grow_one().
    let len = self.len;
    // This will panic or abort if we would allocate > isize::MAX bytes
    // or if the length increment would overflow for zero-sized types.
    if len == self.buf.capacity() {
        self.buf.grow_one();
    }
    unsafe {
        let end = self.as_mut_ptr().add(len);
        ptr::write(end, value);
        self.len = len + 1;
    }
}

Steps:

  1. Save the current length.
  2. If len == capacity, call grow_one to make space.
  3. Compute the pointer to the end of the buffer.
  4. Write the element into memory.
  5. Increment len.

How Growth Happens

If the buffer is full, push delegates to RawVec::grow_one, which calls RawVecInner::grow_amortized:

impl<A: Allocator> RawVecInner<A> {
    // ...
    #[cfg(not(no_global_oom_handling))]
    #[inline]
    #[track_caller]
    fn grow_one(&mut self, elem_layout: Layout) {
        if let Err(err) = self.grow_amortized(self.cap.as_inner(), 1, elem_layout) {
            handle_error(err);
        }
    }

    fn grow_amortized(
        &mut self,
        len: usize,
        additional: usize,
        elem_layout: Layout,
    ) -> Result<(), TryReserveError> {
        // This is ensured by the calling contexts.
        debug_assert!(additional > 0);

        if elem_layout.size() == 0 {
            // Since we return a capacity of `usize::MAX` when `elem_size` is
            // 0, getting to here necessarily means the `RawVec` is overfull.
            return Err(CapacityOverflow.into());
        }

        // Nothing we can really do about these checks, sadly.
        let required_cap = len.checked_add(additional).ok_or(CapacityOverflow)?;

        // This guarantees exponential growth. The doubling cannot overflow
        // because `cap <= isize::MAX` and the type of `cap` is `usize`.
        let cap = cmp::max(self.cap.as_inner() * 2, required_cap);
        let cap = cmp::max(min_non_zero_cap(elem_layout.size()), cap);

        let new_layout = layout_array(cap, elem_layout)?;

        let ptr = finish_grow(new_layout, self.current_memory(elem_layout), &mut self.alloc)?;
        // SAFETY: finish_grow would have resulted in a capacity overflow if we tried to allocate more than `isize::MAX` items

        unsafe { self.set_ptr_and_cap(ptr, cap) };
        Ok(())
    }
    // ...
}

Key ideas:

  1. Handle zero-sized types (ZSTs) separately.
  2. Calculate the required capacity.
  3. Double the capacity (amortized growth).
  4. Compute a new layout.
  5. Allocate a larger buffer.
  6. Update the pointer and capacity.

The finish_grow Step

This is where actual memory allocation (or reallocation) occurs:

#[cold]
fn finish_grow<A>(
    new_layout: Layout,
    current_memory: Option<(NonNull<u8>, Layout)>,
    alloc: &mut A,
) -> Result<NonNull<[u8]>, TryReserveError>
where
    A: Allocator,
{
    alloc_guard(new_layout.size())?;

    let memory = if let Some((ptr, old_layout)) = current_memory {
        debug_assert_eq!(old_layout.align(), new_layout.align());
        unsafe {
            // The allocator checks for alignment equality
            hint::assert_unchecked(old_layout.align() == new_layout.align());
            alloc.grow(ptr, old_layout, new_layout)
        }
    } else {
        alloc.allocate(new_layout)
    };

    memory.map_err(|_| AllocError { layout: new_layout, non_exhaustive: () }.into())
}

// library/core/src/alloc/mod.rs
#[unstable(feature = "allocator_api", issue = "32838")]
pub unsafe trait Allocator {
    // ...
    unsafe fn grow(
        &self,
        ptr: NonNull<u8>,
        old_layout: Layout,
        new_layout: Layout,
    ) -> Result<NonNull<[u8]>, AllocError> {
        debug_assert!(
            new_layout.size() >= old_layout.size(),
            "`new_layout.size()` must be greater than or equal to `old_layout.size()`"
        );

        let new_ptr = self.allocate(new_layout)?;

        // SAFETY: because `new_layout.size()` must be greater than or equal to
        // `old_layout.size()`, both the old and new memory allocation are valid for reads and
        // writes for `old_layout.size()` bytes. Also, because the old allocation wasn't yet
        // deallocated, it cannot overlap `new_ptr`. Thus, the call to `copy_nonoverlapping` is
        // safe. The safety contract for `dealloc` must be upheld by the caller.
        unsafe {
            ptr::copy_nonoverlapping(ptr.as_ptr(), new_ptr.as_mut_ptr(), old_layout.size());
            self.deallocate(ptr, old_layout);
        }

        Ok(new_ptr)
    }
    // ...
}

  • If the vector already has a buffer, the allocator tries to grow it.
  • Otherwise, it simply allocates a fresh buffer.

The Allocator::grow default implementation does three things:

  1. Allocate a larger buffer.
  2. Copy existing elements into it.
  3. Free the old buffer.

Summary

Vec keeps memory safe by:

  • Growing its buffer when more space is required.
  • Copying elements safely into new buffers.
  • Freeing old allocations correctly.

push may trigger growth, while pop simply decreases len without changing the underlying allocation.


Global Allocator & Low-level Allocation Functions

In this section, let’s take a closer look at how the Global allocator works and how it interacts with Rust’s low-level allocation functions.

Definition of Global

#[unstable(feature = "allocator_api", issue = "32838")]
#[derive(Copy, Clone, Default, Debug)]
// the compiler needs to know when a Box uses the global allocator vs a custom one
#[lang = "global_alloc_ty"]
pub struct Global;

Global is defined as a zero-sized type (ZST). As noted earlier, it implements the Allocator trait and serves as the default allocator for Box, Vec, String, and other standard collections.

Implementing the Allocator Trait

The Allocator trait provides a uniform interface for allocation, deallocation, and reallocation, which allows collections to stay allocator-agnostic. Global implements it like this:

#[unstable(feature = "allocator_api", issue = "32838")]
unsafe impl Allocator for Global {
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
        self.alloc_impl(layout, false)
    }
    // ...
}

Here, allocate simply forwards to alloc_impl.

The Core Allocation Logic

The real work happens inside alloc_impl, which delegates to the underlying system allocator:

impl Global {
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    fn alloc_impl(&self, layout: Layout, zeroed: bool) -> Result<NonNull<[u8]>, AllocError> {
        match layout.size() {
            0 => Ok(NonNull::slice_from_raw_parts(layout.dangling(), 0)),
            // SAFETY: `layout` is non-zero in size,
            size => unsafe {
                let raw_ptr = if zeroed { alloc_zeroed(layout) } else { alloc(layout) };
                let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?;
                Ok(NonNull::slice_from_raw_parts(ptr, size))
            },
        }
    }
    // ...
}

Breaking it down:

  • Zero-sized allocations: If the requested Layout has size 0, Rust doesn’t actually allocate memory. Instead, it returns a dangling pointer that is guaranteed never to be dereferenced. This keeps the API consistent without wasting memory.
  • Non-zero allocations: For real allocations, it calls either alloc(layout) or alloc_zeroed(layout) depending on whether zero-initialization is required.
  • Safety & error handling: The result is wrapped in NonNull, which guarantees the pointer is not null. If the system allocator fails, AllocError is returned instead.

The Alloc Shim

#[stable(feature = "global_alloc", since = "1.28.0")]
#[must_use = "losing the pointer will leak memory"]
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
pub unsafe fn alloc(layout: Layout) -> *mut u8 {
    unsafe {
        // Make sure we don't accidentally allow omitting the allocator shim in
        // stable code until it is actually stabilized.
        __rust_no_alloc_shim_is_unstable_v2();

        __rust_alloc(layout.size(), layout.align())
    }
}

This function is deliberately tiny:

  • It’s a stable façade over the allocator ABI.
  • The guard __rust_no_alloc_shim_is_unstable_v2() prevents bypassing the ABI contract.
  • The real work happens inside the compiler-provided symbol __rust_alloc.

In other words: alloc doesn’t call the OS directly — it forwards the request to __rust_alloc.

Where Does __rust_alloc Come From?

Inside liballoc, these entry points are declared as extern "Rust" symbols:

unsafe extern "Rust" {
    #[rustc_allocator]
    #[rustc_nounwind]
    #[rustc_std_internal_symbol]
    fn __rust_alloc(size: usize, align: usize) -> *mut u8;
    #[rustc_deallocator]
    #[rustc_nounwind]
    #[rustc_std_internal_symbol]
    fn __rust_dealloc(ptr: *mut u8, size: usize, align: usize);
    #[rustc_reallocator]
    #[rustc_nounwind]
    #[rustc_std_internal_symbol]
    fn __rust_realloc(ptr: *mut u8, old_size: usize, align: usize, new_size: usize) -> *mut u8;
    #[rustc_allocator_zeroed]
    #[rustc_nounwind]
    #[rustc_std_internal_symbol]
    fn __rust_alloc_zeroed(size: usize, align: usize) -> *mut u8;

    #[rustc_nounwind]
    #[rustc_std_internal_symbol]
    fn __rust_no_alloc_shim_is_unstable_v2();
}

How are these satisfied?

Default Path (no custom allocator)

If no #[global_allocator] is provided, Rust wires these symbols to internal shims (__rdl_*) that simply forward to the system allocator:

#[cfg(not(test))]
#[doc(hidden)]
#[allow(unused_attributes)]
#[unstable(feature = "alloc_internals", issue = "none")]
pub mod __default_lib_allocator {
    use super::{GlobalAlloc, Layout, System};

    #[rustc_std_internal_symbol]
    pub unsafe extern "C" fn __rdl_alloc(size: usize, align: usize) -> *mut u8 {
        // SAFETY: see the guarantees expected by `Layout::from_size_align` and
        // `GlobalAlloc::alloc`.
        unsafe {
            let layout = Layout::from_size_align_unchecked(size, align);
            System.alloc(layout)
        }
    }

    #[rustc_std_internal_symbol]
    pub unsafe extern "C" fn __rdl_dealloc(ptr: *mut u8, size: usize, align: usize) {
        // SAFETY: see the guarantees expected by `Layout::from_size_align` and
        // `GlobalAlloc::dealloc`.
        unsafe { System.dealloc(ptr, Layout::from_size_align_unchecked(size, align)) }
    }

    #[rustc_std_internal_symbol]
    pub unsafe extern "C" fn __rdl_realloc(
        ptr: *mut u8,
        old_size: usize,
        align: usize,
        new_size: usize,
    ) -> *mut u8 {
        // SAFETY: see the guarantees expected by `Layout::from_size_align` and
        // `GlobalAlloc::realloc`.
        unsafe {
            let old_layout = Layout::from_size_align_unchecked(old_size, align);
            System.realloc(ptr, old_layout, new_size)
        }
    }

    #[rustc_std_internal_symbol]
    pub unsafe extern "C" fn __rdl_alloc_zeroed(size: usize, align: usize) -> *mut u8 {
        // SAFETY: see the guarantees expected by `Layout::from_size_align` and
        // `GlobalAlloc::alloc_zeroed`.
        unsafe {
            let layout = Layout::from_size_align_unchecked(size, align);
            System.alloc_zeroed(layout)
        }
    }
}

So by default, Box, Vec, and String use the platform allocator.

Custom Global Allocator

If you declare your own type implementing GlobalAlloc and mark it with #[global_allocator], the compiler rewires the __rust_* symbols to call into your implementation:

use std::alloc::{GlobalAlloc, Layout, System};

struct CustomAllocator;

unsafe impl GlobalAlloc for CustomAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let ptr = System.alloc(layout);
        // ...
        ptr
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout);
        // ...
    }
}

#[global_allocator]
static GLOBAL: CustomAllocator = CustomAllocator;

Now all allocations in your program flow through CustomAllocator.


Example: Logging Allocations

Here’s a complete example of a custom allocator that logs every allocation and deallocation:

use std::alloc::{GlobalAlloc, Layout, System};

struct CustomAllocator;

unsafe impl GlobalAlloc for CustomAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let ptr = System.alloc(layout);
        log_alloc(b"Alloc", layout.size(), layout.align());
        ptr
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout);
        log_alloc(b"Dealloc", layout.size(), layout.align());
    }
}

#[global_allocator]
static GLOBAL: CustomAllocator = CustomAllocator;

fn log_line(msg: &str) {
    unsafe {
        libc::write(2, msg.as_ptr() as *const _, msg.len());
        libc::write(2, b"\n".as_ptr() as *const _, 1);
    }
}

fn log_alloc(event: &[u8], size: usize, align: usize) {
    let mut buf = [0u8; 128];
    let mut cursor = 0;

    // event
    buf[cursor..cursor+event.len()].copy_from_slice(event);
    cursor += event.len();

    buf[cursor..cursor+6].copy_from_slice(b" size=");
    cursor += 6;
    cursor += itoa_noalloc(size, &mut buf[cursor..]);

    buf[cursor..cursor+7].copy_from_slice(b" align=");
    cursor += 7;
    cursor += itoa_noalloc(align, &mut buf[cursor..]);

    buf[cursor] = b'\n';
    cursor += 1;

    unsafe {
        libc::write(2, buf.as_ptr() as *const _, cursor);
    }
}

fn itoa_noalloc(mut n: usize, out: &mut [u8]) -> usize {
    let mut tmp = [0u8; 20];
    let mut i = tmp.len();

    if n == 0 {
        i -= 1;
        tmp[i] = b'0';
    } else {
        while n > 0 {
            i -= 1;
            tmp[i] = b'0' + (n % 10) as u8;
            n /= 10;
        }
    }

    let len = tmp.len() - i;
    out[..len].copy_from_slice(&tmp[i..]);
    len
}

fn main() {
    log_line("=== main start ===");

    let _v = vec![1, 2, 3];
    let _s = String::from("hello");
    let _b = Box::new([0u8; 64]);

    drop(_v);
    drop(_s);
    drop(_b);

    log_line("=== main end ===");
}

Logging is done directly via libc::write to stderr, ensuring no extra allocations are introduced (which could cause recursion).


Sample Output

Alloc size=4 align=1
Alloc size=64 align=8
Alloc size=456 align=8
=== main start ===
Alloc size=12 align=4
Alloc size=5 align=1
Alloc size=64 align=1
Dealloc size=12 align=4
Dealloc size=5 align=1
Dealloc size=64 align=1
=== main end ===
Dealloc size=4 align=1

Notes

  • The first allocations happen before main (runtime setup like panic hooks and I/O locks).
  • align values reflect type layouts (Vec<i32> needs align=4, String needs align=1, etc.).
  • Because logging avoids heap allocations, it’s safe to run inside the allocator itself.

The New Allocator Trait

Stability Status

The Allocator trait is still unstable (#[unstable(feature = "allocator_api", issue = "32838")]). This means that while collections such as Vec, Box, and String are internally designed to work with custom allocators, the public API for using them with your own allocator is not yet stabilized.

Allocator-Aware Constructors

Collections like Vec, Box, and String already store an allocator internally. However, the *_in constructors that let you supply a custom allocator (a type implementing the unstable Allocator trait, not just GlobalAlloc) are only available on nightly Rust. For example:

let v = Vec::new_in(CustomAllocator); // requires #![feature(allocator_api)]

Looking Ahead

Rust’s memory allocation is safe by default, but when combined with unsafe it gives you almost as much freedom as C. Beyond the standard system allocator, there are also high-performance alternatives like jemalloc and mimalloc. In future posts, we’ll take a closer look at these allocators, explore how to integrate them into Rust projects, and see what kind of performance differences they can make.