
Sized & ?Sized: A beginner's guide

by Ugochukwu Chizaram Omumusinachi


Updated Sat May 31 2025


When you're writing Rust code, the compiler needs to know exactly how much memory each type requires at compile time.

This knowledge allows Rust to allocate stack space efficiently, pass values around without heap allocation, and optimize your code aggressively.

The Sized trait serves as the compiler's way of saying: "I know exactly how many bytes this type needs, and that size never changes."

Understanding Sized unlocks deeper comprehension of why Rust behaves the way it does with generics, function parameters, and memory management.

Most importantly, it explains the mysterious ?Sized syntax you've probably encountered in advanced Rust code.

What Is the Sized Marker Trait?

The Sized trait is one of Rust's most fundamental marker traits, yet it's largely invisible in everyday programming because the compiler handles it automatically. Think of it as a compile-time promise that says: "This type has a known, fixed size that never changes."


// This is what the Sized trait looks like conceptually
// (You can't actually see this definition - it's built into the compiler)
trait Sized {
    // No methods - it's purely a marker trait
    // The compiler automatically determines if a type implements this
}

What makes Sized special is that unlike other traits, you don't implement it manually. The compiler automatically determines whether a type has a known size and marks it as Sized accordingly. This automatic implementation is crucial because size information is needed during compilation, not at runtime.

Understanding Sized vs Unsized Types

To truly understand the Sized trait, we need the distinction between sized and unsized types, which forms the foundation of Rust's memory model.

Sized Types: The Foundation of Stack Allocation

Most types you work with daily are sized types. These types have a fixed, known size at compile time:


fn demonstrate_sized_types() {
    // All these types are Sized - the compiler knows their exact byte count
    let number: i32 = 42;           // Always 4 bytes
    let flag: bool = true;          // Always 1 byte
    let character: char = 'A';      // Always 4 bytes (Unicode scalar)
    let array: [i32; 5] = [1, 2, 3, 4, 5]; // Always 20 bytes (5 × 4)
    
    // Even complex types can be Sized if all their parts are Sized
    struct Point {
        x: f64,  // 8 bytes
        y: f64,  // 8 bytes
    }
    let point = Point { x: 1.0, y: 2.0 }; // Always 16 bytes total
    
    // The compiler can stack-allocate all of these efficiently
    // because it knows their sizes at compile time
}

The key insight here is that sized types enable stack allocation. When the compiler knows a type's size, it can reserve the exact amount of stack space needed when entering a function scope.

Dynamically Sized Types: When Size Isn't Known

Dynamically Sized Types, or DSTs, represent data whose size isn't known at compile time. These types do not implement Sized:


fn explore_unsized_types() {
    // String slices (&str) are unsized - they can point to any length of text
    let short_text: &str = "Hi";
    let long_text: &str = "This is a much longer string with many characters";
    
    // Both variables are the same type (&str) but point to different amounts of data
    // The compiler cannot know at compile time how long a &str will be
    
    // Array slices are also unsized
    let numbers = [1, 2, 3, 4, 5];
    let slice1: &[i32] = &numbers[0..2];  // Points to 2 elements
    let slice2: &[i32] = &numbers[0..4];  // Points to 4 elements
    
    // Same type (&[i32]) but different amounts of data behind the reference
}

Understanding this distinction helps explain why you can't directly store a str or [T] as a local variable. The compiler needs to know how much stack space to allocate, and it can't determine that for unsized types. So how does Rust work with such data at all? The next section explains.

Fat Pointers: How Rust Handles Unsized Types

When working with unsized types, Rust uses "fat pointers" that contain both a pointer to the data and size information:


use std::mem;

fn examine_pointer_sizes() {
    // Regular reference to sized type - just a pointer
    let number = 42i32;
    let number_ref: &i32 = &number;
    println!("Size of &i32: {} bytes", mem::size_of_val(&number_ref));
    // Prints: Size of &i32: 8 bytes (on 64-bit systems)
    
    // Fat pointer to unsized type - pointer + length
    let text = "Hello, Rust!";
    let text_ref: &str = text;
    println!("Size of &str: {} bytes", mem::size_of_val(&text_ref));
    // Prints: Size of &str: 16 bytes (pointer + length on 64-bit systems)
    
    // The extra 8 bytes store the length of the string slice
    println!("Length stored in fat pointer: {}", text.len());
}

Since these fat pointers themselves have known sizes, the compiler can allocate appropriate stack space for them at compile time. This design allows Rust to work with dynamically sized data while maintaining memory safety and zero-cost abstractions.
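You can observe both halves of a fat pointer directly. The sketch below (my own illustration, not from the article) pulls the length out of a slice reference and confirms the reference is exactly two machine words wide:

```rust
use std::mem;

fn main() {
    let numbers = [10, 20, 30, 40];
    let slice: &[i32] = &numbers[1..4];

    // The fat pointer stores the length next to the data pointer,
    // so .len() is a constant-time read, not a scan of the data.
    assert_eq!(slice.len(), 3);
    assert_eq!(slice[0], 20);

    // The reference itself is two machine words:
    // one for the address, one for the length.
    assert_eq!(mem::size_of_val(&slice), 2 * mem::size_of::<usize>());
}
```

Note that this two-word layout is how the standard library behaves on current targets; the exact representation of fat pointers is not guaranteed by the language.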

The ?Sized Syntax: Opting Out of Size Requirements

Now we reach one of Rust's more advanced features: the ?Sized syntax. This notation appears in generic type bounds and represents a crucial escape hatch from Rust's default sizing requirements.

Default Behavior: All Generics Are Sized

By default, every generic type parameter in Rust has an implicit Sized bound:


// These two function signatures are equivalent
fn process_value<T>(value: T) { /* ... */ }
fn process_value_explicit<T: Sized>(value: T) { /* ... */ }

// The compiler automatically adds ': Sized' to every generic parameter

This default makes sense because functions need to know how much stack space to allocate for their parameters. However, this restriction prevents you from writing generic code that works with unsized types.
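One consequence of the implicit bound: inside any plain generic function you can always ask for T's size, because `T: Sized` is guaranteed. A small sketch (the `stack_bytes` helper is my own name for illustration):

```rust
use std::mem;

// T carries an implicit `Sized` bound, so size_of::<T>() always works here.
fn stack_bytes<T>() -> usize {
    mem::size_of::<T>()
}

fn main() {
    assert_eq!(stack_bytes::<i32>(), 4);
    assert_eq!(stack_bytes::<(i32, i32)>(), 8);
    // stack_bytes::<str>(); // won't compile: str does not implement Sized
}
```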

Using ?Sized to Accept Unsized Types

The ?Sized syntax tells the compiler: "This type parameter might or might not be sized." 



// This function can work with both sized and unsized types
fn flexible_function<T: ?Sized>(value: &T) {
    // We can only work with T through references because
    // we don't know if T has a known size
    println!("Processing value at address: {:p}", value);
    
    // We cannot do this: let owned_value: T = ...;
    // Because T might not have a known size
}

fn demonstrating_flexible_generics() {
    let number = 42i32;
    let text = "Hello";
    let array = [1, 2, 3, 4, 5];
    
    // Works with sized types
    flexible_function(&number);    // T = i32 (sized)
    
    // Also works with unsized types!
    flexible_function(text);       // T = str (unsized)
    flexible_function(&array[..]);  // T = [i32] (unsized)
}
 

Notice how we can only accept unsized types through references. This restriction exists because the function must receive its parameters somehow, and an unsized value can't be passed directly on the stack.

Practical Applications and Real-World Examples

Common Patterns

When working with Sized and ?Sized, certain patterns emerge that you'll encounter frequently in idiomatic Rust code.

Pattern: Generic Functions That Accept References

Most functions that need ?Sized accept their parameters by reference:


// Common pattern: accept unsized types through references
fn process_data<T: ?Sized + std::fmt::Display>(data: &T) {
    println!("Processing: {}", data);
}

// This works because &T is always sized, even when T is not
// &str is sized (it's a fat pointer), even though str is not

The key idea is that references to unsized types are themselves sized. A &str is always 16 bytes on a 64-bit system (8 bytes for the pointer, 8 bytes for the length), regardless of how long the string it points to might be.
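Because every &str occupies the same fixed-size slot regardless of the text it points to, slices of wildly different lengths can live side by side in one collection. A quick check (my own example; the two-words-per-&str figure reflects current implementations rather than a language guarantee):

```rust
use std::mem;

fn main() {
    // Two string slices of very different lengths...
    let strings: Vec<&str> = vec!["a", "a considerably longer string"];

    // ...but the references themselves are the same size,
    // which is what lets Vec store them in uniform slots.
    assert_eq!(
        mem::size_of_val(&strings[0]),
        mem::size_of_val(&strings[1])
    );
    assert_eq!(mem::size_of::<&str>(), 2 * mem::size_of::<usize>());
}
```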

Pattern: Owned Unsized Data Behind Pointers

When you need to own unsized data, you must put it behind a pointer:


struct DataProcessor<T: ?Sized> {
    // Cannot do: data: T,  // T might be unsized
    data: Box<T>,           // Box can own unsized data
}

impl<T: ?Sized> DataProcessor<T> {
    fn new(data: Box<T>) -> Self {
        DataProcessor { data }
    }
    
    // Methods work normally because self.data is &T, which is always sized
    fn process(&self) where T: std::fmt::Display {
        println!("Processing: {}", self.data);
    }
}
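Here is a usage sketch of that pattern, redefining the struct so the snippet runs on its own (I've made `process` return a String instead of printing, so the result can be checked): the same DataProcessor can own a sized i32 or an unsized str, because Box handles both.

```rust
use std::fmt::Display;

struct DataProcessor<T: ?Sized> {
    data: Box<T>, // Box can own unsized data
}

impl<T: ?Sized> DataProcessor<T> {
    fn new(data: Box<T>) -> Self {
        DataProcessor { data }
    }

    fn process(&self) -> String
    where
        T: Display,
    {
        format!("Processing: {}", self.data)
    }
}

fn main() {
    // Sized payload: Box<i32>
    let p1 = DataProcessor::new(Box::new(42i32));
    assert_eq!(p1.process(), "Processing: 42");

    // Unsized payload: Box<str>, a heap-owned string slice
    let p2: DataProcessor<str> = DataProcessor::new("hello".into());
    assert_eq!(p2.process(), "Processing: hello");
}
```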

Common Mistake: Forgetting ?Sized in Generic Bounds

A frequent mistake is forgetting to add ?Sized when you want maximum flexibility, especially in generic bounds:


// This function is unnecessarily restrictive
fn restrictive_function<T: std::fmt::Display>(data: &T) {
    println!("{}", data);
}

// This function is more flexible and idiomatic
fn flexible_function<T: ?Sized + std::fmt::Display>(data: &T) {
    println!("{}", data);
}

fn demonstrate_difference() {
    let text = "Hello, world!";
    
    // Both work with sized types
    restrictive_function(&42i32);
    flexible_function(&42i32);
    
    // Only the flexible version works with unsized types
    // restrictive_function(text);  // Won't compile!
    flexible_function(text);        // Works fine
}

The flexible version can accept both sized and unsized types, making it more reusable and practical.

When and Why to Use ?Sized

Like everything cool, ?Sized comes with a cost, and knowing when to use it helps you write more flexible, reusable generic code at a lower cost. The decision comes down to whether your function or data structure needs to work with unsized types.

Use ?Sized When Building Generic Libraries

Library code benefits most from ?Sized because it maximizes compatibility:


// A generic utility function that measures and displays information about any type
fn analyze_value<T: ?Sized + std::fmt::Debug>(value: &T, name: &str) {
    println!("Analyzing {}: {:?}", name, value);
    println!("Size in memory: {} bytes", std::mem::size_of_val(value));
    println!("Memory address: {:p}", value);
}

fn library_usage_example() {
    // Works with all kinds of types
    analyze_value(&42i32, "integer");
    analyze_value("Hello", "string slice");
    analyze_value(&[1, 2, 3, 4, 5], "array");
    analyze_value(&[1, 2, 3, 4, 5][1..3], "slice");
}

This kind of flexibility makes your library functions more useful to other developers, even though it might seem a little complex for beginners.

Don't Use ?Sized for Application-Specific Code

In application code where you know your specific types, the added complexity of ?Sized often isn't worth it, except in rare cases (such as trait definitions or generics-heavy APIs) where that flexibility genuinely matters:


fn process_user_input(input: &str) -> String {
    // No need for generics or ?Sized here, we know the types
    input.trim().to_uppercase()
}

// Better than over-engineering with generics
// fn process_user_input<T: ?Sized + AsRef<str>>(input: &T) -> String { ... }

Keep your application code simple and only introduce ?Sized when you genuinely need the flexibility. Generics also add compile-time overhead through monomorphization, so use them wisely.

Memory Layout and Performance Implications

Understanding how Sized and ?Sized types affect memory layout helps you make informed performance decisions.

Stack vs Heap Allocation Patterns

Sized types enable efficient stack allocation, while unsized types often require heap allocation:


fn memory_allocation_patterns() {
    // Sized types - stack allocated, very fast
    let number: i32 = 42;                    // 4 bytes on stack
    let array: [i32; 1000] = [0; 1000];     // 4000 bytes on stack
    
    // Unsized types - require heap allocation for ownership
    let owned_string: String = String::from("Hello");  // Heap allocated
    let boxed_slice: Box<[i32]> = vec![1, 2, 3].into_boxed_slice(); // Heap allocated
    
    // References to unsized types - stack allocated pointers to data elsewhere
    let string_slice: &str = &owned_string;    // 16 bytes on stack (fat pointer)
    let array_slice: &[i32] = &boxed_slice;   // 16 bytes on stack (fat pointer)
}

This distinction affects performance: stack allocation is faster than heap allocation, but heap allocation provides more flexibility for dynamically sized data, so the right choice depends on your use case.
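The fat-pointer cost shows up in owning pointers too: a Box of a sized type is a single word, while a Box of a slice must also remember how many elements it owns. A quick check (my own example; sizes reflect current implementations):

```rust
use std::mem;

fn main() {
    let word = mem::size_of::<usize>();

    // Thin pointer: the pointee's size is known from the type alone.
    assert_eq!(mem::size_of::<Box<i32>>(), word);

    // Fat pointer: Box<[i32]> carries the element count as a second word.
    assert_eq!(mem::size_of::<Box<[i32]>>(), 2 * word);
}
```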

Zero-Cost Abstractions with Sized Types

Rust's zero-cost abstractions work best with sized types because the compiler can offer fine-grained optimizations:


// This generic function compiles to the same code as hand-written versions
// for each specific type, thanks to monomorphization
fn efficient_generic<T>(value: T) -> T 
where 
    T: Copy + std::ops::Add<Output = T>
{
    value + value  // Compiles to optimal assembly for each T
}

fn zero_cost_demonstration() {
    // Each call compiles to optimized, type-specific code
    let doubled_int = efficient_generic(21i32);    // Optimized i32 addition
    let doubled_float = efficient_generic(3.14f64); // Optimized f64 addition
    
    // No runtime overhead from generics!
}

Sized types enable this optimization because the compiler knows exactly how to handle each concrete type at compile time.

Summary

The Sized trait represents one of Rust's most fundamental concepts, controlling how the language manages memory and enables its zero-cost abstractions. Most types you work with daily are automatically Sized, meaning the compiler knows their exact memory requirements at compile time.

  • The ?Sized syntax is a way to escape from Rust's default requirement that all generic type parameters be sized. 

  • By using ?Sized, you can write generic functions and data structures that work with both sized and unsized types, increasing flexibility at minimal cost.

  • Smart pointers like Box and Rc use ?Sized to store both sized and unsized data on the heap.
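For instance, Rc can share ownership of a string slice directly, with no intermediate String. A small sketch of my own:

```rust
use std::rc::Rc;

fn main() {
    // Rc<str> owns the text on the heap and hands out cheap clones;
    // the Rc itself is a fat pointer (data pointer + length).
    let shared: Rc<str> = Rc::from("hello, world");
    let also_shared = Rc::clone(&shared);

    assert_eq!(shared.len(), 12);
    assert_eq!(&*also_shared, "hello, world");
    assert_eq!(Rc::strong_count(&shared), 2);
}
```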

If you want to explore more, check out the Rust Reference on dynamically sized types; it provides comprehensive technical details.

The Rust Book's chapter on advanced traits covers marker traits and their role in the type system.

As usual, stick around for more. If you have any questions, feel free to reach out to us or connect with me on my LinkedIn; see our Rust course for beginners if any of the discussed concepts feel strange, and you can subscribe to our Rust-only blog at Rust Daily.

