Demilade Sonuga's blog

All posts

Once

2023-05-20 · 52 min read

In this post, we're going to implement a Once.

What Needs To Be Done

As mentioned in the previous post, a Once is a synchronization primitive used to create immutable values that are initialized once at runtime.

Our aim here is to create a safe abstraction that allows for this one-time mutability of an immutable value. To do this, we're inevitably going to have to bust out some unsafe operations to make it work.

A typical use of our Once should look like this:

static SCREEN: Once<&mut Screen> = Once::new();

fn do_something() {
    // Initialize once
    SCREEN.call_once(|| screen_reference);
    // Use like a regular immutable static
    write!(SCREEN, "Hello, world!");
}

Before you proceed, please take your time to think about how you'll implement this yourself.

Implementation

Like other things in programming, there are many ways this could be done. The line of thought followed here is just one out of many.

Create a new folder sync and in that folder, create the files once.rs and mod.rs.

In mod.rs

pub mod once;

In once.rs

// A structure that allows for a one-time initialization of
// values at runtime
pub struct Once<T> {

}

The purpose of a Once is to create an immutable value that will be initialized once at runtime. From this requirement, we can infer that before that initialization takes place, two things must be true:

  1. The memory space required to hold that newly initialized value must be readily available in the Once itself.
  2. That memory space, before initialization, holds nothing. That is, it's uninitialized.

From these points, it's clear that we need to have a space in the Once itself that will at first be uninitialized but will be mutated to be initialized later.

The core and std crates offer two things that are very useful in this scenario: UnsafeCell and MaybeUninit. UnsafeCell is the fundamental primitive for building abstractions with interior mutability in Rust.

Its definition looks like this:

#[repr(transparent)]
pub struct UnsafeCell<T: ?Sized> { /* private fields */ }

The generic parameter T ensures that UnsafeCell can hold a value of any type. And the #[repr(transparent)] attribute guarantees that an UnsafeCell<T> value will have the exact same data layout as the T that it holds.

UnsafeCell provides a simple interface that allows mutation of its underlying data, even through an immutable reference. But as the creators of safe abstractions, it's up to us to ensure that it's used correctly, that is, its use doesn't result in the violation of Rust rules and safety guarantees.

An example use of UnsafeCell:

use core::cell::UnsafeCell;

fn func() {
    // An immutable value
    let x = UnsafeCell::new(5);
    // Mutating the underlying value in spite of the immutability.
    // UnsafeCell::get returns a raw pointer to the inner value, which
    // we can write through in an unsafe block.
    unsafe { *x.get() = 6; }
}

This is just a quick overview of the UnsafeCell structure. You should definitely read at least its top-level documentation. You can find the link to that in the references.

Now, we could just add a data field to our Once like this:

pub struct Once<T> {
    data: UnsafeCell<T>
}

But that won't work out because, as you have probably seen already, UnsafeCell requires you to already know the initial value that will be kept inside it. But with our Once structure, we don't already know the value. If we did, we wouldn't need a Once.

In other words, the value is supposed to be uninitialized at first. It's at this point that the MaybeUninit struct comes in. MaybeUninit allows us to create uninitialized values which can be initialized later.

Its definition:

#[repr(transparent)]
pub union MaybeUninit<T> {
    /* private fields */
}

Like UnsafeCell, it's annotated with the #[repr(transparent)] attribute that assures us that the representation of MaybeUninit<T> will have the exact same data layout as T. So a MaybeUninit<i32> will look exactly like a normal i32 in memory.
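
Before wiring it into the Once, here's a minimal sketch (separate from our Once code) of how MaybeUninit is used on its own: reserve the space first, write into it later, and only then treat it as initialized:

use core::mem::MaybeUninit;

fn main() {
    // Reserve space for an i32 without initializing it
    let mut x: MaybeUninit<i32> = MaybeUninit::uninit();
    // Initialize it later by writing a value into that space
    x.write(5);
    // Only after initialization is it sound to assume the value is there
    let val = unsafe { x.assume_init() };
    assert_eq!(val, 5);
}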

With this tool at our disposal, we can create a data field in the Once like so:

use core::cell::UnsafeCell;
use core::mem::MaybeUninit;
pub struct Once<T> {
    // The data that will be initialized at runtime
    data: UnsafeCell<MaybeUninit<T>>
}

By putting the MaybeUninit<T> inside UnsafeCell, we are essentially telling the compiler that data is a value that can be mutated through an immutable reference and may or may not be initialized (which is exactly what we want).

We aren't done with our definitions yet. A Once is initialized only after the running of an initialization function. But what if we try to access the value in the Once without first initializing it? Or while it's being initialized? Or what if the initialization function panicked while running?

We need a way to know the status of the running function at any point in time. We can do this with a status field:

pub struct Once<T> {
    data: UnsafeCell<MaybeUninit<T>>,
    status: ?
}

But of what type will the field be? We could create an enum with variants that represent a status of uninitialized, initialized, running, or panicked. And the Once could look like this:

pub struct Once<T> {
    data: UnsafeCell<MaybeUninit<T>>,
    status: OnceStatus
}

#[derive(Clone, Copy, Debug, PartialEq)]
enum OnceStatus {
    Uninit,
    Init,
    Running,
    Panicked
}

But there is a problem here. There is a form of concurrency in our code: interrupts. Whenever an interrupt occurs, whatever code is currently running stops and control is immediately transferred to an interrupt service routine.

If some code is performing an operation that involves changing the status of a Once when an interrupt occurs, and that interrupt service routine results in accessing that same Once, then a race could occur.

The status needs to be changed atomically. By atomically, I mean that it's not possible for a change to be half-completed: it's either fully changed or not changed at all.

To make our status atomic, we can make use of the atomic types in core::sync::atomic. These types are the foundations of concurrent structures. In that module, we have AtomicBool, AtomicI8, AtomicI16, and others: generally, the integer types with "Atomic" put in front of their names.

These types can be safely shared between threads because modifications made to them are completely atomic. A change to a value of any of these types is either fully made or not made at all.
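
As a quick illustration (separate from our Once code), here's what using one of these types looks like:

use core::sync::atomic::{AtomicU8, Ordering};

fn main() {
    // No `mut` needed: atomics can be modified through a shared reference
    let counter = AtomicU8::new(0);
    // fetch_add reads, adds and writes back as one indivisible operation,
    // so no interrupt can observe a half-finished update
    counter.fetch_add(1, Ordering::SeqCst);
    assert_eq!(counter.load(Ordering::SeqCst), 1);
}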

For our status field, we can create a wrapper around an atomic type:

use core::sync::atomic::AtomicU8;
pub struct Once<T> {
    data: UnsafeCell<MaybeUninit<T>>,
    // Provides atomic access to the status of the data
    status: AtomicStatus
}

// Tells the state of a Once
#[derive(Clone, Copy, Debug, PartialEq)]
#[repr(u8)]
enum OnceStatus {
    // The data in the Once is uninitialized
    Uninit = 0,
    // The data in the Once has been initialized
    Init = 1,
    // The initializer function is running
    Running = 2,
    // The initializer panicked
    Panicked = 3
}

#[repr(transparent)]
struct AtomicStatus(AtomicU8);

We still keep the OnceStatus to compare with the value in the AtomicStatus.

AtomicStatus is a wrapper around an AtomicU8 to present an interface for dealing with a OnceStatus as though it was an AtomicU8.

We also create a new function for OnceStatus to reinterpret u8s as OnceStatuses:

impl OnceStatus {
    // Interpret a `u8` as a `OnceStatus`
    fn new(status: u8) -> Self {
        unsafe { core::mem::transmute(status) }
    }
}

The core::mem::transmute function tells the compiler to interpret a value of one type as a value of another type.
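
As a quick illustration of that reinterpretation (assuming the OnceStatus definition above, and not part of the Once code itself): transmuting a u8 outside 0..=3 would be undefined behavior, which is why OnceStatus::new stays private and is only fed values read back from the AtomicStatus.

fn demo() {
    // 1 is the discriminant of OnceStatus::Init, so this is a valid reinterpretation
    let status: OnceStatus = unsafe { core::mem::transmute(1u8) };
    assert_eq!(status, OnceStatus::Init);
}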

To complete the wrapping, we need to give AtomicStatus the functions typically associated with atomics that we'll be using.

Firstly, a new function:

impl AtomicStatus {
    // Creates a new AtomicStatus from a OnceStatus
    fn new(status: OnceStatus) -> Self {
        Self(AtomicU8::new(status as u8))
    }
}

We can cast a value of type OnceStatus into a u8 because OnceStatus is annotated with #[repr(u8)] which makes it equivalent to a u8. You can see the values of OnceStatus as just named u8 values.

Another question that may come up is: why do we wrap an AtomicU8? Why not an AtomicU32 or an AtomicI16 or something else? We could, but since only the values 0..=3 are used to represent a status, an AtomicU8 is the smallest type that still suits our purposes.

If you look through the docs, you'll see that the AtomicU8 and other atomics like it have a lot of associated functions.

Out of all those functions, the only ones we need to wrap for our use are load, store and compare_exchange.

load is used to read the u8 inside. store is used to write a value inside. And compare_exchange is used to do a conditional write (write only if this condition is true).

All these functions just mentioned take an extra argument which at first doesn't seem like much: ordering of type Ordering.

The Ordering enum specifies how the computer should handle memory ordering.

Understanding this Ordering enum and the role it plays is important. You should check the references concerning memory ordering to really understand this topic.
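
To give a rough picture of what Acquire and Release do (this is just a sketch of the pattern, not part of our Once code): a Release store on a flag makes everything written before it visible to any code that later does an Acquire load and observes the flag as set. This is the same pairing our Once will rely on when publishing its data.

use core::sync::atomic::{AtomicBool, Ordering};

static READY: AtomicBool = AtomicBool::new(false);
static mut DATA: i32 = 0;

// Writer: the Release store guarantees that the write to DATA
// becomes visible before READY is seen as true.
fn publish() {
    unsafe { DATA = 42 };
    READY.store(true, Ordering::Release);
}

// Reader: an Acquire load that observes true guarantees that the
// earlier write to DATA is also visible here.
fn consume() -> Option<i32> {
    if READY.load(Ordering::Acquire) {
        Some(unsafe { DATA })
    } else {
        None
    }
}

fn main() {
    publish();
    assert_eq!(consume(), Some(42));
}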

Creating a load function for AtomicStatus:

use core::sync::atomic::Ordering;
impl AtomicStatus {
    // Atomically read the status
    fn load(&self, ordering: Ordering) -> OnceStatus {
        OnceStatus::new(self.0.load(ordering))
    }
}

A load operation on a regular AtomicU8 returns a u8. With our AtomicStatus::load, we can call load and get a OnceStatus back, as if OnceStatus were a primitive type itself.

Creating a store function:

impl AtomicStatus {
    // Atomically writes the status
    fn store(&self, val: OnceStatus, ordering: Ordering) {
        self.0.store(val as u8, ordering);
    }
}

Now, we can call AtomicStatus::store to atomically store a value of type OnceStatus.

The compare_exchange of AtomicU8 looks like this:

pub fn compare_exchange(
    &self,
    current: u8,
    new: u8,
    success: Ordering,
    failure: Ordering
) -> Result<u8, u8>

This function compares the argument current with the value in the AtomicU8 at the time of the call. If the two are the same, then new is stored in the AtomicU8. If they're not the same, the value in the AtomicU8 remains unchanged.

The success argument tells the memory ordering that should be used if the comparison succeeds. The failure argument tells the memory ordering that should be used if the comparison fails.

If the comparison succeeds and the new value gets stored, an Ok containing the previous value (which is guaranteed to equal current) is returned. Otherwise, an Err containing the value currently inside the AtomicU8 is returned.
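
Here's a quick sketch (separate from our Once code) of those return values in action:

use core::sync::atomic::{AtomicU8, Ordering};

fn main() {
    let status = AtomicU8::new(0);
    // The current value is 0, so 2 is stored and the previous value (0) is returned
    assert_eq!(status.compare_exchange(0, 2, Ordering::Acquire, Ordering::Acquire), Ok(0));
    // The current value is now 2, not 0, so nothing is stored and Err(2) is returned
    assert_eq!(status.compare_exchange(0, 1, Ordering::Acquire, Ordering::Acquire), Err(2));
}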

Creating a new compare_exchange that can handle our OnceStatus:

impl AtomicStatus {
    fn compare_exchange(
        &self,
        old: OnceStatus,
        new: OnceStatus,
        success: Ordering,
        failure: Ordering
    ) -> Result<OnceStatus, OnceStatus> {
        match self.0.compare_exchange(old as u8, new as u8, success, failure) {
            Ok(status) => Ok(OnceStatus::new(status)),
            Err(status) => Err(OnceStatus::new(status))
        }
    }
}

And now, we have a primitive for atomically handling the status of the Once.

Now, for the interface of the Once itself. We need a function for creating a new Once:

impl<T> Once<T> {
    // Creates a new Once
    pub fn new() -> Self {
        Self {
            status: AtomicStatus::new(OnceStatus::Uninit),
            data: UnsafeCell::new(MaybeUninit::uninit())
        }
    }
}

MaybeUninit::uninit creates a new MaybeUninit<T> whose contents are uninitialized.

For getting the value in a Once:

impl<T> Once<T> {
    // Retrieves the data if it has been initialized
    fn get(&self) -> Option<&T> {
        match self.status.load(Ordering::Acquire) {
            OnceStatus::Init => {
                unsafe {
                    let data_ptr = self.data.get();
                    let maybeinit_ptr = (*data_ptr).as_ptr();
                    Some(&*maybeinit_ptr)
                }
            }
            _ => None
        }
    }
}

The UnsafeCell::get function returns a pointer to the underlying data, which in this case is a MaybeUninit value. MaybeUninit::as_ptr returns a pointer to the value, which may or may not be initialized. If self.status.load returns OnceStatus::Init, we can be sure that the data has already been initialized, so we return a reference to that data wrapped in a Some.

If the data is not initialized, a None is returned.

The get function is not public because that's not what we're going to use to access the value in the Once.

For initializing data with a function, we create a call_once function:

impl<T> Once<T> {
    // Performs an initialization routine once and only once
    pub fn call_once<F: FnOnce() -> T>(&self, f: F) {

    }
}

The generic F and trait bound F: FnOnce() -> T tell the compiler that f is a value of any type that can be called once and returns a value of type T.
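
As a small illustration of this kind of bound (the names here are made up for the example), any closure that produces a T can be passed:

// Accepts any callable that can be called once and returns a T
fn run_initializer<T, F: FnOnce() -> T>(f: F) -> T {
    f()
}

fn main() {
    let screen_size = (80, 25);
    // The closure captures screen_size and returns it, satisfying FnOnce() -> (i32, i32)
    let value = run_initializer(move || screen_size);
    assert_eq!(value, (80, 25));
}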

Before calling the function, we first need to verify that the status of the Once is still uninitialized. If it's uninitialized, we initialize it.

impl<T> Once<T> {
    pub fn call_once<F: FnOnce() -> T>(&self, f: F) {
        let status = self.status.load(Ordering::Acquire);
        if status == OnceStatus::Uninit {
            match self.status.compare_exchange(
                OnceStatus::Uninit,
                OnceStatus::Running,
                Ordering::Acquire,
                Ordering::Acquire
            ) {
                Ok(_) => {
                    let val = f();
                    unsafe {
                        let data_ptr = self.data.get();
                        let mut_data_ptr = (*data_ptr).as_mut_ptr();
                        mut_data_ptr.write(val);
                    }
                    self.status.store(OnceStatus::Init, Ordering::Release);
                }
                Err(_) => () // Don't do anything if the Once is in any other state
            }
        }
    }
}

The compare_exchange first checks if the status is OnceStatus::Uninit. If it is, it's changed to OnceStatus::Running, an Ok is returned and the initialization takes place. If it's not, an Err is returned and nothing happens. This means that call_once has already been called and the initializer it was called with is either still running, has finished initializing, or panicked.

Speaking of panicking, we did not put any mechanism in place to detect panics and act accordingly. To do this, we need to think about the behavior of panicking.

Consider the following:

struct X { a: i32 }

impl Drop for X {
    fn drop(&mut self) {
        println!("Dropping X");
    }
}

fn main() {
    let val = X { a: 1 };
}

If you run the above code, "Dropping X" will be printed.

Now, if you modify the code to look like this:

fn main() {
    let val = X { a: 1 };
    std::mem::forget(val);
}

Nothing happens. Nothing gets printed. This is because the mem::forget function takes ownership of a value and skips running its Drop implementation.

But if a panic occurs before reaching that mem::forget invocation:

fn main() {
    let val = X { a: 1 };
    panic!("Panicked!");
    std::mem::forget(val);
}

Then both "Panicked!" and "Dropping X" will get printed. This is because when a panic occurs, Rust unwinds the stack, cleaning up the data of each function on the way up and dropping everything that can be dropped.

So, if a panic occurs before reaching mem::forget(val), val gets dropped. If a panic doesn't occur before that, it doesn't get dropped (which is the whole point of mem::forget).

We can make use of this knowledge to create a structure that sets the Once status to OnceStatus::Panicked on drop. And if the initialization function executes successfully without dropping, we mem::forget that structure so the drop and subsequent change to a panicked state won't occur.

// A structure that changes the Once status to OnceStatus::Panicked
// on drop
struct Finish<'a> {
    // A reference to the status of the Once
    status: &'a AtomicStatus
}

impl<'a> Drop for Finish<'a> {
    fn drop(&mut self) {
        self.status.store(OnceStatus::Panicked, Ordering::SeqCst);
    }
}

And in the call_once function:

impl<T> Once<T> {
    pub fn call_once<F: FnOnce() -> T>(&self, f: F) {
        let status = self.status.load(Ordering::Acquire);
        if status == OnceStatus::Uninit {
            match self.status.compare_exchange(
                OnceStatus::Uninit,
                OnceStatus::Running,
                Ordering::Acquire,
                Ordering::Acquire
            ) {
                Ok(_) => {
                    // Creating a Finish value with a reference to the Once's
                    // status
                    let finish = Finish { status: &self.status }; // NEW
                    let val = f();
                    unsafe {
                        let data_ptr = self.data.get();
                        let mut_data_ptr = (*data_ptr).as_mut_ptr();
                        mut_data_ptr.write(val);
                    }
                    // If a panic occurs before this `mem::forget` executes, that means
                    // the initializer function panicked.
                    // finish will get dropped and the Once status will be set to OnceStatus::Panicked
                    core::mem::forget(finish); // NEW
                    self.status.store(OnceStatus::Init, Ordering::Release);
                }
                Err(_) => () // Don't do anything if the Once is in any other state
            }
        }
    }
}

Now, we have a mechanism to create and initialize Once values but we have none to read them.

Going back to the What Needs To Be Done section, you'll see that our aim is to be able to use the static as though it had been initialized at compile time, without explicitly calling any special functions.

To do that, we're going to treat our Once as a smart pointer. Remember that smart pointers are structures that point to a value and also hold additional metadata to help handle that value in a certain way.

For example, Box is a smart pointer. One peculiarity about the Box is that you can do this:

let b = Box::new(5);
assert_eq!(*b, 5);

The above assertion will not fail. This is because Box implements traits (like Deref) that allow it to behave as if it were the value it holds.
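
As a minimal sketch of how that works (the Wrapper type here is just for illustration), implementing the Deref trait is what makes a dereference behave like the inner value:

use core::ops::Deref;

// A wrapper that hands out a reference to its inner value
struct Wrapper(i32);

impl Deref for Wrapper {
    type Target = i32;

    fn deref(&self) -> &i32 {
        &self.0
    }
}

fn main() {
    let w = Wrapper(5);
    // *w goes through Deref::deref, so the wrapper behaves like the i32 it holds
    assert_eq!(*w, 5);
}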

That's what we're going to do for our Once here.

We'll just implement the Deref trait because this is the main one required for this kind of behavior:

impl<T> core::ops::Deref for Once<T> {
    type Target = T;

    fn deref(&self) -> &Self::Target {
        loop {
            match self.status.load(Ordering::Acquire) {
                OnceStatus::Init => return self.get().unwrap(),
                OnceStatus::Running => core::hint::spin_loop(),
                OnceStatus::Uninit => panic!("Once has not been initialized"),
                OnceStatus::Panicked => panic!("Once initializer panicked")
            };
        }
    }
}

Our Deref implementation simply does a match on the status of the Once and acts according to that.

Our implementation panics either when the Once is uninitialized or when the initializer panicked. This is safe because panicking is not undefined behavior: outside code will never be able to handle the Once in a way that leads to using uninitialized memory as a value.

If the initializer is still running, the loop simply continues. The core::hint::spin_loop is there to tell the processor executing this code that it's in a busy waiting loop.

Simply looping without core::hint::spin_loop would waste a lot of processor power while waiting for the initializer to finish, especially if the initializer takes a long time to complete.

Adding core::hint::spin_loop lets the processor drop into a more power-efficient busy-wait.

And that completes our implementation of Once. If you check other implementations of Once, you'll notice that this one looks a bit different. That's because what we aimed to achieve here is a little different from the typical aims.

Our implementation is safe because its use doesn't result in any undefined behavior or any violation of ownership rules or invariants that the Rust compiler expects to be true.

When the value is being mutated, nothing else is able to access it, because of the atomic status checks that accompany reading and writing the data. After the value is initialized, nothing is able to mutate it. If the value doesn't get initialized for some reason, a panic occurs, and panicking is not undefined behavior.

Testing

Just a quick testing round to ensure that the Once works as expected.

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_init_once() {
        static VAL: Once<i32> = Once::new();
        VAL.call_once(|| 5);
        assert_eq!(*VAL, 5);
        VAL.call_once(|| 10000000);
        assert_eq!(*VAL, 5);
    }

    #[test]
    #[should_panic]
    fn test_panic() {
        static VAL: Once<i32> = Once::new();
        assert_eq!(*VAL, 5);
    }
}

Remember that to run the tests, you need to declare the sync module in main.rs:

mod machine;
mod alloc;
mod display;
mod event_hook;
mod game;
mod sync; // NEW

If you run the tests now, you'll get an error that UnsafeCell<MaybeUninit<i32>> cannot be shared between threads safely.

The reason we're having this error is that for a value to be used as a static, its type has to implement the Sync trait. The Sync trait is a trait implemented by any type that can be shared between threads safely. Similarly, Send is a trait implemented by any type that can be moved between threads safely.

To resolve this problem:

unsafe impl<T: Sync> Sync for Once<T> {}

This tells the compiler that the Once<T> can be safely shared between threads if the underlying data can be safely shared between threads, too.

If you run the tests again, you'll get another error: cannot call non-const fn once::Once::<i32>::new in statics.

This means that only constants and calls to functions that can be evaluated at compile time (const functions) can be used to initialize statics.
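
As a quick illustration (the names here are hypothetical), a const function can be evaluated at compile time, so a call to it is allowed as a static's initializer:

// A const fn can be evaluated at compile time
const fn initial_value() -> i32 {
    5
}

// ...so a call to it is allowed as a static's initializer
static VALUE: i32 = initial_value();

fn main() {
    assert_eq!(VALUE, 5);
}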

To resolve this, we make our Once::new function a const function:

impl<T> Once<T> {
    // DELETED: pub fn new() -> Self {
    pub const fn new() -> Self { // NEW
        Self {
            status: AtomicStatus::new(OnceStatus::Uninit),
            data: UnsafeCell::new(MaybeUninit::uninit())
        }
    }
}

If you run the tests again, you'll get another error complaining that AtomicStatus::new is a non-const function. And that makes sense: for Once::new to be evaluated at compile time, everything it calls must also be evaluable at compile time.

To resolve this, we make our AtomicStatus::new a const function:

impl AtomicStatus {
    // DELETED: fn new(status: OnceStatus) -> Self {
    const fn new(status: OnceStatus) -> Self { // NEW
        Self(AtomicU8::new(status as u8))
    }
}

The tests should run now without errors.

Take Away

  • When panics occur, the stack is unwound and values are dropped.

For the full code up to this point, go to the repo

In The Next Post

We'll be implementing a Mutex.

References