Demilade Sonuga's blog

All posts

Refactoring IX

2023-06-03 · 89 min read

Armed with a Once and a Mutex, we can now refactor away all those mutable statics.

Let's start with the IDT static. It's initialized once at runtime and never modified after that, so we'll use a Once here.

In main.rs

use sync::{once::Once, mutex::Mutex};
// DELETED: static mut IDT: Option<IDT> = None;
static IDT: Once<IDT> = Once::new(); // NEW
fn setup_idt(sel: SegmentSelector) {
/* DELETED:
let idt: &mut IDT;
unsafe {
IDT = Some(IDT::new());
idt = IDT.as_mut().unwrap();
}
*/

IDT.call_once(|| { // NEW
let mut idt = IDT::new(); // NEW
idt.breakpoint = Entry::exception(ServiceRoutine(breakpoint_handler), sel);
idt.page_fault = Entry::exception(ServiceRoutineWithErrCode(page_fault_handler), sel);
idt.double_fault = Entry::exception(ServiceRoutineWithNoReturn(double_fault_handler), sel);
idt.interrupts[0] = Entry::interrupt(ServiceRoutine(timer_handler), sel);
idt.interrupts[1] = Entry::interrupt(ServiceRoutine(keyboard_handler), sel);
/* DELETED:
let pointer = idt.as_pointer();
interrupts::disable_interrupts();
interrupts::load_idt(&pointer);
interrupts::enable_interrupts();*/

idt // NEW
}); // NEW
// NEW:
let pointer = IDT.as_pointer();
interrupts::disable_interrupts();
interrupts::load_idt(&pointer);
interrupts::enable_interrupts();
}

Most of the IDT initialization code is moved into a separate closure passed into call_once. In this closure, a new IDT is created, initialized and returned.

After initialization, the load_idt function is then called to tell the processor where the IDT is located.
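Our Once is hand-rolled, but its call_once is assumed to behave like the standard library's OnceLock: the closure runs only on the first call, and every later call just sees the stored value. A minimal standalone sketch (the static and fallback values are made up for illustration):

```rust
use std::sync::OnceLock;

static VALUE: OnceLock<u32> = OnceLock::new();

// Returns the stored value, running the initializer on the first call only.
fn get_value(fallback: u32) -> u32 {
    *VALUE.get_or_init(|| fallback)
}

fn main() {
    assert_eq!(get_value(42), 42); // initializer runs here and stores 42
    assert_eq!(get_value(99), 42); // closure ignored; stored value returned
}
```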

Next, let's refactor TSS and GDT.

If you take a look at setup_tss, you'll see that initializing GDT depends on the TSS's initialization, so we'll initialize TSS first.

// DELETED: static mut TSS: Option<TSS> = None;
static TSS: Once<TSS> = Once::new(); // NEW
fn setup_tss() {
/* DELETED
let tss: &mut TSS;
unsafe {
TSS = Some(TSS::new());
tss = TSS.as_mut().unwrap();
}
let gdt = unsafe { GDT.as_mut().unwrap() };
let tss_selector = gdt.add_descriptor(Descriptor::tss_segment(tss)).unwrap();
*/

TSS.call_once(|| TSS::new());
// DELETED: load_tss(tss_selector);
}

And then, we do all the GDT stuff in the setup_gdt which we'll call after calling setup_tss.

If you take a good look at this new setup_tss function, you'll notice that we don't actually need TSS to be a Once. Take a look at TSS::new:

impl TSS {
pub fn new() -> Self {
Self {
privilege_stack_table: [core::ptr::null_mut(); 3],
interrupt_stack_table: [core::ptr::null_mut(); 7],
io_map_base_addr: core::mem::size_of::<TSS>() as u16,
reserved1: 0,
reserved2: 0,
reserved3: 0,
reserved4: 0
}
}
}

All it does is initialize fields to fixed values that don't depend on any runtime information.

Using this information, we can just make this TSS::new function a const function, so it can be used for compile-time initialization, and then:

In machine/tss.rs

impl TSS {
// DELETED: pub fn new() -> Self {
pub const fn new() -> Self { // NEW
// ...Others
}
}

And in main.rs

// DELETED: static TSS: Once<TSS> = Once::new();
static TSS: TSS = TSS::new(); // NEW
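To see why this works, here's the pattern in miniature, with a pared-down stand-in for the real TSS type (the single field and its value are illustrative, not the actual layout):

```rust
// A const fn can run at compile time, so it can initialize a static directly.
struct Tss {
    io_map_base: u16,
}

impl Tss {
    const fn new() -> Self {
        Self { io_map_base: core::mem::size_of::<Tss>() as u16 }
    }
}

// No Option, no Once: the value exists before main even runs.
static TSS: Tss = Tss::new();

fn main() {
    // The struct holds one u16, so size_of reports 2 bytes.
    assert_eq!(TSS.io_map_base, 2);
}
```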

There is no more need for a setup_tss:

/* DELETED:
fn setup_tss() {
TSS.call_once(|| TSS::new());
}
*/

But there is one more error in our code. If you try compiling now, you'll get an error saying that *mut u8 cannot be shared safely between threads. The reason: the TSS type contains arrays of *mut u8, and raw pointers are not Sync.

To tell the compiler that TSS can be safely shared between threads, we need to implement the trait Sync for TSS. It will be safe to implement this for TSS because TSS is never modified.

In machine/tss.rs

unsafe impl Sync for TSS {}

And that will eliminate the error.
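The same compiler reasoning can be reproduced in a tiny standalone program (Table here is a made-up stand-in for TSS): a type holding raw pointers isn't Sync by default, and the unsafe impl is us vouching for thread-safety, not the compiler proving it:

```rust
// A type containing raw pointers is not Sync by default.
struct Table {
    slots: [*mut u8; 3],
}

// We promise the compiler this is safe to share between threads because
// Table is never mutated after creation. The promise is on us, not the compiler.
unsafe impl Sync for Table {}

// Without the Sync impl above, this static would not compile.
static TABLE: Table = Table { slots: [std::ptr::null_mut(); 3] };

fn main() {
    // Sharing TABLE with another thread is now allowed.
    let handle = std::thread::spawn(|| TABLE.slots.len());
    assert_eq!(handle.join().unwrap(), 3);
}
```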

Now, back to main.rs. Let's refactor GDT to use a Once.

// DELETED: static mut GDT: Option<GDT> = None;
static GDT: Once<GDT> = Once::new(); // NEW
// DELETED: fn setup_gdt() -> SegmentSelector {
fn setup_gdt() { // NEW
/* DELETED:
let gdt: &mut GDT;
unsafe {
GDT = Some(GDT::new());
gdt = GDT.as_mut().unwrap();
}
*/

let (mut cs, mut ds, mut tss_sel) = (SegmentSelector(0), SegmentSelector(0), SegmentSelector(0));
GDT.call_once(|| { // NEW
/* DELETED:
let cs = gdt.add_descriptor(Descriptor::code_segment()).unwrap();
let ds = gdt.add_descriptor(Descriptor::data_segment()).unwrap();
interrupts::disable_interrupts();
let gdt_pointer = gdt.as_pointer();
gdt.load(&gdt_pointer);
interrupts::enable_interrupts();
gdt::CS.set(cs);
gdt::DS.set(ds);
gdt::SS.set(ds);
cs
*/

// NEW:
let mut gdt = GDT::new();
cs = gdt.add_descriptor(Descriptor::code_segment()).unwrap();
ds = gdt.add_descriptor(Descriptor::data_segment()).unwrap();
tss_sel = gdt.add_descriptor(Descriptor::tss_segment(&TSS)).unwrap();
gdt
}); // NEW
interrupts::disable_interrupts();
let gdt_pointer = GDT.as_pointer();
GDT.load(&gdt_pointer);
gdt::CS.set(cs);
gdt::DS.set(ds);
gdt::SS.set(ds);
load_tss(tss_sel);
interrupts::enable_interrupts();
}

The closure passed to call_once does the actual creation of the GDT structure. It's after the structure is initialized that the GDT then gets loaded to tell the processor where it is.

We can't call any of the segment registers' set functions until after the GDT has been loaded, so we load it first.

The previous setup_gdt function returned the code segment selector, but now it doesn't. The reason we made it return one is so that it can be used to initialize entries in the IDT. But instead of doing that, we can just read the CS register, since we've already written it there in setup_gdt.

#[no_mangle]
extern "efiapi" fn efi_main(
handle: *const core::ffi::c_void,
sys_table: *mut SystemTable,
) -> usize {
// ...Others

boot_services.exit_boot_services(handle).unwrap();

// DELETED: let cs = setup_gdt();
setup_gdt(); // NEW

// DELETED: setup_idt(cs);
setup_idt(); // NEW

// ...Others
}
// DELETED: fn setup_idt(sel: SegmentSelector) {
fn setup_idt() { // NEW
IDT.call_once(|| {
let mut idt = IDT::new();
let sel = gdt::CS.read(); // NEW
idt.breakpoint = Entry::exception(ServiceRoutine(breakpoint_handler), sel);
// ...Others
});
// ...Others
}

To make this work, we have to actually implement a CS::read function:

In machine/gdt.rs

impl CS {
pub fn read(&self) -> SegmentSelector {
let segment: u16;
unsafe {
asm!("mov {}, cs", out(reg) segment);
}
SegmentSelector(segment)
}
}

The above code reads the value in the cs register, stores it in the segment variable, then returns it wrapped in a SegmentSelector. This way, we can read the cs register's value without having to pass it around.

It might make more sense to add this read function to the SegmentRegister trait and implement it for all the segment registers, but this is sufficient for our purposes since the cs register is the only one we read.

If you try compiling the code now, you'll get an error that complains of an invalid operand for the instruction used to load the TSS in the load_tss function in machine/tss.rs.

To resolve this:

In machine/tss.rs

pub fn load_tss(sel: SegmentSelector) {
unsafe {
// DELETED: asm!("ltr {}", in(reg) sel.0);
asm!("ltr {0:x}", in(reg) sel.0); // NEW
}
}

The :x modifier in {0:x} tells the compiler to emit the 16-bit name of whatever register it picks for that operand. This is required because operands default to their 64-bit register names, while the ltr instruction takes a 16-bit operand.

As for the PICS static, even after initialization it's still mutated at the end of every service routine, so rather than a Once, we can use a Mutex here.

// DELETED: static mut PICS: Option<PICs> = None;
static PICS: Mutex<PICs> = Mutex::new(PICs::new()); // NEW

And to do the initialization:

// Initializes the PICs
fn setup_pics() {
/* DELETED:
let pics: &mut PICs;
unsafe {
PICS = Some(PICs::new());
pics = PICS.as_mut().unwrap();
}
pics.init();
*/

PICS.lock().init();
}

We no longer need a get_pics function:

/* DELETED:
pub fn get_pics() -> Option<&'static mut PICs> {
unsafe { PICS.as_mut() }
}
*/

And we can use it like so:

extern "x86-interrupt" fn timer_handler(frame: interrupts::InterruptStackFrame) {
let screen = get_screen().unwrap();
event_hook::send_event(EventInfo::Timer);
// DELETED: get_pics().unwrap().end_of_interrupt(0);
PICS.lock().end_of_interrupt(0); // NEW
}
extern "x86-interrupt" fn keyboard_handler(frame: interrupts::InterruptStackFrame) {
let port = port::Port::new(0x60);
let scancode = port.read();
if let Ok(Some(event)) = get_keyboard().unwrap().interpret_byte(scancode) {
event_hook::send_event(EventInfo::Keyboard(event));
}
// DELETED: get_pics().unwrap().end_of_interrupt(1);
PICS.lock().end_of_interrupt(1); // NEW
}

And for this to work, we have to make the PICs::new function a const function:

In machine/pics.rs

impl PICs {
// DELETED: pub fn new() -> PICs {
pub const fn new() -> PICs { // NEW
let first = PIC {
offset: 32,
command: Port::new(0x20),
data: Port::new(0x21)
};
let second = PIC {
offset: 32 + 8,
command: Port::new(0xa0),
data: Port::new(0xa1)
};
PICs {
first,
second
}
}
}

Making PICs::new const introduces another round of errors. For PICs::new to be evaluable at compile time, all of PICs's fields must be computable at compile time, too. And for that to happen, Port::new has to be const as well.

In machine/port.rs

impl Port {
// DELETED: pub fn new(port_no: u16) -> Self {
pub const fn new(port_no: u16) -> Self { // NEW
Self(port_no)
}
}

And that compiles now.
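The propagation rule, that everything called from a const context must itself be const, can be seen in miniature below (a trimmed-down echo of the Port/PICs relationship, not the real definitions):

```rust
struct Port(u16);

impl Port {
    // Must be const: it's called from another const fn below.
    const fn new(port_no: u16) -> Self {
        Self(port_no)
    }
}

struct Pic {
    command: Port,
    data: Port,
}

impl Pic {
    // This const fn only compiles because Port::new is const too.
    // Drop the const on Port::new and this stops compiling.
    const fn new() -> Self {
        Self {
            command: Port::new(0x20),
            data: Port::new(0x21),
        }
    }
}

// Compile-time initialization, no Option or Once needed.
static PIC: Pic = Pic::new();

fn main() {
    assert_eq!(PIC.command.0, 0x20);
    assert_eq!(PIC.data.0, 0x21);
}
```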

As for the KEYBOARD static, the same logic applies: KEYBOARD may be mutated while processing a scancode, even after it's initialized. So, we wrap it in a Mutex.

// DELETED: static mut KEYBOARD: Option<Keyboard> = None;
static KEYBOARD: Mutex<Keyboard> = Mutex::new(Keyboard::new()); // NEW

We no longer need any get_keyboard because we can safely use the static directly:

/* DELETED:
fn get_keyboard() -> Option<&'static mut Keyboard> {
unsafe { KEYBOARD.as_mut() }
}
*/

Or setup_keyboard because it's already fully initialized in the static definition:

/* DELETED:
fn setup_keyboard() {
unsafe {
KEYBOARD = Some(Keyboard::new());
}
}
*/
#[no_mangle]
extern "efiapi" fn efi_main(
handle: *const core::ffi::c_void,
sys_table: *mut SystemTable,
) -> usize {
// ...Others

setup_gdt();
setup_idt();
// DELETED: setup_keyboard();
setup_pics();

// ...Others
}

And we make use of it like this:

extern "x86-interrupt" fn keyboard_handler(frame: interrupts::InterruptStackFrame) {
let port = port::Port::new(0x60);
let scancode = port.read();
// DELETED: if let Ok(Some(event)) = get_keyboard().unwrap().interpret_byte(scancode) {
if let Ok(Some(event)) = KEYBOARD.lock().interpret_byte(scancode) { // NEW
// Notifying the event hooker that the keyboard event has occured
event_hook::send_event(EventInfo::Keyboard(event));
}
// Signalling that the keyboard interrupt has been handled
PICS.lock().end_of_interrupt(1);
}

And to make this work:

In machine/keyboard.rs

impl Keyboard {
// DELETED: pub fn new() -> Self {
pub const fn new() -> Self { // NEW
Self { code_is_extended: false }
}
}

If you take a look at our statics in main.rs

static mut SCREEN: Option<&mut Screen> = None;

static GDT: Once<GDT> = Once::new();
static TSS: TSS = TSS::new();
static IDT: Once<IDT> = Once::new();
static PICS: Mutex<PICs> = Mutex::new(PICs::new());
static KEYBOARD: Mutex<Keyboard> = Mutex::new(Keyboard::new());

Only one is a mutable static. That's progress. Before we get on with refactoring away that last mutable static here, let's divert our attention to the mutable statics in other modules.

The EVENT_HOOKER static in event_hook.rs is a prime candidate. After it's initialized, it still gets mutated again and again, so a Mutex will be suitable here.

use crate::sync::{mutex::Mutex};
// DELETED: static mut EVENT_HOOKER: Option<EventHooker> = None;
static EVENT_HOOKER: Mutex<Option<EventHooker>> = Mutex::new(None);

The EventHooker still remains wrapped in an Option because we need to initialize it at runtime.

For the EventHooker to be considered safe for sharing between threads:

unsafe impl Sync for EventHooker {}

The init function:

pub fn init() {
/* DELETED:
if unsafe { EVENT_HOOKER.is_none() } {
unsafe { EVENT_HOOKER = Some(EventHooker::new(get_allocator())); }
}
*/

let mut lock = EVENT_HOOKER.lock();
if lock.is_none() {
*lock = Some(EventHooker::new(get_allocator()));
}
}

The send_event function:

pub fn send_event(event_info: EventInfo) {
/* DELETED:
let event_hooker = get_event_hooker().unwrap();
event_hooker.send_event(event_info);
*/

EVENT_HOOKER.lock().as_mut().unwrap().send_event(event_info); // NEW
}

hook_event:

pub fn hook_event(event_kind: EventKind, func: BoxedFn) -> HandlerId {
/* DELETED:
let mut event_hooker = get_event_hooker().unwrap();
event_hooker.hook_event(event_kind, func)
*/

EVENT_HOOKER.lock().as_mut().unwrap().hook_event(event_kind, func) // NEW
}

unhook_event:

pub fn unhook_event(event_kind: EventKind, id: HandlerId) -> Result<(), ()> {
/* DELETED:
let mut event_hooker = get_event_hooker().unwrap();
event_hooker.unhook_event(event_kind, id)
*/

EVENT_HOOKER.lock().as_mut().unwrap().unhook_event(event_kind, id)
}

Our next stop is the alloc module. vec, boxed and boxed_fn all have to deal with trait objects behind raw pointers which are unsafe to access because of how our Allocator interface is exposed.

The issue we really have to think about is that multiple Vecs and Boxes can be alive at the same time, all using the same allocator. This is why we can't use a mutable reference as the allocator field in these structures: if we did, we'd have multiple mutable references alive at the same time, which is invalid.

The LinkedListAllocator, like any other allocator, is not just initialized once; it's mutated on every allocation and deallocation afterward. That makes it a good candidate for a Mutex, which can act as a structure for interior mutability.

By wrapping the allocator in a Mutex, we can use an immutable reference to a trait object instead of a mutable pointer, removing all those unsafety concerns that come with it.
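The key property is that a Mutex lets many holders of an immutable reference mutate the value inside, one at a time. A standalone sketch using std's Mutex in place of our no_std one (Counter and bump are invented for illustration):

```rust
use std::sync::Mutex;

struct Counter {
    count: u32,
}

// Takes an immutable reference, yet can still mutate the inner
// value: lock() grants temporary exclusive access.
fn bump(counter: &Mutex<Counter>) {
    counter.lock().unwrap().count += 1;
}

fn main() {
    let counter = Mutex::new(Counter { count: 0 });
    // Multiple shared references alive at once: perfectly legal.
    let (a, b) = (&counter, &counter);
    bump(a);
    bump(b);
    assert_eq!(counter.lock().unwrap().count, 2);
}
```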

Firstly,

In allocator.rs

use crate::sync::{mutex::Mutex};
unsafe impl Sync for LinkedListAllocator {}
// DELETED: static mut ALLOCATOR: Option<LinkedListAllocator> = None;
static ALLOCATOR: Mutex<LinkedListAllocator> = Mutex::new(LinkedListAllocator { head: {
// Remember that the LinkedListAllocator's head is a dummy node
static mut head: ListNode = ListNode {
size: 0,
next: None
};
unsafe { &head as *const _ as *mut _ }
} }); // NEW

pub fn init(mem: MemChunk) {
/* DELETED:
let mut allocator = LinkedListAllocator { head: {
static mut head: ListNode = ListNode {
size: 0,
next: None
};
unsafe { &mut head as *mut _ }
} };
*/

// DELETED: unsafe { allocator.add_free_region(mem); }
unsafe { ALLOCATOR.lock().add_free_region(mem); }
}

Next, we refactor the Allocator trait to work on immutable instead of mutable references. The reason: we need multiple references to the allocator alive at once, and we can't have multiple mutable references. With the way the Allocator trait is currently specified, that forces us to use raw pointers.

pub unsafe trait Allocator {
// DELETED: unsafe fn alloc(&mut self, size: usize, alignment: usize) -> Option<*mut u8>;
unsafe fn alloc(&self, size: usize, alignment: usize) -> Option<*mut u8>; // NEW
// DELETED: unsafe fn dealloc(&mut self, ptr: *mut u8, size_to_dealloc: usize);
unsafe fn dealloc(&self, ptr: *mut u8, size_to_dealloc: usize); // NEW
}

Now, instead of implementing Allocator for LinkedListAllocator, we implement it for Mutex<LinkedListAllocator>:

// DELETED: unsafe impl Allocator for LinkedListAllocator {
unsafe impl Allocator for Mutex<LinkedListAllocator> {
// DELETED: unsafe fn alloc(&mut self, size: usize, alignment: usize) -> Option<*mut u8> {
unsafe fn alloc(&self, size: usize, alignment: usize) -> Option<*mut u8> { // NEW
// DELETED: self.find_free_region(size, alignment)
self.lock().find_free_region(size, alignment)
}

// DELETED: unsafe fn dealloc(&mut self, ptr: *mut u8, size_to_dealloc: usize) {
unsafe fn dealloc(&self, ptr: *mut u8, size_to_dealloc: usize) { // NEW
// DELETED: self.add_free_region(MemChunk {
self.lock().add_free_region(MemChunk { // NEW
start_addr: ptr as usize,
size: size_to_dealloc
});
}
}
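Implementing a trait for the Mutex wrapper instead of the inner type is a general trick for any trait whose methods take &self. A standalone sketch with std's Mutex (Counter and Inner are invented for illustration):

```rust
use std::sync::Mutex;

trait Counter {
    fn increment(&self); // note: &self, not &mut self
    fn value(&self) -> u32;
}

struct Inner {
    count: u32,
}

// Implement the trait for the Mutex wrapper rather than for Inner itself:
// the lock supplies the mutability that &self alone doesn't.
impl Counter for Mutex<Inner> {
    fn increment(&self) {
        self.lock().unwrap().count += 1;
    }
    fn value(&self) -> u32 {
        self.lock().unwrap().count
    }
}

fn main() {
    let c = Mutex::new(Inner { count: 0 });
    c.increment();
    c.increment();
    assert_eq!(c.value(), 2);
}
```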

As for get_allocator, instead of returning a mutable pointer, we can return a reference to Mutex<LinkedListAllocator>, which can now act as an allocator even though it's behind an immutable reference.

// DELETED: pub fn get_allocator() -> *mut LinkedListAllocator {
pub fn get_allocator() -> &'static Mutex<LinkedListAllocator> {
// DELETED: unsafe { ALLOCATOR.as_mut().unwrap() as *mut _ }
&ALLOCATOR
}

Now, we have to change some things in our allocator test since LinkedListAllocator no longer implements Allocator.

#[cfg(test)]
mod tests {
// ...Others

// DELETED: unsafe fn setup_3_4_2_dealloc_test1() -> (LinkedListAllocator, *mut u8) {
unsafe fn setup_3_4_2_dealloc_test1() -> (Mutex<LinkedListAllocator>, *mut u8) { // NEW
// ...Others
}

// ...Others

// DELETED: unsafe fn setup_3_4_2_dealloc_test2() -> (LinkedListAllocator, *mut u8) {
unsafe fn setup_3_4_2_dealloc_test2() -> (Mutex<LinkedListAllocator>, *mut u8) { // NEW
// ...Others
}

// ...Others

// DELETED: unsafe fn setup_3_4_2_dealloc_test3() -> (LinkedListAllocator, *mut u8) {
unsafe fn setup_3_4_2_dealloc_test3() -> (Mutex<LinkedListAllocator>, *mut u8) { // NEW
// ...Others
}

// DELETED: fn new_allocator(memory_size: usize) -> LinkedListAllocator {
fn new_allocator(memory_size: usize) -> Mutex<LinkedListAllocator> { // NEW
// ...Others
unsafe {
allocator.add_free_region(MemChunk {
start_addr: mem_ptr as usize,
size: memory_size
});
}
// DELETED: return allocator;
return Mutex::new(allocator);
}

// ...Others
}

Now, we have to refactor the tests:

#[test]
fn test_allocate_3Kib() {
unsafe {
let mut allocator = new_allocator(10 * ONE_KIB);
// DELETED: let first_region = (*allocator.head).next.unwrap();
let first_region = (*allocator.lock().head).next.unwrap(); // NEW
// ...Others
assert!(ptr.is_some(), "Allocation should have been successful");
// DELETED: let first_region = (*allocator.head).next.unwrap();
let first_region = (*allocator.lock().head).next.unwrap(); // NEW
// ...Others
}
}
#[test]
fn test_allocate_3_4_2Kib() {
unsafe {
let mut allocator = new_allocator(10 * ONE_KIB);
// DELETED: let first_region = (*allocator.head).next.unwrap();
let first_region = (*allocator.lock().head).next.unwrap(); // NEW
// ...Others
assert!(ptr_2.is_some(), "Allocation of 2Kib should have been successful");
// DELETED: let first_region = (*allocator.head).next.unwrap();
let first_region = (*allocator.lock().head).next.unwrap(); // NEW
// ...Others
}
}
#[test]
fn test_dealloc_3_4_2Kib1() {
unsafe {
let (mut allocator, four_kib_ptr) = setup_3_4_2_dealloc_test1();
// DELETED: let first_region = (*allocator.head).next.unwrap();
let first_region = (*allocator.lock().head).next.unwrap(); // NEW
// ...Others
allocator.dealloc(four_kib_ptr, 4 * ONE_KIB);
// DELETED: let first_region = (*allocator.head).next.unwrap();
let first_region = (*allocator.lock().head).next.unwrap(); // NEW
// ...Others
}
}
#[test]
fn test_dealloc_3_4_2Kib2() {
unsafe {
let (mut allocator, three_kib_ptr) = setup_3_4_2_dealloc_test2();
// DELETED: let first_region = (*allocator.head).next.unwrap();
let first_region = (*allocator.lock().head).next.unwrap(); // NEW
// ...Others

allocator.dealloc(three_kib_ptr, 3 * ONE_KIB);

// DELETED: let first_region = (*allocator.head).next.unwrap();
let first_region = (*allocator.lock().head).next.unwrap(); // NEW
// ...Others
}
}
#[test]
fn test_dealloc_3_4_2Kib3() {
unsafe {
let (mut allocator, two_kib_ptr) = setup_3_4_2_dealloc_test3();
// DELETED: let first_region = (*allocator.head).next.unwrap();
let first_region = (*allocator.lock().head).next.unwrap(); // NEW
// ...Others

allocator.dealloc(two_kib_ptr, 2 * ONE_KIB);

// DELETED: let first_region = (*allocator.head).next.unwrap();
let first_region = (*allocator.lock().head).next.unwrap(); // NEW
// ...Others
}
}

Now, we have to refactor the Boxes, BoxedFns and Vecs to use trait objects behind immutable references instead of raw pointers.

In alloc/boxed.rs

pub struct Box<'a, T> {
ptr: *mut T,
// DELETED: allocator: *mut dyn Allocator
allocator: &'a dyn Allocator // NEW
}

We need to add a lifetime specifier because of the reference, but that shouldn't be a big deal.
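What the lifetime buys us can be shown in isolation (Allocator here is a one-method stand-in for ours): the 'a ties the struct to the borrow it holds, so the struct can never outlive the allocator it references:

```rust
trait Allocator {
    fn name(&self) -> &'static str;
}

struct Dummy;

impl Allocator for Dummy {
    fn name(&self) -> &'static str {
        "dummy"
    }
}

// 'a says: a HoldsRef borrows its allocator, and the borrow checker
// will reject any HoldsRef that outlives the allocator it points to.
struct HoldsRef<'a> {
    allocator: &'a dyn Allocator,
}

fn main() {
    let dummy = Dummy;
    let h = HoldsRef { allocator: &dummy };
    assert_eq!(h.allocator.name(), "dummy");
}
```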

// DELETED: impl<T> Box<T> {
impl<'a, T> Box<'a, T> { // NEW
// DELETED: pub fn new(val: T, allocator: *mut dyn Allocator) -> Box<T> {
pub fn new(val: T, allocator: &'a dyn Allocator) -> Box<'a, T> { // NEW
// ...Others
}

// DELETED: pub unsafe fn from_raw<U>(ptr: *mut U, allocator: *mut dyn Allocator) -> Box<U> {
pub unsafe fn from_raw<'b, U>(ptr: *mut U, allocator: &'b dyn Allocator) -> Box<'b, U> {
Box {
ptr,
allocator
}
}
}
// DELETED: impl<T> core::ops::Drop for Box<T> {
impl<'a, T> core::ops::Drop for Box<'a, T> { // NEW
// ...Others
}
// DELETED: impl<T> core::ops::Deref for Box<T> {
impl<'a, T> core::ops::Deref for Box<'a, T> { // NEW
// ...Others
}
// DELETED: impl<T> core::ops::DerefMut for Box<T> {
impl<'a, T> core::ops::DerefMut for Box<'a, T> { // NEW
// ...Others
}
// DELETED: impl<T: PartialEq> PartialEq<Box<T>> for Box<T> {
impl<'a, 'b, T: PartialEq> PartialEq<Box<'a, T>> for Box<'b, T> { // NEW
// ...Others
}

In the case of this PartialEq implementation, we had to use two lifetime parameters to indicate that the allocator in one Box doesn't need to live exactly as long as the allocator in the other for the two to be compared.

// DELETED: impl<T: PartialEq> PartialEq<T> for Box<T> {
impl<'a, T: PartialEq> PartialEq<T> for Box<'a, T> {
// ...Others
}
// DELETED: impl<T: fmt::Debug> fmt::Debug for Box<T> {
impl<'a, T: fmt::Debug> fmt::Debug for Box<'a, T> { // NEW
// ...Others
}

For the tests:

#[cfg(test)]
mod tests {
// ...Others

// DELETED: fn failing_allocator() -> *mut FailingAllocator {
fn failing_allocator() -> &'static FailingAllocator { // NEW
// DELETED: &mut FailingAllocator as *mut _
&FailingAllocator // NEW
}

// DELETED: fn successful_allocator() -> *mut SuccessfulAllocator {
fn successful_allocator() -> &'static SuccessfulAllocator { // NEW
// DELETED: &mut SuccessfulAllocator as *mut _
&SuccessfulAllocator // NEW
}

// ...Others


unsafe impl Allocator for SuccessfulAllocator {
// DELETED: unsafe fn alloc(&mut self, size: usize, alignment: usize) -> Option<*mut u8> {
unsafe fn alloc(&self, size: usize, alignment: usize) -> Option<*mut u8> { // NEW
// ...Others
}
// DELETED: unsafe fn dealloc(&mut self, ptr: *mut u8, size_to_dealloc: usize) {
unsafe fn dealloc(&self, ptr: *mut u8, size_to_dealloc: usize) { // NEW
// ...Others
}
}


struct FailingAllocator;

unsafe impl Allocator for FailingAllocator {
// DELETED: unsafe fn alloc(&mut self, size: usize, alignment: usize) -> Option<*mut u8> {
unsafe fn alloc(&self, size: usize, alignment: usize) -> Option<*mut u8> { // NEW
// ...Others
}
// DELETED: unsafe fn dealloc(&mut self, ptr: *mut u8, size_to_dealloc: usize) {}
unsafe fn dealloc(&self, ptr: *mut u8, size_to_dealloc: usize) {} // NEW
}
}

Since these dummy allocators are used in multiple tests, we can factor them out into a separate module.

In alloc/mod.rs

#[cfg(test)]
pub mod test_utils {
use crate::allocator::Allocator;

// Convenience function for getting the always fail allocator
pub fn failing_allocator() -> &'static FailingAllocator {
&FailingAllocator
}

// Convenience function for getting the always successful allocator
pub fn successful_allocator() -> &'static SuccessfulAllocator {
&SuccessfulAllocator
}

// Dummy allocator that we can depend on to always succeed
pub struct SuccessfulAllocator;

use std::alloc::Global as PlatformAllocator;
use std::alloc::Layout;
use std::ptr::NonNull;
use std::alloc::Allocator as StdAllocator;

// Use your computer's allocator to allocate and deallocate memory
// Much more reliable than using our own custom allocator,
// so we can depend on it succeeding (under normal circumstances)
unsafe impl Allocator for SuccessfulAllocator {
unsafe fn alloc(&self, size: usize, alignment: usize) -> Option<*mut u8> {
let mem_layout = Layout::from_size_align(size, alignment).unwrap();
let mem = PlatformAllocator.allocate(mem_layout).unwrap();
let ptr = mem.as_ptr() as *mut u8;
Some(ptr)
}
unsafe fn dealloc(&self, ptr: *mut u8, size_to_dealloc: usize) {
let mem_layout = Layout::from_size_align(size_to_dealloc, 1).unwrap();
PlatformAllocator.deallocate(NonNull::new(ptr).unwrap(), mem_layout);
}
}


// Dummy allocator we can depend on to always fail
pub struct FailingAllocator;

unsafe impl Allocator for FailingAllocator {
unsafe fn alloc(&self, size: usize, alignment: usize) -> Option<*mut u8> {
None
}
unsafe fn dealloc(&self, ptr: *mut u8, size_to_dealloc: usize) {}
}
}

And in alloc/boxed.rs

#[cfg(test)]
mod tests {
use super::*;
// DELETED: use crate::allocator::Allocator;
use crate::alloc::test_utils::*; // NEW

// ...Others

/* DELETED:
// Convenience function for getting the always fail allocator
pub fn failing_allocator() -> &'static FailingAllocator {
&FailingAllocator
}

// Convenience function for getting the always successful allocator
pub fn successful_allocator() -> &'static SuccessfulAllocator {
&SuccessfulAllocator
}

// Dummy allocator that we can depend on to always succeed
struct SuccessfulAllocator;

// ...Others

unsafe impl Allocator for SuccessfulAllocator {
// ...Others
}


struct FailingAllocator;

unsafe impl Allocator for FailingAllocator {
// ...Others
}
*/

}

Next on our list is the Vec.

// DELETED: pub struct Vec<T: Clone> {
pub struct Vec<'a, T: Clone> { // NEW
len: usize,
capacity: usize,
start_ptr: *mut T,
// DELETED: allocator: *mut dyn Allocator
allocator: &'a dyn Allocator // NEW
}
// DELETED: impl<T: Clone> Vec<T> {
impl<'a, T: Clone> Vec<'a, T> { // NEW
// DELETED: pub fn with_capacity(capacity: usize, allocator: *mut dyn Allocator) -> Vec<T> {
pub fn with_capacity(capacity: usize, allocator: &'a dyn Allocator) -> Vec<'a, T> { // NEW
// ...Others
}
}
// DELETED: impl<T: Clone> Drop for Vec<T> {
impl<'a, T: Clone> Drop for Vec<'a, T> { // NEW
// ...Others
}
// DELETED: impl<T: Clone> Index<usize> for Vec<T> {
impl<'a, T: Clone> Index<usize> for Vec<'a, T> { // NEW
// ...Others
}
// DELETED: impl<T: Clone> IndexMut<usize> for Vec<T> {
impl<'a, T: Clone> IndexMut<usize> for Vec<'a, T> { // NEW
// ...Others
}
// DELETED: impl<T: PartialEq + Clone> PartialEq<Vec<T>> for Vec<T> {
impl<'a, 'b, T: PartialEq + Clone> PartialEq<Vec<'a, T>> for Vec<'b, T> { // NEW
// ...Others
}
// DELETED: impl<T: Clone> Clone for Vec<T> {
impl<'a, T: Clone> Clone for Vec<'a, T> { // NEW
// ...Others
}
// DELETED: impl<T: fmt::Debug + Clone> fmt::Debug for Vec<T> {
impl<'a, T: fmt::Debug + Clone> fmt::Debug for Vec<'a, T> { // NEW
// ...Others
}

The tests:

#[cfg(test)]
mod tests {
use super::*;
// DELETED: use crate::allocator::Allocator;
use crate::alloc::test_utils::*; // NEW

// ...Others

/* DELETED:
// Convenience function for getting the always fail allocator
pub fn failing_allocator() -> &'static FailingAllocator {
&FailingAllocator
}

// Convenience function for getting the always successful allocator
pub fn successful_allocator() -> &'static SuccessfulAllocator {
&SuccessfulAllocator
}

// Dummy allocator that we can depend on to always succeed
struct SuccessfulAllocator;

// ...Others

unsafe impl Allocator for SuccessfulAllocator {
// ...Others
}


struct FailingAllocator;

unsafe impl Allocator for FailingAllocator {
// ...Others
}
*/

}

Now, on to alloc/boxed_fn.rs

// DELETED: pub struct BoxedFn(*mut BaseFn, *mut dyn Allocator);
pub struct BoxedFn<'a>(*mut BaseFn, &'a dyn Allocator); // NEW
impl<'a> BoxedFn<'a> {
// DELETED: pub fn new<F>(func: F, allocator: *mut dyn Allocator) -> Self where F: FnMut(EventInfo) {
pub fn new<F>(func: F, allocator: &'a dyn Allocator) -> Self where F: FnMut(EventInfo) { // NEW
// ...Others
}
}
// DELETED: impl Drop for BoxedFn {
impl<'a> Drop for BoxedFn<'a> { // NEW
// ...Others
}
// DELETED: impl Clone for BoxedFn {
impl<'a> Clone for BoxedFn<'a> { // NEW
// ...Others
}
// DELETED: impl Fn<(EventInfo,)> for BoxedFn {
impl<'a> Fn<(EventInfo,)> for BoxedFn<'a> { // NEW
// ...Others
}
// DELETED: impl FnMut<(EventInfo,)> for BoxedFn {
impl<'a> FnMut<(EventInfo,)> for BoxedFn<'a> { // NEW
// ...Others
}
// DELETED: impl FnOnce<(EventInfo,)> for BoxedFn {
impl<'a> FnOnce<(EventInfo,)> for BoxedFn<'a> { // NEW
// ...Others
}

The tests:

#[cfg(test)]
mod tests {
use super::*;
use crate::event_hook::EventInfo;
use crate::alloc::vec;
use crate::alloc::test_utils::*; // NEW

// ...Others

/* DELETED:
// Convenience function for getting the always fail allocator
pub fn failing_allocator() -> &'static FailingAllocator {
&FailingAllocator
}

// Convenience function for getting the always successful allocator
pub fn successful_allocator() -> &'static SuccessfulAllocator {
&SuccessfulAllocator
}

// Dummy allocator that we can depend on to always succeed
struct SuccessfulAllocator;

// ...Others

unsafe impl Allocator for SuccessfulAllocator {
// ...Others
}


struct FailingAllocator;

unsafe impl Allocator for FailingAllocator {
// ...Others
}
*/

}

Next: event_hook.rs

// DELETED: pub struct EventHooker {
pub struct EventHooker<'a> { // NEW
// DELETED: timer_handlers: Vec<Handler>,
timer_handlers: Vec<'a, Handler<'a>>, // NEW
// DELETED: keyboard_handlers: Vec<Handler>,
keyboard_handlers: Vec<'a, Handler<'a>>, // NEW
// The identifier to be used for the next function added
next_id: usize
}

// ...Others

#[derive(Clone)]
// DELETED: struct Handler {
struct Handler<'a> { // NEW
// The identifier for this handler
id: HandlerId,
// A function called when an event occurs
// DELETED: func: BoxedFn
func: BoxedFn<'a> // NEW
}
// DELETED: impl EventHooker {
impl<'a> EventHooker<'a> { // NEW
// Creates a new EventHooker instance
// DELETED:pub fn new(allocator: *mut dyn Allocator) -> Self {
pub fn new(allocator: &'a dyn Allocator) -> Self { // NEW
EventHooker {
timer_handlers: Vec::with_capacity(10, allocator),
keyboard_handlers: Vec::with_capacity(10, allocator),
next_id: 0
}
}
}
impl<'a> EventHooker<'a> {
// DELETED: pub fn hook_event(&mut self, event_kind: EventKind, func: BoxedFn) -> HandlerId {
pub fn hook_event(&mut self, event_kind: EventKind, func: BoxedFn<'a>) -> HandlerId { // NEW
// ...Others
}
}
// DELETED: pub fn hook_event(event_kind: EventKind, func: BoxedFn) -> HandlerId {
pub fn hook_event(event_kind: EventKind, func: BoxedFn<'static>) -> HandlerId { // NEW
// ...Others
}

The Sync implementation:

unsafe impl<'a> Sync for EventHooker<'a> {}

And the tests:

#[cfg(test)]
mod tests {
use super::*;
// DELETED: use crate::allocator::Allocator;
use crate::keyboard::{KeyCode, KeyDirection};
use crate::alloc::test_utils::*; // NEW

// ...Others

/* DELETED:
// Convenience function for getting the always fail allocator
pub fn failing_allocator() -> &'static FailingAllocator {
&FailingAllocator
}

// Convenience function for getting the always successful allocator
pub fn successful_allocator() -> &'static SuccessfulAllocator {
&SuccessfulAllocator
}

// Dummy allocator that we can depend on to always succeed
struct SuccessfulAllocator;

// ...Others

unsafe impl Allocator for SuccessfulAllocator {
// ...Others
}


struct FailingAllocator;

unsafe impl Allocator for FailingAllocator {
// ...Others
}
*/

}

We're almost done. The last thing left before we move on is the SCREEN static.

If you run the code now, it should compile successfully but you'll still keep getting those mysterious double-fault errors. If you get one of them, kill the program and re-run it until you don't see any errors. Eventually, the game scene will show up on the screen. We'll deal with this bug later.

The SCREEN static is set only once, when the graphics are initialized; after that, the reference itself is only read. That might seem like a good situation for a Once, but it's not: because of the way our Once is implemented, it only ever hands out shared references to its contents, so if it held the mutable screen reference, nothing could be mutated through that reference.

This is bad news because the whole purpose of keeping that reference is to mutate the screen through it. Because of this, we'll just use a Mutex instead.
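The distinction can be demonstrated with the standard library's OnceLock and Mutex standing in for our own Once and Mutex (the behavior relevant here is the same): a OnceLock only gives out &T after initialization, while a Mutex guard derefs mutably.

```rust
use std::sync::{Mutex, OnceLock};

static ONCE_VAL: OnceLock<i32> = OnceLock::new();
static MUTEX_VAL: Mutex<i32> = Mutex::new(0);

fn main() {
    ONCE_VAL.get_or_init(|| 1);
    // get() returns Option<&i32>: shared access only, so the value
    // can never be mutated after initialization.
    assert_eq!(*ONCE_VAL.get().unwrap(), 1);

    // The Mutex guard derefs mutably, so the value can be changed
    // as many times as we like after initialization.
    *MUTEX_VAL.lock().unwrap() = 42;
    assert_eq!(*MUTEX_VAL.lock().unwrap(), 42);
    println!("ok");
}
```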

// DELETED: static mut SCREEN: Option<&mut Screen> = None;
static SCREEN: Mutex<Option<&mut Screen>> = Mutex::new(None); // NEW

The Option part is still maintained because we don't know what the screen reference should be at compile-time.

// Initializes the screen static
fn init_screen(screen: &'static mut Screen) {
// DELETED: unsafe { SCREEN = Some(screen); }
*SCREEN.lock() = Some(screen); // NEW
}
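The same init-then-mutate pattern can be sketched with std's Mutex, with a small byte array standing in for the Screen type (both names here are stand-ins, not the project's):

```rust
use std::sync::Mutex;

// Stand-in for Mutex<Option<&mut Screen>>: a static slot for a
// 'static mutable reference to a framebuffer-like byte array.
static SCREEN: Mutex<Option<&'static mut [u8; 4]>> = Mutex::new(None);

fn init_screen(buf: &'static mut [u8; 4]) {
    *SCREEN.lock().unwrap() = Some(buf);
}

fn main() {
    // Box::leak produces a &'static mut, much like the screen
    // reference handed to us at boot.
    let buf: &'static mut [u8; 4] = Box::leak(Box::new([0u8; 4]));
    init_screen(buf);

    // Later code can lock and mutate through the stored reference.
    SCREEN.lock().unwrap().as_mut().unwrap()[0] = 0xFF;
    assert_eq!(SCREEN.lock().unwrap().as_ref().unwrap()[0], 0xFF);
    println!("ok");
}
```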

We just get rid of get_screen:

/* DELETED:
pub fn get_screen() -> Option<&'static mut &'static mut Screen> {
unsafe { SCREEN.as_mut() }
}
*/

And change some other stuff:

#[no_mangle]
extern "efiapi" fn efi_main(
handle: *const core::ffi::c_void,
sys_table: *mut SystemTable,
) -> usize {
// ...Others

if let Err(msg) = init_graphics_result {
let simple_text_output = sys_table.simple_text_output();
write!(simple_text_output, "{}", msg);
loop {}
}
// DELETED: let screen = init_graphics_result.unwrap();

// DELETED: init_screen(screen);
init_screen(init_graphics_result.unwrap()); // NEW

// DELETED: let screen = get_screen().unwrap();

// Allocating memory for the heap
let heap_mem = boot_services.alloc_pool(MemType::EfiLoaderData, HEAP_SIZE);

// ...Others

event_hook::hook_event(EventKind::Keyboard, boxed_fn::BoxedFn::new(|event_info| {
if let EventInfo::Keyboard(key_event) = event_info {
if key_event.direction == keyboard::KeyDirection::Down {
// DELETED: write!(screen, "{:?}", key_event.keycode);
write!(SCREEN.lock().as_mut().unwrap(), "{:?}", key_event.keycode); // NEW
}
}
}, allocator::get_allocator()));

// DELETED: game::blasterball(screen);
game::blasterball(&SCREEN); // NEW

// ...Others
}

We pass a reference to the SCREEN static when calling game::blasterball, so we'll have to refactor that too.

extern "x86-interrupt" fn breakpoint_handler(frame: interrupts::InterruptStackFrame) {
/* DELETED:
let screen = get_screen().unwrap();
write!(screen, "In the breakpoint handler");
*/

write!(SCREEN.lock().as_mut().unwrap(), "In the breakpoint handler"); // NEW
}
extern "x86-interrupt" fn timer_handler(frame: interrupts::InterruptStackFrame) {
// DELETED: let screen = get_screen().unwrap();
let screen = SCREEN.lock().as_mut().unwrap(); // NEW
// Notifying the event hooker that the timer event has occurred
event_hook::send_event(EventInfo::Timer);
// Signalling that the timer interrupt has been handled
PICS.lock().end_of_interrupt(0);
}
#[cfg_attr(not(test), panic_handler)]
fn panic_handler(panic_info: &core::panic::PanicInfo) -> ! {
// DELETED: let screen = get_screen().unwrap();
write!(SCREEN.lock().as_mut().unwrap(), "{}", panic_info); // NEW
loop {}
}

Now, refactoring game::blasterball:

In game.rs

use crate::sync::mutex::Mutex;
// DELETED: pub fn blasterball(screen: &mut Screen) -> ! {
pub fn blasterball(screen: &Mutex<Option<&mut Screen>>) -> ! { // NEW
// ...Others
}

The screen argument is passed to draw_bitmap calls, so we have to refactor those, too.

In display/bitmap.rs

use crate::sync::mutex::Mutex;
// DELETED: pub fn draw_bitmap(screen: &mut Screen, bitmap: &Bitmap, pos: (usize, usize)) {
pub fn draw_bitmap(screen: &Mutex<Option<&mut Screen>>, bitmap: &Bitmap, pos: (usize, usize)) { // NEW
for row in 0..bitmap.height() {
for col in 0..bitmap.width() {
let inverted_row = bitmap.height() - row - 1;
let color_table_index = bitmap.pixel_array[inverted_row * bitmap.width() + col];
let color = bitmap.color_table[color_table_index as usize];
if row + pos.0 < NO_OF_PIXELS_IN_A_COLUMN && col + pos.1 < NO_OF_PIXELS_IN_A_ROW {
screen.lock().as_mut().unwrap().pixels[row + pos.0][col + pos.1] = Pixel {
red: color.red,
green: color.green,
blue: color.blue,
reserved: 0
};
}
}
}
}
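Note that this version of draw_bitmap re-acquires the lock for every single pixel. A variant that locks once before the loops would avoid that repeated overhead, at the cost of holding the lock for the whole blit. A sketch of that alternative, using std's Mutex and a simplified 2D byte array in place of the real Screen and Bitmap types (all names here are stand-ins):

```rust
use std::sync::Mutex;

const WIDTH: usize = 4;
const HEIGHT: usize = 4;

// Simplified screen: a 2D array of u8 "pixels" behind a Mutex.
static SCREEN: Mutex<[[u8; WIDTH]; HEIGHT]> = Mutex::new([[0; WIDTH]; HEIGHT]);

fn draw_rect(value: u8, pos: (usize, usize), size: (usize, usize)) {
    // Lock once and hold the guard across the whole blit, instead of
    // re-acquiring the lock on every pixel write.
    let mut screen = SCREEN.lock().unwrap();
    for row in 0..size.0 {
        for col in 0..size.1 {
            // Same bounds check as draw_bitmap: skip off-screen pixels.
            if row + pos.0 < HEIGHT && col + pos.1 < WIDTH {
                screen[row + pos.0][col + pos.1] = value;
            }
        }
    }
}

fn main() {
    draw_rect(7, (1, 1), (2, 2));
    let screen = SCREEN.lock().unwrap();
    assert_eq!(screen[1][1], 7);
    assert_eq!(screen[2][2], 7);
    assert_eq!(screen[0][0], 0);
    println!("ok");
}
```

Whether holding the lock longer is acceptable depends on what else contends for it; in interrupt-driven code like ours, a handler that tries to take an already-held lock is exactly the kind of hazard to watch for.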

If you compile the code now, you'll see an error complaining about the absence of a get_screen function in machine/uefi.rs.

impl BootServices {
pub fn exit_boot_services(&self, image_handle: *const core::ffi::c_void) -> Result<(), Status> {
/* DELETED:
use crate::get_screen;
let s = get_screen().unwrap();
*/

// ...Others
}
}

We just delete those lines because they're never used. They were probably added during a debugging session and never removed afterwards.

If you run the code now, all you'll get will be a bunch of double-fault errors. This is a bug and this is what we'll turn our attention to in the next post.

Take Away

For the full code up to this point, go to the repo

In The Next Post

We'll be doing some debugging.