2.3. Rust as a statically-typed language

In this chapter, we will look at an aspect of programming language design called the type system. Without becoming experts on compiler building, we will learn what types are, what makes some types "static" and others "dynamic", and why this distinction is useful for understanding which languages work well for systems programming. We will see a first example of why Rust is considered to have a particularly strong type system.

Why strongly typed languages for systems programming?

Since this course uses Rust as its main programming language, you will have to get used to some of the special features that Rust offers. One of the most powerful ones is the type system that Rust is built upon. Many of the examples in this course deal with finding, building, and understanding abstractions for the many different areas of systems programming that we as programmers have to deal with. A strong type system helps with these abstractions, as it provides a way to encode concepts into rules that the (Rust) compiler can check and enforce for us before we ever run our program. As we saw in chapter 2.1, strong checks are helpful for systems programming, but be aware that they come at a cost (slower iteration cycles for example, as we have to wait for the compiler to check our program).

Examples of encoding concepts into rules using types

Let's try to understand what "encoding concepts into rules" means. What types of concepts can we even encode? It turns out that this is very different depending on the programming language that we use, as we saw in chapter 2.2 when we learned about programming paradigms. Some concepts that you might already know are:

  • Primitive data types: Is something an integer (e.g. int) or a rational number (e.g. float), or maybe text (e.g. const char*)?
  • Control flow and conditions: Loops, recursion, if/else, switch etc.
  • Objects and object hierarchies: class, virtual etc.
  • Single entities vs. sequences: Arrays, pointers

Notice that collections such as std::vector<T> are not listed here. Those are not features of the language itself but are typically part of the standard library of the language, meaning they are things that are built using the language, not built into the language! Some languages have certain containers as parts of the language itself, but that is rare.

These concepts might seem like fundamental properties of what "programming" means to you, and they might not even look all that special. All of them are abstractions however, as none of them exist on the level of computer hardware: Your CPU knows nothing of objects, arrays, text, not even of loops! It knows a bit about data types, but less than you are used to in languages such as C++ or Python. You can realize object-like, array-like or loop-like behavior using assembly language, but you have to do that manually, which is error-prone and tedious (it can also be fun, if you are into puzzles :)). Since we don't want to do things manually, we want abstractions for these concepts, which is what programming languages give us. So a simple piece of code such as the following one contains quite a lot of nifty abstractions:

int div(int a, int b) {
    return a / b;
}

Here is what this piece of code states:

  • There is a reusable procedure (typically called a function) with a specific name (div)
  • The procedure has specific inputs and outputs (2 things in, one thing out)
  • The inputs and outputs have specific properties: They are signed, whole numbers with a specific size (32 bits, depending on the compiler that we use)
  • The procedure implements a specific operation (signed integer division), which is hidden in the / operator

For your machine, the corresponding instructions in x86-64 assembly language might look like this:

div(int, int):
        push    rbp
        mov     rbp, rsp
        mov     DWORD PTR [rbp-4], edi
        mov     DWORD PTR [rbp-8], esi
        mov     eax, DWORD PTR [rbp-4]
        cdq
        idiv    DWORD PTR [rbp-8]
        pop     rbp
        ret

Compared to the abstraction level that C++ supports, x86-64 assembly language is a lot less powerful:

  • There is some notion of reusable procedures, namely a label (div(int, int)) and the ret instruction. What we don't immediately find are the function arguments (a and b); these are handled implicitly by the calling convention. We might guess that the arguments are stored in the registers edi and esi, which is the default for the "System V AMD64 ABI", the calling convention used for the system this code was compiled on (x86-64 Linux). If we were to write the assembly code ourselves, we would have to memorize the calling convention and always pass arguments in the correct registers. It is not hard to imagine how we could make mistakes and write wrong code at this abstraction level!
  • There is no notion of int anymore. Instead, we have a few keywords and numbers that tell us how many bytes of memory our values take, and how to treat them: DWORD PTR [x] means "read 4 bytes (32 bits) from memory address x", the e in edi and esi is the identifier for the 32-bit registers on x86-64. The i in idiv is a hint that we want to do signed integer division. Again, it is easy to get things wrong here!
  • The return value of the procedure is also treated implicitly due to the calling convention: It is stored in the eax register, from which the caller can access it, given that they also stick to the calling convention.

Before diving into more complex types, take a moment to answer the following questions for yourself:

❓ Question

What are the things that are easiest to get wrong in the x86-64 assembly language snippet if you had to write it yourself?

💡 Click to show the answer

Subjective questions will yield subjective answers, but here are a few ideas:

  • Register names are hard to read and easy to mistype: edi is one keystroke away from esi
  • As mentioned, forgetting the rules of the calling convention seems like a likely situation, especially as a beginner
  • Forgetting to push and pop the frame pointer register (rbp)

❓ Question

How many places / statements would you have to change if you wanted to do 64-bit unsigned integer division instead of 32-bit signed integer division, both in the C++ version of the code and in the assembly version?

💡 Click to show the answer

  • All mov statements have to change, both their register names and their addresses:
    • We have to use the 64-bit registers, which start with an r: rdi instead of edi
    • We have to load 8 bytes instead of 4 bytes: QWORD PTR [rbp-8] instead of DWORD PTR [rbp-4]. Notice that the addresses also change: With 8-byte slots, the two arguments end up at [rbp-8] and [rbp-16]
  • The idiv instruction has to become div, its unsigned counterpart
  • The cdq instruction, which sign-extends eax into edx, has to go; for unsigned division we zero out rdx instead (e.g. with xor edx, edx)
  • In the C++ version, on the other hand, three small edits suffice: Replace each of the three occurrences of int with uint64_t

A higher level of abstraction

The same code as before might look like this in Python:

def div(a, b):
    return a // b

Notice that the types are gone: No int anywhere to be found. Other than that, it looks quite similar: We have a return statement, use the division operator (which for integer division is written as // in Python instead of / in C++) and define two arguments. Since we state no types and every function in both Python and C++ has exactly one return value, the Python code doesn't need an extra piece of information that says "This function returns one value", like we needed in C++. ("What about functions that return nothing in C++?" you might ask. They still return a "value" (void), but the language designates this value to mean "nothing". Translated to assembly language this actually makes sense, as the calling convention states that the return value of a function is accessed through a register, but we can just ignore that register if we have no return value.)
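Rust, by the way, makes this "function that returns nothing" idea explicit: a function with no stated return type returns the unit type `()`, a real type with exactly one value. A minimal sketch:

```rust
// In Rust, a function without a return type implicitly returns the unit type `()`.
fn greet() {
    println!("hello");
}

// This signature is equivalent to the one above, just fully spelled out:
fn greet_explicit() -> () {
    println!("hello");
}

fn main() {
    // `()` is an actual value that we can bind to a variable:
    let nothing: () = greet();
    greet_explicit();
    assert_eq!(nothing, ());
}
```

So where C++ uses the special keyword void, Rust reuses its ordinary type machinery: "nothing" is just a very boring type.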

❓ Question

What would you have to do to convert this exact Python function into x86-64 assembly code? Does Python provide enough information to do that?

💡 Click to show the answer

Since we already have an example of a C++ function for signed integer division and its corresponding x86-64 assembly equivalent, we can take inspiration from that. However, since Python is a dynamically typed language, we do not know if we need 8-bit, 16-bit, 32-bit, 64-bit or some entirely different bit-width for the mov operations. We also do not know if the numbers are actually signed or unsigned.

You might have struggled to answer the previous question because it is unclear what a direct mapping of the Python function to assembly code would require. You could simply use the function we have seen before, which might be viable in some situations, but does it cover all the possible arguments that the Python function can handle? How about this:

>>> div(2 ** 128, 2)
170141183460469231731687303715884105728

We divide a pretty big value (\(2^{128}\)) by 2, and Python does that effortlessly. But the C++ function uses the type int, of which we know that it can only represent numbers in the range \( [-2^{31}; 2^{31} - 1] \). If we try to execute similar code in C++, the result is different:

std::cout << div(2 << 128, 2);

We get both a compiler warning and the wrong result:

<source>: In function 'int main()':
<source>:10:24: warning: left shift count >= width of type [-Wshift-count-overflow]
   10 |     std::cout << div(2 << 128, 2);
      |                      ~~^~~~~~

Output: 0

Now you can start to debate who did something wrong here: Our computer or us? The compiler did issue a warning, but isn't it maybe the machine's fault if it doesn't do what we want? If Python can do it, why can't C++?

Questions like these are interesting to debate, because the computer science perspective is different from the user perspective: The user might want a calculator-like behavior, but the computer scientist is the one who has to make a thinking piece of rock behave in the expected way (it turns out that implementing a calculator is really hard!). Python hides more of the underlying hardware than C++ does (which is precisely what makes Python more high-level than C++, as we saw in chapter 2.1). In C++, there is no such thing as "any integer"; the compiler comes right back at us and asks "how many bits does your integer require?". The fact that we get a warning from the compiler for the statement 2 << 128 is arguably a good thing, even though it might be annoying if all you want is to simply calculate \(\frac{2^{128}}{2} \).

Interestingly enough, Python does have a corresponding abstraction for numbers (even though we never wrote any explicit types into our Python function), but it is more general than the one that C++ uses. Consulting the Python documentation, we see this statement: "There are three distinct numeric types: integers, floating-point numbers, and complex numbers. [...] Integers have unlimited precision" (emphasis ours). Unlimited precision integers are what make the Python code work, where the C++ code fails. They come at a cost however, as unlimited precision requires memory allocations and the usage of dynamically-sized arrays, which are more complex and costly to handle than the primitive types that fit within registers.
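Rust sits on the C++ side of this divide: its primitive integers are fixed-width. The standard library's checked arithmetic at least turns overflow into a value we can inspect, instead of a silently wrong result. A small sketch:

```rust
fn main() {
    // i32 is a fixed-width type: its largest value is 2^31 - 1.
    assert_eq!(i32::MAX, 2_147_483_647);

    // checked_pow returns None when the result would not fit,
    // instead of overflowing silently:
    assert_eq!(2_i32.checked_pow(30), Some(1_073_741_824));
    assert_eq!(2_i32.checked_pow(31), None); // 2^31 does not fit into i32
}
```

For unlimited precision like Python's, we would have to reach for a big-integer library, which, just like Python's integers, pays for its generality with memory allocations.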

A limitation of our div function

One thing that we haven't talked about concerning our div function is what happens if the second argument is zero. What does happen? Here is how Python handles it:

>>> div(2, 0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in div
ZeroDivisionError: division by zero

And in C++:

std::cout << div(2, 0);

Output: Program terminated with signal: SIGFPE

SIGFPE signals an "Erroneous arithmetic operation", as can be seen in the Linux man page for signal. In both cases, our program crashed, which is undesirable. So let's go back to the roots and look at the underlying math of our function!

Mathematically, the operation of division is generally deemed to be undefined if the denominator is zero, which we can express like so:

\[ f : \mathbb{Z} \times (\mathbb{Z} \setminus \{0\}) \to \mathbb{Z}, \quad f(x, y) = \lfloor \frac{x}{y} \rfloor \]

There are a couple of noteworthy things:

  • We define the domain and codomain of the function. This is the mathematical equivalent of the input arguments and function output. We can read the first part of the above expression as: "A function mapping from the set of ordered pairs of integers (the pairs are the result of the Cartesian product operation \(\times\)), where the second pair element must not be zero, to the set of integers"
  • The mathematical notation is closer to C++ than it is to Python, because we describe something like the type of each argument. Where in C++ we said int, here we say \(\mathbb{Z}\), meaning "the set of all integers"
  • Just like in C++, the mathematical notation has both a function signature (the domain to codomain mapping) and a function body (the actual function definition)

🔤 Exercise 2.3.1

We never really stated how two numbers divide by each other, neither in the C++ and Python code nor in the mathematical definition. We saw that the C++ code compiles down to using an assembly instruction called idiv that does the heavy lifting. Find out how integer division is implemented in Python, and what the mathematical algorithm for integer division is.

💡 Need help? Click here

First we have to decide on a Python interpreter, as there are multiple ones. CPython is open source and the code can be found on GitHub. Then we have to figure out how CPython implements integers in the first place. The Python documentation on Integer Objects tells us that PyLongObject is the type that represents arbitrary-precision integers. With a bit of searching, we find a file called Objects/longobject.c. Depending on your preference, you might first want to look into the corresponding header file, but since we are looking for an implementation for an algorithm, the C file makes sense. The very first search result for "div" shows a forward declaration of the following function:

static PyLongObject *x_divrem(PyLongObject *, PyLongObject *, PyLongObject **);

The function body is found in line 3304 and contains an illuminating comment: "We follow Knuth [The Art of Computer Programming, Vol. 2 (3rd edn.), section 4.3.1, Algorithm D]". So they are using an integer division algorithm from the classic reference textbook "The Art of Computer Programming" by Donald Knuth.

If the fact that mathematical notation also has types of some sort made you curious, you are onto something, because the idea of types in programming languages comes from mathematics! As we will soon see, mathematical sets are a good way of conceptualizing types.
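To make the "types are sets" idea concrete, we can count the values a type can hold. A small sketch in Rust:

```rust
fn main() {
    // A type can be seen as the set of all values it can represent.
    // bool is a set with exactly 2 elements:
    let _elements_of_bool = [true, false];

    // u8 is the set {0, 1, ..., 255}, i.e. it has 2^8 = 256 elements:
    assert_eq!(u8::MAX as u32 + 1, 256);

    // i32 is the set {-2^31, ..., 2^31 - 1}:
    assert_eq!(i32::MIN as i64, -(2_i64.pow(31)));
    assert_eq!(i32::MAX as i64, 2_i64.pow(31) - 1);
}
```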

Back to the problem at hand: Our C++ and Python code allowed divisions by zero. The mathematical definition of our division function was more powerful than both the C++ and Python code, because there we were able to exclude the malformed value (0 for the denominator) from the domain of the function. Translating this to a programming language would mean that it should be impossible to call the function div with 0 as the second argument. "Impossible" here can mean different things. It can mean "literally impossible", as in "there is no way to express having 0 as the second argument to div", or it can mean "impossible to compile/run such a program". The former would require a kind of newspeak programming language that has no concept of zero-valued denominators, which is hard to conceptualize. (We could try by defining the number representation in this language in a way that we simply can't express the number 0. Maybe each number n is written as n repetitions of the 🍣 emoji. To make the picture complete, our language would need prefix notation for calling functions. Then, the div function with two integer arguments could be written as div 🍣🍣🍣🍣 🍣🍣 to express \(\frac{4}{2}\). It would then be impossible to divide by zero, because the expression div 🍣🍣🍣🍣 is malformed, as div expects two arguments. Unfortunately, this would also mean that we can't have a zero-valued numerator, something which is perfectly legal in math. And negative numbers can't really be expressed either... maybe we would have to define negative numbers -n as n repetitions of 🍕...) Luckily, the latter definition is easier to achieve, as the act of checking adherence to syntax rules is already part of both compilers and interpreters.

So what we are looking for is a way to encode the idea of a non-zero integer into the language as a type! How exactly this looks depends on the way type checking is implemented in each programming language. For example, C++ has somewhat lenient rules for conversions between types:

std::cout << div(3.0, 1.5);

Output: 3

Passing floating-point values to a function that is declared to accept int values works in C++, because there is an implicit conversion from floating-point types to int. This can lead to subtle and confusing bugs: 3.0 divided by 1.5 should be 2.0, but in the conversion to the int type, the fractional part is simply discarded (the value is truncated toward zero), so 1.5 becomes 1 and what we actually computed was div(3, 1), which of course is 3. The compiler could warn about such implicit conversions, but the language itself explicitly allows them.

🔤 Exercise 2.3.2

Try playing around with the code example to see which compiler issues a warning for the conversions and which one doesn't.

💡 Need help? Click here

Note that Python is a bit smarter:

>>> div(3.0, 1.5)
2.0

Rust on the other hand is very strict and does not allow implicit conversions even between primitive types:

fn div(a: i32, b: i32) -> i32 {
    a / b
}

fn main() {
    div(3.0, 1.5);
}

This program does not compile:

error[E0308]: arguments to this function are incorrect
  --> <source>:16:5
   |
16 |     div(3.0, 1.5);
   |     ^^^ ---  --- expected `i32`, found floating-point number
   |         |
   |         expected `i32`, found floating-point number
   |
note: function defined here
  --> <source>:10:8
   |
10 | fn div(a: i32, b: i32) -> i32 {
   |    ^^^ ------  ------

Definition: We just saw our first example of what makes a type system strict: the number of implicit conversions it allows between different types. Rust is stricter than C++ or Python because it allows fewer conversions.
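To see that strictness in action, here is a sketch of what the earlier C++ conversion looks like in Rust: the compiler rejects the implicit version, and the explicit as cast makes the truncation visible right at the call site:

```rust
fn div(a: i32, b: i32) -> i32 {
    a / b
}

fn main() {
    // div(3.0, 1.5) does not compile. If we really want the C++ behavior,
    // we must spell the conversion out ourselves with `as`, which truncates
    // toward zero:
    let result = div(3.0 as i32, 1.5 as i32); // becomes div(3, 1)
    println!("{result}"); // prints 3, just like the C++ version
}
```

The bug from the C++ example is still possible in Rust, but now it is written down in the source code where a reviewer can spot it, rather than being inserted silently by the compiler.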

Fixing the gap in div using the Rust type system

Since Rust is so strict, we can use this strictness to our advantage. (Be prepared to be annoyed by the level of strictness of the Rust compiler. You will have to be very explicit about a lot of things, which can get frustrating while learning. This is a common criticism regarding the learning curve of Rust. At the same time, this strictness does prevent a lot of runtime bugs and was a conscious design decision of Rust. Decide for yourself if it was worth it.) To prevent passing 0 as the second argument to div, the type of the second argument has to be something not convertible from e.g. the literal 0. A first place to look is the list of all primitive types in Rust. The Rust language documentation states that these are all the primitive types:

  • Boolean — bool
  • Numeric — integer and float
  • Textual — char and str
  • Never — ! — a type with no values

We need a numeric type, but none of the primitive numeric types naturally excludes zero. The unsigned types all start at 0, the signed types all include 0 as well. The two floating-point types f32 and f64 use IEEE-754 encoding, which also can represent 0 (technically it can represent -0.0 and +0.0). So the primitive types are out of the question. But we can write our own types in Rust (these are called user-defined types):

pub struct NonZeroI32(i32);

fn div(a: i32, b: NonZeroI32) -> i32 {
    a / b.0
}

The NonZeroI32 type is a tuple struct, which is a convenient way of writing a Rust type that simply wraps another type. You will often encounter these tuple structs for various wrapper types. Since Rust has strict type checking rules, we can only pass a value of the type NonZeroI32 as the second argument to div, which prevents this code:

div(2, 0);

Unfortunately, it also prevents this code:

div(2, 1);

The correct way to invoke div is to first create an instance of NonZeroI32 and pass that as the second argument. We can do that in-place like so:

div(2, NonZeroI32(1));

Unfortunately we have gained nothing, because NonZeroI32 itself accepts any i32 value, even 0:

div(2, NonZeroI32(0)); // :(

The common way to deal with that is to restrict the means through which we can construct an instance of the type NonZeroI32. We can use the visibility and privacy rules of Rust to make the constructor of NonZeroI32 private:

mod safety {
    // `pub` exports the struct definition from the module `safety`, so it can be accessed from the outside!
    // The inner field `i32` is NOT declared `pub`, so outside code can't access it. If you would want that, 
    // write `pub struct NonZeroI32(pub i32)`. Also try out what happens if you write `struct NonZeroI32(pub i32)`
    pub struct NonZeroI32(i32);
}

// Now we can't create an instance of NonZeroI32 from a raw number anymore:
div(2, safety::NonZeroI32(1)); // Does not compile!

This is like making the constructor private in C++. Now we have to re-introduce a way to create instances of our type. We do so by defining a constructor-like function within the module safety and declaring that function pub, so it gets exported from the module, making it accessible to outside code. Since the function itself sits within the safety module, it has full access to all the private fields of NonZeroI32:

mod safety {
    pub struct NonZeroI32(i32);

    pub fn make_non_zero(val: i32) -> NonZeroI32 {
        NonZeroI32(val)
    }
}

div(2, safety::make_non_zero(1));
div(2, safety::make_non_zero(0));

You might think that we still haven't gained anything, after all we can still write safety::make_non_zero(1). But now we have a function that creates the instances of NonZeroI32, and we can perform a check inside of that function to disallow zero as an argument:

pub fn make_non_zero(val: i32) -> NonZeroI32 {
    if val == 0 {
        panic!("Not allowed to create NonZeroI32 from 0");
    }
    NonZeroI32(val)
}

This is not perfect, because our check happens at runtime and we simply crash if the wrong value is passed to make_non_zero, but it is a start. In chapter 4 we will learn about more powerful ways that Rust gives us to get rid of the runtime panic!. For now, ignoring the construction of NonZeroI32, we have introduced a new concept: A type that is guaranteed to never be zero. If we have an instance of NonZeroI32 (outside of the safety module), we know that it will never be zero. There is just one last caveat however, which is that we lost a way to access the inner i32 value from NonZeroI32. Let's fix that with a getter function for now:

mod safety {
    pub struct NonZeroI32(i32);

    impl NonZeroI32 {
        pub fn inner(&self) -> i32 {
            self.0
        }
    }
}

fn div(a: i32, b: safety::NonZeroI32) -> i32 {
    a / b.inner()
}
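Putting the pieces of this section together, the complete program might look like this:

```rust
mod safety {
    pub struct NonZeroI32(i32);

    // The only public way to construct a NonZeroI32; it rejects 0 at runtime.
    pub fn make_non_zero(val: i32) -> NonZeroI32 {
        if val == 0 {
            panic!("Not allowed to create NonZeroI32 from 0");
        }
        NonZeroI32(val)
    }

    impl NonZeroI32 {
        // Getter for the wrapped value.
        pub fn inner(&self) -> i32 {
            self.0
        }
    }
}

fn div(a: i32, b: safety::NonZeroI32) -> i32 {
    a / b.inner()
}

fn main() {
    println!("{}", div(10, safety::make_non_zero(2))); // prints 5
}
```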

❓ Question

What other (mathematical) functions would the NonZeroI32 type be useful for?

💡 Click to show the answer

An example is the logarithm function (to any base), which is undefined for the value 0.
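As an aside, tuple-struct wrappers like NonZeroI32 are useful far beyond excluding single values. A sketch with made-up unit types (the names Meters and Seconds are our own, not standard Rust types):

```rust
// Hypothetical wrapper types for illustration:
struct Meters(f64);
struct Seconds(f64);

fn speed(distance: Meters, time: Seconds) -> f64 {
    distance.0 / time.0
}

fn main() {
    // Swapping the arguments is now a compile error instead of a silent bug:
    // speed(Seconds(9.58), Meters(100.0)); // does not compile!
    let s = speed(Meters(100.0), Seconds(9.58));
    println!("{s:.2} m/s");
}
```

Both wrappers hold a plain f64, but because they are distinct types, the compiler keeps us from mixing them up, the same trick that kept 0 out of our denominator.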

The types that Rust provides

Rust has a bunch of built-in types as well as many useful types in its standard library. We already saw primitive types and structs as well as functions, but there are many more. For a great visual overview that links to the Rust documentation of each type, check out this guide on rustcurious.com.
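Incidentally, the standard library already ships a hardened version of the very type we built by hand: std::num::NonZeroI32 (with siblings for the other integer widths). Its constructor returns an Option instead of panicking:

```rust
use std::num::NonZeroI32;

fn div(a: i32, b: NonZeroI32) -> i32 {
    a / b.get()
}

fn main() {
    // NonZeroI32::new returns Some(..) for non-zero values and None for 0:
    let b = NonZeroI32::new(2).expect("must not be zero");
    println!("{}", div(10, b)); // prints 5
    assert!(NonZeroI32::new(0).is_none());
}
```

In practice you would reach for this standard type rather than rolling your own, but building it ourselves showed us exactly which language features make it work.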

Recap

We learned about the role of the type system in that it checks certain properties of our code and upholds certain invariants (an invariant is a property that must always be true). We saw that some languages are stricter than others when it comes to checking these invariants. Rust is very strict, and we learned of a way to use that strictness to our advantage to express properties of function arguments that go beyond the common primitive types that we typically might use.