2.2. Rust as a multi-paradigm language
In this chapter, we will look more closely at the Rust programming language and its high-level features. We will do some comparisons between Rust code and C++ code, for now purely from a theoretical perspective. In the process, we will learn about programming paradigms, which often define the way one writes code in a programming language.
Why different programming languages?
There are a myriad of different programming languages available today. As of 2021, Wikipedia lists almost 700 unique programming languages, stating that these are only the "notable" ones. With such an abundance of programming languages to choose from, it is only natural that questions such as "What is the best programming language?" arise. Such questions are of course highly subjective, and discussions about whether programming language A is superior to programming language B are often fought with almost religious zeal. Perhaps the better question to ask instead is: "Why do so many different programming languages exist?"
Exercise 2.1 List the programming languages that you know of, as well as the ones that you have already written some code in. If you have written code in more than one language, which language did you prefer, and why? Can you come up with some metrics that can be used to assess the quality of a programming language?
One possible approach to understanding why there are so many different programming languages is to view a programming language like any other product available on the market. Take shoes as an analogy. There are lots of companies making shoes, and each company typically has many different product lines of shoes available. There are of course different types of shoes for different occasions, such as sneakers, slippers, winter boots or dress shoes. Even within each functional category, there are many variations which differ in price, style, materials used, etc. A way to classify shoes (or any product, for that matter) is through quality criteria. We can take a look at the discipline of project management, which also deals with quality assurance, to find some examples of quality criteria:
- Performance
- Usability
- Reliability
- Look & feel
- Cost
To name just a few. We can apply these quality criteria to programming languages as well and gain some insight into why there are so many of them. We will do so for three particularly interesting criteria: performance, usability, and reliability.
Performance as a quality criterion for programming languages
Performance is one of the most interesting quality criteria when applied to programming languages. There are two major ways to interpret the term performance here: One is the intuitive notion of "How fast are the programs that I can write with this programming language?". We talked a bit about this in section 1.2 when we learned about high-level and low-level programming languages. Another interpretation of performance is "What can I actually do with this programming language?" This question is interesting because there is a theoretical answer and a practical one, and both are equally valuable. The theoretical answer comes from the area of theoretical computer science (a bit obvious, isn't it?). Studying theoretical computer science is part of any decent undergraduate curriculum in computer science, as it deals with the more mathematical and formal aspects of computer science. Many of the underlying concepts that make modern programming languages so powerful are directly rooted in theoretical computer science, as we will see time and again in this course. To answer the question "What can I do with programming language X?", a theoretical computer scientist will ask you what abstract machine your programming language runs on. In the previous chapter, we learned about the Turing machine as an abstract machine, which incidentally is also the most powerful (conventional) abstract machine known to theoretical computer scientists (remember the Church-Turing thesis?).
If you can simulate a Turing machine in your programming language, you have shown that your programming language is as powerful (as capable) as any other computational model (disregarding some exotic, theoretical models). We call any such language a Turing-complete language. Most modern programming languages are Turing-complete, which is why you rarely encounter problems that you can solve with one popular programming language but not with another. At least, this is the theoretical aspect of it. From a practical standpoint, there are many capabilities that one language might have that another language is lacking. The ability to directly access hardware resources is one example, as we saw in section 1.2. The whole reason why we chose Rust as the programming language for this course is that it has capabilities that, for example, the language Python does not have. In practice, many languages have mechanisms to interface with code written in other languages, to get access to certain capabilities that they themselves lack. This is called language interoperability, or interop for short, and we will see some examples of this in chapter 10. In this regard, if language A can interop with language B, they are in principle equivalent in terms of their capabilities (strictly speaking, the capabilities of A form a superset of the capabilities of B). In practice, interop might introduce some performance overhead and makes code harder to debug and maintain, so it is not always the best solution.
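To make interop a bit more tangible, here is a minimal sketch of how Rust can call into C code. The function name `c_add` and its signature are made up for illustration, and actually linking against a C library requires additional build-system setup, which we will cover in chapter 10:

```rust
// Declaration of a foreign function, assuming some C library
// provides `int c_add(int a, int b)`. The name is hypothetical.
extern "C" {
    fn c_add(a: i32, b: i32) -> i32;
}

fn add_via_c(a: i32, b: i32) -> i32 {
    // Calling a foreign function is unsafe: the Rust compiler cannot
    // verify any guarantees about code written in another language.
    unsafe { c_add(a, b) }
}
```

This leads us to the next quality criterion: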
Usability of programming languages
Usability is an interesting criterion because it is highly subjective. While there are formal definitions of usability, even those definitions still involve a high degree of subjectivity. The same is true for programming languages. The Python language has become one of the most popular programming languages over the last decade, in part due to its great usability. It is easy to install, easy to learn and easy to write, which is why it is now seen as a great choice for the first language one learns when starting to program. We can define some aspects that make a language usable:
- Simplicity: How many concepts does the programmer have to memorize in order to effectively use the language?
- Conciseness: How much code does the programmer have to write for certain tasks?
- Familiarity: Does the language use standard terminology and syntax that is also used in other languages?
- Accessibility: How much effort is it to install the necessary tools and run your first program in the language?
Not all of these aspects will have the same weight for everyone. If you are just starting to learn programming, simplicity and accessibility might be the most important criteria for you. As an experienced programmer, you might look more for conciseness and familiarity instead.
Exercise 2.2 How would you rate the following programming languages in terms of simplicity, conciseness, familiarity and accessibility?
- Python
- Java
- C
- C++
- Rust
- Haskell
If you don't know some of these languages, try to look up an introductory tutorial for them (you don't have to write any code) and make an educated guess.
In the opinion of the author, Rust has an overall higher usability than most other systems programming languages, which is the main reason why it was chosen for this course.
Reliability of programming languages
Reliability is a difficult criterion to assess in the context of programming languages. A programming language does not wear out with repeated use, as a physical product might. Instead, we might ask the question "How reliable are the programs that I write with this language?" This boils down to "How easy is it to accidentally introduce bugs into the code?", which is an exceedingly difficult question to answer. Bugs can take on a variety of forms and can have a myriad of origins. No programming language can reasonably be expected to prevent all sorts of bugs, so we instead have to figure out which bugs can be prevented by using a specific programming language, and how the language deals with any errors that cannot be prevented.
Generally, it makes sense to distinguish between logical errors caused by the programmer (directly or indirectly), and runtime errors, which can occur due to external circumstances. Examples of logical errors are:
- The usage of uninitialized variables
- Not enforcing assumptions (e.g. calling a function with a null-pointer without checking for the possibility of null-pointers)
- Accessing an array out-of-bounds (for example due to the famous off-by-one errors)
- Wrong calculations
Examples of runtime errors are:
- Trying to access a non-existing file (or insufficient privileges to access the file)
- A dropped network connection
- Insufficient working memory
- Wrong user input
We can now classify languages by the mechanisms they provide to deal with logical and runtime errors. For runtime errors, a reliable language will have robust mechanisms that cover a large range of possible errors and make these errors easy to deal with. We will learn more about this in chapter 5 when we talk about error handling. Preventing logical errors is a lot harder, but there are many mechanisms that can help here as well and make a language more reliable. In the next section, for example, we will learn how the Rust type system makes certain kinds of logical errors impossible to write in Rust.
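As a small preview of chapter 5, here is a sketch of how such a runtime error surfaces in Rust: the possibility of failure is encoded in the return type (`Result`), so the caller cannot simply forget to deal with it. The file name is made up for illustration:

```rust
use std::fs::File;

fn main() {
    // Opening a file can fail at runtime (missing file, insufficient
    // privileges), so File::open returns a Result that must be handled.
    match File::open("does_not_exist.txt") {
        Ok(file) => println!("Opened file: {:?}", file),
        Err(why) => println!("Could not open file: {}", why),
    }
}
```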
A final important concept, which plays a large role in systems programming, is that of determinism. One part of being reliable as a system is that the system repeatedly shows the expected behaviour. Intuitively, one might think that every program should behave in this way, giving the same results when executed multiple times with the same parameters. While this is true in principle, in practice not all parameters of a program can be specified by the user. On a modern multi-processor system, the operating system, for example, determines at which point in time your program is executed on which logical core. Here, it might compete with other programs that are currently running on your system in a chaotic, unpredictable manner. Even disregarding the operating system, a programming language might contain unpredictable parts, such as a garbage collector which periodically (but unpredictably) interrupts the program to free up unused but reserved memory. Determinism is especially important in time-critical systems (so-called real-time systems), where code has to execute in a predictable timespan. This ranges from soft real-time systems, such as video games which have to hit a certain target framerate in order to provide a good user experience, to hard real-time systems, such as the circuit that triggers the airbag in your car in case of an emergency.
It is worth pointing out that, in principle, all of the given examples still constitute deterministic behaviour; however, the amount of information required to make a useful prediction in those systems is so large that any such prediction is infeasible in practice. Many programs thus constitute chaotic systems: systems that are in principle deterministic, but are so sensitive to even a small change in input conditions that their behaviour cannot be accurately predicted. Luckily, most software still behaves deterministically on the macroscopic scale, even if it exhibits chaotic behaviour on the microscopic scale.
The concept of programming paradigms
In order to achieve good performance, usability or reliability, there are certain patterns in programming language design that can be used. Since programming languages are abstractions over computation, a natural question arises: "What are good abstractions for computation?" Over the last decades, several patterns have emerged that turned out to be immensely useful for writing fast, efficient, concise and powerful code. These patterns are called programming paradigms and can be used to classify the features of a programming language. Perhaps the most well-known paradigm is object-oriented programming. It refers to a way of structuring data and functions together in functional units called objects. While object-oriented programming is often marketed as a "natural" way of writing code, modeling code as one would model relationships between entities in the real world, it is far from the only programming paradigm. In the next couple of sections, we will look at the most important programming paradigms in use today.
The most important programming paradigms
In the context of systems programming, several programming paradigms are especially important: imperative programming, object-oriented programming, functional programming, generic programming, and concurrent programming. Of course, there are many other programming paradigms in use today; for a comprehensive survey, the paper "Programming Paradigms for Dummies: What Every Programmer Should Know" by Peter Van Roy [Roy09] is a good starting point.
Imperative
Imperative programming refers to a way of programming in which statements modify state and express control flow. Here is a small example written in Rust:
```rust
fn collatz(mut n: u32) {
    loop {
        if n == 1 {
            break; // the sequence has reached 1, so we stop
        }
        if n % 2 == 0 {
            n /= 2;
        } else {
            n = 3 * n + 1;
        }
    }
}
```
This code computes the famous Collatz sequence and illustrates the key concepts of imperative programming. It defines some state (the variable `n`) that is modified using statements (conditionals, such as `if` and `else`, and assignments through `=`). The statements define the control flow of the program, which can be thought of as the sequence of instructions that are executed when your program runs. In this regard, imperative programming is a way of defining how a program should achieve its desired result.
Imperative programming might feel very natural to many programmers, especially when starting out to learn programming. It is the classic "Do this, then that" way of telling a computer how to behave. Indeed, most modern hardware architectures are imperative in nature, as the low-level machine instructions are run one after another, each acting on and potentially modifying some state. As this style of programming closely resembles the way that processors execute code, it has become a natural choice for writing systems software. Most systems programming languages that are in use today thus use the imperative programming paradigm to some extent.
The opposite of imperative programming is called declarative programming. If imperative programming focuses on how things are to be achieved, declarative programming focuses on what should be achieved. To illustrate the declarative programming style, it pays off to take a look at mathematical statements, which are inherently declarative in nature:
f(x) = x²

This simple statement expresses the idea that "there is some function f(x) whose value is x²". It describes what things are, not how they are achieved. The imperative equivalent of this statement might be something like this:
```rust
fn f_x(x: u32) -> u32 {
    x * x
}
```
Here, we describe how we achieve the desired result (`f(x)` is computed by multiplying `x` with itself). While this difference might seem pedantic at first, it has large implications for the way we write our programs. One specific form of declarative programming is called functional programming, which we will introduce in just a bit.
Object-Oriented
The next important programming paradigm is the famous object-oriented programming (OOP). The basic idea of object-oriented programming is to combine state and functions into functional units called objects. OOP builds on the concept of information hiding, where the inner workings of an object are hidden to its users. Here is a short example of object-oriented code, this time written in C++:
```cpp
#include <iostream>
#include <string>

class Cat {
    std::string _name;
    bool _is_angry;

public:
    Cat(std::string name, bool is_angry)
        : _name(std::move(name)), _is_angry(is_angry) {}

    void pet() const {
        std::cout << "Petting the cat " << _name << std::endl;
        if (_is_angry) {
            std::cout << "*hiss* How dare you touch me?" << std::endl;
        } else {
            std::cout << "*purr* This is... acceptable." << std::endl;
        }
    }
};

int main() {
    Cat cat1{"Milo", false};
    Cat cat2{"Jack", true};
    cat1.pet();
    cat2.pet();
}
```
In OOP, we hide internal state in objects and only interact with them through a well-defined set of functions on the object. The technical term for this is encapsulation, which is an important idea to keep larger code bases from becoming confusing and hard to maintain. Besides encapsulation, OOP introduces two more concepts that are important: Inheritance, and Polymorphism.
Inheritance refers to a way of sharing functionality and state between multiple objects. By inheriting from an object (technically, classes inherit from other classes, but the distinction does not matter here), another object gains access to the state and functionality of the base object, without having to redefine that state and functionality itself. Inheritance thus aims to reduce code duplication.
Polymorphism goes a step further and allows objects to serve as templates for specific behaviour. This is perhaps the most well-known aspect of object-oriented code: common state or behaviour of a class of entities is lifted into a common base type. The base type defines what can be done with these objects; each specific type of object then defines how this action is done. We will learn more about the different types of polymorphism in section 2.5, for now a single example will suffice:
```cpp
#include <iostream>
#include <memory>

struct Shape {
    virtual ~Shape() {}
    virtual double area() const = 0;
};

// Inheritance must be public so that Circle and Square can be used
// through pointers to the common base type Shape
class Circle : public Shape {
    double radius;

public:
    explicit Circle(double radius) : radius(radius) {}
    double area() const override {
        return 3.14159 * radius * radius;
    }
};

class Square : public Shape {
    double sidelength;

public:
    explicit Square(double sidelength) : sidelength(sidelength) {}
    double area() const override {
        return sidelength * sidelength;
    }
};

int main() {
    // Both objects are accessed through the common base type Shape;
    // the correct area() implementation is chosen at runtime
    std::unique_ptr<Shape> shape1 = std::make_unique<Circle>(10.0);
    std::unique_ptr<Shape> shape2 = std::make_unique<Square>(5.0);
    std::cout << "Area of shape1: " << shape1->area() << std::endl;
    std::cout << "Area of shape2: " << shape2->area() << std::endl;
}
```
OOP became quite popular in the 1980s and 1990s and to this day remains one of the most widely adopted programming paradigms. It is arguably more important in applications programming than in systems programming (much systems software is written in C, a non-object-oriented language), but its overall importance and impact on programming as a whole make it worth knowing. In particular, the core concepts of OOP (encapsulation, inheritance, polymorphism) can be found within other programming paradigms as well, albeit in different flavours. Notably, Rust is not considered an object-oriented language, but it still supports encapsulation and polymorphism, as we will see in later chapters.
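To give a first impression of how this looks, here is a minimal sketch of encapsulation in Rust, mirroring the C++ `Cat` example from above (the module name and structure are made up for illustration; modules and `impl` blocks are covered in detail in later chapters):

```rust
mod shelter {
    pub struct Cat {
        name: String,    // private: not accessible outside this module
        is_angry: bool,  // private
    }

    impl Cat {
        pub fn new(name: String, is_angry: bool) -> Self {
            Self { name, is_angry }
        }

        // The public interface through which users interact with a Cat
        pub fn pet(&self) {
            println!("Petting the cat {}", self.name);
            if self.is_angry {
                println!("*hiss* How dare you touch me?");
            } else {
                println!("*purr* This is... acceptable.");
            }
        }
    }
}

fn main() {
    let cat = shelter::Cat::new(String::from("Milo"), false);
    cat.pet();
    // cat.is_angry = true; // compile error: field `is_angry` is private
}
```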
It is worth noting that the popularity of OOP is perhaps more due to its history than due to its practical usefulness today. OOP has been heavily criticised time and again, and modern programming languages increasingly tend to move away from it. This is in large part due to the downsides of OOP: The immense success of OOP after its introduction for a time led many programmers to use it as a one-size-fits-all solution, designing large class hierarchies that quickly become unmaintainable. Perhaps most importantly, OOP does not map well onto modern hardware architectures. Advances in computational power over the last decade were mostly due to increased support for concurrent computing, not so much due to an increase in sequential execution speed. To make good use of modern multi-core processors, programming languages require solid support for concurrent programming. OOP is notoriously bad at this, as the information-hiding principle employed by typical OOP code does not lend itself well to parallelization of computations. This is where concepts such as functional programming and specifically concurrent programming come into play.
Functional
We already briefly looked at declarative programming, which is the umbrella term for all programming paradigms that focus on what things are, instead of how things are done. Functional programming (FP) is one of the most important programming paradigms from this domain. With roots deep within mathematics and theoretical computer science, it has seen an increase in popularity over the last decade due to its elegance, efficiency and usefulness for writing concurrent code.
FP generally refers to programs which are written through the application and composition of functions. Functions are of course a common concept in most programming languages; what makes functional programming stand out is that it treats functions as "first-class citizens". This means that functions share the same characteristics as data, namely that they can be passed around as arguments, assigned to variables and stored in collections. A function that takes another function as an argument is called a higher-order function, a concept which is crucial to the success of the FP paradigm. Many of the most common operations in programming can be solved elegantly with higher-order functions. In particular, all algorithms that use some form of iteration over elements in a container are good candidates: sorting a collection, searching for an element in a collection, filtering elements from a collection, transforming elements from one type into another type, etc. The following example illustrates the application of functional programming in Rust:
```rust
use std::collections::HashSet;

struct Student {
    pub id: String,
    pub gpa: f64,
    pub courses: Vec<String>,
}

fn which_courses_are_easy(students: &[Student]) -> HashSet<String> {
    students
        .iter()
        .filter(|student| student.gpa >= 3.0)
        .flat_map(|student| student.courses.clone())
        .collect()
}
```
Here, we have a collection of Students and want to figure out which courses might be easy. The naive way to do this is to look at the best students (all those with a GPA >= 3.0) and collect all the courses that these students took. In functional programming, these operations - finding elements in a collection, converting elements from one type to another etc. - are higher-order functions with specific names that make them read almost like an English sentence: "Iterate over all students, filter for those with a GPA >= 3.0 and map (convert) them to the list of courses. Then collect everything at the end." Notice that the `filter` and `flat_map` functions (the latter being a special variant of `map` that collapses collections of collections into a single level) take another function as their argument. In this way, these functions are composable and general-purpose. Changing the search criterion in the `filter` call amounts to passing a different function to `filter`:
```rust
.filter(|student| student.id.starts_with("700"))
```
All this can of course be achieved with the imperative way of programming as well: Create a counter that loops from zero to the number of elements in the collection minus one, access the element in the collection at the current index, check the first condition (GPA >= 3.0) with an `if`-statement, `continue`-ing if the condition is not met, and so on. While there are many arguments for functional programming, such as that it produces prettier code that is easier to understand and maintain, there is one argument that is especially important in the context of systems programming: Functional programming, by its nature, makes it easy to write concurrent code (i.e. code that can be run in parallel on multiple processor cores). In Rust, using a library called `rayon`, we can run the same code as before in parallel by adding just 4 characters:
```rust
// Requires the rayon crate as a dependency (e.g. rayon = "1" in Cargo.toml)
use rayon::prelude::*;
use std::collections::HashSet;

fn which_courses_are_easy(students: &[Student]) -> HashSet<String> {
    students
        .par_iter() // iter() became par_iter() - four added characters
        .filter(|student| student.gpa >= 3.0)
        .flat_map(|student| student.courses.clone())
        .collect()
}
```
We will learn a lot more about writing concurrent code in Rust in chapter 7; for now it is sufficient to note that functional programming is one of the core programming paradigms that make writing concurrent code easy.
As a closing note to this section, there are also languages that are purely functional, such as Haskell. Functions in a purely functional programming language must not have side-effects; that is, they must not modify any global state. Any function for which this condition holds is called a pure function (hence the name purely functional language). Pure functions are an important concept that we will examine more closely when we learn about concurrency in systems programming.
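To illustrate the difference, here is a small sketch contrasting a pure function with an impure one in Rust:

```rust
// Pure: the result depends only on the argument, and nothing outside
// the function is modified.
fn square(x: u32) -> u32 {
    x * x
}

// Impure: the function has a side-effect (writing to standard output),
// so calling it does more than just compute a value.
fn square_and_log(x: u32) -> u32 {
    println!("squaring {}", x);
    x * x
}
```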
Generic
Another important programming paradigm is called generic programming. In generic programming, algorithms and data structures can be implemented without knowing the specific types that they operate on. Generic programming thus is immensely helpful in preventing code duplication. Code can be written once in a generic way and is then specialized (instantiated) with specific types. The following Rust code illustrates the concept of a generic container class:
```rust
use std::fmt::{Display, Formatter};

struct PrettyContainer<T: Display> {
    val: T,
}

impl<T: Display> PrettyContainer<T> {
    pub fn new(val: T) -> Self {
        Self { val }
    }
}

impl<T: Display> Display for PrettyContainer<T> {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        write!(f, "~~~{}~~~", self.val)
    }
}

fn pretty_container_example() {
    // Type annotations are here for clarity
    let container1: PrettyContainer<i32> = PrettyContainer::new(42);
    let container2: PrettyContainer<&str> = PrettyContainer::new("hello");
    println!("{}", container1);
    println!("{}", container2);
}
```
Here, we create a container called `PrettyContainer` which is generic over some arbitrary type `T`. The purpose of this container is to wrap arbitrary values and print them in a pretty way. To make this work, we constrain our generic type `T`, requiring it to have the necessary functionality for being displayed (e.g. written to the standard output).
Generic programming enables us to write the necessary code just once and then use our container with various different types, such as `i32` and `&str` in this example. Generic programming is closely related to the concept of polymorphism that we learned about in the section on object-oriented programming. Strictly speaking, polymorphism is an umbrella term, with generic programming being one way of achieving polymorphism. In object-oriented languages, polymorphism is typically achieved through a form of subtyping, which resolves the specific function calls at runtime. This is called dynamic polymorphism. In contrast, generic programming can often be implemented purely at compile time. The most well-known example of this is C++ templates, which allow the compiler to generate the appropriate code for a concrete type automatically. This process is called monomorphization, and it is also the way generics are implemented in Rust.
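As a small sketch of this difference, consider two ways of writing a function that prints an arbitrary displayable value in Rust (we will look at both in detail in section 2.5):

```rust
use std::fmt::Display;

// Dynamic polymorphism: a single function exists in the binary, and
// the concrete implementation of Display is looked up at runtime
// through a trait object (dyn Display).
fn print_dyn(value: &dyn Display) {
    println!("{}", value);
}

// Static polymorphism through generics: the compiler generates a
// specialized copy of this function for every concrete type T it is
// called with (monomorphization), so no runtime lookup is needed.
fn print_generic<T: Display>(value: T) {
    println!("{}", value);
}

fn main() {
    print_dyn(&42);
    print_generic("hello");
}
```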
While it is surely beneficial to study the different types of generic programming in general, we will focus instead on the implications of generic programming in the domain of systems programming. Besides the obvious benefits of reduced code duplication and general convenience, there are also performance aspects to generic programming. Compared to dynamic polymorphism, which has to do some work at runtime to resolve the concrete types, languages such as Rust and C++ eliminate this runtime cost through monomorphization. Where applicable, generic programming thus constitutes a great way to eliminate the runtime cost of dynamic polymorphism, making it a valuable tool for any systems programmer.
There are also several downsides to generic programming. The potentially extensive code instantiation that the compiler has to perform often results in significantly longer compile times than with non-generic code. Additionally, generic code can get quite complex and hard to read, and it can produce frustrating compilation errors. C++ templates are notorious for this.
Concurrent
The last important programming paradigm that we will look at is concurrent programming. We already saw an example of code that makes use of multiple processor cores in the section on functional programming. Concurrent programming goes a step further and includes various mechanisms for writing concurrent code. At this point, we first have to understand an important distinction between two terms: concurrency and parallelism. Concurrency refers to multiple processes running during the same time period, whereas parallelism refers to multiple processes running at the same time. An example of concurrency in real life is university. Over the course of a semester (the time period), one student will typically be enrolled in multiple courses, making progress on all of them (ideally finishing them by the end of the semester). At no point in time, however, did the student sit in two lectures at the same time (with online courses during the pandemic, things might be different though; time-turners might also be a way to go). An example of parallelism is studying: A student can study for an exam while, at the same time, listening to some music. These two processes (studying and music) run at the same time, thus they are parallel.
Concurrency can thus be seen as a weaker, more general form of parallelism. An interesting piece of history is that concurrency was employed in operating systems long before multi-core processors became commonplace. This allowed users to run multiple pieces of software seemingly at the same time, all on just one processor. The illusion of parallelism was achieved through a process called time slicing, where each program ran exclusively on the single processor core for only a few milliseconds before being replaced by the next program. This rapid switching between programs gave the illusion that multiple things were happening at the same time.
Concurrency is a very important programming paradigm nowadays because it is central to achieving good performance and interactivity in complex applications. At the same time, concurrency means that multiple operations can be in flight at the same time, resulting in asynchronous execution and often non-deterministic behaviour. This makes writing concurrent code generally more difficult than writing sequential code (for example with imperative programming). Especially when employing parallelism, where multiple things can happen at the same instant in time, a whole new class of programming errors becomes possible due to multiple operations interfering with each other. To reduce the mental load on programmers when writing concurrent or parallel code, and to prevent some of these programming errors, concurrent programming employs many powerful abstractions. We will learn more about these abstractions when we talk about fearless concurrency in chapter 7. For now, here is a short example from the Rust documentation that illustrates a simple form of concurrency using the thread abstraction:
```rust
use std::thread;
use std::time::Duration;

fn main() {
    thread::spawn(|| {
        for i in 1..10 {
            println!("hi number {} from the spawned thread!", i);
            thread::sleep(Duration::from_millis(1));
        }
    });

    for i in 1..5 {
        println!("hi number {} from the main thread!", i);
        thread::sleep(Duration::from_millis(1));
    }
}
```
Running this example multiple times illustrates the non-deterministic nature of concurrent code, with each run of the program potentially producing a different order of the print statements.
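Note also that the example never waits for the spawned thread, so it is cut short as soon as the main thread finishes. As a small sketch, the handle returned by `thread::spawn` can be used to wait for the spawned thread to complete:

```rust
use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        for i in 1..10 {
            println!("hi number {} from the spawned thread!", i);
        }
    });

    // Block the main thread until the spawned thread has finished
    handle.join().unwrap();
}
```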
The programming paradigms used by Rust
Now that we know some of the most important programming paradigms and their importance for systems programming, we can take a quick look at the Rust programming language again. As you might have guessed while reading the previous sections, many programming languages use multiple programming paradigms at the same time. These languages are called multi-paradigm languages. This includes languages such as C++, Java, Python, and also Rust. Here is a list of the main programming paradigms that Rust uses:
- Imperative
- Functional
- Generic
- Concurrent
Notably absent from this list is object-oriented programming: Rust is not an object-oriented language. This makes it stand out somewhat from most of the other languages that are usually taught in undergraduate courses at universities, with the exception of C, which is also not object-oriented. Comparing C to Rust is interesting in the context of systems programming, because C is one of the most widely used programming languages in systems programming, even though from a modern point of view, it is lacking many convenience features that one might be used to from other languages. There is plenty of discussion regarding the necessity of "fancy" programming features, and some people will argue that one can write perfectly fine systems code in C (as the Linux kernel demonstrates). While this is certainly true (people also wrote working programs in assembly language for a long time), a modern systems programming language such as Rust might be more appealing to a wider range of developers, from students just starting to learn programming, to experienced programmers who have been scared off by unreadable C++ template code in the past.
Feature comparison between Rust, C++, Java, Python etc.
We shall conclude this section with a small feature comparison of Rust and a bunch of other popular programming languages in use today:
| Language | Imperative | Object-oriented | Functional | Generic | Concurrent | Other notable features |
|---|---|---|---|---|---|---|
| Rust | Yes | No | Yes (impure) | Yes (monomorphization) | Yes (threads, async) | Memory-safety through ownership semantics |
| C++ | Yes | Yes | Yes (impure) | Yes (monomorphization) | Yes (threads since C++11, coroutines since C++20) | Metaprogramming through templates |
| C | Yes | No | No | No | No (but possible through OS-dependent APIs, such as pthreads) | Metaprogramming through the preprocessor |
| Java | Yes | Yes | Yes (impure) | Yes (type erasure) | Yes (threads) | Supports runtime reflection |
| Python | Yes | Yes | Yes (impure) | No (dynamically typed) | Yes (threads, coroutines) | Dynamically typed scripting language |
| JavaScript | Yes | No (but OOP features can be implemented) | Yes (impure) | No (dynamically typed) | Yes (async) | Uses event-driven programming |
| C# | Yes | Yes | Yes (impure) | Yes (type substitution at runtime) | Yes (threads, coroutines) | One of the most paradigm-rich programming languages in use today |
| Haskell | No | No | Yes (pure) | Yes | Yes (various methods available) | A very powerful type system |
It is worth noting that most languages undergo a constant process of evolution and development themselves. C++ has seen significant change over the last decade, starting a three-year release cycle with C++11 in 2011, with C++20 being the latest version. Rust has a much faster release cycle of just six weeks, with version 1.52 being the current stable version as of writing, which will probably be several versions behind the current version by the time you are reading this.
Recap
In this chapter, we learned about programming paradigms. We saw how certain patterns for designing a programming language can aid in writing faster, more robust, more maintainable code. We learned about five important programming paradigms: Imperative programming, object-oriented programming, functional programming, generic programming and concurrent programming. We saw some examples of Rust and C++ code for these paradigms and learned that Rust supports most of these paradigms, with the exception of object-oriented programming. We concluded with a feature comparison of several popular programming languages.
In the next chapter, we will dive deeper into Rust and take a look at the type system of Rust. Here, we will learn why strongly typed languages are often preferred in systems programming.