
10 Rust technology essentials

This article covers a number of examples of what makes Rust so interesting for developers. It’s not just about systems programming anymore: Rust has moved on to boost areas where classical systems programming languages never really took hold. The formula for success is a sophisticated combination of new ideas and carefully selected concepts from other languages, and developers already love Rust for it.

Of course the language itself is no reason to blindly adopt it. There are many other aspects to consider, starting with the module ecosystem and ending with the question of hiring new developers, though the latter is always hard for new technologies. Even if you can’t or don’t plan to use Rust, some understanding of it can be a huge asset for your daily work and make you a better developer.

Essentials

When we compare Rust to other systems programming languages it’s clear why memory safety is such an important topic, while VM-based technologies rarely have to deal with those problems. The other way round: asynchronous programming matters less for native local programs, but for an IO-intensive application running in the cloud it’s a huge benefit to use fewer resources more efficiently.

Here are 10 Rust essentials in no specific order. They are about the technology and the language itself.

1. Short and snappy, modern language

A lot of Rust keywords use a short form, like pub for public, impl for implementation, mod for module, but that’s not the point here. Rust is a systems language compiled to native machine code, which is executed directly by the CPU. There is no virtual machine in between, and every line of code needs to carry enough information to generate that machine code. We also have OOP, stack and heap memory, move, copy and clone semantics and other conventions. Yet, if we leave aside the very strict compiler, it feels like writing a script.

By default everything is private, so we only write more code (add pub) to make it public, and the same holds for memory: for the slower heap we need to write more code (like Box::new(..)). Data is immutable by default; if we want to change it, we have to mark it mut. So if we have no reason to make something public, use heap memory or mutate data, we stick to the shorter version, which is a very good default for a modern language.
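These defaults can be seen in a tiny sketch (the variable names are illustrative):

```rust
fn main() {
    let x = 5;               // immutable by default, lives on the stack
    // x += 1;               // would not compile: `x` is not `mut`

    let mut y = 5;           // opt in to mutability with `mut`
    y += 1;

    let boxed = Box::new(5); // opt in to heap allocation explicitly

    println!("{} {} {}", x, y, boxed);
}
```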

Let’s take a look at an example. You can read and understand it even if you’ve never seen Rust code before.

use animals::Cat;

fn main() {
    let cat = Cat::new("Paul".to_owned(), 3);
    let charlie = Cat::new_charlie();

    cat.meow();
    charlie.meow();
}

mod animals {

    pub struct Cat {
        name: String,
        age: u32,
    }

    impl Cat {
        pub fn new(name: String, age: u32) -> Self {
            Cat { name, age }
        }

        pub fn new_charlie() -> Self {
            Cat {
                name: "Charlie".to_owned(),
                age: 5,
            }
        }

        pub fn meow(&self) {
            println!("{}({}): Meow!", self.name, self.age);
        }
    }
}

// prints:
//
// Paul(3): Meow!
// Charlie(5): Meow!
  • Rust separates data from implementation. We have a struct that only contains data, and we can provide an implementation in a separate block.
  • The language doesn’t provide a special “constructor” function, but there is a convention: a factory function whose name starts with new. new is not a keyword in Rust, and that’s neat, because we can give the function any name we like, and Rust doesn’t support function overloading at all.

2. Expressive for readability

Not having to write too much code is a good thing, but what about understanding what’s actually happening? There are many cases where we explicitly write down what will happen, even though it adds no information for the compiler, and of course what we write needs to be consistent. That helps a lot, also in code reviews, where we may not want to jump to another code section.

fn main() {
    let peter = "Peter".to_owned();
    say_hello(&peter); // peter is borrowed (passing a reference)
    say_bye(peter);    // peter is moved (passing the value)
}

// thousand lines of code

fn say_hello(name: &String) { // borrow (notice the ampersand), we can read it, don't own it
    println!("Hello, {}", name);
}

fn say_bye(name: String) {    // ownership (no &)
    println!("Bye, {}", name);
}

Reading the call site or the callee of say_.., it’s clear how the data is passed. We could argue that the & at the say_hello call site is redundant, because the function only takes borrows. But since Rust has a strict ownership concept, we as developers need to see it spelled out, otherwise it looks like a mistake.

3. Object oriented programming (OOP) light

In Rust we have OOP, but not every feature. We can encapsulate data and give objects functions, but we don’t have inheritance. That may be a religious discussion, but even languages that have inheritance increasingly discourage it and recommend composition for extending behavior. For abstractions we have traits.

fn main() {
    let cat = Cat;
    let dog = Dog;
    
    pet_animal(&cat);
    pet_animal(&dog);
}

trait Animal {
    fn pet(&self);
}

struct Cat;
struct Dog;

impl Animal for Cat {
    fn pet(&self) {
        println!("(cat is curring)");
    }
}

impl Animal for Dog {
    fn pet(&self) {
        println!("(dog is panting)");
    }
}

fn pet_animal(animal: &dyn Animal) {
    animal.pet();
}

Our pet_animal takes a borrow of the trait Animal. Another point about expressiveness: performance always matters in Rust, and the language doesn’t want to hide uncommon behavior. If we read animal: &Animal we know that Animal is a struct, and calling its functions will be fast and optimized by the compiler.

In this case we read &dyn Animal, which tells us that it’s a trait and we may lose some performance, because the compiler has to generate dynamic dispatch code here and may not be able to optimize it. At least it’s good to know what it means. That’s not to say we should worry too much about performance, because either way Rust is fast; this is more interesting for low-level optimizations.
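For comparison, generics give us the statically dispatched alternative: the compiler monomorphizes one specialized copy of the function per concrete type. A small sketch, using a hypothetical trait method that returns a value instead of printing:

```rust
trait Animal {
    fn sound(&self) -> &'static str;
}

struct Cat;
struct Dog;

impl Animal for Cat {
    fn sound(&self) -> &'static str { "meow" }
}

impl Animal for Dog {
    fn sound(&self) -> &'static str { "woof" }
}

// generic version: static dispatch, the compiler generates one
// specialized copy of this function per concrete type
fn describe<T: Animal>(animal: &T) -> String {
    format!("it says {}", animal.sound())
}

fn main() {
    println!("{}", describe(&Cat)); // direct call, no vtable lookup
    println!("{}", describe(&Dog));
}
```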

4. Enums are not just numbers with labels

Yes, a whole section about enums, because they are a game-changer. We have strict types and strict memory alignment, so what about heterogeneous data? What if we have a function that may return a Cat or a Dog depending on the current moon phase? The answer is that we can’t use different types, but we can wrap them in an enum type. The compiler will make our enum large enough to contain any of the variants, and Rust will not allow us to access the enum as a Cat if it’s a Dog. It’s safe: we can’t even write code that unwraps a Dog when it’s actually a Cat.

fn main() {
    let dog = Animal::Dog(Dog { height: 35.0 });
    let cat = Animal::Cat(Cat {
        name: "Fiffy".to_owned(),
    });

    print_animal(&dog);
    print_animal(&cat);
}

enum Animal {
    Dog(Dog),
    Cat(Cat),
}

struct Dog {
    height: f32,
}

struct Cat {
    name: String,
}

fn print_animal(animal: &Animal) {
    match animal {
        Animal::Dog(dog) => {
            println!("dog with height: {}", dog.height);
        }
        Animal::Cat(cat) => {
            println!("cat with name: {}", cat.name);
        }
    }
}

Rust even enforces that we handle all cases: a match always has to be exhaustive. If we add a Mouse to our enum, we need to update every match block, which is usually a positive breaking change worth investigating. Rust uses enums especially for errors and error handling, but also for optional values. Rust doesn’t have “nullish” values; if we need an optional value, there is Option from the Rust core.

fn main() {
    match get_animal_name() {
        Some(animal_name) => {
            println!("animal is called {}", animal_name);
        }
        None => {
            println!("no animal name");
        }
    }
}

fn get_animal_name() -> Option<String> {
    None
}

It’s already imported in the prelude, so we don’t even have to write use std::option::Option;.
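Option also ships convenience methods, so a full match is often unnecessary. A small sketch (display_name is a hypothetical helper):

```rust
fn main() {
    let name: Option<String> = None;

    // fall back to a default instead of writing a full match
    println!("{}", display_name(name));

    // transform the inner value only if it exists
    let len = Some("Paul".to_owned()).map(|n| n.len());
    println!("{:?}", len);
}

fn display_name(name: Option<String>) -> String {
    name.unwrap_or_else(|| "unknown".to_owned())
}
```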

5. Neat error handling and panics

In Rust the decision was made not to have a second return channel with exceptions. That has a huge impact on API design, error handling and convenience. From a security perspective, the caller cannot miss an error because of lacking documentation, or because of not reading it at all: the code is the documentation, and if the return value can carry an error, we need to write code that somehow checks the result.

Error Results

In Rust a function can only return its return value, or never return at all (panic). That means if we write a function that may fail, we need to put that information into the return value. The most common return type for success and error values is Result, which is (like Option) in the prelude, and long story short: we need it everywhere.

fn main() {
    match connect_db() {
        Ok(db_conn) => {
            println!("connected to: {}", db_conn.dsn);
        }
        Err(err) => match err {
            Error::DbError(msg) => {
                println!("DbError: {}", msg);
            }
            Error::Unknown(msg) => {
                println!("Unknown: {}", msg);
            }
        },
    }
}

fn connect_db() -> Result<DbConn, Error> {
    Err(Error::DbError("db connect failed.".to_owned()))
}

enum Error {
    DbError(String),
    Unknown(String),
}

struct DbConn {
    dsn: String, // data source name
}

Result takes two generic types, one for the success type and one for the error type. It’s a simple enum, but it comes with additional conveniences: for example, Rust has the ? “early-return” operator, which we can implement for our own types. For a Result it means “return the error early”, so we don’t have to write verbose error checks that clutter our code.

fn main() {
    let user_id = 4u32;

    match get_user_name(user_id) {
        Ok(username) => {
            println!("current username: {}", username);
        }
        Err(err) => {
            println!("error: {:?}", err)
        }
    }
}

fn get_user_name(user_id: u32) -> Result<String, Error> {
    let db_conn = connect_db()?;          // early return
    db_conn.get_current_username(user_id) // return Result
}

fn connect_db() -> Result<DbConn, Error> {
    Ok(DbConn)
}

// `#[derive(Debug)]` generates code so that println!(..) can output debug information.
// Wait for the section about macros.
#[derive(Debug)]
enum Error {
    DbError(String),
    Unknown(String),
}

struct DbConn;

impl DbConn {
    fn get_current_username(&self, user_id: u32) -> Result<String, Error> {
        if user_id != 0 {
            Ok("Robert".to_owned())
        } else {
            Err(Error::DbError("user_id = 0 is not allowed".to_owned()))
        }
    }
}

Rust doesn’t have exceptions. Unless we hit a return or a ?, the code is executed from top to bottom. Strictly speaking, if we call a function that returns a Result we can do three things, but it’s always some kind of local error handling. We need to write code that says what should happen.

fn main() {
    if let Err(err) = run() {
        println!("error: {}", err);
    }
}

fn run() -> Result<(), String> { // `()`-type is an "empty tuple", like void

    let _ = get_name()?;         // 1. early return error
    let _ = get_name().unwrap(); // 2. make thread panic for an error
    match get_name() {           // 3. handle error
        Ok(name) => println!("ok {}", name),
        Err(err) => {
            return Err(format!("something went wrong: {}", err));
        }
    }

    Ok(())
}

fn get_name() -> Result<String, String> {
    Ok("Jenny".to_owned())
}

Because we always have to do some kind of error handling, we cannot easily let errors bubble up to code sections where meaningful error handling is no longer possible. We will also have different layers of errors: at the very top, the server has a very generic error so that it can write some final logs and do cleanup before it terminates, while business errors from lower layers are better handled locally.

Result and Option are marked must_use, so the compiler warns us if we call a function that returns a Result and don’t use it.

fn main() {
    connect_db();         // warning
    let _ = connect_db(); // we don't care
    
    connect_db()          // assertion
        .expect("DB must run for this CLI tool.");
}

fn connect_db() -> Result<(), String> {
    Err("cannot connect to DB.".to_owned())
}

// warning: unused `Result` that must be used
//  --> src/main.rs:2:5
//   |
// 2 |     connect_db();         // warning
//   |     ^^^^^^^^^^^^^
//   |
//   = note: `#[warn(unused_must_use)]` on by default
//   = note: this `Result` may be an `Err` variant, which should be handled
// 
// thread 'main' panicked at 'DB must run for this CLI tool.: "cannot connect to DB."', src/main.rs:6:10

Typically every module has an Error type that represents the error values of the module’s pub API. Modules are organized in a hierarchy, so every time we want an error to bubble up, we need to wrap it in the higher-level error type. This is such a simple concept, and it makes it hard to conceal bad error handling: in code reviews we can specifically look for calls like .unwrap() or .expect(..).
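Wrapping a lower-level error into a higher-level one usually goes through the From trait, which the ? operator applies automatically. A sketch with hypothetical DbError and AppError types:

```rust
#[derive(Debug)]
struct DbError(String);

#[derive(Debug)]
enum AppError {
    Db(DbError),
}

// `?` calls this conversion automatically when the error types differ
impl From<DbError> for AppError {
    fn from(err: DbError) -> Self {
        AppError::Db(err)
    }
}

fn query() -> Result<String, DbError> {
    Err(DbError("connection lost".to_owned()))
}

fn handler() -> Result<String, AppError> {
    let row = query()?; // DbError is converted into AppError here
    Ok(row)
}

fn main() {
    println!("{:?}", handler());
}
```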

Panics

If at some point returning an error is not an option, we can panic and make the current thread terminate. For example, index access into arrays panics on “out of bounds”. The reason is the API: it takes an index and returns a borrow of the item, so there is no meaningful alternative to a panic on error here.

fn main() {
    let i = 5;
    let numbers = [1, 2, 3, 4];
    println!("{}. number is {}", i, numbers[i]);
}

// thread 'main' panicked at 'index out of bounds:
//  the len is 4 but the index is 5', src/main.rs:4:37

If we have no information about our data and need the item at index 5, we should better check numbers.len() first or use numbers.get(5), which returns an Option<..>. Turning expected errors into panics is mostly a code smell. We could decide not to handle out-of-memory (OOM) errors, so it’s okay that a failed memory allocation just panics. But it’s arguable whether reading or writing a file should panic.
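The checked alternative looks like this:

```rust
fn main() {
    let numbers = [1, 2, 3, 4];

    // `.get(..)` returns an Option instead of panicking
    match numbers.get(5) {
        Some(n) => println!("got {}", n),
        None => println!("no item at index 5"),
    }
}
```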

6. Convenient asynchronous programming

Did I already mention that performance matters a lot in Rust? Every time IO is involved, a thread-blocking API makes us pay more than we get. Many applications are fine with this: reading a small configuration file once at startup is good enough. A video encoder that batch-processes videos, reading files, processing them in parallel and writing them back to the filesystem, might also not profit much from async programming, because the bottleneck will probably be the CPU processing the video, not filesystem IO.

If we move to web applications, there are many services, mostly custom integration software, that are high in traffic but don’t have high CPU load, because they delegate the workload to other systems like databases or authorization services. This is where async programming has huge benefits: a few threads can serve thousands of clients, and starting more instances doesn’t make it faster. Applications with a blocking thread model, on the other hand, have to start more instances, because they need more memory and scheduling hundreds of threads can become slow.

In the context of garbage-collected memory, async programming is not a big problem: closures just capture everything they need by reference, and the garbage collector keeps those values alive long enough. Whenever memory management is delegated to the developer, async programming can become difficult. Rust provides a convenient and, of course, safe way to write async code. The language feature async/.await was stabilized in 2019 (with Rust 1.39) and makes working with the underlying Future building blocks much easier. A Future is just a trait with one function that we can call multiple times; it returns a pending state as long as the result is not ready.
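To make the polling model concrete, here is a toy hand-written Future driven by manual polling with a no-op waker. Real runtimes like tokio do this scheduling (and actually wake tasks) for us; this is only a sketch of the mechanics:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// a future that stays pending for the first two polls, then yields a value
struct CountDown(u32);

impl Future for CountDown {
    type Output = &'static str;

    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.0 == 0 {
            Poll::Ready("done")
        } else {
            self.0 -= 1;
            Poll::Pending
        }
    }
}

// a no-op waker, just enough to drive the future by hand
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = CountDown(2);
    let mut pinned = Pin::new(&mut fut);

    loop {
        match pinned.as_mut().poll(&mut cx) {
            Poll::Pending => println!("pending"),
            Poll::Ready(msg) => {
                println!("ready: {}", msg);
                break;
            }
        }
    }
}
```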

// tokio is an async runtime for Rust
use std::time::Duration;
use tokio::time::sleep;

// magic that runs our function in an async runtime
#[tokio::main]
async fn main() {
    match send_mail().await {
        Ok(()) => {
            println!("mail sent");
        }
        Err(err) => {
            println!("error send mail: {}", err);
        }
    }
}

async fn send_mail() -> Result<(), String> {
    sleep(Duration::from_secs(3)).await;
    Err("mail server connection error".to_owned())
}

await is a Rust keyword that looks like accessing a hidden property, but it actually makes the compiler generate boilerplate code: at every .await the thread may return with a pending state and process other tasks, until the task is woken up again and tries to make progress until it’s ready. tokio is a very popular crate providing an async runtime that drives the Future implementations.

In our example sleep(..).await returns a pending state after subscribing to an internal timer. After that duration the task is woken up, and when a thread is idle it polls again: the sleep now returns a ready state, and finally the function returns a ready state carrying the “mail server connection” error.

The example doesn’t use a blocking thread sleep. The same thread can invoke send_mail(), process other tasks for 3 s and then jump back into the function to make further progress until the task finally returns a ready state. It’s the same for IO: reading or writing doesn’t occupy the whole thread, so it can process many more tasks concurrently.

7. Ownership, borrows and mutability concept

In Rust the type itself tells us everything about who owns a value, whether it is just borrowed, and whether mutable access is allowed. The other way round, we can design a sound API without having to document those aspects. For languages without garbage collection this is a huge benefit, because it’s not just clean, it’s safe.

This is how the Rust compiler guarantees memory safety and tackles a fundamental problem of many systems programming languages where memory management is delegated to the developer. A report by the MSRC (Microsoft Security Response Center) shows that about 70% of all their CVEs are memory-safety issues.

To be fair, whatever nasty code we want to write is possible in Rust inside an unsafe { .. } block, and it is used for optimizations or for things that aren’t possible without it. This happens mostly in libraries: in std, for example, Vec is a battle-proven memory-safe implementation of contiguous memory that uses unsafe under the hood out of necessity, but exposes a safe API to us.

The next example will use some of the different flavors of borrows and mutability.

fn main() {
    // create state => we own it here
    let state = AppState { circuit_breaker_open: false };
    
    // we pass a borrow => still be owner afterwards
    log_state(&state);
    
    // we pass ownership to `run_server`
    run_server(state);
    
    // we can't use `state` anymore here
}

// just an immutable borrow
fn log_state(state: &AppState) {
    println!("current state: {:?}", state);
}

// we take ownership (type is `AppState`) and bind it
// mutably (`mut state`), because we're going to mutate it
fn run_server(mut state: AppState) {

    // pass a borrow with type `&mut AppState`
    start_listener(&mut state);
}

// just a borrow with `mut` access
fn start_listener(state: &mut AppState) {
    // ..

    // here we need `mut` access, because we change it
    state.circuit_breaker_open = true;
}

#[derive(Debug)]
struct AppState {
    circuit_breaker_open: bool,
}

The callee defines the API contract and the caller has to match the types. If the callee changes the contract, say from a borrow state: &AppState to taking ownership state: AppState, it will break at the caller. The same happens if the callee suddenly wants state: &mut AppState access.

Finally, about immutability: in multithreaded environments we often hear that “one exclusive writer or multiple readers” is a valid concept for shared data access. The Rust mut keyword has a similar meaning, but it’s not about threads. The idea is that whenever there is a mutable borrow (&mut) of a value, there must not be any other borrow (mutable or immutable) of it, while multiple immutable borrows (&) are fine.
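The rule in its smallest form. This compiles because, thanks to non-lexical lifetimes, the immutable borrows end at their last use before the mutable borrow starts:

```rust
fn main() {
    let mut value = String::from("hi");

    let r1 = &value;       // first immutable borrow
    let r2 = &value;       // second immutable borrow: fine
    println!("{} {}", r1, r2);

    let m = &mut value;    // exclusive borrow: ok, r1/r2 are no longer used
    m.push_str(" there");
    println!("{}", m);

    // println!("{}", r1); // uncommenting this would reject the `&mut` above
}
```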

It’s not that easy to see why this makes sense even for a single thread, so let’s check the next classic example, where Rust doesn’t allow us to fall into the trap. We want to remove all odd numbers from an array.

// will not compile!

fn main() {
    let mut numbers = vec![1, 2, 3, 4, 5, 6];
    for (i, num) in numbers.iter().enumerate() {
        if num % 2 == 1 {
            numbers.remove(i);
        }
    }
}

// error[E0502]: cannot borrow `numbers` as mutable because it is also borrowed as immutable
//  --> src/main.rs:7:13
//   |
// 5 |     for (i, num) in numbers.iter().enumerate() {
//   |                     --------------------------
//   |                     |
//   |                     immutable borrow occurs here
//   |                     immutable borrow later used here
// 6 |         if num % 2 == 1 {
// 7 |             numbers.remove(i);
//   |             ^^^^^^^^^^^^^^^^^ mutable borrow occurs here

numbers.iter() creates an iterator over the values. .enumerate() creates a new iterator that emits tuples with the index first and the value second. The iterator holds a borrow of our vector, and its lifetime spans the whole for loop. Within the loop, numbers.remove(..) needs mut access, so we have overlapping mutable and immutable lifetimes and the compiler doesn’t allow this; see the error message.

We can assume that iterators are faster than index access in loops, because they skip unnecessary bounds checks, while index access performs them, although even those checks may be optimized away (at the LLVM level) if the compiler sees that our index can only be in a valid range. For our example we could use index access.

fn main() {
    let mut numbers = vec![1, 2, 3, 4, 5, 6];
    let mut i = 0;
    while i < numbers.len() {
        if numbers[i] % 2 == 1 {
            numbers.remove(i);
            
            // we need to check `numbers[i]` again
            // no increment here
        } else {
            i += 1;
        }
    }
    
    println!("{:?}", numbers);
}

It got a bit more complicated, but now it compiles. We need to be careful to check the same index again after removing at that position. Another problem here is that numbers.remove(..) fills the gap by shifting all following data to the left. An improvement would be .swap_remove(..), which fills the gap with the last element, but then the order isn’t kept and it prints [6, 2, 4].

For the sake of completeness: the most elegant solution, changing the array in place without copying data or allocating memory, is behind an unstable Rust feature flag. That means we can use it on nightly, but its API may still change.

#![feature(drain_filter)]

fn main() {
    let mut numbers = vec![1, 2, 3, 4, 5, 6];
    numbers.drain_filter(|n| *n % 2 == 1); // predicate closure

    println!("{:?}", numbers);
}
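When we don’t need the removed items themselves, the stable Vec::retain already does in-place filtering and keeps the order:

```rust
fn main() {
    let mut numbers = vec![1, 2, 3, 4, 5, 6];

    // keep only the even numbers, shifting in place, no new allocation
    numbers.retain(|n| n % 2 == 0);

    println!("{:?}", numbers);
}
```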

8. Fearless concurrency for everyone

When you read about Rust you will read about “fearless concurrency”, like right now. Utilizing all resources is getting more and more important when we need performance. CPUs no longer get faster through higher clock speeds; they get more cores, so we can do more in parallel. It would be good if we could write code that executes in parallel. Rust cannot help us find a good parallel algorithm for our problem, but it helps us not to shoot ourselves in the foot. The most famous shared-memory issues simply cannot happen; such code will not compile.

// will not compile!

fn main() {
    let mut counter = 0;
    let t1 = std::thread::spawn(|| { // closure functions look like:
                                     //   |param1, param2| { .. }
        counter += 1;
    });
    
    let t2 = std::thread::spawn(|| {
        counter += 1;
    });
    
    t1.join().unwrap();
    t2.join().unwrap();
    
    assert_eq!(counter, 2);
}

// 5 |     let t1 = std::thread::spawn(|| {
//   |                                 ^^ may outlive borrowed value `counter`
// 6 |         counter += 1;
//   |         ------- `counter` is borrowed here
//   |
// note: function requires argument type to outlive `'static`
// ..

The compiler says that the closure may outlive the borrowed value counter. That’s right, because Rust doesn’t understand that counter should live longer than the threads, even though we join them at the end. std::thread::spawn wants a closure that is self-contained and has a 'static lifetime, which means it has to be able to live forever (lifetimes in Rust look like 'static, starting with a single quote). So our closure cannot capture borrows; it must become the owner, which is done with the move keyword. But we need to do more:

use std::sync::{Arc, Mutex};

fn main() {
    let counter = Arc::new(Mutex::new(0));

    let t1_counter = counter.clone();
    let t1 = std::thread::spawn(move || {
        let mut counter = t1_counter.lock().unwrap();
        *counter += 1;
    });

    let t2_counter = counter.clone();
    let t2 = std::thread::spawn(move || {
        let mut counter = t2_counter.lock().unwrap();
        *counter += 1;
    });

    t1.join().unwrap();
    t2.join().unwrap();

    assert_eq!(*counter.lock().unwrap(), 2);
}

Arc is an “atomically reference counted” smart pointer that brings some garbage-collection feeling. When we .clone() an Arc we get a new handle that references the same data, and we pass those handles into the threads. Arc only gives us immutable access to the inner data, but we want to change it, so we use a Mutex, which gives us mut access via .lock(). That’s okay, because it guarantees that mut access is only possible once at a time.

We cannot write code that has data races, and that’s why it’s called fearless concurrency. No more discussions and documentation reading about “should I use a concurrent map?” or “does my framework synchronize access for me?”.

Finally, let’s take an example where we use two threads to sum numbers. We don’t even need synchronization primitives here, because there is no shared data. .join() returns a Result that contains the closure’s result if the thread didn’t panic.

fn main() {

    let data1 = vec![4, 2, 19, 8];
    let data2 = vec![3, 18, 1, 2];
    
    let t1 = std::thread::spawn(move || -> i32 {
        data1.into_iter().sum()
    });

    
    let t2 = std::thread::spawn(move || -> i32 {
        data2.into_iter().sum()
    });

    let sum1 = t1.join().unwrap();
    let sum2 = t2.join().unwrap();
    let sum = sum1 + sum2;

    assert_eq!(sum, 57);
}
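Since Rust 1.63 there is also std::thread::scope, which lets threads borrow from the surrounding stack frame, because the scope guarantees they are joined before it returns. A sketch of the same summation (parallel_sum is a hypothetical helper):

```rust
fn parallel_sum(data1: Vec<i32>, data2: Vec<i32>) -> i32 {
    // scoped threads may borrow `data1`/`data2` instead of taking
    // ownership, because the scope joins them before it returns
    std::thread::scope(|s| {
        let t1 = s.spawn(|| data1.iter().sum::<i32>());
        let t2 = s.spawn(|| data2.iter().sum::<i32>());
        t1.join().unwrap() + t2.join().unwrap()
    })
}

fn main() {
    let sum = parallel_sum(vec![4, 2, 19, 8], vec![3, 18, 1, 2]);
    println!("sum = {}", sum);
}
```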

9. Macro generated code

Did you notice the exclamation mark in println!(..) or the hash in #[derive(..)]? Welcome to Rust meta programming. Macros are code that runs at build time and may generate additional Rust code. Let’s check what println!(..) does with one placeholder.

fn main() {
    let name = "Michael".to_owned();
    println!("hello, {}", name);
}

This code will not discover {} at runtime and insert the name. It happens at compile time, because macros can do that. The call is expanded to the following code.

fn main() {
    let name = "Michael".to_owned();
    {
        ::std::io::_print(::core::fmt::Arguments::new_v1(&["hello, ", "\n"],
                &[::core::fmt::ArgumentV1::new_display(&name)]));
    };
}

If we forget to pass a value for a placeholder, it will not compile, so we cannot make mistakes here.

fn main() {
    let name = "Michael".to_owned();
    println!("hello, {}");
}

// error: 1 positional argument in format string, but no arguments were given
//  --> src/main.rs:3:22
//   |
// 3 |     println!("hello, {}");
//   |                      ^^

#[derive(Debug)] executes a macro that reads the struct and generates code so that we can print debug information about such an object without writing that code ourselves.

fn main() {
    let user = User {
        first_name: "Robert".to_owned(),
        last_name: "Rust".to_owned(),
    };
    println!("hello, {:?}", user);
}

#[derive(Debug)]
struct User {
    first_name: String,
    last_name: String,
}

// prints: hello, User { first_name: "Robert", last_name: "Rust" }

It expands to:

fn main() {
    let user =
        User {
            first_name: "Robert".to_owned(),
            last_name: "Rust".to_owned(),
        };

    {
        ::std::io::_print(::core::fmt::Arguments::new_v1(&["hello, ", "\n"],
                &[::core::fmt::ArgumentV1::new_debug(&user)]));
    };
}
struct User {
    first_name: String,
    last_name: String,
}
#[automatically_derived]
impl ::core::fmt::Debug for User {
    fn fmt(&self, f: &mut ::core::fmt::Formatter) -> ::core::fmt::Result {
        ::core::fmt::Formatter::debug_struct_field2_finish(f, "User",
            "first_name", &&self.first_name, "last_name", &&self.last_name)
    }
}

Rust macros are very powerful. We can create new DSLs, or let them generate data objects from a static configuration file that is compiled into the binary. We can embed byte data from images or other files. Serde is a crate that generates serialization and deserialization code for structs for many different formats, like JSON, YAML and more.
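We can also write declarative macros ourselves with macro_rules!. Here is a tiny hypothetical map! DSL for building a HashMap:

```rust
use std::collections::HashMap;

// a small DSL: `map! { key => value, .. }` expands to a block
// that creates a HashMap and inserts every pair
macro_rules! map {
    ( $( $key:expr => $val:expr ),* $(,)? ) => {{
        let mut m = HashMap::new();
        $( m.insert($key, $val); )*
        m
    }};
}

fn main() {
    let ages = map! {
        "Paul" => 3,
        "Charlie" => 5,
    };

    println!("Charlie is {}", ages["Charlie"]);
}
```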

10. Testing on board

Rust has visibility qualifiers, pub and friends, to keep things visible in the smallest possible scope. Because we can write tests right next to the implementation, we don’t need to widen the visibility scope. Instead we insert a tests submodule, which has full access to its parent module, our implementation.

fn square_odds(num: u32) -> u32 {
    // bug
    if num == 9 {
        return 9;
    }

    if num % 2 == 1 {
        num * num
    } else {
        num
    }
}

#[cfg(test)]
mod tests {

    use super::*;

    #[test]
    fn test_square_odds() {
        assert_eq!(square_odds(0), 0);
        assert_eq!(square_odds(1), 1);
        assert_eq!(square_odds(2), 2);
        assert_eq!(square_odds(3), 9);
        assert_eq!(square_odds(4), 4);
        assert_eq!(square_odds(5), 25);
        assert_eq!(square_odds(8), 8);
        assert_eq!(square_odds(9), 81);
    }
}

// running 1 test
// test tests::test_square_odds ... FAILED
// 
// failures:
// 
// ---- tests::test_square_odds stdout ----
// thread 'tests::test_square_odds' panicked at 'assertion failed: `(left == right)`
//   left: `9`,
//  right: `81`', src/lib.rs:28:9
// note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
// 
// 
// failures:
//     tests::test_square_odds
// 
// test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

Tests should only be compiled for the test configuration, so we add the #[cfg(test)] attribute, which makes the block disappear in debug and release builds, but not in test builds.

Thanks to the language, we don’t need to put too much effort into testing edge cases at the API level. We have enums, objects and primitive data constructed through an API, we don’t have “nullish”-value traps and we are memory-safe. What we really need to test is our main application logic.

Summary

Many areas of software development didn’t use systems programming languages much, because other languages made things so much easier. Rust tackled many of the problems of systems programming languages and brings full performance. It is currently entering areas where systems programming is still rather unusual, like web and mobile development or desktop applications, not to mention the areas where Rust is already established.

The feedback from teams that adopted Rust is mostly very positive. Even teams that started with just a few Rustaceans reported immediate improvements in software quality, and the teams became comfortable with Rust within a few weeks. The common ground mostly was that writing code takes a bit more time, especially at the beginning, but the outcome is rock-solid software. Bugs were reduced and the software architecture became better.

The fact that Rust needs fewer resources like memory and CPU and has excellent performance can reduce running costs. Another point is lower power consumption. Companies where the energy balance matters may profit from this as well, and when an energy shortage is imminent, perhaps we all do. In the 2017 paper “Energy Efficiency across Programming Languages: How Do Energy, Time, and Memory Relate?” Rust ranked as the second “greenest” language after C. A prettified summary of it can be found here.

Please feel free to reach out to us about any consulting assistance that may help you and your team.

