EDA Needs to be Using Rust

Jason McCampbell
8 min read · Jun 12, 2021

tl;dr: The Rust Language offers a potential competitive advantage to development teams by providing compute and memory efficiency on par with C/C++, but with more robust, higher-quality code and better developer productivity. Part 1, here, provides an introduction to Rust’s memory-safety guarantees by way of examples illustrating how it can eliminate hard-to-find memory errors at compile time with no runtime overhead. Parts 2 and 3 will look at Rust’s thread-safety guarantees and the benefits it provides for code reuse.

EDA tools are critical to the design, verification, and manufacture of ICs (chips) and electronic systems. Photo by Denny Müller on Unsplash

What is EDA?

The EDA (Electronic Design Automation) industry produces semiconductor design tools — applications used for designing, simulating, verifying, and testing chips and electronic systems. In other words, the software that underpins the development of everything from smart watches and phones, to servers and self-driving vehicles.

EDA tools come in huge variety, but three common characteristics are:

  • Compute intensive: elapsed runtime is frequently a competitive differentiator and applications are heavily optimized in terms of algorithmic efficiency and efficiency of the code.
  • Memory intensive: explicit control over the layout of data structures and control over memory allocation are required for cache efficiency and managing memory usage in applications with large memory requirements (fitting into “only” hundreds of GB of RAM can be a competitive advantage).
  • High quality expectations: results are expected to be consistent/stable over many releases and, with elapsed times measured in hours or days and total CPU time exceeding a thousand CPU hours per-run in some cases, random crashes cause significant user grief.

In this article I look at how memory safety works with examples of two common errors that can be eliminated at compile-time. I believe this capability can help improve the quality of EDA tools and the productivity of the development teams, and thus be a competitive advantage.

Please note: I compare Rust and C++ a lot here and the intent is not to disparage C++. On the contrary, C++ is the language I’ve used the most and the dominant language for tools such as these, thus the one to beat. Just as C++11 was a giant improvement over earlier C++ versions, which were improvements over C, I believe Rust offers another big leap by building on what we’ve learned as an industry and the processing power available to today’s compilers.

Catching Memory Errors Today

C and C++ are the dominant languages in the core engines in large part because they allow the software to be precisely tuned to maximize hardware performance. Unfortunately, both languages, and C in particular, are also notorious “foot guns”: not only is it easy to make a memory handling mistake, sometimes it is downright hard not to. The C++ RAII model goes some ways towards reducing memory errors, but doesn’t nearly eliminate them. Consider this code:

#include <iostream>
#include <string>

int main(int argc, const char **argv) {
    const std::string *str;
    const std::string arg0(argv[0]);
    if (argc == 1) {
        // Most common case, want it fast
        str = &arg0;
    } else {
        std::string local_var = arg0 + "suffix";
        str = &local_var;
    }
    std::cout << "Result = " << *str << std::endl;
    return 0;
}

This is a silly case, but hopefully not an unfamiliar sort of optimization. And while it’s easy enough to notice that local_var is deallocated too early, when embedded in a more complex function, perhaps with multiple authors and some schedule time pressure, such a mistake is easier to miss.

G++ 10 and Clang 11 both compile this example without complaint, even with all warnings enabled (-Wall). And both code paths in the example appear to run fine on my Linux laptop. Not until it is compiled with AddressSanitizer (ASan) and run with an additional command-line argument is the use-after-free bug revealed. This is important: ASan is a wonderful tool but is only as good as the test vectors used with it; it can’t find bugs which aren’t exercised by the test suite.

Is this really the best we can do?

The Rust Ownership Memory Management Model

The Rust Language originated at Mozilla with one of the goals to improve the reliability of software, particularly in memory safety. Rust achieves the goal of eliminating memory errors, such as in the example above, without the runtime overhead of dynamic memory management (e.g., garbage collection or reference counting). How?

Rust strictly tracks the lifetime of values, including references, to determine when a value can be deallocated and that no dangling references exist. Specifically, the rules can be summarized as:

  1. Every value has a single owner (e.g., a variable or structure field), and the value is released (dropped) when the owner goes out of scope;
  2. At any given time there may exist at most one mutable reference to a value; or
  3. There may be any number of immutable references to a value, and while they exist the value cannot be mutated;
  4. All references must have a lifetime no longer than that of the value being referred to.
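These rules can be seen in just a few lines of code. The sketch below (illustrative names only, not from the examples in this article) compiles because the immutable borrows end at their last use, before the mutable borrow begins; mutating the vector while the immutable borrows are still live would be rejected.

```rust
// Minimal sketch of rules 2 and 3; the names here are illustrative only.
fn demo() -> usize {
    let mut v = vec![1, 2, 3];

    // Rule 3: any number of immutable references may coexist...
    let a = &v;
    let b = &v;
    let total_len = a.len() + b.len();
    // ...but while `a` and `b` are live, `v.push(4)` would not compile.

    // The immutable borrows end at their last use above, so a single
    // mutable reference (rule 2) is now permitted.
    let m = &mut v;
    m.push(total_len);
    v.len()
}

fn main() {
    println!("final length = {}", demo());
}
```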

To help understand how these rules work in practice, consider the case of argc > 1 in the example:

const std::string *str;
...
} else {
    std::string local_var = arg0 + "suffix";
    str = &local_var;
}
...
std::cout << "Result = " << *str << std::endl;

When local_var goes out of scope, the string memory is deallocated even though str still refers to it and the value will be used in the cout call below.

The corresponding Rust code looks like this:

use std::env;

fn main() {
    let strs: &String;
    let args: Vec<String> = env::args().collect();
    if args.len() == 1 {
        strs = &args[0];
    } else {
        let local_var = args[0].clone() + "suffix";
        strs = &local_var;
    }
    println!("Result = {}", strs);
}

Again, local_var is defined with a lifetime that lasts until the end of the local block, and strs holds a reference to it. The big difference is that in Rust this violates rule #4: references cannot outlive the value being referenced. Instead of producing a use-after-free bug which has to be caught by the right test vector at runtime, the error is caught at compile time.
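One way to resolve the compile error, sketched below, is to have the variable own its String so that no reference can outlive the value it points at; `make_result` is a hypothetical helper name, not from the original example.

```rust
use std::env;

// Hypothetical helper: return an owned String rather than a reference,
// so there is no borrow that could outlive its value.
fn make_result(args: &[String]) -> String {
    if args.len() == 1 {
        args[0].clone()
    } else {
        args[0].clone() + "suffix"
    }
}

fn main() {
    let args: Vec<String> = env::args().collect();
    println!("Result = {}", make_result(&args));
}
```

This trades the original "fast path" borrow for a clone; a `std::borrow::Cow<str>` could avoid the copy in the common case at the cost of a slightly busier signature.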

This was a simple example to provide a sense for how the language rules work. The next example will explore a bit more complex case of a memory error occurring across interface boundaries.

Cross-interface Value Lifetimes

Consider this somewhat more complex example (some declarations removed for brevity):

struct SomeType {
    SomeType(const std::string &name) {
        properties["name"] = name;
    }

    const std::string &get_name() {
        auto i = properties.find("name");
        return i != properties.end() ? i->second : default_name;
    }

    void optimize(int level) {
        // ... complex code ...
        auto i = properties.find("name");
        if (level > 2 && i != properties.end() &&
            i->second == default_name) {
            properties.erase(i);
        }
        // ... more complexity ...
    }

private:
    std::map<std::string, std::string> properties;
    static const std::string default_name;
};

const std::string SomeType::default_name{"P.Platypus"};

int main(int argc, char **argv) {
    SomeType val(argv[1]); // Requires at least one arg
    const std::string &orig_name = val.get_name();
    // ... Some code ...
    val.optimize(argc);
    // ... more code ...
    std::cout << "Name was: " << orig_name << std::endl;
    return 0;
}

In this example, get_name returns a const string reference from SomeType, where the lifetime of that string is either static, if the default is returned, or tied to the properties map. This lifetime depends on the internal implementation of the type and yet isn’t explicitly defined as part of the public interface. That is, someone could write a comprehensive set of unit tests for SomeType and not see any issue. Similarly, a developer could write a good set of tests covering main, but not exercise all of the internal states of SomeType. And, yet, there is a use-after-free error lurking when run like this:

./a.out P.Platypus secret-agent

To see how Rust helps with a case like this, here is the equivalent Rust code, also minus a few declarations:

struct SomeType {
    properties: HashMap<&'static str, String>,
}

impl SomeType {
    pub fn new(name: String) -> SomeType {
        let mut prop: HashMap<&'static str, String> = HashMap::new();
        prop.insert("name", name);
        SomeType {
            properties: prop,
        }
    }

    pub fn get_name(self: &Self) -> &str {
        self.properties.get("name")
            .map(|s| s.as_str())
            .unwrap_or(&default_name)
    }

    pub fn optimize(self: &mut Self, level: usize) {
        let name_ent = self.properties.entry("name");
        if let Entry::Occupied(name) = name_ent {
            if level > 2 && name.get() == default_name {
                name.remove_entry();
            }
        }
    }
}

fn main() {
    let args: Vec<String> = env::args().collect();

    let mut val = SomeType::new(args[1].clone());
    let orig_name = val.get_name();

    // ... some code ...
    val.optimize(args.len());
    // ... more code ...

    println!("Name was: {}", orig_name);
}

The behavior is the same, with a type SomeType that returns the name of the value, and that name may be stored in the properties map or may be a static value. However, this example fails to compile with the following error:

error[E0502]: cannot borrow `val` as mutable because it is also borrowed as immutable
--> src/main.rs:38:3
|
36 | let orig_name = val.get_name();
| --- immutable borrow occurs here
37 | // ... some code ...
38 | val.optimize(args.len());
| ^^^^^^^^^^^^^^^^^^^^^^^^ mutable borrow occurs here
39 | // ... more code ...
40 | println!("Name was: {}", orig_name);
| --------- immutable borrow later used here

Since get_name returns a reference to a value owned by val, it is the same as holding a reference to val. Rule #3 requires val to remain immutable for the lifetime of the reference. Thus attempting a mutating operation, optimize, is prevented.

One interesting aspect of this is that simply moving the print statement on line 40 above the call to optimize resolves the compiler error: the reference in orig_name is dropped at its last use, prior to the mutating call to optimize. It isn’t necessary to add the overhead of copying the name. Of course, copying the name is a fine solution as well, if the value is needed later.
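That reordering can be sketched in a self-contained form. The version below trims SomeType down to the essentials (no command-line handling, and a hard-coded default in place of the removed declarations) so it can compile on its own; the immutable borrow from get_name ends at its last use, which is what makes the subsequent mutable borrow legal.

```rust
use std::collections::HashMap;

// Trimmed-down sketch of SomeType; the default name is inlined here
// in place of the static declaration removed for brevity above.
struct SomeType {
    properties: HashMap<&'static str, String>,
}

impl SomeType {
    fn get_name(&self) -> &str {
        self.properties.get("name").map(|s| s.as_str()).unwrap_or("P.Platypus")
    }

    fn optimize(&mut self) {
        // Stand-in for the real optimize: just drop the property.
        self.properties.remove("name");
    }
}

fn demo() -> (String, String) {
    let mut val = SomeType {
        properties: HashMap::from([("name", "secret-agent".to_string())]),
    };
    // The immutable borrow from get_name ends here, at its last use...
    let before = val.get_name().to_string();
    // ...so the mutable borrow required by optimize is now permitted.
    val.optimize();
    (before, val.get_name().to_string())
}

fn main() {
    let (before, after) = demo();
    println!("before: {}, after: {}", before, after);
}
```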

Whatever the solution, the compiler has caught, and prevented, a subtle memory error from being introduced during development, even before testing starts. And, crucially, it did so independent of any test vectors!

Now, if you are wondering, as I did, can the compiler really handle every memory management case that comes up in a complex application, the answer is ‘yes’ and ‘no’. The safety guarantees apply to all code, not just simple cases such as these. However, the compiler is conservative and errs on the side of safety and there are legitimate, safe behaviors that can’t be expressed in Rust. For these the compiler provides an escape hatch: the unsafe keyword.

Unsafe, For When You Really Need It

Seems unsafe, probably warrants a closer review. Photo by Alberto Bigoni on Unsplash

As good as compilers are, their knowledge of a developer’s intent is limited to what can be expressed in the language. In the case of Rust, this can mean needing to perform an operation that is safe, but that the compiler cannot prove is safe.

The unsafe keyword flags a block of code where the restrictions are relaxed to allow, among other things, raw pointers to be dereferenced and unsafe functions to be called, including malloc and free. In other words, unsafe code lets the developer operate at a level similar to C/C++ when needed. And that “when needed” part is the critical bit: such precisely written code is called out with the unsafe keyword so developers and reviewers know it deserves added attention.
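As a minimal sketch, creating a raw pointer is safe; only dereferencing it must be wrapped in unsafe, which keeps the code that needs extra review scoped to a small, clearly marked block:

```rust
fn read_through_raw_pointer() -> i32 {
    let x: i32 = 42;
    let p: *const i32 = &x; // Creating a raw pointer is safe...

    // ...but dereferencing it is not: the compiler cannot prove `p` is
    // valid, so the author asserts it with an `unsafe` block.
    unsafe { *p }
}

fn main() {
    println!("value = {}", read_through_raw_pointer());
}
```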

Summary

We have looked at two examples of memory errors which can occur in C++ and require specific test vectors to expose, and at how Rust’s memory-safety rules prevent them through static lifetime analysis. I believe the memory-safety aspects alone warrant taking a close look at Rust for EDA software development. However, these are not the only benefits Rust provides: Rust also offers similar guarantees for concurrency and, I believe, the potential to significantly increase software reuse within an organization. I will cover these two topics in the following posts.

Jason McCampbell

Software architect with interests in AI/ML, high-performance computing, physics, and finance/economics.