Google’s Project Zero reports that memory safety vulnerabilities—security defects caused by subtle coding errors related to how a program accesses memory—have been “the standard for attacking software for the last few decades and it’s still how attackers are having success”. Their analysis shows two thirds of 0-day exploits detected in the wild used memory corruption vulnerabilities. Despite substantial investments to improve memory-unsafe languages, those vulnerabilities continue to top the most commonly exploited vulnerability classes.
In this post, we share our perspective on memory safety in a comprehensive whitepaper. This paper delves into the data, challenges of tackling memory unsafety, and discusses possible approaches for achieving memory safety and their tradeoffs. We’ll also highlight our commitments towards implementing several of the solutions outlined in the whitepaper, most recently with a $1,000,000 grant to the Rust Foundation, thereby advancing the development of a robust memory-safe ecosystem.
Alex Rebert and Christoph Kern at Google’s blog
Even as someone who isn’t a programmer, it’s impossible to escape the rising tide of memory-safe languages, with Rust leading the charge. If this makes the software we all use objectively better, I’ll take the programmers complaining they have to learn something new.
So many of us have been pointing the finger at unsafe languages for years, yet their dominance persists. That they have remained dominant since the beginning despite continuous bugs and exploits is mind boggling. Is it possible we are finally starting to turn a new leaf? I’d like to be optimistic, but until I see meaningful changes on the ground, it’s hard to say definitively that we’re ready to leave unsafe languages in the past.
Of course, the inclusion of Rust in Linux and Windows in any way, shape, or form is encouraging.
https://www.theregister.com/2023/04/27/microsoft_windows_rust/
The incremental approach that’s likely to happen for the foreseeable future will have safe languages intertwined with C interfaces and structures at the core, which honestly isn’t ideal. Still, maybe this is the best we can do for now, and any progress is better than none. What other option do we have? Switch to new operating systems that are designed with safe language primitives from the ground up? (cue sarcastic laughing)
One of the reasons that unsafe languages have persisted for so long may simply be that old graybeards are notoriously stubborn about change. I wonder if their retirements could be accelerating interest in safe languages? As leadership positions are filled by new generations, they could place higher value on the need for safe languages.
The graybeards are just loath to jump on the latest fad. They aren’t going to invest time and energy into learning an entire new ecosystem, not just a language, because of what it promises it can do. It needs to be proven absolutely bulletproof. On the other end of the spectrum you have people who play with every new language that comes out.
Rust is definitely getting there, but it’s happening slowly. And frankly, I think that’s a good thing, not a bad one.
cmdrlinux,
I understand where you are coming from; memory-safe languages aren’t perfect. But as I see it, they are so much better than the unsafe languages they’d be replacing, which have cost us dearly in regular hacks, private data leaks, ransomware forcing hospitals to shut down, and so on. At what point do we commit to replacing these unsafe languages once and for all?
The daily deluge of breaches and vulnerabilities probably helps.
Also, it’s not like there weren’t memory safe options during their time. Ada is super old.
FOSS is probably the biggest factor in the newer languages. There is very little to lose in testing out a new language when the compiler or runtime is free. Instead of having to invest several thousand dollars and time into a new SDK, it’s a little bit of time now.
Also, the developer productivity of languages like Python, Ruby, Perl, PHP, C#, and Java has been fairly well established. Their biggest problem is they are GC’ed and run in a runtime environment, which is fine for services.
A more ergonomic C or C++ needs to be compiled to machine code at a minimum, and ideally wouldn’t need a garbage collector.
Something combining C and Ruby is a gap that hadn’t been bridged until recently.
Alfman,
I take offense at this
One of the main reasons we stayed with engineering languages like C++ was that they allowed no-compromise high-level coding. In fact, it was one of the original design goals of the language: you don’t pay for features you don’t use.
For example:
1. If you don’t use virtual methods, your classes will be laid out exactly like plain old C structs, even with inheritance. Swap runtime dynamic dispatch for compile-time polymorphism (in other words, templates, the STL, and “algorithms”) and you get the best of both worlds.
2. If you don’t need bounds checking, the vector will happily let you access any element directly. And if you really do need bounds checking, there is the at() variant of indexing.
3. If you don’t want exceptions, run-time type information, or many of the other optional features, they simply won’t be compiled in.
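For comparison, Rust inverts the default in point 2: indexing is checked unless you explicitly opt out. A minimal sketch, assuming nothing beyond the standard library (function names are mine):

```rust
// Checked by default: an out-of-bounds access returns None instead of
// reading past the buffer.
fn checked_get(v: &[i32], i: usize) -> Option<i32> {
    v.get(i).copied()
}

// Explicit opt-out, mirroring C++ operator[]: the caller must guarantee
// i < v.len(), and the `unsafe` block makes that responsibility visible.
fn unchecked_get(v: &[i32], i: usize) -> i32 {
    unsafe { *v.get_unchecked(i) }
}
```

So the at()-versus-[] choice exists in both languages; the difference is which behavior you get when you aren’t thinking about it.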
However it is easy to see each one of these causing issues. The first only causes complicated error messages, but the rest are possible avenues for high profile security bugs.
And what has changed?
1. Modern compilers can now automatically detect cases where, for example, dynamic dispatch is not necessary, and can deduce the exact variant behind a virtual call.
2. Modern compilers can enforce bounds checks at compile time, or optimize them down to one check per loop, or similar. In the past it was also possible to use better algorithms in C++ to do this, but that once again required heavy use of templates, and it was optional, meaning developers avoided it.
3. Error checking, especially the recent kind around concurrency, is now built in and a bit more efficient. Newer compilers, like recent Swift, will tell you which objects are erroneously shared between concurrent paths, and will crash at unsafe memory accesses instead of causing silent corruption.
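Point 1 is visible in Rust as well, where the choice between monomorphized and vtable dispatch is explicit in the signature rather than left to the optimizer. A small sketch (type and function names are mine):

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Circle {
    r: f64,
}

impl Shape for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.r * self.r
    }
}

// Monomorphized: the compiler emits a direct, inlinable call per concrete type.
fn area_static<S: Shape>(s: &S) -> f64 {
    s.area()
}

// Dynamic: the call goes through a vtable unless the optimizer can devirtualize it.
fn area_dyn(s: &dyn Shape) -> f64 {
    s.area()
}
```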
Overall, C++ has served a good purpose, but its time is nearly complete. It can still evolve to modernize and stay relevant, or it can become the next COBOL.
(I purposefully avoided Java in this discussion).
sukru,
It’s useful to have syntactic sugar that does this. I’d like it to go even further, giving the compiler more flexibility to decide the best structures to use, but I can’t cover that topic right now.
Yes, that’s a perfect example of a primitive that makes C/C++ unsafe. Of course there’s no point in bounds checking when the accesses are all in bounds, which is your point. However, the goal should be to make the compiler prove that, not fallible programmers. Usually it’s not a case of needing some kind of black-magic AI to do this, but of having enough information either specified or gleaned, and following rules very consistently – something computers are much better at than humans. Ideally the compiler would even be able to evaluate these proofs across procedures, although I believe most compilers today only perform interprocedural analysis when inlining is used.
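As a concrete illustration of “the compiler proves it” (a sketch in Rust; the function names are mine): when the loop bound comes from the slice itself, the per-access check is provably redundant and the optimizer removes it, and iterator style sidesteps indices entirely:

```rust
// Indexed loop: each v[i] is nominally bounds-checked, but because the loop
// condition already establishes i < v.len(), the optimizer can prove every
// check redundant and drop it.
fn sum_indexed(v: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..v.len() {
        total += v[i];
    }
    total
}

// Iterator style: no indices at all, so there are no checks to elide.
fn sum_iter(v: &[u64]) -> u64 {
    v.iter().sum()
}
```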
Indeed, IMHO we should be migrating to languages that make memory corruption impossible!
Java and C# are good languages (I wouldn’t necessarily say the same about the politics behind them, but that’s a tangent…). Technically, however, as runtime-managed languages they’re not suitable everywhere C is. This is where compile-time safety verification stands out. I know you are aware of this, but I don’t think everyone is.
Alfman,
Yes. That is why I found “C with objects” very cumbersome. It had personally bitten me in the past, but still both Linux Kernel and GTK prefer to use that instead of the native compiler supported classes in C++.
Exactly.
In most cases, except for the final one, there is no need for a bounds check. As long as the range’s upper limit fits, the rest automatically will, and in those cases a modern compiler can easily prove this.
But that requires:
1) Access patterns and memory layouts to be known beforehand
2) Making sure there are no other concurrent accesses to that array (no aliases, no references – even stricter access than a unique_ptr)
When it can, a modern compiler like Rust’s works really well.
However, if you need to go beyond those bounds (pun intended), it becomes cumbersome. For example, if you want to do in-place mutation of a dynamic array (say, expand its dimensions by replicating values – broadcast, a common operation in machine learning), it adds additional complexity. Then you’d need to:
1. Try to find an “escape hatch” into C++
2. Write an inefficient version
3. Return a copy instead of mutating the original
Though 3 could be desirable in many cases:
https://github.com/huggingface/candle/blob/74bf6994b172f364c6e8bea2ac6e1bfbc6ca0c25/candle-core/src/layout.rs#L148
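Option 3 can be sketched like this – a hypothetical broadcast-by-copy helper of my own, not the candle implementation linked above:

```rust
// Replicate a row `times` times into a fresh buffer instead of mutating the
// original in place. The borrow checker is satisfied because `row` is only
// read and `out` is freshly owned, so no aliasing can occur.
fn broadcast_row(row: &[f32], times: usize) -> Vec<f32> {
    let mut out = Vec::with_capacity(row.len() * times);
    for _ in 0..times {
        out.extend_from_slice(row);
    }
    out
}
```

The copy costs an allocation, but it keeps the safe-by-construction property without any escape hatch.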
sukru,
I can see why we may need more safe primitives that align with our needs as developers, though I still stress the importance of not relying on manual code verification for undefined behaviors and memory faults. For the industry to get beyond these faults it must be automated! I think Rust has opportunities to evolve further, but generally the goal should be that the compiler only emits checks where it cannot mathematically prove the code is free of memory faults. And I believe compilers are evolving past the human ability to solve such proofs manually, to the point where any extra runtime checks emitted by safe languages could not be deemed mathematically unnecessary by human or machine.
To be clear, I understand things aren’t settled and there’s more work to be done, but nevertheless I think it’s fair to say great progress has already been made. It’s just hard to build industry-wide momentum when unsafe languages still have the lion’s share of deployments and resources. This needs to change. Many aren’t exactly comfortable with this, but for safe languages to grow, unsafe languages need to get displaced from their trenched positions.
One of the things I find interesting is that I notice that modern C++ is adopting some very Rusty features — ‘safe’ and ‘unsafe’ code, stuff like that. https://thenewstack.io/can-c-be-saved-bjarne-stroustrup-on-ensuring-memory-safety/
Re: switching to new OSes that are designed with safe language primitives… don’t tease us all! What I find crazy is that we’ve known about various solutions for a while, and we’re adopting them soooooo slowly. Harvard architectures make code injection more challenging (though not intractable, https://stackoverflow.com/questions/954556/are-harvard-architecture-computers-immune-to-arbitrary-code-injection-and-execut ), and just the other day there was an entire computer architecture that implemented object capabilities for memory access: https://hackaday.com/2024/03/13/the-flex-computer-system-uks-forgotten-capability-computer-architecture/ (ocaps are considered the gold standard for avoiding all sorts of unintentional privilege escalation – when designed right). They seem so obvious; I wonder where the friction preventing adoption is.
Still there are some promising signs — modern POSIX systems implement capabilities at the software layer, at least, to make process-to-process memory access more challenging. It’s not a true object capability system from top to bottom, but it’s a start.
skeezix,
Thank you for the link; stuff like this is worth following since C++ is an evolving language. IMHO C/C++ still have too much ugly legacy cruft in ways that aren’t necessarily related to safety. Include files and the preprocessor made more sense when computers had scarce compiling resources, but they’re notoriously awful today compared to languages with real modules. Class templates are too complicated. The lack of reflection is limiting. Once you clean out the legacy cruft, you end up with a new language. To this end I kind of prefer D-lang, which has already done much of the legwork. My biggest gripe with D-lang is the garbage collection; unfortunately this makes it unsuitable as a complete replacement.
I’m not really familiar with Harvard architectures; I’ve only heard about them in theory. In a way, read-only code segments kind of mimic the idea, but obviously anything running in ring-0 could defeat it.
Interesting, I’ve never heard about this one.
Absolutely. The original unix relied way too much on sudo and root instead of capabilities. On a similar note, I started playing around with Linux namespaces – instead of the OS prohibiting actions, they would take place inside the application’s own namespace, isolated from everything else. These namespaces make very effective sandboxes and I made use of them in some of my software. However, when systemd took over the namespace functionality, my software broke. It’s too bad namespaces got taken over by systemd, but if I wanted to target systemd I think there’s an API that might work now – I have to revisit it.
>Still maybe this is the best we can do for now and
> any progress is better than none. What other option
> do we have?
Pretty much. Evolving systems in place is the easier route than a blood-soaked revolution where the old system is ripped out and replaced. Ex: X and Wayland vs. PulseAudio and PipeWire.
> Switch to new operating systems that
> are designed with safe language primitives from
> the ground up?
Switching to new processors which are smarter about memory management and aren’t built around C would be a good place to start.
I completely understand that; people are heavily invested and don’t want to rebuild. The issue, though, is that these incremental fixes might not give us as good a result as doing it right from the start. This is the software parallel to the Millennium Tower debacle.
https://www.nbcbayarea.com/investigations/series/millennium-tower/san-francisco-millennium-tower-more-tilting/3249034/
I view it as a language problem, but I’m curious what might a CPU do better? I know things like Lisp processors existed back in the day. CS could be a very different field if prolog or scheme had won. Maybe haskell?
That is a question for someone who knows something about how chips work. I can only repeat what I’ve heard others mention and speculate on features which could be added.
I saw someone mention modern MMUs would function better with a functional language due to the way memory is accessed, if the proc wasn’t pretending it was a PDP-11.
Next, the MMU could enforce the memory boundaries since it is the thing allocating the memory.
Haskell would definitely have been different. LOL
The DoD sticking to Ada for systems probably would have made a difference.
Maybe Pascal would have been better. Smalltalk winning would have been pretty wild.
While I can’t comment on general memory safety issues, I can definitely say that if you want to be “safer” on the Internet, you need to disable anything Google. Unless you don’t care about your own privacy data.
chriscox,
I think Google is easy to hate. However, the real trackers you have to worry about are usually not even known publicly.
With Google, you can ask them not to track you, turn off history features, and limit ad personalization.
However an ad targeting 3rd party will even use other devices at home to track entire families based on IP addresses, among other things:
https://www.accudata.com/blog/ip-targeting/#:~:text=In%20marketing%2C%20an%20IP%20address,you're%20trying%20to%20reach.
I would definitely not want you to look up how multiple databases are “joined” on simple identifiers like “First Initial + Lastname + Zip code” to find all personal information about individuals:
https://news.ycombinator.com/item?id=2942967
> I’ll take the programmers complaining they have to learn something new.
People have finite time.
These “memory-safe” features aren’t free; they require a lot of bounds checking at runtime. Which is not something that should be done in runtime code, for performance reasons.
If the coder is so inept that they really need this, the program is probably not worth using because (a) the performance will be poor; and (b) likely it is riddled with various other bugs.
Minuous,
Look at the problem more closely. I would posit that Rust does not require more bounds checking at runtime than correct C code does. The lack of bounds checking on user data is precisely the kind of coding mistake that leads to memory faults in production environments. The inability to prove correctness is a bad thing.
A second point is that a lot of runtime checks can actually be optimized away and incur literally zero runtime cost. Rust does not work like old managed languages where every access is enforced at runtime. Even when there’s no runtime cost it’s good to have a compiler that can check no mistakes were made.
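A small illustration of that second point, assuming nothing beyond the standard library (the function name is mine): one explicit check up front lets the optimizer discharge the individual per-index checks that follow.

```rust
// One explicit length assertion up front; after it, v[0], v[1], and v[2] are
// all provably in bounds, so the optimizer can drop their individual checks.
// The safety property is still verified; the cost has been hoisted to a
// single branch.
fn first_three_sum(v: &[u32]) -> u32 {
    assert!(v.len() >= 3);
    v[0] + v[1] + v[2]
}
```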
To err is human, and software developers working at major tech companies are no exception to this. It’s always the same story: just hire humans that don’t make mistakes. But we’ve had decades upon decades of empirical evidence showing that this doesn’t work and mistakes WILL be made. This is what makes languages that protect us from these mistakes so useful! Certainly there are other kinds of bugs, but that’s not a reason not to solve the ones that can be solved.
User input should be validated/sanitized of course, that’s not really what I was referring to. I don’t see how these languages are proving the correctness of anything, just by papering over the cracks in the code. Mistakes can be made but that is what the testing and debugging process is for.
Minuous,
Humor me: in what cases would you want memory faults to go unchecked? Rust has gone to great lengths to build zero-cost abstractions for exactly the reasons you are bringing up.
Debugging is great and all, but a compiler error can save hours of effort and find memory faults that you may not even be testing for until somebody reports bugs in production. Unfortunately this happens all the time from games to productivity apps to drivers, etc. 1) we need to stop denying the problem exists and 2) we have the means to stop software memory faults and it’s not expensive…let’s use it!
Minuous,
Most bugs are very hard to debug, especially those in complex systems with a myriad of frameworks from different providers, nested algorithms, and heavy use of concurrency.
(Even simple things like this.mutating_function(input, &this.output) are hidden concurrent accesses, since we don’t know the order of updates to the output member and to this inside mutating_function.)
Hence, very simple things like sanitized wrappers have little compile overhead, and can have zero runtime overhead if designed well, but they require coder discipline to use.
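In Rust that discipline is enforced rather than requested. A minimal sketch of the aliasing pattern above (the type and values are mine), which the borrow checker rejects at compile time instead of leaving as a latent bug:

```rust
struct Widget {
    output: Vec<i32>,
}

impl Widget {
    // Takes `&mut self`, so no other borrow of `self` (including one of
    // `self.output`) may be live across this call.
    fn mutating_function(&mut self, input: i32) {
        self.output.push(input * 2);
    }
}

// The aliased variant does not compile:
// let mut w = Widget { output: Vec::new() };
// let peek = &w.output;          // shared borrow of w.output...
// w.mutating_function(1);        // ERROR: cannot borrow `w` as mutable
// println!("{:?}", peek);        // ...still live here
```

The ordering ambiguity simply cannot be expressed: either the mutation happens, or the outstanding reference does, never both at once.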
Modern computers have a lot of extra cycles, and we’re finally at a point where the performance impact of security features isn’t as costly as it once was.
We’re also at a point where security can’t be ignored any longer, so we now have to clean up 50 years of ignoring security.
LOL I’ve seen some dumpster fires of code bases make lots of money, so yeah, more ergonomic programming languages are really needed to help those poor, inept coders out there. (Hint: It’s everyone.) LOL
There were many more advanced programming languages and computer systems which took security seriously, but everyone picked C, C++, and Unix. Security issues are nothing new, people have known about them since the beginning. Most people just didn’t take them seriously.
Anyway, people are finally starting to research safer replacements for C and C++. Or making all the same mistakes. LOL
Sorry, I meant “production code”, not “runtime code”. Please bring back the edit button Thom
I agree that sponsoring a FOSS project with money is a good thing, and the boilerplate PR announcement that goes with it can be considered justified. Still, if Google is serious, they need to start delivering some major driver code written in Rust for their Google Pixel line of mobile devices, or similar. Without that it’s all just theoretical advantage, as C will stay the preferred choice.
Geck,
While it is a good sentiment to rewrite existing code on the “new shiny platform”, it is usually costly and error prone – especially so for hardware drivers.
What works better is generally adding new code in the new language while still interfacing with the legacy, proven codebase, as it has been mostly bug free after years of in-production testing and repeated reviews.
Exactly. That is why new and modern hardware, and new device drivers for it, are perfect candidates for experimenting with Rust in the GNU/Linux ecosystem. Here is where Google can make a real difference and move things forward. The rest is more or less wishful thinking.
Geck,
While writing only new drivers in Rust and keeping old ones in C is backed with good intentions, it imposes technical costs. The C interfaces and structures can compromise the safety goals that a native Rust project would allow for. A kernel designed to be safe from the ground up would be better… except for the minuscule chance it has of reaching critical mass. The critical mass that Linux has gives it a huge advantage over new contenders, even if they have lots of merit.
So then C it is, paired with an occasional million-dollar donation to the Rust project and a boilerplate PR about how safe Rust really is, with GNU/Linux not benefiting from it in any meaningful way beyond some experimental code and a blog post. As for another attempt to burn money on developing an in-house GNU/Linux competitor: if that happens again, let’s see if Google will use Rust for it.
Geck,
Believe it or not, I think Google engineers might be more progressive on this front. They’re relatively young, with a lot of turnover. For all of their other faults, I suspect they may be statistically more likely to favor Rust.
Alfman, Geck,
This also has the additional side benefit of providing a (potentially) massive library of Rust based device drivers, along with implementation of basic kernel structures.
In the past, I benefited from reading Linux and BSD kernel drivers when writing my own low-level hardware code. And future OSes, like Fuchsia, can see actual benefits.
(Yes, Fuchsia has a different license; however, reading GPL code to learn is, as far as I know, okay.)