Armv9: Understanding ARM chips and why they matter
At the end of March, UK-based semiconductor designer Arm announced the launch of its Armv9 architecture. This is the company’s first major architectural update in more than a decade and a significant milestone for the future of semiconductors.
Coming just a few months after the release of Apple’s (ARM-based) M1 chip, it is a statement of intent from the Cambridge-based company, and a further indication that ARM chips are going to play an increasingly large part in the future of computing.
Armv9 is particularly good news for Nvidia, the US semiconductor giant that is currently in the process of acquiring the company, but more broadly it should be good news for the wider industry. The new architectural update aims to improve security and better enable a number of cutting-edge applications, including AI and edge computing. It also targets virtual machine hosting requirements, indicating that Arm is going after Intel’s cash cow: the data centre.
Many technology users will find it hard to get too excited about the launch. Indeed, it’s easy to overlook shifts in semiconductor design and development, even though they are at the very foundations of the technologies we take for granted today. However, to bring some sense of perspective, we’ll explain what you need to know about Arm and ARM processors, as well as some of the key thinking behind the v9 architecture.
What is Arm? And what are ARM processors?
This is actually somewhat confusing to someone new to the semiconductor industry. At its most basic, Arm is a chip design company (its proper name is Arm Ltd.) that does not manufacture chips itself, while ARM (originally Advanced RISC Machines) is a specific architectural design for semiconductors.
That’s simple enough, but things get a little more complicated from there: Arm Ltd. develops the underlying architecture and then licenses it to other semiconductor companies, which build the actual chips.
This is one of the reasons why Apple switched to Arm’s architecture for the M1 chip. It wasn’t so much the specific performance advantages as the fact that licensing gave the company the ability to adapt Arm’s ISA and design its own silicon; using an x86 chip forced the company to rely on Intel. With ARM, Apple could apply its vast resources to developing a new CPU matched to its own requirements, and it already had experience with ARM from the iPhone and iPad.
So, you might find an ARM processor built by another company, but the fundamental architecture and design behind it will have been developed by Arm Ltd.
How are ARM processors different from other semiconductors?
There are a number of different processor architectures available. However, the one you’re most likely to find ARM compared to today is x86. Although it’s possible to go into great technical depth about the differences between ARM and x86, the key thing to understand is that they differ in the complexity of their instruction sets: ARM is a Reduced Instruction Set Computing (RISC) architecture, while x86 is a Complex Instruction Set Computing (CISC) architecture.
As the word ‘reduced’ in the RISC acronym suggests, the decode logic for an ARM instruction is much simpler than for an x86 instruction. To date, that has meant ARM devices compete best where power is the limiting factor. However, it’s worth noting that Apple has demonstrated that ARM designs can also compete on x86’s home ground of single-thread performance: its engineers addressed the potential memory-bandwidth bottleneck by tightly integrating the CPU and memory inside the same package.
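To make the RISC/CISC distinction concrete, here is a toy sketch (not real ISAs, and the operation names are invented for illustration): a single CISC-style instruction that adds a register directly into a memory location corresponds to several simpler RISC-style steps, each of which either touches memory or does arithmetic, but never both.

```python
def run_risc(mem, regs, program):
    """Execute a list of simple RISC-style load/add/store steps (toy model)."""
    for op, *args in program:
        if op == "LOAD":       # LOAD rd, addr  -> register from memory
            rd, addr = args
            regs[rd] = mem[addr]
        elif op == "ADD":      # ADD rd, ra, rb -> register-to-register arithmetic
            rd, ra, rb = args
            regs[rd] = regs[ra] + regs[rb]
        elif op == "STORE":    # STORE rs, addr -> register back to memory
            rs, addr = args
            mem[addr] = regs[rs]
    return mem, regs

# A CISC-style machine might express "add register r1 into memory[0]" as one
# instruction. A RISC-style machine spells out the same effect in three steps:
mem = {0: 10}
regs = {"r1": 5, "r2": 0}
run_risc(mem, regs, [
    ("LOAD", "r2", 0),           # bring the memory operand into a register
    ("ADD", "r2", "r2", "r1"),   # do the arithmetic between registers
    ("STORE", "r2", 0),          # write the result back to memory
])
print(mem[0])  # 15
```

Each RISC step is simple to decode and execute, which is part of why RISC designs can get away with smaller, more power-efficient decode hardware.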
Put crudely then, an x86 chip is incredibly powerful and can perform lots of complex operations. But it does so at a cost: it consumes more energy and is typically much larger than an ARM chip. As computing has become more mobile, and devices have become more lightweight, the need for ARM-based processors has grown over time. It’s for this reason that you’ll find ARM processors in mobile devices, while x86 is typically used in standard laptops (although admittedly that’s starting to change).
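If you’re curious which of the two families your own machine reports, Python’s standard-library `platform` module exposes the raw architecture string; the helper below (a small sketch, and the exact strings vary by OS) maps it to a coarse family:

```python
import platform

def arch_family(machine: str) -> str:
    """Map a raw platform.machine() string to a coarse architecture family."""
    m = machine.lower()
    if m in ("arm64", "aarch64") or m.startswith("armv"):
        return "ARM"
    if m in ("x86_64", "amd64", "i386", "i686", "x86"):
        return "x86"
    return "other"

# Report the architecture of the machine running this script.
print(arch_family(platform.machine()))
```

On an M1 Mac this prints `ARM`, while a typical Intel or AMD laptop prints `x86`.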
Will ARM eventually kill off x86?
There’s a lot of debate about whether ARM will eventually render x86 chips obsolete. Looking at the industry’s trajectory, it makes sense to think so, but it’s important to remember that there’s a lot of software written that can only run on x86 processors. For much of it, the work involved in rewriting vast amounts of software for ARM far exceeds the benefits of making the switch.
x86 will remain a large part of the hardware world, in the same way that older, less fashionable programming languages still play an important part in many software systems.
What’s the thinking behind Armv9?
The press and marketing material for Armv9 is, as you might expect, all about enabling all sorts of future innovations and ensuring continuing performance gains. The company claims, for example, that Armv9 will give “CPU performance increases of more than 30% over the next two generations of mobile and infrastructure CPUs.”
However, what’s most notable about the launch is that it isn’t just about performance, but about ensuring high performance in different contexts and applications. The company’s rather grandiose ‘Total Compute Design Methodology’ is intended to expand the potential of ARM processors not simply by adding more power, but by providing more architectural flexibility.
“As the industry moves from general-purpose computing towards ubiquitous specialized processing, annual double-digit CPU performance gains are not enough,” the company’s press release states. “Along with enhancing specialized processing, Arm’s Total Compute design methodology will accelerate overall compute performance through focused system-level hardware and software optimizations and increases in use-case performance.”
This is good news for anyone working in specialist industries; from Arm’s perspective, it also unlocks many lucrative new commercial opportunities.
Arm also appears to be positioning itself as a tool in the future of security. Its ‘Confidential Compute Architecture’, which it flags as a critical part of the v9 roadmap, “shields portions of code and data from access or modification while in-use, even from privileged software, by performing computation in a hardware-based secure environment.”
It’s perhaps no coincidence that Microsoft has also very recently been pushing the conversation about firmware attacks. While there’s certainly some degree of spin at work, it’s clear that we’re entering a new chapter in which the evolution of hardware becomes an even more important battleground in the cybersecurity world.