Turning Down The Power
Chip and system designers are giving greater weight to power issues these days. But will they inevitably hit a wall in accounting for ultra-low-power considerations?
Performance, power, and area are the traditional attributes in chip design. Area was originally the main priority, with feature sizes constantly shrinking according to Moore’s Law. Performance was in the saddle for many years. Microprocessors had to be brawnier and faster all the time.
In this century, power consumption has emerged as the prime directive in chip and system design. Smartphones and other mobile devices initially drove that mandate. But now the emphasis has shifted to ultra-low-power, which is essential for drones, the Internet of Things, robotics, wearable gadgets, and other battery-powered electronics.
“We always think we’re nearing a limit of something,” says Dave Pursley, product management director at Cadence’s Digital Signoff Group. “We’re going to be able to continue further and further down the power slope, but it does involve more and more analysis, and more and more automation.”
The demise of Moore’s Law has been predicted many times. Yet Gordon Moore’s prescient observation in 1965 still holds, with some amendments. Chipmakers are turning to compound semiconductors and 2D materials in silicon’s twilight. And the pace of new process node introductions is roughly every 3.5 to 4 years, rather than 18 months to 2 years, largely because of issues related to lithography and the difficulty in manufacturing 3D transistor structures. But it’s also getting harder to design these chips in the first place, and the chief culprit from the design side is power and power-related effects such as heat and noise.
“One of the big limiting factors is that a designer, especially a digital designer, has a very good intuitive feel for performance and for area impact, silicon impact, but the gut instinct isn’t exactly there for power,” Pursley says. “That is good in some ways for the tool chain, because there’s a level of trust there. The designer may not be doing everything they could be doing earlier on to implement the best low-power design they could. Sometimes, you’re looking for 50% area improvement or power improvement, but you’re far enough along the chain that any chance for improvement had to happen much earlier on in your architectural decisions.”
This is particularly true at advanced nodes, where insulation is thinner, RC delay is a factor in thin wires, and where tolerances for noise and variation are much lower. Power needs to be much more tightly controlled to minimize the impact of heat, which can affect the reliability of chips and shorten their lifespan.
“7nm is an expensive place to be,” says Oliver King, CTO at Moortec. “Constraints are tight and you get less use out of a design than you did at older nodes.”
King notes there are clear benefits to moving to new nodes. But it’s also harder to compensate for power and performance because the manufacturing processes are not yet mature. That translates into more restrictive design rules to compensate for variation, which in turn makes it even harder to optimize for power or performance on the latest processes.
The most advanced process nodes add a few other twists, as well. “Where things get really tricky is the effective resistance of wires,” says Anand Raman, senior director at Helic. “Now you’re dealing with the fundamental physics of Q (the charge), L (length), and the magnetic behavior of the wires.”
Raman observes this isn’t something design engineers necessarily understand until they’re forced to deal with it. “When they’re dealing with failed silicon, they get it.”
And in devices that are supposed to last 10 years or more, such as automotive electronics, that failure can easily come back to haunt a company years after technology ships to end users.
“You need to understand electrical loads and thermal loads, as well as mechanical loads,” says Roland Jancke, head of the department for design methodology at the Fraunhofer Institute for Integrated Circuits. “You also need to understand the degradation effects of a process and the impact of electrothermal effects and noise coupling.”
Power is a critical piece of the puzzle here, and the concern about power is spreading into some unexpected places.
Machine learning, IoT and drones
Battery-powered electronics constantly drive power requirements, because as more features and functionality are added to devices, they still must last at least as long between charges as previous versions, and preferably longer. But power also is becoming a consideration in areas such as machine learning, which is extremely compute-intensive.
“If you think about machine learning, there’s the training part, the inferencing parts, the machine vision being able to react locally on silicon,” says Cadence’s Pursley. “That’s one of the big places where we’ve been seeing a lot of the low-power push. And then for the more wireless technologies, as well, for these ultra-low-power modems, both in the Wi-Fi space and also, of course, for 5G. Everything in the cellphones, and in the IoT.”
So while this is still about power, performance and area, the balance has shifted.
“Area was always king, and now as things move to higher and higher performance, performance tends to be king, so performance becomes the hard constraint,” Pursley says. “And then you’re sort of optimizing for area after that. Power was essentially get what you could get—make sure that the tools were doing what they can to improve power, so find opportunities for clock gating. Have a power-friendly layout. Use multibit cells, and all of those kinds of things. That’s kind of the traditional way where power has come into the PPA. The new trend is where power becomes a primary design metric, and designers can’t afford to hope that the tools can get to the power they want. They need to know ahead of time they’re going to be able to meet their power requirements.”
Others are seeing similar trends. Qazi Ahmed, product marketing manager for the PowerPro RTL Low-Power Platform at Mentor, a Siemens Business, sees power as a significant factor in designing chips for mobile devices, the IoT, and big data analytics, among other applications.
“You have data centers that want to go green, or you have cloud technology,” he says. “But the most demanding thing is the Internet of Things. You have medical devices that you can strap down and you can connect them to the cloud. IoT devices are one of the key applications for low power.”
Mobile devices have incorporated GPS and other features, Ahmed notes. “When you converge all those things into one device, and you want to keep the industrial form factor limited, your devices are getting smaller and slimmer. And you also want to conserve battery life. If you have all those things on one chip, that’s going to consume a lot of power.”
Drones are now an ultra-low-power application, despite their motors and other power-hungry components, according to Ahmed. “Drones have a lot of software code in them. They sometimes have Android-based systems. They have the same kind of code that aircraft used to have. They’ll have flight control systems, they’ll have contour mapping, they’ll have transmitters. Drones run on batteries, and the battery capability that drones have is limited. With the minimal battery life, you have to maximize the drone’s operational cycle.”
Smart vacuum cleaners are another ultra-low-power application. Chip designers turn to traditional techniques to optimize power consumption, according to Ahmed. “Mobile time-to-market is getting shorter,” he says. “With that short time-to-market, people are trying to get reduced power, as well.”
Large data centers have been focused on power for some time. It takes energy to power racks of servers and storage, and it takes energy to cool them—to the tune of millions of dollars per year for large data centers. In addition, keeping servers and other elements cool leads to greater energy efficiency for those facilities. The cost of more efficient electronics, in comparison, is relatively small, which is why this market has become a sandbox for experimentation.
“We’re pushing hard on conventional CMOS devices,” says Gary Bronner, vice president of Rambus Labs. “Tunneling finFETs and negative-capacitance finFETs are among the technologies that could go into ultra-low-power devices, along with fully depleted silicon-on-insulator technology. One aim is to develop devices that can operate on 0.2 volt, rather than the 0.4 volt level at present for conventional CMOS processing. It’s a combination of things. Rambus is sort of a fabless chip company, so we don’t have the ability to play with technology. We take whatever the big foundries give us. So we always want to push the power cycle down as far as we can go. There’s always a tradeoff between active power and standby or leakage power. We’re learning how to do that much better now than we have in the past.”
That has applications in other areas, as well. “The sensors that are going to become ubiquitous everywhere are driving unbelievably low power,” Bronner says. “That’s the bigger challenge out there. If I have 50 sensors in a room, which is not a crazy thing to talk about, and the batteries only last you a year, I don’t want to be the person who’s got to change 50 batteries a year. It’s going to be a nightmare.”
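Bronner’s 50-sensors-per-room scenario comes down to simple duty-cycle arithmetic. The sketch below estimates battery life for an always-on sensor node; all of the numbers (a 220 mAh coin-cell, 5 mA active current, 2 µA sleep current, 0.1% duty cycle) are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope battery-life estimate for a duty-cycled sensor node.
# All numbers are illustrative assumptions, not data from this article.

def battery_life_days(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Days of operation for a node that is active `duty_cycle` of the time."""
    avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    return capacity_mah / avg_ma / 24  # mAh / mA = hours; /24 -> days

# 220 mAh coin cell; 5 mA while sampling/transmitting, 2 uA asleep,
# active 0.1% of the time -> roughly 3.5 years of life.
print(round(battery_life_days(220, 5.0, 0.002, 0.001)))
```

Even at a 0.1% duty cycle, the active bursts dominate the average current here, which is why cutting active power (not just sleep power) matters so much for the “change 50 batteries a year” problem.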
While negative-capacitance finFETs are in their early days, tunneling finFETs are a more mature technology that could be employed in the near future, he notes.
Rambus is considering work on quantum computing, a topic once dismissed as science fiction that is now emerging from its speculative status through work at Google, IBM, Intel and Microsoft, among others. The memory systems for quantum computing will need to operate at very low temperatures. “Devices turn off faster,” Bronner says.
At least part of this emphasis on low power is a function of architecture, namely where processing is actually done. As the amount of data increases, it becomes less power-efficient to move all of that data into the cloud. That changes the picture about how to deal with data most efficiently, both from a processing and storage standpoint.
“Ultra-low-power is typically associated with IoT edge systems, and local computing is increasingly moving into focus to extract information at lower sensor power to limit overheads for communication to the cloud as well as to achieve greater autonomy,” says Rainer Herberholz, director of emerging technology for Arm’s Physical Design Group. “Autonomy mandates that compute systems are always-on, deliver highly scalable performance, and minimize both static and dynamic power consumption. To address this, multiple foundries have re-tuned larger nodes to provide lower voltage and lower leakage. This is a call to action for substantial design optimization. A trillion IoT nodes can’t mean a trillion batteries, so energy harvesting will likely be a driver for ultra-low-power.”
The general consensus is that the best way to reduce power is to lower the voltage, but this is easier said than done.
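The leverage of voltage scaling follows from the standard CMOS dynamic-power relation, P = αCV²f, where α is switching activity, C is switched capacitance, V is supply voltage and f is clock frequency. The sketch below applies that relation to the 0.4 V-to-0.2 V move Bronner describes; the activity, capacitance and frequency values are illustrative assumptions.

```python
# Why voltage scaling dominates power savings: dynamic power scales with
# the square of the supply voltage (P = alpha * C * V^2 * f).
# alpha, C and f below are illustrative, not from the article.

def dynamic_power(alpha, cap_farads, v_dd, freq_hz):
    """Switching power of a CMOS node: activity * capacitance * V^2 * f."""
    return alpha * cap_farads * v_dd**2 * freq_hz

# Illustrative node: 10% activity, 1 nF switched capacitance, 100 MHz clock.
p_04 = dynamic_power(0.1, 1e-9, 0.4, 100e6)  # conventional 0.4 V operation
p_02 = dynamic_power(0.1, 1e-9, 0.2, 100e6)  # ultra-low-voltage 0.2 V target

# Halving V_dd quarters dynamic power at equal frequency -> ratio of ~4.
print(p_04 / p_02)
```

The quadratic payoff is why the consensus favors voltage reduction, and the timing variation and SRAM limits described next are why it is hard to collect.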
“Two key issues are increased timing variation and the challenge of operating SRAM at low voltage, ideally close to the retention level,” says Herberholz. “Multiple start-ups pursue near-threshold microcontroller design using adaptive voltage or frequency scaling or asynchronous design. It is important not to give up the power benefits of SoC integration. Therefore, we need to find robust methods to manage the timing variation and enable predictable and scalable performance without extending the design cycle or putting reliability and yield at risk. The Arm ecosystem plays an important role in moving towards ultra-low-power consumption, from foundries providing low-power processes, to EDA and recommended design implementation methodologies addressing low-voltage challenges, and from new cores and physical IP to new ways to integrate SoCs, and from system-level design to software design. Every piece contributes.”
But not every piece is necessarily available. “With low voltage the key direction, the first limitation customers are aware of is the lack of SRAM IP,” he notes. “It is a common experience that with leakage-optimized nodes like 55ULP and 40ULP, logic could potentially run much faster than the SRAM cycle time at 0.9V. It is critical to realize which components dominate the system power, and hence a battery-operated system is not automatically a candidate for ultra-low-power. Newly emerging applications are generally sensor nodes, not including electric motors, speakers, wireless streaming or continuous lighting.”
This isn’t as straightforward as it sounds, particularly at more advanced nodes.
“Power behavior is highly dependent on the chip activity,” says Preeti Gupta, head of RTL product management in ANSYS’ Semiconductor Business Unit. “Traditional methodologies of identifying appropriate activity modes focus on short-duration windows for power analysis, and they run the risk of missing power-critical events that may occur when the chip is exposed to its real activity. Having early visibility into power and thermal profiles of real-life applications, such as operating system (OS) boot up or high-definition video frames, can help you avoid costly power-related surprises late in the design process. Specialized hardware, such as an emulator, can simulate at a much higher speed, which makes analysis based on real-life applications possible. However, running cycle-by-cycle power analysis of such real application activity can be very compute-intensive and can take days or even weeks.”
This is where RTL simulation fits in, particularly for early power noise and thermal analysis. “The ability to quickly run thousands of RTL vectors with millions of cycles of activity provides several key insights,” she says. “It identifies event activities such as peak switching power (di/dt) that cause large power noise and thermal hotspots. By focusing on power-critical activity areas, you can improve productivity and coverage of transient power delivery network analysis and mitigate risks of design failure. RTL chip current profiles based on real application activity also enable early and accurate co-design of the chip, the package and the board. At the system level, power consumption can have a direct impact on the thermal performance. Understanding the power profile throughout the duration of real-life simulation helps you determine and address areas of the design that are consuming most power and in turn causing thermal issues.”
Large chip designs must be managed, she asserts. “As the chip size and its functionality continue to grow at an exponential rate, the ability to manage capacity while thoroughly analyzing multiple operating scenarios will become an important success criterion. Applying emerging technologies such as elastic computing and big data analytics to RTL power analysis can help manage such complexities.”
Power is the top design challenge at 7nm, but it also is becoming a challenge even at older nodes, where conserving battery life is a competitive advantage. The best way to deal with it is to predict power consumption early, understand where power is wasted, and increase coverage of power noise and thermal analysis by profiling power across real applications.
The good news is there still is plenty of room to run when it comes to designing chips and systems for ultra-low-power applications, and IC designers have a growing set of analysis tools and other techniques to help. But while there used to be only one or two power experts inside large chipmakers, power expertise is rapidly becoming a required competency for anyone working at advanced nodes. And far more designs not only require it, but need to push power down much further than in the past.
—Ed Sperling contributed to this report.