
Apr 3, 2017 | 5 min read

Tech

AI Is Eating Hardware

Tom Morisse

Research Manager


FABERNOVEL
The current wave of artificial intelligence has been fueled by advances in computing hardware, thanks to the advent of GPUs and cloud computing infrastructure. What is remarkable is that AI’s momentum is now so strong that the relationship to hardware has started to reverse: the specific demands of AI are heavily influencing the development of new platforms, from chips to dedicated programming frameworks.

Chips on the table

The essential chips that power servers are the first parts that will set the hardware landscape in motion. Up until now, the CPU (Central Processing Unit) has been the most important chip in both consumer devices – think “Intel Inside” microprocessors – and enterprise ones. The CPU is usually described as the “brain” of a computer: it makes sure instructions are properly executed and coordinates the activity of the other components.
The server market is also the preserve of Intel’s CPUs. Indeed, saying that it is the market leader would be an understatement: it holds a de facto monopoly, with a 99% market share in 2015.

What the rise of machine learning – and particularly its deep learning branch – changes is that these models have different needs from typical programs. To be trained efficiently, they require computers to perform huge numbers of calculations in parallel (especially neural networks, since computing the states of their numerous neurons is a naturally parallel task), whereas a CPU is a general-purpose processor best suited to sequential computations.
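To make this concrete, here is a minimal sketch in Python with NumPy (a library not mentioned in this article, used purely for illustration) of the core computation inside a neural-network layer: one matrix multiplication in which every output value is an independent dot product, so a parallel chip can compute thousands of them at once.

```python
import numpy as np

# A toy fully connected layer: 256 input features -> 1,024 output neurons,
# applied to a batch of 512 examples at once.
batch = np.random.rand(512, 256).astype(np.float32)     # input activations
weights = np.random.rand(256, 1024).astype(np.float32)  # layer parameters

# One matrix multiplication = 512 x 1,024 independent dot products.
# Nothing forces them to run one after another, which is exactly the kind
# of workload a parallel chip handles far better than a sequential CPU.
outputs = batch @ weights
print(outputs.shape)  # (512, 1024)
```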

3 types of chips run machine learning models much more quickly than CPUs:

 

GPUs: the Power of Parallelism

GPUs (Graphics Processing Units) are currently the big deal in machine learning computation. As their name implies, GPUs were created at the end of the 1990s to accelerate video and graphics rendering, hence their proficiency in parallel computation, since they have to render the countless pixels that make up each frame.

NVIDIA is the main manufacturer of GPUs, and as such is duly appreciated by gamers – and now by AI researchers and companies. The firm now tailors GPUs for deep learning use cases and offers accompanying programming frameworks – CPUs and GPUs are not interchangeable, so programs have to be adapted to run on the latter.
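As an illustration of that adaptation step, here is a minimal sketch assuming a framework such as PyTorch (not named in this article) sitting on top of NVIDIA’s CUDA libraries: the model is defined once, but it only runs on the GPU if the code explicitly moves it there.

```python
import torch

# A tiny model and a batch of data, defined once...
model = torch.nn.Linear(256, 1024)
data = torch.randn(512, 256)

# ...but they only run on the GPU if we explicitly move them there;
# the framework then dispatches the work to NVIDIA's CUDA libraries.
if torch.cuda.is_available():
    model = model.cuda()
    data = data.cuda()

outputs = model(data)  # executes on the GPU when one is available, on the CPU otherwise
```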

 

ASICs: the Power of Customization

ASIC (Application-Specific Integrated Circuit) is a general name for a class of chips that are tailor-made for a narrow purpose. For instance, some ASICs are chips optimized for the sole purpose of mining bitcoin.

In the last few years, several startups have begun to tackle the creation of chips dedicated to AI. Nervana Systems, acquired by Intel in August 2016 for $350m, is currently developing an ASIC that promises a “10x increase in training speed” (coming in 2017), and Graphcore plans to introduce its Intelligence Processing Unit in 2017 – with a claim of a 10x to 100x improvement in speed.

Even more interestingly, Google revealed in May 2016 that it had been running an ASIC tailored for machine learning – dubbed the Tensor Processing Unit – for a year within its data centers.

 

FPGAs: the Power of Flexibility

FPGAs (Field-Programmable Gate Arrays) are a type of integrated circuit that can be reconfigured after manufacturing. FPGAs have been around for a few decades, notably used to prototype other processors, but they hold key advantages for AI researchers too: first, they can be customized to suit the AI model at hand; second, they are more power-efficient than GPUs.

The two leaders of the FPGA market are Xilinx and Altera. The latter was acquired by Intel in 2015 for $17bn – the electronics giant, as you will have noticed by now, is clearly hedging its bets.

 

That is not to say that CPUs will disappear: all the aforementioned chips work alongside them. And Intel is also enhancing its high-end server CPUs to better serve the AI market. But the importance of CPUs will no doubt diminish in the foreseeable future.

 

Why you should pay attention to chips – ripple effects

 

1/ The server landscape will change…

Of course, chips are not an end in themselves: assembled with other components in a diverse range of possible architectures, they ultimately make up the server, which is the final deliverable. So the server industry, dominated by the likes of Dell and HPE (and by cloud providers such as AWS and Google that build their own racks), might end up being shaken up.

In this regard, the rise of NVIDIA is noteworthy: it is gradually moving up the value chain with its DGX-1 “supercomputer-in-a-box” (price tag: $129,000), and the company has even built its own full-scale supercomputer out of these machines:

Created and used internally by NVIDIA, the DGX SATURNV bundles 124 DGX-1 nodes and is the most power-efficient supercomputer in the world

 

2/ …which will impact the cloud leaderboard

Large companies and startups may buy hardware directly, but a rising share of their IT workloads runs through cloud providers – and these are the players that scrutinize the development of new chips most closely.

The race is on to offer the best cloud infrastructure to serve AI needs, and several approaches are in competition: Microsoft has started to deploy a new data center architecture that includes FPGAs from Intel’s Altera; AWS has just started offering access to FPGAs – this time from Xilinx; and Google has a proprietary approach with its in-house TPU circuits.

The availability of FPGAs on AWS was one of the major announcements during its last developer conference

 

3/ Standard languages & frameworks will emerge

Hardware, however it is accessed, is useless without the languages and frameworks that make it programmable. And so the definition of these standards is also part of the hardware / cloud battlefield, all the more difficult to grasp since there are several levels of abstraction – high-level programming languages, deep learning frameworks to kickstart neural networks, low-level APIs that directly access the chips…

For instance, there is a competition between OpenCL, an open standard for programming a wide range of parallel-computing chips – GPUs and FPGAs alike – and CUDA, NVIDIA’s proprietary platform, whose goal is of course to help sell its own GPUs… though NVIDIA’s GPUs also support OpenCL.
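To give a sense of what this low-level layer looks like, here is a minimal sketch of a CUDA kernel launched from Python through the PyCUDA bindings (PyCUDA is not mentioned in this article and is used purely for illustration; an OpenCL version would have the same structure with different APIs). It assumes an NVIDIA GPU and the CUDA toolkit are installed.

```python
import numpy as np
import pycuda.autoinit                    # creates a CUDA context on the default GPU
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# The kernel itself is written in CUDA C: this is the NVIDIA-specific layer
# that OpenCL tries to standardize across chip vendors.
mod = SourceModule("""
__global__ void add(float *a, float *b, float *c)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    c[i] = a[i] + b[i];
}
""")
add = mod.get_function("add")

n = 1 << 20                               # 1,048,576 elements, divisible by 256
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
c = np.empty_like(a)

# Launch one GPU thread per element: 4,096 blocks of 256 threads.
add(drv.In(a), drv.In(b), drv.Out(c),
    block=(256, 1, 1), grid=(n // 256, 1))

assert np.allclose(c, a + b)
```

A deep learning framework spares most developers from writing such kernels directly, which is precisely why the choice of frameworks and standards matters as much as the chips themselves.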

 

Takeaways

If artificial intelligence becomes critical to business workflows, then choosing the right hardware architecture – on premises or cloud-based – will be essential to accelerate the transformation.

Nonetheless, we are still in the early innings of AI hardware, and no technological solution has emerged as the victor yet. So whatever the size of your company, blue-chip or startup, here are 3 recommendations to make the most of current resources:

  1. Do not go all in on any technology! Keeping an overall comprehension of the field, and experimenting with diverse options, is the best way to take advantage of future opportunities. In a few words: for now, breadth >> depth.
  2. More than the mastery of singular technologies, the ability to ensure their efficient connection will be key. For instance, could you easily port a machine learning algorithm from a current hardware platform to a new one?
  3. If you need to use a specific technology for a particular use case, cloud providers are the best testbed. They make it possible to truly figure out the ROI of the use case before investing in your own infrastructure, if that makes (financial) sense, or to pivot to another technology if need be.
