Intel Reveals Two New AI-Focused Chips at the Hot Chips Conference

With its new processors, Intel will now be able to climb the artificial intelligence ladder.

In a bid to accelerate training and inference for artificial intelligence (AI) models, Intel has unveiled two new processors. The chips are part of its Nervana Neural Network Processor (NNP) family.

The AI-focused chips, called Spring Crest and Spring Hill, were disclosed on Tuesday at the Hot Chips Conference, held in Palo Alto, California.

The Hot Chips Conference is a tech symposium held annually in August.


Why are these chips important?

AI-focused work is growing each year. Turning data into information, and then into knowledge, requires specialized hardware, memory, storage, and interconnect technologies that can evolve to support new and increasingly complex AI techniques.

The two new chips, part of Intel's Nervana NNP family, are accelerators built from the ground up for AI, designed to give customers the right intelligence at the right moment.

"In an AI empowered world, we will need to adapt hardware solutions into a combination of processors tailored to specific use cases," said Naveen Rao, Intel VP for Artificial Intelligence Products Group.

Rao continued, "This means looking at specific application needs and reducing latency by delivering the best results as close to the data as possible."

What will the chips do?

The Nervana Neural Network Processor for Training (Spring Crest) is built to handle data for several different deep learning models within a power budget, while delivering high performance and improved memory efficiency.

It is designed with flexibility in mind, balancing computing, communication, and memory.

The Nervana Neural Network Processor for Inference (Spring Hill), by contrast, is created specifically for inference, accelerating deep learning deployment at scale. Easy to program, with low latency, fast code porting, and support for all major deep learning frameworks, the chips cover a broad range of capabilities.
