Intel and Microsoft look to bring AI to the masses with Meteor Lake chips

Intel aims to put AI-accelerated experiences in the hands of hundreds of millions of people.
Amal Jos Chacko
[Image: circuit board]


Intel unveiled key details about its next-generation suite of PC processors, Meteor Lake, at Microsoft’s Build 2023 conference.

The company claims its “chiplet” system-on-chip (SoC) design will allow it to deliver advanced intellectual property (IP) blocks and segment-relevant performance at a lower power cost. This makes Meteor Lake Intel’s first PC platform to feature a built-in neural VPU that can run AI models efficiently.

The new VPU will work in tandem with the AI accelerators Intel has shipped on its CPUs and GPUs for several generations, making AI-powered features more accessible and effective for PC users.

While computers powered by Meteor Lake chips are months away from hitting the market, Intel isn’t shying away from claiming that its product is at the center of the current tech buzzword, AI.

Intel’s upcoming Meteor Lake client PC processors are the company’s first PC platform to feature a built-in neural VPU: a dedicated AI engine integrated directly into the SoC to run AI models power-efficiently.

At the Taipei International Information Technology Show, or Computex, as it’s popularly known, Intel disclosed that the VPU would be a block derived from Movidius’s third-generation Vision Processing Unit (VPU) design, AnandTech reported. Intel acquired Movidius in 2016, looking to establish itself as a leader in the artificial intelligence market.

Although Intel has not disclosed performance figures or how much of the SoC tile’s die space the VPU occupies, it is expected to exceed Movidius’s most recent rating of 1 TOPS (tera operations per second) of throughput.

Since the VPU is integrated into the SoC, its AI capabilities will not be used as a feature differentiator but will instead be offered in all Meteor Lake SKUs.

Smartphone SoCs have provided similar AI acceleration for a while now, such as Apple’s Neural Engine and Qualcomm’s Hexagon NPU. Intel aims to achieve comparable energy efficiency in tasks like dynamic noise suppression and background blur.

Intel will work closely with Microsoft to scale Meteor Lake and Windows 11 across the ecosystem. “We’re excited to collaborate on AI with Intel with the scale Meteor Lake will bring to the Windows PC ecosystem,” said Pavan Davuluri, corporate vice president of Windows Silicon & System Integration at Microsoft Corp. “Together, we are enabling developers to use ONNX Runtime and related toolchains to run their AI models optimally on the Windows platform.”

The ONNX Runtime is an open-source software library built on top of the Open Neural Network Exchange (ONNX) format, which can deploy machine learning models on a host of hardware platforms, including CPUs, GPUs, and VPUs.

The company expects that moving AI workloads that currently run on servers onto client devices will give it an edge over competitors, with gains including lower costs, lower latency, and improved privacy.
