After months of teases, previews, peeks, and drone shows, we finally got our first deep dive into Intel’s upcoming Xe HPG graphics card platform, complete with roadmaps for current and next-generation projects, detailed breakdowns of the technology inside its upcoming discrete graphics cards, and a glimpse of what the company has planned for the future of desktop gaming. Intel’s 2021 Architecture Day was a veritable treasure trove of info on how the company plans to compete with Nvidia and AMD in the GPU market, as well as in integrated graphics and enterprise-level solutions.
So what’d we learn over the past few days about Intel’s GPU ambitions, and should Teams Green (Nvidia) and Red (AMD) be shaking in their proverbial boots? Let’s dig into the details and see what Blue’s got in store for 2022.
Intel’s Architecture Day 2021: A Bit About the Arc
For starters, let’s temper expectations: Intel revealed almost nothing this week about its upcoming GPUs’ specifications, shroud design (intentionally, at least…more on that in a minute), or pricing. To Intel’s credit, this makes sense for an event with such an on-the-nose name as “Architecture Day,” especially considering what we did learn during both the initial presentation on Tuesday and an hour-long “Office Hours” session held yesterday morning.
First up: “Intel DG2” is dead; long live “Alchemist.” Falling under Intel’s growing ranks of “Xe”-branded graphics products, Alchemist is the code name for the graphics architecture powering the first generation of discrete graphics cards based on Xe HPG (“high-performance gaming”), which themselves will sell under Intel’s “Arc” badge.
Confused yet? Us too. From what we can grok, this means we will likely see “Intel Arc XXXX” graphics cards when they launch sometime in Q1 of 2022, based on an Alchemist-generation GPU, much like the Nvidia GeForce RTX 3080 Founders Edition is based on the company’s “Ampere” generation of GPUs, or AMD’s RDNA cards use GPU dies with the “Navi” code name attached. Alchemist is just the first in a planned line of generational improvements to Intel’s graphics core framework, which will be followed by “Battlemage” in Xe2 HPG cards, “Celestial” on Xe3 HPGs, and “Druid” on Xe Next Architecture after that.
Following up the OEM-only DG1, which ran on an Xe-LP (“low-power”) chip, Intel promises a 1.5x uplift in performance and clock frequency for Alchemist-based Xe HPG GPUs. Its engineers say they accomplished this through “optimizations of architecture, logic design, circuit design, process technology, and software.” Alchemist GPUs will be built around what Intel calls the “Xe-core,” which the chip maker describes as “a compute-focused, programmable, and scalable element,” though not many more technical details were shared beyond that.
One thing we can say with certainty: each Xe-core will be paired with one “ray-tracing unit” (Intel gave no specifics on the compute-unit count inside), and those pairs will be grouped four apiece into each “Render Slice.” Render Slices will serve as the scalable building block for both discrete Xe HPG designs and Xe HPG deployments in data centers. The ray-tracing units inside these slices are confirmed to support DirectX- and Vulkan-based ray-tracing implementations.
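To put that building-block math in concrete terms, here’s a minimal sketch (our own illustration, not Intel code) of how execution resources would tally up as Render Slices are stacked. Only the four-Xe-cores-and-four-ray-tracing-units-per-slice figure is confirmed; the engines-per-core constant below is a placeholder assumption for the sake of the example.

```python
# Illustrative scaling math for Xe HPG's "Render Slice" building block.
# Our own sketch, not Intel code: only the 4 Xe-cores and 4 ray-tracing
# units per slice are confirmed; engines-per-core is a placeholder.

XE_CORES_PER_SLICE = 4           # confirmed: four Xe-cores per Render Slice
RT_UNITS_PER_SLICE = 4           # confirmed: one ray-tracing unit per Xe-core
VECTOR_ENGINES_PER_XE_CORE = 16  # hypothetical placeholder, not confirmed here

def gpu_totals(render_slices: int) -> dict:
    """Tally execution resources for a hypothetical Alchemist configuration."""
    xe_cores = render_slices * XE_CORES_PER_SLICE
    return {
        "render_slices": render_slices,
        "xe_cores": xe_cores,
        "rt_units": render_slices * RT_UNITS_PER_SLICE,
        "vector_engines": xe_cores * VECTOR_ENGINES_PER_XE_CORE,
    }

# A hypothetical 8-slice part would pack 32 Xe-cores and 32 ray-tracing units:
print(gpu_totals(8))
```

The point of a design like this is that Intel can scale one blueprint from laptop chips to data-center parts just by changing the slice count.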
Also on the hardware side, in news that should come as a surprise to everyone but the rumor-mill faithful, Intel will tap Taiwan-based TSMC for production, using the foundry’s 6-nanometer “N6” process to manufacture all of its Xe HPG GPUs.
Actual Card Details? Thin So Far
As for what the design of the new graphics card (or cards) will look like, we have a few hints, but nothing technically “confirmed” as of yet. For starters, there was the GPU that showed up in the skies the other day in an Intel-provided recording: 1,000 tiny aerial drones choreographed into a fireworks-style display of Intel branding. Intel, though, wouldn’t say one way or the other whether this was the official planned design.
Beyond that, no official images or renders were released. However, an eagle-eyed member of the virtual audience happened to notice something sitting in the background of Raja Koduri’s video (taken inside Intel’s Folsom graphics unit) during the follow-up Q&A session…
Koduri heads Intel’s graphics group. Intel wouldn’t confirm in subsequent sessions whether showing the cards off was an accident, but look closely and you can see three of them in frame: two by Koduri’s right shoulder, and another above his head. The first two looked familiar, and Koduri confirmed they were lab models of the DG1 when I asked. Was the card up top a production model of Alchemist? I couldn’t get anyone on the team to say.
DirectX 11 Could Be a Thorn in Intel’s Side, Too
Speaking of things I couldn’t get more clarity on, one possibly worrisome aspect of the presentation was Intel’s specific focus on “optimization for DirectX 12 Ultimate Gaming” in several slides. When pressed during Q&A, Intel refused to get specific about any performance numbers in DX12, or how they might differ across versions of DirectX. This has us worried the company might be running into the same kinds of driver issues that have dogged AMD’s Radeon cards over the past few years.
Radeon RDNA-based cards have struggled with some DirectX 11 games in particular for some time now, and if our investigation into the subject (or our recent review of the midrange AMD Radeon RX 6600 XT) is any indication, the problem doesn’t look like it will let up anytime soon. Could Intel’s driver team be hitting the same roadblocks as AMD’s? Only time and testing will tell.
But enough about non-specs and potential DX11 pitfalls. Will Intel bring anything else to the GPU table? A DLSS competitor, perhaps?
Meet XeSS: Intel Joins the Supersampling Ranks
These days, you can’t really run a graphics unit at your company if you don’t have some kind of supersampling tech on the horizon, can you? If the number of supersampling options out there is starting to sound, ahem, “XeSSive” to you, then fear not: Intel is setting out to (hopefully) develop the last one you’ll ever need.
Well, maybe. If there’s anything I’ve learned in my years of covering, and relentlessly testing, supersampling technologies since they first debuted with Nvidia’s DLSS, it’s that the marketing rarely meets reality. (Still waiting on that DLSS update for PUBG, Nvidia!) In case the pun above wasn’t enough to clue you in, Intel’s new supersampling technology, dubbed “XeSS” (for “Xe Super Sampling”), will use both temporal and static image reconstruction to upscale lower-resolution frames, improving image quality and frame rates in games. That reconstruction will be driven by an as-yet-unnamed Intel-trained neural network, in the same vein as Nvidia’s original approach to training its first DLSS 1.0 network.
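Intel hasn’t published how XeSS works under the hood, but temporal reconstruction techniques generally follow a recognizable upscale-reproject-blend loop. Here’s a minimal conceptual sketch in Python (our own illustration, not Intel’s algorithm; a fixed blend weight stands in for the trained network):

```python
import numpy as np

def temporal_upscale(low_res_frame, history, motion_vectors, scale=2):
    """Toy temporal reconstruction: upscale the current low-res frame,
    reproject last frame's high-res result along motion vectors, and
    blend the two. In a system like XeSS, a trained network would decide
    the blend; a fixed weight stands in for it here (our assumption)."""
    # Naive nearest-neighbor upscale of the current low-resolution frame
    upscaled = np.repeat(np.repeat(low_res_frame, scale, axis=0), scale, axis=1)

    # Reproject history: pull last frame's pixels from where they came from
    h, w = upscaled.shape[:2]
    ys, xs = np.indices((h, w))
    src_y = np.clip(ys - motion_vectors[..., 1], 0, h - 1).astype(int)
    src_x = np.clip(xs - motion_vectors[..., 0], 0, w - 1).astype(int)
    reprojected = history[src_y, src_x]

    # Fixed-weight blend in place of a learned network
    alpha = 0.1  # weight given to the new frame
    return alpha * upscaled + (1 - alpha) * reprojected
```

The hard part in production, and presumably where Intel’s network earns its keep, is rejecting stale history on disocclusions and fast motion so the image doesn’t ghost or smear.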
We don’t know much about XeSS yet, and the company was characteristically tight-lipped during the after-presentation Q&A sessions. Which games or engines will XeSS work with? No mention. Which versions of DirectX will it support? Wouldn’t say. When will it launch? Who knows. How many games at launch? Your guess is as good as mine.
What we could confirm is that, just like with Nvidia’s Deep Learning Super Sampling (DLSS) and AMD’s FidelityFX Super Resolution (FSR) before it, Intel expects the rollout of XeSS to be a slow trickle of developer support, rather than a launch with a large number of compatible titles ready to go off the rip. Intel’s announcement that the XeSS SDK (software development kit) will go live later this month hints at this further, suggesting that few, if any, developers have had a chance to integrate XeSS into their games yet. That means it could be a long while before XeSS is anywhere near Nvidia’s current DLSS-compatible count of 50-plus titles.
Also, as with AMD’s FSR, Intel hopes XeSS will be compatible with more than just Intel graphics cards, though the company made clear that it’s still too early in development to confirm whether Nvidia or AMD card owners will one day see XeSS running on the hardware they already own.
Intel Advances Toward a New Warfront
Alright, alright: enough about specs and theories on what might work and what won’t. Let’s get down to the only brass tacks anyone has the luxury of caring about in this GPU-starved marketplace: Will you actually be able to buy Arc GPUs when they launch?
During the Q&A, Intel was mum about the well-known stock shortages that have plagued the GPU market since 2020, stating only that “Intel is excited to provide gamers with a new option to choose from” when the first cards debut in Q1 2022. I pressed further on that response, but the company chose not to comment on potential availability this far from launch.
Either way, feast or famine, we plan to benchmark whatever card we can get our hands on once Intel provides access to Alchemist samples next year. Stay tuned to PCMag for more info through the end of 2021; in 2022, we look forward to generating our first independent performance numbers, along with a full breakdown of all the new tech on offer and how it stacks up against the competition from AMD and Nvidia.