…In Geekbench 6.5 single-core, the X2 Elite Extreme posts a score of 4,080, edging out Apple’s M4 (3,872) and leaving AMD’s Ryzen AI 9 HX 370 (2,881) and Intel’s Core Ultra 9 288V (2,919) far behind…
…The multi-core story is even more dramatic. With a Geekbench 6.5 multi-core score of 23,491, the X2 Elite Extreme more than doubles the Intel Core Ultra 9 185H (11,386) and comfortably outpaces Apple’s M4 (15,146) and AMD’s Ryzen AI 9 370 (15,443)…
…This isn’t just a speed play — Qualcomm is betting that its ARM-based design can deliver desktop-class performance at mobile-class power draw, enabling thin, fanless designs or ultra-light laptops with battery life measured in days, not hours.
One of the more intriguing aspects of the Snapdragon X2 Elite Extreme is its memory‑in‑package design, a departure from the off‑package RAM used in other X2 Elite variants. Qualcomm is using a System‑in‑Package (SiP) approach here, integrating the RAM directly alongside the CPU, GPU, and NPU on the same substrate.
This proximity slashes latency and boosts bandwidth — up to 228 GB/s compared to 152 GB/s on the off‑package models — while also enabling a unified memory architecture similar in concept to Apple’s M‑series chips, where CPU and GPU share the same pool for faster, more efficient data access…
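The bandwidth figures quoted above imply at most a 1.5x speedup, and only for workloads that are actually limited by memory bandwidth; most real workloads are not. A quick back-of-envelope sketch (plain Python, using only the 228 GB/s and 152 GB/s numbers from the article; the 8 GiB streaming size is an arbitrary illustration):

```python
# Back-of-envelope check of the claimed bandwidth uplift for a purely
# memory-bound workload (illustrative only; real workloads are rarely
# 100% bandwidth-limited).

IN_PACKAGE_BW_GBS = 228   # X2 Elite Extreme (SiP memory), per the article
OFF_PACKAGE_BW_GBS = 152  # other X2 Elite variants (off-package RAM)

uplift = IN_PACKAGE_BW_GBS / OFF_PACKAGE_BW_GBS
print(f"Peak bandwidth uplift: {uplift:.2f}x")  # 1.50x

def min_stream_time_ms(n_bytes: float, bw_gbs: float) -> float:
    """Lower bound on time to stream n_bytes at a given peak bandwidth."""
    return n_bytes / (bw_gbs * 1e9) * 1e3

n = 8 * 1024**3  # hypothetical kernel streaming 8 GiB
print(f"off-package: {min_stream_time_ms(n, OFF_PACKAGE_BW_GBS):.1f} ms")
print(f"in-package:  {min_stream_time_ms(n, IN_PACKAGE_BW_GBS):.1f} ms")
```

Compute-bound or latency-bound code sees far less than this 1.5x ceiling, which is worth keeping in mind when reading the headline comparisons.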
… the company notes the “first half” of 2026 for the new Snapdragon X2 Elite and Snapdragon X2 Elite Extreme…
Keep in mind the original X Elite benchmarks were never replicated in real-world devices (not even close).
They used a desktop-style test rig (with cooling no laptop could match) and a version of Linux “developed solely for benchmarking” (to this day the X Elite runs like shit on Linux).
This is almost certainly a premeditated attempt at “legal false advertising”.
Mark my words, you’ll never see 4,000 points in GB6 ST on any real products.
I imagine things would be much closer if they put a giant heatsink on that Ryzen AI 9 370 they’re comparing against and ran it at its 54W configurable TDP instead of the default 28W.
Shouldn’t they also be comparing it to Strix Halo instead?
Ah. Thanks for the context.
Well, after they have product out, third parties will benchmark them, and we’ll see how they actually stack up.
Snapdragon X2 Elite Extreme
That doesn’t sound very high end, I think I’ll wait for the Pro version, preferably Pro Plus.
The ultra absorbent one is the one to get
Elite Extreme
Sounds like it focuses more on shiny RGB than performance.
Let me know when these X elite chips have full Linux compatibility and then I’ll be interested. Until then, I’ll stick with Mac, it has the better hardware.
LOL.
I’m going to call semi-bullshit here, or there’s a major revisionist version or catch. If this were true, they’d be STUPID not to be working fast as hell to get full, unlocked Linux support upstreamed and start selling this as a datacenter competitor to what Amazon, Microsoft, and the other hyperscalers are offering, because it would be an entirely new class of performance. It could also dig into Nvidia’s and AMD’s datacenter sales at scale if it’s this efficient.
Qualcomm is pretty dumb. Even if this were true, they’d still be leaving Linux support to the community.
Yeah I’ll wait for independent benchmarks, thanks.
With actual devices
The X1 Elite never lived up to its geekbench scores, and the drivers are absolute dogshit.
The X2 Elite won’t match Apple or AMD in real-world scenarios either, I’d wager.
Windows 11 will turn this into a 486.
X2 “Elite Extreme” probably in ideal conditions vs. the base M4 chip in a real-world device. Sure, nice single-core results, but Apple will likely counter with the M5 (the A19 Pro already reaches around 4,000, and the M chips can probably clock a bit higher). And the M4 Pro and Max already score as high or higher in multi-core, in the real world, in a 14-inch laptop.
It doesn’t “crush” the M4 series at all and we’ll see how it’ll perform in a comparable power/thermal envelope.
I don’t hate what Qualcomm is doing here, but these chips only work properly under Windows and the Windows app ecosystem still hasn’t embraced ARM all that much, and from what I’ve heard Windows’ x64 to ARM translation layer is not as good as Rosetta 2. Linux support is pretty horrible, especially at launch.
*X Elite opens browser windows faster under desktop cooling.
FTFY
Oh no, each new chip is going to be better at something than another chip, and vice versa. Anyway, what did people have for lunch?
And here I am with my cheap old quad core doing my stuff.
Except for the theoretical interest, what are we supposed to do with stuff like that? Is it just more data centers? Do I sound like the “640KB is enough” guy?
deleted by creator
Ah, a not at all theoretical example but a real life one 😁 /s
deleted by creator
In my experience, arm64 is nowhere close to x64 under heavy multiprocessing/multithreading loads.
deleted by creator