(Pocket-lint) – When the Fujifilm X-T2 arrived back in 2016, we thought it set a new benchmark for mirrorless cameras. Since then we’ve been spoiled with the likes of the Panasonic Lumix G9 and many other compact system cameras.
In 2018, Fujifilm came back with a bang: the X-T3 brought a new sensor, new autofocus and 4K 60fps video capture that set it apart from its X-T2 cousin. Below we break down the key differences between the two. But if you’re looking for the even newer X-T4 then go read our review here.
Fuji X-T3 vs X-T2: Design & Layout
X-T3: A little deeper (58.8mm) than X-T2 (49.2mm) due to eyecup and grip
Both cameras: Full manual control dials, Fujifilm X mount lenses
Optional battery grip is different for each camera
Both cameras: Weather resistant build
At a glance, the X-T3 and X-T2 are one and the same. But they’re slightly different sizes, which means if you want to use an accessory battery grip you’ll need to buy the one made for your specific camera. X-T2 users will be disappointed that an X-T3 will require a new grip.
Otherwise, the layout and operation is similar: there’s full manual control, all the shutter/aperture/ISO/exposure compensation dials you could need, and that old skool design aesthetic.
The X-T3 does shrink the exposure compensation dial to avoid it getting accidental knocks, while the toggles around its dials are larger for easier adjustment. There’s also a dioptre lock on the X-T3 which was lacking previously.
Fuji X-T3 vs X-T2: Viewfinder, Screen, Performance
Both cameras: 3.0 inch, 1040k-dot, tri-adjustable LCD (X-T3 adds touchscreen)
X-T3: 1.5x autofocus speed improvement over X-T2 (Fujifilm claims)
X-T3: 2.16m phase-detection pixels offer edge-to-edge autofocus (X-T2 has a limited selection area)
Mirrorless cameras have gone from strength to strength in recent times, with electronic viewfinders good enough to rival traditional optical ones. The X-T2’s already decent 0.5in OLED finder remains the same size (magnification) in the X-T3, but the new camera ups the resolution by over 50 per cent. It’s the same finder as found in the Canon EOS R.
Regarding the rear screen, both cameras offer a tri-adjustable fit, meaning the LCD panel can be pulled out for waist-level or overhead work in either portrait or landscape orientation. Most competitors can’t handle this vertical orientation. However, we found the mechanism a little fiddly to use, which is a setback. The X-T3, like the X-H1, adds a touchscreen – which was lacking from the X-T2.
When it comes to speed, the X-T3 also ups the autofocus ante with a claimed 1.5x speed improvement over the X-T2. The biggest change is the full edge-to-edge autofocus system, though, which offers a mammoth 2.16m phase-detection pixels right across the whole sensor for precision autofocus anywhere within the frame (it offers 425 AF areas, compared to the X-T2’s 325). Even the newer X-T4 doesn’t really elevate its autofocus beyond the X-T3’s capabilities.
Fuji X-T3 vs X-T2: Image Quality, Speed, Video
X-T3: X-Trans CMOS IV sensor, 26-megapixel resolution
X-T2: X-Trans CMOS III sensor, 24-megapixel resolution
X-T3: 11fps burst shooting at full resolution
X-T2: 11fps with optional battery grip only
X-T3: 4K video at 60fps / X-T2: 4K at 30fps
Core to the X-T3 is its X-Processor and X-Trans CMOS sensor, both of which are in their fourth-generation guises (compared to the X-T2’s third-gen). This brings greater processing speed, enough to handle the slightly higher resolution of the newer camera.
In terms of burst speed, the X-T2 was never a slouch, capable of up to 11fps at full resolution. However, you had to have the optional battery grip attached to achieve that. With the X-T3 you do not: it’s 11fps capable out of the box (and it’ll even hit 30fps with a 1.25x crop and electronic shutter).
Another big benefit of this X-Processor is its faster readout speed, which means the X-T3 can record 4K video at 60fps straight to the camera’s internal SD card. At the time of launch no other APS-C sensor camera could do this. Furthermore, the X-T3 offers up to a 400Mbps data rate with H.265 compression and 24-bit stereo sound support.
Fuji X-T3 vs X-T2: Conclusion
X-T3: £1,349 body only (at launch)
X-T2: £1,399 body only (around £1,249 at time of writing)
The X-T3 is a savvy replacement for the X-T2. It’s faster, more adept at focusing, will deliver similar image quality at a slightly higher resolution, and will appease videographers too. And all for a price that’s actually less than the X-T2 was at launch.
Our suggestion would be to go with the newer model and all the extras that brings (unless the X-T2’s price really plummets). Of course, with the X-T4 having launched since, in 2020, there’s an even newer generation to consider – but, on balance, if you can find the older model in stock then picking it up for a bargain price makes heaps of sense.
With a Ryzen 9 5900X and an RTX 3080, both liquid-cooled for quiet operation in a compact case, Corsair’s One a200 is easy to recommend – if you can afford it and can find it in stock. Just know that your upgrade options are more limited than with larger gaming rigs.
For
+ Top-end performance
+ Space-saving, quiet shell
+ Liquid-cooled GPU and CPU
Against
– Expensive
– Limited upgrade options
For a whole host of reasons, AMD’s Ryzen 9 5900X and Nvidia’s RTX 3080 have been two of the hardest-to-find PC components since late last year. But Corsair has combined them both in a handy, compact, liquid-cooled bundle it calls the Corsair One a200.
The company’s vertically-oriented One desktop debuted in 2018 and has since been regularly updated to accommodate current high-end components. This time around, the options include either AMD or Intel’s latest processors (the latter called the One i200), and Nvidia’s penultimate consumer GPU, the RTX 3080.
Not much has changed in terms of the system’s design, other than the addition of a USB Type-C port up front (where an HDMI port was on previous models). But with liquid cooling handling thermals for both the CPU and graphics in a still-impressively compact package, there’s really little reason to change what was already one of the best gaming PCs for those who want something small.
The only real concern is pricing. At $3,799 as tested (including 32GB of RAM, a 1TB SSD and a 2TB HDD), you’re definitely paying a premium for the compact design and slick, quiet cooling. But with the scarcity of these core components and the RTX 3080 regularly selling for well over $2,000 on its own on eBay, it’s tough to discern what constitutes ‘value’ in the gaming desktop world at the moment. You may be able to find a system with similar components for less, but it won’t likely be this small or slick.
Design of the Corsair One a200
Just like the One i160 model we looked at in 2019, the Corsair One a200 is a quite compact (14.96 x 7.87 x 6.93 inches) tower of matte-black metal with RGB LED lines running down its front. To get some sense of how small this system is compared to more traditional gaming rigs, we called Alienware’s Aurora R11 “fairly compact” when we reviewed it, and it’s 18.9 x 17 x 8.8 inches, taking up more than twice the desk space of Corsair’s One a200.
The 750-watt SFX power supply in the a200 is mounted at the bottom, pulling in air that’s expelled at the top with the help of a fan. And the heat from the CPU and GPU will mostly be expelled out either side, as both are liquid cooled, with radiators mounted against the side panels.
The primary external difference on the updated a200 is up front, where the HDMI port that used to live next to the headphone/mic combo jack and pair of USB-A ports has been replaced with a USB-C port. That makes for three front-facing USB ports, a surprising amount of front-panel connectivity for a system so compact. But there are only six more USB ports around back (more on that shortly).
Overall, while the design of the One a200 is pretty familiar at this point, it still looks and feels great, with all the external panels made out of metal. Just note that the matte finish does easily pick up finger smudges.
Ports: Front: 2x USB 3.2 Gen 1 (5 Gbps) Type-A, 1x USB 3.2 Gen 2 (10 Gbps) Type-C, combination mic/headphone jack; Rear: 4x USB 3.2 Gen 1 (5 Gbps) Type-A, 2x USB 3.2 Gen 2 (Type-A, Type-C), Ethernet, HD audio, 3x DisplayPort, 1x HDMI
Video Output: 3x DisplayPort 1.4a, 1x HDMI 2.1
Power Supply: 750W Corsair SFX 80 Plus Platinum
Case: Corsair One Aluminum/Steel
Operating System: Windows 10 Home 64-Bit
Dimensions: 14.96 x 7.87 x 6.93 inches (380 x 200 x 176 mm)
Price As Configured: $3,799
Ports and Upgradability of the Corsair One a200
Since the Corsair One a200 is built around a compact Mini-ITX motherboard (specifically the ASRock B550 Phantom Gaming-ITX/ax), you won’t quite get the same number of ports that you would expect from a larger desktop. We already covered the three USB ports and audio jack up front, so let’s take a look at the back.
Here you’ll find four USB 3.2 Gen 1 (5 Gbps) Type-A ports, plus two USB 3.2 Gen 2 (one Type-A and one Type-C). Also here is a 2.5 Gb Ethernet jack, three analog audio connections and connectors for the small antennae. The ASRock board also includes a pair of video connectors, but since you’ll want to use the ports on the RTX 3080 instead, Corsair has blocked them off behind the I/O plate, so most people wouldn’t even know they’re there.
The video connections from the RTX 3080 graphics card live next to the Corsair SF750 power supply, and come in the form of three DisplayPort 1.4a ports and a single HDMI 2.1 connector.
As for internal upgradability, you can get at most of the parts if you’re comfortable dismantling expensive PC hardware. But you can’t add any RAM or storage without swapping out what’s already there (or at least without removing the whole motherboard, more on that soon). That said, the 32GB of Corsair Vengeance LPX DDR4-3200 RAM, 1TB PCIe 4.0 Force MP600 SSD and 2TB Seagate 2.5-inch hard drive that’s already here are a potent cadre of components. If you need more RAM and storage (as well as more CPU cores), there’s a $4,199 configuration we’ll detail later.
To get inside the Corsair One a200, you don’t need any tools, but you’ll want to be a bit careful. Press a button at the rear top of the case (you have to press it quite hard) and the top, which also houses a fan, will pop up. But before you go yanking it away in haste, note that it’s attached via a fan cable that you can disconnect after first fishing the plug out from a hole inside the case.
To access the rest of the system you’ll have to remove two screws from each side. But again, don’t be careless, as radiators are attached to both side panels via short tubes, so the sides are a bit like upside-down gull-wing doors. You can’t really remove them without disconnecting the cooling plates from the CPU and GPU.
It’s fairly easy to remove the RAM, although the 32GB of Corsair Vengeance LPX DDR4-3200 occupies both of the slots. The 2TB Seagate 2.5-inch hard drive is also accessible from the left side, wedged under the PCIe riser cable that’s routed to the GPU on the other side.
At least the 1TB Force MP600 SSD on this model is mounted on the front of the motherboard under a heatsink, rather than behind the board on the i160 version we looked at a couple years ago.
You can open the right panel as well, though there’s not much to do here as the space is taken up by the GPU, a large radiator and a pair of fans mounted on the heatsink to move the RTX 3080’s heat through the radiator and out the vents on the side.
As with previous models, you should be able to replace the RTX 3080 with an air-cooled graphics card at some point, provided it has axial rather than blower-style cooling, and that it fits within the physical constraints of the chassis. But given that the RTX 3080 is the best graphics card you can buy, you may be ready for a whole new system by the time you start thinking about swapping out the graphics card here.
Aside from wishing there were more USB ports on the motherboard, I have no real complaints about the hardware here. If I were spending this much, I’d prefer a 2TB SSD, but at least the 1TB model Corsair has included is a PCIe 4.0 drive for the best speed possible. Technically the ASRock motherboard here has a second PCIe 3.0 M.2 slot, where you could install a second SSD. But it’s housed on the back of the motherboard, which would mean fairly major disassembly in cramped quarters, and remember that you’d have to disconnect the pump/cooling plate from the CPU before even attempting to do that.
Gaming Performance on the Corsair One a200
With AMD’s 12-core Ryzen 9 5900X and Nvidia’s RTX 3080 running the gaming show inside Corsair’s One a200 — and both of them liquid-cooled — we expected Corsair’s compact power tower to spit out impressive frame rates.
We pitted the a200 against MSI’s Aegis RS 11th, which also has an RTX 3080 but an 8-core Intel Rocket Lake Core i7-11700K, and a couple of other recent gaming rigs we’ve tested. Alienware’s Aurora Ryzen Edition R10 sports a stepped-down Ryzen 7 5800X and a Radeon RX 6800 XT. And HP’s Omen 30L, which we looked at near the end of 2020, was outfitted with a last-generation Intel Core i9-10900K and an RTX 3080 to call its own.
While the Corsair One a200 didn’t walk away from the impressive competition, it was almost always in the lead in our gaming tests. And that’s all the more impressive given most of the systems it competes with are much larger.
On the Shadow of the Tomb Raider benchmark (highest settings), the game ran at 147 fps at 1080p on the One a200, and 57 fps at 4K. The former ties it with the Aegis for first place here, and the latter beats both the Aegis and the Omen 30L, just slightly, giving Corsair’s system an uncontested win.
In Grand Theft Auto V (very high settings), the Corsair system basically repeated its previous performance, tying the MSI machine at 1080p and pulling one frame ahead of both the Omen and the MSI at 4K.
On the Far Cry New Dawn benchmark, the MSI Aegis pulled ahead at 1080p by 11 fps, but the One a200 still managed to tie the MSI and HP systems at 4K.
After trailing a bit in Far Cry at 1080p, the One a200 pulled ahead in Red Dead Redemption 2 (medium settings) at the same resolution, with its score of 117 fps beating everything else. And at 4K, the Corsair system’s 51 fps was again one frame ahead of both the MSI and Alienware systems.
Last up in Borderlands 3 (badass settings), the Corsair system stayed true to its impressive form. Its score of 137 fps at 1080p was a frame ahead of the MSI (and ahead of everything else). And at 4K, its score of 59 fps was only tied by the HP Omen.
Aside from the One a200’s gaming performance being impressive for its size, this is also one of the quietest high-end gaming rigs I’ve tested in a long time. Lots of heat shot out of the top of the tower while I played the Ancient Gods expansion of Doom Eternal, but fan noise was a constant low-end whirr. The large fan at the top does its job without doing much to make itself known, and the radiators on either side help move heat out of the case without adding to the impressively quiet noise floor.
We also subjected the Corsair One a200 to our Metro Exodus stress test gauntlet, in which we run the benchmark at the Extreme preset 15 times to simulate roughly half an hour of gaming. The Corsair tower ran the game at an average of 71.13 fps, with very little variation. The system started out the test at 71.37 fps on the first run, and dipped to just 71.05 fps on the final run. That’s a change of just a third of a frame per second throughout our stress test. It’s clear both in terms of consistent performance and low noise levels that the One a200’s cooling system is excelling at its job.
During the Metro Exodus runs, the CPU ran at an average clock speed of 4.2 GHz and an average temperature of 74.9 degrees Celsius (166.8 degrees Fahrenheit). The GPU’s average clock speed was 1.81 GHz, with an average temperature of 68.7 degrees Celsius (155.6 degrees Fahrenheit).
Productivity Performance
While the Ryzen 9 5900X isn’t quite as potentially speedy on paper as the top-end 5950X (thanks to a slightly lower top boost clock and four fewer cores), it’s still a very powerful 12-core CPU. And paired with Nvidia’s RTX 3080, along with 32GB of RAM and a fast PCIe 4.0 SSD, the Corsair One a200 is just as potent in productivity and workstation tasks as it is playing games.
On Geekbench 5, an overall performance benchmark, the Corsair system was just behind the leading systems in the single-core tests, with its score of 1,652. But on the multi-core test, its 11,968 was well ahead of everything else.
The Corsair PCIe Gen 4 SSD in the a200 blew past competing systems, transferring our 25GB of files at a rate of 1.27 GBps, with only the HP Omen’s WD SSD also managing to get close to the 1GBps mark.
And on our Handbrake video editing test, the Corsair One a200 transcoded a 4K video to 1080p in an impressive 4 minutes and 44 seconds, while all the other systems took well more than 5 minutes to complete the same task. Video editors in particular will be able to make good use of this system’s 12 cores and 24 threads of CPU might.
Software and Warranty for the Corsair One a200
The Corsair One a200 ships with a two-year warranty (plus lifetime customer support) and very little pre-installed software. Aside from Windows 10 Home, you get the company’s iCue software, which can be used to control both the lights and the system fans. The company even seems to have avoided the usual bloat of streaming apps and casual games like Candy Crush, which ship with almost all Windows machines these days.
Configuration Options for the Corsair One a200
If you’re after the AMD-powered Corsair a200 specifically, you have two configuration options. There’s the model we tested (Corsair One a200 CS-90200212), with a 12-core Ryzen 9 5900X, 32GB of RAM, a 1TB PCIe Gen 4 SSD, 2TB hard drive, and an RTX 3080 for $3,799. Or you can pay $400 more ($4,199) to step up to the 16-core Ryzen 9 5950X and double the RAM and SSD to 64GB and 2TB respectively (Corsair One Pro a200 CS-9040010). The latter configuration is overkill for gaming, but the extra storage, RAM and four more CPU cores are well worth the extra money if you can actually make use of them.
For those who aren’t wedded to AMD, there’s also the Intel-based Corsair One i200, which now includes 11th Gen “Rocket Lake” CPU options, with up to a Core i9-11900K and an RTX 3080, albeit running on a last-gen Z490 platform. It starts a little lower at $3,599. But that model is currently out of stock with any current-generation Intel and Nvidia components, leaving exact pricing up in the air as of publication.
We tried to do some comparison pricing, and were able to find a similarly equipped HP Omen 30L, as HP often sells gaming rigs on the more-affordable side of the spectrum. But when we wrote this, all Omen 30L systems with current-generation graphics cards were sold out on HP’s site. We were able to find an Omen 30L on Amazon with an RTX 3080 and an Intel Core i9-10850K, along with similar RAM and storage to our Corsair a200, for $3,459. That’s about $340 less than the a200, but the Omen 30L is also much larger than the a200 and has a now last-generation CPU with fewer cores, plus a slower SSD.
Bottom Line
With one of the best CPUs and graphics cards, both liquid cooled and quiet, in an attractive, compact package, Corsair’s One a200 offers a whole lot to like. The $3,799 asking price is certainly daunting, but in these times, when that graphics card alone regularly sells on eBay for more than $2,000, the Ryzen 9 5900X often sells for close to $800, and even most desktops with current-gen graphics cards are mostly sold out, it’s tough to say which high-end gaming rig is more or less of a bargain than something else.
If you spend some time looking you can probably find a system with similar specs as the Corsair One a200 for a bit less. But unless and until the ongoing mining craze subsides, that system probably won’t cost substantially less than Corsair’s pricing. And with its impressively compact shell, quiet operation, and top-end performance in both gaming and productivity, the a200 is easy to recommend for those who can afford it. Just know that upgrading will be a bit more difficult and limiting than with a larger desktop, and if you need lots of USB ports, you may want to invest in a hub.
Microsoft’s Xbox Cloud Gaming (xCloud) is officially arriving on iOS and PC tomorrow. The service will arrive on devices via browsers, allowing Xbox Game Pass Ultimate subscribers to play Xbox games on iPhones, iPads, and PCs. Microsoft is keeping this beta rather limited, though, requiring players to be invited to participate in the testing phase.
The service will be accessible at www.xbox.com/play, where Xbox Game Pass Ultimate subscribers that have been invited to the beta will be able to play Xbox games through Edge, Chrome, or Safari browsers. More than 100 games will be available, and testers will be able to use a compatible Bluetooth or USB-connected controller or simply use custom touch controls.
“The limited beta is our time to test and learn; we’ll send out more invites on a continuous basis to players in all 22 supported countries, evaluate feedback, continue to improve the experience, and add support for more devices,” says Catherine Gluckstein, Microsoft’s head of xCloud. “Our plan is to iterate quickly and open up to all Xbox Game Pass Ultimate members in the coming months so more people have the opportunity to play Xbox in all-new ways.”
It’s the first time Xbox Game Streaming has been available on iOS devices after the service launched exclusively on Android phones and tablets last year. Microsoft wasn’t able to offer xCloud on iPhones or iPads during the initial launch phase of the service back in September, due to Apple’s restrictions on cloud gaming apps.
Both Apple and Microsoft got into a public war of words over xCloud, and Apple initially insisted that Microsoft would have to submit individual games for review. Apple eventually offered a compromise to allow cloud gaming apps to run on iOS with individually reviewed games, but Microsoft branded it a “bad experience for consumers.”
The What Hi-Fi? Virtual Show takes place this Saturday 24th April and, alongside new product launches, interviews and advice, we will also be hosting a live Q&A on Saturday afternoon.
We will be answering questions that have been posted by viewers during the event and you can also log your questions in advance. You can post a question below this article or head over to the thread on the What Hi-Fi? Facebook page.
The What Hi-Fi? Virtual Show will be a full day of sessions, across two virtual stages. We’ll be explaining how we review and the importance of our dedicated test rooms, giving advice on getting the best from your existing kit, whether that be your speakers or your TV, and dusting off our crystal ball to discuss the next big things in hi-fi and home cinema.
Visit the What Hi-Fi? Virtual Show page to register for this FREE event
We’ll also be building our own speaker and talking you through the DIY process, discussing our favourite products in the world right now – and choosing our all-time favourites.
For full details, head over to the What Hi-Fi? Virtual Show page. See you on Saturday.
(Pocket-lint) – There aren’t a great many high-resolution gaming headsets out there, but the devices that do exist can make a real difference to your gaming experience. That extra audio range provides more immersion and also helps with things like hearing footsteps in competitive shooters.
The Asus ROG Delta S sports a high-resolution Quad DAC (digital-to-analogue converter) and MQA technology that promises “true to life” audio. So on paper it should be fantastic, but is it? We’ve been gaming and listening to find out.
Lightweight comfortable design with RGB
Detachable microphone
Lightweight 300g frame
Braided 1.5m USB-C cable, 1m USB 2.0 adapter
ROG Hybrid ear cushions / protein leather cushions with fast-cool memory foam padding
The first thing that struck us about the Asus ROG Delta S upon wearing it for the first time was the comfort. This headset comes with a flexible headband and earcup design that extends nicely over the head and sits in a satisfying way over the ears. But more importantly, it sports D-shaped ergonomic ear cushions, with a choice of either a protein leather or ROG Hybrid finish backed by fast-cool memory foam padding.
Both these ear cushions are included in the box, giving you a choice of what to use – but they’re equally comfortable in our mind. The protein leather cushions do a better job of blocking out external noise though, which means you can focus on the sound.
The D-shaped cushions fit nicely over the ears and they’re both deep and wide enough not to put unnecessary pressure on your ears either. This, combined with the nicely padded headband and the lightweight over-ear design, results in a headset that’s comfortable to wear all day for work and then into the evening for gaming.
Comfort and convenience go hand-in-hand with this headset. As standard it has a USB-C connection, which means you can use it with your Android phone or Nintendo Switch and still get great sound. Alternatively, there’s an adapter that converts it to USB-A with ease, meaning you can connect it to even more devices. The detachable mic also gives you the choice of whether you use the provided one or opt for something external.
On the outside of the headset there are a couple of RGB lighting zones on each earcup: a ring around the outer plate and the ROG logo. This lighting can be adjusted within the Armoury Crate software – there are a few different effects including static, breathing, strobe, colour cycle and, of course, rainbow. The headset itself also has a hardware button to set it to three different modes – on, off or soundwave. Soundwave makes the lights respond to your voice when you’re talking, which might appeal to streamers.
One thing we were impressed with is that the RGB lighting works even when plugged into a smartphone, which is a fairly unusual feature. So yes, you can have RGB on the go with this headset, if you really want to show off your passion for gaming when outside the house. But there’s the option to turn it off too if you don’t want to look like a mobile disco.
Satisfying high-resolution audio
50mm Neodymium magnet drivers
20Hz-40kHz frequency response
Hi-Res ESS 9281 Quad DAC
MQA rendering technology
24-bit, 96kHz sample rate
Virtual 7.1 surround sound
Custom audio profiles
The main selling point of the Asus ROG Delta S is the inclusion of the Hi-Res ESS 9281 Quad DAC and MQA rendering technology (which stands for ‘Master Quality Authenticated’). This tech means that with Tidal Masters recordings you can enjoy some seriously satisfying sound quality.
We thoroughly enjoyed listening to music this way on a Google Pixel 5. The audio is rich, warm, and has a superb range to it. If you’ve never heard high-resolution audio before, you’ll soon notice elements of your favourite tracks that you’ve never picked up on previously.
That same logic applies to gaming too. Plug the headset into a PC, set the 24-bit/96kHz sample rate in Windows sound settings, tweak the equaliser (EQ) in ROG Armoury Crate and get your game on.
Suddenly you’ll find a wider audio range than you’ve heard before. This is great as it often means you can pick up on important sounds more easily. The footsteps of enemies in games like Rainbow Six Siege or Warzone, for example, are much easier to hear, and their direction within the game world is easier to discern.
That said, we did feel like this headset oddly isn’t as bassy or as rich as other high-res headsets we’ve tried. Strangely, music sounds richer than game audio does. And though you can adjust the EQ settings and sound profiles within Armoury Crate, we just feel like it lacks some of the richness we’d expect at this price point.
Still, the virtual surround sound is good and, combined with high-res audio, it delivers great positional awareness. This headset is also insanely loud. So if you feel like you struggle to hear with other headsets then the ROG Delta S won’t disappoint.
AI-powered mic?
AI-noise cancellation
Unidirectional pick up pattern
100Hz to 10kHz frequency response
Noise gate, perfect voice, other settings in Armoury Crate
The Asus ROG Delta S has a flexible, detachable unidirectional microphone included in the box. This mic offers AI-powered noise cancellation that’s designed to block out external noise and help keep your voice in focus.
We weren’t overly impressed with the mic on this headset, though it’s far from the worst we’ve tried.
You can adjust settings for noise gate, perfect voice and the AI noise-cancellation in the Armoury Crate software. But we found our voice was captured more clearly when we didn’t use those settings. This is going to depend on your environment of course, but the quality of the audio can certainly be tweaked in various ways with ease.
Verdict
The Asus ROG Delta S is a comfortable and easy-to-wear gaming headset that sounds fantastic when listening to high-res music on Tidal.
However, for our ears the audio lacks depth when gaming. It’s not as rich or as bassy as we’d like, but there are plenty of settings to play around with and tweak to your preference.
The included microphone is also not as good as, say, that included on the Corsair Virtuoso – so we’d highly recommend a proper mic as an alternative.
All told, the Asus ROG Delta S is a mixed bag. We love that it works with multiple different devices – a benefit of that USB-C/USB-A connection option – and for music it’s absolutely fantastic. But it’s just not quite as on point for gaming audio.
Also consider
Corsair Virtuoso RGB
A fantastic alternative thanks to a superior microphone and more connection options with 3.5mm, wireless and USB-A. It’s not as comfortable as the ROG Delta S, but is more impressive in a number of ways and also delivers high-res audio that’s fantastic on PC.
Audeze Penrose
This is a wireless version of the company’s Mobius headset. It features massive 100mm planar magnetic drivers and a broadcast-quality microphone. It also works well on PC and PS5 and offers 2.4GHz wireless, Bluetooth connectivity and 3.5mm options too.
AMD’s EPYC Milan processors launched last month with 120 new world records to their credit in various applications, like HPC, cloud, and enterprise workloads. But variants of these chips will eventually come to the market as Threadripper models for high-end desktop PCs, and AMD’s server records don’t tell us too much about what we could expect from the PC chips. However, the company recently broke the Cinebench world record with its Milan chips, giving us an idea of what to expect in rendering work. Just for fun, we also ran a few tests on Intel’s new flagship 40-core Ice Lake Xeon chips to see how they stack up against not only the new record AMD set with its server chips, but also a single AMD Threadripper processor.
During the latest episode of AMD’s The Bring Up YouTube video series, the company took two of its $7,890 EPYC Milan 7763 chips for a spin in Cinebench R23, a rendering application that AMD commonly uses for its desktop PC marketing (largely because it responds exceedingly well to AMD’s Zen architectures).
As a quick reminder, AMD’s flagship 7763 server chips come armed with 64 Zen 3 cores and 128 threads apiece and have a 2.45 GHz base and 3.5 GHz boost frequency. All told, we’re looking at a Cinebench run with 128 cores and 256 threads, which you can see in the tweet below:
“This is what it looks like when 2x 64 Zen 3 cores chew through Cinebench R23.” pic.twitter.com/o9jiZeKPlR (April 15, 2021)
The dual 7763s scored 113,631 points, while the previous world record weighed in at 105,570 (as per HWBot rankings). AMD says it used a reference server design with conventional air cooling for the test run, so there were no special accommodations or overclocking. The system peaked at 85C and 403W during the test run. Here’s AMD’s official HWBot world record submission.
Chip | 1K Unit Price / RCP | Cores / Threads | Base / Boost – All Core (GHz) | L3 Cache (MB) | TDP (W)
AMD EPYC Milan 7763 | $7,890 | 64 / 128 | 2.45 / 3.5 | 256 | 280
Intel Xeon Platinum 8380 | $8,099 | 40 / 80 | 2.3 / 3.2 – 3.0 | 60 | 270
That isn’t much info to work with, but it’s enough for us to set up our own test. We ran a few tests on the dual Xeon 8380 Ice Lake server we used for our recent review. Much like AMD’s test system, this is a standard development design with air cooling (more details in the review). The Xeon system houses two $8,099 10nm Ice Lake Xeons with 40 cores and 80 threads apiece that operate at a 2.3 GHz base and 3.2 GHz boost frequency. Yes, AMD’s Milan outweighs the Xeon system, but the Ice Lake 8380 is Intel’s highest-end part, and both chips come with comparable pricing.
We’re looking at the EPYC Milan server with 128 cores and 256 threads against the Intel Ice Lake system with 80 cores and 160 threads. Our quick tests here are not 100% like-for-like, so take these with a grain of salt, though we did our best to match AMD’s test conditions. Here are our test results, with a few extras from the HWBot benchmark database mixed in:
Cinebench Benchmarks

System | Score | Cooling | Chip Price
2x AMD EPYC Milan 7763 | 113,631 | Air | $15,780
1x Threadripper 3990X (Splave) | 105,170 | Liquid Nitrogen (LN2) | $3,990
2x EPYC 7H12 | 92,357 | Air | ?
2x Intel Xeon Platinum 8380 | 74,630 | Air | $17,000
1x Threadripper 3990X (stock) | 64,354 | All-In-One (AIO) Liquid Cooling | $3,990
As you can see, in Cinebench R23, the dual EPYC Milan 7763s outscore the dual Ice Lake Xeon 8380s by 52%. AMD lists a 403W peak power consumption during its tests, but we assume those measurements are for the processors only (and perhaps only a single processor). In contrast, our power measurement at the wall for the Xeon 8380 server weighed in at 1154W, but that includes a beastly 512GB of memory, other platform additives, and VRM losses, etc., meaning it’s just a rough idea of power consumption that isn’t comparable to the EPYC system.
Naturally, Cinebench R23 results have absolutely no bearing on the purchasing decision for a data center customer, but it is an interesting comparison. Notably, a single Threadripper 3990X, when pressed to its fullest with liquid nitrogen by our resident overclocking guru Splave, still beats the two Xeon Platinum 8380s, though the 8380s pull off the win against an air-cooled 3990X at stock settings (measured in our labs).
Finally, we decided to see how two Ice Lake Xeon 8380s compare against a broader set of processors. Intel suffered quite a bit of embarrassment back at AMD’s launch of the 64-core Threadripper 3990X for high-end desktop PCs, as this $3,990 processor (yes, just one) beat two of Intel’s previous-gen 8280 Xeons in a range of threaded workloads. Intel’s Xeons weighed in at $20,000 total and represented the company’s fastest server processors. Ouch.
In fact, those benchmark results were so amazing that we included an entire page of testing in our Threadripper 3990X review comparing two of Intel’s fire-breathing behemoths to AMD’s single workstation chip, which you can see here. As a bit of a redux, we decided to revisit the standings with a quick run of Cinebench R20 with the new Intel 10nm Xeons. Notably, this test is with an older version of the benchmark than we used above, but that’s so we can match our historical data in the chart below:
Unfortunately, we don’t have a dual-socket EPYC Milan 7763 system to add to our historical test results here, but we get a good enough sense of Ice Lake’s relative positioning with this chart. The two Intel Ice Lake 8380s, which weigh in at $17,000, beat the single $3,990 Threadripper 3990X at stock settings. That’s at least better than the dual 8280s that lost so convincingly in the past.
However, a quick toggle of the PBO switch, an automated overclocking feature from AMD that works with standard cooling solutions (no liquid nitrogen required), allows a single Threadripper 3990X to regain the lead over Intel’s newest 10nm flagships in this test. Intel’s latest chips also can’t beat AMD’s previous-gen EPYC Rome 7742s, which are 64-core chips.
Of course, this single benchmark has almost no bearing on the enterprise market that the Ice Lake chips are destined for, and the latest Xeons do make solid steps forward in a broader range of tests that do matter, which you can see in our Ice Lake 8380 review.
There is so much to love about this board. Small in size yet great in flexibility, the QT Py RP2040 is a board that you need in your projects.
For
+ Small size
+ Stemma QT Port
+ USB-C
Against
– Lack of GPIO pins
Adafruit has so far released three RP2040 boards. We have already reviewed the Feather RP2040 and that board has become our go-to RP2040 board for many reasons. Adafruit’s second board, the ItsyBitsy RP2040, is next on the bench for our review, but we couldn’t wait to get our hands on Adafruit’s smallest RP2040 board, the QT Py RP2040.
We already own the previous version, based around a SAMD21 chip. Comparing the two side by side, we can’t see much difference, as both have the same GPIO pinout, identical dimensions and the same Stemma QT connector. The only physical differences are an extra button and the change of chip.
The QT Py RP2040 adds an additional analog pin, bringing the total to four, and it features a built-in NeoPixel RGB LED which is used as a status indicator and to alert us to issues in our code. But with a board this small, some sacrifices had to be made, most notably the reduced number of GPIO pins. Is the reduction in size and GPIO pins worth paying over double ($9.95 vs $4) the price of a Raspberry Pi Pico?
Adafruit QT Py RP2040 Hardware Specifications
SoC: RP2040, ARM Cortex-M0+ running at up to 133MHz
SRAM: 264KB
Flash Storage: 8MB of QSPI
GPIO: 13 GPIO pins: 7x digital I/O, 4x analog 12-bit ADC, 2x I2C (including Stemma QT), SPI, UART, Programmable IO, 1x NeoPixel
USB Port: USB-C
Dimensions: 0.86 x 0.7 inches (22 x 18mm)
Design of the Adafruit QT Py RP2040
Adafruit’s QT Py RP2040 is much smaller than the Pico, roughly a third of the size. It has castellations that can be used to surface mount the board to a PCB, but just like Pimoroni’s Tiny 2040, the RP2040 SoC is located on the underside of the board, meaning that a cutout will need to be made to the PCB for flush mounting.
You may be thinking that the Adafruit QT Py RP2040 looks familiar, and you are correct. The QT Py RP2040 bears a passing resemblance to the Tiny 2040. Both are very close in size, but their GPIO layouts are quite different. The QT Py RP2040 has the same GPIO pinout as the previous SAMD21-powered QT Py, meaning that it can be a drop-in upgrade for a project.
Two buttons are present on the topside of the QT Py RP2040: boot and reset. The addition of a reset button is a nice feature as it saves wear and tear on the USB-C port. The strongest addition to the QT Py RP2040 is the Stemma QT connector located opposite the USB-C port.
Stemma QT is Adafruit’s connector, introduced in 2018. In reality it is a 4-pin JST SH connector which has a keyed interface so that it can only be inserted one way. Typical Stemma QT devices are sensors / inputs that use the I2C protocol for communication. Inserting a Stemma QT component requires just the cable and nothing more. We do not have to use pull-up resistors for the I2C SDA / SCL connections; everything just works. Stemma QT devices can be chained together to create elaborate, yet simple electronic projects. SparkFun’s Qwiic ecosystem of boards uses the same connector, so many of those may also be compatible with the Adafruit QT Py RP2040.
Using the Adafruit QT Py RP2040
At the heart of the QT Py RP2040 is Raspberry Pi’s “Pi Silicon” RP2040 SoC, and that means we can write code for the QT Py RP2040 in MicroPython, CircuitPython, C/C++ and soon via the new Arduino Core. But most of us will be writing code in CircuitPython, Adafruit’s own version of MicroPython, which supports an extensive range of add-ons via a downloadable library of drivers.
Writing code in CircuitPython is much the same as writing Python; the main difference is that we save the project as code.py on the QT Py RP2040 and it will autostart when the board is powered up. We installed the latest version of CircuitPython and ran through a few common tasks. Blinking LEDs and using buttons as inputs were no challenge. We then connected a NeoPixel ring to the board and installed the neopixel.mpy library. In a few minutes of coding, we had a multi-color NeoPixel ring lighting up our bench.
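To give a flavour of that workflow, here’s a minimal sketch of the sort of code.py we’re describing – it blinks the board’s onboard NeoPixel, and assumes the neopixel library has been copied to the board’s lib folder, using the pin names from Adafruit’s CircuitPython board definition:

```python
# code.py - runs automatically when the QT Py RP2040 powers up
import time
import board
import neopixel

# The board's only onboard LED is its NeoPixel status LED
pixel = neopixel.NeoPixel(board.NEOPIXEL, 1, brightness=0.3)

while True:
    pixel.fill((255, 0, 0))  # red
    time.sleep(0.5)
    pixel.fill((0, 0, 0))    # off
    time.sleep(0.5)
```

Save the file to the CIRCUITPY drive and the board restarts and runs it straight away – no compile step required.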
To test the Stemma QT connection, we used an MPR121 12-point gator-clip breakout, which creates 12 capacitive touch inputs. We installed the required libraries and then wrote our code, but we then saw errors that prevented us from moving forward. Not to be beaten, we connected the MPR121 to the I2C pins of the board, and everything worked.
After a brief conversation with Adafruit, we discovered that the Stemma QT connection is on a secondary port, requiring our code to be modified to use board.SDA1 and board.SCL1. With that change made, our code worked.
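For illustration, here’s a short sketch of that fix – a hypothetical example assuming Adafruit’s adafruit_mpr121 CircuitPython library is in the board’s lib folder. The key line is building the I2C bus from the SCL1/SDA1 pins rather than the default SCL/SDA pair:

```python
# code.py - reading an MPR121 touch breakout over the Stemma QT port
import time
import board
import busio
import adafruit_mpr121  # assumes the library has been copied to /lib

# The Stemma QT connector sits on the RP2040's second I2C bus,
# so we use SCL1/SDA1 instead of the default pins
i2c = busio.I2C(board.SCL1, board.SDA1)
touch = adafruit_mpr121.MPR121(i2c)

while True:
    for pad in range(12):
        if touch[pad].value:  # True while the pad is being touched
            print("Pad", pad, "touched")
    time.sleep(0.1)
```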
CircuitPython is quite simply the most effective way to work with the RP2040. We have the simplicity of Python along with a massive amount of support in the form of documentation and libraries of code for add-ons.
Use Cases for the Adafruit QT Py RP2040
The size and capabilities of the QT Py RP2040 lean towards embedding the board in a project. If space is at a premium, but you need the power of the RP2040, then this is the board for you. We can also see the QT Py RP2040 powering many USB HID devices, such as stream decks, keyboard shortcut pads and MIDI controllers.
Bottom Line
The power of the RP2040 in a smaller package, plus the added flexibility of the Stemma QT interface. There is nothing not to like here. If you don’t need the Stemma QT interface, then perhaps Pimoroni’s Tiny 2040 is for you. But Adafruit’s QT Py RP2040 is a fantastic board for Pico projects. The only RP2040 board to beat it is Adafruit’s Feather RP2040, which is a larger board with more features and a price that’s $2 higher.
Asus is no stranger to the gaming phone sector and the ROG line, in particular, has solidified itself as the forerunner in the segment. After the success of last year’s ROG Phone 3, Asus decided to take the series up to 5 with the ROG Phone 5. With our ROG Phone 5 written review in the books, we now bring your attention to our video breakdown with Will guiding you through his impressions of the latest gaming flagship from Asus.
The focus here is on the vanilla ROG Phone 5 instead of the more feature-packed ROG Phone 5 Pro or Ultimate. Despite being the most affordable of the bunch, the vanilla ROG Phone 5 does not disappoint in any area. You get a dual glass design with Gorilla Glass Victus covering the display and Gorilla Glass 3 on the back.
Asus has always tried to differentiate the backs of its ROG Phones and this year brings a dot-matrix LED array which forms the ROG logo. The sides are made from aluminum and the 238-gram heft makes the ROG Phone 5 a two-hand device.
Being a gaming phone, Asus has once again brought its custom pressure-sensitive shoulder triggers giving you an upper hand in supported gaming titles. You also get a second USB-C and accessories port on the side to charge and connect up additional gaming peripherals.
The 6.78-inch AMOLED display with adaptive 144Hz refresh rate and 1080p resolution is a major selling point. Colors are vibrant and the panel is uninterrupted by cutouts, as Asus tends to do. Touch latency has been improved thanks to the new 300Hz sampling rate. You also get an excellent pair of stereo speakers which achieved great loudness scores in our tests, though not on the level of the outgoing ROG Phone 3.
Performance from the Snapdragon 888 is among the best we’ve seen to date thanks to Asus’ optimizations and advanced cooling system. Throttling has been kept to a minimum which is a major factor for prolonged gaming sessions.
The massive 6,000 mAh battery in the ROG Phone 5 delivered a respectable 110-hour endurance rating in our test, though we were expecting more. The bundled 65W charger was among the fastest we’ve tested managing a 0-70% top-up in just 30 minutes.
Today, we will be reviewing yet another Xiaomi mid-ranger – we are welcoming the 4G version of the Mi 11 Lite.
We are not sure how Xiaomi can keep uninterrupted production of so many phones with the ongoing global chip shortages. But we are glad things are working well for them so far.
The Mi 11 Lite 5G has already earned our recommendation, and we are hoping its cheaper version is just as good. The lightweight Mi 11 Lite, just like the Mi 11 Lite 5G, is shaped after the Mi 11 flagship and focuses on similar features – an HRR OLED screen, enjoyable camera quality, long battery life, fast charging, and an overall smooth UI experience.
We are glad to see Xiaomi has thoughtfully handpicked the features that matter the most. The 6.55-inch OLED is of great quality with 10-bit color support, HDR10 certification, and a 90Hz refresh rate. There is also 240Hz touch sampling, which is another requirement for a smooth experience.
The triple camera on the back is also reminiscent of the Mi 11’s and the same as on the Mi 11 Lite 5G – there is a high-res 64MP primary, an 8MP ultrawide snapper, and a 5MP telemacro cam. All sorts of shooting modes are supported, including Night Mode, Long Exposure, Pro mode for all cameras, and the Mi 11 series exclusive video modes such as Parallel World, Time Freeze, Night Mode Timelapse, among others.
The Mi 11 Lite relies on the Snapdragon 732G chip – the same one we experienced as part of the Redmi Note 10 Pro. That’s the only notable difference from the Mi 11 Lite 5G – the 5G model uses a more powerful Snapdragon 780G 5G SoC.
The Mi 11 Lite may have undergone an obvious cost-cutting process, but it still gets to enjoy stereo speakers, NFC connectivity, a microSD slot, and even an IR port. And, by looking at its specs sheet, it does seem like a Lite version done right.
Xiaomi Mi 11 Lite specs at a glance:
Body: 160.5×75.7×6.8mm, 157g; Gorilla Glass 5 front, glass back, plastic frame.
Display: 6.55″ AMOLED, 1B colors, HDR10, 90Hz, 240Hz touch sampling, 500 nits (typ), 800 nits, 1080x2400px resolution, 20:9 aspect ratio, 402ppi.
Chipset: Qualcomm SM7150 Snapdragon 732G (8 nm): Octa-core (2×2.3 GHz Kryo 470 Gold & 6×1.8 GHz Kryo 470 Silver); Adreno 618.
Memory: 64GB 6GB RAM, 128GB 6GB RAM, 128GB 8GB RAM; UFS 2.2; microSDXC (uses shared SIM slot).
OS/Software: Android 11, MIUI 12.
Rear camera: Wide (main): 64 MP, f/1.8, 26mm, 1/1.97″, 0.7µm, PDAF; Ultra wide angle: 8 MP, f/2.2, 119˚, 1/4.0″, 1.12µm; Macro: 5 MP, f/2.4, AF.
Front camera: 16 MP, f/2.5, 25mm (wide), 1/3.06″ 1.0µm.
Video capture: Rear camera: 4K@30fps, 1080p@30/60/120fps; gyro-EIS; Front camera: 1080p@30fps, 720p@120fps.
Battery: 4250mAh; Fast charging 33W.
Misc: Fingerprint reader (side-mounted); Infrared port.

The most notable omission is splash resistance, obviously. While the similarly priced Poco X3 Pro is IP53-rated, and Samsung is putting an even bigger effort with its most recent IP67-rated Galaxy A phones, Xiaomi isn’t keen on providing any sort of ingress protection for the Mi 11 Lite phones. It’s not a major issue, of course, but it’s already a popular must-have for the competition.
Unboxing the Xiaomi Mi 11 Lite
The Mi 11 Lite bundle is a match for what most of the Redmi and Poco phones have recently offered – a 33W power adapter, a 3A-rated USB-C cable and a USB-C-to-3.5mm adapter.
There is also a transparent silicone case inside the retail box – a much-appreciated addition across all Xiaomi phones. Xiaomi is also giving away a thin screen protector, but it’s one of those cheap films that turn your screen into a smudge magnet, and we just couldn’t bear all this smear, sorry.
Samsung was supposed to start selling its SmartTag+ in the US earlier this week, but it seems that was delayed (it’s missing from Samsung.com and other online retailers). Not so in South Korea, where the location tracker is available as of today.
It can be found on Samsung’s online store as well as Samsung Digital Plaza stores around the country, plus other retailers such as Coupang, 11th Street, G Market and Naver Smart Store. Locals can buy one for KRW 39,600 in Black or Denim Blue.
The difference between the SmartTag and SmartTag+ is that the former relies on Bluetooth LE alone, while the latter also uses Ultra Wide Band (UWB) technology, which provides accurate directional info. This allows users to find their lost keys using an AR application.
You can attach these tags to just about anything – things like keys or pets. Samsung even sees them as anti-theft devices you hook up to your bag or bicycle. But if you’re going to carry them where they can be seen, you can pick up one of the official cases – Samsung has introduced Disney, Star Wars, The Simpsons and Naver Line branded cases. Most of these are for fun, but there are practical ones like this one with a built-in retroreflector.
By the way, the Bluetooth LE connection of the vanilla tags has a range of about 120m. But you can track them (and the Plus) even if they are beyond the range of your phone – if you sign up for the SmartThings Find service, the tags will be detected by other Galaxy owners walking by (in turn, your phone will help others find their tags).
SmartThings also allows you to turn the tags into physical shortcuts for your smart home, allowing you to trigger actions with a short or a long press.
In the US the SmartTag+ goes for $39, the vanilla version is $29 (you can read our review of the SmartTag for more details). Note that for the Plus model you need a UWB-enabled phone like the Galaxy Note20 Ultra or S21 Ultra.
Today we’re checking out ASUS’ latest lightweight wireless gaming mouse. The ASUS ROG Keris Wireless supports wired, Bluetooth and 2.4GHz wireless connectivity, while also sporting a PixArt PAW 3335 sensor with 400 IPS tracking and up to 16,000 DPI. Not only does it have hot-swappable switches, with spares included, but you can also change the colour of the side buttons. Let’s see if this mouse is really worth the £89.99 asking price.
Watch via our Vimeo channel (below) or over on YouTube at 2160p HERE
Specifications:
Design: Ergonomic, right-handed
Connectivity: USB 2.0, Bluetooth, RF 2.4GHz
Sensor: PixArt PAW3335
Resolution: 16,000 DPI
Max Speed: 400 IPS
Max Acceleration: 40G
USB Report Rate: 1000Hz
RF 2.4G Report Rate: 1000Hz
L/R Switch Type: ROG 70M Micro Switch
Buttons: 7 programmable
Battery: 500mAh
Battery Life: 78 hours without lighting, 52 hours with default lighting (Breathing)
Cable: 2.0m Type-C ROG Paracord
Dimensions: 118(L) x 62(W) x 39(H) mm
Weight With Cable: 79g
Colour: Black
You can purchase the ASUS ROG Keris Wireless from Overclockers UK for £89.99 HERE!
Discuss on our Facebook page HERE.
Pros
Lightweight.
Different coloured side buttons and spare switches included.
Built-in storage space for the USB dongle.
On the fly DPI adjustment.
Very comfortable in all grip styles.
Cons
Plastic attracts grease.
Not suited for those with larger hands.
KitGuru says: At £90 the ASUS ROG Keris Wireless certainly isn’t cheap but if you want a lightweight, wireless ergonomic mouse it’s definitely worth buying. Wireless performance is great, the shape is excellent for those with medium to small hands, and we love the hot-swappable switches.
The Intel Core i5-11600K vs AMD Ryzen 5 5600X rivalry is a heated battle for supremacy right in the heart of the mid-range CPU market. AMD’s Ryzen 5000 processors took the desktop PC lead from Intel’s competing Comet Lake processors last year, upsetting our Best CPU for gaming recommendations and our CPU Benchmarks hierarchy. Intel’s response comes in the form of its Rocket Lake processors, which dial up the power to extreme levels and bring the new Cypress Cove architecture to the company’s 14nm process as Intel looks to upset AMD’s powerful Zen 3-powered Ryzen 5000 chips.
Intel has pushed its 14nm silicon to the limits as it attempts to unseat the AMD competition, and that has paid off in the mid-range, where Intel’s six-core Core i5-11600K weighs in with surprisingly good performance given its $237 to $262 price point.
Intel’s aggressive pricing, and the fact that the potent Ryzen 5 5600X remains perpetually out of stock and price-gouged, has shifted the conversation entirely. For Intel, all it has to do is serve up solid pricing, have competitive performance, and make sure it has enough chips at retail to snatch away the win.
We put the Core i5-11600K up against the Ryzen 5 5600X in a six-round faceoff to see which chip takes the crown in our gaming and application benchmarks, along with other key criteria like power consumption and pricing. Let’s see how the chips stack up.
Features and Specifications of AMD Ryzen 5 5600X vs Intel Core i5-11600K
Rocket Lake Core i5-11600K vs AMD Zen 3 Ryzen 5 5600X Specifications and Pricing
Chip | Suggested Price | Cores / Threads | Base (GHz) | Peak Boost (Dual/All Core) | TDP | iGPU | L3
AMD Ryzen 5 5600X | $299 (and much higher) | 6 / 12 | 3.7 | 4.6 | 65W | None | 32MB (1x32)
Intel Core i5-11600K (KF) | $262 (K) – $237 (KF) | 6 / 12 | 3.9 | 4.9 (TB2) / 4.6 | 125W | UHD Graphics 750 Xe 32EU | 12MB
The 7nm Ryzen 5 5600X set a new bar for the mid-range with six Zen 3 cores and twelve threads that operate at a 3.7-GHz base and 4.6-GHz boost frequency. Despite AMD’s decision to hike gen-on-gen pricing, the 5600X delivered class-leading performance at its launch, not to mention a solid price-to-performance ratio. Things have changed since then, though, due to overwhelming demand coupled with pandemic-spurred supply chain disruptions, both of which have combined to make finding the Ryzen 5 5600X a rarity at retail, let alone at the suggested $299 pricing.
Intel’s Core i5-11600K also comes with six cores and twelve threads, but Team Blue’s chips come with the new Cypress Cove architecture paired with the aging 14nm process. Intel has tuned this chip for performance; it weighs in with a 3.9-GHz base, 4.9-GHz Turbo Boost 2.0, and 4.6-GHz all-core clock rates. All of these things come at the expense of power consumption and heat generation.
Intel specs the 14nm 11600K at a 125W TDP rating, but that jumps to 182W under heavy loads, while AMD’s denser and more efficient 7nm process grants the 5600X a much-friendlier 65W TDP rating that coincides with a peak of 88W. We’ll dive deeper into power consumption a bit later, but this is important because the Core i5-11600K comes without a cooler. You’ll need a capable cooler, preferably a 280mm liquid AIO or equivalent air cooler, to unlock the best of the 11600K.
Meanwhile, the AMD Ryzen 5 5600X comes with a bundled cooler that is sufficient for most users, though you would definitely need to upgrade to a better cooler if you plan on overclocking. Additionally, a more robust cooler will unlock slightly higher performance in heavy work, like rendering or encoding. Still, you’d need to do that type of work quite regularly to see a worthwhile benefit, so most users will be fine with the bundled cooler.
Both the Core i5-11600K and Ryzen 5 5600X support PCIe 4.0, though it is noteworthy that Intel’s chipset doesn’t support the speedier interface. Instead, devices connected to Intel’s chipset operate at PCIe 3.0 speeds. That means you’ll only have support for one PCIe 4.0 M.2 SSD port on your motherboard, whereas AMD’s chipset is fully enabled for PCIe 4.0, giving you more options for a plethora of faster devices.
Both chips also support two channels of DDR4-3200 memory, but Intel’s new Gear memory feature takes a bit of the shine off its memory support. At stock settings, the 11600K supports DDR4-2933 in Gear 1 mode, which provides the best latency and performance for most tasks, like gaming. You’ll have to operate the chip in Gear 2 mode for warrantied DDR4-3200 support, but that results in performance penalties in some latency-sensitive apps, like gaming, which you can read about here.
For some users, the 11600K does have one big, insurmountable advantage over the Ryzen 5 5600X: it comes with the new UHD Graphics 750 engine, armed with 32 EUs based on the Xe graphics architecture, while all Ryzen 5000 processors come without integrated graphics. That means Intel wins by default if you don’t plan on using a discrete GPU.
Notably, you could also buy Intel’s i5-11600KF, which comes with a disabled graphics engine, for $25 less. At $237, the 11600KF looks incredibly tempting, which we’ll get to a bit later.
Winner: AMD
The Ryzen 5 5600X and the Core i5-11600K are close with six cores and twelve threads (and each of those cores has comparable performance), but the 5600X gets the nod here due to its bundled cooler and native support for DDR4-3200 memory. Meanwhile, the Core i5-11600K comes without a cooler, and you’ll have to operate the memory in sub-optimal Gear 2 mode to access DDR4-3200 speeds, at least if you want to stay within the warranty.
The Core i5-11600K comes with integrated graphics, so it wins by default if you don’t plan on using a discrete GPU. Conversely, you can sacrifice the graphics for a lower price point. AMD has no high-end chips that come with integrated graphics, though that will change by the end of the year when the Ryzen 5000 Cezanne APUs arrive.
Gaming Performance on AMD Ryzen 5 5600X vs Core i5-11600K
The Ryzen 5 and Core i5 families tend to be the most popular gaming chips, and given the big architectural advances we’ve seen with both the Zen 3 and Cypress Cove architectures, these mid-range processors can push fast GPUs along quite nicely.
That said, as per usual, we’re testing with an Nvidia GeForce RTX 3090 to reduce GPU-imposed bottlenecks as much as possible. Differences between test subjects will shrink with lesser cards (the kind you’ll most often see paired with this class of chip) or at higher resolutions. Below you can see the geometric mean of our gaming tests at 1080p and 1440p, with each resolution split into its own chart. PBO indicates an overclocked Ryzen configuration. You can find our test system details here.
[Image gallery: 1080p and 1440p gaming benchmark charts]
At stock settings at 1080p, the Core i5-11600K notches an impressive boost over its predecessor, the 10600K, but the Ryzen 5 5600X is 7.8% faster over the full span of our test suite. Overclocking the 11600K brings it roughly level with the stock Ryzen 5 5600X, but the overclocked 5600X configuration is still 3.6% faster.
As you would expect, those deltas will shrink tremendously with lesser graphics cards or with higher resolutions. At 1440p, the stock 5600X is 3.3% faster than the 11600K, and the two tie after overclocking.
Flipping through the individual games shows that the leader can change quite dramatically, with different titles responding better to either Intel or AMD. Our geometric mean of the entire test suite helps smooth that out to one digestible number, but bear in mind – the faster chip will vary based on the game you play.
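As an aside, the geometric mean we quote is the n-th root of the product of the per-game results, which keeps one outlier title from dominating the average. A minimal sketch with made-up FPS numbers (not our measured data):

```python
import math

def geomean(values):
    """Geometric mean: the n-th root of the product of n values,
    computed in log space to avoid overflow on long test suites."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-game average FPS for one chip at 1080p.
fps = [142.0, 98.5, 210.3, 175.1, 88.9]
print(round(geomean(fps), 1))  # one digestible number for the suite
```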
Notably, at recommended pricing the 5600X costs 14% more than the 11600K, and that’s if (a huge if) you can find the 5600X at that price. Opt for the graphics-less 11600KF model and the gap grows to 26%, again, assuming you can find the 5600X at recommended pricing.
Winner: AMD
Overall, the Ryzen 5 5600X is the faster gaming chip throughout our test suite, but be aware that performance will vary based on the title you play. This class of chip is often paired with lesser graphics cards, and most serious gamers play at higher resolutions; in both of those situations, you could be hard-pressed to notice the difference between the processors. However, it’s rational to expect that the Ryzen 5 5600X will leave a bit more gas in the tank for future GPU upgrades.
Pricing is the wild card, though, and the Core i5-11600K wins that category easily — even if you could find the Ryzen 5 5600X at suggested pricing. We’ll dive into that in the pricing section.
Application Performance of Intel Core i5-11600K vs Ryzen 5 5600X
[Image gallery: single-threaded application benchmark charts]
We can boil down productivity application performance into two broad categories: single- and multi-threaded. The first slide in the above album has a geometric mean of performance in several of our single-threaded tests, but as with all cumulative measurements, use this as a general guide and be aware that performance will vary based on workload.
The Core i5-11600K takes the lead at both stock and overclocked settings, by 3.8% and 1%, respectively. These are rather slim deltas, but it’s clear that the Rocket Lake chip holds the edge in lightly threaded work, particularly in our browser tests, which are a good indicator of general snappiness in a standard desktop PC operating system. We also see a bruising performance advantage in the single-threaded AVX-512-enabled y-cruncher benchmark.
The Core i5-11600K is impressive in single-threaded work, but the Ryzen 5 5600X isn’t far behind. It’s too bad that the 11600K’s lead in these types of tests doesn’t translate into leading gaming performance, as it historically has for processors that excel at single-threaded tasks.
[Image gallery: multi-threaded application benchmark charts]
Here we take a closer look at performance in heavily-threaded applications, which has long been the stomping grounds of AMD’s core-heavy Ryzen processors. Surprisingly, in our cumulative measurement, the Core i5-11600K is actually 2.5% faster than the 5600X at stock settings and is 1.8% faster after we overclocked both chips.
These are, again, slim deltas, and the difference between the chips will vary based on workload. However, the Core i5-11600K is very competitive in threaded work against the 5600X, which is an accomplishment in its own right. The substantially lower pricing is even more impressive.
Winner: Intel
Based on our cumulative measurements, Intel’s Core i5-11600K comes out on top in both single- and multi-threaded workloads, though by slim margins in both categories, and results will vary by application. However, given that the Core i5-11600K pulls out a few hard-earned wins on the application front at significantly lower pricing, this category of the Core i5-11600K vs Ryzen 5 5600X competition goes to Intel.
Overclocking of Ryzen 5 5600X vs Core i5-11600K
We have reached the land of diminishing returns for overclocking the highest-end chips from both AMD and Intel, largely because both companies are engaged in a heated dogfight for performance superiority. As a result, much of the overclocking frequency headroom is rolled into standard stock performance, leaving little room for tuners and making memory and fabric overclocking all the more important. There are still plenty of advantages to overclocking the midrange models in today’s Ryzen 5 5600X vs Core i5-11600K battle, though, but be aware that your mileage may vary.
Intel benefits from higher attainable clock rates, especially if you focus on overclocking a few cores instead of the standard all-core overclock, and it exposes a wealth of tunable parameters with its Rocket Lake chips. That includes separate AVX offsets for all three flavors of AVX, and the ability to set voltage guardbands. Intel also added an option to completely disable AVX, though that feature is primarily geared for professional overclockers. Rocket Lake also supports per-core frequency and hyper-threading control (enable/disable) to help eke out more overclocking headroom.
The Core i5-11600K supports real-time memory frequency adjustments, though motherboard support will vary. For example, this feature allows you to shift from DDR4-2933 to DDR4-3200 (or any other attainable memory frequency) from within Windows 10 without rebooting. Intel also supports live memory timing adjustments from within the operating system.
Intel has long locked overclocking to its pricey K-series models, while AMD freely allows overclocking with all SKUs on almost any platform. However, we see signs of some improvement here from Intel, as it has now enabled memory overclocking on its B560 and H570 chipsets across the board. That said, Intel’s new paradigm of Gear 1 and Gear 2 modes does reduce the value of memory overclocking, which you can read more about in our review.
AMD’s Ryzen 5000 chips come with innovative boost technology that consumes most of the available frequency headroom, so there is precious little room for bleeding-edge all-core overclocks. In fact, all-core overclocking with AMD’s chips is lackluster; you’re often better off using the auto-overclocking Precision Boost Overdrive 2 (PBO2) feature, which boosts multi-threaded performance. AMD also has Curve Optimizer features that leverage undervolting to increase boost activity.
Much of the benefit of the Ryzen 5000 series comes from its improved fabric overclocking, which then allows you to tune in higher memory overclocks. We hit a 1900-MHz fabric on our chip, allowing us to run the memory in a 1:1 mode at a higher DDR4-3800 memory speed than we could pull off with the 11600K at the same 1:1 ratio. It also isn’t uncommon to see enthusiasts hit DDR4-4000 in 1:1 mode with Ryzen 5000 processors. In that respect, Intel’s new Gear 1 and 2 memory setup isn’t as refined — you can adjust the 5600X’s fabric ratio to expand the 1:1 window to higher frequencies, while Intel has no comparable adjustable parameter.
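To put numbers on that 1:1 window: a DDR4 rating is a transfer rate, twice the actual memory clock, so a 1900-MHz fabric lines up exactly with DDR4-3800. A quick sketch (the helper function is our own illustration):

```python
def max_1to1_ddr4(fclk_mhz: int) -> int:
    """Highest DDR4 rating (MT/s) that keeps the memory clock and the
    Infinity Fabric clock in a 1:1 ratio; DDR4 transfers twice per clock."""
    return fclk_mhz * 2

print(max_1to1_ddr4(1900))  # 3800 -> the DDR4-3800 we ran on our 5600X
print(max_1to1_ddr4(2000))  # 4000 -> the DDR4-4000 some enthusiasts reach
```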
Winner: Tie
Both the Ryzen 5 5600X and the Core i5-11600K have a bit more overclocking headroom than their higher-end counterparts, meaning that there is still some room for gains in the mid-range. Both platforms have their respective overclocking advantages and a suite of both auto-overclocking and software utilities, meaning this contest will often boil down to personal preference.
Power Consumption, Efficiency, and Cooling of Intel Core i5-11600K vs AMD Ryzen 5 5600X
[Image gallery: power consumption and efficiency charts]
The Core i5-11600K comes with the same 125W TDP rating as its predecessor, but that rating is a rough approximation of power consumption during long-duration workloads. To improve performance in shorter-term workloads, Intel increased the PL2 rating (boost) to 251W, a whopping 69W increase over the previous-gen 10600K that also came with six cores.
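In rough terms, PL1 (the TDP) and PL2 (the boost rating) act as a pair of power caps: the chip can draw up to PL2 for a burst window, then falls back to PL1 under sustained load. A simplified sketch (the 56-second window is an illustrative default we’ve assumed; motherboards frequently raise or ignore it):

```python
# Simplified model of Intel's power limits: up to PL2 for a burst
# window (tau), then back down to PL1 (the TDP) for sustained load.
PL1_W, PL2_W = 125, 251  # Core i5-11600K ratings
TAU_S = 56               # assumed window; boards often raise or ignore it

def package_power_cap(seconds_into_load: float) -> int:
    """Package power cap in watts at a given time into a heavy load."""
    return PL2_W if seconds_into_load < TAU_S else PL1_W

for t in (0, 30, 60, 120):
    print(f"t={t}s cap={package_power_cap(t)}W")
```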
Power consumption and heat go hand in hand, so you’ll have to accommodate that draw with a robust cooler. We didn’t have any issues with the Core i5-11600K and a 280mm liquid cooler (you could get away with less), but we did log up to 176W of power consumption at stock settings during our Handbrake benchmark.
In contrast, the Ryzen 5 5600X sips power, reaching a maximum of 76W at stock settings during a Blender benchmark. In fact, a quick look at the renders-per-day charts reveals that AMD’s Ryzen 5 5600X is in another league in terms of power efficiency — you get far more performance per watt consumed, which results in lower power consumption and heat generation.
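Since efficiency is just work done per watt, the measured peaks alone tell much of the story. A toy comparison (it generously assumes identical throughput for both chips, which flatters the 11600K):

```python
# Peak package power we logged under heavy load (figures from above).
peak_watts = {"Ryzen 5 5600X": 76, "Core i5-11600K": 176}

# Assuming identical throughput in the same workload, perf-per-watt
# scales inversely with power draw:
baseline = peak_watts["Ryzen 5 5600X"]
for chip, watts in peak_watts.items():
    print(f"{chip}: {baseline / watts:.2f}x relative perf-per-watt")
```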
The 5600X’s refined power consumption comes via TSMC’s 7nm process, while Intel’s 14nm process has obviously reached the end of the road in terms of absolute performance and efficiency.
Winner: AMD
AMD wins this round easily with lower power consumption, higher efficiency, and less thermal output. Intel has turned the power up to the extreme to stay competitive with AMD’s 7nm Ryzen 5000 chips, and as a result, the Core i5-11600K pulls more power and generates more heat than the Ryzen 5 5600X. Additionally, the Core i5-11600K doesn’t come with a bundled cooler, so you’ll need to budget in a capable model to unlock the best the chip has to offer, while the Ryzen 5 5600X comes with a bundled cooler that is good enough for the majority of users.
Pricing and Value of AMD Ryzen 5 5600X vs Intel Core i5-11600K
AMD was already riding the pricing line with the Ryzen 5 5600X’s suggested $299 price tag, but supply of this chip is volatile as of the time of writing, to put it lightly, leading to price gouging. This high pricing comes as a byproduct of a combination of unprecedented demand and pandemic-spurred supply chain issues, but it certainly destroys the value proposition of the Ryzen 5 5600X, at least for now.
The Ryzen 5 5600X currently retails for $370 at Microcenter, which is usually the most price-friendly vendor, a $71 markup over suggested pricing. The 5600X is also $450 from Amazon (not a third-party seller). Be aware that the pricing and availability of these chips can change drastically in very short periods of time, and they go in and out of stock frequently, reducing the accuracy of many price-tracking tools.
In contrast, the Core i5-11600K can be found for $264 at Amazon, and $260 at Microcenter, which is surprisingly close to the $262 suggested tray pricing. Additionally, you could opt for the graphics-less Core i5-11600KF if you don’t need a discrete GPU. That chip is a bit harder to find than the widely-available 11600K, but we did find it for $240 at Adorama (near suggested pricing).
Here’s the breakdown (naturally, this will vary):
Chip: Suggested Price | Current (volatile for 5600X) | Price Per Core
Core i5-11600K: $262 | $262 to $264 | ~$43.67
Ryzen 5 5600X: $299 | $370 to $450 | ~$61.67 to $75.00
Core i5-11600KF: $237 | $240 (spotty availability) | ~$39.50
The Core i5-11600K doesn’t come with a cooler, so you’ll have to budget that into your purchasing decision.
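For reference, the per-core figures above are just price divided by the six physical cores; a trivial sketch:

```python
def per_core(price_usd: float, cores: int = 6) -> float:
    """Price per physical core; both the 11600K and 5600X have six."""
    return price_usd / cores

print(per_core(262))                 # Core i5-11600K suggested: ~$43.67
print(per_core(237))                 # Core i5-11600KF suggested: ~$39.50
print(per_core(370), per_core(450))  # 5600X street range: ~$61.67 to $75.00
```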
Winner: Intel
Even at recommended pricing for both chips, the Core i5-11600K’s aggressive price makes it a tempting proposition, but Intel wins this stage of the battle convincingly based on one almost insurmountable advantage: you can actually find the chip readily available at retail for very close to its suggested tray pricing. With much cheaper pricing on both a per-core and absolute basis, the Core i5-11600K is the better buy, and if you’re looking for an even lower cost of entry, the Core i5-11600KF is plenty attractive if you don’t need integrated graphics.
AMD’s premium pricing for the Ryzen 5 5600X was a bit of a disappointment for AMD fans at launch, but the chip did offer enough advantages to justify the price tag. However, the arrival of the Core i5-11600K with its disruptive pricing and good-enough performance would probably merit a slight pricing adjustment from AMD, or the release of a non-X model, if these were normal times. These aren’t normal times, though, and instead of improving its value proposition, AMD is facing crippling supply challenges.
Bottom Line
Features and Specifications: AMD
Gaming: AMD
Application Performance: Intel
Overclocking: Tie
Power Consumption, Efficiency, and Cooling: AMD
Pricing and Value Proposition: Intel
Total: AMD 4, Intel 3 (the overclocking tie counts toward both)
Here’s the tale of the tape: AMD wins this Ryzen 5 5600X vs Intel Core i5-11600K battle with a tie in one category and a win in three others, marking a four-to-three victory in favor of Team Red. Overall, the Ryzen 5 5600X offers up a superior blend of gaming performance, power consumption and efficiency, and a bundled cooler to help offset the higher suggested retail pricing, remaining our go-to chip recommendation for the mid-range. That is, if you can find it at or near suggested pricing.
Unfortunately, in these times of almost unimaginably bad chip shortages, the chip that you can actually buy, or even find anywhere near recommended pricing, is going to win the war at the checkout lane. For now, Intel appears to be winning the supply battle, though that could change in the coming months. As a result, the six-core twelve-thread Core i5-11600K lands at a friendly $262 price point, making it much more competitive with AMD’s $299 Ryzen 5 5600X, which currently sells far above suggested pricing due to shortages.
The Core i5-11600K has a very competitive price-to-performance ratio compared to the Ryzen 5 5600X in a broad swath of games and applications. The 11600K serves up quite a bit of performance for a ~$262 chip, and the graphics-less 11600KF is an absolute steal if you can find it near the $237 tray pricing. If you don’t need an integrated GPU, the KF model is your chip.
Even if we compare the chips at AMD’s and Intel’s standard pricing, the Core i5-11600K’s incredibly aggressive price makes it a potent challenger with a solid value proposition. While the Core i5-11600K might not claim absolute supremacy, its mixture of price and performance makes it a solid buy if you’re willing to overlook the higher power consumption.
Most gamers would be hard-pressed to notice the difference when you pair these chips with lesser GPUs or play at higher resolutions, though the Ryzen 5 5600X will potentially leave you with more gas in the tank for future GPU upgrades. The Ryzen 5 5600X is the absolute winner, though, provided you can find it anywhere close to the suggested retail price.
As Vergecast listeners know, I am a sucker for car phone mounts and chargers. They are a perfect gadget for the modern age — a design problem with no perfect solution, price points that usually land at the higher end of the impulse-buy zone, and completely well-suited for targeted Instagram ads. “People who have bought a Qi-based car mount in the last year,” the marketing director of a tiny accessory company sternly instructs the Facebook ad-targeting system. “Find them and relentlessly pressure them into buying our product which is at best marginally better than the one they have.”
Reader, it works.
All of this means I was very excited when Apple added MagSafe charging to the new iPhone 12 line. A series of magnets aligns a wireless charger to the back of the phone, and has enough attachment strength to — yes — hold the phone on a car mount. A dream: you get in the car, seamlessly dink! your phone onto the mount, and drive away, laughing at the suckers fumbling with their cradles and motorized friction arms and other unwieldy ideas. Magnets, baby. How do they work.
Unfortunately, it has been six months since the iPhone 12 was announced, and there is a pitiful shortage of MagSafe car chargers. In fact, there are no officially sanctioned MagSafe car chargers. Instead, there is this Belkin Car Vent Mount PRO with MagSafe, which, as the name suggests, allows you to mount a phone to your vents with MagSafe, in, um, a professional way. However, it does not charge your phone.
I have been using a review unit of the Belkin Car Vent Mount PRO with MagSafe, or BCVMPwM, for a couple months now. It is at once supremely satisfying — dink! — and also tremendously frustrating. Like all vent mounts, the weight of the phone is enough to pull the vent louvers down over time, especially if you have a large phone like my iPhone 12 Pro Max. The magnets are indeed strong enough to hold even that phone in place, but if you go over any particularly huge bumps, something will fall down — the phone off the mount, or the mount off the vent.
“Dammit, BCVMPwM,” you will yell, using the full name of this $40 promise to yourself. “Why aren’t you everything I hoped and dreamed of when I looked at the marketing photos on social media?” Then you will put everything back into place at the next stop light, sheepishly glance at your partner, and slowly realize they have completely stopped paying attention to these sorts of antics anymore. You need new antics. You need to add Linux to your smart home.
Stop it. Have I mentioned that the BCVMPwM does not charge your phone? No, this is not a charger. For that, you still have to plug in a Lightning cable, which sort-of-maybe makes sense if your car does not have wireless CarPlay and you need to plug in anyway — but there you are, plugging a cable into your phone, which is the complete opposite of the dink! Magnet Experience. Your old car mount, with the horrible friction arms, had a built-in Qi charger. There are hacky wireless CarPlay adapters! People say they are kinda slow and have audio latency issues, but c’mon — a single dink! to mount your phone, charge it, and instantly connect to CarPlay? Now that’s the good shit.
Why isn’t there an approved MagSafe car mount with built-in wireless charging after six months? Why do Apple accessory ecosystems always seem so petrified, in every sense of the word? This is the easiest win of all time, but instead, there is the BCVMPwM. It offers you a glimpse of a dream. Then it falls down. There are better unlicensed ones that might burn your car to the ground that you can buy on Amazon. It is the paradigmatic Apple accessory.
After pausing its review of emoji submissions, the Unicode Emoji Subcommittee is back with tips for successful submissions, chair Jennifer Daniel announced. Anything you submit now could come to phones as soon as 2023 — but if you want to fine-tune your emoji, she’s got some suggestions.
A good emoji has multiple uses, can be used with other emoji to create an emoji phrase, represents something new, and is distinct from existing emoji, Daniel writes.
We won’t get new emoji this year; the Unicode Consortium — that’s the panel in charge of emoji releases — delayed the version release. The delay was because Unicode relies on volunteers, and last year, the pandemic overwhelmed them. The next planned release is in 2022.
Our own Jay Peters has contributed to your emoji vocabulary with a yawn and a waffle. He submitted his proposal in 2017, and the new emoji hit phones everywhere in 2019.
(Pocket-lint) – Back at the end of January 2021, Fujifilm announced this, the X-E4, one of the more junior models in its mirrorless camera line-up. It’s not the total baby of the range, though; that title goes to the X-T200. The X-E4’s main difference to that camera? It brings the coveted X-Trans CMOS sensor type into the fray.
When we first saw the announcement of the X-E4, we thought it looked a little like the fixed-lens X100V, except with the obvious addition of an interchangeable lens mount. And given how fond we were of the X100V, that stood this interchangeable equivalent in good stead. Except this one adds a flip-forward screen to the series for the first time.
So is the Fujifilm X-E4 a real mid-range champ, or does it lack the innovation to elevate it above its X-E3 predecessor and surrounding X series cameras?
Design & Lens Mount
Fujifilm X mount (for XF lenses)
Dimensions (body): 121 x 73 x 33mm / Weight: 364g
Vari-angle mounted screen, with touch controls (3-inch, 1,620k-dot LCD)
If you’ve been thinking about a Fujifilm camera then there are three current models that sit fairly close to one another: the X-T200, the X-E4 on review here, and the higher-end X-T4. So how do they differ?
The X-E4 sits in the middle of the trio, with a more advanced sensor technology than you’ll find in the lower-end X-T200 – but other features are otherwise fairly similar. The higher-end X-T4, meanwhile, has the exact same sensor as you’ll find here – so while the ‘T’ model doesn’t mean higher quality images, it has more dedicated control dials and can shoot much faster.
The X-E4 is designed with small scale in mind, too, so attaching a 10-24mm f/4 lens (not included) makes it look a bit bigger. Really, Fujifilm intends to sell this camera with the 27mm pancake lens, which is sold as a kit, because that really enhances the small scale – but we’d only suggest going that route if you know you’ll want to buy other lenses later; otherwise you may wish to look to the X100V instead (if you can find it for a good price, anyway).
squirrel_widget_231912
Most prominently, the X-E4 adds a flip-forward LCD screen for the first time in the X series, enabling that selfie or vlogging angle for those who need to frame themselves. However, the design of the camera – there’s also a built-in electronic viewfinder (EVF) – means you cannot simply flip the screen up in one swift movement. Although it’s not complex, we find the two-part movement to get the screen facing forward rather fiddly. And even then the EVF’s marginal protrusion stops the screen from sitting completely vertical.
The screen quality is otherwise perfectly decent, at 1,620k-dots, and the little nub on the side makes it easy to tilt by 90 degrees (or thereabouts) in a single motion – which is handy for waist-level work. The touchscreen is highly responsive, which is great for quick reactions, yet annoying because we took heaps of unwanted pictures while the camera was in low-power mode between shots.
A big part of any X series camera is the style, though, with this silver and black finish the epitome of retro cool. The X-E4 is made primarily from magnesium alloy, which gives it a robust feeling in the hand. However, it’s not weather-resistant like the pricier X-T4, so if you’re keen to always run around in the rain then this might not be the choice body for you.
The camera’s dials aren’t just there to look pretty either – you can quickly control the shutter speed and exposure compensation via their individual dedicated dials (many XF lenses have aperture control rings as the third piece of the puzzle). Sadly, there’s no lock on the exposure compensation dial, which we found was a little too easy to knock out of place (and so we took a number of images at +/-0.7EV).
Want to point and shoot? No problem. The X-E4 can be set to auto in every regard so you can just snap away. It’ll even apply various filters – Toy camera, Miniature, Pop color, High-key, Low-key, Dynamic tone, Soft focus – if you want to get a bit ‘arty’ with results.
Performance
Battery: 1,260mAh (circa 450 shots per charge)
Autofocus system: 117 selectable areas
Face Detection & Eye Detection AF
Low-light focus: to -7EV
Adjustable AF point size
Up to 8fps burst
The X-E4’s focus system is an echo of the X-T4’s, too. The camera uses a massive 2.16 million phase-detection pixels embedded across its sensor’s surface, designed to cover the full width from edge to edge. That means you can focus anywhere in the field of view, as far vertically or horizontally as you wish, and still acquire the same focus ability as you would in the centre.
The autofocus system offers up to 425 areas – 117 of them selectable – which can be further reduced to simplify operation as you wish. The AF point can be adjusted between a variety of point sizes, too, using the front thumbwheel; the miniature joystick to the rear, meanwhile, handles repositioning with speed – if you’re not using the touchscreen.
However, there’s still no Panasonic Lumix S1-style Pinpoint mode, which we always miss when using other brands’ mirrorless cameras. Pinpoint is great for still life work, as it enables really specific focus – not that the X-E4 struggles, but you may find focus lands a millimetre in front of or behind where you expect, based on available contrast, for example.
Now we wouldn’t say the autofocus is the very best going for moving subjects, but it’s still highly capable. It’s hard to ignore Sony’s progress in this department, really, as it excels in fast-moving subject capture.
The 8 frames per second (8fps) burst shooting is also capable, although it’s around half the rate of the X-T4 – yet another clue to the X-E4’s target audience.
Autofocus is said to be good to -7EV, which means really dim conditions. With the curtains closed and not much light available, the camera had few qualms in capturing – even when the sensitivity was forced to max out at ISO 12,800 as a result.
In terms of longevity, the X-E4’s battery is relatively high capacity, capable of delivering 450 shots per charge or thereabouts. This will vary depending on the screen’s on time, how much movie shooting you do, and so forth. There is a low-power mode that auto-activates by default, though, dropping the rear screen to a low brightness and super-low refresh rate to preserve battery – while remaining instant to reactivate when you want it for that next shot.
Recharging takes place via USB-C, much like an Android phone, but you’ll need a 15W wall charger for the fastest possible recharge times. It’ll take about three hours to recharge the cell, which isn’t especially quick, and a low-power USB port will take three or four times longer than that. In short: don’t expect plugging it into the side of your computer to serve the same result, as it won’t.
Image Quality
26.1-megapixel X-Trans CMOS 4 sensor
4K at 30fps, Full HD (1080p) at 60fps
Sensitivity: ISO 160 to 12,800
Inside, this lightweight camera houses the same 26.1-megapixel X-Trans CMOS 4 and X-Processor 4 combination as you’ll find in the X-T4 – so quality is, in effect, one and the same. It’s lens dependent, of course, as that’s a major part of what attributes part of the clarity and sharpness of an image.
[Sample image, ISO 200 – Pocket-lint]
This sensor type is backside illuminated, with the copper wiring placed beneath the photodiodes in order to create a cleaner signal path. But the real sell is the X-Trans CMOS aspect, which uses Fujifilm’s unique colour array, not the typical Bayer array, taking advantage of a larger repeating sequence to determine colour results.
Fujifilm shots tend to look very natural as a result, sometimes a little cooler in appearance, but there are a lot of options within the camera to manipulate as you please – including traditional film stock equivalents, if you want to shoot Velvia for added punch, or Provia for softer portrait tones.
We’ve often praised Fujifilm for its image quality prowess, a trend that the X-E4 continues. It’s handled our various snaps well in terms of exposure, colour balance, scale and detail. The real sweet spot is in the lower to medium ISO sensitivities, as higher up the range things begin to reveal a lot more image noise – not to the point of destruction, as such, but detail drifts away and processing is more apparent, even from ISO 3200.
That you’re getting Fujifilm’s current best-of-best (well, ignoring its medium format line-up) in a camera that sits in the middle of the range is impressive.
Verdict
Although the X-E4 is the first Fujifilm X series camera to offer a screen that can face forward, we don’t actually think that’ll be the main appeal for its target audience (it’s also a bit fiddly to position, as noted) – as much as the spec can cater for vloggers wanting to shoot 4K or Full HD video.
The real appeal of the X-E4 lies in a range of points: from the top-tier image quality of the X-Trans CMOS 4 sensor, to the small-scale body and retro chic design, to the capable autofocus system and variety of filters and film stocks.
If you don’t want the super-fast shooting of the X-T4 and don’t need the weather-sealing either, then the X-E4 wraps much of its higher-spec cousin’s feature set into a smaller, tidier body with a smaller price tag.
Also consider
Sony A6400
Not the most up-to-date Sony – that title goes to the over-four-figures A6600 – but the A6400 is a great example of small scale with big capabilities. And it did the whole forward-facing screen feature first, too.
Read our full review
squirrel_widget_147500
Fujifilm X-T4
Want that water resistance and almost double-speed burst mode? That’s where the X-T4 comes into play. It’s far pricier, and it’s larger too, but it’s all-round more accomplished – and will feel better balanced with larger lenses, too, if that’s your future thinking.
Read our full review
squirrel_widget_188716
Writing by Mike Lowe.