At today’s conference, Huawei presented its new flagship smartphones: the Mate 40, Mate 40 Pro and Mate 40 Pro+. These models are a show of the manufacturer’s strength, meant to once again win over users for whom the lack of Google services is an obstacle to buying Huawei hardware. Looking at how powerful the new units are and how their design turned out, it must be admitted that the creators made a real effort. As always with Huawei’s flagships, we can expect very well-developed photographic capabilities, provided here by a system of three cameras, the main one boasting a 50 Mpix resolution.
The Huawei Mate 40, Mate 40 Pro and Mate 40 Pro+ smartphones have just been officially presented. We take a look at the manufacturer’s most important flagships for 2020.
As with last year’s Mate series, Huawei again went with two basic variants of the smartphone, although the Pro option also comes as a Pro+, which gives us a total of three variants. First, let’s focus on the Mate 40, because it may be the biggest attraction of this year’s product portfolio from the Chinese manufacturer. The housing, as you might guess, consists of glass on the front and back and an aluminum frame. The screen is a 6.5-inch, HDR10+-compliant Flex OLED with a resolution of 2376 x 1080 pixels, refreshed at 90 Hz. There is no 120 Hz or even 144 Hz here but, as they say, “you can’t have it all.” The glass pane overlaps the sides.
Physical buttons return to the side frames of a smartphone with an IP68 waterproofing rating. Performance in the Huawei Mate 40 Pro is handled by the Kirin 9000 5G chip, made on a 5 nm process, with Mali-G78 MP24 graphics, 8 or 12 GB of RAM and 256 or 512 GB of UFS 3.1 storage. The processor consists of one Cortex-A77 core clocked at 3.13 GHz, three Cortex-A77 cores working at 2.54 GHz and four Cortex-A55 cores clocked at 2.04 GHz. The manufacturer has not forgotten to include solutions such as WiFi 6, Bluetooth 5.2, NFC and USB-C (3.1). The 4400 mAh battery (4500 mAh in the Pro+ version) that went into the Mate 40 Pro charges at 66 W in about 40 minutes, and there is also 50 W wireless charging. Unfortunately, we will not find a 3.5 mm jack audio connector here, and dual SIM is a hybrid option (nano SIM + nano memory).
Photography is just as interesting. The camera system on the back consists of: a 50 Mpix main unit with an f/1.9 aperture, a 12 Mpix telephoto lens with an f/3.4 aperture and a 20 Mpix ultra-wide-angle lens with an f/1.8 aperture. There is also a ToF sensor. The Mate 40 Pro records video at a maximum of 8K at 30 FPS, or 4K at 60 FPS. Selfies are handled by a 13 Mpix camera with an f/2.4 aperture. In the Mate 40 Pro+ variant we additionally get a 3D face scanner (3D Face Unlock), which means the fingerprint sensor will not be the only viable method of biometric security. The smartphone is available in gray, white and silver color versions.
On the Huawei Mate 40 we only find a single selfie camera, without a three-dimensional face scanner. The Mate 40 Pro+, however, can boast not a single but a double telephoto lens for optical zoom, and this more expensive smartphone is made of black or white ceramic. Other visual aspects remain unchanged. We will not find Google services on any of these smartphones; the software is EMUI 11 on top of Android 10. Huawei also showed the FreeBuds Studio headphones, the Huawei x Gentle Monster Eyewear II glasses, the Sound speaker and the Watch GT 2 Pro smartwatch in a Porsche Design version.
by Mattia Speroni, published on 21 October 2020, at 19:21
Canon has presented a new APS-H-format sensor with a resolution of 250 MPixel, designed not for consumer use but for industrial products ranging from microscopes to pixel-level analysis of displays in production.
Canon announced a new sensor with a resolution of 250 MPixel in the APS-H format (larger than APS-C and smaller than full-frame). This is not a solution intended for commercial cameras but for other types of products, still in the field of imaging.
The APS-H sensor measures 29.4 x 18.9 mm, with pixels measuring 1.5 μm, and will be available in both color and monochrome versions to suit various needs. Some of the uses Canon envisions for this new product are display panel inspection, video production, digital archiving, video surveillance and use in microscopes.
Detailed images can thus be obtained thanks to the 250 MPixel resolution, or digital zoom can be used to reach the areas of interest for analysis. To manage all the data, the signal readout speed touches 1.25 billion pixels/s, avoiding delays in response (we are talking about roughly 5 fps at 8 bit or 3.1 fps at 12 bit).
Furthermore, if the customer’s needs do not require exploiting the entire sensor area, higher transmission speeds can be reached: 8K at 24 fps, 4K at 30 fps and FHD at 60 fps, all at 10 bit. In the demonstration video released by Canon, some of the applications described above can be seen, such as the enlargement of details (butterfly wings) or the shooting of large landscape areas, showing the potential of this new high-resolution sensor.
After a longer than usual wait, the Huawei Mate 40 Pro is ready to be introduced this Thursday, on October 22. There have been limited leaks and some may prove to be inaccurate. This latest bundle of info has been compiled by the reliable Roland Quandt and should be the most accurate yet as it comes so close to the launch.
The phone will feature a display that curves around the sides of the frame. The design is meant to make the side bezels disappear when you look at the display head-on. The new Mate will be curvier than the P40 Pro, but without sacrificing the side buttons like the 2019 Mate did.
The display will measure 6.76”, the largest in the series and will be slightly sharper with a pixel density of 456 ppi, thanks to a screen resolution of 1,344 x 2,772 px. There’s no word on the refresh rate, but at least 90Hz seems like a no-brainer.
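As a quick sanity check of that density figure, ppi is just the diagonal pixel count divided by the screen diagonal in inches. A minimal sketch using the leaked numbers (file name is ours; build with `gcc ppi.c -lm`):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Leaked Mate 40 Pro panel: 1,344 x 2,772 px on a 6.76" diagonal */
    double w = 1344.0, h = 2772.0, diag_in = 6.76;
    double diag_px = sqrt(w * w + h * h);      /* diagonal in pixels */
    printf("%.0f ppi\n", diag_px / diag_in);   /* prints: 456 ppi    */
    return 0;
}
```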
This leak paints a different picture from what we saw in early renders. There’s no periscope for one, instead a classic 12 MP tele camera with 5x optical zoom (125 mm focal length) will be used (you can check out an official camera sample).
Another interesting change is that the Laser autofocus will be paired with an optical depth sensor. This will be used to improve autofocus, though it’s not yet clear how. There will be a depth sensor on the front too, sharing the pill-shaped punch hole with the 13 MP selfie camera – beyond bokeh effects, this will be used to detect hand gestures.
The main camera will have a 50MP sensor and a lens with f/1.9 aperture and optical image stabilization. It will record 8K video and will feature both a dual tone LED flash and an ambient light color sensor for better color rendering. The third camera will be a 20MP ultrawide module with an f/1.8 aperture.
The Huawei Mate 40 Pro will be powered by the Kirin 9000, the first 5 nm chipset to feature an integrated 5G modem. However, this may only be sub-6 GHz (Chinese carriers are still in the planning stages of a mmWave rollout).
Anyway, the CPU of the chipset will use Cortex-A77 for its big cores, with a prime core clocked at 3.13 GHz and three more A77s running at 2.54 GHz, plus four A55 little cores at 2.04 GHz. The GPU will use ARM’s Mali-G78 design and, as we’ve seen from early benchmarks, the Kirin leaves the Snapdragon 865+ behind in terms of graphics performance.
For Europe, Huawei is reportedly planning a version with 8GB of RAM plus 256GB of UFS 3.1 storage. The version for China will go up to 12GB of RAM. Storage will be expandable through Nano Memory cards.
The new Mate will exceed all previous Huawei models with 65W wired fast charging support, battery capacity will be 4,400mAh, so about the same as last year. Wireless charging (including reverse charging) will be supported too, though we don’t know the speeds of those.
The phone will launch with Android 10 + EMUI 11 out of the box, paired with Huawei Mobile Services (as Google services are still locked behind a trade embargo). It may take until next year for the new Mates to arrive in Europe, though the phones will probably be out in China before the end of 2020.
Rumor has it that the October 22 event will also bring a Mate 30 Pro E, a revamped version of last year’s model. This may be able to squeeze through a loophole and have Google Play pre-installed (just like the P30 Pro New Edition did), but we’ll find that out on Thursday.
Huawei Mate 40 Pro
We can expect special editions of the Huawei Mate 40 Pro this year as well – an RS model that will feature Porsche Design looks and a Pro+ model which will upgrade the camera. Whether these reach Europe remains to be seen (though it’s quite unlikely, at least for the Pro+ model).
A leak showing Mate 40 Pro+ renders and the retail box suggests that instead of a donut, the rear camera design will be based on an octagon. Huawei officially teased the Mate 40 design, showing a similar angular camera island though not an octagon.
In this article, which our team will regularly update, we will maintain a growing list of information pertaining to upcoming hardware releases based on leaks and official announcements as we spot them. There will obviously be a ton of rumors on unreleased hardware, and it is our goal to—based on our years of industry experience—exclude the crazy ones. In addition to this upcoming hardware release news, we will regularly adjust the structure of this article to better organize information. Each time an important change is made to this article, it will re-appear on our front page with a “new” banner, and the additions will be documented in the forum comments thread. This article will not leak information we signed an NDA for.
Feel free to share your opinions and tips in the forum comments thread and subscribe to the same thread for updates.
Adds AVX-512 instructions (so far available only on the HEDT platform, since Skylake-X). New instruction sets: AVX512F, AVX512CD, AVX512DQ, AVX512BW, and AVX512VL. New extensions: AVX512_IFMA and AVX512_VBMI (see the short example after this list)
20-30% broadening of various number crunching resources, wider execution window, more AGUs
18% IPC gains vs Cascade Lake
SHA-NI and Vector-AES instruction sets, up to 75% higher encryption performance vs. “Skylake”
Supports unganged memory mode
Integrated GPU based on new Gen11 architecture, up to 1 TFLOP/s ALU compute performance
Integrated GPU supports DisplayPort 1.4a and DSC for 5K and 8K monitor support
Gen11 also features tile-based rendering, one of NVIDIA’s secret-sauce features
Integrated GPU supports VESA adaptive V-sync, all AMD FreeSync-capable monitors should work with this
Ice Lake introduces Intel TME (Total Memory Encryption), also Intel Platform Firmware Resilience (Intel PFR)
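For context on what the AVX-512 bullet above means in practice, here is a minimal, illustrative AVX512F sketch; the file name and build flag are our own choices, and it only runs on CPUs that expose AVX-512 (such as Skylake-X or Ice Lake):

```c
/* Build: gcc -mavx512f avx512_add.c */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[16], b[16], c[16];
    for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 1.0f; }

    __m512 va = _mm512_loadu_ps(a);     /* load 16 floats (512 bits)   */
    __m512 vb = _mm512_loadu_ps(b);
    __m512 vc = _mm512_add_ps(va, vb);  /* 16 additions, 1 instruction */
    _mm512_storeu_ps(c, vc);

    printf("%.1f\n", c[15]);            /* 15.0 + 1.0 = 16.0           */
    return 0;
}
```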
Intel Core i9-10990XE
Release Date: unknown, originally early 2020, seems cancelled now
22 cores + HyperThreading
Uses Cascade Lake-X architecture
LGA2066 Socket
1 MB L2 cache per core, 30.25 MB shared L3 cache
4 GHz base, up to 5 GHz boost
Roughly matches Threadripper 3960X in Cinebench
Intel Rocket Lake [updated]
Release Date: Q1 2021
Succeeds “Comet Lake”
Variants: Rocket Lake-“S” (mainstream desktop), -“H” (mainstream notebook), -“U” (ultrabook), and -“Y” (low power portable)
14 nanometer production process
Seems to be limited to eight cores (two fewer than the 10-core Comet Lake)
Some indication of mixed HyperThreading configurations, for example 8-core, 12-thread
Uses “Cypress Cove” core, which seems to be a backport of “Willow Cove” to 14 nm process
Up to 10% IPC improvement over Skylake
No FIVR, uses SVID VRM architecture
125 W maximum TDP
Compatible with 400-series chipsets
Possible they release 500-series chipsets with added features
Socket LGA1200 (just like Comet Lake)
Supports PCI-Express 4.0
20 PCIe lanes
Intel Xe integrated graphics, based on Gen 12 with HDMI 2.0b and DisplayPort 1.4a
Engineering Sample: Family 6, Model 167, Stepping 0, 8c/16t, 3.4 GHz base, 5.0 GHz boost
Engineering Sample: Family 6, Model 167, Stepping 0, 8c/16t, 3.2 GHz base, 4.3 GHz boost
Intel Willow Cove and Golden Cove Cores
Release Date: 2021
Succeeds “Sunny Cove”
Willow Cove improves on-die caches, adds more security features, and takes advantage of 10 nm+ process improvements to increase clock speeds versus Sunny Cove
Golden Cove will add significant single-thread (IPC) increases over Sunny Cove, add on-die matrix multiplication hardware, improved 5G network-stack HSP performance, and more security features than Willow Cove
Intel Alder Lake [updated]
Release Date: H2 2021
Mixes CPU cores of various processing power (and energy consumption), similar to big.LITTLE designs for mobile devices
Combines up to eight Golden Cove with up to eight Gracemont (Atom) cores
These cores have two different instruction sets, for example Golden Cove has AVX-512, TSX-NI and FP16, which Gracemont lacks
10 nm process
Uses Socket LGA1700
Alder Lake for desktop: 37.5 mm x 45 mm package
Desktop CPUs come in 125 W and 80 W
Could use Foveros 3D Stacking technology
Possible CPU configurations 8+8+1 (8 big cores, 8 small cores, GT1 integrated), and 6+0+1 (6 big cores, no small cores and GT1 integrated)
Includes Gen12 Xe iGPU
DDR5 memory support
PCI-Express 5.0 support
Includes CLDEMOTE instruction, a hint that demotes cache lines from the core’s private caches toward the shared cache (a usage sketch follows this list)
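A hedged sketch of how CLDEMOTE is meant to be used; the producer-consumer scenario and build flag are illustrative assumptions, not Intel guidance, and since the instruction is only a hint the code stays correct on CPUs that ignore it:

```c
/* Build: gcc -mcldemote cldemote.c */
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* After a producer writes a buffer, demoting its lines toward the
   shared cache can cut latency for a consumer on another core. */
static void publish(uint8_t *buf, size_t len) {
    memset(buf, 0x5a, len);               /* producer writes the data */
    for (size_t i = 0; i < len; i += 64)  /* 64-byte cache lines      */
        _cldemote(buf + i);               /* hint: push line to L3    */
}

int main(void) {
    static uint8_t buf[4096];
    publish(buf, sizeof buf);
    printf("published %zu bytes\n", sizeof buf);
    return 0;
}
```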
Intel Sapphire Rapids
Release Date: H2 2021
Successor to Cooper Lake
8-channel DDR5
Uses Socket LGA4677
For enterprise / data center
10 nm+ production process
Willow Cove CPU cores
PCIe 5.0
Probably 7 nm process
Platform name: Eagle Stream
Includes CLDEMOTE instruction, a hint that demotes cache lines toward the shared cache (see the sketch under Alder Lake above)
Intel Grand Ridge [added]
Release Date: 2022 or later
Produced on 7 nm HLL+ process
Successor to Atom “Snow Ridge”
24 cores across 6 clusters with 4 cores each
4 MB L2 per cluster, plus L3 cache
Uses Gracemont CPU core
Dual-channel DDR5
PCI-Express Gen 4 with 16 lanes
Intel Elkhart Lake
Release Date: Unknown
Produced on 10 nm process
Designed for next-gen Pentium Silver and Celeron processors
CPU cores use Tremont architecture
GPU uses Gen 11
Dual-core and Quad-core configurations
Single-channel memory controller with DDR4 and LPDDR4/x support
Engineering sample: 1.9 GHz, 5/9/12 W TDP
Intel Meteor Lake [updated]
Release Date: 2022 or 2023
Succeeds “Alder Lake”
New microarchitecture, more advanced than “Willow Cove”, possibly “Golden Cove”
As of late 2020 Intel is adding support for Meteor Lake to the Linux Kernel
Lisa Su in a CES 2020 interview said “we will have a high-end Navi […] it is important”
AMD CFO: “Big Navi” will be a halo product and not merely a lofty performance increase over the RX 5700 XT to make AMD competitive against GeForce “Ampere.”
Adds support for DirectX 12 Ultimate: variable-rate shading and hardware-accelerated ray-tracing (DXR version 1.1)
AMD RDNA 2 [updated]
Announcement: October 28
Lisa Su: “we will have our new next-generation RDNA architecture that will be part of our 2020 lineup”
TSMC, 7 nm Plus (probably not 7 nm+ EUV)
Up to 18% higher transistor density
Higher clock speeds than RDNA
50% better performance per Watt than RDNA, twice the efficiency of GCN
Adds variable rate shading
Adds support for BFloat16
Adds AV1 video decode hardware acceleration
Adds hardware raytracing acceleration (DXR version 1.1)
Supports Microsoft DirectX 12 Ultimate API (DXR, VRS, Mesh Shaders & Sampler Feedback)
Same GPU architecture powers PlayStation 5 & Xbox Series X
AMD Radeon RX 6500 [added]
Release date: unknown
40 Compute Units / 2560 Stream Processors
192-bit GDDR6 memory
7 nanometer production process
RDNA2 architecture
Codename “Navy Flounder”
Below $250
AMD RDNA 3
Release Date: Late 2021 or 2022
“Advanced Node”, probably TSMC 6 nm or 5 nm
AMD CDNA and CDNA2 [updated]
Release Date: 2020 for CDNA and 2021-2022 for CDNA2
New architecture that focuses on compute for “Radeon Instinct”
TSMC 7 nm or 7 nm+
128 Compute Units = 8192 shaders
Arcturus engineering sample has 120 CUs (7680 shaders), 878 MHz for the core clock, 750 MHz SoC clock, and 1200 MHz memory clock
Compute only—Rasterization, display controllers and media encoding hardware removed
SDV OpenCL performance in Geekbench: 55373 points, with 3.53 Gpixels/s in “Sobel,” 1.30 Gpixels/s in Histogram Equalization, 16 GFLOPs in SFFT, 1.62 Gpixels/s in Gaussian Blur, 4.51 Msubwindows/s in Face Detection, 2.88 Gpixels/s in RAW, 327.4 Mpixels/s in DoF, and 13656 FPS in Particle Physics. Roughly matches 11 CU Vega Picasso IGP
SDV is 15.2 cm long, 96 Execution Units, PCI-Express x16, slot only power (so 75 W), 3x DisplayPort, 1x HDMI, high noise levels
Up to 2x performance uplift for Intel Xe integrated graphics over previous Gen 11
Using a multi-chip design approach, with Foveros, Intel Xe scales up to 512 EUs with 500 W
The 512 EU model is datacenter only; a 300 W, 256 EU model targets enthusiast markets
Targeted at 1080p gameplay, CES demonstration showed working gameplay on Destiny 2
Could be produced at Samsung to leverage their 10 nm tech, while Intel ramps up its own
Future Xe GPUs could be built on TSMC 6 nm and 3 nm nodes
Raytracing hardware acceleration support will definitely be included on the data-center GPUs (and probably on the consumer models, too)
Double-digit TFLOP/s scaling all the way up to 0.1+ PFLOP/s (a rough sanity check of these numbers follows after this list)
Will be used in upcoming Cray Aurora Supercomputer for Argonne National Laboratory in 2021
Targeting a wide segment of markets, including consumer (client-segment) graphics, enthusiast-segment, and data-center compute
Uses new graphics control panel that’s being introduced during 2019
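As a rough, unofficial sanity check of the scaling claims above: if each Xe EU retires 8 FP32 FMAs per clock as Gen11’s EUs do (an assumption on our part), peak throughput follows directly from EU count and clock. The clock value below is hypothetical, not an Intel figure:

```c
#include <stdio.h>

int main(void) {
    int eus = 512;               /* top rumored Xe configuration           */
    double flop_per_clk = 16.0;  /* assumed: 8 FMA lanes x 2 FLOP each     */
    double clock_ghz = 1.3;      /* hypothetical clock, not an Intel spec  */
    printf("%.1f TFLOP/s\n", eus * flop_per_clk * clock_ghz / 1000.0);
    return 0;                    /* 512 * 16 * 1.3 / 1000 = 10.6           */
}
```

Double-digit TFLOP/s from 512 EUs is plausible under these assumptions; 0.1+ PFLOP/s would imply multiple such tiles or much higher clocks.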
Intel Discrete GPU / Arctic Sound
Release Date: 2020
Intel will hold a world tour in 2019, to build enthusiasm for the new architecture
Advanced management for power and clocks
Test chip: 8×8 mm² die area, 1.54B transistors, 14 nm, 50-400 MHz clock, EUs at 2x clock if needed
Raja Koduri who left AMD in late 2017 is somehow involved
Confirmed to support VESA Adaptive Sync
Intel Ponte Vecchio
Release Date: 2021 or 2022
Discrete GPU
Produced on 7 nanometer production process
Probably not 7 nanometer Intel but 7 nm TSMC or even 6 nm TSMC
Multiple GPU dies will be combined into a single accelerator
Architected “for HPC modeling and simulation workloads and AI training”
Workloads can be processed by GPU and CPU at the same time, using Intel oneAPI
Foveros packaging technology
Xe link to combine multiple GPUs (CXL interconnect)
Release Date: September 2020, at the same time as Zen 3.
Highly likely these were scrapped when AMD decided to enable compatibility with 400 and 500 series chipsets
Socket AM4
Supporting Zen 3 Ryzen 4000 processors
Support for older CPUs very likely, probably at least Ryzen 3000
PCI-Express 4.0
Memory
DDR5 System Memory [updated]
Release Date: Late 2020, probably 2021
JEDEC standard finalized as of Jul 15th 2020
Demo’d in May 2018 by Micron: DDR5-4400
Samsung 16 Gb DDR5 DRAM developed since February 2018
Samsung has completed functional testing and validation of an LPDDR5 prototype: 10 nm class, 8 Gbit, final clocks: LPDDR5-5500 and LPDDR5-6400
Samsung has started 16 Gb LPDDR5 mass production in Aug 2020
SK Hynix 4800 – 5600 Mbps, 1.1 V
SK Hynix also has 16 Gb DDR5-5200 samples ready, 1.1 V, mass production expected 2020
April 2020: Hynix has 8.4 Gbps DDR5, minimum density per die is 8 Gbit, maximum is 64 Gbit
ECC is now supported by all dies (no longer specific to server memory modules)
SK Hynix demonstrated DDR5 RDIMM modules at CES 2020: 4800 MHz, 64 GB
Micron is shipping LPDDR5 for use in Xiaomi phones (Feb 2 2020). 5.5 Gbps and 6.4 Gbps
Samsung has begun production for LPDDR5 for mobile devices (Feb 25 2020). 16 GB, 5.5 Gbps
4800 – 6400 Mbps (per-module bandwidth math is sketched after this list)
Expected to be produced using 7 nm technologies
32 banks, 8 bank groups
64-bit link at 1.1 V
Burst length doubled to BL16
Bank count increased from 16 to 32
Fine grain refresh feature
Improved power efficiency enabled by Vdd going from 1.2 V to 1.1 V as compared to DDR4
On-die ECC
Voltage regulators on the DIMM modules
AMD DDR5 memory support by 2021/2022, with Zen 4
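To put the quoted transfer rates in bandwidth terms: a DDR5 DIMM still presents 64 data bits (internally split into two 32-bit channels), so peak bytes per second is simply rate × width ÷ 8. A quick sketch:

```c
#include <stdio.h>

int main(void) {
    int rates_mts[] = { 4800, 5600, 6400 };   /* MT/s, from the range above */
    for (int i = 0; i < 3; i++)               /* 64-bit module width / 8    */
        printf("DDR5-%d: %.1f GB/s\n", rates_mts[i],
               rates_mts[i] * 64.0 / 8.0 / 1000.0);
    return 0;  /* 38.4, 44.8 and 51.2 GB/s per module */
}
```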
HBM2E Graphics Memory [updated]
Release Date: 2020
Offers 3.2 Gbps per pin (33% faster than HBM2)
Rambus offers a 4.0 Gbps memory interface controller
Samsung Flashbolt: 16 Gb per die, 8 layers stacked, 16 GB per chip with 410 GB/s bandwidth (the bandwidth arithmetic is sketched after this list)
Hynix: 460 GB/s, 3.6 Gbps, eight 16 Gb chips are stacked for a single 16 GB chip
Hynix: mass production has started as of July 2020
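The per-stack bandwidth figures quoted above follow directly from the per-pin rate and HBM’s 1024-bit stack interface; a minimal check:

```c
#include <stdio.h>

int main(void) {
    /* bandwidth = per-pin rate (Gbps) x 1024 pins / 8 bits per byte */
    printf("Samsung Flashbolt: %.0f GB/s\n", 3.2 * 1024 / 8); /* 410 */
    printf("SK Hynix:          %.0f GB/s\n", 3.6 * 1024 / 8); /* 461, quoted as 460 */
    return 0;
}
```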
HBM3 Graphics Memory
Release Date: Not before 2019
Double the memory bandwidth per stack (4000 Gbps expected)
Expected to be produced using 7 nm technologies
HBMNext Memory [added]
Release Date: Late 2022 or 2023
JEDEC work in progress
Micron involved
GDDR6X Graphics Memory
Release Date: 2020
Will first be used on new GeForce RTX 3000 / Ampere Series
Silicon Fabrication Tech
TSMC 7 nanometer+
Release Date: Q4 2019
TSMC N7+ is successor to original 7 nm node
Uses EUV (Extreme Ultra Violet)
15-20% more density and improved power consumption over N7
TSMC 6 nanometer
Release Date: Unknown
Backwards compatible with 7 nm process—no new design tools needed
Uses EUV (Extreme Ultra Violet), up to four EUV layers
18% higher logic density than N7
TSMC 5 nanometer [updated]
Release Date: March 2020 to tape-out customer designs
Risk production as of Q2 2019
High volume production: Q2 2020
Uses TSMC’s second implementation of EUV (Extreme Ultra Violet)
Up to 1.8x the density of 7 nm
Up to 14 layers
+15% higher clocks
30% lower power than N7
Intel might be a customer of this node
N5P “Plus” node: improvement to N5 while staying on 5 nm, 84-87% increase in transistor densities over N7
TSMC 5 nanometer+
Release Date: 2021
High-volume production in Q4 2020
Uses EUV (Extreme Ultra Violet)
TSMC 4 nanometer [updated]
Mass production: 2023
Codename “N4”
Uses EUV lithography
TSMC 3 nanometer [updated]
April 2020: On-Track
Risk production: 2021
Volume production: H1 2022
FinFET technology
Uses TSMC’s third implementation of EUV (Extreme Ultra Violet)
10-15% speed improvement at iso-power or 25-30% power reduction at iso-speed, compared to N5.
55,000 wafers per month at the start, 100,000 by 2023
TSMC 2 nanometer [updated]
No details known other than “TSMC has started development”
June 2020: TSMC is accelerating R&D
Sep 2020: Fab construction has begun
Will use Gate-All-Around (GAA) technology
Samsung 6 nanometer
Release Date: Unknown
First product taped out as of Q2 2019
Uses EUV (Extreme Ultra Violet)
Special variant for customers
Samsung 5 nanometer
Release Date: 2020
Ready for customer sample production as of Q2 2019
Mass production in Q2 2020
Yields are challenging as of Q2 2020
Uses EUV (Extreme Ultra Violet)
Up to 25% higher density than 7 nm
20% lower power consumption
10% higher performance
Samsung 3 nanometer
Release Date: 2022
50% less power while delivering 30% more performance
45% less silicon space taken per transistor (vs 7 nm)
Intel 7 nanometer
Release Date: 2022 or 2023
Succeeded by 7 nm+ node in 2022, and 7 nm++ in 2023
Uses EUV (Extreme Ultra Violet)
4x reduction in design rules
Planned to be used on multiple products: CPU, GPU, AI, FPGA, 5G networking
Other
Hynix 4D NAND
Release Date: H1 2019
Developed by SK Hynix
Sampling in Q4 2018
Products demonstrated at CES 2020: Platinum P31 and Gold P31 M.2 NVMe SSDs: PCIe 3.0 x4, using flash, DRAM and controller made by Hynix, over 3 GB/s read/write (close to the bus ceiling; see the calculation after this list)
Reduces chip physical size, while increasing capacity at the same time
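The “over 3 GB/s” figure above sits close to the theoretical ceiling of the PCIe 3.0 x4 link these drives use. A quick calculation, counting only the 128b/130b line-encoding overhead:

```c
#include <stdio.h>

int main(void) {
    double gt_s = 8.0;                   /* PCIe 3.0: 8 GT/s per lane */
    double lanes = 4.0;
    double encoding = 128.0 / 130.0;     /* 128b/130b line encoding   */
    double gbit_s = gt_s * lanes * encoding;
    printf("%.2f GB/s\n", gbit_s / 8.0); /* prints: 3.94 GB/s         */
    return 0;
}
```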
Although some performance results have already leaked for the Huawei Mate 40 Pro, which will be presented on 22 October together with the Kirin 9000, we can now learn new details thanks to fresh leaks. They show a design with a hole at the side of the upper screen area for the double front camera, in addition to a system of three rear cameras and a ToF sensor.
The screen maintains a curvature on both sides, and the back shows a curious ring where the three main sensors and the ToF sensor are located, together with a dual-tone flash.
It will have 8 GB of RAM and 256 GB of storage, a 4,400 mAh battery and EMUI 11, a layer on top of Android 10 that will arrive without Google services, due to the US veto against the company over alleged espionage for the Chinese government.
Technical specifications of the Huawei Mate 40 Pro
Kirin 9000 eight-core processor (1x Cortex-A77 @ 3.13 GHz + 3x Cortex-A77 @ 2.54 GHz + 4x Cortex-A55 @ 2.04 GHz).
8 GB of RAM.
256 GB of internal memory UFS 3.1.
6.76-inch OLED screen with a double front hole for the double front camera.
2,636 x 1,344 pixel resolution.
Triple rear LEICA camera:
50 MP main camera (8K recording).
20 MP wide angle.
12 MP telephoto.
TOF Sensor
4,400 mAh battery.
Fast charge of 65 W.
WiFi-6 and BT 5.2.
EMUI 11 based on Android 10 (without Google).
The RTX 3090 Gaming X Trio is a video card that is all MSI, from the PCB to the heatsink to the factory overclock. It delivers all the power you need to play big at 4K resolution, even at maximum detail. It has many advantages but also some drawbacks, from a price that is certainly prohibitive for many to dimensions even larger than the Founders Edition.
by Manolo De Agostini, published on 16 October 2020 in the Video Cards channel
The GeForce RTX 3090, which we recently tested in its Founders Edition version, is a huge card in every sense: from the dimensions to the technical specifications, it cannot be said that Nvidia has left anything to chance. Based on a GA102 GPU with 28 billion transistors, it counts 10,496 active CUDA cores flanked by 24 GB of GDDR6X memory at 19.5 Gbps. This is too much VRAM for mere gaming: the GeForce RTX 3090 was born as a card that winks at those who perform complex renders and need a lot of graphics memory. After all, Nvidia speaks of it as a replacement for the Titan RTX.
We have already established that, although the GeForce RTX 3090 earns without a shadow of a doubt the scepter of the fastest gaming video card on the planet, buying it for that purpose makes little sense given the large price difference with the GeForce RTX 3080 and the much smaller performance gap, even at 4K. Nvidia tried to give the RTX 3090 a little extra charm by talking about 8K gaming thanks to DLSS, but in the end it is about marketing, because gaming at this resolution is still off the radar of 99.98% of gamers (a percentage we made up, but it should give the idea, ed).
Therefore, more than from the performance point of view, today we take a look at the MSI RTX 3090 Gaming X Trio 24G, the first “custom” RTX 3090 to reach our editorial office, simply to verify that the card is well made and guarantees the performance expected of a card of this level. In terms of technical specifications, the MSI Trio comes with a boost clock of up to 1785 MHz versus the 1695 MHz of the Founders Edition, i.e. 90 MHz more, which should return a few more fps than the Nvidia card without upsetting the picture.
Before talking about performance and everything else, let’s take a look at how MSI’s custom card is made. It comes in dimensions even larger than the Founders Edition, a full 335 x 140 x 56 mm and 1.5 kilograms of weight; numbers that make you consider the size of your case and pay more attention if you need to move your computer.
The MSI GeForce RTX 3090 Gaming X Trio 24G shows a triple-fan heatsink and another detail that immediately stands out, namely the three 8-pin PCI Express connectors feeding a card rated at a TGP of 370 W, compared to 350 W for the Founders Edition. At the back we find three DisplayPort 1.4a outputs and an HDMI 2.1 port, which allows you to drive an 8K screen at 60 Hz with a single cable. Also present, as on the Founders, is an NVLink connector to use two cards in parallel.
As anticipated, the Tri Frozr 2 cooling system provides three TorX 4.0 fans with ball bearings. The fans feature pairs of blades connected by an outer ring which, according to MSI, increases air pressure by up to 20% compared to the previous TorX 3.0, with static pressure going from 2.76 mmAq to 3.35 mmAq. In this way the fans push more fresh air along the radiator below. The fans, as always, are equipped with Zero Frozr technology, which stops them entirely under no load to eliminate noise.
Below the fans we see the voluminous radiator, formed by two blocks connected by several heatpipes and characterized by fins with a new wave design that guides the air in a targeted way, optimizing the cooling of the various components on the PCB and containing noise. Above the GPU is a direct-contact dissipation block, a solution MSI calls “Core Pipe” as it is characterized by a denser set of grooves to better spread the heat along the entire block and then towards the radiator. Compared to the previous “Oval Pipe” system, the company declares a 50% improvement in efficiency. MSI, as on its other cards, has placed thermal pads on the memory, VRM and capacitors, so as to leave nothing to chance.
The video card has a PCB customized by MSI, with the addition of 2 ounces of copper for better conductivity and extra fuses to reduce the possibility of electrical damage. The PCB is longer than that of Nvidia’s Founders Edition and is in a traditional format, i.e. a rectangular shape with no V-shaped cutout at the end like the cards produced directly by Nvidia. On the back, MSI has placed a backplate with underlying heatpipes for better heat diffusion, complete with a pad to facilitate cooling of the GDDR6X chips.
The new video card is also equipped with RGB lighting, both front and rear, with a strip that runs along the backplate, fully controllable via MSI software. Given the weight and dimensions, the bundle includes a support bracket in case you want to better secure the card in the PC. The MSI card costs 1879 euros, versus 1549 euros for the Founders Edition, a full 330-euro difference.
| | GeForce RTX 3090 FE | MSI RTX 3090 Gaming X Trio |
| --- | --- | --- |
| Architecture and GPU | Ampere GA102 | Ampere GA102 |
| Production process | Samsung 8 nm | Samsung 8 nm |
| Die size | 628 mm² | 628 mm² |
| Transistors | 28 billion | 28 billion |
| CUDA cores | 10,496 | 10,496 |
| TMUs / ROPs | 328 / 112 | 328 / 112 |
| Tensor / RT cores | 328 / 82 | 328 / 82 |
| Base clock | 1395 MHz | 1395 MHz |
| Boost clock | 1695 MHz | 1785 MHz |
| Memory capacity | 24 GB GDDR6X | 24 GB GDDR6X |
| Memory bus | 384-bit | 384-bit |
| Memory speed | 19.5 Gbps | 19.5 Gbps |
| Bandwidth | 936 GB/s | 936 GB/s |
| TGP | 350 W | 370 W |
Test configuration
Tests were conducted at video resolutions of 1920 x 1080, 2560 x 1440 and 3840 x 2160 pixels, always trying to use very high quality settings to shift the load as much as possible onto the GPU. Below are the video cards included in this comparison:
Nvidia GeForce RTX 3090 (Founders Edition)
Nvidia GeForce RTX 3080 (Founders Edition)
Nvidia GeForce RTX 2080 Ti (Founders Edition)
Nvidia GeForce RTX 2080 (Founders Edition)
Nvidia GeForce RTX 2070 Super (Founders Edition)
Nvidia GeForce RTX 2060 Super (Founders Edition)
Nvidia GeForce RTX 2060 (Founders Edition)
AMD Radeon VII (reference board)
AMD Radeon RX 5700 XT (reference board)
AMD Radeon RX 5700 (reference board)
AMD Radeon RX 5600 XT (Sapphire Pulse)
Below is the configuration of the system used for the tests:
Operating system: Windows 10 Pro Italian
Processor: Intel Core i9-10900K
Power supply: CoolerMaster Silent Pro Gold 936 Watt
Below are the titles included in the comparison, both for traditional tests and with RTX and DLSS technologies enabled; some games were tested both in traditional mode and with RTX active:
Metro Exodus – Ultra – DX12 (RTX test)
Shadow of the Tomb Raider – DX12 – Maximum – TAA (RTX test)
Red Dead Redemption 2 – Ultra – quality level: favor quality – Vulkan
Borderlands 3 – DX12 – Badass
Doom Eternal – Vulkan
Control – DX12 – Ultra (RTX test)
Wolfenstein Youngblood – Mein Leben, average of the two benchmarks, Vulkan (RTX test: DLSS on Quality, RTX reflections on)
MSI RTX 3090 Gaming X Trio 24G performance
To verify the performance of MSI’s card, we used the same drivers used to test the Founders Edition for consistency in the results. During testing we saw some desktop crashes and system freezes – a subject much discussed in recent weeks – but nothing that prevented us from completing the tests. We then installed the latest drivers, noting greater stability and nearly identical results.
Frequencies, consumption, temperatures and noise
Power consumption, temperatures and operating noise affect the evaluation of a video card perhaps less than its ability to generate an adequate number of frames per second, but they remain very important in defining the overall picture. To evaluate the behavior of the MSI RTX 3090 Gaming X Trio 24G we collected data by running the Hitman 2 benchmark in a loop for 15 minutes, and compared the recorded values with those of the GeForce RTX 3090 Founders Edition.
The higher factory frequencies are also reflected in consumption, and MSI is very clear in its technical specifications, with a TGP of 370 W, 20 W higher than that of the Founders Edition: the graph shows that our findings match the manufacturer’s declarations. MSI’s card has a rated boost clock of 1785 MHz, a value constantly exceeded in our measurements: the average over the 15-minute stress test is approximately 1875 MHz, slightly higher than the 1813 MHz of the Founders Edition.
The temperature of the GPU is slightly higher than that of the GeForce RTX 3090 Founders Edition, with an average figure under load that is by no means worrying, approximately 73 °C.
Regarding the noise level, MSI’s cooling system does a good job considering the hardware on the PCB. During testing we were never annoyed by the noise produced by the fans, except for some cases of “coil whine” during the loading of some tests. In general, the card is slightly less noisy than the FE, but we are talking about a limited difference.
MSI RTX 3090 Gaming X Trio 24G, photos with a thermal imaging camera
Here are some shots of the MSI RTX 3090 Gaming X Trio 24G under load, taken with a FLIR thermal imaging camera; as you can see, the temperatures, even behind the GPU, do not reach problematic values.
Overclock
Each GPU and video card has different overclocking limits, so our sample may have performed better or worse than other examples of the same model tested by other sites. To start, we ran OC Scanner, the automatic overclocking system accessible from MSI Afterburner, on the MSI RTX 3090 Gaming X Trio 24G: being an automatic system it is not at all aggressive, and the algorithm increased the core clock by 84 MHz and the VRAM by 200 MHz. To see if the card had additional headroom, we proceeded manually, setting Afterburner as follows:
Core Voltage: + 50
Power Limiter: 100%
Core Clock: + 100 MHz (1495 / 1885 MHz)
Mem clock: + 800 MHz (1319 MHz)
We verified that these settings were stable (beyond them we saw various types of crashes), and we can therefore confirm what we have already seen in previous tests with the new Ampere GPUs: the graphics chips have a narrow margin – at least on air cooling and without risking too much – while the GDDR6X memory clocks up a lot.
Conclusions
The MSI RTX 3090 Gaming X Trio is a well-made video card, capable of letting Nvidia’s Ampere GPU unleash all its power. As a fully customized and scarcely available product, its price exceeds that of the Founders Edition by over 300 euros, a card that is already prohibitive for the majority of gamers.
We have already talked about the value of the RTX 3090 in a general sense, its limited impact in the gaming world compared to the 3080, and its more professional positioning for those who perform complex renders and need a lot of video memory. What was said in the Founders Edition review is confirmed here too, with the addition that MSI’s implementation proves valuable, even with the drawbacks of size – it is really huge and therefore not suitable for all PCs – and price, which could however fall in the future. When is difficult to say, as Nvidia has confirmed availability issues due to “too much demand” until the end of the year.
Imagination Technologies has announced that Innosilicon, a provider of custom ASIC-based solutions, will use its IMG B-Series GPUs to create new graphics cards for desktops and data centers, selecting the IMG BXT core thanks to its great scalability.
According to Imagination, these cores are capable of offering up to 70% higher compute density than current desktop solutions, that is, compared to AMD and NVIDIA, in addition to featuring Imagination’s new multi-core technology, which allows more flexible control over the different cores in the SoC, as well as over multi-die packages, that is, chiplets.
Another feature of the IMG BXT cores is multi-primary core scaling, which allows the power of all GPU cores to be combined for great performance in a single application, or the cores to be split across different applications, as if they were separate graphics cards.
These cards aim to deliver high-performance graphics for 4K and 8K gaming over a PCI Express Gen 4 bus, something that will also allow data centers and cloud gaming systems to work properly.
Marvel’s Avengers came out last month, and it can be a bit of a beast to run — especially if you want to run it at maximum quality and aren’t running one of the best graphics cards from the top of our GPU benchmarks hierarchy. While the GeForce RTX 3080 and GeForce RTX 3090 perform quite well, previous gen cards like the GeForce RTX 2080 Ti can struggle. All the shadows, volumetric lighting, reflections, ambient occlusion, and other effects can tax even the burliest of PCs, particularly at 4K. But DLSS 2.1 support means the game now has an ultra performance mode, or 9X upscaling. You’d think that would make the game a piece of cake to run on any RTX GPU, but that’s not quite true — at least not in our initial testing.
This isn’t a full performance analysis, so I’m only testing at 4K using the very high preset, with the enhanced water simulation and enhanced destruction options set to very high as well (these last two are Intel extras, apparently). For ease of testing, I’m just running around the Chimera, the floating helicarrier that serves as your home base. A few settings aren’t quite maxed out with the very high preset, but it’s close enough for some quick benchmarks.
The test PC is my usual Core i9-9900K, 32GB DDR4-3600, and M.2 SSD storage. Since DLSS is only available on Nvidia RTX GPUs, that limits GPU options quite a bit. I tested everything from the RTX 2060 up through the RTX 3090, though I skipped the older 2070 and 2080 and only included the Super variants. Let’s start with the non-DLSS performance.
Okay then, RTX 3090 and RTX 3080 handle this pretty well, but everything else comes up well short of 60 fps. It’s particularly alarming to see how badly the RTX 2060 does here, and I assume the 6GB VRAM is at least partially to blame. The thing is, the RTX 2060 Super has 8GB VRAM, and it’s still coming in below 30 fps. There’s a bit more variability between runs than with built-in benchmarks, but I’ve checked each card with multiple runs, and the results are relatively consistent.
In our usual test suite of nine games, the 2060 Super ends up about 15% faster than the RTX 2060 at 4K ultra. In Marvel’s Avengers, it’s a 53% advantage for the 2060 Super. That same greater-than-usual gap applies elsewhere as well. The 2070 Super is 38% faster than the 2060 Super, where in our normal suite it’s only 20% faster. But then the 2080 Super is only 8% faster than the 2070 Super (normally, it’s a 15% gap). The 2080 Ti opens things back up, however: it’s 37% faster than the 2080 Super, vs. 20% in our regular suite. The RTX 3080 does just about normal, beating the 2080 Ti by 35%, and the RTX 3090 is 14% faster than the 3080.
That’s the baseline performance, with temporal anti-aliasing and AMD FidelityFX CAS (contrast aware sharpening), both of which have a relatively low impact on performance. Enabling DLSS disables both TAA and CAS, and there are four modes available, so let’s see what happens with the higher quality DLSS modes first.
Theoretically, DLSS Quality mode does 2x upscaling, and DLSS Balanced mode does 3x upscaling, but performance for the two modes seems to be swapped. Perhaps the initial release of DLSS support in Marvel’s Avengers switched the two modes, or somehow the 3x upscaling results in worse performance. Whatever the case, DLSS Balanced performance is worse than DLSS Quality performance on every GPU we tested. We’ve used the in-game labels and have mentioned this to Nvidia, so we’ll see what they have to say.
Right now, DLSS Balanced improves performance by 7-25% relative to non-DLSS, with the 2060 Super falling short of where we’d expect it to land. There might be some bug right now that’s only affecting the 2060 Super performance, but we retested that card multiple times and couldn’t get better results. The other GPUs are in the 15-25% improvement range, with the 3080 and 3090 both showing 20% higher performance.
DLSS Quality mode meanwhile shows a 15-39% improvement compared to non-DLSS. Again, the 2060 Super is at the low end of that range (16% faster), with the other six cards showing 28-37% gains. Whether this is 2X or 3X upscaling, the overall visual appearance looks quite good, but it’s still only the 2080 Ti, 3080, and 3090 breaking the 60 fps mark. That’s a bit surprising, considering this is likely upscaling 2217×1247 to 3840×2160. Native 1440p performance is much better than what we’re showing here.
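For reference, DLSS scale factors count total pixels, so each axis shrinks by the square root of the factor; this sketch reproduces the ~2217x1247 figure mentioned above (the game may snap to slightly different internal resolutions, so treat the outputs as approximations):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Quality 2x, Balanced 3x, Performance 4x, Ultra Performance 9x */
    double factors[] = { 2.0, 3.0, 4.0, 9.0 };
    for (int i = 0; i < 4; i++) {
        double s = sqrt(factors[i]);   /* per-axis scale */
        printf("%.0fx: %.0f x %.0f\n", factors[i], 3840.0 / s, 2160.0 / s);
    }
    return 0; /* 2x: 2715x1527, 3x: 2217x1247, 4x: 1920x1080, 9x: 1280x720 */
}
```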
Next up, DLSS Performance mode upscales from 1080p to 4K — one-fourth of the pixels are rendered before running the DLSS algorithm. The difference between DLSS Performance and DLSS Quality isn’t very large in Marvel’s Avengers, either in visuals or performance. Basically, framerates are only about 5-6% higher than DLSS Quality.
Last, we have the new DLSS Ultra Performance mode. First, it’s important to note that this is mostly intended for 8K gaming — or at least, that’s what Nvidia says. Still, we could run the setting with 4K, and we wanted to see how it looked. Not surprisingly, upscaling from 720p to 4K definitely doesn’t look as crisp as native 4K rendering, or really even native 1080p, I’d say. But performance does improve.
Considering the difference between the rendered resolution and the DLSS upscaled resolution, the performance gains are relatively limited. Ultra Performance mode runs about 28-35% faster than Performance mode (with the RTX 2060 Super exception again). Or put another way, performance is about 60-70% better than native rendering.
My guess is that the Tensor computations become more of a bottleneck as the rendered resolution shrinks, so Ultra Performance mode ends up not helping as much as you’d expect. Put another way, the RTX 2060 Super — which again underperformed in my testing — delivered a result of 110 fps at native 720p with TAA and CAS enabled, 77 fps at 1080p, and 55 fps at 1440p. It’s nowhere near those figures with the various DLSS modes.
Marvel’s Avengers DLSS Image Quality Comparisons
Let’s quickly wrap up with a look at image quality from the four DLSS modes compared to native rendering with TAA and CAS enabled. Unfortunately, there’s camera sway even when standing still, so the images don’t line up perfectly for this first gallery. Still, in general, you can see how DLSS does really well with certain aspects of the image, but there’s a clear loss in fidelity at higher DLSS levels.
Look at the computer servers (to the right of the Avenger’s A on the wall) to see an area where DLSS helps quite a bit. There are a bunch of extra lights that are only visible with DLSS enabled from this distance. The window up and to the right of Thor meanwhile shows a clear loss in image quality. Oddly, it looks as though the Quality mode is better than the Balanced mode in terms of image quality, even though performance is also higher with DLSS Quality, which is all the more reason to choose the Quality mode.
This second gallery is with all the camera shaking and repositioning turned off, running on the 2060 Super with the FPS overlay showing. You can see the performance is quite low, and image quality differences aren’t as readily apparent. Ultra Performance mode shows some clear loss of detail, but even that result is pretty impressive considering the 720p source material. Here, Balanced may actually look a bit better than the Quality mode, though the differences are extremely slight.
Bottom line: DLSS continues to impress with its improved performance and often comparable or even better image quality relative to native rendering. There are still a few kinks to work out with Marvel’s Avengers — or maybe I just exceeded the capabilities of the 2060 and 2060 Super — but overall, the addition of DLSS is a welcome option.
I’ve played the game quite a bit at 4K running on an RTX 3090 (yeah, my job is pretty sweet), and it just couldn’t quite maintain a steady 60 fps in some areas. DLSS, even in Quality mode, provides more than enough headroom to smoothly sail past 60. With DLSS, locking in 60 fps is also possible on the RTX 2080 Ti and RTX 3080 — and a few tweaks to the settings should get the 2080 Super and 2070 Super there as well. As for the RTX 2060 and 2060 Super, maybe an update to the game will improve their performance. Right now, they’re better off sticking to 1440p or 1080p.
Best Samsung TVs Buying Guide: Welcome to What Hi-Fi?’s guide to the best Samsung TVs you can buy in 2020.
Obviously it makes sense to shop around when buying a new TV. But if you’ve previously owned a Samsung and want to stick with what you know, there are some impressive screens out there.
From monster sets to more moderately-sized models, from very affordable to very expensive, Samsung has TVs to suit all tastes, spaces and budgets.
Before you lay down your cash, there are a few things to consider. 4K and HDR will improve the picture quality drastically, but only when fed compatible content, so check your source. And do remember that no Samsung sets support Dolby Vision – instead they feature Samsung’s own rival format, HDR10+.
Samsung was also the first to sell 8K sets in the UK. While there’s still no 8K content currently available, they do upscale 4K content using Samsung’s processing tech, and generally do it very well.
Then there’s which size to go for. Measure where you’ll put it and see which size set will suit you best. Bigger isn’t always better – if it towers over your sofa, you might need to reconsider.
You should also check the small print for things such as the number of HDMI and USB sockets. While these details might seem relatively minor, they make all the difference when it comes to getting set up.
Finally, consider whether you want a brand new 2020 set or an outgoing 2019 model. It might be obvious that you’d want a new model, but you’ll likely make a big saving if you go for a discounted 2019 TV. Here’s how you distinguish one from the other: Samsung’s 2019 models are from the ‘R’ range, so look for an ‘R’ at the end of the model number if it’s a QLED or an ‘RU’ in the middle if it’s an LCD. 2020 TVs have a ‘T’ instead.
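As a toy illustration of that naming rule (real Samsung model codes have more variants than this, so treat it purely as a sketch):

```c
/* Toy decoder for the rule above: 2019 QLEDs end in 'R', 2019 LCDs carry
   "RU", and 2020 models use 'T'/"TU" in the same spots.
   Purely illustrative -- real Samsung model codes are more varied. */
#include <stdio.h>
#include <string.h>

static const char *model_year(const char *m) {
    size_t n = strlen(m);
    if (strstr(m, "RU") || (n > 0 && m[n - 1] == 'R')) return "2019";
    if (strstr(m, "TU") || (n > 0 && m[n - 1] == 'T')) return "2020";
    return "unknown";
}

int main(void) {
    const char *models[] = { "QE55Q90R", "QE55Q90T", "UE43TU7100", "UE43RU7020" };
    for (int i = 0; i < 4; i++)
        printf("%s -> %s\n", models[i], model_year(models[i]));
    return 0;
}
```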
Which 2020 Samsung QLED TV should you buy?
1. Samsung QE55Q90T
Samsung’s top 4K model for 2020 is punchy, feature-packed and good value.
SPECIFICATIONS
Screen size: 55in (also available in 65in, 75in) | Type: QLED | Backlight: direct LED | Resolution: 4K | HDR formats supported: HDR10, HLG, HDR10+ | Operating system: Tizen | HDMI inputs: 4 | ARC/eARC: eARC | Optical output: Yes | Dimensions (hwd, without stand): 71 x 123 x 3.5cm
Reasons to Buy
Brilliantly bright and punchy
Superb operating system
Improved motion and sound
Reasons to Avoid
Slightly exaggerated colours
OLEDs offer better blacks
The Samsung Q90T is a slightly tricky proposition. It’s the top 4K TV in Samsung’s 2020 TV range, but as a result of the company’s increased focus on 8K models, it’s also less of a flagship model than last year’s Q90R.
Whether you consider the Q90T to be the true successor to the Q90R or not, it is a better TV overall. It has a more natural balance, significantly better motion and a much-improved sound system. It’s true that it doesn’t go quite as bright or quite as black but, in fairness to Samsung, the Q90T is also more aggressively priced.
More important than how it fares against its discontinued sibling, though, is how it fares against similarly priced 2020 TVs such as the LG OLED55CX and Philips 55OLED805. These sets go blacker and, in the case of the LG, produce brighter highlights in otherwise dark images, but the Samsung is vastly punchier with almost everything you watch and images pop from the screen in a way that OLEDs still can’t match. It also has the best, most app-packed operating system by quite a margin, and a feature set that will keep it relevant for years to come.
There’s no doubt that the Samsung Q90T is an excellent TV, and you certainly shouldn’t discount it for not being an OLED or not having as many dimming zones as its ‘predecessor’.
Read the full Samsung QE55Q90T review
2. Samsung UE43TU7100
A strikingly good performance-per-pound proposition.
This is one of the cheapest 4K TVs that Samsung currently offers. But fear not, it still boasts Samsung’s core performance and feature set, at a smaller size and a lower price. In short, it’s pretty much the best cheap TV you can buy.
Most 43in TVs offer about a tenth of the features of a bigger set, but not this one. The Tizen operating system is identical to that found on pricier sets, with the same winning UI and stacked app selection. It’s 4K, naturally, HDR formats are well catered for (with the exception of Dolby Vision, which no Samsung sets support), and it supports Auto Low Latency Mode, which switches the TV to game mode when it detects a gaming signal. That’s a feature missing from many much pricier sets.
The contrast ratio isn’t as impressive as an OLED or QLED TV, of course, but that’s to be expected. The blacks are actually surprisingly deep for a TV this affordable, and there’s a hefty amount of punch. The TU7100 is a sharp and detailed performer, too, and it handles motion with a good balance of smoothing and authenticity. It’s an excellent picture performance for a TV of this size, and you’d have to spend a fair bit more to get a significant improvement.
Read the full Samsung UE43TU7100 review
3. Samsung UE55TU8000
The new TU8000 represents exceptional value for money.
SPECIFICATIONS
Screen size: 55in (also available in 43in, 50in, 65in, 75in and 82in) | Type: LCD | Backlight: edge LED | Resolution: 4K | HDR formats supported: HDR10, HLG, HDR10+ | Operating system: Tizen | HDMI inputs: 4 | ARC/eARC: eARC | Optical output: Yes | Dimensions (hwd, without stand): 71 x 123 x 6cm
Reasons to Buy
Brilliant HDR picture
Bags of tonal detail
Punchy colours
Reasons to Avoid
Not particularly bright
Uninspiring sound
Samsung’s 8-series has traditionally been positioned just below the company’s glamorous range-topping QLEDs. In the past, it has proven to be the sweet spot where picture quality and price intersect to maximum effect. And so it proves once more.
The TU8000 is astonishingly good value. For comparatively very little money you’re getting a 55-inch TV that performs brilliantly, particularly with HDR content, and boasts the best, most app-laden operating system available at any price.
Its sound is only so-so and it lacks the outright brightness and next-gen HDMI features of its premium siblings, but it’s still undeniably brilliant for the money.
Read the full Samsung UE55TU8000 review
4. Samsung UE50TU8500
A great 4K TV for those on a tight budget.
SPECIFICATIONS
Screen size: 50in (also available in 43in, 55in and 65in) | Type: LCD | Backlight: edge LED | Resolution: 4K | HDR formats supported: HDR10, HLG, HDR10+ | Operating system: Tizen | HDMI inputs: 3 | ARC/eARC: eARC | Optical output: Yes | Dimensions (hwd, without stand): 64 x 112 x 5.7cm
Reasons to Buy
Deep, detailed blacks
Solid 4K
Exhaustive app selection
Reasons to Avoid
Slightly red colour balance
This is the price where TVs tip over from budget to mid range. And this set is the new best in class.
The feature set is very impressive, with ALLM, eARC, 4K and three formats of HDR supported. There’s no VRR (Variable Refresh Rate), but at this price, that’s hardly surprising. The Tizen OS is the same as seen on Samsung’s flagship TVs, which means a slick user interface and apps galore.
It comes with Samsung’s standard remote, plus its One Remote, which is more ergonomic and has a stripped-back selection of buttons that cover all of the bases. Voice controls are handled by Amazon’s Alexa or Samsung’s Bixby personal assistants, with Google Assistant due to land soon via a firmware update.
Picture-wise, it blows most of the similarly priced competition out of the water, with deeper blacks and bright white highlights. On the motion side of things, it displays a satisfyingly natural degree of smoothing, and manages to dig up plenty of detail. At this price, there really is no competition.
Read the full Samsung UE50TU8500 review
5. Samsung QE75Q950TS
Makes the most compelling case for 8K TV yet.
SPECIFICATIONS
Screen size: 75in (also available in 65in and 85in) | Type: QLED | Backlight: not applicable | Resolution: 8K | HDR formats supported: HDR10, HLG, HDR10+ | Operating system: Tizen | HDMI inputs: 4 | ARC/eARC: eARC | Optical output: Yes | Dimensions (hwd, without stand): 81 x 143 x 1.5cm
Reasons to Buy
Brilliantly sharp, detailed 4K
Bright, punchy and vibrant
Near-flawless feature set
Reasons to Avoid
No Dolby Vision
Local dimming peculiarities
Only one HDMI 2.1 socket
We’ll just come out and say it: you don’t need an 8K TV. 8K content is thin on the ground, so for the most part, you’ll be paying for something you don’t use. On the other hand, if you’re happy to spend the money, an 8K set could be a sound investment – it’ll also play 4K content, after all, and if you don’t want to buy another TV when 8K takes off, paying once could be the smart option.
The Samsung QE75Q950TS is not only a wise investment for 8K, it also manages to improve on 4K content.
That’s thanks to Samsung’s Quantum Processor 8K and its 8K AI Upscaling feature, which succeed in making non-8K content look better than ever: watching a 4K Blu-ray, we can’t recall a sharper 4K picture, with nothing looking artificially enhanced or exaggerated – it simply pops from the screen more than we’ve previously seen.
Blacks are deep and insightful, while motion is handled with aplomb. Away from the picture, the TV itself is stylish, super slim, and the bezels are amazingly thin. It sounds pretty great, too. Ticks all the boxes, then.
Read the full Samsung QE75Q950TS review
6. Samsung QE55Q80T
This 2020 TV is an excellent performance-per-pound proposition
SPECIFICATIONS
Screen size: 55in (also available in 49in, 65in, 75in, 85in) | Type: QLED | Backlight: direct LED | Resolution: 4K | HDR formats supported: HDR10, HLG, HDR10+ | Operating system: Tizen | HDMI inputs: 4 | ARC/eARC: eARC | Optical output: Yes | Dimensions (hwd, without stand): 71 x 123 x 5.4cm
Reasons to Buy
Excellent contrast and colours
Three-dimensional and detailed
Solid, spacious sound
Reasons to Avoid
Occasionally overcooks colours
Slightly exaggerates film grain
This new Samsung QLED sets a formidable benchmark for mid-range TVs in 2020, offering a high-end performance at a fairly mid-range price.
The Q80T looks much like any other Samsung QLED, although it is a little bit chunkier than the Q90T. There’s nothing wrong with the specs of the connections, either: the four HDMI inputs support the key features of HDMI 2.1, such as eARC, VRR and HFR.
4K HDR streaming is available via the likes of Netflix, Amazon Prime Video, Disney+ and Apple TV+. In fact, the app support is superb, with pretty much every video and music streaming site you can think of on offer here.
A simple TV to set up when it comes to getting the best possible picture, the Q80T ultimately delivers a brilliantly dynamic image with deep black levels, excellent contrast and neutral but vibrant colours. While there are rare occasions when watching HDR that a skin tone seems slightly overcooked, the colour balance is a great strength overall, while motion is handled confidently and smoothly throughout our testing. And while we’d recommend a soundbar or some speakers, Samsung’s Object Tracking Sound technology provides open, engaging audio.
This is the first mid-range 55-inch TV we’ve seen in 2020, but the Samsung QE55Q80T sets a formidable benchmark thanks to its dynamic and solid picture, substantial sound and thorough feature set.
Read the full Samsung QE55Q80T review
7. Samsung QE65Q95T
It’s perhaps not the flagship TV we were expecting, but the Q95T is still a cracking set
SPECIFICATIONS
Type: QLED | Backlight: Direct LED | Resolution: 4K | Operating system: Tizen | HDR support: HDR10, HDR10+, HLG | HDMI inputs: 4 | USBs: 3 | Optical output: Yes | Dimensions (hwd, without stand): 83 x 145 x 3.5cm
Reasons to Buy
Rich, solid, natural picture
Very good motion
Improved sound
Reasons to Avoid
‘Predecessor’ was punchier
Brand new for 2020, the Q95T shares the top spot in Samsung’s 2020 4K TV range with the Q90T. The only differences between the two are that the Q95T gets a more stylish, metal remote and the One Connect system, which sees all connections (including power) routed through a separate box that can be easily hidden away.
Somewhat disappointingly, the Q95T and Q90T have fewer dimming zones and go less bright than the Q90R, but they’re otherwise better in every meaningful way. They deliver a richer, more solid and more natural picture, as well as better sound.
The Tizen operating system is largely unchanged, and that’s no bad thing. No other operating system has as much content or more quickly gets you to what you want to watch.
If you’re after Samsung’s top 4K model, the sensible money would be spent on the Q90T, but if you like the idea of the extremely clever and neat One Connect solution, there’s nothing wrong with spending the extra money on the Q95T.
Read the full Samsung QE65Q95T review
8. Samsung UE43RU7020
If the price is right, then this strong budget TV has to be considered.
SPECIFICATIONS
Type: LCD with edge LED backlight | Resolution: 4K | Operating system: Tizen | HDR support: HDR10, HDR10+, HLG | HDMI inputs: 3 | USB inputs: 2 | Optical output: Yes | Dimensions (hwd, without stand): 56 x 97 x 5.8cm
Reasons to Buy
Good HDR handling
Excellent smart platform
Strong detail and scaling
Reasons to Avoid
Unimpressive audio
Slight colour inconsistency
The Samsung UE43RU7020 is the smallest size in the cheapest range of Samsung’s 2019 TVs. If you are strapped for cash but still want an excellent, small(ish) screen, this is the one.
Black levels and detail are very good for a TV at this price – we’re not talking OLED standards, but this is no hazy production – and there’s good control of lighting. The 4K detail is good, too, and colours are natural, if not quite of the richness Samsung is capable of further up the food chain.
As a small, budget TV, the UE43RU7020 deserves to be taken seriously.
Read the full Samsung UE43RU7020 review
MORE:
Samsung 2020 TV line-up: everything you need to know
Testing 8k gaming with NVIDIA’s new GeForce RTX 3090 graphics card and Samsung’s 8k TV
When NVIDIA introduced the GeForce RTX 3090, it said the card is capable of running certain games at 8K resolution, and in some games it reaches a 60 FPS frame rate with the help of DLSS.
For the 8K gaming test, we got Samsung’s 2020 65-inch 8K Q950TS TV, which is equipped with an HDMI 2.1 connector and is priced in Finland from 4500 EUR.
In the video, we set the TV to 8K mode, play games at 8K (7680 × 4320) resolution, and examine both performance and picture quality.
Hinta.fi search: 8K televisions
If you liked the video, subscribe to io-Tech’s YouTube channel for free. Thanks!
When Sony recently revealed its newly revamped PlayStation 5 user interface, one of the points it highlighted was the fact that the UI runs at a full 4K. This was expected, as the PS4 Pro’s UI did too. That said, it appears as though the Xbox Series X may not dedicate similar levels of resolution to its user interface: according to one of the members of Digital Foundry, the Xbox Series X’s UI only runs at 1080p.
In a discussion surrounding the recently revealed PlayStation 5 UI, Digital Foundry’s John Linneman offered his thoughts on the Xbox Series X’s UI, saying “the big problem for me, more than anything, with the Xbox dash is the low resolution. I was disappointed with Xbox One X only offering 1080p UI rendering when PS4 Pro did native 4K but for 1080p UI to continue on Series X…that’s really not acceptable to me.”
The Xbox One X would have been entirely capable of running the dashboard at 4K, and yet Microsoft opted not to. Though the Series X is still in a prerelease state, it is entirely possible that its UI will remain at 1080p after launch too.
According to Linneman, the reason for this limitation is that Microsoft “wanted to reserve more resources for games”. This makes sense, especially in light of the reveal that the Series X can hold multiple games in its cache for quick switching. That functionality is undoubtedly resource intensive, and a 4K dashboard would simply eat into those resources the entire time.
Still, it is disappointing that in the age of 8K TVs and ever-increasing display sizes, the Xbox’s dashboard, which will be one of the main points of interaction for a player, is being limited to 1080p. Hopefully a future update offers a 4K option.
KitGuru says: What do you think of Microsoft’s decision to limit its UI to 1080p? Is blurry UI distracting to you? Or is it an economical move by the console manufacturer? Let us know down below.
Two more games are gaining DLSS support this week: Nvidia’s fancy AI-powered performance-saving tech is now arriving in Marvel’s Avengers, and an 8K DLSS mode is also coming to Wolfenstein: Youngblood after previously appearing in Control.
The Wolfenstein: Youngblood update includes additional optimisations for Asynchronous Compute on the new GeForce RTX 30 series. Beyond that, Nvidia is giving users a taste of 8K gaming with a new DLSS Ultra Performance Mode, which is also available in Control, Death Stranding, Minecraft with RTX and soon, Watch Dogs Legion.
Of course, all DLSS modes will also be available in Marvel’s Avengers as part of this latest update.
The final piece of information hitting today is that the Call of Duty: Black Ops Cold War beta on PC will also support Nvidia Reflex technology to reduce input latency. Recently, Reflex has debuted in Call of Duty: Modern Warfare, Apex Legends, Fortnite, Valorant and other titles.
KitGuru Says: Marvel’s Avengers isn’t the best game in terms of PC optimisation, so I’ll be interested to see how it looks and runs with DLSS switched on later. Do many of you often use DLSS when the option is available?
Intel has finally pulled the veil off the final specs of its 11th-Gen Tiger Lake processors after slowly trickling out details of the new chips for an entire year. The Tiger Lake chips look to slow the advance of AMD’s impressive 7nm Ryzen 4000-series “Renoir” chips, which have steadily gained traction over the last several months, as Intel finally moves on to its 10nm SuperFin process that brings higher clock speeds and a big 20% boost to performance. Intel has also finally shared benchmarks that give us at least some idea of how its chips stack up against the Ryzen competition – Intel claims its quad-core models are faster than AMD’s eight-core Renoir chips, and that its integrated graphics have finally taken the lead.
We recently had the chance to put those claims to the test with a validation platform that Intel provided, giving us a glimpse of what to expect from Tiger Lake. We’ll cover our test results below.
Intel also recently confirmed that we’ll soon see eight-core Tiger Lake models come to market, though the series will be confined to dual- and quad-core models for some time. We’ve also seen the first sign of Tiger Lake desktop PC systems emerge in the preliminary listings for a new line of ASRock NUCs, but we’ll see those systems in more flavors as other vendors release their products.
Intel’s Tiger Lake brings a dizzying array of improvements over the company’s previous-gen Ice Lake with higher clock speeds, a doubling of graphics performance, the first PCIe 4.0 support for laptops, and support for LPDDR4x memory serving as the headline advances. Intel also unveiled its new Evo platform, which is the second-gen of its Project Athena initiative. After Intel shared the technical details of its architecture, the new 10nm SuperFin process, and even more low-level details, we now have all the info condensed down into this article. Let’s start with the chips, then take a look at some of the first Tiger Lake laptops to hit the market.
Intel 11th-Gen Core Tiger Lake At A Glance
Willow Cove cores – quad-core and dual-core models
Intel Iris Xe LP graphics for 2x faster 1080p gaming
10nm SuperFin process gives up to 20% increase in clock frequency
Support for LPDDR5 – LPDDR4x for first models
Industry first PCIe 4.0 for laptops
New media and display engine
WiFi 6 and Thunderbolt 4
Release Date: 50+ designs shipping this holiday season (starts in October)
150+ models in total
New Intel Evo (second-gen Project Athena) options
Price: Varies based on laptop
Intel 11th-Gen Core Tiger Lake UP3 Specifications
Intel announced a total of nine new chips. We have the nitty-gritty specs below, but first we’ll break down the meaning behind the confusing mish-mash of product identifiers.
Intel’s Tiger Lake comes with the Willow Cove processing cores and Xe LP graphics on one larger 10nm SuperFin die, and a separate smaller 14nm PCH (platform controller hub) chipset that handles extra I/O and connectivity duties.
Intel has two basic packages: the larger package is for the high-performance UP3 models (formerly U-Series) that operate within a 12 to 28W TDP, and the UP4 package is for devices (formerly Y-Series) that operate at 7 to 15W. These packages are then integrated onto incredibly small motherboards that find their way into the new Tiger Lake laptops and thin-and-lights.
The Tiger Lake chips span the Core i7, i5 and i3 families and come with varying levels of graphics performance. Intel splits its Xe LP graphics up into G7 and G4 families. Tiger Lake models with “G7” at the end of the product name come with either 96 or 80 execution units (EUs), with the full-fledged 96 EU models carrying Intel Iris Xe branding. Chips with “G4” at the end of the product name come with 48 EUs. Naturally, the Iris Xe models with more EUs offer the high end of performance, which we’ll see in the benchmarks shortly.
Intel Tiger Lake UP3 Processors
Processor | Cores/Threads | Graphics (EUs) | Operating Range (W) | Base Clock (GHz) | Single-Core Turbo (GHz) | Max All-Core Turbo (GHz) | Cache (MB) | Graphics Max Freq (GHz) | Memory
Core i7-1185G7 | 4C/8T | 96 | 12 – 28W | 3.0 | 4.8 | 4.3 | 12 | 1.35 | DDR4-3200, LPDDR4x-4266
Core i7-1165G7 | 4C/8T | 96 | 12 – 28W | 2.8 | 4.7 | 4.1 | 12 | 1.30 | DDR4-3200, LPDDR4x-4266
Core i5-1135G7 | 4C/8T | 80 | 12 – 28W | 2.4 | 4.2 | 3.8 | 8 | 1.30 | DDR4-3200, LPDDR4x-4266
Core i3-1125G4* | 4C/8T | 48 | 12 – 28W | 2.0 | 3.7 | 3.3 | 8 | 1.25 | DDR4-3200, LPDDR4x-3733
Core i3-1115G4 | 2C/4T | 48 | 12 – 28W | 3.0 | 4.1 | 4.1 | 6 | 1.25 | DDR4-3200, LPDDR4x-3733
You’ll notice that Intel has discarded its practice of listing a single TDP value. Instead the company now defines a full dynamic range of performance that spans 12 to 28W with the UP3 models. This allows laptop makers to tailor the chips for the thermal capabilities of their products, with high-end models having sufficient cooling to enable full performance, while lower-end models with less-capable cooling can be tuned to a lower TDP setting. The TDP can even change while in use based upon device temperature, power delivery, and orientation. Intel doesn’t require laptop makers to list their TDP ratings, though, so you’ll have to turn to third-party reviews for the full skinny on performance.
The flagship Core i7-1185G7 leads the UP3 lineup. This chip boosts to 4.8 GHz and has a 3.0 GHz base frequency, both of which are a big 700 MHz increase over the previous-gen model. Intel has also made a big step forward with a 4.3 GHz all-core boost clock that will help chew through demanding productivity apps. To put that in perspective, the maximum single-core boost from AMD’s fastest Ryzen 4000 U-series processor weighs in at 4.2 GHz – Intel can beat that on all cores at once, which helps explain some of the performance advantages we’ll see in the benchmarks below.
The 1185G7 also comes with the Xe LP graphics engine with the full complement of 96 EUs, so Intel brands it as Iris Xe. The graphics unit runs at 1.35GHz, an increase of 250 MHz over the previous-gen graphics on the Core i7-1068NG7. The chip comes armed with 12MB of L3 cache and supports LPDDR4X-4266.
The Core i3-1115G4 slots in as the low-end model of this lineup. This dual-core, quad-thread chip comes with a 3.0 GHz base, 4.1 GHz boost, and an impressive 4.1 GHz maximum all-core frequency. The chip’s Xe LP graphics engine comes with 48 EUs and boosts to 1.25 GHz, which is pretty agile for a low-end chip. However, these chips step back from LPDDR4x-4266 support to LPDDR4x-3733, which will hamper performance in some tasks. Notably, the Core i5 and i3 models come with 8MB and 6MB of L3 cache, respectively, less than the full 12MB found on the Core i7 models.
Intel 11th-Gen Core Tiger Lake UP4 Specifications
Intel Tiger Lake UP4 Processors
Processor | Cores/Threads | Graphics (EUs) | Operating Range (W) | Base Clock (GHz) | Single-Core Turbo (GHz) | Max All-Core Turbo (GHz) | Cache (MB) | Graphics Max Freq (GHz) | Memory
Core i7-1160G7 | 4C/8T | 96 | 7 – 15W | 1.2 | 4.4 | 3.6 | 12 | 1.10 | LPDDR4x-4266
Core i5-1130G7 | 4C/8T | 80 | 7 – 15W | 1.1 | 4.0 | 3.4 | 8 | 1.10 | LPDDR4x-4266
Core i3-1120G4* | 4C/8T | 48 | 7 – 15W | 1.1 | 3.5 | 3.0 | 8 | 1.10 | LPDDR4x-4266
Core i3-1110G4 | 2C/4T | 48 | 7 – 15W | 1.8 | 3.9 | 3.9 | 6 | 1.10 | LPDDR4x-4266
The UP4 models slot into a 7 to 15W performance range for premium ultra-thin devices, including fanless models. Here we have Tiger Lake Core i7, Core i5, and Core i3 models, just like with the UP3 family, but with pared back frequencies to enable the lower level of operation.
The Core i7-1160G7 comes with four cores and eight threads paired with Iris Xe graphics that operate at a 1.1 GHz boost clock, while the low-end dual-core Core i3-1110G4 comes with 48 EUs that boost up to 1.1 GHz. All of the UP4 models support LPDDR4x-4266.
Intel Tiger Lake Pentium Gold and Celeron
Processor | Cores/Threads | Base/Boost (GHz) | TDP | L3 Cache | Memory | Graphics | Graphics EUs/Clock
Pentium Gold 7505 | 2C/4T | 2.0 / 3.5 | 15W | 4MB | DDR4-3200 / LPDDR4x-3733 | UHD Graphics (Xe LP) | 48 / 1.25 GHz
Celeron 6305 | 2C/2T | 1.8 / – | 15W | 4MB | DDR4-3200 / LPDDR4x-3733 | UHD Graphics (Xe LP) | 48 / 1.25 GHz
Celeron 6305E | 2C/2T | 1.8 / – | 15W | 4MB | DDR4-3200 / LPDDR4x-3733 | UHD Graphics (Xe LP) | 48 / 1.25 GHz
Intel recently stealth-launched its Tiger Lake Pentium Gold and Celeron processors, and they come with the unanticipated addition of AVX2 instructions, the Intel Deep Learning Boost technology (using the AVX512-VNNI instruction), and the Intel Gaussian and Neural Accelerator 2.0, matching the more expensive Tiger Lake models. In the past, Intel has removed support for the aforementioned features in its lesser Pentium Gold and Celeron families, so this marks a big step forward on the performance front. Intel also added Turbo Boost support for the Pentium Gold 7505, a first for the mobile Pentium lineup.
The rest of the features are somewhat expected, though we also see the debut of UHD Graphics based on the Xe LP engine, with 48 EUs and a 1.25 GHz peak clock rate. The processors support cTDP (Configurable TDP), so OEMs can adjust the clocks up to 1.8 GHz and 2.0 GHz for the Celeron models, and 3.9 GHz for the Pentium Gold. We also see that Intel dialed back memory support to LPDDR4x-3733 from the LPDDR4x-4266 we see with the more expensive models, and also stepped back to PCIe 3.0 for the Pentium and Celeron chips.
Intel Tiger Lake Iris Xe Graphics Gaming Performance
We’ll cover the details of the Xe LP graphics engine below, but for now, let’s see the new Iris Xe integrated graphics in action in our own testing. However, we have to note that these results came from an Intel-provided reference system, so they might not be representative of the full performance we’ll see in laptops that come to market. Be sure to check out our preview for the full breakdown of the test environment.
Meanwhile, these results do give us a taste of the theoretical heights of Tiger Lake’s gaming performance. Here we can see that if you’re willing to compromise greatly on fidelity, you can run many games at 1080p on a laptop with Iris Xe graphics. It won’t rival the best gaming laptops – we can only expect so much from integrated graphics, after all. Leading-edge AAA games may create some challenges, but the Xe LP engine is plenty powerful when you run it at an unconstrained 28W setting.
The reference system got us to 1080p at 30 fps at low settings in most tests, but we’ll have to wait to see what comes with shipping systems. However, we are undoubtedly getting closer to being able to have short 1080p gaming stints, albeit at reduced fidelity, on Ultrabooks.
Intel Tiger Lake Performance in Desktop Applications
Here we can see the results of our preliminary application testing on the Tiger Lake reference system, but be aware that all of the caveats of the reference system apply.
Intel’s Tiger Lake pulls off pretty impressive performance in lightly-threaded applications, especially when you consider that its four-core models square up against AMD’s potent eight-core Ryzen 4000-series chips. As expected, though, AMD’s Ryzen takes the lead when we flip over to applications, like HandBrake, that can use its eight cores and 16 threads more effectively.
We’re also including some of Intel’s projections here, due to the wider range of benchmarks, but be sure to take those with the same grain of salt as any other vendor-provided benchmarks. Intel shared benchmarks of its chips beating the Ryzen 7 4800U in a whole range of applications, including office and productivity/creativity software. As always, we’ll need to wait for more comprehensive third-party benchmarks of shipping laptops to make a final determination.
A lot of Intel’s claimed advantages stem from its big push into AI capabilities, as the company works with a slew of software vendors to enable support for its newest features. These new software packages yield massive performance improvements, up to 5X, thanks to support for the AI-boosting DL Boost instructions that leverage AVX-512.
Leveraging the AVX instruction set for AI workloads could evolve into a significant advantage over AMD’s Ryzen 4000 processors as Intel’s software support broadens. Intel’s chips have long dropped into lower frequencies as densely-packed AVX instructions work their way through the processor, but Intel has reduced the impact with a new SuperMIM capacitor that keeps voltages steady. That allows the processor to remain in higher frequency ranges during heavy AVX workloads.
Intel doesn’t just focus on AI workloads that run on the Willow Cove cores, though. The Gaussian and Neural Accelerator (GNA) returns, but this time in a new 2.0 revision. This SoC-integrated AI accelerator block handles all sorts of low-power voice-based applications, like translation and transcription, using low-power inferencing. Intel claims that this offload engine can reduce CPU utilization by 20% during these types of operations, at much lower power consumption. The unit can also be used for impressive noise cancellation without taxing the Willow Cove cores.
Intel Tiger Lake Battery Life
We’ll have to wait until Tiger Lake laptops hit our labs for the full rundown on battery life, but Intel claims to have made significant gains in power consumption. It also says that laptops could provide nearly the same amount of performance on battery as when they are plugged into the wall.
Our testing confirmed that performance remains high while on battery, but that will undoubtedly have an impact on battery life. We weren’t allowed to test battery life on the reference system we were given for testing, but stay tuned for more as shipping systems hit our labs.
Intel also makes pointed claims about the performance and efficiency improvements that come from the company’s new focus on providing higher performance while the laptop is under battery power.
Intel claims that performance on the Ryzen 4000-series 4800U drops precipitously when you remove the power plug and the laptop operates on battery power alone. In contrast, Intel claims its Tiger Lake chips offer the same amount of performance, even boosting up to the full 50W, while on battery power. If that pans out in our testing, you’ll still get the full Tiger Lake performance while on battery, but at the expense of battery life.
Intel Evo – The Second-Gen of Project Athena
Tiger Lake also marks the arrival of Intel’s second generation of Project Athena, but it now comes with Intel Evo branding. The Intel Evo program certifies that a laptop is designed with premium components for the fastest performance, and that the software shipped on the laptops doesn’t hinder performance. Laptops that pass Intel’s criteria earn a custom Intel Evo badge.
Intel has a dizzying number of requirements on both the hardware and software side of the Evo equation, but the goals include battery life projections of nine or more hours for 1080p laptops, eight hours for QHD models, and seven hours for UHD models. Intel also stipulates the system must wake from sleep in less than one second, that performance remains consistent on battery power (as outlined in the previous section), and that the system supports fast charging that gives four hours of battery life on a 30-minute charging session (1080p models).
Intel has an impressive list of the first Evo laptops, with the Acer Swift 5, Asus Zenbook Flip S, Lenovo Yoga 9i and Samsung Galaxy Book Flex 5G being the first models to come to market. Those will be followed by designs from all the usual suspects, like Acer, Asus, Dell, Dynabook, Razer, Samsung, HP, Lenovo, LG and MSI. You can learn more about the program here.
Intel Tiger Lake Laptops
With over 150 designs eventually coming to market, there will be plenty of Intel Tiger Lake laptops to choose from. However, we do have a list of some of the first devices, which we’ll add to when other notable devices come to market.
Lenovo’s Yoga and IdeaPad 9i series are on the premium side and even have options for lids with a genuine leather covering. The Yoga 9i comes in 14- and 15-inch options that weigh in at up to $1,799 for a fully-equipped 15-inch model. The 14-inch models retail for $1,399 for metal models, and $1,699 for the leather-clad option.
MSI bills the Stealth 15M as the ‘thinnest 15-inch laptop,’ but it still comes armed with a 15.6-inch “IPS-level” panel with a 144Hz refresh rate, Thunderbolt 4 support, Killer Wi-Fi AX1650, and options for PCIe 4.0 x4 storage. MSI hasn’t released pricing yet.
MSI also has its new Summit series on offer for professional users. The new models come in several different configurations, which you can see here, but MSI hasn’t shared pricing yet.
Intel Tiger Lake Thunderbolt 4, PCIe 4.0 Interface, WiFi 6
After losing the glory of being the first to PCIe 4.0 on the desktop (AMD holds that distinction), Intel is the first to bring PCIe 4.0 support to laptops. The faster interface enables speedier SSD options that provide more performance and efficiency than their PCIe 3.0 counterparts.
That marks the beginning of a new era for PCIe 4.0 SSDs, and while some may opine that the speedy interface draws more power, that isn’t the full story. While the PCIe 4.0 interface does draw more power than 3.0, it can transfer data at up to twice the speed per lane. That helps reduce the amount of time the interface is active, which allows the chip to drop into lower power states more quickly. Intel added the ability to shut off or dynamically adjust Tiger Lake’s PCIe interface when it isn’t fully active, and the faster interface could be used to employ fewer lanes during some workloads, both of which will allow you to enjoy the speed of PCIe 4.0 SSDs without making huge sacrifices on battery life.
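The race-to-idle argument is easy to sanity-check with rough numbers. Below is a minimal back-of-envelope sketch (our own arithmetic, not Intel’s figures), using the standard per-lane PCIe rates of roughly 0.985 GB/s for PCIe 3.0 and 1.969 GB/s for PCIe 4.0:

```python
# Back-of-envelope: how long must the link stay active to move a fixed
# amount of data? Standard per-lane PCIe rates (128b/130b encoding):
GBPS_PER_LANE = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}

def active_time_ms(gigabytes, gen, lanes=4):
    """Milliseconds the link stays busy moving `gigabytes` over `lanes` lanes."""
    return 1000 * gigabytes / (GBPS_PER_LANE[gen] * lanes)

for gen in GBPS_PER_LANE:
    print(f"{gen} x4: {active_time_ms(1.0, gen):.0f} ms to move 1 GB")
# PCIe 3.0 x4: 254 ms to move 1 GB
# PCIe 4.0 x4: 127 ms to move 1 GB -> the link can power down twice as soon
```

Halving the active window is what lets the SoC race back to idle, which is how a nominally hungrier interface can still come out ahead on battery life.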
Intel touts its support for integrated Thunderbolt 4 and USB 4, but these aren’t really ‘new’ protocols. In short, with speeds up to 40Gb/s, Thunderbolt 4 maintains the same maximum speed rating as its predecessor (TB3) and doesn’t enable new features. Instead, vendors are required to enable all of the high-end features that used to be optional, like the ability to hit the 40Gb/s data throughput requirements and support two 4K displays or one 8K display. This approach does simplify the confusing branding surrounding Thunderbolt 3, but from a hardware standpoint, the speeds and feeds remain the same.
Intel Tiger Lake Willow Cove Architecture and 10nm SuperFin Process
Intel also made some finer-grained improvements to its microarchitecture, and the resulting Willow Cove cores feature a rebalanced cache hierarchy to improve performance, dual ring bus fabric, SuperMIM capacitors, and new security enhancements, among many other improvements. We’ve covered the low-level details of the Willow Cove architecture here.
Intel pairs the new Willow Cove cores with its 10nm SuperFin process. The process offers much higher clock speeds at any given voltage, and it can also operate at a lower voltage at any given frequency, too. As a result, the chip has a greater dynamic frequency range from the minimum to maximum voltage, which provides better performance at every power level. That equates to faster mid-range performance in thin-and-light devices, not to mention peak performance in high-performance designs. We have the full details of Intel’s 10nm SuperFin technology here.
Intel Tiger Lake Iris Xe LP Graphics Engine
Intel’s Xe LP (Low Power) architecture powers the Tiger Lake chips’ graphics, but don’t be fooled by the “Low Power” in the Xe branding. The Xe LP graphics engine promises up to twice the performance of the previous-gen Gen11, addressing a key sore point in Intel’s lineup compared to AMD’s capable 7nm “Renoir” Ryzen Mobile processors with Vega graphics.
Intel’s Xe LP comes with a significantly revamped architecture that we covered in our Intel Drops XE LP Graphics Specs deep dive. The net-net is that the engine comes with up to 96 execution units (EU) and ‘significant’ performance-per-watt efficiency improvements over the previous Gen11 graphics, which implies twice the performance at lower power compared to Intel’s Ice Lake.
Intel revamped its display engine, too. Tiger Lake supports hardware acceleration for AV1 decode, up to four display pipelines, 8K UHD and Ultra Wide, 12-bit BT2020 color, and 360Hz and Adaptive Sync, among others listed in the album above. Tiger Lake also supports up to six 4K90 sensors (support starts at 4K30) and can process still images up to 42 megapixels, an increase over the prior 27MP limitation with Ice Lake.
Intel Tiger Lake Pricing and Availability
Intel says that over 50 new designs based on Tiger Lake chips will land in time for the holidays, and there will be over 150 models released in total. The first devices arrive in October. Unfortunately we don’t have an official price list for the chips, as they are only delivered to OEMs. That means our only measure is the pricing on the devices that come to market.
For a deeper look at the state of the desktop PC chips, head over to our Intel vs AMD CPU article.
The Xbox Series X is set to release this November, bringing Microsoft’s flagship console series into a new generation. It follows the original launch of the Xbox One in 2013 and the release of the Xbox One S and Xbox One X upgrades in 2016 and 2017, respectively. And as we inch closer to that deadline, we’re learning more and more about the Xbox Series X. In fact, there’s enough information to put the Xbox Series X up against the PS5 in a face-off.
Microsoft has already officially unveiled the Xbox Series X’s full specs, with a commitment towards 4K, 60+ fps frame rates and ray-tracing. However, certain details are still unknown. That’s why we’re collecting all the information we know, confirmed and rumored, into one convenient page for our readers to keep up to date on the launch of the Xbox Series X.
Xbox Series X Cheat Sheet: Key details at a glance
Release Date: November 10th, 2020
Price: $499, or $34.99 a month for 24 months
Key features: 4K at 60fps, 8K, 120fps, ray tracing, fast load times
Key games: Halo Infinite, Senua’s Saga: Hellblade 2, full native Xbox backwards compatibility
CPU: Custom AMD Zen 2 CPU
RAM: 16GB GDDR6 memory
GPU: 12-teraflop RDNA 2 GPU
Storage: 1TB NVMe SSD, proprietary SSD expansion slot
Xbox Series X Release Date
Putting to rest earlier worries that the pandemic might delay the console’s launch, Microsoft announced on August 11th that the Xbox Series X will release this November, and confirmed on September 9th that it will arrive on November 10th.
This marks the latest in a trend of November releases for the Xbox line, with all three previous Xbox consoles also first hitting store shelves in November. That’s not too surprising, since it lets the console hit the holiday rush.
Xbox Series X Specs
Expandable Storage: 1TB expansion card, external USB 3.2 hard drive support
Optical Drive: 4K Blu-ray
Display Out: HDMI 2.1
Earlier this March, Microsoft announced the full specs for the Xbox Series X, revealing a commitment to bringing PC-style power to the living room.
The Xbox Series X will use a custom AMD Zen 2 CPU with 8 cores and 16 threads @ 3.8 GHz, a 12-teraflop custom AMD RDNA 2 GPU with 52 CUs @ 1.825 GHz, 16GB of GDDR6 RAM (10GB @ 560 GB/s and 6GB @ 336 GB/s of bandwidth), a 1TB NVMe SSD with a slot for an optional proprietary 1TB SSD expansion card, and a 4K Blu-ray optical drive. It will also feature USB ports for accessories and external hard drives.
Most of these specs are comparable to the PS5 specs Sony announced shortly after Microsoft’s post, though the Xbox Series X features a larger SSD than the PS5’s 825GB drive, a slightly faster CPU at 3.8 GHz vs 3.5 GHz, and a generally more powerful GPU than the PS5’s 10.3-teraflop part with 36 CUs at 2.23 GHz.
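Both quoted teraflop figures fall straight out of the CU counts and clocks. As a quick sanity check, here is the standard GPU arithmetic (64 FP32 ALUs per RDNA 2 CU, two operations per clock via fused multiply-add); this is just our own verification of the vendors’ numbers, not additional data from either company:

```python
# TFLOPS = CUs * 64 ALUs * 2 ops/clock (FMA) * clock in GHz / 1000
def rdna2_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

print(f"Xbox Series X: {rdna2_tflops(52, 1.825):.2f} TFLOPS")  # ~12.15
print(f"PS5:           {rdna2_tflops(36, 2.23):.2f} TFLOPS")   # ~10.28
```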
According to Digital Foundry’s hands-on time with the Xbox Series X, all these specs come together to make it more than “twice as powerful as Xbox One X,” with it being able to run four Xbox One S game sessions simultaneously on the same chip.
Xbox Series X Graphics Performance
The idea behind these specs is to allow the Xbox Series X to support 4K gameplay at 60 fps across all new games, as well as 8K or 120 fps gameplay for select titles. The Xbox Series X will also support variable refresh rate technology, which lets a compatible TV or monitor match its refresh rate to the frames the console is actually delivering, avoiding tearing and ghosting. On a similar note, variable rate shading technology is confirmed for the new Xbox as well, which will allow developers to concentrate the GPU’s shading power on the parts of the frame that matter most, allowing for a steadier frame rate at high resolutions.
Microsoft also stated in a July 14th blog post that the Xbox Series X GPU will allow developers to more efficiently hold back graphics data until the exact moment when the game needs it, resulting in “2.5x the effective I/O throughput and memory usage.”
However, the most impressive announced graphical feature is hardware-accelerated ray tracing, a technique that allows for highly realistic lighting, shadows, and reflections. Traditionally, the rendering time for this technique has been too long for use in games, but both the Xbox Series X and PS5 promise to bring it to real-time entertainment in the next console generation.
We saw a glimpse of what Xbox Series X ray tracing might look like when Minecraft with RTX launched for the PC earlier this April. In our testing, we found that playing Minecraft with ray tracing enabled at a reasonable 24 chunk render distance required at least an RTX 2070 Super to hit 1080p @ 60 fps gameplay. If the Xbox Series X ray-tracing promises can keep up with that kind of power, that’s a pretty good indicator of what it’ll be capable of.
Xbox Series X Storage Performance
Powering all of these features is a new 1TB SSD, which compensates for higher resolutions by allowing for faster load times. On the software side, Microsoft is also creating the “Xbox Velocity Architecture,” which will take advantage of the SSD to allow “100 GB of game assets to be instantly accessible by the developer.”
The goal here is to allow for larger worlds and fewer loading corridors (which is when a game hides load times by trapping the player in an elevator or a thin walkway while it loads the next area).
On July 14th, Microsoft released a blog post detailing the Velocity Architecture’s details, where it explained that the Xbox Series X’s SSD will feature 2.4 GB/s of I/O throughput, which is “40x the throughput of the Xbox One.” The Xbox Series X will also use a custom texture data decompression algorithm named BCPack, which Microsoft will pair with the industry standard LZ decompressor to allow developers to reduce the size of their games.
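Those throughput claims are easy to put in perspective. A small sketch of the arithmetic follows; note that the ~60 MB/s Xbox One baseline is our inference from the quoted “40x” figure, not an official Microsoft number:

```python
series_x_ssd = 2.4                  # GB/s, raw throughput quoted by Microsoft
xbox_one_hdd = series_x_ssd / 40    # implied baseline: 0.06 GB/s (~60 MB/s)

pool_gb = 100                       # the Velocity Architecture's asset pool
print(f"Series X SSD: {pool_gb / series_x_ssd:.0f} s to read 100 GB")      # ~42 s
print(f"Xbox One HDD: {pool_gb / xbox_one_hdd / 60:.0f} min for the same")  # ~28 min
```

Real-world rates with compression will differ, but the order-of-magnitude gap is the point.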
To increase speed further, Microsoft is also advertising new tools for devs to control I/O operations and latency. For operations, devs will be able to create multiple queues for how the Xbox Series X I/O handles their games’ data, which will let them prioritize certain aspects of each game to their taste. For latency, they’ll be able to reduce screen tearing by decoupling frame buffering from latency, as well as reduce input lag by using “dynamic latency input” to capture “button presses as fast as 2 ms.”
The catch to all of these features is that the Xbox Series X will expect all new games to be running off an SSD, as well as any backwards compatible games looking to take advantage of the new technology. A traditional hard drive just won’t be able to keep up, especially when it comes to eliminating loading corridors.
Should the internal SSD fill up, users looking to play the most recent titles are expected to buy a $220 proprietary 1TB SSD card for the system. This will run identically to the internal SSD once plugged in, as it is structurally the same. Microsoft has confirmed that older Xbox games that don’t use the Series X’s new features can still be run off external hard drives, however. Microsoft has no current plans to support third-party SSDs, whereas Sony has announced the PS5 will support some M.2 SSDs after launch.
Both the internal SSD and SSD card will also allow for multiple games to be suspended at once, using a new feature called “Quick Resume.” This will also apply to older games being played off HDDs.
VentureBeat also did a teardown of Seagate’s proprietary SSD card, which gives us some insight into its componentry and price. Inside, the publication found SK Hynix’s new 4D NAND memory, a Phison PCIe Gen 4 controller and a CFexpress (or at least CFexpress-like) circuit board. There’s also thermal paste on the controller and NAND, so expect the SSD to run hot, which explains the metal case: it’s meant to contribute to cooling.
Microsoft’s custom architecture is also at play here, which is good, because these components aren’t necessarily worth a $220 price tag on their own.
Xbox Series S: 1440p @ 120 fps for $300
Microsoft officially revealed the Xbox Series S, its budget next-gen Xbox, on September 8th, 2020, finally confirming its existence after months of leaks.
The reveal came in the wake of a leaked (now officially released) trailer that confirmed speculation that the console would target 1440p @ 120 fps. While the trailer didn’t reveal specs, it did clue viewers into the Series S’ features. An all-digital machine, it can natively run games at 1440p and “up to 120 fps” at the same time, supports DirectX raytracing, has a 512GB NVMe SSD and can stream media at 4K with “4K upscaling for games.” It’s also “nearly 60% smaller than Xbox Series X.”
The leaked trailer also dropped probably the biggest news for a next-gen console yet: the price. All of these leaks together seemed to force Microsoft’s hand, and the company officially confirmed the budget console in a 3:13 AM EST tweet.
“Let’s make it official! Xbox Series S | Next-gen performance in the smallest Xbox ever. $299 (ERP). Looking forward to sharing more! Soon. Promise.” pic.twitter.com/8wIEpLPVEq – September 8, 2020
Looking something like a large speaker, the Xbox Series S is $299. Even with 1440p @ 120 fps specs, that’s cheap, and given that the leaked trailer heavily pushes Game Pass, it’s probably being sold at a loss to encourage subscriptions. If $299 is still too much of an upfront cost, though, you can also finance the console at $24.99 a month for 24 months (which adds up to $599.76).
Microsoft has since posted the trailer in an official capacity, officially confirming its feature list.
CPU: 8-core AMD Zen 2 CPU @ 3.6 GHz (3.4 GHz w/ SMT)
GPU: AMD RDNA 2 GPU, 20 CUs @ 1.565 GHz
GPU Power: 4 TFLOPS
SoC: Custom 7nm SoC
RAM: 10GB GDDR6
RAM bandwidth: 8GB @ 224GB/s, 2GB @ 56GB/s
Storage: Custom 512GB PCIe 4.0 NVMe SSD
Expandable Storage: 1TB expansion card
Disc Drive: Digital only
Display Out: HDMI 2.1
On September 9th, Microsoft followed up its Xbox Series S price and feature reveals with a full list of specs. The biggest difference between the Series X and the Series S is the GPU, with the Series S downgrading to a 20-CU RDNA 2 GPU with about 4 teraflops of power. Aside from that, it uses the same CPU architecture as the Series X (though with slightly less power), and the same SSD architecture with less capacity. It also has less memory and is digital only, but Microsoft still boasts that it “delivers 4x the processing power of an Xbox One console.”
The Xbox Series S will launch this November, alongside the Xbox Series X.
Xbox Series X Price and Payment Options
While Sony has yet to drop the price on the PS5, Microsoft announced on September 9th that the Xbox Series X would cost $499.99.
That’s the same as what the Xbox One cost at launch, and is only $100 more than the original Xbox 360’s launch price.
If $500 upfront is too steep, though, you’ll also be able to finance the Xbox Series X at $34.99 a month for 24 months. Careful, though: you’ll end up paying $839.76 for the console if you buy it entirely through the payment plan.
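The payment-plan totals are simple multiplication, and they check out; a quick verification of our own, using the prices quoted above:

```python
def plan_total(monthly, months=24):
    return round(monthly * months, 2)

print(plan_total(34.99))                     # 839.76 - Series X (vs $499.99 upfront)
print(plan_total(24.99))                     # 599.76 - Series S (vs $299 upfront)
print(round(plan_total(34.99) - 499.99, 2))  # 339.77 extra for spreading the cost
```

Bear in mind the monthly plan is Microsoft’s All Access bundle, which includes Game Pass Ultimate, so part of that difference covers the subscription rather than pure markup.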
Microsoft also announced last October that anyone currently financing an Xbox One who has already made at least 18 payments will be able to upgrade their plan to a Series X when it launches.
Xbox Series X Controller
The Xbox Series X controller is set to be largely identical to the Xbox One controller, aside from a few quality of life upgrades.
In a move that will come as a relief to those of us with tiny hands, the blog post announcing the controller says that its “size and shape have been refined to accommodate an even wider range of people.” The new controller also seems to be taking notes from the PS4 controller by including a dedicated share button. The triggers and bumpers feature a new matte finish, and the bumpers include new textured dots as well. The D-pad has been redesigned to better match the Xbox Elite Series 2 controller.
If you prefer your existing stuff, the Xbox Series X is also set to work with all existing Xbox One accessories, including controllers.
Xbox Series X Backwards Compatibility
The Xbox Series X is set to include full native backwards compatibility with all Xbox One games, as well as an unspecified but seemingly wide selection of original Xbox and Xbox 360 games. Because the games are running natively, they can all expect to see some improvement from the more advanced hardware.
Some Xbox One games running on the Xbox Series X/S via SSD are also set to be “Optimized for Xbox Series X,” meaning that they will feature dramatically higher frame rates and resolution than when playing on Xbox One. This means that, aside from base-level upgrades from simply playing on more advanced hardware, the developers have gone out of their way to patch in extra features that are only available on Xbox Series X/S. For instance, Gears of War 5 is currently being optimized for Xbox Series X, with the team already hitting 4K 60 fps resolution on equivalent settings to PCs running the game on “Ultra,” as well as 100 fps at lower resolutions. Other older games like Destiny 2 will also be optimized for Xbox Series X, though curiously, all new Xbox Series X games will also have branding to indicate their optimization for the system on the box. This is presumably because these newer games are also set to be playable on the Xbox One, at least for the first few years of the console’s lifespan (more below).
Microsoft is also planning a new “Smart Delivery” feature, which will allow gamers to only buy games once and then share them across multiple consoles. No more having to buy PS3 games remade for PS4 to use the new console’s higher specs. Just buy the base game once, and it will automatically use the highest specs available depending on the system it’s being played on. In other words, like a PC, your system determines your performance more than the game.
The move to native compatibility is also a step up from the emulation-based approach the Xbox team relied on for backwards compatibility on the Xbox 360 and Xbox One. A May 28th blog post claimed the Xbox Series X will have “thousands of games at launch,” and on October 15th, Xbox confirmed that the following games will be “optimized for Xbox Series X.”
Assassin’s Creed Valhalla
Borderlands 3
Bright Memory 1.0
Cuisine Royale
Dead by Daylight
Devil May Cry 5: Special Edition
DIRT 5
Enlisted
Evergate
The Falconeer
Fortnite
Forza Horizon 4
Gears 5
Gears Tactics
Grounded
King Oddball
Maneater
Manifold Garden
NBA 2K21
Observer: System Redux
Ori and the Will of the Wisps
Planet Coaster
Sea of Thieves
Tetris Effect: Connected
The Touryst
War Thunder
Warhammer: Chaosbane Slayer Edition
Watch Dogs: Legion
WRC 9 FIA World Rally Championship
Yakuza: Like a Dragon
Yes, Your Grace
Xbox Series X Games
On July 23rd, Xbox held an event that outlined 27 games confirmed for Xbox Series X. These include exclusives like Halo Infinite and Senua’s Saga: Hellblade II, as well as a number of multi-platform releases like Watch Dogs Legion. Outside of the event, Xbox has also previously confirmed that games like Cyberpunk 2077, Assassin’s Creed: Valhalla and Starfield will be coming to the Xbox Series X as well.
Most of these games have trailers that help give us an idea of what the console can do. This includes Hellblade II, which is confirmed to be the first Xbox Series X game to use Epic’s impressive new Unreal Engine 5. Halo Infinite also premiered an extended gameplay demonstration during the July 23rd event.
To give you an idea of what Unreal Engine 5 on Xbox Series X means, a PS5 demo Epic released to show off UE5’s capabilities used an environment constructed from 8K cinematic assets, including a room with over 500 instances of full 33 million triangle direct ZBrush imports, with no frame drops. The Xbox Series X will no doubt target the same kind of power, so get ready for some big games.
Microsoft has also confirmed that all exclusives for the Xbox Series X will also be playable on Xbox One and PC. This mirrors the company’s recent initiative to release all of its new Xbox One games on PC as well. However, this might change in the future, as Head of Xbox Game Studios Matt Booty only confirmed the promise for “the next year, two years,” according to MCV. That’s probably because Microsoft doesn’t want the Xbox One to hold it back as developers get more familiar with the Series X.
Xbox has promised that the Xbox Series X will have “over 100 titles” at launch, though an August 11th announcement stated that Halo Infinite will not be one of them. Here’s a full list of games confirmed for Xbox Series X:
Dragon Quest XI
Exomecha
Watch Dogs Legion
Echo Generation
Balan Wonderworld
Halo Infinite
State of Decay 3
Unnamed Forza Motorsport game
Everwild
The Outer Worlds: Peril on Gorgon
Tell Me Why
Ori and the Will of the Wisps (Optimized for Xbox Series X)
Grounded
Avowed
As Dusk Falls
Senua’s Saga: Hellblade 2
Psychonauts II
Destiny 2 (Optimized for Xbox Series X)
S.T.A.L.K.E.R. 2
Warhammer 40000: Darktide
Tetris Effect Connected
The Gunk
The Medium
Phantasy Star Online 2: New Genesis
Crossfire
Unnamed Fable game
Assassin’s Creed: Valhalla
Cyberpunk 2077
Starfield
Gears of War 5
The Lord of the Rings: Gollum
Fortnite
Warframe
Yakuza: Like a Dragon
Vampire: The Masquerade: Bloodlines 2
The Ascent
Second Extinction
Scorn
Scarlet Nexus
Dirt 5
Chorus
Call of the Sea
Bright Memory Infinite
Gods and Monsters
Rainbow Six Quarantine
Rainbow Six Siege
Madden NFL 21
Ultimate Fishing Simulator 2
Xbox Series X Pre-order
On September 9th, Microsoft posted on its blog that pre-orders for the Xbox Series X and Xbox Series S will both start on September 22nd. Xbox also told us over email that pre-orders will begin at 8:00 am PDT/11:00 am EDT, and that retailers taking pre-orders will include Amazon, Target, Walmart, Best Buy, Costco, Sam’s Club, Gamestop, Newegg and the Army and Airforce Exchange Service.
Xbox Series X Design
Yes, it still looks like a fridge.
“Fridge for scale. #PowerYourDreams” pic.twitter.com/2n4OEUKXUz – March 16, 2020
The Xbox Series X focuses on a vertical orientation and a featureless black exterior with big “monolith from 2001: A Space Odyssey” vibes. While it can be placed horizontally, its rectangular design resembles a computer tower more than a game console, so it’s unlikely to be thin enough to fit under a monitor. On the top is an indented cooling vent with what looks to be a green light inside, with the back housing the I/O, including the proprietary SSD expansion slot.
Best 8K TV Buying Guide: Welcome to What Hi-Fi?’s round-up of the best 8K TVs you can buy in 2020.
One day, 8K TVs will drive 4K TVs into extinction. It won’t be for a while yet, but with the likes of Sony, Samsung and LG already selling 8K TV sets (and others such as Panasonic and Philips set to join them in the coming months), it looks like next-generation TVs with a whole heap more pixels will one day be in living rooms across the land.
They’re expensive (at least right now), but 8K TVs offer four times the pixel density of their 4K TV siblings. That makes for a stunningly lifelike picture that represents a massive step up from 4K.
Sadly, there’s more or less no 8K content available at the moment. In the meantime, 8K TVs make themselves useful by upscaling 4K, HD and even standard-def content. That means you can expect a gloriously cinematic experience right now, even though 8K content is far from mainstream.
So what should you look for when buying an 8K TV? Good upscaling is absolutely critical – you want all of the content you watch now to look great, and that involves the TV doing lots of clever processing. It’s also worth looking for HDMI 2.1 ports, as they have baked-in support for higher resolutions and frame rates.
Beyond that, you’re looking for broadly the same qualities you’d seek in a 4K TV: great colours, contrast, sharpness and detail; a user-friendly and app-packed operating system; good sound and a smart design.
Jump straight to our pick of the best 8K TVs
What is 8K?
What we’re talking about here is resolution. This means the number of horizontal and vertical pixels. Pixels equal information, so more pixels should mean a better quality image. That’s the theory, at least.
In the case of 8K, this means a horizontal resolution of 7680 pixels and a vertical resolution of 4320 pixels (7680 x 4320), resulting in a display that consists of just over 33 million pixels.
By comparison, 4K video has half the number of horizontal lines and half the number of vertical lines (3840 x 2160), equating to a total pixel count of around 8.3 million.
So, yes, 8K has four times as many pixels as 4K (and 16 times the number of Full HD, for what it’s worth).
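The arithmetic is easy to verify; here’s a quick check of those pixel counts and ratios:

```python
# Pixel counts are just width x height for each resolution
for name, (w, h) in {"Full HD": (1920, 1080), "4K": (3840, 2160), "8K": (7680, 4320)}.items():
    print(f"{name}: {w * h:,} pixels")
# Full HD: 2,073,600 | 4K: 8,294,400 | 8K: 33,177,600

print(7680 * 4320 / (3840 * 2160), 7680 * 4320 / (1920 * 1080))  # 4.0 16.0
```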
Who is making 8K content?
8K video developments to date have largely been driven by filmmakers and TV broadcasters. From a video-editing point of view, the higher resolution can be useful. While filmmakers may not ultimately deliver an 8K film, shooting in the higher resolution gives editors room to manoeuvre, allowing for cropping and zooming while still retaining a high-resolution image. That said, 6K cameras are currently far more prevalent in Hollywood.
Meanwhile in Japan, broadcasters have been experimenting with 8K TV for some time. Back in 2015 the Japanese Broadcasting Corporation, NHK, ran a series of 8K trials, and in 2016 the company announced it was successfully demoing 8K broadcasts. So successful were the trials that NHK has since launched the world’s first 8K television channel. Since 1st December 2018, it has broadcast 8K TV shows on a daily basis, 12 hours a day, and even broadcast the 2019 Rugby World Cup in 8K. Next up is the Tokyo Olympics, now scheduled to take place in the summer of 2021.
The Korean Broadcasting System (KBS) is also researching 8K broadcasts, working with LG on content, possible broadcasts and displays – there was 8K experimentation at the 2018 PyeongChang Winter Olympics. And if you were in Brazil at the time, you could have watched the 2018 World Cup in 8K.
The likes of Netflix and YouTube were, of course, quick out of the blocks when it came to 4K content, and now streaming site Vimeo has jumped aboard with 8K. A recent update adds support for HDR and 8K resolution videos. Naturally, you will need an 8K screen to take advantage, and you might be hard-pushed to find anything truly worth watching.
Rakuten TV wants to become a true global alternative to Amazon Video and Netflix, and has ambitious plans to help that become a reality. Along with a rapid expansion into new countries, it seems 8K content is also part of the strategy – the company announced plans to have 8K films on its service by the end of 2019, although all has since gone rather quiet on that front.
How do we choose the best 8K TVs?
Here at What Hi-Fi? we review hundreds of products every year – and that includes plenty of TVs. So how do we come to our review verdicts? And why can you trust them?
We have state-of-the-art testing facilities in London and Bath, where our team of expert reviewers do all of our testing. This gives us complete control over the testing process, ensuring consistency.
All products are tested in comparison with rival products in the same price category, and all review verdicts are agreed upon by the team as a whole rather than an individual reviewer, again helping to ensure consistency and avoiding any personal preference.
The What Hi-Fi? team has more than 100 years’ combined experience of reviewing, testing and writing about consumer electronics.
From all of our reviews, we choose the products to feature in our Best Buys. That’s why if you take the plunge and buy one of the products recommended below, or on any other Best Buy page, you can be assured you’re getting a What Hi-Fi? approved product.
The best 8K TVs right now
1. Samsung QE75Q950TS
Makes the most compelling case for 8K TV yet.
SPECIFICATIONS
Screen size: 75in (also available in 65in and 85in) | Type: QLED | Backlight: not applicable | Resolution: 8K | HDR formats supported: HDR10, HLG, HDR10+ | Operating system: Tizen | HDMI inputs: 4 | ARC/eARC: eARC | Optical output: Yes | Dimensions (hwd, without stand): 81 x 143 x 1.5cm
Reasons to Buy
Brilliantly sharp, detailed 4K
Bright, punchy and vibrant
Near-flawless feature set
Reasons to Avoid
No Dolby Vision
Local dimming peculiarities
Only one HDMI 2.1 socket
We’ll just come out and say it: you don’t need an 8K TV. 8K content is thin on the ground, so for the most part, you’ll be paying for something you don’t use. On the other hand, if you’re happy to spend the money, an 8K set could be a sound investment – it’ll also play 4K content, after all, and if you don’t want to buy another TV when 8K takes off, paying once could be the smart option.
The Samsung QE75Q950TS is not only a wise investment for 8K, it also manages to improve on 4K content.
That’s thanks to Samsung’s Quantum Processor 8K and its 8K AI Upscaling feature, which succeed in making non-8K content look better than ever: watching a 4K Blu-ray, we can’t recall a sharper 4K picture, with nothing looking artificially enhanced or exaggerated – it simply pops from the screen more than we’ve previously seen.
Blacks are deep and insightful, while motion is handled with aplomb. Away from the picture, the TV itself is stylish, super slim, and the bezels are amazingly thin. It sounds pretty great, too. Ticks all the boxes, then.
Read the full Samsung QE75Q950TS review
2. Sony KD-85ZG9
Sony’s first 8K TV is thrilling – and sets a new TV benchmark.
SPECIFICATIONS
Screen size: 85in (also available in 98in) | Type: LCD | Backlight: direct LED | Resolution: 8K | Operating system: Android TV | HDR support: HDR10, HLG, Dolby Vision | HDMI inputs: 4 | USBs: 3 | Optical output: Yes | Dimensions (hwd, without stand): 114 x 191 x 12cm
Reasons to Buy
8K is utterly stunning
Punchy, vibrant and natural
Excellent motion and strong sound
Reasons to Avoid
Hugely expensive
Blacks could be deeper
This 85-inch Sony 8K TV, known as the ZG9 in the UK and Z9G in the US, offers an astonishingly lifelike image and almost no downsides in terms of performance. Convincing blacks, superb motion control, outstanding upscaling – this TV excels in every area.
Sony’s X1 Ultimate chip and 8K X-Reality Pro tech do a superb job of upscaling, adding detail that looks completely natural, ensuring an incredibly immersive performance with HD and 4K content.
Sound is equally spectacular. Four sets of three forward-firing speakers and four woofers help deliver dramatic sound that outshines the average soundbar.
There’s plenty of support for streaming apps, including Netflix and Amazon Video (in 4K and Dolby Vision), and Google Play TV and Movies. Sony’s Android-powered user interface isn’t quite as good as Samsung’s but the ZG9 comes with Google Assistant and is ‘Works with Alexa’-certified, so it’s responsive to voice commands.
You’ll need seriously deep pockets, but for the bleeding edge of 8K TV tech, look no further than the Sony KD-85ZG9 – the finest 8K TV we’ve reviewed.
Read the full Sony KD-85ZG9 8K TV review