Apple says iMessage on Android ‘will hurt us more than help us’

Apple knows that iMessage’s blue bubbles are a big barrier to people switching to Android, which is why the service has never appeared on Google’s mobile operating system. That’s according to depositions and emails from Apple employees, including some high-ranking executives, revealed in a court filing from Epic Games as part of its legal dispute with the iPhone manufacturer.

Epic argues that Apple consciously tries to lock customers into its ecosystem of devices, and that iMessage is one of the key services helping it to do so. It cites comments made by Apple’s senior vice president of Internet Software and Services Eddy Cue, senior vice president of software engineering Craig Federighi, and Apple Fellow Phil Schiller to support its argument.

“The #1 most difficult [reason] to leave the Apple universe app is iMessage … iMessage amounts to serious lock-in,” was how one unnamed former Apple employee put it in an email in 2016, prompting Schiller to respond that, “moving iMessage to Android will hurt us more than help us, this email illustrates why.”

“iMessage on Android would simply serve to remove [an] obstacle to iPhone families giving their kids Android phones,” was Federighi’s concern according to the Epic filing. Although workarounds to using iMessage on Android have emerged over the years, none have been particularly convenient or reliable.

According to Epic’s filing, citing Eddy Cue, Apple decided not to develop iMessage for Android as early as 2013, following the launch of the messaging service with iOS 5 in 2011. Cue admits that Apple “could have made a version on Android that worked with iOS” so that “users of both platforms would have been able to exchange messages with one another seamlessly.” Evidently, such a version was never developed.

Along with iMessage, Epic cites a series of other Apple services that it argues contribute to lock-in. Notably, these include its video chat service FaceTime, which Steve Jobs announced would be an open industry standard back at WWDC 2010. FaceTime was subsequently released across iPhones, iPads, and Macs, but it’s not officially available for any non-Apple devices.

Intel’s Upcoming DG2 Rumored to Compete With RTX 3070

(Image credit: Intel)

According to Moore’s Law Is Dead, Intel’s successor to the DG1, the DG2, could be arriving sometime later this year with significantly more firepower than Intel’s current DG1 graphics card. Of course it will be faster — that much is a given — but the latest rumors have it that the DG2 could perform similarly to an RTX 3070 from Nvidia. Could it end up as one of the best graphics cards? Never say never, but yeah, big scoops of salt are in order. Let’s get to the details.

Supposedly, this new Xe graphics card will be manufactured entirely on TSMC silicon, using the foundry’s N6 6nm node. This isn’t surprising, as Intel is planning to use TSMC silicon in some of its Meteor Lake CPUs in the future. But we do wonder if a DG2 successor based on Intel silicon could arrive later down the road.

According to MLID and previous leaks, Intel’s DG2 is specced out to have up to 512 execution units (EUs), each with the equivalent of eight shader cores. The latest rumor is that it will clock at up to 2.2GHz, a significant upgrade over current Xe LP, likely helped by the use of TSMC’s N6 process. It will also have a proper VRAM configuration with 16GB of GDDR6 over a 256-bit bus. (DG1 uses LPDDR4 for comparison.)

Earlier rumors suggested power use of 225W–250W, but now the estimated power consumption is around 275W. That puts the GPU somewhere between the RTX 3080 (320W) and RTX 3070 (250W), but with RTX 3070 levels of performance. But again, lots of grains of salt should be applied, as none of this information has been confirmed by Intel. TSMC N6 uses the same design rules as the N7 node, but with some EUV layers, which should reduce power requirements. Then again, we’re looking at a completely different chip architecture.

Regardless, Moore’s Law Is Dead quotes one of its ‘sources’ as saying the DG2 will perform like an RTX 3070 Ti. This is quite strange since the RTX 3070 Ti isn’t even an official SKU from Nvidia (at least not right now). Put more simply, this means the DG2 should be slightly faster than an RTX 3070. Maybe.

That’s not entirely out of the question, either. Assuming the 512 EUs and 2.2GHz figures end up being correct, that would yield a theoretical 18 TFLOPS of FP32 performance. That’s a bit less than the 3070, but the Ampere GPUs share resources between the FP32 and INT32 pipelines, meaning the actual throughput of an RTX 3070 tends to be lower than the pure TFLOPS figure would suggest. Alternatively, 18 TFLOPS lands halfway between AMD’s RX 6800 and RX 6800 XT, which again would match up quite reasonably with a hypothetical RTX 3070 Ti.
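For the curious, the 18 TFLOPS figure falls out of simple arithmetic: EUs × shader cores per EU × clock × 2 FLOPs per core per clock (one fused multiply-add). Here’s a quick sketch, using the rumored DG2 numbers (512 EUs, eight FP32 cores per EU, 2.2GHz) plus the RTX 3070’s published specs for comparison — all of which remain unconfirmed or subject to change:

```python
def peak_tflops(cores: int, clock_ghz: float, flops_per_clock: int = 2) -> float:
    """Theoretical peak FP32 throughput in TFLOPS.

    cores: total FP32 shader cores (for DG2, EUs x cores per EU)
    flops_per_clock: 2 for a fused multiply-add (one mul + one add)
    """
    return cores * clock_ghz * flops_per_clock / 1000

# Rumored DG2: 512 EUs x 8 FP32 cores, 2.2GHz boost
dg2 = peak_tflops(512 * 8, 2.2)      # ~18.0 TFLOPS

# RTX 3070 for reference: 5888 CUDA cores, 1.725GHz boost
rtx3070 = peak_tflops(5888, 1.725)   # ~20.3 TFLOPS

print(f"DG2 (rumored): {dg2:.1f} TFLOPS")
print(f"RTX 3070:      {rtx3070:.1f} TFLOPS")
```

Keep in mind this is a paper-spec ceiling only; as noted above, Ampere’s shared FP32/INT32 pipelines mean its real-world throughput sits below its headline number.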

(Image credit: Moore’s Law Is Dead)

There are plenty of other rumors and ‘leaks’ in the video as well. For example, at one point MLID discusses a potential DLSS alternative called, not-so-creatively, XeSS — and the Internet echo chamber has already begun to propagate that name around. Our take: Intel doesn’t need a DLSS alternative. Assuming AMD can get FidelityFX Super Resolution (FSR) to work well, it’s open source and GPU vendor agnostic, meaning it should work just fine with Intel and Nvidia GPUs as well as AMD’s offerings. We’d go so far as to say Intel should put its support behind FSR, just because an open standard that developers can support and that works on all GPUs is ultimately better than a proprietary standard. Plus, there’s not a snowball’s chance in hell that Intel can do XeSS as a proprietary feature and then get widespread developer support for it.

Other rumors are more believable. The encoding performance of DG1 is already impressive, building off Intel’s existing QuickSync technology, and DG2 could up the ante significantly. That’s less of a requirement for gaming use, but it would certainly enable live streaming of content without significantly impacting frame rates. Dedicated AV1 encoding would also prove useful.

The DG2 should hopefully be available to consumers by Q4 of 2021, but with the current shortages plaguing chip fabs, it’s anyone’s guess as to when these cards will actually launch. Prosumer and professional variants of the DG2 are rumored to ship in 2022.

We don’t know the pricing of this 512EU SKU, but there is a 128EU model planned down the road, with an estimated price of around $200. More importantly, we don’t know how the DG2 or its variants will actually perform. Theoretical TFLOPS doesn’t always match up to real-world performance, and architecture, cache, and above all drivers play a critical role for gaming performance. We’ve encountered issues testing Intel’s Xe LP equipped Tiger Lake CPUs with some recent games, for example, and Xe HPG would presumably build off the same driver set.

Again, this info is very much unconfirmed rumor, and things are bound to change by the time DG2 actually launches. But if this data is even close to true, Intel’s first proper dip into the dedicated GPU market (DG1 doesn’t really count) in over 10 years could make it decently competitive with Ampere’s mid-range and high-end offerings, and by that token it would also compete with AMD’s RDNA2 GPUs.