Apple is infusing its phones with a new A.I., and it might be its biggest—and riskiest—bet yet.
Apple Intelligence, long teased by the tech giant as the fix for its laggard position in the A.I. race, left beta testing this week and rolled out to customers lucky enough to own Apple’s most advanced models (i.e., the iPhone 15 Pro and anything released after). There are plenty of slick A.I. features to excite Apple fans and to complement the brand’s new custom-chip-powered hardware: “Writing Tools” for proofreading and editing messages, overhauls of Siri and the Mail app, the ability to record and transcribe phone calls, and upgrades to the search and edit functions in your photo albums.
But of all the iOS updates, the one you probably heard about, and experienced, the most was the auto-summarizer for your notifications—and not necessarily for the best reasons, judging by the screenshots circulating on social media.
While there were many such cases of odd text summaries, perhaps the most infamous example of the feature landed two weeks ago, in a now-deleted viral tweet in which developer Nick Spreen screenshotted his phone’s interpretation of an inbound breakup text: “No longer in a relationship; wants belongings from the apartment.” The notoriety has only grown since then, to the point that CNET published a helpful guide on Monday for turning off “the most annoying Apple Intelligence feature.”
And that’s not the only Intelligence goodie that’s hardly living up to its promise. As Quartz reported Wednesday, users are also complaining about an onerous software-update process, malfunctioning Siri mechanisms, and A.I.-generated responses that are even less accurate than those that spew from hallucinating engines like ChatGPT. The mismatch is stark between Apple Intelligence’s actual quality and the way it’s portrayed in the company’s latest masterpieces of condescending advertising—that is, as genius agents that can “write smarter” than you can. Understandably, then, this intelligence doesn’t appear to be driving a sales surge for the A.I.-customized iPhone 16 lineup. (Nope, not even for the Mac Mini.)
But that shouldn’t have been too surprising: A summer study in the Journal of Hospitality Marketing and Management found that consumers overwhelmingly lost interest in products when they were labeled as being A.I.-powered as opposed to simply “high-tech,” while a recent CNET survey found that only 18 percent of its respondents viewed A.I. integration as their “main motivator for upgrading their phone.”
That’s bad news in a year when iPhone sales have plunged globally—to the point that Apple has yet again ceded its onetime position as the world’s biggest phone maker to Samsung. And while Apple reported record quarterly revenues to investors this week, analysts continue to warn that iPhone sales (still the corporation’s biggest moneymaker by far) aren’t getting a much-needed boost from the Apple Intelligence previews.
“iPhone revenue stands as the report’s Achilles’ heel,” Thomas Monteiro, a senior analyst for Investing.com, told me in an email. “Given the strong trend in consumer spending, the presented numbers indicate that users were generally unimpressed by the recent features, meaning that the next suite of A.I.-to-product offerings will need to do an overall better job to impress the public.”
The calculus behind Apple’s A.I. approach was that it wouldn’t be as hasty as its rivals (namely, Meta and Google) in its attempts to catch up to the ChatGPT era. Rather, it would steadily game out the most useful applications of the tech for its everyday products and maximize its historic advantage as a hardware and software pioneer. This was sensible thinking, especially in light of Google’s endlessly clumsy A.I. foibles and Meta’s horrific misinformation crisis. And the products Apple’s been announcing lately definitely stand out (check out the AirPods Pro that also serve as hearing aids). If Apple wants to stick to a business model that’s worked out quite well so far—that is, a brand of lifestyle accessories with actual, practical use for your everyday needs—it makes sense for the company to be more frugal about splurging cash on, and rolling out, A.I. that users may not even want. Both Meta and OpenAI have already been learning that the hard way.
The issue remains, however, that a lot of this A.I. is fundamentally faulty, no matter who’s making the model or what it’s powering. Right now, every major A.I.-focused or -curious firm is racing to build out veritable forests of data centers in the hopes that more data, energy, capacity, and training will finally free the biggest large language models from the curses of making shit up and getting basic facts wrong. Yet that race for superiority also involves these companies (especially Google) getting their A.I. engines to eat and regurgitate much of their own generated slop in turn, thus worsening the models by corroding the overall value of their training data.
Having already flouted copyright law and invited numerous lawsuits in the quest to make god-A.I., Big Tech is falling into a vortex of self-propelling errors that’s had the effect of polluting the entire digital information ecosystem. It’s hard to tell right now, but there may very well be a cap on how far these models can go—and Apple, even after having (supposedly) studied the problem more carefully than its peers, is running up against that cap right now, with its flurry of notification-summary mistakes.
Perhaps Apple has some remedies awaiting. But if, two years after ChatGPT’s debut, Apple’s biggest rollouts fall into the same PR nightmares that its brasher rivals are still overcoming, what does that say about Big Tech’s biggest bet?