Phone makers are loading their devices with AI, but is it AI people actually want?

2024 is shaping up to be the year of AI.

Sure, you could argue that last year was the year of AI thanks to the rapid proliferation of OpenAI’s ChatGPT and other chatbots, but what’s different with 2024 is the promise of AI in devices people use every day — smartphones and laptops.

Intel went big on AI with its latest Core Ultra chips, which include neural processing units (NPUs) to handle on-device AI capabilities. Likewise, Qualcomm’s Snapdragon 8 Gen 3 chipset, which will power most Android flagships this year, has a major focus on AI. Plus, software makers are rushing to get AI capabilities to the market. Microsoft is the obvious one, leveraging a partnership with OpenAI to shove AI-powered chatbots and tools into Windows, Office and more. Google is also working on AI with Bard and its new Gemini model, which has a variant designed for smartphones (and even powers a few features on the Pixel 8 Pro).

Samsung is set to unveil its Galaxy S24 series on January 17th, and reports suggest the devices will be all about AI. Even Apple is rumoured to be working on a large language model (LLM) and might integrate it with Siri to improve its digital assistant.

However, this sudden rush to inject AI into products leaves me more than a little concerned, especially when you look at what these companies are trying to implement. So, let’s dive into the rumours and look at what AI features we’re supposedly getting this year, and then talk about what AI features we should actually want.

What we’ve got and what we’re getting

There’s a ton of hype about AI right now and, by extension, a lot of misinformation, too.

When most people hear ‘AI,’ they think about artificial general intelligence, or AGI — you know, like Skynet from Terminator. But that’s very far from the reality of the AI that’s available to us today. Primarily, we’re dealing with generative AI: machine learning tools capable of generating content based on user prompts.

Depending on what a generative AI model is trained on, it can produce text, images, code, video, music, and more. Most available models are limited to one of these modalities (i.e., an image generator like DALL-E can create images but can’t output a paragraph of text). However, there has been progress on multimodal models that can handle various types of inputs, like GPT-4 and Gemini.

These generative AI capabilities power many of the new features reportedly coming to smartphones this year. For example, the Galaxy S24 series is expected to include ‘Galaxy AI,’ a suite of features that could include things like ‘Generative Edit,’ for moving or removing objects in photos, or a ‘Live Translate’ feature that supposedly can translate your voice right in the Samsung Phone app. There are also rumours about Samsung lifting features from ChatGPT and Google Bard to boost speech-to-text functionality and, possibly, improve its Bixby digital assistant.


We’ve already seen similar generative features in the Pixel 8 series’ Magic Editor and Best Take, and we know Google is working to integrate Bard with Assistant. Moreover, when Google announced its Gemini model, it brought Gemini Nano (a version of Gemini designed to run on smartphones) to the Pixel 8 Pro to power a ‘Summarize’ feature in the Recorder app and Gboard’s ‘Smart Reply’ capability, though the latter launched only as a developer preview limited to WhatsApp. Summarize can generate summaries of recorded conversations, provided they’re not too long or too short, while Smart Reply can generate message responses “with conversational awareness.”

To sum up, the rush to implement generative AI into smartphones has so far produced only a handful of new features and expansions of existing ones, often with dubious results. We’re seeing manufacturers use generative AI to expand image editing capabilities and improve things like speech-to-text and suggested replies, along with a push to use LLMs to better the voice-activated digital assistants we’ve had access to for years. None of this is particularly exciting, and the features that are already available haven’t blown me away. Maybe that will change soon — Samsung could unveil a Galaxy AI feature that will actually impress me in a few days — but I doubt that will be the case. Why? Because no one has done anything with AI that makes me want to use it. Yet.

What we should actually want

Call me a skeptic, but I have yet to see anything legitimately useful come out of the current generative AI technology despite its hype. The AI dream, as I have always understood it, was that it would be able to do all the boring, monotonous, tedious nonsense that takes up people’s time so that people would be free to do more interesting stuff. What we’ve got now feels like the opposite — people are stuck doing boring tasks while AI is off creating art. Lame.

So here are some things I’d like to see AI actually do, things that would actually improve people’s lives and make these AI-infused products worth buying.

A digital assistant that actually assists

There are plenty of digital assistants out there already — Alexa, Bixby, Siri, Cortana (RIP), and the aptly-named Google Assistant. But I increasingly find that these assistants aren’t all that helpful, and in some cases, they’re actually getting less useful over time. Companies are rushing to infuse these assistants with generative AI, but beyond improving their ability to converse naturally with people, I’m not sure how generative AI will make them more helpful.

Instead of limited capabilities like checking the weather or looking up some information, I want to see digital assistants actually save me some time by being able to reliably complete complex tasks. They should be able to summarize my email inbox, highlight important emails I need to review, triage emails I don’t need to look at, and even schedule meetings into my calendar for me. I particularly like Rabbit’s bold vision of having an AI that’s capable of using apps on users’ behalf — this would be a great addition to existing assistants, allowing them to get things done with the apps on my phone without me needing to do them.

At this point, AI should answer my phone calls like a secretary and even make calls on my behalf to book reservations at a restaurant or get me an appointment at my favourite barber. Come on, impress me!

Reliable researchers

Google and Microsoft seem to think generative AI can transform the search experience. They might have a point, but the current implementations are lacklustre at best, mostly because generative AI isn’t reliably accurate. Having an AI tool to parse the vast ocean of content on the internet and find information for me sounds incredibly powerful, but it’s only worth using if I can trust it.


Right now, any time these tools might save me ends up going into fact-checking their work — I’d be better off doing the research myself. Once they solve the accuracy and hallucination problems, AI search tools might actually revolutionize the space.

Put the ‘smart’ back in smartphone

Some of this might tie into the digital assistant point above, but AI tools could also impress me if they make smartphones ‘smart’ again. By that, I mean reducing the overhead of using phones to simply telling the phone what you’d like to do.

Let’s take the iPhone’s ‘Focus’ feature as an example. Focus lets users create different Focus categories with various settings that impact their phone, such as muting certain notifications or hiding specific apps. It’s a neat feature, but one I’ve never interacted with because I didn’t want to spend the time setting it all up. Imagine if, instead of navigating through a bunch of menus and adjusting settings, you could just tell Siri to set up a Focus with the settings and schedule you want.

This type of nuanced control could apply to so many different settings and features that already exist in smartphones today and make them much more accessible to people who might not be as tech-savvy as someone like me.

Interconnected devices

If you really want to wow me, create an AI tool that works seamlessly across services and devices. I should be able to say to my phone, “I want to watch Oppenheimer,” and it pulls up the movie on my TV and dims my lights, all without me needing to figure out where I can stream or rent the movie or how to get it on my TV.

And if someone can figure out a way to automate the process of setting up smart home devices like smart lights, I might actually use more of them.

What AI do you want to see?

Those are just a few ideas I had, but what are your thoughts? Do you think the current generative AI features are impressive, or do you want to see something more? Let us know what you want out of AI on your smartphones in the comment section below.
