
Technically Speaking: Tech Companies, Stop Plastering “AI” on Every Single Gadget

Almost every single product launch I’ve attended in the past year was inundated with this one word – actually less a word than two vowels that symbolise the cutting edge of all technology. I heard it at events launching all types of gadgets: TVs, speakers, headphones, ovens, even fridges. I’m talking about AI: Artificial Intelligence. If you’re sick of hearing the whole AI spiel every time you tune into a keynote or crack open a spec sheet, you’re absolutely not alone.

I believe the tech industry needs to define exactly what AI is. Otherwise, it might go the way of the venerable smartphone notch: at first an ingenious solution to the bezel-less display problem, but now just trendy, unsightly, and frankly rather pointless in most cases.

Now, AI is a thorny topic, and I think we owe you an explainer before we begin this rant.

There’s a TL;DR in the last section if you’re not one for lengthy articles.

What is Artificial Intelligence?

Everyone knows what Artificial Intelligence is, right? Well, yes, probably – but then you’d likely know what a diesel engine is without necessarily knowing how it works. Let’s start with the first layer – computing: humans write code (algorithms, rules) that enables machines to help solve our problems.

Next, AI is when machines use these algorithms to form their own added ‘layer’ of rules, solving more complex problems by observing trends and patterns in data inputs and observations. These observations and problems are as diverse as they come: AI-capable machines can reason, plan and learn by interpreting human inputs like language, actions and gestures, and environmental inputs like weather data and seismic and oceanographic trends.
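To make that second layer concrete, here’s a minimal sketch in Python – with entirely made-up data – of the difference. The first function is plain computing (a human-written rule); the second derives its own rule from labelled examples, which is the essence of machine learning:

```python
# Plain computing: a human writes the rule explicitly.
def is_spam_by_rule(message: str) -> bool:
    return "free money" in message.lower()

# A (very) simple learning step: the machine derives its own rule
# (a score threshold) from labelled examples instead of being told one.
def learn_threshold(samples: list[tuple[float, bool]]) -> float:
    """Pick the midpoint between the highest-scoring ham and the
    lowest-scoring spam as the decision threshold."""
    ham = [score for score, is_spam in samples if not is_spam]
    spam = [score for score, is_spam in samples if is_spam]
    return (max(ham) + min(spam)) / 2

# Toy training data: (spam_score, label)
threshold = learn_threshold([(0.1, False), (0.3, False), (0.7, True), (0.9, True)])
print(threshold)  # 0.5 -- a rule the machine worked out from the data itself
```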

But here we have a problem. Let’s go back to when you were five. You’ve just learnt the alphabet: its sequence, and how to recognise and associate each character with its corresponding sound. Your parents are heady with pride and praise you for your intelligence.

You’re now in your late twenties, and you’ve got to recite the alphabet to your toddler. You do so flawlessly. It’s the same feat you performed at five, but there’s no applause: it’s something you should know. Have you gotten any less intelligent? No. But have you demonstrated intelligence? Not really, not this time. You see, intelligence is incremental: it shows when you add knowledge or capability at a point in time.

AI feats of yesteryear – things like speech recognition and scene recognition – really shouldn’t be considered ‘intelligent’ today. There’s still room for improvement, much as you probably couldn’t recite the alphabet backwards quickly (or at all), but these feats sit far beneath the current state of the art to count as cutting-edge intelligence. Manufacturers: stop calling it AI if it isn’t learning anything significantly useful.

Now, Artificial Intelligence is a term with no official industry definition, and it’s thus open to abuse by manufacturers, intentionally or not. It’s happened before, en masse: in many jurisdictions, margarine manufacturers can’t use ‘butter’ in their branding or even colour their products yellow, and hot dog manufacturers aren’t allowed to call their products ‘sausages’ unless a certain amount of real meat is used.

Here’s the issue: if AI is so complicated and tricky to explain, why should it be a core part of manufacturers’ marketing taglines when users are unable to discern, understand and differentiate between implementations across products?

Where’s the AI?

While now most prominent on the smartphones we carry with us daily, AI was first deployed for handwriting and speech recognition. But as technology advances, we scarcely deign to call the humble act of speech and text recognition Artificial Intelligence any more.

These days, we see all manner of products touted with this ubiquitous feature. Phones and their cameras, computers, speakers, fridges and TV sets – all manner of consumer electronics seem to have those two vowels plastered on them. But what does it do, exactly? Does it just work to recognise your voice, and then defer to the opinion of higher (search engine) beings? Or does it just ‘learn’ your usage patterns to give you better predictions to streamline your usage? Or will it one day be so intelligent that nuclear codes are within grasp and we’ll all be under threat of a mysterious Skynet-esque entity?

That’s where we begin our investigation.

1. Cloud or Onboard AI?

It’s an important consideration and a rather simple one. Cloud AI of the likes of Google, Amazon, Apple, Alibaba and Baidu has the benefit of a vast ocean of data to optimise its services. It can learn to recognise images from the billions of pictures and captions uploaded to these companies’ online repositories. These services also process millions of search requests per hour, which allows them to better understand human behaviour, and by allowing voice search, they can better understand accents, speech patterns and other human inflexions that vary greatly between regions.

Onboard AI, by contrast, has a limited number of data points to work from: your personal device gets activated regularly, but perhaps only receives a handful of useful inputs per day.

What’s worse: adapting to human behaviour can be useful, but also frustrating. Think about the favourite shortcuts on your quick-access bar, curated by AI: different apps for different times of day. But one day you’re on leave, or on holiday in a different time zone – and you can’t get to your favourite application easily. Humans are creatures of habit, but we’re also creatures of irregularity, change and revolution.
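As an illustration – a hypothetical sketch, not any vendor’s actual implementation – such a ‘smart’ quick-access bar could be as simple as a frequency count per hour of local time, which is exactly why it falls apart the moment your routine or time zone shifts:

```python
from collections import Counter, defaultdict

# Hypothetical sketch of an AI-curated quick-access bar:
# tally which app is launched in each hour of local time.
launches_by_hour: dict[int, Counter] = defaultdict(Counter)

def record_launch(hour: int, app: str) -> None:
    launches_by_hour[hour][app] += 1

def suggest_app(hour: int) -> str | None:
    """Suggest the historically most-launched app for this hour."""
    if not launches_by_hour[hour]:
        return None
    return launches_by_hour[hour].most_common(1)[0][0]

# Weeks of habit: news at 8 am, maps at 6 pm.
for _ in range(30):
    record_launch(8, "news")
    record_launch(18, "maps")

print(suggest_app(8))  # 'news' -- spot on for a normal workday
# On holiday eight time zones away, your 8 am is the model's midnight:
print(suggest_app(0))  # None -- the 'intelligent' bar has nothing for you
```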

There’s something going for it, though: onboard AI does give some peace of mind in these tumultuous times, when privacy scares are rife.

Next, there’s the large issue of marketing spiel.

2. Neural Networks? Neural Processors? Teraflops? Operations? What?

Possibly the greatest reason so many people are averse to understanding and keeping up with technology is the sheer amount of jargon needed to join its ranks. Brace yourself: it just got worse. Much like a librarian at lunch with a table of investment bankers, anyone uninitiated sitting through a product keynote in 2018 will be lost.
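The jargon can be demystified, though. A ‘flop’ is just a floating-point operation (a multiply or an add), and headline figures like teraflops (trillions of flops per second) or TOPS (trillions of operations per second) are just counts of these. Some back-of-envelope arithmetic – rough and illustrative only, not any specific chip’s numbers:

```python
# One fully-connected neural network layer with n inputs and m outputs
# needs roughly n*m multiplies and n*m adds per inference pass.
def dense_layer_flops(n_inputs: int, n_outputs: int) -> int:
    return 2 * n_inputs * n_outputs

flops_per_pass = dense_layer_flops(1000, 1000)   # ~2 million operations
chip_tops = 5                                    # a hypothetical "5 TOPS" NPU
passes_per_second = chip_tops * 10**12 / flops_per_pass
print(f"{flops_per_pass:,} ops per pass; ~{passes_per_second:,.0f} passes/second")
```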

What should be done?

The short answer: don’t bother – it’s too complex for the bulk of us to understand unless we have a background (read: bachelor’s degree) in a computing field.

The long answer: it’s the responsibility of these tech companies to better explain the features, components and specifications of the products consumers are going to buy. We vote with our wallets, and just as we vote for the candidates with the best marketing spiels and the sharpest looks, we tend to buy the products we think will make us the happiest.

Well, the economist in me says good on you: you’ve maximised your utility. But the tech journo in me is red-faced with flared nostrils. Tech specifications mean something. They are the ingredients list, the brand name and the calorie count all combined. You wouldn’t buy food without any of those, would you?

My point: provide tech journalists with in-depth briefings on features, innovations and components. We’ll be far better equipped to inform customers of what exactly they’re spending those dollars on. Unless you’ve got something to hide…

3. Intelligence Requires Feedback

The real inspiration behind this article is the issue of feedback – or the lack thereof. All intelligence requires feedback. If you want to know whether something is right or wrong, you’ve got to check with someone, or against an answer sheet.

I’ve used about a dozen smartphone cameras that screamed “AI CAMERA”, but the feature only ever manifested itself as a tiny icon showing what the camera thought I was pointing at, while it plastered on a filter or changed the settings. Most of the results didn’t even look very good.

Let’s look at the Google Assistant and Google Lens. Just last week, I asked the Assistant to enter an event at 1600 hrs, but it responded with “… 1700 hrs” three times. Absolutely annoyed, I demanded, “Hey Google, I would like to give feedback”. And give feedback I did. There’s also talk that Google Assistant might now regularly prompt users to rate the responses it gives in order to improve the service.

Google Lens leads the pack with a simple on-screen thumbs-up/thumbs-down option that lets users ‘train’ the onboard image recognition algorithm.
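That feedback loop doesn’t have to be complicated. Here’s a hypothetical sketch of the thumbs-up/thumbs-down idea – emphatically not Google’s actual implementation – that simply down-weights labels the user has rejected:

```python
# Hypothetical sketch: use thumbs-up/down to re-rank label guesses.
# 'scores' would come from the onboard image-recognition model.
label_weights: dict[str, float] = {}

def record_feedback(label: str, thumbs_up: bool) -> None:
    weight = label_weights.get(label, 1.0)
    label_weights[label] = weight * (1.1 if thumbs_up else 0.5)

def best_guess(scores: dict[str, float]) -> str:
    return max(scores, key=lambda label: scores[label] * label_weights.get(label, 1.0))

scores = {"golden retriever": 0.55, "labrador": 0.45}
print(best_guess(scores))                             # 'golden retriever'
record_feedback("golden retriever", thumbs_up=False)  # user: that's wrong!
print(best_guess(scores))                             # 'labrador' -- it learnt something
```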

My point: it’s not AI if I can’t tell the device it got something wrong.

4. AI Dies With the Device

Currently, no device gives users the ability to port its onboard AI to their next device. That’s a huge waste of the processor’s work, given the limited number of inputs available to onboard AI in the first place. It also means users have to start from scratch on their new devices.

If anything, it might actually prove that the onboard AI on these devices is doing little to learn and improve users’ experiences, since there’s apparently no valuable personalised information worth moving to the next device. Prove me wrong, tech companies.
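For what it’s worth, porting that learned state needn’t be exotic. A hypothetical sketch – none of this reflects any shipping device – could serialise the personalisation data to a file that the next phone imports:

```python
import json

# Hypothetical on-device personalisation state, e.g. the app-launch
# tallies and feedback weights from the earlier sketches.
profile = {
    "launches_by_hour": {"8": {"news": 30}, "18": {"maps": 30}},
    "label_weights": {"golden retriever": 0.5},
}

def export_profile(path: str) -> None:
    with open(path, "w") as f:
        json.dump(profile, f)

def import_profile(path: str) -> dict:
    with open(path) as f:
        return json.load(f)

export_profile("ai_profile.json")              # on the old device
restored = import_profile("ai_profile.json")   # the new device picks up where you left off
print(restored["launches_by_hour"]["8"])       # {'news': 30}
```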

TL;DR and the Future of AI

I’m an optimist, and a believer in technology. Computational power has given very, very comfortable lives to those of us privileged enough to be empowered by it. AI represents the new frontier in computing: giving these electronic entities the power to reason, and to form new rules and algorithms of their own.

We’ve seen what the digital divide has done to our society: a huge boon to those who understand how to use tech gadgets, while tremendously disadvantaging those who don’t. AI is potentially the next wave of change: taking away jobs and making society ever more ruthlessly efficient, while freeing us to do more with our lives. It’s high time those at the forefront of technology ensured the general public truly understands AI, so everyone can fully take advantage of its capacity to empower.

The TL;DR is a simple one.

Tech companies need to:

  1. Give meaning to jargon; it’s gibberish if almost no one understands ‘neural networks’ and ‘flops’. Not everything is AI.
  2. Educate the public through the media (it’s literally our job to explain).
  3. Be honest about what features and performance consumers are actually going to get.
  4. Enable users to take the progress made with their AI on to the next device.
  5. Ensure privacy is their priority.

Users need to:

  1. Be discerning and cut through the verbal thicket of marketing lingo.
  2. Seek alternate opinions, always (not necessarily only in tech).
  3. Question everything, especially spec sheets.
  4. Vote with your wallet.

Technically Speaking is a weekly op-ed where VR Zone’s Editor-in-Chief Ian Ling probes pertinent issues for hidden truths and offers technically-minded insights.

Ian Ling
http://uncommontragedy.com
Ian is the resident Tech Monkey and Head of Content at VR Zone. His training in Economics and Political Science underpins his love for journalism and storytelling. A photographer by passion and an audiophile by obsession, Ian is captivated by all forms of tech that make enthusiasts tick.

