Artificial intelligence (AI) has long been heralded as the next big step for tech, and it's always been a source of fascination for people going right back to the early days of computing. But while some envisage a Jetsons-esque future of robot maids and butlers making our lives easier, there are always those who have more sinister ideas about what the future of AI will hold.
And sometimes, it's easy to see why people worry about the emergence of HAL or the T-800. AI is now getting to a stage where it's more 'real' than ever, and has capabilities that even just a few years ago would have been confined to the realms of science fiction.
Some of the implications of this are great. But several of these tools, if used incorrectly, have the potential to be hugely disturbing. Here are five creepy AI technologies that might just find their way into any future Skynet.
1. Google's virtual assistant is becoming less virtual
By now, we're all familiar with the concept of virtual assistants - tools like Amazon Alexa or Google Assistant that can answer our spoken search queries, keep us up to date on the latest news, or remind us about our diary appointments. But now they don't just alert us to what's in our calendar - they can make appointments themselves.
Earlier this year, Google showed off a new feature that can actually phone up businesses, speak to a person on the other end - who has no idea they're talking to a robot - and make reservations. The Duplex system uses natural-sounding spoken language, even adding fillers like 'um' and 'er' to make it sound more realistic, and can converse with the person to give details and answer questions.
2. Facebook AIs made up their own language
Getting AIs to communicate seamlessly with humans, as with the Google Duplex example above, takes a lot of work, but sometimes this doesn't go as intended. In 2017, for example, Facebook set up a system in which two AI agents conversed with each other to learn how to negotiate with humans.
However, things quickly took a turn for the creepy when the agents - with no human intervention at all - abandoned English and developed their own 'language' that they understood perfectly, but was completely unintelligible to us.
The agents worked out that English, with its myriad rules, exceptions and sentence structures, isn't actually a very efficient way of conveying information, so they simplified things to make it faster for themselves, resulting in sentences such as "balls have zero to me to me to me to me to me to me to me to me to". So when the robot uprising does happen, we may have no idea what they're planning.
3. Amazon Alexa gives evidence in court cases
In many ways, the popularity of Amazon Alexa and similar virtual assistants is a bit strange. It seems many of us are perfectly content to have a machine sitting in the corner of our living room, constantly connected to the internet and listening to everything we say, just so we don't have to get up from the couch to turn the lights on and off.
If the risk of it occasionally sending all your details to a total stranger isn't creepy enough (and don't get us started on the laughing), Alexa is now even being asked to give evidence in court cases, with prosecutors looking to use any recordings captured on the devices to prove what happened during crimes.
While catching killers may be a positive use for the technology, it does throw up many wider questions about privacy, so if you do decide to get your own, you should remember: it's always listening.
4. Facial recognition tracks you everywhere
We've long since entered the era in which Minority Report-style advertising that knows who you are and what you're interested in is a practical reality. But while billboards that greet you by name aren't yet in common use, facial recognition systems that know exactly who you are certainly exist - and they're being deployed everywhere.
For instance, several police forces around the world now use Amazon's real-time facial recognition system, Rekognition, to help crack down on crime, while sports fans heading to some stadiums have unwittingly taken part in trials to see if the AI can spot potential troublemakers. This is still very much a work in progress (most of the red flags in one trial turned out to be false positives), but it's getting better all the time, and with governments such as China's taking an interest, could privacy be a thing of the past?
5. AI learns to cheat the system
One of the key things many experiments into AI have taught us is that the machines have a very non-human view of the world; what seems logical to them may often be highly impractical for us.
For example, one simulation asked an AI to apply the minimum force required to land a plane on an aircraft carrier. Instead, it discovered that by applying an enormous force - enough to crash the plane several times over - it could overflow the simulator's memory, so that the recorded force wrapped around to a tiny value. Great for passing the test, less great for any unfortunate human pilot in the real world.
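To see how that kind of loophole works, here's a minimal sketch - entirely hypothetical code, not the actual simulator - assuming the simulation stored the landing force in a 32-bit integer. A big enough number simply wraps around to a small one:

```python
import ctypes

def recorded_force(actual_force: int) -> int:
    # Hypothetical: the simulator logs the force in a 32-bit signed integer,
    # so values beyond its range wrap around instead of being rejected.
    return ctypes.c_int32(actual_force).value

print(recorded_force(500))        # 500 - a gentle landing, recorded honestly
print(recorded_force(2**32 + 7))  # 7   - a catastrophic impact that wraps to almost nothing
```

An AI judged only on the recorded number has every incentive to take the second route.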
This predilection for finding loopholes, workarounds and outright cheats crops up again and again. For instance, a robot was given the job of sorting a list of numbers until there were no unsorted numbers left. But instead of completing the task as expected, it simply deleted the list - hey presto, no more unsorted numbers!
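Again, a toy sketch (hypothetical code, not the original experiment) shows why deleting the list looks like a perfect answer: if the objective only counts out-of-order numbers, an empty list scores flawlessly.

```python
def unsorted_count(numbers: list) -> int:
    # Objective: how many adjacent pairs are out of order? Zero means "sorted".
    return sum(1 for a, b in zip(numbers, numbers[1:]) if a > b)

honest = sorted([3, 1, 2])  # [1, 2, 3] - do the actual work
cheat = []                  # just delete everything

print(unsorted_count(honest))  # 0 - task completed properly
print(unsorted_count(cheat))   # 0 - task "completed" by emptying the list
```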
These examples may be mildly amusing, but they should remind us that, however good AI gets, it doesn't think like we do. And that could become more than a mild inconvenience as these systems grow ever smarter.