Getty Images: What if AI is harnessed by hackers for malicious purposes?
Technology of Business has garnered opinions from dozens of companies on what they think will be the dominant global tech trends in 2018. Artificial intelligence (AI) dominates the landscape, closely followed, as ever, by cyber-security. But is AI an enemy or an ally?
Whether helping to identify diseases and develop new drugs, or powering driverless cars and air traffic management systems, the consensus is that AI will start to deliver in 2018, justifying last year's sometimes hysterical hype.
It will make its presence felt almost everywhere.
AI can sift through vast amounts of digital data, learn and improve, spot patterns we can't hope to see, and hopefully make sensible decisions based on those insights.
The key question is how much freedom we should give AI-powered systems to make their own decisions without any human intervention.
A driverless car making a split-second decision to apply the brakes or swerve to avoid an accident makes obvious sense. It can react much faster than we can.
But an autonomous weapons system deciding to fire on what it thinks are "insurgents" based on what it has learned from previous experience? That's not so easy.
Yet almost by stealth, AI is infiltrating nearly all aspects of our working lives, from machine learning algorithms improving the accuracy of translation software, to call centre chatbots answering our questions.
"AI-powered chatbots will continue to get better at conveying information that can help consumers make better, more informed purchase decisions," says Luka Crnkovic-Friis, chief executive of Peltarion, a Swedish AI specialist.
Customer experience firm Servion predicts that by 2020, 95% of all customer interactions will involve voice-powered AI, and that 2018 will be the year this really takes off.
Getty Images: Will chatbots become indistinguishable from human helpers?
"Advances in speech recognition, biometric identification, and neurolinguistics will also mean that as we interact with businesses and brands via voice, our experiences will become increasingly conversational and human-like," says Servion's Shashi Nirale.
J. Walker Smith, executive chairman of Kantar Futures, agrees, saying that "learning emotional empathy is the final barrier to AI's full-scale market growth".
Talking to machines will become as natural as typing used to be, the tech utopians believe. Apple's Siri, Amazon's Alexa, and Google's Home voice-activated assistants are already in many homes, rapidly learning new skills.
In the workplace, these digital assistants - think of IBM's Watson - will give employees "more immediate access to data" that will lead to "a reduction in repetitive or administrative tasks in their roles", says Javier Diez-Aguirre, vice president of corporate marketing at office equipment maker Ricoh Europe.
This means that AI-powered machines "will become colleagues, not competitors", concludes Mark Curtis, co-founder of Fjord, the Accenture-owned design and innovation company.
Getty Images: Will we see AI-powered machines as helpers or rivals in the workplace?
AI-powered human resources systems could even help workforces become "superhuman", argues Pete Schlampp, vice president of analytics at software platform Workday.
All those forecast to lose their jobs as a result of AI-powered automation may disagree.
And global collaboration will become easier, the world smaller, as translation services become more accurate, argues Alec Dafferner, director of GP Bullhound, a San Francisco-based tech investment and advisory firm.
"Driven by machine learning, seamless and instantaneous translation will become the new normal," he says.
There seem to be few areas AI will not permeate, from recruitment to facial recognition, cyber-security to energy management systems.
The explosion in the amount of data generated by mobile devices, computers and the "internet of things" is feeding these learning algorithms, while the ability to analyse it all in real time using virtually unlimited cloud computing power has accelerated the pace of development.
AI still operates according to the rules we set - until we allow it to develop its own rules, of course. But what if those rules and assumptions are inherently biased?
It's something Rob McCargow, AI programme leader at global consultancy firm PwC, warns about, particularly in the field of recruitment and human resources.
"AI-augmented recruitment stands out as a key growth area with a range of offerings in the market," he says.
"But if not governed and implemented responsibly, it can lead to an amplification of bias and discrimination."
In other words, if past data shows the algorithm that white middle-class males have previously performed well at a company, it might conclude that this is the type of candidate it should select in future, reinforcing gender and racial stereotypes. An example of AI not acting intelligently at all.
The old computing maxim "rubbish in, rubbish out" still applies.
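The bias-amplification effect described above can be illustrated with a deliberately simplified sketch. This is a toy example with invented data, not any real hiring system: a naive "model" that learns only from historical outcomes will simply replicate whatever bias those outcomes contain.

```python
# Toy illustration of bias amplification: a naive model trained on
# historically biased hiring records reproduces the bias.
# All data and names here are hypothetical.

# Past hiring records: (demographic_group, was_hired)
past_hires = [
    ("group_a", True), ("group_a", True), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

# "Training": learn the historical hire rate for each group
rates = {}
for group in {g for g, _ in past_hires}:
    outcomes = [hired for g, hired in past_hires if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

def recommend(group, threshold=0.5):
    """Recommend a candidate only if their group historically 'succeeded'.

    Note: the model has learned nothing about individual merit -
    it is screening purely on group membership.
    """
    return rates[group] >= threshold

print(recommend("group_a"))  # True: the historical pattern is replicated
print(recommend("group_b"))  # False: candidates screened out regardless of merit
```

The point of the sketch is that nothing "intelligent" happens here: the algorithm faithfully optimises against the data it was given, which is exactly why governance of the training data matters.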
Cyber-security: 'Weaponised AI'
While many cyber-security firms point out AI's potential in combating cyber-attacks - monitoring computer networks in real time for signs of abnormal behaviour, for example - others soberly observe that AI will also be "weaponised" by the hackers.
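The defensive use case mentioned above, monitoring a network for abnormal behaviour, can be sketched in miniature. This is an assumed, minimal example (invented traffic figures, a simple z-score test), not how any named vendor's product works:

```python
# Minimal anomaly-detection sketch: flag traffic readings that deviate
# sharply from a learned baseline, using a z-score threshold.
import statistics

# Hypothetical baseline of normal traffic (requests per second)
baseline = [102, 98, 101, 99, 100, 103, 97, 100]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(reading, threshold=3.0):
    """Return True if the reading lies more than `threshold` standard
    deviations from the baseline mean."""
    return abs(reading - mean) / stdev > threshold

print(is_anomalous(101))  # False: within normal variation
print(is_anomalous(450))  # True: far outside the baseline, worth investigating
```

Real systems learn far richer behavioural models than a single mean and standard deviation, but the principle is the same: establish what "normal" looks like, then alert on statistically significant departures from it.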
"2018 could be the year we see the first battle of the AI bots," warns Dr Adrian Nish, head of threat intelligence at BAE Systems.
Getty Images: AI programs could end up doing battle in cyberspace
"It's inevitable that attackers will begin to incorporate machine learning and AI at the same rate as network defence tools. We may already be at this point, with online Twitter bots able to react to emerging events and crafting messages to respond."
Simple but effective email phishing attacks, whereby employees are hoodwinked into clicking on dodgy links or downloading malware because they think the message is from someone genuine, could reach another level of sophistication, says Dave Palmer, director of technology at security firm Darktrace.
Imagine a piece of malware that can train itself on how your writing style differs depending on who you are contacting and uses this to send relevant messages to your contacts, he says.
"These phishing messages will be so realistic that the target will fall for them. Such advances in AI will take us to the next stage in defenders versus attackers, and we need to be ready."