Ask people on the street how much the AI in use today affects their lives, and most would probably answer that it doesn’t affect them right now. Some might say that it’s pure science fiction. Others might say that it may affect our future but isn’t used in our world today. Some might correctly identify a few ways it’s used in modern technology, such as voice-powered personal assistants like Siri, Alexa and Cortana. But most would be surprised to find out how widely it is already woven into the fabric of daily life.

AI uses for personal assistants and other humanlike bots

Voice-powered personal assistants show how AI can expand human capabilities. Their ability to understand human speech and almost instantaneously interact with widespread data sources or cyber-physical systems to accomplish users’ goals gives people something akin to their own genie in a magic lamp.

Nor does the list of such personal assistants end with those well-known examples. Special-purpose applications such as Lucy[i] for marketers fill niches in a growing number of industries. Lucy uses IBM Watson to gather and analyze marketing data, communicating with marketers through the same kind of natural language interface that popular home personal assistants use.

Other AI tools provide intelligent interfaces between stores and customers. Global office supplies retailer Staples turned its old “Easy Button”[ii] advertising campaign into an AI personal assistant: customers tell the in-office Easy Button what they need, and it instantly places an order for speedy delivery.

The North Face uses an AI assistant[iii] to help customers determine the right outerwear for their needs. It asks them natural language questions to understand what activities make up their active lifestyles and matches those needs to the store’s inventory. Healthcare organizations are developing AI systems that provide personalized answers to patient questions or that interface with doctors to bring them the latest data on clinical trials pertinent to their patients’ needs.

Companies like Cogito are working to expand the emotional intelligence of customer service bots.[iv] Such bots can recognize cues in facial expressions or tone of voice and identify the emotional state of the person they are interacting with.

In most cases, these bots escalate a matter to a human when they detect emotional impediments to an interaction, but the more advanced ones are becoming able to respond appropriately to a growing variety of human emotions. Many of the customer service interactions you currently have – and, unfortunately, the telemarketer calls[v] you receive – may have you interacting not with a human but with an AI-enabled bot, without you suspecting it in the least.
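
To make the escalation logic concrete, here is a minimal Python sketch. The sentiment_score function and the threshold are purely illustrative stand-ins for a real emotion-recognition model, not a description of Cogito’s actual system:

```python
# A minimal sketch of emotion-aware escalation. sentiment_score() is a
# hypothetical stand-in for a real model; it returns -1.0 (distressed)
# through 1.0 (calm). The threshold is an invented, tunable value.

ESCALATION_THRESHOLD = -0.4  # illustrative only

def sentiment_score(utterance: str) -> float:
    """Toy stand-in for a trained emotion-recognition model."""
    negative_cues = ("frustrated", "angry", "unacceptable", "cancel")
    return -0.6 if any(cue in utterance.lower() for cue in negative_cues) else 0.5

def handle_turn(utterance: str) -> str:
    if sentiment_score(utterance) < ESCALATION_THRESHOLD:
        return "ESCALATE: routing conversation to a human agent"
    return "BOT: continuing automated handling"

print(handle_turn("This is unacceptable, I want to cancel!"))  # escalates
print(handle_turn("Can you check my order status?"))           # stays with bot
```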

Such, too, may be the case with many simple news stories[vi] that you read. Articles that are primarily data-driven, such as financial summaries and sports recaps, are increasingly written by AI.

Or consider for a moment the voice-to-text features on your smartphone. They, too, are increasingly taken for granted, yet until the introduction of artificial neural networks, such transcription was out of reach for even the most advanced computers. With neural networks, machine transcription has reached parity with, and in some tests surpassed, human transcribers.[vii]
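
For a sense of how accessible this capability has become, here is a hedged sketch using the open-source SpeechRecognition Python package, which wraps several hosted neural recognizers; the audio file name is a placeholder:

```python
# A minimal transcription sketch using the SpeechRecognition package
# (pip install SpeechRecognition); "meeting.wav" is a placeholder file.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("meeting.wav") as source:
    audio = recognizer.record(source)  # read the entire audio file

# Sends the audio to a hosted neural-network recognizer and returns text.
text = recognizer.recognize_google(audio)
print(text)
```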

And if you’ve ever interacted through email with someone named [email protected] or [email protected] to set up a meeting with a busy executive, you’ve been corresponding with a bot.[viii] The bot uses machine learning (ML), a core component of AI, to learn the executive’s schedule and meeting preferences. Once it is trained, the executive can CC it on meeting-related emails, and the bot corresponds with the other recipients in humanlike language to arrange meetings that fit those preferences.
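
A real scheduling bot’s ML is far more sophisticated, but a toy sketch conveys the core idea: given learned preferences and known busy times (both invented here), propose the first acceptable slot:

```python
# A toy scheduler in the spirit of such bots. The preferences, busy times
# and dates are all illustrative, not any real product's logic.
from datetime import datetime, timedelta

PREFERENCES = {"earliest_hour": 10, "latest_hour": 16, "duration_min": 30}
BUSY = [(datetime(2019, 5, 6, 10, 0), datetime(2019, 5, 6, 12, 0))]

def propose_slot(day: datetime) -> datetime:
    slot = day.replace(hour=PREFERENCES["earliest_hour"], minute=0)
    end_of_day = day.replace(hour=PREFERENCES["latest_hour"], minute=0)
    step = timedelta(minutes=PREFERENCES["duration_min"])
    while slot + step <= end_of_day:
        # Keep the first slot that does not overlap any busy interval.
        if all(not (slot < b_end and slot + step > b_start)
               for b_start, b_end in BUSY):
            return slot
        slot += step
    raise ValueError("no slot available that day")

print(propose_slot(datetime(2019, 5, 6)))  # -> 2019-05-06 12:00:00
```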

AI uses for analyzing and structuring raw data

ML is at the heart of AI’s ability to identify user preferences and anticipate user needs. It analyzes large quantities of raw data and structures them into a usable form. As ever-greater volumes of data become available, AI’s ability to analyze everything from manufacturing processes to customer behaviors to market trends is what makes that growing body of data usable. Less constrained than humans in how much data it can analyze at once, AI can weigh more disparate types of data and produce far deeper and more comprehensive analyses.
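
As a minimal illustration of turning raw data into structure, the following sketch clusters synthetic customer behavior with scikit-learn’s k-means; the features and segments are invented:

```python
# A minimal sketch of structuring raw behavioral data with clustering
# (assumes scikit-learn is installed); the customer data is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Columns: visits per month, average basket size (invented customers)
customers = np.vstack([
    rng.normal([2, 20], 1.0, size=(50, 2)),   # occasional, small-basket
    rng.normal([12, 90], 2.0, size=(50, 2)),  # frequent, big-basket
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(model.cluster_centers_)  # two behavioral segments emerge from raw logs
```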

Not only can AI analyze more data, but it can do so without being distracted as humans so often are. It can monitor information at a level of detail, and over lengths of time, that humans would find numbingly mundane.

The old images of a manufacturing plant technician monitoring a large console covered with gauges, or a guard watching a large bank of security monitors, are thus becoming obsolete. AI-driven systems can monitor more gauges, more closely, than a human technician could. And AI-enabled security systems are being trained to “see” what’s happening on closed-circuit TV feeds and flag any anomalies that require human intervention.
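
A hedged sketch of what such monitoring might look like in code: scikit-learn’s IsolationForest learns the normal range of a simulated gauge and flags readings that fall outside it. The sensor values and thresholds are invented:

```python
# Anomaly detection over simulated gauge readings with scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal_readings = rng.normal(loc=75.0, scale=2.0, size=(500, 1))  # e.g. °C

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_readings)  # learn what "normal" looks like

new_readings = np.array([[74.8], [75.9], [91.3]])  # the last one is a spike
flags = detector.predict(new_readings)  # 1 = normal, -1 = anomaly
for value, flag in zip(new_readings.ravel(), flags):
    print(f"{value:5.1f} -> {'ALERT' if flag == -1 else 'ok'}")
```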

Such ML also drives a wide variety of applications that are ubiquitous in everyday life. Google Maps uses anonymized data from smartphones and information from crowdsourced apps to analyze traffic conditions and suggest the fastest routes to commuters’ destinations. Ride-sharing apps like Uber and Lyft use some of the same techniques to enhance their predictive ability. They are becoming increasingly precise at predicting arrival times, travel times and pickup locations, and even at detecting fraud.
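
Under the hood, arrival-time prediction is a regression problem. The sketch below fits a linear model on invented trip records purely to illustrate the technique; production systems use far richer features and models:

```python
# An illustrative ETA-prediction sketch; all trip data is made up.
import numpy as np
from sklearn.linear_model import LinearRegression

# Features per past trip: [distance_km, hour_of_day, avg_reported_speed_kmh]
X = np.array([[5, 8, 20], [5, 14, 45], [12, 8, 18], [12, 22, 60], [3, 12, 40]])
y = np.array([18, 8, 42, 13, 5])  # observed travel times in minutes

eta_model = LinearRegression().fit(X, y)
print(eta_model.predict([[8, 8, 22]]))  # predicted minutes for a rush-hour trip
```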

Gmail uses ML to learn what you perceive as spam. Rather than relying only on specific keywords to reroute incoming mail to your spam folder, it analyzes how you treat your incoming email to predict what you will want to see and what you will immediately discard. This also applies to Gmail’s sorting of email into Primary, Social and Promotions inboxes. The more you confirm its analysis of a type of email, the more it will follow the same pattern. The more you correct its decisions, the more it will revise how it assesses the indicators in your emails.
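
A minimal sketch of this feedback loop, using scikit-learn’s incremental Naive Bayes classifier on toy emails; the partial_fit calls play the role of the filter updating itself whenever you correct it. This illustrates the general technique, not Gmail’s actual pipeline:

```python
# An incrementally trained spam filter on invented emails.
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

vectorizer = HashingVectorizer(n_features=2**10, alternate_sign=False)
classifier = MultinomialNB()
classes = np.array([0, 1])  # 0 = keep, 1 = spam

emails = ["meeting moved to 3pm", "WIN a FREE prize now", "lunch tomorrow?"]
labels = np.array([0, 1, 0])
classifier.partial_fit(vectorizer.transform(emails), labels, classes=classes)

# The user rescues a message from the spam folder: feed the correction back.
correction = ["free parking available at the office"]
classifier.partial_fit(vectorizer.transform(correction), np.array([0]))

print(classifier.predict(vectorizer.transform(["win a free prize"])))  # typically [1]
```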

One of the industries into which AI has penetrated most deeply is finance. Checks can be scanned by smartphones and read with the help of ML and AI rather than being physically delivered to a bank. AI powers fraud detection systems, analyzing the vast number of daily transactions – and the vast number of variables that may combine to suggest a fraudulent one – to flag those that show suspicious signs.
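
In miniature, fraud flagging can be framed as supervised classification. This sketch trains a logistic regression on a handful of invented transactions; real systems weigh vastly more variables and data:

```python
# A simplified fraud-flagging sketch on invented transaction features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [amount_usd, km_from_home, is_new_merchant]
X = np.array([[25, 2, 0], [60, 5, 0], [30, 1, 0],
              [900, 4000, 1], [1200, 3500, 1], [40, 3, 0]])
y = np.array([0, 0, 0, 1, 1, 0])  # 1 = confirmed fraud

model = LogisticRegression(max_iter=1000).fit(X, y)
suspicious = np.array([[1100, 3800, 1]])
print(model.predict_proba(suspicious)[0, 1])  # estimated fraud probability
```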

Financial institutions increasingly use AI in making credit decisions, too. MIT researchers[ix] found that loan defaults could be decreased by 25% through AI-enabled tools. A wide variety of companies also use such predictive abilities of AI to improve customer experience and engage customers more deeply.

AI uses for predictive engines

Personalized searches and recommendations on shopping sites have become so commonplace that most users don’t realize how AI drives them. Users simply take those features for granted. When you pick up your cell phone, it provides you with news headlines and information about your friends’ social lives based on its analysis of what has drawn your attention in the past.

Bricks-and-mortar stores increasingly provide customers with coupons customized to their past purchases through the predictive power of ML applied to customers’ loyalty cards. Fashion ecommerce site Lyst uses ML and metadata tags to identify what the clothing in different images looks like and to match the images that fit users’ tastes to their search text.
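
One simple way to match tagged images to search text is TF-IDF plus cosine similarity, sketched below with an invented catalog; this illustrates the general technique, not Lyst’s actual pipeline:

```python
# Matching tagged product images to a free-text query; catalog is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = {
    "img_001": "red floral summer midi dress",
    "img_002": "black leather biker jacket",
    "img_003": "blue denim jacket oversized",
}
vectorizer = TfidfVectorizer()
item_matrix = vectorizer.fit_transform(catalog.values())

query_vec = vectorizer.transform(["denim jacket"])
scores = cosine_similarity(query_vec, item_matrix).ravel()
best = max(zip(catalog.keys(), scores), key=lambda kv: kv[1])
print(best)  # ('img_003', ...) – the closest match to the query
```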

ML is becoming increasingly adept at powering such predictive features, and to great effect: one study[x] claimed that recommender features increased sales by as much as 30%.
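
The workhorse behind many such features is collaborative filtering. Here is a bare-bones item-based version on a made-up ratings matrix:

```python
# Item-based collaborative filtering: cosine similarity between item columns
# of a (user x item) ratings matrix. All ratings are invented.
import numpy as np

ratings = np.array([  # rows = users, columns = items, 0 = unrated
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

norms = np.linalg.norm(ratings, axis=0)
similarity = (ratings.T @ ratings) / np.outer(norms, norms)  # item-item cosine

user = ratings[0]
scores = similarity @ user      # score items by similarity to the user's likes
scores[user > 0] = -np.inf      # don't re-recommend already-rated items
print(int(np.argmax(scores)))   # index of the recommended item (2)
```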

Amazon’s ML enables it to predict user needs with an almost scary degree of accuracy. It is even working to develop a system for identifying and delivering what users need before they realize they need it.

Social media sites use AI to analyze the content that users create or consume so they can serve content and ads that fit users’ needs and interests. They also use surrounding context to more clearly distinguish user intent in what people write. One of the most advanced uses of AI on social media is the capability to “see” uploaded images and suggest related ones. Facebook, for example, uses facial recognition to identify people in uploaded images and suggest their names as tags.

Home environmental control systems manufacturer Nest builds behavioral algorithms that learn users’ heating and cooling preferences. The more data the systems obtain, the better they can anticipate those preferences, relieving users of the need to make manual adjustments.
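
Stripped to its essentials, this kind of preference learning can be as simple as averaging past manual adjustments by time of day. The sketch below is an illustration under that assumption, not Nest’s actual algorithm:

```python
# A toy preference learner: average the user's manual setpoint adjustments
# by hour of day, then predict the setpoint automatically. Data is invented.
from collections import defaultdict

# (hour_of_day, temperature the user set)
history = [(7, 21.0), (7, 21.5), (8, 21.0), (22, 18.0), (23, 17.5), (22, 18.5)]

by_hour = defaultdict(list)
for hour, temp in history:
    by_hour[hour].append(temp)

def predicted_setpoint(hour: int, default: float = 20.0) -> float:
    temps = by_hour.get(hour)
    return sum(temps) / len(temps) if temps else default

print(predicted_setpoint(7))   # 21.25 – warm for the morning routine
print(predicted_setpoint(22))  # 18.25 – cooler for the night
```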

Netflix’s growing mastery of predictive technology enables it to satisfy customers with recommendations customized to what members have enjoyed in the past. And Pandora’s predictive technology goes beyond Netflix’s in its recommendations: its combination of human curation and algorithms ensures that little-known songs and artists don’t get overlooked in favor of heavily marketed ones. It gets to know users’ musical tastes so well that it successfully identifies music a user will like before the user even knows those artists and songs exist, adding the delight of discovery.

AI uses for autonomous operation

AI’s ability to analyze and predict has also enabled it to carry out complex tasks that people, even today, find hard to imagine a machine doing. The arrival of self-driving cars enabled and controlled by AI has been widely predicted as being on the horizon. The truth is that this horizon has already arrived at our doorstep. To illustrate, let’s look at the current state of practice.

The potential gains from developing autonomous or self-driving cars – both the revenue and profitability on offer and the danger of losing out to more agile competitors – are so great that the marketplace is becoming crowded. Six levels of autonomy, 0 through 5, have been defined:

  • Level 0. Comprises an automated system that issues warnings and, in some cases, may stage a short-term intervention, but does not take control of the car in any sustained manner.
  • Level 1. This is known as the ‘hands-on’ level. Control of the vehicle is shared between driver and automated system, though at no time does the system dominate. Typical Level 1 implementations include:
    • ACC (Adaptive Cruise Control); steering is in the hands of the driver, while ACC controls speed
    • Parking Assistance; this is the reverse of ACC, with automated steering but speed controlled by the driver
    • LKA (Lane Keeping Assistance) Type II; the system steers the car back into its lane when it begins to drift across a lane marking

In Level 1, the driver must be ready to take back control of the vehicle at any time.

  • Level 2. This is known as the ‘hands-off’ level, though in fact many Level 2 systems will only work if the driver’s hands are on the wheel. This is a way to ensure that the driver is ready to intervene at any time with a system that, pending intervention, is in full control of steering, speed and braking. The driver monitors what is happening and must be ready to intervene at any point.
  • Level 3. This is the ‘eyes off’ level. The car controls steering, speed and braking and can respond instantly to a situation demanding immediate action. The driver is therefore free to do other things – but must still be ready to resume control when asked by the vehicle to do so. A typical Level 3 implementation is a Traffic Jam Pilot. It operates only when the driver activates it, only at speeds up to 60 km/h, and only on highways where traffic moving in opposite directions is separated by a physical barrier (a minimal code sketch of these activation conditions appears after this list). In essence, in slow-moving traffic the driver asks the system to take control of the vehicle.
  • Level 4. At this ‘mind off’ level, the driver can go to sleep. Level 4 operation is limited to traffic jams or areas bounded by geofencing, and if the driver does not take back control, the car must be able to park itself.
  • Level 5. At this level, having a steering wheel is entirely optional, because driver intervention is not only unnecessary but may be impossible. If you ever find yourself sitting in the back of a taxi with no-one in the front, and the taxi takes you where you want to go without you doing anything other than stating your destination, you will know you are in a Level 5 vehicle.
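
As promised above, here is a highly simplified sketch of a Level 3 Traffic Jam Pilot’s activation guard, encoding the conditions just described; the function name and parameters are illustrative only:

```python
# A toy guard for a Level 3 Traffic Jam Pilot; the 60 km/h limit follows
# the description above, and everything else is an invented simplification.
def traffic_jam_pilot_may_engage(speed_kmh: float,
                                 on_divided_highway: bool,
                                 driver_requested: bool) -> bool:
    """True only when every activation condition is met."""
    return driver_requested and on_divided_highway and speed_kmh <= 60.0

print(traffic_jam_pilot_may_engage(35.0, True, True))   # True: slow, divided
print(traffic_jam_pilot_may_engage(90.0, True, True))   # False: too fast
print(traffic_jam_pilot_may_engage(40.0, False, True))  # False: no barrier
```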

Those are the levels of autonomous driving – so what’s the current state of play? I’ll look at the players in purely alphabetical order.

  • Aptiv has more than thirty self-driving cars on the road almost continuously.
  • Aurora has self-driving cars on the road in three American cities.
  • The BMW Group is working with a number of partners to produce both semiautomated and automated cars and may also be planning a self-driving JV with Daimler.
  • General Motors has self-driving cars on the road in four American cities.
  • Drive.ai runs self-driving vans, with safety drivers on board, as a shuttle service in a geofenced area of Dallas–Fort Worth.
  • Ford has self-driving cars on the road in four American cities and has entered into a collaborative venture with VW to develop both autonomous and electric vehicles.
  • Tesla has Autopilot, which provides an element of autonomous driving, and says its cars come with all the hardware needed to be fully autonomous.
  • Uber has tested autonomous driving in Pittsburgh. It suspended testing in 2018 after one of its vehicles struck and killed a pedestrian, but has since resumed.
  • Yandex has self-driving cars on the road in cities in Russia.
  • Volvo is testing self-driving cars and trucks in Sweden.
  • Waymo, part of Google’s parent company Alphabet, operates a commercial self-driving taxi service.

Autonomous cars, then, are here, if not yet commonplace. Other areas of transportation are further advanced still. Autonomous operation is already used to a growing degree in the autopilot features of airplanes. The New York Times[xi] reports that a typical commercial flight requires only about seven minutes of human pilot control, mainly during takeoff and landing.

AI uses for Industry 4.0

Industry 4.0, or I4, is in effect the Fourth Industrial Revolution as it applies to manufacturing. It combines the Internet of Things (IoT), cloud computing and AI to produce what are referred to as ‘smart factories.’ Whereas factories of the recent past had men and women monitoring gauges to keep track of performance, in I4 the monitoring is done by cyber-physical systems. These systems monitor the physical processes far more thoroughly and in more detail than a human monitor ever could. They then mirror that physical reality in a virtual form, accessible from anywhere in the world.

As an example of how I4 will work in practice, consider an organization owning steelworks in more than one place – and, particularly, in more than one country and especially on more than one continent. It has been possible for many years to use sensors and analog computers to monitor what was going on inside a steel mill. But the information was available in real time only to someone sitting beside the monitoring computer. Information could be gathered at the corporate center, but only after a delay, and the people considering it and making decisions based on it were human.

What is now possible, thanks to the cloud, is to do away with the monitor sitting beside the local computer and, indeed, to do away with the local computer itself. Sensors can now talk directly to the center.

But AI takes this capability a stage further. Because it can process data in much greater quantities and at much greater speed than a human analyst, AI can first build a very accurate picture of what is happening at every steelworks the company owns, and can then use that picture as the basis for algorithmic decisions made with no human involvement.
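
A stripped-down sketch of that pipeline: each mill pushes readings into a central ‘digital twin,’ and an algorithmic rule acts on them with no human in the loop. The plant names, metrics and temperature limit are invented for illustration:

```python
# A toy digital twin: ingest sensor readings centrally, decide automatically.
from datetime import datetime, timezone

central_twin = {}  # one entry per steelworks, wherever in the world it is

def ingest(plant: str, furnace_temp_c: float, throughput_tph: float) -> None:
    central_twin[plant] = {
        "furnace_temp_c": furnace_temp_c,
        "throughput_tph": throughput_tph,
        "updated": datetime.now(timezone.utc),
    }

def automatic_decisions() -> list:
    actions = []
    for plant, state in central_twin.items():
        if state["furnace_temp_c"] > 1600:  # algorithmic rule, no human
            actions.append(f"{plant}: reduce furnace power")
    return actions

ingest("Gary, US", 1550, 310)
ingest("Ostrava, CZ", 1640, 280)
print(automatic_decisions())  # ['Ostrava, CZ: reduce furnace power']
```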

The IoT presents immense opportunities for enhanced efficiencies – and it is AI that makes it possible.

AI uses for medical diagnosis

AI is also far more prevalent in the medical field than most people realize. On the more basic end of the complexity scale, AI is at the core of the Human Diagnosis Project (Human Dx),[xii] which helps doctors give patients with limited means more advanced care than they could otherwise afford.

Doctors who serve patients with limited means can submit patient symptoms and questions to a medical crowdsourcing app. Many specialists whose services the patients would otherwise be unable to afford respond, and Human Dx’s AI system then analyzes and refines the responses to bring the submitting doctor a relevant consensus of advice.
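
In spirit, the consensus step can be as simple as a weighted vote over specialists’ answers, as in this invented example; Human Dx’s actual system is far more elaborate:

```python
# A simple weighted-consensus sketch; weights and diagnoses are invented.
from collections import defaultdict

responses = [  # (specialist_track_record_weight, proposed_diagnosis)
    (0.9, "Lyme disease"),
    (0.7, "rheumatoid arthritis"),
    (0.8, "Lyme disease"),
    (0.6, "Lyme disease"),
]

votes = defaultdict(float)
for weight, diagnosis in responses:
    votes[diagnosis] += weight

consensus = max(votes, key=votes.get)
print(consensus, round(votes[consensus] / sum(votes.values()), 2))
# -> Lyme disease 0.77
```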

Consolidating specialists’ diagnoses is not, though, the upper limit of what AI can do in medicine. AI systems are increasingly being used by doctors as diagnostic aids. Able to process and quickly analyze far larger amounts of data than a human could, AI is proving to be a valuable tool in helping doctors make more effective diagnoses.

The data includes not only patients’ medical records but also anonymized results from similar cases, the latest clinical research and even studies of treatment outcomes linked to patients’ genetic traits. This can help doctors detect life-threatening medical conditions at earlier stages than they could on their own and deliver more personalized treatments.

The much-acclaimed IBM Watson supercomputer is involved in a growing number of medical use cases.[xiii] These include genetically sequencing brain tumors, matching cancer patients to the clinical trials that offer the most promising treatments for their cancers and more precisely assessing patients’ susceptibility to stroke or heart attack, to mention just a few. It has even diagnosed some cases that had stumped human physicians,[xiv] although much more testing is needed before such capabilities see widespread use.

Takeaways

Clearly, AI and ML have already made far greater inroads into our lives today than most people realize. They are increasingly expanding human capabilities and taking over an increasing number of tasks.

Yet in many ways, the use of AI is still in its infancy. There is so much more to come. How many tasks that we do on our own today will it take over? And which ones? How will this impact the workers who presently do them?

Contrary to popular belief, AI will not impact only blue-collar workers. Many of the tasks AI performs right now involve white-collar – or even professional – work.

Marketers, data analysts, customer service representatives – even doctors – are seeing AI perform tasks they currently do. It will not be only low-skilled workers who are affected. In many ways, AI stands to enhance the abilities of both middle-skill and high-skill workers; in other ways, it threatens to replace some of the workers whose jobs it currently enhances. Before we can properly prepare for the coming AI disruption, we need a clearer idea of what kinds of shifts AI is likely to bring.

In the next three chapters, we will look at each of the three main – and dramatically different – views of the future that AI will bring. Those views, while often extreme, each point to important issues that we need to consider if we are going to move into that future with minimal negative impact on our lives.


[i] Barry Levine, IBM’s Watson now powers Lucy, a cognitive computing system built specifically for marketers, MarTech Today, 2016, Available: https://martechtoday.com/ibms-watson-begets-equals-3s-lucy-supercomputing-system-built-specifically-marketers-180950

[ii] Chris Cancialosi, How Staples Is Making Its Easy Button Even Easier With A.I., Forbes, 2016, Available: https://www.forbes.com/sites/chriscancialosi/2016/12/13/how-staples-is-making-its-easy-button-even-easier-with-a-i/#433606c859ef

[iii] Sharon Gaudin, The North Face sees A.I. as a perfect fit, ComputerWorld, 2016, Available: https://www.computerworld.com/article/3026449/retail-it/the-north-face-sees-ai-as-a-perfect-fit-video.html

[iv] Ashley Minogue, Beyond the Marketing Hype: How Cogito Delivers Real Value Through AI, OpenView, 2017, Available: https://labs.openviewpartners.com/beyond-the-marketing-hype-how-cogito-delivers-real-value-through-ai/#.WqADxudG2Uk

[v] John Egan, What’s the Future of Robots in Telemarketing, DMA Nonprofit Federation, 2017, Available: https://chi.nonprofitfederation.org/blog/whats-future-robots-telemarketing/

[vi] Matthew Jenkin, Written out of the story: the robots capable of making the news, The Guardian, 2017, Available: https://www.theguardian.com/small-business-network/2016/jul/22/written-out-of-story-robots-capable-making-the-news

[vii] W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stolcke, D. Yu, G. Zweig, Achieving Human Parity in Conversational Speech Recognition, Cornell University Library, 2016, revised 2017, Available: https://arxiv.org/abs/1610.05256

[viii] Ingrid Lunden, Rise of the bots: X.ai raises $23m more for Amy, a bot that arranges appointments, TechCrunch, 2016, Available: https://techcrunch.com/2016/04/07/rise-of-the-bots-x-ai-raises-23m-more-for-amy-a-bot-that-arranges-appointments/

[ix] Andrew Lo, Consumer Credit-Risk Models Via Machine-Learning Algorithms, MIT, 2009, Available: http://bigdata.csail.mit.edu/node/22

[x] Amit Sharma, Third-Party Recommendations System Industry: Current Trends and Future Directions, SSRN, 2013, Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2263983

[xi] John Markoff, Planes Without Pilots, New York Times, 2015, Available: https://www.nytimes.com/2015/04/07/science/planes-without-pilots.html?_r=0

[xii] Jeremy Hsu, Can a Crowdsourced AI Medical Diagnosis App Outperform Your Doctor?, Scientific American, 2017, Available: https://www.scientificamerican.com/article/can-a-crowdsourced-ai-medical-diagnosis-app-outperform-your-doctor/

[xiii] Jeremy Hsu, ibid.

[xiv] James Billington, IBM’s Watson cracks medical mystery with life-saving diagnosis for patient who baffled doctors, International Business Times, 2016, Available: http://www.ibtimes.co.uk/ibms-watson-cracks-medical-mystery-life-saving-diagnosis-patient-who-baffled-doctors-1574963
