How well do you really know your customers? If your company is like most brands, you already realize that you don’t really know them at all. In fact, a 2015 study from Aberdeen Group found that just 4 percent of organizations are fully satisfied with their ability to ensure data-driven…
Management teams often assume they can leapfrog best practices for basic data analytics by going directly to adopting artificial intelligence and other advanced technologies. But companies that rush into sophisticated artificial intelligence before reaching a critical mass of automated processes and structured analytics can end up paralyzed. They can become saddled with expensive start-up partnerships, impenetrable black-box systems, cumbersome cloud computational clusters, and open-source toolkits without programmers to write code for them.
By contrast, companies with strong basic analytics — such as sales data and market trends — make breakthroughs in complex and critical areas after layering in artificial intelligence. For example, one telecommunications company we worked with can now use machine learning to predict, with 75 times more accuracy, whether its customers are about to bolt. But the company could only achieve this because it had already used more standard analytical techniques to automate the processes for contacting customers quickly and to understand their preferences.
So how can companies tell if they are really ready for AI and other advanced technologies?
Automating basic processes
First, managers should ask themselves if they have automated processes in problem areas that cost significant money and slow down operations. Companies need to automate repetitive processes involving substantial amounts of data — especially in areas where intelligence from analytics or speed would be an advantage. Without automating such data feeds first, companies will discover their new AI systems are reaching the wrong conclusions because they are analyzing out-of-date data. For example, online retailers can adjust product prices daily because they have automated the collection of competitors’ prices. But those that still manually check what rivals are charging can require as much as a week to gather the same information. As one retailer discovered, introducing AI doesn’t fix this: price adjustments still run perpetually behind the competition because the underlying data is obsolete.
Without basic automation, strategic visions of solving complex problems at the touch of a button remain elusive. Take fund managers. While the profession is a great candidate for artificial intelligence, many managers spend several weeks manually pulling together data and checking for human errors introduced through reams of Excel spreadsheets. This makes them far from ready for artificial intelligence to predict the next risk to client investment portfolios or to model alternative scenarios in real time.
Meanwhile, companies that automate basic data manipulation processes can be proactive. With automated pricing engines, insurers and banks can roll out new offers as fast as online competitors. One traditional insurer, for instance, shifted from updating its quotes every several days to every 15 minutes by simply automating the processes that collect benchmark pricing data. A utility company made its service more competitive by offering customized, real-time pricing and special deals based on automated smart meter readings instead of semi-annual in-person visits to homes.
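The benchmark-pricing idea above can be made concrete with a minimal sketch. The function name, margin floor, and numbers below are illustrative assumptions, not any particular insurer's or retailer's actual logic; the point is only that once competitor prices arrive automatically, repricing becomes a fast, repeatable rule.

```python
# Minimal sketch of an automated repricing rule: ingest competitor
# benchmark prices on a schedule and adjust our quote, subject to a
# margin floor. All names and numbers here are invented for illustration.

def reprice(our_cost, competitor_prices, min_margin=0.10, undercut=0.01):
    """Price just below the cheapest competitor, but never below
    our cost plus the minimum acceptable margin."""
    floor = our_cost * (1 + min_margin)
    if competitor_prices:
        target = min(competitor_prices) * (1 - undercut)
    else:
        target = floor
    return round(max(target, floor), 2)

# Competitors charge 105, 110, and 99; our cost is 80.
print(reprice(80.0, [105.0, 110.0, 99.0]))  # 98.01, just under the cheapest rival
```

Run every 15 minutes against an automated feed, a rule like this is what lets a quote stay current; run against a manually compiled spreadsheet, the same rule produces week-old prices.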
Structured data analytics
Once the processes critical to achieving an efficiency or strategic goal are automated, managers need to develop structured analytics and centralize data processes, so that data is collected in a standardized way and needs to be entered only once.
With more centralized information architectures, all systems refer back to the primary “source of truth,” updates propagate to the entire system, and decisions reflect a single view of a customer or issue. A set of structured analytics provides retail category managers, for instance, with a complete picture of historic customer data: which products were popular with which customers, what sold where, which products customers switched between, and which they remained loyal to.
Armed with this information, managers can then allocate products better and see why choices are made. By understanding the drivers behind customer decisions, managers can also have much richer conversations about category management with their suppliers — such as explaining that very similar products will be removed to make space for more unique alternatives.
Trying out AI
After these standard structured analytics are integrated with artificial intelligence, it’s possible to comprehensively predict, explain, and prescribe customer behavior. In the earlier telecommunications company example, managers understood customer characteristics. But they needed artificial intelligence to analyze the wide set of data collected to predict if customers were at risk of leaving. After machine learning techniques identified the customers who presented a “churn risk,” managers then went back to their structured analytics to determine the best way to keep them — and use automated processes to get an appropriate retention offer out fast.
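The churn workflow just described, learning from historical customer data and then ranking current customers by churn risk, can be sketched in miniature. The hand-rolled logistic regression and invented usage features below are a toy illustration of the technique, not the telecom company's actual model; a real system would use a library such as scikit-learn over far richer data.

```python
# Toy churn-prediction sketch: train a tiny logistic-regression model
# on historical usage features, then rank customers by predicted risk.
# Features and data are invented for illustration.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression."""
    w = [0.0] * (len(X[0]) + 1)          # w[0] is the bias term
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def churn_prob(w, x):
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], x)))

# Features: [monthly_usage_drop, support_calls], both scaled to 0..1.
X = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.6], [0.1, 0.2], [0.2, 0.1], [0.0, 0.3]]
y = [1, 1, 1, 0, 0, 0]                   # 1 = customer churned
w = train(X, y)

# Rank customers from highest to lowest churn risk.
at_risk = sorted(range(len(X)), key=lambda i: -churn_prob(w, X[i]))
```

The output of a model like this is exactly what feeds the next step in the article's sequence: structured analytics to choose a retention tactic, and automated processes to deliver the offer quickly.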
Artificial intelligence systems make a huge difference when unstructured data such as social media, call center notes, images, or open-ended surveys are also required to reach a judgment. Amazon, for instance, can recommend products to people before they even know they want them because, using machine learning techniques, it can now layer unstructured data on top of its strong, centralized collection of structured data like customers’ payment details, addresses, and product histories.
AI also helps with decisions not based on historic performance. Retailers with strong structured analytics in place can figure out how best to distribute products based on how they are selling. But it takes machine learning techniques to predict how products not yet available for sale will do — partly because no structured data is available.
Finally, artificial intelligence systems can make more accurate forecasts based on disparate data sets. Fund managers with a strong base of automated and structured data analytics are predicting with greater accuracy how stocks will perform by applying AI to data sets involving everything from weather data to counting cars in different locations to analyzing supply chains. Some data pioneers are even starting to figure out if companies will gain or lose ground using artificial intelligence systems’ analyses of consumer sentiment data from unrelated social media feeds.
Companies are just beginning to discover the many different ways that AI technologies can potentially reinvent businesses. But one thing is already clear: they must invest time and money to be prepared with sufficiently automated and structured data analytics in order to take full advantage of the new technologies. Like it or not, you can’t afford to skip the basics.
Today’s leading organizations are using machine learning–based tools to automate decision processes, and they’re starting to experiment with more-advanced uses of artificial intelligence (AI) for digital transformation. Corporate investment in artificial intelligence is predicted to triple in 2017, becoming a $100 billion market by 2025. Last year alone saw $5 billion in machine learning venture investment. In a recent survey, 30% of respondents predicted that AI will be the biggest disruptor to their industry in the next five years. This will no doubt have profound effects on the workplace.
Machine learning is enabling companies to expand their top-line growth and optimize processes while improving employee engagement and increasing customer satisfaction. Here are some concrete examples of how AI and machine learning are creating value in companies today:
Personalizing customer service. The potential to improve customer service while lowering costs makes this one of the most exciting areas of opportunity. By combining historical customer service data, natural language processing, and algorithms that continuously learn from interactions, customers can ask questions and get high-quality answers. In fact, 44% of U.S. consumers already prefer chatbots to humans for customer relations. Customer service representatives can step in to handle exceptions, with the algorithms looking over their shoulders to learn what to do next time around.
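Behind such a service bot, the first step is retrieval: matching an incoming question to answers that worked before. The sketch below uses simple word overlap against a made-up FAQ; production systems use trained language models, but the escalate-to-a-human fallback shown here mirrors the handoff described above.

```python
# Bare-bones retrieval sketch for a customer-service bot: match an
# incoming question to historical Q&A pairs by word overlap. The FAQ
# content and threshold are invented for illustration.

FAQ = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "what are your opening hours": "We are open 9am-5pm, Monday to Friday.",
    "how can i cancel my order": "Orders can be cancelled within 24 hours from your account page.",
}

def answer(question, threshold=0.2):
    """Return the best-matching canned answer, or None to escalate
    the exception to a human representative."""
    q = set(question.lower().split())
    best, best_score = None, 0.0
    for known, reply in FAQ.items():
        k = set(known.split())
        score = len(q & k) / len(q | k)   # Jaccard similarity
        if score > best_score:
            best, best_score = reply, score
    return best if best_score >= threshold else None

print(answer("what time are you open"))   # matches the opening-hours entry
```

Questions that fall below the confidence threshold return None, which is the moment a human representative steps in and, in the continuously learning setup described above, generates a new training example.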
Improving customer loyalty and retention. Companies can mine customer actions, transactions, and social sentiment data to identify customers who are at high risk of leaving. Combined with profitability data, this allows organizations to optimize “next best action” strategies and personalize the end-to-end customer experience. For example, young adults coming off of their parents’ mobile phone plans often move to other carriers. Telcos can use machine learning to anticipate this behavior and make customized offers, based on the individual’s usage patterns, before they defect to competitors.
Hiring the right people. Corporate job openings pull in about 250 résumés apiece, and over half of surveyed recruiters say shortlisting qualified candidates is the most difficult part of their job. Software quickly sifts through thousands of job applications and shortlists candidates who have the credentials that are most likely to achieve success at the company. Care must be taken not to reinforce any human biases implicit in prior hiring. But software can also combat human bias by automatically flagging biased language in job descriptions, detecting highly qualified candidates who might have been overlooked because they didn’t fit traditional expectations.
Automating finance. AI can expedite “exception handling” in many financial processes. For example, when a payment is received without an order number, a person must sort out which order the payment corresponds to, and determine what to do with any excess or shortfall. By monitoring existing processes and learning to recognize different situations, AI significantly increases the number of invoices that can be matched automatically. This lets organizations reduce the amount of work outsourced to service centers and frees up finance staff to focus on strategic tasks.
Measuring brand exposure. Automated programs can recognize products, people, logos, and more. For example, advanced image recognition can be used to track the position of brand logos that appear in video footage of a sporting event, such as a basketball game. Corporate sponsors get to see the return on their sponsorship investment with detailed analyses, including the quantity, duration, and placement of corporate logos.
Detecting fraud. The typical organization loses 5% of revenues each year to fraud. By building models based on historical transactions, social network information, and other external sources of data, machine learning algorithms can use pattern recognition to spot anomalies, exceptions, and outliers. This helps detect and prevent fraudulent transactions in real time, even for previously unknown types of fraud. For example, banks can use historical transaction data to build algorithms that recognize fraudulent behavior. They can also discover suspicious patterns of payments and transfers between networks of individuals with overlapping corporate connections. This type of “algorithmic security” is applicable to a wide range of situations, such as cybersecurity and tax evasion.
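The core mechanism, spotting transactions that deviate sharply from an account's historical pattern, can be illustrated with a deliberately simple statistical sketch. Real fraud models combine far richer signals (networks of transfers, devices, merchants); the z-score rule and numbers below are assumptions made purely to show the idea.

```python
# Anomaly-detection sketch: flag transactions that sit far outside an
# account's historical distribution. Amounts alone are used here only
# to illustrate the mechanism; real systems use many more features.
from statistics import mean, stdev

def flag_anomalies(history, new_transactions, z_cutoff=3.0):
    """Flag amounts more than z_cutoff standard deviations from the
    account's historical mean."""
    mu, sigma = mean(history), stdev(history)
    return [t for t in new_transactions if abs(t - mu) > z_cutoff * sigma]

history = [42.0, 38.5, 51.0, 44.2, 39.9, 47.3, 41.1, 45.6]
print(flag_anomalies(history, [43.0, 880.0, 46.5]))  # only 880.0 is flagged
```

Because the rule is fit to each account's own history rather than to a list of known fraud signatures, it can flag previously unseen kinds of anomaly, the property the passage above highlights.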
Predictive maintenance. Machine learning makes it possible to detect anomalies in the temperature of a train axle that indicate it will freeze up in the next few hours. Instead of hundreds of passengers being stranded in the countryside waiting for an expensive repair, the train can be diverted to maintenance before it fails, and passengers transferred to a different train.
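A minimal version of that axle-temperature check tracks a smoothed baseline of recent readings and alerts when the latest one drifts too far above it. The thresholds and readings below are invented for illustration; a production system would learn them from fleet-wide sensor history.

```python
# Predictive-maintenance sketch: compare the newest sensor reading to
# an exponentially weighted moving average of earlier readings and
# alert on a large upward drift. All numbers are illustrative.

def needs_maintenance(readings, alpha=0.3, max_delta=15.0):
    """Return True if the newest reading exceeds the running EWMA
    baseline by more than max_delta degrees."""
    ewma = readings[0]
    for r in readings[1:-1]:
        ewma = alpha * r + (1 - alpha) * ewma   # smooth the baseline
    return readings[-1] - ewma > max_delta

normal = [60.0, 61.5, 60.8, 62.0, 61.2, 63.0]
failing = [60.0, 61.5, 60.8, 62.0, 61.2, 82.5]
print(needs_maintenance(normal), needs_maintenance(failing))  # only the second series alerts
```

The value of the smoothing is that a single noisy reading does not move the baseline much, so the alert fires on genuine drift rather than sensor jitter.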
Smoother supply chains. Machine learning enables contextual analysis of logistics data to predict and mitigate supply chain risks. Algorithms can sift through public social data and news feeds in multiple languages to detect, for example, a fire in a remote factory that supplies vital ball bearings that are used in a car transmission.
Other areas where machine intelligence could soon be commonly used include:
Career planning. Recommendations could help employees choose career paths that lead to high performance, satisfaction, and retention. If a person with an engineering degree wishes to run the division someday, what additional education and work experience should they obtain, and in what order?
Drone- and satellite-based asset management. Drones equipped with cameras can perform regular external inspections of commercial assets, like bridges or airplanes, with the images automatically analyzed to detect any new cracks or changes to surfaces.
Retail shelf analysis. A sports drink company could use machine intelligence, coupled with machine vision, to see whether its in-store displays are at the promised location, the shelves are properly stocked with products, and the product labels are facing outward.
Machine learning enables a company to reimagine end-to-end business processes with digital intelligence. The potential is enormous. That’s why software vendors are investing heavily in adding AI to their existing applications and in creating net-new solutions.
But there are barriers to overcome. The most important is the availability of large quantities of high-quality data that can be used to train algorithms. In many organizations, the data isn’t in one place or in a useable format, or it contains biases that will lead to bad decisions. To prepare your enterprise for the future, the first step is to assess your existing information systems and data flows to distinguish the areas that are ready for automation from those where more investment is needed. Consider appointing a chief data officer to ensure that data is being properly managed as a corporate asset.
Another problem is prioritization; with so many opportunities, it can be hard to know where to start. To ease this burden, software providers are starting to offer predefined solutions enabled with state-of-the-art machine learning out of the box. Many organizations are also implementing AI centers of excellence to work closely with business departments. Wherever you start, it’s important to link the projects to a long-term digital platform strategy to avoid having disconnected islands of innovation.
Lastly, don’t underestimate the cultural barriers. Many employees worry about the consequences of all of this technology on their roles. For most, it will be an opportunity to reduce tedious tasks and do more, but it’s vital that employees have incentives to ensure the success of new machine learning initiatives. You’ll also have to think carefully about customers. AI can augment the power to get insights from customer data — perhaps beyond the point where customers are comfortable. Organizations must take privacy seriously, and relying on computers for important decisions requires careful governance. They should implement procedures to audit the real effects of any automated systems, and there should always be recourses and overrides as part of the processes. AI systems that use data about people should involve informed consent.
AI’s continued rise is inevitable, and it’s advancing into the workplace at a dizzying speed. The question now is not about whether managers should investigate adopting AI but about how fast they can do so. At the same time, organizations need to be thoughtful about how they apply AI to their organizations, with a full understanding of the advantages and disadvantages inherent in the technology.
It was winter in New York City and Asaf Jacobi’s Harley-Davidson dealership was selling one or two motorcycles a week. It wasn’t enough.
Jacobi went for a long walk in Riverside Park and happened to bump into Or Shani, CEO of an AI firm, Adgorithms. After discussing Jacobi’s sales woes, Shani suggested he try out Albert, Adgorithms’ AI-driven marketing platform. It works across digital channels, like Facebook and Google, to measure, and then autonomously optimize, the outcomes of marketing campaigns. Jacobi decided he’d give Albert a one-weekend audition.
That weekend Jacobi sold 15 motorcycles. It was almost twice his all-time summer weekend sales record of eight.
Naturally, Jacobi kept using Albert. His dealership went from getting one qualified lead per day to 40. In the first month, 15% of those new leads were “lookalikes,” meaning that the people calling the dealership to set up a visit resembled previous high-value customers and therefore were more likely to make a purchase. By the third month, the dealership’s leads had increased 2930%, 50% of them lookalikes, leaving Jacobi scrambling to set up a new call center with six new employees to handle all the new business.
While Jacobi had estimated that only 2% of New York City’s population were potential buyers, Albert revealed that his target market was larger – much larger – and began finding customers Jacobi didn’t even know existed.
How did it do that?
AI at Work
Today, Amazon, Facebook, and Google are leading the AI revolution, and that’s given them a huge market advantage over most consumer goods companies and retailers by enabling them to lure customers with highly personalized, targeted advertising and marketing. However, companies such as Salesforce, IBM, and a host of startups are now beginning to offer AI marketing tools that have become both easier to use (that is, they don’t require hiring expensive data scientists to figure out how to operate the tool and analyze its outputs) and less expensive to acquire, with software-as-a-service (SaaS), pay-as-you-go pricing. And instead of optimizing specific marketing tasks, or working within individual marketing channels, these new tools can handle the entire process across all channels.
In the case of Harley-Davidson, the AI tool, Albert, drove in-store traffic by generating leads, defined as customers who express interest in speaking to a salesperson by filling out a form on the dealership’s website.
Armed with creative content (headlines and visuals) provided by Harley-Davidson, and key performance targets, Albert began by analyzing existing customer data from Jacobi’s customer relationship management (CRM) system to isolate defining characteristics and behaviors of high-value past customers: those who had completed a purchase, added an item to an online cart, viewed website content, or were among the top 25% in terms of time spent on the website.
Using this information, Albert identified lookalikes who resembled these past customers and created micro segments – small sample groups with whom Albert could run test campaigns before extending its efforts more widely. It used the data gathered through these tests to predict which possible headlines and visual combinations – and thousands of other campaign variables – would most likely convert different audience segments through various digital channels (social media, search, display, and email or SMS).
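The lookalike step just described can be sketched very simply: average the feature vectors of known high-value customers into a centroid, then score prospects by similarity to it. The feature choices, names, and threshold below are illustrative assumptions, not Albert's actual method.

```python
# Lookalike-modeling sketch: score prospects by cosine similarity to
# the centroid of known high-value customers. Features and numbers
# are invented for illustration.
import math

def centroid(vectors):
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Features: [pages_viewed, minutes_on_site, added_to_cart], scaled 0..1.
high_value = [[0.9, 0.8, 1.0], [0.7, 0.9, 1.0], [0.8, 0.7, 1.0]]
prospects = {"ann": [0.85, 0.8, 0.9], "bob": [0.1, 0.05, 0.0]}

c = centroid(high_value)
lookalikes = [name for name, v in prospects.items() if cosine(v, c) > 0.95]
print(lookalikes)  # ['ann']
```

Prospects above the similarity threshold form the micro segments on which test campaigns run; the behavioral data those tests generate then feeds the next round of predictions.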
Once it determined what was working and what wasn’t, Albert scaled the campaigns, autonomously allocating resources from channel to channel, making content recommendations, and so on.
For example, when it discovered that ads containing the word “call” – such as “Don’t miss out on a pre-owned Harley with a great price! Call now!” – performed 447% better than ads containing the word “buy” – such as “Buy a pre-owned Harley from our store now!” – Albert immediately changed “buy” to “call” in all ads across all relevant channels. The results spoke for themselves.
The AI Advantage
For Harley-Davidson, AI evaluated what was working across digital channels and what wasn’t, and used what it learned to create more opportunities for conversion. In other words, the system allocated resources only to what had been proven to work, thereby increasing digital marketing ROI. Eliminating guesswork, gathering and analyzing enormous volumes of data, and optimally leveraging the resulting insights is the AI advantage.
Marketers have traditionally used buyer personas – broad behavior-based customer profiles – as guides to find new customers. These personas are created partly out of historic data, and partly by guesswork, gut feel, and the marketers’ experiences. Companies that design their marketing campaigns around personas tend to use similarly blunt tools (such as gross sales) – and more guesswork – to assess what’s worked and what hasn’t.
AI systems don’t need to create personas; they find real customers in the wild by determining what actual online behaviors have the highest probability of resulting in conversions, and then finding potential buyers online who exhibit these behaviors. To determine what worked, AI looks only at performance: Did this specific action increase conversions? Did this keyword generate sales? Did this spend increase ROI?
Even when equipped with digital tools and other marketing technologies, humans can only manage a few hundred keywords at a time, and struggle to apply insights across channels with any precision. By contrast, an AI tool can process millions of interactions a minute, manage hundreds of thousands of keywords, and run tests in silico on thousands of messages and creative variations to predict optimal outcomes.
And AI doesn’t need to sleep, so it can do all this around the clock.
Consequently, AI can determine exactly how much a business should spend, and where, to produce the best results. Rather than base media buying decisions on past performance and gut instincts, AI acts instantly and autonomously, modifying its buying strategy in real time based on the ever-changing performance parameters of each campaign variable.
Taking the AI Plunge
Because AI is new, and because marketers will be wary of relinquishing control and trusting a black box to make the best decisions about what people will or won’t do, it’s wise to adopt AI tools and systems incrementally, as did Harley-Davidson’s Jacobi. The best way to discover AI’s potential is to run some small, quick, reversible experiments, perhaps within a single geographic territory, brand, or channel.
Within these experiments, it’s important to define key desired performance results; for example, new customers, leads, or an increased return on advertising spending.
When it comes to choosing a tool, know what you want. Some tools focus on a single channel or task, such as optimizing the website content shown to each customer. Others, like IBM’s Watson, are more general-purpose AI tools that need to be customized for specific uses and companies. And still other AI tools produce insights but don’t act on them autonomously.
It’s worth taking the plunge, and, in fact, there’s an early adopter advantage. As Harley’s Jacobi told me, “The system is getting better all the time. The algorithms will continue to be refined. Last year, we tripled our business over the previous year.”
That’s good news for Jacobi and his employees, and not such good news for his competitors.
While debate drags on about legislation, regulations, and other measures to improve the U.S. health care system, a new wave of analytics and technology could help dramatically cut costly and unnecessary hospitalizations while improving outcomes for patients. For example, by preventing hospitalizations in cases of just two widespread chronic illnesses — heart disease and diabetes — the United States could save billions of dollars a year.
Toward this end, my colleagues and I at Boston University’s Center for Information and Systems Engineering have been striving to bring the power of machine-learning algorithms to this critical problem. In an ongoing effort with Boston-area hospitals, including the Boston Medical Center and the Brigham and Women’s Hospital, we found that we could predict hospitalizations due to these two chronic diseases about a year in advance with an accuracy rate of as much as 82%. This will give care providers the chance to intervene much earlier and head off hospitalizations. Our team is also working with the Department of Surgery at the Boston Medical Center and can predict readmissions within 30 days of general surgery; the hope is to guide postoperative care in order to prevent them.
The hospitals provide patients’ anonymized electronic health records (EHRs) that contain all of the information the hospital has about each patient, including demographics, diagnoses, admissions, procedures, vital signs taken at doctor visits, medications prescribed, and lab results. We then unleash our algorithms to predict who might have to be hospitalized. This gives the hospital opportunities to intervene, treat the disease more aggressively in an outpatient setting, and avoid a costly hospitalization while improving the patient’s condition.
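The first step in such a pipeline is flattening heterogeneous EHR fields into a fixed-length feature vector that a classifier can consume. The field names, code list, and scaling below are invented for illustration; the actual models referenced in this work draw on far more of the record.

```python
# EHR-featurization sketch: one-hot a diagnosis list and append scaled
# numeric vitals to form a fixed-length feature vector. Field names,
# codes, and scaling constants are invented for illustration.

DIAGNOSIS_CODES = ["diabetes", "heart_disease", "hypertension"]

def featurize(record):
    """Turn one anonymized patient record into a numeric feature vector."""
    diag = [1.0 if code in record["diagnoses"] else 0.0
            for code in DIAGNOSIS_CODES]
    vitals = [record["age"] / 100.0,            # crude 0..1 scaling
              record["systolic_bp"] / 200.0,
              record["prior_admissions"] / 10.0]
    return diag + vitals

patient = {
    "diagnoses": ["diabetes", "hypertension"],
    "age": 67,
    "systolic_bp": 150,
    "prior_admissions": 2,
}
print(featurize(patient))  # [1.0, 0.0, 1.0, 0.67, 0.75, 0.2]
```

Vectors like this, built for every patient, are what the prediction algorithms consume; using the whole record rather than a handful of fields is precisely the advantage over fixed risk scores discussed below.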
The accuracy rates of these predictions surpass what is possible with well-accepted risk scoring systems such as the one that emerged from the famous Framingham Heart Study, the ongoing long-term cardiovascular cohort study that is now in its third generation of participants. Using that system, a doctor assesses the patient’s age, cholesterol, weight, blood pressure, and several other factors to arrive at the individual’s chances of developing cardiovascular disease over the next 10 years. Using the Framingham Study 10-year cardiovascular risk score, one can predict hospitalizations with an accuracy of about 56%, which is substantially lower than the 82% rate we achieved.
In fact, we found that feeding the factors used in the Framingham 10-year risk score into more sophisticated machine-learning methods still leads to results inferior to ours (an accuracy rate of about 69%). This suggests that using the entirety of a patient’s EHR (which can contain as many as 200 factors) instead of just a few key factors leads to superior prediction results. What’s more, an algorithmic approach can easily be scaled so it can be applied to a very large number of patients — something that is impossible with human monitors only.
The potential benefits from applying machine-learning analytics in health care are enormous. Based on a study of a year’s worth of hospital admissions, the U.S. Agency for Healthcare Research and Quality (AHRQ) estimated that 4.4 million of those admissions in the United States, totaling $30.8 billion in costs, could have been prevented. Of that $30.8 billion, $9 billion was for patients with heart diseases and $5.8 billion for patients with complications from diabetes. Together, these two conditions account for nearly half of those preventable costs.
Just 5% of Medicaid’s 70 million beneficiaries account for 54% of Medicaid annual expenditures of more than $500 billion, and 1% account for 25% of the total. Of this 1%, 83% have at least three chronic conditions. Approaches like ours could reduce their use of hospital services and save Medicaid a large amount of money.
Ongoing U.S. reforms in health care that link payments with outcomes are forcing hospitals to assume more financial risks. In response, hospitals are increasingly making analytics and new technologies an integral part of hospital operations. Business analytics widely used in the transportation industry by airlines and shipping companies are beginning to be employed to schedule operating rooms and staffing. Other algorithms are being developed to assist physicians in making diagnoses. My team has developed methods to automatically titrate medications in intensive care units in response to the patient’s condition.
These advances are only the tip of the iceberg. We are on the cusp of major changes in health monitoring and care. Google and other companies with lots of experience in collecting and learning from data appear ready to step into this domain. A myriad of technologies, from implantable medical devices (such as defibrillators and pacemakers) to fitness trackers, smart watches, and smartphones, already capture our health data and lifestyle choices. Our credit-card and electronic-payment systems know our purchase history and the type of food we consume. The result is the emergence of a rich personal health record we carry in our pockets.
If we can now predict future hospitalizations with more than 80% accuracy using medical records alone, imagine what is possible if we can tap into this trove of personal data. Recommender systems could be used to nudge us to adopt healthier eating habits and behaviors. The holy grail of heading off the emergence of conditions by keeping people well could be realized.
Yes, analytics and data-driven personalized medicine and health monitoring present risks. Do we want our employers and health insurers to know the status of our health and the risks we face? Privacy, security, and reliability of new systems and methods are also critical concerns. But rather than retreating from this new era, we should be working on how to strengthen our methods, institutions, laws, and regulatory framework to avoid those unintended consequences. Algorithms — the foundation of encryption methods, privacy-preserving data processing, and intrusion- and fraud-detection systems — could help.