For critically ill patients on breathing machines, a simple step improves their survival chances by roughly 10 percentage points, from 60% to 70%. It involves programming the machine to deliver enough air to sustain life, but not so much that it overinflates and damages the lungs. Given that this intervention could prevent more suffering than many wonder drugs, one would expect there to be no market for a breathing machine that didn't make lung-protective ventilation as easy as possible. But in health care, few things work as expected. Fewer than half of patients, and in some hospitals fewer than 20%, receive this lifesaving intervention.
One big reason why is that hospitals purchase technologies without requiring that they communicate with each other. The optimal air flow is based on a straightforward calculation using the height of the patient. Height data, however, resides in the electronic medical record, which typically does not communicate with the ventilator. As a result, physicians must retrieve this information from the medical record, perform the calculation (sometimes on paper), and enter the order. A respiratory therapist then takes the order and types it into the ventilator, often relying on memory.
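To make the calculation concrete, here is a minimal sketch in Python, assuming the widely taught approach of deriving predicted body weight from height and sex and targeting roughly 6 mL of air per kilogram of that weight; the function names, the 6 mL/kg default, and the idea of reading height from an EHR field are illustrative assumptions, not a description of any particular ventilator or medical-record system.

```python
# Illustrative sketch: lung-protective tidal volume from patient height.
# Uses the commonly cited predicted-body-weight formulas and a 6 mL/kg
# target; not a depiction of any specific ventilator or EHR interface.

def predicted_body_weight_kg(height_cm: float, sex: str) -> float:
    """Predicted body weight (kg) from height, per the usual clinical formula."""
    base = 50.0 if sex == "male" else 45.5
    return base + 0.91 * (height_cm - 152.4)

def target_tidal_volume_ml(height_cm: float, sex: str, ml_per_kg: float = 6.0) -> float:
    """Lung-protective tidal volume target in mL (typically 6-8 mL/kg of PBW)."""
    return ml_per_kg * predicted_body_weight_kg(height_cm, sex)

if __name__ == "__main__":
    # If the ventilator could read height from the medical record, this
    # arithmetic could run automatically instead of being done on paper.
    height_from_ehr_cm = 175.0  # illustrative value
    print(round(target_tidal_volume_ml(height_from_ehr_cm, "male")))  # ~423 mL
```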
If the ventilator and medical record communicated with each other, calculating the ideal air flow would be automated and clinicians would only need to verify the correct settings. Instead, they waste time on nonproductive work, filling the gap between these two systems. Because similar gaps exist between dozens of other hospital technologies, and clinicians are asked to perform hundreds of steps each day to deliver evidence-based care, unnecessary errors occur and providers' productivity has fallen, even as spending on technology has ballooned.
Health care’s safety and quality challenges are exacerbated by its procurement problem. For years, hospitals have invested in sophisticated devices and IT systems that, on their own, can be awe-inspiring. Yet these technologies rarely share data, let alone leverage it to support better clinical care.
How did we get here? First, the number of devices that work well with others is small. Manufacturers have been slow to embrace interoperability, which would allow health care technologies to share data with one another. In recent years, there has been movement to change that. More companies have pledged to open their data, giving innovators everywhere the chance to mine that data and use it to drive better care. But we are far from where we should be.
Second, despite significant work, health care lacks widespread adoption of interoperability standards that govern formats and elements of data shared between different systems. Without such standards, data cannot be shared and understood among devices. An accelerated effort is needed to create mature standards and expand their adoption by manufacturers. At Johns Hopkins, we are leading development of a report for the National Academy of Medicine that will identify the barriers to widespread interoperability and suggest opportunities to overcome them, such as policies, requirements, standards, and purchase specifications.
Part of the solution must involve hospitals. If they truly want technologies that save lives and boost productivity, they will need to exert their considerable pressure as purchasers, requiring that manufacturers embrace openness and interoperability, and only purchasing devices that support this. Too often, hospitals treat equipment and IT procurement in a siloed way, focusing on price without looking at how those devices will work as part of a larger system. For example, many new hospital beds come with a sophisticated array of sensors that can track such information as whether a patient is at risk of developing a bedsore, based on data about how often they move in bed. Such sensors may account for 30% of a bed's cost. Yet at one of our hospitals, that data is unusable: it's in a format that our system cannot read.
It’s a similar situation for much of the data that is fed from wireless monitors of patients’ heart rate, blood oxygen levels, blood pressure, and breathing rate: This data doesn’t link to the medical record.
Health care is woefully underengineered. Too often, clinicians mold their work processes around the demands of multiple devices and health IT systems, yet those technologies don’t work together to serve their needs and the needs of patients. Using systems engineering, we can integrate technologies and build hospitals and clinics that ensure consistently safe, high-quality, and efficient care.
The vision of an integrated hospital unit that is much safer and more productive will not be possible without widespread availability of products that share data openly and freely. This change can be driven by those who purchase these technologies. Just as the U.S. Navy demands that its submarines and ships have interoperable technologies, health care leaders need to demand the same of what they buy. When health systems insist on interoperable technologies, the market will respond.
It’s unrealistic to think that each hospital should go it alone, exerting its purchasing power to move the marketplace. However, hospitals could work together, writing specifications and functional requirements for the products that they will purchase and refusing to do business with manufacturers that don’t comply. Group purchasing organizations, which help procure products and devices for thousands of hospitals under their umbrella, might also fill that role.
And we should take it one step further: Rather than looking to assemble hospital rooms one product at a time, hospitals should be able to purchase modules, sets of interoperable products that work together to support an aspect of care. This model makes sense, as few if any hospitals have the resources to design and manage all the connections between technologies, or to optimize how the data is used and displayed to support top-quality care. Ultimately, when a hospital is built or renovated, it would have the option to buy modular patient rooms, clinical units or floors — a “hospital in a box,” built to its specifications.
We don’t expect airlines to build their own planes. They buy them from experienced system integrators such as Boeing or Airbus. There’s no reason that hospitals shouldn’t have a similar model. The question is whether health care leaders will have the resolve to require it. The survival of their patients, the financial survival of their organizations, and our ability to reduce health care costs may depend on it.
Hit fast-forward in your mind. Imagine a world where data about traffic, public transportation, and pedestrian patterns is continuously analyzed to provide the smoothest possible commute for the largest number of people. Centralized, one-click tax preparation and payment. A single, voice-activated digital assistant ready to answer any civic question.
How far ahead do you think you’d have to jump to make these things happen? Five years? Ten years? Neither. In fact, each of these innovations is already up and running somewhere in the world today, with more happening every day. They are signs of profound change.
Digital transformation, one way of thinking about this change, refers to the use of technology to improve the reach and performance of enterprises. It's not limited to private enterprise. When applied to the social fabric, digital transformation points to a reimagining of the way governments interact with their people, cities serve their inhabitants, and public agencies address the needs of their communities.
As diverse forces such as social media, climate change, urban migration, and sprawl continue to upend the status quo, employing every tool and service available to help societies respond to disruption is critical. Today, new devices coupled with artificial intelligence using vast amounts of data from millions of sensors are helping us tackle key social concerns.
Building more efficient, secure, and resilient governments
Turn to Estonia for an example of a digital transformation of the social infrastructure. Estonia has only 1.3 million citizens but is larger in landmass than Switzerland; as a result, many towns do not have a nearby government office. Every citizen carries a digital ID card that allows him or her to vote remotely, pay taxes with a few clicks, manage health care, and much more. The country has now opened its digital services to everyone in the world via e-Residency. But because of this digital dependence, the government needed to ensure its resilience in the event of a natural disaster, cyberattack, or other disruption. How? As part of a joint research project with Microsoft, Estonia moved the official digital record of land ownership to the cloud. Since then, it has clarified public cloud usage guidelines to allow most data to be stored in a public cloud located within the EU, and it is building data embassies abroad to keep critical e-government databases and systems backed up in the cloud.
Globally, these types of changes will be most effective when they’re supported by a legal and policy framework that reinforces the technology, particularly for issues of security, privacy, and resilience. A collaborative research project has launched in New Zealand to explore some of these larger questions of how governments can harness digital technologies to develop smarter, more inclusive societies.
“Essentially, we are interested in better understanding what it takes for New Zealand to become a digital society, what opportunities and challenges it presents, and what role digital government plays in getting there,” says Graeme Osborne, the general manager for system transformation at the New Zealand Government Chief Information Office (GCIO). The results of the project—a collaboration between the GCIO, Microsoft Digital, and the Fletcher School at Tufts University—will be published in a future white paper that will provide insights and ideas not only for New Zealand but also for countries around the world.
Urban digital nervous system
“Digital transformation is helping people and organizations reimagine work and personal life. It’s empowering cities and countries to realize digital dreams that create better education, and safer, healthier and more sustainable living. The opportunity for our digital societies to drive social and economic progress is unprecedented,” according to Anand Eswaran, Corporate Vice President, Worldwide Services and Microsoft Digital.
The technology is sophisticated enough now that the possibilities seem almost limitless. Underpinning this work at Microsoft is a belief in inclusive design, which holds that technology should be empathetic and that habitable environments should be not only aesthetically pleasing but also usable by everyone, regardless of ability, age, or life status.
Inclusive design helps inform the concept of the urban digital nervous system (UDNS), which is a metaphor (first used by Bill Gates in 1999) for the systems that regulate a city’s operations and automate its core functions. Thanks to advances in artificial intelligence and data analytics, it’s a metaphor with strong connections to the real thing. As it matures, the UDNS will start to anticipate human intentions, becoming increasingly responsive to the needs of the people it’s designed to serve. One such project is under way in Auckland.
With 1.4 million residents, Auckland is New Zealand’s largest city, and it’s growing fast; its population is expected to double by 2040. With growth comes traffic, and already Auckland’s existing transportation infrastructure is struggling to cope. Auckland Transport, the agency responsible for helping people move around the city safely and efficiently, worked with Microsoft Digital to better understand how to plan for population growth. The project used Internet of Things data from public transport nodes, traffic lights, and intersections to shorten travel time, ease congestion, and make the streets safer for pedestrians. Eventually, a social listening tool and a mobile app for parking will help Auckland Transport become even more responsive.
Digital transformation is a central element in what is increasingly being referred to as the fourth industrial revolution, signaled by our burgeoning understanding of how to embed technologies in the physical and biological spheres. The pace of this change is historically unprecedented and disrupting nearly every industry in every country. Societies too must adapt, not only to protect their members’ livelihoods, lifestyles, and longevity but also to offer their communities the services they need and to provide a framework for future growth. As that happens, governments will become more customer-focused—and will contribute to a better quality of life for everyone.
For more information visit www.microsoft.com/digitaldifference.
When I was Chief Innovation Officer at Boston Children's Hospital, I often felt that my title should have been Chief Innovation Communication and Relations Officer. In any firm, an innovation program cannot be effective without building bridges across the organization. But in highly regulated industries, such as healthcare delivery, pharma, banking, and insurance, good relationships and effective communication are especially vital.
Innovating in highly regulated industries can be challenging. But it is necessary, because even firms in these industries must innovate to gain competitive advantages and thrive. For innovation to flourish despite legal and regulatory obstacles, you must address innovation barriers head-on. Here are a few tips based on my experience:
1. Build relationships proactively with internal regulatory and legal folks. That's right. Seek out, don't avoid, the staff responsible for legal, regulatory, and compliance within your organization. Innovators sometimes think they are better off steering clear of these gatekeepers and guardians for as long as possible. That's a big mistake. You can't avoid working with these folks, and if you don't find them, they will come and find you.
Talk to your legal, regulatory, and compliance colleagues early, well before your innovation is ready. Discussing your project at the beginning of the innovation lifecycle, when the stakes are still low, gives them time to digest the new idea and provide input and guidance while the idea can still be shaped into an innovation.
At Boston Children's Hospital, my team realized our doctors could use videoconferencing to care for critically ill patients in small community hospitals. Our legal department, however, greeted the idea with scowls and skepticism. The lawyers were worried about patient consent, physician licensure, medical liability, and a long litany of other legal and regulatory concerns.
Eventually, after many intense conversations, they came on board. In fact, once our “Teleconnect” program launched, the attorneys actually became some of the most enthusiastic internal advocates for the program.
Sometimes, however, resistance is strong. When that happens, should you circumvent the lawyers and appeal to the CEO? Going above the lawyers can break an impasse. But it also can damage relationships, and make it harder to gain legal or regulatory approval the next time around.
Instead of doing an end-run around your legal and regulatory people, continuously emphasize to them why and how your innovation is important to the business. Explain that killing the project isn’t a good option because it hurts the organization. When the lawyers understand the benefits, they will find ways to drive the innovation forward.
2. Find a champion within your legal or regulatory departments who is interested in innovation. Recruiting a legal or regulation expert who will champion innovation lends internal credibility to your project and can help you navigate not just your organization, but also your industry.
To find this innovation ambassador, keep an open mind. Your legal innovation champion could be a far-sighted junior attorney or a seasoned lawyer.
At a biotech company where I worked, my regulatory champion on a social media initiative was a relatively new lawyer. He had joined the company recently and was thoughtful, caring, and interested in exploring the possibilities of social media. He created a useful bridge to other regulatory experts in the organization.
3. Frame conversations as discussions. By asking questions and exploring hypothetical scenarios, you can change responses from legal and regulatory folks from “no” to “yes, but…”
For example, asking “How can we…?” rather than “Can we…?” encourages thinking about how something can be accomplished despite existing restrictions.
At Boston Children’s Hospital, when we began planning our first hackathon, people were concerned about who would own any intellectual property generated at the hackathon. When the question was reframed during a series of discussions, outside innovators and the hospital found common ground and a way to share the IP.
4. Communicate broadly and repeat yourself. One of the most impactful things we did to drive innovation forward at Boston Children's Hospital was to set up regular monthly meetings with our legal team. This gave us the time and space to discuss emerging issues calmly and collaboratively. While it's great to have informal check-ins, nothing beats regularly scheduled meetings for ensuring legal, regulatory, and compliance partners are up to speed and on board with your plans.
Similarly, we held monthly fora where we shared progress on innovations under way with a broader group of stakeholders. As Chief Innovation Officer, I also made regular rounds of all the clinical departments to provide updates and information to the doctors and nurses there. We produced an annual innovation report, which proved an effective way to disseminate information about our progress. Annual "innovation days" gave innovators a platform to share their exciting work. Highlighting the outcomes of prior innovation laid the foundation for future innovation.
Effective, broad internal communication is vital when innovating in any industry, and even more so in heavily regulated ones. Why? In regulated industries, folks throughout the organization – not just legal, regulatory, and compliance experts – may worry about innovation-related risk.
5. Tackle “folk law” head on. In regulated industries, there are clearly industry rules that need to be followed and laws to be aware of. However, there are also often gray areas where the law is unclear. Innovation frequently occurs in that uncharted territory, where regulations may not be well-defined. In the absence of clear laws and regulations, people tend to rely on “folk law.”
Folk law is simply the way things have always been done. Even though folk law is often based on mistaken assumptions, it is treated as if it were an actual regulatory constraint. Folk law develops based on what people are familiar with and comfortable doing. Once restrictions are identified as “folk law,” moving past them becomes easier.
Folk law at a biotech company where I worked dictated that executives couldn’t tweet. Our legal team was extremely uncomfortable with the idea. Once we began to discuss the reasons for their discomfort, it became clear their concern was rooted in the fact that none of the executives had ever been active on Twitter. The folk law that “executives can’t tweet” disintegrated once it was poked and prodded. Soon after, the regulatory team issued a set of internal policies and guidelines around the use of Twitter by employees.
6. Track the competition. Legal and regulatory folks are not always comfortable having their organization be the first to innovate and chart new territory. However, when in-house legal and regulatory experts can see how other organizations in the same industry are innovating and pushing the envelope, and how regulators respond, they often become comfortable with their own organization following those leaders.
There is a sense of “safety in numbers.” Tracking how your competition is innovating and sharing it within your organization can help you make the case for innovation with your internal legal and regulatory experts.
7. Be patient. Innovating in regulated industries takes longer than in other industries. People are more uncomfortable with innovation in regulated industries. They are quicker to raise red flags and barricades. But you still can innovate successfully. Involve those people early, while they can help shape the project, and be patient.
Expect setbacks. Innovation is rarely easy, especially in regulated industries. But it becomes easier when you treat legal, regulatory and compliance colleagues as assets rather than as adversaries. With time and patience, and by involving legal, regulatory and compliance folks early, you too can bring innovation to a highly-regulated industry.
Regulated industries need time to adapt, and they need to understand innovation, its risks and benefits. Don’t get discouraged when there are setbacks. (I’ve been there!) Instead, be patient and focus on the ultimate goal. And communicate, communicate, communicate.
Ronald Coase nailed it back in 1937 when he identified scalable efficiency as the key driver of the growth of large institutions. It’s far easier and cheaper to coordinate the activities of a large number of people if they’re within one institution rather than spread out across many independent organizations.
But here’s the challenge. Scalable efficiency works best in stable environments that are not evolving rapidly. It also assumes that the constituencies served by these institutions will settle for standardized products and services that meet the lowest common denominator of need.
Today we live in a world that is increasingly shaped by exponentially improving digital technologies that are accelerating change, increasing uncertainty, and driving performance pressure on a global scale. Consumers are less and less willing to settle for the standardized offerings that drove the success of large institutions in the past. Our research into the long-term decline of return on assets for all public companies in the US from 1965 to today (it’s gone down by 75%) is just one indicator of this pressure. Another indicator is the shrinking life span of companies on the S&P 500. A third is the declining rates of trust indicated by the Edelman Trust Barometer — as the gap grows between what we want and expect and what we receive, our trust in the ability of these institutions to serve our needs erodes.
To reverse these trends, we need to move beyond narrow discussions of product or service innovation, or even more sophisticated conversations about process innovation or business model innovation. Instead, we need to talk about institutional innovation, or re-thinking the rationale for why we have institutions to begin with.
We believe there still is a compelling rationale for large institutions, but it’s a very different one from scalable efficiency. It’s scalable learning. In a world that is more rapidly changing and where our needs are evolving at an accelerating rate, the institutions that are most likely to thrive will be those that provide an opportunity to learn faster together.
We’re not talking about sharing existing knowledge more effectively (although there’s certainly a lot of opportunity there). In a world of exponential change, existing knowledge depreciates at an accelerating rate. The most powerful learning in this kind of world involves creating new knowledge. This kind of learning does not occur in a training room; it occurs on the job, in the day-to-day work environment.
For example, our informal survey of where employees are spending their time in major departments across large companies suggests that 60-70% of their time is consumed in “exception handling” – addressing unexpected events that the existing processes can’t handle. These exceptions are a great opportunity to create new knowledge – how to handle something never anticipated. Yet, today this work is generally done inefficiently – workers struggle to find each other and to access the relevant data and analytics required to resolve the exception. Once they resolve the exception, what they did and learned is largely lost to the rest of the organization.
Moreover, most organizations seem to use digital technology to simply automate tasks and eliminate people. But scalable learning harnesses technology to augment the capabilities of people. Routine tasks do need to be automated, but for the purpose of freeing up people to explore new approaches to create even more value. In this context, one key dimension of learning is for workers to discover how to more effectively use increasingly powerful digital tools in specific contexts. Historical studies of the Industrial Revolution have shown that there was a significant lag between the introduction of new industrial machinery into the workplace and resulting productivity improvements because it took time for workers to develop the skills required to get the most value out of the machinery – skills that could only be taught in a very limited form because they had to be adapted to specific contexts and needs.
Scalable learning not only helps people inside the institution learn faster. It also scales learning by connecting with others outside the institution and building deep, trust-based relationships that can help all participants to learn faster by working together. For example, a number of entrepreneurial motorcycle companies in Chongqing, China have created product design networks connecting a large number of technologists and component vendors and helping them to work together to improve the designs of the components in ways that have led to significant cost reduction while maintaining or improving product performance and reliability.
What if we went further, redesigning our work environments (physical, virtual, and management systems) to help accelerate learning and performance improvement on the job? We have not been able to find a single company that has undertaken this in a systematic and holistic way. We did find some intriguing examples of companies that have introduced interesting elements into the work environment to accelerate learning. For example, Intuit has deployed experimentation platforms throughout the company to encourage employees to try and test new approaches to deliver more value while managing the risk associated with these experiments.
In institutions driven by scalable efficiency, it is the responsibility of the individual to fit into the assigned tasks and roles required by the institution. In institutions driven by scalable learning, the institutions must find ways to evolve and adapt to the needs of the individuals within their organization.
Scalable efficiency doesn’t just demand conformity among the individuals within the institution. It also seeks conformity among those it serves – that’s the path to scalable efficiency.
Scalable learning, on the other hand, is driven by the desire to learn more about those being served by the institution and then to provide them ever more value by tailoring products and services to their individual and evolving needs. That learning is a prerequisite to understanding how to deliver more and more value. Becoming more responsive to the evolving, unique needs of the people institutions serve could help restore the trust that has been eroding.
Not only could innovating around scalable learning help to rebuild trust in our institutions, it could also lead to a profound shift in the nature of performance improvement. The scalable efficiency institutional model is inherently a diminishing returns model – the more efficient these institutions become, the longer and harder they will need to work to get the next increment of performance improvement. Scalable learning, on the other hand, for the first time offers the potential to shift to an increasing returns model where the more people who join together to learn faster, the more rapidly value gets created.
We live in a culture of “yes.” We don’t want to disappoint our bosses, colleagues, families, or friends, so we say “yes” as often as we can manage. Oftentimes, we say “yes” when we should say “no.”
There’s nothing wrong with wanting to please. In fact, we’re hardwired for it. But when we overcommit ourselves, we spend our time checking things off a list rather than actually creating value.
This problem has ramped up in recent years as likability has become a key determinant in landing jobs and other professional opportunities. But here’s the trouble with having a corporate culture built around likability: When people are afraid to turn down noncritical projects, good ideas get smothered. Without the ability to say “no” to low-level tasks in order to say “yes” to groundbreaking ones, people stop innovating.
This misuse of talent is rampant in large organizations today.
Frequently, when I speak at a company with a strong "yes" culture, I ask the employees to close their eyes and raise their hands if they are currently working on a project they don't believe will be successful, something they don't think will accomplish its goals. Every time, a majority of the hands go up. Of course, they don't raise them high: they know it's not wise to express this opinion, but they feel strongly enough about it that they find it necessary to say something.
Every company is in a value race. Not only do you have to create value for your customers, but you also have to do it before someone else does. Doing so requires the ability to say “yes” to truly great ideas — and, more importantly, to say “no” to all those good ideas that just aren’t good enough.
Here’s how to cultivate that mindset in your organization:
1. Establish a value assessment system.
Instead of saying "yes" or "no" to a project, rate all new initiatives on a scale of 1 to 10. Ask each department to create a list of criteria for scoring new opportunities. These might include the cost, the number of customers affected, and the revenue the initiative will generate. The next time an executive asks the team to change course, the request can be measured against these criteria.
I worked with a software development company in which the CEO came up with a new product feature on a weekly basis. His staff was overwhelmed by all the requests and didn’t know how to weigh them against what they were hearing from customers. So they developed a value assessment with the CEO’s input and ran all new requests through the tool. This helped them not only prioritize the value of requests, but also see which feature suggestions were not going to bring enough value to the organization.
Everyone has his or her pet projects or biases toward what matters. A value assessment removes subjectivity from the decision-making process and helps whole teams agree on which projects rank as an eight, nine, or 10 for the department.
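For teams that want to operationalize such a value assessment, here is a minimal sketch; the three criteria, the weights, and the cutoff of seven (a threshold mentioned later in this piece) are invented for illustration, not a prescribed rubric.

```python
# Illustrative sketch of a simple value assessment: score each initiative
# against weighted criteria on a 1-10 scale, then compare to a cutoff.
# The criteria and weights here are assumptions, not a recommended set.

CRITERIA_WEIGHTS = {
    "customer_impact": 0.4,    # how many customers are affected
    "revenue_potential": 0.4,  # how much revenue it could generate
    "cost_to_deliver": 0.2,    # rated so that lower cost scores higher
}

def value_score(ratings: dict) -> float:
    """Weighted 1-10 score for an initiative, given per-criterion ratings."""
    return sum(CRITERIA_WEIGHTS[name] * ratings[name] for name in CRITERIA_WEIGHTS)

def decide(ratings: dict, cutoff: float = 7.0) -> str:
    """Apply the department's agreed cutoff to the weighted score."""
    score = value_score(ratings)
    return f"score {score:.1f}: {'pursue' if score >= cutoff else 'decline'}"

if __name__ == "__main__":
    new_feature = {"customer_impact": 9, "revenue_potential": 7, "cost_to_deliver": 5}
    print(decide(new_feature))  # score 7.4: pursue
```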
2. Pay attention to warning signs.
In 2005, "Frontline" introduced Americans to the PlayPump. A well-intentioned inventor designed a product to address the fact that many poor Africans have to pump their water by hand, a strenuous and time-consuming process. The inventor created a new kind of pump, one that was powered by children playing on a merry-go-round-like device. People were enamored with the idea and donated millions of dollars to install 4,000 of these merry-go-round pumps in African villages.
Unfortunately, the pumps were a disaster. They were inefficient and difficult for adults to use, and of course the adults were the primary operators, because the kids got tired of using the equipment after about 15 minutes. In all the hype, no one had spent time observing villagers actually using the solution before it was scaled to thousands of communities. Had they conducted such an early experiment, the backers would have quickly spotted the project's pivot indicators and had time to make adjustments.
Perhaps you haven’t experienced a catastrophe on PlayPump’s level, but you’ve likely invested time and money on ideas that didn’t work. Just because an idea sounds good or makes you feel inspired doesn’t mean it will translate to the real world.
Pivot indicators help ensure your idea will succeed by tracking metrics that answer two questions: When will we know if this isn't working, and how will we know? PlayPump's downfall lay in its failure to measure such metrics, which let its backers run headfirst into catastrophe. Had the team waited for an outside study to measure the effectiveness of the system, or even implemented it on a smaller scale before installing 4,000 units, those signposts of failure would have been impossible to ignore.
As with PlayPump, most organizations measure only success metrics — numbers that tell them they did well: how much money was raised, how many pumps were installed, or how close they stayed to the budget and timeline. But they usually ignore pivot indicators that could give them important warning signs before they’ve spent all of their resources and lost the time to make adjustments.
3. Celebrate saying “no.”
It’s easy to say “no” to bad projects and ideas, but you open yourself up to big opportunities only if you say “no” to the good ideas that just aren’t good enough.
Steve Jobs prided himself on saying “no.” He knew that to do great things, he needed to focus his attention only on the utmost priorities. As he put it at a conference in 1997, “People think focus means saying yes to the thing you’ve got to focus on. But that’s not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I’m actually as proud of the things we haven’t done as the things I have done. Innovation is saying no to 1,000 things.”
His example inspired me to apply rigorous standards in my work and personal life. Using the value assessment system described above, I say “no” to anything below a seven. I don’t waste time on anything that doesn’t create value for my family, my clients, or the causes and charities I care about.
Encourage your departments and individual team members to determine their cutoffs, and observe how that changes the quality of their output. Establishing a process to say “no” to big ideas that aren’t good enough creates the space for truly innovative ideas to grow. And because it can be hard to say “no” — too many teams don’t get credit for deciding what not to pursue — celebrate it. Keep track. Give people credit and kudos not just for the great ideas they greenlit, but also for the middling ideas they passed up.
4. Reward initiative.
If an employee comes up with an interesting, viable concept, let him or her run with it. Uber did just this when a recent hire suggested an idea for a new offering. This manager observed that the company often received requests from event organizers for group codes that attendees could use to book the service. Because this wasn’t a widely promoted feature, someone had to manually set up the codes for each client.
Uber’s leaders encouraged the manager to devise a more streamlined system, going so far as to help him and some colleagues rent an Airbnb property for a weekend to work on the idea. Within a few days, the product was complete and ready for the 50,000 clients who fell under the Uber Business umbrella. This is an example of knowing when to say “yes” to a good idea while saying “no” to entrenched ways of thinking.
The most important skill any leader — any person, really — can learn is how and when to say “no.” When you’re able to confidently walk away from opportunities that don’t generate value, you have the time and the resources to say “yes” to those that matter. These are the ideas that are going to revolutionize your company and change the world.
Typical stories of creativity and invention focus on finding novel ways to solve problems. James Dyson found a way to adapt the industrial cyclone to eliminate the bag in a vacuum cleaner. Pablo Picasso and Georges Braque developed cubism as a technique for including several views of a scene in the same painting. The desktop operating system developed at Xerox PARC replaced computer commands with a spatial user interface.
These brief descriptions all focus primarily on the novel solution. The problems they solve seem obvious.
But framing innovations in this way makes creativity seem like a mystery. How could so many people have missed the solution to the problem for so long? And how in the world did the first person come up with that solution at all?
In fact, most people who come up with creative solutions to problems rely on a relatively straightforward method: finding a solution inside the collective memory of the people working on the problem. That is, someone working to solve the problem knows something that will help them find a solution — they just haven’t realized yet that they know it.
Sure, some people stumble on the answer. When Archimedes stepped into the bath and noticed the water level rise, he lucked into the solution for finding the volume of an ornately decorated crown. And others invest decades and millions (or even billions) of dollars into research and development (see drug companies). But tapping into the individual’s or group’s memory is one of the most cost effective and repeatable problem-solving approaches.
The key to this method is to get the right information out of memory to solve the problem.
Human memory is set up in a way that encountering a piece of information serves as a cue to retrieve other related things. If I ask you to imagine a birthday party, you can quickly retrieve information about birthday parties you have attended, and you will likely be able to think about party hats, cake, and singing “Happy Birthday.” You don’t have to expend much effort to recall this information; it emerges as a result of the initial cue.
If you want to retrieve something else from memory, you need to change the cue. If I now ask you to think about salad, you can likely call to mind information about lettuce, tomatoes, and dressing, even though you were thinking about birthday parties just a minute ago.
When doing creative problem solving, the statement of the problem is the cue to memory. It is what reaches into memory and draws out related information.
In order to generate a variety of possible solutions to a problem, then, the problem solver (or group) can change the description of the problem in ways that lead new information to be drawn from memory.
For example, it is hard to see how Dyson would have gotten to industrial cyclones from thinking about vacuum cleaner bags. But an alternative way to describe the problem is that a vacuum takes in a combination of dirt and air and has to separate the dirt from the air. Bags do this by acting as a filter that traps the dirt and lets the air pass through pores in the bag. But there are many ways to separate particles from air. Industrial cyclones create a spinning mass of air that throws particles to the edges by centrifugal force.
Describing the vacuum this way generalizes the problem by removing some of the specific components typically used to solve it. The phrase "separating dirt from air" does not mention the bag at all. When you focus on the bag, you'll naturally be reminded of aspects of bags. The long list of patent numbers on most vacuum cleaner bags suggests that many inventors have done just that. A radically new solution to a problem, though, requires a new problem statement.
So how do you create the problem statement you need to find a solution to your business problem? Unfortunately, there is no ideal problem statement. Instead, the most consistently creative people and groups are ones that find many different ways to describe the problem being solved. Some of those statements will be specific and talk about the objects being acted on (e.g. vacuum bags). That leads to retrieval of specific information that is highly related to the problem (e.g. different types of vacuum bags). Then, groups should find several ways to describe the essence of the problem being solved in ways that focus on the relationships among the objects or a more abstract description of the goal (e.g. separate dirt from air). Each of these descriptions will help people to recall knowledge that is more distantly related to the domain in which the problem is stated.
Most of us have been looking in the wrong place for our creative insights. We ask people to “think outside the box,” but we should be asking people to find more descriptions of the box and see what that causes us to remember.
As a musician, I want to encourage other artists to collaborate with my music. But recently, a visual artist had all of his Vimeo videos taken down for using just 30 seconds of one of my songs. The label that exclusively licenses that song likely had a bot scanning for copyright infringement, and it automatically took the videos down. I hear the artist now has them back online, after a few weeks of hair loss and negotiations. I'd personally like to avoid these situations in the future, which means providing an easy way for others to license and collaborate with my music. A blockchain-empowered rights and payments layer could provide the means to do so.
A major pain point for creatives in the music industry — such as songwriters, producers, and musicians — is that they are the first to put in any of the work and the last to ever see any profit. They have little to no information about how their royalty payments are calculated, and don't get access to valuable aggregate data about how and where people are listening to their music. But a rising tide of musicians and bands are pushing toward transparency and fairness in their own ways — for example, Paul McCartney's recent lawsuit against Sony, Duran Duran's lost battle with Sony/ATV, and Taylor Swift's dust-up with Spotify. It's within this climate that an enticing seed of an idea is being planted: blockchain technology has the potential to get the music industry's messy house in order.
One of the biggest problems in the industry right now is that there's no verified global registry of music creatives and their works. Attempts to build one have failed to the tune of millions of dollars over the years, largely at the expense of some of the collective management organizations (CMOs) — the agencies (such as ASCAP, PRS, PPL, and SOCAN) that ensure songwriters, publishers, performers, and labels are paid for the use of their music by collecting royalties on behalf of the rights owners. This has become a real issue, as evidenced by the $150 million class-action lawsuit that Spotify is currently wrestling with. The inter-organizational cooperation that blockchain is providing for the fintech sector should inspire these "collecting societies" to use the technology to create an open (or partially open) global registry if they hope to remain relevant; such a registry would help organize the immense amounts of new music being uploaded every day. Music creatives could build upon it to directly upload new works and metadata via blockchain-verified profiles.
Here are five basic principles underlying the technology.
Each party on a blockchain has access to the entire database and its complete history. No single party controls the data or the information. Every party can verify the records of its transaction partners directly, without an intermediary.
Communication occurs directly between peers instead of through a central node. Each node stores and forwards information to all other nodes.
Every transaction and its associated value are visible to anyone with access to the system. Each node, or user, on a blockchain has a unique 30-plus-character alphanumeric address that identifies it. Users can choose to remain anonymous or provide proof of their identity to others. Transactions occur between blockchain addresses.
Once a transaction is entered in the database and the accounts are updated, the records cannot be altered, because they’re linked to every transaction record that came before them (hence the term “chain”). Various computational algorithms and approaches are deployed to ensure that the recording on the database is permanent, chronologically ordered, and available to all others on the network.
The digital nature of the ledger means that blockchain transactions can be tied to computational logic and in essence programmed. So users can set up algorithms and rules that automatically trigger transactions between nodes.
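To give a feel for how the "chain" in these principles fits together, here is a minimal, purely illustrative sketch of an append-only ledger in which each record stores the hash of the record before it, so quietly editing history breaks every later link; real blockchains layer peer-to-peer replication, consensus, and digital signatures on top of this idea.

```python
# Minimal illustrative sketch of a hash-linked, append-only ledger.
# Real blockchains add peer-to-peer replication, consensus, and signatures;
# this only shows why past records cannot be quietly altered.

import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a record's contents."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(chain: list, transaction: dict) -> None:
    """Add a transaction, linking it to the hash of the previous record."""
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "tx": transaction})

def is_valid(chain: list) -> bool:
    """Verify every record still points at the true hash of its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != record_hash(chain[i - 1]):
            return False
    return True

if __name__ == "__main__":
    ledger = []
    append(ledger, {"from": "fan", "to": "artist", "amount": 1.0})
    append(ledger, {"from": "radio", "to": "artist", "amount": 0.02})
    print(is_valid(ledger))            # True
    ledger[0]["tx"]["amount"] = 100.0  # tamper with history
    print(is_valid(ledger))            # False: the chain no longer links up
```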
Blockchain has the potential to provide a quicker and more seamless experience for anyone involved with creating or interacting with music. For example, listening to a song might automatically trigger an agreement between everyone involved in the journey of a song and anyone who wants to interact or do business with it — whether that's a fan, a DSP (digital service provider such as Spotify or iTunes), a radio station, or a film production crew.
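As a rough illustration of that kind of automatically triggered agreement, here is a minimal sketch in which a single play event splits a payment among a song's contributors; the song, the contributor shares, and the per-play fee are invented for the example and do not describe any existing platform or smart contract.

```python
# Illustrative sketch: a "play" event automatically splits a payment among
# everyone credited on a song. Shares, names, and the per-play rate are
# hypothetical; real smart contracts would run on a blockchain platform.

SONG_REGISTRY = {
    "example-song": {
        "splits": {               # fraction of revenue owed to each contributor
            "featured_artist": 0.50,
            "producer": 0.25,
            "session_musicians": 0.15,
            "engineer": 0.10,
        },
        "per_play_fee": 0.01,     # currency units per play, assumed for the example
    }
}

def settle_play(song_id: str) -> dict:
    """Return the payment owed to each contributor for a single play."""
    song = SONG_REGISTRY[song_id]
    fee = song["per_play_fee"]
    return {who: round(fee * share, 6) for who, share in song["splits"].items()}

if __name__ == "__main__":
    print(settle_play("example-song"))
    # {'featured_artist': 0.005, 'producer': 0.0025,
    #  'session_musicians': 0.0015, 'engineer': 0.001}
```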
Where would this new music ecosystem “live”? One idea is .music, the soon-to-be-introduced and much-anticipated new generic top-level domain (gTLD). Decisions about its fate, and about who will be granted control of the domain, are currently on hold at ICANN (The Internet Corporation for Assigned Names and Numbers), the non-profit organization that’s responsible for coordinating and managing internet domains. Currently, DotMusic, a music community applicant, is appealing for control of the .music gTLD. But if they fail, it could potentially go to the highest bidder at auction. Some of the bidders in the running include Google and Amazon. Should the winning party do the right thing and hand over individual .music URLs to verified music creatives (for example, Paul McCartney would own paulmccartney.music and Taylor Swift would own taylorswift.music), rather than treating these new URLs as their own storefronts, it would make a lot of sense to link a blockchain-enabled ready-for-business music registry to those URLs, adding a whole new dimension for music creatives to drive business toward themselves and their work.
The blockchain could also store information about, or link to, a musician's online profile (or "Creative Passport," as I like to refer to this concept), such as latest biography, tour dates, press images, and social media-style information, such as artists you champion, charities you support, skill sets, or organizations or companies you are connected to. This information could then be updated and accessible to anyone searching for that data, whether human or machine. At the song level — e.g., michaeljackson.music/maninthemirror — the blockchain could, at the very least, share information on all of the people involved in the making of the song, and could also be linked to metadata on specifics such as the equipment used to produce the song, where and when the song was recorded, the artists' inspiration for the song, attributions, and more, a sort of extended set of liner notes. This could help spawn new apps and services atop those datasets, and with them, new revenue streams for everyone involved.
Two years ago, the penny dropped for me as a musician when I was introduced to Ethereum, an open-source, public, blockchain-based distributed computing platform featuring smart contract functionality. Soon after discovering Ethereum, I dreamt up a music industry ecosystem that I called Mycelia, and used my next musical release, the song "Tiny Human," as an excuse to explore the potential of blockchain further. I began by posting everything about that track on my website for anyone to experiment with and for fans to enjoy. Phil Barry at the Ujo Music platform joined in, which resulted in "Tiny Human" being the first song ever to automatically distribute payments via a smart contract to all creatives involved in the making and recording of the song. It was very basic (no licensing terms were exhibited), and it raised little money, due in part to the fact that you had to have an Ether wallet holding Ether (the cryptocurrency used on the Ethereum platform) before you could purchase the track, which lost some people along the way. But it was nonetheless a first step that generated a lot of momentum for those in the business of music and blockchain.
Ease of use is one of the biggest keys to widespread adoption of any new technology. The idea of a Semantic Web of linked media, artist profiles, and other metadata spawning new apps with instantaneous peer-to-peer payments and exchange of data is an exciting one, but it will only become a reality for those who wish to interact with music if its solutions are better and simpler than those that currently exist. It was far easier, and far preferable, for 60 million users to download music from Napster than it was to go to the store to buy a CD. It was a total failure on the part of the commercial music industry that it didn't find a way to capture even a portion of those Napster users and turn the service into a legitimate one at the time. Napster was an innovative idea that made music more accessible to music lovers. But the RIAA (Recording Industry Association of America) chose to crush it rather than explore the idea of shared libraries and peer-to-peer music sharing in a legal context.
These days, however, the landscape is different, and the vast majority of those wanting to listen to music head over to YouTube, which is free and perfectly legal. Astonishingly, thousands upon thousands of new songs are uploaded every day without being registered properly, and so they are in desperate need of associated metadata. Surely we can find better ways for people both to publish and to interact with music, ways that make sense for everyone?
Some are trying. Organizations like Berklee's Open Music Initiative (OMI) have managed to gather almost every party under the industry-wide sun to explain why blockchain is at least worth exploring and engaging with. And an increasing number of new all-in-one music services for artists, such as Revelator (which is blockchain-based) and Amuse (which is not), are using big data combined with audio fingerprinting to provide really useful data feedback, analysis, and curation. They understand that good feedback data can be as valuable as money to creatives, enabling an artist to make business decisions with confidence and clarity. Combine this with the capabilities of social media aggregating apps like Hootsuite or Sprout Social, and artists' partitioned online representations and scattered creative wares start to come together. Imagine being able to know, or be alerted to, when and where your music is being played. Say your song is playing on a certain radio station: you could then call the DJ to thank him for playing your song, while connecting with listeners in the moment, adding context and meaning to your songs.
Now is the time for the music industry to take the long view and explore blockchain together with its creatives, for the sake of its sanity and its future. It won't be hard to make the business more efficient, as it's such a giant mess right now. The larger players in the industry just need to have faith that they will make more money by doing the right thing, which would lead to fair remuneration, transparency, and a multitude of new business opportunities for artists. Simply put, if the industry is to have any clout, or any say in the sustainability of our music ecosystem, it needs to come together to develop tools and standards so that the necessary game-changing new services can flourish, but this time under our own internet of agreements for music, where artists are represented fairly.
I believe that featured artists, those "on the cover," should ultimately be entrusted with ensuring that everyone involved in creating music in their name is duly acknowledged and compensated. The blockchain effect has given creatives in the industry hope that a better future lies ahead. If guided and nurtured in the right ways, blockchain holds the potential to give us a golden age of music, not just for its listeners but for those who make it, too.
Accelerating growth is on every CEO’s agenda. Each year business leaders commit to an overall revenue growth target, but the reality is that growth within a business is often very uneven. Some parts grow faster, and one hopes that they offset the other parts that may be declining. Dave Calhoun, former vice chair at General Electric and now senior managing director at Blackstone, says that it’s better to double down on your winners than to invest in fixing the losers. But many companies have a one-size-fits-all mindset toward metrics, which makes it hard to use that judgment when allocating resources from the top.
Similarly, there tends to be very little incentive for leaders below the C-suite to double down, even when they see a great opportunity. We personally know of three executives who were pivotal in launching $100 million-plus innovations. Despite the huge incremental value all three created for their corporations, their compensation plans failed to adequately reward them for creating such explosive growth. Yes, they received bonuses and public recognition, but they had to fight with HR to ensure their teams received just a tiny fraction of the value they’d created. Why? Again, it comes down to metrics and key performance indicators (KPIs) that don’t properly capture the subtleties of how a business is growing. Sadly, all three of these executives left their big companies to work in smaller, more entrepreneurial firms.
How do we fix the problems of properly measuring, allocating resources to, and compensating people for driving growth? Here are two ideas: First, companies should move beyond looking simply at market share, and instead focus on “share of growth” as the key metric when driving a business forward. Second, companies should find ways to exponentially reward leaders who drive share of growth.
Adding share of growth as a KPI addresses three drawbacks of market share.
The definition of “market” is likely outdated. Market share definitions are rarely updated, and the reality is many markets are blurring due to disruptive innovation. The basis of competition is now category versus category, as opposed to brand versus brand.
Market share is inherently backward-looking. This is where a forward-looking share of growth is more valuable. Consider the market for single-serve coffee pods, such as those made by Keurig. Looking at share of growth, you could have predicted that single-serve coffee would reach national scale about eight years before it actually did. Share of growth tells you where a market is going, not where it has been.
Market share engenders less helpful emotions than share of growth. Share of market tends to create a static worldview where those with high market share are at risk of overconfidence, whereas those with low market share are at risk of fatalistic despair in their decision making. Share of growth creates curiosity. Leaders ask: “Why is this segment growing so fast, and what can I do about it?”
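To make the metric concrete, here is a minimal sketch of one common way to compute it, treating share of growth as a brand's change in sales divided by the category's change in sales over the same period; the figures are invented for illustration.

```python
# Illustrative comparison of market share vs. share of growth.
# All sales figures are invented for the example.

def market_share(brand_sales: float, category_sales: float) -> float:
    """Brand sales as a fraction of total category sales (backward-looking)."""
    return brand_sales / category_sales

def share_of_growth(brand_now: float, brand_prior: float,
                    category_now: float, category_prior: float) -> float:
    """Brand's change in sales as a fraction of the category's change (forward-looking)."""
    return (brand_now - brand_prior) / (category_now - category_prior)

if __name__ == "__main__":
    # A small brand in a growing category (figures in $ millions).
    brand_prior, brand_now = 40, 70
    category_prior, category_now = 1000, 1100
    sog = share_of_growth(brand_now, brand_prior, category_now, category_prior)
    print(f"market share:    {market_share(brand_now, category_now):.1%}")  # 6.4%
    print(f"share of growth: {sog:.1%}")                                    # 30.0%
```

A brand capturing 30% of its category's growth while holding only 6.4% of the market is exactly the kind of winner the metric is meant to surface.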
Importantly, share of growth must drive allocation of resources and rewards that are exponentially greater than typical programs. In the same way that a star athlete can make more money than the coach or general manager, the directors and vice presidents should have the ability to earn seven-figure incentives (and even make more money than the CEO) if they create such value.
This will not be a huge cost to the organization. Two of our colleagues looked at more than 27,000 brands, using Nielsen data. Only 3% of brands had share of growth higher than their share of market. Companies should reserve 5%–10% of budgets and incentive pools for brands and leaders that sign up to be measured on share of growth and believe they can grow faster than the category.
Consider a few examples. A major publishing company set out to create a new weekly magazine brand that would grow the celebrity news category. It created a special “phantom stock option” program that would share 10% of the profit created if this brand was successful. The new magazine was the first successful weekly-magazine launch in 10 years, and it became the fastest-growing subscription-based magazine in history. Years later, the publisher recounted that the magazine owed much of its success to the incentive plan, which enabled it to attract top talent from across the industry to a risky startup.
Another example is from an established paper products company with dominant market share. The company decided that for emerging, high-growth categories like adult diapers, focusing on market share was insufficient. By introducing share of growth alongside its more traditional market share measure, the company could increase the urgency of achieving a market-leading position as quickly as possible. Not only did the company succeed, it grew the category: according to Euromonitor, the adult diaper category is forecast to grow 48%, or nearly $1 billion, by 2020.
The data suggests that brands with a higher share of growth than share of market exist at every size, with a particular sweet spot among brands between $100 million and $1 billion in sales. How much more growth could be created if fuel were added to these already growing fires?
Ultimately, one of the downsides of adding share of growth is that it’s not a straightforward metric. It requires careful consideration, especially if you’re going to measure and reward executives based on it. However, adding a bit of complexity and chaos to a crusty KPI like market share may be exactly what is necessary for executives to dig deeper to find growth.
In early May 2017 Republicans in the U.S. House of Representatives voted to repeal and replace the Affordable Care Act (or Obamacare). Subsequently, Republicans in the U.S. Senate began working on their version of a law to do the same. The House bill is flawed, leaving many uncertainties that the Senate has promised to address. While the fate of the bill is in flux, there are three immutable trends in the U.S. health care system that won’t change. As a result, regardless of how the law evolves, tremendous opportunities will remain for consumers, medical providers, health care payers, and investors to shape and improve the health care system.
The first trend is demographic: The U.S. population is continuing to age. In 1960 the median age for men and women in the U.S. was 29.5; it is now 37.9, and in the next 12 years it will exceed 40. Per capita annual health care costs are roughly $4,500 for people age 19 to 44; they double for people age 45 to 64; and they double once again for those 65 and older. Thus, as the population ages, demand for health care services will naturally expand, as will the pressure to find efficient ways to deliver those services.
Second, technology has become a pervasive element across the health care system, with a major impact on diagnosis, treatment, and communications. In 2004 one in five practicing physicians in the U.S. used an electronic health record (EHR); today nearly nine in 10 physicians regularly employ EHRs. There’s a tremendous amount of information and structured data now available to guide treatment, assess outcomes, and measure quality of care. Beyond EHRs, digital health tools (apps, wearable devices, and other hardware and software that measure and monitor health) are becoming common in consumers’ lives. From 2015 to 2016, investors poured more than $8 billion into funding these tools. More than 3,000 apps are now available to help manage diabetes alone. Clearly, most of these tools won’t survive. But technology has become rooted firmly in U.S. health care and, as elsewhere, consumers will choose many of the winners.
Third, irrespective of revisions to the ACA, discoveries in the life sciences that enhance the quality and extend the length of life will continue to flow from research laboratories. These are being driven by two major trends: the availability of personal health data, and the plummeting cost of integrating massive health data sets in the cloud. Based on these two foundations, we’ll begin to see the emergence of personalized medicine.
The pipeline for new drugs is bursting, and new devices and tools in the rapidly emerging digital health space will come to market more quickly. According to QuintilesIMS, there are more than 2,000 drugs in the late-stage approval process, and they will yield an estimated 45 new active substances annually over the next five years. This therapeutic deluge will make decision making more complex for clinicians, who must understand efficacy and risk, and for payers, who must choose which treatments to favor through preferred pricing. Indeed, the profusion of new treatments may present a serious challenge to the current payer strategy of negotiating favorable pricing with drug and device companies.
Taken together, these three trends will drive dramatic changes in health care, regardless of government policies. We see several areas where patients and care providers, as well as entrepreneurs and investors, will likely benefit.
First, businesses that help patients understand, access, and use the health care system will be rewarded. Patient engagement has been a mantra for those seeking to reform health care, as it’s widely accepted that patients who are engaged in their own health care have better outcomes. Technology plays a crucial role in promoting engagement, in part by customizing medical information for each patient. Digital platforms, whether websites, apps, or EHRs, that promote health and help patients understand their medical conditions and their options for treatment and prevention will grow in importance.
Investors are already keenly focused on this area, with many startups competing for a slice of the market. In 2011, 81 digital health startups received venture funding; after consistent year-over-year increases, 296 startups were venture backed in 2016. The venture industry is betting big on digital health, with $4 billion to $5 billion invested annually. But traditional business models built around third-party reimbursement continue to struggle with how to monetize digital health tools. We believe models will emerge that capture value from the growing consumer demand for effective digital health-promotion support. Solutions that drive patient engagement and improve outcomes will succeed in the marketplace.
Second, we expect to see growth in businesses that make it easier for consumers to access health care while living where they want to live, in a setting they can afford. In the U.S., the two key drivers of this trend are the aging of the population and the need for cost control. Telemedicine is increasingly becoming an adjunct to care that addresses both. Today’s technology enables practitioners to scale their services, seeing more patients in less time, and it embeds analytics that can help focus clinicians’ time on the cases where they can have the greatest effect. From the patient’s perspective, telemedicine is appealing because it allows more frequent engagement with doctors than in-person visits alone would permit, a particular aid for older patients with chronic conditions, who benefit from the frequent contact and care coordination that telemedicine can provide.
The market for services tailored to the elderly, helping them age in place, will expand in many directions. Stanley Healthcare, a division of Stanley Black & Decker that sells products to over 17,000 hospitals and senior living facilities, offers a good example. One product helps reduce falls; another, Wander Guard, helps seniors in the early stages of dementia live semi-independently. Stanley and others have quickly gained market share by serving the needs of this population and addressing patients’ and caregivers’ eagerness to adopt assistive technologies.
A third growth area is in EHRs and digital health applications. While new EHR offerings continue to emerge, the market has consolidated around a few large players, which has held back innovation and interoperability. The proprietary nature of EHR data and standards is likely to diminish, however, as industry pressure opens up data repositories and personal data become more accessible. Two initiatives deserve particular attention, because both will accelerate data liberation, punish companies that resist, and reward vendors that get on board early: the Human API platform and the Fast Healthcare Interoperability Resources (FHIR) specification. While they differ from each other, both are significant attempts at retrieving, aggregating, and contextualizing patient wellness and medical data. With the venture capital firm Andreessen Horowitz and Alphabet’s Eric Schmidt among its investors, Human API has the audacious objective of creating a consumer-controlled digital repository, where health data is securely shared with only those parties the consumer selects. FHIR is a standard crafted by Health Level Seven, a nonprofit health data standards organization, to provide interoperability among health systems. Rather than passing entire health documents among providers, FHIR allows the transfer of discrete clinical and administrative data between software applications used by different health care providers, enabling them to access the specific data they need from medical records across systems.
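As a rough illustration of what FHIR-style data access looks like in practice, the sketch below fetches a single Patient resource over FHIR’s RESTful JSON interface and reads only the demographic fields it needs. The server base URL and patient ID are placeholders, and a real integration would also handle authentication, versioning, and errors.

```python
# Minimal sketch: pull specific patient demographics from a FHIR server
# instead of exchanging an entire clinical document.
import requests

FHIR_BASE = "https://fhir.example-hospital.org/baseR4"  # hypothetical server
PATIENT_ID = "12345"                                     # hypothetical ID

# FHIR exposes resources (Patient, Observation, MedicationRequest, ...)
# through a RESTful API that returns structured JSON.
resp = requests.get(
    f"{FHIR_BASE}/Patient/{PATIENT_ID}",
    headers={"Accept": "application/fhir+json"},
)
resp.raise_for_status()
patient = resp.json()

# Extract just the fields this application cares about.
name = patient.get("name", [{}])[0]
print("Patient:", " ".join(name.get("given", [])), name.get("family", ""))
print("Birth date:", patient.get("birthDate"))
```

The point of the example is the granularity: an application asks for the one resource it needs rather than a whole record, which is what allows different systems to interoperate around specific clinical and administrative data.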
We’re convinced that these trends will ultimately drive mainstream adoption of proven digital health solutions. Where clinical trials demonstrate efficacy, and the solutions allow for improved cost management, we’ll begin to see multiple payment models emerge: insurance reimbursement, employer subsidies, and even direct consumer purchases. As adoption increases, companies that today provide therapeutics, principally pharmaceutical and medical device manufacturers, will begin to add digital health solutions to their portfolios.
Uncertainty surrounding the health care bill shouldn’t have a material effect on the success of various solutions. Indeed, with the current government gridlock, the rapid development of and growing demand for new health care technologies may help policy makers chart the course forward.