Last week the world listened as Donald Trump announced, “We are out,” proclaiming the U.S. withdrawal from the Paris climate accord. Such a move might create inertia for some companies and investors if they see it as evidence that “business as usual” is now the most likely climate scenario.
We call this the “Trump climate trap,” and it is a real danger. But another major action last week points in the opposite direction and leaves us more optimistic.
We witnessed a monumental event: a shareholder resolution calling on ExxonMobil, the world’s biggest publicly listed energy company, to disclose the impact on its business of a 2-degree scenario. (That means a world in which we have at least a 50% chance of limiting temperature increases to no more than 2 degrees Celsius.)
Despite the company’s board recommending that investors vote against the proposal, a striking 62.2% of the votes were in favor. That result sends a strong signal that climate change is an important financial risk and that shareholders want to know more about what companies are doing to transform their operations and products to remain competitive in a low-carbon world.
The success of the proposal requesting increased disclosure by ExxonMobil suggests that we have reached a tipping point within the investment community in the recognition of climate risks. Just a year ago a similar resolution at Exxon’s annual meeting received support from investors holding only 38.1% of shares. In very little time, the recognition that our economy will have to transform to decrease carbon emissions has gone from a minority view among Exxon shareholders to a majority view.
This transformation will not be easy, and we expect that while a number of companies will be able to adapt and grow their market share, others will go out of business in the process or shrink significantly.
In the meantime, more shareholder proposals will address climate change issues, raising important questions for boards of directors. Boards will have to demonstrate competence at monitoring the organization’s transformation process. In particular, boards will have to show that they understand two things.
They will need to demonstrate that they grasp how climate change and the adaptation to a low-carbon economy will affect different sectors. For example, directors of auto manufacturers and auto parts suppliers will need to understand how shared and autonomous mobility will accelerate electrification of the transportation sector, affecting car sales. Similarly, directors of power companies need to understand how advances in energy storage and information technology will accelerate development of micro-grids and the decentralization of the electric grid.
Boards will also have to communicate how the organization’s strategy is compatible with a low-carbon economy and what investments will need to be made to remain competitive. Capital expenditures, future acquisitions, research and development expenses, future financing needs, and payout policies, such as dividends and share repurchases, need to be assessed in part based on how they fit into a low-carbon world.
Boards are not the only ones under pressure. As our economies adapt, investors need to improve their practices in order to safeguard their portfolios. Investors can now move beyond “screening,” an exclusionary approach whereby companies are dropped from portfolios based on industry or on negative environmental incidents such as spills or supply chain issues, and instead integrate environmental, social, and governance issues into their overall investment strategies. This requires scenario planning, thoughtful modeling of potential outcomes, an assessment of management’s capacity to drive transformational change in the business, and an assessment of the board’s capacity to choose and incentivize a competent management team. Improvements in the landscape of environmental data over the last few years, such as carbon footprinting, have provided the infrastructure for more sophisticated analysis. Using state-of-the-art tools and databases developed by organizations such as Ceres and SASB to understand environmental impacts will be critical if the private sector is going to avoid the Trump climate trap.
A flawed network of pipes and valves at a manufacturing plant in La Porte, Texas, led to the release of a poisonous pesticide that killed four workers.
This is just one example of recent workplace injuries and fatalities. U.S. companies are facing pressure to meet earnings expectations, and research indicates that meeting analyst forecasts is a more important benchmark than meeting the prior year’s earnings or avoiding losses. While these issues may seem unrelated, we wondered whether there is a connection or correlation. Do workplace injuries occur more commonly in companies that are facing increased pressure to meet earnings expectations?
In our study recently published in the Journal of Accounting and Economics, we test whether there is any relationship between workplace safety and managers’ attempts to meet earnings expectations. To do so, we used establishment-level injury data (e.g., individual store or factory) compiled by the Occupational Safety and Health Administration (OSHA) from 2002 to 2011 and matched it to earnings data. This yielded a sample of 35,350 establishment-year observations for 868 firms, excluding financial firms and firms in regulated industries. Our investigation focused on those companies that met or barely beat analysts’ expectations, and we uncovered a previously undocumented phenomenon of higher workplace injuries at these firms in particular.
The numbers are telling. Controlling for other factors, injury/illness rates are 5%–15% higher in periods where a firm meets or just beats analyst forecasts. The injury/illness rates for such firms are also significantly higher than those for firms that miss or comfortably beat analyst forecasts.
We found that pressure to meet earnings forecasts can relate to workplace safety in at least two ways: high workload and cuts to safety-related expenditures. When managers believe their company may be close to missing earnings benchmarks, they may increase employees’ workloads by pressuring them to work faster or for longer hours. In addition, employees may compromise their own safety by overexerting themselves or ignoring safety protocols that slow workflows. All of these behaviors can undermine worker safety.
Managers may also circumvent or overlook explicit and implicit safety-related measures, such as maintenance spending on equipment and employee training. When managers engage in such practices, workplace safety deteriorates and workplace injuries mount.
What does this mean in terms of real people? According to the injury data from OSHA, we find that about one in every 24 employees is injured in firms that meet or just beat analyst earnings forecasts, compared with about one in 27 workers in firms that miss or comfortably beat forecasts.
We identified three factors that characterized the companies that beat earnings benchmarks. First, we found that benchmark beaters in industries with high unionization report injury rates about 6.4% lower than those in industries with low unionization. That’s likely because unionization serves as a proxy for employees’ power to ensure safe work environments: unions negotiate safety protocols and compliance into their contracts, and workers can report safety issues to their union representatives.
A second factor emerged when we compared the insurance premiums of workers’ compensation programs. These state-mandated programs differ considerably in their policies and coverage requirements. The premium in North Dakota, for instance, is $0.88 per $100 of payroll, while California’s is $3.48 per $100 of payroll.
It turns out that benchmark beaters in states with high workers’ compensation premiums have a nearly 5% lower injury rate, compared with those in lower-premium states. In other words, in states where workplace injuries are more costly, managers appear to be more diligent about their workers’ safety and less willing to increase workloads and demands on employees.
Finally, we found that companies doing considerable business with the government have better workplace safety records. Federal and state governments typically require that companies submitting bids for contracts maintain adequate workplace safety. Indeed, companies that do not meet certain workplace safety benchmarks may be barred from competing for such work. It’s likely that contract requirements cause managers to remain cognizant of workplace safety as they race to meet or beat expectations.
The effects that we document may represent the tip of the iceberg about employee health, as OSHA only collects data on relatively serious and physical injuries and illnesses that require hospitalization or days away from work. Additionally, our results suggest that disclosures about workplace safety could serve as signals to investors that managers are engaged in short-sighted activities to meet earnings targets. In other words, unusually high injury rates may signal that the firm is engaged in practices that resulted in a transitory boost to earnings that investors should not expect to persist.
When managers and workers lose sight of workplace safety while focusing on short-term financial targets, the consequences can be severe. At the company level, the costs include fines, litigation, increased insurance and workers’ compensation premiums, and negative publicity. For workers, however, the price may be significantly worse: pain, lost wages, and, in the very worst scenarios, loss of life.
The U.S. labor market is like an aging athlete: it is taking longer and longer to recover from recessions. It took two and a half years to regain the jobs lost during the 1990–91 recession. The next recession, which came in 2001, was short and mild (GDP barely fell), but it took four years for the job market to heal, prompting the Federal Reserve to administer a long course of low interest rates to the economy. Then came the Great Recession. It took seven years for employment to return to its 2007 level. Real wages are taking even longer to recover; they are still below their pre-recession trend.
What’s behind these “jobless recoveries”? In 2016, an influential study by Nir Jaimovich and Henry Siu showed that jobless recoveries occur because routine jobs are permanently eliminated during recessions. Firms strive to cut costs during downturns, and in the past quarter century automation and outsourcing have made routine jobs a juicy target for cost-cutting. Unfortunately for workers, once a call center is automated or production is outsourced to China, those jobs don’t come back when the U.S. economy recovers.
But there’s more to it than that. In recent work, Nir Jaimovich, Arlene Wong, and I show that consumer behavior is contributing to deeper recessions and slower recoveries. During downturns, consumers “trade down” to economize; that is, they reduce the quality of the goods and services they consume. To wit: fast-food restaurants like McDonald’s and Chipotle and general merchandise stores like Target and Walmart gained market share during the Great Recession. Trading down makes sense at an individual level, but at a macro level it creates a trap, because goods and services of lower quality are produced with less labor than those of higher quality. So, as consumers flock to lower-quality goods, they reduce the demand for American labor, adding to the woes of the recession. In some cases, trading down persists after the recession ends (consumers are by and large creatures of habit), adding to the time it takes for the job market to recover.
Restaurants and supermarkets are clear examples of the association between quality and labor intensity. The number of employees per meal served is lower in a fast-food restaurant like Wendy’s than in a mid-scale restaurant like Bakers Square. (And it is much lower at Bakers Square than at a fine-dining restaurant like Chicago’s famed Alinea.) Whole Foods, an upscale supermarket, employs six workers per million dollars of sales. The company does not want customers to wait in the cashier line, so as soon as there are more than a few people in line, the store manager opens another register. Safeway, a supermarket that targets the middle class, employs four workers per million dollars of sales, which is why it has longer cashier lines than Whole Foods. The lines are even longer at Sam’s Club, which employs only two workers per million dollars of sales. Every million dollars of sales that moves from Whole Foods to Safeway, or from Safeway to Sam’s Club, leads to the loss of two jobs. (And now even Whole Foods is under pressure to reduce costs.)
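The arithmetic behind that last claim can be sketched in a few lines. The staffing figures are the ones cited above; they are illustrative round numbers from the article, not audited data.

```python
# Workers employed per $1 million of sales, as cited in the text (illustrative).
workers_per_million = {
    "Whole Foods": 6,
    "Safeway": 4,
    "Sam's Club": 2,
}

def jobs_lost(from_store, to_store, sales_millions):
    """Net jobs lost when sales shift from a more to a less labor-intensive store."""
    delta = workers_per_million[from_store] - workers_per_million[to_store]
    return delta * sales_millions

# $1 million of sales moving from Whole Foods to Safeway costs two jobs...
print(jobs_lost("Whole Foods", "Safeway", 1))      # 2
# ...and $10 million moving all the way down to Sam's Club costs 40.
print(jobs_lost("Whole Foods", "Sam's Club", 10))  # 40
```

The point of the sketch is simply that the employment effect of trading down scales linearly with the dollars that move and with the gap in labor intensity between the two retailers.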
In the U.S. the availability of lower quality, inexpensive goods produced in developing countries has increased over time. This wider availability has presented consumers with more opportunities to trade down, exacerbating the decline in labor demand that occurs during recessions.
These new trends raise important questions for companies: Should firms change the positioning of their brands? Should they launch frugal brands during recessions? They also leave policy makers with a new challenge: what is the most effective way to train routine workers so they can more quickly find new occupations?
America has always been “a nation of immigrants,” to quote the title of John F. Kennedy’s famous book. Yet the role of immigrants in U.S. competitiveness has become increasingly contentious, especially in light of the recent presidential election. Our research attempts to shed light on this debate by focusing on the history of immigrants as technological innovators.
To study the role of migrant inventors in U.S. innovation, we linked the birthplace of millions of individuals from Federal Censuses between 1880 and 1940 to millions of inventors from patent records. Using labor income information in the 1940 Census, we further examined how immigrant and domestic-born inventors were compensated.
Large scale data compiled from U.S. patent and Census records allows us to move beyond the anecdotes of successful immigrant inventors, of which there are many. For example, Alexander Graham Bell, a key figure in the invention of the telephone, was born in Scotland; the Swedish inventor David Lindquist played a major role in the development of the electric elevator; and Herman Frasch, a German-born chemist, worked in Philadelphia and Cleveland on mineral exploration and extraction, which can be linked to present-day fracking.
Our study shows that immigrants accounted for 19.6% of all inventors between 1880 and 1940. Today, that share is approximately 30%. The chart below shows the share in each state of inventors who were born abroad. Immigrant inventors were heavily concentrated in U.S. “rust-belt” states, which were some of the most productive areas during the late nineteenth and early twentieth centuries. They were noticeably absent from southern states, where they may have faced either fewer economic opportunities, or cultural barriers to assimilation.
We also looked at the technology areas in which immigrant inventors were active. The largest share of immigrants was involved in developing medical technology inventions, such as surgical sutures. But medical technology accounted for just 1% of all U.S. patents. In areas that had a much larger effect on the U.S. economy at the time – specifically electricity and chemicals, which accounted for 13.9% and 12.6% of all U.S. patents, respectively – immigrants were also strongly represented. Migrant influence was widespread, with migrant inventors accounting for at least 16% of patents in every technology area. The majority of immigrant inventors originated from European countries, with Germans playing a particularly prominent role.
To examine the relationship between immigrant inventors and U.S. technological development over the long-run, we constructed a measure that we call foreign-born expertise. In effect, this measure captures the extent to which inventive expertise in a particular technology area may have been transmitted by the movement of foreign inventors to the United States.
Areas of technology with higher levels of foreign-born expertise experienced much faster patent growth between 1940 and 2000 than otherwise comparable technology areas, in terms of both the number of patents and a citation-adjusted measure of patent “quality.” That relationship isn’t necessarily causal; however, our results provide suggestive evidence that immigrant inventors played a key role in the development of America’s technological leadership.
Migrant inventors may have an outsized influence on innovation for two primary reasons. First, immigrant inventors like Nikola Tesla, who was born in Serbia, develop important ideas in their own right. Additionally, their insights may augment the skills of domestic inventors through collaboration. For example, in the 1940s Canadian immigrant James Hillier developed the first commercially viable electron microscope at Radio Corporation of America alongside Ladislaus Marton, a Belgian inventor, Vladimir Zworykin, a Russian inventor, and U.S.-born engineers.
Our study also shows that immigrants were paid less on average than domestic inventors, despite being more productive in terms of patenting. The precise source of this wage penalty is difficult to pin down; however, inventors from other marginalized groups, such as black and female inventors, were also paid less than similarly productive white males. Our evidence is therefore consistent with classic notions of discrimination, where the wage income of certain types of individuals in the market is lower due to factors unrelated to their productivity.
Overall, our study suggests that immigrant inventors were vital to U.S. competitiveness, despite their lower wages. Although high skill migration is not costless – it is possible that immigrant inventors might displace domestic inventors, for example – an inflow of foreign talent may create positive benefits through improved skills, innovation, and other spillovers. Technological innovation is a central determinant of long-run economic growth, and access to the best inventors matters, regardless of their country of origin.
Each year, the United States produces more per person than most other advanced economies. In 2015 real GDP per capita was $56,000 in the United States. The real GDP per capita in that same year was only $47,000 in Germany, $41,000 in France and the United Kingdom, and just $36,000 in Italy, adjusting for purchasing power.
In short, the U.S. remains richer than its peers. But why?
An entrepreneurial culture. Individuals in the U.S. demonstrate a desire to start businesses and grow them, as well as a willingness to take risks. There is less penalty in U.S. culture for failing and starting again. Even students who have gone to college or a business school show this entrepreneurial desire, and it is self-reinforcing: Silicon Valley successes like Facebook inspire further entrepreneurship.
A financial system that supports entrepreneurship. The U.S. has a more developed system of equity finance than the countries of Europe, including angel investors willing to finance startups and a very active venture capital market that helps finance the growth of those firms. We also have a decentralized banking system, including more than 7,000 small banks, that provides loans to entrepreneurs.
World-class research universities. U.S. universities produce much of the basic research that drives high-tech entrepreneurship. Faculty members and doctoral graduates often spend time with nearby startups, and the culture of both the universities and the businesses encourage this overlap. Top research universities attract talented students from around the world, many of whom end up remaining in the United States.
Labor markets that generally link workers and jobs unimpeded by large trade unions, state-owned enterprises, or excessively restrictive labor regulations. Less than 7% of the private sector U.S. labor force is unionized, and there are virtually no state-owned enterprises. While the U.S. does regulate working conditions and hiring, the rules are much less onerous than in Europe. As a result, workers have a better chance of finding the right job, firms find it easier to innovate, and new firms find it easier to get started.
A growing population, including from immigration. America’s growing population means a younger and therefore more flexible and trainable workforce. Although there are restrictions on immigration to the United States, there are also special rules that provide access to the U.S. economy and a path for citizenship (green cards), based on individual talent and industrial sponsorship. A separate “green card lottery” provides a way for eager people to come to the United States. The country’s ability to attract immigrants has been an important reason for its prosperity.
A culture (and a tax system) that encourages hard work and long hours. The average employee in the United States works 1,800 hours per year, substantially more than the 1,500 hours worked in France and the 1,400 hours worked in Germany (though not as much as the 2,200+ in Hong Kong, Singapore, and South Korea). In general, working longer means producing more, which means higher real incomes.
A supply of energy that makes North America energy independent. Natural gas fracking in particular has provided U.S. businesses with plentiful and relatively inexpensive energy.
A favorable regulatory environment. Although U.S. regulations are far from perfect, they are less burdensome on businesses than the regulations imposed by European countries and the European Union.
A smaller size of government than in other industrial countries. According to the OECD, outlays of the U.S. government at the federal, state, and local levels totaled 38% of GDP, while the corresponding figure was 44% in Germany, 51% in Italy, and 57% in France. The higher level of government spending in other countries implies not only a higher share of income taken in taxes but also higher transfer payments that reduce incentives to work. It’s no surprise that Americans work a lot; they have extra incentive to do so.
A decentralized political system in which states compete. Competition among states encourages entrepreneurship and work, and states compete for businesses and for individual residents with their legal rules and tax regimes. Some states have no income taxes and have labor laws that limit unionization. States provide high-quality universities with low tuition for in-state students. They compete in their legal liability rules, too. The legal systems attract both new entrepreneurs and large corporations. The United States is perhaps unique among high-income nations in its degree of political decentralization.
Will America maintain these advantages? In his 1942 book, Capitalism, Socialism, and Democracy, Joseph Schumpeter warned that capitalism would decline and fail because the political and intellectual environment needed for capitalism to flourish would be undermined by the success of capitalism and by the critique of intellectuals. He argued that popularly elected social democratic parties would create a welfare state that would restrict entrepreneurship.
Although Schumpeter’s book was published more than 20 years after he had moved from Europe to the United States, his warning seems more appropriate to Europe today than to the United States. The welfare state has grown in the United States, but much less than it has grown in Europe. And the intellectual climate in the United States is much more supportive of capitalism.
If Schumpeter were with us today, he might point to the growth of the social democratic parties in Europe and the resulting expansion of the welfare state as reasons why the industrial countries of Europe have not enjoyed the same robust economic growth that has prevailed in the United States.
We are living in the age of the superstar firm. Companies like Samsung, Google, or BMW—the top players in their respective industries—are prospering. Yet economic growth remains sluggish in many parts of the world. The reason for that paradox, as the OECD has warned, is that the productivity gap between firms at the global frontier and those lagging behind has widened. Frontier firms are able to employ the most advanced technologies, which in turn allow them to win market share at the expense of their less productive competitors. And the globalized markets that frontier firms operate in disproportionately reward their knowledge advantage, setting them even further apart from the rest.
In a recent Harvard Business Review article, Nicholas Bloom from Stanford University argued that this type of “winner takes most” competition is an important driver of rising income inequality. The Googles of the world, in their global hunt for talent, are extremely generous when it comes to employees’ salaries. Meanwhile, wages are stagnating for many workers at less successful firms.
Several explanations have been proposed for the emergence of this “winner takes most” competition: a drop in search and transaction costs because of the Internet; network effects; the ability to scale up quickly due to IT and automation.
My analysis suggests another driver: R&D investment is increasingly concentrated in a few top firms. Some firms are investing heavily in R&D to expand their technological capabilities, while others don’t make that investment and so fall further behind. I believe this could be one of the main reasons for the widening productivity gap we observe.
Take the example of Germany: Between 2003 and 2015, R&D expenditure in the business sector increased by 59%, reaching a record high of 157.4 billion euros. Over the same period, however, the share of firms in the economy investing in R&D fell from 47% to 35%. In particular, small and medium-sized enterprises reduced their innovation efforts. So even as R&D expenditure has risen, it has become more and more concentrated within a smaller share of firms. The Gini coefficient of firms’ R&D spending—a commonly used measure of inequality—has been increasing steadily in Germany since the mid-1990s.
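To make that concentration measure concrete, here is a minimal sketch of how a Gini coefficient is computed, applied to hypothetical R&D budgets. The ten firm budgets are invented for illustration; only the formula is standard.

```python
def gini(values):
    """Gini coefficient of non-negative values: 0 = perfectly equal,
    approaching 1 as one firm accounts for nearly all spending."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard rank-based formula over the sorted values.
    weighted_sum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted_sum) / (n * total) - (n + 1) / n

even_spending = [10] * 10                         # every firm invests the same
concentrated = [1, 1, 1, 1, 1, 2, 3, 5, 15, 70]   # a few superstars dominate

print(round(gini(even_spending), 2))   # 0.0
print(round(gini(concentrated), 2))    # 0.75
```

A rising Gini of R&D budgets, as in the German data, means the second pattern is increasingly the norm: total spending can grow even while the set of firms doing the spending shrinks.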
Whether the same thing is happening in other countries remains an ongoing research question. More often than not, researchers are constrained by the lack of good data sources. Nonetheless, U.S. data show something similar. Overall, business R&D increased by 67% between 2003 and 2014. And the increase was largest for the firms investing the most. In 2014, the hundred U.S. companies with the largest R&D budgets invested 92% more in innovation than in 2003. And the gap between how much large firms spend on R&D compared to smaller ones has exhibited a noticeable upswing since 2009. Moreover, recent research suggests that nowadays basic research activities are more concentrated in specialized firms than was the case several decades ago.
It’s unrealistic to expect every firm to invest in R&D. Yet, the concentration of this crucial activity is quite concerning. A higher concentration of innovation efforts can be a major source of productivity differences between firms, and economists, policy makers, and business leaders should pay close attention to these trends. Competition at the global research frontier is getting more and more fierce. At the same time, many firms seem to be unable to keep up with the pace at which this development is unfolding. Those left standing become the superstar firms. The rest get left behind.
Imagine you’re a middle class American, with an average education and average skills. You’re employed. What are the chances that next year you’ll vault into the top third of earners?
It depends quite a bit on the company you work for.
For middle-skilled, middle class workers at low-paying firms, the chance of moving into the top third of the income distribution was just 0.6%, according to a recent paper analyzing U.S. Census data from 1990 to 2013. For middle-skilled, middle class workers at middle-paying firms, the chances of moving up the following year were 2.6%. But for middle-skilled middle class workers at high-paying firms, the chance was substantially better: nearly 12%. (The paper divides earners, skillsets, and firms up into thirds. So “high-paying” means the top third of firms, “middle-paying” means the middle third, and “low-paying” the bottom third. The same is true for skills. I use “middle class” to refer to the middle third of earners.)
Where you work matters, not just for how much you make, but for your economic mobility — how much you rise or fall in income over the course of your lifetime. If that sounds obvious, consider how often conversations about economic mobility leave companies out entirely, instead focusing on education, skills, or geography.
The new paper — by John Abowd of the U.S. Census Bureau, Kevin McKinney of the California Census Research Data Center, and Nellie Zhao of Cornell — adds to a growing literature connecting how well different firms pay to rising income inequality across wealthier economies. In a recent Harvard Business Review article, Stanford’s Nicholas Bloom argued that this between-firm inequality explains most of the increase in inequality between Americans since 1980, and is caused by an increasingly winner-take-all economy.
Abowd and his co-authors used standard statistical techniques to estimate how much of a worker’s earnings are attributable to employee-specific characteristics (e.g., skills, experience, etc.) and how much are attributable to the firm where they work. They also control for numerous relevant factors, from gender and ethnicity, to part-time vs. full-time, to the strength of the labor market in each year. The part attributable to the worker should be a measure of skill, and the part attributable to firms should measure how well different firms pay, independent of who they hire.
“We show that a typical worker of any skill type would benefit from working at a middle-paying firm relative to a low-paying firm,” the authors write in the paper. “But it is the workers of any skill type employed at high-paying firms who benefit the most.” Hence middle class workers of average skill are a bit more likely to move up in the earnings distribution if they work at a mid-paying firm, relative to a low-paying one. But the big difference is between working at a high-paying firm vs. everything else.
Moreover, the researchers found that once workers find those high-paying firms, they stay there. “Once you’re fortunate enough to find a job at a top paying firm, you get the benefits of that, independent of your position in the skills distribution,” said Abowd, “and you’re much more likely to stay put. If you’re fortunate enough to find one of these jobs, you’re probably not going to quit it.”
For Abowd, the mystery is what enables firms to sustain high wages. “They’re the most successful firms in their industries in many cases,” he said. And through some combination of timing, luck, intellectual property, valuable assets, and the right combination of employees, they have created a moat that competitors struggle to cross. In economics, that’s called a mystery; in the field of strategy, it’s called success.
The worrying thing is the growing gap between firms that have figured out a strategy that supports decent wages, and those that haven’t. A few firms seem to be doing well, and paying well, while the rest fall further behind. Some argue that this productivity gap is the result of too little competition; however, a nascent-but-growing body of research attributes it to technology.
In March, Andy Haldane, the Bank of England’s chief economist, offered another explanation for the growing gap between the most productive UK firms and the rest. “For the same reason most car-owners believe they are above-average drivers, most companies might well believe they have above-average levels of productivity,” he said in a speech at the London School of Economics. In other words, many executives may not realize how poorly they’re managing their firms.
And management does matter, not just for the success of companies but for the growth of entire economies. It may have at least some role in determining economic mobility for employees, too. Individuals’ chances of climbing the economic ladder over the course of their lifetimes depend in part on where they work. And whether that firm has the strategy, the business model, and the values that enable it to pay high wages depends in part on how well it is managed.
The competitiveness of the U.S. economy depends on technological progress, but recent data suggests that innovation is getting harder and the pace of growth is slowing down. A major challenge in business and policy spheres is to understand the environments that are most conducive to innovation. One way to do that is to look to history. In our research we focused on the golden age of invention: the late 19th and early 20th centuries, when America became the world’s preeminent industrial nation.
The golden age is associated with some of America’s leading technology pioneers, such as Thomas Edison and Nikola Tesla in electrical illumination and Alexander Graham Bell and Elisha Gray in telephony. Our analysis goes beyond these well-known individuals. We built a systematic data set that contains millions of patented inventions and millions of individuals in Federal Censuses from 1880 to 1940. We also linked patent data to state- and country-level information. By analyzing this data, we were able to shed light on why the U.S. was so innovative.
The context for technological development was very different a century ago. For instance, in 1880 most inventive activity was the result of inventors operating outside the boundaries of firms. Research laboratories, such as the famous one Thomas Edison opened in Menlo Park, New Jersey, in 1876, were rare. From the middle of the 20th century, however, the modern corporation started to dominate patenting. By 2000 almost 80% of patents were assigned to inventors associated with firms.
Nevertheless, the impact of innovation on economic growth was typically large. The chart below illustrates a strong relationship between patenting activity and GDP per capita at the state level. It predicts that an innovative state like Massachusetts, which from 1900 to 2000 had four times as many patents as a less innovative state, like Wyoming, would become 30% richer in terms of GDP per capita by 2000.
Innovation was more prevalent in some areas than others. The map below shows that regions in decline today, such as the Rust Belt, used to be innovation hotspots during the golden age. Our research finds that innovation flourished in densely populated areas where people could interact with one another, where capital markets to finance innovation were strong, and where inventors had access to well-connected markets. States with a legacy of slavery were considerably less innovative, and religion had a negative effect, too, though to a lesser degree. Places that were economically and socially open to disruptive new ideas tended to be more innovative, and they subsequently grew faster.
Inventors in the golden age were overwhelmingly white and male. They were less likely to marry and they had fewer children, perhaps because of the time commitments associated with making technological discoveries. Inventors in U.S. history have tended to be highly educated, in contrast to the common portrait of the uneducated amateur. They typically invented in pursuit of profit, and the financial returns to innovation were large. The innovation sector was highly competitive. The best inventors survived. The worst exited quickly.
The family backgrounds of inventors were distinctive. Having a father who was an inventor increased the likelihood of becoming one, perhaps because fathers passed along their aspirations, or perhaps because it facilitated access to the right types of social networks. Fathers’ incomes were positively correlated with the probability of becoming an inventor. This means that talented individuals from low-income families were more likely to be excluded. (This remains the case today.) Much of the link between family income and invention appears to have been due to education. High-income families invested in the education of their children, and, in turn, educated inventors were more productive.
Our study also examined the relationship between innovation and income inequality. Innovation is a disruptive force, and it may either reduce inequality or perpetuate it.
We found that the relationship between innovation and inequality depends on the type of inequality we’re talking about. Innovation was negatively correlated with the Gini coefficient, a broad measure of inequality. On the other hand, innovation was higher in places where the share of income held by the top 1% was larger, including in states like New Jersey, Massachusetts, and Connecticut, where patenting activity was extensive.
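For readers unfamiliar with the measure, the Gini coefficient summarizes how far an income distribution is from perfect equality (0) toward maximal concentration (1). The sketch below is a standard textbook computation, not the authors’ code, and the sample incomes are invented for illustration.

```python
def gini(incomes):
    """Gini coefficient, computed with the sorted-incomes shortcut
    G = (2 * sum(rank_i * x_i)) / (n * sum(x)) - (n + 1) / n,
    which is equivalent to the mean-absolute-difference definition."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum((rank + 1) * x for rank, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Perfect equality -> 0; extreme concentration -> close to 1.
print(gini([10, 10, 10, 10]))   # 0.0
print(gini([0, 0, 0, 100]))     # 0.75
```

The two extremes show why it works as a “broad” measure: it responds to the whole shape of the distribution, unlike a top-1% share, which only tracks the tail. That difference is what lets the two inequality measures move in opposite directions in our data.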
Our findings are consistent with two different approaches to thinking about inequality. If innovation is associated with financial rewards from patents and the associated monopoly rights, then we should see a positive association between innovation and inequality. But if innovation permits new entrants or small business owners to catch up with incumbent leaders, then innovation should lead to lower income inequality.
Our study is predicated on the idea that what made the United States an innovation powerhouse during the golden age is relevant to the way technological development progresses in the modern era. History matters because innovation and growth are largely about long-run changes. Creating an innovation sector that is both dynamic and inclusive was as challenging a century ago as it is today.
On February 3 President Trump issued an executive order directing the Treasury Department to conduct a sweeping review of financial regulation, including Dodd-Frank, the financial reform bill passed in 2010 as part of the Obama administration’s response to the 2008 financial crisis and subsequent recession.
“We expect to be cutting a lot of Dodd-Frank,” the president said, “because, frankly, I have so many people, friends of mine that had nice businesses, they can’t borrow money.”
Would such a rollback be wise? I spoke with Adam Posen, president of the Peterson Institute for International Economics, about Dodd-Frank, financial regulations, and their impact on nonfinancial companies. Dodd-Frank is enormously complex, and we didn’t touch on all of it.
Posen had lots of interesting things to say about what the law got right, what it made worse, and what the best criticisms of it are. His bottom line: “For a manager running a nonfinancial business, the proposed reforms to Dodd-Frank are probably a bad trade-off.”
We spoke for a while, so I condensed his comments into several major themes.
Why Dodd-Frank was created:
“Following the American and global financial crisis, there was legitimate political demand to try to prevent it from repeating. And the cost in terms of lost output, lost jobs, lost houses, lost opportunities, as well as political repercussions, was so enormous that it seemed to mandate a very big rethinking of the financial system. The rethink took place in the context of a lot of skepticism (at best) about the banking system, the banking business model, and its contribution to American well-being. And, of course, as reflected in the names Dodd and Frank, [it passed] at a time when the Democratic Party had a majority in both the Senate and the House.”
It’s a historically complex piece of legislation:
“It was an enormously complex piece of legislation. It’s almost as complex as health care reform. There’s almost nothing else comparable. Even its proponents and people very concerned about financial stability admit it had some unneeded complexities.”
The tension that underlies all financial regulation:
“One of the main themes of the legislation was to make sure that banks were constrained in their ability to take risks using public money. In my view, that is rightly seen as the core of the issue, that banks would make loans and take gambles that, if they paid off, got profits for the owners of the banks and management, but if they failed big-time got bailed out by public deposit insurance or direct government bailouts. This was a question not only of injustice and waste, but of trying to prevent the next crisis, because if banks had more of their own money at stake they would be less likely to engage in risky behavior.
“A subset of this is responding to what’s commonly called being ‘too big to fail.’ There is a strong line of evidence showing that banks that are too big to fail, or consider themselves too big to fail, repeatedly get into trouble, have a lower cost of borrowing than other banks, have more political favors done for them, and so on. There also, however, is strong evidence that it is a bad idea to let big banks fail in the midst of a crisis. The so-called Lehman shock, when the Federal Reserve and U.S. Treasury allowed Lehman Brothers to go under in a disorderly fashion, demonstrates how much harm can be done to the economy when something as large as a big commercial bank fails. This is a core dilemma of Dodd-Frank, and all financial regulation. How do you deal with the reality that very large financial firms are riskier and take advantage of their position in the system when you know that ignoring that can cause harm?”
What Dodd-Frank does:
“The upshot of all this was a series of measures that first required banks to have more capital on hand, meaning more of their own money would be at risk when they lend, and second, a set of measures restricting certain kinds of lending activities, in particular, nontransparent chopping and dicing of loans into other securities. And a third piece of the action, particularly aimed at the too-big-to-fail institutions, is that they were forced to create what are called ‘living wills,’ which are supposed to make it easier for authorities to unwind or shut down a troubled financial institution when a crisis hits.
“The fourth, and in some ways perhaps most important, was that the largest institutions, not just in the U.S. but throughout the advanced economies, are now subject to what are called stress tests. Stress tests are a specific form of simulation developed by the Federal Reserve and other central banks to allow them to figure out how badly a given financial institution’s portfolio would hold up if there was a broad sell-off across a bunch of asset classes, or a specific kind of shock like what we suffered in 2008. These are probably the most useful tool the central banks have in terms of forcing banks to disclose to their supervisors the risks they actually face. And, additionally, the hope is this makes the banks more cognizant of the risks they face. The flip side of the stress test, though, is that there is some possibility for the banks to reverse-engineer and game the results. And that’s unfortunately inherent in the process.”
Are we any safer from financial crises than we were a decade ago?
“There’s no question that the system is safer as a result of this. But it may have as much to do with the fact that supervisors and bankers are feeling burned and scared as it does with the specifics of Dodd-Frank. As we saw after the Great Depression, as we saw in Japan after the early 1990s, in Canada after the early 1990s, once people have been through a major financial crisis, they tend to be very risk averse. So banks and many other financial institutions are being much more careful about where they’re putting their money. They’re much more reluctant to lend money to lower-grade borrowers. Similarly, supervisors, the implementation arm of bank regulators, are able to be much more intrusive and tough on banks and financial institutions than they were, and are much less willing to give the benefit of the doubt. And in the market more broadly, there is a sense that certain kinds of bad assets should be avoided.”
How Dodd-Frank could make the next crisis worse:
“I have two concerns that persist. First is, we continue to have a system where financial risk and intermediation are highly concentrated, more concentrated than pre-crisis. [The size and influence of U.S. financial institutions] says to me that there’s been insufficient change and still puts us at risk of systemic fragility. This is better in the U.S. and Germany than in a lot of other countries, like the UK, France, Italy, and Japan, where the financial system is even more concentrated and less diverse. The continued lack of competition in the system means there are fewer alternatives for businesses to borrow from when there are problems.
The second concern I have has to do with crisis response rather than crisis prevention. What we saw in 2008 and 2009 was, once the crisis hit, it required very large measures on the part of central banks and governments to keep the situation from spiraling out of control. Congressional leadership included in Dodd-Frank some measures to constrain the Federal Reserve’s power. In particular, they very much limited the ability of the Federal Reserve to either purchase a wide range of assets in the midst of a big sell-off, or to bail out institutions that are not banks, narrowly defined. Practically every other major central bank has more powers to do that since the crisis, whereas the Federal Reserve, as a result of Dodd-Frank, has far less power, far less capacity to do that. And so even if Dodd-Frank has helped on some fronts to make a crisis less likely, Dodd-Frank contains in it restrictions on government and central bank action that are likely to make whatever next crisis happens be worse.”
The best criticisms of Dodd-Frank:
“There are a lot of reasons for legitimate criticism of Dodd-Frank, even beyond the top-level worries about financial stability. Three, in particular, have been raised by the administration and the current Republican leadership in Congress, [and] have some merit.
First, the costs of compliance with these stress tests and disclosures and new capital rules on banks and other financial firms are extremely high. You’ve just got an enormous share of the financial sector workforce engaging in the kinds of things that non-banking businesses complained about after Sarbanes-Oxley, times ten.
The second issue in Dodd-Frank that is legitimate to be concerned about is: Is there too much burden on the banking system and not enough attention paid to the rest of the financial system? Most of the regulations are specified in terms of bank oversight, in part because banks were at the core of the problem and have the most public insurance, and in part because that’s what’s easiest to regulate. Nonetheless, it leads to distortions in good times, that non-banks get a considerable advantage versus the banking system, because they’re not as regulated. And it raises the potential that in bad times the crisis could come from the non-bank part of the system anyway.
A third concern has to do with capital, especially given too-big-to-fail worries — there is a credible argument that banks should hold more capital, not only than they did in 2008, but than they currently are required to do. This is not a partisan issue. The Republican vice chairman of the FDIC, Thomas Hoenig, a former Federal Reserve bank president, for example, is a strong advocate of more capital for the banking system, as are some Republicans in Congress. There’s also a question about making sure that [the system doesn’t] unfairly disadvantage American financial institutions [by requiring them to hold more capital than foreign competitors.]”
The worst criticisms of Dodd-Frank:
“What the administration and some of the Republican congressional leadership are pushing under Dodd-Frank reform goes beyond [the criticisms mentioned above], and in my opinion goes against the public interest. For one thing, there are some people who say the problem isn’t too little capital. It’s too much. They make the misleading claim that having more capital constrains the amount of lending available in the society. This is untrue. We’ve seen the [overall] amount of lending in the U.S. and other countries continue to rise as more capital is required. It’s also untrue because the requirement for more capital is about how banks fund their balance sheets, not the size of the balance sheet. It is true that this might cut into bank profits, especially during a transitional period, as you build up capital. But that’s not an argument against tough capital requirements. Yet it is possible we may get some rollback of the capital requirements.
The second thing that is being attacked under the guise of Dodd-Frank reform is a lot of consumer protection rules that were put in place after the financial crisis. This includes but is not limited to the issue of the Consumer Finance Protection Bureau (CFPB) as well as a bunch of measures about disclosure to individual investors and savers. It is completely unclear to me how to justify this, full stop. This is like food and drug regulation. You should not be able to sell to American citizens items that are not safe or effective. This is potentially a pure transfer of risk from the financial sector to American households and a fee directly from the American households to the financial sector.
A third set of potential changes under the guise of Dodd-Frank reforms are loosening up what kinds of activities banks are allowed to engage in. These had been restricted somewhat after the crisis. I am less concerned about the impact of this than some other changes previously mentioned. But it does create more opportunities for conflict of interest between banks, including investment banks, and their customers.”
Why repeal of Dodd-Frank would be a bad deal for non-financial companies:
“For a manager running a non-financial business, the proposed reforms to Dodd-Frank are probably a bad tradeoff that will have little visible impact in the short term. Ideally, if the need to accumulate more capital is diminished, and the costs of compliance are reduced, there should be some savings passed on to customers in the form of loans. [The] evidence is [that] this is unlikely to occur. We still have a very concentrated banking system. There is a lot of market power for banks over the individual businesses that borrow from them. It’s not easy for any individual business to say, oh, OK, I don’t like you, so I’ll take my business to another bank. It is possible that there may be a little bit easier lending standards, in particular for certain kinds of speculative activities.”
The real problem with the economy isn’t too little lending:
“We’ve got low interest rates. We’ve got growth prospects in the stock market. And not very much capex, not very much innovation. Dodd-Frank, and a Dodd-Frank rollback, is unlikely to address that.”
On allocating talent between finance and the rest of the economy:
“We have had a world in which there’s been a distortion. A lot of the best talent has been pulled into the financial sector instead of into real business because of some of the outlandish compensation the financial sector’s been able to generate. We’ve seen a beneficial rebalancing of that in recent years, where the risks involved and day-to-day compliance hassles being in a financial career have gone up, and so that should have increased the pool of talented people available to go into other business activities. At the margin, any Dodd-Frank rollback will probably reverse that a little bit, but I don’t think will reverse it fully.”
More lending isn’t worth a financial crisis:
“The problems with Dodd-Frank are probably more that we didn’t change the system enough at a broad level, not that we overregulated, not that we demanded too much capital. We probably won’t have a financial crisis any time soon, because generally, across history, including U.S. history, once you’ve had one, everyone gets so scared and buttoned down that it’s at least a decade before you have the next one. But the kind of rollback on Dodd-Frank we’re talking about raises somewhat the risk that there will be a next one. It certainly raises the likely costs when the next one happens. So for an average American business, their interest over the long term would be in continued financial reform and not Dodd-Frank rollback.”
It’s not realistic to erect a physical barrier and shove the costs onto a top trading partner without weakening your own economy and putting in jeopardy the 1.1 million American jobs that depend on that trade. But we could harness economic forces rather than flying in the face of them. What the United States needs are smart economic policies that disrupt the market forces currently driving undocumented immigration.
A Wall Would Hurt American Pocketbooks
Never mind the cost of the wall. Forget how taxing goods coming across the border would raise prices. Put all that aside. Even if a wall appeared out of thin air, without a dollar being spent, it would still cost Americans money.
Here’s how. Those who cross the border without documentation and work as farm laborers play an outsize role in American food production. The H-2A visa program allows agriculture producers to bring workers into the U.S. for seasonal work (like picking strawberries). However, no similar legal channel is available for sourcing year-round work (like milking cows) that many Americans are unwilling to do. This helps explain why more than half of all U.S. farm laborers are in the country without legal documentation.
If a wall magically appeared on the U.S.-Mexico border, farmers who currently rely on undocumented immigrants would lose access to that labor supply. Those producers would face higher labor costs and potential production disruptions. That means higher prices for milk, meat, and other foods. The same goes for construction and other industries where large numbers of undocumented immigrants find work.
A Wall Could Actually Make the Border Less Safe
So Trump’s wall would cost Americans, but at least the border would be secure, right? Unfortunately, the economics of border control suggests that a wall could actually make the border less secure. Migrants routinely pay thousands of dollars to “coyotes” to guide them across the border illegally. Many of these coyotes are freelancers, shoestring operations that survive in the marketplace thanks to the U.S.’s (literally) low barriers to entry.
Trump’s aim is to raise those barriers. But, as any econ or MBA student knows, higher barriers to entry can actually be good for business. That’s true especially for the deep-pocketed firms — here, the Mexican cartels — that are able to invest sufficiently to overcome them. A border wall could thus create an opportunity for the cartels to monopolize the business of undocumented immigration, padding their pockets while also tightening their grip over the border regions.
Other Economic Solutions to Consider
A better way to secure the border is to reduce demand for undocumented immigrants. How do you do that? Allow foreign laborers who are needed in the United States to enter legally. Once work can be had legally, the price that people will pay to cross illegally will plummet. Coyotes will see demand for their services dry up. The U.S. Border Patrol can then focus on stopping more dangerous criminal activity.
Effective immigration reform would ensure that needed workers can enter the country legally to match the existing demand for such laborers. But it’s not enough simply to open a new legal immigration channel — employers and immigrants must choose to use it.
Currently it is very convenient to hire undocumented labor because there is a good supply of these workers actively seeking jobs. Employers therefore have little incentive to use a legal process. In this scenario, the country could remain stuck in a situation where a legal immigration process exists, but migrants continue to enter the country outside of that process because employers keep hiring them — and employers keep hiring undocumented immigrants because there are plenty around to meet their needs.
This sort of situation is a classic “chicken-and-egg” problem: in game-theory terms, a coordination game with two stable outcomes. Fortunately, game theory also suggests how to crack such strategic challenges: incentivize one side to “move first” in choosing legal over undocumented immigration.
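To make the coordination logic concrete, here is a minimal sketch with hypothetical payoffs. The numbers are invented for illustration, not drawn from any study; they are chosen so that each side does best by matching the channel the other side uses. The result is two stable outcomes, which is exactly why some outside incentive is needed to shift the system from one to the other.

```python
# Hypothetical payoffs: (employer, worker) for each pair of channel choices.
# Illustrative numbers only -- each side prefers to match the other's channel.
payoffs = {
    ("legal", "legal"):               (3, 3),
    ("legal", "undocumented"):        (0, 1),
    ("undocumented", "legal"):        (1, 0),
    ("undocumented", "undocumented"): (2, 2),
}
channels = ("legal", "undocumented")

def nash_equilibria(payoffs):
    """Return pure-strategy Nash equilibria: channel profiles where
    neither the employer nor the worker gains by unilaterally switching."""
    eqs = []
    for e in channels:
        for w in channels:
            pe, pw = payoffs[(e, w)]
            employer_ok = all(payoffs[(e2, w)][0] <= pe for e2 in channels)
            worker_ok = all(payoffs[(e, w2)][1] <= pw for w2 in channels)
            if employer_ok and worker_ok:
                eqs.append((e, w))
    return eqs

print(nash_equilibria(payoffs))  # both-legal and both-undocumented are stable
```

Both the all-legal and all-undocumented profiles are equilibria, and the all-legal one pays both sides more. A penalty for hiring undocumented labor, as proposed below, lowers the employer’s payoff in the undocumented column and can eliminate the worse equilibrium.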
To Force Employers to Move First, Create an Economic Incentive
Suppose that Congress passed an immigration reform package that not only created a new legal channel for needed foreign workers to enter the country but also included robust monitoring and penalties for employers who continue to hire undocumented immigrants. So long as the legal immigration process is sufficiently convenient, American employers would have an incentive to look first for immigrant labor through the new legal channel, to avoid the risk of being penalized for hiring illegal labor. Knowing that employers are actively looking for legal immigrants, foreign workers would then have an incentive themselves to look for legal work rather than entering the country outside of the proper channels.
A Targeted Remittance Tax Would Incentivize Legal Immigration
Individuals sent more than $130 billion from the United States to other countries in 2015. The vast majority was remitted through electronic transfer despite substantial fees charged by the leading provider, Western Union (typically on the order of 5%–10%). Right now, legal and undocumented immigrants pay the same fees.
But what if those in the United States (from any country) without documentation had to pay an additional surcharge on their remittances? Anticipating that they would not be able to send as much money home, prospective immigrants would have less incentive to enter the country outside of the legal process. Knowing that fewer immigrants are entering the country outside of the legal process, employers would have more incentive to make the necessary arrangements to bring in legal immigrants for those jobs that Americans turn down, including helping with the costs associated with securing the necessary documentation.
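The mechanism is simple arithmetic. In the illustration below, every figure is hypothetical except the fee range, which the article cites as 5%–10%; the surcharge level is an assumption, not a proposal from the text.

```python
# All figures hypothetical except the fee range (the article cites 5-10%).
amount = 400.00      # assumed monthly remittance, in dollars
base_fee = 0.07      # assumed provider fee, inside the cited 5-10% range
surcharge = 0.10     # hypothetical extra surcharge on undocumented senders

# Amount actually delivered home under each status.
delivered_legal = amount * (1 - base_fee)
delivered_undocumented = amount * (1 - base_fee - surcharge)

print(f"legal sender delivers:        ${delivered_legal:.2f}")
print(f"undocumented sender delivers: ${delivered_undocumented:.2f}")
```

The gap between the two delivered amounts is the per-transfer cost of staying outside the legal process, which is what shifts the incentives of both prospective migrants and employers.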
A natural concern with this idea of taxing undocumented immigrants’ remittances is that those currently in the U.S. under these circumstances will be harmed. They will no longer be able to send as much money home.
Fortunately, there are ways to mitigate this harm. One is to allow those currently in the U.S. without documentation to enter the legal immigration process without first needing to return home. In the long run, immigrants will be better off as more legal jobs become available. They would no longer face a dangerous journey across the border just to find work.
A surcharge on undocumented immigrants’ remittances would be difficult to implement. Workers in the U.S. without documentation could try to evade this new fee by finding someone with legal status to send their money for them. (They might also turn to mobile payment services such as M-Pesa and alternative currencies such as Bitcoin that are even more difficult to monitor than electronic transfers.) To deter such “surcharge evasion,” monitoring and enforcement efforts would be required. Even so, such efforts would likely be far easier, and far cheaper, than building and maintaining a multibillion-dollar wall.
By applying economic principles wisely, the government could secure the U.S.-Mexico border without harming American businesses or consumers — and without Trump’s wall. In so doing, it could also make life better for people looking for work in the United States.