The problem of measuring productivity

When it comes to productivity, only two things are indisputable: The official rate of U.S. productivity growth has stalled since at least 2007, after slowing for years before that, and there is no consensus about why or what to do about it.

There is also broad agreement that without stronger productivity growth, standards of living for the vast majority of Americans will not improve appreciably, which is likely to fuel the current wave of populist discontent.

One explanation, however, is increasingly popular even as it faces considerable skepticism among economists and policymakers: The problem is less about productivity than about our inability to measure the effect of the digital and now data revolution that has redefined the American economy. There is a growing chasm between what our economic system is and what our numbers are capable of measuring.

Take Google. Its search engine handles billions of queries a day, every one of them free to the user. The same could be said of service after service, such as Google Maps or Waze (also owned by Google). While some of what they offer adds little to collective economic output (a group chat among a gaggle of teens has no immediate economic value), a considerable portion does. A navigation app, for example, reduces time spent on the road or stuck in traffic, potentially cuts the amount of gas used, and frees up that time and those savings for other, possibly more productive, uses.

Several years ago Erik Brynjolfsson, an MIT economist, tried to measure what these "free goods of the Internet" might be adding to gross domestic product. His method was innovative: gauge the value people assign to their time, then multiply that by the time spent using Google and similar services. He estimated that as of 2012 such free goods might add $300 billion to gross domestic product, a figure growing by about $40 billion a year, which would put it close to $500 billion in 2017. That is a large number, but these were only halting first steps in what is surely a complicated and as yet unresolved effort to factor the innovations of the past decade and more into calculations of economic output and activity.

In early May the Bureau of Economic Analysis released a paper concluding that its own measures of inflation and GDP had been unable to keep up with changes in the economy, and hence had been off by as much as half a percentage point a year.

These issues are not new, but they remain unresolved. The experience of the Federal Reserve in the 1990s is instructive. The U.S. economy was booming, new technologies were proliferating, and yet productivity numbers were anemic. Then-Fed Chairman Alan Greenspan tasked the central bank's economists with investigating. Building on Robert Solow's 1987 observation that you can see the computer age everywhere but in the productivity statistics, the Fed began to reassess how productivity was calculated and understood. That led to more emphasis on alternative measures such as multifactor productivity, which captures the gains in output not explained by labor and capital inputs alone; the Fed's economists also took the longer view that new innovations can take years to show up in official statistics.

Measured productivity did begin to accelerate in the mid-1990s, and policymakers paid closer attention to those broader measures. Yet the debates today haven't changed much: A few voices suggest that we fail to account for the "consumer surplus" and other gains of the digital revolution, while many others, such as the economist Robert Gordon, contend that the slowdown is real, the product of a mature economy whose recent innovations are less transformative than those of the past. For them, the statistics, even if slightly outmoded, reflect an unarguable reality whose economic and social consequences are evident.

Even those who acknowledge some underestimation of productivity tend to argue that adding back a plausible amount for the hard-to-quantify effects of the digital revolution still wouldn't return growth to the rates of the 20th century.

Perhaps. Or perhaps the mismeasurement now being debated is only a fraction of the true mismeasurement. Even more, perhaps the entire framework is now flawed. Today's hard numbers clearly fail to account for observable contradictions, such as how high levels of employment can coexist with very little wage growth and extremely low inflation.

If various free or inexpensive digital solutions are generating adequate output while requiring little labor or capital, that would explain why wage growth and capital investment are weak. And if those solutions are also making goods and services cheaper, that would in part explain why measured productivity is weak: GDP counts output at market prices, so a free or cheaper service adds little to it no matter how much value users get from it. A cheaper digital solution leading to cheaper and more efficient goods doesn't do much for GDP or corporate revenue, but it meets collective needs just as well as a labor- or capital-intensive solution that yields more measured output and higher labor productivity.

All of this suggests that much more attention should be paid to the real possibility that productivity isn't slowing the way we think, or that slower measured productivity doesn't carry the same consequences it did when the economy was primarily about making physical goods. If we emphasized quantity and quality rather than market price, the picture might look different. Governments so far seem unwilling to allocate resources to building a system that better accounts for free services and for the way the deflationary effects of technology can simultaneously improve standards of living and lower GDP. But it's clear that if we are going to understand the causes of inequality and formulate solutions, we need to start with data we can count on.

Editorial on 05/28/2017
