Behavioral economics studies the effects of psychological, social, cognitive, and emotional factors on the economic decisions of individuals and institutions, and the consequences for market prices, returns, and resource allocation.
The first map shows that California has an economy the same size as France's: both produce nearly $2.54 trillion in output.
Together, the remaining western states produce $1.8 trillion in GDP, almost the equivalent of Italy’s, the world’s eighth-largest economy, with $1.81 trillion in output.
The economy of the north-eastern states – from Pennsylvania and Massachusetts up to Maine, and including New York – amounts to $4.2 trillion, similar to Japan's at $4.12 trillion.
Florida and Alabama, with $1.087 trillion in GDP, almost match Mexico at $1.14 trillion, while Texas’s economy, $1.59 trillion, is close to that of Brazil at $1.74 trillion.
Germany, Europe’s biggest economy, has a GDP roughly the same as 12 of the eastern states, at $3.35 trillion in 2015.
The GDP of US cities
The next map matches the economies of metropolitan areas of the United States to entire countries.
New York’s GDP matches that of Canada, both at around $1.5 trillion. Los Angeles produces a similar amount to Indonesia, at around $900 billion in GDP each. Miami’s GDP of around $300 billion is comparable to that of South Africa.
Over half of US GDP in cities
This final map shows how America’s economic output is concentrated in metropolitan areas. It shows that just over half of the $18 trillion in GDP comes from the country’s 23 largest metro areas.
The top five are New York, Los Angeles, Chicago, Houston and Washington.
Understanding the future of work is difficult, if not impossible. According to the MacArthur Foundation, 65% of today’s schoolchildren will eventually be employed in jobs that don’t exist yet.
As technology, globalization, and many other factors continue to redefine work, one constant will be the need for soft skills, or “skills for life.” Peer-to-peer deliberation, brainstorming, and collaboration are familiar to working professionals today, but we can’t assume that they come naturally, especially to the millions of students without access to proper training and college- and career-planning resources. In fact, a growing global skills gap suggests that many young workers are already falling behind.
According to the United States Bureau of Labor Statistics, the US economy has 5.9 million job openings, while 7.8 million people remain unemployed. In Europe, 5.6 million young people are unemployed, while another two million are neither working nor in school.
As young people worldwide express their eagerness to work, many businesses say they struggle to find candidates with the appropriate qualifications for open positions. For example, one recent survey in East Africa found as many as 63% of recent graduates “lacking job market skills.”
This skills gap is extremely expensive. In China, it is estimated to cost the economy $250 billion annually. In the US, the annual cost is $160 billion, with companies losing $14,000 for every job unfilled for longer than three months, while taxpayers bear the cost of unemployment insurance and other safety-net programs. In the United Kingdom and Australia, the skills gap costs $29 billion and $6 billion per year, respectively.
When jobs are unfilled for too long, they are more likely to be permanently outsourced to countries with lower labor costs or a better-suited talent pool. This trend is now threatening traditionally stable economies; according to some estimates, by 2020 as many as 23 million workers in advanced economies will not have the right skills to be gainfully employed in meaningful careers.
Meanwhile, the myth that soft skills are innate – and that only technical skills can be taught – continues to fuel the skills gap. In reality, seemingly abstract traits like problem-solving ability, work ethic, and self-awareness can be developed in the right setting and with the right tools.
Many students cannot pinpoint exactly which skills they lack, but they are acutely aware that they lack self-esteem as a result. Consider Abi, a student from Boston who had difficulties in her home life and once believed she would never be successful. Simple mentoring and training programs carried out by College For Every Student (a nonprofit organization funded by the GE Foundation) taught her essential skills and boosted her confidence, setting in motion a virtuous cycle of personal development.
Indeed, the skills gap is even wider for young people from low-income households, who largely miss out on educational and job opportunities. Only 9% of people within this underprivileged demographic earn a college degree, whereas college is a prerequisite for most jobs in today’s economy. By 2018, over 60% of the 47 million job openings in the US will require some kind of post-secondary education. And in Europe, less than 25% of students feel they have received sufficient information on post-secondary education opportunities.
The skills gap and the opportunity gap go hand in hand. If we do not improve access to college-level education and career-ready skills training for youth from all economic groups, the skills gap will widen, and inequality will continue to worsen, with obvious implications for social and political stability.
Fortunately, the problem can be solved. Guaranteeing future economic health and stability in an era of unprecedented change requires, at a minimum, that we expand access to education and skills training for all future workforce participants – not just a select few.
Whatever approach we take must be collaborative and comprehensive, ensuring that young people learn the soft skills they will actually need for all future scenarios. With targeted funding and a shared strategic framework, governments, educators, and businesses can close the skills gap for the current crop of young people, and for generations yet to come.
Today’s young people are diverse, smart, and determined to tackle the challenges facing tomorrow’s workforce. Private and public institutions have a responsibility to teach today’s students how to prepare for those challenges.
With the right strategy, we can help millions of young people secure a place in the twenty-first-century economy. Every student developing soft skills today could potentially change the world for the better in the decades to come. That will be a future from which we will all benefit.
The use of big data is likely to transform economic measurement in ways that we are only beginning to grasp (Cavallo and Rigobon 2016). Big data encompasses four fundamental shifts from standard datasets: volume, veracity, velocity, and variety. It is commonplace to focus on the first three of these. Clearly, scanner data offers us more observations, fewer opportunities for human input errors to creep in, and the capacity to measure prices and sales almost instantaneously instead of waiting for monthly surveys. These benefits are creating opportunities for improving the timeliness and quality of existing measurement techniques. However, in a new paper, we argue that the most exciting part of the big data revolution is likely to come from the new varieties of data that have become available (Redding and Weinstein 2016).
One of the principal challenges in producing numbers like real GDP or real wages is that while nominal variables are easy to measure, the measurement of real variables requires a theory of economic behaviour. Just as accountants like to joke that “sales are a fact; profits are an idea”, in economics, we face a similar conundrum -“consumer expenditures are a fact; real income is an idea”. While few people would disagree about what the nominal sales of any firm are or how much a consumer spends on a product, translating nominal numbers into real output or welfare is challenging (and was a key component of the path-breaking work of the recent Nobel Laureate, Angus Deaton, as in Deaton and Muellbauer 1980).
Unfortunately, the problem is not that we don’t know how to convert nominal expenditures into welfare; it is that we know too many ways of doing it. Broadly speaking, the profession has settled on three distinct approaches. First, macroeconomists typically assume there are no demand shifts when measuring real income movements. A foundational assumption in these models is the idea that taste parameters never shift, so the utility function is constant. Economists make this assumption in order to derive a ‘money-metric’ utility function, which guarantees that welfare can be measured if one only knows income and prices. Applied microeconomists take a very different approach by assuming that there are time-varying demand and supply curves. Although it is not typically acknowledged, the existence of these time-varying demand curves is inconsistent with the macroeconomist’s idea that taste parameters are fixed. Demand shifts reflect the fact that a consumer likes one product more than another, which in general will mean that utility is not money-metric. Finally, actual price and real output data are constructed by statistical agencies using formulas that differ from either approach.
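The macroeconomist's money-metric logic can be illustrated with a small numerical sketch (all numbers are hypothetical, and CES preferences are assumed purely for illustration): when taste parameters are fixed, indirect utility equals expenditure divided by a price index, so welfare can be read off income and prices alone.

```python
import math

# Hypothetical CES example: with FIXED taste parameters a_i, welfare is
# money-metric -- utility at the optimum equals expenditure E divided by
# the CES price index P, so real income follows from income and prices.

sigma = 3.0                      # elasticity of substitution (assumed)
a = [1.0, 2.0, 3.0]              # fixed taste parameters (hypothetical)
p = [1.0, 2.0, 1.5]              # prices
E = 100.0                        # nominal expenditure

# CES price index: P = (sum_i a_i^sigma * p_i^(1-sigma))^(1/(1-sigma))
P = sum(ai**sigma * pi**(1 - sigma) for ai, pi in zip(a, p)) ** (1 / (1 - sigma))

# Optimal CES demands: c_i = a_i^sigma * (p_i/P)^(-sigma) * E/P
c = [ai**sigma * (pi / P) ** (-sigma) * E / P for ai, pi in zip(a, p)]

# Utility at the optimum: U = (sum_i a_i * c_i^((sigma-1)/sigma))^(sigma/(sigma-1))
U = sum(ai * ci ** ((sigma - 1) / sigma) for ai, ci in zip(a, c)) ** (sigma / (sigma - 1))

print(f"budget spent: {sum(pi * ci for pi, ci in zip(p, c)):.6f}")
print(f"U = {U:.4f},  E/P = {E/P:.4f}")   # the two coincide
```

If the taste parameters `a` were allowed to shift between periods, this identity would break down, which is precisely the tension with the applied-micro approach described above.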
The inconsistencies are so deep that the same assumptions that form the foundation of demand-system estimation can be used to prove that standard price indexes are incorrect. Conversely, the assumptions underlying standard price indexes invalidate demand-system estimation: if no demand parameter ever shifts, one can recover the demand elasticity without recourse to estimation. In other words, extant micro and macro welfare estimates are inconsistent with each other as well as the data.
One can ignore these problems in conventional datasets, because one typically does not observe all the key features of each transaction. Thus, one can assume errors in demand systems are due to unobserved quality changes, measurement error, or something other than a shift in demand. However, it is much harder to ignore these contradictions in barcode data because we observe prices and quantities sold of precisely defined products. While conventional price measurement is based on identifying ‘the price’ for heterogeneous bundles of goods (milk, carbonated beverages, computers) that contain substantial quality variation within product categories, barcode data eliminates the ambiguity in the definition of a product. Put differently, while one might be able to assume that shifts in demand for carbonated beverages are the result of unobserved quality upgrading, it strains credulity to make the same assumption for a 330mL can of Coke Zero.
The precision of the data impels us to be precise about the theoretical assumptions. In order to deal with this problem, we present a new empirical methodology, which we term ‘the unified approach’, that nests all major price indexes used in welfare or demand system analysis (Redding and Weinstein 2016). We show that the measures of welfare used by economists and statistical agencies can be understood in terms of an internally consistent approach that has been altered by ignoring data or theoretical conditions that must hold in any coherent system. As shown in Figure 1, all of the major approaches to price (and therefore welfare) measurement are actually linked together via the unified approach.
Figure 1. The big picture
The first key insight of our unified approach is that any demand system errors (e.g. taste shocks) must show up in the utility and unit expenditure functions, and therefore the price index. However, all economically motivated price indexes are derived under the assumption that the demand parameter for each good is time invariant. Researchers make this assumption because it is a sufficient condition to guarantee the existence of a constant aggregate utility function, but it flies in the face of an overwhelming body of evidence that demand curves shift.
Our analysis shows that the assumption of time-invariant preferences for each good is neither the correct nor the necessary condition to make consistent comparisons of welfare over time when there are demand shocks for individual goods. To be able to make such consistent welfare comparisons, one must obtain the same change in the cost of living between a pair of time periods, whether one uses today’s preferences for both periods, yesterday’s preferences for both periods, or the preferences for each period. This necessary condition is (trivially) satisfied when preferences for each good are time invariant. One of our main contributions is to show that this necessary condition can also be satisfied when shifts in demand cancel on average. This yields our ‘unified price index’ that is valid even when the set of goods is changing over time (due to product innovation, as in Feenstra 1994).
Our index enables us to identify a novel form of bias that arises from the assumption of time-invariant demand for each good in existing price indexes. ‘Consumer valuation bias’ arises whenever expenditure shares respond to demand shifts. Since conventional indexes assume that expenditure shares are only affected by price changes, they will be biased whenever expenditure share changes are correlated with demand shifts. For example, if higher consumer demand causes prices to rise, a conventional index will overstate cost-of-living changes because it will not adjust for the fact that some of the price increase is offset by the higher utility per unit associated with the demand shift.
Our second main insight is to develop a novel way of estimating the elasticity of substitution between goods. Extant approaches focus on identification from supply and demand systems. However, we show that one can also identify this parameter by combining information from the demand system and unit expenditure function. One of the desirable properties of this ‘reverse-weighting’ estimator is that it minimises departures from money-metric utility given the observed data on prices and expenditure and the assumed constant elasticity of substitution utility function.
Finally, we use barcode data to examine the properties of our unified price index and reverse-weighting estimator. We find that we obtain reasonable elasticity estimates in the sense that they are similar to those identified using other methodologies on the same data. Moreover, the consumer valuation biases in existing indexes appear to be quite substantial, suggesting that allowing for demand shifts is an economically important force in understanding price and real income changes.
Figure 2. Changes in cost of living for various indexes
We can see these differences at the aggregate level in Figure 2, which plots the expenditure-share-weighted average of the changes in the cost of living across product groups for each of the different index numbers over time, again using the initial-period expenditure share weights. Not surprisingly, the Fisher, Törnqvist and Sato-Vartia indexes result in almost identical changes in the cost of living that are bounded by the Paasche and Laspeyres indexes. This similarity is driven by the fact that they all assume no demand shifts for any good. The distance between the Sato-Vartia index and the Common-Goods Unified Price Index tells us the importance of the consumer valuation bias, and the distance between the Sato-Vartia and the Feenstra-CPI indicates the value of the adjustment for changes in variety. In other words, big data suggests that standard methods of measuring welfare overstate cost-of-living increases by several percentage points per year because they ignore new goods and demand shifts.
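The conventional indexes compared above have standard textbook formulas, which the following sketch computes on made-up two-good data. The unified price index itself requires an estimated elasticity of substitution, so only the conventional indexes are shown; all prices and quantities are hypothetical.

```python
import math

def log_mean(a, b):
    """Logarithmic mean, used to build the Sato-Vartia weights."""
    return a if math.isclose(a, b) else (a - b) / (math.log(a) - math.log(b))

def price_indexes(p0, q0, p1, q1):
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    laspeyres = dot(p1, q0) / dot(p0, q0)    # base-period quantity weights
    paasche = dot(p1, q1) / dot(p0, q1)      # current-period quantity weights
    fisher = math.sqrt(laspeyres * paasche)  # geometric mean of the two
    s0 = [p * q / dot(p0, q0) for p, q in zip(p0, q0)]  # expenditure shares
    s1 = [p * q / dot(p1, q1) for p, q in zip(p1, q1)]
    # Törnqvist: log price changes weighted by average expenditure shares
    tornqvist = math.exp(sum(0.5 * (a + b) * math.log(x / y)
                             for a, b, x, y in zip(s0, s1, p1, p0)))
    # Sato-Vartia: weights are normalised logarithmic means of the shares
    lm = [log_mean(b, a) for a, b in zip(s0, s1)]
    w = [x / sum(lm) for x in lm]
    sato_vartia = math.exp(sum(wi * math.log(x / y)
                               for wi, x, y in zip(w, p1, p0)))
    return laspeyres, paasche, fisher, tornqvist, sato_vartia

# Hypothetical data: good 1's price rises faster, so quantities shift to good 2
p0, q0 = [1.0, 2.0], [10.0, 5.0]
p1, q1 = [1.5, 2.2], [8.0, 6.0]
lasp, paas, fish, torn, sv = price_indexes(p0, q0, p1, q1)
print(f"Laspeyres {lasp:.4f}  Paasche {paas:.4f}  Fisher {fish:.4f}  "
      f"Tornqvist {torn:.4f}  Sato-Vartia {sv:.4f}")
```

With substitution away from the fast-rising price, Laspeyres exceeds Paasche, and the share-weighted indexes fall between them, mirroring the bounding pattern described for Figure 2.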
The Lures of Advertising – How Susceptible are You?
Donna L. Roberts, Ph.D.
In the competitive and cluttered environment of today’s commercial marketplace, the average American is inundated with between 3000 and 5000 advertising messages per day in various forms, and yet, considers their effect inconsequential (Du Plessis, 2008; Kilbourne, 1999; Vollmer & Precourt, 2008). Advertisers, however, understand the persuasive power their communications can have upon consumer behavior and thus attempt to make such a lasting impression that their distinct message will positively influence the purchase decision. In the most direct and simplistic model, consumers see a commercial or print ad that creates or modifies their perceptions of the brand and, as a result, they are more likely to purchase the brand. However, a more likely, albeit less direct, conceptualization of the process posits that consumers absorb some impression or interpretation from the ad, perhaps without conscious attention, which is…