Why Left Economics is Marginalized

After the 2009 recession, Nobel Prize winner Paul Krugman wrote a New York Times article entitled “How Did Economists Get It So Wrong?”, asking why economics has such a blind spot for failure and crisis. Krugman correctly pointed out that “the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.” However, by lumping the whole profession into one group, Krugman perpetuates the fallacy that economics is a uniform bloc, and obscures the fact that some economists, whose work is largely ignored, had indeed predicted the financial crisis. These economists were dismissed largely because they fall outside what Krugman calls the “economics profession.”

So let’s acknowledge there are many types of economics, and seek to understand and apply them, before there’s another crisis.

 

Left economics understands power

Let’s take labor as an example. Many leftist economic thinkers view production as a social relation. The ability to gain employment is an outcome of societal structures like racism and sexism, and the distribution of earnings from production is inherently a question of power, not merely the product of a benign and objective “market” process. Labor markets are deeply intertwined with broader institutions (like the prison system), social norms (such as the gendered distribution of domestic care) and other systems (such as racist ideology) that affect employment and compensation. There is increasing evidence that the left’s view of labor is closer to reality, with research showing that many labor markets have monopsonistic qualities, which in simple terms means employees have difficulty leaving their jobs due to geography, non-compete agreements and other factors.

In contrast, mainstream economics positions labor as an input in the production process, which can be quantified and optimized, e.g. maximized for productivity or minimized for cost. Wages, in widely taught models, equal the value of a worker’s marginal product. These unrealistic assumptions don’t reflect what we actually observe in the world, and this theoretical schism has important political and policy implications. For some, a job and a good wage are rights; for others, businesses should do what’s best for profits and investors. Combative policy debates, like the need for stronger unions vs. anti-union right-to-work laws, are rooted in this divide.

 

The role of government

The left believes the government has a role to play in the economy beyond simply correcting “market failures.” Prominent leftist economists like Stephanie Kelton and Mariana Mazzucato argue for a government role in economic equity and shared prosperity through policies like guaranteed public employment and investment in innovation. The government shouldn’t merely mitigate market failures but should use its power to end poverty.

On the other hand, mainstream economics teaches that government spending crowds out private investment (research shows this isn’t true), that raising the minimum wage would reduce employment (wrong), and that putting money in the hands of capital leads to more economic growth (also no). As we have seen post-Trump-cuts, tax cuts lead to the further enrichment of the already rich, entrenching a deeply unequal equilibrium.

 

Limitations to left economics: public awareness and lack of resources

History, and historically entrenched power, determine not only final outcomes but also the range of outcomes that are deemed acceptable. Structural inequalities have been ushered in by policies ranging from predatory international development (“free trade”) to domestic financial deregulation, while the poverty caused by these policies is blamed on the poor.

Policy is masked by theory or beliefs (e.g. about free trade), but the theory seems to be created to support opportunistic outcomes for those who hold the power to decide them. The purely rational agent-based theories that undergird deregulation have been strongly advocated by particular (mostly conservative) groups such as the Koch Network, which have spent loads of money to have specific theoretical foundations taught in schools, preached in churches, and legitimized by think tanks.

There have been others who question the centrality of the rational agent, the holy grail of the free market, who believe in public rather than corporate welfare, and who see the need for government not only to regulate but to make markets and provide opportunity. This “alternative” history exists but is less present, its alternative-ness defined by a sheer lack of public awareness, which, perhaps, stems from a lack of capital.

Financial capital is an important factor in what becomes mainstream. I went through a whole undergraduate economics program at a top university without hearing the words “union” or “redistribution,” which now feels ludicrous. Then I went to The New School for Social Research for graduate school, which has been called the University in Exile, a home for exiled scholars of critical theory and classical economics. In the New School economics department, we study Marxist economics, Keynesian and post-Keynesian economics, Bayesian statistics, and ecological and feminist economics, among other topics. There are only a few other economics programs in the US that teach that there are different schools of thought in economics. But after finishing at the New School and considering a PhD there, I understood this problem on a personal level.

There’s barely any funding for PhDs and most students have to pay their tuition, which is pretty much unheard of for an economics doctorate. Why? Two reasons. First, while those who treat economics like a science go on to be bankers and consultants, those who study economics as a social science might not make the kind of money that funds an endowment. Second, perhaps because of this lack of future payout, The New School is just one of many institutions that doesn’t deem heterodox economics valuable enough to warrant the funding that goes to other programs, in this case, like Parsons.

Unfortunately, a combination of these factors leaves mainstream economics schools well funded by opportunistic benefactors, whether they’re alumni or a lobbying group, while heterodox programs struggle or fail to support their students and their research.

 

The horizon for economics of the left

Using elements of different schools of thought, and defining the left of the economics world, is difficult. Race, class, and power, elements that define the left, are sticky, ugly, and stressful, and don’t provide easily quantifiable building blocks like mainstream economics does. Without unifying building blocks, we’re prone to continuing to produce graduates from fancy schools who go into the world believing that economics is a hard science and that the world can be understood with existing models in which human behavior can be easily predicted.

Ultimately the mainstream and the left in economics are not so different from the mainstream and the left politically, and there is room for a stronger consensus on non-mainstream economics that would bolster the left politically. It’s worth exploring and strengthening these connections because at the heart of our economic and political divides is a fundamental difference in opinion regarding how society at large should be organized. And whether we continue to promote wealth creation within a capitalistic system, or a distributive system that holds justice as a pinnacle, will determine the extent to which we can achieve a healthy, civilized society.

Fortunately, the political left in many ways is upholding, if not the theory and empirics, then the traditions and values of non-mainstream economics. Calls from the left to confront a half-century of neoliberal economic policy are more sustained, and perhaps more successful, than at other times in recent history, with some policies like the federal job guarantee making it to the mainstream. After 2008 the 99 percent, supported by newly mainstream research about inequality, began to organize.

There’s hope for change stemming from a new generation of economists, in particular, the thousands of young and aspiring economists researching and writing for groups like Rethinking Economics, the Young Scholars Initiative (YSI), Developing Economics, the Minskys (now Economic Questions), the Modern Money Network, and more. But ideas and policies are path dependent, and it will take a real progressive movement, supplemented by demands by students in schools, to bring left economics to the forefront.

By Amanda Novello.

 

A version of this post originally appeared on Data for Progress’ Econo-missed Q+A column, in response to a question about the marginalization of leftist voices in economics.

Amanda Novello (@NovelloAmanda) is a policy associate with the Bernard L. Schwartz Rediscovering Government Initiative at The Century Foundation. She was previously a researcher and Assistant Director at the Schwartz Center for Economic Policy Analysis at The New School for Social Research.

 

There Is No Such Thing As Low-Wage Competitiveness

By Daniel Olah and Viktor Varpalotai. 

An old myth

Moderate labor costs serve as the basis for a country’s international economic success – this has been the approach favored by policymakers and academics since the eighties. Still today, most analyses and definitions of competitiveness refer primarily to cost and price factors, since these are easy to measure. Keep your wages down and foreign capital will find you, the overly simplistic narrative suggests – and it is a very dangerous narrative.

Countries on the peripheries of the richer Western economies have often tried to follow this path, and it may indeed have been a crucial step towards attracting much-needed capital inflows into developing economies. Think of post-socialist countries, which had to achieve what no one had managed before: to transform their economies from centrally planned ones into well-functioning market economies in just a few years, without an adequate amount of capital, savings, technology, and know-how. A typical win-win situation: developing countries were offered a chance to integrate into global value chains, while companies outsourced low-value-added production processes into these economies.

But there is a crucial problem with that: this is just the first period of childhood. So to speak, the role of a low-wage model in an economy is similar to that of parents in human life: it is difficult to grow up without them in a healthy way, but once you are an adult you have to realize that you need to live your own life. This means commitment and effort to move out of the parental nest. Although the low-wage model may be needed to grow up and acquire the potential for a future life of one’s own, every economy should move on. But this depends on willingness and ability as well, since nothing comes for free. Becoming a successful adult is the most challenging transformation of our lives.

This story is exactly about being able to overcome the low-wage model. While the economy is growing through its childhood period, the model is key for economic development, but once the economy turns 18 it suddenly becomes an obstacle. The low-wage model conserves inefficient production methods and gives companies no incentive to innovate and invest in the future. A low-wage model is never truly competitive in the long term: it is a necessary evil in the development process. Nicholas Kaldor already showed this decades ago.


It’s nothing new: Nicholas Kaldor already said that

Kaldor, the famous Hungarian economist of Cambridge University, claimed in 1978 that countries with the most dynamic economic growth tended to record the fastest growth in labor costs as well. The renowned “Kaldor paradox” may be confusing for policymakers influenced by the neoclassical mainstream. It tells us that keeping costs low may not lead to competitive advantages and faster economic growth. So let’s resurrect the Kaldorian ideas and see whether the relationship has changed at all (hint: it has not).

An Econ 101 course would tell us that there is no causality here, and that’s true. But something else is also true: average annual real GDP growth and the annual growth of unit labor costs per person employed are not negatively related in developed countries.

But let’s examine an even better measure, the export share of an economy, which is the best indicator to grasp export competitiveness in an international context. It shows us that in the case of OECD countries it is hard to find a negative relationship between unit labor costs and export market shares. (If we split the OECD countries into two groups based on GDP per capita in international dollars, we would find no relationship for the richer group but a strong positive relationship for the poorer one.)
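As a rough illustration of the exercise described above, here is a minimal sketch of how one could check whether unit labor cost growth and export share growth move together. The figures are hypothetical and made up for illustration only, not the actual OECD series:

```python
import numpy as np

# Hypothetical illustrative figures, NOT real OECD data: annual unit labor
# cost growth (%) and annual export market share growth (%) for six
# imaginary lower-income OECD countries.
ulc_growth   = np.array([1.2, 2.5, 0.8, 3.1, 1.9, 2.2])
share_growth = np.array([0.3, 1.1, -0.2, 1.4, 0.6, 0.9])

# Pearson correlation: a positive value is the Kaldor-paradox pattern,
# i.e. rising labor costs going hand in hand with rising export shares.
r = np.corrcoef(ulc_growth, share_growth)[0, 1]
print(f"correlation: {r:.2f}")
```

In the real exercise one would, of course, feed in the OECD unit labor cost and export market share series and split the countries by GDP per capita in international dollars before correlating each group separately.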


Increasing labor costs: a sign of economic success?

In fact, outside the pure neoclassical framework, the Kaldor paradox is not a paradox anymore. A wide literature suggests that increasing real wages result in higher productivity: better-quality customer service, fewer absences, and higher discipline inside companies. Corporations gain from increasing labor productivity thanks to better housing, nutrition, and education opportunities for workers. It is no coincidence that increasing wages improve mental performance and self-discipline as well (Wolfers & Zilinsky, 2015).

As for companies, a main mechanism for adapting to increasing wages is to improve management and production processes and to bring forward new investments. What is more, the often extremely large costs of labor turnover and of recruiting workers may also be greatly reduced. And finally, the most important aspect: wage increases result in greater capacity utilization, which in turn results in a growing capital stock in the economy (Palley, 2017). Could the Kaldor paradox imply that most of the examined countries are wage-led (or demand-led) economies?

Several empirical results confirm that export competitiveness has nothing to do with depressed labor costs. Fagerberg (1988) analyzed 15 OECD countries between 1961 and 1983 – more thoroughly than this article does – and found the same results. He states that technological and capacity factors, rather than prices, are the primary determinants of export competitiveness. Fagerberg (1988) argues that the Japanese export successes are due to technology, capacity, and investments, while the US and the UK lost market shares because they reallocated resources from investment in production capabilities towards the military.

Storm and Naastepad (2014) argue that the German recovery from the crisis is due not primarily to depressed wages but to corporatist economic policy, which focuses the shared attention of capital, labor, and government on the development of industry and technology. As for Central Europe, the case is the same: Bierut and Kuziemska-Pawlak (2016) find that the doubling of the Central European export share is due to technology and institutions, not to the cost of labor. In fact, unprecedented wage growth and dynamic export increases go hand in hand in many Central European countries nowadays.

And if we consider the new approach to competitiveness developed by Harvard researchers, we come to the conclusion that economic complexity, not wages, is the key driver of future economic and export growth. Their competitiveness ranking looks much different from the traditional measure of the World Economic Forum, placing the Czech Republic, Slovenia, Hungary, and the Slovak Republic among the top 15 countries in the world. This shows that countries on the periphery of the developed West may become deeply embedded in global value chains, growing more and more organically complex, and this complexity of their economic ecosystems carries the potential for future growth – even despite forty years of communism.

This evidence shows that policymakers should be careful and conscious. Economic relationships or the adequate economic policy approaches may change faster than we think. Economies are just like children: they grow up so fast that we hardly notice it. That is what the stickiness of theories is about.

 

About the authors:

Daniel Olah is an Economics editor, writer and PhD student.

Viktor Varpalotai is the Deputy Head of Macroeconomic Policy Department at the Ministry of Finance, Hungary.

Behavioral Economics: Still Too Devoted To Homo Economicus?

How we interpret the “ultimatum game” suggests that it is. By Alexander Beunder.

It’s always an illuminating experience to discuss economic literature with non-economists, as these conversations often reveal the many blind spots of economists. Considering myself to be quite ‘pluralist’ and interdisciplinary in my economic thinking, a true adherent of the global Rethinking Economics movement (and somewhat involved in the Dutch branch), recent encounters with non-economists revealed I’m perhaps still more ‘orthodox’ than I thought.

These encounters revealed how one of my favorite fields – behavioral economics, a flourishing interdisciplinary field known for attacking the orthodox idea that people behave selfishly and rationally (as a homo economicus) – might still be too devoted to the very idea it is supposedly attacking. They revealed how standard interpretations of the ‘ultimatum game’ – a behavioral experiment which famously shows that people do not behave like homo economicus – might still be too tainted by the orthodox framework. Though behavioral economists have done a decent job of finding the anomalies in human behavior which seem to contradict economic rationality, they continue, mistakenly, to ‘rationalize’ whatever behavior can be rationalized.

As if the burden of proof is on them: people are rational until proven irrational.

They’re not.

Ultimatum Game: A Short Recap

Standard economics assumes people behave rationally and selfishly to maximize their individual welfare – as a homo economicus. Whether this is always true has been questioned even by the ‘founder’ of economics (see Adam Smith’s The Theory of Moral Sentiments from 1759) and, in recent decades, disproven convincingly by behavioral economists conducting actual laboratory experiments with actual people.

A well-known experiment (even among cartoonists) undermining the notion of ‘economic rationality’ is the ultimatum game. For those unfamiliar with the game (others may skim), a short recap:

There are two players in the lab. The ‘Allocator’ is given a bag of money (say, 100 dollars). The ‘Recipient’ starts with nothing. Then, the Allocator must offer the Recipient a portion of this bag (between 0 and 100). If the Recipient accepts, he/she simply receives the accepted amount and the Allocator keeps the rest. If the Recipient rejects, both players receive zero! In the simple version, the game is played only once.

The results greatly contradicted the orthodox assumption of rationality. Under this assumption, the Recipient would never reject an amount higher than zero (even a penny), because rejecting would reduce the individual payoff (the only thing homo economicus cares about) to zero. However, real people generally do reject positive offers, especially relatively low ones. “The majority of recipients reject offers lower or equal to 20% of the endowment”, according to a meta-study by Tisserand, Cochard and Le Gallo (2015).

The standard interpretation of this kind of seemingly ‘irrational’ behavior is that real people, unlike homo economicus, also care about values like, in this case, fairness. “The decline of an offer of .1c says, “I would rather sacrifice .1c than accept what I consider to be an unfair allocation of the stake”, economics professor Richard H. Thaler concluded in 1988. In economic parlance: “When a Recipient declines a positive offer, he signals that his utility function has non-monetary arguments”. (This ‘inequality aversion’ is also present in monkeys by the way).

And are the Allocators acting like homo economicus? That’s somewhat more complex. “The majority of proposers share equally, and offer 40% of their endowment on average”, say Tisserand et al. This seems generous but can be ‘rationalized’; their main motive might be to avoid rejection (which would reduce the Allocator’s payoff to zero). However, results from a different game suggest there are also some non-monetary motives at play. The ‘dictator game’, developed by Daniel Kahneman, is the ultimatum game without the veto power of the Recipient; the almighty Allocator can now offer zero without risking rejection and losing everything. However, Allocators keep offering positive amounts – between 20% and 30% of the bag, according to Tisserand et al. This cannot be rationalized, as homo economicus would, as a dictator, offer zero.
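The rules of both games, and the benchmark predictions they are tested against, fit in a few lines of code. This is a minimal sketch; the function names are mine, and the 20% threshold is an illustrative stand-in for the rejection pattern Tisserand et al. report:

```python
def ultimatum_payoffs(endowment, offer, accepted):
    """Payoffs (allocator, recipient). Rejection destroys the whole pie."""
    assert 0 <= offer <= endowment
    return (endowment - offer, offer) if accepted else (0, 0)

def dictator_payoffs(endowment, offer):
    """Same split, but the recipient has no veto power."""
    assert 0 <= offer <= endowment
    return endowment - offer, offer

# The homo economicus benchmark: accept any strictly positive amount.
def selfish_accepts(offer):
    return offer > 0

# The empirical pattern: most recipients reject offers at or below
# roughly 20% of the endowment, sacrificing money to punish unfairness.
def fairness_minded_accepts(offer, endowment, threshold=0.20):
    return offer > threshold * endowment
```

Under these rules, a selfish Recipient accepts even a penny (rejecting costs them that penny), while a fairness-minded Recipient turns down a 20-dollar offer from a 100-dollar bag, and a homo economicus dictator offers exactly zero.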

What’s Possibly Wrong With This Interpretation?

In recent decades, behavioral economics succeeded in demonstrating numerous ‘anomalies’ in a great variety of games; human decisions that can not be ‘rationalized’ and prove that people are no homo economicus but more complex beings. That is the great merit of the field.

Still, behavioral economists mistakenly observe a homo economicus when there’s no clear evidence for its existence. How can that be?

For example, when a Recipient accepts one penny from a bag of 100 dollars in the ultimatum game. This decision seems entirely rational and selfish, and is labelled as such in the literature. Only a true homo economicus would accept one penny over zero pennies, ignoring the great unfairness of being offered only 0.01 percent of the bag of money (100 dollars).

I never doubted this standard interpretation, until I asked a non-economist friend what she would do if she was offered a penny in the ultimatum game. “I would accept”, she replied decisively. It surprised me as she’s normally very ‘progressive’ in her opinions – someone who cares a lot about ‘fairness’ – but she answered like a homo economicus. “Wouldn’t you think it’s unfair?”, I asked. “But if I reject the other one would also receive nothing, right? That’s a waste!”

So she’s accepting a low offer, like a homo economicus would do, but primarily because it would make the other person better off. Acting as a homo economicus because of her altruism. (One might even say she’s being extremely altruistic, as she’s being kind to someone who’s extremely unfair to her).

Wondering whether this was just a radical outlier to be ignored safely, I asked two other people (who had not heard each other’s answers) the same question and received exactly the same answers three times in a row!

It was surprising because it contradicted the standard interpretation, but it was perhaps even more surprising that I was surprised. Why had I myself not thought of the possibility that a Recipient might accept a penny out of altruism instead of egoism? I had read at least a dozen papers in which the ultimatum game was central or described elaborately. How come I had not thought of this possibility, which could have been ‘discovered’ by just using common sense?

Part of the reason must be that, in general, the literature on the ultimatum game doesn’t seem to question the rationality of the Recipient (but if you know of literature which does, do send!). It seems that even behavioral economics has its limits; it limits itself to ‘irrationalizing’ those actions that cannot be rationalized. As the decision to accept a penny can be ‘rationalized’ and explained within the orthodox paradigm, it is rationalized.

Behavioral economists ask: “Will subjects behave optimally? And if not, why … ” (Güth, Schmittberger and Schwarze (1982) in the first paper on the ultimatum game). Only when subjects do not behave optimally will they search for alternative, unorthodox explanations of human behavior. Behavioral economists focus on the, in the words of Thaler (1988), “anomalies” which are difficult to “rationalize” or for which “implausible assumptions are necessary to explain it within the paradigm”.

There seems to be a tendency to assume rationality until the opposite is proven, or until the “anomaly” is found which cannot be rationalized. It is as if behavioral economists accept that the burden of proof is on them: humans are rational until proven irrational.

Of course, new experiments might reveal that human decisions which were previously ‘rationalized’ are actually completely or partially irrational (note the previously mentioned findings from the dictator game). However, this only highlights that the point of departure is economic rationality (rational until proven irrational).

This is problematic, as there is no convincing evidence for the idea that economic rationality is the ‘standard’ from which humans occasionally deviate. So why does behavioral economics start from a flawed foundation? Why is it still too devoted to the idea of the homo economicus, even though it has a reputation of aggressively undermining this concept? And how does this compromise the field?

A Symbiotic Relationship with Standard Economics

In its defense, there is an obvious explanation. Behavioral economics is still ‘in a relationship’ with orthodox economics and, in a relationship, one makes compromises. As Wolfgang Pesendorfer (2006) concludes in an evaluation of the field: “Behavioral economics remains a discipline that is organized around the failures of standard economics” and has a “symbiotic relationship with standard economics” which “works well as long as small changes to standard assumptions are made”.

We all know how stubborn the other side in this relationship is: standard economics will always ‘rationalize’ behavior wherever it can, and will only recognize ‘irrationality’ when there is clear and convincing evidence of it. Understandably, behavioral economics devoted itself to finding this evidence – the “anomalies”, in the words of Thaler (1988), which are difficult to “rationalize”. And surely, it has done an impressive job of finding them.

However, accepting this burden of proof remains problematic, for several reasons. Firstly, it will lead to false positives; ‘rationalizing’ behavior where rationality might, in reality, be absent. What’s more, in doing this, the field will repeatedly lend credence to the flawed concept of the homo economicus. Secondly, it will lead to false negatives; failing to observe ‘irrationality’ (like altruism) when clear and convincing evidence of it is lacking, or perhaps even impossible to produce, thereby ignoring the complexity of human motives. (What’s more, “that’s mean!”, the first person from my sample replied when I told her that her decision to accept a penny is normally labeled rational and selfish. Of course, no one likes to be labeled selfish when they feel they’re being generous.)

This article mentions only one example of a false positive/false negative, but we might discover many more once we rid ourselves of the orthodox idea that economic ‘rationality’ is the standard from which humans occasionally deviate. For example, when you’re buying a cup of coffee, which can be completely ‘rationalized’. But who knows? Perhaps you actually dislike coffee but wish to support the owner of the bar who is also your friend.

All of the above is not meant to downplay the achievements of behavioral economics. Neither to deny that economic rationality exists to a certain extent in certain situations. It is simply to argue that one shouldn’t label behavior rational and selfish without any convincing evidence for it, and behavioral economists might have done so too often.

It is to argue that behavioral economics should not let its “symbiotic relationship” with standard economics limit its own ambitions. This relationship, as Pesendorfer (2006) says, “works well as long as small changes to standard assumptions are made”. We should not fear bigger changes.

____________________________________

REFERENCES

Güth, Werner, Rolf Schmittberger, and Bernd Schwarze (1982). “An Experimental Analysis of Ultimatum Bargaining.” Journal of Economic Behavior and Organization. 3, 367-88.

Pesendorfer, Wolfgang (2006). “Behavioral Economics Comes of Age: A Review Essay on Advances in Behavioral Economics.” Journal of Economic Literature. 44(3): 712–721.

Thaler, R.H. (1988). “Anomalies; The ultimatum game.” Journal of Economic Perspectives. 2, 195–206.

Tisserand J. C., Cochard F., Le Gallo J. (2015). “Altruistic or Strategic Considerations: A Meta-Analysis on the Ultimatum and Dictator Games”. Besançon: University in Besançon.

 

About the Author: Alexander Beunder is an independent journalist, economics tutor at the University of Amsterdam (the Netherlands) and previously involved in the Dutch branch of Rethinking Economics.

Buying Power: an often neglected, yet essential concept for economics

Buying power is a concept that is absent from basic economic theory, and this has major implications both for theory and for the practical issues we face. This absence is odd, because buying power is central to how the economy works. Nor is its importance a new observation.

By Michael Joffe.

In 1776, Adam Smith wrote that the degree to which a “man is rich or poor” depends mainly on the quantity of other people’s “labour which he can command, or which he can afford to purchase” (emphasis added). However, this idea of differential buying power has never been incorporated into economic theory, despite it being an obvious feature of the world we live in. The extent of a person’s disposable income and wealth gives them a corresponding degree of influence. It is like a voting system, where everybody votes for their view of what the economy should produce, but where the number of votes is very unequal. The term “power” here is best understood as meaning the degree of ability of a person or organization to bring something about. It is a causal (not e.g. a moral or political) concept.

Examples of its importance are everywhere. When prices rise, some potential consumers may be excluded – a form of rationing. In pleasant locations, affluent urban dwellers buy holiday homes, crowding out the local inhabitants who do not have the buying power to compete and therefore may have to leave the area. In low-income countries, the amount of transactional sex depends on inequality (the buying power of richer men), not poverty. Any industry depends on its (potential) customers’ buying power for support: washing machine manufacture is only possible if there is a market of people who can afford their product; luxury goods such as mega-yachts exist because there are mega-rich people to buy them.
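A toy sketch of the first example makes the “voting” metaphor concrete: at a given price, only those with sufficient buying power get to cast a vote for the good. The numbers here are hypothetical, chosen purely for illustration:

```python
# Hypothetical budgets: disposable income each consumer can put toward the good.
budgets = [5, 12, 30, 80, 250]
price = 25

# "Effective demand" counts only those able to pay, however much the
# excluded might want the good - rationing by buying power, not desire.
can_buy = [b for b in budgets if b >= price]
print(f"{len(can_buy)} of {len(budgets)} potential consumers are priced in")
```

Willingness to pay never enters the calculation; the two poorest “voters” are simply excluded, which is the rationing-by-price mechanism described above.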

Firms also have buying power, in varying degrees, which enables them to transform the world, e.g. by taking possession of land and natural resources. Within the firm, the employers’ buying power is what enables them to employ workers, thereby creating the authority structure. And shareholders’ influence over the conduct of firms’ directors results from their having been able to buy the shares in the first place. Finally, when China’s economy was expanding rapidly, its buying power created a worldwide commodities boom, with major impacts on (for example) Australia and Brazil, an impact that has diminished in recent years.

With a concept that is so obvious, one might expect it to be a prominent feature of economic theory. But in fact, it is only patchily represented. Notably, basic consumer theory obscures it completely, by looking at a potential consumer’s decision making given the amount of money that they have to spend – the fixed “budget constraint”. This naturally leads to a conception of the economy that neglects the role of effective demand.

At the aggregate level, in macroeconomics, buying power is represented: it is Keynes’ key concept of aggregate demand. It is also implicit in the flow diagrams that are often used to introduce students to economics, showing two-way flows with money in one direction and goods/services in the other, e.g. between households and firms in the aggregate.

Much of the controversy in economics is concerned with disputes over the competing varieties of macroeconomic theory, and other topics that are directly policy related. Commentators often say that micro is in a satisfactory condition, e.g. on the grounds that it is largely evidence-driven in specific areas such as labor economics, healthcare, education, etc. It is true that some good work is done in these applied areas. But the implication is that macro is the only problem, and this lets mainstream micro theory off the hook.

One implication – which is replicated across sub-disciplines such as health economics – is that the focus is mainly on the willingness to pay, obscuring the importance of the ability to pay. More broadly, it means that economics is a form of decision theory, and this often produces a default way of thinking that treats inequalities as an afterthought, rather than being inherent in how the economy operates. It has taken a huge rise in inequality in countries like the US and the UK, plus Piketty’s best-selling book, to bring this issue to mainstream attention.

This is especially problematic in the context of the widespread orthodox view that macro theory should be based on "micro-foundations", as if micro theory were totally unproblematic. It implies adopting the extreme version of rationality assumed by mainstream microeconomic theorists, along with optimization and so on. Naturally, there must be some correspondence between theories at the micro and macro levels – but that correspondence needs to involve concepts that match the real world at both levels.

But there is more: buying power is not the only type of economic power that we fail to recognize. Whilst monopoly power features in textbook economics and bargaining power is recognized, e.g. in game theory, other important types are neglected. These include corporate power and the power of the financial sector including the power of banks to create money. They overlap to some extent with buying power, but also have additional features that are beyond the scope of this article. This analysis is part of a broader rethinking of the foundations of economics, using concepts that actually correspond to the way the economy works – evidence-based economics.

Vast disparities in buying power have major macroeconomic and societal results. Inequality tends to lead to private-sector debt, which creates a vicious cycle, further enhancing the inequality. Private-sector debt also generates systemic instability and a risk of financial crisis. An IMF study concluded that restoring the bargaining power of the lower income groups would be the best way of reducing this debt, and enhancing the stability of the system.

Another consequence is environmental: increasingly rich consumers, in satisfying their wants, inflate their ecological footprints and damage the carrying capacity of the Earth. Recognizing and addressing over-consumption can play a major part in reducing our environmental impact.

Thus, buying power plays a central role, both in how the economy works and in pressing practical issues, and it is a serious error to ignore it. By incorporating it into our core thinking, we will be much better equipped to understand the economy and to address the challenges of increasing inequality, systemic instability, and environmental degradation.

About the Author
Michael Joffe was originally trained as a biologist, and for many years carried out epidemiological research at Imperial College, where he is still attached. He now applies his insight into the way that the natural sciences generate secure causal knowledge to his work in economics.

The Neoliberal Tale

“The tide of Totalitarianism which we have to counter is an international phenomenon and the liberal renaissance which is needed to meet it and of which first signs can be discerned here and there will have little chance of success unless its forces can join and succeed in making the people of all the countries of the Western World aware of what is at stake.” (Friedrich Hayek)

In the past year we’ve seen a number of references to the maladies that neoliberalism and globalization have brought upon Western societies. It is well known that over the past decades the levels of inequality and wealth concentration have continued to increase in capitalist economies, contributing to the arrival of “outsiders” to the established political powers, such as Trump in the US and Macron in France, a rightward turn across Latin America, and Brexit.

Neoliberalism, one of the main elements to blame, is best known for the policies that have defined the world economy since the 1970s. Faithful devotees like Ronald Reagan and Margaret Thatcher, in the US and UK respectively, exported a number of their neoliberal policies to low- and middle-income countries through the Washington Consensus, under the pretense that they would bring about development.

Neoliberal policies did not exactly turn out the way their creators envisioned. They wanted to reformulate the old liberal ideas of the 19th century into a deeper and more coherent social philosophy – something that was never actually accomplished. This article reviews some of the origins of neoliberalism.

The first time the term “neoliberalism” appeared, according to Horn and Mirowski (2009), was at the Colloque Walter Lippmann in Paris, in 1938. The Colloque was organized to debate the ideas presented in Lippmann’s recent book The Good Society in which he proposed an outline for government intervention in the economy, establishing the boundaries between laissez-faire – a mark of the old liberalism – and state interventionism.

Lippmann set the foundations for a renovation of liberal philosophy, and the Colloque was a first opportunity to discuss the classical liberal ideas and to draw a first line marking where the new liberal movement would, or should, differ from the old liberalism. It was a landmark that, in subsequent years, sparked several attempts to establish institutions that would reshape liberalism, such as the Free Market Study at the University of Chicago and Friedrich Hayek’s Mont Pelerin Society (MPS).

The event also exposed major rifts among the proponents of liberalism. Reservations and disagreements among free market advocates were not uncommon. A notable example is Henry Simons, of the Chicago School, whose position on monopolies and how they should be addressed was a point of disagreement with fellow libertarians such as Hayek, Lionel Robbins – both at the London School of Economics (LSE) at the time – and Ludwig von Mises.

Simons’s view that the government should nationalize and dismantle monopolies would nowadays be viewed as a leftist attack on corporations, but it fits perfectly within the classical liberal framework that Simons and Frank Knight, also of the University of Chicago, were following. Under their interpretation, any concentration of power that undermines the price system – and therefore threatens market, political, and individual freedoms – should be countered, even if that meant using the government for the purpose.

It becomes clear that the reformulation of liberal ideas into what we know today as neoliberalism was not a smooth and certain project. In fact, market advocates struggled to make themselves heard in a world dominated by the state interventionism of the Great Depression and post-war period. The publication of Keynes’s The General Theory in 1936, and the Keynesian revolution in its wake, swept economics departments everywhere and further undermined the libertarian view.

By the end of the 1930s and of Lippmann’s Colloque, however, the realization that neoliberalism would thrive only through a concerted collective effort by its representatives changed Hayek’s stance on engaging in normative discourse. In 1946-47, the establishment of the Chicago School and the MPS were both results of a transnational effort to shape public policy and to fit liberal ideas into a broader social philosophy. The main protagonists besides Hayek were Simons, Aaron Director, and the liberal-conservative Harold Luhnow, then director of the William Volker Fund and responsible for devoting funds to the projects.

The condition for success, as remarked in the epigraph, was to “join and succeed in making the people of all the countries of the Western World aware of what is at stake.” What was at stake? Social and political freedom. Hayek and many early neoliberals understood that any social philosophy or praxis crippling market mechanisms would invariably lead to a “slippery slope” towards totalitarianism.

It is important to note, though, that the causation runs from market freedom to social and political freedom, and not the other way around. As Burgin (2012) indicates, while market freedom is a precondition for a free democratic society, the latter may threaten market freedom. The free market should not be subject to popular vote; it should not be ruled over by any “populist” government (a common swear-word today), and mechanisms need to exist to prevent that from happening.

With that in mind, the association that Hayek, Milton Friedman, and the Chicago School once had with authoritarian governments such as Pinochet’s in Chile – one of the most violent dictatorships in Latin American history – becomes less puzzling.

Several liberal economists who occupied important public positions in the Chilean dictatorship had been trained at the Chicago School. The famous “Chicago Boys” first experimented in Chile with what would later be applied in the US and UK, and then exported to the rest of the developing world through the Washington Consensus.

In brief, the adoption of some form of authoritarian control over popular sovereignty was deemed acceptable in order to guarantee market sovereignty.

Moreover, the discussions within the early neoliberal groups crossed the boundaries of disciplinary economics, and the formulation of neoliberalism – and of the Chicago School and the MPS – was grounded not on any scientific analytical basis but simply on political affiliation.

This multidisciplinary character, dispersion, and incertitude are some of the reasons why it is hard to give a straightforward definition of what the term “neoliberalism” really means. To understand it, we have to bear in mind the set of “dualisms” (capitalism vs. socialism; Keynesianism vs. liberalism; freedom vs. collectivism, and so on) that marked the period. Its defenders (academics, entrepreneurs, journalists, etc.) did not know what their own agenda was – they only knew what they were supposed to oppose. Neoliberalism was born out of a “negative” effort.

It wasn’t until many years later that the division between normative and positive economics came to the surface, with Friedman and his book Capitalism and Freedom, published in 1962. The increasing participation of economists in the MPS, and Milton Friedman’s more active public policy advocacy, brought an end to Hayek’s intention to construct a new multidisciplinary social philosophy.

Economically, Friedman embraced laissez-faire; methodologically, he embraced empirical analysis and positive policy recommendations, getting ever further away from abstract notions of value and moral discussions that his earlier MPS fellows, such as Hayek, were worried about. Neoliberalism lost its path on the way to its triumph; it became a “science” that offered legitimacy to a new credo, a new “illusion”.

As the shadows of neoliberalism became more intertwined with current neoclassical economics and Friedman’s monetarism, it not only lost its name but also gave birth to a corporate type of laissez-faire: one in which social relations are downgraded to market mechanisms; politics, education, health, employment – all could fit under a market process in which individuals maximize their own utility. There is nothing the government can do that the market cannot do better and more efficiently. Monopolies, if anything, are to be blamed on government actions, while labor unions are disruptive to the economy’s wellbeing. Neoliberalism became a set of policies to be followed – privatization, deregulation, trade liberalization, tax cuts, etc. – on a crusade to commoditize every essential service, or indeed every aspect of life itself.

Hayek believed that these ideas could spread and change the world. And they certainly did. What is worth noting is that neoliberalism was not the inevitable outcome of historical factors.

The rise of neoliberalism was not spontaneous but orchestrated and planned; it was a collective transnational movement to counteract the mainstream of the time; it originated out of delusion in a period marked by wars, authoritarianism, and economic crisis; and it was grounded in political affiliations and supported by a dominant ruling class that funded its endeavors and transformed public opinion. These are the roots of what is now mainstream economic thought.

Economics as a Science?

By Johnny Fulfer.

Is economics a science?

Could it be? Should it be? The debate is as alive today as it was in the early twentieth century. This article reviews some of the key arguments in the discussion and provides a helpful backdrop against which to rethink the purpose of economics today.

In 1906, Irving Fisher argued that economics is no less scientific than physics or biology. All three aim to discover “scientific laws,” he explained. Even though they may not always be observable in reality, scientific laws are considered fundamental truths of nature. Newton’s first law of motion, for instance, cannot be observed directly: only if certain conditions were met would a body move uniformly in a straight line. The same holds true for economic science, Fisher concluded.

But not everyone agreed. The discipline was charged with unsound methods.

Specifically, economists were accused of using the deductive method without the necessary level of precision. Jacob Hollander addressed the charges in a 1916 essay. Scientific inquiry involves uniformity and sequence, Hollander maintained. Progression in science relies on the formation of hypotheses, which may at some point become ‘laws.’ Observation and inference are the first steps toward the creation of hypotheses. The final step in the scientific process is verification, which is required before we move from theory to law. Without verification, he argued, “speculation is an intellectual gymnastic, not a scientific process.”

Hollander’s work reveals one of the questions at the heart of this debate: Is verification required, and even possible, given the complexities of economic phenomena? Scholars tend to rely on the works of previous thinkers, Hollander argued, without endeavoring to move beyond familiar perspectives.

This question lives on today.

In a 2013 opinion piece for the New York Times, Stanford economist Raj Chetty argued that science is no more than testing hypotheses with precision. Large macroeconomic questions – such as the cause of recessions or the origin of economic growth – “remain elusive.” This is no different from the large questions faced by the medical field, such as the pursuit of a cure for cancer, he explained. The primary limitation of economics, Chetty argued, is that economists have a limited ability to run controlled experiments to test macroeconomic theories. The high monetary cost and ethical constraints make such controlled experiments impractical. And even if we could run one, its findings might not hold in the long run as society changes.

In a 2016 essay, Duncan Foley added to the conversation. He argued that the distinctions between the social and natural sciences are not clear. Both come from the same scientific revolution, and both are influenced by values. The notion that scholars in the natural sciences “pursue truth” is a flawed assumption, Foley argues. Scholars in the natural and social sciences choose which problems to solve and the methodology they use.

This choice involves values since a scholar must value one research project more than another.

Examining the scientific nature of economics, John F. Henry explains that neoclassical economic theory holds a position of influence in society because of its universal and abstract nature. Henry maintains that we should reexamine this assumption of universality. If economics is based on subjective values, how can it be considered universal? Should economists continue making ‘progress’ toward a more scientific structure of knowledge? This leads us to ask how we define progress. There is no end to this debate.

It seems unproductive to continue asking such questions. Rather than debating whether economics is or is not a science, perhaps we should shift the discussion toward questions that ask why economics needs to be a science in the first place. Where does this desire to be ‘scientific’ come from, and why is it so important for economics to be considered scientific? Perhaps the real issue is the determination to make economics a science.

About the Author
Johnny Fulfer received a B.S. in Economics and a B.S. in History from Eastern Oregon University. He is currently pursuing an M.A. in History at the University of South Florida and has an interest in political economy, the history of economic thought, intellectual and cultural history, and the history of the human sciences and their relation to power in society.

It’s gotta be true, because data says so

Data and statistics are everywhere, especially in economics. But we forget that empirical results are often manipulated, biased, or inconclusive. To ensure we design policies responsibly, we must meet empirical work with greater skepticism.

by Selim Yaman

In 2008, Doucouliagos and Ulubasoglu of Deakin University conducted a meta-analysis of 84 studies on democracy and economic growth. After evaluating 483 regression estimates from these studies, they found that virtually every outcome for the democracy–growth relationship is possible; they observed that

  • 37% of the estimates are positive and statistically insignificant
  • 27% of the estimates are positive and statistically significant
  • 15% of the estimates are negative and statistically significant
  • 21% of the estimates are negative and statistically insignificant.
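As a sketch of how such a meta-analysis tabulates its evidence, the four sign/significance cells can be computed from a list of (coefficient, p-value) pairs. This is an illustrative helper, not the authors' actual code:

```python
def tally(estimates, alpha=0.05):
    """Share of regression estimates in each sign/significance cell.

    `estimates` is a list of (coefficient, p_value) pairs; an estimate
    counts as significant when its p-value is below `alpha`.
    """
    cells = {"pos_insig": 0, "pos_sig": 0, "neg_sig": 0, "neg_insig": 0}
    for coef, p in estimates:
        key = ("pos" if coef >= 0 else "neg") + ("_sig" if p < alpha else "_insig")
        cells[key] += 1
    return {k: v / len(estimates) for k, v in cells.items()}
```

Run over 483 such pairs, this yields exactly the kind of breakdown reported above.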

The link between inequality and economic growth is equally difficult to identify. Dominics et al (2006) conducted a meta-analysis of studies on this relationship. According to their study, most of the regressions yield a negative relationship between inequality and economic growth. Yet when different estimation techniques and panel datasets are used, this negative effect vanishes. So, after analyzing a vast empirical literature, no clear relation appears.

Data and statistics have grown increasingly important in recent decades. Big Data came to play a large role in many fields, from technology to healthcare. Economics is no different; regression analyses had already been popularized by the neoliberal school of thought. Economics was intentionally made “a real science, within which basic connections between phenomena could be established, like in physics.”

In the neoliberal world of economics, you are free from complicated theoretical discussions and able to draw firm conclusions. Unlike more nuanced fields like sociology or political science, neoliberal economics allows for simple, elegant arguments. With the help of mathematical modeling and statistical results, arguments take up just a few pages. This neoliberal methodology sounds pretty good at first: direct scientific results, no chit-chat. But it’s not as simple as it looks.

To what extent can we trust these statistical methods, or the economists who use them? Economists can easily manipulate data to fit their ideological stances, or to confirm their initial hypotheses. Errors rooted in research design create unreliable results too: the type of data used, the selection of the sample, differences in methods of evaluating estimates, the availability of data, the direction of causation, and regional or country-specific characteristics all influence the results. Together they create a big divergence among empirical macroeconomic studies, leaving many questions unresolved.
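A small simulation illustrates one mechanism behind this divergence: even when there is no true relationship at all, roughly one regression in twenty will look "significant" by chance, so a researcher who tries enough specifications can usually find one that confirms a hypothesis. This is an illustrative sketch using numpy, not a model of any particular study:

```python
import numpy as np

def share_significant(trials=200, n=100, seed=0):
    """Fraction of pure-noise regressions whose slope looks 'significant' (|t| > 1.96)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        x = rng.normal(size=n)
        y = rng.normal(size=n)  # y is unrelated to x by construction
        # OLS slope and its standard error for y = a + b*x
        b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
        a = y.mean() - b * x.mean()
        resid = y - (a + b * x)
        se = np.sqrt((resid @ resid) / (n - 2) / ((n - 1) * np.var(x, ddof=1)))
        if abs(b / se) > 1.96:
            hits += 1
    return hits / trials
```

The returned share hovers around 0.05 despite there being zero true effect; selectively reporting only those "hits" is exactly the kind of manipulation the paragraph above describes.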

These problems are not confined to economics; as econometric methodology expands into other fields, the risk contained in data interpretation increases. Take, for example, studies on how religiosity levels affect people’s career paths. Knowing that even large, carefully executed polls failed to predict Brexit, Trump’s victory, and Labour’s success in the UK, how can we trust other surveys to teach us about religion or social preferences? How can someone even build a theory on such data? Above all, how can these studies shape policy design?

Some of the empirical studies that reached wrong conclusions sit harmlessly in the ivory tower of academia, waiting for a rare reader. But many such studies do reach the real world, either through policy-making (top to bottom) or through media outlets (bottom to top).

Via the policy route, developing countries have been among the victims of flawed empirical studies. The Washington Consensus, for example, prescribed fiscal consolidation and trade liberalization. Later, however, it became clear that this was bad advice; copying economic institutions from the Western world and applying them to developing nations without considering country-specific environments can be devastating. While the Washington Consensus was an elegant argument supported by data from the West, it failed to account for the complexities of the Global South.

The second route of influence is the media: when people read the news and encounter headlines like “a recent study found…”, it catches their attention. But that recent study’s sample size may be small and its data deficient, and neither media editors nor readers will be aware. To them, the study’s seemingly conclusive results are what matters.

To avoid acting on false conclusions, academics, policy-makers, and media professionals all carry the responsibility to treat empirical findings with skepticism. If this were physics, a causal relationship based on data could be trusted. But in economics and politics, human factors create complications that statistical methods cannot always handle. Overall, it’s better not to believe something too readily just because data says so.

About the Author
Selim Yaman works at TRT World Research Centre. Yaman received his BSc from the Economics Department of Boğaziçi University. He is currently a graduate student in Political Economy of Development, at SOAS in London.

Going Beyond Exchange

Traditional economics reveals the dynamics of exchange. But is that all there is? The late economist Kenneth Boulding recommended that we look further. Once we consider that some transactions only go one way, we can see the economy in a different light.

If you’re a high school student and you’re hungry for lunch, you may go out and buy yourself a sandwich. You give the deli guy five bucks, and he gives you a BLT. That’s exchange! But where did your five dollars come from? If you’re lucky, your parents gave it to you. Just like they gave you breakfast, your clothes, and a home to live in. And what did you give them? Probably your dirty laundry.

Modern-day, Western world parenting is an example of a one-way exchange. Parents provide for their children because the market doesn’t. And they do so without expecting much in return. Upon reaching adulthood, none of us receive an invoice detailing the costs we incurred. If we did, we’d probably be quite disturbed. In some cultures, children “pay back” by supporting their parents when they are older. But in the West, retirement plans, social security, and old-age homes have largely removed that expectation too.

Economist Kenneth Boulding advocated for such one-way exchanges, or “grants”, to be included in our study of the economy. Grants make up a big part of our distribution of resources, he argues, but economists have limited themselves to the study of exchange. To construct a more holistic framework in which both systems are fully represented, Boulding introduces “Grants Economics,” which adds to our understanding of the economy both at the micro-level (grants within the household) as well as at the macro-level (grants from the government*).

Boulding distinguishes grants by their motivating force. In the example of parental care, the motivating force is one of love. Parents provide for their children because they care about them. Charity, scholarships, and much of government transfers fall into the same category. But each economy also contains grants based on threat. If you’re about to buy your deli sandwich, and an armed robber comes in, you may hand over your money because you’re scared of getting hurt. That’s a grant as well.

Every system, he explains, contains elements of exchange, love-based grants, and threat-based grants. But their respective shares in the total economy vary. To visualize this, Boulding presents a triangle, the corners of which represent a pure exchange-system, a pure love-based grant system, and a pure threat-based grant system. All the points inside the triangle represent different proportions in which the three systems can be combined. Where in the triangle we are, and where we are going, is the question.
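Boulding's triangle is, in effect, what mathematicians call a 2-simplex: any mix of the three systems whose shares sum to one maps to a unique point inside it. A minimal sketch of that mapping (corner positions and function name are illustrative choices, not Boulding's):

```python
import numpy as np

# Corners of the triangle; the positions are arbitrary, only proportions matter.
EXCHANGE = np.array([0.0, 0.0])
LOVE = np.array([1.0, 0.0])
THREAT = np.array([0.5, np.sqrt(3) / 2])

def locate(exchange, love, threat):
    """Map an economy's mix of the three systems (shares summing to 1)
    to a point inside the triangle via barycentric coordinates."""
    assert abs(exchange + love + threat - 1.0) < 1e-9, "shares must sum to 1"
    return exchange * EXCHANGE + love * LOVE + threat * THREAT
```

An economy with equal parts of all three lands at the triangle's center; a pure exchange economy sits exactly on the EXCHANGE corner. "Where in the triangle we are" is then a question about these three shares.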

At times, Boulding adds, the love-based grants economy may grow to compensate for failure in the exchange economy. If, for example, a hurricane strikes, we recognize the exchange system cannot support the situation, and make donations (grants) to fill the gap. But if we feel the efficiency of our grants is inadequate, the grants economy may shrink again. Only if we perceive our grant to be more useful in the hands of the grantee than in our own do we want to provide it.

Since the 1970s, Boulding’s work has largely been forgotten. Perhaps because his definition of a grant, and the distinction between love and fear can be fuzzy at times, or because the scope of the theory is so vast. Nevertheless, the framework deserves credit for its potential to open our eyes to all the different ways in which resources are distributed. It can get us to think about the nature of our transactions.

Today, it may look like our economy is increasingly based on exchange. Whereas we used to call a friend to help us assemble a new IKEA couch, many people may now use Handy to book an hour of paid labor from someone they have never met. Later that day, they may log in to TaskRabbit to hire someone for an errand. With the help of modern technology, interactions that we would otherwise perform without asking much in return are becoming two-way transfers.

At the same time, exchange continues to fail us, making large numbers of people rely on grants. In 2016, one in seven Americans received food stamps. That’s 43 million people for whom exchange is not putting enough food on the table. On the other end of the income distribution, it may seem like things are different. But half of young adults (many with families of a high socio-economic status) rely on financial help from their parents. That’s a grant – typically with less of a stigma than food stamps – but a grant nonetheless.

These trends, and our potential path through the triangle, raise various questions. How equal is our access to grants? Should we supply more grants (even a basic income?) or should we boost exchange (perhaps with a job guarantee?)? Do we think we’re moving towards a system based on love, in which care for one another dominates? Or are we finding it tough to get grants out of people unless we threaten them? Is there an ideal point in the triangle? Can we get there? Ponder it. Boulding did too, and, being the only economist to sprinkle his books with poetry, he put his thoughts as follows:


Four things that give mankind a shove
Are threats, exchange, persuasion, love

But taken in the wrong proportions
These give us cultural abortions

For threats bring manifold abuses
In games where everybody loses

Exchange enriches every nation
But leads to dangerous alienation

Persuaders organize their brothers
But fool themselves as well as others

And love, with longer pull than hate
Is slow indeed to propagate

                                – Boulding, 1963

*Sometimes, of course, the lines between exchanges and grants are blurry. If we use taxes toward social security, and cash in at old age, that might be better described as a deferred exchange. If we, however, find ourselves on unemployment benefits, food stamps, rent support that we receive without having made an equal contribution, we can speak of a grant.