Category Archives: @MIIS

23/5/18: Contingent Workforce, Online Labour Markets and Monopsony Power


The promise of the contingent workforce and the technological enablement of the ‘shared economy’ is that today’s contingent workers, and workers using their own capital to supply services, are free agents, at liberty to set their own pay, working time, working conditions and employment terms in an open marketplace that creates no asymmetries between employers and themselves. In economic terms, then, the future of the technologically-enabled contingent workforce is one of reduced monopsonisation.

A reminder for the reader: monopsony, as defined in labour economics, is the market power of the employer over employees. In the past, monopsonies were primarily associated with 'company towns' - highly concentrated labour markets dominated by a single employer. This notion seemed to fade as transportation links between towns improved. In this context, the increasing penetration of technological platforms into the contingent / shared economy (e.g. the creation of shared platforms like Uber and Lyft) should contribute to a reduction in monopsony power and an increase in employee power.

Two recent papers - Azar, J A, I Marinescu, M I Steinbaum and B Taska (2018), “Concentration in US labor markets: Evidence from online vacancy data”, NBER Working Paper w24395, and Dube, A, J Jacobs, S Naidu and S Suri (2018), “Monopsony in online labor markets”, NBER Working Paper w24416 - dispute this proposition, finding empirical evidence that monopsony power is actually increasing thanks to technologically-enabled contingent employment platforms.

Online labour markets are a natural testing ground for the proposition that technological transformation can reduce the monopsony power of employers because, in theory, they offer nearly frictionless flows of information and jobs between contractors and contractees, transparent information about pay and employment terms, and low costs of switching from one job to another.

The latter study attempts to "rigorously estimate the degree of requester market power in a widely used online labour market – Amazon Mechanical Turk, or MTurk... the most popular online micro-task platform, allowing requesters (employers) to post jobs which workers can complete for."

The authors "provide evidence on labour market power by measuring how sensitive workers’ willingness to work is to the reward offered", by using the labour supply elasticity facing a firm (a standard measure of wage-setting (monopsony) power). "For example, if lowering wages by 10% leads to a 1% reduction in the workforce, this represents an elasticity of 0.1." To make their findings more robust, the authors use two methodologies for estimating labour supply elasticities:
1) An observational approach, which uses "data from a near-universe of tasks scraped from MTurk" to establish "how the offered reward affected the time it took to fill a particular task"; and
2) A randomised-experiments approach, which uses "experimental variation, and analyse data from five previous experiments that randomised the wages of MTurk subjects. This randomised reward-setting provides ‘gold-standard’ evidence on market power, as we can see how MTurk workers responded to different wages."

The authors "empirically estimate both a ‘recruitment’ elasticity (comparable to what is recovered from the observational data) where workers see a reward and associated task as part of their normal browsing for jobs, and a ‘retention’ elasticity where workers, having already accepted a task, are given an opportunity to perform additional work for a randomised bonus payment."

The findings from both approaches are strikingly similar. Both "provide a remarkably consistent estimate of the labour supply elasticity facing MTurk requesters. As shown in Figure 2, the precision-weighted average experimental requester’s labour supply elasticity is 0.13 – this means that if a requester paid a 10% lower reward, they’d only lose around 1% of workers willing to perform the task. This suggests a very high degree of market power. The experimental estimates are quite close to those produced using the machine-learning based approach using observational data, which also suggest around 1% reduction in the willing workforce from a 10% lower wage."
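The elasticity arithmetic quoted above can be made explicit with a minimal sketch. The 0.13 figure is from Dube et al. (2018); the function name here is illustrative, not from the paper's code, and the calculation uses the simple constant-elasticity approximation rather than the authors' estimation machinery.

```python
def workforce_change(elasticity: float, wage_change_pct: float) -> float:
    """Approximate % change in the willing workforce for a given % wage
    change, using the constant-elasticity approximation: %dL = e * %dw."""
    return elasticity * wage_change_pct


# A requester cutting the reward by 10% when facing elasticity 0.13:
loss = workforce_change(0.13, -10.0)
print(f"Workforce change: {loss:.1f}%")  # -1.3%, i.e. only ~1% of workers lost
```

The striking point is how small the response is: a competitive labour market would imply a very large (in the limit, infinite) elasticity, so a 10% pay cut costing only ~1% of the workforce signals substantial wage-setting power.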


To put these findings into perspective, "if requesters are fully exploiting their market power, our evidence implies that they are paying workers less than 20% of the value added. This suggests that much of the surplus created by this online labour market platform is captured by employers... [the authors] find a highly robust and surprisingly high degree of market power even in this large and diverse spot labour market."
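The "less than 20% of value added" figure follows from the standard textbook monopsony first-order condition, under which a profit-maximising employer facing labour supply elasticity e pays a wage equal to e/(1+e) of the marginal revenue product. This is a sketch of that classic result, not necessarily the authors' exact model.

```python
def wage_share_of_mrp(elasticity: float) -> float:
    """Share of marginal revenue product paid as the wage by a
    profit-maximising monopsonist facing labour supply elasticity e:
    w = MRP * e / (1 + e)."""
    return elasticity / (1.0 + elasticity)


share = wage_share_of_mrp(0.13)
print(f"Workers receive ~{share:.1%} of value added")  # ~11.5%, under 20%
```

With the estimated elasticity of 0.13, the implied wage share is about 11.5% of the marginal value created, consistent with the authors' "less than 20%" bound.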

In evolutionary terms, "MTurk workers and their advocates have long noted the asymmetry in market structure among themselves. Both efficiency and equality concerns have led to the rise of competing, ‘worker-friendly’ platforms..., and mechanisms for sharing information about good and bad requesters... Scientific funders such as Russell Sage have instituted minimum wages for crowd-sourced work. Our results suggest that these sentiments and policies may have an economic justification. ...Moreover, the hope that information technology will necessarily reduce search frictions and monopsony power in the labour market may be misplaced."

My take: the evidence on monopsony power in web-based contingent workforce platforms dovetails naturally with the evidence on the monopolisation of modern economies. Technological progress held the promise of freeing human capital from strict contractual limits on its returns, of delivering greater scope for technology-aided entrepreneurship and innovation, and of a contingent workforce environment generating greater returns to skills and labour. The new technologies are delivering the exact opposite: they appear to be aiding an ever greater transfer of power to technological, financial and even physical capital.

The 'free to work' nirvana ain't coming, folks.

9/10/17: Nature of our reaction to tail events: ‘odds’ framing


Here is an interesting article from Quartz on the Pentagon's efforts to fund satellite surveillance of North Korea’s missile capabilities via Silicon Valley tech companies: https://qz.com/1042673/the-us-is-funding-silicon-valleys-space-industry-to-spot-north-korean-missiles-before-they-fly/. However, the most interesting (from my perspective) bit of the article relates neither to North Korea nor to the Pentagon, nor even to Silicon Valley's role in U.S. efforts to stop nuclear proliferation. Instead, it relates to this passage from the article:



The key here is the example of the link between our human (behavioural) propensity to take action and the dynamic nature of tail risks or, put more precisely, deeper uncertainty (as I put it in my paper on the de-democratization trend, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2993535, deeper uncertainty as contrasted with Knightian uncertainty).

Deeper uncertainty involves a dynamic view of the uncertain environment in which potential tail events evolve before becoming quantifiable and forecastable risks. This environment differs from classical Knightian uncertainty insofar as the evolution of these events is not predictable and can run against perceptions or expectations that these events can be prevented, while at the same time providing no historical or empirical basis for assessing the actual underlying probabilities of such events.

In this setting, as opposed to the Knightian set-up with partially predictable and forecastable uncertainty, behavioural biases (e.g. confirmation bias, overconfidence, herding, framing, base rate neglect, etc.) apply. These biases alter our perception of the evolutionary dynamics of uncertain events and thus create a reference point of ‘odds’ of an event taking place. The ‘odds’ view evolves over time as new information arrives, but the ‘odds’ do not become probabilistically defined until very late in the game.

Deeper uncertainty, therefore, is not forecastable and our empirical observations of its evolution are ex ante biased to downplay one, or two, or all dimensions of its dynamics:
- Impact - the potential magnitude of uncertainty when it materializes into risk;
- Proximity - the distance between now and the potential materialization of risk;
- Speed - the speed with which both impact and proximity evolve; and
- Similarity - the extent to which our behavioral biases distort our assessment of the dynamics.

Knightian uncertainty is a simple, one-shot, non-dynamic tail risk. As such, it is similar both in terms of perceived degree of uncertainty (‘odds’) and the actual underlying uncertainty.

Now, materially, the upshot of these dimensions of deeper uncertainty is that in a centralised decision-making setting, e.g. in the Pentagon or in the broader setting of Government agencies, we only take action after uncertainty has transitioned into risk. The bureaucracy’s reliance on ‘expert opinions’ to assess the uncertain environment only reinforces some of the biases listed above. Experts generally do not deal with uncertainty; they are, instead, conditioned to deal with risks. Experts give zero weight to uncertainty until the moment the uncertain events become visible on the horizon, or when ‘the odds of an event change’, just as the story told by Andrew Hunter in the Quartz article linked above illustrates. In other words, until risk assessment of the uncertainty becomes feasible.

The problem with this is that by that time, reacting to the risk can be infeasible or even irrelevant, because the speed and proximity of the shock have been growing along with its impact during the deeper uncertainty stage. And, more fundamentally, because the nature of the underlying uncertainty has changed as well.

Take North Korea: the current state of uncertainty over North Korea’s evolving path toward fully-developed nuclear and thermonuclear capabilities is about the extent to which North Korea will be willing to use its nukes. Yet the risk assessment framework - including across a range of expert viewpoints - is about the evolution of the nuclear capabilities themselves. The train of uncertainty has left the station. But the ticket holders to policy formation are still standing on the platform, debating how North Korea can be stopped from expanding its nuclear arsenal. Yes, the risks of a fully-armed North Korea are now fully visible. They are no longer in the realm of uncertainty, as the ‘odds’ of a nuclear arsenal have become fully exposed. But dealing with these risks is no longer material to the future, which is shaped by a new level of visible ‘odds’ concerning how far North Korea will be willing to go with its arsenal in geopolitical positioning. Worse, beyond this there is a deeper uncertainty that is not yet in the domain of visible ‘odds’ - the uncertainty as to the future of the Korean Peninsula and the broader region, which involves much more significant players: China and Russia vs Japan and the U.S.

The lesson here is that the centralised system of analysis and decision-making, e.g. the Deep State, to which we have devolved the power to create ‘true’ models of geopolitical realities is failing. Not because it is populated with non-experts or is under-resourced, but because it is Knightian in nature - dominated by experts and centralised. A decentralised system of risk management is more likely to provide broader coverage of deeper uncertainty not because it can ‘see deeper’, but because, competing for targets or objectives, it can ‘see wider’, covering more risk and uncertainty sources before the ‘odds’ become significant enough to allow for actual risk modelling.

Take the story told by Andrew Hunter, which relates to the Pentagon's procurement of the Joint Light Tactical Vehicle (JLTV) as a replacement for the Humvee, exposed as inadequate by the events in Iraq and Afghanistan. The monopoly contracting nature of Pentagon procurement meant that until the Pentagon was publicly shown to be incapable of adequately protecting U.S. troops, no one in the market was monitoring the uncertainties surrounding the Humvee's performance and adequacy in the light of rapidly evolving threats. Had the Pentagon's procurement been more distributed and less centralised, alternative vehicles could have been designed and produced - and shown to be superior to the Humvee - under other supply contracts much earlier, in fact before the expert-procured Humvees cost thousands of American lives.

There is a basic, fundamental failure in our centralised public decision-making bodies - a failure that combines an inability to think beyond the confines of quantifiable risks with an inability to actively embrace the world of VUCA (volatility, uncertainty, complexity and ambiguity), a world that requires the active engagement of contrarians not only in risk assessment, but in decision making. That this failure is being exposed in the cases of North Korea, geopolitics and Pentagon procurement is only the tip of the iceberg. The real bulk of the challenges relating to this modus operandi of our decision-making bodies rests in far more prevalent and better distributed threats, e.g. cybersecurity and terrorism.

16/5/17: Insiders Trading: Concentration and Liquidity Risk Alpha, Anyone?


Disclosed insiders trading has long been used by both passive and active managers as a common screen for value. With varying efficacy and time-unstable returns, the strategy is hardly a convincing factor in terms of identifying specific investment targets, but can be seen as a signal for validation or negation of a previously established and tested strategy.

Much of this corresponds to my personal experience over the years, and is hardly controversial. However, despite sufficient evidence to the contrary, insiders’ disclosures are still routinely used for simultaneous asset selection and strategy validation. Which, of course, sets an investor up to absorb the risks inherent in any and all biases present in the insiders’ activities.

In their March 2016 paper, titled “Trading Skill: Evidence from Trades of Corporate Insiders in Their Personal Portfolios”, Ben-David, Itzhak, Justin Birru and Andrea Rossi (NBER Working Paper w22115: http://ssrn.com/abstract=2755387) looked at “trading patterns of corporate insiders in their own personal portfolios” across a large dataset from a retail discount broker. The authors “…show that insiders overweight firms from their own industry. Furthermore, insiders earn substantial abnormal returns only on stocks from their industry, especially obscure stocks (small, low analyst coverage, high volatility).” In other words, insiders’ returns are not distinguishable from a liquidity risk premium, which makes insiders-strategy alpha potentially as dumb as a blind ‘long lowest percentile returns’ strategy (which induces extreme bias toward bankruptcy-prone names).

The authors also “… find no evidence that corporate insiders use private information and conclude that insiders have an informational advantage in trading stocks from their own industry over outsiders to the industry.”

Which means that using insiders’ disclosures requires: (1) correcting for the proximity of the insider’s own firm to the specific sub-sector and firm the insider is trading in; (2) using a diversified base of tracked insiders; and (3) systematically rebalancing the portfolio to avoid concentration bias toward stocks with low liquidity and smaller capitalisation (keeping in mind that this applies to both portfolio strategy and portfolio trading risks).
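The three corrections above can be sketched as a simple screening pipeline. This is a minimal illustration, not an implementation of the paper's methodology: the field names (`insider_industry`, `avg_daily_volume`, etc.) and the thresholds are hypothetical, not from any real data vendor's schema.

```python
from dataclasses import dataclass


@dataclass
class InsiderTrade:
    ticker: str
    insider_id: str
    insider_industry: str    # industry of the insider's own firm
    trade_industry: str      # industry of the traded stock
    market_cap: float        # USD
    avg_daily_volume: float  # shares per day


def screen(trades, min_cap=2e9, min_adv=1e6, min_insiders=3):
    """(1) keep only own-industry trades, where the signal is found;
    (2) require several distinct insiders per name (diversified base);
    (3) drop small / illiquid names to avoid the liquidity-risk bias."""
    eligible = [t for t in trades
                if t.insider_industry == t.trade_industry
                and t.market_cap >= min_cap
                and t.avg_daily_volume >= min_adv]
    insiders_by_ticker = {}
    for t in eligible:
        insiders_by_ticker.setdefault(t.ticker, set()).add(t.insider_id)
    return sorted(tk for tk, ids in insiders_by_ticker.items()
                  if len(ids) >= min_insiders)
```

The point of the sketch is the ordering of the filters: the own-industry restriction isolates where the informational advantage lives, while the cap and liquidity floors strip out the obscure-stock names whose "alpha" is largely a liquidity risk premium.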


9/11/16: Bitcoin vs Ether: MIIS Students Case Study


Following last night's election results, Bitcoin rose sharply in value, in line with gold, while other digital currencies largely failed to provide a safe haven against the extreme spike in markets volatility.

In a recent project, our students @MIIS have looked at the relative valuation of Bitcoin and Ether (the cryptocurrency backing the Ethereum blockchain platform), highlighting:

  1. The fundamental supply and demand drivers for both currencies; and
  2. Both currencies’ hedging and safe haven properties.
The conclusion of the case study was squarely in line with Bitcoin and Ether behaviour observed today: Bitcoin outperforms Ether as both a hedge and a safe haven, and has stronger risk-adjusted returns potential over the next 5 years.
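One common way to operationalise the hedge vs safe haven distinction (in the spirit of Baur and Lucey's definitions) is to estimate an asset's beta to the market overall, and separately on days in the market's worst tail: a hedge has non-positive beta on average, a safe haven has non-positive beta in crisis periods. The sketch below illustrates that test; it is an assumed methodology, not necessarily what the students' case study used.

```python
import numpy as np


def hedge_and_safe_haven_betas(asset_ret, market_ret, crisis_q=0.05):
    """OLS beta of asset returns on market returns overall (hedge test)
    and on days in the market's worst `crisis_q` quantile (safe haven
    test). A beta <= 0 qualifies on the respective test."""
    asset_ret = np.asarray(asset_ret, dtype=float)
    market_ret = np.asarray(market_ret, dtype=float)

    def beta(a, m):
        return np.cov(a, m, ddof=1)[0, 1] / np.var(m, ddof=1)

    overall = beta(asset_ret, market_ret)
    crisis = market_ret <= np.quantile(market_ret, crisis_q)
    tail = (beta(asset_ret[crisis], market_ret[crisis])
            if crisis.sum() > 2 else float("nan"))
    return overall, tail
```

Applied to daily returns, an asset like the Bitcoin behaviour described above would show a negative tail beta against equities on a day like the one following the election, which is precisely the safe haven property.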