Category Archives: technological evolution

17/1/19: Why limits to AI are VUCA-rich and human-centric


Why will ethics, and a proper understanding of VUCA environments (environments characterized by volatility/risk, uncertainty, complexity and ambiguity), matter even more in the future than they do today? Because AI will require human control, and that control will run not along the axis of programming skills, but along ethical considerations and the demands of VUCA environments.

Here's a neat intro: https://qz.com/1211313/artificial-intelligences-paper-clip-maximizer-metaphor-can-explain-humanitys-imminent-doom/. The examples are neat, but now consider one of them, touched on in passing in the article: translation and interpretation. Near-perfect (native-level) language capabilities for AI are not only 'visible on the horizon', but approaching at break-neck speed. The hardware - a bio-tech link that can be embedded into our hearing and speech systems - is also 'visible on the horizon'. With that, routine translation-requiring exchanges, such as basic meetings and discussions that do not involve complex, ambiguous and highly costly terms, are likely to be automated or outsourced to AI. But there will remain the 'black swan' interactions - exchanges that carry huge costs of getting the meaning exactly right, and that also reflect the VUCA-type environment of the exchange (ambiguity and complexity are natural domains of semiotics). Here, human oversight over AI, and even human displacement of AI, will be required. And this oversight will not be based on the technical / terminological skills of translators or interpreters, but on their ability to manage ambiguity and complexity. That, and ethics...
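The oversight logic described above can be sketched as a simple escalation rule. This is a toy illustration, not any real system: the `Exchange` type, the confidence and stakes scores, and the threshold are all hypothetical assumptions, standing in for however a real platform would estimate ambiguity and the cost of getting a translation wrong.

```python
from dataclasses import dataclass

@dataclass
class Exchange:
    text: str
    model_confidence: float  # AI's own confidence in its translation, 0..1
    stakes: float            # estimated cost of a mistranslation, 0..1

def route(exchange: Exchange, threshold: float = 0.5) -> str:
    """Escalate to a human interpreter when the expected cost of error is high.

    Expected cost of error = (1 - confidence) * stakes. Routine, low-stakes
    exchanges stay automated; ambiguous, high-stakes ones go to a human.
    """
    expected_cost = (1.0 - exchange.model_confidence) * exchange.stakes
    return "human" if expected_cost > threshold else "ai"

# Routine meeting chatter: high confidence, low stakes -> stays automated
print(route(Exchange("see you at 3pm", model_confidence=0.98, stakes=0.1)))        # ai
# Contract negotiation: ambiguous term, very costly if wrong -> escalated
print(route(Exchange("liability shall be waived...", model_confidence=0.4, stakes=0.95)))  # human
```

The point of the sketch is that the switch to human control hinges on stakes and ambiguity, not on the quality of the translation engine itself.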

Another example is even closer to our times: AI-managed trading in financial assets. In normal markets, when there is a clear, stable and historically anchored trend for asset prices, AI is hard to beat in terms of the efficiency of trade placement and execution. By removing / controlling for our human behavioral biases, AI can effectively avoid big risk spillovers across traders and investors sharing the same information in the markets (although AI can also amplify some costly biases, such as herding). However, this advantage turns into a liability when markets are trading in a VUCA environment. When ambiguity about investor sentiment and/or direction, complexity of the counterparties underlying a transaction, or uncertainty about price trends enters the decision-making equation, algorithmic trading platforms face three sets of problems they must confront simultaneously:

  1. How do we detect the need for, structure, price and execute a potential shift in investment strategy (for example, from optimizing yield to maximizing portfolio resilience)? 
  2. How do we use AI to identify the points for switching from consensus strategy to contrarian strategy, especially if algos are subject to herding risks?
  3. How do we migrate across unstable information sets (as information fades in and out of relevance or stability of core statistics is undermined)?
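The first of these problems - detecting when the stable-trend assumption has broken and a strategy shift is needed - can be illustrated with a deliberately crude sketch. Everything here is a hypothetical stand-in: the rolling-volatility proxy, the threshold, and the two strategy labels are illustrative assumptions, not a description of how any actual platform works.

```python
import statistics

def detect_regime(returns, window=20, vol_threshold=0.02):
    """Classify the latest window of returns as 'normal' or 'vuca'.

    A crude proxy: when rolling volatility (std dev of recent returns)
    exceeds a threshold, the stable-trend assumption behind a
    yield-optimizing strategy is suspect.
    """
    if len(returns) < window:
        return "normal"  # not enough history to judge
    vol = statistics.stdev(returns[-window:])
    return "vuca" if vol > vol_threshold else "normal"

def choose_strategy(returns):
    """Map the detected regime to a strategy label (problem 1 above)."""
    return {"normal": "optimize_yield",
            "vuca": "maximize_resilience"}[detect_regime(returns)]

calm = [0.001, -0.002, 0.0015] * 10                 # low-volatility, trending market
stressed = calm + [0.05, -0.06, 0.04, -0.05] * 6    # volatility spike
print(choose_strategy(calm))      # optimize_yield
print(choose_strategy(stressed))  # maximize_resilience
```

Note what the sketch cannot do: volatility is only one of the four VUCA dimensions, and a backward-looking statistic tells the algorithm nothing about ambiguity of sentiment or instability of the underlying information set - which is exactly where the human judgment discussed below comes in.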

For a professional trader/investor, these are 'natural' spaces for decision making. They are also VUCA-rich environments, and environments in which errors carry significant costs. They can also coincide with ethical considerations, especially for mandated investment undertakings, such as ESG funds. As in the case of translation/interpretation, nuance can be more important than the core algorithm, and this is especially true when ambiguity and complexity rule.

12/7/18: Technology, Government Policies & Supply-Side Secular Stagnation


I have posted about the new World Bank report on Romania's uneven convergence experience in the previous post (here). One interesting chart in the report shows comparatives in labour productivity growth across a range of the Central European economies since the Global Financial Crisis.


The chart is striking! All economies, save Poland - the 'dynamic Tigers of CEE' prior to the crisis - have posted marked declines in labour productivity growth, as did the EU28 as a whole. When one recognises that the 2008-2016 period includes dramatic losses in employment, a rise in unemployment and exits from the labour force during the GFC and the subsequent Euro Area Sovereign Debt Crisis - all of which flattered labour productivity to the upside - the losses in productivity growth appear even more pronounced.

This, of course, dovetails naturally with the twin secular stagnations thesis I have been writing about in these pages before. In particular, this data supports the supply-side secular stagnation thesis, especially the technological re-balancing proposition: since the late 2000s, technological innovation has increasingly shifted the sources of economic value added away from labour and in favour of software/robotics/ICT forms of capital:

Human capital is the only offsetting factor for this trend of displacement. And it is lagging in the CEE:

But the problem is worse than simple tertiary education figures suggest. Current trends in technological innovation stress data intensity, AI and the full autonomy of technological systems from labour and human capital, which implies that even an educated and skilled workforce is no longer a buffer against displacement.

As a result, in countries like Romania, with huge slack in human capital and skills, investment is not flowing to education, training, entrepreneurship and other sources of human capital uplift:


Meanwhile, barriers to entrepreneurship remain, if they are not rising:


In effect, technological innovation in its current form is potentially driving down not only productivity growth, but also labour force participation. The result, as in the economies of the West:

  1. A notional large-scale decline in official unemployment (officially unemployed numbers are down)
  2. Significant lags in the recovery of labour force participation (hidden unemployed, permanently discouraged etc. numbers are up)
  3. The two factors somewhat offset each other, superficially boosting productivity growth (with real productivity likely even lower than the official figures suggest)

These three factors contribute to an expanding army of voters who are marginalised within the system.
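The arithmetic behind point 3 is worth making explicit. The numbers below are purely illustrative assumptions (not Romanian data): they show how job shedding alone can flatter the measured output-per-worker ratio even as total output falls.

```python
def labour_productivity(output, employed):
    """Measured labour productivity = output per employed worker."""
    return output / employed

# Hypothetical pre-crisis economy (illustrative numbers only)
pre_output, pre_employed = 100.0, 50.0
# Post-crisis: output falls 10%, but 10 workers drop out of employment
# (some counted as unemployed, some leaving the labour force entirely)
post_output, post_employed = 90.0, 40.0

print(labour_productivity(pre_output, pre_employed))    # 2.0
print(labour_productivity(post_output, post_employed))  # 2.25
# Measured productivity *rises* 12.5% even though total output fell:
# labour shedding flatters the ratio, masking the underlying decline.
```

This is why the productivity growth declines in the chart are, if anything, understated: the denominator effect has been working in the official figures' favour.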

Romania is a canary in the European secular stagnation mine.