Category Archives: AI

17/1/19: Why limits to AI are VUCA-rich and human-centric


Why will ethics, and a proper understanding of VUCA environments (environments characterized by volatility/risk, uncertainty, complexity and ambiguity), matter even more in the future than they do today? Because AI will require human control, and that control won't happen along the programming-skills axis; it will trace ethical and VUCA-environment considerations.

Here's a neat intro: https://qz.com/1211313/artificial-intelligences-paper-clip-maximizer-metaphor-can-explain-humanitys-imminent-doom/. The examples are neat, but now consider one of them, touched on in passing in the article: translation and interpretation. Near-perfect (native-level) language capabilities for AI are not only 'visible on the horizon', they are approaching at breakneck speed. A hardware-biotech link that can be embedded into our hearing and speech systems is also 'visible on the horizon'. With that, routine translation-requiring exchanges, such as basic meetings and discussions that do not involve complex, ambiguous and highly costly terms, are likely to be automated or outsourced to AI. But there will remain the 'black swan' interactions: exchanges that carry huge costs if their meaning is not captured exactly right, and that trace the VUCA-type environment of the exchange (ambiguity and complexity are natural domains of semiotics). Here, human oversight over AI, and even human displacement of AI, will be required. And this oversight will not be based on the technical / terminological skills of translators or interpreters, but on their ability to manage ambiguity and complexity. That, and ethics...

Another example is even closer to our times: AI-managed trading in financial assets. In normal markets, when there is a clear, stable and historically anchored trend in asset prices, AI can't be beaten on the efficiency of trade placement and execution. By removing / controlling for our human behavioral biases, AI can effectively avoid big risk spillovers across traders and investors sharing the same information in the markets (although AI can also amplify some costly biases, such as herding). However, this advantage turns into a loss when markets are trading in a VUCA environment. When ambiguity about investor sentiment and/or direction, or the complexity of counterparties underlying a transaction, or uncertainty about price trends enters the decision-making equation, algorithmic trading platforms face three sets of problems they must confront simultaneously:

  1. How do we detect the need for, structure, price and execute a potential shift in investment strategy (for example, from optimizing yield to maximizing portfolio resilience)? 
  2. How do we use AI to identify the points for switching from consensus strategy to contrarian strategy, especially if algos are subject to herding risks?
  3. How do we migrate across unstable information sets (as information fades in and out of relevance, or as the stability of core statistics is undermined)? (A minimal sketch of this problem follows the list.)
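
To make problem 3 concrete, here is a minimal sketch in Python of one possible instability check (the window length, threshold and simulated returns are all illustrative assumptions, not anyone's production rule): when the mean return over the most recent window drifts too many standard errors away from its long-run value, the 'core statistic' can no longer be treated as stable, and the algorithm should flag the information set for human review.

```python
import numpy as np

def unstable_information_set(returns: np.ndarray,
                             window: int = 60,
                             z_threshold: float = 3.0) -> bool:
    """Crude stability check on a 'core statistic' (the mean return):
    flag the information set as unstable when the most recent window's
    mean sits more than z_threshold standard errors away from the
    long-run mean estimated on the rest of the history."""
    if len(returns) < 2 * window:
        return False  # too little history to judge stability either way
    recent, history = returns[-window:], returns[:-window]
    se = history.std(ddof=1) / np.sqrt(window)  # std. error of a window mean
    return abs(recent.mean() - history.mean()) / se > z_threshold

# Illustrative use on simulated returns: a calm regime, then a sharp shift.
rng = np.random.default_rng(42)
calm = rng.normal(0.0005, 0.01, 500)    # stable, gently trending regime
shift = rng.normal(-0.01, 0.03, 60)     # regime change enters the data
print(unstable_information_set(calm))                           # expect False
print(unstable_information_set(np.concatenate([calm, shift])))  # expect True
```

A production system would use proper structural-break tests; the point is that deciding what to do once the flag trips is exactly where human, VUCA-aware judgment enters.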

For a professional trader/investor, these are 'natural' spaces for decision-making. They are also VUCA-rich environments, and environments in which errors carry significant costs. They can also coincide with ethical considerations, especially for mandated investment undertakings, such as ESG funds. As in the case of translation/interpretation, nuance can be more important than the core algorithm, especially when ambiguity and complexity rule.

16/10/18: Data analytics. It really is messier than you thought


An interesting study (H/T to @stephenkinsella) highlights a problem with the empirical determinism that underpins our evolving human trust in 'Big Data' and 'analytics': the lack of determinism in statistics when it comes to social, business, financial and similar data.

Here is the problem: the researchers put together 29 independent teams, comprising 61 analysts. They gave these teams the same data set on football referees' decisions to give red cards to players, and asked them all to evaluate the same hypothesis: whether football "referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players".

Due to variation in the analytic models used, the estimated models produced a range of answers, with the effect of a player's skin color on red-card issuance running from 0.89 at the lower end of the range to 2.93 at the higher end, and a median effect of 1.31. Per the authors, "twenty teams (69%) found a statistically significant positive effect [meaning they found skin color to have an effect on referees' decisions], and 9 teams (31%) did not observe a significant relationship" [meaning no effect of the players' skin color was found].

To eliminate the possibility that analysts' prior beliefs could have influenced their findings, the researchers controlled for such beliefs. In the end, prior beliefs did not explain the differences in findings. Worse, "peer ratings of the quality of the analyses also did not account for the variability." Put differently, the vast differences in results cannot be explained by the quality of the analysis or by priors.

The authors conclude that even absent biases and personal prejudices of the researchers, "significant variation in the results of analyses of complex data may be difficult to avoid... Crowdsourcing data analysis, a strategy in which numerous research teams are recruited to simultaneously investigate the same research question, makes transparent how defensible, yet subjective, analytic choices influence research results."
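
The mechanism is easy to reproduce. Below is a toy illustration in Python (the data are simulated and the confounder structure is an assumption made for the example; this is not the paper's dataset or any of its 29 models): two defensible specifications of the same question, fit to identical observations, return visibly different odds ratios.

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-in for the red-card data (illustrative only): player
# position is a confounder, correlated with both skin tone and red cards.
rng = np.random.default_rng(0)
n = 5_000
position = rng.binomial(1, 0.5, n)                    # e.g. defender vs. not
skin_tone = rng.binomial(1, 0.3 + 0.3 * position, n)  # correlated with position
logit_p = -3.0 + 0.2 * skin_tone + 1.0 * position     # 'true' skin-tone OR ~ 1.22
red_card = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)), n)

# Specification A: skin tone only -- a defensible 'simple' model.
fit_a = sm.Logit(red_card, sm.add_constant(skin_tone)).fit(disp=False)
# Specification B: skin tone plus the confounder -- equally defensible.
exog_b = sm.add_constant(np.column_stack([skin_tone, position]))
fit_b = sm.Logit(red_card, exog_b).fit(disp=False)

# Same data, two defensible models, two visibly different odds ratios.
print(np.exp(fit_a.params[1]), np.exp(fit_b.params[1]))
```

Neither specification is 'wrong'; the divergence comes entirely from a defensible modeling choice, which is precisely the paper's point.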

Good luck putting much trust into social data analytics.

Full paper is available here: http://journals.sagepub.com/doi/pdf/10.1177/2515245917747646.

23/5/18: American Exceptionalism, Liberty and… Amazon


"And the star-spangled banner in triumph shall wave
O'er the land of the free and the home of the brave!"

The premise of American Exceptionalism rests on the hypothesis of a State founded on the principles of liberty.

Enter Amazon, a corporation ever hungry for revenues, and the State, a corporation ever hungry for power and control. Per reports (https://www.aclunc.org/blog/amazon-teams-law-enforcement-deploy-dangerous-new-face-recognition-technology), Amazon "has developed a powerful and dangerous new facial recognition system and is actively helping governments deploy it. Amazon calls the service 'Rekognition'."

As ACLU notes (emphasis is mine): "Marketing materials and documents obtained by ACLU affiliates in three states reveal a product that can be readily used to violate civil liberties and civil rights. Powered by artificial intelligence, Rekognition can identify, track, and analyze people in real time and recognize up to 100 people in a single image. It can quickly scan information it collects against databases featuring tens of millions of faces, according to Amazon... Among other features, the company’s materials describe “person tracking” as an “easy and accurate” way to investigate and monitor people."
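
To appreciate how low the barrier to such deployment is, consider a minimal sketch against the public boto3 SDK (the region, collection name and image files below are hypothetical placeholders; this illustrates the published API surface, not any specific law-enforcement system): a handful of calls suffices to build a watchlist and search camera frames against it.

```python
import boto3

# Hypothetical client and identifiers; shows the public Rekognition API only.
rekognition = boto3.client("rekognition", region_name="us-west-2")

# Index a known face into a collection (the 'watchlist').
rekognition.create_collection(CollectionId="demo-watchlist")
with open("known_face.jpg", "rb") as f:
    rekognition.index_faces(CollectionId="demo-watchlist",
                            Image={"Bytes": f.read()},
                            ExternalImageId="person-001")

# Search a new image (e.g. a camera frame) against the collection.
with open("camera_frame.jpg", "rb") as f:
    resp = rekognition.search_faces_by_image(CollectionId="demo-watchlist",
                                             Image={"Bytes": f.read()},
                                             FaceMatchThreshold=80,
                                             MaxFaces=5)
for match in resp["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```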

As I noted elsewhere on this blog, the real threat to American liberal democracy comes not from external challenges, attacks and shocks, but from the internal erosion of liberal democratic institutions, followed by the decline of public trust in, and engagement with, these institutions. The enemy of America is within, and companies like Amazon are facilitating the destruction of American liberty, aiding and abetting unscrupulous and power-hungry governments, local, state and beyond.