Category Archives: AI

22/10/17: Robot builders' future: It’s all a game of Go…


This, perhaps, is the most important development in AI (Artificial Intelligence) to date: "DeepMind’s new self-taught Go-playing program is making moves that other players describe as 'alien' and 'from an alternate dimension'", as described in The Atlantic article published this week (The AI That Has Nothing to Learn From Humans - The Atlantic
https://www.theatlantic.com/technology/archive/2017/10/alphago-zero-the-ai-that-taught-itself-go/543450/?utm_source=atltw).

The importance of Google DeepMind's AlphaGo Zero program is not that it plays Go with a frightening level of sophistication. Instead, its true importance lies in the self-sustaining nature of a program that can learn independently of external information inputs, simply by playing against itself. In other words, Google has finally cracked the self-improving algorithm.
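The self-play idea can be illustrated with a deliberately tiny sketch. To be clear, this is not DeepMind's actual method (AlphaGo Zero combines deep neural networks with Monte Carlo tree search); it is a minimal tabular agent that learns a toy subtraction game purely by playing both sides against itself, with the final win/loss as its only external signal:

```python
import random

# Toy "subtraction game": players alternate removing 1-3 stones from a pile;
# whoever takes the last stone wins. Positions that are multiples of 4 are
# losing for the player to move -- a fact the agent must discover by self-play.

N = 12            # starting pile size
ACTIONS = (1, 2, 3)

def train(episodes=20000, eps=0.2, alpha=0.5, seed=0):
    """Tabular Monte-Carlo learning where one agent plays BOTH sides."""
    rng = random.Random(seed)
    Q = {(n, a): 0.0 for n in range(1, N + 1) for a in ACTIONS if a <= n}
    for _ in range(episodes):
        n = N
        history = []                      # (state, action) for each mover
        while n > 0:
            legal = [a for a in ACTIONS if a <= n]
            if rng.random() < eps:        # occasional exploration
                a = rng.choice(legal)
            else:                         # otherwise play greedily
                a = max(legal, key=lambda a: Q[(n, a)])
            history.append((n, a))
            n -= a
        # The player who made the LAST move won; credit alternates backwards,
        # since consecutive moves in the history belong to opposing players.
        reward = 1.0
        for (s, a) in reversed(history):
            Q[(s, a)] += alpha * (reward - Q[(s, a)])
            reward = -reward
    return Q

def best_move(Q, n):
    """Greedy move from a pile of n stones under the learned values."""
    legal = [a for a in ACTIONS if a <= n]
    return max(legal, key=lambda a: Q[(n, a)])
```

With no book of human games to imitate, the agent still converges on the optimal strategy (always leave the opponent a multiple of four), purely from the outcomes of games against itself.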

Yes, there is a 'new thinking' dimension to this as well. Again, quoting from The Atlantic: "A Go enthusiast named Jonathan Hop ...calls the AlphaGo-versus-AlphaGo face-offs “Go from an alternate dimension.” From all accounts, one gets the sense that an alien civilization has dropped a cryptic guidebook in our midst: a manual that’s brilliant—or at least, the parts of it we can understand."

But the real power of AlphaGo Zero version is its autonomous nature.

From the socio-economic perspective, this implies machines that can directly learn extremely complex, non-linear and creative tasks. This, in turn, opens up the prospect of AI writing its own code, as well as executing tasks that to date have been thought impossible for machines (e.g. combining referential thinking with creative thinking). The idea that human coding skills can ever keep up with this progression has now been debunked. Your coding and software engineering degree is not yet obsolete, but your kid's will be, and very soon.

Welcome to the AlphaHuman Zero, folks.  See yourself here?..


17/10/17: Intel Opens the Era of Unemployed Insurance Brokers…


If you have a job structuring, selling, marketing or monitoring/managing car insurance contracts, you should stop reading this now... because Intel has developed the first set of algorithmic standards for self-driving vehicles, which aims to ensure that no accident involving a self-driving vehicle can be blamed on the software that operates the vehicle.

How? Read some scant details here: https://www.bloomberg.com/news/articles/2017-10-18/intel-proposes-system-to-make-self-driving-cars-blameless.

What does this mean? If successful, regulatory algorithmic standards, most likely more advanced than the one developed by Intel, will mean that collisions involving self-driving vehicles will, by system definition, be blamed only on human drivers, cyclists and pedestrians. This will, de facto, perfectly standardise all insurance contracts covering vehicles other than those operated by people. The result will be a rapid collapse in demand for car insurance as we know it.
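The mechanism behind such a standard can be sketched in a few lines. Intel's published proposal ("Responsibility-Sensitive Safety") formalises duties such as keeping a minimum safe longitudinal gap, so that fault in a rear-end collision can be assigned mechanically. The snippet below is a heavily simplified illustration of that idea, not Intel's actual formula; the constants (response time, acceleration and braking rates) are purely illustrative assumptions:

```python
def min_safe_gap(v_rear, v_front, rho=1.0, a_accel=3.0, b_min=4.0, b_max=8.0):
    """Simplified RSS-style minimum longitudinal gap in metres.
    Worst case assumed: the rear car accelerates at a_accel for its response
    time rho, then brakes gently at b_min, while the front car simultaneously
    brakes as hard as possible at b_max. Speeds are in m/s."""
    v_rear_after = v_rear + rho * a_accel       # rear speed after response time
    d_rear = (v_rear * rho + 0.5 * a_accel * rho**2
              + v_rear_after**2 / (2 * b_min))  # rear stopping distance
    d_front = v_front**2 / (2 * b_max)          # front stopping distance
    return max(d_rear - d_front, 0.0)

def rear_car_at_fault(gap, v_rear, v_front):
    """Mechanical blame assignment: the rear vehicle is at fault only if it
    failed to maintain the minimum safe gap before the collision."""
    return gap < min_safe_gap(v_rear, v_front)
```

The point is that once such a rule is codified, liability stops being a matter of actuarial judgement: a compliant software-driven vehicle can, by construction, never be the party that violated the rule.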

Instead of writing singular (albeit standardised) contracts to cover individual drivers (or the vehicles they drive), using actuarial risk models that attempt to identify the risk profiles of those drivers, the insurance industry will simply write a single contract to cover the software running millions of vehicles, plus a standard contract to cover the vehicle (hardware). There will be no room left for profit margins, for service / contract differentiation, for pricing variation, or for bundling of offers. In other words, there will be no need for the numerous marketing, sales, investigative, enforcement, actuarial etc. jobs currently populating the insurance industry. The car insurance sector will simply shrink to a duopoly (or thereabouts) providing a cash management service to owners of autonomous vehicles.

There will be lots of armchair-surfing for currently employed insurance industry specialists in the near future...


25/12/15: WLASZE: Weekend Links on Arts, Sciences and Zero Economics


Merry Christmas to all! And in the spirit of the holiday, it is time to revive my WLASZE: Weekend Links on Arts, Sciences and Zero Economics postings, which wilted away under the snowstorm of work and minutiae but deserve to be reinstated in 2016.

[Fortunately for WLASZE, and unfortunately for the die-hard economics readers of this blog, I suspect my work commitments in 2016 will be a little more balanced, allowing for this...]


Let's start with Artificial Intelligence - the folks at Ars Technica are running an excellent essay debunking some of the AI myths. Read it here. The list is pretty much on the money:

  • Is AI about machines that can think (in the human intelligence sense)? Answer: predictably, No.
  • Is AI capable of outstripping human ethics? Answer: not necessarily.
  • Will AI be a threat to humanity? Answer: not any time soon.
  • Can an AI system undergo a sudden singularity? Answer: too far away to tell, and doubtful even then.
The topic is hugely important, extremely exciting and virtually open-ended. Perhaps of interest, I wrote back in 2005 about the non-linearity and discontinuity of our intelligence as a 'unique' identifier of humanity. The working paper on this (I have not revisited it since 2005) is still available here.

And to top the topic up, here is a link on advances in robotics over the grand year of 2015: http://qz.com/569285/2015-was-a-year-of-dumb-robots/. The title says it all... "dumb robots"... or does it?..

Update: another thought-provoking essay - via QZ - on the topic of AI and its perceived dangers. A quote summarising the story:
"Elon Musk and Stephen Hawking are right: AI is dangerous. But they are dangerously wrong about why. I see two fairly likely futures:

  • Future one: AI destroys itself, humanity, and most or all life on earth, probably a lot sooner than within 1000 years.
  • Future two: Humanity radically restructures its institutions to empower individuals, probably via trans-humanist modification that effectively merges us with AI. We go to the stars."
Personally, I am not sure which future will emerge, but I am sure that there is only one future in which we - humans - can have a stable, liberty-based society. And it is the second one. Hence my concerns - expressed in public speeches and blog posts - with the effects of technological innovation and the emergence of the Gig-Economy on the fabric of our socio-economic interactions.

At any rate... that is a cool dystopian pic from QZ


Dangers of AI or not, I do hope we sort out architecture before robots either consume or empower us...

On the lighter side, or maybe the brighter side - for art cannot really be considered a lighter side - Saatchi Art is running its Best of 2015 online show here: http://www.saatchiart.com/shows/best-of-2015, which is worth a run through. It is loaded with younger and excitingly fresher works than those that make traditional art shows.

Like Jonas Fisch's vibrantly rough, Gears of Power 


All the way to the hyper-expressionist realism of Tom Pazderka, here is an example of his Elegies to Failed Revolutions, Right Wing Rock'n'Roll 



And for that Christmas spirit in us, by Joseph Brodsky, translated by Derek Walcott (for a double-Nobel take):


The air—fierce frost and pine-boughs.
We’ll cram ourselves in thick clothes,
stumbling in drifts till we’re weary—
better a reindeer than a dromedary.

In the North if faith does not fail
God appears as the warden of a jail
where the kicks in our ribs were rough
but what you hear is “They didn’t get enough.”

In the South the white stuff’s a rare sight,
they love Christ who was also in flight,
desert-born, sand and straw his welcome,
he died, so they say, far from home.

So today, commemorate with wine and bread,
a life with just the sky’s roof overhead
because up there a man escapes
the arresting earth—plus there’s more space.


Merry Christmas to all!

20/6/15: WLASze: Weekend Links of Arts, Sciences & zero economics


A couple of non-economics-related, but hugely important, links worth looking into... or an infrequent entry in my old series of WLASze: Weekend Links of Arts, Sciences and zero economics...

Firstly, via Stanford, we have a warning about the dire state of nature: http://news.stanford.edu/news/2015/june/mass-extinction-ehrlich-061915.html. A quote: "There is no longer any doubt: We are entering a mass extinction that threatens humanity's existence." If we think we can't even handle a man-made crisis of debt overhang in the likes of Greece, what hope do we have of handling an existential threat?

Am I overhyping things? Maybe. Or maybe not. As the population ages, our ability to sustain ourselves becomes increasingly dependent on better food, nutrition, quality of environment, etc. Not solely because we want to eat/breathe/live better, but also because of brutal arithmetic: the economic activity that sustains our lives depends on productivity. And productivity declines precipitously with an ageing population.
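That brutal arithmetic can be made concrete in two lines. The numbers below are entirely hypothetical, chosen only to show the mechanism: with output per worker unchanged, a fall in the working share of the population mechanically lowers income per head.

```python
def output_per_capita(productivity, workers, population):
    """Income per head = output per worker x share of the population working."""
    return productivity * workers / population

# Hypothetical illustration: productivity per worker is held fixed at 100,000,
# while ageing shifts the working share from 65% to 50% of the population.
young_society = output_per_capita(100_000, 65, 100)
aged_society = output_per_capita(100_000, 50, 100)
```

In this toy example income per head falls by roughly a quarter even though each worker produces exactly as much as before; offsetting that drag requires productivity per worker to rise, which is precisely what is at risk when natural systems degrade.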

So even if you think the extinction event is a rhetorical exaggeration by a bunch of scientists, the brutal (and even linear - forget complex) dynamics of our socio-economic models imply a serious and growing interconnection between our man-made shocks and the capacity of natural systems to withstand them.


Secondly, via Slate, we have a nagging suspicion that not everything technologically smart is... err... smart: "Meet the Bots: Artificial stupidity can be just as dangerous as artificial intelligence":
http://www.slate.com/articles/technology/future_tense/2015/04/artificial_stupidity_can_be_just_as_dangerous_as_artificial_intelligence.html.

"Bots, like rats, have colonized an astounding range of environments. …perhaps the most fascinating element here is that [AI sceptics'] warnings focus on hypothetical malicious automatons while ignoring real ones."

The article goes on to list examples of harmful bots currently populating the web. But it evades the key question posed in its heading: what if AI is not intelligent at all, but is superficially capable of faking intelligence to a degree? Imagine a world where we share space with bots that can replicate emotional, social, behavioural and mental intelligence to a high degree, but fail beyond a certain bound. What then? Will the average / median denominator of human interactions converge to that bound as well? Will we gradually witness the disappearance of the human capacity to bypass complex, but measurable or mappable, systems of logic, thus reducing the richness and complexity of our own world? If so, how soon will humanity become a slightly improved model of today's Twitter?


Thirdly, "What happens when we can’t test scientific theories?" via the Prospect Mag: http://www.prospectmagazine.co.uk/features/what-happens-when-we-cant-test-scientific-theories
"Scientific knowledge is supposed to be empirical: to be accepted as scientific, a theory must be falsifiable… This argument …is generally accepted by most scientists today as determining what is and is not a scientific theory. In recent years, however, many physicists have developed theories of great mathematical elegance, but which are beyond the reach of empirical falsification, even in principle. The uncomfortable question that arises is whether they can still be regarded as science."

The reason why this is important to us is that the question of falsifiability of modern theories is non-trivial to the way we structure our inquiry into reality: the distinction between art, science and philosophy becomes blurred when one body of knowledge relies exclusively on the tools used in another. So much so, that even the notion of knowledge, popularly associated with inquiry delivered via science, is usually not extendable to art and philosophy. Example in a quote: "Mathematical tools enable us to investigate reality, but the mathematical concepts themselves do not necessarily imply physical reality".

Now, personally, I don't give a damn whether something implies physical reality or not, as long as that something is not designed to support such an implication. Mathematics, therefore, is a form of knowledge, and we don't care whether it carries physical reality implications. But the physical sciences purport to hold a specific, qualitatively more important corner of knowledge: that of being grounded in 'reality'. In other words, the alleged supremacy of the physical sciences arises not from their superiority as fields of inquiry (the quality of insight is much higher in art, mathematics and philosophy than in, say, the biosciences or experimental physics), but from their superiority in application (gravity has more tangible applications to our physical world than, say, topology).

So we have a crisis of sorts for the physical sciences: their superiority has now run out of road and has to yield to the superiority of abstract fields of knowledge. Bad news for humanity: the deterministic nature of experimental knowledge is getting exhausted. With it, the determinism surrounding our concept of knowledge diminishes too. Good news for humanity: this does not change much. Whether or not string theory is provable is irrelevant to us. As soon as it becomes relevant, it will be, by Popperian definition, falsifiable. Until then, marvel at the infinite world of the abstract.