Truth and laughter
Slate Star Codex has another great post: If the media reported on other dangers like it does AI risk. The new airborne superplague is said to be 100% fatal, totally untreatable, and able to spread...
Anthropic negatives
Stuart Armstrong has come up with another twist on the anthropic shadow phenomenon. If existential risk needs two kinds of disasters to coincide in order to kill everybody, then observers will notice...
Threat reduction Thursday
Today seems to have been “doing something about risk”-day. Or at least, “let’s investigate risk so we know what we ought to do”-day. First, the World Economic Forum launched their 2015 risk perception...
Canine mechanics and banking
There are some texts that are worth reading, even if you are outside the group they are intended for. Here is one that I think everybody should read at least the first half of: Andrew G Haldane and...
“A lump of cadmium”
Cadmium crystal and metal. From Wikimedia Commons, creator Alchemist-hp 2010. Stuart Armstrong sent me this email: I have a new expression: “a lump of cadmium”. Background: in WW2, Heisenberg was...
1957: Sputnik, atomic cooking, machines that code & central dogmas
What have we learned since 1957? Did we predict what it would be? And what does it tell us about our future? Some notes for the panel discussion “‘We’ve never had it so good’ – how does the world today...
All models are wrong, some are useful – but how can you tell?
Our whitepaper about the systemic risk of risk modelling is now out. The topic is how the risk modelling process can make things worse – and ways of improving things. Cognitive bias meets model risk...
The hazard of concealing risk
Review of Man-made Catastrophes and Risk Information Concealment: Case Studies of Major Disasters and Human Fallibility by Dmitry Chernov and Didier Sornette (Springer). I have recently begun to work...
The capability caution principle and the principle of maximal awkwardness
The Future of Life Institute discusses the Capability Caution Principle: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities. It is an...
Catastrophizing for not-so-fun and non-profit
Oren Cass has an article in Foreign Affairs about the problem of climate catastrophizing: it is driven by motivated reasoning, but also drives motivated reasoning in a vicious...