‘Thanks for reminding me’ – How our emotions, past experiences, judgements, sentimentality and sense of attachment interfere with understanding and coming to conclusions about the things that matter most. (+ The philosophy of: Nick Bostrom & one of my students)

The problem with the things we don’t want to talk about is exactly that: we don’t want to talk about them. But often, and I would argue always, the things we most consciously avoid talking about are the things we most need to talk about, because they are the most important things.

Even in trivial instances, say when you remind someone of the time they did something embarrassing, or the time their sports team lost, they might respond with a (perhaps tongue-in-cheek) ‘thanks for reminding me.’ It is said lightly, but it points to a deeper truth: on some level the thing has affected them, and still does.

And this goes for any serious discussion about anything that needs a solution.

On Triple J the other day they spoke with an actress from Black Mirror, a show that explores the negative implications of future (and present) technology. Ending the segment, the host finished with: ‘Black Mirror – Watch it. It’s terrifying, but it’s amazing.’

There’s a conditioning that has occurred within humans: anything of great importance, particularly anything existential (death, pain, our own mortality), has to become an emotional discussion. The problem is that once emotions get dragged into it, it’s no longer an intelligent discussion. It becomes a juvenile back-and-forth about something we now want to stop discussing because it makes us uncomfortable.

One of my students has a rare level of self-awareness, awareness of those around her, and balance, which in my opinion is the foundation of a content and fulfilling life (to yourself and to others). Yet when she discusses the principle of things, their impermanence and their fundamental nature stripped of sentiment and emotion, or points out things that don’t matter (though most people think they should), those around her call her cold, or insensitive, or a robot.

But that’s missing the point. There are times when it is incredibly valuable and wonderful to be sentimental and emotional. But when those feelings come at the expense of our tranquility and emotional balance, or arise at the times when it’s important to see things for what they are (and work out how to deal with them), they are at best a hindrance, at worst a roadblock.

Ironically, to avoid a future in which there is no emotion, nor any of the human things we value, we must discuss that future without emotion.

This tendency to reduce intelligent, open-forum discussions from balanced, common-sense debates into emotional ones happens very frequently around the biggest existential threat we have perhaps ever faced or will ever face: technology and Artificial Intelligence. Conversations on this topic are often, at best, preceded by dumbing-down disclaimers like ‘So… right now you must be feeling pretty down about the impending doom of the universe…’ (or similar qualifiers), or, at worst, stopped dead in their tracks because the discussion is bumming people out.

But people are likely to feel much more bummed out when our poor ability to tackle these issues head-on leads to a future where either our liberties are further compromised, or humanity is wiped out entirely and not even alive to feel bummed out anymore.

At the end of Superintelligence, Nick Bostrom goes one step further, discussing the need to put aside analyses of our existence and how we came to be, and to focus instead on having open (emotionless) discussions about our tenuous grasp on our future existence:

Philosophy covers some problems that are relevant to existential risk mitigation… Yet there are also subfields within philosophy that have no apparent link to existential risk or indeed any practical concern. As with pure mathematics, some of the problems that philosophy studies might be regarded as intrinsically important, in the sense that humans have reason to care about them independently of any practical application. The fundamental nature of reality, for instance, might be worth knowing about, for its own sake. The world would arguably be less glorious if nobody studied metaphysics, cosmology or string theory. However, the dawning prospect of an intelligence explosion shines a new light on this ancient quest for wisdom.

The outlook now suggests that philosophic progress can be maximised via an indirect path rather than by immediate philosophising. One of the many tasks on which superintelligence (or even just moderately enhanced human intelligence) would outperform the current cast of thinkers is in answering fundamental questions in science and philosophy. This reflection suggests a strategy of deferred gratification. We could postpone work on some of the eternal questions for a little while, delegating that task to our hopefully more competent successors – in order to focus our own attention on a more pressing challenge: increasing the chance that we will actually have competent successors. This would be high-impact philosophy and high-impact mathematics.’*

And why? He goes on to say:


The intelligence explosion may still be many decades off in the future. Moreover, the challenge we face is, in part, to hold on to our humanity: to maintain our groundedness, common sense, and good-humoured decency even in the teeth of this most unnatural and inhuman problem. We need to bring all our human resourcefulness to bear on its solution.

Yet let us not lose track of what is globally significant. Through the fog of everyday trivialities, we can perceive – if but dimly – the essential task of our age.

————

Bostrom qualifies this statement by saying that it is not important that all mathematicians or philosophers (or people in other fields) steer away from abstract questions and tasks that are not entirely of practical use. He merely suggests that, ‘at the margins’, some of the best minds may better serve humanity by focusing on mitigating those aforementioned existential threats, before our best minds are perhaps superseded by an intelligence we may or may not have control over.