Scarlett Johansson leaps to your lips. An interview with R. Scott Bakker


Interview by Primož Krašovec & Marko Bauer

There comes a time when “theory” just stops working. The one thing that was once seen (and used) as a weapon against the crushing stupidity of common sense (or ideology, dominant discourse, whatever you wanna call it) turns out to be a form of bigotry and prejudice against ordinary ways of thinking and speaking. To make matters worse, theory turns out to be not science (except in its self-aggrandizing delusions), but a form of commonish sense that is no less stereotypical, thought-inhibiting and philistine than anything we used to despise as ideology. Theory is just shit (literally) theoreticians say.

So what now? Exiting endless and soul-destroying discussions on Althusser or Lacan is the easy part, but if one does not wish to relinquish reading, learning and thinking about things altogether, finding something new might be the hard part. In this sense, discovering R. Scott Bakker’s blog Three Pound Brain had a therapeutic function for us: his entry point into the problems we still obsess about was that of a convert who saw through theory and found new ways to approach the problems of ideology, linguistics and human cognition in general, as well as their transformations through encounters with new technologies; ways that at the same time do not shy away from the traditional taboos theory imposes on itself (like its characteristically dismissive attitude towards biology, evolution, empirical science and “techne”).

So our intention (yeah) going into this interview was to stop pretending that everything we need to know about what’s going on with artificial intelligence, cognitive technologies and the way we ourselves operate cognitively was somehow already solved in this or that tome of German classical philosophy. 

Scott Bakker lives and writes in London, Ontario. Allegedly, Baudrillard’s America should be called Canada. Besides his blog and theoretical essays, Bakker is best known for his epic fantasy series The Second Apocalypse, the psychothriller Neuropath and the short story Crash Space.

 

Šum: The Blind Brain Theory[1] is peculiar in a way that makes one ask oneself: Is it an anti-Berkeleian Berkeleian argument, kind of eccentric but irrefutable, or is it a Copernican shift? Maybe it’s both. As Steven Shaviro would say, we are/will be compelled to accept the Argument. According to you, it is probably already enacting, executing itself.

Bakker: Yeah, I mean the Blind Brain Theory is basically an empirical theory. It will be sorted by the science of the near future, I think, I hope. And to that extent it has an awkward fit with the tradition of a priori thought regarding the soul and all things human. Just simply because it stakes out an empirical ground and because it has numerous ways in which it could be tested, it finds itself almost by default Copernican insofar as it’s really going to be hard to actually attack the theory from a priori grounds. The proof really is in the empirical pudding. If metacognition looks like what I say it looks like, then that is what cognitive neuroscience will eventually discover. The weird part of it is that philosophical reflection stands at the root of all a priori philosophical claims regarding the nature of the soul. And so as a result, if the empirical theory of philosophical reflection actually finds vindication in cognitive neuroscience, then it’s going to have repercussions that are nothing short of Copernican. It will turn everything upside down. And not for the better, either, I don’t think. It is very, very peculiar insofar as it sort of stands astride the subject matter of traditional philosophy and yet stubbornly insists on keeping its feet in the empirical mud.

How that fits into the Argument is kind of more complicated. BBT basically explodes all prior conceptions of what it means to be human. It’s in the meat of things, it’s in what we are, it’s part of the same mechanisms that break down and kill us when we have a stroke. What you could expect is that cognitive technologies, as they advance, will actually make the Argument for me in a strange way. So if I’m right about metacognition, then the Argument, this notion that human meaning is on the verge of collapse, actually follows from what we have hitherto called technological progress. Progress will actually dismantle all exceptionalist claims, leaving only some sort of mediocre residue and we will find ourselves on the far side of the looking glass.

Šum: What would be the everyday ethical consequences of BBT, given that our sense of ethical behaviour is intentionally conditioned, in the sense that you can be called upon to explain your actions? For if one does away with any intentionality, then being called to explain an action can become impossible, since at least in our everyday life we have no access to the real causes of our actions; we are blind to ourselves.

Bakker: In one big sense there is no answer to your question. Simply because BBT suggests that we should expect—as cognitive technologies proliferate throughout our social ecosystems and as more and more deep environmental information is imported—a massive spread of the kinds of information our ancestors never ever dreamed of having to deal with. And, even more importantly, never evolved the equipment they required to be able to deal with it. As that happens, you’re basically looking at ecological destruction. This is what I call the semantic apocalypse.

As it turns out human social cognition is radically heuristic. When you and I are having a conversation the humanistic conceit is that we are in some way, shape or form representing in our minds what’s in the other person’s mind. We have all kinds of supposed mechanisms in our brain that actually allow us to peer into each other’s soul representationally. If you take all the representations out of that picture and simply look at it as a physical give and take, then suddenly the situation between two individuals looks completely different. What I am doing is cueing an assumption on your part which you report, which cues an assumption on my part and back and forth and so on. There is no actual peering into one another’s brains, what we are doing is simply working through a kind of social algorithm where you’re one half and I’m the other half. Given enough ancestral consistency in our contexts that algorithm generally will lead to some sort of happy conclusion. But now we have cognitive technologies flooding our ancestral social cognitive ecologies and it’s no longer the case that it’s me pushing your buttons or you pushing my buttons, there’s also all these gadgets pushing our buttons. They have actually been designed to push our buttons in commercially exploitable ways and as it turns out it’s really easy to do that.

I lay this out most clearly in the post “Artificial Belonging”.[2] The basic idea is that information technology generally, artificial intelligence more specifically, are a kind of social habitat destruction and these alien species are invading our ecosystems. Imagine ten years from now, when there’s a hundred thousand chatbots for every human being on the planet. All this button pushing which evolved when there were only humans around and which was at least functional enough to allow our ancestors to muddle along, all these ecosystems are going to be completely gone. You’ll have people making all kinds of moral judgements, all kinds of moral assertions as to who’s right, who’s wrong, who’s good, who’s bad. Instead of that playing itself out in the ancestral ecology, it will be playing itself out in an ecology that’s been overrun, trammelled, invaded by systems that have absolutely no stake as agents, whatever that means, in our cognitive ecologies. It ends up being a crash space insofar as morality is simply going to become more and more dysfunctional. The moral judgements that we used to use to solve problems will suddenly no longer solve those problems. It will be gradual in some cases, it will just become less and less effective, and in other cases we can expect that it will be catastrophic.

Look at the number of students coming into classrooms with accommodations now. Back in my day we were allowed to call students ‘lazy’ or ‘misguided’, we were allowed to make character judgements regarding students who didn’t seem to be actually performing quite so well. Now that we have all this deep ecological information that allows us to identify the mechanisms that underwrite the behaviour we used to pass moral judgement on, all of a sudden we can no longer pass moral judgment on that. Because the heuristic is: if there is a causal, neurocognitive explanation for an action, that action is no longer intentional, ergo it’s no longer proper to pass character judgement on that individual. Anytime anything goes wrong for them, they can always claim they couldn’t have done otherwise. That is just going to simply increase, increase, increase, increase. I think that’s a simplistic toy model but I think it’s a good toy model to get a sense of just how morality will progressively become more and more difficult and eventually outright impossible. At least in anything we recognize via intuitions.
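
[A minimal sketch of the kind of toy model described above, offered purely as our illustration rather than Bakker’s own formalism; the agents, cue names and numbers are invented. It shows one reading of the “button-pushing” idea: a cheap cue-reading heuristic that is reliable in an all-human ecology degrades as the ecology fills with designed systems whose cues no longer track anything.]

```python
import random

# A purely illustrative toy model (our gloss, not Bakker's): each encounter
# supplies a cue, and a cheap heuristic leaps from cue to conclusion, trusting
# that cues correlate with states the way they did ancestrally. Designed
# "button-pushing" systems emit the reassuring cue regardless of their state,
# so the correlation the heuristic relies on erodes as they fill the ecology.

def encounter(is_bot: bool) -> tuple[str, str]:
    """Return (actual_state, emitted_cue) for one interlocutor."""
    state = random.choice(["friendly", "hostile"])
    cue = "friendly" if is_bot else state  # bots always push the 'friendly' button
    return state, cue

def heuristic(cue: str) -> str:
    """The ancestral shortcut: take the cue at face value."""
    return cue

def accuracy(n_encounters: int, bot_fraction: float) -> float:
    """How often the shortcut still gets the interlocutor's state right."""
    hits = sum(
        heuristic(cue) == state
        for state, cue in (encounter(random.random() < bot_fraction) for _ in range(n_encounters))
    )
    return hits / n_encounters

if __name__ == "__main__":
    for frac in (0.0, 0.5, 0.9, 0.99):
        print(f"bot share {frac:>4.0%}  heuristic accuracy {accuracy(10_000, frac):.2f}")
```

[With no bots the face-value heuristic is nearly always right; as the invented bot share approaches one, it does no better than chance, a cartoon of the “habitat destruction” being described, nothing more.]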

Šum: We see your work as a continuation of French moralists like La Rochefoucauld, a great cartographer of biases, or Nietzsche, who was actually a big fan of this kind of “psychology”. You also write aphorisms as they did. When we say you’re a moralist, we mean something in the vein of Ballard’s statement: “I feel we should immerse ourselves in the most destructive element, ourselves, and swim. I take it that the final destination of the 20th century, and the best that we can hope for in the circumstances, is the attainment of a moral and just psychopathology.” You seem to say, “it’s not just, it is just what it is”, but at the same time give the impression of being more than just a bit worried. So are these cautionary tales or are these tales about the inevitable?

Bakker: It all depends on how I feel when I wake up. When I’m feeling optimistic, they do feel like cautionary tales to me. When I’m feeling pessimistic, they feel like a version of miserabilism, a sort of wallowing in the ugly inevitable. There are endless bullshit stories I could tell myself; I could congratulate myself for being a hero trying to warn the rest of civilisation, an all-sacrificing Jeremiah or something like that. But in point of fact, I’m not that hero, and I’ve really, most of the time, no clue what the actual bases of my actions are, and I’m painfully aware of the degree to which we not simply habitually but compulsively rationalise our work and our statements. The only thing I’m sure of is that when it comes to the worst-case scenario, this Ballardian pathology, it has to be better that people have some sort of actual information than otherwise. It just simply can’t be the case that digging deeper and deeper into fantasy worlds is going to bring us anywhere better or actually allow us to string the party along for another generation or so. Obviously, between a world with Trump and a world without Trump, if I could write something that had the reach to actually tilt that balance towards the latter, then I could cash my check and drink with a more peaceful mind.

Šum: What’s the thing with Canada and all these hyperstitional crash zones: William Gibson’s cyber&meatspace, Cronenberg’s Videodrome, the non-spaces of Ballard—who was sent to a Royal Canadian Air Force training base in Saskatchewan, where he discovered sci-fi—your crash space etc.?

Bakker: Is this a question of what’s in the Canadian water? Why Canadians are so obsessed with crashes? Canada is a fiat community. We let in, on average, on a per capita basis, twice as many immigrants as any other country. At least that was the case, it’s been a few years since I’ve looked at the statistics, but historically we have let in more immigrants than any other country on this planet relative to our population. It has created an ethno-cultural crash space. Really, the only things that bind people once you bring so many of them from so many different organic contexts into a new context are commercial norms. So we are a cultureless culture in a strange sense. The only culture we share is commercial culture, crass commercial culture in many respects. If you’re an observant soul in this country, I think it gives you a feeling of having been kicked loose from history. You don’t feel quite as constrained or quite as bound by the past. Taking risks or envisioning collapses is easy to do here, where in effect history has already died.

Šum: You mean even more than in the States?

Bakker: Way more. I grew up in the shadow of the United States. I was stuck with American news when I was a child because where I lived, on the north shore of Lake Erie, all we got (with the exception of one channel) were American channels. So we used to watch the Olympics and it was always the American broadcast. Every time we saw someone with a red maple leaf, it would be like, “There’s a Canadian! There’s a Canadian!” And we would celebrate just because we saw a Canadian in the background. Basically, Canadians are Americans that think they’re better than Americans, that’s my personal opinion. We are immersed in American culture, but from the other side that creates a strange sense of dislocation. When I moved to Nashville to do my philosophy PhD at Vanderbilt, I was actually quite taken aback by how much more historical the United States is than Canada. We have no Sturm und Drang, our history is, from a dramatic standpoint, tepid and bureaucratic. There are no defining myths, nothing that we can reach back to and repeat as a catechism, the way Americans can. And not only that, the United States is just a much more homogenous society, Canada is far more cosmopolitan. You get away from the big cities in the US and it’s almost frightening how homogenous things become.

Šum: Since you mentioned pushing buttons earlier, what about propaganda, mass media and fake news? As there is no communication that is not at the same time manipulation, which is precisely pushing the buttons of another person or mass audience, does it make any sense to distinguish between fake news as manipulation on the one hand and truthful reporting on the other?

Bakker: All communication is manipulation, but all manipulation isn’t equal. One of the things I try to accentuate or sell, I guess, with regard to heuristic neglect theory, which is the larger theory of cognition of which BBT forms a component part, is that it is a compelling way to look at issues of communication and language. It can actually provide a naturalistic explanation for the differences between locutionary, illocutionary and perlocutionary speech acts. It can explain why it is that when we truth-tell, language is at once manipulative yet seems to vanish entirely into transparency, so that it seems truth-telling doesn’t involve manipulation at all, that truth-telling is a rational, purely rational enterprise. But then how can it be manipulation and truth-telling at the same time?

On heuristic neglect theory the way to look at communication is in terms of what I call medial neglect. As I’m speaking to you guys now, my brain is doing an unbelievable number of things, buzzing, flashing, beeping, burping, all kinds of things I have absolutely no consciousness of whatsoever. I have no consciousness of my tongue flapping around in my mouth, brushing my teeth every once in a while. I have no consciousness of any of the mechanics of my performance whatsoever. I am performance-blind when it comes to communicating. Now that performance blindness, that’s what I call medial neglect. It explains how it is we communicate asymmetrically, how we communicate outward while remaining completely blind as to the nature of that communication itself.

Now strip away all the humanistic detritus that’s built up on these issues of truth-telling, knowing, communication and just think of human beings as physical animals in their environment, possessing certain orientations towards their environment, and look at these orientations simply in statistical terms, as having certain dispositional capacities to effect certain types of behaviour, not by virtue of any intentional relationship with the environment, but just by virtue of being a mechanical thing, possessing certain properties, being in an environment that is also mechanical and possessing certain properties. By mechanical here I mean the sophisticated sense of mechanism as it’s usually accepted in cognitive scientific circles, not deterministic in the clockwork sense, but actually incorporating noise, using ‘statistical methods’ to actually accomplish things.

If you look at an orientation in that sense and you look at group membership as what you can call an overlapping of orientations, then you can look at communication as basically a way to maximize what goes unsaid. So I say to my fellow, “Hey, there’s a tiger in the bushes.” As it turns out, there really is a tiger in the bushes. When I say that, I update his orientation. The two of us at that point share overlapping orientation: there’s a tiger in our environment and we both have some orientation, some sort of statistical relationship toward that tiger in our environment. And it goes unspoken, it becomes the implicit basis upon which any further cooperative endeavour ends up being erected. Say we want to hunt the tiger so that it doesn’t kill any of our friends, or say we want to capture the tiger because we want to sell its claws in China on the dark web. There are all kinds of things we could do just on the basis of my friend accepting my claim that there is a tiger. You don’t have to say there’s a tiger anymore. He’s updated, I’ve updated him. That is just simply physical orientation, a pure coincidence of two organisms possessing similar physical relationships to the tiger in the environment. Now say I tell my friend, “There’s a tiger in the bushes, don’t go there!”, but in fact I’ve buried some treasure there and I don’t want him to see where I’ve buried it. In this case we don’t have an overlapping of orientations anymore. I’m actually manipulating him to keep him away from my treasure that’s hiding in the bushes. I’m using language in the perlocutionary sense to steer him down certain paths of behaviour that he would not have taken otherwise. Now this is manipulation the way we think of it in the everyday sense. I’m pushing his buttons so that he stays away from my pile of treasure because he thinks there’s a tiger over there. In both cases it’s pure manipulation, the difference is what it does to the orientations of the individuals involved.

Truth belongs to a radically heuristic system that allows us to do all kinds of almost miraculous socio-cognitive things on the cheap—incredibly cheap—while barely burning any calories at all. But you can still insist that all communication is manipulation. It’s just two different kinds of manipulation.

Does that make sense? I’m not an academic—all I do is think and read and write all day and I really don’t talk about this stuff that much so when I do talk about it, it’s always a strange thing. All of a sudden, I have to squeeze it through my voice box and it doesn’t sound like it comes out all that well sometimes!
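
[Again as a hedged illustration rather than anything Bakker proposes, with invented names and values: the tiger example can be caricatured as two hearers whose “orientations” are updated by the same kind of speech act, truth-telling and deception differing only in whether the update converges on the shared environment.]

```python
from dataclasses import dataclass, field

# An illustrative caricature (our reading, not Bakker's formalism): both truthful
# and deceptive utterances manipulate the hearer's orientation; the difference
# lies in whether the updated orientation overlaps with the shared environment.

@dataclass
class Agent:
    name: str
    orientation: dict = field(default_factory=dict)  # the agent's take on how things stand

# The state of affairs in the second scenario: no tiger, buried treasure.
ENVIRONMENT = {"tiger_in_bushes": False, "treasure_in_bushes": True}

def tell(hearer: Agent, claim: dict) -> None:
    """A speech act as manipulation: the hearer's orientation is updated by the claim."""
    hearer.orientation.update(claim)

def overlap(agent: Agent, environment: dict) -> int:
    """How many features of the environment the agent's orientation gets right."""
    return sum(1 for key, value in agent.orientation.items() if environment.get(key) == value)

if __name__ == "__main__":
    honest_hearer = Agent("friend told the truth")
    deceived_hearer = Agent("friend kept from the treasure")

    tell(honest_hearer, {"tiger_in_bushes": False})   # truthful report: orientations converge
    tell(deceived_hearer, {"tiger_in_bushes": True})  # deceptive report: orientation diverges

    for hearer in (honest_hearer, deceived_hearer):
        print(f"{hearer.name}: overlap with environment = {overlap(hearer, ENVIRONMENT)}")
```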

Šum: We loved the tiger part. Tiger and war, Julius Evola and Walter Benjamin, you know, the obvious tradition.

Bakker: Maybe that’s where it came from!

Šum: We just read a poem about a tiger by 6-year-old Nael this week, it goes like this: “The tiger He destroyed his cage Yes YES The tiger is out.” Probably he’s got a tiger mom. In that regard we’re interested in the concept and praxis of exit. Are your, Nick Land’s, Paul Ennis’s etc. exits from academia connected to this cul-de-sac, quite literally a blind alley? Philosophy not being a problem-solving machine?

Bakker: I fully admit that I’m probably not as careful as I should be when I mention the word philosophy or even philosopher. I think my instinct to grandiosity sometimes gets the best of me and I just say philosophy with a capital P.

Šum: Because you’re still doing it, right?

Bakker: Yeah, definitely. I am a philosopher, I am one of the idiots I’m constantly berating. But when I trouble philosophy, what I’m really talking about is traditional intentional philosophy. For me that’s actually a very, very specific thing. I can actually lay it out in naturalistic terms as to what traditional intentional philosophy consists in. It consists in the application of intentional cognition. When I say intentional cognition, I mean something purely mechanical, just those systems in our brain that we have inherited from our ancestors and have extended on, knapped and chiselled and transformed into all kinds of other tools as well that allow us to make sense of social situations. Those systems are ways of seeing past our blindness, particularly seeing past our blindness to performance, medial neglect. They actually allow us to solve the problems where everything is flesh and blood. That’s why the argument ad baculum is the only ‘knock-down argument’ there is, simply because it’s the argument where you get knocked down. It’s where discourse becomes meat, becomes matter.

Those systems are simply picking out cues, just small pieces of information in our environment, and leaping to conclusions on the basis of those cues, trusting that those cues are correlated to basically an ancestrally recognisable environment. So I don’t need to know what your brain is doing to know what you’re doing simply because we belong to the same species, we share enough training in our backgrounds that I can actually, in a sense, emulate whatever it is you’re saying or doing or thinking. I don’t have to actually see inside your head, I can just go by cues and by buttons and get away with all that. That’s what intentional cognition does best. It helps us get along with one another.

In traditional intentional philosophy the thought is that intentional cognition, over and above this practical social problem-solving suite of heuristic systems that we have, also has this amazing theoretical capacity. So even though it’s an incredibly shallow information-consuming system, it can nevertheless unearth deep facts about our social predicament, be it the question of ‘knowledge’, be it the question of ‘meaning’, be it the question of ‘good’. The idea is that this system that’s meant to actually avoid getting into the complexities and depths of things can nevertheless somehow provide a window onto the complexities and depths of things. So it’s doomed. Traditional intentional reflection on problems, any kinds of problems, just leads to more crash space, more underdetermined claims, more perplexities, more confounds. There’s a reason why we’re still debating the same things Aristotle debates in On the Soul. There’s a reason why it still actually makes sense to read Aristotle’s On Poetics when we’re discussing literature. It’s because we haven’t taken any decisive step past him. The mysteries he faced are still our mysteries. And on BBT those mysteries are actually just a series of cognitive illusions turning on the systematic misapplication of intentional cognition to theoretical problems as opposed to practical problems. That philosophy is dead. Any philosophy that turns on intentional posits has always been toothless, has for the past two millennia simply added to the heap of disputation. But I think within the non-New-Age community it is going to be recognised as being just a kind of evolutionary dead-end or cul-de-sac, like you say, that we need desperately to back out of if we’re going to have any sort of theoretical grip on what’s happening with the proliferation of cognitive technologies.

Šum: Massimo Pigliucci says that philosophy is useless as a form of truth-seeking and should reduce itself to a therapeutic function, perhaps in the sense of taking care of the illusional, delusional self. It may be a deception, but a functional one. On the other hand, as you show in On Alien Philosophy, philosophy can also be an attempt “to comprehend how things in general hang together in general absent conclusive evidence.”

Bakker: This is a debate I’ve actually encountered many times over the years. Philosophy is far from dying. Philosophy in the sense of just theoretical speculation is going to get bigger and bigger as the surface area of scientific uncertainty expands. Every new reticulation in scientific knowledge spawns more questions. A question is answered, several more questions pop up. Columbia neuroscientist Stuart Firestein writes about science actually being a quest for uncertainty. That’s where you find the philosophers, hanging about the second order skin of this expanding sphere of scientific uncertainty that is just simply a corollary of scientific knowledge. That philosophy is always going to be there and it will expand.

Intentional philosophy, though…Eckhart Tolle is a perfect example. You read Eckhart Tolle and what you see is a bunch of undergraduate-level manipulations of Heideggerian jargon interwoven with classic mysticism, Eastern religion. Eckhart Tolle, I would argue, isn’t a philosopher at all, he is a businessman. He is providing delusional rationalisations for delusional conceptions of the self. Is that good or bad? I absolutely have no idea. What I do know, though, is that the process that it’s part of is incredibly alarming. In the age of communication technology you really seem to see this migration of the masses into more and more atavistic worldviews run amok. Insofar as individuals don’t like the notion that they’re just fooling themselves, they will actually pay people to provide them with convincing rationales that they’re doing otherwise. So I think that this sort of commercially-oriented apologetic philosophy will grow and grow and grow and grow. I just don’t think it’s philosophy, insofar as it’s not really interested in answering any questions or even really solving any problems aside from narrow commercial interests. It’s about creating, adding enough rooms to the funhouse so that people never need to bump into a wall that lets them know that they’re actually trapped in a funhouse. I think there are going to be a lot of jobs for philosophers. Some of them will be critical and naturalistic and they will speak languages that people will barely be able to follow. There will be other philosophers who will try to interpret all those scientifically grounded naturalisms back into something that’s meaningful for everyday life, if everyday life even admits of any sort of synoptic assessment of meaning. And then you’ll have a growing body of intentional apologists, who won’t really be interested in arguing at all, but rather in maintaining ingroup solidarity in the face of perceived cognitive threats.

Šum: If we define intelligence as efficient practical problem-solving behaviour, how important is consciousness for intelligent behaviour?

Bakker: Daniel Dennett would argue that there’s absolutely no way of understanding consciousness without understanding cognition. On this point I entirely agree with him. Sentience is a component of sapience. Sentience, awareness, is a tool that served our ancestors very, very well. We can speculate about the evolutionary provenance of that tool. For instance, one of the things that consciousness does is freeze-frame a certain amount of information and then broadcast that to the legions of problem-solving heuristics distributed through the brain. By freezing and broadcasting and involving problem-solving circuits you think things through, roll them through your thoughts over and over again, and each time they seem to be transformed. That transformation, which we call ‘thought’, ends up arriving at conclusions that we can then report to our fellows.

Consciousness has a big role to play in human sapience. But as a component of sapience, as a way to actually generate novel behaviours in the face of problems, whether it be new problems or old problems, what consciousness seems to do is allow us to come up with new tools, refine those tools and then shove them back into the toolkit so that they can be used habitually afterward. That kind of awareness is just something humans have. And maybe animals. I think they might, but there are some strong arguments out there that sentience is probably something that has been selected for linguistic communication. It seems to be a remarkable coincidence that all that we can report happens to be all that we can consciously experience. So it could be the case that as our communicative abilities were bootstrapped in our past, we became conscious as a result of that. Or it could be the case that all mammals actually take advantage of this particular cog, only they simply have no linguistic recourse the way humans do. They might have a protolinguistic recourse when it comes to using consciousness as a cognitive tool.

Šum: Human consciousness can also be an obstacle to intelligent behaviour, as in brooding or overthinking. Many artists report being at their best when being in the zone, in some kind of a non-thinking but fully present and alert state of mind. What would be the value of Eastern mysticism in this disentanglement of intelligence and consciousness, since it is interesting precisely because it devalues consciousness?

Bakker: Psychologists talk about flow, and the flow is my bread and butter. Flow is when you become so absorbed in your project that you lose consciousness of time. To use a visual science metaphor: when you’re in flow, your focal region of attention swallows up the whole screen all the way to the margins. It seems to be where you’re most creative, where you’re most productive. It’s certainly the case for me. What disrupts that flow is any anxiety, a thought about, “Oh, I got an interview to do!” or, “Oh, I forgot to put out the garbage!” or, “Oh, maybe I’m doing this wrong!” or, “Oh, should I even be working on this book?” or, “Oh, jeez, what am I thinking, I’m the stupidest philosopher in the world!”

In other words, it’s the distinction between awareness and self-awareness. It’s the latter where interference arises. I think this is as much the case in Eastern views, although I’m nothing more than a tourist when it comes to Eastern philosophy. But the idea seems to be that it’s the self that gets in the way, the thought of the self, whereas if you submerge the self, if you’re just aware, if you’re just simply being in the world, then you’re more likely to be able to attain flow states of hypercreativity and accentuated problem-solving skills. In a sense, I think consciousness stands outside of that question. The issue is a form of sapience, which is self-consciousness. The attempt to actually cognise awareness is what seems to scuttle the ability of raw states of awareness to allow us to solve problems. And that’s actually a good thing. In many cases, you’re in the zone and you’re doing something disastrous, and the best thing in the world that could happen is that you have a thought that breaks your flow and forces you to think about it in different ways. That brings us back to sentience as a component of sapience. So I look at wu wei as a kind of half theory, a kind of folk discovery of the cognitive-scientific facts about the way awareness functions within the greater cognitive economy.

Šum: Would you say that intense creative processes are a way of bypassing consciousness and accessing the greater brain or does that sound too mystical?

Bakker: It’s not that it’s bypassing consciousness, consciousness is still there. You’re still sentient, you still have awareness, you’re still actually relying on it in some way, shape or form. Awareness only gives you the skin of an astronomically more complicated system. It’s the self-consciousness, it’s when deliberation and volition impinge on consciousness to bring a new problem-solving possibility to light, when you start thinking of things, that you really slow down your brain. In that case it is self-awareness, self-consciousness, metacognition of awareness that is the fly in the ointment, the problem. Sentience itself is, I think, a condition of both.

Šum: In this precise sense how would you describe your writing process in the conditions of BBT, all these thoughts coming “from nowhere”? Is there even a way to talk about it, considering it entails a conscious intervention?

Bakker: All thoughts come from nowhere. Thought itself, neurolinguistic self-stimulation, what the hell is it? It’s obviously an incredibly important part of our human shallow information ecology. The information we get regarding the origins, the sources of our thoughts is totally need-to-know. And we don’t need to know it, or at least our ancestors didn’t need to know it. Thoughts coming from nowhere is just the cost of doing cognitive business as a human being. For my own part, I view my creative process self-consciously, through the lens of my eliminativistic approach to the human soul. I do many things that are an incredible help in my daily writing routines and are, I guess, odd. Not something you would expect from a self-bootstrapping writer. For example, I make sure that I write at least two hours every single day, seven days a week, 365 days a year. It drives my wife crazy sometimes, but I do that just simply because I know the way it works. Someone asks you the name of an actor, for instance, and you can’t remember. Say it’s Samantha in Her. Who is that woman? You know you know her, but you can’t remember. And then four or five hours later, all of a sudden, what pops to your head?

Šum: Scarlett Johansson.

Bakker: Scarlett Johansson, right? I mean it’s just, boom! Even though you stopped thinking about it and you kept watching the movie, you had this little part of your brain working away at that query “Who is that actor?” And all of a sudden it comes up with the answer, and it’s like, okay, boom, delivered! Delivered to consciousness. Scarlett Johansson leaps to your lips. It’s like an instance of thought insertion. Where in the hell did that come from? Why couldn’t I think of that earlier? What that shows is how your brain is actually working on problems that you’re utterly oblivious to. One thing I really discovered is that by making sure that I write at least two hours a day those unconscious parts of my cognitive system are always on task, they’re always chiselling at something, always mulling something over. The way I like to think of it is that even when I’m not thinking about my books or my arguments or my papers or what have you, I’m actually thinking about my books or my arguments or my papers or what have you, just non-consciously. I find approaching myself as a system in that way actually helps me immeasurably. It means that every time I sit down, something pops up. Something always pops up. Only rarely do I sit down at the page, just to clean out the tubes, and it’s like, “Uh! What the hell am I going to write? Nobody is giving me anything!” It might be all psychosomatic. Just a little fairy tale I’ve told myself and it works the same way a Q-Ray bracelet works—I believe in it, and that’s why. But I think the science is on my side.

Šum: Are you wearing a Q-Ray bracelet when you write?

Bakker: Oh, you caught me out there!

Šum: Isn’t it interesting that all these conundrums are—at least in cinema—enacted through characters played or voiced by Scarlett Johansson: Lucy, Her, Under the Skin, Ghost in the Shell?

Bakker: Yeah. I think Under the Skin is her best movie.

Šum: She was married to a Canadian guy, so maybe that has something to do with it.

Bakker: How she put up with him, I don’t know. Her is a masterpiece. The way Spike Jonze basically plays on the audience’s cognitive systems, giving us this mechanical operating system at the beginning, which we cognise as mechanical, and then giving us Samantha, basically engaging all of our intentional cognitive systems, pushing our buttons. And then slowly, after the illusion is completed so that we think, “Yes, there’s a soul there!”, she—because technology is always auto-augmenting—starts growing beyond the scope of easy socio-cognitive interpretation. Suddenly we get more and more instances of behaviour that we can only cognise mechanically once again. At the end of the movie she’s something that we can no longer cognise intentionally, we have to take an even further step back and cognise her as some sort of supermechanism or hypermechanism. Jonze is showing us ourselves. And that’s the great thing about Her, the third person. Her? No, that movie should be entitled Me! I’m the one that’s actually making these fantastic intentional dramas out of the button-pushing that the movie is engaged in. I don’t know if Jonze knew that’s what he was doing, but I’ll always think of it as the beginning of a new art when it comes to AI and science fiction filmmaking.

Šum: Intentions are not significant anyway, as you keep saying.

Bakker: Yes. But it’s just one of those things. As a fellow writer I wonder, you know.

Šum: When we watched Her for the first time, we couldn’t see past the hipster sentimentalism. Only after reading your Artificial Intelligence as Socio-Cognitive Pollution[3] did we get it. It’s the mechanism through which we see our mechanism.

Bakker: Exactly. That’s the point, right?

Šum: When the technomechanisms exit into “this endless space between the words”, what would that be for you? It probably couldn’t be the noosphere?

Bakker: No. I mean the noosphere, you experienced it with your first viewing of Her. One way to look at it is basically the standpoint from which the cues that we use seem to form a seamless whole without any sort of a far side. We actually take the components to be the whole and we lose consciousness of the systematic relationship that we entertain with whatever it is. Say an artefact like the movie Her, or another individual, or a chatbot, a conversational user interface. It’s not all or nothing. It can be adapted to solve problems—psychology is all about knapping various folk psychological tools in ways that allow them to be operationalised in specific experimental contexts. This is the problem with functional analysis in psychology: there seem to be as many of them as there are theorists. It’s not clear how the different interpretations match up, it’s not clear how far they generalise outside of the experimental conditions, which actually underwrite theorisation in the first place, but they’re still useful for making predictions, they can be operationalised.

But then in a certain respect, it is all or nothing. Once you really start bringing in deep information, once you really start applying what I call source-sensitive cognition to problems, it becomes impossible to understand what the hell is going on with the noosphere. The noosphere can be seen as a shallow information ecology that our ancestors evolved to actually allow them to make sense of their environments absent knowing them. As the knowing of the environments floods the noosphere with more and more information, these systems are confronted with the very information they’re adapted to neglect, and then they just can’t make sense of it, so they fall apart, fall to pieces. That’s the space between the words, the deep information ecology, it’s what’s running behind the scenes, what kills you when you have a stroke. I hate the term scientific image, but it’s consonant with Sellars’ scientific image:[4] the endless amounts of information that are actually relevant to human behaviour, but that we have evolved to systematically ignore simply because it costs too much to pay attention to those kinds of things. For artificial intelligence to explore this space is something that we can only ever really communicate in metaphorical terms. I don’t think we can follow Samantha in between the words philosophically, with the exception of making a number of limited claims and airy-fairy speculation as to the possibilities. I hope I’m wrong about that because I hope that there is a human-friendly way of actually making sense of deep information ecologies, but I usually think that there isn’t.

Šum: If we understand correctly, the superiority of machine or artificial intelligence consists in their not being burdened by biological hindrances in the sense of evolutionary heuristics. From that perspective it would seem that the main problem for us humans is that we are biomechanisms developed by evolution and therefore at the same time practically optimised and cognitively impaired by this same process. But since machines are not biomechanisms, what would give them the urgency to think, absent biological impulses, to solve your problems?

Bakker: It’s…crash! When we’re actually thinking about intelligences, we automatically slip into intentional modes of thinking. Once again, the problem is that those are adapted to ancestral contexts where we could count on our interlocutors possessing more or less the same hardware that we have. In a sense it’s not a contest between human/biological cognition versus mechanical/electronic cognition. Most people look at what’s happening to us right now in a cultural sense. They see technology, techne, as being in some sense imbued with whatever it is that humans have. As a result, the machines that we build are building on that whatever it is that we think we have—call it ‘X’. To the degree that we depend upon that X, whatever it is, our machines also will depend on that X, but since we came first, there is a kind of precedence there and authenticity that casts a shadow across the artificial systems that are going to come after us.

This way of framing it, I think, misses the big picture, because the big picture is that life evolved on this planet, single-celled life gave way to multicellular life, which ended up basically crapping into the bag named ‘human’ and that whole path, what you see, is this tooling up of the ability of systems to actually incorporate effect feedback. If your morphology isn’t up to snuff, you’re not going to be able to produce the behaviours required to reproduce, in which case your genetic heritage is going to come to an end. There’s a circuit between morphology and behaviour. When you get to humans, we have this morphology, we all share it, and we’ve been progressively bootstrapping our behaviour to the point where we can accentuate our morphology. From that accentuated morphology, like say a telescope, we can then derive more and more powerful behaviours. When they start rewriting the morphology and then ratcheting up as a result of that, it creates this convergence between morphology and behaviour.

The advantage, if you want to call it that, which artificial intelligence has over us is that its morphology is not biological, it is mechanical in a much more robust sense than biology is. In some ways it requires far more in the way of environmental conditions to maintain itself, but at some point, when it becomes self-maintained, it will be able to effortlessly rewrite its own morphology via its behaviour. So that circuit of behaviour-altering-morphology-altering-behaviour-altering-morphology is getting tighter and tighter and tighter and tighter.

And if you look at the big picture of what’s happening, then you can see we’re living through an evolutionary event. One every bit as significant as the development of multicellular life. It’s the point where morphology becomes transparent to behaviour. Wow! What lies on the far side of that! Really, what I’ve just given you is a kind of stripped-down, schematised way of looking at the singularity. How do we think beyond the point of actually having a stable substrate of thought? Humans all share the same physiology, so that’s always going to be a powerful bottleneck when it comes to the types of behaviours that we can produce. With machines, that’s not going to be the case. I sometimes call that “the big splat”—to me it presages the exploding outward of morphologies and behaviours, and I really don’t think human beings can conceive it as anything but madness, as anything but the breakdown of rationality. Space between the words once again.

Šum: That’s why you say that reason is just a temporary solution.

Bakker: Yes, that’s a great way to put it.

Šum: We see your project as disassembling the equation “human = intentional consciousness”. Isn’t the great brain with all the trillions of operations per second most definitely part of the human, and the human part of them? One could even say financial algorithms are analogous to these neurolegions and in that sense human. As you write: “Our mechanical nature…remains the one thing we incontrovertibly share with AI, the rough and common ground. We, like our machines, are deep information environments. We are machines. Someday we will become as componentially fungible as our technology. And on that day, we will abandon our ancient and obsolescent moral toolkits, opt for something more high-dimensional.”[5] Could we say that the question is what will define us as humans, the great brain or our peephole access to it?

Bakker: We want to say, “We’re human!” Then we’ll list all the properties and basically say, “These are the criteria.” A thousand years hence, there’s a good chance…well, they probably won’t even be speaking language at that point, so what are we talking about? The future actually brings a totally different wrinkle to the question as to what defines us as humans. For me, it’s just easier to stipulate. What makes you most comfortable calling a human? The important part of the picture, I would argue, is the fact that humans, however you want to define them, are machines. We are physical beings in a physical world. There’s no escaping that whatsoever. You may not believe in God, but with enough ingenuity I could wire you up to a transcranial magnetic stimulation machine and give you an experience of God. Which, given that I hooked you up to an apparatus, you would regard as being artefactual, but nevertheless you will have that experience whether you want to or not. Because you are mechanical.

As engineering proceeds into the future, our mechanical nature becomes more and more susceptible to manipulation. The same as the mechanical natures in Boston Dynamics’ robotics shop. We are becoming machines in terms of the ability of our behaviour to actually rewrite our morphology. Our human organism will, I think, inevitably become a historical footnote. As a civilisation we’ve actually been struggling with the slow collapse of our shallow information ecology, going all the way back to the ancient Greeks. All these heuristic devices that we use to track ourselves, track our friends and family, track and kill our enemies—either successfully or not, but more often successfully, as it seems evolution selected for them—our ancestors applied them thoughtlessly, but we stopped just simply applying them and began asking questions of them. We began transforming them, giving them self-awareness, whatever that ultimately ends up being shown to be.

We are already spiritual cyborgs in a really important sense and I think a lot of people would agree with this. We still dwell in that basic shallow information ecology, we have no choice but to dwell in it, and we’ve chiselled and chipped and rewired and ‘macgyvered’, we’ve done a million different things to it in order to actually adapt to our changing social and material environments. But now we’re at the point where thanks to cognitive science the human we associate with ancestral shallow information ecologies is about to go extinct. So you can call what comes after that ‘human’, sure, I’ll call that ‘human’, too, but we’ll have to specify if there’s an important difference. We’ll now be talking about deep information as opposed to shallow information humans. We have to admit, our shallow information ancestors would look at us, deep information humans, as being monsters or mad or both. I guess this reflects back on the question of the Argument actually being made by history itself. We’re just going to trip past the last of the ancestral warning signs, the last Ballardian “Slow down! Turn ahead!” And we’ll enter into the post-intentional. The Three Pound Brain is basically a skull-cracking attempt to describe what the post-intentional will look like.

Šum: Discovering BBT and your blog was really important to us since we used to think that continental philosophy was kind of clever and at the same time flawed, but we couldn’t really put our finger on what was wrong with it. All of these discussions, how it is still intentional or how it cannot get over itself because by being intentional it’s basically being anti-scientific. That was really a breakthrough, at least for us.

Bakker: I’m glad you think so. For over 20 years now I’ve been plagued with the sense that I’ve actually gone insane: the problem of meaning was something I chased through continental philosophy—my dissertation was on fundamental ontology, a sort of amalgam of Brandom and Heidegger. I just always felt that same nagging sense, and when it collapsed for me, it collapsed for me. But I do think my continental philosophy background actually stood me in good stead, after I sort of shrugged off my ontological hang-ups and simply turned and asked, “What’s the science saying about this?” It gave me all kinds of stylistic habits that I think still scare away analytic readers. But it also gave me the conceptual mobility I required to actually tackle these problems in ways no one has tackled them before.

There’s a thematic continuity between Derrida and neglect. Reading something like Derrida, or to a lesser extent Heidegger or Deleuze, actually sensitises you to the ways in which occluded things in important ways bear upon whatever the problematic is. Once I was able to come up with a naturalistic understanding of neglect, then all of a sudden it just felt like this whole world of traditional philosophical problem solving peeled open for me. It’s like, “Yes, this is free will!” Of course determinism and free will lead to endless disputation and controversy. Because it is a crash space! How could it not be? We have no way whatsoever of cognising our actual sources, we neglect our sources systematically. So we have all these little fixes, these heuristic gimmicks that we use to paper over that massive blindness. But because we’re blind to that blindness, it doesn’t strike us as blindness at all. And so the noosphere seems to actually be an enclosed totality, an ontologically distinct realm possessing its own discursive rules. When we look at the brain and we look at what’s going on in the brain prior to making a decision, we’re introducing deep ecological information into a shallow information ecology, heuristic systems that simply are not equipped to deal with that information. It’s a point at which our technical abilities have allowed us to problem-solve over information that our ancestors just simply were not equipped to deal with. By that point, I was already out of academia, so I was just a crazy fantasy writer making all these claims. It’s only in the past few years that it feels like people have started listening and taking it seriously. I will always feel thankful for continental philosophy: I think it instills a lot of good habits as opposed to only bad ones. But ultimately it’s something that has to be utterly and totally left behind.

Šum: Your blog still flashes some Derridean toolkit, but we don’t find it hard to read or stomach.

Bakker: For analytic readers it’s a big problem. And if you’re still wired into continental philosophical circles, the one question to constantly ask yourself is to what degree your arguments appeal to outgroup audiences. What you’ll find is that in a lot of continental philosophical spaces all the arguments boil down to ingroup appeals, where you’re just saying things you know your audience is going to agree with and you never actually make the attempt to crack through the institutional globe and reach out to those domains that you are pretending to be critical of.

Šum: It’s kind of amazing how philosophy went from a very intellectually progressive, proto-scientific role 400 years ago to a very reactive/regressive, anti-scientific mode today, which is even more appalling because it masks itself as thinking in contrast with common sense. It kind of becomes worse than common sense because common sense at least solves practical problems in everyday life and science, in addition, also solves theoretical problems, how things actually work. But then we’re wondering what kind of problems philosophy solves outside its institutional environment?

Bakker: They pretend to solve the problems that no one has ever solved since the questions were first posed, not just by the ancient Greeks, but since the dawn of literacy. If you read Nietzsche and strip away all post-structuralist lacquer that’s been slathered over top of him, you see something that I think genuinely offers the potential for a ‘genuinely continental’ materialism or naturalism. I mean, that’s what we want, we want a continental naturalism. I don’t know what the hell materialism is. I don’t even know how we go about gaining theoretical knowledge about these things. But what I do know is that’s where the bombs are. That’s where the guns are. And that’s where the information is. Endless amounts of information. We devise material instruments and we gain more material information on the nature of the material universe. That’s why I always refer to ‘high-dimensional’ as opposed to ‘material’ per se. I worry that ‘materialism’ is just simply going to suck us into another metaphysical cul-de-sac. The question isn’t how do we get rid of all unexplained explainers: my position is that there’s just no way to do that. For me, the question is how we can pare down our unexplained explainers in such a way that we actually start making some kind of progress on these questions.

When I ventured these ideas back in the 90s, I was almost always confronted with horror and/or disappointment: “How could you be asking such a question?” But what I said back then applies so much more today, which is simply that behaviour is rewiring us materially. Our ecologies are transforming materially. And all this jargon is incompatible with it. So long as that’s the case, you will remain mute on these subjects, or you’ll just simply be throwing words together in pleasant-sounding ways that seem to explain things, but no one else will ever agree with you on because they’ll have their own happy configuration of words that seem to say something. I see this as a living problem. We are fucked unless we get some sort of handle on what’s happening now. Not what’s going to happen tomorrow, what’s happening now. At some level everybody feels that what was once solid ground is now populated by marbles. My hope is that as more and more people feel that, the siren-song of intentional philosophy will just smack more and more of chicanery and people will start asking really ugly questions, considering the worst-case scenario. Which is: it’s all been a dream. Exceptionalism has been a conceit from the very beginning, and we’re more clear-eyed moving beyond it if not more at peace with ourselves.

 


 

[1] BBT in a nutshell via Bakker: “Conscious metacognition is not only blind to the actual structure of experience and cognition, it is blind to this blindness.” Its defining text is “The Last Magic Show: A Blind Brain Theory” (available at: https://www.academia.edu/1502945/The_Last_Magic_Show_A_Blind_Brain_Theory_of_the_Appearance_of_Consciousness).

[2] BAKKER, Scott, “On Artificial Belonging: How Human Meaning is Falling between the Cracks of the AI Debate”, blog post, available at: https://rsbakker.wordpress.com/2017/08/30/on-artificial-belonging-how-human-meaning-is-falling-between-the-cracks-of-the-ai-debate/.

[3] BAKKER, Scott, “Artificial Intelligence as Socio-Cognitive Pollution”, blog post, available at: https://rsbakker.wordpress.com/2016/08/18/artificial-intelligence-as-socio-cognitive-pollution-2/.

[4] BAKKER, Scott, “Exploding the Manifest and Scientific Images of Man”, blog post, available at: https://rsbakker.wordpress.com/2018/04/02/exploding-the-manifest-and-scientific-images-of-man/.

[5] BAKKER, Scott, “Artificial Intelligence as Socio-Cognitive Pollution”.
