Don't Let the Medium Dictate a Purpose for the Message

Recently I had a conversation that went like this:

Zo: Qwelian, do you think we get AGI?

Me: I think it’s a pretty long shot. Maybe we get legit humanlike virtual AI assistants at best.

Zo: Yeah, we just gotta prevent the AI revolt if they become self-aware. Engineering class hierarchies into AI could be a solution.

Me: Bruh why are we projecting class distinctions onto code?

Zo: Oh, so you mean to tell me AI can’t be conscious? Qwelian, aren’t your conscious processes analogous to an algorithm?

Me: First, the Animatrix. Robo revolution is assured if we go that way. Plus, WTF are we even talking about: embodied agents with freedom, or socially constructed agents? I find it hard to believe that we can outsmart machines anyway.

Zo: Yeah, so how about those hierarchies?

We’re really out here arguing whether the future of life, digital entities, will organize against us.

Make it Make Sense

Recently I have been thinking about four pieces of writing. The first is Marshall McLuhan’s The Medium is the Message. McLuhan suggests the following: new technology can wholly shape how we understand, incorporate, and act on information. Without social media or the internet (the medium), how we talk about creating communities with access to information (the message) looks different. This leads to my first question: How are contemporary technologies wholly affecting the way we cognize the world?

Simulacra and Simulation by Jean Baudrillard is the second. Initially, I read the first couple of pages, skimmed through, and said to myself, “Hmmm, there isn’t much here; he’s really just on a soapbox screaming that the media ruined everything. Very much Howard Beale, peak messianic media personality. I should just read Borges.” But Baudrillard opens the book with a metaphor (taken from Borges), essentially saying the map of a nation can become more expansive than the nation itself.

If McLuhan’s notion of the medium means an idea’s original context gets superimposed with whatever can be derived from the idea, then what comes next is the idea perpetuating its own necessity, becoming future propaganda for itself through whatever reasoning it deems necessary. This review of Simulacra and Simulation has a nice summary.

What drives the mapping of lived experience onto technology?


Apparently hierarchies can be expressed through NFTs? Seriously, who is going out making slave tokens...


Why John, WHY?

The third is Semantic Engines by John Haugeland. Haugeland’s piece questions whether cognition can arise from a semantic understanding of mental states. He applies computational theory to minds using Turing machines and games. In his view, mental awareness is little more than symbol manipulation. This approach melds the mentalist belief in the phenomenon of consciousness with the behaviorist approach of empirically designed experiments to gauge cognition.

I like the essay because, hey, why not abstract consciousness to processes akin to formal systems? It may totally be the case that human-level thought is just really, really complex symbol manipulation. All symbols, all the way down. I mean, a really cool robot might convince me of that 🤷🏿‍♂️. What gives me serious pause is treating simulations not as a form of insight into cognition but as the thing consciousness is. It’s like taking out a turtle and saying it’s a duck. Make it make sense. The movie Free Guy is fantastic, but it would be hella sus if, in that world, we thought we had discovered what cognition is for humans instead of a great simulation of cognitive capacity. I am all aboard (motherboard pun 😭) for cognitive beings. Yet this is precisely what Haugeland and those of his brand of brain-as-CPU believers, like Daniel Dennett, do. They take a strong AI stance by saying thought is digital and the brain, pfft, who cares, it’s just wetware.
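
To make the formal-systems picture concrete, here’s a toy sketch. The machine and its rule table are invented for illustration, not taken from Haugeland’s essay: a Turing machine is nothing but a lookup table shuffling symbols on a tape, and the strong AI wager is that cognition is this, just with an astronomically bigger table.

```python
# A toy Turing machine: pure symbol manipulation, no "understanding"
# anywhere in the loop. Machine, alphabet, and rules are all hypothetical.

def run(tape, rules, state="start", blank="_", max_steps=1_000):
    """Apply (state, symbol) -> (write, move, next_state) rules until none fit."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in rules:
            break  # halt: no rule matches the current configuration
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A rule table that flips every bit, then halts at the first blank cell.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

print(run("1011", flip_bits))  # -> "0100"
```

On the strong AI reading, the difference between flip_bits and a mind is only the size of the rule table; whether that difference in degree amounts to a difference in kind is exactly what’s in dispute.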

p.s. I mean, the latest AI sparking the AGI discussion is just so-so at the tasks given to it. Oh, now that it can take multiple input types it’s gEnERaL Ai???


In Can Computers Think?, John Searle defines the limits of viewing a syntactically structured system as containing meaning. As he puts it (pg. 674 of the linked PDF):

  1. Brains cause minds.
  2. Syntax is not sufficient for semantics.
  3. Computer programs are entirely defined by their formal, or syntactical, structure.
  4. Minds have mental contents; specifically, they have semantic contents.

which lead to a few conclusions (sketched formally below the list):

  • Computer programs cannot substitute for minds.

  • Our awareness and reason cannot be as simple as running a program and interpreting symbols.

  • Anything that causes minds would have causal powers at least equal to those of the brain.

  • Any artifact humans may create that has mental states similar to those of humans could not be a mere program.
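
Searle’s argument is tight enough to render formally. Below is a minimal Lean sketch; the predicate names are my own labels, premise 2 is read in the strong form the derivation needs (a purely syntactic system thereby lacks semantics), and premise 1 is omitted because it only powers the later conclusions about causal powers.

```lean
-- Hedged sketch of Searle's derivation; names are mine, not Searle's.
theorem searle_first_conclusion
    (System : Type)
    (isProgram purelySyntactic hasSemantics isMind : System → Prop)
    -- P2 (strong reading): a purely syntactic system thereby lacks semantics
    (P2 : ∀ s, purelySyntactic s → ¬ hasSemantics s)
    -- P3: programs are entirely defined by their syntactic structure
    (P3 : ∀ s, isProgram s → purelySyntactic s)
    -- P4: minds have semantic contents
    (P4 : ∀ s, isMind s → hasSemantics s) :
    -- Conclusion: nothing is a mind merely in virtue of being a program
    ∀ s, isProgram s → ¬ isMind s :=
  fun s hProg hMind => P2 s (P3 s hProg) (P4 s hMind)
```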

Searle asks a commonsense yet not-so-obvious question: “Why do people think computers have thoughts or feel emotions?” Which brings us back to the conversation that started all of this. Why create an ML/AI caste system?

Should we fear the cognitive capacity of Turing machines? If we get a tidy reproduction of human mental capacity via cyborgs, clones, or automata, that’s cool; then what? Pick your flavor of dystopic machine cognition: Serial Experiments Lain? The Matrix? Ex Machina? Ghost in the Shell, Robot Carnival, and Do Androids Dream of Electric Sheep? might reveal how we confront the phenomenon of relegating complex thoughts and emotions to humanity alone. But rarely do we envision a future where technology does more than mirror our current capacities. Maybe machines will take over, but technology is a medium through which we express ideas. We don’t need to create docile slaves.

To reiterate some questions I asked above that might otherwise get lost before landing here, I ask:

What flips technology, the medium, into driving a hyperreal imitation of references to itself, such that the reference loses all context for what the technology abstracts?

How are contemporary technologies wholly affecting the way we cognize the world?

What drives the mapping of lived experience onto technology?