Don't Let the Medium Dictate a Purpose for the Message
Recently I had a conversation that went like this:
Zo: Qwelian, do you think we get AI?
Me: I think it’s a pretty long shot. Maybe we get legit humanlike virtual AI assistants at best.
Zo: Yeah, we just gotta prevent the AI revolt if they become self-aware. Engineering class hierarchies into AI could be a solution.
Me: Bruh why are we projecting class distinctions onto code?
Zo: Oh, so you mean to tell me AI can't be conscious? Qwelian, aren't your conscious processes analogous to an algorithm?
Me: First, the Animatrix. Robo revolution is assured if we go that way. Plus, WTF are we even talking about: embodied agents with freedom, or socially constructed agents? I find it hard to believe we can outsmart machines anyway.
Zo: Yeah, so how about those hierarchies?
Make it Make Sense
Recently I have been thinking about four pieces of writing. The first is Marshall McLuhan's _[The Medium is the Message](https://en.wikipedia.org/wiki/The_medium_is_the_message)._ McLuhan suggests the following: new technology wholly shapes how we understand, incorporate, and act on information. To use a metaphor, without social media or the internet (the medium), how we talk about creating communities with access to information looks different (the message). This leads to my first question: *How are contemporary technologies wholly affecting the way we cognize the world?*
_Simulacra and Simulation_ by Jean Baudrillard is the second. Initially, I read the first couple of pages, skipped through, and said to myself, “Hmmm, there isn’t much here; he’s really just on a soapbox screaming the media ruined everything. Very much [Howard Beale](https://www.youtube.com/watch?v=1cSGvqQHpjs&ab_channel=MovieclipsClassicTrailers) peak messianic media personality. I should just read Borges.” But Baudrillard uses a metaphor at the beginning of the book (taken from Borges), essentially saying the map of a nation can become more expansive than the nation itself.
If McLuhan’s notion of an idea means the superimposition of an idea’s original context with what can be derived from the idea, then what comes next is the idea perpetuating its own necessity, becoming future propaganda for itself through whatever reasoning is deemed necessary. This [review](https://irfanajvazi.substack.com/p/baudrillards-simulacra-and-simulation?s=r) of _Simulacra and Simulation_ has a nice summary.
*What drives the mapping of lived experience onto technology?*
**[SHE BON : Sensing the Sensual](https://www.youtube.com/watch?v=fA3M7fjr60Y&ab_channel=ROBOHEMIAN%21)**
[Racist ‘Meta Slave’ NFT Project Rebrands After Being Called Racist](https://www.vice.com/en/article/5dgw9a/racist-meta-slave-nft-project-rebrands-after-being-called-racist)
Why John, WHY?
The third is an essay called _[Semantic Engines](https://www.scribd.com/document/119239415/John-Haugeland-Semantic-Engines)_ by John Haugeland. Haugeland’s piece questions whether cognition can arise from a semantic understanding of mental states. He applies computational theory to minds using Turing machines and games. In his view, mental awareness is little more than symbol manipulation. This approach melds the mentalist belief in the phenomenon of consciousness with the behaviorist approach of empirically designed experiments to gauge cognition. I like the essay because why not abstract consciousness to processes akin to formal methods? It may totally be the case that human-level thought is just really, really complex symbol manipulation. I mean, a really cool robot might convince me of that 🤷🏿♂️.

What gives me serious pause is treating simulations not as a form of insight into cognition but as the thing consciousness is. It’s like picking up a turtle and calling it a duck. Make it make sense. The movie Free Guy is fantastic, but it would be hella sus if, in that world, we thought we had discovered what cognition is for humans instead of a great simulation of cognitive capacity. I am all aboard (motherboard pun 😭) for cognitive beings. Yet this is precisely what Haugeland and those of his [ilk](https://en.wikipedia.org/wiki/Intentionality), like [Daniel Dennett](https://www.goodreads.com/review/show/4206814457), do. They take a strong AI stance by saying thought is digital and the brain, pfft, who cares, it’s just wetware.
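To make “mental awareness is symbol manipulation” concrete, here’s a minimal sketch of my own (not from Haugeland’s essay): a tiny Turing machine that increments a binary number by blindly shuffling symbols. Nothing in it “knows” what a number is; it just follows a rule table.

```python
# A tiny Turing machine that increments a binary number.
# Pure rote symbol manipulation -- no part of this "understands" arithmetic.

RULES = {
    # (state, symbol read) -> (symbol to write, head move, next state)
    ("carry", "1"): ("0", -1, "carry"),  # 1 + carry = 0, carry moves left
    ("carry", "0"): ("1",  0, "done"),   # absorb the carry and halt
    ("carry", "_"): ("1",  0, "done"),   # ran off the left edge of the number
}

def increment(binary: str) -> str:
    tape = ["_"] + list(binary)          # "_" is the blank tape symbol
    head, state = len(tape) - 1, "carry"  # start at the rightmost digit
    while state != "done":
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).lstrip("_")

print(increment("1011"))  # 1100
print(increment("111"))   # 1000
```

Haugeland’s claim, pushed to its limit, is that cognition is this kind of machinery scaled way, way up.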
p.s. I mean, the [latest AI](https://venturebeat.com/2022/06/04/is-deepminds-gato-the-worlds-first-agi/) sparking the AGI [discussion](https://open.spotify.com/episode/5Y0W9FVRIg3xRgp4fij1Zf?si=MYR_7O3tRtefOdO38sdqpg) is just so-so at the tasks given to it. Oh, now that it can take multiple input types it’s gEnERaL Ai???

Fuck!
In _[Can Computers Think](https://danielwharris.com/teaching/101online/weeks/13/Searle.pdf)_, John Searle defines the limits of viewing a syntactically structured system as containing meaning. His argument (pg 674 of the linked PDF) leads to a few conclusions, which I’ll close this post with.

Searle then asks a simple question: “Why do people think computers have thought or feel emotion?” This brings us back to the conversation that started all of this. Why create an ML/AI caste system?
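Searle’s Chinese Room makes the syntax-versus-semantics point vividly, and it fits in a few lines of code. This is my own toy rendering (the rulebook entries are hypothetical, not Searle’s): a program that “answers” Chinese questions by matching input shapes to output shapes, with zero comprehension anywhere inside.

```python
# A toy "Chinese Room": symbols in, symbols out, by table lookup.
# The rulebook below is my invented example; the point is Searle's:
# emitting the right string is not the same as understanding it.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你会思考吗？": "当然会。",     # "Can you think?" -> "Of course."
}

def room(symbols: str) -> str:
    # Pure syntax: find the input's shape, return the paired shape.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room("你会思考吗？"))  # 当然会。 -- and no one inside speaks Chinese
```

From the outside the room “converses”; from the inside there is only string matching, which is why Searle denies that running the right program is sufficient for thought.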
Should we fear the cognitive capacity of Turing machines? If we get a tidy reproduction of human mental capacity via cyborgs, clones, or automata, that’s cool; then what? Pick your flavor of dystopic near-metal cognitive drama: Serial Experiments Lain? The Matrix? Ex Machina? Ghost in the Shell, Robot Carnival, and Do Androids Dream of Electric Sheep? might reveal how we confront the phenomenon of relegating complex thoughts and emotions to humanity. But rarely do we envision a future where technology does more than mirror our current capacities. Maybe machines will take over, but technology is merely a medium through which we express ideas. We don’t need to create [slaves](https://www.newsweek.com/elon-musk-tesla-bot-people-can-run-away-it-1621330)…
To reiterate some questions I asked that were too onerous to find a landing here:

*How are contemporary technologies wholly affecting the way we cognize the world?*

*What drives the mapping of lived experience onto technology?*

and finally:

*What flips the medium of technology to drive a hyperreal imitation of references to itself, such that the reference loses context to what technology itself abstracts?*
And the conclusions Searle’s argument leads to:

- computer programs cannot substitute for the mind
- our awareness and reason cannot be as simple as running a program
- anything that causes the mind would have causal powers equal to those of the brain
- any artifact humans may create that has mental states similar to those of humans would not be a simple program