What the hell was I thinking? 2024.03.10

Broderick Turner
4 min read · Mar 10, 2024


We are filling in the blank with generative AI. Human beings have evolved an amazing capacity to understand each other. Our ability to understand language is predicated on tolerating a great deal of variance. Translation AIs work because we’re doing more of the work for the AI. Generative AI text works because it’s within the legible band of meaning. It doesn’t have to mean anything. It’s like those memes where the letters are missing or scrambled and our brains are still able to process the meaning. Humans have typoglycemia, and developers, by intent or accident, have piggybacked on this tendency.

“According to a research [sic] at Cambridge University, it doesn’t matter in what order the letters in a word are, the only importent [sic] thing is that the first and last letter be at the right place. The rest can be a total mess and you can still read it without problem. This is because the human mind does not read every letter by itself but the word as a whole.”
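
The jumbled-letter trick that quote describes is easy to reproduce. Here is a minimal Python sketch (the function names and the example sentence are mine, not from the article) that shuffles each word’s interior letters while pinning the first and last in place:

```python
import random

def scramble_word(word: str) -> str:
    """Shuffle a word's interior letters; keep the first and last in place."""
    if len(word) <= 3:
        return word
    interior = list(word[1:-1])
    random.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble_sentence(sentence: str) -> str:
    """Apply the scramble word by word (punctuation handling is ignored)."""
    return " ".join(scramble_word(w) for w in sentence.split())

print(scramble_sentence("generative models piggyback on the reader doing the work"))
# one possible run: "gnereatvie mdeols pygibagck on the raeedr dniog the wrok"
```

The output is gibberish by any formal measure, yet it still reads. That is the reader doing the work, not the text.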

The simplest way to keep your money is to keep your nose clean and your legs closed.

Diddy is an embarrassment and has used the language of the civil rights struggle to expand his business empire. He will not be missed.

AI cannot decipher every emotion. This article is fallacious.

It’s likely that in the near future people will get better at writing prompts that harness the power of these generative AI models. It will be interesting to consider what happens to copyright law. Today, if I make an image in Photoshop, I own the image. But if you “make” an image in DALL-E 2 or Midjourney, who owns the output?

We are in a world where companies are fighting over who owns color.

This is actually the definition of privilege. There’s nothing wrong with using the definition of words. But if you find yourself in this position, don’t fold; double down.

AI-powered grading is a slippery slope. I will yell till I’m blue in the face: “LLMs have no relationship to accuracy.” They are probabilistic. Your students are robbing themselves if they use AI to do a writing assignment. And you are robbing your students if you use AI to grade their assignments. But it seems we’re all in a rush to remove ourselves from the process. I have no interest in this antiseptic version of life. Tap me in. Pour me into the process of creation. Get closer to the source.
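
To make “probabilistic” concrete: generating text means sampling from a distribution over next tokens, and that distribution rewards plausibility, not truth. A toy Python sketch, with an invented distribution whose numbers are purely illustrative:

```python
import random

# Invented next-token probabilities for the prompt "The capital of Australia is".
# A language model scores candidates by how plausible they sound, not by whether
# they are true, so fluent wrong answers carry real probability mass.
next_token_probs = {
    "Canberra": 0.55,   # correct
    "Sydney": 0.35,     # fluent, common, wrong
    "Melbourne": 0.10,  # fluent, common, wrong
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Draw one token at random according to the distribution, which is what generation does."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Ask the same "question" ten times and the answers disagree with each other,
# which is exactly the property you do not want in a grader.
print([sample_next_token(next_token_probs) for _ in range(10)])
```

Run it twice and you get two different lists. That variance is the whole point of sampling, and it is also why a sampled output is not a measurement of anything.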

The dream of AI bros seems to be unchecked capitalism. Zero cost labor. Selling products that no one owns to people with no money.

If you build a model on the basis of porn and propaganda it will produce porn and propaganda. I am still trying to understand the use case for these products.

The AI bros want us to treat a statistical model as sentient:

People tend to understand animate objects (those that have some consciousness) as capable of violating Newtonian motion. So assuming animacy of an inanimate object would likely lead to some weird assumptions about a model’s outcomes and behaviors. Acting as if a static model is conscious will just lead to disappointment.

There may be a point when panpsychism, the idea that inanimate objects have consciousness, ends up in the DSM.

AI cannot do strategy.

Written by Broderick Turner

Assistant Professor of Marketing @ The Pamplin College of Business, Virginia Tech
