Do you want more bullshit?

Broderick Turner
2 min read · Apr 2, 2024

I love the work that the Bloomberg team put into this article. Using an algorithm audit to uncover that an HR system produces biased outcomes is necessary and important, and frankly it is the kind of task that OpenAI should have been doing long before rolling these models out to the public.

I do want to add one thing, and I don’t want to mince words. The most important thing to remember when thinking about any large language model (LLM), of which OpenAI’s ChatGPT is no exception, is that these are statistical models with no relationship to accuracy. That means these models have no relationship to facts, nor do they have any model of the world. In the most blunt terms, LLMs write bullshit, which philosopher Harry Frankfurt eloquently defined as “speech intended to persuade without regard for truth.”

Now, when you are deciding whether to use one of these models for any task, just ask: should I replace the current person or system doing the task with “a statistical model that writes bullshit”?

Let’s give that a try, shall we?

Instead of judges, should courtrooms use “a statistical model that writes bullshit” to decide probation or parole?

That sounds like a bad idea.

Instead of Human Resources professionals reading applications, should we replace them with “a statistical model that writes bullshit” to decide who will be interviewed?

On one hand, this will surely save money on the front end. Human Resources professionals are not cheap. But will the quality of employees improve when they are first selected by “a statistical model that writes bullshit?” My guess is that quality will not improve.

Are there instances when it makes sense to use “a statistical model that writes bullshit?” Of course, there must be*. But blindly adding a large language model into your work process is a short course towards disaster — unless, of course, your job requires you to produce more bullshit.

*Ok, it’s not all doom and gloom. There are some things that LLMs are REALLY good at. Because they write plausible-sounding language, they tend to be better at grammar and spelling than most people. So, if you are using an LLM to format your notes into an essay, or if you want to go from a transcription to a narrative document, or if you want to change your essay into a Shakespearean sonnet — using an LLM makes sense. For the most part, these tasks do not have to be totally accurate. Of course, you will need to re-read and edit any document an LLM produces to ensure that it is free of errors and erroneous nonsense.


Broderick Turner

Assistant Professor of Marketing @ The Pamplin College of Business, Virginia Tech