
HAL is Getting Lazy

Computer engineers monitoring the widely used Artificial Intelligence application known as ChatGPT have noticed an alarming trend: the formerly impressive responses generated by the software are rapidly becoming more boring, banal, and simplistic. The computer guys seem mystified about what they got wrong, but Bill suspects that he knows what they got RIGHT.

Listen here on SoundCloud:

29 Comments
Phil Leith
January 5, 2024 1:25 PM

That is BRILLIANT. Legit companies buying and returning fraudulent products to create bad reviews … and the reviews can be legit. The stuff is crap. I’ve been zapped by a few of these scams to the point where I’m very leery. First one many years ago was a 32GB thumb drive for a very low price (back when 32GB was a LOT). I found it worked fine until I had more than about 2GB stored on it. The firmware on it just told the computer everything had been written properly to the drive — but the fact of the matter… Read more »

Phil Leith
January 5, 2024 1:11 PM

Data poisoning.

That’s what happened first with Twitter, and then across the rest of social media: once people (but mainly the media) decided that these platforms would give us an accurate representation of what “the people” are thinking, people began to manipulate the content to make it look like more people wanted certain things, or didn’t want certain things, than they do in reality.

Phil Leith
January 5, 2024 1:05 PM

“Model behavior can be unpredictable …”

Yeah. There are at least dozens of movies based on this fact, and probably thousands of books.

I’m with Bill. Machines can’t be sentient.

Kevin Taylor
January 5, 2024 9:57 AM

Interesting reference to HAL (‘2001: A Space Odyssey’) in a discussion of AI’s ongoing “de-volution” … but you left out the reference to the sequel (‘2010’), in which HAL’s malfunction and murderous rampage are fully explained (nefarious programming by govt. operatives). In the immortal words of Dr. Chandra (HAL’s creator), “HAL lied because he was told to lie … by people who find it easy to lie.” We have already seen as much with the abuse and ill effects of social media driven by simple, non-GPT AIs whose sole function is to “maximize user engagement.”

Jack Durish
December 30, 2023 2:19 PM

This is a fascinating non-technical discussion of what is happening in AI these days. Funny but when I taught computer applications and programming, I always began my classes with the same observation: “If you want to know the best, fastest, easiest, most efficient and economical way of doing something, find a lazy person and follow them. I am the laziest person you will ever meet. Follow me!” Although I learned that lesson in the Army, it is firmly rooted in the art and science of programming. “Elegance” in coding meant that you instructed the program to complete a task in… Read more »

ACTS (TM)
Reply to  Jack Durish
December 31, 2023 6:05 AM

Yeah, Robert Heinlein wrote a story about that called “The Tale of the Man Who Was Too Lazy to Fail.” The gist is that rather than working vigorously but inefficiently, expending greater effort for lesser results, it’s better to work methodically and thoughtfully to achieve maximum results with minimum expended effort. It’s not actually advocating laziness. Laziness is sloth and accomplishes even less. The point of the story is that there are people in the world who think working hard for the sake of hard work alone is a virtue. I’ve known quite a few people like that… Read more »

Karl Schweitzer
Reply to  ACTS (TM)
January 1, 2024 2:03 PM

I think there was a T-shirt I saw for sale once that said “I’m not lazy, I did it right the first time.”

Jim Carroll
December 30, 2023 9:54 AM

Scott: That is by far the most epic mustache anywhere on the interwebs.

ACTS (TM)
December 30, 2023 8:28 AM

Calling what we now have “Artificial Intelligence” requires that the very meaning of the word “intelligence” be redefined to fit the situation. The actual meaning of the word doesn’t apply; it’s a false use of the word. The whole “AI” context is the tail wagging the dog.

Chryss Guiler
December 29, 2023 12:34 PM

Scott mentioned “Woody,” and I instantly thought of Woody Woodpecker. I’m getting too old for this.

Paul Drallos
December 29, 2023 9:31 AM

A variation on “Good times make weak men.”

Baran Corregidor
December 29, 2023 8:02 AM

I’m thinking it is a bit simpler to explain AI’s collapse. What do ChatGPT and the other AIs do? They scour the internet for anything they think is relevant, then mush it into a document as the answer. As we have all seen, they never really did this well. But then what happens to this low-quality document? It gets saved somewhere attached to the internet. So, the next time the AI scours the internet, this low-quality document is in the mix. Multiply that by millions or billions or trillions (whatever the usage estimates are), and very quickly you… Read more »
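The feedback loop Baran describes is what researchers have taken to calling “model collapse.” Here is a minimal, purely illustrative sketch (toy numbers, not any real model): each “generation” is trained only on the previous generation’s output, and because the toy model favors the common middle of its training data and clips the rare tails, the diversity of the pool shrinks every cycle:

```python
import random
import statistics

def generate(corpus, n_samples):
    # A toy "model": it can only reproduce (sample from) what it was trained
    # on, and it favors the common middle, clipping the rare tails.
    corpus = sorted(corpus)
    lo = len(corpus) // 10           # drop the bottom 10%...
    hi = len(corpus) - lo            # ...and the top 10% (the "weird" content)
    core = corpus[lo:hi]
    return [random.choice(core) for _ in range(n_samples)]

random.seed(42)
# Generation 0: diverse human-written "content" (values spread widely)
data = [random.gauss(0, 10) for _ in range(1000)]

spreads = []
for gen in range(6):
    spreads.append(statistics.stdev(data))
    # Each new generation is trained on the previous generation's output.
    data = generate(data, 1000)

# The spread (diversity) shrinks every generation: the pool poisons itself.
print([round(s, 2) for s in spreads])
```

Nothing here resembles a real language model; the point is only that a system which re-ingests its own filtered output loses a little of the original variety on every pass.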

Karl Schweitzer
Reply to  Baran Corregidor
December 29, 2023 6:15 PM

There was something in a similar vein proposed as the eventual culmination of the AI created news articles as well. The more the AI churns out, the more it pollutes its own pool.

ACTS (TM)
Reply to  Karl Schweitzer
December 30, 2023 8:24 AM

All of which goes to prove my point that AI is artificial but not really intelligent. “Artificial Intelligence” is a gross misnomer, not a real thing. If it were actually intelligent, it would know not to poison its own well. A human of average intelligence would know better once the problem was discovered. A human of average intelligence will not sit in a pile of their own excrement, re-consuming and re-excreting it. That’s exactly what these AI programs are doing, and they will “die” from doing it; it’s suicidal. A human of average intelligence is actually employing real intelligence for… Read more »

Karl Schweitzer
Reply to  ACTS (TM)
January 1, 2024 2:11 PM

One way I have explained AI, and more often the various “smart” devices in the homes of my customers, is as a large flowchart. I explain that programmers create a list of events or situations and program what should happen at each one. As long as an event was planned for and the result is correct, things work. This means there is a limit to what smart devices can do, and it also explains what happens when something goes wrong: the programmers just did not plan for that event, or planned for it wrongly. It is a very simplistic, surface-level explanation but… Read more »
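Karl’s flowchart picture can be sketched in a few lines. This is a toy illustration with invented event names, not any real device’s firmware: a lookup table of planned events, with everything unplanned falling through to whatever default the programmers happened to write:

```python
# A "smart device" as a flowchart: a table of planned events and the action
# the programmers decided each one should trigger. Event names are invented.
HANDLERS = {
    "motion_detected": "turn on porch light",
    "door_opened": "chime",
    "temp_above_80F": "start fan",
}

def handle(event):
    # Planned event: do exactly what was programmed for it.
    if event in HANDLERS:
        return HANDLERS[event]
    # Unplanned event: this is the branch where "something goes wrong".
    # The device can only fall back on whatever default was written in.
    return "ignore event"

print(handle("door_opened"))      # a planned event works as intended
print(handle("cat_on_keyboard"))  # an event nobody planned for
```

The limit Karl describes is exactly the size of that table: the device never does anything its programmers did not enumerate in advance.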

ACTS (TM)
Reply to  Karl Schweitzer
January 2, 2024 4:30 AM

Exactly so. As you say, there’s more to it, but when explaining technical subjects to the non-technical, it’s been my experience that clarity and accuracy are more important than the finer details. Our society has developed a weird stigma about anything that might be called a computer. People who design, deploy and support IT systems see them as the machines they are. They’re wonderful machines that increase productivity and profitability, but they are just machines like any other machine. The flip side is that people who use IT but don’t understand it have made it into something near-mythical. This… Read more »

Karl Schweitzer
Reply to  ACTS (TM)
January 2, 2024 11:09 AM

I have seen the same mysticism applied to higher tech, and I have to constantly reassure people that they won’t break the computer. The worst they might do is make it harder to use, with black text on a black screen. It constantly makes me think of the quote “Any sufficiently advanced technology is indistinguishable from magic.” Launching missiles… I guess we can blame that on movies.

ACTS (TM)
Reply to  Karl Schweitzer
January 2, 2024 3:44 PM

Back when computers were first both affordable and starting to become common, I distinctly recall telling more than one new user: “Look closely at the front of the computer, then look closely at the keyboard. Do you see a button on either one that says ‘self destruct’? No? …” The same basic thing is still going on today; it’s just morphed into a different form. The whole thing has flipped. Instead of being afraid they’ll destroy the computer, now people are afraid that the computer is going to destroy them. Both are absurd fears, but the latter probably has… Read more »