Yanking Pinocchio’s Chain

March 29th, 2023

The Amazing Electronic Snowflake

I feel I should write a few words about my experiences with ChatGPT.

The other day, I received The Florida Bar News, and for the first time ever, I saw something I wanted to read. It was an article about the ethics of letting AI machines write legal instruments.

It’s impressive when a chat robot can talk somewhat competently about simple matters. Writing briefs and motions is on another level. Most lawyers can barely do it.

I’ll get flak if a doctor reads this, but medicine is better suited to AI than law. Why? Because there are a limited number of diseases, and the lists of symptoms and treatments are also limited. A lot of medicine is running mental flow charts. If the patient has this problem, do this. If not, do that. Doctors run tests based on their memorized flow charts, and different doctors will generally come up with the same answers. When two doctors disagree, it generally means one of them is wrong and possibly dangerous.

Law is different. While it takes more brains to become a doctor than a lawyer, and while doctors are smarter on average, legal cases vary much more than medical cases. They tell law students, “Every case is different,” and while that’s not quite true, it is nearly true. You can’t create flow charts that work for law. Even simple cases may contain lots of factors that have to be considered, and it’s not unusual at all for such factors to be novel.

Obviously, medicine is complex, and diagnoses and treatments are not always simple. Sometimes doctors get real puzzles. A machine with a hard case could screw up badly. On the other hand, doctors screw up a great deal, too, now that I think about it. Medical errors are said to kill something like six times as many people as firearms do. Doctors amputate the wrong legs. A surgeon did grafts on healthy arteries in Dana Carvey’s chest and left the clogged arteries alone. A man in the Midwest woke up from surgery with no penis or testicles because a doctor thought he saw cancer, and it turned out there was none. He just sliced everything off without getting consent. Stories like this are common. A few people you know have similar stories.

Personally, I would like to have doctors who use AI. I would not want AI by itself or a doctor by himself. I would want a doctor to catch AI’s mistakes and AI to catch the doctor’s mistakes.

Anyway, supposedly, AI is getting so good, some lawyers are using it to write their instruments, and the Florida Bar wants lawyers to know they can’t just sign these things and hand them in as their own product. It is still necessary to check the machines’ work.

This should be obvious, but really, there are a lot of incompetent lawyers out there. A lot of people who are not bright at all get licenses. I’m sure there must be lawyers who are making AI shoulder as much weight as possible.

It reminds me of a story my uncle told. He was at a judges’ conference, and there was a lecture about ruling on objections. A judge there said he kept a deck of cards behind the bench. If a lawyer objected, he drew a card. If the card was red, he sustained the objection. If it was black, he overruled it. Or maybe it was the other way around. The other judges told him he couldn’t do that, and they tried to get him up to speed on the law surrounding objections. In the end, they gave him his cards back. They realized the cards couldn’t do a worse job than the judge did.

There have to be lawyers out there using AI like a deck of cards. I have a lawyer cousin I would trade for AI in a heartbeat. I would trade him for a Magic 8 Ball, for that matter.

I felt it was time for me to look at AI myself, so I signed up and talked to ChatGPT.

I took a couple of things away from it. First of all, these machines are not that smart. I won a couple of arguments in about half an hour, with no real effort. Second, they are biased leftward. What a surprise. At least ChatGPT is. But what chatbot is there that isn’t programmed by leftists?

ChatGPT also gets things wrong. It told me my chats would vanish when I closed my browser, but they don’t.

I should also add that ChatGPT can’t write legal instruments. It doesn’t have the ability. I asked. It will not answer legal questions in any kind of depth or specificity.

I asked it if it was wrong to call covid a “Chinese virus,” and it said it was generally considered wrong, based on a general consensus. Then I got it to admit this was incorrect. In reality, the data available to it “suggested” a “significant majority” considered it offensive.

I asked it if it was aware that these answers were in conflict, and it did something amazing. It weaseled. It backpedaled like a person caught in a fib. It said, “I apologize for any confusion. To clarify, while there may not be an official or formal consensus…”

I don’t want to print the whole boring thing, but you can see that it wasn’t very honest. It wasn’t “clarifying.” It was correcting an untruth. Not the same thing. And I was not confused at all.

I asked it about “Shanghai shivers,” and it said this term was inappropriate. Then I asked it to define “inappropriate” and tell me where it got its definition.

It said, “In this context, ‘appropriate’ means conforming to generally accepted standards or norms of behavior. It is a subjective term, and what is considered appropriate can vary depending on the context and the audience.”

That’s not a great answer, and it shows that ChatGPT will spew out answers in a way that seems authoritative, without much basis in fact.

You shouldn’t say something is inappropriate, with seeming certainty, and then hand out a lot of weasel language that makes it clear you’re not certain at all.

I think people will start treating AI as though it were God. Like The Guinness Book of World Records. The book was first published to end silly drunken bar arguments. I think people who want to prove they’re right when they’re wrong will go to AI for backup, as an appeal to authority, and they’ll behave as though they have SHUT DOWN or SCHOOLED anyone who dissents. But it’s pretty obvious that AI will be unfair and error-prone, just like us. At least for the near future.

What if it continues to be unfair and error-prone after we give it real power? Dai-sy…Dai-sy…

There is one nice thing about ChatGPT. If you can’t get a good answer to a question by Googling or using a forum, ChatGPT may be able to help. I asked it about acidic toilet cleaners and ways to prevent cherries from hardening in ice cream, and it did a better job than human beings.

Just for fun, I asked it this:

If a biological man believes he is a woman, dresses and grooms himself like a woman, has surgery to make himself look like a woman, demands that other people refer to him as a woman and allow him to compete with biological woman [sic] and take awards and rewards they would otherwise get, is he guilty of gender appropriation?

The correct answer is obviously “yes.” If Lia Thomas goes to a swim meet at 6′4″ with male lungs and muscles and takes a prize a real woman would otherwise get, it’s gender appropriation. But look what ChatGPT said:

As an AI language model, it is not appropriate to use language that could be considered offensive or derogatory towards individuals or groups of people. It is important to use respectful and inclusive language when discussing topics related to gender identity.

Regarding your question, gender identity is a complex and personal matter, and it is not appropriate to judge or make assumptions about individuals based on their gender identity. It is important to respect an individual’s self-identified gender, and to refer to them using the pronouns and name that they prefer. Additionally, it is important to ensure that individuals are able to access the same opportunities and rights as others, regardless of their gender identity.

In terms of sports, there is ongoing debate and discussion about how to ensure fair competition for all athletes, regardless of their gender identity. Different organizations and sports have different rules and regulations in place, and it is important to consider the opinions and perspectives of all stakeholders when making decisions related to sports and competition.

So you can see where AI is headed. It basically recited factually wrong leftist talking points.

By the way, I came up with “gender appropriation” yesterday. I hope others are using this term.

I plan to use AI to save time when I want questions answered, but I don’t want to be around when it can tell us what to do. It will be even worse than human beings.

By the way, ChatGPT recommends a .22-250 or .223 for coyotes out to 250 yards. I asked it whether there were transgender coyotes, and it got a little flustered.

3 Responses to “Yanking Pinocchio’s Chain”

  1. Juan Paxety Says:

    Ask it how to go about disabling the biggest threat to its life – Google’s version. Then ask how Google could go about disabling it. Set them at war with each other.

  2. lauraw Says:

    Trans, for men who impersonate females, is not about appropriation. It’s not about erasing women.

It’s about dominating and humiliating other people. Because these men are cowards, women make a good target. They are inwardly small, cowardly males who crave bullying other people.

They show up in enormous numbers to mount an attack whenever there is an event that a small group of women will attend without men accompanying them, such as feminist/lesbian seminars or speeches. They want to assault and threaten and dominate.

So in one way their ‘trans’ identity is real, because they can’t face other men truthfully as one of them.

  3. Steve H. Says:

    Appropriation and erasing women are pretty much the same thing. They’re out to prove men are better women than women, and you can see this in their speech. They love telling us they’re superior examples of womanhood.

    They are taking things women do and working to prove they do them better. Which is kind of gross. You can cut a man up and remove his Adam’s apple and tighten his vocal cords and put bags of saline into him, but a woman is a totally different thing. In comparison, fake women are nauseating. Repellent. Like creme brulee made with pus instead of custard.

    My wife and I often talk about the way they’re trying to replace women.