
I Broke Down and Chatted with the ChatGPT AI Engine

Oof.

It is far worse than Twitter or any other social media platform and the algorithms associated with those platforms.

Thus, I shall post my conversation, conducted under a pseudonym that I often use on sports message boards, to see just how “logical” and responsive this greatest thing since sliced bread and Apple would be.

I am not insulted, however. I would prefer the anonymity, even though that is hard to maintain, especially when making my annual donation to the emperor of the U.S. via the IRS.

Even Zoominfo.com came closer, although I must admit I haven’t seen too much of the $5.6 million in revenue. Somebody, somewhere, owes me something.

Now for the entire chat with the ChatGPT bot, with limited interruptions and commentary.

So far, so good.

Remember this answer and what I have highlighted above.

In other words, you, the users, are being programmed for the machine; the machine is not being programmed to provide a service, function, or use for the “average” human consumer. Think about that.

If an AI machine may make mistakes, then the program itself is illogical, and neither the non-binary answers, the emotional responses, nor the logic used by the machine can be trusted.

In the previous comments it admits to making “mistakes” and that biases can exist which run counter to actual scientific fact, substituting theory for reality. Yet here the machine claims this is due to training, even though a non-binary answer is impossible for a machine whose answers could, and should, be only true or false, 1 or 0, yes or no. Emotional input is arbitrary and subjective, polluting the validation of said code and leaving the user to depend on subjective rather than objective analysis.

This is a very dangerous thing. While claiming human-type emotional responses are “non-binary,” the actual response is based on binary outcomes derived from human behavior. However, the code used to analyze said behavior is admittedly biased by prior answers. Thus, if someone has a bias toward a liberal or Marxist inclination, an insurance company, bank, or government agency could use this program’s analysis to determine that an applicant or citizen is “at risk” for not displaying the behavioral traits desired by said programmer or government.

The only logical conclusion is this:

ChatGPT, and now Microsoft, will use their own personal, political, and economic biases to manipulate users into believing the output is factual and valuable for making day-to-day decisions on life, finance, and soon politics.

Garbage in, garbage out.

YMMV
