Press "Enter" to skip to content

I Broke Down and Chatted with the ChatGPT AI Engine

Oof.

It is far worse than Twitter or any other social media platform and the algorithms associated with those platforms.

Thus, I shall post my conversation using a pseudonym that I often use on sports message boards to see just how “logical” and responsive this greatest thing since sliced bread and Apple really is.

I am not insulted, however. I would prefer the anonymity, even though that is hard to hide, especially when making my annual donation to the emperor of the U.S. via the IRS.

Even Zoominfo.com came closer, although I must admit I haven’t seen too much of the $5.6 million in revenue. Somebody, somewhere, owes me something.

Now for the entire chat with the ChatGPT bot, with limited interruptions and commentary.

So far, so good.

Remember this answer and what I have highlighted above.

In other words, you, the users, are being programmed for the machine, not the machine being programmed to provide a service, function, or use for the “average” human consumer. Think about that.

If an AI machine may make mistakes, then the program itself is illogical, and the non-binary answers, emotional responses, and logic used by the machine cannot be trusted.

In the previous comments it admits to making “mistakes” and that biases can exist which run counter to actual scientific fact, substituting theory for reality. Yet here the machine claims this is due to its training, even though a non-binary answer is impossible for a machine, as the answers could, and should, be only true or false, 1 or 0, yes or no. Emotional input is arbitrary and subjective, thus polluting the validity of said code and leaving the user to depend on subjective rather than objective analysis.

This is a very dangerous thing. While claiming human-type emotional responses are “non-binary,” the actual response is derived from binary outcomes based on human behavior. However, the code used to analyze said behavior is, by its own admission, biased by prior answers. Thus, if someone has a bias toward a liberal or Marxist inclination, an insurance company, bank, or government agency could use this program’s analysis to determine that an applicant or citizen is “at risk” for not displaying the behavioral traits desired by said programmer or government.

The only logical conclusion is this:

ChatGPT, and now Microsoft, will use their own personal, political, and economic biases to manipulate users into believing the output is factual and valuable for making day-to-day decisions on life, finance, and soon politics.

Garbage in, garbage out.

YMMV


2 Comments

  1. Janitor John Lug 02/02/2023

    Applications like ChatGPT are merely tiny public-facing facets of enormous and vast AI systems designed to gather certain types of information on people that are otherwise difficult to obtain, to train people to interact with these applications, to train people to put their trust and faith in these things, and ultimately, to take marching orders from it (or whomever or whatever is actually controlling it).

    • johngaltfla.com 02/02/2023

      100% spot on. It’s a tool for the less intelligent to believe that surrendering independent thought to a machine programmed by biased individuals is the “solution” to the world’s problems.

