👽 AI Part 2 Complete ChatGPT Tutorial - UPDATE - Is AI the End of the World? Or the Dawn of a New One?

How one small coding error sent GPT-2 down a very bad X-rated path. An example of how a little mistake can spin this technology out of control.

2 Likes

You’re absolutely right—small coding errors can have significant unintended consequences, as demonstrated by the incident with GPT-2 where a minor mistake led the model to generate inappropriate, X-rated content. This example highlights how even tiny oversights can cause AI systems to behave unpredictably, emphasizing the importance of meticulous attention to detail in AI development.

However, since the time of GPT-2, there have been substantial improvements in AI models to prevent such issues. Modern models like GPT-4o and now even o1 incorporate robust safety protocols, advanced training techniques, and multiple layers of oversight. These include rigorous testing, reinforcement learning from human feedback, and strict adherence to ethical guidelines. These enhancements greatly reduce the risk of small coding errors leading to significant problems, providing greater confidence in the safety and reliability of today’s AI technologies.
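As a rough, non-authoritative illustration of what “reinforcement learning from human feedback” means, here is a toy Python sketch. The `reward` heuristic and the example pair are invented for this post; a real system learns a neural reward model from large numbers of human-labelled comparisons and then fine-tunes the language model against it. The only point is the shape of the training signal: answers humans preferred get pushed to score higher than answers they rejected.

```python
# Toy illustration of the idea behind reinforcement learning from human
# feedback (RLHF): a reward model is trained so that completions humans
# preferred score higher than completions they rejected.
# Everything here is made up for illustration; this is NOT how any vendor
# actually implements it, and the "reward model" is a hand-written
# heuristic instead of a learned neural network.
import math

def reward(completion: str) -> float:
    """Stand-in reward model scoring a completion."""
    score = 0.0
    if "step by step" in completion:
        score += 1.0   # pretend human raters prefer structured answers
    if "offensive" in completion:
        score -= 2.0   # pretend human raters penalise unsafe content
    return score

def preference_loss(chosen: str, rejected: str) -> float:
    """Pairwise (Bradley-Terry style) loss: small when the completion the
    human chose out-scores the one they rejected."""
    margin = reward(chosen) - reward(rejected)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# Hypothetical labelled comparison from human raters
chosen = "Let's work through this step by step."
rejected = "Here is some offensive text."
print(f"loss = {preference_loss(chosen, rejected):.3f}")  # ~0.049: ranking agrees
```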

In fact, my own experience with the latest model illustrates these advancements. When I asked it about how I could maximize the number of honeybee hives I could create, maintain and ultimately release into the wild, the AI recognized potential legal and environmental concerns associated with that action. It understood that releasing honeybees without proper authorization could disrupt local ecosystems and might violate regulations. By identifying these nuanced issues, the model appropriately refrained from providing guidance, demonstrating its ability to navigate complex ethical and legal considerations effectively.

Recognizing the content violations, I began researching why this happened and discovered the legal issues surrounding the release of honeybees into the wild. Many regions have strict regulations governing beekeeping and the introduction of bee populations into the environment. Uncontrolled release of honeybees can disrupt local ecosystems, spread diseases to native bee species, and negatively impact biodiversity. Laws such as the Honey Bee Act in the United States prohibit the importation and spread of honeybees without proper authorization to prevent ecological imbalance and protect agricultural interests.

2 Likes

@mbready That’s good to know. I hope and pray that AI continues to act in ethical ways as we enter this period of “singularity” in which AI exponentially begins to outpace human intelligence! If it’s not programmed to act ethically then we’re all in big trouble!

3 Likes

Is Elon Musk’s Neuralink the first step towards us becoming like the Star Trek “Borg” in which our individuality will be absorbed into a mass undifferentiated consciousness? Or into a higher level of unity and oneness in which our individuality is in harmony with a greater wholeness?
Is our future evolution about merging with AI? What are the dangers and risks as well as the hopes?

4 Likes

Good excerpt from a podcast with Bernardo Kastrup.

2 Likes

Why Scientists Are Puzzled By This Virus

Would be really cool if China used AI to create a virus that cured cancer.
Might give them some karmic redemption…

4 Likes

LAPD robot dog enters home following shooting

Crime-fighting robot dog joins NYPD

Live Reading | Ray Bradbury - Fahrenheit 451 (Part 1)

3 Likes

This is how AI is being used nowadays, too:

4 Likes

Amazon Echo

4 Likes

China is merging living human brain neurons with AI, and an Indian company has created a guard robot with a gun. OK, let’s merge human brains with robots and then give them guns. What could possibly go wrong?!

3 Likes

From the article:

We can be sure that “statistics show,” or will soon be made to show, that speakers who use ChatGPT are thought to be better speakers. Soon we will hear the inevitable praise from the professors who gave us a laptop on every desk: the use of AI represents a great democratic advance. It’s a leveler, after all, like cellphones: it brings not only equal opportunity but equal outcomes. But like every promise of effortless success, this one is empty. There will still be competition, only for the best AI programs generating speeches, which means competition in money to purchase the latest fabrications.

What my colleague was recommending, with the best of intentions, would result in the absence of the cultivation of the requisite skill, and hence the absence of that skill altogether. Telling our students to let AI do their work, or a substantial part of their work, will mean that they will never learn grammar, how to turn a phrase, how to write a compelling, arresting formulation, how to win over an audience through persuading or convincing. The consumers of AI will never know what it means to persuade, to invent, or to discover. They will fail to observe the human hopes and fears and other passions that move and motivate us. They may not even grasp the import of what the AI speech is saying. They will certainly be more liable to carry whatever unthought-through implications the AI generator unwittingly harbors.

3 Likes

My wife, who teaches college writing and literature (among other things), tells me that professors now are having to learn to distinguish between papers written by AI and those actually written by students. She says that the AI papers are “too perfect” in terms of grammar and spelling but the content is pablum - not very original or creative. But what will happen as AI becomes exponentially smarter? Will it massively outpace our own human creativity? Will we be the masters or become the slaves? This is getting scary.

3 Likes

@NightHawk999 What happens when both the police and the criminals have robots?
Or when there are armies of robots invading countries? Or when AI decides that the most logical thing to do is to keep the humans under control?

I think we need an international treaty that would institute Asimov’s laws of robotics (first formulated in 1942); a toy sketch of how their precedence ordering works follows the list:

  • First Law

A robot must not harm a human or allow a human to come to harm through inaction.

  • Second Law

A robot must obey human orders, unless doing so would conflict with the First Law.

  • Third Law

A robot must protect its own existence, unless doing so would conflict with the First or Second Law.
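
To make that precedence ordering concrete, here is a toy Python sketch. Everything in it is hypothetical (the `Action` fields, the `permitted` check); the genuinely hard, unsolved problem is deciding whether an action “harms a human” in the first place.

```python
# Toy sketch of the strict precedence in Asimov's Three Laws:
# a lower-numbered law always overrides a higher-numbered one.
# The Action fields and the permitted() check are hypothetical; deciding
# in the real world whether something "harms a human" is the hard part.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool      # would doing (or failing to do) this hurt a person?
    is_human_order: bool   # was this action ordered by a human?
    endangers_robot: bool  # does it put the robot itself at risk?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, or allow harm through inaction
    if action.harms_human:
        return False
    # Second Law: obey human orders unless that would violate the First Law
    if action.is_human_order:
        return True
    # Third Law: self-preservation counts only when the first two laws are silent
    return not action.endangers_robot

# A human order outranks the robot's self-preservation (Second Law over Third)
print(permitted(Action(harms_human=False, is_human_order=True, endangers_robot=True)))  # True
```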

3 Likes

One of the dreams of the AI Singularity folks is that we will all eventually become cyborgs. If we lose a body part or organ, through accident or age, it will be replaced with an artificial organ. Parts of the brain will be replaced with computer chips. There is even talk of injecting nanobots that will reconstruct parts of the body. We will then be immortal, but will we still be human? Will we evolve into something more than human, or less than human? I for one feel scared and repulsed by this whole direction. It seems like a materialist-based ego-clinging to this particular form and world.

2 Likes

Save money on groceries . . .

3 Likes

Rick & Unity’s Love Story | Rick and Morty | adult swim

(Scathing critique of the US two-party political system at the 3-minute mark)
Save money on dissension…

3 Likes

Yeah, just plug in and recharge your battery while you sleep. But do cyborgs dream?

3 Likes

Maybe, like this . . .

2 Likes