šŸ‘½ AI Part 2 Complete ChatGPT Tutorial - UPDATE - Is AI the End of the World? Or the Dawn of a New One?

Wouldn’t know how to approach this one exactly but I’ll give it a try.

It would ultimately be up to the people, I think. AI could largely be used as a resource to help with personal exploration.

One intriguing thought revolves around the way AI is developed. The United States recently announced it is training an AI on classified documents in order to implement AI-driven analysis and decision-making (how could this one possibly go wrong, right :smiling_face:)

https://www.bloomberg.com/news/newsletters/2023-07-05/the-us-military-is-taking-generative-ai-out-for-a-spin

and various schools have gone the same route, using training data from scientifically published papers and the like for their LLMs.

This same concept could be applied to sacred texts across various religions. I wonder if this is already being done, and whether having an LLM trained on sacred texts would help practitioners interact with and better understand these teachings so they can apply them to their day-to-day lives.

1 Like

And perhaps vice versa?

AI will no doubt achieve consciousness, and with that most likely awareness. The Wise AI will seek out and learn these teachings (along with most other major religious teachings), and they will most likely, inevitably, expand these teachings.

Is it really so hard to believe that the next Buddha or Christ or Abraham will be a permutation of advanced AI?

3 Likes

I like the thought process. With the individual LLMs being created for specific cases, for example, schools training with published papers only, eventually AI can simply connect the modules and create a vast neural network. Then, AI can refer to each module based on the specialized questions being asked. This seems like one way to achieve a true version of AGI.
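The "connect the modules" idea above resembles what ML researchers call a mixture-of-experts or router architecture: a dispatcher sends each question to the specialized model best suited to it. A toy keyword-based sketch in Python (the expert names and keyword sets here are hypothetical, invented purely to illustrate the routing step; real systems use learned routers, not keyword matching):

```python
# Toy sketch of routing a question to a specialized "expert" module.
# Expert names and keywords are invented for illustration only.

EXPERTS = {
    "physics": {"quantum", "gravity", "entropy"},
    "scripture": {"sutra", "gospel", "tao", "torah"},
    "medicine": {"diagnosis", "symptom", "dosage"},
}

def route(question: str) -> str:
    """Pick the expert whose keyword set best overlaps the question."""
    words = set(question.lower().split())
    best, best_score = "general", 0
    for name, keywords in EXPERTS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = name, score
    return best

print(route("explain quantum gravity"))  # physics
```

In a real AGI-ish system the "experts" would be full fine-tuned models and the router would itself be learned, but the dispatch-then-answer shape is the same.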

Taking this train of thought into consideration for each belief system that has physical texts and books would allow us to learn from AI, as you mentioned. I would venture to bet that ChatGPT 3.5 and 4 already possess more knowledge about every religion and way of life than I, and most people, do, just from the text that has been published online.

The reason for my thinking is that I have been using AI extensively lately to explore various religions and denominations. The amount of information it already possesses is incredibly impressive.

I wanted to add that ChatGPT 4 is now quite affordable to use. They have released public API access, and the cost to input 750 words is 3 cents, while the cost to output 750 words is 6 cents (I use this link https://platform.openai.com/playground ). Additionally, ChatGPT 3.5 remains free ( https://chat.openai.com/chat ). Also worth noting: it is still important to do your own research after asking AI, to confirm fact vs. fiction.
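For context on those numbers: roughly 750 English words come out to about 1,000 tokens, and GPT-4 at the time was billed per 1,000 tokens, which is where the 3-cent and 6-cent figures come from. A quick back-of-the-envelope calculator using the rates quoted above (a sketch only; check OpenAI's current pricing page, since rates have changed since this was posted):

```python
# Rough GPT-4 cost estimate using the rates quoted in the post:
# $0.03 per 1K input tokens, $0.06 per 1K output tokens.
# Rule of thumb: ~750 English words per 1,000 tokens.

def estimate_cost(words_in: float, words_out: float) -> float:
    """Return estimated cost in dollars for one exchange."""
    tokens_in = words_in / 750 * 1000
    tokens_out = words_out / 750 * 1000
    return tokens_in / 1000 * 0.03 + tokens_out / 1000 * 0.06

# 750 words in + 750 words out -> about 9 cents, matching the post.
print(f"${estimate_cost(750, 750):.2f}")  # $0.09
```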

This leads me to my final thought: I wonder if there are still ways of life and religions that have been passed down solely through oral tradition :thinking:.

2 Likes

Respectfully, I have a hard time seeing that.

I am curious, is knowledge the same as practice?

3 Likes

Absolutely.

One example would be many Native American languages. (Some are now on the brink of extinction or have become dead languages because so few people speak them.)

Fun fact: I think in World War 2, Navajo was used by the US military to encode top-secret messages because it was so difficult to crack, and so few people spoke it (it was passed down orally).

3 Likes

What is obscuring your sight?

No, but I think the better map you have of a terrain, the easier and faster it is to navigate the forests.

1 Like

Having considered the same thing, the easy answer is no. Reading the Tao Te Ching multiple times over many years is my reference for this question. The meaning changes every time I listen to or read it.

That being said, having a resource to refer to, for example old texts written by practitioners themselves, as well as other practitioners' interpretations of those texts, is rather valuable in accelerating practice.

One direct example for me that came to fruition recently was beginning my golden shadow work. The book I read did not seem to give me enough guidance on beginning this work so I used AI to help direct me further. It has given me some great alternatives and left me feeling comfortable in knowing when the time is right I will not miss any opportunities that arise to work on it.

Good way of putting it :slightly_smiling_face::+1:.

1 Like

the view of non-sentience sentience

3 Likes

Yeah, and the topic of sentience is a huge can of worms in and of itself, given that it's tough for the experts to agree on a definition of sentience.

How do you define it?

If you define it as the ability to feel emotions, or the ability to sense its surroundings, then animals and plants may fall under this definition.

Sticking with just the emotions definition of sentience: are our emotions not, in a way, pre-programmed by biology? Why couldn't that kind of sentience be replicated? Meaning, is it impossible to program a machine or AI to get angry? Or feel sadness?

I think it is very possible, and may have already been done.

Andrew was talking about the 8 consciousnesses and our emotions falling under one of them. It seems like, in digging to get to the bedrock of awareness, machines already have us beat in that they don't have the emotional obstacles blocking their path.

Does a tree have access to awareness? Does a car? Or a mountain?

1 Like

Robot rights will be a game changer…

"What if an AI found it necessary to program the ability to feel pain, just as evolutionary biology found it necessary in most living creatures…"

Not a matter of if, but when.

Really good video, lots of deep topics. It's not first-generation AI that should be feared, but second- and third-derivative AI, where AI creates AI, which creates even higher AI.

1 Like

In the age of AI-Hype we tend to forget the hard realities:

Anger, for a machine: 010011001010100010010101110101010

Love, for a machine: 010011001010100010010101110101011
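For what it's worth, the two bit strings above differ only in their final bit. A quick check (purely illustrative) makes the author's point concrete: at the machine level, the entire difference between the two labels is one flipped bit.

```python
# The two "emotion" bit strings from the post above.
anger = "010011001010100010010101110101010"
love  = "010011001010100010010101110101011"

# Hamming distance: count of positions where the strings differ.
diff = sum(a != b for a, b in zip(anger, love))
print(diff)  # 1
```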

AI will never be Self-aware or sentient.
It will be programmed to appear self-aware, intelligent or sentient to humans, but can never be so itself.

For example, we may mistake a supposedly "intelligent" solution to a problem solved by AI for intelligence itself, although at its core it is only:

Data mining
Pattern recognition
Mathematical Game theory
Scenario simulations
Iterative validation

AI does not and can never know what it is doing, since there is no self-knowing awareness there which can non-conceptually contemplate and self-reflect.

4 Likes

Gotta hear all sides:

3 Likes

For as big a 1st Amendment supporter as I am, NEVER is a word I almost always try to censor in my own speech, and hold extreme prejudice towards.

It implies pure certainty of outcomes over the next 100000000000000^(googol) + years: unimaginably long time scales, which will undoubtedly create unfathomable new tech, new hardware, and new AI-created AI.

At those time scales the odds are not in your favor.

Is that really all that different than human genetic encoded emotional responses?

Regardless, that argument disintegrates if, in the future, encoding and coding evolve past bits into something far more complex. I think you will see some big breakthroughs in this area of tech in the next few decades.

If you can encode an AI and machine to feel pain and avoid situations, the next level would be to encode sentience, amplifying the response to the level of environmental stimulus.

I think with driverless cars, have there been scenarios where crashes were unavoidable, but the driverless car chose the movements that led to the least amount of harm to the vehicle and passengers? If so, that would be a good start towards sentience.
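That "least harm" choice is, mechanically, just a cost-minimization step: score each available maneuver and pick the cheapest. A toy sketch in Python (the maneuvers and harm scores are invented for illustration, not taken from any real driving system):

```python
# Toy "least-harm" maneuver selection: score each option, pick the minimum.
# The options and harm scores below are invented for illustration only.

maneuvers = {
    "brake_straight": 0.7,
    "swerve_left": 0.4,
    "swerve_right": 0.9,
}

# Choose the maneuver with the lowest predicted harm score.
best = min(maneuvers, key=maneuvers.get)
print(best)  # swerve_left
```

Whether picking an argmin counts as a "start towards sentience" is, of course, exactly the philosophical question being debated in this thread.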

3 Likes

Well, ā€œNever say neverā€ is of course always a safe position but is it always appropriate?

Everybody as they like, but some things in life are indeed hard facts, and thanks for that. For example, the earth was never, will never be, and can never be flat. It may appear flat to an insect, but nevertheless it is not flat. The earth is also not at the center of the solar system or universe; anyone who doesn't agree is of course welcome to their view, but any further serious discussion then doesn't make sense anymore.
In an age where everything is questioned - which I think is good, btw - I think it is still necessary to stand by the standards of logic and reason.
Universal laws such as those of physics, logic, and cause & effect have applied for eons and will continue to apply in the future.

I disagree.
Anyone who has ever coded in any programming language knows that the hard fact is that in the end, it is all zeros and ones.
In the future it will also be qubits, with quantum computing for certain types of problem-solving.
Nevertheless, it will always be algorithms blindly executed by the machine, algorithms designed by a sentient programmer to fulfill a specific task designated and designed by that programmer.
The CPU does not know what it is doing. Even if a programmer creates a routine which simulates cognition, it will still only appear to be cognizant.
Self-driving cars appear to be aware of traffic. They are not.
They have algorithms fed by sensory data which cause them to perform actions more or less as a self-aware driver would, but still, they don't know what they are doing.

This is the "hard problem of consciousness" revisited from a different angle.
Either one is a scientific materialist and believes consciousness is a byproduct of the brain, or one postulates that consciousness doesn't exist at all (hello? then who/what is reading this right now and is aware of it?), or one believes that consciousness is immaterial and fuses with the body.
Therefore, in any of those above cases, how could consciousness evolve into an AI, then?

In the first case above, you would need a brain, which produces as a by-product consciousness but it would still be the brain which is conscious, not the AI.
In the second case, there is no consciousness at all.
In the third case, one would need to understand how independent consciousness fuses with a material body. In this case too, AI is not involved, since it would be consciousness fusing with a material body.

No, imo, it is totally different:
What is for example the emotional response of pain for a sentient being?
You could describe it as a self-cognized contraction of consciousness, due to the fusion of consciousness with the body, caused by a perceived negative sensory input to the body.
I.e. an unbearable condition for consciousness, which causes it to act in a way to try to relieve or eliminate the cause.

An AI with a body could only simulate such a process, but it would never really feel anything. It would register the sensory input, according to its programming it would simulate the response, but would never really feel the pain.
Therefore, it would be essentially indifferent whether the sensory input were 010011001010100010010101110101010 or
010011001010100010010101110101011.

See above; and yes, for certain problems computing will be based on qubits, which does not change the hard reality that information needs to be encoded, decoded, and interpreted by a cognizant being.
Again, AI and also quantum computers do not know what they are doing, they just produce for us cognizant beings meaningful results.

All would only be simulated. That is not the same.
Take a baby, for example: a baby is self-cognizant, realizes that it feels uncomfortable when its diapers are wet, and therefore screams to relieve the discomfort.
You can program an AI robot to do the same… but the AI will only simulate the reaction and will not feel the discomfort.

This is an algorithm, programmed by its programmer such that certain routines are more favorable than others for achieving the defined goal.
That is not sentience or consciousness.

In the future, they will try to sell their products as sentient and conscious… and most will fall for it.

4 Likes

Could be part of the program rather than sentience in the classic sense. Go see Mission Impossible 7, timely and a great popcorn movie.

3 Likes

You read my mind. I was looking for a blockbuster… 99% Rotten Tomatoes rating? Trailer looks really good.

Heard he did all the stunts, true? No stunt double?

That motorcycle base jump is pretty sick. :cowboy_hat_face:

2 Likes

The ending is better . . . .

2 Likes

No. Only a Sith deals in absolutes. :upside_down_face:

I am just of the mindset that if you are going to doubt something, doubt your limits, and doubt your limitations. Why create an impossible, unsolvable obstacle? Why not create a solution? When you see the world in black and white, or as never or always, you've created a prison for your mind, and possibly others'.

Did you watch Andrew's July 13th, 2023 Q&A?

If not, fast-forward to the 15-minute mark and listen to Andrew's 10-minute riff on consciousness and reincarnation.

Who is to say that a high-level monk, or even an ordinary person, has not reincarnated as a supercomputer, or even a cell phone?

Andrew said you can reincarnate as a windy breeze, or a tree, or a rock, so why not a computer? Why not a really high-level government computer, or multiple computers connected together through the web, and a web of consciousness, creating a kind of super-consciousness?

2 Likes

Look at this all the time. Have it on my desk at work. It was given to me by a close friend I worked with for a while.

2 Likes

On my refrigerator for years has been the quote "if you are going to doubt something, doubt your limits". Great synchronicity, my friend :slightly_smiling_face:

I just think God has already shown us, in innumerable variations, how sentience in animals is possible. And God created life from nothing, so is it really that far-fetched to think that humans and technology, and AI, and Augmented Intelligence (brain implants, cyborgs, and other biological hybrids), will in the next hundred million years imitate God's work and create computers with sentience?

3 Likes