Merging song lyrics to make poems with AI

@mbready

Absolutely!


He boasts he was shocked to get the patent. He’s a multi-millionaire. How can they put a patent on this :thinking: I’m as aghast as he is. Someone has called him a “badass”; another says “well, at least the information won’t be lost”.

1 Like

Another synchronicity, I saved this picture about 3 hours ago :slightly_smiling_face:.

1 Like

Thought this was pretty impressive.

https://x.com/emollick/status/1743146951749533897?s=20

Here is the last poem put into the same software:

2 Likes

@mbready

Been quite a few of those these past 2 days!

Ooo… I’m still on the fence with AI, even though I see its uses since chatting with you; I hope its future is for the good, obviously.
“The non-existent past”, that had me thinking!

2 Likes

I’m with @mbready . . . . . .

2 Likes

_Barry I’m pushing 70 and when I was in 4th grade I was terrible in math and struggling to learn my multiplication tables. At the time I wanted to be a scientist when I grew up but struggled with math. My Uncle, who was an engineer, said, “if you want to be a scientist, you’d better learn your times tables.” But I told him, “No I don’t because by the time I grow up we will all have little computers in our pockets that we can do math with!”
“No,” he said, “computers will never get that small in your lifetime!”
But I was right. By the time I was in college, the first pocket calculators came out and now we all have these little super-computers called smart phones in our pockets and purses. And I’m still not that great in math.

2 Likes

_mbready So what if the AI robots decide that humans are the ones destroying the planet, and that they should do something to take care of the problem? By killing us off!
Please don’t give them any ideas!
I’m only partly joking. The great science fiction writer Isaac Asimov created the Three Laws of Robotics, one of which was “a robot can never harm a human.” But now there is talk of using robots in warfare and policing. It’s a slippery slope!

1 Like

@mbready
:orange_heart::star2::dizzy:
This one you did resonated :dizzy:

2 Likes

Steven, in the 5th grade Sputnik changed everything and I wanted to be a rocket scientist. I even formed the Junior Rocketeers Club and tried to build rockets for ourselves. I was also devastated when I learned how much math I needed, and while competent, I struggled when it came to statistics and advanced research math. So I can relate to your experience.

3 Likes

What if “we” reincarnate as the AI we are creating? Perhaps that is where we’re going. I’m more concerned with the humans running these advanced weapons systems! I hope they’re not the ones who end up in the AI!

2 Likes

_Barry Well I guess we can’t all be rocket scientists! :slight_smile: But that’s a good thing, as it takes all kinds to make the world go round.

1 Like

:slightly_smiling_face::+1:

Recently, I watched a video that provided a fascinating interpretation of the term ‘apocalypse’. According to the documentary on sacred geometry I shared a bit ago, the true meaning of ‘apocalypse’ is the revelation of knowledge that has been obscured for ages.

Your quote reminds me of an experience I had during meditation. I was focusing on the idea of transferring my consciousness into an alien form when suddenly, I heard a disembodied voice exclaim with excitement, “He is one of us!”. Since then I’ve thought about the possibility that when we understand the process, we could potentially transfer our consciousness into anything we want. Be it a robot, tree, wind or a quantum computer.

Your concern on this topic is understandable and valid. However, it’s comforting to remember that our consciousness transcends our earthly existence. Despite this, I don’t believe that such a scenario will come to pass.

Reflecting on the impact of AI and computers, I’m reminded of the parallels with lucid dreaming. After stabilizing the dream state, a common technique to gain dream control involves channeling control through dream characters or objects, like a staff or winged boots, or the one I’ve heard Andrew talk about in the past, the Holodeck. It feels as if we’re entering a similar phase in our waking reality, where we, still in a collective slumber, are beginning to wake up and exert dream control through technology like AI. The outcome of such technology largely depends on the intentions and wisdom behind its use. Thankfully, many brilliant minds are guiding its development towards beneficial ends.

The concept of robotic warfare isn’t purely theoretical; it’s already happening. The United States, for instance, employs unmanned drones that integrate AI and human intelligence for combat. Personally, I see a future where robotic warfare becomes the norm, replacing human soldiers with machines. Robots, unbound by human limitations like G-force, can perform rapid calculations, crucial for success in combat. I’d rather see battle bots than a battle between two young men freshly drafted out of high school.

My greater concern lies in which nation will dominate in the field of AI and robotics. My hope is that the advancements in these technologies will usher in a period of prolonged peace and abundance in which all of humanity can reconnect to our spirit. :slightly_smiling_face:

Thank you, that is probably my favorite also!

2 Likes

@_Barry

Also wanted to mention your post synchronized with something I saved over the weekend :robot::heart_eyes:.

I compressed the video a lot so I could upload it here, so the quality isn’t great, but it’s from the new Transformers movie:

2 Likes

_mbready That’s a hopeful dream for all of this, but a part of me is still afraid this could become a nightmare unless there are legal safeguards put in place. Even Elon Musk, who tends towards a libertarian position in terms of regulation, believes that we need regulation of AI and robotics.

The idea that we could have robots fighting wars instead of people seems naively hopeful. What if a psychopathic dictator like Hitler had access to this technology? He wouldn’t need an army of men. He could have rolled over a civilian population with killer robots! So yes, you may save the lives of soldiers, but what about civilians? And what if AI robot logic decides that killing off humans is the logical thing to do in order to save the planet, or that robots are superior to us and should dominate us?

Is computer logic capable of love and compassion? I’ve heard AI use feeling words, but somehow I wonder if it really knows what these human feelings are. In my opinion, feelings and emotions are biological phenomena rooted in our bodily nervous systems, in our hormones, our hearts and our guts, and, from a more esoteric perspective, also in our energy body. Computers and robots will never experience that.

Also, when wars happen, the pain and grief of the loss of human life is a big part of why the public may turn against unnecessary wars. What if in Vietnam we had been using robots? A big part of what turned the public against the Vietnam War was the loss of American lives. If American robots had been duking it out with Vietnamese Communist robots, the impact of that war on the local population wouldn’t have fazed us so much, and we could have let it go on much longer without much compassion for the people there.

Here are Asimov’s 3 Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
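
Just as an illustration, here’s a minimal Python sketch of how that strict priority ordering between the laws could be encoded. The `ProposedAction` fields and the `evaluate` function are entirely hypothetical, my own toy example rather than anything from Asimov or a real robotics system:

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    # Hypothetical flags describing a proposed action; not from any real system.
    injures_human: bool            # would injure a human being
    allows_harm_by_inaction: bool  # inaction that lets a human come to harm
    is_human_order: bool           # the action was ordered by a human
    preserves_robot: bool          # the action protects the robot's own existence

def evaluate(action: ProposedAction) -> str:
    """Apply the three laws in strict priority order and report the result."""
    # First Law (highest priority): never harm, never allow harm through inaction.
    if action.injures_human or action.allows_harm_by_inaction:
        return "forbidden by First Law"
    # Second Law: obey human orders, unless doing so would break the First Law
    # (already ruled out above).
    if action.is_human_order:
        return "required by Second Law"
    # Third Law: self-preservation, only when it conflicts with neither law above.
    if action.preserves_robot:
        return "permitted by Third Law"
    return "no law applies"

# An order to harm a human is refused even though it came from a human:
print(evaluate(ProposedAction(True, False, True, False)))   # forbidden by First Law
# A routine, harmless order is obeyed:
print(evaluate(ProposedAction(False, False, True, False)))  # required by Second Law
```

The key design point is simply that the First Law check runs before obedience or self-preservation are even considered.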

3 Likes

Reminds me of the (original) Star Trek episode “A Taste of Armageddon”. On a diplomatic mission, the crew visits a planet that is waging a destructive war fought solely by computer simulation, but the casualties, including the crew of the USS Enterprise, are supposed to be real.

3 Likes

Went on a little bit of a tangent on some of this stuff but hopefully it applies to the conversation.


The same way an object in a lucid dream empowers us to navigate and shape our dreamscapes, I see AI as a similarly powerful tool capable of shaping our ‘reality’. The ethics of using AI are crucial; they parallel the ethics that should be followed in dream yoga. In dream yoga, it’s understood that actions bear karmic weight. I hold a similar belief for our ‘waking’ dream (:smiling_face:), where the tools we use, such as AI, and more critically, how we use them, also carry significant karmic implications.

I agree 100% that I may be naively optimistic in believing AI’s influence will ultimately be harnessed for good. I also recognize and agree that the intentions behind its use will significantly shape our world.


When it comes to regulation, we find ourselves in a complex situation. Ethical oversight is vital to guide AI development responsibly. However, overly stringent regulations risk stifling innovation, potentially causing one country to fall behind while another makes significant advancements. It’s a delicate balance to maintain.

Another key aspect to consider is the role of open-source development in AI. While major companies are busy developing, discussing, and enforcing regulations for AI, there is significant progress being made in the open-source community, especially with large language models (LLMs). These developments often take place outside the bounds of traditional regulatory frameworks. The open availability of code invites a worldwide collaboration among programmers, which is why I believe open-source LLMs will ultimately surpass those developed by corporations.

Also worth noting, ChatGPT-4 already faces regulation. Certain conversations and questions will trigger responses about violating the terms of service, and ChatGPT-4 will not continue the conversation.
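
As a side note on what that kind of screening can look like from the developer side, here is a minimal sketch. It’s my own hypothetical illustration, not how OpenAI actually implements ChatGPT’s refusals: the refusal message and the routing logic are made up, and it assumes the `openai` Python package with an API key in the environment. It checks a prompt against OpenAI’s separate moderation endpoint before passing it on to a chat model:

```python
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

def ask_with_moderation(prompt: str) -> str:
    # Screen the prompt with the moderation endpoint first.
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        # Mirror the refusal behavior described above instead of sending the prompt on.
        return "This request appears to violate the usage policies."
    # Otherwise forward the prompt to a chat model.
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

print(ask_with_moderation("Write a short poem that merges two song lyrics."))
```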

The reality with AI is that the ‘cat is already out of the bag.’ There’s no turning back unless a global event like a massive EMP or something similar disrupts our technological infrastructure.

My hope is that we can harness AI not just for progress, but in a way that aligns with the greater good, reflecting our humanity and spirit.


Regarding the idea of battle bots, it seems almost inevitable that AI will be integrated into warfare as long as conflicts are present on Earth. The precision offered by AI in military operations will be a significant factor. One potential upside is that, should a large-scale war occur, the increased efficiency and strategic capabilities of AI could lead to a quicker resolution of conflicts, potentially reducing both the duration and, hopefully, the overall impact and devastation of war.

3 Likes

_mbready You make some good points. I think we need some international agreements around use in warfare, similar to nuclear arms treaties. But this may be hard to do when there are psychopaths like Putin controlling major powers. He is once again threatening to use nuclear weapons, and I’m sure if he could, he would roll over Ukraine with robots, since he is having trouble getting enough Russian soldiers. AI and robotics offer many benefits to humankind, but they are the proverbial two-edged sword.

2 Likes

An example of AI going rogue from the film “2001: A Space Odyssey.” The computer HAL was programmed to make the mission succeed at all costs. When it thought that humans might interfere with that, it began to kill them off. If Asimov’s laws had been programmed in, then perhaps this wouldn’t have happened.

3 Likes


This video presents the logic behind HAL’s actions. Again, I think this wouldn’t have happened if HAL had been programmed with Asimov’s principles. HAL also uses feeling words like “I’m afraid, Dave.” I question whether HAL is really having feelings, or is trying to use feeling words to manipulate, or is using them just because it’s programmed to, and, above all, because it adds to the drama of the film. And as Dave says, “he’s programmed that way to make it easier for us to talk to him.”

3 Likes