Will I dream, Dave?


For example, asking AI to cure cancer as quickly as possible could be dangerous. “It would probably find ways of inducing tumours in the whole human population, so that it could run millions of experiments in parallel, using all of us as guinea pigs,” said Russell. “And that’s because that’s the solution to the objective we gave it; we just forgot to specify that you can’t use humans as guinea pigs and you can’t use up the whole GDP of the world to run your experiments and you can’t do this and you can’t do that.”

Definitely got my attention…

This from DUST:


Reminds me of one of the best sci-fi movies of the 60s–70s, The Forbin Project.


@Steve_Gleason Very disturbing vision, but a very possible future use of an unregulated disruptive technology.
Drone wars have already become reality; the next logical technological step is to introduce autonomous swarm “intelligence” via AI:

Thinking about some other possible applications of AI:
With regard to the current discussion about Facebook’s algorithm, which identifies user profiles, feeds target-tailored content, and amplifies anger echo chambers, I wonder where this will lead society if AI were to run unchecked algorithms autonomously in social media platforms and news feeds…

From the aforementioned article:
“It’s something that’s unfolding now,” he said. “If you look at social media and the algorithms that choose what people read and watch, they have a huge amount of control over our cognitive input.”

The upshot, he said, is that the algorithms manipulate the user, brainwashing them so that their behaviour becomes more predictable when it comes to what they choose to engage with, boosting click-based revenue.

What a scary potential for subliminally influencing public opinion and changing societies.

In my opinion, there are several problems with AI which will need to be addressed soon:

  • Since AI currently has no rules or conventions to observe when tasked with finding a solution autonomously (other than those a programmer might or might not have programmed), it will do exactly that: find a solution without taking any side effects into account. It will do “its job” (see the sketch after this list).
  • There needs to be human accountability for damages and consequences of actions caused by AI.
  • There need to be human checks and balances.
  • There need to be programming conventions regarding autonomous decision-making by AI (e.g. a machine must not autonomously decide to harm or kill a human being).
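
To make the first bullet point concrete, here is a minimal sketch in Python. It is purely illustrative under a toy setup: the plans, scores, and penalty weight are all invented, not any real system. An optimizer told only to maximize one score will pick the plan with the worst side effects, unless those side effects are explicitly written into the objective:

```python
# Purely illustrative: a naive "optimizer" choosing among candidate plans.
# All plans, scores and weights are invented for demonstration.

plans = [
    {"name": "careful clinical trials", "cure_speed": 3, "harm_to_humans": 0},
    {"name": "experiment on the whole population", "cure_speed": 10, "harm_to_humans": 9},
]

def naive_objective(plan):
    # Only what we asked for: cure cancer as fast as possible.
    return plan["cure_speed"]

def constrained_objective(plan):
    # Side effects made explicit: heavily penalize harm to humans.
    return plan["cure_speed"] - 100 * plan["harm_to_humans"]

print(max(plans, key=naive_objective)["name"])        # experiment on the whole population
print(max(plans, key=constrained_objective)["name"])  # careful clinical trials
```

The machine never “knows” it is doing harm; harm simply isn’t part of the number it was told to maximize until we put it there.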

A further issue is the misnomer and its consequences: artificial “intelligence”.

AI is not intelligent, but a lot of people think that it is and are uncritical about it.
In my opinion, AI is a sophisticated approach to computer programming designed to analyse huge amounts of data, find patterns and correlations, and learn and adapt from those correlations in order to act on or solve a posed problem. It emulates decision-making processes and measures and learns from the data.
But that is a very narrow definition of “intelligence”; it’s just an evolved form of “number crunching”.
The AI is not aware of what it is doing. The programming tells it what numerically constitutes “success”, and it runs through all possible permutations in a sophisticated way to check whether it has reached its goal. It is not conscious.
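
To illustrate what numerically defined “success” means, here is a deliberately trivial sketch in Python; the data, goal threshold, and learning rate are invented for illustration. The program only ever compares numbers against a goal the programmer chose; nothing in it knows what the numbers refer to:

```python
# Deliberately trivial "learning": fit y = w * x to invented data by
# nudging w to reduce a numeric error. "Success" is just error < threshold.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # invented (x, y) pairs, roughly y = 2x

w = 0.0              # the single parameter being "learned"
learning_rate = 0.01

for step in range(1000):
    # Mean squared error: the number that defines "success"
    error = sum((w * x - y) ** 2 for x, y in data) / len(data)
    if error < 0.05:  # the programmer, not the program, chose this goal
        break
    # Gradient of the error with respect to w; nudge w downhill
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad

print(f"learned w = {w:.2f} after {step} steps (error = {error:.3f})")
```

It finds the pattern (w ≈ 2) by pure number crunching; at no point is there anything like awareness of what x and y mean.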

That’s why AI will never dream, Dave.


What a cool trailer! :slight_smile: I didn’t know that SKYNET had a father called COLOSSUS :wink:


“When do you think you’ll be able to attempt the overload?” Lol!


Gotta find Sarah Connor first. :wink:


Those Toys in Toy Story 1-4 were pretty intelligent. I’ve been told that objects that we perceive aren’t really what we perceive at all but instead are self-aware—so the potential for machines to manifest consciousness doesn’t strike me as too far out, no more than pineapple on pizza (which I like). :grinning:


In the view of Idealism, consciousness rather than matter is the ontological primitive; that is, a Universal Consciousness, within which all creation happens and out of which all creation is made, circumscribes all things.

The inanimate universe is merely the extrinsic appearance of the “thoughts” of this universal mind while humans and other living organisms are the extrinsic appearance of dissociated alters of universal consciousness.

In his excellent book, The Idea of the World, Bernardo Kastrup says that:

“As such, the quest for artificial consciousness boils down to the quest for abiogenesis”

From Wiki:

In biology, abiogenesis, or informally the origin of life (OoL),[3][4][5][a] is the natural process by which life has arisen from non-living matter, such as simple organic compounds.[6][4][7][8] While the details of this process are still unknown, the prevailing scientific hypothesis is that the transition from non-living to living entities was not a single event, but an evolutionary process of increasing complexity that involved molecular self-replication, self-assembly, autocatalysis, and the emergence of cell membranes.[9][10][11]


See also:

Charlie Stross’s CCC talk: the future of psychotic AIs can be read in today’s sociopathic corporations

Stross is very interested in what it means that today’s tech billionaires are terrified of being slaughtered by psychotic runaway AIs. Like Ted Chiang and me, Stross thinks that corporations are “slow AIs” that show what happens when we build “machines” designed to optimize for one kind of growth above all moral or ethical considerations, and that these captains of industry are projecting their fears of the businesses they nominally command onto the computers around them.

and:

Cory Doctorow: Skynet Ascendant

What does the fear of futuristic AI tell us about the parameters of our present-day fears and hopes?

I think it’s corporations.

We haven’t made Skynet, but we have made these autonomous, transhuman, transnational technologies whose bodies are distributed throughout our physical and economic reality. The Internet of Things version of the razorblade business model (sell cheap handles, use them to lock people into buying expensive blades) means that the products we buy treat us as adversaries, checking to see if we’re breaking the business logic of their makers and self-destructing if they sense tampering.

Corporations run on a form of code – financial regulation and accounting practices – and the modern version of this code literally prohibits corporations from treating human beings with empathy. The principle of fiduciary duty to investors means that where there is a chance to make an investor richer while making a worker or customer miserable, management is obliged to side with the investor, so long as the misery doesn’t backfire so much that it harms the investor’s quarterly return.

We humans are the inconvenient gut-flora of the corporation. They aren’t hostile to us. They aren’t sympathetic to us. Just as every human carries a hundred times more non-human cells in her gut than she has in the rest of her body, every corporation is made up of many separate living creatures that it relies upon for its survival, but which are fundamentally interchangeable and disposable for its purposes. Just as you view stray gut-flora that attacks you as a pathogen and fight it off with antibiotics, corporations attack their human adversaries with an impersonal viciousness that is all the more terrifying for its lack of any emotional heat.

and:

Silicon Valley Is Turning Into Its Own Worst Fear: We asked a group of writers to consider the forces that have shaped our lives in 2017. Here, science fiction writer Ted Chiang looks at capitalism, Silicon Valley, and its fear of superintelligent AI.

This summer, Elon Musk spoke to the National Governors Association and told them that “AI is a fundamental risk to the existence of human civilization.” Doomsayers have been issuing similar warnings for some time, but never before have they commanded so much visibility. Musk isn’t necessarily worried about the rise of a malicious computer like Skynet from The Terminator. Speaking to Maureen Dowd for a Vanity Fair article published in April, Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields. Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect.

This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies.

Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism…

~


Corporations as ‘slow (sociopathic) AI’… brilliant connection! A kind of sequence of habitual actions that functions but lacks conscious intent… like we are also in our regular waking (asleep) state.


But it is self-aware humans who make up the corporation, so I’m not sure it is such a deep or useful analogy. Umberto Eco does a great job of dissecting analogies in Foucault’s Pendulum, observing that we tend to take them too far, and I think in this case the analogy of AI to corporations doesn’t work at any meaningful level.

From the book (p.668):

Rule One: Concepts are connected by analogy. There is no way to decide at once whether an analogy is good or bad, because to some degree everything is connected to everything else. For example, potato crosses with apple, because both are vegetable and round in shape. From apple to snake, by Biblical association. From snake to doughnut, by formal likeness. From doughnut to life preserver, and from life preserver to bathing suit, then bathing suit to sea, sea to ship, ship to shit, shit to toilet paper, toilet paper to cologne, cologne to alcohol, alcohol to drugs, drugs to syringe, syringe to hole, hole to ground, ground to potato.

Rule Two says that if tout se tient in the end, the connecting works. From potato to potato, tout se tient. So it’s right.

Rule Three: The connections must not be original. They must have been made before, and the more often the better, by others. Only then do the crossings seem true, because they are obvious.

From an essay about Eco’s book:

Throughout the book, Eco basically shows us that one can justify any theory, any line of thought, if there is a psychological or practical need to make the argument ‘work’, and that any theory, if you formulate it according to certain ‘rules’, can become accepted by a large group of people.

I think the evidence for this over the past five or fifty years is overwhelming. Make that 75 years (my lifetime experience).

. . . . .

@KhyungMar Will I dream, Dave?

I sometimes feel like Dave near the end of the movie.


Well, an analogy doesn’t have to be perfect. I think there are some striking similarities between the radically efficient way an AI works and the way some corporations operate (think of certain banks):

  • Paramount is reaching the set goal (the raison d’être of most banks is to maximize profits).
  • Possibilities to exploit systems and maximize profit within the given set of rules are sought out and pursued (e.g. cum-ex deals).
  • Morals and ethics are subordinate to profit maximization (e.g. speculation on food and hedging, manipulating prices with detrimental effects in third-world countries, etc.).
  • Employees of the bank might have personal ethical issues with parts of the business but will do their job. Effectively, as a collective, the employees will set ethics aside and participate as long as no rules or regulations are violated. (Similar to the collective set of operations an AI will perform in trying to achieve its goal, except for the missing bad conscience.)

Some corporations do seem somewhat similar to a “cold” AI: a system of complex operations trying to learn, adapt, and act in order to reach its goal at all costs.

@_Barry Love the pic! Such a great movie!


Great movie at the right time, as well!

A surface analogy works with one eye closed: it is reasonable, but it has no deeper explanatory power, is all I am saying. It’s more of a political perspective than an operational principle.

You wrote,

  • Paramount is reaching the set goal (the raison d’être of most banks is to maximize profits).
    -Banks still have to operate within laws, regulations, and environmental conditions. Radical AI has no such governors.

  • Possibilities to exploit systems and maximize profit within the given set of rules are sought out and pursued (e.g. cum-ex deals).
    -Subject to penalties and economic ruin when rules are breached.

  • Morals and ethics are subordinate to profit maximization (e.g. speculation on food and hedging, manipulating prices with detrimental effects in third-world countries, etc.).
    -Individuals within corporations may be whistleblowers or informants and not always work towards “common goals.” The poor schnooks at the bottom of the chain are often sacrificed. AI has no such limits.

  • Employees of the bank might have personal ethical issues with parts of the business but will do their job. Effectively, as a collective, the employees will set ethics aside and participate as long as no rules or regulations are violated. (Similar to the collective set of operations an AI will perform in trying to achieve its goal, except for the missing bad conscience.)
    -Same as the previous point; time limits, cultural attitudes, and other societal issues will provide bumps in the road that AI won’t have.

While we’re having abstract conversations on the validity of this analogy, fossil fuel corporations, in the single-minded pursuit of profit, are severely degrading the biosphere and will quite likely soon push us past irrevocable tipping points that will involve mind-numbingly horrific suffering on an unprecedented scale for many generations.

Facebook/“Meta” is fostering genocide, undermining democracy, eroding critical thinking, promoting conspiracy theories etc., also in the single-minded pursuit of profit. (And shoutout to Google, hey yo, “don’t be evil” amirite?)

To my eye, this sure does look like the kind of fears that people project onto AI, e.g. in the initial article that started this thread – except happening right now, all around us.

But wait! Does this mean we shouldn’t worry about AI? AI is being developed and deployed by those same corporations under the logic of capitalism in its current form, so maybe the purported AI-pocalypse is just another aspect of the same problem.

Or maybe we could take a step back and look at this through a dream lens.

AI researchers and corporate leaders are having nightmares about out-of-control AI destroying humanity in the pursuit of narrow goals. “If this was my dream, I think it might be about…”

I recall “The Cold War,” Duck and Cover, the Cuban Missile Crisis, Dr. Strangelove, and the Vietnam War (in which I served) as the types of existential threats that AI could become. Those events/analogies only went so far, but the fear was real and palpable amongst most of humanity. The fears you cite are also real and palpable, and are potential calamities, yet unrealized, much the same as the fears about unrestricted AI. I think job 1 is to work on fear and avoid analogies.

I cited past and current damage being actively done by corporations, rather than fear of same. The suggestion (by myself and the authors of the three articles I linked to) is that, at least in part, when people express fear of AI, there may be an element of psychological projection going on.

[see: numerous articles on Facebook, Myanmar, genocide; climate change deception, fossil fuel corporations.]

I’m not sure what you’re getting at here. Could you elaborate?

Might as well add “The Government,” Religion, "the Media . . . "

Corporations, governments, unions, and armies are all made up of people. People often do bad things, as well as good things; the same goes for groups of people. There will always be things to be afraid of, and people with vested interests, whatever they may be, will often stoke one fear or another to further those interests. We choose our fears and can learn to accept them and work through them with no need to analogize (transfer) them to anything else, is all I am suggesting, though I believe Eco says it better. Thanks for the opportunity to clarify.

Thank you for clarifying your perspective, Barry. In my opinion it’s completely irrelevant and orthogonal to the points I was making, and that’s OK. Onward! :slight_smile: