For example, asking AI to cure cancer as quickly as possible could be dangerous. "It would probably find ways of inducing tumours in the whole human population, so that it could run millions of experiments in parallel, using all of us as guinea pigs," said Russell. "And that's because that's the solution to the objective we gave it; we just forgot to specify that you can't use humans as guinea pigs and you can't use up the whole GDP of the world to run your experiments and you can't do this and you can't do that."
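Russell's point is essentially about objective misspecification: an optimizer maximizes exactly what we wrote down, not what we meant. A minimal toy sketch in Python (the plans, scores and numbers are invented purely for illustration, not any real system):

```python
# Toy illustration of objective misspecification: the optimizer picks the
# highest-scoring plan because the stated objective never penalizes the
# side effects we implicitly care about.

candidate_plans = [
    {"name": "careful lab trials",     "cures_per_year": 10,  "humans_harmed": 0},
    {"name": "mass human experiments", "cures_per_year": 500, "humans_harmed": 1_000_000},
]

def stated_objective(plan):
    # What we asked for: cure cancer as quickly as possible.
    return plan["cures_per_year"]

def intended_objective(plan):
    # What we meant: quick cures, but never at the cost of harming people.
    return float("-inf") if plan["humans_harmed"] > 0 else plan["cures_per_year"]

print(max(candidate_plans, key=stated_objective)["name"])    # mass human experiments
print(max(candidate_plans, key=intended_objective)["name"])  # careful lab trials
```

The gap between the two functions is exactly the "we forgot to specify" list Russell describes, and it is open-ended: every constraint left implicit is a degree of freedom for the optimizer.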
Definitely got my attention…
This from DUST:
@Steve_Gleason Very disturbing vision but a very possible future use of an unregulated disruptive technology.
Drone wars have become reality already; it's technologically the next logical step to introduce autonomous swarm "intelligence" via AI:
Thinking about some other possible applications of AI:
With regard to the current discussion about Facebook's algorithm, which identifies user profiles, feeds target-tailored content and amplifies anger echo chambers, I wonder where this will lead society if AI were to run unchecked algorithms autonomously in social media platforms and news feeds…
From the earlier article:
"It's something that's unfolding now," he said. "If you look at social media and the algorithms that choose what people read and watch, they have a huge amount of control over our cognitive input."
The upshot, he said, is that the algorithms manipulate the user, brainwashing them so that their behaviour becomes more predictable when it comes to what they choose to engage with, boosting click-based revenue.
What a scary potential for subliminally influencing public opinion and changing societies.
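In crude code, the feedback loop the article describes might look something like this (a toy sketch with invented numbers, not any platform's actual algorithm):

```python
# Toy engagement loop: always serve the topic the user is most likely to
# click, then treat each click as training data. The user's "taste" narrows
# because the system only ever learns from its own recommendations.
import random

user_affinity = {"outrage": 0.6, "news": 0.5, "science": 0.5}

for step in range(1000):
    topic = max(user_affinity, key=user_affinity.get)   # serve the "best" item
    if random.random() < user_affinity[topic]:          # did the user click?
        user_affinity[topic] = min(1.0, user_affinity[topic] + 0.01)

print(user_affinity)  # the dominant topic drifts toward 1.0; the rest never move
```

The loop never sets out to radicalize anyone; predictability simply falls out of maximizing clicks, which is the brainwashing effect the quote describes.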
In my opinion, there are several problems with AI which will need to be addressed soon:
- Since AI currently has no rules or conventions to observe when tasked to find a solution autonomously (other than those that a programmer might or might not program in), it will do exactly that: find a solution without taking any side effects into account. It will do "its job".
- There needs to be human accountability for damages and consequences of actions caused by AI.
- There need to be human checks and balances.
- There need to be programming conventions regarding autonomous decision making by AI (e.g. a machine must not autonomously decide to harm or kill a human being); a rough sketch of such a hard constraint follows below.
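To make that last point concrete, here is a rough sketch of such a hard-constraint convention: a veto layer that no objective score can override. All names here are hypothetical and purely illustrative:

```python
# Sketch of a "veto layer": candidate actions are filtered against a
# non-negotiable blacklist *before* any score comparison, so a forbidden
# action can never win just by scoring well.

FORBIDDEN_EFFECTS = {"harm_human", "kill_human"}

def choose_action(candidates):
    """Pick the highest-scoring action that survives the veto layer."""
    permitted = [a for a in candidates if a["effect"] not in FORBIDDEN_EFFECTS]
    if not permitted:
        # The machine must not improvise here: escalate to a person.
        raise RuntimeError("No permissible action; defer to a human operator.")
    return max(permitted, key=lambda a: a["score"])

actions = [
    {"effect": "harm_human",   "score": 9.9},  # highest score, but vetoed
    {"effect": "shut_down",    "score": 0.1},
    {"effect": "ask_operator", "score": 0.5},
]
print(choose_action(actions))  # {'effect': 'ask_operator', 'score': 0.5}
```

The hard part, of course, is that real actions rarely arrive pre-labelled with their effects; that labelling problem is where the human checks and balances from the list above come back in.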
A further issue is the misnomer and its consequences: Artificial "intelligence".
AI is not intelligent - but a lot of people think that it is and are uncritical about it.
In my opinion, AI is a sophisticated principle of computer programming designed to analyse huge amounts of data with the aim of finding patterns and correlations, and to learn and adapt from those correlations in order to act on or solve a posed problem. It emulates decision-making processes, and it measures and learns from the data.
But that is a very narrow definition of "intelligence"; it's just an evolved way of "number crunching".
The AI is not self-aware of what it is doing. It is told by its programming what numerically constitutes "success" and, in a sophisticated way, runs through the possible permutations to check whether it has reached its goal. It is not conscious.
That's why AI will never dream, Dave.
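To illustrate the "numerical success" point above: stripped of the sophistication, the machine's inner loop is just comparing scores against a target it does not understand. A deliberately crude sketch (toy target and scoring, invented for illustration):

```python
# Crude sketch of "numerical success plus permutation checking": the program
# never knows what the symbols mean; it only sees that a number went up.
from itertools import permutations

TARGET = "act"  # what the programmer defined as "success"

def score(candidate):
    # count matching positions; to the machine, success is just this number
    return sum(c == t for c, t in zip(candidate, TARGET))

best = max(("".join(p) for p in permutations("cat")), key=score)
print(best, score(best) == len(TARGET))  # act True: goal reached, nothing understood
```

Real systems replace brute-force permutation with gradient descent or tree search, but the "am I at the goal yet?" comparison is the same kind of unconscious arithmetic.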
What a cool trailer! I didn't know that SKYNET had a father called COLOSSUS.
"When do you think you'll be able to attempt the overload?" Lol!
Gotta find Sarah Connor first.
Those Toys in Toy Story 1-4 were pretty intelligent. I've been told that objects that we perceive aren't really what we perceive at all but instead are self-aware, so the potential for machines to manifest consciousness doesn't strike me as too far out, no more than pineapple on pizza (which I like).
In the view of Idealism, consciousness rather than matter is the ontological primitive; that is, a Universal Consciousness, within which all creation happens and out of which all creation is made, circumscribes all things.
The inanimate universe is merely the extrinsic appearance of the "thoughts" of this universal mind, while humans and other living organisms are the extrinsic appearance of dissociated alters of universal consciousness.
In his excellent book, The Idea of the World, Bernardo Kastrup says:
"As such, the quest for artificial consciousness boils down to the quest for abiogenesis."
From Wiki:
In biology, abiogenesis, or informally the origin of life (OoL), is the natural process by which life has arisen from non-living matter, such as simple organic compounds. While the details of this process are still unknown, the prevailing scientific hypothesis is that the transition from non-living to living entities was not a single event, but an evolutionary process of increasing complexity that involved molecular self-replication, self-assembly, autocatalysis, and the emergence of cell membranes.
See also:
Charlie Stross's CCC talk: the future of psychotic AIs can be read in today's sociopathic corporations
Stross is very interested in what it means that today's tech billionaires are terrified of being slaughtered by psychotic runaway AIs. Like Ted Chiang and me, Stross thinks that corporations are "slow AIs" that show what happens when we build "machines" designed to optimize for one kind of growth above all moral or ethical considerations, and that these captains of industry are projecting their fears of the businesses they nominally command onto the computers around them.
and:
Cory Doctorow: Skynet Ascendant
What does the fear of futuristic AI tell us about the parameters of our present-day fears and hopes?
I think it's corporations.
We haven't made Skynet, but we have made these autonomous, transhuman, transnational technologies whose bodies are distributed throughout our physical and economic reality. The Internet of Things version of the razorblade business model (sell cheap handles, use them to lock people into buying expensive blades) means that the products we buy treat us as adversaries, checking to see if we're breaking the business logic of their makers and self-destructing if they sense tampering.
Corporations run on a form of code (financial regulation and accounting practices) and the modern version of this code literally prohibits corporations from treating human beings with empathy. The principle of fiduciary duty to investors means that where there is a chance to make an investor richer while making a worker or customer miserable, management is obliged to side with the investor, so long as the misery doesn't backfire so much that it harms the investor's quarterly return.
We humans are the inconvenient gut-flora of the corporation. They aren't hostile to us. They aren't sympathetic to us. Just as every human carries a hundred times more non-human cells in her gut than she has in the rest of her body, every corporation is made up of many separate living creatures that it relies upon for its survival, but which are fundamentally interchangeable and disposable for its purposes. Just as you view stray gut-flora that attacks you as a pathogen and fight it off with antibiotics, corporations attack their human adversaries with an impersonal viciousness that is all the more terrifying for its lack of any emotional heat.
and:
Silicon Valley Is Turning Into Its Own Worst Fear: We asked a group of writers to consider the forces that have shaped our lives in 2017. Here, science fiction writer Ted Chiang looks at capitalism, Silicon Valley, and its fear of superintelligent AI.
This summer, Elon Musk spoke to the National Governors Association and told them that "AI is a fundamental risk to the existence of human civilization." Doomsayers have been issuing similar warnings for some time, but never before have they commanded so much visibility. Musk isn't necessarily worried about the rise of a malicious computer like Skynet from The Terminator. Speaking to Maureen Dowd for a Vanity Fair article published in April, Musk gave an example of an artificial intelligence that's given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields. Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect.
This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it's because they're already accustomed to entities that operate this way: Silicon Valley tech companies.
Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do: grows at an exponential rate and destroys its competitors until it's achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world's problems, or a mathematician that spends all its time proving theorems so abstract that humans can't even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism…
~
Corporations as 'slow (sociopathic) AI'… brilliant connection! A kind of sequence of habitual actions that functions but lacks conscious intent… like we are also in our regular waking (asleep) state.
But these are self-aware humans who make up the corporation, so I'm not sure it is such a deep or useful analogy. Umberto Eco does a great job of dissecting analogies (Foucault's Pendulum) with the observation that we tend to take them too far, and I think in this case the analogy of AI to corporations doesn't work at any meaningful level.
From the book (p.668):
Rule One: Concepts are connected by analogy. There is no way to decide at once whether an analogy is good or bad, because to some degree everything is connected to everything else. For example, potato crosses with apple, because both are vegetable and round in shape. From apple to snake, by Biblical association. From snake to doughnut, by formal likeness. From doughnut to life preserver, and from life preserver to bathing suit, then bathing to sea, sea to ship, ship to shit, shit to toilet paper, toilet to cologne, cologne to alcohol, alcohol to drugs, drugs to syringe, syringe to hole, hole to ground, ground to potato.
Rule Two says that if tout se tient in the end, the connecting works. From potato to potato, tout se tient. So it's right.
Rule Three: The connections must not be original. They must have been made before, and the more often the better, by others. Only then do the crossings seem true, because they are obvious.
From an essay about Eco's book:
Throughout the book, Eco basically shows us that one can justify any theory, any line of thought, if there is a psychological or practical need to make the argument "work", and that any theory, if you formulate it according to certain "rules", can become accepted by a large group of people.
I think the evidence for this over the past five or fifty years is overwhelming. Make that 75 years (my lifetime experience).
. . . . .
@KhyungMar Will I dream, Dave?
I sometimes feel like Dave near the end of the movie
Well, an analogy doesn't have to be perfect. I think there are some striking similarities between the radically efficient way an AI works and the way some corporations operate (think of certain banks):
- Paramount is to reach the goal set (the raison d'être of most banks is to maximize profits)
- Possibilities to exploit systems and maximize profit within the given set of rules are sought out and pursued (e.g. cum-ex deals).
- Morals and ethics are subordinate to profit maximization (e.g. speculation on food and hedging, manipulating prices with detrimental effects in third-world countries, etc.)
- Employees of the bank might have personal ethical issues with parts of the business but will do their job. Effectively, as a collective, the employees will set ethics aside and participate as long as no rules or regulations have been violated. (Similar to the collective set of operations an AI will perform in trying to achieve its goal, except for the missing bad conscience.)
Some corporations do seem somewhat similar to a "cold" AI: a system of complex operations trying to learn, adapt and act in order to reach its goal at all costs.
@_Barry Love the pic! Such a great movie!
Great movie at the right time, as well!
A surface analogy works with one eye closed: it is reasonable, but it has no deeper explanatory power, is all I am saying. It's more of a political perspective than an operational principle.
You wrote:

"Paramount is to reach the goal set (the raison d'être of most banks is to maximize profits)"

- Banks still have to operate within laws, regulations and environmental conditions. Radical AI has no such governors.

"Possibilities to exploit systems and maximize profit within the given set of rules are sought out and pursued (e.g. cum-ex deals)."

- Subject to penalties and economic ruin when rules are breached.

"Morals and ethics are subordinate to profit maximization (e.g. speculation on food and hedging, manipulating prices with detrimental effects in third-world countries, etc.)"

- Individuals within corporations may be whistleblowers or informants and may not always work towards "common goals." The poor schnooks at the bottom of the chain are often sacrificed. AI has no such limits.

"Employees of the bank might have personal ethical issues with parts of the business but will do their job. Effectively, as a collective, the employees will set ethics aside and participate as long as no rules or regulations have been violated."

- Same as the previous point: time limits, cultural attitudes and other societal issues will provide bumps in the road that AI won't have.
While we're having abstract conversations on the validity of this analogy, fossil fuel corporations, in the single-minded pursuit of profit, are severely degrading the biosphere and are quite likely to soon push us past irrevocable tipping points that will bring mind-numbingly horrific suffering on an unprecedented scale for many generations.
Facebook/"Meta" is fostering genocide, undermining democracy, eroding critical thinking, promoting conspiracy theories etc., also in the single-minded pursuit of profit. (And shoutout to Google, hey yo, "don't be evil", amirite?)
To my eye, this sure does look like the kind of fears that people project onto AI, e.g. in the initial article that started this thread, except happening right now, all around us.
But wait! Does this mean we shouldn't worry about AI? AI is being developed and deployed by those same corporations under the logic of capitalism in its current form, so maybe the purported AI-pocalypse is another aspect of the same problem.
Or maybe we could take a step back and look at this through a dream lens.
AI researchers and corporate leaders are having nightmares about out-of-control AI destroying humanity in the pursuit of narrow goals. "If this were my dream, I think it might be about…"
I recall "The Cold War," Duck and Cover, the Cuban Missile Crisis, Dr. Strangelove and the Vietnam War (in which I served) as the types of existential threat that AI could become. Those events/analogies only went so far, but the fear was real and palpable amongst most of humanity. The fears you cite are also real and palpable, and are potential calamities as yet unrealized, much the same as the fears about unrestricted AI. I think job 1 is to work on Fear and avoid analogies.
I cited past and current damage being actively done by corporations, rather than fear of same. The suggestion (by myself and the authors of the three articles I linked to) is that, at least in part, when people express fear of AI, there may be an element of psychological projection going on.
[see: numerous articles on Facebook, Myanmar, genocide; climate change deception, fossil fuel corporations.]
I'm not sure what you're getting at here. Could you elaborate?
Might as well add "The Government," Religion, "the Media"…
Corporations, governments, unions, armies are all made up of people. People often do bad things, as well as good things. Same for groups of people. There will always be things to be afraid of, and those people with vested interests, whatever they may be, will often stoke one fear or another to further their interests. We choose our fears and can learn to accept them and work through them with no need to analogize (transfer) them to anything else, is all I am suggesting, though I believe Eco says it better. Thanks for the opportunity to clarify.
Thank you for clarifying your perspective, Barry. In my opinion it's completely irrelevant and orthogonal to the points I was making, and that's OK. Onward!