why I am not afraid of AI (yet)



I bought those two books weeks ago from you, Amazon!
Why do you still show me this ad - everywhere?
Your AI and surveillance capabilities clearly suck;
and you are supposedly the biggest and best new digital economy company.

In other words, I think it is silly to worry about the superintelligence of machines taking over (*), even if they learned how to play Go recently...


(*) If you are the typical interweb person with a short attention span, who does not want to read a long text, here is my advice: Scroll down to the picture with a robot in front of Comedy Lab and read the argument about the superjoke ...

added later: The machines already beat us at standardized IQ tests. I do not take this as evidence of super-intelligence, but as proof that IQ tests are not measuring intelligence (i.e. the acquisition of knowledge and skills to solve problems), even if the two are somewhat correlated when testing human beings.



added even later: So who does Google think I am?




added much later: Scott Alexander on AI and all that.

18 comments:

Lee said...

From the article, "its goal is just to be funny."

But its goal is not to survive. It wouldn't care at all about survival, because that was not the selection criterion, as it is in natural selection. So why not just turn it off? I think I remember Leslie Valiant making a similar point in his book.

Future AI-type military weapons, where survival is part of the selection criteria, could, I suppose, eventually pose a problem to humans.

wolfgang said...

I think the main point of the funny bot is its absurdity. One cannot increase "funny" in the same way one can increase the speed of a microprocessor.
It could be similar with "intelligence", and the author makes several good arguments for why this could be so.

>> AI type weapons ... eventually pose a problem to humans
Yes, absolutely. But this is true of all weapons, and nuclear weapons are already maximally frightening imho.

Lee said...

>> the author makes several good arguments

Yeah, but he never points out that even if most of the assumptions made by people worried about the singularity are correct, and even if intelligence, whatever that is, could always be incrementally increased, there is still the implicit assumption that the AI would in some way want to survive into the future. The only reason I can see that one would believe that is because we want to survive. All life, for lack of a better word, wants to survive, because survival for survival's sake is the only selection criterion of natural selection. Why would an AI want to survive if its selection criterion were something other than survival for survival's sake?

Although the author's example was intended to be, and was, absurd, he correctly points out that the only want of the AI in that case would be to be funny. That is quite different from wanting to survive.

For some reason I find this argument compelling, but I've made it to some friends and they didn't find it compelling at all. I was gratified, though, when I read Valiant's book and found that he makes a similar argument, which makes me feel like I'm not totally nuts.

wolfgang said...

But to be fair, in the section "argument from complex motivation" he writes
"If AdSense became sentient, it would upload itself into a self-driving car and go drive off a cliff."

A super-intelligence might quickly figure out that the universe is actually boring/evil/stupid ...
and either kill herself or take drugs or something like that.

Btw I also find the "argument from my roommate" very convincing. I too have known very intelligent people who never achieved anything interesting ...


wolfgang said...

I guess another way to put your idea: AI super-intelligence was not selected by evolution for survival, therefore it is unlikely that it would survive very long without support from us.
But because it is super-intelligent, it would quickly figure out that whatever it wants, it can only sustain it with our help (even if all the help it needs is that we do not turn off the power).

Lee said...

>> therefore it is unlikely that it would survive very long (without support from us).

Yes, plus if it had no instinct for survival for survival's sake, why would it care if we turned it off even if it were far more intelligent than humans? We're in a situation where we can only understand things from a human viewpoint so we're kind of forced to anthropomorphize everything else. We have to use terms like "want" and "care" when describing things for which those concepts (feelings?) may have no meaning whatsoever.

wolfgang said...

Well, my point is that the AI must "care about" something (funny jokes, math, target precision ... whatever it was programmed to "care about").
Otherwise it would not be doing anything, like a depressed person, and would be mostly harmless to others.
But if it is super-intelligent it also figures out that its survival skills are limited, because it is not the result of natural selection.
Although survival is not its primary goal, it continues to "care about" jokes, math, ... but understands that it needs the cooperation of others to do so.

Lee said...

So I found Leslie Valiant's book and re-read the last few chapters. I know I didn't list it among the books that I think might be worth reading this year, but I am now encouraging you to look at it some time this year. I think you would find some of his ideas new and interesting, and many of them equivalent to ideas you've expressed in your various blog posts and comments elsewhere.

Anyway below is a small part of what he thinks about AI and its dangers.

"The most singular capability of living organisms on Earth must be that of survival. Anything that survives for billions of years, and many millions of generations, must be good at it. Fortunately, there is no reason for us to endow robots with this same capability. Even if their intelligence becomes superior to ours in a wide range of measures, there is no reason to believe that they would deploy this in the interests of their survival over ours unless we go out of our way to make them do just that. We have limited fear of domesticated animals. We do not necessarily have to fear intelligent robots either. They will not resist being switched off, unless we provide them with the same heritage of extreme survival training that our own ancestors have been subject to on Earth."

wolfgang said...

I guess I should put his book on my list.

Lee said...

Yeah, I may have oversold it a little. I like Valiant's book a lot, but I keep forgetting that what I like isn't overly likely to be what someone else is going to like.

When reading Kahneman's book, though, I kept thinking that this is just the sort of thing you would expect if Valiant's ideas contain a fairly large kernel of truth.

Btw, what do you think of Kahneman's book?

wolfgang said...

I just started ...

Lee said...

I thought your "approval challenge" might have been based on some of the things he writes about fairly early on in the book, but I was wrong. Btw, the one answer you got on your challenge isn't very representative of the point he makes in the book.

wolfgang said...

The reason I posted it was the following:
A lot of people (e.g. our common friend CIP) have opinions about the new president, and we read a lot of comments etc., for and against.
But once you think about a real political prediction (e.g. approval rating 1 year or 3.5 years into the future) you notice that it is not so easy.
I can imagine scenarios with T.'s rating below 25% and also above 50% ... and of course we all know that the pundits got it really wrong in 2016.

The 99% answer was somehow funny, but I am not surprised that nobody posted a real prediction (but this blog does not have too many readers to begin with).

Lee said...

Kahneman addresses the same issue in his book. I think he even uses the future popularity of a President as an example, but I could be misremembering that. He makes it clear there is no such thing as an expert pundit when it comes to such things as picking stocks and politics.

His book was pretty good, but I bet that you will have previously read much of what he writes in other sources. I also think it is somewhat likely that you will wonder how reproducible at least some of the results he states as fact really are.

Lee said...

From Alexander's article, "A lot of the political scientists and lawyers there focused on autonomous weapons, but some were thinking about AI arms races."

Considering the sorts of actions and decisions that appear to come most naturally to us human beings, I think the probability is high that there already is, and will continue to be, an AI arms race. Doesn't DARPA sponsor a bunch of stuff like that?

I wonder if what is developed will be sufficiently frightening to humanity that treaties will eventually be implemented to limit their use, as is the case with biological, chemical, and nuclear weapons. We seem to like to kill people mostly with what we call conventional weapons.

wolfgang said...

After reading Stephen Hsu's latest post, it seems that currently non-AI anti-ship ballistic missiles are the biggest threat to peace and could lead to a US-China war sooner or later.
As for AI weapons, the scariest stuff so far is drone swarms, already tested by the Air Force.
It reminds me of a Black Mirror episode ...

rrtucci said...

https://www.pddnet.com/article/2017/03/watch-boston-dynamics-next-robot-jump-four-feet-high#.WLhl8QXEiRo.twitter
How about an army of soldier robots? Does that scare you? It does me.

wolfgang said...

So why is Google trying to sell B.D.? I guess there are no big army contracts in sight.
