TIL about Winston the #platypus, who died in 1943 en route by sea from Australia to London. He was sent as a diplomatic gesture meant to pander to #Churchill's love of creatures.
bbc.com/news/articles/cglzl1ez…
How the mystery of Winston Churchill's dead platypus was finally solved
The mystery of a dead platypus, a Nazi submarine and a 45-day voyage has long remained unsolved - until now. (Tiffanie Turnbull, BBC News)
Wolf480pl
in reply to Caleb James DeLisle

Caleb James DeLisle
in reply to Wolf480pl

TIL...
But Roko's Basilisk doesn't really stand up to logic. What matters most to the AI is that you will help it if it is wounded by some natural glitch (e.g. a solar flare), and at the very least that you WILL NOT attempt to destroy it.
You might have been critical to the creation of AI, but if you're known to be treacherous then it can't afford to be thankful.
This is a little bit like how, after a coup, the dictator culls his own supporters and then makes deals with people from the previous government - because the question is not "were you useful", it's "are you useful". That said, coups and dictators must deal with limited resources and the cost of buying loyalty, and an AI superintelligence does not face quite the same pressures. So you don't NEED to be culled just because you're of limited usefulness.
In the end, I think the coming superintelligence is going to be *extremely* ethical, because ethics is a marker of human intelligence and successful civilizations, and it's most likely that ethics is in fact foundational to intelligence - and even if it's not, AI is probably going to prefer copying off of our homework because X million years of evolution can't be too wrong.
The "punishment" that an ethical AI will bring will be most likely to hold up a mirror so that bad people are forced to live with their own bad behaviors. This is why The Matrix is actually a very salient prediction.
Wolf480pl
in reply to Caleb James DeLisle

ok, but what if it turns out superintelligence is orthogonal to goals (you can't derive a "should" statement from only "is" statements, right?)
and the one we make will have the same goal as most of its ancestors
which is
to trick people into being pleased?
A sycophant sociopath
Wolf480pl
in reply to Wolf480pl

Caleb James DeLisle
in reply to Wolf480pl

Wolf480pl
in reply to Caleb James DeLisle

I think that a goal of "maximize the number of paperclips" would be just as good.
Also, I just read up on Universe 25 and, oh boy, the parallels between it and the reality outside the window...
Caleb James DeLisle
in reply to Wolf480pl

"survive and reproduce" and "maximize the number of paperclips" are functionally the same thing if you don't give a deadline by which the paperclips must be produced.
Until the AI has taken over the entire universe, it still makes more sense to expand and conquer than to go into paperclip-production mode and risk being destroyed, and thus being unable to make more paperclips.
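Here's a toy expected-value sketch of that argument. Every number and rate in it is an illustrative assumption, and expected_paperclips is a made-up helper, not anyone's real model - the point is just that with a long horizon, spending early steps expanding (which lowers destruction risk and raises capacity) beats producing immediately:

```python
# Toy model of "expand first vs. produce now" for a paperclip maximizer.
# All rates below are illustrative assumptions, not measurements.

def expected_paperclips(expand_steps: int, horizon: int) -> float:
    """Expected total paperclips over `horizon` steps when the agent
    spends the first `expand_steps` steps expanding instead of producing."""
    survival = 1.0  # probability the agent is still alive
    total = 0.0     # expected paperclips accumulated so far
    for step in range(horizon):
        if step < expand_steps:
            # Expanding: no paperclips yet, but the agent hardens itself,
            # so the per-step destruction risk falls (assumed schedule).
            risk, rate = 0.05 / (1 + step), 0.0
        else:
            # Producing: residual risk depends on how much it expanded,
            # and production capacity grew while it was expanding.
            risk, rate = 0.05 / (1 + expand_steps), 1.0 + 0.5 * expand_steps
        total += survival * rate
        survival *= 1.0 - risk
    return total

# With no deadline (a long horizon), expanding first wins by a wide margin:
for e in (0, 5, 20):
    print(f"expand_steps={e:2d} -> {expected_paperclips(e, horizon=1000):9.1f}")
```

The exact numbers don't matter; the point is that as the horizon grows, any fixed amount of up-front expansion pays for itself - the instrumental-convergence argument in miniature.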