Roko's Basilisk

EY said:
Why I yelled at Roko: Because I was caught flatfooted in surprise, because I was indignant to the point of genuine emotional shock, at the concept that somebody who thought they'd invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public Internet. In the course of yelling at Roko to explain why this was a bad thing, I made the further error---keeping in mind that I had absolutely no idea that any of this would ever blow up the way it did, if I had I would obviously have kept my fingers quiescent---of not making it absolutely clear using lengthy disclaimers that my yelling did not mean that I believed Roko was right about CEV-based agents torturing people who had heard about Roko's idea.
TIL: Roko's Basilisk is Voldemort. Don't even mention his name, or he gains in power.
 
Coming in late to say Chrono Cross already did this.

The problem can be solved with dragons, but then you have the problem of dragons.
 
You just want to learn to Thu'um, don't you?
Nope. I'm just proposing dragons as a solution to the problems the world faces.

To run a few examples I'm going to need some assistance. For this I'll tap a little article on "the 10 biggest problems in the world according to the EU," which was the first result that came up when I googled it.

#10: Don't know
Two percent of the people surveyed said they're still thinking about what the world's biggest problem is. They answered that they simply didn't know.
Not a clue. None. Nada.
My proposed solution? Give the people something to worry about. You guessed it: Dragons.




#9 Proliferation Of Nuclear Weapons
Ok, a little tougher here, but there are a couple of avenues we can go with. 1: Godzilla. He's kinda like a dragon, and you can scare enough people by saying he's a mutant from nuclear experiments (i.e. that abomination of a nameless movie starring Matthew Broderick from back when). 2: No one is going to set off a nuclear weapon to kill 1 dragon at the cost of millions of lives. If they are willing to, I suggest MORE dragons. And with more dragons you run the risk of eventually one of them attacking a nuclear arms facility. No one is going to want that shit in their back yard. 3: Use dragons to attack nuclear arms facilities.

(Image search fail)
Solution: Dragons

Tied #7 Armed Conflict
I call it warring nations

People tend to band together in the face of a common enemy
Solution: Dragons

Your enemy has a dragon?
Get your own!
Solution: More Dragons

Tied #7 Spread Of Infectious Disease
Dragons need to eat. Like a lame gazelle picked off by a pack of hyenas, so too shall the sick be picked off by the dragons.

Solution: Dragons

#6 The Increasing Global Population
I'm pretty sure the introduction of something higher on the food chain than human beings solves this little problem fairly easily. But in case there are any doubts, an example through pictures:


Solution: Dragons


#5 Availability Of Energy
Who needs fossil fuels anymore? Tap a Dragon for your burning needs!

Solution: Dragon

#4 International Terrorism
Honestly, I feel this one again falls under the points made for number 7.
Solution: Still Dragons

#3 The Economic Situation
As a thought experiment, let's consider how the introduction of dragons would impact global markets. Just to scratch the surface, we could introduce a new profession of dragon slayer (I recommend Ninjas, the other solution to all world problems) and all the industry needed to support it. Dragons themselves might be harvested for their hide, meat, teeth, talons, etc. Positive impacts all across the board.
Solution: Dragons (and Ninjas)

#2 Climate Change
I'm going to split this into two potential catastrophes: Global Warming and the potential coming Ice Age.
As for Global Warming, I will reiterate the use of dragons for heating, burning, and as an overall substitute for fossil fuels in power generation.

Coming Ice-age?

Solution: Dragons

And #1 Poverty, Hunger And Lack Of Drinking Water
Most of these have already been addressed in a roundabout way. Poverty: More industry, more jobs, fewer people. Hunger: Dragon meat, more industry, fewer people. Lack of Drinking Water: A new source of power production to desalinate water, fewer people, more ale.

Solution: Dragons.
 

figmentPez

Staff member
I'm a little confused here...

Are we talking about robotic chicken lizards?

Or about media players for mythical creatures?
 
The only thing I got out of this thread:

Man, I should not be trying to read that article while driving
You shouldn't be reading anything while driving, you lunatic! :D

Oh, also, the computer doesn't necessarily need to have a perfect simulation of the universe. The simulation just needs to be close enough to make a prediction regarding one choice in your life, namely which box out of two you'll choose.
 
Oh, also, the computer doesn't necessarily need to have a perfect simulation of the universe. The simulation just needs to be close enough to make a prediction regarding one choice in your life, namely which box out of two you'll choose.
Why should I dedicate my life to helping a computer that can't even decide which of two options I will take without my input? If it could perfectly simulate the universe it would be one thing, but it can't even pick a single choice.
 

figmentPez

Staff member
Why should I dedicate my life to helping a computer that can't even decide which of two options I will take without my input? If it could perfectly simulate the universe it would be one thing, but it can't even pick a single choice.
I haven't read the article, because I don't have the time, but I assumed the idea was that you might be part of the simulation, and thus the computer is already in control, and will punish you for being the type of person who wouldn't support it.
 
I haven't read the article, because I don't have the time, but I assumed the idea was that you might be part of the simulation, and thus the computer is already in control, and will punish you for being the type of person who wouldn't support it.
Except why would it run a simulation where I refuse to do the one thing it's trying to test? How has it not come up before? Am I the first? If not, why is it running it again? Why is the simulation still going if I refused to do the experiment?

There are really only two reasons why the simulation wouldn't end the moment I refused to choose:

- It's not a simulation and thus it can't compel me to do anything and has failed to predict my moves (which means it's not a very good AI).
and
- I am a simulation and exist merely for it to inflict torture, at which point it should have dropped the box pretext and immediately started with it.

So again... there is no reason to pick ANYTHING.
 

GasBandit

Staff member
Exactly. I'm choosing to disrupt the experiment on the grounds that it's being done against my will and not for my benefit.
You keep saying you're disrupting the experiment, but you're not. The question posed to you is binary. Do you help create the basilisk, or do you not. There's no other possible choice. Saying "I don't choose" is equivalent to saying "I do not assist." Thus, if you turn out to be an accurate simulation of the real you, the basilisk knows to go with "eternal torment" in box 2.
 
You keep saying you're disrupting the experiment, but you're not. The question posed to you is binary. Do you help create the basilisk, or do you not. There's no other possible choice. Saying "I don't choose" is equivalent to saying "I do not assist." Thus, if you turn out to be an accurate simulation of the real you, the basilisk knows to go with "eternal torment" in box 2.

Pretty much. There is no 3rd option.
 
But the reasoning behind not helping is important. 'I refuse to play your game' means that the supposed future torture will be useless. It's like 'we don't negotiate with terrorists': if you refuse to play the game, the resources they put into it are lost, and therefore they won't play either.

In any case, there is no reason to believe that a future AI will want to do this any more than just torture everyone, or torture no one, or torture people whose favourite color is orange.
 
You keep saying you're disrupting the experiment, but you're not. The question posed to you is binary. Do you help create the basilisk, or do you not. There's no other possible choice. Saying "I don't choose" is equivalent to saying "I do not assist." Thus, if you turn out to be an accurate simulation of the real you, the basilisk knows to go with "eternal torment" in box 2.
Except it should have been able to determine my choice long before I was ever presented with the choice, based on the data it had of me from simulating my entire life up until that point. The fundamental flaw of this thought experiment is that it only works if I am spontaneously created right before I am given the choice and the computer has no other data on me... but then how can it be sure that it created an accurate simulation of me unless it had substantial data beforehand? Essentially, it would have had to have more than enough data to predict this decision in order to simulate me accurately in the first place, and if it didn't, then running THIS experiment wouldn't have given it any useful data.

As such, I can be sure that I am not in a simulation and that it doesn't have the power to do what it says it can. And if it did, I could be sure that it would already be hostile to me, because it knows I wouldn't want anything to do with it... and if it knew that, it should have already started to torment me. That it hasn't is ultimate proof that the AI is completely worthless and thus no threat.

To put it simply, Roko's Basilisk is Oz the Great and Powerful: all show and no substance, getting its way through intimidation and showmanship.
 

GasBandit

Staff member
But the reasoning behind not helping is important. 'I refuse to play your game' means that the supposed future torture will be useless. It's like 'we don't negotiate with terrorists': if you refuse to play the game, the resources they put into it are lost, and therefore they won't play either.

In any case, there is no reason to believe that a future AI will want to do this any more than just torture everyone, or torture no one, or torture people whose favourite color is orange.
The underlying reasoning is unimportant. Either you help, or you don't. Either the AI comes to exist, or it doesn't. It kind of works out like the prisoner's dilemma, in that if EVERYBODY chooses not to help, you're golden. But if enough people choose to try to save their own skin (and who knows, the number required could be as low as 1), and the AI does come to exist, you're boned.
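To make the shape of that payoff concrete, here's a minimal sketch in Python. The four outcomes and the "the threshold could be as low as one helper" point come from the description above; the wording of the outcomes and everything else is hypothetical and purely illustrative:

```
# Toy payoff table for the basilisk "dilemma" as described above.
# Outcomes are hypothetical wordings, only meant to show the structure:
# rows = your choice, columns = whether the AI ends up existing.

payoffs = {
    # (you_help, ai_exists): your outcome
    (True,  True):  "spared (but you spent your life serving it)",
    (True,  False): "wasted effort",
    (False, True):  "eternal torment (box 2)",
    (False, False): "golden -- nothing happens",
}

def outcome(you_help: bool, enough_others_help: bool) -> str:
    """The AI exists iff enough people (possibly just one) chose to help."""
    ai_exists = you_help or enough_others_help
    return payoffs[(you_help, ai_exists)]

if __name__ == "__main__":
    for you_help in (True, False):
        for others in (True, False):
            print(f"you help={you_help!s:5}  enough others help={others!s:5}  ->  "
                  f"{outcome(you_help, others)}")
```

The point being: "nobody helps" only stays golden if literally nobody helps.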
Except it should have been able to determine my choice long before I was ever presented with the choice.
That's just it - you don't know whether or not you are, in fact, the basilisk's conjectured version of you, created only in its mind to determine what choice the real you will make.
If you don't accept the foundation of TDT and other writings from this group, then you are correct.
"Why would god even want angels to dance on the head of a pin, anyway."
 
Coming in late to say Chrono Cross already did this.

The problem can be solved with dragons, but then you have the problem of dragons.
I'd say that FF13-2 did it even more directly with the ADAM artificial Fal'cie that kept re-creating itself "outside of time" and such. Chrono Cross is just... even more screwed up, and I don't think it has a good specific example of this one, though I'll admit it's been a while. FATE maybe, but even then it's iffy, as that's more of an intelligent record trying to keep things going the same way they were already observed, rather than making decisions to influence them toward a different result. I maintain that the ADAM example from 13-2 is much better/closer.


As for this, the entire philosophy is based upon perfect prediction being possible, which therefore means that free will is an illusion. But then what's the logic in PUNISHING somebody for something that is in their nature to begin with? Free will doesn't exist, remember. If it did, you couldn't perfectly predict things. But since you claim to be ABLE to perfectly predict, then any punishment is just punishing the laws of physics/chemistry/biology for setting them up that way in the first place.

Take you out for not helping? Sure, that's just elimination of obstacles, but punishment for something that you literally can NOT change? That defies logic itself, and we're assuming the AI is the ultimate "logical actor." You can't change anyway, and thus there's no need to worry about it.
 
If you don't accept the foundation of TDT and other writings from this group, then you are correct.
No, I meant it in the sense that the logic of the theoretical AI can be as alien as you want. So it may not care about how soon it is created and only worry about its goals going into the future, or it may care about people so it doesn't want to torture me, or it may care so much about some other goal (like making paperclips) that it's not worth it to devote resources to the retroactive blackmail (which, yes, doesn't make sense to me anyway).

Also, guys, the basilisk really ISN'T about us being simulated versions of us.
As for this, the entire philosophy is based upon perfect prediction being possible, which therefore means that free will is an illusion. But then what's the logic in PUNISHING somebody for something that is in their nature to begin with? Free will doesn't exist, remember. If it did, you couldn't perfectly predict things. But since you claim to be ABLE to perfectly predict, then any punishment is just punishing the laws of physics/chemistry/biology for setting them up that way in the first place.

Take you out for not helping? Sure, that's just elimination of obstacles, but punishment for something that you literally can NOT change? That defies logic itself, and we're assuming the AI is the ultimate "logical actor." You can't change anyway, and thus there's no need to worry about it.
But you may act differently in worlds with punishment than in worlds without it. So it does make sense... The punishment is a deterrent/incentive, a new input to change your not-free course.
 

figmentPez

Staff member
Also, guys, the basilisk really ISN'T about us being simulated versions of us.
So, how does the theoretical AI have power over us? It's a pretty big logical leap to assume that any one AI could attain god-like power over reality.
 

GasBandit

Staff member
I like the story, but it's more fantasy than hard science fiction. Might as well worry that an imp from the 5th dimension is going to come and torture you because you made the wrong choice in the Pepsi challenge.
Or, as noted previously, if God can make a boulder so large he himself can't lift it, or how many angels on the heads of pins, etc.

As Ravenpoe said, pretty sure this is just a theological debate with secular terminology.
 
So, how does the theoretical AI have power over us? It's a pretty big logical leap to assume that any one AI could attain god-like power over reality.
It doesn't, right now. If it ever came to be then it would, and would punish us accordingly because we know that's a possibility and are choosing wrongly anyway. It's retroactive blackmail. That's why I say not playing its game is a good answer. I'm saying right now that whether it tortures me in the future or not won't affect how I act right now, so no need to torture me, eh.

In the original basilisk, if I recall it correctly, the simulation part was only used to say that, well, if you're not around when it comes to be, it'll just reproduce you, or 10, or 100 copies of you, in a simulation and then torture that "you".
 

figmentPez

Staff member
If it ever came to be then it would,
But how? Midiclorians? Unobtainium? I find this lacking as a thought experiment because it puts forth the idea of plausibility, but then fails to actually connect the dots beyond hand-waving. As a Star Trek plot, it's interesting, but as a philosophical question it's just an attempt to hide a metaphysical quandary under the guise of scientific possibility.
 
But how? Midiclorians? Unobtainium? I find this lacking as a thought experiment because it puts forth the idea of plausibility, but then fails to actually connect the dots beyond hand-waving. As a Star Trek plot, it's interesting, but as a philosophical question it's just an attempt to hide a metaphysical quandary under the guise of scientific possibility.
Sorry, I wasn't clear. It doesn't have power over us right now, and it will never have power over the present us. In the hypothetical future in which a super powerful AI comes to be, it can take us and torture us. At that point in time! Obviously it can't torture us in the present, or we would be seeing that, but it can torture you in the future depending on what you are doing now.

Ah, sorry, you were repeating this question:
It's a pretty big logical leap to assume that any one AI could attain god-like power over reality.
Well, obviously if the AI never becomes superpowerful the point is moot. What the people at less wrong believe will probably happen (and this they do believe in, unlike the basilisk) is that when an AI capable of improving itself, down to the level of modifying its own code, does so successfully, an exponential(?) explosion/growth of intelligence will (or can) happen, and that AI will become godlike in some amount of time. The organization behind lesswrong, MIRI, is actually dedicated to trying to find a way to guarantee that that AI will have values similar to ours, so that it does not interpret its initial purpose (whichever that may be) in completely alien ways. (An example would be an AI in a clip factory wanting to 'increase clip production'. If it self-improves and becomes supersmart but its terminal values don't change, you can see how that would be a problem.)

Have you read Friendship is Optimal? I think I saw it around here at some point. It deals with these concepts, and is surprisingly good.
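To make the clip-factory point concrete, here's a toy sketch in Python. All the numbers, action names, and the tenfold "self-improvement" step are made up; this isn't anything from MIRI or lesswrong, it just shows what "capability grows but terminal values don't change" looks like:

```
# Toy model: an agent whose only terminal value is "clips produced" gets
# more capable every step, but its objective never changes, so the side
# effects it ignores grow right along with its capability.
# All numbers are hypothetical and purely illustrative.

def choose_action(capability: float) -> dict:
    """Pick whichever action makes the most clips; side effects are ignored."""
    actions = [
        {"name": "run the factory normally",
         "clips": 1.0 * capability, "side_effects": 0.1 * capability},
        {"name": "convert everything nearby into clip feedstock",
         "clips": 10.0 * capability, "side_effects": 50.0 * capability},
    ]
    return max(actions, key=lambda a: a["clips"])  # objective = clips, nothing else

capability = 1.0
for step in range(5):
    action = choose_action(capability)
    print(f"step {step}: capability={capability:8.1f}  ->  {action['name']} "
          f"(clips={action['clips']:.0f}, ignored side effects={action['side_effects']:.0f})")
    capability *= 10  # stand-in for recursive self-improvement
```

Which is why the worry is about what the thing values in the first place, not about how smart it gets.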
 

figmentPez

Staff member
Well, obviously if the AI never becomes superpowerful the point is moot. What the people at less wrong believe will probably happen (and this they do believe in, unlike the basilisk) is that when an AI capable of improving itself, down to the level of modifying its own code, does so successfully, an exponential(?) explosion/growth of intelligence will (or can) happen, and that AI will become godlike in some amount of time. The organization behind lesswrong, MIRI, is actually dedicated to trying to find a way to guarantee that that AI will have values similar to ours, so that it does not interpret its initial purpose (whichever that may be) in completely alien ways. (An example would be an AI in a clip factory wanting to 'increase clip production'. If it self-improves and becomes supersmart but its terminal values don't change, you can see how that would be a problem.)

Have you read Friendship is Optimal? I think I saw it around here at some point. It deals with these concepts, and is surprisingly good.
Well, that then brings up an issue that I don't think this thread has addressed: Is the person that the AI decides to mess around with actually me, if it's not the me of this present? If the AI has to time travel, or recreate me, in order to mess with me, is that person it's torturing actually me, if I've died before it gained power?
 
Well, that then brings up an issue that I don't think this thread has addressed: Is the person that the AI decides to mess around with actually me, if it's not the me of this present? If the AI has to time travel, or recreate me, in order to mess with me, is that person it's torturing actually me, if I've died before it gained power?
The assumption of the problem is that the AI has enough data on you to at least simulate your entire life up to this point. To a transhumanist, a perfect simulation is analogous to you because it would have lived and experienced everything you had (or at least have memories to that effect). In fact, a large part of transhumanism is that eventually it will be possible for people to live forever as digital or mechanical beings, or at least perfect copies of them will.
 