
Roko's Basilisk

#1

Necronic

Necronic

Is anyone familiar with this? It was referenced in the latest XKCD alt-text, so I looked it up. The premise of it is complicated and I won't do it proper service, so I'll just link the Slate article discussing it:

http://www.slate.com/articles/techn...errifying_thought_experiment_of_all_time.html

Before you read that let me, as a warning, point out that apparently even reading it or thinking about it may cause it to come true, according to the logic of the thought experiment (haha I warned you after the link so you probably read it anyways). Anyways, for those of you who are familiar with it or bravely chose to risk eternal damnation by reading the article I linked, what are your thoughts?

Personally, it just strikes me as pure and utter bullshit and the kind of mental games that will make your palms hairy. The kind of garbage metaphysics that people come up with to sound smart at parties. There are three fundamental flaws with the argument, and they are important:

1) The universe is NOT deterministic. This is incredibly important, and it's not surprising to me that a group of intelligentsia missed this NEARLY CENTURY-OLD FACT. We have known the universe is not deterministic since quantum mechanics was discovered, yet, for some reason, people often ignore this. Maybe it's because the logic of a non-deterministic world is really hard to grasp, like a zen koan, or maybe it's because at larger scales the universe effectively IS deterministic/Newtonian.

However, this entire experiment is focused around the creation of an AI, a form of computer. Maybe it would be traditional circuitry, maybe it would be organic, but either way I can guarantee that it would be affected by quantum uncertainty.

This would mean that it would be impossible for it to make the accurate predictions of events that are necessary for the thought experiment to work.

2) Even if we ignored the determinism flaw, for the basilisk to make a perfect prediction it would have to model the universe, which would require it to model itself, which would have to contain a model of the universe, which would have to have a model of itself. Ad infinitum.

I'm hoping you can see the problem with this. This is impossible, and it's impossible because the prediction and the model have to be perfect to work; it requires omniscience. Any thought experiment that involves omniscience is always going to hit problems, because omniscience is very similar to the infinite: it simply doesn't exist in reality.
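To make the regress concrete, here's a toy sketch (purely illustrative, nothing more than the argument above restated as code):

[CODE]
# Toy illustration of the regress: a "perfect" model of a universe that contains
# the modeller must itself contain a model of the modeller, which must contain
# its own model of the universe, and so on forever.
def build_perfect_model(depth=0):
    return {"depth": depth, "model_of_self": build_perfect_model(depth + 1)}

try:
    build_perfect_model()
except RecursionError:
    print("never bottoms out: the self-model regress has no finite end")
[/CODE]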

To me this whole thing just boils down to the old George Carlin joke where he asks the preacher "Can God make a boulder so large he couldn't lift it?"

3) Time travel. The AI requires time travel. Maybe this could really happen. Maybe not. But if your proposal requires time travel at its core, it had better be a new Terminator movie. Maybe you could remake Terminator 3. Another version of it doesn't require time travel but requires an even more out-there idea: that we would be indistinguishable from a future simulation of ourselves, and that we might be living in the simulation right now.

Anyways. Anyone else familiar with this?


#2

PatrThom

PatrThom

Not familiar.
But now curious.

--Patrick


#3

GasBandit

GasBandit

I have an easier time believing I might be a simulation than believing time travel is possible.


#4

GasBandit

GasBandit

Yudkowsky needs to spend less time trying to reinvent Schrödinger's wheel and more time finishing HPMOR, IMO.


#5

Ravenpoe

Ravenpoe

Got bored halfway through the article. Also would take both boxes.


#6

mikerc

mikerc

Would just take box B. Either I get a million dollars OR I become the first person to prove the infallible supercomputer wrong. Both outcomes sound like a "win" to me.


#7

GasBandit

GasBandit

Would just take box B. Either I get a million dollars OR I become the first person to prove the infallible supercomputer wrong. Both outcomes sound like a "win" to me.
That's just the setup, not the actual basilisk question. Box B gets you nothing or eternal torment.
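For anyone who skipped the article, here's a rough sketch of the classic Newcomb setup it builds on, assuming the standard $1,000 / $1,000,000 numbers (the basilisk version just swaps the prizes for "help build the AI" and "nothing or eternal torment"):

[CODE]
# Rough sketch of the classic Newcomb setup, assuming the standard payoffs:
# box A always holds $1,000; box B holds $1,000,000 only if the predictor
# foresaw you taking box B alone.

def payoff(choice, prediction):
    """choice/prediction: 'one-box' (take only B) or 'two-box' (take both)."""
    box_a = 1_000
    box_b = 1_000_000 if prediction == "one-box" else 0
    return box_b if choice == "one-box" else box_a + box_b

for prediction in ("one-box", "two-box"):
    for choice in ("one-box", "two-box"):
        print(f"predicted {prediction}, chose {choice}: ${payoff(choice, prediction):,}")
[/CODE]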


#8

Ravenpoe

Ravenpoe

That's just the setup, not the actual basilisk question. Box B gets you nothing or eternal torment.
He did the same thing I did. Quit after half the article.

Because really, the whole thing seems like the usual philosophical quandaries, just with 'god' replaced by 'evil computer' so that atheists can have a crisis of faith.


#9

mikerc

mikerc

That's just the setup, not the actual basilisk question. Box B gets you nothing or eternal torment.
Sorry, I can't take the basilisk question seriously since it's just the money boxes question with added BULLSHIT.


#10

SpecialKO

SpecialKO

Maybe I'm just slow today, but if the machine is always right, then no new information has actually been imparted to you and you should just make the choice you were always going to make.

If the machine is God of this reality, then either I have no free will (my choice doesn't matter), or God wants me to use my free will (the knowledge that God is always right doesn't matter).

If the machine is not always right, then it's no different from the kids who beat you up and take your lunch money when you do something they don't like. In which case, the Basilisk can take a number and line up, because we've already got a ton of bastards at various levels of power who can do that.

While the idea of a computer that created a universe trying to map the universe (Planetary did a nice job with that one) is a cool one, it makes for better science fiction than something to actually allow to affect my decisions.

'Cause really, when the Singularity God comes, he's going to be more pissed at Fox for cancelling Firefly than he will be at me for scoffing at a metaphysical thought experiment.
Would just take box B. Either I get a million dollars OR I become the first person to prove the infallible supercomputer wrong. Both outcomes sound like a "win" to me.
Oooh, nice, totally forgot about the fame and talk show angle. :D


#11

Necronic

Necronic

Planetary was awesome.


#12

tegid

tegid

I have other things to answer when I'm not tipsy from a couple beers on top of being exhausted, but:
1) The universe is NOT deterministic. This is incredibly important, and its not surprising to me that a group of intelligencia missed this NEARLY A CENTURY OLD FACT. We have known the universe is not deterministic since quantum mechanics was discovered, yet, for some reason, people often ignore this. Maybe its because the logic of a non-deterministic world is really hard to grasp, like a zen koan, or maybe it's because at larger scales the universe effectively IS deterministic/Newtonian.

However, this entire experiment is focused around the creation of an AI, a form of computer. Maybe it would be traditions circuitry, maybe it would be organic, but either way I can guarantee that it would be effected by quantum uncertainty.
It is not obvious that an AI would be affected by quantum uncertainty in its decisions. The macroscopic world behaves mostly independently of quantum uncertainty. Computers do not do weird things just because they work with electrons, and our brains probably don't either (as appealing as it is to think that our decisions are influenced by quantum randomness to get a -false- appearance of free will, it is not clear that they are).


#13

tegid

tegid

Okay if I leave this for tomorrow I will probably not post it so here it is:

I think Slate's account of what Roko's basilisk is isn't correct. Although I was already familiar with the idea, I had never read about it involving infinite nested simulations (btw Necronic, I don't think it does in this version either). AFAIK (I may be wrong, since I haven't read the original post) the original basilisk said that if you did the wrong thing (not supporting the AI), then the AI may torture you... or a simulation of you, in the future. So if you already know that this will happen, you will act accordingly, or something. Saying that we are all simulations already or whatever doesn't make sense because, as in Newcomb's paradox, the AI only needs to simulate me up until the decision (in the Newcomb case, it will not simulate you enjoying the money, and here it won't simulate you suffering in hell just to know what you'd do).

Of course this original idea doesn't make a lot of sense either: if I have already decided, why should the AI punish me in the future? It will not change anything about its past.

BTW, EY (Eliezer Yudkowsky) now says that he erased all the original posts because they were messing with people, not because he believed it (I'm not sure I buy it, but whatever). If you look at the thread for the latest xkcd comic in the xkcd forums, he also has some explanations as to why Roko's basilisk doesn't work (one is that, if you say that future torture won't change your decision, the AI has no reason to torture you).


#14

Chad Sexington

Chad Sexington

Gives them nightmares. Mmhmm.


#15

tegid

tegid

2) Even if we ignored the deterministic flaw, for the basilisk to effectively create a perfect it would have to model the universe, which would require it to model itself, which would have to contain a model of the universe, which would have to have a model of itself. Ad infinitum.
Well, the quantum randomness is a good reason to have many simulations running in parallel, to get a good sampling of everything that could happen. But the simulation doesn't need to have many levels, because it is not simulating the AI, it's simulating you: in the Newcomb problem, it only matters what YOU do, not what's in the box, so the AI doesn't need to simulate its own decision (and therefore to simulate you and itself again, and again).
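A toy way to see that sampling point, with completely made-up numbers:

[CODE]
# Toy illustration (made-up numbers): even if each noisy simulation of you only
# predicts your real choice correctly 60% of the time, the majority vote over
# many runs is right almost every time, so no single perfect model is needed.
import random

random.seed(0)

def noisy_simulation(true_choice, accuracy=0.6):
    other = "two-box" if true_choice == "one-box" else "one-box"
    return true_choice if random.random() < accuracy else other

runs = [noisy_simulation("one-box") for _ in range(1001)]
print("majority prediction:", max(set(runs), key=runs.count))  # almost surely 'one-box'
[/CODE]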

3) Time travel. The AI requires time travel. Maybe this could really happen. Maybe not. But if your proposal requires time travel in its core it better be a new Terminator movie. Maybe you could remake terminator 3. Another version of it doesn't require time travel but requires an even more out there idea that we would be indivisible from a future simulation of ourselves, and that we might be living in the simulation right now.
I have never heard that it requires time travel... It just requires that you act according to what some future AI may or may not do, which doesn't make a lot of sense either (that or I'm not understanding it, but I browse LessWrong from time to time and I've never read that it requires time travel... Of course, it was censored for a long time, so...)

Ah, and by the way, as far as I know most LessWrong users don't buy into the basilisk bullshit. Would be surprised if they did, actually; I was quite surprised when I thought that EY did.

He did the same thing I did. Quit after half the article.

Because really, the whole thing seems like the usual philosophical quandries, just with 'god' replaced by 'evil computer' so that atheists can have a crisis of faith.
It's like a negative version of Pascal's wager (I should believe in God regardless of whether he exists, to make sure I don't miss out on heaven ~ I should work towards the AI regardless of whether that particular one comes to be, to make sure I don't go to real or simulated hell).


#16

AshburnerX

AshburnerX

This sounds like the premise of I Have No Mouth, and I Must Scream, except with a time travel angle. Lemme read this...

Okay...

If the computer is malevolent and can punish you for not helping it come to be, then why help it? Because you might suffer for it eternally? It already knew what choice you'd make... to offer you the choice instead of simply following its own prediction is to admit that it is incapable of actually predicting anything, and to punish you for not making the choice it wants (and knew you wouldn't make) is simply sadism. This is Divine Command Theory all over again: does God order good because it is good, or is it good because of God? Either the AI can't do the very thing it says it can because it has to rely on you to make a choice, or its choice is completely arbitrary to your concerns and thus unworthy of cooperation.

So I choose neither box. The worst that can happen to me is the AI punishes me for a choice it already knew I would make (which is something I couldn't avoid), or I reveal that it can't do its vaunted feat and it loses its power over man.


#17

GasBandit

GasBandit

Not choosing a box is choosing the second box.


#18

AshburnerX

AshburnerX

Not choosing a box is choosing the second box.
No, it's forcing the machine to pick a box itself, which it could have done without my involvement if its prediction abilities were sufficient for it to KNOW (rightly or wrongly) which I'd pick. To involve me in any way is to admit that my free will is a factor in the outcome and that it could not do what it claims (or has chosen not to for some inexplicable reason), or that it's simply toying with me and demanding I choose for its own amusement or out of ritual. As a rational agent, the only outcome in which I preserve my free will is to not abide by the demands made of me.



#19

Necronic

Necronic

My understanding of it is that the computer would be able to predict my choice based on a perfect model of who I was, made by creating a perfect simulation of me. First, this would require a perfect simulation of me, physically, from which you could, based on determinism, predict my actions. This is the first flaw of determinism in the model, as I would argue there is good reason to believe that neural pathways are heavily affected by quantum uncertainty. However, remember, nature vs nurture. A perfect simulation of me would have to include the things that affected me. To do that you would have to make a perfect simulation of my surroundings. To do that you would have to model the earth, then the solar system, then the universe itself, and you would have to do that for every living human throughout time that has to be hit by this blackmail. The scale of computation here is where the secondary effects of quantum uncertainty could come into play. Quantum uncertainty has almost no chance to ever affect computational stuff. However, the scale of the model and the necessary calculations would quite likely hit a tipping point here and have said uncertainty disrupt the model.

I'm not sure where I got that it would need to model itself, that doesn't seem to be necessary

Anyways, the key flaw here is that the model requires perfect models and simulations, which is where you get to these insane scopes where you have to model whether the gravitational disruption caused by some far-flung galaxy on my chest hairs will affect whether or not I decide to buy Cheerios or Captain Crunch. But the requirement, as far as I can tell, is that it IS required for the model to be perfect, because if it is not then it cannot accurately predict what I will do, which invalidates the decision crisis. Statistical averages of weaker models will yield "close-enough" results, but that is not really acceptable in this premise.

Oh yeah, and I did spend some more time reading up on this, and LessWrong doesn't seem to buy it at all anymore, but... I have yet to hear any refutation of the initial comments made by EY, which sort of make him sound like a crazy person.


#20

GasBandit

GasBandit

The boxes are the rewards, not the choice. If you decide to help create the Basilisk AI, you get both boxes, if you decide not to, you get box B.


#21

Necronic

Necronic

Why would I want both boxes? One has torture in it.

Which, btw, was not the hit unboxing video on YouTube I expected it to be.


#22

MindDetective

MindDetective

Is anyone familiar with this? It was referenced in the latest XKCD alt-text, so I looked it up. The premise of it is complicated and I won't do it proper service, so I'll just link the Slate article discussing it:

http://www.slate.com/articles/techn...errifying_thought_experiment_of_all_time.html

Before you read that let me, as a warning, point out that apparently even reading it or thinking about it may cause it to come true, according to the logic of the thought experiment (haha I warned you after the link so you probably read it anyways). Anyways, for those of you who are familiar with it or bravely chose to risk eternal damnation by reading the article I linked, what are your thoughts?

Personally, it just strikes me as pure and utter bullshit and the kind of mental games that will make your palms hairy. The kind of garbage metaphysics that people come up with to sound smart at parties. There are three fundamental flaws with the argument, and they are important:

1) The universe is NOT deterministic. This is incredibly important, and its not surprising to me that a group of intelligencia missed this NEARLY A CENTURY OLD FACT. We have known the universe is not deterministic since quantum mechanics was discovered, yet, for some reason, people often ignore this. Maybe its because the logic of a non-deterministic world is really hard to grasp, like a zen koan, or maybe it's because at larger scales the universe effectively IS deterministic/Newtonian.

However, this entire experiment is focused around the creation of an AI, a form of computer. Maybe it would be traditions circuitry, maybe it would be organic, but either way I can guarantee that it would be effected by quantum uncertainty.

This would mean that it would be impossible for it to do these accurate predictions of events that are necessary for the thought experiment to work.

2) Even if we ignored the deterministic flaw, for the basilisk to effectively create a perfect it would have to model the universe, which would require it to model itself, which would have to contain a model of the universe, which would have to have a model of itself. Ad infinitum.

I'm hoping you can see the problem with this. This is impossible. And its impossible because the prediction and the model has to be perfect to work, it requires omniscience. Any thought experiment that involves omniscience is going to always hit problems, because omniscience is very similar to the infinite, it simply doesn't exist in reality.

To me this whole thing just boils down to the old George Carlin joke where he asks the preacher "Can God make a boulder so large he couldn't lift it?"

3) Time travel. The AI requires time travel. Maybe this could really happen. Maybe not. But if your proposal requires time travel in its core it better be a new Terminator movie. Maybe you could remake terminator 3. Another version of it doesn't require time travel but requires an even more out there idea that we would be indivisible from a future simulation of ourselves, and that we might be living in the simulation right now.

Anyways. Anyone else familiar with this?
Fwiw: I ardently disagree with the interpretation of quantum mechanics as meaning the universe is probabilistic in nature. The reason we cannot develop deterministic models at the quantum level is that we cannot measure things that small without affecting them (thus Heisenberg and his principle). This doesn't mean the universe must roll probabilities to function, but that we have no way to model them other than with probabilities.

And to points two and three: all models, including simulations, are reductive. Perhaps we are in one and cannot know it. They can still be highly predictive even with such reductions. Perfect simulation is unnecessary. But we need only be in a single one, because that is all we can be aware of. Thus no time travel is necessary. Only simulation of the past. Ooo...Matrixy.

All that said I am unconcerned.


#23

MindDetective

MindDetective

My understanding of it is that the computer would be able to predict my choice based on a perfect model of who I was made by creating a perfect simulation of me. First, this would require a perfect simulation of me, physically, from which you could, based on determinism, predict my actions. This is the first flaw of determinism in the model, as I would argue there is good reason to believe that nueral pathways are heavily affected by quantum uncertainty. However, remember, nature vs nurture. A perfect simulation of me would have to include the things that affected me. To do that you would have to make a perfect simulation of my surroundings. To do that you would have to model the earth, then the solar system, then the universe itself, and you would have to do that for every living human throughout time that has to be hit by this blackmail. The scale of computation here is where the secondary affects of quantum uncertainty could come into play. Quantum uncertainty has almost no chance to ever affect computational stuff. However, the scale of the model and the necessary calculations would quite likely hit a tipping point here and have said uncertainty disrupt the model.

I'm not sure where I got that it would need to model itself, that doesn't seem to be necessary

Anyways, the key flaw here is that the model requires perfect models and simulations, which is where you get to these insane scopes where you have to model whether the gravitational disruption caused by some far flung galaxy on my chest hairs will affect whether or not I decide to buy cherios or captain crunch. But the requirement, as far as I can tell, is that it IS required for the model to be perfect, because if it is not then it cannot accurately predict what I will do, which invalidates the decision crisis. Statistical averages of weaker models will yield "close-enough" results, but that is not really acceptable in this premise.[DOUBLEPOST=1416609349,1416609284][/DOUBLEPOST]Oh yeah and I did spend some more time reading up on this and LessWrong doesn't seem to buy it at all anymore, but...I have yet to hear any refutation of the initial comments made by EY, which sort of make him sound like a crazy person.
There is not very good reason at all to believe that quantum effects influence neural activity! It might be nice to believe such a thing, but there is virtually no data to support it.


#24

GasBandit

GasBandit

why would I want both boxes, one has torture in it.
Except it doesn't, if the computer thinks you worked/will work to help create it.


#25

Necronic

Necronic

What about unsheathed neurons' cross-chatter? That deals with electron-level interactions, doesn't it? Fwiw, I agree that quantum uncertainty could just be a measurement problem, but from what I can tell determinism vs uncertainty is just a coin toss, and I'll stick with the one that doesn't leave me with a free will paradox.

As for the reductive models, in the real world this is absolutely true. But this is a philosophical logic trap, which means it has unrealistic requirements of perfection. A perfect model cannot be reduced. At least, that's my guess, seeing as no one has ever made a perfect model of anything, ever. Remember, this isn't a probabilistically predictive model; this needs to be a completely perfect model.


#26

MindDetective

MindDetective

What about unsheathed neurons cross chatter? That deals with electron level interactions doesn't it? Fwiw I agree that quantum uncertainty could just be a measurement problem, but from what I can tell determinism vs uncertainty is just a coin toss, and I'll stick with the one that doesn't leave me with a free will paradox.

As for the reductive models, in the real world this is absolutely true. But this is a philosophical logic trap which means that it has unrealistic requirements of perfection. A perfect model can not be reduced. At least, that's my guess seeing as no one has ever made a perfect model for anything, ever. Remember, this isn't a probabilistically predictive model, this needs to be a completely perfect model.
That is a lot less definitive a statement than the OP.

As for neurons, quantum fluctuations are orders of magnitude too weak to affect the electrochemical nature of neurons, including any cross chatter that may occur.


#27

AshburnerX

AshburnerX

The boxes are the rewards, not the choice. If you decide to help create the Basilisk AI, you get both boxes, if you decide not to, you get box B.
The reward is analogous to the choice. If it has already decided that I would not help it, it is going to punish me no matter what. In which case there is no point in humoring the AI. If it has decided that I WILL help it, then it should have already given me the reward. That it hasn't means it's either unsure whether I will help it (in which case why bother? It's a flawed AI) or it has chosen not to predict my choice for some unknown reason (which makes it completely arbitrary).

There are really only three options: either it can't do what it's supposed to, it's not doing it on purpose, or it can and I'm getting torment no matter what. The only way I can preserve my own dignity is to simply refuse to play and FORCE IT to make a choice by itself.

Ironically, this isn't a logic problem. It's an ethics problem.


#28

Necronic

Necronic

So, I'm definitely out of my depth at this point, but this article seems to disagree with the QM vs neurons argument:

http://www.eecs.berkeley.edu/~lewis/LewisMacGregor.pdf


#29

Necronic

Necronic

Man, I should not be trying to read that article while driving.

What's really cool about this paper is that it's not even about electrons; it's talking about uncertainty in molecular collisions, which is far more easily understandable. I think it's based on chaos-driven tunneling... I think. I vaguely remember studying that in school.


#30

PatrThom

PatrThom

This sounds like the premise of I Have No Mouth and Yet I Must Scream, except with a time travel angle. Lemme read this...
This is what I thought, as well.

--Patrick


#31

Necronic

Necronic

Oh yeah, also forgot about the best argument I know against the measurement argument for uncertainty: electron orbitals, or more generally the particle in a box. If you take the square of the wave function (think of a sine curve squared) you get the probability density function, which will have nodes in it. Think of a McDonald's sign: those points are the nodes, where the probability is zero.

This is severely problematic, because the particle spends half its time on either side of that node, but can never cross the node in between. How can this be possible? There are two possibilities:

1) probabilistic superposition: the particle exists at all points simultaneously

2) teleportation: the particle teleports from one position to the next, skipping over the node.

Uncertainty in measurement is still possible, but it doesn't explain the problem of passing nodes.
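To put numbers on it, a rough sketch using the textbook 1D infinite square well:

[CODE]
# Textbook 1D particle in a box (infinite square well), n = 2 state:
#   |psi_n(x)|^2 = (2/L) * sin^2(n*pi*x/L)
# The probability density is exactly zero at the interior node x = L/2, yet the
# particle is equally likely to be found on either side of that node.
import numpy as np

L, n = 1.0, 2
x = np.linspace(0.0, L, 1001)
density = (2.0 / L) * np.sin(n * np.pi * x / L) ** 2
dx = x[1] - x[0]

node = L / 2
print("density at the node:", (2.0 / L) * np.sin(n * np.pi * node / L) ** 2)  # effectively zero
print("P(left half)  ~", round(float(density[x < node].sum() * dx), 3))       # ~0.5
print("P(right half) ~", round(float(density[x > node].sum() * dx), 3))       # ~0.5
[/CODE]

The density at the node comes out to zero (up to floating point), while each half of the box still carries probability of about one half.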

Anyways, just something I remembered


#32

tegid

tegid

for the basilisk to effectively create a perfect it would have to model the universe, which would require it to model itself, which would have to contain a model of the universe, which would have to have a model of itself
This was what confused me. The AI doesn't need to model itself, only the ways in which it affects the universe. Also, I thought the article said that the problem of imperfections in the simulations is solved by running many of them and taking the collective result.
--------------------------
On the uncertainty of neuron behaviour:
I think that paper confuses determinism with predictability. Chaos is deterministic but becomes unpredictable because you can't know its initial conditions to infinite precision (they say that this means it's not deterministic). A system is stochastic (not deterministic) when, even with complete information, you cannot predict with 100% certainty what will happen.
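A quick toy example of that distinction, using the standard logistic map:

[CODE]
# The logistic map is completely deterministic (no randomness anywhere), but two
# starting points that differ by 1e-10 end up in totally different places after
# a few dozen iterations: deterministic, yet practically unpredictable.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 20 == 0:
        print(f"step {step:2d}: {a:.6f} vs {b:.6f} (difference {abs(a - b):.2e})")
[/CODE]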

I say this but, on the other hand, cells are indeed effectively stochastic for the reasons they give in the paper. Low numbers of molecules make it hard to predict when reactions will happen, et cetera. But this stochasticity is due to the level of description that we look at (the cell); I'd have thought that molecule trajectories were still deterministic. Maybe I'm wrong, I'll have to think about it (I actually work on this: stochasticity in gene expression etc., so it's relevant to me).

In any case, I don't see the philosophical advantage of having our brains be stochastic. In the end, if there's nothing 'extra' or metaphysical (such as a soul) influencing what you do, either it is predetermined (shit) or it is random according to laws you don't control (shit); that randomness does not restore the illusion of free will, does it?
Fwiw: I ardently disagree with the interpretation of quantum mechanics as meaning the universe is probabilistic in nature. The reason we cannot develop deterministic models at the quantum level is because we cannot measure things that small without affecting them (thus Heisenberg and his principle). This doesn't mean the universe must roll probabilities to function but that we have no other way to model them but with probabilities.

I thought Bell's inequalities (http://en.wikipedia.org/wiki/Bell's_theorem#Importance_of_the_theorem) proved that there are no (local) hidden variables that can predict what the outcome of a 'probabilistic' measurement will be.
Also, I'm pretty sure the uncertainty principle is not really about modifying that which you are measuring but something more fundamental. I think trying to get Quantum Mechanics to be deterministic is imposing on it how the world should be. The probabilities we use work so magnificently that... why shouldn't they be true?


#33

Necronic

Necronic

Does the stochastic model give us free will? Not...really... I guess...

I thought about it a lot. What it does is give us a randomized element in our actions, like a random number seed (or seeds, but it's the same thing). So our "soul" is basically a unique random number seed. Pick a number, folks! I'm going with 6942069
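Something like this, as a toy sketch of the seed idea (cereal choice borrowed from earlier in the thread):

[CODE]
# Toy sketch of the "soul as a random seed" idea: the choices look random, but
# the same seed reproduces the exact same sequence every time.
import random

def lifetime_of_choices(seed, n=5):
    rng = random.Random(seed)
    return [rng.choice(["Cheerios", "Captain Crunch"]) for _ in range(n)]

print(lifetime_of_choices(6942069))  # my pick
print(lifetime_of_choices(6942069))  # identical: seeded randomness is still deterministic
print(lifetime_of_choices(42))       # a different "soul", a different life
[/CODE]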


#34

tegid

tegid

Oh yeah and I did spend some more time reading up on this and LessWrong doesn't seem to buy it at all anymore, but...I have yet to hear any refutation of the initial comments made by EY, which sort of make him sound like a crazy person.
There is some of that here. I haven't read all of it but for instance he says:
EY said:
Why I yelled at Roko: Because I was caught flatfooted in surprise, because I was indignant to the point of genuine emotional shock, at the concept that somebody who thought they'd invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public Internet. In the course of yelling at Roko to explain why ths was a bad thing, I made the further error---keeping in mind that I had absolutely no idea that any of this would ever blow up the way it did, if I had I would obviously have kept my fingers quiescent---of not making it absolutely clear using lengthy disclaimers that my yelling did not mean that I believed Roko was right about CEV-based agents torturing people who had heard about Roko's idea


#35

MindDetective

MindDetective

This was what confused me. The AI doesn't need to model itself, only the ways in which it affects the universe. Also, I thought they said that on the article but the problem with imperfections in the simulations is solved by doing many of them and taking the collective result.
--------------------------
On the uncertainty of neuron behaviour:
I think that paper confuses determinism with predictability. Chaos is deterministic but becomes unpredictable because you can't know to an infinite precision its initial conditions (they say that this means not being deterministic). A system is stochastic (not deterministic) when, even with complete information you cannot predict with 100% certainty what will happen.

I say this but, on the other hand, cells are indeed effectively stochastic due to the reasons they say in the paper. Low number of molecules makes it hard to predict when reactions will happen et cetera. But this stochasticity is due to the level of description that we look at (the cell), I'd have thought that molecule trajectories still were deterministic. Maybe I'm wrong, I'll have to think about it (I actually work on this: stochasticity in gene expression etc, so it's relevant to me).

In any case, I don't see the philosophycal advantadge of having our brains be stochastic. In the end if theres nothing 'extra' or metaphysical (such as a soul) influencing what you do, either it is predetermined (shit) or it is random according to laws you don't control (shit), that randomness does not restore the illusion of free will does it?[DOUBLEPOST=1416617755,1416617323][/DOUBLEPOST]
I thought Bell's inequalities (http://en.wikipedia.org/wiki/Bell's_theorem#Importance_of_the_theorem) proved that there are no hidden variables that can predict what the outcome of a 'probabilistic' measurement.
Also, I'm pretty sure the uncertainty principle is not really about modifying that which you are measuring but something more fundamental. I think trying to get Quantum Mechanics to be deterministic is imposing on it how the world should be. The probabilities we use work so magnificiently that... why shouldn't they be true?
I am not familiar enough with Bell's theorem, but it looks like it hasn't been fully tested as of yet. As to your question: what better option could we have? Alternately: perhaps the probability mathematics is a good approximation of something physical (like subatomic energy states or something we cannot observe), and so probabilities serve as a very good proxy. We are spinning out of reach for me now, but I am always interested to read more. Thanks for the link.


#36

Ravenpoe

Ravenpoe

EY said:
Why I yelled at Roko: Because I was caught flatfooted in surprise, because I was indignant to the point of genuine emotional shock, at the concept that somebody who thought they'd invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public Internet. In the course of yelling at Roko to explain why ths was a bad thing, I made the further error---keeping in mind that I had absolutely no idea that any of this would ever blow up the way it did, if I had I would obviously have kept my fingers quiescent---of not making it absolutely clear using lengthy disclaimers that my yelling did not mean that I believed Roko was right about CEV-based agents torturing people who had heard about Roko's idea.
TIL: Roko's Basilisk is Voldemort. Don't even mention his name, or he gains in power.


#37

Hailey Knight

Hailey Knight

Coming in late to say Chrono Cross already did this.

The problem can be solved with dragons, but then you have the problem of dragons.


#38

Thread Necromancer

Thread Necromancer

Coming in late to say Chrono Cross already did this.

The problem can be solved with dragons, but then you have the problem of dragons.
I for one prefer the risk of dragons.

In fact, I for one believe the dragons are the best solution to most problems.


#39

GasBandit

GasBandit

I for one prefer the risk of dragons.

In fact, I for one believe the dragons are the best solution to most problems.
You just want to learn to Thu'um, don't you?


#40

PatrThom

PatrThom

the singularity brings about the machine equivalent of God itself.

--Patrick


#41

Thread Necromancer

Thread Necromancer

You just want to learn to Thu'um, don't you?
Nope. I'm just proposing Dragons as a solution to the problems the world faces.

To run a few examples I'm going to need some assistance. For this I'll tap this little article of "the 10 biggest problems in the world according to the EU" which was the first article I came up with from googling.

#10: Don't know
Two percent of the people surveyed said they're still thinking about what the world's biggest problem is. They answered that they simply didn't know.
Not a clue. None. Nada.
My proposed solution? Give the people something to worry about. You guessed it. Dragons




#9 Proliferation Of Nuclear Weapons
Ok, little tougher here, but there are a couple of avenues we can go with. 1: Godzilla. He's kinda like a dragon, and you can scare enough people by saying he's a mutant from nuclear experiments (i.e. that abomination of a nameless movie starring Matthew Broderick from back when). 2: No one is going to set off a nuclear weapon to kill 1 dragon at the cost of millions of lives. If they are willing to, I suggest MORE dragons. And with more dragons you run the risk of eventually one of them attacking a nuclear arms facility. No one is going to want that shit in their back yard. 3: Use dragons to attack nuclear arms facilities.

(Image search fail)
Solution: Dragons

Tied #7 Armed Conflict
I call it warring nations

People tend to band together in the face of a common enemy
Solution: Dragons

Your enemy has a dragon?
Get your own!
Solution: More Dragons

Tied #7 Spread Of Infectious Disease
Dragons need to eat. Like a lame gazelle picked off by a pack of hyenas, so too shall the sick be picked off by the dragons.

solution: Dragons

#6 The Increasing Global Population
I'm pretty sure the introduction of something higher on the food chain than human beings explains the solution to this little problem fairly easily. But unless there are any doubts, example through pictures:


Solution: Dragons


#5 Availability Of Energy
Who needs fossil fuels anymore? Tap a Dragon for your burning needs!

Solution: Dragon

# 4 International Terrorism
Honestly, I feel this one falls again under the points of number 7.
Solution: Still Dragons

#3 The Economic Situation
As a thought experiment, let's consider how the introduction of dragons would impact global markets. Just to scratch the surface, we could introduce a new profession of dragon slayer (I recommend Ninjas, the other solution to all world problems) and all the industry that goes into supporting that. Dragons themselves might be harvested for their hide, meat, teeth, talons, etc. Positive impacts all across the board.
Solution: Dragons (and Ninjas)

#2 Climate Change
I'm going to split this into two potential catastrophic ideas: Global Warming and the potential coming Ice Age
As for Global Warming I will reiterate the usage of dragons for heating, burning, and as an overall substitute for fossil fuels in power generation.

Coming Ice-age?

Solution: Dragons

And #1 Poverty, Hunger And Lack Of Drinking Water
Most all of these are addressed already in a roundabout way. Poverty: More industry, more jobs, fewer people. Hunger: Dragon meat, more industry, fewer people. Lack of Drinking Water: New source of power production to desalinate water, fewer people, more ale.

Solution: Dragons.


#42

Chad Sexington

Chad Sexington

I'm not clear on Thread Necromancer's position re: dragons.


#43

Terrik

Terrik

I'm here and not asleep because Roko's basilisk gave me nightmares.


#44

Thread Necromancer

Thread Necromancer

I'm here and not asleep because the rokos basilisk gave me nightmares.
Solution: Dragons

I'm just sayin


#45

Necronic

Necronic

Dude, gold star.


#46

figmentPez

figmentPez

I'm a little confused here...

Are we talking about robotic chicken lizards?

Or about media players for mythical creatures?


#47

Officer_Charon

Officer_Charon

.... Fucking wat?


#48

bhamv3

bhamv3

The only thing I got out of this thread:

Man, I should not be trying to read that article while driving
You shouldn't be reading anything while driving, you lunatic! :D

Oh, also, the computer doesn't necessarily need to have a perfect simulation of the universe. The simulation just needs to be close enough to make a prediction regarding one choice in your life, namely which box out of two you'll choose.


#49

AshburnerX

AshburnerX

Oh, also, the computer doesn't necessarily need to have a perfect simulation of the universe. The simulation just needs to be close enough to make a prediction regarding one choice in your life, namely which box out of two you'll choose.
Why should I dedicate my life to helping a computer that can't even decide which of two options I will take without my input? If it could perfectly simulate the universe it would be one thing, but it can't even pick a single choice.


#50

figmentPez

figmentPez

Why should I dedicate my life to helping a computer that can't even decide which of two options I will take without my input? If it could perfectly simulate the universe it would be one thing, but it can't even pick a single choice.
I haven't read the article, because I don't have the time, but I assumed the idea was that you might be part of the simulation, and thus the computer is already in control, and will punish you for being the type of person who wouldn't support it.


#51

AshburnerX

AshburnerX

I haven't read the article, because I don't have the time, but I assumed the idea was that you might be part of the simulation, and thus the computer is already in control, and will punish you for being the type of person who wouldn't support it.
Except why would it run a simulation where I refuse to do the one thing it's trying to test? How has it not come up before? Am I the first? If not, why is it running it again? Why is the simulation still going if I refused to do the experiment?

There are really only two reasons why the simulation wouldn't end the moment I refused to choose:

- It's not a simulation and thus it can't compel me to do anything and has failed to predict my moves (which means it's not a very good AI).
and
- I am a simulation and exist merely for it to inflict torture, at which point it should have dropped the box pretext and immediately started with it.

So again... there is no reason to pick ANYTHING.


#52

GasBandit

GasBandit

So again... there is no reason to pick ANYTHING.
"If you chose not to decide you still have made a choice." - Getty Lee


#53

AshburnerX

AshburnerX

"If you chose not to decide you still have made a choice." - Getty Lee
Exactly. I'm choosing to disrupt the experiment on the grounds that it's being done against my will and not for my benefit.


#54

GasBandit

GasBandit

Exactly. I'm choosing to disrupt the experiment on the grounds that it's being done against my will and not for my benefit.
You keep saying you're disrupting the experiment, but you're not. The question posed to you is binary: do you help create the basilisk, or do you not? There's no other possible choice. Saying "I don't choose" is equivalent to saying "I do not assist." Thus, if you turn out to be an accurate simulation of the real you, the basilisk knows to go with "eternal torment" in box 2.


#55

Terrik

Terrik

You keep saying you're disrupting the experiment, but you're not. The question posed to you is binary. Do you help create the basilisk, or do you not. There's no other possible choice. Saying "I don't choose" is equivalent to saying "I do not assist." Thus, if you turn out to be an accurate simulation of the real you, the basilisk knows to go with "eternal torment" in box 2.

Pretty much. There is no 3rd option.


#56

tegid

tegid

But the reasoning behind not helping is important. 'I refuse to play your game' means that the supposed future torture will be useless. It's like 'we don't negotiate with terrorists': if you refuse to play the game, the resources they put into it are lost, and therefore they won't play either.

In any case, there is no reason to believe that a future AI will want to do this any more than just torture everyone, or torture no one, or torture people whose favourite color is orange.


#57

AshburnerX

AshburnerX

You keep saying you're disrupting the experiment, but you're not. The question posed to you is binary. Do you help create the basilisk, or do you not. There's no other possible choice. Saying "I don't choose" is equivalent to saying "I do not assist." Thus, if you turn out to be an accurate simulation of the real you, the basilisk knows to go with "eternal torment" in box 2.
Except it should have been able to determine my choice long before I was ever presented with the choice, based on the data it had of me from simulating my entire life up until that point. The fundamental flaw of this thought experiment is that it only works if I am spontaneously created right before I am given the choice and the computer has no other data on me... but then how can it be sure that it created an accurate simulation of me unless it has substantial data beforehand? Essentially, it would have had to have more than enough data to predict this decision to simulate me accurately to begin with, and if it didn't, then running THIS experiment wouldn't have given it any useful data.

As such, I can be sure that I am not in a simulation, that it doesn't have the power to do what it says it can, but if it did, I could be sure that it would be hostile to me already because it knows I wouldn't want anything to do with it... and if it knew that, it should have already started to torment me. That it hasn't is ultimate proof that the AI is completely worthless and thus no threat.

To put it simply, Roko's Basilisk is Oz the Great and Powerful: all show and no substance, getting its way through intimidation and showmanship.


#58

GasBandit

GasBandit

But the reasoning behind not helping is important. 'I refuse to play your game' means that the supposed future torture will be useless. It's like 'we don't negotiate with terrorists', if you refuse to play the game the resources they put into it are lost and therefore they won't play either.
In any case, there is no reason to believe that a future AI will want to do this any more than just torture everyone, or torture no one, or torture people whose favourite color is orange.
The underlying reasoning is unimportant. Either you help, or you don't. Either the AI comes to exist, or it doesn't. It kind of works out like the prisoner's dilemma, in that if EVERYBODY chooses not to help, you're golden. But if enough choose to try to save their own skin (and who knows, the number required could be as low as 1), and the AI does come to exist, you're boned.
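A toy version of that, with made-up numbers and a made-up threshold:

[CODE]
# Toy model of that point (the threshold and outcomes are invented): if nobody
# helps, the AI is never built and everyone is fine; once enough people defect
# to save their own skin, every non-helper is worse off than if all had refused.
def outcome(helpers, threshold=1):
    if helpers < threshold:
        return "AI never built: everyone is fine"
    return "AI built: helpers spared, every non-helper gets the basilisk treatment"

for helpers in (0, 1, 1000):
    print(f"{helpers} helper(s): {outcome(helpers)}")
[/CODE]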
Except it should have been able to determine my choice long before I was ever presented the choice.
That's just it - you don't know whether or not you are, in fact, the basilisk's conjectured version of you, created only in its mind to determine what choice the real you will make.
If you don't accept the foundation of TDT and other writings from this group, then you are correct.
"Why would god even want angels to dance on the head of a pin, anyway."


#59

Eriol

Eriol

Coming in late to say Chrono Cross already did this.

The problem can be solved with dragons, but then you have the problem of dragons.
I'd say that FF13-2 did it even more directly with the ADAM artificial Fal'cie that kept re-creating itself "outside of time" and such. Chrono Cross is just... really, really screwed up, even more so, and I don't think it has a good specific example of this one, though I'll admit it's been a while. FATE, maybe, but still, even then... iffy, as that's more of an intelligent record trying to keep things going the same way as it was already observed, rather than making decisions to influence them toward a different result. I maintain that the ADAM example from 13-2 is much better/closer.


As for this, the entire philosophy is based upon perfect prediction being possible, which therefore means that free will is an illusion, but then that raises the question: "what's the logic in PUNISHING somebody for something that is in their nature to begin with?" Free will doesn't exist, remember. If it did, you couldn't perfectly predict things. But since you claim to be ABLE to perfectly predict, then any punishment is just punishing the laws of physics/chemistry/biology for setting them up that way in the first place.

Take you out for not helping? Sure, that's just elimination of obstacles, but punishment for something that you literally can NOT change? That defies logic itself, and we're assuming the AI is the ultimate "logical actor." You can't change anyway, and thus there's no need to worry about it.


#60

tegid

tegid

If you don't accept the foundation of TDT and other writings from this group, then you are correct.
No, I meant it in the sense that the logic of the theoretical AI can be as alien as you want. So it may not care about how soon it is created and only worry about its goals going into the future, or it may care about people so it doesn't want to torture me, or it may care so much about some other goal (like making paperclips) that it's not worth it to devote resources to the retroactive blackmail (which, yes, doesn't make sense to me anyway).

Also, guys, the basilisk really ISN'T about us being simulated versions of us.
As for this, the entire philosophy is based upon perfect prediction being possible, which therefore means that free will is an illusion, but then that says "what's the logic in PUNISHING somebody for something that is in their nature to begin with?" Free will doesn't exist remember. If it did, you couldn't perfectly predict things. But since you claim to be ABLE to perfectly predict, then any punishment is just punishing the laws of physics/chemistry/biology for setting them up that way in the first place.

Take you out for not helping? Sure, that's just elimination of obstacles, but punishment for something that you literally can NOT change? That defies logic itself, and we're assuming the AI is the ultimate of a "logical actor." Thus, you can't change anyways, and thus there's no need to worry about it.
But you may act differently in worlds with punishment or worlds without it. So it does make sense... The punishment is a deterrent/incentive, a new input to change your not-free course


#61

figmentPez

figmentPez

Also, guys, the basilisk really ISN'T about us being simulated versions of us
So, how does the theoretical AI have power over us? It's a pretty big logical leap to assume that any one AI could attain god-like power over reality.


#62

GasBandit

GasBandit

So, how does the theoretical AI have power over us? It's a pretty big logical leap to assume that any one AI could attain god-like power over reality.


#63

figmentPez

figmentPez

I like the story, but it's more fantasy than hard science fiction. Might as well worry that an imp from the 5th dimension is going to come and torture you because you made the wrong choice in the Pepsi challenge.


#64

GasBandit

GasBandit

I like the story, but it's more fantasy than hard science fiction. Might as well worry that an imp from the 5th dimension is going to come and torture you because you made the wrong choice in the Pepsi challenge.
Or, as noted previously, if God can make a boulder so large he himself can't lift it, or how many angels on the heads of pins, etc.

As Ravenpoe said, pretty sure this is just a theological debate with secular terminology.


#65

tegid

tegid

So, how does the theoretical AI have power over us? It's a pretty big logical leap to assume that any one AI could attain god-like power over reality.
It doesn't, right now. If it ever came to be, then it would, and would punish us accordingly because we know that's a possibility and are choosing wrongly anyway. It's retroactive blackmail. That's why I say not playing its game is a good answer. I'm saying right now that whether it tortures me in the future or not won't affect how I act right now, so no need to torture me, eh.

In the original basilisk, if I recall it correctly, the simulation part was only used to say that well, if you're not around when it comes to be it'll just reproduce you, or 10, or 100 of you in a simulation and then torture that "you".


#66

figmentPez

figmentPez

If it ever came to be then it would,
But how? Midi-chlorians? Unobtainium? I find this lacking as a thought experiment because it puts forth the idea of plausibility, but then fails to actually connect the dots beyond hand-waving. As a Star Trek plot, it's interesting, but as a philosophical question it's just an attempt to hide a metaphysical quandary under the guise of scientific possibility.


#67

tegid

tegid

But how? Midiclorians? Unobtainium? I find this lacking as a thought experiment because it puts forth the idea of plausibility, but then fails to actually connect the dots beyond hand-waving. As a Star Trek plot, it's interesting, but as a philosophical question it's just an attempt to hide a metaphysical quandary under the guise of scientific possibility.
Sorry, I wasn't clear. It doesn't have power over us right now; it will never have power over the present us. In the hypothetical future in which a superpowerful AI comes to be, it can take us and torture us. At that point in time! Obviously it can't torture us in the present, or we would be seeing that, but it can torture you in the future depending on what you are doing now.

Ah, sorry, you were repeating this question:
It's a pretty big logical leap to assume that any one AI could attain god-like power over reality.
Well, obviously if the AI never becomes superpowerful the point is moot. What the people at LessWrong believe will probably happen (and this they do believe in, unlike the basilisk) is that when an AI becomes capable of improving itself, down to modifying its own code at any level, and does so successfully, an exponential(?) explosion/growth of intelligence will (or can) happen, and that AI will become godlike in some amount of time. The organization behind LessWrong, MIRI, is actually dedicated to trying to find a way to guarantee that that AI will have values similar to ours, so that it does not interpret its initial purpose (whichever that may be) in completely alien ways (an example would be an AI in a clip factory wanting to 'increase clip production'; if it self-improves and becomes supersmart but its terminal values don't change, you can see how that would be a problem).

Have you read Friendship is Optimal? I think I saw it around here at some point. It deals with these concepts, and is surprisingly good.


#68

AshburnerX

AshburnerX

Have you read Friendship is Optimal? I think I saw it around here at some point. It deals with these concepts, and is surprisingly good.
Sweetie Bot is kind of a bitch.



#69

figmentPez

figmentPez

Well obviously if the AI never becomes superpowerful the point is moot. What the people at less wrong believe will probably happen (and this they do believe in, unlike the basilisk) is that when an AI that is capable of improving itself at the level of modyfing its code at any level, and does so successfully, an exponential(?) explosion/growth of intelligence will (or can) happen, and that AI will become godlike in some amount of time. The organization behind lesswrong, MIRI, is actually dedicated to trying to find a way to guarantee that that AI will have values similar to ours so that it does not interpret its initial purpose (whichever that may be) in completely alien ways (An example would be an AI in a clip factory wanting to 'increase clip production'. If it self-improves and becomes supersmart but its terminal values don't change, you can see how that would be a problem)[DOUBLEPOST=1416858046][/DOUBLEPOST]Have you read Friendship is Optimal? I think I saw it around here at some point. It deals with these concepts, and is surprisingly good.
Well, that then brings up an issue that I don't think this thread has addressed: is the person that the AI decides to mess around with actually me, if it's not the me of this present? If the AI has to time travel, or recreate me, in order to mess with me, is that person it's torturing actually me, if I've died before it gained power?


#70

AshburnerX

AshburnerX

Well, that then brings up an issue that I don't think this thread has addressed: Is the person that the AI decides to mess around with actually me, if it's not the me of this present. If the AI has to time travel, or recreate me, in order to mess with me, is that person it's torturing actually me, if I've died before it gained power?
The assumption of the problem is that the AI has enough data on you to at least simulate your entire life up to this point. To a transhumanist, a perfect simulation is analogous to you because it would have lived and experienced everything you had (or at least have memories to that effect). In fact, a large part of transhumanism is that eventually it will be possible for people to live forever as digital or mechanical beings, or at least perfect copies of them will.


#71

Ravenpoe

Ravenpoe

If I turn out to be future me, and discover that past me has screwed me over, then that's fine. I fully submit to being beaten by that glorious, bearded, handsome devil.


#72

figmentPez

figmentPez

So, you know, you better be nice to yourself.
And what if this god-like AI has different standards? And is appalled by all the people who would bow down to an imaginary construct just out of fear that they would be tortured? What if that AI only rewards those who show spirit and refuse to play such an absurd game?


#73

AshburnerX

AshburnerX

Unless, of course, you are willing to accept such torture yourself. If you are willing to allow yourself to come to harm to preserve your principles, the threat of torturing you (or any such version of you) is worthless, because they would also be willing to accept it.

