I like villains, but vaguely think that LW-ish types are unusually horrible at moral philosophy, and probably need to be dissuaded from ever trying to apply any of their atrocious theories. This includes you, ozymandias271.

I got super derailed by the part in the last Captain America movie where he steals his old costume from the museum and the elderly security guard says, “I am so fired.”
Like, does it get sorted out? What if he really needed that…
P.S.: oddly enough, EY himself makes a much better impression in that regard. Idk. He just does.
P. P. S. Pragmatism » contrived moral dilemma porn.
Wait, EY himself makes a better impression than his fanbase on ethical issues? Eliezer “torture > 3^^^3 dust specks” freakin’ Yudkowsky?
I’m not trying to be antagonistic here, I’m just curious, since that particular application of naive “add ‘em up” utilitarianism seems like one of the more obvious awful ethical ideas that’s come out of LW. I mean, I guess it’s harmless in the sense of being obviously impractical (and thus an example of “contrived moral dilemma porn”), but it suggests an ethical view that could potentially imply bad things in more realistic contexts (in that he thinks a bunch of inconveniences can add up to a single truly awful thing, even if every single person being inconvenienced would be like “this is nbd, please don’t torture anyone for my sake”).
Unless the people above me here have something specific in mind I don’t know about, I strongly disagree with the idea that LW does a bad job at morality.
I reject the idea that the “torture versus dust specks” question should be mocked as “contrived moral dilemma porn” any more than the black hole firewall paradox should be mocked as “contrived astronomy dilemma porn” or Schrodinger’s Cat should be mocked as “contrived quantum dilemma porn”. I mean, all the accusations are true in a sense, but they’re also part of the broad intellectual project of trying to understand the world.
I’m not even going to TRY to talk about firewalls, but if I understand Schrodinger’s Cat right, it argues that quantum theory says the cat can be both alive and dead, so we have to either reject quantum theory, reject common sense, or come up with some clever way out of the problem that maintains both.
The dust specks dilemma argues that utilitarianism says to prefer torture over dust specks, so we either have to reject utilitarianism, reject common sense, or come up with some clever way out of the problem that maintains both.
I have the same kind of fondness for utilitarianism that physicists have for quantum theory. It seems to fit with everything else I know and explain things that nothing else can. So like physicists, I’m reluctant to throw it out the first time it does something seriously weird. I’m not sure how seriously Eliezer takes dust specks, but if he bites that bullet I am willing to give him the same respect as a quantum physicist who says “Well, what the hell, pending further investigation maybe cats can be both alive and dead at the same time”. Even if he’s later proven wrong - and even from the little I know of quantum theory I get the impression that modern physics has some satisfying ways out of Schrodinger’s cat - I respect him more than the people who scoff “Oh, you eggheads, always worrying about silly things, whatever.”
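For what it’s worth, the naive “add ’em up” aggregation the dilemma turns on can be sketched in a few lines. All the utility numbers below are invented for illustration (the thought experiment doesn’t fix any of them), and the speck count is a tiny stand-in, since 3^^^3 is far too large to represent:

```python
# Toy sketch of naive additive utilitarianism in the dust specks dilemma.
# Every number here is an assumption made up for illustration.

DUST_SPECK_DISUTILITY = 1e-9   # assumed: one barely-noticeable annoyance
TORTURE_DISUTILITY = 1e12      # assumed: fifty years of torture, one person

num_specks = 10**25  # a tiny stand-in for 3^^^3, which dwarfs this

total_speck_disutility = num_specks * DUST_SPECK_DISUTILITY

# Under straight summation, any nonzero per-person harm eventually
# dominates a single enormous harm once the population is large enough.
print(total_speck_disutility > TORTURE_DISUTILITY)  # True
```

The point the sketch makes is structural: no matter how small you set the per-speck number, some population size flips the comparison, which is exactly the bullet Eliezer is accused of biting.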
I haven’t noticed Less Wrongers being unusually morally weird *in a bad way*. The three biggest practical moral differences I can think of between them and the general population are more effective altruism, more vegetarianism, and more polyamory. The first two are things that all moral systems say are supererogatory goods - i.e. they don’t condemn them, just say it’s an option you don’t have to take - and the last seems less descended from moral philosophy and more descended from “Bay Area culture is weird in general”. I don’t see people doing anything like the murder offsets I mentioned on my blog a month or two ago, even though the math checks out, because people tend to have their heads on straight and realize that “as far as I can tell the math checks out so this is an apparent paradox and therefore an important field for further investigation” doesn’t always correspond to “this is definitely how morality works, do it right now”.
I think it’s unfair to hold Less Wrongers accountable for being more willing to probe how complicated morality gets when you try to make it consistent. I don’t see other people who are equally committed to probing it in depth but have come up with a system that never gets weird or awkward. I occasionally see people who say “Morality is vague and we shouldn’t probe it in depth,” but that itself is a moral statement that needs to be probed in depth, and when you do, it ends up with just as many weird contradictions as any other moral statement, maybe more.
(Compare the argument I hear a lot of professional philosophers make about r/atheism’s attitude of “let’s just reject philosophy!” - rejecting philosophy turns out to require even more philosophical legwork than keeping it.)
When physicists start saying all kinds of weird things about how they can’t be sure time exists or whatever, we don’t think of them as dumber than the majority of people who are very sure time exists. We just acknowledge they’re thinking through things that the rest of us prefer to leave unexamined. And we certainly don’t expect them to be less punctual people, or to forget our birthdays, or whatever.
This is also my objection to post-rationality.
If I understand post-rationality right, it’s saying “the optimal way to make decisions isn’t by calculating Bayesian probabilities relating to every action and then multiplying by expected utility. It’s through common sense, tradition, and a toolbox of heuristics.”
And my answer is: “Duh.”
Compare: “The optimal way to do moral philosophy isn’t to calculate how many people to kill in complicated cases with nuclear bombs and omniscient demons who always tell the truth. It’s just to never kill anybody.”
Or: “The optimal way to be on time isn’t to investigate the fundamental nature of the time-space continuum, it’s just to make sure to get a good watch.”
Or: “The optimal way to get blocks placed on top of other blocks isn’t to make computer scientists at MIT spend years inventing SHRDLU and programming it with laws about the physical world. It’s just to put the block on top of the other block yourself.”
In all of these cases, the simple intuitive way is much better for everyday life and nobody denies that. However, after a LOT of work the scientific way can solve problems where the simple way sputters and breaks. GPS needs to keep time more accurately than is possible without the theory of relativity. Abortion is a complicated moral issue where our intuitions conflict. AIs can now outperform humans at some useful tasks and in the future will be able to outperform us at many more.
And yes, Nate Silver is able to predict things better than the rest of us, and he does it with Bayesian statistics.
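The full-dress decision procedure being contrasted with heuristics here - update a probability with Bayes’ rule, then pick the action with the higher expected utility - is simple to write down, even if nobody actually lives this way. Everything in this sketch (the rain scenario, the probabilities, the payoffs) is a made-up example, not anyone’s actual model:

```python
# Minimal sketch of "calculate Bayesian probabilities, then multiply by
# expected utility". All probabilities and payoffs are invented.

def bayes_update(prior, likelihood, evidence_prob):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Assumed scenario: will it rain, given that the morning is cloudy?
prior_rain = 0.3          # assumed base rate of rain
p_cloudy_given_rain = 0.8 # assumed likelihood of clouds if it will rain
p_cloudy = 0.5            # assumed overall chance of a cloudy morning

p_rain = bayes_update(prior_rain, p_cloudy_given_rain, p_cloudy)  # 0.48

# Expected utility of each action under made-up payoffs.
expected_utility = {
    "take umbrella": p_rain * 0 + (1 - p_rain) * -1,   # mild hassle if dry
    "no umbrella":   p_rain * -10 + (1 - p_rain) * 0,  # soaked if it rains
}
best_action = max(expected_utility, key=expected_utility.get)
print(best_action)  # take umbrella
```

Which is exactly the point: for grabbing an umbrella the toolbox of heuristics gets you there instantly, and the formal machinery earns its keep only on problems where intuition sputters.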
I can’t blame the post-rationalists too much, because there was a brief time in Less Wrong history where a bunch of people were talking about how rationality was going to turn us all into supermen who made every decision with Bayesian reasoning. But I yelled at them, and they mostly stopped. And CFAR is totally on board with the “let’s teach useful heuristics and instill habits” model. Meanwhile, people still plod away at the difficult task of getting scientific and philosophical grounding for stuff, without necessarily saying it is immediately applicable to everyone in the world. So what exactly do you think you’re disagreeing with anyone else about?
