
Eliezer versus Jaron: smackdown

I agree with everything Eliezer says here, and as usual he does an excellent job of stating it. But occasionally, I think Jaron also makes some good points. At other times, he makes pretty bad points:

I think the worst moment on Jaron's side is when he doesn't dispute Eliezer's characterization of what he's saying as removing an airplane and leaving the quarks that make up the airplane intact. If he really believes this, it's got to be about the most radical rejection of reductionism I've ever heard.

I think his strongest point is when he brings up IQ... I hadn't really thought of it that way before, but it does seem pretty crazy that people are assigned a number with 3 significant figures that is supposed to represent their intelligence. I agree with him that it's an example of either researchers exaggerating the confidence in their models, or of bureaucrats wanting a rigid system to enforce even if it's kind of arbitrary and only works very approximately.

He also brings up a potentially good point that we shouldn't have extreme probability distributions over beliefs that concern things that weren't directly demonstrated by science (for example, beliefs about metaphysics, consciousness, god, etc.)... unfortunately, I don't quite believe what he's saying there enough to give up my extreme probability distributions with respect to such things (distributions which I assume agree well with Eliezer's... for instance, the odds that it's not possible to build a computer as smart as or smarter than a human are very small). His argument makes some sense to me, yet I can't pretend that I don't have strong beliefs on those issues... I'll have to think about that one more.

I dislike the way he says he believes humans are special because he believes it's an "undecidable" question whether they really are, so he figures he might as well believe it (presumably because it makes him feel special? what a crappy reason to believe something). His frequent reference to Daniel Dennett as a "religious extremist" is hilarious, but obviously wrong.

Actually, there is one line of Eliezer's that I kind of disagree with, although I suspect he didn't mean to say it; it was perhaps a slip of the tongue. After Jaron's repeated accusations that AI research is a "religion", he responds by saying "in order to call something a religion, you need to make the case that people believe certain things that aren't true." =) I would not define religion as belief in something false... I would say that you need to make the case that the reasons why people believe in something are faith-based, which in general tends to lead to false beliefs... but could coincidentally lead to true beliefs once in a great while (if you waited a long, long time and came up with lots and lots of religions). So far I don't think anything like this has happened, but I think it is linguistically pointless to define religion in such a way that it's tautologically false. At any rate, it's an interesting sociological point that in some circles, the word religion has become somewhat synonymous with false beliefs. I personally see it as more synonymous with dogmatic, faith-based beliefs.

I also still cringe when I hear him (and Jaron) say he's a "rationalist", because I associate that with the philosophers whose epistemological beliefs I so passionately disagree with. But I'm sure when Eliezer says it he means something more positive. Speaking of epistemology, Jaron says a couple of times that Dennett "throws it out the window"... um, yeah, dream on Jaron... as if you have any clue compared to Dennett on the subject. Ironically, I agree with Jaron's statement that you can't call yourself a rationalist and be a Dan Dennett fan... although for very different reasons than he's implying. I'm an avid Dennett fan but I'm strictly an empiricist, not a rationalist... and I'd be really disappointed in Dennett if he started spouting rationalist crap.

All in all, an entertaining debate, and not without at least some good points on both sides. If you do end up watching it, go ahead and fill out the poll, because I'm curious how other people's impressions on this subject compare to mine:

Who made better points in this debate?


Who came across as foolish or naive?


Who came across as intelligent?


Whose personality do you identify with more?


I threw in a question about personality too, because I find the conflict in their personalities to be quite striking... it seems to parallel a very familiar personality type difference that I see arise in different contexts from time to time. I identify much more with Eliezer's personality, and something about Jaron's kind of bugs me, independently of what he's actually saying. Actually, I think the main thing that bugs me is that Jaron is so vague and just refuses to nail down what he's saying a lot of the time, while Eliezer is so careful and clear in what he's saying all the time... maybe that's the difference? I'm not saying that means one personality type is more prone to error than the other... it's just an observation.


( 37 comments — Leave a comment )
Apr. 19th, 2009 06:04 am (UTC)
Just a nit (which has nothing to do with the overall subject): Jaron is Jaron, not Jared.
Apr. 19th, 2009 06:14 am (UTC)
I put Jaron in most places, but I must have unconsciously typed Jared in a few places. Weird, because I didn't remember typing that at all. When I saw your correction I was thinking "when did I write Jared?" I think this is an example of my fingers doing excessive autocompletion for commonly typed words.

Anyway, thanks. I corrected it above.
Apr. 19th, 2009 06:19 am (UTC)
Heh, no problem -- stuff like that stands out to me, is all. But only when other people do it. I am less likely to notice my own autopiloty stuff of course!
Apr. 19th, 2009 06:06 am (UTC)
I agree that IQ doesn't strictly need 3 significant digits, but I'm not sure that rounding away digits is useful, unless we want to underline the fact that these numbers are imprecise.

Error bars seem like a better tool here, don't they?
Apr. 19th, 2009 06:29 am (UTC)
Sure, I guess I would say error bars are better than just rounding it off. I sort of agree with Jaron's point that a 1 to 10 score would be more honest than what we do now (in terms of the implied precision)... but I also agree with Eliezer that the important point of centering it on 100 is so that you can look at how many standard deviations you are from the norm.
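To make the point about centering on 100 concrete, here's a minimal sketch of the conversion Eliezer is alluding to, assuming the conventional IQ scaling (mean 100, standard deviation 15; the function name is just for illustration):

```python
# Convert an IQ score to standard deviations from the norm (a z-score),
# assuming the conventional scaling: mean 100, standard deviation 15.
def iq_to_z(iq, mean=100.0, sd=15.0):
    return (iq - mean) / sd

print(iq_to_z(130))  # prints 2.0 -> two standard deviations above the norm
```

The point being that the 100-centered scale is just an affine relabeling of z-scores, so it carries the "how far from the norm" information directly, whereas a 1-to-10 score would throw that away.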
Apr. 19th, 2009 06:13 am (UTC)
And now one that is subject-relevant: I actually don't *know* where I stand on the subject of Strong AI, etc....I feel like I am not well-educated enough in the relevant disciplines (computer science, neuroscience, biology, etc.) to have any really strong opinions on the matter. However, I have noticed a thing where....some of the people who might be described as "AI enthusiasts" tend to do this thing where....they sort of sound like they're being very clear and concrete, but when you try and actually get at what they are talking about, it's not based on much in the way of empirical evidence. Again, this is not me making a strong statement about what I think is or isn't possible in the long term, just that when people use phrases like "smarter than human intelligence" there often seems to be a lot of handwaving involved, only it doesn't LOOK like handwaving initially because the terminology being invoked seems familiar enough.

Philosophically of course I don't see any reason why the stuff human brains can do is necessarily restricted to things that are chemically and structurally identical TO human brains, but I also think we have an awful lot more to learn, and that nobody has really earned the right to be certain or overconfident about stuff in that field.
Apr. 19th, 2009 06:22 am (UTC)
I do think that a lot of the "friendliness" stuff Eliezer talks about is based on handwaving. If Jaron had brought that up, I probably would have been on his side... but for some reason neither of them mentioned it.

I'm not sure whether I agree on the vagueness of "smarter than humans". In some ways, it seems kind of vague. But I feel like the Turing test mostly clears up the ambiguity. If a computer can pass the Turing test reliably, then it is at least as smart as humans (and presumably smarter, because at the very least, it will still be able to do math a lot quicker than we can, just as current computers can).
Apr. 19th, 2009 07:28 am (UTC)
I'm not sure whether I agree on the vagueness of "smarter than humans". In some ways, it seems kind of vague. But I feel like the Turing test mostly clears up the ambiguity. If a computer can pass the Turing test reliably, then it is at least as smart as humans

Well, for some value of "as smart as" based on the particular task (i.e., functioning as a human-imitating conversational entity), which is somewhat vague, since the characteristics of the judge are left unspecified in the classical formulation of the test...

The Wikipedia entry for "Turing Test" observes that it "is based on the assumption that human beings can judge a machine's intelligence by comparing its behaviour with human behaviour." How strongly is this believed at present? (And is the test really a test of the computer's "intelligence", or of the programmer's skill in implementing such conversational characteristics as will make the interaction seem "human"?)

I find it amusing to contemplate a "reverse Turing test": a human trying to respond in such a way as to be perceived as a computer!
Apr. 19th, 2009 08:46 am (UTC)

is the test really a test of the computer's "intelligence", or of the programmer's skill in implementing such conversational characteristics as will make the interaction seem "human"?

I'd say it's clearly a test of the computer's intelligence. But it may also be a test of the programmer's skill in having created the framework for that intelligence. Jaron Lanier would disagree--he seems to be committed to shunning words like "intelligence" from the field of AI... supposedly to avoid "ideology", even though in doing so he's imposing his own bizarre ideology.
Apr. 19th, 2009 07:25 pm (UTC)
Re. Turing tests: I'm skeptical it's that simple. It seems to depend very much on the neurology/assumptions of the person doing the test! When I was a kid my dad had a conversational program on the computer called "Racter", and that thing had me *totally* fooled when I was 6 or 7 years old. I literally thought I was "talking to the computer" and that it was sentient. I am not saying the problem of determining machine intelligence is intractable, mind you -- just that there seem to be a lot of built-in and often unexamined assumptions in the Turing test idea. Like, for instance, the notion that a computer is "intelligent" if it can make an adult human, or even a neurotypical human, think it's intelligent. Just stuff to think about.
Apr. 20th, 2009 06:03 am (UTC)
the Turing test may tell us as much about the judge as the computer under "test"
there seem to be a lot of built-in and often unexamined assumptions in the Turing test idea. Like, for instance, the notion that a computer is "intelligent" if it can make an adult human, or a neurotypical human, even, think it's intelligent.

Yes. That.
I think, for an interesting test, you want several different flavors of judge: children, schizoids, autistics, LSD-trippers, etc., as well as adult NT's...

I'm reminded of an investigative article published years ago (might have been in the SF Chronicle or something, but ICBW about that)... the title, IIRC, was "On Being Sane in Insane Places". A couple of reporters consulted a psychiatrist with complaints of hearing "unclean voices in their heads", got themselves diagnosed as "schizophrenic" and hospitalized, then dropped all pretense of insanity. The staff was oblivious, and continued to view the reporters through the lens of the "insanity" diagnosis: if one of them was taking notes, the psychiatrists weren't interested in asking what they were writing, but merely recorded observations such as "patient engages in writing behavior"; straightforward practical questions to the nurses were met with "you'll have to bring that up with your doctor in therapy"; and so forth. They simply assumed they were dealing with actual insane people and interpreted everything accordingly.

The interesting thing is that the other patients caught on to the reporters' deception immediately: "You're not really crazy... what, are you some kind of reporter checking out the hospital or something?" The authors of the article thought it noteworthy that the patients seemed more accurately perceptive than the staff on this point!

So it would be interesting, as I say, to see if folks other than NT adults would evaluate various conversational entities differently, and at the very least, if they responded to different cues. For example, do a side-by-side comparison with two versions of a given conversational algorithm, identical except that one deliberately inserts random output errors to simulate imperfect human typing... for which of the judges would this increase the chance of a given conversation being considered "human"?
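The side-by-side comparison proposed above could be sketched roughly like this (a hypothetical toy, assuming simple character-substitution errors as the model of imperfect human typing; `add_typos` and the error rate are my own illustrative choices, not anything from the debate):

```python
import random

# Toy sketch of the proposed experiment: take a conversational program's
# output and inject occasional random typos to simulate imperfect human
# typing. Whether this raises "human" judgments, and for which judges,
# is exactly the empirical question being posed.
def add_typos(text, error_rate=0.05, rng=None):
    rng = rng or random.Random(0)  # seeded by default, for reproducibility
    out = []
    for ch in text:
        if ch.isalpha() and rng.random() < error_rate:
            out.append(rng.choice("abcdefghijklmnopqrstuvwxyz"))  # substitution error
        else:
            out.append(ch)
    return "".join(out)

print(add_typos("I think the Turing test is underspecified."))
```

You'd then show judges transcripts from the clean and typo-injected versions of the same algorithm and compare how often each is rated "human", broken down by judge population.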
Apr. 19th, 2009 07:52 pm (UTC)
Eliezer strikes me as a severely repressed individual.
Apr. 19th, 2009 09:03 pm (UTC)
My previous comment was made prior to watching the debate, based purely on my prior exposure to Eliezer's work. Now, having watched the debate and accumulated more data, I can speak with greater confidence: Eliezer's severely repressed. He's brilliant, but not wise. Lanier might as well have been trying to convert the Pope. He wasn't always as successful at illustrating Eliezer's hubris as might've been necessary.
Apr. 20th, 2009 04:26 am (UTC)
Oh wow, that's a pretty astute way of putting it: EY has high INT/low WIS. Whereas JL has apparently learned some WIS along the way.
Apr. 20th, 2009 06:17 am (UTC)
Thanks for the comment. I was curious to see how you would respond to said video. I'm not surprised that you identify more with Jaron.

I just wrote out a question about wisdom to azalynn below. I think the issue of what wisdom means may have come up before between you and me, but I can't recall what you said about it. At any rate, the question I ask her is also directed at you... what do you mean by wisdom, as differentiated from intelligence (I give my own impression of the distinction below)?

Also, regarding repression... do you sense that there are specific things Eliezer is repressing, like there is a hidden personality of his just waiting to get out if only he didn't keep it at bay? What if certain people just prefer to operate within more narrow bounds... and they just seem repressed to you because they aren't expressing the things you'd be expressing if you picture yourself in their shoes?