
Eliezer versus Jaron: smackdown

I agree with everything Eliezer says here, and as usual he does an excellent job of stating it. But occasionally, I think Jaron also makes some good points. At other times, he makes pretty bad points:



I think the worst moment on Jaron's side is when he doesn't dispute Eliezer's characterization of what he's saying as removing an airplane and leaving the quarks that make up the airplane intact. If he really does believe this, it's got to be about the most radical rejection of reductionism I've ever heard.

I think his strongest point is when he brings up IQ... I hadn't really thought of it that way before, but it does seem pretty crazy that people are assigned a number with 3 significant figures that is supposed to represent their intelligence. I agree with him that it's an example either of researchers exaggerating the confidence in their models, or of bureaucrats wanting a rigid system to enforce even if it's kind of arbitrary and only works very approximately. He also brings up a potentially good point that we shouldn't have extreme probability distributions on beliefs that concern things that weren't directly demonstrated by science (for example, beliefs about metaphysics, consciousness, god, etc.)... unfortunately, I don't quite believe what he's saying there enough to give up my extreme probability distributions with respect to such things (distributions which I assume agree well with Eliezer's... for instance, the odds that it's not possible to build a computer that is as smart as or smarter than a human are very small). His argument makes some sense to me, yet I can't pretend that I don't have strong beliefs on those issues... I'll have to think about that one more.

I dislike the way he says he believes humans are special because he believes it's an "undecidable" question whether they really are, so he figures he might as well believe it (presumably because it makes him feel special? what a crappy reason to believe something). His repeated description of Daniel Dennett as a "religious extremist" is hilarious, but obviously wrong.

Actually, there is one line of Eliezer's that I kind of disagree with, although I suspect he didn't mean to say it; it was perhaps a slip of the tongue. After Jaron's repeated accusations that AI research is a "religion", he responds by saying "in order to call something a religion, you need to make the case that people believe certain things that aren't true." =) I would not define religion as belief in something false... I would say that you need to make the case that the reasons people believe something are faith-based, which in general tends to lead to false beliefs... but could coincidentally lead to true beliefs once in a great while (if you waited a long, long time and came up with lots and lots of religions). So far I don't think anything like this has happened, but I think it is linguistically pointless to define religion in such a way that it's tautologically false. At any rate, it's an interesting sociological point that in some circles the word religion has become somewhat synonymous with false beliefs. I personally see it as more synonymous with dogmatic, faith-based beliefs.

I also still cringe when I hear him (and Jaron) say he's a "rationalist", because I associate that word with the philosophers whose epistemological beliefs I so passionately disagree with. But I'm sure when Eliezer says it he means something more positive. Speaking of epistemology, Jaron says a couple of times that Dennett "throws it out the window"... um, yeah, dream on Jaron... as if you have any clue compared to Dennett on the subject. Ironically, I agree with Jaron's statement that you can't call yourself a rationalist and be a Dan Dennett fan... although for very different reasons than he's implying. I'm an avid Dennett fan but I'm strictly an empiricist, not a rationalist... and I'd be really disappointed in Dennett if he started spouting rationalist crap.

All in all, an entertaining debate, and not without at least some good points on both sides. If you do end up watching it, go ahead and fill out the poll, because I'm curious how other people's impressions compare with mine on this subject:

Who made better points in this debate?
Eliezer: 3 (60.0%)
Jaron: 2 (40.0%)

Who came across as foolish or naive?
Eliezer: 1 (20.0%)
Jaron: 4 (80.0%)

Who came across as intelligent?
Eliezer: 1 (25.0%)
Jaron: 1 (25.0%)

Whose personality do you identify with more?
Eliezer: 3 (60.0%)
Jaron: 2 (40.0%)


I threw in a question about personality too, because I find the contrast in their personalities quite striking... it seems to parallel a very familiar personality-type difference that I see arise in different contexts from time to time. I identify much more with Eliezer's personality, and something about Jaron's kind of bugs me, independently of what he's actually saying. Actually, I think the main thing that bugs me is that Jaron is so vague and just refuses to nail down what he's saying a lot of the time, while Eliezer is so careful and clear in what he's saying all the time... maybe that's the difference? I'm not saying that means one personality type is more prone to error than the other... it's just an observation.

Comments

azalynn
Apr. 19th, 2009 06:04 am (UTC)
Just a nit (which has nothing to do with the overall subject): Jaron is Jaron, not Jared.
spoonless
Apr. 19th, 2009 06:14 am (UTC)
I put Jaron in most places, but I must have unconsciously typed Jared in a few places. Weird, because I didn't remember typing that at all. When I saw your correction I was thinking "when did I write Jared?" I think this is an example of my fingers doing excessive autocompletion for commonly typed words.

Anyway, thanks. I corrected it above.
azalynn
Apr. 19th, 2009 06:19 am (UTC)
Heh, no problem -- stuff like that stands out to me, is all. But only when other people do it. I am less likely to notice my own autopiloty stuff of course!
kutta
Apr. 19th, 2009 06:06 am (UTC)
I agree that IQ doesn't strictly need 3 significant digits, but I'm not sure that rounding away digits is useful, unless we want to underline the fact that these numbers are imprecise.

It seems like error bars would be a better tool here, wouldn't they?
spoonless
Apr. 19th, 2009 06:29 am (UTC)
Sure, I guess I would say error bars are better than just rounding it off. I sort of agree with Jaron's point that a 1 to 10 score would be more honest than what we do now (in terms of the implied precision)... but I also agree with Eliezer that the important point of centering it on 100 is so that you can look at how many standard deviations you are from the norm.
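
To make the standard-deviation point concrete: IQ is conventionally scaled so the population mean is 100 and one standard deviation is 15 points, which makes the z-score conversion trivial. A minimal Python sketch (the score and the error bar are made-up numbers for illustration, and the normal distribution is the usual idealization):

    from statistics import NormalDist

    MEAN, SD = 100.0, 15.0                 # conventional IQ scaling
    population = NormalDist(mu=MEAN, sigma=SD)

    def z_score(iq):
        # how many standard deviations the score sits from the norm
        return (iq - MEAN) / SD

    def percentile(iq):
        # fraction of the population expected to score below iq
        return population.cdf(iq)

    score, err = 130, 5                    # hypothetical score with an error bar
    lo, hi = percentile(score - err), percentile(score + err)
    print(f"IQ {score}: z = {z_score(score):+.2f}, "
          f"percentile {percentile(score):.1%} (range {lo:.1%} to {hi:.1%})")

Even a modest +/-5-point error bar smears the percentile from roughly 95% to 99%, which is the sense in which quoting a score to three significant figures overstates the resolution.
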
azalynn
Apr. 19th, 2009 06:13 am (UTC)
And now one that is subject-relevant: I actually don't *know* where I stand on the subject of Strong AI, etc....I feel like I am not well-educated enough in the relevant disciplines (computer science, neuroscience, biology, etc.) to have any really strong opinions on the matter. However, I have noticed a thing where....some of the people who might be described as "AI enthusiasts" tend to do this thing where....they sort of sound like they're being very clear and concrete, but when you try and actually get at what they are talking about, it's not based on much in the way of empirical evidence. Again, this is not me making a strong statement about what I think is or isn't possible in the long term, just that when people use phrases like "smarter than human intelligence" there often seems to be a lot of handwaving involved, only it doesn't LOOK like handwaving initially because the terminology being invoked seems familiar enough.

Philosophically of course I don't see any reason why the stuff human brains can do is necessarily restricted to things that are chemically and structurally identical TO human brains, but I also think we have an awful lot more to learn, and that nobody has really earned the right to be certain or overconfident about stuff in that field.
spoonless
Apr. 19th, 2009 06:22 am (UTC)
I do think that a lot of the "friendliness" stuff Eliezer talks about is based on handwaving. If Jaron had brought that up, I probably would have been on his side... but for some reason neither of them mentioned it.

I'm not sure whether I agree on the vagueness of "smarter than humans". In some ways, it seems kind of vague. But I feel like the Turing test mostly clears up the ambiguity. If a computer can pass the Turing test reliably, then it is at least as smart as humans (and presumably smarter, because at the very least, it will still be able to do math a lot quicker than we can, just as current computers can).
daze39
Apr. 19th, 2009 07:28 am (UTC)
I'm not sure whether I agree on the vagueness of "smarter than humans". In some ways, it seems kind of vague. But I feel like the Turing test mostly clears up the ambiguity. If a computer can pass the Turing test reliably, then it is at least as smart as humans

Well, for some value of "as smart as" based on the particular task (i.e., functioning as a human-imitating conversational entity), which is somewhat vague, since the characteristics of the judge are left unspecified in the classical formulation of the test...

The Wikipedia entry for "Turing Test" observes that it "is based on the assumption that human beings can judge a machine's intelligence by comparing its behaviour with human behaviour." How strongly is this believed at present? (And is the test really a test of the computer's "intelligence", or of the programmer's skill in implementing such conversational characteristics as will make the interaction seem "human"?)

I find it amusing to contemplate a "reverse Turing test": a human trying to respond in such a way as to be perceived as a computer!
spoonless
Apr. 19th, 2009 08:46 am (UTC)

is the test really a test of the computer's "intelligence", or of the programmer's skill in implementing such conversational characteristics as will make the interaction seem "human"?

I'd say it's clearly a test of the computer's intelligence. But it may also be a test of the programmer's skill in having created the framework for that intelligence. Jaron Lanier would disagree--he seems committed to banishing words like "AI" from the field of AI... supposedly to avoid "ideology", even though in doing so he's imposing his own bizarre ideology.
azalynn
Apr. 19th, 2009 07:25 pm (UTC)
Re. Turing tests: I'm skeptical it's that simple. It seems to depend very much on the neurology/assumptions of the person doing the test! When I was a kid my dad had a conversational program on the computer called "Racter", and that thing had me *totally* fooled when I was 6-7 years old. I literally thought I was "talking to the computer" and that it was sentient. I am not saying the problem of determining machine intelligence is intractable, mind you -- just that there seem to be a lot of built-in and often unexamined assumptions in the Turing test idea. Like, for instance, the notion that a computer is "intelligent" if it can make an adult human, or a neurotypical human, even, think it's intelligent. Just stuff to think about.
daze39
Apr. 20th, 2009 06:03 am (UTC)
the Turing test may tell us as much about the judge as the computer under "test"
there seem to be a lot of built-in and often unexamined assumptions in the Turing test idea. Like, for instance, the notion that a computer is "intelligent" if it can make an adult human, or a neurotypical human, even, think it's intelligent.

Yes. That.
I think, for an interesting test, you want several different flavors of judge: children, schizoids, autistics, LSD-trippers, etc., as well as adult NT's...

I'm reminded of an investigative article published years ago (might have been in the SF Chronicle or something, but ICBW about that)... the title, IIRC, was "On Being Sane in Insane Places": a couple of reporters consulted a psychiatrist with complaints of hearing "unclean voices in their heads", got themselves diagnosed as "schizophrenic" and hospitalized, then dropped all pretense of insanity. The staff was oblivious, and continued to view the reporters through the lens of the diagnosis of "insanity": if one of them was taking notes, the psychiatrists weren't interested in asking about what they were writing, but merely recorded observations such as "patient engages in writing behavior"... straightforward practical questions to the nurses were met with "you'll have to bring that up with your doctor in therapy"... and so forth: the staff just completely assumed they were dealing with an actual insane person and interpreted everything accordingly.

The interesting thing is that the other patients caught on to the reporters' deception immediately: "You're not really crazy... what, are you some kind of reporter checking out the hospital or something?" The authors of the article thought it noteworthy that the patients were more perceptive than the staff on this point!

So it would be interesting, as I say, to see if folks other than NT adults would evaluate various conversational entities differently, and at the very least, if they responded to different cues. For example, do a side-by-side comparison with two versions of a given conversational algorithm, identical except that one deliberately inserts random output errors to simulate imperfect human typing... for which of the judges would this increase the chance of a given conversation being considered "human"?
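
That side-by-side comparison is easy to set up in code: keep the conversational algorithm fixed and only vary a post-processing filter that injects human-like typing errors. A rough Python sketch of the idea (the bot_reply stub and the error rates are hypothetical placeholders, not any real chatbot):

    import random

    NEARBY_KEYS = {"a": "sq", "e": "wr", "i": "uo", "o": "ip", "t": "ry", "n": "bm"}

    def humanize(text, error_rate=0.03):
        # randomly substitute a neighboring key or double a character,
        # to simulate imperfect human typing
        out = []
        for ch in text:
            roll = random.random()
            if roll < error_rate and ch.lower() in NEARBY_KEYS:
                out.append(random.choice(NEARBY_KEYS[ch.lower()]))
            elif roll < 2 * error_rate:
                out.append(ch + ch)
            else:
                out.append(ch)
        return "".join(out)

    def bot_reply(prompt):
        # stand-in for whatever conversational algorithm is under test
        return "That is an interesting question. Why do you ask?"

    prompt = "Do you ever make mistakes?"
    print("version A:", bot_reply(prompt))            # clean output
    print("version B:", humanize(bot_reply(prompt)))  # deliberately imperfect

Judges would then rate transcripts from version A and version B, and the interesting result is which judge populations shift their "human" verdicts between the two.
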
spoonless
Apr. 20th, 2009 06:45 am (UTC)
Re: the Turing test may tell us as much about the judge as the computer under "test"

Yes. That. I think, for an interesting test, you want several different flavors of judge: children, schizoids, autistics, LSD-trippers, etc., as well as adult NT's...

Do you really think there are that many AI programs which could pass some of these but not all of these? I guess if someone is really gone on LSD they may not be paying much attention at all or asking totally wrong questions... so I could see it passing for that one just by mistake. But for most adults, even with severe non-neurotypical mental conditions I don't think there would be much variation. For children, I guess it depends on how young you're talking about.

I think it is absolutely undeniable that if a computer could convince me it is a human, then it must have at least human-level intelligence. I know that I am competent enough to judge such things. And I think the vast majority of adults would also be competent in judging that. But admittedly, a much harder question is whether there could be highly intelligent computers that can't pass a Turing test... on that issue I agree there is a lot of gray area and possibility for non-human-like intelligence to go overlooked.
geheimnisnacht
Apr. 20th, 2009 07:44 am (UTC)
Re: the Turing test may tell us as much about the judge as the computer under "test"
I think it is absolutely undeniable that if a computer could convince me it is a human, then it must have at least human-level intelligence.

I think it's arguable that one must first understand human intelligence to be able to know with scientific certainty whether or not a system has it. At least, one must know everything human intelligence is capable of; otherwise you will not know to design a test that covers the unknown capabilities. Are you claiming to know every human mental capability? While this knowledge might not require complete reductionist and phenomenological knowledge of the brain, I still claim we can't say we know.

There are levels of tests that are good approximations, such as, say, seeing if an AI can do everything that a specific human accomplished in their entire history. This is, I think, feasible and a worthwhile comparison. Even though I think Jaron mostly spent time saying things that weren't relevant (such as references to his personal history, making the same claim repeatedly, or claiming Eliezer's statements or questions were "fundamentally flawed" or "on shaky ground" without justifying that claim), he seems to have characterized (fuzzily) the flaw I believe you are presenting here: overconfidence in what you think you know.

If Jaron had made the clear point "people are probably too stupid to handle the belief in the singularity" perhaps I could agree. However, the "belief" itself is not the issue. It's the "guns cause murder" stance. The belief in the singularity here is useful in that it should be considered as a potential outcome and properly addressed. Eliezer was alluding to this. So while to me it seems likely that Jaron is indeed right (a la "too stupid to handle..."), he was arguing it incorrectly and perhaps Eliezer is stuck fighting Jaron's imprecision. In the end, to me it seems a human capacity question which cannot be easily resolved.
spoonless
Apr. 20th, 2009 06:12 pm (UTC)
Re: the Turing test may tell us as much about the judge as the computer under "test"

I think it's arguable that one must first understand human intelligence to be able to know with scientific certainty whether or not a system has it.

I disagree. Intelligence is a word made up by humans to describe a certain vague set of intellectual abilities. That set has never been well-defined and never will be, nor ever could be. That's why I think Jaron's point about human IQ tests is such a good one... there's just no way to quantify intelligence at such a fine resolution, since it's just a word we use to describe things that are generally recognized as "intelligence".

But Eliezer also had a crucial point, which I've heard him discuss in more depth in the past, which is that using a human IQ test to measure an AI would be like using a "Fly Q" test calibrated on pigeons to measure how well an airplane flies. The kind of IQ test that should be administered to see if something has human level intelligence is just trying to distinguish the huge and obvious gap between humans and other creatures such as monkeys, birds, fish, or current computers. There's nothing that comes anywhere close to human intelligence yet, but it will be obvious to everyone if there is something that does. Identifying the presence of human-level intelligence is a much, much easier task than talking to someone and trying to judge whether their IQ is closer to 40 or closer to 140, which I assume most people can also do.

As I mentioned in my earlier comment, I do think there's a real possibility that a highly intelligent AI could be developed which has lots of intellectual faculties that exceed human capabilities, and yet for some reason lacks the ability to emulate human-style thought and therefore could not pass a Turing test. But I really don't think there's any way the opposite could happen... that something could successfully emulate human thought and yet not have its own intelligence equal or superior to human thought. Remember that intelligence is purely about abilities, not about internal states like some people say consciousness is about... demonstrating that they can accomplish the same intellectual feats as humans is direct proof that they have that kind of intelligence. They may also have other kinds of intelligence that humans don't have, or they may not... but you know at least that they have human-style intelligence because they can accomplish the tasks that humans can.
geheimnisnacht
Apr. 20th, 2009 07:14 pm (UTC)
Re: the Turing test may tell us as much about the judge as the computer under "test"
I didn't intend to mean anything specific by "intelligence"; I could have said "I think it's arguable that one must first define the capabilities of the human mind".

That set has never been well-defined and never will be

How can you possibly say that? It seems you imply that research into the human mind will hit a wall and leave some things unknowable? I certainly think it's likely that in the far future the human mind can be functionally mapped out, with performance benchmarks for the multitude of actions it can take. However, depending on the levels at which the brain is a chaotic system or involves quantum mechanics, any benchmarks may prove inaccurate. Yet we'll still be able to say more than what I think you are claiming is the limit of our knowledge.

Eliezer's point only holds so much water. In some senses it's true: perhaps a supposed AGI may operate differently than humans and require a different testing strategy. For the pigeon test, the 747 will obviously fail if it needs to flap its wings. However, if the point is to see whether an AGI can perform like a human, then this is like the 747 needing to perform like a pigeon, and if it can't flap then it's not a pigeon. In this way, an AGI with a different type of intelligence may fail a Turing test, as you mention.

Anyway, you've missed my main point. I agree with your last paragraph except where it runs into what I'm arguing: what is the set of tasks that humans can accomplish? You seem to have assumed that this is trivially known. Like I said before, good approximations can be made, and my point is that until we understand humans better, we only have approximations.
spoonless
Apr. 20th, 2009 07:41 pm (UTC)
Re: the Turing test may tell us as much about the judge as the computer under "test"

It seems you imply that research into the human mind will hit a wall and leave some things unknowable?

Not at all. I'm just saying there is nothing to be known there. On the spectrum of vague folksy words to legitimate scientific parameters in the brain, I'd say consciousness is very much on the folksy side, whereas intelligence is somewhere in the middle.

http://en.wikipedia.org/wiki/Folk_psychology

If we had a perfectly accurate description of the brain, we wouldn't need vague words like "belief", "consciousness", "desire", or to some extent "intelligence". Intelligence seems slightly more quantifiable than these other things, which I think is what prompted people to make IQ tests. But Jaron's point is crucial: the tests claim far more resolution than you can actually get out of trying to quantify such a thing. To the extent you make a statement like "a human is smarter than a rat", I think it's a very well-defined, measurable thing. Once you start trying to compare the different ways in which individual humans think, it becomes pretty ill-defined and there is nothing more to be said.

Is shooting free throws a form of intelligence? According to some people, sure. According to others, no. There is no objectively correct answer. It's not that our knowledge is limited at all; it's just that intelligence is (somewhat) folk psychology and can be eliminated in a more precise description of the brain.

Anyway, you've missed my main point. [...] what is the set of tasks that humans can accomplish? You seem to have assumed that this is trivially known.

I don't believe there is any such (perfectly defined) set. There are no hard limitations on what humans can accomplish, and we accomplish more and more as new humans are born. Nevertheless, there is a fuzzily defined set of things that humans can do, some of which some people regard as intellectual abilities (intelligence) and others of which people regard as motor, emotional, artistic, etc. abilities. I think if you stick to the core of what most people consider "intelligence" it's something that is easily recognizable when it comes to distinguishing humans from non-humans. It's only when you assume a fictitious high resolution in the definition that you run into disputes and it becomes subjective.
geheimnisnacht
Apr. 20th, 2009 08:34 pm (UTC)
Re: the Turing test may tell us as much about the judge as the computer under "test"
I'm just saying there is nothing to be known there

Consider this scenario: medical science pinpoints where and how simple arithmetic is performed in the brain. It turns out that there is a strong correlation between some structural feature of the neurons and speed of mental calculation. To me, this is knowledge about one of the capabilities of the human mind that provides a well-defined benchmark.

On the spectrum of vague folksy words

For the purposes of our discussion, why do we care what everyday people define as intelligence? I'm trying to cut through the bullshit and you respond "but everyone else uses bullshit terms". In the example above on arithmetic, I don't care who defines it as intelligence or not. I think we can agree that it is one of the mind's capabilities though.

Nevertheless, there is a fuzzily defined set of things that humans can do

As far as I can tell, you are agreeing but don't know it. What I'm saying about approximations is essentially that, yes, we have a fuzzy definition of human ability and can use that to apply a test. Of course, this test would clearly indicate that the subject was fuzzily human. What you said a few posts back was, to me, a stronger statement:
"I think it is absolutely undeniable that if a computer could convince me it is a human, then it must have at least human-level intelligence"

To me, it would be "deniable" if there existed any ability that humans commonly have which we are not aware of or simply can't test.

spoonless
Apr. 21st, 2009 01:03 am (UTC)
Re: the Turing test may tell us as much about the judge as the computer under "test"

Consider this scenario: medical science pinpoints where and how simple arithmetic is performed in the brain. It turns out that there is a strong correlation between some structural feature of the neurons and speed of mental calculation. To me, this is knowledge about one of the capabilities of the human mind that provides a well-defined benchmark.

Of course. My point is that once we understand these specific abilities, we will have no use for vague words like "intelligence". You seem to think that intelligence is this thing that exists independently from the collection of mental abilities (like speed of arithmetic). I think that it does refer to some loose set of abilities, but there is no way of making that rigorous because nobody will ever agree on which set of abilities is supposed to be included. It doesn't matter.

For the purposes of our discussion, why do we care what everyday people define as intelligence? I'm trying to cut through the bullshit and you respond "but everyone else uses bullshit terms". In the example above on arithmetic, I don't care who defines it as intelligence or not. I think we can agree that it is one of the mind's capabilities though.

I don't think it's a bullshit term; I think it only becomes bullshit if you try to make it refer to some exact, precise thing. There is no such exact, precise thing... intelligence is fundamentally not like that. I would say the same thing about any non-scientific word... take love, for example. It's not that there is anything bullshit about love; it's just that if you think we can measure it on a scientific meter, you're kidding yourself. Sure, you can measure the amount of pupil dilation and stuff, and that correlates fairly well with people's idea of what being in love is... but it's never going to capture it exactly. With intelligence, we can do a bit better and get a fairly large complex of abilities that seem to positively correlate with each other, and that people generally agree represent their idea of intelligence. That's what IQ is. But I personally don't think it can get much better than that... and even IQ is an exaggeration.
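
That "complex of positively correlating abilities" is easy to illustrate: given each person's scores on a few subtests, the IQ-style claim amounts to the observation that the pairwise correlations all come out positive. A toy Python sketch (the subtests and scores are fabricated for illustration; statistics.correlation needs Python 3.10+):

    from statistics import correlation

    # made-up subtest scores for five people
    scores = {
        "vocabulary":   [12, 15, 9, 14, 11],
        "arithmetic":   [10, 14, 8, 13, 12],
        "block_design": [11, 13, 7, 15, 10],
    }

    tests = list(scores)
    for i, a in enumerate(tests):
        for b in tests[i + 1:]:
            r = correlation(scores[a], scores[b])
            print(f"corr({a}, {b}) = {r:+.2f}")

For data like this, every pair correlates positively, which is roughly the structure an IQ score summarizes; note that nothing in the exercise tells you which abilities belong in the set in the first place.
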

To me, it would be "deniable" if there existed any ability that humans commonly have which we are not aware of or simply can't test.

So your definition of intelligence is "all common human abilities"? That's not my definition, and I don't think many people would agree with you. Just because most people can catch a fly ball or twiddle their thumbs does not mean that an AI of human-level intelligence will have these abilities. All it needs is the intellectual faculties, which can be ascertained by having a remote conversation with it. Indeed, the whole reason Turing designed the test to be remote is specifically to exclude incidental abilities which humans happen to have but which are not relevant to intelligence... for instance, it shouldn't count against a machine that it can't morph into the shape of a human and fake a smile. Of course, there may be other incidental abilities that should not be counted as intelligence but would be required in order to pass the Turing test. But my statement is simply that *if* it passes, and can successfully survive answering any question I can ask it that might tip me off that it lacks human intelligence (and incidentally, many of the questions I would include in my evaluation would be questions you could find on an IQ test), then it must have at least human intelligence (and probably much greater). I think it takes a lot to pass a well-designed Turing test, and while there is danger of false negatives, there is no danger of a false positive if it is administered properly.
geheimnisnacht
Apr. 21st, 2009 01:37 am (UTC)
Re: the Turing test may tell us as much about the judge as the computer under "test"
You seem to think that intelligence is this thing that exists independently from the collection of mental abilities

Nope. In my first post I mention "intelligence" alongside "mental capabilities". I agree the grouping into "intelligence" is subjective; I only used the word for continuity and for lack of better options. Let's just not use the word. So let's refer instead to the rigorous set of abilities that will eventually be discovered through brain study. Call it the Mental Ability Group (MAG). Let's say MAG includes everything, including motor control. So to refer to the ones I'm assuming we're interested in, essentially just reasoning abilities, let's call that MAG-R.

So do we agree that we only have partial knowledge of MAG-R? We have bad definitions of abilities that may be in it, and don't know if we are missing other abilities that are in it. If we agree, continue below.

Now to my point again. In order to make a foolproof Turing test, one that gives neither false negatives nor false positives, I say it is necessary to have full knowledge of MAG-R. It may not be sufficient (if for some reason there does not exist such a test). False negatives may occur if we include in the test something that is not actually in MAG-R (but we thought it was). False positives may occur if we do not include something in the test which IS in MAG-R (we weren't aware of the ability, or mis-attributed its nature).


spoonless
Apr. 21st, 2009 04:13 am (UTC)
Re: the Turing test may tell us as much about the judge as the computer under "test"

Call it the Mental Ability Group (MAG). Let's say MAG includes everything, including motor control. So to refer to the ones I'm assuming we're interested in, essentially just reasoning abilities, let's call that MAG-R.

Okay. As long as you are not saying that we could ever separate off "reasoning abilities" cleanly from everything else. That is, as long as the boundaries of this subset are sufficiently fuzzy. I'm not sure that matters for the rest of what you're saying, but I figured I'd point it out just in case.

So do we agree that we only have partial knowledge of MAG-R? We have bad definitions of abilities that may be in it, and don't know if we are missing other abilities that are in it. If we agree, continue below.

Agreed.

Now to my point again. In order to make a foolproof Turing test, one that gives neither false negatives nor false positives, I say it is necessary to have full knowledge of MAG-R. It may not be sufficient (if for some reason there does not exist such a test). False negatives may occur if we include in the test something that is not actually in MAG-R (but we thought it was). False positives may occur if we do not include something in the test which IS in MAG-R (we weren't aware of the ability, or mis-attributed its nature).

I feel like this is sort of missing the point of the Turing test by insisting it be far too rigorous. I don't know why anyone would care about whether a machine could exactly reproduce all of MAG-R. That seems irrelevant and uninteresting. The question is whether it has a level of intelligence that is roughly on par with humans... that's all. If you're worried about something more specific, then I don't think you're talking about a Turing test, you're making up your own test.
geheimnisnacht
Apr. 21st, 2009 07:44 am (UTC)
Re: the Turing test may tell us as much about the judge as the computer under "test"
I feel like this is sort of missing the point of the Turing test by insisting it be far too rigorous.

The problem I had was with this sentence, which I claim is far too rigorous:

I think it is absolutely undeniable that if a computer could convince me it is a human, then it must have at least human-level intelligence.

To say this, I feel you need the kind of rigor I've proposed.
spoonless
Apr. 21st, 2009 05:50 pm (UTC)
Re: the Turing test may tell us as much about the judge as the computer under "test"

I feel like this is sort of missing the point of the Turing test by insisting it be far too rigorous.

The problem I had was with this sentence, which I claim is far too rigorous:

I think it is absolutely undeniable that if a computer could convince me it is a human, then it must have at least human-level intelligence.

To say this, I feel you need the kind of rigor I've proposed.

hmmm... I did not intend for it to be rigorous in that way. But in retrospect I wish I had used another word besides "absolutely". What I was trying to convey was that I disagree strongly with anyone who thinks that something that passes a Turing test (that I administer) is not truly intelligent... and I don't think it makes sense for them to deny it. I guess this gets into the question of when it's appropriate to use words like "absolutely". And I'll admit that it was a bit too strong a word to use in this context.
spoonless
Apr. 21st, 2009 01:13 am (UTC)
Re: the Turing test may tell us as much about the judge as the computer under "test"
I just thought of one more way to put things to you, so you understand what I'm saying.

Imagine I charged you with the following task: you have to chat with 10 random people over the internet. 5 of them are Nobel Prize winners and 5 are severely mentally handicapped. Do you think you could identify which 5 are which?

Sure... maybe if you did it 500 times, you might make 1 mistake. But now think of the much easier task, of just identifying whether the person/thing you're chatting with is anywhere in that most general ballpark (all the way from the dumbest human you can imagine to the smartest human). Essentially, you are just trying to discern if they are human at all, or if they are just a lower animal or a simplistically programmed machine. Do you think you could do it? I claim, you could do it every time, no problem.
geheimnisnacht
Apr. 21st, 2009 03:09 am (UTC)
Re: the Turing test may tell us as much about the judge as the computer under "test"
I don't disagree with this! The problem is this example doesn't address the issue we're debating. Consider the following.

Deep Blue beat Kasparov at chess. Jaron talks about face recognition software that outperforms him. Mathematica is much better at calculus than we are. If we find enough systems like these and paste them together into some sort of interface, will they pass a Turing test?



spoonless
Apr. 21st, 2009 04:22 am (UTC)
Re: the Turing test may tell us as much about the judge as the computer under "test"

I don't disagree with this! The problem is this example doesn't address the issue we're debating. Consider the following.

All I did there was give a concrete example of what I was saying in my original statement that you responded to. To me, your question about chess doesn't seem relevant to the issue we've been debating. So perhaps we've just been debating two different issues all along? It wouldn't surprise me =)

Deep Blue beat Kasparov at chess. Jaron talks about face recognition software that outperforms him. Mathematica is much better at calculus than we are. If we find enough systems like these and paste them together into some sort of interface, will they pass a Turing test?

I think these systems are evidence that machines are already somewhat intelligent (obviously Jaron would disagree with me there). But they still lack some other important kinds of intelligence that humans have. As to your question about whether pasting together more systems like this will eventually result in something that could pass a Turing test, I think it depends on what you mean by "like this". Those are all narrow AI type tasks. I agree with Eliezer and Goertzel that what we really need is AGI, something that can match humans at learning entirely new frameworks rather than just operating in an existing framework. So the intelligence we have that machines lack appears to have something to do with flexibility. If adding such a mental ability is considered another system like the ones you mentioned, then yes. If it counts as something special or more general, then no.
spoonless
Apr. 20th, 2009 07:46 pm (UTC)
Re: the Turing test may tell us as much about the judge as the computer under "test"
To put it yet another way, in terms of Jaron's analogy to movie rating... I think what you're arguing here is akin to saying "we need to understand much more about movies in order to say whether something is a good movie". No, there's nothing to understand there. What is a good movie, to the extent it's well defined, is defined by our reaction to it and by the consensus. People recognize it when they see it, but can't quantify all the parameters that go into that judgement. Admittedly, I think movies are a bit *more* subjective than intelligence, but not too much more.
ankh_f_n_khonsu
Apr. 19th, 2009 07:52 pm (UTC)
Eliezer strikes me as a severely repressed individual.
ankh_f_n_khonsu
Apr. 19th, 2009 09:03 pm (UTC)
My previous comment was made prior to watching the debate, based purely on my prior exposure to Eliezer's work. Now, having watched the debate and accumulated more data, I can speak with greater confidence: Eliezer's severely repressed. He's brilliant, but not wise. Lanier might as well have been trying to convert the Pope. He wasn't always as successful at illustrating Eliezer's hubris as might've been necessary.
azalynn
Apr. 20th, 2009 04:26 am (UTC)
Oh wow, that's a pretty astute way of putting it: EY has high INT/low WIS. Whereas JL has apparently learned some WIS along the way.
spoonless
Apr. 20th, 2009 06:09 am (UTC)
I've never been quite sure what people mean by the term "wisdom" as differentiated from intelligence. The most sensible interpretation I can come up with is that intelligence refers to your innate potential, like how good the hardware is you're running on... whereas wisdom refers to accumulated knowledge... how many facts you've picked up along the way. So intelligence would remain fixed during your life, while wisdom would increase with age, presumably faster if you have a greater number of important, new or interesting experiences.

Is this what you guys mean when you say "wisdom" or something else? If it is what you mean, are you saying that Jaron has had more life experience which has given him a better perspective in some ways? Do you think certain people (like possibly Eliezer) spend too much time in their own head and don't end up getting enough "real life experience"?
azalynn
Apr. 20th, 2009 06:37 am (UTC)
Something like that. Or maybe "crystallized" vs "fluid" intelligence might apply (I think one accumulates whereas the other is more based on one's permanent neurostructure).

Personally? I've read heaps of Eliezer's stuff, some of it long before I even knew who he was (like, I typed "What is the meaning of life?" into a computer when I was around 19, got his "Meaning of Life FAQ", and had this whole weird "holy crap I'm not the only one!" experience. Which was odd). And I think he's definitely learning over time. I can relate to him in some respects very strongly, especially in the area of apparently really, really needing to go THROUGH learning processes and not just be told "this is how things are" by others. He strikes me as someone who badly needs to learn stuff in his own, unconventional way and isn't very easy to "teach" stuff to, and I'm very similar in that regard.

However, AS someone like that, I also know that there have been times where I've *thought* I had something all figured out, only to have that completely shattered, at which point I realize some of the people I was thinking of as "naysayers" were right all along (and aren't saying what I thought they were, either).

Given that, I have been sort of compelled to seek out and try to understand the stuff people like Jaron are saying. Jaron strikes me as someone worth listening to because he's covered the kind of territory that I (to some extent), and probably EY, and probably a ton of other ambitious nerdy types are still swimming in. There was a point sometime last year (or slightly before) when my brain did kind of a flip-thing and I suddenly was able to see all kinds of handwaving in places that had previously looked solid. And that has made me think, upon hearing people like Jaron speak at certain levels of vagueness, that maybe vagueness really is the best we can do in some areas, at least given what is known right now.

Okay, I don't know if any of that made sense and I really need to go to bed, but those are my thoughts for now.
spoonless
Apr. 20th, 2009 07:14 am (UTC)

Something like that. Or maybe "crystallized" vs "fluid" intelligence might apply (I think one accumulates whereas the other is more based on one's permanent neurostructure).

So which is which, then? If it's something like what I was saying, then wisdom is the one that is accumulated knowledge, so presumably that would be crystallized intelligence, while "intelligence" would be fluid intelligence. Except that I'm wondering now whether both of you would assign Jaron the higher crystallized and Eliezer the higher fluid, or whether one or both of you might assign it backwards... Jaron higher on fluid and Eliezer higher on crystallized. If you each assigned them differently, that would prove to me that the word "wisdom", which presumably got used to communicate an idea that you both agreed on, was practically meaningless... or at least meant completely different things to the two of you.

However, AS someone like that, I also know that there have been times where I've *thought* I had something all figured out, only to have that completely shattered, at which point I realize some of the people I was thinking of as "naysayers" were right all along (and aren't saying what I thought they were, either).

*nods*, yeah unfortunately I'm the same type of person. And that has happened to me as well... politics being the first example I can think of. It's amazing how things people were saying to me a long time ago suddenly make sense after having shifted enough of my worldview to be able to understand it.

Jaron strikes me as someone worth listening to because he's covered the kind of territory that I (to some extent), and probably EY, and probably a ton of other ambitious nerdy types are still swimming in.

Wait, what makes you think Jaron has "covered" this territory while Eliezer is still "swimming in it"? They've clearly both given it a lot of thought and come to different conclusions. Do you agree with Jaron's conclusions more, and if so is it possible that's why you're saying he's covered the territory (and come to the right conclusions?) or are you saying it because Jaron actually has more experience thinking about these questions?
geheimnisnacht
Apr. 20th, 2009 07:52 am (UTC)
Jaron strikes me as someone worth listening to

Seemed the opposite to me; he spent more time hand-waving and making unfounded claims, or just giving irrelevant anecdotes. The test here would be to try and logically organize what each said during the hour. My hunch is that Jaron has less of a logical hierarchy supporting his statements.
spoonless
Apr. 20th, 2009 06:17 am (UTC)
Thanks for the comment. I was curious to see how you would respond to said video. I'm not surprised that you identify more with Jaron.

I just wrote out a question about wisdom to azalynn below. I think the issue of what wisdom means may have come up before between you and me, but I can't recall what you said about it. At any rate, the question I ask her is also directed at you... what do you mean by wisdom as differentiated from intelligence (I give my own impression of what the distinction is below)?

Also, regarding repression... do you sense that there are specific things Eliezer is repressing, like there is a hidden personality of his just waiting to get out if only he didn't keep it at bay? What if certain people just prefer to operate within more narrow bounds... and they just seem repressed to you because they aren't expressing the things you'd be expressing if you picture yourself in their shoes?
ankh_f_n_khonsu
May. 1st, 2009 02:57 am (UTC)
Sorry for the delay in responding... it temporarily fell through the cracks...


I think the issue of what wisdom means may have come up before between you and me, but I can't recall what you said about it. At any rate, the question I ask her is also directed at you... what do you mean by wisdom as differentiated from intelligence (I give my own impression of what the distinction is below)?

I'm fairly certain it's come up previously too - I think I probably drew a dichotomy between vicariously imbued awareness and gnostic understanding. Intelligence, as I'm using it here, is a function of that which can be taught and learned. Wisdom, on the other hand, can only be learned. (Eliezer would probably scoff at that contradiction.) From another angle, intelligence could be looked upon as incremental literal awareness, but wisdom directly perceives figuratively and holistically. If that comes across as gobbledygook, we'd find ourselves in familiar territory. ;)


do you sense that there are specific things Eliezer is repressing, like there is a hidden personality of his just waiting to get out if only he didn't keep it at bay? What if certain people just prefer to operate within more narrow bounds... and they just seem repressed to you because they aren't expressing the things you'd be expressing if you picture yourself in their shoes?

I think it would be irresponsible of me to speculate on the nature of his 'hidden repressions' in the absence of prolonged intimate exposure. Nonetheless, repressions - sublimations of self - often dominate interpersonal transactions. People don't hide their demons - they embody them. When you learn the language of demons, they don't hide so well any more. I think this is a common skill among those who walk the path of interpersonal discovery, and in my experience this has certainly been the case.

The concept of wisdom often proves vexing for intellectuals. You're in good company. :)
spoonless
May. 1st, 2009 04:27 am (UTC)

If that comes across as gobbledygook, we'd find ourselves in familiar territory. ;)

Alas, I fear we are in familiar territory. Phrases like "vicariously imbued awareness" and "gnostic understanding" don't mean much to me.

Intelligence, as I'm using it here, is a function of that which can be taught and learned.

This is at odds with my own use of intelligence, which I believe is the more standard usage, where intelligence is not something which can be taught or learned (like knowledge) but an innate cognitive ability that one is either born with or not born with.
ankh_f_n_khonsu
May. 1st, 2009 04:52 am (UTC)
Phrases like "vicariously imbued awareness" and "gnostic understanding" don't mean much to me.

One implies learning through symbolic representations of actual experience, and the other involves direct experience.


This is at odds with my own use of intelligence, which I believe is the more standard usage, where intelligence is not something which can be taught or learned (like knowledge) but an innate cognitive ability that one is either born with or not born with.

Yes, "knowledge" is fairly apt. In that sense, intelligence involves the manipulation of knowledge. However, knowledge doesn't catalyze wisdom and neither does intelligence.

Stripping it down to its bones and framing it in academic semantics, I'd say this is a fairly appropriate characterization of my understanding of wisdom: "Wisdom is the use of one’s intelligence and experience as mediated by values toward the achievement of a common good through a balance among (1) intrapersonal, (2) interpersonal, and (3) extrapersonal interests, over the (1) short and (2) long terms, to achieve a balance among (1) adaptation to existing environments, (2) shaping of existing environments, and (3) selection of new environments." (link)

For a more scholarly survey of the topic, you might see here.

