
Singularity Summit, part three

This is a continuation of my summary of the Singularity Summit last Saturday. The first speaker was Ray Kurzweil, whom I described in part two. Here are my thoughts on the next few speakers...

Peter Thiel:

Thiel actually spoke before Kurzweil, but I didn't count him since he wasn't one of the 12 official speakers; he just introduced them. Thiel is the former CEO of PayPal. I liked his introduction a lot and thought he made some great points. He used to spend a lot of time playing chess, and while he was in college he often heard people say things like "computers will never be as smart as humans, because they could never do something that takes human creativity and ingenuity, like playing chess." As computers gradually got better and better, people changed their story about what computers could or could not be expected to do. By the time Deep Blue was good enough to compete head to head with Kasparov, Kasparov said before their great match that he was "defending human dignity." Deep Blue won that match, taking two games to Kasparov's one (with three draws), and nowadays computers are consistently superior to the best human chess players. But the response has not been for people to accept computers as intelligent, but rather to lose interest in chess. Since there is a pervasive anti-AI bias in society [IMO this will be appropriately called "racism" soon, but we're not quite there yet], people just assume that if a computer can play chess then "it must not be all that hard". There's this feeling that humans are the best possible species ever, the center of the universe, and couldn't possibly be beaten by anything smarter. He had a lot more to say on this topic, and I thought it was all wonderfully put. Nice intro.

Douglas Hofstadter:

After Kurzweil spoke, Douglas Hofstadter took the podium. He had some entertaining cartoons, and some positive and some negative things to say in response to what Kurzweil had said just before him. While he's a very cool guy, and I really enjoyed reading Gödel, Escher, Bach when I was little, and he's even published books with Daniel Dennett, I was a bit disappointed with him. He made one clearly invalid argument against something Kurzweil said about the amount of information needed to specify the design of a human brain; as one person put it to me afterwards, "it sounded like he just had a basic misunderstanding of Claude Shannon and information theory". I have a lot of respect for Hofstadter, so I'd rather believe he just misunderstood what Kurzweil was saying, but either way his argument held no water. He did, however, have a couple of other arguments against Kurzweil that did hold water. One is that Kurzweil should not be talking about a "knee" of an exponential, which he apparently does several times in his book: an exponential curve has no distinguished knee, since rescaling the axes makes any point on it look like one. I'd completely agree with that. Another thing he mentioned is that if each of Kurzweil's assumptions is only 60% likely to be correct, then multiplying them all together might leave something more like a 1/1000 chance that his overall conclusion is correct. While I think Hofstadter is exaggerating here a bit, he has a good point, and this is why I say Kurzweil is far too overconfident in the predictions he's made... even if he did the best job he could in making them. The only other critical thing Hofstadter had to say was that "there's a lot of handwaving going on", which I'd also agree with. He said that in most places there was no particular claim of Kurzweil's he could find a hole in, but the overall argument seemed a little shaky to him, and he challenged scientists to come forward and make a "serious criticism" of Kurzweil's work, which nobody has done yet.
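Hofstadter's compounding-confidence point is just arithmetic, and it's easy to check. The sketch below is mine, not from the talk: I'm assuming (purely for illustration) that Kurzweil's forecast rests on some number n of roughly independent assumptions, each about 60% likely to hold, and multiplying them out.

```python
def joint_confidence(p: float, n: int) -> float:
    """Probability that all n independent assumptions hold,
    if each holds with probability p."""
    return p ** n

# With ~60% confidence per assumption, joint confidence collapses fast:
for n in (5, 10, 14):
    print(f"n={n:2d}: {joint_confidence(0.6, n):.5f}")
# At n=14 the product is already below 0.001, i.e. roughly
# the "1/1000" figure Hofstadter threw out.
```

Of course the real assumptions aren't independent and don't all sit at exactly 60%, which is presumably why I'd call Hofstadter's number an exaggeration rather than a calculation; but the qualitative point, that chained uncertain premises erode confidence multiplicatively, stands.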

Nick Bostrom:

While he's clearly my favorite person out of all of the speakers, and I think by far the brightest, his talk was very disappointing. He talked about "existential risks" to humanity (such as nuclear holocaust or pulling the plug on the simulation), and it just came across as mostly boring and only very marginally relevant to AI or the singularity. Yes, we need to think about these things, but I felt like most of the other talks held my interest more and most people I talked to afterwards agreed. Bostrom has a lot more interesting things to say, but he probably wanted to make rigorous arguments for them and decided he couldn't do that in the 20 minutes allotted. That's cool, he's still my hero. And as I mentioned in part one, my conversations with him afterwards were the highlight of the whole summit for me personally.

Sebastian Thrun:

He gave an exciting account of the automated cars he's designed, which navigate themselves around obstacles with no human input. Lots of neat video of training them, and racing them.

Cory Doctorow:

Talked about why we need to get rid of Digital Rights Management. Cory works for the EFF which is an organization I've supported for years (not financially, aside from two t-shirts of theirs I used to wear, but verbally). His speech was great, even if in my case he was "preaching to the choir". Kurzweil argued with him a bit afterwards, as Kurzweil is more in favor of intellectual property rights. I've always been opposed to intellectual property rights of any kind, which is probably a more radical position than Doctorow's, but DRM is certainly among the worst of the problems with intellectual property so it's a good one to convince people on. Well done, Cory! Sat across from him at dinner, talked about a strange kind of psychological disorder where people can't recognize brands.

I guess I'll save the rest for part four, as I have to go to school now. Speakers I haven't gotten to yet: Eric Drexler, Max More, Christine Peterson, John Smart, Eliezer Yudkowsky, Bill McKibben, moderated discussion, Q&A. I also took some pictures which I haven't put up yet.


( 2 comments — Leave a comment )
May. 19th, 2006 05:19 pm (UTC)
Heh. We're doing things in reverse. I met Hofstadter when I was little, and I'm just now getting around to reading GEB. Hopefully you got to talk to him; he's actually a neat person to talk to, and some of his ideas about AI really intrigued me when I was young(er). Still do, actually. AI is just neat. :)

A lot of people seem to think that computers/AI could never develop "human creativity." I'm not so sure of that. I think if true neural nets can be created and then they become organic on their own (probably a poor choice of words; I'm not all up on the lingo here) that they could certainly develop that capacity. After all, Dan says all we are are robots made of robots made of robots, which are made of proteins and such. ;) I have a hard time at times conceptualizing myself as nothing more than proteins and electronic impulses, but if that's the case, then why couldn't AI develop the same way we do?

May. 20th, 2006 06:29 pm (UTC)

I'd like to comment on the "all we are are robots" comment. While I agree with the statement taken literally, I don't like to say it like that because I feel like it's derogatory toward robots. Kind of like if I say to someone "all you are is JUST a woman"... it might be true technically, but somewhat offensive to say... since it sort of implies "being a woman" is a bad thing. Similarly, I don't like when people say "we are just robots" because it makes it sound like being a robot is a bad thing. The robots we have around today are inferior to people, but the robots which will be around tomorrow will be vastly superior. We just happen to be somewhere in between. In the future, I expect being a robot will be a compliment. So it's all a matter of perspective.