You are viewing a single comment's thread from:

RE: I'm a tenured philosophy professor

in #philosophy · 8 years ago

Since reading:
https://steemit.com/anarchism/@ai-guy/3gjiju-killer-robots-artificial-intelligence-and-human-extinction
a couple of days ago, I've been revisiting my thoughts on "benefiting from AI without getting destroyed by it." That said, I am an optimist and believe in the goodness of most individuals, but I don't hold as much ethical esteem for the financial gatekeepers of these types of projects. I don't have answers for this yet (outside the possibility of embedding abstract feeling/emotion into AI).

I'd love to hear about a question/problem/issue you're currently working on, or one you've found interesting or counterintuitive. It seems like it would be a great springboard for further discussion.
Welcome to Steemit @spetey!


Ha thanks for the kind welcome!

As you might have seen in the comments on that thread you linked to, I just wrote a paper on superintelligence and AI extinction, to be published soon in an anthology called Robot Ethics 2.0. My thesis was roughly that any AI with "complex" goals will have to learn those goals, and learning goals is tantamount to reasoning about goals - enough to make room for ethical reasoning even in an AI with quite different values, such as one designed to maximize paperclips. There are a lot more moving parts to the argument (sadly), but that's the gist of it. This is some reason to be optimistic, I think, but I have to say Bostrom's arguments worry me.

Perhaps my most "counterintuitive" and still accessible argument is also in robot ethics - in the first Robot Ethics anthology, I argued that it would actually be okay to make intelligent, ethically valuable robots who want to be our servants. (Of course I'm giving away my identity here, but it was hardly a secret for anyone who wanted to check anyway.)

What I think of as my main work these days is a formal model of Ockham's razor based on algorithmic complexity. A related project, it turns out, concerns an issue philosophers call the special composition question: when do a bunch of things get together in the right way to make ("compose") a new thing? The standard answers to that problem are totally counterintuitive; basically, they are:

a) always - there are tables, and cats, and table-cats
b) never - there are no tables or cats, just fermions and bosons (or strings or whatever's at the bottom)
c) when the things together make up something living - there are cats but no tables

Here I actually try to defend a much more intuitive answer against these standard responses. According to my theory, there are cats and tables but no table-cats. The question may sound totally crazy and obscure, but it's the kind of question you're driven to when you start with "real world" questions - such as whether and when abortion is ethical - and try to answer them as rigorously as possible.
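To give a rough flavor of the Ockham's razor idea mentioned above: the intuition is that among hypotheses fitting the data equally well, we should prefer the one with the shortest description. Kolmogorov complexity itself is uncomputable, so the sketch below (my illustration, not the actual model from the paper) uses compressed size as a crude, computable stand-in:

```python
import zlib

def description_length(data: bytes) -> int:
    # Compressed size as a rough, computable proxy for
    # Kolmogorov complexity (which is uncomputable in general).
    return len(zlib.compress(data, 9))

def ockham_prefer(hypotheses: list[bytes]) -> bytes:
    # Razor as tie-breaker: among hypotheses that fit the data
    # equally well, prefer the one with the shortest description.
    return min(hypotheses, key=description_length)

# A highly patterned "hypothesis" compresses far better than a
# patternless one of the same length, so the razor selects it.
simple = b"ab" * 100
messy = bytes(range(200))
assert description_length(simple) < description_length(messy)
print(ockham_prefer([messy, simple]) == simple)  # True
```

This is only a toy: real compressors miss lots of structure (e.g. zlib cannot exploit the arithmetic pattern in `bytes(range(200))`), which is exactly why a formal model has to be more careful than this.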

More of a response than you wanted maybe, but thanks for your interest!