It begins: Robot and Frank, a positive film portrayal of AI

(via Boing Boing) Frank is a senior citizen who is starting to lose the ability to take care of himself.  Instead of putting him in a home, his daughter gets him a care robot with an advanced AI.  At first he resists, but they become friends, and partners in a jewel heist.

This isn't exactly a positive portrayal, despite what I said in the headline, but at least it's a step past the "Enemies of all humans" portrayal of AI in most media in the last decade.  That trope is dealt with early on -- Frank reacts to the robot at first by saying, "You have got to be kidding me.  That thing is going to murder me in my sleep."

But unless the trailer is a horrible, horrible lie, it doesn't.  Instead, they grow close, and Frank trains Robot in the ways of his past career as a jewel thief.  I don't know how that's going to be portrayed -- whether the robot will come across as naively incapable of telling right from wrong (a negative portrayal), as particularly susceptible to criminal acts (also a negative portrayal), or whether the film handles the complex realities of relationships, friendship, injustice and property rights by having the robot make a decision influenced by its immediate peers and by a sense of compassion (a positive portrayal).

I've got my fingers crossed it'll be the latter, and I look forward to seeing this film as soon as possible.  Here's the trailer:

Google brain: Woo!

(via SourceFed) Google is doing the best thing in science yet.  They're creating a "brain-styled neural network," which they're feeding random images from YouTube videos.

So far, the computer knows what a cat is.  That's awesome.  (It's also fitting that that's what it learned from YouTube.)  This isn't really the first step toward artificial intelligence -- Google took that first step a long time ago -- but it's a big one, and it means we might be close to seeing a singularity-like event.
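To make the idea concrete: the interesting part isn't that the computer was told "this is a cat," it's that categories can emerge from unlabeled data.  Here's a minimal sketch of that principle (nothing like Google's actual system, which used a huge neural network -- this is just toy k-means clustering on made-up 2-D "feature vectors") showing two groups separating themselves without any labels:

```python
# Toy illustration: unsupervised learning discovers categories from
# unlabeled data.  A simple k-means loop separates two groups of points
# without ever being told what either group is called.
import random

def kmeans(points, k, iters=20, seed=0):
    """Cluster 2-D points into k groups; returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k random points
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        for i, (x, y) in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: (x - centroids[c][0]) ** 2
                            + (y - centroids[c][1]) ** 2,
            )
        # Move each centroid to the mean of its assigned points.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = (
                    sum(x for x, _ in members) / len(members),
                    sum(y for _, y in members) / len(members),
                )
    return centroids, labels

# Two unlabeled "concepts": points near (0, 0) and points near (10, 10).
data = [(0.1, 0.2), (0.3, -0.1), (-0.2, 0.0),
        (9.8, 10.1), (10.2, 9.9), (9.9, 10.3)]
_, labels = kmeans(data, k=2)
# The first three points land in one cluster and the last three in the
# other, even though the algorithm never saw a label for either group.
print(labels)
```

The clusters it finds have no names until we give them names -- which is exactly the point the next few paragraphs make about "cat" and "human."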

The fact that the computer is learning how to identify and define things like 'cats' means it will likely soon come up with a definition for 'human,' and that will answer a pretty big question.

I don't think you can just ask a computer what a human is.  I would assume it'd be obvious to anyone that a computer's estimation of what a human is would just be a useful set of guidelines that aren't representative of some deep, universal truth.

In fact, that's my point.  I love the idea of a computer that can learn, because I think it makes it a lot more obvious, and a lot more undeniable, that the way we categorize things isn't some magic, universe-piercing insight; it's just a set of categories that's useful to us.  Our goal is to survive, so we're good at categorizing things in ways that relate to our biological survival.

The goal of Google's brain computer would be to interact successfully with humans.  So it's going to learn to categorize things in a way that creates concept-overlap between itself and the people it talks to.

SourceFed has already given us an example of people freaking out because it's totally going to kill us all.  It's not going to do that, because it's got no reason to.  What I'm really looking forward to is the people who get obsessively indignant about how it's totally not a human, or it's an abomination, or it shouldn't have equal status -- basically, the whole spectrum of anti-robot racism is what I think we have to look forward to.