Google Glass test run

Joshua Topolsky at the Verge has had the opportunity to give Google Glass a test-run, and he's written a very long article about the experience.

The design of Glass is actually really beautiful. Elegant, sophisticated. They look human and a little bit alien all at once. Futuristic but not out of time — like an artifact from the 1960’s, someone trying to imagine what 2013 would be like. This is Apple-level design. No, in some ways it’s beyond what Apple has been doing recently. It’s daring, inventive, playful, and yet somehow still ultimately simple. The materials feel good in your hand and on your head, solid but surprisingly light. Comfortable. If Google keeps this up, soon we’ll be saying things like "this is Google-level design."

[...]

When you activate Glass, there’s supposed to be a small screen that floats in the upper right-hand of your field of vision, but I don’t see the whole thing right away. Instead I’m getting a ghost of the upper portion, and the bottom half seems to melt away at the corner of my eye.

Steve and Isabelle adjust the nose pad and suddenly I see the glowing box. Victory.

It takes a moment to adjust to this spectral screen in your vision, and it’s especially odd the first time you see it, it disappears, and you want it to reappear but don’t know how to make it happen. Luckily that really only happens once, at least for me.

Here’s what you see: the time is displayed, with a small amount of text underneath that reads "ok glass." That’s how you get Glass to wake up to your voice commands. Actually, it’s a two-step process. First you have to touch the side of the device (which is actually a touchpad), or tilt your head upward slowly, a gesture which tells Glass to wake up. Once you’ve done that, you start issuing commands by speaking "ok glass" first, or scroll through the options using your finger along the side of the device. You can scroll items by moving your finger backwards or forward along the strip, you select by tapping, and move "back" by swiping down. Most of the big interaction is done by voice, however.

[...]

Let me start by saying that using it is actually nearly identical to what the company showed off in its newest demo video. That’s not CGI — it’s what Glass is actually like to use. It’s clean, elegant, and makes relative sense. The screen is not disruptive, you do not feel burdened by it. It is there and then it is gone. It’s not shocking. It’s not jarring. It’s just this new thing in your field of vision. And it’s actually pretty cool.

(emphasis mine)
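Just to make that interaction model concrete for myself, here's a toy sketch of the flow Topolsky describes -- wake the device with a tap or a slow head tilt, then drive it by voice or by swiping and tapping the touchpad.  Everything here is my own naming, not anything from Google's actual software:

```python
# Toy model of the Glass input flow as described in the review.
# Purely illustrative -- invented event names, not Google's real API.

class GlassUI:
    def __init__(self):
        self.awake = False
        self.cards = ["ok glass", "take a picture", "get directions", "send a message"]
        self.index = 0

    def handle(self, event: str) -> str:
        # Step 1: a touchpad tap or a slow upward head tilt wakes the display.
        if not self.awake:
            if event in ("touchpad_tap", "head_tilt_up"):
                self.awake = True
                return "show clock + 'ok glass'"
            return "asleep"

        # Step 2: voice ("ok glass ...") or touchpad gestures drive everything else.
        if event == "swipe_forward":
            self.index = (self.index + 1) % len(self.cards)
            return f"show card: {self.cards[self.index]}"
        if event == "swipe_backward":
            self.index = (self.index - 1) % len(self.cards)
            return f"show card: {self.cards[self.index]}"
        if event == "touchpad_tap":
            return f"select: {self.cards[self.index]}"
        if event == "swipe_down":
            self.awake = False
            return "screen off"
        if event.startswith("voice:ok glass"):
            return f"run command: {event.removeprefix('voice:ok glass').strip()}"
        return "ignored"

ui = GlassUI()
for e in ["head_tilt_up", "voice:ok glass get directions", "swipe_down"]:
    print(e, "->", ui.handle(e))
```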

This is just a small selection of some of the amazing details about the product in this article.  The thing that sounded the coolest to me was the navigation -- mapping instructions directly onto your field of vision.  That is a feature I would benefit from immensely.

(I wonder if they'll be coming out with a Glass-inspired overlay for car windshields?  No, more likely we'll just get driverless cars soon.)

Honestly, I started to like Glass a lot when I was wearing it. It wasn’t uncomfortable and it brought something new into view (both literally and figuratively) that has tremendous value and potential. I don’t think my face looks quite right without my glasses on, and I didn’t think it looked quite right while wearing Google Glass, but after a while it started to feel less and less not-right. And that’s something, right?

(emphasis mine)

I am looking forward to this technology so much you guys have no idea.

YouTube paid channels might be a thing I guess

There's a website called AdAge.com.  I didn't know that. SourceFed's latest video, YouTube To Unveil Paid Subscriptions?!, is about the rumor that YouTube may soon be offering paid subscriptions.  Link to the video. Embedded below.

This sounds like an awesome step up past the sponsored channels that YouTube has been funding this past year.  I love Crash Course and SciShow, and I don't mind Felicia Day's channel so much that it makes me want to unsubscribe.  Of course, I don't want those channels to jump up to a pay model -- especially with Crash Course and SciShow, that would kind of defeat the purpose.  But they do make a great proof of concept that YouTube creators can generate consistent, high-quality content that's worth a greater investment than just "You have access to our upload page."

Imagine if Tor had a YouTube channel, that financed quality adaptations of sci fi and fantasy books, the way HBO is doing for Game of Thrones.  Imagine if getting enough subscribers and jumping over to YouTube had been an option for Joss Whedon when Firefly got cancelled.

According to AdAge.com,

It's not clear which channels will be part of the first paid-subscription rollout, but it is believed that YouTube will lean on the media companies that have already shown the ability to develop large followings on the video platform, including networks like Machinima, Maker Studios and Fullscreen. YouTube is also looking outside its current roster of partners for candidates.

I don't think it would go over very well with fans if old channels threw up a paywall for all their new content.  But I think those channels could expand into higher-quality, higher-production-value work that would go up on a new channel, and external producers of higher-level content might be able to step down the payscale the way groups like Machinima would be stepping up -- like, imagine if Pixar had a channel that just produced those shorts from the start of their movies.

This is a great example of the kinds of things that the internet and companies like Google are doing, not just to open up new opportunities for existing art to thrive, but to create new levels at which art can be successful, unpinned from the constraints of pre-existing time slots or demand based on which advertisers were willing to pay.

Replacing passwords with jewelry

Wired writes about Google's effort to eliminate the password as a means of authenticating your identity online.  Passwords are incredibly insecure, and only becoming more so.  They will never again be a good way to protect your data.

Passwords are a cheap and easy way to authenticate web surfers, but they’re not secure enough for today’s internet, and they never will be. 

Google agrees. “Along with many in the industry, we feel passwords and simple bearer tokens such as cookies are no longer sufficient to keep users safe,” Grosse and Upadhyay write in their paper.

Fortunately, Google is working on a solution.

Thus, they’re experimenting with new ways to replace the password, including a tiny Yubico cryptographic card that — when slid into a USB (Universal Serial Bus) reader — can automatically log a web surfer into Google. They’ve had to modify Google’s web browser to work with these cards, but there’s no software download and once the browser support is there, they’re easy to use. You log into the website, plug in the USB stick and then register it with a single mouse click.

They see a future where you authenticate one device — your smartphone or something like a Yubico key — and then use that almost like a car key, to fire up your web mail and online accounts.

In the future, they’d like things to get even easier, perhaps connecting to the computer via wireless technology.

“We’d like your smartphone or smartcard-embedded finger ring to authorize a new computer via a tap on the computer, even in situations in which your phone might be without cellular connectivity,” the Googlers write.

The future may not exactly be password-free, but it will at least be free of those complex, hard-to-remember passwords, says Grosse. “We’ll have to have some form of screen unlock, maybe passwords but maybe something else,” he says, “but the primary authenticator will be a token like this or some equivalent piece of hardware.”
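I don't know the details of the protocol Google and Yubico are actually using, but the general shape of this kind of hardware key is challenge-response: the site sends a fresh random challenge, the token answers with something only it could compute, and no reusable password ever crosses the wire.  Here's a minimal sketch of that idea using a shared secret and HMAC -- purely illustrative, since real security keys use public-key cryptography and a great deal more care:

```python
# Minimal challenge-response sketch: illustrative only, not the real YubiKey/Google protocol.
import hmac, hashlib, secrets

class HardwareToken:
    """Stands in for the USB key or ring: holds a secret that never leaves the device."""
    def __init__(self, secret: bytes):
        self._secret = secret

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

class Server:
    """Stands in for the website: stores the secret registered with that one click."""
    def __init__(self, secret: bytes):
        self._secret = secret

    def new_challenge(self) -> bytes:
        return secrets.token_bytes(32)          # fresh random challenge per login

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self._secret, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

# One-time registration: the token and the site end up sharing a secret.
secret = secrets.token_bytes(32)
token, server = HardwareToken(secret), Server(secret)

# Every login afterwards: nothing memorized, nothing reusable sent over the wire.
challenge = server.new_challenge()
print("logged in:", server.verify(challenge, token.respond(challenge)))
```

The car-key analogy lives in that last block: the thing you carry does the proving, and the thing you type (if anything) shrinks to a screen unlock.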

Personally, I can't wait until this technology comes out.  I like jewelry, but I've never been able to come up with anything I would be particularly motivated to wear, or to make work with my outfit.  But having a ring that was my key to the internet would be perfect.

Also: security and stuff.

EDIT 7:58pm -- I actually think a bracelet would be a lot cooler.   Would that work?

Google Fiber is online (in Kansas City, Missouri)

After reading this article on Ars Technica, I went and did a speed test to see what kind of upload and download speeds I'm getting.  I'm not super computer-literate, so sometimes I have trouble bringing that kind of information into context.  My download speed is 4.63Mbps; upload, 6.37Mbps. Google Fiber offers speeds of 600-700Mbps.  On the low end, that's about 130 times faster than my download speed.
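For anyone who wants to check my arithmetic:

```python
# My measured speeds vs. Google Fiber's advertised range, all in Mbps.
my_download = 4.63
fiber_low, fiber_high = 600, 700

print(round(fiber_low / my_download))    # ~130x on the low end
print(round(fiber_high / my_download))   # ~151x on the high end
```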

The article also discusses Ben Barreth, a man who bought a home in the Fiber-section of Kansas City, to offer free temporary housing to programmers who want to create startups.  They're also considering renting out a room for tourists:

He anticipates getting donations, sponsorships, or, the most-likely scenario, renting out one of the bedrooms via AirBnB to the first Google Fiber "tourists"—people who might want to come for a day or two at a time to try it out.

"$50 a night for 10 nights a month would cover mortgage and most of the utilities," he said.

So, Kansas City has just made the list of places I really, really want to visit, and I know exactly where I want to stay. Heh, I could live-blog the internet at high speed! It makes me a little giddy just thinking about how quickly my post would upload, and how fast the editing page would re-load after I hit update.

UPDATE: I just published, and it took like 30 seconds before it would let me type more stuff.

TechCrunch explains how Facebook is getting even worse

Yesterday on Boing Boing, Cory Doctorow posted a link to an article on TechCrunch, breaking down the ways that Facebook's new app interface is more manipulative and dishonest than their previous ones.  I haven't actually seen the new interface, because I've logged into Facebook about three times this month, and that was only to check for messages after someone told me they'd sent one. The article, 5 Design Tricks Facebook Uses To Affect Your Privacy Decisions, is an easy read, and has accompanying pictures to illustrate the problems.  The writer, Avi Charkham, points out:

Facebook keeps “improving” their design so that more of us will add apps on Facebook without realizing we’re granting those apps (and their creators) access to our personal information. After all, this access to our information and identity is the currency Facebook is trading in and what is driving its stock up or down.

Facebook's stock has not been doing well since the company went public.  It seems like the company's approach to solving this problem is going to be to try and extract even more personal information from its users.

For the record, Tumblr, Reddit and Twitter all have a very good track record for not exploiting their users.  If you're not ready to quit Facebook, a good first step is picking some of these other sites and getting active on them, as well.  Get your friends to do it, too.  Diversify your social presence online.  That way, no one service can hold hostage relationships that are important to you.

The moral problems of Big Data

Cory Doctorow linked to a great article about the civil rights implications of data collection. By the way, data collection is totally a civil rights issue.  Alistair Croll explains,

“Personalization” is another word for discrimination. We’re not discriminating if we tailor things to you based on what we know about you — right? That’s just better service.

There's a lot of information you can get out of the data that corporations gather about their customers -- and a lot of ways that information can be used to do damage.  There was a case in which Target accidentally outed pregnant teens to their families by mailing them personalized catalogs filled almost entirely with things like baby carriages and diapers.

Croll raises the issue of that sort of information being figured into issues like bank loans or housing.  That's a big problem -- it means existing trends of social dysfunction will implicitly get reinforced.

If I collect information on the music you listen to, you might assume I will use that data in order to suggest new songs, or share it with your friends. But instead, I could use it to guess at your racial background. And then I could use that data to deny you a loan.

It doesn't even matter if they actually try to guess your race.  If fans of a particular band are, on the whole, less likely to make their loan payments, then being part of that musical subculture can unfairly affect your ability to get a loan.  And musical taste often does break down along the traditional lines of discrimination -- race, gender, sexuality.
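Here's a toy version of what that kind of proxy discrimination looks like.  Nothing in this sketch ever looks at race, and every number in it is made up -- but if a genre correlates with a protected class, the "neutral" score discriminates anyway:

```python
# Toy loan score: made-up numbers, purely to illustrate proxy discrimination.
# The model never sees race -- but if "genre x" fans skew heavily toward one
# demographic, penalizing the genre penalizes the demographic.

DEFAULT_RATE_BY_GENRE = {      # hypothetical historical default rates
    "genre_x": 0.22,
    "genre_y": 0.08,
}
BASELINE_RATE = 0.10

def loan_score(income: float, listening_history: list[str]) -> float:
    score = income / 1000.0
    for genre in listening_history:
        rate = DEFAULT_RATE_BY_GENRE.get(genre, BASELINE_RATE)
        score -= (rate - BASELINE_RATE) * 100   # penalize "risky" tastes
    return score

# Two applicants with identical incomes, different record collections:
print(loan_score(40_000, ["genre_y"]))   # 42.0
print(loan_score(40_000, ["genre_x"]))   # 28.0 -- worse terms, for music taste
```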

Eli Pariser discussed this issue in his book, The Filter Bubble, which explores a huge variety of the ways in which the massive amount of data companies gather about us is potentially (and often practically) a very bad thing.

Ideally, citizens on the internet need to be empowered to decide how their data is used.  But Croll points out that it's a lot easier to say that's a good thing than to actually make it happen:

The only way to deal with this properly is to somehow link what the data is with how it can be used. I might, for example, say that my musical tastes should be used for song recommendation, but not for banking decisions.

Tying data to permissions can be done through encryption, which is slow, riddled with DRM, burdensome, hard to implement, and bad for innovation. Or it can be done through legislation, which has about as much chance of success as regulating spam: it feels great, but it’s damned hard to enforce.
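At the application level, "linking what the data is with how it can be used" could look something like the sketch below -- a record that carries its own allowed uses, and a guard that refuses anything else.  The field names and use labels are hypothetical, and of course nothing forces a bad actor to honor the tags, which is exactly Croll's enforcement problem:

```python
# Sketch of data that carries its own usage permissions. Hypothetical field names;
# it only helps if the code touching the data actually honors the tags.
from dataclasses import dataclass, field

@dataclass
class TaggedData:
    owner: str
    payload: dict
    allowed_uses: set = field(default_factory=set)

class UseNotPermitted(Exception):
    pass

def use(data: TaggedData, purpose: str) -> dict:
    if purpose not in data.allowed_uses:
        raise UseNotPermitted(f"{data.owner} did not allow '{purpose}'")
    return data.payload

music = TaggedData(
    owner="alice",
    payload={"top_genres": ["folk", "electronic"]},
    allowed_uses={"song_recommendation"},
)

print(use(music, "song_recommendation"))     # fine: that's what it was shared for
try:
    use(music, "loan_underwriting")          # not on the list
except UseNotPermitted as error:
    print("blocked:", error)
```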

Croll calls it the civil rights issue of our generation. I think LGBTQ rights still tops it for urgency, and none of the old civil rights problems are really gone, entirely, but he's right that this is a massive issue, and it needs more attention.  Organizations with a lot of power have a bad record of looking out for the rights of the people they have that power over.

Google's new anti-piracy initiative

Why, Google? Why would you do this to us? Alright, to be fair, Google's anti-piracy approach is not as inherently destructive to the basic nature of the internet as SOPA, PIPA, CISPA and ACTA were.  But Google's new approach, to punish sites that receive a high number of copyright infringement notices by decreasing their PageRank values and pushing them further down the search results, has some potentially serious consequences.

It would only take into account whether people claim a site has infringed, not whether the site has actually done so.  Bogus copyright takedown notices definitely exist.  And while the scale involved might not make it possible for individuals to deliberately target sites whose legal content they object to, it will likely punish websites that produce fringe content that is defensible but easy to attack.
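The mechanics, as described, boil down to "more notices, lower rank" -- something like the toy adjustment below, where the penalty is driven entirely by the number of claims received, not by any finding that they were valid.  (The formula is invented; Google hasn't published the real one.)

```python
# Toy rank demotion driven by takedown *notices*, not verified infringement.
# Made-up formula, for illustration only.

def demoted_score(relevance: float, takedown_notices: int, penalty: float = 0.05) -> float:
    return relevance / (1 + penalty * takedown_notices)

print(demoted_score(relevance=0.90, takedown_notices=0))     # 0.90
print(demoted_score(relevance=0.90, takedown_notices=200))   # ~0.08 -- buried,
# whether those 200 notices were legitimate or bogus
```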

Fair use and parody might result in takedown notices that are completely unjustified, but it seems Google won't distinguish those from legitimate claims.  So, something like Matthew Inman's recent controversy with FunnyJunk could have punished The Oatmeal for its criticism (though this system would likely also punish FunnyJunk for the initial art theft that started the controversy).

I'm against this, because I'm against Google taking sides with the current legal zeitgeist, even though the people responsible for this decision must know that copyright reform would be better for the internet than reinforcing contemporary legislators' bad behavior.

Facebook founder's family member announces via Twitter that she works for Google

(via Ana Ulin on Google+) Randi Zuckerberg, Mark Zuckerberg's sister, tweeted yesterday that she's working for Google now, after the company she works for, Wildfire, was acquired by Google.

Wildfire is an advertising app that helps organize companies' social presence for more successful, targeted marketing campaigns.  My main focus for this story is that the tweet was funny, but I also want to talk about the existence of third-party marketing organizations, especially ones backed by Google.

Unlike a lot of people on the internet, I don't think advertising is outright evil.  It needs way more ethical oversight than it has now, but there's a gem of value in there.  If you assume the basic goal of advertising is to connect a customer with a product they would benefit from, then advertising is a mutually beneficial relationship.  With more ethical guidance, better targeting would make the ads more valuable to both the advertiser and the consumer.

We're not moving in this direction now, and even if Google wanted to, their obligation to their shareholders would probably prevent them from pushing towards more ethics in advertising.  But I think it's a direction worth pursuing -- even more so now that there are companies who specialize in organizing ad campaigns, so the advertiser companies can focus on the quality of their product.

Vernor Vinge at Google

Today is a good day for good video.  Jane McGonigal released a new TED talk, John Green put up his first video on Fahrenheit 451, which I can't watch yet because I haven't gotten around to reading the first section of the book [EDIT: I caved, and am watching it now.  I guess it'll just inform my reading when I get around to it.], and Vernor Vinge's Author Talk at Google went up.

I've been meaning to start catching up on Vernor Vinge's thinking and writing for a while now, because he's one of the big names in the Singularity conversation -- he's the guy who coined the term.  Personally, my opinion on the Singularity went back and forth for a while, and has now settled into a comfortable state of "I have no ████ing clue what's going on, but I don't think things are going to be the way they are now, this time next year."

This Google Talk turned out to be a pretty nice way to start dipping my toes in -- I found I could follow all of it, which was a plus, and I liked that it explored Vinge's portrayals of the Singularity in fiction more than his beliefs about it in real life -- which seem, largely, to be: he thinks it will happen, but accepts the possibility it won't, and doesn't have the remotest clue what it will entail.

Here's the talk, also embedded below:

And if you're not familiar with Google Author Talks, it's a channel all on its own, and generally features a few talks a week, mostly around an hour long.  I watch all but the ones that seem really, really boring.  There are probably at least some that would interest you.

Google brain: Woo!

(via SourceFed) Google is doing the best thing in science yet.  They're creating a "Brain-styled neural network," which they're feeding random information from YouTube.

So far, the computer knows what a cat is.  That's awesome.  (It's also great that it's what it learned from YouTube.) This isn't really the first step towards artificial intelligence -- Google made that first step a long time ago -- but it's a big one, and it means we might be close to seeing a singularity-like event.

The fact that the computer is learning how to identify and define things like 'cats' means it will likely soon come up with a definition for 'human,' and that will answer a pretty big question.

I don't think you can just ask a computer what a human is.  I would assume it'd be obvious to anyone that a computer's estimation of what a human is would just be a useful set of guidelines that aren't representative of some deep, universal truth.

In fact, that's my point.  I love the idea of a computer that can learn, because I think it makes it a lot more obvious, and a lot more undeniable, that the way we categorize things isn't some magic, universe-piercing insight, it's just a categorization set that's useful to us.  Our goals are to survive, so we're good at categorizing things in ways that relate to our biological survival.

Google's brain computer's goal would be to successfully interact with humans.  So, it's going to learn how to categorize things in a way that enables it to achieve concept-overlap between itself and the people it talks to.
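A trivial illustration of that point: the same handful of things falls into completely different "categories" depending on which feature the categorizer cares about, and neither grouping is the true one -- each is just useful for a different goal.

```python
# Same objects, different "categories", depending on what the grouping is for.
from collections import defaultdict

THINGS = [
    {"name": "cat",    "habitat": "land",  "is_food": False},
    {"name": "salmon", "habitat": "water", "is_food": True},
    {"name": "apple",  "habitat": "land",  "is_food": True},
    {"name": "boat",   "habitat": "water", "is_food": False},
]

def group_by(things, feature):
    groups = defaultdict(list)
    for thing in things:
        groups[thing[feature]].append(thing["name"])
    return dict(groups)

print(group_by(THINGS, "habitat"))   # useful if your goal is getting around
print(group_by(THINGS, "is_food"))   # useful if your goal is dinner
```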

SourceFed has already given us an example of people freaking out because it's totally going to kill us all.  And it's not going to do that, because it's got no reason to.  What I'm really looking forward to seeing is the people who get obsessively indignant about how it's totally not a human or whatever, and it's an abomination, or shouldn't have equal status -- basically the whole spectrum of anti-robot racism.

YouTube read my letter!

(via hankschannel)

YouTube's Google+ style site redesign

A Two-Part Open Letter

Okay, well probably not.  It's not terribly likely that YouTube released a trial version of their upcoming site redesign because I complained about spiders.  But it remains true that YouTube's new site design does offer the features I hoped for:

It should be possible to hide a video from your results.  I would like to be able to click a little X somewhere on the thumbnail or text, or even by some more complicated option, alter the algorithms that produce lists of videos anywhere on the site so that they exclude particular videos.

I can't tell yet whether it hides the videos I remove from everywhere on the site -- and, if it doesn't, that doesn't mean it won't in the official release of the redesign.  But YouTube's upcoming new homepage has a bunch of great features.

First of all, the thumbnail sizes are more reasonable.  As Hank pointed out in his video, the recommended video thumbnails used to be bigger than the thumbnails in your subscription bar.  It also grays out and labels videos you've already watched, making the process of catching up a lot easier -- I can just scroll through and find the vivid pictures.
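The behavior I'm hoping for boils down to keeping an exclusion list and filtering it out of every list of videos the site generates, plus flagging what I've already watched -- something like this sketch (my own pseudo-data, obviously not YouTube's actual code):

```python
# Sketch of the behavior I asked for: hidden videos never appear, watched ones get flagged.
def render_feed(videos, hidden_ids, watched_ids):
    feed = []
    for video in videos:
        if video["id"] in hidden_ids:
            continue                                  # the little X I wanted
        feed.append({**video, "grayed_out": video["id"] in watched_ids})
    return feed

videos = [
    {"id": "a1", "title": "Crash Course: World War I"},
    {"id": "b2", "title": "Spider video I never want to see"},
    {"id": "c3", "title": "SciShow: Neutrinos"},
]

for item in render_feed(videos, hidden_ids={"b2"}, watched_ids={"a1"}):
    print(item)
```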

You can activate the redesign by following the instructions on this page.

Fox complains about absence of Google doodle

So I guess today is Flag Day.  It's not a federal holiday -- Pennsylvania is the only state to celebrate it as a state holiday.  It's basically not important. Wikipedia lists a bunch of countries' flag days as being synonymous with their independence days.  That seems pretty reasonable.  And Google does a doodle for every American Independence Day.

Not good enough for Fox, though.  They actually ran an article called Google Ignores Flag Day, while rival Microsoft celebrates.  Turns out, Bing put up a flag background to appeal to their primary customer base, people who can't figure out how to change the search settings on Internet Explorer.

This kind of crap doesn't belong on a major news site.  As of right now, it's their top story in Tech.  It serves no purpose other than to inflame nationalistic indignation, and that's not the role of a journalistic publication.

I mean, I wouldn't accuse Fox of being a journalistic publication, but they do pretend to be one, and a lot of people aren't in on the joke.

More on Google warnings; IE vulnerability

(via PC World) Previous context. So, I've heard good things about the new release of Internet Explorer, which challenges my preconceived biases against the browser.  But, good news!  The current version of the program has a vulnerability in it that makes it much easier for me to keep being snarky.

The security warnings that some Gmail users have been getting, warning them that their accounts may be under attack by a state-sponsored organization, are apparently being triggered -- at least in part -- by a vulnerability in Internet Explorer that can be exploited on certain websites, but only through that browser.

In order for a hacker to exploit the vulnerability, an IE user needs to land on an infected webpage. To steer traffic to such pages, cybercriminals will typically use phishing e-mails or instant messages containing links to the infected locations.

Until Microsoft patches the vulnerability, the company is offering a temporary solution that can be downloaded from its Technet website.

According to cybersecurity software maker Trend Micro, the vulnerability has prompted Google to issue warnings to some of its Gmail users. "Google is flagging attempts to exploit this vulnerability by noting 'Warning: We believe state-sponsored attackers may be attempting to compromise your account or computer,'" it said in an e-mail to PCWorld.

"Reports show that this vulnerability has been used to compromise Gmail accounts," it added.

Microsoft plans to patch the problem soon, but if you use IE and don't have Gmail, you might want to be careful on the internet in the near future.

Or, you know, switch to Chrome.

Cory Doctorow on Google's Algorithms (and Plato)

(via Boing Boing) Cory Doctorow just gave me everything I want to see in a headline:

Google admits that Plato's cave doesn't exist

The article is about a recent change in rhetoric by Google about their pagerank methodology.  As Doctorow puts it:

The pagerank algorithm isn't like an editor arguing aesthetics around a boardroom table as the issue is put to bed. The pagerank algorithm is a window on the wall of Plato's cave, whence the objective, empirical world of Relevance may be seen and retrieved.

That argument is a convenient one when the most contentious elements of your rankings are from people who want higher ranking. [...]

The problem with that argument is that maths is inherently more regulatable than speech. If the numbers say that item X must be ranked over item Y, a regulator may decide that a social problem can be solved by "hard-coding" page Y to have a higher ranking than X, regardless of its relevance. This isn't censorship – it's more like progressive taxation.

I like this because of what it says about Google's evolving role in the business of information curating.  I like the idea of Google taking more responsibility for the content people see via their search engine, and refusing to diminish that responsibility by being swayed by corporate interests.

It's also great to think that Google's filtering protocols are becoming more public knowledge -- they make content on the internet valuable, but they also carry significant risks, and it's important that we remain conscious of them and make proactive decisions about our relationship to the content we're exposed to.

But I love the way Doctorow frames the issue, because I love seeing any public stab against Platonism.

Platonism (summarized to emphasize the aspects I object to[1. I think this is valid, since I'm acknowledging I'm doing it, because philosophical discussions can get confusing quickly and I'd rather this not get derailed by nitpicking]) is the belief that there are fundamental, immutable truths called forms that literally exist, and can be perceived with sufficient training.  The highest of these forms is the form of the Good, which Platonism argues can be accessed by individuals after years and years of study, giving them straightforward, unambiguously correct answers to moral questions.

It'd be awesome if this were true, but it isn't -- and one of the many problems with Platonism is that it leads people who've spent a lot of time dwelling on a particular idea to ultimately come to believe that they've accessed ultimate truth, rather than that they've just spent too much time getting far too good at finding illusory patterns.  (There are other problems too.)

I've written before about how the organization of a story affects how it gets read, and this is the sort of use that I like to see, in that vein.  Stuff like the titles of opinion pieces flavor the conversation we have as a society about more than just the subject of the article.

Eff Yeah Google

Caveat:  I understand that Google is a corporation, is capable of doing wrong, and is not necessarily the savior of all humankind. Google has been doing some cool things lately, two of which showed up in my newsfeeds today.

Google protects its users from government spies

(via TechNewsWorld, Boing Boing)

Google claims to have identified instances of state-based or state-sponsored attempts to hack into users' email accounts, and an unknown number of users have received alerts letting them know that someone tried to break into their account.

On spotting the warning ribbon, users can immediately create a unique password that has a good mix of capital and lower-case letters and punctuation marks and numbers; enable two-step verification for additional security; and update their browsers, operating systems, plugins and document editors, Google stated. (TechNewsWorld)

It's good business, yes, but this effort also represents Google's orientation towards corporate responsibility.  It's a long-term strategy, building a trustworthy product that can help make the world a better place.  And I love that Google feels comfortable standing up to nations -- although I think that's less a Google thing, and more a worldwide transition from nations as the basic unit of politics, towards something else.  That something else might be corporations, and I worry about the other companies out there.

Google Maps getting cooler

(via Fox News)

It’s a pretty limited search engine that only draws from a subset of sources. In the same way, it’s not much of a map that leaves you stranded the moment you step off the highway or visit a new country. Over the last few years we’ve been building a comprehensive base map of the entire globe—based on public and commercial data, imagery from every level (satellite, aerial and street level) and the collective knowledge of our millions of users.

Today, we’re taking another step forward with our Street View Trekker. You’ve seen our cars, trikes, snowmobiles and trolleys—but wheels only get you so far. There’s a whole wilderness out there that is only accessible by foot. Trekker solves that problem by enabling us to photograph beautiful places such as the Grand Canyon so anyone can explore them. All the equipment fits in this one backpack, and we’ve already taken it out on the slopes. (Google Blog)

I love Google Maps and I really love the idea of the mapping spreading out into the wilderness.  I wonder how big a subset of the nature-loving community is going to be enraged about this, though?  I can imagine people complaining that the mere existence of digital mapping diminishes the purity of nature.

A note on the source: I got this second story from Fox News, which titled the article:

With Apple breakup looming, Google shows off some 'magic'

It's incredibly unclear for much of the article, but what they mean is that Apple is planning to develop its own map programs for the iPhone rather than sticking to Google products.

The way the article is written, Fox portrays Google's announcement as a direct attack on Apple.  It's the kind of gossipy, unfounded reporting one expects from celebrity magazines -- and a headline like that can have significant effects on the market that the content of the article doesn't justify.

Yet more evidence that Fox News isn't just shoddy reporting, it's actively pursuing anti-news goals.

Google chairman calls for computer science education

Google chairman Eric Schmidt gave a talk yesterday in London in which he raised his fears about the future of the internet.  He argued that the greatest threat to the future of the internet was not individual cybercriminals, but nations attempting to disrupt its function.

Eric Schmidt said [that] the internet would be vulnerable for at least 10 years, and that every node of the public web needed upgrading to protect against crime. Fixing the problem was a "huge task" as the internet was built "without criminals in mind" he said. (Source)

He moved on to a plea that British schools focus more energy on computer science and engineering (apparently British schools don't even teach computer science -- which, thinking about it, neither did my high school, except in a fringe occupational class only about a dozen students a year took), and offered this excellent quote:

[S]o long as more kids aspire to win X Factor than win a Nobel Prize, there's room to improve.

Google Badges: Gamifying news

Google's news section has a new feature:  Google Badges.  Google is monitoring how many stories of particular types you read, and you can level up in different areas by reading more stories on those topics.  So far, I've read 1 story about Google, so I've got a Google badge with no stars.
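As far as I can tell, the mechanic is just a running count of articles read per topic, with star thresholds layered on top.  The thresholds below are invented for illustration; Google hasn't published them as far as I know:

```python
# Guessing at the badge mechanic: count articles read per topic, map the count to stars.
# The thresholds are hypothetical.
from collections import Counter

STAR_THRESHOLDS = [5, 25, 100, 500, 2000]   # invented reads needed for each star
reads = Counter()

def read_article(topic: str) -> str:
    reads[topic] += 1
    stars = sum(reads[topic] >= t for t in STAR_THRESHOLDS)
    return f"{topic} badge: {reads[topic]} article(s) read, {stars} star(s)"

print(read_article("Google"))   # 1 article read, 0 stars -- exactly where I am
```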

I have mixed feelings about this.  On the super-positive side, I love the idea of gamifying news consumption -- putting your news reading habits out in public as a ranked icon is a solid step towards objective measures of how informed people are on various topics, and it provides a nice little social kick towards reading more news, and more substantial news.

On the other hand, it's subject to some problems.  This method rewards following up on light reading -- like the example category, basketball -- just as much as it rewards more in-depth categories.  Google isn't exactly encouraging getting informed as a civic duty here.

The video says you'll be able to keep your badges private, or share them with your friends.  If you want to keep secretly informed about some topics, Google offers a solution:

Sharing Badges

By default, only you can see your badges. You can choose to share a specific badge in your badge collection by mousing over the badge and clicking one of the sharing icons. When you share a badge, it reveals your badge’s name and level, as well as the rough number of articles that you have read about the badge’s topic. Your friends will not see the specific articles that you have read.

This mechanism seems handled quite well altogether, but I don't know whether to hope that, in the future, Google will lean towards encouraging good information consumption habits and discouraging binges on nothing but pop culture, or to expect that the natural tendency towards easier news will go unchecked.

Google's Knowledge Graph

I make no secret of the fact that I love Google.  I mean, I'm not generally optimistic about corporations, and I know Google is capable of screwing things up.  But in general, I think they're one of the handful of entities in the world most meaningfully pushing towards a brighter future. At the heart of that push is the Google search engine, which is sort of the consciousness of the internet -- or, at least, that's the end goal.  In an interview in October of 2000[1. Writing the year 2000 always seems weird.  Like, it doesn't flow properly.  I don't feel like there's a way I can write it that will be satisfying to read in your head.], founder Larry Page said:

[...]artificial intelligence would be the ultimate version of Google. So we have the ultimate search engine that would understand everything on the Web. It would understand exactly what you wanted, and it would give you the right thing. That's obviously artificial intelligence [...] because almost everything is on the Web, right? We're nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on. [...] (Source)

That in mind, I love seeing Google get closer to making the search engine more like AI.  The Knowledge Graph is a huge step towards that.  They're setting up Google to connect ideas in a coherent mesh, similarly to how human minds do.  (Video linked here, embedded below)

The graph would pick up our search habits, and start to connect ideas based on the way humans around the world relate those ideas to each other.  Not only will it allow Google to think more like a person, but that person will be the collective consciousness of all Googling humanity.
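Under the hood, a "coherent mesh" of ideas is basically a graph: entities as nodes, relationships as weighted edges, with the weights nudged by how often people connect two things.  A bare-bones sketch, with made-up entities and weights:

```python
# Bare-bones entity graph: nodes are ideas, edge weights grow as searches connect them.
# Illustrative only -- not how Google actually builds the Knowledge Graph.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        self.edges = defaultdict(lambda: defaultdict(float))

    def observe(self, a: str, b: str, weight: float = 1.0):
        """Strengthen the link between two ideas, e.g. when a search mentions both."""
        self.edges[a][b] += weight
        self.edges[b][a] += weight

    def related(self, entity: str, top_n: int = 3):
        neighbors = self.edges[entity]
        return sorted(neighbors, key=neighbors.get, reverse=True)[:top_n]

kg = KnowledgeGraph()
kg.observe("Marie Curie", "radioactivity")
kg.observe("Marie Curie", "Nobel Prize", weight=2.0)
kg.observe("Marie Curie", "Pierre Curie")

print(kg.related("Marie Curie"))   # ['Nobel Prize', 'radioactivity', 'Pierre Curie']
```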

This could be scary, sure.  Maybe the hivemind will just reinforce bad species-wide behavior and prejudices.  But maybe this is exactly what we need.  Maybe something like a Mind On The Web is exactly the sort of medium within which we as a species can start developing skills of discipline, self-sacrifice, self-improvement and peace.