Grom social: Facebook for kids

So there's this 11-year-old kid whose parents kicked him off Facebook.  He responded by inventing his own Facebook.  It currently has over 7,000 users.  So, that's a thing.  Via SourceFed:

The good stuff

I have a few problems with the specific implementation, but altogether I think this is a really good idea.  As has been pointed out in just about every conversation about the internet in the last five years, kids these days are jumping into an environment where all their actions might be permanent, and they don't necessarily have the maturity or perspective to understand what that means.

In that sense, kids on Facebook is a terrible idea, and it's good that they have a more monitored, more controlled social network in which to begin to learn what it means to be a citizen of the internet.

As SourceFed points out, it's also good that they won't be exposed to (a.) creepy adults friending kids in a predatory manner, (b.) aggressive normalization of adult content and rude behavior, and (possibly most importantly because it's probably the most pervasive) (c.) normal adults being their normal, whiny, underachieving and petty selves, normalizing being an awful person for all the kids watching how they talk to each other.

Grom's anti-drug policy

As for problems: they're going for an aggressively anti-drug policy, which I don't think is really going to help these kids.  I mean, I'm not for under-sixteen-year-olds doing drugs.  That's the area where research really does show that drugs are bad.  But SourceFed calls it a "D.A.R.E.-like program."  The D.A.R.E. program is a well-established failure -- its extremist insistence that all drugs are apocalyptically bad, and the implied message "All your friends are doing it" behind the "Don't listen to all your friends when they tell you to do it" message, reliably increase drug use and degrade trust in authority.  Which is legitimate -- the authority is lying to the kids, so why would they keep trusting it?

Videos like this one equate the dangers of drugs like alcohol and weed to the dangers of drugs like meth and heroin, which is counterproductive.  They also bulldoze over important distinctions like "You shouldn't do this while your brain and body are still developing" vs. "You're not in a good place in your life to use this drug responsibly" vs. "This is a prescription drug, which should only be used on the advice of a doctor" vs. "This drug is seriously dangerous and addictive, and you should avoid it entirely."  (Alcohol, marijuana, Ritalin, and cigarettes, respectively.)


It's great to see that kids are getting their own social network, that the frontier-attitude of the internet is beginning to break down enough that we're really trying for safe places for people who aren't yet necessarily in a good place to brave the frontiers of the web.  I hope that they employ the drug policy maturely and effectively, but I don't think they're going to -- they have to appeal to parents, after all, and parents are notoriously irrational about teaching kids lessons consistent with reality -- and I think that's going to degrade trust in the network and ultimately lead to its failure.  But it might not, and the website has enough going for it that I hope it doesn't.

TechCrunch explains how Facebook is getting even worse

Yesterday on Boing Boing, Cory Doctorow posted a link to an article on TechCrunch breaking down the ways that Facebook's new app interface is more manipulative and dishonest than its previous ones.  I haven't actually seen the new interface, because I've only logged into Facebook about three times this month, and then only to check for messages after someone told me they'd sent one.  The article, "5 Design Tricks Facebook Uses To Affect Your Privacy Decisions," is an easy read, with accompanying pictures to illustrate the problems.  The writer, Avi Charkham, points out:

Facebook keeps “improving” their design so that more of us will add apps on Facebook without realizing we’re granting those apps (and their creators) access to our personal information. After all, this access to our information and identity is the currency Facebook is trading in and what is driving its stock up or down.

Facebook's stock has not been doing well since the company went public.  It seems like the company's approach to solving this problem is going to be to try to extract even more personal information from its users.

For the record, Tumblr, Reddit and Twitter all have a very good track record of not exploiting their users.  If you're not ready to quit Facebook, a good first step is picking some of these other sites and getting active on them, as well.  Get your friends to do it, too.  Diversify your social presence online.  That way, no one service can hold hostage relationships that are important to you.

Forbes illuminates cultural bias towards Facebook

(via Reddit) Following the Aurora, CO shooting, one of the points that has been raised is that the shooter didn't have a Facebook page. He wasn't on any social network, in fact, except Adult Friend Finder.  Slashdot has pointed out an article highlighting the fact that mass shooter Anders Breivik was on MySpace, rather than Facebook.

Forbes expands on these arguments, pointing out that not having a Facebook account is becoming an acceptable red flag across the culture:

It’s not just love seekers who worry about what the lack of a Facebook account means. Anecdotally, I’ve heard both job seekers and employers wonder aloud about what it means if a job candidate doesn’t have a Facebook account. Does it mean they deactivated it because it was full of red flags? Are they hiding something?


But it does seem that increasingly, it’s expected that everyone is on Facebook in some capacity, and that a negative assumption is starting to arise about those who reject the Big Blue Giant’s siren call. Continuing to navigate life without having this digital form of identification may be like trying to get into a bar without a driver’s license.

This article hasn't dissuaded me from leaving Facebook, still scheduled for the end of this month.  In fact, it only bolsters my motivation to leave -- we've let one private company take such dramatic control over our social lives that it's transcended being convenient to have an account -- it's become a liability not to.

It's not okay for one private company to have this kind of grip on people's social lives.  It's becoming more and more clear that the internet and social networks are more like a utility (like water or electricity) than a free-market product (like McDonald's or motorcycles).

Facebook gets away with massive ethical violations all the time because we let it have that much leverage on our lives.  I strongly urge my readers to leave Facebook, and diversify into other social networks.  Get on Tumblr, Reddit, Google+, Twitter, get your social needs met in a variety of places so that if any one starts trying to control your social life or abuse your trust you can drop out of it without disrupting your social web.

Facebook founder's family member announces via Twitter that she works for Google

(via Ana Ulin on Google+) Randi Zuckerberg, Mark Zuckerberg's sister, tweeted yesterday that she's working for Google now, after the company she works for, Wildfire, was acquired by Google.

Wildfire is an advertising app that helps organize companies' social presence for more successful, targeted marketing campaigns.  My main reason for sharing this story is that the tweet was funny, but I also want to talk about the existence of third-party marketing organizations, especially ones backed by Google.

Unlike a lot of people on the internet, I don't think advertising is outright evil.  It needs way more ethical oversight than it has now, but there's a gem of value in there.  If you assume the basic goal of advertising is to connect a customer with a product they would benefit from, then advertising is a mutually beneficial relationship.  With proper ethical guidance, better targeting makes the ads more valuable to both the advertiser and the consumer.

We're not moving in this direction now, and even if Google wanted to, its obligation to its shareholders would probably prevent it from pushing toward more ethics in advertising.  But I think it's a direction worth pursuing -- even more now that there are companies who specialize in organizing ad campaigns, so the companies doing the advertising can focus on the quality of their product.


I don't do a lot of online chatting, but I still don't want my conversations watched. We know that Facebook monitors conversations on its website, explicitly to scan for criminal activity (already pretty creepy), but who knows whether that information stays in those bots, for those purposes.  Every app asks for a bunch of permissions, so your information might be filtered into dozens of advertisers' statistical analyses. But I can barely get the regular internet to work; there's no way I can set up a secure chat service myself. (I tried to use Tor once. It went horribly.)  Unfortunately, the same is true of a lot of people who need that protection a lot more than I do.

This quote is from Wired's article about CryptoCat, and its creator, Nadim Kobeissi:

 When faced with the torture of using crypto software or the torture of a repressive government, some dissidents have — intentionally or not — opted for the latter.

CryptoCat is a secure chat service that's easy and pretty.  I know it is, because I used it: I opened it up on two computers and talked to myself.  You set up a custom or randomly generated URL for a single-use chat, and you can invite people in through Facebook or give them the URL.  Kobeissi also has an adorable video on Vimeo explaining the service.
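As a sketch of that single-use-URL idea -- this is my own illustration, not CryptoCat's actual code, and the base URL and room-name length here are made up -- generating an unguessable room name takes one call to Python's secrets module:

```python
import secrets

def make_chat_url(base="https://example-chat.invalid"):
    """Build a throwaway chat URL around a hard-to-guess room name.

    token_urlsafe(9) yields 9 random bytes encoded as 12 URL-safe
    characters, about 72 bits of randomness -- plenty to keep a
    single-use room from being stumbled into.
    """
    room = secrets.token_urlsafe(9)
    return f"{base}/{room}"
```

The important property is that the room name comes from a cryptographically secure source (secrets), not the predictable random module, so knowing one room name tells you nothing about the next.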

The program is open-source, so anyone can look at the code.  It's secure, and it doesn't save your information.  Cat-themed though it may be, this is a very important worldwide resource, and it could save lives in more oppressive countries.


I've just discovered Mediactive, a blog run by Dan Gillmor about false information on the internet. The first post I read on the site has pretty strongly endeared me to its contents. It describes a business decision on the part of the New York Times to require commenters to log in via Facebook accounts, in order to verify their identity.  Gillmor says:

This is vastly, vastly better for Facebook than the Times. Given Facebook’s tendency to track what people do online whenever possible — something you can take for granted in this case, given the attractive (for marketers) demographics of Times readers — the company will gain deep insights into what these people read and buy.

This post is from March 20 -- posts on this site appear to be rare, the last three being in March, February, and January.  It might be dead, but I'll be checking back for a while just in case.  There's also a book of the same title, which I'm interested in getting a copy of when I have money, whenever that happens.

A case study in Facebook privacy

I'm planning on quitting Facebook soon, and am currently making the necessary preparations.  One of the major reasons is the way Facebook uses the default privacy settings to nudge people toward giving up more information than they really intended to. There's a great example of this in action at We know what you're doing..., a website that posts public statuses of Facebook users in four categories: Who wants to get fired?, Who's hungover?, Who's taking drugs?, and Who's got a new phone number?

On their about page, they point out:

These people probably wouldn't want this info publishing, would they? Probably not to be fair, but it was their choice, or lack of, with regards to their account privacy settings. People have lost their jobs in the past due to some of the posts they put on Facebook, so maybe this demonstrates why. Efforts have been made to remove any personal data from the results, such as the actual phone numbers, surnames, etc. The data is still easily accessible from the API, the filters have been put in place to protect the site from legal issues.

The idea comes from a great performance by Tom Scott, which I'm embedding below:

A lot of the people featured on that site don't know that they've left their Facebook pages this open.  And that's the problem -- it's not enough to protect people's privacy to say, "You need to look at the privacy settings."  Facebook buries them, and sets all the defaults to "Share everything."  As a result, people who are on Facebook not because they want to stay in touch with the cutting edge of social technology but because they want to talk to their friends (read: damn near all of them) are unlikely to protect themselves.
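The kind of filtering the site's about page describes -- stripping phone numbers and surnames before publishing -- can be sketched with a couple of regular expressions. This is my own illustration, not the site's actual code, and a real redactor would need far more patterns than this:

```python
import re

# Rough pattern for phone numbers: a digit, then seven or more
# digits/separators, ending in a digit. Deliberately loose.
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(status, surname=None):
    """Blank out phone numbers and, if known, the poster's surname."""
    cleaned = PHONE.sub("[number removed]", status)
    if surname:
        cleaned = re.sub(re.escape(surname), "[surname removed]",
                         cleaned, flags=re.IGNORECASE)
    return cleaned
```

Note the asymmetry the site mentions: the filtering protects them from legal trouble, but the unredacted posts remain one API call away for anyone else.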

Facebook's new approach to email

(via Wil Wheaton on Tumblr) I thought about it again today.  I thought about quitting Facebook.

I'm getting closer every time I log on.  Today, it was because I found out that Facebook has removed my email from my page, and replaced it with the email they've decided to give me, connected to my Facebook account.  They did it without telling anybody.  They did it without asking.

I accept that it's a small thing, but it's one of the hundreds of small things that Facebook does wrong.

LifeHacker explains how to get your own email back:

  1. Click "About" on your profile and scroll down to your email addresses. Click "Edit" to change them.
  2. Click on the circle next to your Facebook email address and change its setting to "Hidden From Timeline".
  3. Click on the circle next to your other email addresses and change their settings to "Shown On Timeline".
  4. Click the Save button at the bottom of the Edit popup (Don't forget this step).


That's all it takes. It's a really quick fix, but it was a big jerk move for Facebook to do this without asking permission, or even telling you that it happened. Spread this info around so people don't get stuck without any contact information, too, lest we lose the one aspect of Facebook that was still useful.

I didn't quit today because when I logged on, there was a message from my girlfriend.  That was enough of a heartstring-tug to stop me from deactivating my profile right now.  But I don't feel good about it, and I don't think I'll be sticking around on Facebook much longer.

Huh. Facebook actually did go down.

I thought I was suffering weird, site-specific connectivity problems yesterday, when I couldn't get Facebook to load in my browser or on my phone.[1. Despite the accusations of my family and less tech-savvy friends, I am not a computer person.]  It occurred to me to Google it, and I discovered at least one site claiming that Facebook had gone down. Still, it seemed overly optimistic to believe the whole site was down, everywhere.  But I guess I was mistaken -- Computerworld reports that Facebook was, in fact, down on Thursday, and that Anonymous is taking credit.

Yeah, I have a Facebook.  I barely use it, but I have not yet taken that leap of integrity and switched entirely over to a more varied set of social connectivity tools.  I'm hoping that the internet makes it easy for me, and Facebook just MySpaces soon.  There are inherent risks in having a single, dominant social network.  It's not a good thing.

Still, I have mixed feelings about Facebook going down because of Anon attacks. It's tough to justify breaches of the internet's social contract like the ones Anonymous represents without arguing in a way that could just as easily justify attacks against yourself.  On the other hand, it's not as though Facebook isn't using underhanded tactics to preserve its own place in the market.

On that topic: It's Facebook election week

According to, Facebook is holding an election today for a vote on whether to switch to a new policy. Apparently, the biggest issue is the data policy:

So what’s different about the new privacy rules? Comparing the current and new rules side-by-side, one thing jumped out at us: the new Data Use Policy. It contains an expanded list of activities in which user data can be collect by Facebook — whether you’re interacting with an app or something else on the site.

I voted to keep the old rules.  Voting continues through next Friday, June 8, at 9am Pacific Time.  That's noon Eastern.

Here's a link to vote.
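For anyone double-checking that time-zone conversion, Python's zoneinfo module agrees that 9am Pacific is noon Eastern on that date (a throwaway sketch of mine, nothing from Facebook's announcement):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The voting deadline: June 8, 2012, 9:00 am Pacific.
deadline_pt = datetime(2012, 6, 8, 9, 0,
                       tzinfo=ZoneInfo("America/Los_Angeles"))

# Both zones observe daylight saving in June, so the offset is 3 hours.
deadline_et = deadline_pt.astimezone(ZoneInfo("America/New_York"))
```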