Metatopia: a rumination

Anyone who's heard me talk about metatopia in the past couple days: this is not that post. Instead, this is a post about whether I'm confident that it's actually a good idea.

A quick summary: "metatopia" is an umbrella concept for (u|dys)topia stories, meant to highlight the difference between stories that are about (u|dys)topia and stories that simply contain speculative or fantastical governments. The idea is that, most of the time, when someone asks "Is this a (u|dys)topia?" the best answer is "Neither -- it's not a metatopic story at all."

I'm hesitating because I'm not certain that this argument is a harmless semantic distinction. I think it would definitely improve discourse around fiction, but there's another use of (u|dys)topia in discourse that I don't want to attack: describing and critiquing atrocities in the real world.

Yesterday Sabrina Vourvoulias, the leader of the Fantastical Dystopia panel I was on (she volunteered last-minute), published a blog post called "Readercon 27: Confronting the fails." I didn't say anything about the metatopia idea in the panel -- because I came up with it during conversation afterward with Michael Deluca. 

Vourvoulias's post mostly isn't about the Fantastical Dystopia panel -- it focuses a lot more on serious problems on other panels -- but my apprehension about this topic stems directly from her comments during the panel, and what I was thinking while she was talking.

She wrote,

Fantastical Dystopia, on the other hand, was really quite awful. I took on the role of leader the day before, and consequently hadn’t organized it — and it showed. I truly value everyone’s contributions under less than optimal conditions, but things never meshed for us. On the other hand, at least nothing “outright barbarous” (to, fittingly, quote George Orwell) was said or enacted by any panelist — which reportedly happened at other panels on dystopia and apocalyptic fiction.

To the point about none of the panels doing anything "outright barbarous" -- Vourvoulias explored the question of dystopia during the panel by way of describing her own experiences of having grown up in what she described as a dystopic state. I didn't have anything to add to that line of discussion, and an attempt to tie it in to the point I had last made would have functionally amounted to "Listen, I know you're talking about your extensive experience with suffering and horror, but let me tell you how you're wrong about the semantics." And that isn't what I wanted to convey at all, but there would have been no way to make that segue that wouldn't have sounded like that.

The question I'm struggling with, then, is: 

Can I make this argument for a "metatopia" designation at all, without it constituting an attack on the ability of marginalized people to talk about their experience? Regardless of whether the concept helps keep literary discussions on the rails, will it also be a tool to derail real-world accounts of suffering?

My solution to this, I think, is going to be to write the essay I was planning on writing, and discuss this specific concern with some of my professors. In the meantime I plan on not actually sending that panel suggestion until after I decide whether I really feel okay about this concept.

Confidence and wonder in political writing (a disorganized mess shaped like a blog post)

I had a thought while doing a reading for a class tonight that I almost don't want to write, because I'm not sure how to write it correctly. Instead, since it's this or "ugh IDK what to blog about," I'm writing this paragraph to highlight the fact that I'm unsure about the precise phrasing and reasoning that follow.

The line that triggered the thought, from "The Dialectics of Seeing: Walter Benjamin and the Arcades Project" by Susan Buck-Morss, was:

It is the "accelerated progression of economic crises" that keeps history cyclical.

It was the kind of statement that, true or not, takes a brazen confidence to actually write.

Now, I believe this statement to be true -- or, at least, I agree with the sentiment it corresponds to, more-or-less. But I hedged on "true or not" because it reminded me of a feeling that has in the past corresponded to some very not-true statements. 

The feeling was "I didn't know you could just say that," and it's kind of world-shaking when it happens to you. Less so for me now, in my mid-20s, having experienced that sensation every few readings for most of my adult life, but still enough that it inspires much deeper thought than my baseline.

This time, it produced a sudden, strong sympathy for the obnoxiously "edgy" teens who say everything like they're trying to be world-shakingly profound. It was a reminder of how powerful that feeling is when it happens to you, and of how appealing it is to want to be able to create it in other people.

Of course, you can still be an asshole about it, and those teens usually are: they aren't trying to share in an experience of wonder so much as they're trying to assert dominance and authority by way of that wonder, and they're generally failing either way. This is why I was nervous to write this post -- I don't want to accidentally write something that reads as "I think pretentious, intellectually aggressive teenagers are in the right," and there are a lot of ideas in this post that I haven't even begun to explicate -- like, part of this thought process was originally about how the anti-PC crowd gets their ideology in part from this feeling -- 

idk. I don't know what to blog about right now. I just know I was reminded of an important emotion and I didn't want to ignore that experience just at the moment.

Re: Eric Shouse's "Feeling, Emotion, Affect"

In my Queer Feelings class, one of the first things we read was an essay by Dr. Eric Shouse called "Feeling, Emotion, Affect," which attempts to lay out a system for understanding and discussing human internal experiences and their expression. 

There are three categories, as implied by the title -- roughly, affect is the immediate, visceral, unconscious experience; feeling is that experience once conscious and internally defined by reference to language and past experience; and emotion is the expression of the experience (including feigned expression) deliberately to others.

To me, there's a really obvious, massive hole here, between affect and feeling.

The issue is that, in Shouse's conception, an experience is either unconscious, or it is defined by language and memory.

But in my experience, there's often a huge, days-to-weeks-long stretch during which an experience is noticed and conscious -- sometimes even having qualities intrinsically tied to its being conscious -- but can't be adequately named or connected clearly to prior memories.

In this state my ability to cross-reference with language and memory is disrupted. I can think of names for the state itself -- dissociation, crisis, void -- but those names don't adequately describe the feeling itself -- and besides, the ability to tag a name to it later represents a movement from that state into the state Shouse calls "feeling," not a disproof of the state itself.

The important part is consciousness: Shouse writes, "affect is the most abstract because affect cannot be fully realised in language, and because affect is always prior to and/or outside of consciousness[.]" (Bolding mine.) He constrains these two conditions together: his taxonomy explicitly and firmly has no place for conscious non-linguistic experience.

These periods are a major part of my mental illness, and without addressing them any explication of my inner life would be deeply incomplete. 

I suspect this state exists in most or all people's experience, but for Shouse, and for neurotypical people without mental illness in general, it's probably so brief that it's easy to ignore or to fail to notice entirely. But because his taxonomy lacks this category, it would be impossible for me to use it to meaningfully discuss my internal life.

Academic hypertext

I mentioned yesterday that wikis are a great format for academic information, and I've been thinking a lot about the formats of academic writing lately. 

It seems to me that the central conflict in academic writing is the tension between providing enough information that someone who came to the text to learn can find their footing in it, and leaving enough out that it's not an intolerable slog for anyone already reasonably familiar with the subject. Some writers do a better job of navigating this than others -- and some prioritize different hypothetical audiences than others.*

There are all sorts of strategies for dealing with this. Footnotes and endnotes can nest information that might either be essential to someone who isn't familiar but boring to an expert, or extraneous nonsense to a novice but interesting to an expert. Some writers spend huge stretches of text exhaustively covering everything they can think of, anticipating that the reader will just skim past once they get the idea. Some include appendices, charts, supplementary material, etc.

But with a printed work, it always boils down to a single fundamental problem: in the end, there can only be one text. What's printed is printed, and it's up to the reader to learn how to interact with that text. The author has to decide who to optimize for, and how to give the readers on either side of that optimization the tools to make the piece work for them.

Hypertext has the capacity to deal with this problem. Works could be made intricately variable -- not in a choose-your-own-adventure way, but in a choose-your-own-depth way.

I'm imagining a slider at the top of the page that says "Jargon level." There's a check box next to it that says "Highlight," and a drop-down that says "Advanced." Slide Jargon back and forth, and the text substitutes sections of dense jargon with much longer segments fully explicating** them. Click Highlight, and all the words, lines and paragraphs that either have changed or could be changed light up. Hover over the highlighted entries and it could show you what they would be substituted with -- so if you want to see the jargon in context but need to keep checking, it'll always be right there; and if you want to read the expanded version and know what jargon you're missing out on, you can see that, too.

Under "Advanced," you could select substitutions from a list: say you struggle with the word "explicate" but otherwise pretty much get the jargon -- you can just turn on the expanded version of that one word. Or say you have a word or phrase that you frequently mess up that doesn't have a programmed alternate version: you can type that right into your copy of the book. "Ambiguated" could become "Made ambiguous," if you struggle with that kind of verb form. "1.8 billion" could get a parenthetical phrase after it saying "(1,800 million)" and "9 trillion" could get "(9,000,000 million)," if you struggle to keep track of large numbers' relationships to each other.
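Here's a minimal sketch of how that substitution mechanic might work under the hood. The term names and alternate renderings are invented for illustration, and this is just one of many ways to model it:

```python
# A minimal model of the "jargon slider": each known term carries a list of
# renderings, from full jargon (level 0) up to plain-language explication.
# The terms and renderings below are invented examples, not a real lexicon.

SUBSTITUTIONS = {
    "explicate": [
        "explicate",                            # level 0: full jargon
        "explicate (break apart and explain)",  # level 1: jargon plus gloss
        "break apart and explain",              # level 2: plain language
    ],
    "1.8 billion": [
        "1.8 billion",
        "1.8 billion (1,800 million)",
        "1,800 million",
    ],
}

def render(text: str, level: int) -> str:
    """Render text at the given jargon level, substituting each known term."""
    for term, versions in SUBSTITUTIONS.items():
        # Clamp, so sliding past a term's deepest level keeps its plainest form.
        idx = min(level, len(versions) - 1)
        text = text.replace(term, versions[idx])
    return text

print(render("We explicate the 1.8 billion figure.", 1))
```

The "Highlight" checkbox would then just be a matter of comparing `render(text, level)` against `render(text, 0)` and marking the spans that differ.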

You could set up favorites, or download other people's favorites. You could get modified versions of old texts that let you dip your toes into the complexity of the original while providing a fluid safety net to toss you a line when you need it. You could read versions of texts that are prepped to let you know that the words in a particular part mean something different in context than you expect them to, like in legal texts or very old things.

You could get books that have your trigger warnings in them, so you can brace yourself right before the relevant scene, without having to put that alert in the book for everyone who doesn't share your triggers. 

This direct, hands-on access to the structure of the text could make academic writing massively more accessible. It could also help make it more collaborative, if the authors let their books take on a life of their own.

Vitally, this doesn't ever have to harm the integrity of the original text. You could always have a setting somewhere that says "Author preferred," that gives you the version of the text the author feels best represents their view. That might be the one with all the jargon -- but I bet pretty often it would be the one with the jargon followed by the explication, because if you're going to have a version that counts as the truest one, why not the one where you get to say "This is exactly what I meant by..."?

And I would imagine it would always be obvious, when manipulating the text, whether you're applying filters that came with the book or filters you brought in from elsewhere. 

Ooh, and imagine having these in really introductory classes? Like, being handed an original Shakespeare in seventh grade, masked over to the point of being a basic, middle school level summary? Imagine being a curious kid with an interesting assignment, and being able to start clicking away at the settings, revealing layer after layer of increasing depth. Imagine being able to see, right in front of you, in your half-a-page worksheet, the whole academic landscape underlying it.

Being able to control how the text reveals itself to you means you can make yourself maximally comfortable in the text, and it means you can make yourself feel safe and confident in your ability to approach it. Furthermore, it gives you a strong, tangible sense of the degree of abstraction you're working with: because if you don't understand the core material, it helps to be able to find out the nature of that non-understanding. The summaries and explanations can help you articulate your confusion even if they can't resolve that confusion for you. 

This is probably my least clear, most convoluted post in a long time. I think I might try and rewrite it with some hypertextual elements soon.

*That is to say, a text that is inaccessible to non-experts is not automatically badly written, although that's a whole other topic that's worth attention on its own. My use of the word "inaccessible" in this footnote is deeply ambiguous.

** "Explicating" is an example of a word that would change if you slid the jargon slider on this blog post. It means to break apart and explain a piece of media that was written in a way that is clear to someone who's accustomed to the corresponding background information, but unclear to people outside that group.

A fun word game

I had a very stressful evening, so to distract myself (and play with my new pen, a fine nib charcoal black Lamy Safari, which is amazing and which I think I'll review soon) I started playing a game that's a ton of fun if you're a huge nerd.

First, you write the alphabet down the side of a piece of paper. Then, you make up plausible-sounding words for all of them. You're going for things that aren't words, but sound like they could be. Then, you google all of them to find out how well you did at actually making up new words.

I got as far as googling "G" before I remembered that I hadn't blogged, and here's what I've got so far:

  • Adrivant is someone's username.
  • Bosquire is a misspelling of a French surname, Bosquier.
  • Crainery appears to be either a first name or a last name. 
  • Dinfaile is a fake word.
  • Edile means building or construction in Italian, or is an alternate spelling of Aedile, an ancient Roman word for an inspector of buildings.
  • Finoil is a fake word, I think: a result for words in Hindi came up, but when I clicked it it showed me a page that had autocorrected to Final.
  • Gosper is a surname.

Once you get to the end of the sheet, you go back and come up with definitions for all the words that were actually fake. Then, you put them in a file and save them for when you're writing fantasy and science fiction stories.
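The scoring step of the game can be sketched in a few lines of Python. A tiny in-code word set stands in for the Google lookup here (a local word list like /usr/share/dict/words would also work), so the stand-in dictionary's contents are purely illustrative:

```python
# Split invented words into genuinely fake ones (keepers) and accidental
# real words or names. The stand-in dictionary below is illustrative only;
# the post itself uses Google searches as the lookup.

REAL_WORDS = {"quarry", "query", "quarrel", "edile", "aedile", "gosper"}

candidates = ["adrivant", "bosquire", "crainery", "dinfaile", "edile", "finoil"]

def score(words, dictionary):
    """Return (fake, real): words absent from vs. present in the dictionary."""
    fake = [w for w in words if w.lower() not in dictionary]
    real = [w for w in words if w.lower() in dictionary]
    return fake, real

fake, real = score(candidates, REAL_WORDS)
print("keep for stories:", fake)
print("already taken:", real)
```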

I may report back tomorrow on what fake words I come up with.

(Disclosure: The pen link is an Amazon Affiliate Link.)

Physical metaphors in language

I know I talk about this all the time but it's something I'm obsessed with and I had a long day and don't wanna talk about it. English has an embedded theory of physics. That theory of physics is wrong -- it doesn't correspond to, like, reality -- but it's still there, and it's how all English speakers think. (Other languages have this too. I don't know to what extent it overlaps.)

So, the X, Y, and Z axes of English -- X is left and right, Y is up and down, Z is backwards and forwards.

The Y axis is moral. Up is good, down is bad.

Z is temporal. Forward is the future, backward is the past.

X is subjective. Side-to-side, left and right, one hand or the other, are about opinion.

Sometimes it's fun to take a sentence apart and find all the physical metaphors in it that are used to convey non-physical concepts.

For example, to take a sentence apart -- I am taking no physical action with respect to the sentence above. I'm just continuing to discuss its content. But it makes more sense to describe it in terms of dismantling a physical thing that is made of parts.

To find is also physical. It can even be argued that in this case it's literal, because I do find the metaphors by scrutinizing the text of the sentence.

In "The physical metaphors in it," the word "in" is a spatial metaphor that refers to concepts of 'inside' and 'outside' that don't correspond to the nature of text on a page or words in a sentence, except insofar as we understand them as metaphorically physical things.

Convey is literally a word that means "Move." Which is an action taken upon physical things.

See? Fun!

resolution: an inquiry

Does anybody else out there on the internet really frequently think in metaphors about resolution? As in, like, 1080p, hi-def, I-can-tell-that's-photoshopped resolution. Images. I was going to come on here and say my blog was going to suck tonight, because I made a really bad decision, at a really high resolution. Like, it's a big deal, but only in a really small spot, and only if you look really closely, and if you were looking at the whole picture over a shitty internet connection you probably wouldn't be able to see it at all.

The decision was having a pint of ice cream tonight. As a result of which, I have a horrible headache and feel nauseous. It's on the extreme end of negative effects that tend to follow ice cream for me, which I could have anticipated, because it's been a busy week and I'm pretty drained and malnourished.

I bring this up because (a.) it's a slightly better blog post than "I ate ice cream and now I feel like I fell face first into a pile of rocks that smells like rotten fish" and (b.) I use this internal metaphor, like, all the time. And I frequently find myself starting to say something in this metaphor, then stopping because I'm not sure it'll make any sense to anyone else.

So -- is this a thing? Do other people think in terms of resolution, or at least think they'd understand metaphors phrased that way?

Poorly distributed complexity: stuff isn't fair

You know the phrase 'life isn't fair'? That's true, but it's an incredibly hard thing to wrap your head around. The universe isn't set up to serve the interests of living things. The ecosystem isn't set up to serve the interests of humans. Human institutions aren't set up to serve the interests of all humans equally. Even personal, individual relationships aren't always set up to optimize for the well-being of all parties.

[Comic: a whale has been struck by a harpoon; another whale says, "It's okay, Hank. I just read that the goal of ethics is to maximize human flourishing." -- Saturday Morning Breakfast Cereal, comic number 3420]


I spend a lot of time thinking about un-fairness as a founding principle of understanding life. I want things to be more fair -- and I think that's a goal that can be pursued with meaningful success.

I could do a whole series on this,[1. Other examples: it's not fair that purchasing goods and services can lead to supporting anything other than the existence of those goods and services; it's not fair that there's no way to choose political neutrality without siding by default with the current power structure; it's not fair that Western culture on the whole systematically misrepresents adulthood to children to make adults feel good about their idealized notions of the world; it's not fair that it's impossible to use language to communicate ideas clearly without leaving out important details.] but one of the unfair things that bothers me most, and most frequently, is that there is absolutely no set of relationships between:

  • How important it is to understand something,
  • How easy that thing is to understand,
  • How easy it is to get help in understanding that thing.

Like, the United States legal system is literally so complicated that it takes an advanced degree to deal with it with any significant level of competence; but that degree is very expensive, the ideas you have to learn in getting it are complex, often contradictory, and usually counter-intuitive, and everyone in the US is nonetheless required to behave in a way that corresponds in a certain way to those ideas.

Or, understanding the suffering of a marginalized group requires accepting that they face a constant barrage of microaggressions, but any attempt a marginalized person makes to testify to those experiences sounds very much like cherry-picking and can be rhetorically neutralized by actually cherry-picked experiences that a privileged person has had.

Or, we're taught to understand money in terms of a static value -- a millionaire is a person who has a million dollars, you can get rich by winning the lottery and being given a big pile of money -- when the actual functionality of money is more like a rate of flow -- a million dollars is 20 thousand a year if you want it to last 50 years, which is like having an extra household member with a poverty-level job, not like being a millionaire at all.
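The arithmetic in that last example can be made concrete in a couple of lines (ignoring interest, inflation, and taxes for simplicity):

```python
# A lump sum spread evenly over a time horizon is an annual income --
# the "rate of flow" view of money. Interest and inflation are ignored here.

def lump_sum_as_income(total_dollars: float, years: float) -> float:
    """Annual income equivalent of spending a lump sum evenly over `years`."""
    return total_dollars / years

# A million dollars stretched over 50 years:
print(lump_sum_as_income(1_000_000, 50))  # 20000.0 dollars per year
```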

And, importantly, to all three of those examples you could criticize my summary by pointing out that it's actually way more complicated than that. Which is my point.

Stuff like this reminds me why stuff like Voltaire's famous quote, "The perfect is the enemy of the good," is so important. These problems are all fundamentally un-solvable, because the universe is unfair and we've got brains shaped by evolution and there are lots of people who stand to keep a lot of money and power if these ideas stay confusing.

But knowing we can't solve these problems doesn't mean trying would be bad. The difference between any of them being 0% solved or 10% solved or whatever[2. And calculating percentage-solvedness of these problems is another impossible thing that is nonetheless useful.] and being 50% or 80% solved is a difference of a huge amount of suffering or well-being. Even individual actions by individual people contain a degree of significance that is both trivial and meaningful.

Which is a confusing idea that seems complicated or self-refuting and is hard to express using language, but it's also really important.

bulleted lists and footnotes

I use a lot of bulleted lists in my non-fiction writing. Like, I use bulleted lists in almost all of my blog posts, and in essays at school. It only just occurred to me, though, that that's an element of my writing style. There are certain thresholds that I know I pass in my mind when I'm writing. For lists, I could probably make a flowchart. If: less than three items, in paragraph. If not, if items follow a specific logical flow, in paragraph. If not, if items can be clearly organized by importance or sequence or chronology or just that they have corresponding numbers, numbered list. If not, bulleted list.
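That flowchart is concrete enough to write down as code. A sketch, with the decision order taken from the paragraph above and the parameter names mine:

```python
# The list-choice flowchart: paragraph for short or logically-flowing sets,
# numbered list for clearly ordered items, bulleted list otherwise.

def list_style(n_items: int, follows_logical_flow: bool, clearly_ordered: bool) -> str:
    """Decide how to present a set of items in prose."""
    if n_items < 3:
        return "in paragraph"
    if follows_logical_flow:
        return "in paragraph"
    if clearly_ordered:  # by importance, sequence, chronology, or numbering
        return "numbered list"
    return "bulleted list"

print(list_style(5, False, False))  # bulleted list
```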

I have a similar sequence for footnotes. Roughly, the shortest asides are separated by commas. Next up, em dashes. Parentheses for long asides or asides more tangential than the dashes imply. Footnotes for asides that become their own complete points, but would derail the piece to address in-text, or for extremely tangential asides.

I think I'm gonna try using bulleted lists more in my fiction writing, too. Just to try it out.

Hypothetical future commentary by historians on DFTBA

Sometimes it's fun to think about what future historians are going to say about phrases people look back on.  Like how "Hat trick" doesn't actually come from ice hockey (it originated in cricket), or how "Blood is thicker than water" apparently used to mean literally the opposite of what it means today.  I was thinking about this after watching Dan Brown's recent video, "Feel Free to FTBA," in which he points out that the common nerdfighter saying "Don't Forget to Be Awesome" (DFTBA) is problematic if you interpret "Awesome" in the traditional sense -- Google defines it as "Extremely impressive or daunting; inspiring great admiration, apprehension, or fear."

So I started imagining what sort of paper an internet historian might write about DFTBA, clarifying the social context for people who might otherwise misinterpret the Nerdfighter movement:

Though it seems to mean that Nerdfighters believed they should make an effort to always act or seem more important, and is often interpreted to describe Nerdfighting as a thoroughly narcissistic movement, one must bear in mind that at the time, the word "Awesome" was most commonly used colloquially to mean something like "Exceptionally morally and/or aesthetically uplifting or good," or to describe holding oneself to a higher standard of personal conduct, above and beyond what might constitute the minimum for acceptable behavior.

And, obviously, people will think it's ridiculous, because there's plenty of evidence that "awesome" could mean something really bad for a long time before the early 21st century, so why should Nerdfighters have interpreted it differently?  (Unless, like, hypothetical future people are less instinctively anti-academic.)

That was fun to write, but I don't really have any good way of wrapping it up.  Anybody know anything else that's going to take historians some effort to unpack?

Politics of language: The Bean & I discuss prescriptivism

I had a casual debate today with my friend at Bean' Alive, who I've mentioned before, about language usage, and especially whether there is, or should be, an absolute 'correct' version of English.  Subsequently, she posted this post, Prescriptivism v. Descriptivism: repressive grammar is a thing.  It's a pretty cool post.  (I think it's particularly relevant that she mentions she's a Francophile, as France actually has an official organization that acts as a governing body for the language, the Académie française.)

Poverty & stuff (And I found a new blog to follow!)

So I found a new blog to follow.  Yesterday, John Scalzi published a blog post titled Why I Wear What I Do, in which he cataloged his outfit choices, as well as the privileges they confer and the privileges inherent to being a white male that allow him to make those choices.  He was writing as a reaction to a post called The Logic of Stupid Poor People, on the blog of Tressie McMillan Cottom, who describes herself as "Woman. Friend. Daughter. Scholar. Armchair activist. Hell-raiser. Intellectual Catfish.* Not particularly in that order." She appears to post several times a month.  I'm looking forward to trying to keep up with it.

Here's an excerpt from the post I just read, The Logic of Stupid Poor People, which explores the rational but hard-to-measure reasons a person living in poverty might buy something significantly more expensive than the most aggressive stoicism might require.

[...] Another hiring manager at my first professional job looked me up and down in the waiting room, cataloging my outfit, and later told me that she had decided I was too classy to be on the call center floor. I was hired as a trainer instead. The difference meant no shift work, greater prestige, better pay and a baseline salary for all my future employment.


I have about a half dozen other stories like this. What is remarkable is not that this happened. There is empirical evidence that women and people of color are judged by appearances differently and more harshly than are white men. What is remarkable is that these gatekeepers told me the story. They wanted me to know how I had properly signaled that I was not a typical black or a typical woman, two identities that in combination are almost always conflated with being poor.

Things I Did Not Know: Google Definitions Edition

I was working on that troll story I've been talking about lately, and I needed to put a word that means 'prey,' or 'hunted thing,' and I knew there was one that started with a Q -- and I was pretty sure that it was the one I wanted.  I was, in fact, even a little confident that the word was quarry.  But I wasn't 100%, so I googled it.

Here's what I got:

[Screenshot: Google's first definition result for "quarry"]

I was discouraged.  What word could it have possibly been, then?  Quory? Query? Quarrel? Quincy? I explored my internal vocabulary without success, until I realized that I had seen a gray arrow at the bottom of the google result, pictured above.

I had already navigated away from the page, so I searched it again, and clicked the button:

[Screenshot: the expanded list of Google definitions for "quarry"]

There!  At the bottom!  See it?  "An animal pursued by a hunter, hound, predatory mammal, or bird of prey.  /  A thing or person that is chased or sought."  That's exactly what I was looking for -- but that isn't all!  Apparently, it doesn't even come from the same root, when used in that sense:


[Screenshot: the etymology of "quarry" in the hunting sense]

And, there's even more information: a translation option, and a graph of its use over the last 200 years.

[Screenshot: the full definition card, with translation option and usage graph]

And there's that gray arrow at the bottom again, in case I want to go back to just seeing the first definition.  (Joke's on you, Google:  I'll never go back!)  And now you know!  By which I mean now I know.  You probably already knew, because you're more computer savvy than I am, and know when and where to look for more information.  But in case you don't, here's how Google can help.


I'm bored right now, so I've decided to ramble on a bit about postmodernism.  This isn't supposed to be a Treatise or anything, I'm just exploring my own thoughts.  There could be (there probably are) a huge number of things in this post that are horribly wrong.

"There's no such thing as smurb."

Smurb is a word I've just invented. I've invented it because it's incredibly difficult to have a conversation about knowledge, objectivity, reality, or anything of the like, because people tend to come in with their own assumptions about what those words mean, and their assumptions tend to change faster than they can keep track of them.

Smurb means the experience, that humans have, where we experience actual insight into the unadulterated truth about reality. For an experience to qualify as smurb, it has to constitute an acquisition or unveiling of both (a.) truth and (b.) knowledge; it has to be verifiable -- that is, the person who experienced smurb must honestly know for sure that the thing they experienced was smurb; and it must be about a truth that Is, which means there can be no version of reality in which the subject of smurb might not Be.

Lots of people think they experience smurb. Plato's Idealism is entirely based on smurb. The notions of transcendence and numinous experience are notions of smurb. Some scientists believe that the process of experimentation is a mechanism for generating smurb -- and even more non-scientists think that's what scientists think they're doing.

When I say there's no such thing as smurb, I don't mean that the reality into which smurb provides insight doesn't exist. What I mean is, there's nothing in humans that makes us capable of having that experience, and there's nothing in the Reality outside humans that can penetrate into us to create that experience.

I've often heard people who've taken a lot of LSD explain to me that you can't truly know what color is until you've seen the colors LSD can show you. Personally, I've never seen colors that made me doubt the colors I see when I'm sober. In a casual sense, I'd say I know what colors are. These friends of mine would say that my experience of colors is inadequate, and that they know what colors are in a way that I fail to. Both of us could say that what we know of colors is smurb. Both of us would be lying, since color has nothing to do with the world outside people -- it's just a system our brains use for sorting visual data by light's wavelength. (Well, maybe not lying, but wrong. That's a different can of worms that I'd rather avoid today.)

I want to make it very clear: smurb is not the stuff outside people. Smurb is not the same thing as the noumenal world, or Plato's Forms, or qualia. Smurb is humans' access to those things. To follow Plato's Allegory of the Cave: when I say there's no such thing as smurb, I'm not saying there's no such thing as the sun -- I'm saying there's no way out of the cave.  (There are more problems with that allegory that I'm not in the mood for right now.)

The reason it's important to address this question is that, if there's no such thing as smurb, and people believe there is, they will take whatever experiences they happen to have decided are verified by smurb and bend the whole of their worldview around them.

I had about 350 more words after this point, but I think this idea is getting a little out of hand.  I'm going to call it here.  I may continue to use the word smurb in the future.

Night Vale, Lovecraft, Idea Channel, and an argument I had with one of my teachers today

We just got to the part of the history of Western civilization, in my Western Civ 1 class, where Socrates shows up!  Yay... I shouldn't have been totally surprised, but I was, that when we discussed Socrates, phrases like "Greatest philosopher ever" and "Still extremely relevant today" were thrown around.  I have some pretty strong contentions with that point, especially where Socrates fades into Plato, and the Western canonization of Essentialism and fundamental truths.

See, as the teacher told it, Socrates was the hero who freed Greece from the cynical Sophists, who believed there was no such thing as essential truth.  Now, I will grant that it's possible to get pretty cynical on that premise.  But I'm on the side of the Sophists -- at least, the ones who understood (if any of them did) that humans don't have access to unrestrained truth, and that all we have to work with are narratives that are varyingly successful at describing and predicting the reality they attempt to describe and predict.

I brought this up with the teacher after class, and we had a fun discussion in which he asked me if I thought the Pythagorean theorem would go away if nobody knew about it, and I said "Yes."  The fact that triangles have certain relationships to themselves wouldn't, but the Pythagorean theorem isn't an insight into the core truth of the universe -- it's a narrative we use to arrive at certain among those truths.
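To make that distinction concrete, here's a minimal sketch (the function names are mine, purely illustrative). The relationship between a right triangle's sides holds either way; the theorem is just one narrative for getting at it, and a library's distance routine is another statement of the same relationship.

```python
import math

# One narrative: state the Pythagorean theorem and apply it.
def hypotenuse(a: float, b: float) -> float:
    return math.sqrt(a ** 2 + b ** 2)

# Another phrasing of the same relationship: the standard library's
# straight-line distance between two points.
def distance(x1: float, y1: float, x2: float, y2: float) -> float:
    return math.hypot(x2 - x1, y2 - y1)

# Both narratives arrive at the same fact about a 3-4-5 right triangle.
print(hypotenuse(3, 4))        # 5.0
print(distance(0, 0, 3, 4))    # 5.0
```

If everyone forgot the theorem, the distances wouldn't change -- only our way of predicting and talking about them would be gone, which is all I was claiming.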

Which is why it's pretty cool that today's Idea Channel decided to help me out by talking about H.P. Lovecraft, Welcome to Night Vale, and the huge problem most people have with accepting that some things just aren't knowable.  Whole video linked and embedded below, but I particularly want to emphasize this quote:

Philosopher Graham Harman describes Lovecraft as a writer of gaps.  A gap specifically between what we understand to be possible, and what the characters are experiencing in the stories, expressed by the gap in the existence of something and the ability of language to accurately and appropriately describe that thing.

How Does Night Vale Confront Us With the Unknown? 

Here, I also want to throw out another Idea Channel video, Is Math a Feature of the Universe or a Feature of Human Creation?

There are totally people who believe math is a real thing, as Mike addresses above.  Personally, I think those people are nuts -- math is a narrative we use to describe things.  "Math comes from the human brain, and nowhere else."  Fictionalism ftw.

I will totally be sharing these videos with my professor.  But first!

The thing about essentialism, whether mathematical, Platonic, Christian, political, whatever -- well, one of the things; the line for where it starts and stops being a thing is fuzzy, and there's no real essential truth about essentialism -- is that it encourages people to believe that everything should be relatively easy for humans to understand.

If we believe the Socratic claim that all knowledge is embedded in the human mind, and that it just takes the right questions to unlock it, how do we ever understand quantum physics?  How do we even approach the question, "Can language even describe some things?"  How do we deal with incompleteness?

It also leads to a different, more cultural, problem: the danger of a single story.

On this point, I'm referring to an awesome TED talk, embedded above, by Chimamanda Ngozi Adichie, a storyteller who grew up in Nigeria.  The point I want to draw out here is that the simplification of narratives that Essentialists pursue is not just wrongheaded, irresponsible, and doomed to fail: it's also civilizationally corrosive and destructive.

Colorado campaign for Local Power makes me think about video as an information format

You know something that bugs me about viral campaigning?  I have no idea how to tell the difference between hyped-up, reductionist campaigns like KONY 2012 and stuff that looks legitimate but might be just as bullshit, like this video, "Campaign for Local Power," from Colorado:

The thing is, I can't think of any conventionally structured video content that doesn't come across as deeply, aggressively full of crap.

A year ago, I would have said that I just don't think video is a capable format for disseminating information.  But then I spent the last year watching YouTube videos, and now I'm somewhat more convinced that it's possible to create informational, and even educational, content with video.  The Vlogbrothers make a lot of videos that seem perfectly clear and not at all misleading.  So do Vsauce, MinutePhysics, CGP Grey, and Vi Hart, along with many others.

And since this is the first time I've seriously thought about this subject and not concluded that film is a bullshit medium, here are my aggressively underthought hypotheses:


Talking heads: traditional documentary and ad formats involve an anonymous narrator over video of actual stuff.  That's mostly not what happens in YouTube videos: usually you get to directly see the person talking, even when we're supposed to take what they're saying as uncontested fact.  And it's pretty much always the same person -- usually the person who did the research -- and they're around in some capacity to answer questions and engage in discussion about the topic of the video.

Infographics: when these channels do jump away from straight video of the person talking (which some of them do at all times), the content they jump to is usually some kind of animation of diagrams, graphs, math, etc.  Stuff that, while manifestly not actual, real things in the world, is somewhat less inherently biased by its context.

The YouTubers I follow might just generally cover the easier topics.  They certainly second-guess themselves, a lot, on camera, when they address the bigger questions that are harder to get clear answers about, like environmental reform.

They aren't doing some of the things that conventional video does all the time:  Get a dozen people to say the same thing, to create the illusion of widespread agreement; feed information in audio format while throwing up moody, or energetic, or otherwise emotionally charged imagery to appeal to your emotions; end with specific calls to action; beg for money or deliver their viewers' eyeballs in maximum quantity to advertisers.

Closing non-conclusions

The thing that frustrates me most about this train of thought is that the only place I can really settle is "Yeah, it's basically impossible to trust any sort of explanation of information other than your own, personal, direct grasp of things, which is susceptible to your own errors and prejudices instead of those of whoever's doing the explaining."

I still feel like there are uniquely manipulative things about video, but I'm still totally unable to put my finger on what they are.  Maybe it's just that the people who established the conventions for video in 20th and 21st century culture were so awful at objectivity that it's totally outside the sensibility of anyone with a camera to present information fairly.

What is a whistleblower?

I spent all day today at the offices of the NECC Observer, the student newspaper on which I'm Copy Editor.  Since I had time for basically nothing else today, and I wrote a whole page of news on world events, here is one of my stories.

Wikipedia (which is not a good primary source, but is a great place to get a general idea about a subject) has this to say about American whistleblowing legislation: “Whistleblowing in the U.S. is affected by a complex patchwork of contradictory laws.”

This complex patchwork is the reason that the laws relevant to Edward Snowden, and those relevant to Chelsea Manning, when defining who is and isn’t a whistleblower, are completely different. Under the Whistleblower Protection Act, federal employees have a right to submit their concerns about "a violation of law, rule or regulation; gross mismanagement; gross waste of funds; an abuse of authority; or a substantial and specific danger to public health or safety."

Under that standard, a federal employee acting with the information Snowden had could have submitted it to the US Office of Special Counsel (OSC) without fear of reprisal; however, according to a report by former federal employee Robert J. McCarthy, 98 percent of these reports are rejected. The report quotes the Government Accountability Project, saying "[t]he Federal Circuit Court of Appeals has a 3-219 track record against whistleblowers since Congress last reaffirmed the law in 1994."

There are two reasons that Snowden would not have been entitled to that protection. First, the protection only applies to federal employees, and Snowden was a contractor, so he's entitled to none of it. Second, Snowden leaked his information to the Guardian, a UK newspaper, rather than submitting it through official channels.

There is no way to know whether Snowden would have used more conservative legitimate channels if any such channels existed for someone in his position.

Chelsea Manning was subject to different regulations: above a certain rank, members of the military are actually required to blow the whistle on unethical behavior. They can be court-martialled if they don’t.

But Manning, being a Private, was below that threshold. She was bound only by the non-disclosure agreements she would have signed on joining the military, which make any kind of disclosure, even one motivated by conscience and judged to be in the public interest, illegal.

Neither Manning nor Snowden could possibly have raised their concerns in a way that the US government would see as legitimate. Though they both leaked information that, if addressed by the right person, would demand protection, both fall, on technicalities, outside the boundaries of that protection.

But the question about what to call these two is not just a matter of legal categorization. It also reflects a personal, moral conviction: did either of these two do the right thing? Was it in the public interest that the information they leaked be made known? There’s disagreement on that subject even within the staff here at the Observer.

The Associated Press has instructed its reporters to refer to Snowden and Manning as "leakers," arguing that it's not appropriate for reporters striving for objectivity to choose terms loaded with a moral judgement.

Things I can never unlearn: That vs. Which

Anyone who either (a.) doesn't care at all about grammar and doesn't want to read a post about it, or (b.) cares enough about grammar that, once they learn a new rule, they'll always notice it, should probably stop reading now, because I'm about to write about the difference between that and which.

I'm writing about this not because I think it's important that everybody always get it right, but because I have personally had a lot of reading experiences worsened by knowing the correct usage of which.  People use the word which all the time, because it's one of those words that just sounds like it ought to be correct more often.  Loads of people just assume that they're using that wrong, so they start substituting in which all over the place.

Here are some hypothetical examples of probable misuse of the word which:

  • I like cheese which is aged for more than ten years.
  • Don't open any doors which look like they might be holding back ghosts.
  • I have an aunt which won't stop texting me during dinner.

Here's what those sentences mean, if the use of which is correct:

  • I like cheese, and anything called cheese must have been aged more than ten years.
  • Doors, by definition, look like they might be holding back ghosts, so you should not open them.
  • Aunts are nonhuman creatures that text me during dinner; I only have one.

One can account for those problems by switching out the whiches with thats and whos as appropriate:

  • I like cheese that is aged for more than ten years.  (I like old cheese better than new cheese.)
  • Don't open any doors that look like they might be holding back ghosts.  (It's fine to open doors if there probably aren't any ghosts behind them.)
  • I have an aunt who won't stop texting me during dinner.  (My one particular aunt keeps texting me during dinner.  Furthermore, she is a human being, and a person can be an aunt even if they don't text me during dinner.)
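As a toy illustration of the rule, here's a sketch (the heuristic and function name are mine, and it's deliberately crude) that flags a bare which -- one not preceded by a comma or a preposition -- as a probable candidate for that or who:

```python
import re

# Crude heuristic for the that/which rule: a restrictive clause should use
# "that", while "which" normally follows a comma (nonrestrictive) or a
# preposition ("in which", "of which"). Flag any other "which".
PREPOSITIONS = {"in", "of", "at", "by", "for", "with", "to", "on", "from"}

def flag_bare_which(sentence: str) -> list[str]:
    """Return the word immediately preceding each suspicious 'which'."""
    # Keep trailing commas attached to words so we can detect ", which".
    words = re.findall(r"[\w',.]+", sentence.lower())
    flags = []
    for i, word in enumerate(words):
        if word == "which" and i > 0:
            prev = words[i - 1]
            if not prev.endswith(",") and prev not in PREPOSITIONS:
                flags.append(prev)
    return flags

print(flag_bare_which("Don't open any doors which look haunted."))  # ['doors']
print(flag_bare_which("The door, which was red, stayed shut."))     # []
print(flag_bare_which("The house in which I live."))                # []
```

A real style checker would need actual parsing, of course; this just mechanizes the "does a comma or preposition come before the which?" test from the examples above.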

Now, there's not much to gain by knowing this.  Like most grammatical nitpicks, the correct meaning is obvious something like 95% of the time, and for another 4.9% the confusion can be settled just by talking about it a little longer.

Unfortunately, now that you've learned this, at least some of you will never be able to unlearn it, and dozens of teachers, orators, essayists and bloggers whose grasp of language you once admired will suddenly become very annoying to read.

Dialects! The results of a huge survey of America on basically all the words

(via Hank Green) Business Insider has published an article called 22 Maps That Show How Americans Speak English Totally Differently From Each Other.  They pulled some of the most interesting results from this study, showing some of the strongest and most notable divides in word choice and pronunciation throughout the country.

If you're from America, I recommend going through and trying some of the differences in pronunciation.  They feel weird in your mouth, I promise.

The setting that I think I'll find most useful, though, is on the survey page itself, where you can display individual cities -- way more individual cities than I expected.  My hometown, of about 30,000 people, is on the list.  And, since I don't want to show a map pointing out exactly where I live, here's an image showing where Saint Cloud, Minnesota disagrees with the rest of the country about the pronunciation of one or more of the words roof, room, broom, and root.

[Image: dialect survey map for Saint Cloud, Minnesota]

This, I expect, will be useful in many arguments with my parents, who were born and raised about half an hour south of where they raised me, and therefore bitterly disagree with me about the correct pronunciation of many words.  (It may hurt me in some of those arguments, though, with words I pronounce either the British way or in a more generic, pan-American accent, because I dislike the way New Englanders say some stuff.)

NOTE: This post was written on 2013-06-08, but scheduled in advance, because I've started to notice that my work schedule is heavy enough to prohibit regular blogging.  I'm going to try to start scheduling posts to fill up days in which I won't be writing directly.

Looking forward to it

What does "I'm looking forward to it" mean?  TheFreeDictionary defines it as "To anticipate something with pleasure."  Wiktionary says "To anticipate, expect, or wait for, especially with a feeling of approval or pleasure.  Be excited or eager to."  And I get that it means that, culturally.  If I say "I'm looking forward to work this weekend," it sounds like I'm saying "I'm thrilled at the prospect of spending my weekend working."

But it seems to me the phrase doesn't really hold up to the sentiment.  Forward works as a metaphor for time, backward being the past and forward being the future, but I don't think it works as a metaphor for mood.  Or, if it does, every direction of mood is positive, at least within the person doing the looking:

  • Looking forward means excited,
  • Looking back means nostalgic,
  • Looking up means admiring,
  • Looking down means feeling superior,
  • Looking to the side doesn't really mean anything, but we do have a sidelong glance, which means sharing a mutual acknowledgement of understanding.

With looking back, though, there's plenty of additional information we can provide about mood:  it's not weird to say "Looking back with regret," or "Looking back uncomfortably."  In that sense, looking back just means acknowledging things that have happened.

If we take "Looking forward" the same way, then it just means "I agree that this thing will happen."  So when I say I'm looking forward to working this weekend, all I would mean is that they scheduled me, and I intend to show up.  Which is true.

Of course, idioms like "Look forward to" have a lot more to do with what everybody agrees to hear when you say it than with what the words add up to, so the truth is I'm not looking forward to work this weekend, though they do expect me to be there and I do intend to show up.  Which is all I was really trying to say.