D&D advances medical science

(via Reddit) Ed Yong, a blogger at Discover Magazine, writes about an eye-tracking study.  His post, 12-year-old uses Dungeons and Dragons to help scientist dad with his research, is about a problem in the study of human attention -- when we look at someone, do we focus on their eyes, or just on the center of their face?  It's hard to tell which.

One evening, Kingstone was explaining these two hypotheses to Julian over dinner. “A colleague had said that dissociating the two ideas — eyes vs. centre of head — would be impossible because the eyes of humans are in the centre of the head,” Kingstone said. “I told Julian that when people say something is impossible, they sometimes tell you more about themselves than anything.”

Julian, Kingstone's son, suggested that his father use pictures from the Monster Manual, a D&D book whose title is pretty self-explanatory -- and one full of creatures whose eyes aren't in the centers of their heads, which is exactly what you need to pull the two hypotheses apart.

The Reddit thread I got this post from also contained a comment linking to a picture that appears to be from the study, showing the results for three different kinds of image: human, humanoid, and monster.

The article also points out the significance of this research:

This isn’t just an academic exercise, says Kingstone. “If people are just targeting the centre of the head, like they target the centre of most objects, and getting the eyes for free, that’s one thing. But if they are actually seeking out eyes that’s another thing altogether,” he says. It means that different parts of the brain are involved when we glean social information from our peers. It might also help to explain why people with autism often fail to make eye contact with other people, and which parts of the brain are responsible.

India's nuclear reactors

Thorium fangirls and fanboys, rejoice! (Myself included.)  India is building a thorium reactor -- a type of nuclear reactor that runs on thorium, a common element, rather than the much rarer (and, in its enriched forms, more dangerous) uranium. Or rather, they're hoping to build a serious one within five years.  They're looking into it quite seriously because they don't have access to enough uranium to keep up with India's demand.  India is well known for its struggles to keep the country powered, so it's in a great position to switch over to the much more abundant thorium.

The video in Vice's article quotes Harold F. McFarlane, the former president of the American Nuclear Society:

[... I] expect that this would be the best facility in the world to obtain benchmark data for thorium-powered reactors and it can be a wonderful research tool for training new generations of scientists.

The reactor they've built produces essentially no power.  Instead, it's designed so that researchers can freely vary its parameters, maximizing the quantity and quality of the data it produces.

From Vice's article:

Thorium is abundant in India (and pretty much everywhere else), and the plant, which itself will largely be used as an experimental facility, will generate 65% of its power from the famed radioactive chemical element. DT notes that the “first AHWR reactor – with thorium for fuel — will be used to test new technologies on safety as well as on thorium fuel cycle … It will be India’s first step to embrace thorium as the nuclear fuel of choice.”

If this reactor's tests are successful, the scientists hope to provide India with sufficient energy for the next 100-250 years.  Here's the video:

Moonwater!

You know what could make space exciting again?  Like, in the short term? If the moon, that thing right over our heads, the one we can actually reach, were more than just a big, dead rock.  The fantasies of moon men are long dead, but it seems there's enough up there that we shouldn't write the moon off as a fantasy destination just yet. It turns out there's way more water on the moon than scientists previously thought -- enough that it could constitute a fantastic resource for rocket fuel.  (The form it comes in isn't so great for drinking, though.)

The Christian Science Monitor writes:

"Reservoir" does not mean a source of readily tapped liquid, the researchers caution. The evidence shows up as hydroxyl – a single oxygen and hydrogen atom representing two-thirds of a standard water molecule. Hydroxyl and water molecules are captured in tiny deposits of glass in rock and soil grains. The glass forms from heat generated when micrometeoroids slam into the surface and fuse soil grains into tiny clumps.

There are a crazy number of ways water can form on the surface of an airless rock like the moon.  Like, enough ways that I'm far less surprised now that life exists anywhere in the universe than I would otherwise have been.

Some of the water could have come as ice from comets colliding with the moon. A second source: in effect, the sun, whose endless stream of "solar wind" protons strike the moon's sunlit hemisphere. Bind a pair of those protons to an oxygen atom, readily available in the minerals that make up the lunar soil, and you get a water molecule. Finally, water and hydroxyl molecules also are bound up in volcanic rock and soil that originated as water-bearing magma in the moon's interior.
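
To put that solar-wind mechanism in symbols -- my simplification, not the article's -- the "pair of protons bound to an oxygen atom" works out, charge-balanced against the oxide ions already sitting in lunar minerals, to

    \[ 2\,\mathrm{H^{+}} \;+\; \mathrm{O^{2-}} \;\longrightarrow\; \mathrm{H_{2}O} \]

The real surface chemistry is messier than that one line, but that's the gist.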

If all it takes to have water on your large space-object is (a.) having some oxygen, (b.) being pretty close to a star, and (c.) sitting in the right band of temperature so the water stays water, I think we might have a pretty good chance, as a species, of finding somewhere to migrate when we finally wreck this planet completely.

The Windstrument, a new kind of windmill

I never really got the anti-wind-power argument that goes, "It will kill so many birds!"  Like coal and oil don't?  Does every death caused directly by the machine creating the energy count for a thousand caused indirectly by the environmental damage the machine does? Regardless, that doesn't seem to be a problem anymore:

The shape of the turbine’s blades are called conical helicoids, inspired by the design of racing sails and capable of sustaining their functionality even in fierce winds. And unlike other turbines, the Windstrument’s design disperses the air in such a way that birds don’t get sucked in. In nearly two years of trials in a wetland heavily populated by birds, not a single one was harmed.

That quote is from an article about the Windstrument, a new design of wind turbine that allows for multiple turbines on a single pillar, an arrangement called a "Windorchard."  The CoExist.com article focuses on the fact that, unlike existing windmill designs, the Windstrument is far more scalable.  It can be installed on rooftops to power buildings, or even neighborhoods, and can bring wind power to places that couldn't possibly have had it before.
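
For the geometrically curious: a conical helix -- the kind of spiral a "conical helicoid" blade wraps along -- can be written parametrically as something like

    \[ \big(x(t),\, y(t),\, z(t)\big) \;=\; \big(a\,t\cos t,\;\, a\,t\sin t,\;\, b\,t\big), \qquad t \ge 0 \]

The radius grows linearly as the curve climbs, tracing a spiral up a cone.  That's just the textbook curve, mind -- the Windstrument's actual blade profile is surely more refined than this.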

The inadequacies of science

Maggie Koerth-Baker, renowned science information superhero, has a great post up on Boing Boing about things going wrong in science, called Fraud, Failure and FUBAR in science.

Like any other job, most people are honest most of the time. Unlike other jobs, however, the culture of science has long operated on the assumption that everybody is honest all of the time — and that we always catch them when they aren't.

That's why it's actually exciting to me to see more people talking about the problems and misconduct that do happen in science.

It's a great summary of the reality of imperfection in science, an explanation of some of the reasons that it's gone unnoticed, and a list of recommended additional sources to learn about the issue.

I love it when Boing Boing (generally, Maggie Koerth-Baker) writes about this stuff, because I find it's often hard to have conversations about the problems with science that don't devolve into an unpleasant and inadequate binary: either science is awful and wrong and destroying all the truth and beauty in the world, or science is the platonic ideal of information gathering, an unchanging and unchangeable institution that gets everything right.  Annoyingly, when I try to have these conversations, people generally assume and insist that I'm on the latter side.

The truth is, science is a pretty good method for gathering certain kinds of information, and for the kinds of information it can't gather, no other method does any better.  Science could be better, but making science better is a complicated social process, and the existing problems in science include roadblocks within the system as it is.  (Like members of the current institution and fandom of science who believe it really is the platonic ideal of information gathering.)

Cool new technology: Washable microchips and see-through memory

I am very, very far from being a computer scientist (and not really heading much in that direction, anyway), so I don't really know how to work out which kinds of predictions about technology are sensible, and which ones aren't. That said, I tend to agree with the school of thought that says there must, somewhere, be a cap on Moore's Law -- the claim (which, as far as I know, has held true so far) that the number of transistors on integrated circuits doubles approximately every two years.  From Wikipedia:

The capabilities of many digital electronic devices are strongly linked to Moore's law: processing speed, memory capacity, sensors and even the number and size of pixels in digital cameras.[7] All of these are improving at (roughly) exponential rates as well (see Other formulations and similar laws). This exponential improvement has dramatically enhanced the impact of digital electronics in nearly every segment of the world economy.[8] Moore's law describes a driving force of technological and social change in the late 20th and early 21st centuries.[9][10]

This trend has continued for more than half a century. Sources in 2005 expected it to continue until at least 2015 or 2020.[note 1][12] However, the 2010 update to the International Technology Roadmap for Semiconductors has growth slowing at the end of 2013,[13] after which time transistor counts and densities are to double only every three years.

The eponymous Moore himself thought the trend would last "at least ten years," a claim he made almost 50 years ago.  And, looking at the physical limitations of the technologies we have to work with today, the cap seems inevitable.
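
To make the arithmetic vivid, here's a toy projection -- my own illustration, not from any of the articles, using the 1971 Intel 4004 (about 2,300 transistors) as the starting point:

    # Toy Moore's-law projection: a transistor count doubling on a fixed cadence.
    def transistors(start_count, start_year, year, doubling_years=2.0):
        """Project a transistor count assuming clean exponential doubling."""
        return start_count * 2 ** ((year - start_year) / doubling_years)

    # Starting from the Intel 4004 (1971, ~2,300 transistors):
    for year in (1971, 1991, 2011, 2021):
        print(year, f"{transistors(2300, 1971, year):,.0f}")

    # Fifty years at a 2-year doubling period is 2**25, about 34 million times
    # the starting count; stretch the period to 3 years and the same span
    # yields only about 2**16.7, a factor of ~100,000.  That gap is why the
    # roadmap's projected slowdown matters so much.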

But scientists keep coming up with new ways to build electronics that rely on fundamentally different mechanisms, torpedoing any estimates about the future of electronics that are based on the physical limitations of existing materials.

Here are just some of the technologies I read about today that make me feel very unable to predict or anticipate the future of technology in the next few decades:

Transient Electronics

Transient electronics [...] are a combination of silk and silicon designed to work seamlessly in our bodies and in our environments. In a new study, researchers built a thermal device designed to monitor infection in a rodent and a 64-pixel digital camera--all from dissolvable material.

As described by PopSci.com, who suggest that they might be "an eco-friendly solution for obsolescent tech."  These devices open up previously hard-to-contemplate uses for electronics.  I'm distressed to say that the first place I can imagine them being applied, outside the suggestions in the article, is in DRM and other forms of legal control -- like putting a chip in your driver's license that decays after the expiration date, without which you can't operate a car.

Flexible, transparent memory chips

Manufacturers who have been able to fit millions of bits on small devices like flash memories now find themselves bumping against the physical limits of their current architectures, which require three terminals for each bit.

But the Rice unit, requiring only two terminals, makes it far less complicated. It means arrays of two-terminal memories can be stacked in three-dimensional configurations, vastly increasing the amount of information a memory chip might hold. Tour said his lab has also seen promise for making multi-state memories that would further increase their capacity.

Phys.org writes. That passage explicitly points out that the technology involved extends the validity of Moore's Law beyond the physical limitations of existing technology.  Beyond that, the first half of the article covers the concepts involved in a very readable, understandable way, and the second half covers the context and story behind the discovery:

Yao's discovery followed work at Rice on graphitic-based memories in which researchers saw strips of graphite on a silicon oxide substrate break and heal when voltage was applied. Yao suspected the underlying silicon oxide was actually responsible, and he struggled to convince his lab colleagues.

Here's the link to the whole article again.  It's really incredible, and it sounds like the kind of sci fi I wasn't sure was ever really going to happen.

I do still think Moore's Law is going to cap off at some point.  I may not be alive when it happens, but there's only so much smaller you can make things before you hit essentially sizeless stuff.  I think.

Scary questions about medical science

Ben Goldacre has written a new book!  Which I haven't read.  But he wrote a big post for the Guardian outlining the basic point!  Which I got about a third of the way through.  But he's got a TED talk, also covering the same content! This is great, because it's much easier to watch Ben Goldacre talk than to read long articles.  So, here's that video.

The basic point is that Big Pharma is really, really awful.  Like, the drug companies' and medical journals' habits frequently kill people.

Now, this is Ben Goldacre.  He's not suggesting that we run to alternative medicine and eat plants and drink tea and take sugar pills instead of seeing doctors.  But we've hit a point in medical research where the quality of our method is insufficient to keep improving the quality of our drugs.

What happens is, (a.) journals don't like publishing negative studies -- studies that turned out to disprove or not support the hypothesis -- because they're boring, they're not interesting science; that skews the pool of evidence that reviews draw on.  So a drug with a few fluke good results and dozens of bad ones will have only the good ones published, and go to market.  And (b.) drug companies bury bad results, too.
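
Here's a toy simulation of point (a.) -- entirely my own sketch, not Goldacre's -- showing how a drug with no effect at all ends up looking good once you filter out the boring results:

    import random

    # A drug with ZERO true effect is tested in 200 trials; chance alone makes
    # a few look positive, and only those 'exciting' ones get published.
    random.seed(1)

    def trial_effect(n_patients=100):
        """Mean measured 'improvement' when the drug's true effect is zero."""
        return sum(random.gauss(0.0, 1.0) for _ in range(n_patients)) / n_patients

    all_trials = [trial_effect() for _ in range(200)]
    published = [t for t in all_trials if t > 0.15]  # journal-worthy flukes only

    print(f"mean effect across all {len(all_trials)} trials: {sum(all_trials) / len(all_trials):+.3f}")
    print(f"mean effect across {len(published)} published:  {sum(published) / len(published):+.3f}")

The published average comes out solidly positive even though the true effect is exactly zero; that gap is the whole problem in miniature.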

Particle physics is pushing towards a dramatically more open publication standard.  I think medical research needs this, too.

On the benefits and drawbacks of choice

I watched the new David Mitchell's Soapbox today, which was all about how too much choice is bad for you. He points out some very good reasons why having a lot of choice tends to make people unhappy.

It seems to me that it's perfectly sensible that too much choice makes people unhappy. I mean, there have been loads of articles and TED talks about it. But it's not necessarily that obvious, is it? So I thought I'd gather up all the information I know about it in one place.

Starting with the case FOR choice, here's Malcolm Gladwell's famous Spaghetti Sauce TED talk:

The bit at the end, about coffee groups -- "The difference between X and Y is the difference between coffee that makes you cringe, and coffee that makes you deliriously happy" -- seems like it ought to shoot a hole in the argument for less choice. But the problem with choice isn't about having 3 kinds of coffee and not knowing which one to choose; it's about having 100.

Increasing choice has a rate of diminishing returns so dramatic that it ends up reversing itself, what Barry Schwartz calls the Paradox of Choice:

But cutting all the choice entirely isn't helpful, as Gladwell covered. So, how do we decide? It seems that choosing when and how to choose is a skill unto itself, and may be one of the significant life skills of the 21st century. Here's my last video embed, The Art of Choosing:

Schwartz covered the ways in which too much choice hurts satisfaction -- Iyengar covers the ways in which it hurts sales.

There's a description somewhere on the internet, which I couldn't find again, of a wine store. They only sold 100 wines at any given time -- 50 white and 50 red, each subdivided into 5 categories of 10.  Once you figured out what kind of wine you were looking for, there was plenty of time to learn about each of the wines available and make a good, informed decision you could feel confident about, and whose consequences you could appreciate.

I wish I had more access to choices like that -- the kind of hand-holding that can help an amateur make good decisions, and develop a genuine sense of comprehension within a complex area.

That's all the good content on choosing I know off the top of my head. I hope it helps.

The internet is made of stuff

I knew the internet was made of physical things, when you work down to it, at the most basic level.  I get that; it makes sense that data needs to be represented physically somewhere. But I never really thought about how stuff-ish the internet really is until I watched this TED talk, by Andrew Blum:

It's really, remarkably physical.  It's extraordinary to see that there are parts of internet maintenance and assembly that require real physical ability.  Like, more than it takes to help move a couch.  There are swimmers who dive into the ocean and pull the internet out to connect the world together.

It's kind of scary, too, how many weak points there are.  It's good that, in most cases, the undersea connections are redundant.  But there are nowhere near as many ways to get information from New York to England as there are to get it from New York to LA.

Also, it's weirdly imperialistic.  Not in love with that aspect.  Still, it's growing, and hopefully it's more liberating than oppressive.

Survival of the fittest

The concept of survival of the fittest came up in one of my classes today.  And, as usual, the discussion entailed a number of radical oversimplifications. It seems to me that the phrase "survival of the fittest" more often undermines a clear comprehension of evolution than aids it. First, it is almost completely nonsensical to discuss evolution in terms of humans.  We most likely aren't evolving much anymore -- civilization undermines that process.  And that's a good thing.  Evolution only selects for 'better' in the very narrow sense of having children who have children.  Like other attempts to reduce success to a single measure (amount of money, sports victories, relative number of people killed at war), interpreting success in evolutionary terms works only if you snip off the part of the concept that explains why success matters.

Secondly, as in this class, the phrase is often used to implicitly or explicitly devalue people whose skill sets, and whose value, lie outside the speaker's preferred area.  In this case, the prompt was "Would you rather have a genius kid, or a kid with street smarts?" -- followed by an explanation that, sometimes, knowing too much stuff trips you up and makes you unable to apply any of it.

I want to really tear into this argument, but my phone is about to die. I will return to the blog later.

Type 3 Diabetes

My favorite kind of passage in an article about science is this one, in the Guardian's article, Alzheimer's could be the most catastrophic impact of junk food:

New Scientist carried this story on its cover on 1 September; since then I've been sitting in the library, trying to discover whether it stands up. I've now read dozens of papers on the subject, testing my cognitive powers to the limit as I've tried to get to grips with brain chemistry. Though the story is by no means complete, the evidence so far is compelling.

I'm less unambiguously enthused about the premise of the article:  that Alzheimer's Disease might actually be a form of diabetes.  According to diabetes.co.uk, which outright refers to Alzheimer's as Type 3 Diabetes, "many type 2 diabetics have deposits of a protein in their pancreas which is similar to the protein deposits found in the brain tissue of Alzheimer's sufferers."

This scares the crap out of me, because Alzheimer's is one of my biggest looming fears,[1. As opposed to non-looming fears, like spiders, car accidents and being alone in the dark.] and I don't exactly have a fantastic diet.  I'm pretty sure I'm safe for Type 1 Diabetes, but my family has a history of Type 2, a fact I know because my mother pointed it out to me every few weeks/months since middle school, which is why Type 2 Diabetes is another one of my big looming fears.

On the bright side, though I find it difficult to improve my eating and exercise habits for reasons like "I'll probably be healthier and live longer and stuff," imagining a future in which my mind slowly degrades to the point where I become a burden and a constant reminder of a lost person to the people I love, while I'm trapped knowing that I will never again think the way I used to, is sufficiently terrifying that it might stop me eating pop tarts so often.

Although, if climate change is any indication, fear of assured doom is a terrible way to motivate humans to change for the better.

Silencing wildlife

Bernie Krause, a musician and naturalist, has for the last 40 years been leaving microphones in various habitats, making recordings of the wildlife.  He's recently released a book, The Great Animal Orchestra, quoted in the Guardian:

"A great silence is spreading over the natural world even as the sound of man is becoming deafening," he writes in a new book, The Great Animal Orchestra. "Little by little the vast orchestra of life, the chorus of the natural world, is in the process of being quietened. There has been a massive decrease in the density and diversity of key vocal creatures, both large and small. The sense of desolation extends beyond mere silence.

"If you listen to a damaged soundscape … the community [of life] has been altered, and organisms have been destroyed, lost their habitat or been left to re-establish their places in the spectrum. As a result, some voices are gone entirely, while others aggressively compete to establish a new place in the increasingly disjointed chorus."

The article is fascinating, and sad.  I'm not a huge fan of nature, personally, but this approach -- showing, tangibly, the holes in the soundscape that human intervention leaves -- makes it easy to relate to preservation activists.  It's genuinely tragic to hear the world losing its songs.

I'd also like to add that this is yet another argument in favor of retreating, as a species, into massive supercities, and letting the rest of the world return to wilderness.

Study suggests math is a whole-brain activity

I'm bad at math, so when I see articles that seek to explain why some people are good at math, and those articles don't have any math obviously present in them, I'm interested.  This article, via EurekAlert, describes an investigation into dyscalculia, "the numerical equivalent of dyslexia."  So it's possible this doesn't apply to me specifically -- I don't think I'm dyscalculic, I'm just not very good at math. The study compared fMRI scans taken while subjects determined whether two groups of objects matched in number with scans taken while they confirmed the accuracy of equations presented to them.

Consistent with previous studies, the researchers found that the basic number-matching task activated the right parietal cortex, while the addition and subtraction tasks produced additional activity in the left parietal cortex. But they also found something new: During the arithmetic tasks, communication between the left and right hemispheres increased significantly compared with the number-matching task. Moreover, people who exhibited the strongest connection between hemispheres were the fastest at solving the subtraction problems.

Math is generally characterized as overwhelmingly centered in one part of the brain -- the left hemisphere.  It's a fairly substantial recontextualization, in my mind, to think of math as requiring the whole brain, the way music does.  That said, I'm not a neuroscientist -- I have very nearly no idea what I'm talking about when it comes to brain science.
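
For what it's worth, the "connection between hemispheres" in studies like this is usually quantified as something like correlated activity between regions over time.  A toy version of that idea, with made-up signals (my own sketch, not the study's actual analysis):

    import math
    import random

    # Fake 'functional connectivity': correlate two made-up BOLD-like time
    # series, one per parietal region.  A shared task signal drives both.
    random.seed(2)

    def pearson(xs, ys):
        """Pearson correlation between two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    task = [random.gauss(0, 1) for _ in range(240)]    # 240 fMRI volumes
    left = [t + random.gauss(0, 0.8) for t in task]    # left parietal signal
    right = [t + random.gauss(0, 0.8) for t in task]   # right parietal signal

    print(f"inter-hemispheric correlation: {pearson(left, right):.2f}")

The shared 'task' signal is what pushes the correlation up; pure noise in both regions would leave it near zero.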

American Belief Statistics

Browsing EurekAlert, I noticed an article titled Canadians Overwhelmingly Believe Climate Change Is Occurring.  The article claims that only 2% of Canadians deny the existence of climate change.  The survey report breaks it down further:

Canadians most commonly (54%) believe that climate change is occurring partially due to human activity and partially due to natural climate variation. One third (32%) believe that climate change is occurring due to human activity while one in ten (11%) believe that climate change is occurring due to natural climate variation or that climate change is not occurring at all.

Comparatively, according to a Gallup report this year, 15% of Americans believe that the effects of global warming will never happen, and another 15% say that its effects won't occur within their lifetimes. About half of Americans believe what scientists in the field are saying about the heat lately:  global warming is already happening.

Reading this, I got curious: what are the percentages of some other conspiracy-style, anti-sense beliefs in the US?

According to Wikipedia, somewhere in the area of 15-30% of Americans believe that the US government at least had advance knowledge of the 9/11 attack plans, and chose to let it happen.

As of 2001, somewhere between 6% and 20% of Americans believed that the moon landings didn't happen.

In 2011, after the release of his long-form birth certificate, about 10% of Americans still believed that President Obama was not born in the United States.  Among Republicans alone, it was 23%.

Also in 2011, a health poll conducted by Thomson Reuters and NPR concluded that 21.4% of Americans believe that vaccines can cause autism.

A criticism of America I hear a lot, and personally agree with, is that Americans believe they are not obligated to consider evidence when it conflicts with their views -- that facts are just as subjective and malleable as opinions.  Unfortunately, the broader cultural trends in America seem to reinforce this position.  Mainstream news media's balance principle pushes them towards giving coverage to verifiably wrong positions, and pseudo-educational media outlets like The History Channel produce shows like UFO Hunters, Ancient Aliens and The Bible Code: Predicting Armageddon.

Also unfortunately, the web as it's currently structured makes this worse.  Most people's major portals to content online, Facebook and Google, filter the content they show the user based on past trends of liking, clicking, and otherwise positively responding.  Then we all head off into media outlets that target our own demographics, pretty much exclusively.

But America is clearly getting it wrong more than the rest of the world.  Our climate change blindness -- the 15% who say never plus the 15% who say not in their lifetimes -- sits at 30%.  Canada's outright denial is at 2%.

I've depressed myself now, so I'll end it there.

Displays more 3D than ever before

It feels a little weird that I'm the one reporting this, not Mike, but the US military is basically inventing the 3D technology from Star Wars.  Their goal is to create 3D displays that can be viewed by any number of people, from any angle, without any special eyewear.

That SourceFed video, embedded above, is packed with more information about 3D technology than I knew could exist.  Apparently, a system of laser imaging called LIDAR already produces 3D image files that could be projected with this kind of technology.  There's even a name for the 3D equivalent of the pixel: the "hogel."
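
The way I picture it -- and this is purely my own toy sketch of the concept, not any real light-field format -- a hogel is like a pixel that stores a different color for every viewing direction:

    from dataclasses import dataclass, field

    # A 'hogel' as a pixel that stores one color per viewing direction, so
    # observers at different angles see different images.  Purely illustrative;
    # not any real light-field file format.

    @dataclass
    class Hogel:
        colors: dict = field(default_factory=dict)  # (az, el) -> (r, g, b)

    def sample(display, x, y, az, el):
        """The color an observer at viewing angle (az, el) sees at (x, y)."""
        return display[y][x].colors.get((az, el), (0, 0, 0))

    # One hogel that looks red head-on and blue from off to the side,
    # tiled into a 2x2 'display'.
    h = Hogel({(0, 0): (255, 0, 0), (30, 0): (0, 0, 255)})
    display = [[h, h], [h, h]]

    print(sample(display, 0, 0, 0, 0))   # (255, 0, 0): the head-on view
    print(sample(display, 0, 0, 30, 0))  # (0, 0, 255): the off-axis view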

I really shouldn't be this surprised that there's such a well-developed language around 3D display technology.  But, to be honest, I didn't think this was going to happen.  Real 3D displays are one of the sci-fi staples I sort of assumed were always going to stay sci-fi.

The weird relationship between advancing technology and the realm of science fantasy is just going to get weirder and weirder, isn't it?  I think we might be pretty much past the point, as a species, where we can reliably rule anything out unless it's been outright proven impossible.  And even for the impossible stuff, we might figure out a work-around.

Drinking and Art

Did you know? is one of my favorite Tumblr blogs, and they had a particularly good weekend.  I've had several of their tabs open for a few days now, because I wanted to blog about them.  Ultimately, the one I settled on was this:

I've suspected this for a while -- I don't think anyone can reasonably defend the claim that getting blackout drunk smooths the work of a creative endeavor, but light drinking, it seems, can actually help.  In one of my favorite New Yorker articles, Drinking Games, Malcolm Gladwell explains:

Steele and his colleague Robert Josephs's explanation is that we've misread the effects of alcohol on the brain. Its principal effect is to narrow our emotional and mental field of vision. It causes, they write, "a state of shortsightedness in which superficially understood, immediate aspects of experience have a disproportionate influence on behavior and emotion."

Alcohol makes the thing in the foreground even more salient and the thing in the background disappear. That's why drinking makes you think you are attractive when the world thinks otherwise: the alcohol removes the little constraining voice from the outside world that normally keeps our self-assessments in check. Drinking relaxes the man watching football because the game is front and center, and alcohol makes every secondary consideration fade away. But in a quiet bar his problems are front and center—and every potentially comforting or mitigating thought recedes. Drunkenness is not disinhibition. Drunkenness is myopia.

So, that's a reasonably respected reporter discussing the work of an established scientist.  The thoroughly unscientific provisional conclusion I've drawn from this is that having a drink or two while I work will probably help block out the otherwise disabling awareness that Tumblr is just a few clicks away.

TIME Magazine's article in their health section, the one to which the above Did You Know refers, goes further than that:

Increasingly, science is confirming that altered states of consciousness — whether induced by drugs, alcohol, sleepiness, travel or anything else that removes us from our usual way of seeing the world — do indeed improve creative thought. The inhibition of what researchers call executive functioning, which includes focus and planning — abilities that decline when we’re under the influence — may be what lets us generate new ideas and innovative solutions, instead of remaining fixed on the task at hand.

The study, which, thankfully, TIME actually links to, is published in Consciousness and Cognition, a peer-reviewed journal with a self-explanatory name.

The article also dips into the risks of this kind of finding -- whether attention-focusing drugs like Ritalin diminish creative thinking, and whether this contributes to the rate of addiction in artists:

Having less executive control before you even take drugs means you’ll have less ability to stop once you start.

That would increase addiction risk two ways — by increasing desire to use, and by increasing the risk from use that occurs. And of course, the more high-profile creative types who become addicted, the more it seems that drugs and alcohol must be crucial to creativity. And that itself would attract even more artists to initiate drug use, escalating the cycle.

Personally, I know I'm prone to addiction, so I weigh my decisions to drink very carefully against the various risks, especially dependence. In a world governed entirely by my own preferences, I'd have access to professionals that can help me regulate my use of chemicals to adjust my state of mind to my preference.  By which I mean, I'd be able to get prescriptions for gin and LSD from my psychiatrist.

A whole new kind of 3D image

(via Boing Boing) Some technological advancement is stuff like confirming the existence of the Higgs Boson, probing the field that gives particles mass and making huge leaps in the most basic levels of understanding -- or getting a vehicle the size of a Mini Cooper to land on the surface of another planet intact and send back high-resolution photos.

But sometimes, technology is just people figuring out clever uses for old knowledge -- noticing that something we always could have imagined is now doable, simply because our tools have finally gotten refined enough.

These may be less awe-inspiring, but I think they're way cooler to learn about -- perhaps only because I can wrap my head around the whole of their implications (maybe), but mostly just because they're so damn clever.

Scientists at the University of California, Santa Cruz have figured out how to print a 2D(ish) picture that looks 3D -- but not the 3D we're used to.  This kind of image reflects light as though it were a 3D object embedded in the paper.

I don't know how to describe what this does, because I've never seen anything like it.  In the realm of image creation, it's basically a totally new thing, and that's one of the coolest things that technology gives us.
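
The nearest familiar analogue I can think of is a "normal map" from computer graphics: store a surface orientation at every point, and the apparent brightness shifts with the light direction.  A tiny sketch of that idea (my analogy, not the UCSC team's actual process):

    import math

    # Toy 'normal map' shading: each point on a flat print stores a surface
    # direction, and brightness follows the angle to the light (Lambert's law).

    def normalize(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)

    def brightness(normal, light_dir):
        """Lambertian shading: the dot product of unit normal and unit light."""
        n, l = normalize(normal), normalize(light_dir)
        return max(0.0, sum(a * b for a, b in zip(n, l)))

    # A 'brushstroke' tilted toward +x reads bright when lit from the right
    # and dim when lit from the left, even though the paper itself is flat.
    stroke_normal = (0.5, 0.0, 1.0)
    print(f"lit from the right: {brightness(stroke_normal, (1, 0, 1)):.2f}")
    print(f"lit from the left:  {brightness(stroke_normal, (-1, 0, 1)):.2f}")

Move the light and the flat image re-shades itself the way a relief surface would -- which, as far as I can tell, is the effect the printed version achieves physically.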

Right now, the technology is at about the level of dot-matrix printers -- so out-of-date we don't even generally use it for receipts anymore -- but I can only barely imagine what it will be like to look at one of these pictures after five years of improving technology, and then five more years of falling prices.  I want a Van Gogh print that responds to light as though you're looking at the actual shape of the oil paint sticking off the canvas.  And I really want some of the art made specifically for this medium.

Here's the video about it.  Watch it.  They manage to successfully explain what they're doing.