Geoffrey Long
Tip of the Quill: Archives
Progress report, part 3.

OK, well, that's not quite true. (And if you haven't read my previous entry yet, then please do so to figure out what I'm talking about.)

When I left off, I was saying that I wasn't sure what I wanted to do to expand my skills and, by extension, my portfolio. That's not true. Here are my top areas of potential expansion:

  • digital video
  • video games
  • Flash animation
  • mobile computing as narrative environment
  • art for art's sake
  • interactive elements in general
That last one sounds a little weird, so I should clarify.

I've been doing a lot of web design, but I haven't been doing a lot of truly interactive art. That is, if you go through the website for the MIT Media Lab and look at what they're doing in there, you'll see all kinds of interesting digital artwork. (You'll also see a decent amount of non-earth-shattering stuff and digital nose hair trimmers, but I digress.) I've done almost none of that – and there's a reason why.

I may be shooting myself in the foot in future applications if the professor in question ever reads this entry, but this has been on my mind for a while now.

When people say "digital storytelling", they tend to mean one of two things: one, that it's non-fiction, or two, that it's some kind of user-controlled narrative. Why?

The Center for Digital Storytelling, which is headed up by Joe Lambert, deals almost exclusively with people using technology to tell their life stories. Derek Powazek's {fray} is a non-fiction forum. Dana Atchley's Last Exit was a computer-assisted autobiography of sorts. All of these things hinged on the stories being true – but when people say "storytelling" in a general context, they don't necessarily assume that the stories are true. Those stories could be folklore, ghost stories, or anything. So why assume that digital stories have to be true?

Alternatively, groups like MIT's Interactive Cinema trumpet the ever-approaching advent of audience-controlled narratives – but if you actually examine what the technology is bringing us, it's not a bunch of voter-controlled narratives; it's more storytellers. As cameras and editing suites and special effects get cheaper, we're seeing a heartening rise in indie cinema. We're seeing more guys in their garages producing sci-fi epics like Sky Captain and the World of Tomorrow, not more theaters with voting machines installed in every seat to control the narrative.

I think it might be an intrinsic component of human nature to prefer listening to others tell stories over telling our own. People aren't going to unsubscribe from HBO just because handicams are cheap enough for them to make their own TV shows. So why would an audience necessarily want to take control of a story? If I go see a Quentin Tarantino movie, I don't want Quentin popping up every five minutes to ask, "How do you want to see The Bride whack the Crazy 88?" I want Quentin to decide for me. That's why he's the director.

Now, do I think that there are still great breakthroughs to be made by hybridizing digital video and videogames? Absolutely. That's why I think there's great room for innovation in the notion of transmedia storytelling, and this is something I'd love to explore in grad school.

I just need to get in.

Comments

Speaking as a filmmaker, I love you to death, Geoff.

Sorry, had to truncate that post due to a client coming over. (A client! Yeee!)

Anyway, what I wanted to go on and say was that you of all people should not worry about getting into grad programs. Your portfolio is seriously impressive, and I know with complete certainty that your GREs will be too. You'll do great. And god knows, even if you don't get into your program of choice at first, you can always come to SCAD, rack up a few quarters' worth of MFA-level education & re-apply. That would look very impressive.


