Thursday, December 5, 2013

The evolution of the villain in video games

     My paper will focus on, as the post's title suggests, how the role of the villain in video games has evolved in complexity over the past few decades. For the purposes of the paper and simplicity (and the sake of not spending more time and disposable income than I have handy), I'll stick with referencing specific games I've actually played, and making generalizations therefrom, rather than trying to encompass a broader sampling across platforms I don't or haven't owned and subgenres I don't play.

     First, I'll examine the video game as a tool for imposing and relaying narrative structures. In the earlier days of gaming, the games were simple (in complexity with regard to graphics, programming, and similar attributes, not necessarily in terms of the skill needed to achieve victory), as were the stories they conveyed. Take Space Invaders or Galaga, for example. Wave after wave of alien ships descend, you shoot them, they attempt to shoot you, and that's about it. There's no narrative coordination worth mentioning, just the pre-imposed condition of shoot or be shot, without any development on either the hero's side or the villain's.

     As a child, in a generation where growing up with video games was still a new-ish thing, that was novel enough to suffice. As I've aged, however, my demands for more story have risen, much as I demand more, generally, from other sources of entertainment: novels, movies, television, etc. Currently we demand games that offer a more grown-up feel, in terms of narrative structure. This springs from a desire to legitimize gaming as an adult pastime, rather than the poor, child-associated obsession it used to be (and by some, may well still be) viewed as. Video games, like movies or television, have evolved to become (so I tell myself, rationalizing like mad) an art form, telling intricate, memorable tales as well as simple ones. Like the written narrative, they encompass generations rather than excluding them.

     I'll mention that the role of the hero is left two-dimensional to some degree by design, as it is essentially just a rough sketch, an outline that we fill in ourselves. The villain, by contrast, is (or should be) a complete and fleshed-out personality, with its own psychology. I'll note Kant's theory of evil, and that said theory offers the opinion that what we see as evil is rarely so in the eyes of those performing the actions. Rather, it can be seen as a degree of selfishness greater than we generally allow ourselves, as a society, to indulge in. The selfishness comes through objectification and dehumanization, of course, but lacks, within the eye of the perpetrator, the moral wrongness that we associate with evil, instead often encompassing a ruthless sense of necessity.

     I'll mention some of the various villain tropes lending credence to the concept of villains with psychology, ones we see already in other media and indeed in life. I'll identify with those tropes various villains that fall in the category from games I've played. In general, I'll tie these elements together with the concept of advanced narrative structuring that we desire from our imagination-stretching pastimes.

Thursday, November 21, 2013

Convergence culture

     As Jenkins notes in his work, convergence culture has been with us for some time, as the world has grown more multidimensional in terms of the technology we use and the demands we make of that technology. Our phones are no longer just phones, even those comparatively unsophisticated versions we hardwire in and dub "land lines." Certainly, they make calls, but even for the land lines there's an increasing degree of complexity in the behind-the-scenes process, alongside a decrease in complexity for the end user. Analog signals have fallen by the wayside in telecommunications, in favor of (at least at points) digital translation. Our words over the lines are broken into digital packets, delivered, and reassembled in (ideally) the same order as they were offered, through a series of difficult-to-follow paths, packet switches, and so on. Given that the land line itself now has become more the back-up option to the default preference we offer cell phones, we see how such a shift speaks volumes as to what we prioritize in our efforts to communicate. We demand constant accessibility (though some do bemoan it, often on the device they purportedly lament), accessibility to others and from others.
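That split-deliver-reassemble cycle can be sketched in a few lines. This is a toy illustration of the idea, not real telephony code: each chunk carries a sequence number, the "network" may deliver chunks out of order, and the receiver sorts them back into place.

```python
import random

def to_packets(message, size=4):
    """Split a message into sequence-numbered chunks ("packets")."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Restore the original order using the sequence numbers."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("hello over the wire")
random.shuffle(packets)  # packets may arrive in any order
assert reassemble(packets) == "hello over the wire"
```

The end user never sees any of this, which is the point: the complexity migrates behind the scenes while the experience of "making a call" stays simple.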
    
     Add, then, the features of smart phones, and their similar displacement of "dumb" cell phones, and we see additional qualities demanded from our hardware. It isn't enough just to communicate from wherever to wherever; we want to be able to communicate in different ways through the same device. On the same phone, one can call traditionally, or initiate a video or audio conference over the internet, or send text messages and still images. We can choose alternate means for the latter, using social media applications to send our thoughts and pictures and videos, not just to one, but to many at once. We're always on the grid, so to speak, and producing content to display on that grid, from the banal to the sublime.
    
     Phones are an obvious example, but other traditionally "dumb" appliances now have had an intelligence boost. Smart televisions feed our viewing habits to advertisers, who in turn target us with more specific advertisements. Some models even send information on the peripherals plugged into the television, and information on those secondary devices themselves, such as thumb drives. Some, even more disturbingly, have cameras built in that (some fear) will activate and offer advertisers live images of us, from which they can derive, through our clothes, surroundings, etc., what content would most likely appeal. Our game systems, as Jenkins mentions, are no longer solely platforms for games, but offer services as varied as the phones described earlier, allowing movie streaming, data storage, audio playing, and so on. Game players are, in some cases, encouraged to generate their own content and make that available to other gamers, as in the case of user-generated challenges in Dante's Inferno.
In short, we become more abstractly and more directly involved with each other as the lines of media blur and the delivery systems merge. While we still don't have the universal "black box" that Jenkins mentions (and likely won't, likewise as he predicts), we do have technology that allows for multiple roles to be fulfilled within a single framework. We become more integrated with our tech, and our tech grows to reflect our perceived needs. In this culture of human-machine evolution, we're not cyborgs yet, but we're getting there.

Tuesday, November 5, 2013

Existenz, Stelarc, and the coming Singularity

Existenz - The most interesting and critical concept of the film, to me, wasn't broached formally as a subject, but was implied from context at the very end. I'm referring to the Transcendenz programmer's discomfort with the way the game unfolded, its anti-gamer/gaming message. This implies that somehow, the technology of the time represented not only takes feedback from user action, but also from user thought. Games shaping themselves around our actions is nothing new, conceptually, and across the spectrum of gaming, from analog board games to the highest-end virtual worlds, the difference in the feedback from action is simply one of degree rather than kind. The idea that a game can read and interpret our thoughts, however, and subsequently alter itself to fit those thoughts, is both intriguing and disturbing in the possibilities derived from it. That, to me, would make for the most truly immersive gaming experience, given that a world that caters to us on an individual, unvoiced-desire level is a world which we are unlikely to want to leave.

Of course, the idea that thought is indeed readable by machines itself implies that thoughts have some heretofore unknown physical properties which the machine can pick up on. Examining that, a possibility implied is that thought, by being made hardware, can then be changed by changing that hardware. To give us new thoughts (and, by presumed extension, new desires, memories, etc.) it just becomes a matter of altering whatever physical form those thoughts take, or possibly how the brain itself reacts to said thoughts. Breaking things down to a chemical level, we've discovered that thoughts of depression can be caused by a chemical imbalance, the brain getting not enough of one thing or perhaps too much of another.
If the brain reacts to thought chemically, that is to say if thinking certain things causes certain chemical changes in the brain (which we know happens, in that happy thoughts/memories cause certain parts of the brain to become more active, likewise unhappy thoughts, sexual thoughts, language, etc), then to disconnect (or reroute) one's reactions, one doesn't need to change the thoughts themselves, just how the brain receives/perceives them. Simple, right? Granted, that's not a true interpretation of or alteration of thought, but it seems to be hypothetically viable as a work-around solution.

Stelarc - His works are interesting, and disturbing to a degree. The flesh hook suspensions, surprisingly, less so than the spidery exoskeleton. That thing just brings to mind...well, spiders, already not pleasant, and one of the more annoying enemies from the game Doom 2. The Second Life arm avatar video was just annoying. Performance art is all well and good, but six minutes or so of watching an older gentleman wave his hands around so his SL avatar can move just one arm, while the other hangs limply and gets hit periodically by blocks, causing repetitive and discordant sounds, isn't my idea of a good time. But then, I'm not an artist.

The Singularity - From what it sounds like, the term "intelligence" isn't accurate to describe the shift forward. Increased processing capability feels more accurate. There may well be a computer that can make connections faster than humanity based on a series of logical steps, but many advancements come not from rote work or even logical extensions, but from intuitive leaps. Intuition, and other factors that inform intelligence (as I conceptualize it), can't be hard-coded. Computers can't guess, because computers can't imagine. They can't imagine because they can't desire. They can't desire because they can't feel. And they can't feel because they aren't organic, chemical-based creatures. Referring back to the initial paragraph, we're all bags of chemicals (but not, I believe, only that), and the interaction thereof shapes us. Without glands, how can anything feel the impulses that are gland-driven, such as love, hate, fear, passion, and so on? Cybernetic intelligence enhancements to organic brains seem a more likely outcome, long-term, than any strictly-mechanical intelligence surpassing a purely organic brain. We are analog creatures, and the basis of our inner selves is analog, based on analog input and offering analog feedback. A digital consciousness, while conceptually faster and simpler with digital feedback, cannot surpass an analog one in terms of perception, ability, etc., because there just don't exist the protein-based means it needs. A computer cannot strive, fear, or die. Thus, a computer cannot progress, save to the limits of the hardware it's housed in.

Thursday, October 24, 2013

Playin' games

     The immersive game I chose to indulge in was Neverwinter Nights 2, an RPG based on the Dungeons and Dragons series of worlds. I define this game as immersive for several reasons. Aside from the standard "cut things up/shoot things/burn things" style of play expected with any game where combat happens, there's a solid core story that unfolds as the player works through. The story could have easily worked just as well as a novel rather than a gaming experience, with a good, high-fantasy-driven plot, memorable and distinct characters/personalities, dramatic (for a given value of fantasy drama) events and surprises, and similar aspects that make the genre enjoyable.
     Aside from but worked into the story there's a degree of immersion to be found in the character itself. As with many of this genre of game, the main character can be created by the player, offering up choices to suit one's preference such as gender, race, physical shape, job (by which I mean a range of options on what it is your adventurer actually does, from the old favorites of wizard, thief, fighter, and cleric to more specialized classes such as duelist, swashbuckler, divine champion, etc), and even down to the specifics of hair, skin, eye, and minor accent colors. You may devise your own history, and choose what skills a character has (though of course some jobs and races are more attuned to certain skillsets), and help the character grow into a unique and powerful force within the context of the game.
     A fun element of the gameplay is where your character falls on the good/neutral/evil spectrum. There are nine distinct alignments, and choices within the game can affect yours to a degree that there are consequences for frequently flouting the moral system you choose to live by. Some jobs can only be followed by certain alignments, for example, so to start with one of these jobs and then lose it through poor roleplaying of your character is possible (and frustrating, after spending hours building them up to a useful degree).
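The drift-and-consequences idea above can be sketched as two sliding axes (lawful/chaotic and good/evil) whose combination yields one of the nine alignments. This is a toy model of my own devising; the axis names match D&D convention, but the numeric thresholds and shift values are assumptions, not NWN2's actual rules.

```python
LAW_AXIS = ["chaotic", "neutral", "lawful"]
GOOD_AXIS = ["evil", "neutral", "good"]

class Character:
    def __init__(self):
        # Start dead center on both 0-100 axes: true neutral.
        self.law, self.good = 50, 50

    @property
    def alignment(self):
        """Map the two axis scores to one of the nine alignments."""
        return (LAW_AXIS[min(self.law // 34, 2)],
                GOOD_AXIS[min(self.good // 34, 2)])

    def choose(self, law_shift=0, good_shift=0):
        """In-game choices nudge the character along the axes."""
        self.law = max(0, min(100, self.law + law_shift))
        self.good = max(0, min(100, self.good + good_shift))

pc = Character()
assert pc.alignment == ("neutral", "neutral")
for _ in range(3):
    pc.choose(good_shift=-20)  # repeatedly rob the villagers
assert pc.alignment == ("neutral", "evil")
```

A class-restriction check then becomes trivial: if a paladin-style class requires ("lawful", "good") and the character's alignment drifts away from it, the class features lock out, hence the frustration of poor roleplaying after hours of building a character up.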
     The graphics, being of relatively recent vintage (2006), are excellent for the time, as is the soundtrack, with the latter supporting the former strongly. Traveling through a forest looks and sounds like it should, caves and dungeons offer distant dripping noises, combat gives powerful musical crescendos. The voice acting of the game is of a good quality, lending it at times the feel of an interactive movie. The game functions on multiple levels to create a distinct world and, while keeping you within the bounds of the story, to help you experience that world in a meaningful way, regardless of how you choose to play it.

Tuesday, October 22, 2013

Do games count?

     Obviously they count. Games are, to a greater or lesser degree, immersive fiction. The level of immersion as regards the fiction, and as regards the mechanics of the games themselves, varies, of course. The fundamental premise of putting you into a situation where events happen in a non-real-world way is the core of fiction, even for games as simple, narratively speaking, as Pong. Leave aside the fact of why you're playing, in a narrative sense, as there's no "you are a ping pong champion defending your title from those evil communist Russians" or similar elaborate backstory. Just accepting the fact that you are playing on a virtual field makes the concept count within the realm of digital humanities.
     Granted, a richer, more immersive story can lend more to the experience, which is why ARGs like Year Zero fit well into the area of study. Taking reality as we know it and pushing it that little bit, adding that layer of fiction to it, gives a chance to redefine how we react to both the new "reality" and the original. There's a very simple cell-phone application, Zombies, Run!, that essentially takes advantage of the mapping/GPS capabilities that smart phones possess. There's nothing fancy to it; it just takes a map and adds in a number of zombies that move towards you at a variable speed. A simple, elegant concept, and one which uses what was there and incorporates what isn't in a real-time environment. Now, instead of going for a morning jog, you're trying to outpace the ravenous dead.
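The pursuit mechanic described is simple enough to sketch: each tick, a zombie moves a fixed distance along the straight line toward the player's current position. This is a minimal illustration of the concept on flat coordinates; the actual app presumably works with real GPS latitude/longitude and more elaborate logic.

```python
import math

def step_toward(zombie, player, speed):
    """Move a zombie one tick toward the player's (x, y) position."""
    zx, zy = zombie
    px, py = player
    dist = math.hypot(px - zx, py - zy)
    if dist <= speed:  # close enough to catch the runner
        return player
    # Unit vector toward the player, scaled by this zombie's speed.
    return (zx + speed * (px - zx) / dist,
            zy + speed * (py - zy) / dist)

zombie, player = (0.0, 0.0), (10.0, 0.0)
for _ in range(5):
    zombie = step_toward(zombie, player, speed=1.0)
# After five ticks at speed 1, the zombie has closed 5 of the 10 units.
```

Vary the speed per zombie and feed in the runner's live position each tick, and you have the whole game: keep your jogging pace above the zombies' speed or get caught.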
     Zork is an excellent example of how a game can count, both from the player side of the screen and the programmer side. From the end-user perspective, you wander around textually, collecting items, avoiding pitfalls, watching out for that damned grue, etc. Items held can have uses, often very specific uses to help you progress further, similar conceptually to (and predating) the more graphics-oriented Shadowgate. Combat isn't a key feature, and rarely happens. Rather, puzzle-solving is the focus. The programming side is where things get interesting, though, as now a game designer, another human, is required to anticipate the interactions the user will have, and code the likely responses into the game. Elements come into play beyond sheer coding, such as grammar and syntax (users must issue commands in an understandable way such as "look at object" or "see object" for the game to process the commands), human expression (typing "yell" or "scream" causes a frustrated cry, which I've given frequently while playing), simple memory and sense of direction (how far north can I go, anyway?) and so on. The programmer in essence has to write around the possible foibles of the player, and work them into the game, either allowing the player to proceed despite them or forcing them to take action to make their desires more clear. So video games, to a degree, become psychological tools as well. This is more apparent in current times, of course, where games, like movies, employ visuals, soundtracks, and other sequences of data to try and set the mood the designer wants, but Zork is a perfect example of how even in the earliest days of video games the games themselves represent significant objects within the digital humanities field.
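The verb/synonym anticipation described above can be sketched as a tiny parser. The vocabulary and responses here are hypothetical stand-ins of my own; Zork's real parser was far richer, but the principle is the same: map many phrasings onto one canonical command, and respond to the foibles you can predict.

```python
# Map alternate phrasings onto canonical verbs the game understands.
SYNONYMS = {"see": "look", "examine": "look", "scream": "yell", "shout": "yell"}
FILLER = {"at", "the", "a"}  # words the parser quietly discards

def parse(command, inventory):
    """Interpret a player command against a set of known items."""
    words = [w for w in command.lower().split() if w not in FILLER]
    if not words:
        return "I beg your pardon?"
    verb = SYNONYMS.get(words[0], words[0])
    if verb == "yell":
        return "Aaaarrrrgggghhhh!"
    if verb == "look":
        if len(words) < 2:
            return "You see nothing special."
        noun = words[1]
        if noun in inventory:
            return f"You examine the {noun}."
        return f"You can't see any {noun} here!"
    return f'I don\'t know the word "{words[0]}".'

assert parse("look at lamp", {"lamp", "sword"}) == "You examine the lamp."
assert parse("see lamp", {"lamp"}) == "You examine the lamp."
assert parse("scream", set()) == "Aaaarrrrgggghhhh!"
```

Everything the designer fails to anticipate falls through to the "I don't know the word" response, which is exactly the write-around-the-player's-foibles problem: the game either lets you proceed despite an odd phrasing or nudges you to restate your intent.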

Tuesday, October 8, 2013

Reactions to pieces from Young-Hae Chang Heavy Industries

Not friendly to epileptics.

That said, all three pieces share a similar structure, i.e. too-fast-to-be-read-comfortably text combined with strobe-y screen flashing and sense-of-urgency-inspiring music. I can get why the poems display at the speed they do, given the music that they try to keep time to and match rhythm with. The problem is that I loathe someone else dictating the speed at which I read. The poems try to convey urgency, force. That's fine. But if I blink, glance away for a split second, pause to light up a cigarette, or whatever, and things have gone three screens away, losing the thread for me...that's just a pain. I don't want to sit through an unpausable video, which is what the poems essentially are, multiple times to feel like I "got" everything that went into it, captured all those little, quickly-vanishing phrases.

"Lotus Blossom" rates special mention for actually playing into the strobe action of the poems, corresponding that annoying quirk of programming with its frequent mentions of flickering lights. The use of subliminal-like phrases scattered throughout what I'd consider the "main" text also adds to the hypnotic quality. All of the pieces almost seem to be aspiring to that hypnosis, especially "Dakota" with its pounding percussion, as if seeking to deliberately induce a trance state. I'd enjoy reading these as traditional text, but I confess doing so would probably lose something that the chosen medium seeks to add. "The Sea," with its stream-of-consciousness wandering, was for me the most enjoyable to read in terms of textual content.

It's possibly a dated view, but I like my text pinned down, like butterflies to a board, so that it can be appreciated. Granted, it may lose the animus when being presented like that, but it does make it more comprehensible. Literature shouldn't be a moving target.

Monday, September 30, 2013

An introduction to my Google Maps essay

    

     I love the word "ephemera." It's pretty, uncommon, and both the title and an accurate description of my essay. Which isn't really an essay, per se, in the sense we think of essays. Which is to say that it isn't a linked, cohesive network of naturally-progressing ideas, built upon a foundation of reason, narrative causality, or the other things we generally associate with essays. It's essentially a lot of blather, a few contemporary-ish literary references, some memories, and bad jokes, set to geography. The geography is for the most part incidental to the essay, but it gives you something to do, with the clicking around and zooming and whatnot.

Orange gets off pretty easily, color-coding-wise. Why yes, pun intended!
     I like this concept of a map-based essay specifically because it lends itself to a certain freedom. We default, I think, to linearity in our prose, because when all you have is simple text to convey something, any sense of imposed order is welcome, just to make sense of things. With the visual cartographic spread of a Google Map, however, we're afforded a degree more freedom, both from linear conceptualization and from relying solely on words to get the point across. The map as presented wasn't written in the order the pins/shapes are labeled. I wrote them as they came to me, and then went back and shuffled bits around to (hopefully) create a flow that is navigable and pleasing.

For the hipsters...
     The experience was actually a bit like coding. While written code does generally require some logical progression, it jumps around a lot too, calling different procedures and functions and subroutines. Some of that may be thought of as mimicked in my piece, given that there are a few parts that directly link to and rely on each other for coherence, though not to the whole. Coding as a practice is mostly rhizomatic, and this piece lives up to that, I feel.

Hail Discordia.
     As I never actually saw any options to use the rich text editing for my map creating, I don't have the links or pictures that I'd like showing. So I'll take the time here to link the things I couldn't there, namely hanky code and Pandemic 2 (and the latter's meme). Also, I've thrown in some pictures, some relevant and some not, since the last few posts have been lacking in visual stimulation. Also, you should read the Illuminatus trilogy. Aside from getting the fnord reference, or the picture on the right, it helps to have read something so sublimely odd. Good for the brain.
    

Monday, September 23, 2013

Reactions to The Silent History, "Reagan Library" and "That Sweet Old Etcetera"

The Silent History - Various

An intriguing blend of storytelling, story-creating, and interactivity. This is possibly one of the best digital objects I've seen, from a production standpoint. The real-world feel of it as a narrative combined with the ability to both access field reports at certain locations written by others and to create reports of one's own for others to access is clever, allowing the underlying story to remain but be supplemented in as many ways as there are extra tidbits to read. Fully crowd-sourced fiction can be sporadic in quality, as in any collaborative project involving dissimilar people and styles thrown together, but the core narrative is amazing as a stand-alone. It's beautifully put together in a technical sense. The blend of traditional text and pseudo-presentation/documentary mixes well, lending it a credence beyond standard fiction. The sheer spectrum of options open to this medium adds something to the art of storytelling that print-only narratives cannot match. Not better, but as good, certainly, and with amazing potential.

"Reagan Library" - Stuart Moulthrop

This would have gone better, I think, if it were meant for a modern platform rather than IE 2 or Netscape. The description makes it sound like there was meant to be audio, forgive the phrase. Also, the point-and-click on objects to move thing didn't work, so hyperlink was the only way to navigate. Still, the ability to drag the screen around to get a 360-degree view was nice. Maybe because it was so silent but interactive, the object was interestingly creepy, and the randomly-generated text at points enhanced that. The feel of an odd systemic human/machine hybrid breakdown actually rather worked given the crippled abilities of the object itself. It feels a bit like this excellent passage from an excellent book. (from 301 to 302) What it didn't feel like was that it became more coherent (or sensible) as things progressed. Just more eloquently creepy/disassociative breakdown-y.

"That Sweet Old Etcetera" - Alison Clifford

This was a little bit beautiful. The medium made for an interesting and innovative illustration of the message. The interactivity, the musical tones, the lovely construction of the poems into a landscape...all brilliant. Hard to read at times, but in line with the spirit of Cummings' style. Can be a bit tricky to navigate without cheating using the tab button, or at least the swaying tree part was, but even that makes it a little playful, in keeping with the tone of the object. This is as much visual art as it is poetry, and it makes me think of the points brought up in Goldsmith's "It's Not Plagiarism. In the Digital Age, it's 'Repurposing.'" The core poetry may not be new, but the presentation of it certainly is, and all the better for being presented in a way commensurate with the message.

Wednesday, September 18, 2013

Reactions to the week's readings, praise and snark dispensed judiciously

"My Body a Wunderkammer" - Shelley Jackson

Clever and playful. Jackson's hyperlink-driven text reads like a series of narrative Wikipedia essays, in that the links come at random intervals within the context of the specific essay being read, but tie into the essay they link to. Her imagery is strong, personal, relatable, and gratifying. The idea of the body broken down to parts, with each not only having its own story, but tales that interweave with the other parts, just makes sense as a medium. After all, our pieces may have specific design purposes, but still rely on other parts. This piece is charming. As separate essays, each link stands alone well, and as a read-through/click-through whole, each integrates nicely. Any negative critiques I have are related to the coding, not the content, and never mind the hair-splitting about the coding being the content, etc. You know what I mean, and I see the two as distinctly separate. I'm old-fashioned that way.

"Index for X and the Origin of Fires" and "Neckdeep" - Ander Monson

I like the presentation of "Index" as its purported namesake. It disrupts the traditional narrative, but still lets you build a narrative from the individual "listings" that allows you to strike near the point. In this case, it also serves to make the experiences relayed more raw, harsher, since without the traditional predictable narrative it becomes harder to...well, predict...the incidents, and thus inure myself to them. Indices are presented at the end of something, which makes everything before, to which they are referenced, a series of memories, in effect. And memory is notorious in its lack of linear reliability. To relive a trauma, one doesn't generally get led down a safe narrative path, but is presented with it bluntly, across the mental face, full force. This piece succeeds at conveying that. A pity about the pictures, though. Those are just unpleasantly distracting.

As for "Neckdeep,"...eh. It's a card catalog. Clever. But Monson's favorite subject seems to be himself, and while that's forgivable when presented well, it's less enjoyable when the favorite subject is indulged in with the degree of smug self-satisfaction that radiates from the entries in the clever catalog concept. It's this, without the charm of the inherent satire of the personality portrayed.

"88 Constellations" - David Clark

Constellations are the culturally-accepted patterning of stars around culturally-accepted stories that we tell ourselves to make sense out of a universe which too often seems uncaring, incomprehensible, or downright hostile. A different set of cultural experiences would redraw the lines, but the stories would reduce down to the same few we share across the world, with minor changes to suit the tastes of the teller and the audience. "88 Constellations" is a website patterning stories from a man's life around constellations. The stories of that life are generally accepted, and told to make sense of the life surrounding them, to make that life as comprehensible to us as something uncaring, incomprehensible, or downright hostile can be. Different people encountering the subject of said stories might have told them differently, but they'd break down to the same stories in essence. I like to think that that's what Clark had in mind in his design, which is why the whole thing takes on a slightly wry, knowing tone.

"Pieces of Herself" - Juliet Davis

Obnoxious. Riotously obnoxious. Possibly meant to be? Was that the point? I feel I missed the point. If I wanted to parse content while being assaulted by random audio, especially when it layers to the point of being unlistenable, I'd still be browsing Angelfire and Geocities webpages. With the sound off, maybe I could better discern the "exploration of feminine embodiment and identity" the piece purports to offer, but given the crucial role the sound is apparently supposed to play, that doesn't seem likely to aid my understanding. Maybe this is beyond me because it just isn't my area of interest, beyond the fact of it being a digital object. Any socio-gendered discourse above the level of terms like "privilege" is pretty much beyond me, so the message of societal inscription, yadda yadda, is lost and wasted on me. Which just leaves the evaluation of this as a digital object. See the first word of the paragraph. Maybe a great message, but given the obnoxious medium, I don't feel compelled to hear it.

Thursday, September 12, 2013

Reactions to Understanding Media and "Mr. Plimpton's Revenge"


His magnum opus
Dinty Moore’s Google Maps essay, “Mr. Plimpton’s Revenge,” is a novel and interesting use/subversion of the mapping technology. While maintaining a traditional narrative structure (assuming one clicks through in the order presented), the use of Google Maps adds an element of interactivity I found appealing. Being able to pinpoint the specific locations of the story adds a degree of narrative immediacy and context to what would otherwise be a semi-charming/amusing story told over cocktails.

 
The world's newest supervillain
After reading the selection from McLuhan’s Understanding Media, two concepts stood out as salient. The first, regarding the medium being the message, is harder to wrap my head around as presented. The TV is not what’s on the TV, YouTube is not what’s on YouTube, and so on, surely. I am not what I say, the singer is not what she sings. Then I took a step back, mentally, after hearing about something where the medium literally is the message. The pleasant scent of freshly-cut grass is actually a cry for help, a Bat (Bug) Signal if you will, meant to summon assistance from predatory insects for the beleaguered blades, saving them from the ravages of ruthless fauna. In this case, the medium (smell) is the message itself (help me, Bugman!), the two inextricably bound together. This fact served a useful purpose (aside from reconfirming my sense of moral superiority for never mowing) in that it gave me a platform from which to grasp the concept as applied to us.

Scaling that up and through the lenses of abstraction and self-interest that help define human motivation, I can see the truth in that how we present information can be as important, and say as much, as the actual information presented. How we offer something can inform or define what we’re offering, be it a speech, a sales pitch, a web-based show, etc. Simple marketing theory, right? I’d just clarify and say that the medium isn’t the whole of the message, but is a necessary part of it.
 
Gravitas
Let’s take the Netflix show, House of Cards. The simple message is the show itself, an American political drama. But the medium, the vehicle by which that message is conveyed, tells us more. First, the fact that it is a show original* to Netflix, not simply a re-airing of network television, is Netflix saying “Look, we can create new content! We’re relevant! Eat it, RedBox!” Second, the use of headliner Kevin Spacey reasserts the claim, telling us, “Hey, we got that American Beauty guy! We’re a serious entertainment contender, not some YouTube-haunting kitten video stalker!” And of course Kevin Spacey is himself letting us know that, “Hey, I can still get acting jobs, even after K-Pax! I’m still relevant! Did you know I’m a Serious Actor? I play the POTUS, for crissake!”
 
Suck Dynasty
The second concept, of hot vs. cold media, I can agree with, though McLuhan's application of those labels shows the age of the work. TV may have been a cold medium in the '50s, but these days, with the diversity of messages to be found, the TV itself is lukewarm, with the messages it carries being hot or cold depending on intent and content. It can be argued that much of the available programming today leaves very little to the imagination, indeed shoving so much irrelevant crap at us (looking at you, reality television) that we drown in the banality of it. Or maybe that's just my old age. Now get off my unmown lawn.

*By original, I mean a Netflix reboot of a British show that was itself based on a book. But still, not network.

Tuesday, September 3, 2013

Digital Humanities - A Loose Personal Working Definition


What is "Digital Humanities," as a field of study? A simple answer from my beginner's perspective is the confluence of human expression and a digital medium. To elaborate on the first part, I would consider the fields inherently thought of as "humanities," notably communication/rhetoric, philosophy, language, art, music, literature...essentially, the ways in which we, as self-aware beings, try to reflect our awareness within and upon the world. As for the digital part, I interpret it to be the overall medium by and through which the other, more-traditional media are viewed, with an important note that in my view, the originating medium can be digital itself, or the various non-digital media, if the intended method of taking in the object is digital.


An example discussed in class, the animated gifs of Miley Cyrus twerking on (to?) various works of traditional art, comes to mind. While a crucial part of the piece is that artwork ("The Scream," for example), and that artwork originates in realspace (i.e., was created in, exists in, and was meant to be viewed in the flesh-and-blood analog world), the electronic addition of Miley to the pictures creates a new object, one that offers a new perspective, or subtly or significantly subverts an established one. In short, the new thing created is human expression conveying an implicit or explicit commentary through an ultimately digital medium, and meant to be viewed as such.

     Another example that seems relevant is the e-book, or at least those available on the Amazon Kindle. Users can highlight passages, quotes, etc., and the Kindle will show those highlighted sections to others reading the same book. This creates a subtextual narrative beyond the scope of the book itself, as it reveals (or at least alludes to) the thoughts of another person on the book, telling us in an interactive way something of what they feel or think. Thus a digitized object becomes a digital object. Something new is created that takes place exclusively on a digital platform.


     It is easy to see that such a movement may and will have its detractors, especially amongst those deeming themselves traditional academics. It appears to be a new discipline, and that can be frightening and disruptive to the scholarly status quo. I feel, however, that rather than being a new discipline, the field is better thought of as the same discipline, expanded slightly, utilizing new tools. Human expression, as a whole, has not changed. The vehicle for it may change, but the urge remains regardless of the means. Co-opting a thought from Robert Heinlein's The Moon Is A Harsh Mistress: "can't see it matters whether paths are protein or platinum."