Thursday, December 5, 2013

The evolution of the villain in video games

     My paper will focus on, as the post's title suggests, how the role of the villain in video games has evolved in complexity over the past few decades. For the purposes of the paper and simplicity (and for the sake of not spending more time and disposable income than I have handy), I'll stick with referencing specific games I've actually played, and making generalizations therefrom, rather than trying to encompass a broader sampling across platforms I don't or haven't owned and subgenres I don't play.

     First, I'll examine the video game as a tool for imposing and relaying narrative structures. In the earlier days of gaming, games were simple (in terms of graphics, programming, and similar attributes, not necessarily in terms of the skill needed to achieve victory), as were the stories they conveyed. Take Space Invaders or Galaga, for example. Wave after wave of alien ships descend, you shoot them, they attempt to shoot you, and that's about it. There's no narrative coordination worth mentioning, just the pre-imposed condition of shoot or be shot, without any development on either the hero's side or the villain's.

     As a child, in a generation where growing up with video games was still a new-ish thing, that was novel enough to suffice. As I've aged, however, my demands for more story have risen, much as I demand more, generally, from other sources of entertainment: novels, movies, television, etc. Currently we demand games that offer a more grown-up feel in terms of narrative structure. This springs from a desire to legitimize gaming as an adult pastime, rather than the poor, child-associated obsession it used to be (and by some, may well still be) viewed as. Video games, like movies or television, have evolved to become (so I tell myself, rationalizing like mad) an art form, telling intricate, memorable tales as well as simple ones. Like the written narrative, they encompass generations rather than excluding them.

     I'll mention that the role of the hero is left two-dimensional to some degree by design, as it is essentially just a rough sketch, an outline that we fill in ourselves. The villain, by contrast, is (or should be) a complete and fleshed-out personality, with its own psychology. I'll note Kant's theory of evil, which holds that what we see as evil is rarely so in the eyes of those performing the actions. Rather, it can be seen as a degree of selfishness greater than we generally allow ourselves, as a society, to indulge in. The selfishness comes through objectification and dehumanization, of course, but lacks, in the eye of the perpetrator, the moral wrongness that we associate with evil, instead often encompassing a ruthless sense of necessity.

     I'll mention some of the various villain tropes lending credence to the concept of villains with psychology, tropes we already see in other media and indeed in life. I'll match those tropes to villains from games I've played. In general, I'll tie these elements together with the concept of the advanced narrative structuring that we desire from our imagination-stretching pastimes.

Thursday, November 21, 2013

Convergence culture

     As Jenkins notes in his work, convergence culture has been with us for some time, as the world has grown more multidimensional in terms of the technology we use and the demands we make of that technology. Our phones are no longer just phones, even those comparatively unsophisticated versions we hardwire in and dub "land lines." Certainly, they make calls, but even for the land lines there's an increasing degree of complexity in the behind-the-scenes process alongside a decrease in complexity for the end user. Analog signals have fallen by the wayside in telecommunications, in favor of (at least at points) digital translation. Our words over the lines are broken into digital packets, delivered, and reassembled in (ideally) the same order as they were offered, through a series of difficult-to-follow paths, packet switches, and so on. Given that the land line itself has now become more the back-up option to the default preference we give cell phones, we see how such a shift speaks volumes as to what we prioritize in our efforts to communicate. We demand constant accessibility (though some do bemoan it, often on the device they purportedly lament), accessibility to others and from others.
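     To make that packet-switching idea concrete, here's a toy sketch in Python (purely illustrative, with function names of my own invention; real telephony protocols are far more involved): a message is split into sequence-numbered packets, the packets may arrive out of order after taking different paths, and the receiver reassembles them by sequence number.

```python
import random

def packetize(message, size=4):
    """Split a message into (sequence number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Rebuild the message by sorting packets back into sequence order."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("Hello from a land line")
random.shuffle(packets)      # different paths: packets arrive out of order
print(reassemble(packets))   # ideally: "Hello from a land line"
```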
    
     Add, then, the features of smart phones, and their similar displacement of "dumb" cell phones, and we see additional qualities demanded from our hardware. It isn't enough just to communicate from wherever to wherever; we want to be able to communicate in different ways through the same device. On the same phone, one can call traditionally, or initiate a video or audio conference over the internet, or send text messages and still images. We can choose alternate means for the latter, using social media applications to send our thoughts and pictures and videos, not just to one, but to many at once. We're always on the grid, so to speak, and producing content to display on that grid, from the banal to the sublime.
    
     Phones are an obvious example, but other traditionally "dumb" appliances have now had an intelligence boost. Smart televisions feed our viewing habits to advertisers, who in turn target us with more specific advertisements. Some models even send information on the peripherals plugged into the television, and on those secondary devices themselves, such as thumb drives. Some, even more disturbingly, have built-in cameras that (some fear) will activate and offer advertisers live images of us, from which they can infer, through our clothes, surroundings, etc., what content would most likely appeal. Our game systems, as Jenkins mentions, are no longer solely platforms for games, but offer services as varied as the phones mentioned earlier, allowing movie streaming, data storage, audio playback, and so on. Game players are, in some cases, encouraged to generate their own content and make it available to other gamers, as in the case of user-generated challenges in Dante's Inferno.
     In short, we become more abstractly and more directly involved with each other as the lines of media blur and the delivery systems merge. While we still don't have the universal "black box" that Jenkins mentions (and likely won't, as he likewise predicts), we do have technology that allows for multiple roles to be fulfilled within a single framework. We become more integrated with our tech, and our tech grows to reflect our perceived needs. In this culture of human-machine evolution, we're not cyborgs yet, but we're getting there.

Tuesday, November 5, 2013

Existenz, Stelarc, and the coming Singularity

Existenz - The most interesting and critical concept of the film, to me, wasn't broached formally as a subject, but was implied from context at the very end. I'm referring to the Transcendenz programmer's discomfort with the way the game unfolded, its anti-gamer/gaming message. This implies that the technology represented somehow takes feedback not only from user action, but also from user thought. Games shaping themselves around our actions is nothing new, conceptually, and across the spectrum of gaming, from analog board games to the highest-end virtual worlds, the difference in the feedback from action is one of degree rather than kind. The idea that a game can read and interpret our thoughts, however, and subsequently alter itself to fit those thoughts, is both intriguing and disturbing in the possibilities derived from it. That, to me, would make for the most truly immersive gaming experience, given that a world that caters to us on an individual, unvoiced-desire level is a world we are unlikely to want to leave.

Of course, the idea that thought is readable by machines itself implies that thoughts have some heretofore unknown physical properties which a machine can pick up on. A possibility implied by that is that thought, by having a physical form, can then be changed by changing that form. To give us new thoughts (and, by presumed extension, new desires, memories, etc.), it becomes a matter of altering whatever physical form those thoughts take, or possibly how the brain itself reacts to them. Breaking things down to a chemical level, we've discovered that depression can be caused by a chemical imbalance, the brain getting not enough of one thing or perhaps too much of another. If the brain reacts to thought chemically, that is to say if thinking certain things causes certain chemical changes in the brain (which we know happens, in that happy thoughts/memories cause certain parts of the brain to become more active, likewise unhappy thoughts, sexual thoughts, language, etc.), then to disconnect (or reroute) one's reactions, one doesn't need to change the thoughts themselves, just how the brain receives/perceives them. Simple, right? Granted, that's not a true interpretation or alteration of thought, but it seems hypothetically viable as a work-around.

Stelarc - His works are interesting, and disturbing to a degree. The flesh hook suspensions, surprisingly, less so than the spidery exoskeleton. That thing just brings to mind...well, spiders, already not pleasant, and one of the more annoying enemies from the game Doom 2. The Second Life arm avatar video was just annoying. Performance art is all well and good, but six minutes or so of watching an older gentleman wave his hands around so his SL avatar can move just one arm, while the other hangs limply and gets hit periodically by blocks that cause repetitive, discordant sounds, isn't my idea of a good time. But then, I'm not an artist.

The Singularity - From what it sounds like, the term "intelligence" isn't accurate to describe the shift forward. Increased processing capability feels more accurate. There may well be a computer that can make connections faster than humanity based on a series of logical steps, but many advancements come not from rote work or even logical extension, but from intuitive leaps. Intuition, and the other factors that inform intelligence (as I conceptualize it), can't be hard-coded. Computers can't guess, because computers can't imagine. They can't imagine because they can't desire. They can't desire because they can't feel. And they can't feel because they aren't organic, chemical-based creatures. Referring back to the initial paragraph, we're all bags of chemicals (but not, I believe, only that), and the interaction thereof shapes us. Without glands, how can anything feel the impulses that are gland-driven, such as love, hate, fear, passion, and so on? Cybernetic intelligence enhancements to organic brains seem a more likely long-term outcome than any strictly mechanical intelligence surpassing a purely organic brain. We are analog creatures, and the basis of our inner selves is analog, based on analog input and offering analog feedback. A digital consciousness, while conceptually faster and simpler with digital feedback, cannot surpass an analog one in terms of perception, ability, etc., because the protein-based means it needs just don't exist. A computer cannot strive, fear, or die. Thus, a computer cannot progress, save to the limits of the hardware it's housed in.

Thursday, October 24, 2013

Playin' games

     The immersive game I chose to indulge in was Neverwinter Nights 2, an RPG based on the Dungeons and Dragons series of worlds. I define this game as immersive for several reasons. Aside from the standard "cut things up/shoot things/burn things" style of play expected of any game where combat happens, there's a solid core story that unfolds as the player works through it. The story could easily have worked as a novel rather than a gaming experience, with a good high-fantasy plot, memorable and distinct characters, dramatic (for a given value of fantasy drama) events and surprises, and the other aspects that make the genre enjoyable.
     Aside from, but worked into, the story, there's a degree of immersion to be found in the character itself. As with many games of this genre, the main character can be created by the player, offering choices to suit one's preference: gender, race, physical build, job (by which I mean a range of options as to what your adventurer actually does, from the old favorites of wizard, thief, fighter, and cleric to more specialized classes such as duelist, swashbuckler, divine champion, etc.), and even down to the specifics of hair, skin, eye, and minor accent colors. You may devise your own history, choose what skills a character has (though of course some jobs and races are more attuned to certain skillsets), and help the character grow into a unique and powerful force within the context of the game.
     A fun element of the gameplay is where your character falls on the good/neutral/evil spectrum. There are nine distinct alignments, and choices within the game can shift yours, to the degree that there are consequences for frequently flouting the moral system you chose to live by. Some jobs can only be followed by certain alignments, for example, so it's possible to start with one of those jobs and then lose it through poor roleplaying (and frustrating, after spending hours building a character up to a useful degree).
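     As an illustration (a hypothetical Python sketch of my own, not how the game is actually implemented), the nine alignments can be modeled as a 3x3 grid of lawful/neutral/chaotic against good/evil, with in-game choices nudging a character along either axis, and alignment-restricted classes, like the paladin's lawful good requirement, lost when the character drifts:

```python
# Hypothetical two-axis alignment tracker; the real game's rules and
# thresholds are more involved, and all names here are my own.
LAW_AXIS = ["chaotic", "neutral", "lawful"]
GOOD_AXIS = ["evil", "neutral", "good"]

class Character:
    def __init__(self, job, law=2, good=2):   # defaults to lawful good
        self.job, self.law, self.good = job, law, good

    def choose(self, law_shift=0, good_shift=0):
        """In-game choices nudge the character along either axis."""
        self.law = max(0, min(2, self.law + law_shift))
        self.good = max(0, min(2, self.good + good_shift))
        # A paladin who drifts from lawful good loses the class.
        if self.job == "paladin" and (self.law, self.good) != (2, 2):
            print("Now %s %s: paladin abilities lost!"
                  % (LAW_AXIS[self.law], GOOD_AXIS[self.good]))

pc = Character("paladin")
pc.choose(good_shift=-1)   # one ruthless decision too many
```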
     The graphics, being of relatively recent vintage (2006), are excellent for the time, as is the soundtrack, with the latter strongly supporting the former. Traveling through a forest looks and sounds like it should, caves and dungeons offer distant dripping noises, and combat brings powerful musical crescendos. The voice acting is of good quality, lending the game at times the feel of an interactive movie. The game functions on multiple levels to create a distinct world and, while keeping you within the bounds of the story, to help you experience that world in a meaningful way, regardless of how you choose to play it.

Tuesday, October 22, 2013

Do games count?

     Obviously they count. Games are, to a greater or lesser degree, immersive fiction. The level of immersion varies, of course, both as regards the fiction and as regards the mechanics of the games themselves. The fundamental premise of putting you into a situation where events happen in a non-real-world way is the core of fiction, even for games as simple, narratively speaking, as Pong. Leave aside the question of why you're playing, in a narrative sense, as there's no "you are a ping pong champion defending your title from those evil communist Russians" or similarly elaborate backstory. Just accepting the fact that you are playing on a virtual field makes the concept count within the realm of digital humanities.
     Granted, a richer, more immersive story can lend more to the experience, which is why ARGs like Year Zero fit well into the area of study. Taking reality as we know it and pushing it that little bit, adding that layer of fiction to it, gives us a chance to redefine how we react to both the new "reality" and the original. There's a very simple cell-phone application, Zombies, Run!, that essentially takes advantage of the mapping/GPS capabilities that smart phones possess. There's nothing fancy to it: it just takes a map and adds in a number of zombies that move toward you at variable speeds. A simple, elegant concept, and one which uses what was there and incorporates what wasn't, in a real-time environment. Now, instead of going for a morning jog, you're trying to outpace the ravenous dead.
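     The core mechanic is simple enough to sketch in a few lines of Python (a toy version of my own, treating latitude/longitude as a flat plane and making no claims about the app's actual code): each tick, every zombie moves a fixed distance along the straight line toward the runner's latest GPS fix.

```python
import math

def step_toward(zombie, runner, speed):
    """Move a zombie (lat, lon) a fixed distance toward the runner."""
    dlat, dlon = runner[0] - zombie[0], runner[1] - zombie[1]
    dist = math.hypot(dlat, dlon)
    if dist <= speed:
        return runner                      # caught!
    return (zombie[0] + speed * dlat / dist,
            zombie[1] + speed * dlon / dist)

runner = (40.7128, -74.0060)               # runner's current GPS fix
zombie = (40.7100, -74.0100)
zombie = step_toward(zombie, runner, speed=0.0005)
print(zombie)                              # a little closer every tick
```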
     Zork is an excellent example of how a game can count, both from the player side of the screen and the programmer side. From the end-user perspective, you wander around textually, collecting items, avoiding pitfalls, watching out for that damned grue, etc. Items held can have uses, often very specific uses to help you progress further, similar conceptually to (and predating) the more graphics-oriented Shadowgate. Combat isn't a key feature, and rarely happens. Rather, puzzle-solving is the focus. The programming side is where things get interesting, though, as now a game designer, another human, is required to anticipate the interactions the user will have, and code the likely responses into the game. Elements come into play beyond sheer coding, such as grammar and syntax (users must issue commands in an understandable way such as "look at object" or "see object" for the game to process the commands), human expression (typing "yell" or "scream" causes a frustrated cry, which I've given frequently while playing), simple memory and sense of direction (how far north can I go, anyway?) and so on. The programmer in essence has to write around the possible foibles of the player, and work them into the game, either allowing the player to proceed despite them or forcing them to take action to make their desires more clear. So video games, to a degree, become psychological tools as well. This is more apparent in current times, of course, where games, like movies, employ visuals, soundtracks, and other sequences of data to try and set the mood the designer wants, but Zork is a perfect example of how even in the earliest days of video games the games themselves represent significant objects within the digital humanities field.
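     To illustrate the kind of anticipation involved (a minimal, hypothetical parser of my own devising; Zork's real parser and vocabulary are far richer), the designer maps the many phrasings a player might type onto a much smaller set of actions:

```python
# Toy command parser in the spirit of a text adventure: several player
# phrasings collapse into one action, and anything else gets a shrug.
SYNONYMS = {"look at": "examine", "see": "examine",
            "yell": "cry", "scream": "cry"}

def parse(command):
    command = command.lower().strip()
    for phrase, action in SYNONYMS.items():
        if command.startswith(phrase):
            obj = command[len(phrase):].strip()
            return action, obj
    return None, None                      # "I don't understand that."

print(parse("LOOK AT lantern"))            # ('examine', 'lantern')
print(parse("scream"))                     # ('cry', '')
```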

Tuesday, October 8, 2013

Reactions to pieces from Young-Hae Chang Heavy Industries

Not friendly to epileptics.

That said, all three pieces share a similar structure, i.e. too-fast-to-be-read-comfortably text combined with strobe-y screen flashing and sense-of-urgency-inspiring music. I can get why the poems display at the speed they do, given the music that they try to keep time to and match rhythm with. The problem is that I loathe someone else dictating the speed at which I read. The poems try to convey urgency, force. That's fine. But if I blink, glance away for a split second, pause to light up a cigarette, or whatever, and things have gone three screens away, losing the thread for me...that's just a pain. I don't want to sit through an unpausable video, which is what the poems essentially are, multiple times to feel like I "got" everything that went into it, captured all those little, quickly-vanishing phrases.

"Lotus Blossom" rates special mention for actually playing into the strobe action of the poems, corresponding that annoying quirk of programming with its frequent mentions of flickering lights. The use of subliminal-like phrases scattered throughout what I'd consider the "main" text also adds to the hypnotic quality. All of the pieces almost seem to be aspiring to that hypnosis, especially "Dakota" with its pounding percussion, as if seeking to deliberately induce a trance state. I'd enjoy reading these as traditional text, but I confess doing so would probably lose something that the chosen medium seeks to add. "The Sea," with its stream-of-consciousness wandering, was for me the most enjoyable to read in terms of textual content.

It's possibly a dated view, but I like my text pinned down, like butterflies to a board, so that it can be appreciated. Granted, it may lose the animus when being presented like that, but it does make it more comprehensible. Literature shouldn't be a moving target.

Monday, September 30, 2013

An introduction to my Google Maps essay


     I love the word "ephemera." It's pretty, uncommon, and both the title and an accurate description of my essay. Which isn't really an essay, per se, in the sense we think of essays. Which is to say that it isn't a linked, cohesive network of naturally-progressing ideas, built upon a foundation of reason, narrative causality, or the other things we generally associate with essays. It's essentially a lot of blather, a few contemporary-ish literary references, some memories, and bad jokes, set to geography. The geography is for the most part incidental to the essay, but it gives you something to do, with the clicking around and zooming and whatnot.

Orange gets off pretty easily, color-coding-wise. Why yes, pun intended!
     I like this concept of a map-based essay specifically because it lends itself to a certain freedom. We default, I think, to linearity in our prose, because when all you have is simple text to convey something, any sense of imposed order is welcome, just to make sense of things. With the visual cartographic spread of a Google Map, however, we're afforded a degree more freedom, both from linear conceptualization and from relying solely on words to get the point across. The map as presented wasn't written in the order the pins/shapes are labeled. I wrote them as they came to me, and then went back and shuffled bits around to (hopefully) create a flow that is navigable and pleasing.

For the hipsters...
     The experience was actually a bit like coding. While written code does generally require some logical progression, it jumps around a lot too, calling different procedures, functions, and subroutines. Some of that is mimicked in my piece, given that there are a few parts that directly link to and rely on each other for coherence, though not on the piece as a whole. Coding as a practice is mostly rhizomatic, and this piece lives up to that, I feel.

Hail Discordia.
     As I never actually saw any options to use the rich text editing for my map creating, I don't have the links or pictures that I'd like showing. So I'll take the time here to link the things I couldn't there, namely hanky code and Pandemic 2 (and the latter's meme). Also, I've thrown in some pictures, some relevant and some not, since the last few posts have been lacking in visual stimulation. Also, you should read the Illuminatus trilogy. Aside from getting the fnord reference, or the picture on the right, it helps to have read something so sublimely odd. Good for the brain.