The Bare Essentials of Game Design

13 09 2007

Indie Game Watch I
In the “Indie Game Watch” column I will talk about indie games I consider remarkable, or otherwise deserving of extra attention.

For computer games, size and ambition require resources. Indie game authors often work alone and generally on a far more modest budget than any large studio, so their games cannot match big-studio titles in size, scope, complexity or wealth of content. Thus, many indie titles are small and harmless, often puzzles, repetitive and formulaic match-3s or narrow simulations: quick-and-fleeting things designed to bring a smile or a few moments of distraction and then, generally, to be forgotten. However, I say there are ways of producing memorable, high-quality games within the indie model, and one of them is essentialism - taking a concept (an idea, a genre, something), successfully reducing it to its bare minimum, and turning that into a game. Done wrong, this method yields something lacking, insufficient and boring; done right, the result can be engrossing, and can teach us developers something about games that we otherwise overlook without realising it!

In this post I’d like to mention two games - two absolute reductions - that have done this: the real-time strategy reduction Galcon, a stellar game of epic scope fitting on one screen, and the turn-based strategy game Oasis - Civilization in 100 tiles per map!

Mind Control Software’s Oasis, situated in ancient Egypt, is what you get if you take the archetypal turn-based strategy (“TBS”) title, Civilization, keep all its concepts - exploration, cities, armies, technology research, roads - and boil it down to its monofilament skeleton. You explore the map (all of which fits on screen at once) and find cities, workers and mines by clicking on the fog of war; you spend workers on building roads and researching technology (by clicking on land or mines); mines automatically generate technology; cities automatically grow when connected by roads; and after a given number of turns the barbarians attack, and you had better be ready. There is no widget interface; everything is done by left-clicking map tiles. It is everything you find in TBS games, contained in one screen and a condensed playing time. It is easy to learn, entertaining to play, and beautifully executed. If you like the concept of Civilization but, like me, are loath to spend 15 hours or more on a game, then this may be for you. However, its strengths - tight focus and constrained game play freedom - are also its weaknesses: it gets “understood” fast and is quite repetitive. Technology comes in a predetermined order, and because the game play is so tightly defined, all games play out similarly. After the first hour the game no longer surprises you, but it is nonetheless still enjoyable. Also, every game developer should study this game, if only to gain a better understanding of the mechanics of TBS games!
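To make the condensation concrete, here is a toy sketch of the loop as I understand it; the tile types, their probabilities and the 30-turn deadline are my own placeholder assumptions, not Mind Control Software’s actual rules:

```python
import random

SIZE, BARBARIAN_TURN = 10, 30          # 100 tiles per map; deadline in turns (assumed)
grid = [["fog"] * SIZE for _ in range(SIZE)]
workers, research = 5, 0

def click(x: int, y: int) -> None:
    """The entire interface: left-click a tile to explore or develop it."""
    global workers
    if grid[y][x] == "fog":            # exploring the fog of war is free
        grid[y][x] = random.choice(["sand", "sand", "sand", "city", "mine", "workers"])
        if grid[y][x] == "workers":
            workers += 3
    elif grid[y][x] == "sand" and workers > 0:
        grid[y][x] = "road"            # spend a worker on a road
        workers -= 1

for turn in range(1, BARBARIAN_TURN + 1):
    # A random click stands in for the player's one action per turn.
    click(random.randrange(SIZE), random.randrange(SIZE))
    # Mines tick automatically, just as in the game.
    research += sum(row.count("mine") for row in grid)

print(f"Turn {BARBARIAN_TURN}: the barbarians attack; research banked: {research}")
```

Even this crude caricature shows the design’s trick: one verb (“click a tile”), a fixed deadline, and everything else happening automatically.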
 
Imitation Pickles’ Galcon is the answer to what you get when you try to reduce the RTS concept to its utter bare essentials, revealing the numeric attrition simulation underneath. You and your opponent each start with a planet in a field of planets. Your planets automatically produce ships, and larger planets produce more. You can send an adjustable percentage of your ships to conquer other planets. That’s it. It is RTS gaming in its purest form, and it is worthwhile for two reasons: one, it provides immediate and intense action, though short-lived once you’ve learned it; and two, it is in effect a vivisection of the RTS concept, showing the least common denominator of all games of the kind. The stylised graphics are functional and suit the game; the (again) minimalistic interface does the job; and the AI is adequate, providing a challenge until you learn to beat it, after which the game loses replay value.
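To show just how little is left after the reduction, here is a minimal sketch of the attrition model as I read it; the production rate and one-for-one combat rule are my own assumptions, not Imitation Pickles’ actual code:

```python
from dataclasses import dataclass

@dataclass
class Planet:
    owner: str        # "player", "enemy" or "neutral"
    size: int         # larger planets produce more ships per tick
    ships: float

    def tick(self, dt: float) -> None:
        # Owned planets generate ships automatically; neutrals just sit there.
        if self.owner != "neutral":
            self.ships += self.size * 0.1 * dt

def send_fleet(source: Planet, target: Planet, fraction: float) -> None:
    """Send a percentage of the source's ships at the target.

    Combat is pure attrition: attackers cancel defenders one for one,
    and the survivor keeps (or takes) the planet.
    """
    fleet = source.ships * fraction
    source.ships -= fleet
    if target.owner == source.owner:
        target.ships += fleet            # reinforcement
    elif fleet > target.ships:
        target.owner = source.owner      # conquest
        target.ships = fleet - target.ships
    else:
        target.ships -= fleet            # repelled

# Example: a big home planet overwhelms a small neutral one.
home, neutral = Planet("player", size=5, ships=50), Planet("neutral", size=2, ships=20)
home.tick(20.0)                           # 20 time units of production: +10 ships
send_fleet(home, neutral, fraction=0.8)   # 48 ships against 20 defenders
print(neutral.owner, neutral.ships)       # -> player 28.0
```

Strip away the graphics and the mouse, and this handful of arithmetic is, arguably, what every RTS resolves down to.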

Two small masterpieces that prove that less can be more, that KISS can be fun, and that indie can mean more than just match-3s.





The Computer Games Industry’s Too Great Focus on Graphics

6 04 2007

This is the second instalment in a series of articles that could have been called “What Is Wrong With Current Video Games”, “Where Did the Video Game Industry Turn Wrong”, or something to that effect. In the first, “Inhuman Rights”, I argued that one of the biggest problems of current computer and video games is the stagnated, or even devolved, state of Artificial Intelligence in games. In this one I will argue that an ever-increasing focus on visual presentation is hollowing out the other aspects of a game, the very aspects that, unlike graphics, constitute the game play itself - the “contents”, if you wish, of a game.

To start out, it is utterly indisputable and uncontroversial that the visual presentation (i.e. the “graphics”) has become the most important aspect of a game: a disproportionate share of the development budget is spent on it; graphics is always a primary focus of game reviews; a significant amount of the talk and buzz on fora is about it; and so on. In a way, visual excellence is equated with enjoyment. Much less conceptual space is devoted to story, game play, believability, creativity, or even sheer fun, and games today, compared to games of a decade ago, generally have less developed story and narrative: their interfaces are less interactive and more coldly order-oriented, and feedback from the game is more informative than it is an attempt to establish a relationship with the player. So, what has led to this? Let’s start by stating some immediately observable facts, or at least facts that become obvious with some reflection and reminiscence:

The old Geek saw this coming as far back as 1983:
“Spike’s Peak features multiple stages and high resolution graphics. It’s a shame that its gameplay sucks so bad.” (http://www.videogamecritic.net/2600ss.htm)

First of all, graphics capabilities have escalated at a tremendous rate, even if only evolutionarily. Though the common argument is that the demands of gamers and the game industry drive the development of graphics performance, in actuality the relationship is far more reciprocal: increased graphical capability presses developers to constantly upgrade so as to “not fall behind”. Thus, the hardware industry does bear some of the blame, pushing ever more potent graphical crack.

Secondly, it is very noteworthy that with the graphics revolution we have experienced an accompanying shift in, and narrowing of, game genres, especially on the PC platform. Most games tend to be variants of first-person shooters, real-time strategy (of the more-or-less conventional type, as contrasted with, for instance, real-time tactics), or other action-oriented stereotypes. Though not an a priori predetermined consequence of increasing graphical capabilities, narrative games, adventure games, and other games of a slower pace or more cerebral nature have virtually disappeared.

That contemporary media - not only games but also movies and music - are of intellectually attenuated, increasingly formulaic and mass-produced content is not really controversial either. In fact, rather than being denied, it is popularly, almost reflexively, attributed to a generational shift in attention span and a lessening ability to concentrate deeply. For instance, in 2002 Professor Greg M. Smith of Georgia State University said in a Game Studies article:

Although dialogue conventions have been developed in the novel and the theater, those conventions have been compressed in current film and television practices. Whether or not one blames the postmodern condition of interrupted sensibility or the economic pressure to maintain a declining viewership, contemporary media have taught audiences to read the narrative function of dialogue cues very quickly.

In other words, increasing superficiality is reflected in the media. More relevant here, however, is his claim that computer games are a “primarily visual” medium:

Since most people think of computer games as a visual medium, the primary faith in the evolution of games resides in improving the technology for their visual presentation. As designers are able to render more and more polygons, they can depict facial details more precisely, making more naturalistic expression and more nuanced characters possible. Assumedly, denser visual information will give the digital spaces explored by the interactive player some of the richness of the cinematic signifier. As long as we consider games to be primarily visual, it makes sense for both designers looking to shape the future and for critics wanting to explain present games to concentrate on the visual expressivity of this young medium.

That computer games are primarily (though not exclusively) a visual medium is a claim with some merit, and consumers are increasingly brainwashed with the notion. Nonetheless, it is not an answer to what has caused the situation we see today, with increasingly superficial “looker” games. Painting is to an even higher extent a visual medium, and if visual perfection were the goal of painting, artists would still be producing only photorealistic landscapes. On the other hand, if computer graphics, too, had been perfected in the 17th century, perhaps we would have surrealistic modern-art computer games today. Thus, games might be undergoing a phase in which their visual component matures and eventually stabilises, whereafter attention will again be directed toward other dimensions. At the moment, however, the general agreement seems to be that “better graphics is better, period” (interestingly enough, some talk about too-good graphics becoming “uncanny” and off-putting).

The Economics of Graphics
Counterfactual speculations about historical projections aside, we are still stuck with resplendently stupid games in increasing numbers. In the end, this is very likely due to a simple relationship: graphics costs money. 3D costs more than 2D, and sophisticated 3D costs more than simple 3D. The game industry is getting huge, and with demand and market, so grows the number of released games. With more titles out, the market noise increases, and so does the need to stand out. In what is perceived as a “primarily visual medium”, you stand out with better graphics and bigger games (note “bigger”, not necessarily “better”). Better graphics and bigger games cost more money (more modellers, artists, level designers, etc.), and games get more expensive. Expensive games require financing, and investors want returns. Investors abhor uncertainty and prefer proven formulas (not only in games, but in movies and music, and everywhere else). Proven formulas are what is already successful: FPS, RTS and action games with improved graphics. With increasing competition and budgets, risk-avoiding behaviour increases. A perfectly vicious little circle, and a Catch-22 for other content, as more and more is focused on graphics.

Related to this, it is a very sad fact that more and more games are released in a quite “unfinished” state, relying on post-release patches to “finish up” the title. After having taken part in several beta tests, I have, as a computer games designer and programmer, come to the conclusion that games very often seem designed around the graphics rather than the game mechanics, with patches retrofitting, post-release, what should have been fundamental functionality from the very onset of the project. A conclusion from this - though perhaps “diagnosis” might be the better word - is that “graphics” has moved from being a conceptual focus to a conceptual pathology among developers; pathological in that it is a “deviation from a healthy, normal, or efficient condition” (the Unabridged Dictionary), or “the anatomic or functional manifestations of a disease” (Stedman’s Medical Dictionary).

Effects on Game Development and Evolution
In sum, what are the effects? Graphics cost money, which means smaller developers, who are generally more imaginative and innovative, cannot compete unless funded externally. Investors don’t take chances; they demand rights of intervention and direct the projects in “more appropriate” directions. If the game is a success, they were proven right; if it fails, they see deviating from the pure formula as the culprit - much like the dog that knows it is its barking that drives away the garbage collectors. With the same soup being served, discriminating gamers stop consuming (dropping into other hobbies, or playing retrogames on their emulators) and new gamers are formatted to the new standard. The market is streamlined, games are intellectually devolved further, and we get more of the same excrement, only prettier. Pretty or not, it still tastes pretty bad.

In conclusion
Graphics is a vicious circle. The Nintendo Wii is the first large-scale industrial (that is, non-indie) actor that has tried to break out of it, and I applaud them! DRM-crazed or not, Nintendo has always placed the quality of games above mere visuals, and by not going for cutting-edge hardware, their new platform is cheap compared to the competition. Though innovative, the controllers are already being plagiarised by Sony and Microsoft (who of course claim to have “independently had them in development” for a long time), so they won’t have the same effect. Nintendo attempts to compete solely on the grounds of quality of games. Nonetheless, frustratingly enough, it might be a big mistake: since the DVD became mainstream, “graphical awareness” has crept into the population at large, and the Wii’s “impoverished” graphics and lack of HD might make an uninformed mainstream, ruminating on popular “knowledge”, disregard it.

I remain a cynic.





Inhuman Rights

1 04 2007

Artificial Intelligence and mad computers hell-bent on universal destruction are a good ol’ staple of sci-fi, and for some reason computer games and gaming tend to feature prominently when movies are made on the theme. Tron, perhaps the first of its ilk, and WarGames, the seminal specimen, both feature computers dominating and threatening humanity by playing computer games - I can’t help but love it! Back in the 70s and 80s, AI was one of the most promising, not to mention frightening, prospects of computerisation. It was hailed as the saviour or destroyer of our future, and though most people had an opinion on the topic, few held the view that “nah, nothing much will happen”. But that is what did happen.

Having studied a fair bit of Philosophy as well as CompSci, I’ve witnessed most of the traditional academic bastions for evolving artificial intelligence (Philosophy of Mind, “conventional” computer science, and robotics), and really, their achievements were impressively absent. Rather, computer games were for a long time at the forefront of pushing the AI envelope. However, game development seems to have stopped being the driver it was, and now nothing much really happens here either. Robotics might be showing some progress, but this seems to be in already established fields like specific pattern recognition (face recognition and voice recognition), and there is basically nothing indicating quantum leaps or conceptual revolutions. Boring, and more than that, perhaps one of the biggest problems of contemporary computer games development: while our graphics grow increasingly realistic (whether this is for the better is another topic of debate) and ever-increasing computational potency allows fantastic physics and whatnot, the contents of those same games have, at large, stagnated or (arguably) even regressed.

One common argument, to which I subscribe with some reservations, is that graphics, being the most immediate and obvious signpost of a game, assumes an ever-increasing share of focus and budget, with detrimental effects on premise, story, narrative mechanics, game play, and so on. However, another side to it, which at least I haven’t heard mentioned, is that today’s Artificial Stupidity is simply insufficient to drive today’s massive games. Or is it that, gaming being a larger business with an increasing number of titles to be (mass-)produced, there isn’t enough budget to develop AI; there aren’t enough skilled people for the highly complex task of developing AI; the increasingly important middleware solutions and frameworks do not cover, or afford integration with, AI; or has the industry, in its collective growth euphoria, simply forgotten about AI? Or has it been convenient to forget about it when we can produce yet another MMORPG in which dynamic content and intricate non-player characters are replaced with live human beings? Why is it that the online revolution, among other things, seems to have legitimised ignoring, even devolving, one of the most critical areas of computer games evolution?
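To illustrate what I mean by Artificial Stupidity, here is a deliberately caricatured sketch of my own (not taken from any shipped title) of the kind of hard-coded finite-state machine that passes for an enemy “mind” in many games - three states, one input, and completely predictable once observed:

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()

def guard_brain(state: State, distance: float) -> State:
    """The guard's entire 'mind': one number in, one state out."""
    if distance < 2:
        return State.ATTACK
    if distance < 10:
        # Tiny bit of hysteresis: keep attacking at close-ish range
        # rather than dropping straight back to a chase.
        return State.ATTACK if state is State.ATTACK and distance < 5 else State.CHASE
    return State.PATROL    # the player walked away, so the guard forgets them

# Watch it "think": the moment the player leaves, all history is erased.
state = State.PATROL
for d in (15, 8, 1, 4, 8, 15):
    state = guard_brain(state, d)
    print(d, state.name)
```

However many polygons the guard is rendered with, this is roughly the depth of deliberation behind its eyes.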

I personally believe that game development could and should be one of the most important drivers of the development of artificial intelligence. Not only for the ever-stimulating reasons of Mad Science, but also because it would mean the potential for Actually Better Games - not as defined by graphical splendour, but better as in more interesting game content. The more graphics evolve, and the larger, more intricate and environmentally impressive our game worlds become, the more salient and obvious is the discrepancy between those worlds and that which populates them.


(from xkcd)

However, moving on to a wild tangent: I really do not doubt that AI evolution will eventually have progressed to the point where we can speak about real, actual, no-holds-barred artificial intelligence rather than the Artificial Stupidity we are tortured with today, and then games will land in science-fiction land, because we are going to have to start asking the Star Trek questions - the ones about the ethics of what we are doing, because we will be populating our creations with aware entities. Then the question for tomorrow’s game developers might have turned from “why is AI so lacking?” to “where does artificial intelligence end and virtual individuality begin?” and “can we really inflict violence on virtual individuals for our amusement?”. Nonetheless, I’d rather have that debate any day than suffer through yet another gorgeous game with brain-dead game play.





Cooperative Control

24 03 2007

Over on Theoretical Games, Andrew Douglas writes about “Cooperative Control”. This is something I’ve been musing over myself for close to a decade now, and it is something that should already have happened long ago: after all, didn’t games start off as social, collective activities, and isn’t the all-by-myself game style of today an aberration, socially as well as psychologically? Claiming that adding “social” or “interactive” in front of “intercourse” makes for a (generally) more enjoyable activity than the more masturbatory nature of a solitary endeavour is hardly controversial, and similarly, shouldn’t computer games with friends be more stimulating and rewarding than hermit sessions?

The multiplayer revolution that started in the ’90s promised a lot but for some reason never really took off in a social way. Rather, it went off on a decidedly antisocial tangent, a vector it has never really veered from since. Perhaps due to the increasingly violent ideals reciprocally provided by and demanded from the entertainment industry, or perhaps because of the power of inertia in an increasingly expensive and thus risk-avoiding industry, multiplayer games tend to be games of massive (and armed, and preferably bloody) opposition against one another, where the best cooperation to be hoped for is that of playing on the same team.

In 2003 PCMag optimistically wrote, “one thing is sure: Gaming will continue to evolve from a lonely, you-against-the-machine activity to a more social and community-driven pastime.” That might still be realised, but as of today nothing much seems to be happening, except that gaming has evolved from you-against-the-machine to you-against-all-the-other-players - players who, from a communication perspective, might as well be computer players, hiding behind sparse text interfaces and chatting at a level where they would fail a Turing test (in fact, I remember spambots in Counter-Strike that wrote more convincing messages than many players).

Similarly, gathering friends together to play video games, generally on a console, is fun and common, but yet again, either we play games against each other (racing games, fighters) or we take turns and watch each other play (anyone remember sitting down with Alex Kidd or Super Mario on the 8-bit consoles, waiting for your go at the game?). Nonetheless, this technologically most primitive means of multiplaying is also the most social and the closest to the original gaming roots we have - and it is precisely what the multiplayer evolution is heading away from.

It might be tempting to proclaim that the first steps away from this (the first steps toward social gaming) can be found in the current MMORPGs, the most successful of which tend to spawn coordination among players, albeit of quite static kinds. Everquest did result in virtual and real-life marriages, and Ultima Online made people organise into vertically integrated production systems of virtual occupations. However, even when touted as an integral part of the game play experience, as in World of Warcraft, any cooperative or social aspect is more an optional secondary effect (or worse, an instrumental necessity) juxtaposed with glorified chat rooms than an integral aspect of the game model. “Social” is a connotational chimaera capable of encompassing most everything that occurs when two or more people interact, and another - likely unintentional - take on “social” gaming is Battlefield 2, which tries to recreate military command hierarchies where one player can act as “commander” and give orders to the other players. Still, impressive as this might seem, these undertakings are more-or-less haphazard aggregations of single-player undertakings; the games are very much trapped in the precepts handed down by their ancestors, and multiplay is still, qualitatively, basically SP++.
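To make the distinction concrete, here is a purely illustrative sketch; the names and the input-averaging rule are my own guesses at what Andrew’s term could mean, not his definition. In the command-hierarchy model an order is just a message the recipient may ignore, while under cooperative control several players’ inputs would be fused into a single entity’s action:

```python
from statistics import mean

def command_hierarchy(order: str, obeys: bool) -> str:
    # BF2-style: the commander speaks, the soldier may simply ignore it.
    return order if obeys else "does own thing"

def cooperative_control(steering_inputs: list[float]) -> float:
    # Shared-avatar style: every player's input shapes the one resulting action.
    return mean(steering_inputs)

print(command_hierarchy("flank left", obeys=False))   # -> does own thing
print(cooperative_control([-1.0, 0.5, 0.5]))          # -> 0.0: a genuinely joint decision
```

The first function models coordinated single-player play dressed up as cooperation; only something like the second makes the other players a structural part of the game model rather than an optional social layer on top of it.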

However, to get back to the topic I started with: I very much doubt that I am alone in thinking that socially orchestrated games are an important and logical next step, and I likewise doubt that I am alone in being confused and dismayed that this continuously fails to materialise. The next logical step would be what Andrew calls “cooperative control”, but it has been the next logical step for close to a decade now, and neither Everquest and its spawns’ communal dungeon crawling and 3D-avatar-embellished chatting nor Battlefield’s command hierarchies are cooperative control. It makes one wonder how long it will take before some genuinely pro-social games emerge…





Late on the bandwagon

23 03 2007

Since I’m in the habit of making notes to myself, perhaps as a subconscious acknowledgement that I’m going increasingly bonkers, I might as well tell myself that I’m thinking about starting a blog. As if the blogosphere weren’t already overcrowded enough with all the fascinating, not to mention continuous, accounts of Wonderful Recipes You Must Try, the million-and-one attempts at masking crypto-exhibitionist diaries as insightful commentary, and the talentless but passionate and heartfelt poetry bringing illumination to the intellectually and high-culturally impoverished ’net.

Now, the official reason for this blog is something along the lines of open ruminations about most everything related to the technology of computer games and its intersection with philosophical and human aspects. It will initially be quite sparse, since I haven’t yet managed to properly enthuse myself about the entire endeavour, but since there are rumours of people managing stunts like dividing oceans in twain and, in the case of Richard III, controlling the weather by means of mental forces, I might perhaps eventually and intermittently accumulate the energy to write down a musing or two.