27 May 2009

New Laboratory

A few pics from our new office right at Alexanderplatz in the center of Berlin. The office is still sort of under construction because of some delays, but it will be finished soon (in about 2 weeks, hopefully). The good thing is that it is within walking distance from my home (about 3 kilometers), and it's a very nice walk (at least when the weather plays along).

The new office is actually a LOT bigger than what's visible on the photos. We decided on a floor plan with around 5 to 16 people per room, which is big enough to enable direct communication between team members, but small enough not to become noisy. There are several meeting rooms, a chill-out area, 3 kitchens, a server room plus a fallback server room, and enough room for future growth (we have had to move to a bigger office every 2 years since Radon Labs GmbH was founded; hopefully the new office is big enough that we can stay a bit longer this time around, because as far as Berlin is concerned, this is pretty much the perfect location).

Here's what we see when looking out of the office towards the Fernsehturm:

IMG_6187 

 

That's what it looks like in the areas that are still under construction, this is going to become one of the meeting rooms:

IMG_6234

 

And that's what the team offices look like (this is the room of the Drakensang character department, who are presumably out to lunch or fleeing from the alien attack or something):

IMG_6203

 

And this was right before the alien mothership appeared in the sky and started to lay waste to the city:

IMG_6267

PS: thanks to Stefan for letting me use his photos :)

23 May 2009

Another day with PSN

  1. Read somewhere on the Internets that the Infamous demo is on PSN.
  2. Turn on PS3.
  3. “A system update is available”.
  4. Stare for an eternity at a very slowly growing bar indicating not only the current download status but also my growing anger level.
  5. Stare for another eternity at another status bar which says that it's installing the update or something…
  6. Contemplate how the PS3 feels more and more like a fucking IBM-PC connected to a 9600 baud analog modem.
  7. Finally! Hurry into the PSN shop, try to make sense of the confusing layout, until (after what feels like another eternity) it slowly dawns on me that the European shop (or German shop, or whatever other obscure region got the fist up its ass this time) does in fact NOT have the demo which (maybe) is only on the Japanese shop (or Americanese, or Papua-New-Guineaese, Kamerunese or whatever other region is lucky this time).
  8. Turn off the PS3 while shaking head in frustration thinking about how internet commerce ought to be the future and wipe out boxed products completely. Somebody should tell the bean counters and lawyers that the internet simply wasn’t designed with national borders in mind.

It’s seldom enough that I feel like turning on my PS3, but being greeted with a fucking system update which takes forever to download (and install!) every single fucking time is a huge *fucking* TURNOFF! At least on the PC, I can continue to do other things while Windows is updating.

To be fair, Microsoft isn't any better when it comes to denying us Germans the good stuff, but at least I can check Major Nelson's webpage for the all-too-common “Not available in Germany” message before spending forever looking for the download on the Marketplace. And system updates are just not an issue on the 360. The last one (the NXE update) took under 2 minutes to download (and install, might I add), and unlike a typical PS3 system update, the changes are usually visible to the user.

Maya Programming #1

[Edit: fixed some bugs in the MEL code]

Maya is an incredibly complex beast, but it is also an incredibly powerful beast. The C++ API fully exposes this complexity to the outside (even more than the MEL API), and after about 10 years of working with the Maya SDK I still have some sort of love/hate relationship with the C++ API (and a thorough hate/hate relationship with MEL).

Much of the complexity stems from the fact that even a simple Maya scene is a huge network of interconnected tiny nodes (the dependency graph), and while the API tries its best to provide shortcuts for finding the relevant data, one often has to write several (sometimes a lot of) lines of code to get to the piece of data one actually wants. Typical Maya code (at least typical exporter-plugin code) often consists of cascades of calls to getters and iterators just to find and read a single data element from some object hidden deeply in the dependency graph. Things have gotten better over time though: almost every new Maya version has provided shortcuts for getting at typical data (like triangulated mesh geometry, vertex tangents, or the shader nodes connected to the polygons of a shape node). I think the Maya team needed a little while before they got a feel for the needs of the gaming industry. Since Maya has more of a movie industry background, and (at least in Europe) is still a bit exotic compared to its (ugly) step-brother 3DMax, this is understandable.
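
Just to give a rough idea of what such a query cascade looks like, here is a quick MEL sketch (not actual plugin code; it assumes the scene contains a polygon shape with the default name pCubeShape1, e.g. from a freshly created polyCube): even finding the material assigned to a mesh already takes a couple of hops through the graph:

// walk from a shape node to its surface material (pCubeShape1 is just an example name)
string $sgs[] = `listConnections -type shadingEngine pCubeShape1`;
string $mat = `connectionInfo -sourceFromDestination ($sgs[0] + ".surfaceShader")`;
print ("material plug: " + $mat + "\n");

On a default cube this should print lambert1.outColor, i.e. the default material. In the C++ API the same walk turns into MObject/MPlug handling, function sets and iterators, which is where the cascades really start to pile up.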

When programming a Maya plugin, the API documentation is relatively useless because it's mainly a pure class interface reference and doesn't explain how the objects of a Maya scene relate to each other, or what the best practice is for solving a specific task (and worse, there are often a lot of different ways to do something, and it's up to the programmer to find the easiest, or cleanest, or fastest way). Instead of relying on the documentation it's often better to explore the object relationships directly in Maya through the Script Editor, and only when one has a clear understanding of how the data is organized in Maya, look up the API docs to find out how it's done in C++.

Before starting to explore a Maya scene through the script editor one only needs to understand this:

The entire Maya scene is a single Dependency Graph, which is built from Dependency Nodes connected through Plugs (think of it as a brain made of interconnected neurons). Every time the Maya user manipulates the scene he may add or remove nodes from the graph, connect or disconnect plugs, or feed new data into some input plugs. After parts of the dependency graph have been manipulated, it is in a dirty state and must be brought up to date before being displayed (or examined by an exporter plugin). But instead of evaluating the entire graph (which would be very slow in a complex Maya scene with thousands of nodes), only the dirty parts of the graph which actually need updating will be evaluated. This is where the dependency stuff comes into play: every dependency node depends only on the nodes connected to its input plugs, and only plugs marked as “dirty” need to be evaluated. If some data is changed at an input plug, the dirty state propagates through the graph to all dependent plugs, and when an up-to-date result is needed, an “evaluation wave” propagates back through the dirty nodes.

This system might seem like overkill for simple 3D scenes, but as soon as animation, expressions and construction history come into play it all makes sense, and it's actually a very elegant design.

Now let's go on with the exploration:

Some important MEL commands for this are listAttr, getAttr, setAttr and connectionInfo. Let's start with a simple illustration of a dependency chain.

First open Maya's script editor and create a polygon cube at the origin (execute the command with Ctrl+Return):

polyCube

This creates a transform node (pCube1) with a child shape node (pCubeShape1), plus a polyCube1 history node which generates the actual cube geometry. Let's list the attributes of the transform node (for the sake of simplicity, let's treat attributes and plugs as the same thing here):

listAttr pCube1

This produces dozens of attribute names (a first indication of how complex even a simple Maya scene is). Somewhere in this mess is the translateX attribute which defines the position on the X axis. Let’s have a look at its content:

getAttr pCube1.translateX

This should return 0, since we created the cube at the origin. Let’s move it to x=5.0:

setAttr pCube1.translateX 5.0

When the command executes, the cube in the 3D view should jump to its new position.

So far so good. Let's create a simple dependency by adding a transform animation. Just type this into Maya's script editor (start a new line with Return, and execute the whole sequence with Ctrl+Return):

setKeyframe pCube1;
currentTime 10;
setAttr pCube1.translateX -5.0;
setKeyframe pCube1;

This sets an animation key at the current position (time should be 0 and the cube’s x position should be 5), then sets the current time to 10, moves the cube to x=-5 and sets another animation key.

Now grab the time slider and move it between frame 0 and 10; the cube should now move on the X axis. Now let's try to read the translateX attribute at different points in time. Move the time slider to frame number 3 and execute:

getAttr pCube1.translateX

This should return 2.777778. Now move the time slider to frame 6 and get the same attribute again:

getAttr pCube1.translateX

The result should now be -0.555556. The previously static attribute value now changes over time since we added an animation to the object. Obviously some other node manipulates the translateX attribute of pCube1 whenever the current time changes. Let's see who it is:

connectionInfo -sourceFromDestination pCube1.translateX

This yields: pCube1_translateX.output

So there’s an object called pCube1_translateX, which has a plug called output, which feeds the translateX attribute of our cube with data. Now let’s check what type of object this pCube1_translateX is:

nodeType pCube1_translateX

The result: animCurveTL.

Now let's check what the docs say about this node type. Open the Maya docs, go to “Technical Documentation -> Nodes”, and type animCurveTL into the “By substring” search box. This is what comes up:

This node is an "animCurve" that takes an attribute of type "time" as input and has an output attribute of type "distance". If the input attribute is not connected, it has an implicit connection to the Dependency Graph time node.

Interesting! Let's double-check: if the output plug of the animation curve is connected to the translateX attribute, both should return the same value at any given point in time… Moving the time slider to frame 3 should again yield a value of 2.777778, and indeed, a

getAttr pCube1_translateX.output

returns the expected value.
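
Out of curiosity we can also poke at the other half of that statement and look at the curve's input plug (a small optional detour):

connectionInfo -sourceFromDestination pCube1_translateX.input

Assuming setKeyframe created the curve in the usual way, this should come back empty: there is no explicit connection, which is exactly the “implicit connection to the Dependency Graph time node” the docs are talking about.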

Now let's try something crazy: what if we feed the current value of translateX into the translateZ attribute of our cube and thus make the dependency chain a bit more interesting? The result should be that the cube moves diagonally when the time slider is moved, even though only the translateX attribute is animated by an animation curve:

connectAttr -force pCube1.translateX pCube1.translateZ

I cheated a bit here by using the -force argument. The translateZ attribute was already connected to its own animation curve when we executed the setKeyframe command (the same thing that happened to our translateX attribute). That existing connection needs to be broken before a new source can be connected, and -force does just that.
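
This is easy to verify, by the way (a small optional check; the animation curve name below just follows the usual setKeyframe naming convention):

connectionInfo -sourceFromDestination pCube1.translateZ

Before the connectAttr this should have pointed to pCube1_translateZ.output; afterwards it reports pCube1.translateX as the new data source.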

Let's see if it works by moving the time slider. And indeed… the cube moves diagonally on the X/Z plane as expected. Cool shit.
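
As a little bonus experiment, the dirty/evaluation behaviour described further up can be watched directly with the isDirty command (just a rough sketch, the exact results may differ depending on what else triggers an evaluation in between):

currentTime -update false 7;   // move the time, but don't evaluate the graph yet
isDirty pCube1.translateX;     // should report 1: the plug has been marked dirty
getAttr pCube1.translateX;     // pulls the value through the animation curve
isDirty pCube1.translateX;     // should report 0: the plug is clean again

Changing the current time only marks the dependent plugs as dirty; the following getAttr is what actually triggers the evaluation wave.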

So that's it. That's how everything in Maya works. The only difference from a really complex scene is that there are hundreds or thousands of dependency nodes connected through even more plugs.
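
By the way, a quick (and admittedly crude) way to get a feeling for the node count is to wrap the ls command, which lists every node in the scene, into size():

int $numNodes = size(`ls`);
print ("number of nodes: " + $numNodes + "\n");

Even a scene containing nothing but our single animated cube already reports dozens of default and helper nodes, so it's easy to imagine how fast this grows in a real production scene.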

21 May 2009

Maya Plugin

[Edit: I added some clarification to the “cleanup” point  below]

I have started to write a new Maya plugin for Nebula3, which may eventually replace all (or parts) of our current plugin. The actual Maya plugin is only one small part of our asset pipeline, so this is not about rewriting the entire asset pipeline, just about replacing one small gear in it. Otherwise it would be a truly Herculean task. In the beginning this will just be a private endeavour, free from time or budget limitations, so that no design compromises have to be made. This approach worked quite well for Nebula3, so it makes sense to use it more often in the future.

Our current Maya plugin is stable and fast, but at least the C++ part of it is beginning to show its age: it's becoming harder to maintain, and since it's based on Nebula2 code, low-level things like file I/O are a lot harder to do than in comparable Nebula3 code.

The new plugin will realize ideas I’ve been carrying around in the back of my head for quite some time, and which would be hard to implement into the existing plugin without a complete rewrite. The most important one is:

Separation into a platform-agnostic front-end and several specialized back-ends:
  • The actual Maya plugin will export into intermediate file formats which are completely platform-independent (and probably even somewhat engine-independent). Thus the plugin itself becomes more of a generic 3D engine exporter tool which doesn’t have to change every time a new target platform is supported or an engine feature is added or rewritten.
  • The back-end tools (or libs) convert the intermediate files into the files actually loaded by Nebula3. Those files can (and should) be highly platform-specific.

I'm expecting that the larger chunk of code goes into the platform-agnostic plugin, and that the back-ends are relatively small and straightforward. The main advantage of this separation is better maintainability. The core plugin can remain relatively stable and clean, while the back-ends can have a higher frequency of change, and the “throw-away-and-rewrite” barrier is a lot lower, since only the relatively small back-end code has to be replaced without affecting the core plugin and the other platform back-ends (too much). Also, the platform mini-teams have more freedom to implement platform-specific optimizations into their engine ports, since they have complete control over their exporter back-end.

The main disadvantage is that the export times will probably be a bit higher than they are now. A LOT of effort has gone into optimizing the performance of our toolkit plugin (exporting a scene with hundreds of thousands of polygons should only take a few seconds), and writing an additional set of output files may affect performance quite drastically. I'm planning to use XML only for the intermediate object hierarchy structure (which depends on the material complexity of the scene, but shouldn't be more than a few dozen to a few hundred lines for a typical object), and to use binary file formats for “large stuff” like mesh and animation data. But if the XML files hurt I/O performance too much, I will switch to a performance-optimized binary format there as well, even though human readability would be a major plus (in the end, one set of back-end tools could always convert to human-readable ASCII file formats).

If you're wondering why performance is so critical during export: consider a project with about ten thousand 3D models to export (which isn't unrealistic for a complex RPG project like Drakensang, for example). If the export time can be reduced by only one second per object, a complete rebuild gets faster by almost 3 hours (10,000 seconds is roughly 2.8 hours)! Actually, most 3D models batch-export in much less than 1 second in our build pipeline; it's the texture conversion to DDS which eats most of the build time…

There are a lot of other things a Maya exporter tool should do right to be considered well-mannered:

  • It should support batch-exporting and automation (command-line batch-exporters, means of controlling export parameters on thousands of assets, standardized project directory structures, etc…).
  • It should be designed for a multi-project environment (a modeling artist or level designer must be able to quickly switch from one project to another).
  • It should of course offer a fast and exact preview for immediate quality control (the artist should be able to get an in-engine view of his work immediately).
  • It should not force artists to use archaic file formats. For instance, all texture conversion tools for the various console platforms I have encountered so far only accept crap like TGA or BMP as input, but NOT the industry-standard PSD format! Quite baffling if one thinks about it. I don't know how other companies deal with this, but I think it's quite unacceptable to keep an extra TGA version of every one of tens of thousands of textures around, just so that the batch exporter tools will work (for the console platforms we wrote our own wrappers, which first convert a PSD file to a temp TGA file and then invoke the conversion tools coming with the SDKs, but this is REALLY bad for batch-export performance, of course).
  • It should be fault-tolerant: Maya is an incredibly complex piece of technology, and plugins usually have no choice but to support only a specific subset of its features. The plugin should not crash or stop working when it encounters something slightly wrong or unknown in the Maya scene; instead it should provide the artist with clear warnings and readable error messages.
  • It should not impose too many restrictions on the Maya scene: for instance, a very early version of our exporter tools required the artist to manually triangulate the scene, which is unacceptable of course.
  • It should clean up the Maya scene during export: It's relatively easy in Maya to create zero-area faces, or duplicate faces, or faces with a zero UV area, etc... The exporter should remove those artefacts, and in those cases where automatic handling is not possible, provide a detailed error log to the artist so that he has enough information to remove those problems manually. [EDIT: this was badly worded… of course the plugin should not modify the actual Maya scene, but instead remove artefacts from the data which has already been extracted from the Maya scene… it's a bad idea to modify the Maya scene itself during export!]
  • It should optimize the Maya scene during export: For instance, the last time I looked at the XNA Maya plugin it exported a single material group for every Maya shape node, resulting in hundreds of draw calls for our simple Tiger tank example object. This is almost as bad as requiring the artist to work with a triangulated scene. Instead the plugin should try its best to optimize the scene for efficient rendering during export (like grouping polygons by material, sorting vertices for efficient vertex-cache usage, removing redundant vertices, and so on).

Of course this list could go on for a few dozen more points; there's almost 10 years of work in our asset pipeline, and there's probably more C++ and MEL code in it than in Nebula3 (which isn't necessarily a good thing ;)