Bottom-up Dependency Injection in Unity3d

This is a potential solution to the dependency injection problem: when an object needs access to some shared resource, how does it get a reference to it? Shared resources are typically handled in games as Singletons, but that is not strictly necessary. Some of those resources could be unloaded and reloaded on an “as needed” basis. Why do we care? Because to test the client object, we could ideally substitute something completely different for the datasource, and the client would be none the wiser.

Why not Editor references, or prefabs? Because a reference is meant to indicate a controller-controlled relationship. The object with the reference is the controlling object, meant to make changes to the controlled object. If we were to add references to dependencies as well, that would give us dependency injection. The catch is, if we later want to change the object we depend on, perhaps to change the specific subclass, the references the dependent objects hold can be invalidated. Then you would have to reconnect every single object reference to the new object, by hand.

The same goes for prefabs. There are ways of keeping prefabs sane, but one change (like deleting an unrelated GameObject from the prefab) will require a new prefab, invalidating every reference to (and from) it.


This solution is designed for game/application code. It is not designed for separate libraries or assemblies. It is also designed to be as simple as possible, so some performance costs are incurred in exchange for simpler final code.

First we define how a piece of game code will request a dependency. Perhaps it should look something like this:
SharedObject thing = SharedObject.instance;
That sure is pretty. The only gotcha is that when SharedObject is an interface, we can’t declare a static instance property on it. Okay, then let’s get lower level.
SharedObject thing = DependencyRegistry.Find<SharedObject>();
Not bad. Brief, but effective. We intentionally avoid the DependencyRegistry.instance.Find<>() pattern because it’s more wordy. We will never be passing the DependencyRegistry as a parameter to a method, for example.

What if the caller needs to choose from a list of providers of the same class type? Perhaps a list of database connections. We will consider this a rare case, as most of the time you just need a single class or interface implementation. In any event, this is type-safely solved by creating a wrapper class or interface that offers the list in whatever form makes the most sense for the task at hand. It could be a list object, or even a custom class that has a field for each specific instance as needed.
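To make that concrete, here is a minimal sketch of such a wrapper. Everything in it (IDatabaseConnection, the field names) is hypothetical, just to show the shape of the idea: the wrapper is itself the provider, and consumers resolve it instead of the individual connections.

using System.Collections.Generic;

// Hypothetical wrapper: it owns the connections in whatever form fits the task.
public interface IDatabaseConnection { /* ... */ }

public class DatabaseConnections
{
    // Either a plain list...
    public List<IDatabaseConnection> All = new List<IDatabaseConnection>();

    // ...or a named field per specific instance, as needed.
    public IDatabaseConnection Primary;
    public IDatabaseConnection Analytics;
}

// Usage: DatabaseConnections db = DependencyRegistry.Find<DatabaseConnections>();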

That SharedObject.instance thing was nice though. And it would work perfectly fine for any concrete class. At first blush, it would seem inconsistent to allow two ways of accessing the same feature. However, the mindset behind an interface is very different from a class, so we can allow it. We just need to remember that a concrete provider that isn’t a singleton has to define an instance getter that resolves to the Registry.
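For a concrete class, that getter can simply forward to the registry. A minimal sketch, assuming the Find<T>() API from above:

using UnityEngine;

public class SharedObject : MonoBehaviour
{
    // Not a true Singleton: the getter just forwards the lookup to the registry.
    public static SharedObject instance
    {
        get { return DependencyRegistry.Find<SharedObject>(); }
    }
}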

Okay, API defined. How do we implement it? Clearly some searching is needed. Initially I considered searching the Transform hierarchy, but I realized that most objects in that tree will not be providers. Additionally, some of the providers may not even be under the same Transform root. This led me to an object registry: a central list of objects that offer themselves as datasources.


The registry itself is as simple as a list of objects. We chose weak references so that the registry never keeps a provider alive on its own; a provider can still rely on its OnDestroy callback or the IDisposable pattern to clean up once no consumers are making use of it any longer. Also, there should be relatively few providers at any one time, so the RAM overhead should be negligible.

As an optimization, we assume that objects added later to the list will be more relevant to the potential consumers than the earlier ones. Therefore when searching we start at the end and work our way backwards.
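Here is a minimal sketch of the registry along those lines, using the names from earlier; the full source is linked at the end of this post.

using System;
using System.Collections.Generic;

public static class DependencyRegistry
{
    // Weak references: the registry never keeps a provider alive by itself.
    static readonly List<WeakReference> providers = new List<WeakReference>();

    public static void Add(object provider)
    {
        // Dedup: adding the same provider twice is a no-op.
        foreach (WeakReference wr in providers)
            if (ReferenceEquals(wr.Target, provider))
                return;
        providers.Add(new WeakReference(provider));
    }

    public static void Remove(object provider)
    {
        providers.RemoveAll(wr => ReferenceEquals(wr.Target, provider));
    }

    public static T Find<T>() where T : class
    {
        // Later additions are assumed more relevant, so search backwards.
        for (int i = providers.Count - 1; i >= 0; i--)
        {
            T found = providers[i].Target as T;
            if (found != null)
                return found;
        }
        return null;
    }
}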

Getting added to the registry is as simple as asking to be added. To prevent waste, we dedup additions. This can be safely done from within a MonoBehaviour’s OnEnable() method. This will be called before Start(), and work correctly from within prefabs as well as scenes. Likewise, removal is easily done during OnDisable().
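A provider then looks something like this (SaveSystem is just a hypothetical example):

using UnityEngine;

public class SaveSystem : MonoBehaviour
{
    // Registered for exactly as long as this component is enabled.
    void OnEnable()  { DependencyRegistry.Add(this); }
    void OnDisable() { DependencyRegistry.Remove(this); }
}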

It is not required that you add/remove yourself to the registry at these times. You can add or remove yourself at any time. The only things that care are the potential consumers. It is perfectly possible for a consumer to sit in a Start coroutine and wait for the provider object to become available.
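Such a waiting consumer might look like this sketch (again with hypothetical names):

using System.Collections;
using UnityEngine;

public class SaveButton : MonoBehaviour
{
    SaveSystem saves;

    IEnumerator Start()
    {
        // Poll once per frame until a SaveSystem registers itself.
        while ((saves = DependencyRegistry.Find<SaveSystem>()) == null)
            yield return null;
        // Safe to use 'saves' from here on.
    }
}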

You can grab the full source as described from
http://dwulf.com/source/DependencyRegistry.cs

Don’t Fear the Button

I’m afraid to push a button.
User interface design is something I care about. I am a developer, which practically makes me a power user by default. And yet I still come across boneheaded design in commonly used applications. Design that would make computers less accessible to the general public. Worse, in places that didn’t previously have these issues.

Latest: YouTube in a desktop browser. Drag the playhead to some later point in the timeline. If it’s not loaded yet – tough cookies! Seeking only works within the section that’s already loaded. Worse – it changes the “scale” of the playhead to show only what’s loaded, but *only while you are holding the playhead.* Absolutely no regard for user expectations.

Why is this suddenly too difficult for YouTube to manage? A year ago I could easily have skipped right to where I wanted to be and simply waited for the data to download starting at the new location. As a developer, I can imagine why this choice was made. Perhaps the data format is not well suited to seeking. But is a “please wait” indicator too much to ask? Then the playhead scale stays consistent, and the user understands where things stand. Instead, somewhere inside Google, a bunch of extra code was written to support this “seek only within the loaded area” behavior, with special cases for the display and input systems. Someone had to deliberately break the user’s trust.

Okay, fine, I can’t move the playhead to the actual intended target. There is another button, with the tooltip “watch later.” Maybe that will download the video in the background so I can close the page and return. Oops! Nope, that means… well, I’m not exactly sure. Some menu shows up on the left side of the browser with no indication of how it relates to the action I’ve requested – “watch later.”

That’s not what I wanted it to do. But I’m afraid that if I push the button again to “undo” whatever I just did, maybe the page will reload and I’ll have to wait for the whole thing to reload all over again. At this point there is no trust between me and the application that it will do anything that I expect it to.

We must always keep a watchful eye on our designs, whether they be games, websites, or applications. Every “feature” needs to be examined through the lens of the intended audience for clarity, not confusion. This is an ongoing battle that has been fought for many years, and will be for many years to come.
Let’s not forget it!

Why I’m not worried about PC and Console decline

A good number of people better respected (and better read) than I have lately voiced concerns about the state of indie and art gaming in the near future. Consoles, once the easiest and cheapest way to get hyper-realistic games, are now regularly passed on the turnpike by off-the-shelf PCs. And the PCs are slowly being replaced by simpler tablet touchscreen devices. Devices that can play games whenever you want, wherever you are. And, barring some boneheaded designs, they are as easy to play on as their console counterparts.

Good.

I’ve been around long enough to know the phrase “zero wait states”. The idea was that machines would get so powerful that there would be no delay between when you ask a computer to do something and when it happens. Some current systems take that to heart, most notably Apple’s. Most are so mired in their computer science that the feel of a machine is a distant fifth priority behind everything else. That is a serious disservice to the power our current computers command. I bring this up because it is no longer vitally important to know that this machine has 1.21 gigawatts of processing power. These easier-to-use devices *should* be where we are going.

And yet people are freaking out that there will no longer be a venue for “that” kind of application or game. Why not? Touchscreens? They can certainly use controllers just like consoles, if the game works better for it. Big displays? The internal resolution of most tablets far exceeds our “high def” televisions, and the devices have been able to drive them for years.

So what is the problem? Visibility of indie and art games? There is this strange thing called an interweb thingy where you can see other people’s opinions about all kinds of things. So what if Steam suddenly closed its doors on PC? As a website alone, even if it were no longer directly selling a thing, it would still be a highly respected resource for new games on the Internet because of its partially crowd-based decision system. That is all it takes.

So what is all this grousing about? Things are getting simpler. Apart from a few better cable combinations (a charging cable with HDMI out for the iPad) and some extra software support (HID controller support via Bluetooth), we are already there, with a bright future. People who want to know should be able to learn what they want about machines. But the bulk of the world should not *need* to know. And that is exactly where we are heading.

Veridus Quo

Tonight I was at a local gathering of moonlighting game developers. One artist was having a lot of issues with his character rigging in Maya. He kept having to go back and recreate his skeleton and skin weights over and over again. He wanted to know what the typical workflow was for rigged characters. He described exporting the skin weights so that he could reimport them if things went wrong.

This is not a technical explanation. These concepts will be just as valid for Blender or 3dsMax as they are for Maya.

Here were my suggestions.


The first guideline to remember is to make backups often. That means saving the Maya file and making a duplicate, or putting it in a version control system. How often? At least once an hour. Also save additional copies when you feel you have finished something complicated. There are some tools out there that will do this automatically for you.

Be sure to save it as a Maya file. Exporters should only ever be used as a last resort. Don’t use a skin weights exporter to ‘save’ the state of that part of the mesh. Always save everything, and that means the Maya file itself.

Phase 1: forming the mesh

Get your mesh formed how you want it in a bind pose. Do lots of test deformations manually, and then reset it back. This is the only phase in which you can easily add or remove vertexes. You can still move vertexes later for tweaking, but adding or removing them later is dangerous.

Put extra vertexes near joints that are expected to bend significantly, say more than 20 degrees. Take a human elbow, for example. You might think one ring of vertexes at the place where the elbow bends would be enough. Instead, have at least two rings of vertexes, just above and just below where the elbow will bend. These will later be used to smooth out the large joint changes.

You may freely do texturing in this phase if desired. It is easier to do if you aren’t adding and removing vertexes, but it doesn’t hurt to do it here (or in any following phase).

Phase 2: building the skeleton

Now we will build the armature that must line up with the mesh. It is vitally important that you don’t move forward until the skeleton lines up with the expected internal joints of the mesh.

Starting in a 2D view, perhaps Front, build the armature and line it up with your mesh, all in this 2D view only. Do not bind the mesh yet! Then switch to a Side view and tweak only the depth of each pivot point. You will likely have to adjust the joint lengths slightly to keep the proportions correct.

As you build each joint, this is the best time to create any movement constraints you want on it. The range of motion can be adjusted later, but the correct orientation of the joint must be set now. In the case of an elbow, you will want to align one axis with the large up-and-down rotation that elbows can do, leaving the other axis for the smaller side-to-side motion limits.

You are welcome to add any desired inverse kinematics or other control structures at this or later phases. This can be handy for quickly testing the resulting mesh. Just be sure to undo any movement created by the controls before you get out of sync with the mesh.

Largely you will want joints that match up with real physical joints – a joint at an elbow, for example. That said, there are some unexpected tweaks that can significantly improve the quality of the final skeleton.

An arm with a hand on the end should have an extra joint added just above the wrist joint, perhaps a quarter of the way between the wrist and the elbow. Keep it in line with the ‘bone’. This joint is never rotated to ‘bend’ the bone, only ever rotated to partially follow the wrist rotation. This way the wrist can rotate up and down, and side to side, without significantly affecting the forearm. (See this superb but technical paper for more information.)

Another exception is to remember that the joints do not have to stay physically inside the mesh. If there is a large curved mesh section that will never change curvature (like an alien arm with a curved bone), just make a single joint from one end to the other. You do not need smaller joints in between just to keep it visually within the mesh. Joints are never rendered.

Once you have realigned everything, go back to a perspective view and triple-check the joint positions. You will not be able to easily go back and change the joint lengths after this phase.

Phase 3: binding the skin

It is finally time to bind the skeleton to the mesh in its bind pose. Pay close attention to the options in the binding tool. You will want to have Maya make as many of the vertex weight calculations as possible for you. So bind it, and then start deforming the joints to see how well the default skin looks. The resulting mesh should look good in most places and need tweaking in a few. If the skin moves like a mushy sack of potatoes, undo the bind, set a smaller influence radius, and bind again. On the other hand, if the skin looks rigid, the influence radius is too small.

Phase 4: everything else

At this point the key features are set in stone: the vertex count and the joint count. From here on, it is all tweaking. Editing the texture itself, or the UV coordinates, is perfectly fine. Editing the skin weights is a must to get every movement just right.

You may edit the positions of some vertexes, though it is discouraged. Doing so also requires adjusting the skin weights, and typically it is better just to adjust the skin weights alone. Either way, do not add new vertexes or remove them. It tends to mess things up badly.

Avoid changing joint lengths. You can do it, but then you have to adjust a fair number of vertex positions and weights to make it not feel stretched.


Maybe this artist’s guide to the workflow of creating skinned meshes will help someone out there. It already helped one, so it must be of some use! If it is useful to you, pass it on. If I’m wrong about something, poke me on Twitter @DrakkenWulf. Happy modeling!