Programmers vs Video Game Art
When I made Bad Blaster, I drew everything by hand. I had to learn basic animation, such as how dogs run and how birds flap their wings. Animation is time-consuming, so I kept everything to between 4 and 8 frames if possible.
I spent somewhere between 3 and 5 months making the game, and drawing everything took way more time than I thought. If I had an idea for an enemy or an ability, I had to open up GIMP and grab the digital pencil tablet. I had to start limiting my ideas to keep the art manageable. I started introducing more enemies that hovered in the air, since they took at most 4 frames. Pixel art was not an option; it's not my preferred visual style for a video game, and it's also a lot harder to make than just drawing a regular character.
By the time I had finished the game, I had fallen in love with the actual programming aspect of it. Writing the music for the game was by far my favorite part of the project, but the coding was an unexpected delight. I also really liked designing the game. And I didn't like that my designs were limited by my artistic abilities.
I took all the lessons I learned from making Bad Blaster and put them to work in my next game, Robot Ops. The formula was simple: 1) I'm good at programming now and I really enjoy it, 2) I want lots of cool effects but I don't want to use an engine like Unreal or Unity, and 3) I don't want to animate anything by hand again.
The solution was obvious: I'm a programmer, so I got to work on a flat-design animation framework that I could use for anything.
It turned out pretty cool and conceptually simple, yet it took a lot of effort to get everything working smoothly.
I knew nothing of 3D animation and modeling at the time, but what I came up with was very close to how 3D animation works. I take a model and define a pose, then define the next pose in the sequence and how long the transition between the two should take, with linear interpolation handling the actual changes. A sequence could either repeat or play once, which I used for jumps and falling. Here's a short description of how a character like Bolt, the knight-looking guy with the sword and shield, was put together.
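The interpolation step can be sketched in a few lines of Java (the framework mentioned below, LibGDX, is Java-based). `Pose` and `lerp` are illustrative names, not the game's actual code; a pose here is just an array of joint angles in degrees.

```java
// Minimal sketch of pose-to-pose animation, assuming a pose is just an
// array of joint angles (degrees). Pose and lerp are illustrative names.
public class Pose {
    final float[] jointAngles;

    Pose(float... jointAngles) {
        this.jointAngles = jointAngles;
    }

    // Linear interpolation between two poses: t = 0 gives a, t = 1 gives b.
    static Pose lerp(Pose a, Pose b, float t) {
        float[] out = new float[a.jointAngles.length];
        for (int i = 0; i < out.length; i++) {
            out[i] = a.jointAngles[i] + (b.jointAngles[i] - a.jointAngles[i]) * t;
        }
        return new Pose(out);
    }

    public static void main(String[] args) {
        Pose idle  = new Pose(0f, 90f);   // e.g. shoulder, elbow
        Pose swing = new Pose(180f, 0f);
        Pose halfway = lerp(idle, swing, 0.5f);
        System.out.println(halfway.jointAngles[0] + " " + halfway.jointAngles[1]); // prints "90.0 45.0"
    }
}
```

A full animation is then just a list of poses with transition durations; a looping sequence wraps from the last pose back to the first, while a one-shot sequence (a jump, a fall) stops at the end.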
- The head is the center of the whole model. Yes, that was a bad idea. But that's how it was.
- The head is a Rounded Rectangle that can be rotated about an adjustable origin.
- The width, height, and radius are defined when the head is created.
- In the constructor, a few more values are computed: the radius is subtracted from the width and height to get a new vector inside each "corner" of the rounded rectangle.
- The Rounded Rectangle is rendered in a few steps. First, a rectangle is rendered based on the vectors created in the previous step; this rectangle is taller than it is wide. Then a second rectangle is rendered that is wider than it is tall. Finally, a circle is rendered at each of the 4 vectors defined earlier. Together, these form a rounded rectangle.
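The steps above can be sketched roughly like this; `ShapeDrawer` stands in for LibGDX's `ShapeRenderer`, and all names are illustrative rather than the game's actual code.

```java
// Rough sketch of the rounded-rectangle construction. ShapeDrawer stands in
// for LibGDX's ShapeRenderer; the names here are illustrative.
public class RoundedRect {
    interface ShapeDrawer {
        void rect(float x, float y, float w, float h);
        void circle(float cx, float cy, float r);
    }

    final float width, height, radius;
    final float[][] corners; // the 4 inner "corner" vectors, relative to center

    RoundedRect(float width, float height, float radius) {
        this.width = width;
        this.height = height;
        this.radius = radius;
        // Subtract the radius from the half-extents to find each inner corner.
        float hx = width / 2 - radius;
        float hy = height / 2 - radius;
        corners = new float[][] { { hx, hy }, { -hx, hy }, { -hx, -hy }, { hx, -hy } };
    }

    // Two overlapping rectangles plus a circle at each inner corner.
    void render(ShapeDrawer d, float cx, float cy) {
        d.rect(cx - width / 2 + radius, cy - height / 2, width - 2 * radius, height);  // tall
        d.rect(cx - width / 2, cy - height / 2 + radius, width, height - 2 * radius);  // wide
        for (float[] c : corners) {
            d.circle(cx + c[0], cy + c[1], radius);
        }
    }

    public static void main(String[] args) {
        RoundedRect r = new RoundedRect(10f, 6f, 2f);
        System.out.println(r.corners[0][0] + " " + r.corners[0][1]); // prints "3.0 1.0"
    }
}
```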
That's cool and all, but I was relying on LibGDX's shape rendering library, which doesn't make it easy to rotate rectangles. It's much easier to just rotate triangles.
To turn this into a rotatable rounded rectangle, I had to switch to triangles. I could have kept the circles, but arcs are technically better, so I replaced the circles with arcs. The new formula had 5 rectangles, each filled in by 2 triangles. When the shape's angle changed, all of its vectors were rotated about the shape's origin.
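The rotation itself is ordinary 2D trigonometry: translate a vertex so the origin sits at (0, 0), apply the rotation, and translate back. A minimal sketch:

```java
// Rotating a vertex about a shape's origin: translate, rotate, translate back.
public class Rotate {
    static float[] rotateAbout(float px, float py, float ox, float oy, float radians) {
        float cos = (float) Math.cos(radians);
        float sin = (float) Math.sin(radians);
        float dx = px - ox;
        float dy = py - oy;
        return new float[] { ox + dx * cos - dy * sin,
                             oy + dx * sin + dy * cos };
    }

    public static void main(String[] args) {
        // A quarter turn of (2, 0) around (1, 0) lands at roughly (1, 1).
        float[] p = rotateAbout(2f, 0f, 1f, 0f, (float) (Math.PI / 2));
        System.out.printf("%.2f %.2f%n", p[0], p[1]); // prints "1.00 1.00"
    }
}
```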
For the animation aspect, I simply connected the shapes to one another: the head attached to the body, the upper arms to the body, the lower arms to the upper arms, the sword hilt to the lower right arm, and so on. Each body part was some combination of shapes, usually triangles and arcs. The origin of each body part - the vector about which the shape rotated - was always at the connecting joint. A shape could either share or ignore its parent's angle. If it shared the angle, it could still have its own; the two angles were added together, so the child shape performed its rotation on top of however the parent was rotating.
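That parent/child scheme can be sketched as a tiny tree where a part's world angle is its own angle plus, optionally, its parent's. Class and field names here are illustrative, not from the game:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the parent/child angle sharing described above: a part's world
// angle is its own angle plus (optionally) its parent's. Names are illustrative.
public class BodyPart {
    final String name;
    float localAngle;                 // the part's own angle, in degrees
    final boolean sharesParentAngle;  // add the parent's rotation on top?
    BodyPart parent;
    final List<BodyPart> children = new ArrayList<>();

    BodyPart(String name, boolean sharesParentAngle) {
        this.name = name;
        this.sharesParentAngle = sharesParentAngle;
    }

    BodyPart attach(BodyPart child) {
        child.parent = this;
        children.add(child);
        return child;
    }

    float worldAngle() {
        float base = (sharesParentAngle && parent != null) ? parent.worldAngle() : 0f;
        return base + localAngle;
    }

    public static void main(String[] args) {
        BodyPart body = new BodyPart("body", false);
        BodyPart upperArm = body.attach(new BodyPart("upperArm", true));
        BodyPart lowerArm = upperArm.attach(new BodyPart("lowerArm", true));
        body.localAngle = 10f;
        upperArm.localAngle = 30f;
        lowerArm.localAngle = -20f;
        System.out.println(lowerArm.worldAngle()); // prints "20.0"
    }
}
```

A part that ignores its parent's angle (like a head that stays level while the body tilts) simply uses its own angle directly.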
Borders were created by first rendering a blacked-out and "inflated" copy of the original shapes, then rendering the real shapes in front of them.
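The border trick might look something like this sketch, where `Drawer` is a hypothetical stand-in for the real shape-rendering calls:

```java
// The border trick: draw an "inflated" black silhouette first, then the real
// shape in front, leaving a border-wide black outline. Drawer is an
// illustrative stand-in for the actual rendering API.
public class BorderedShape {
    interface Drawer {
        void rect(float x, float y, float w, float h, boolean black);
    }

    static void renderWithBorder(Drawer d, float x, float y, float w, float h, float border) {
        // Blacked-out copy, inflated by the border width on every side.
        d.rect(x - border, y - border, w + 2 * border, h + 2 * border, true);
        // The real shape, rendered in front.
        d.rect(x, y, w, h, false);
    }

    public static void main(String[] args) {
        renderWithBorder((x, y, w, h, black) ->
                System.out.println((black ? "black" : "fill") + ": " + x + "," + y + " " + w + "x" + h),
            10f, 10f, 4f, 6f, 1f);
    }
}
```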
I used gradients to make even cooler effects. My favorite was a spinning tornado thing that used all kinds of cool sinusoidal functions to twist about, and it had some really neat triangles with coordinated gradients that "swapped places" in order to create an illusion of movement.
But it always came back to triangles and trigonometry.
When I started using Blender years later, I was amazed at how similar it was to my 2D solutions. I also realized that I had made some mistakes in my own design, like the basic problem of not having the origin at the feet.
The end result was great. I got to do much more than I could have done drawing it by hand. It still took plenty of time, but at least it felt good making it.