
Replacing 2D Sprites with 3D Models

  • 21 min read
  • 21 Sep 2015


In this article by Maya Posch, author of the book Mastering AndEngine Game Development, we look at the use of 3D models in a 2D game. When using a game engine that limits itself to handling scenes in two dimensions, it seems obvious that you would use two-dimensional images, better known as sprites. After all, you won't need that third dimension, right? It is when you get into more advanced games and scenes that you notice that for animations, as well as for the reuse of existing assets, there are many advantages to using a three-dimensional model in a two-dimensional scene.

In this article we will cover these topics:

  • Using 3D models directly with AndEngine
  • Loading 3D models within an AndEngine game



Why 3D in a 2D game makes sense


The reasons we want to use 3D models in our 2D scene include the following:

  • Recycling of assets: You can use the same models as used for a 3D engine project, as well as countless others.
  • Broader base of talent: You'll be able to use a 3D modeler for your 2D game, since good sprite artists are rare.
  • Ease of animation: Good animation with sprites is hard. With 3D models, you can use various existing utilities to get smooth animations with ease.


As for the final impact on the game's looks, using 3D models is no silver bullet, but it should ease development somewhat. The quality of the models and animations, as well as the way they are integrated into a scene, will determine the final look.

2D and 3D compared


In short:

| 2D sprite | 3D model |
| --- | --- |
| Defined using a 2D grid of pixels | Defined using vertices in a 3D grid |
| Only a single front view | Rotatable to observe any desired side |
| Resource-efficient | Resource-intensive |


A sprite is an image or, if it's animated, a series of images. Within the boundaries of its resolution (for example, 64 x 64 pixels), the individual pixels make up the resulting image. This is a proven low-tech method, and it has been in use since the earliest video games. Even the first 3D games, such as Wolfenstein 3D and Doom, used sprites instead of models, as the former are easy to implement and require very few resources to render.

Given the limited memory and processing capabilities of video consoles and personal computers until the later part of the 1990s, sprites were everywhere. It wasn't until the appearance of dedicated 3D graphics processors for consumer systems from companies such as 3dfx, Nvidia, and ATI that sprites were largely replaced by vertex-based (3D) models.

This is not to say that 3D models were totally new by then, of course. The technology had been in commercial use since the 1970s, when it was used for movie CGI and engineering in particular. In essence, both sprites and models are a representation of the same object; it's just that one contains more information than the other. Once rendered on the screen, the resulting image contains roughly the same amount of data. The biggest difference between sprites and models is the total amount of information that they can contain.

For a sprite, there is no side or back. A model, on the other hand, has information about every part of its surface. It can be rotated in front of a camera to obtain a rendering of each of those orientations. A sprite is thus equivalent to a single orientation of a model.

Dealing with the third dimension


The first question that is likely to come to mind when it is suggested that 3D models be used in what is advertised as a 2D engine is whether or not this will turn the game engine into a 3D engine. The brief answer here is "No."

The longer answer is that despite the presence of these models, the engine's camera and other features are not aware of this third dimension, and so they will not be able to deal with it. It's not unlike the ray-casting engine employed by titles such as Wolfenstein 3D, which always operated in a horizontal plane and, by default, was not capable of tilting the camera to look up or down. This does imply that AndEngine can be turned into a 3D engine if all of its classes are adapted to deal with another dimension.

We're not going that far here, however. All that we are interested in right now is integrating 3D model support into the existing framework. For this, we need a number of things. The most important one is to be able to load these models. The second is to render them in such a way that we can use them within the AndEngine framework.

As we explored earlier, the way to integrate 3D models into a 2D scene is to realize that a model is just a very large collection of possible sprites. What we need is a camera that we can orient relative to the model, similar to how the camera in a 3D engine works. We can then display the model from that orientation.

Any further manipulations, such as scaling and scene-wide transformations, are performed on the model's camera configuration. The model is only manipulated to obtain a new orientation or frame of an animation.

Setting up the environment


We first need to load the model from our resources into the memory. For this, we require logic that fetches the file, parses it, and produces the output, which we can use in the following step of rendering an orientation of the model. To load the model, we can either write the logic for it ourselves or use an existing library. The latter approach is generally preferred, unless you have special needs that are not yet covered by an existing library.

As we have no such special needs, we will use an existing library. Our choice here is the Open Asset Import Library, or assimp for short. It can import numerous 3D model formats in addition to other kinds of resource files, which we'll find useful later on. Assimp is written in C++, which means that we will be using it as a native library (.a or .so). To accomplish this, we first need to obtain its source code and compile it for Android.

The main Assimp site can be found at http://assimp.sf.net/, and the Git repository is at https://github.com/assimp/assimp. From the latter, we obtain the current source for Assimp and put it into a folder called assimp.

We can easily obtain the Assimp source by either downloading an archive file containing the full repository or by using the Git client (from http://git-scm.com/) and cloning the repository using the following command in an empty folder (the assimp folder mentioned):

git clone https://github.com/assimp/assimp.git


This will create a local copy of the remote Git repository. An advantage of this method is that we can easily keep our local copy up to date with the Assimp project's version simply by pulling any changes.
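
Pulling in those changes later is then a single command, run inside the assimp folder:

git pull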

As Assimp uses CMake for its build system, we will also need to obtain the CMake support files for Android from http://code.google.com/p/android-cmake/. The android-cmake project contains the toolchain file that we need to set up cross-compilation from our host system to Android/ARM. Assuming that we put android-cmake into the android-cmake folder, we can then find this toolchain file under android-cmake/toolchain/android.toolchain.cmake.

We now need to make sure that the following environment variable is set:

  • ANDROID_NDK: This points to the root folder where the Android NDK is placed


At this point, we can use either the command-line-based CMake tool or the cross-platform CMake GUI. We choose the latter for sheer convenience. Unless you are quite familiar with the working of CMake, the use of the GUI tool can make the experience significantly more intuitive, not to mention faster and more automated. Any commands we use in the GUI tool will, however, easily translate to the command-line tool.

The first thing we do after opening the CMake GUI utility is specify the location of the source (the assimp source folder) and the output path for the CMake-generated files. For the latter, we create a new folder called buildandroid inside the Assimp source folder and specify it as the build folder. We now need to set a variable inside the CMake GUI:

  • CMAKE_MAKE_PROGRAM: This variable specifies the path to the Make executable. For Linux/BSD, use GNU Make or similar; for Windows, use MinGW Make.


Next, we click on the Configure button, where we can set the type of Make files to generate, as well as specify the location of the toolchain file.

For the Make file type, you will generally want to pick Unix makefiles on Linux or similar and MinGW makefiles on Windows. Next, pick the option that allows you to specify the cross-compile toolchain file and select this file inside the Android-cmake folder as detailed earlier.

After this, the CMake GUI should output Configuring done. What has happened now is that the toolchain file that we linked to has configured CMake to use the NDK's compiler, which targets ARM, and has set a number of other configuration options. If we want, we can change some options here, such as the following:

  • CMAKE_BUILD_TYPE: We can specify the type of build we want here, such as the Debug and Release build types.
  • ASSIMP_BUILD_STATIC_LIB: This is a boolean value. Setting it to true (or checking the box in the GUI) will generate only a library file for static linking and no .so file.


Whether we want to build statically or not depends on our ultimate goals and distribution details. As static linking of external libraries is quite convenient and also reduces the total file size on a platform that is generally already strapped for space, it seems obvious to link statically. The resulting .a library for a release build should be on the order of 16 megabytes, while a debug build is about 68 megabytes. When linking the final application, only those parts of the library that we use will be included in our application, shrinking the total file size once more.
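
For reference, a command-line equivalent of these GUI steps might look roughly as follows when run from the buildandroid folder. The relative android-cmake path and the make program are assumptions based on the folder layout described earlier; on Windows, the generator would be MinGW Makefiles instead:

cmake -DCMAKE_TOOLCHAIN_FILE=../../android-cmake/toolchain/android.toolchain.cmake \
      -DCMAKE_MAKE_PROGRAM=make \
      -DCMAKE_BUILD_TYPE=Release \
      -DASSIMP_BUILD_STATIC_LIB=ON \
      -G "Unix Makefiles" ..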

We are now ready to click on the Generate button, which should generate a Generating done output. If you get an error along the lines of Could not uniquely determine machine name for compiler, you should look at the paths used by CMake and check whether they exist. For the NDK toolchain on Windows, for example, the path may contain the windows part, whereas the NDK only has a folder called windows-x86_64.

If we look into the buildandroid folder after this, we can see that CMake has generated a makefile and additional relevant files. We only need the central Make file in the buildandroid folder, however. In a terminal window, we navigate to this folder and execute the following command:

make


This should start the execution of the Make files that CMake generated and result in a proper build. At the end of this compilation sequence, we should have a library file in assimp/libs/armeabi-v7a/ called libassimp.a. For our project, we need this library and the Assimp include files. We can find them under assimp/include/assimp.

We copy the folder with the include files to our project's /jni folder. The .a library is placed in the /jni folder as well. As this is a relatively simple NDK project, a simple file structure is fine. For a more complex project, we would want to have a separate /jni/libs folder, or something similar.
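
The /jni folder should then look roughly like this; Android.mk and Application.mk are the build files that we will write shortly:

/jni
    assimp/             (the Assimp include files)
    libassimp.a         (the static library we just compiled)
    assimpImporter.cpp  (our importer code, created in the next section)
    Android.mk
    Application.mk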


Importing a model


The Assimp library provides conversion tools for reading resource files, such as those for 3D mesh models, and exposes the result in a generic format on the application's side. For a 3D mesh file, Assimp provides us with an aiScene object that contains all the meshes and related data as described by the imported file.

After importing a model, we need to read the sets of data that we require for rendering. These are the types of data:

  • Vertices (positions)
  • Normals
  • Texture mapping (UV)
  • Indices


Vertices might be obvious; they are the positions of points between which lines of basic geometric shapes are drawn. Usually, three vertices are used to form a triangular face, which forms the basic shape unit for a model.

Normals indicate the orientation of the surface at a vertex; there is one normal per vertex.

Texture mapping is provided using so-called UV coordinates. Each vertex has a UV coordinate if texture mapping information is provided with the model.

Finally, indices are values provided per face, indicating which vertices should be used. This is essentially a compression technique, allowing the faces to define the vertices that they will use so that shared vertices have to be defined only once. During the drawing process, these indices are used by OpenGL to find the vertices to draw.
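
As a quick illustration of this compression (our own example, not taken from a model file): a quad consists of two triangles that share an edge, so indexing lets us store four unique vertices instead of six:

// A quad as two indexed triangles: four unique vertices instead of six.
float vertices[] = {
    0.0f, 0.0f, 0.0f,  // 0: bottom left
    1.0f, 0.0f, 0.0f,  // 1: bottom right
    1.0f, 1.0f, 0.0f,  // 2: top right
    0.0f, 1.0f, 0.0f   // 3: top left
};
short indices[] = {
    0, 1, 2,  // first triangle
    0, 2, 3   // second triangle (reuses vertices 0 and 2)
};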

We start off our importer code by first creating a new file called assimpImporter.cpp in the /jni folder. We require the following includes:

#include "assimp/Importer.hpp"     // C++ importer interface

#include "assimp/scene.h"           // output data structure

#include "assimp/postprocess.h"     // post processing flags

 

// for native asset manager

#include <sys/types.h>

#include <android/asset_manager.h>

#include <android/asset_manager_jni.h>


The Assimp includes give us access to the central Importer object, which we'll use for the actual import process, and the scene object for its output. The postprocess include contains various flags and presets for the post-processing to be performed by the Importer, such as triangulation.

The remaining includes give us access to the Android Asset Manager API. The model file is stored inside the /assets folder which, once packaged into an APK, is only accessible at runtime via this API, whether from Java or from native code.

Moving on, we will be using a single function in our native code to perform the importing and processing. As usual, we have to first declare a C-style interface so that when our native library gets compiled, our Java code can find the function in the library:

extern "C" {

JNIEXPORT jboolean JNICALL Java_com_nyanko_andengineontour_MainActivity_getModelData(JNIEnv*
env,

     jobject obj,

     jobject model,

     jobject assetManager,

     jstring filename);

};


The JNIEnv* parameter and the first jobject parameter are standard in an NDK/JNI function, with the former being a handy pointer to the current JVM environment, offering a variety of utility functions. Our own parameters are the following:

  • model
  • assetManager
  • filename


The model parameter is a basic Java class with getters and setters for the vertex, normal, UV, and index data arrays. We create an instance of it on the Java side and pass a reference to it via the JNI.

The next parameter is the Asset Manager instance that we created in the Java code.

Finally, we obtain the name of the file containing our mesh, which we are supposed to load from the assets.

One possible gotcha in the naming of the function we're exporting is that of underscores. Within the function name, no underscores are allowed, as underscores are used to indicate to the NDK what the package name and class names are. Our getModelData function gets parsed as being in the MainActivity class of the package com.nyanko.andengineontour.

If we had tried to use, for example, get_model_data as the function name, the NDK would have looked for a function called data in the model class of the com.nyanko.andengineontour.get package.

Next, we can begin the actual importing process. First, we define the aiScene pointer that will hold the imported scene and the arrays for the imported data, as well as the Assimp Importer instance:

const aiScene* scene = 0;
jfloat* vertexArray;
jfloat* normalArray;
jfloat* uvArray;
jshort* indexArray;

Assimp::Importer importer;


In order to use a Java string in native code, we have to use the provided method to obtain a reference via the env parameter:

const char* utf8 = env->GetStringUTFChars(filename, 0);
if (!utf8) { return JNI_FALSE; }


We then obtain a native reference to the Asset Manager instance that we created in Java:

AAssetManager* mgr = AAssetManager_fromJava(env, assetManager);
if (!mgr) { return JNI_FALSE; }


We use this to obtain a reference to the asset we're looking for, namely the model file:

AAsset* asset = AAssetManager_open(mgr, utf8, AASSET_MODE_UNKNOWN);
if (!asset) { return JNI_FALSE; }


Finally, we release our reference to the filename string before moving on to the next stage:

env->ReleaseStringUTFChars(filename, utf8);


With access to the asset, we can now read its contents into memory. While it is, in theory, possible to have Assimp read a file directly from the assets, you would have to write a new I/O manager to allow it to do so, because asset files, unfortunately, cannot be passed as a standard file handle on Android. A sketch of such a handler follows below. For smaller models, however, we can simply read the entire file into memory and pass this data to the Assimp importer.
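
What such a custom I/O manager could look like is sketched here. This is an illustration only, not part of the project in this article; the method signatures follow Assimp's IOStream and IOSystem interfaces, and the asset calls are the standard NDK ones:

// Minimal sketch of an Assimp I/O handler backed by the NDK asset API,
// for models too large to buffer in memory.
#include "assimp/IOStream.hpp"
#include "assimp/IOSystem.hpp"

class AssetStream : public Assimp::IOStream {
    AAsset* asset;
public:
    AssetStream(AAsset* a) : asset(a) {}
    ~AssetStream() { AAsset_close(asset); }
    size_t Read(void* buf, size_t size, size_t count) {
        int bytes = AAsset_read(asset, buf, size * count);
        return (size && bytes > 0) ? bytes / size : 0;
    }
    size_t Write(const void*, size_t, size_t) { return 0; } // assets are read-only
    aiReturn Seek(size_t offset, aiOrigin origin) {
        // aiOrigin_SET/CUR/END have the same values as SEEK_SET/CUR/END
        return (AAsset_seek(asset, offset, origin) >= 0) ?
            aiReturn_SUCCESS : aiReturn_FAILURE;
    }
    size_t Tell() const {
        return AAsset_getLength(asset) - AAsset_getRemainingLength(asset);
    }
    size_t FileSize() const { return AAsset_getLength(asset); }
    void Flush() {} // nothing to flush for a read-only stream
};

class AssetIOSystem : public Assimp::IOSystem {
    AAssetManager* mgr;
public:
    AssetIOSystem(AAssetManager* m) : mgr(m) {}
    bool Exists(const char* file) const {
        AAsset* a = AAssetManager_open(mgr, file, AASSET_MODE_STREAMING);
        if (a) { AAsset_close(a); return true; }
        return false;
    }
    char getOsSeparator() const { return '/'; }
    Assimp::IOStream* Open(const char* file, const char* mode = "rb") {
        AAsset* a = AAssetManager_open(mgr, file, AASSET_MODE_STREAMING);
        return a ? new AssetStream(a) : 0;
    }
    void Close(Assimp::IOStream* stream) { delete stream; }
};

The importer would take ownership of such a handler via importer.SetIOHandler(new AssetIOSystem(mgr)), after which a regular ReadFile() call would go through our asset reader. For the small models that we deal with here, though, the in-memory approach that we continue with below is a lot simpler.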

First, we get the size of the asset, create an array to store its contents, and read the file in it:

int count = (int) AAsset_getLength(asset);
char buf[count + 1];
if (AAsset_read(asset, buf, count) != count) {
    return JNI_FALSE;
}

Finally, we close the asset reference:

AAsset_close(asset);


We are now done with the asset manager and can move on to the importing of this model data:

scene = importer.ReadFileFromMemory(buf, count,
    aiProcessPreset_TargetRealtime_Fast);
if (!scene) {
    return JNI_FALSE;
}


The importer has a number of possible ways to read in the file data, as mentioned earlier. Here, we read from the memory buffer (buf) that we filled earlier, with the count parameter indicating its size in bytes. The last parameter of the import function holds the post-processing flags. Here, we use the aiProcessPreset_TargetRealtime_Fast preset, which performs triangulation (converting non-triangle faces to triangles) along with other sensible defaults.
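
Instead of a preset, individual post-processing flags can also be combined. For example (our illustration, using standard Assimp flags, not code from this article's project):

// Hand-picked post-processing flags instead of the preset:
scene = importer.ReadFileFromMemory(buf, count,
    aiProcess_Triangulate               // convert all faces to triangles
    | aiProcess_JoinIdenticalVertices   // merge duplicate vertices
    | aiProcess_GenNormals);            // generate normals if the model has none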

The resulting aiScene object can contain multiple meshes. In a complete importer, you'd want to import all of them in a loop. Here, we'll just look at importing the first mesh in the scene. First, we get the mesh:

aiMesh* mesh = scene->mMeshes[0];


This aiMesh object contains all of the information on the data we're interested in. First, however, we need to create our arrays:

int vertexArraySize = mesh->mNumVertices * 3;
int normalArraySize = mesh->mNumVertices * 3;
int uvArraySize = mesh->mNumVertices * 2;
int indexArraySize = mesh->mNumFaces * 3;

vertexArray = new jfloat[vertexArraySize];
normalArray = new jfloat[normalArraySize];
uvArray = new jfloat[uvArraySize];
indexArray = new jshort[indexArraySize];


For the vertex, normal, and texture mapping (UV) arrays, we use the number of vertices as defined in the aiMesh object, since normals and UVs are defined per vertex. The former two have three components (x, y, z), while the UVs have two (u, v).

Finally, indices are defined per vertex of a face, so we use the face count from the mesh multiplied by three, the number of vertices in a (triangulated) face.

All things but indices use floats for their components. The jshort type is a short integer type defined by the NDK. It's generally a good idea to use the NDK types for values that are sent to and from the Java side.

Reading the data from the aiMesh object to the arrays is fairly straightforward:

for (unsigned int i = 0; i < mesh->mNumVertices; i++) {
    aiVector3D pos = mesh->mVertices[i];
    vertexArray[3 * i + 0] = pos.x;
    vertexArray[3 * i + 1] = pos.y;
    vertexArray[3 * i + 2] = pos.z;

    aiVector3D normal = mesh->mNormals[i];
    normalArray[3 * i + 0] = normal.x;
    normalArray[3 * i + 1] = normal.y;
    normalArray[3 * i + 2] = normal.z;

    aiVector3D uv = mesh->mTextureCoords[0][i];
    uvArray[2 * i + 0] = uv.x;
    uvArray[2 * i + 1] = uv.y;
}

for (unsigned int i = 0; i < mesh->mNumFaces; i++) {
    const aiFace& face = mesh->mFaces[i];
    indexArray[3 * i + 0] = face.mIndices[0];
    indexArray[3 * i + 1] = face.mIndices[1];
    indexArray[3 * i + 2] = face.mIndices[2];
}


To access the correct part of the array to write to, we use an index that is the number of elements per entry (floats or shorts) times the current iteration, plus an offset to reach the right component within that entry. Doing things this way instead of using pointer incrementation has the benefit that we do not have to reset the array pointer after we're done writing.

There! We have now read in all of the data that we want from the model.

Next is arguably the hardest part of using the NDK: passing data via the JNI. This involves quite a lot of reference magic and type matching, which can be rather annoying and lead to confusing errors. To make things as easy as possible, we use the generic Java class instance, so that we already have an object to put our data into from the native side. We still have to look up the methods of this class instance, however, using what is essentially Java reflection:

jclass cls = env->GetObjectClass(model);
if (!cls) { return JNI_FALSE; }


The first goal is to get a jclass reference. For this, we use the jobject model variable, as it already contains our instantiated class instance:

jmethodID setVA = env->GetMethodID(cls, "setVertexArray", "([F)V");
jmethodID setNA = env->GetMethodID(cls, "setNormalArray", "([F)V");
jmethodID setUA = env->GetMethodID(cls, "setUvArray", "([F)V");
jmethodID setIA = env->GetMethodID(cls, "setIndexArray", "([S)V");


We then obtain the method references for the setters in the class as jmethodID variables. The parameters of GetMethodID() are the class reference we created, the name of the method, and its signature: a float array ([F) or short array ([S) parameter and a void (V) return type.
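
For reference, this signature notation is standard JNI, summarized here for convenience:

// JNI method signature notation, as used in the GetMethodID() calls above:
//   "([F)V"  - one float[] parameter, void return type
//   "([S)V"  - one short[] parameter, void return type
// In general: "(<parameter types>)<return type>", where [ marks an array,
// F a float, S a short, and object types are written as L<class path>; .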

Finally, we create our native Java arrays to pass back via the JNI:

jfloatArray jvertexArray = env->NewFloatArray(vertexArraySize);
env->SetFloatArrayRegion(jvertexArray, 0, vertexArraySize, vertexArray);

jfloatArray jnormalArray = env->NewFloatArray(normalArraySize);
env->SetFloatArrayRegion(jnormalArray, 0, normalArraySize, normalArray);

jfloatArray juvArray = env->NewFloatArray(uvArraySize);
env->SetFloatArrayRegion(juvArray, 0, uvArraySize, uvArray);

jshortArray jindexArray = env->NewShortArray(indexArraySize);
env->SetShortArrayRegion(jindexArray, 0, indexArraySize, indexArray);


This code uses the env JNIEnv* reference to create the Java arrays, allocate memory for them in the JVM, and copy our native data into them.

Finally, we call the setter methods on the class instance to set our data. These calls essentially invoke the methods on the Java object inside the JVM, providing the parameter data as Java types:

env->CallVoidMethod(model, setVA, jvertexArray);
env->CallVoidMethod(model, setNA, jnormalArray);
env->CallVoidMethod(model, setUA, juvArray);
env->CallVoidMethod(model, setIA, jindexArray);


We only have to return JNI_TRUE now, and we're done.

Building our library


To build our code, we write the Android.mk and Application.mk files. Next, we go to the top level of our project in a terminal window and execute the ndk-build command. This will compile the code and place a library in the /libs folder of our project, inside a folder that indicates the CPU architecture it was compiled for.
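
As with the make step earlier, the basic invocation is simply:

ndk-build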

For further details on the ndk-build tool, you can refer to the official documentation at https://developer.android.com/ndk/guides/ndk-build.html.

Our Android.mk file looks as follows:

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE    := libassimp
LOCAL_SRC_FILES := libassimp.a
include $(PREBUILT_STATIC_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE    := assimpImporter
#LOCAL_MODULE_FILENAME := assimpImporter
LOCAL_SRC_FILES := assimpImporter.cpp

LOCAL_LDLIBS := -landroid -lz -llog
LOCAL_STATIC_LIBRARIES := libassimp libgnustl_static

include $(BUILD_SHARED_LIBRARY)


The only things worthy of note here are the inclusion of the Assimp library that we compiled earlier and the use of the gnustl_static library. Since we only have a single native library in the project, we don't need the shared STL runtime, so we link the static version into our library.

Finally, we have the Application.mk file:

APP_PLATFORM := android-9
APP_STL := gnustl_static


There's not much to see here beyond the required specification of the STL runtime that we wish to use and the Android revision we are aiming for.

After executing the build command, we are ready to build the actual application that performs the rendering of our model data.

Summary


With our code added, we can now load 3D models in a variety of formats, import them into our application, and create objects out of them that we can use together with AndEngine. As implemented now, we essentially have an embedded rendering pipeline for 3D assets that extends the basic AndEngine 2D rendering pipeline.

This provides a solid platform for the next stages in extending these basics even further to provide the texturing, lighting, and physics effects that we need to create an actual game.
