This post walks you through creating abstract-looking terrain shaders in the Duality 2D game engine. The basics of the engine are not covered here, but if you are familiar with game technology, it should not be difficult to follow along. If something does not make sense at first, take a look at the official documentation on GitHub; there are also two tutorials with a more introductory flair. The concepts described here can easily be adapted to other game engines and frameworks as well.
Duality can be downloaded from the official site. A C# compiler and a text editor are also needed. Visual Studio 2013 or higher is recommended, but other IDEs, such as MonoDevelop, also work.
Open up a new project in Dualitor! First, we have to create several new resources.
The following list describes the required resources. Create and name them accordingly.
Let's start with implementing the vertex shader. Unlike most game engines, Duality handles some of the vertex transformations on the CPU in order to achieve a parallax scaling effect. Thus, the vertex array passed to the GPU is already scaled. However, we do not need that precalculation for our terrain shader, so this transformation has to be undone in the vertex shader. Double-click the VertexShader resource to open it in an external text editor. It should contain the following:
void main()
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
}
To perform the inverse transformation, the camera data must be passed to the shader. Duality does this automatically via pre-configured uniform variables: CameraFocusDist, CameraParallax, and CameraPosition. The result, worldPosition, is passed to the fragment shader via a varying variable.
// vertex shader
varying vec3 worldPosition;

uniform float CameraFocusDist;
uniform bool CameraParallax;
uniform vec3 CameraPosition;

vec3 reverseParallaxTransform()
{
    // Duality uses software pre-transformation of vertices:
    // gl_Vertex is already in parallax (scaled) view space when it arrives here.
    vec4 vertex = gl_Vertex;

    // Reverse-engineer the scale that was previously applied to the vertex
    float scale = 1.0;
    if (CameraParallax)
    {
        scale = CameraFocusDist / vertex.z;
    }
    else
    {
        // the default focus distance is 500
        scale = CameraFocusDist / 500.0;
    }
    return (vertex.xyz + vec3(CameraPosition.xy, 0.0)) / scale;
}

void main()
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
    worldPosition = reverseParallaxTransform();
}
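Since the scaling logic is easy to get wrong, it can help to sanity-check the arithmetic outside the shader. Below is a rough Python port of reverseParallaxTransform, useful purely for verifying the math on the CPU; the engine-internal forward transform is not reproduced here, and the uniform values are illustrative:

```python
# Python port of the shader's reverseParallaxTransform, for sanity-checking only.
def reverse_parallax_transform(vertex, camera_pos, focus_dist, parallax):
    """vertex: (x, y, z) as received by the shader; camera_pos: (x, y, z)."""
    x, y, z = vertex
    if parallax:
        scale = focus_dist / z          # per-vertex parallax scale
    else:
        scale = focus_dist / 500.0      # 500 is Duality's default focus distance
    cx, cy, _ = camera_pos
    # (vertex.xyz + vec3(CameraPosition.xy, 0)) / scale
    return ((x + cx) / scale, (y + cy) / scale, z / scale)

# At the default focus distance, a vertex at z = 500 has scale 1.0,
# so the result is just the vertex offset by the camera position:
print(reverse_parallax_transform((10.0, 20.0, 500.0), (5.0, -5.0, 0.0), 500.0, True))
```

Plugging in a few values like this makes it easy to confirm that halving the distance to the focus plane doubles the applied scale, and that the function divides it back out.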
Next, implement the fragment shader. Various effects can be achieved by using textures and mathematical functions creatively. Here a simple method is presented: the well-known XOR texture generation. It is based on calculating the bitwise exclusive OR (operator ^ in GLSL) of the integer world coordinates. To control its parameters, two uniform variables, scale and repeat, are introduced in addition to the varying one from the vertex shader. A texture named mainTex is also used to alpha-mask the result.
// fragment shader
varying vec3 worldPosition;

uniform float scale;
uniform int repeat;
uniform sampler2D mainTex;

void main()
{
    vec4 texSample = texture2D(mainTex, gl_TexCoord[0].st);

    // GLSL uses constructor-style conversions: int(x), not the C-style (int)x
    int x = int(worldPosition.x * scale) % repeat;
    int y = int(worldPosition.y * scale) % repeat;

    vec3 color = gl_Color.rgb * float(x ^ y) / float(repeat);
    gl_FragColor = vec4(color, 1.0) * texSample.a;
}
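To get a feel for what the shader computes per pixel, the same XOR pattern can be reproduced in plain Python and inspected without a GPU. This is a sketch: the function names and default values below are illustrative, and Python's int() truncation and % operator stand in for their GLSL counterparts (behavior may differ for negative coordinates):

```python
# The fragment shader's XOR pattern, reproduced on the CPU for inspection.
def xor_value(wx, wy, scale=1.0, repeat=256):
    """Brightness in [0, 1) for world position (wx, wy)."""
    x = int(wx * scale) % repeat   # integer world coordinate, wrapped
    y = int(wy * scale) % repeat
    return (x ^ y) / repeat        # XOR produces the characteristic pattern

# The value depends only on the world position, not on any per-sprite offset,
# which is why overlapping sprites show a perfectly continuous texture.
print(xor_value(3.0, 5.0, 1.0, 8))
```

Because repeat wraps the coordinates, the pattern tiles every repeat units in world space; scale stretches or shrinks it.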
Assign the VertexShader and FragmentShader resources to the ShaderProgram resource, and the ShaderProgram to the DrawTechnique resource. The DrawTechnique, as mentioned, determines the blending mode; this time it has to be set to Mask in order to make the alpha masking work. Then assign the DrawTechnique to the Material resource. The material is used to control the custom uniform parameters. The following values yield correct results:
Create a SpriteRenderer in the scene and assign the new material to it. Because the fragment shader uses world coordinates, the texture stays fixed relative to the world, and the alpha mask acts as a “window” onto it. To see the effect, reposition the sprite in the game world, then duplicate the sprite GameObject several times and move the copies around. Where they intersect, the texture should be perfectly continuous.
You may notice that the texture behaves incorrectly while moving the camera in the Scene Editor view. The reason is that this view mode uses its own camera, which differs from the one the shader's uniforms describe. To inspect the final look, use the Game View instead.
This technique can be used to quickly build continuous-looking terrain from a small number of alpha masks in top-down or side-scrolling game projects. Of course, the fragment shader can be extended with additional logic and textures; experimenting with these often yields usable results.
I hope you enjoyed this post. In case you have any questions, feel free to post them below, or on the Duality forums.
Lőrinc Serfőző is a software engineer at Graphisoft, the company behind the BIM solution ArchiCAD. He is studying mechatronics engineering at the Budapest University of Technology and Economics, an interdisciplinary field between the more traditional mechanical engineering, electrical engineering, and informatics, and has quickly developed a passion for software development. He is a supporter of open source software and contributes to the C# and OpenGL-based Duality game engine, creating free plugins and tools for users.