I am currently using the following GLSL code to sample individual tiles from a tile atlas:
#version 330 core
in vec2 TexCoord;
out vec4 color;
uniform sampler2D image;
void main()
{
//width and height in tiles of atlas
int width = 8;
int height = 8;
//x and y position of a tile in the atlas
int x = 1;
int y = 4;
float scalarX = 1.0 / width;
float scalarY = 1.0 / height;
color = vec4(1.0) * texture(image, vec2((TexCoord.x + x) * scalarX, (TexCoord.y * scalarY) + y * scalarY));
}
The tile atlas is made up of 8×8 sprites, each 16×16 pixels.
Is this the best method for this task, and could my code be improved or made more efficient?
2 Answers
Why don't you simply set the texture coordinates properly and have a "pass-thru" shader that just samples the texture and nothing else? For example, if you're using glVertexAttribPointer() with an array of vertices and texture coords, make sure that the texture coordinates you send have the proper tile offset. Something like this:
const int numXTiles = 8;
const int numYTiles = 8;
const int numXPixelsPerTile = 16;
const int numYPixelsPerTile = 16;
const int atlasWidth = numXTiles * numXPixelsPerTile;
const int atlasHeight = numYTiles * numYPixelsPerTile;
const double pixelXDelta = 1.0 / atlasWidth;
const double pixelYDelta = 1.0 / atlasHeight;
typedef struct vertex {
Point3D position;
Point2D texCoord;
} vertex;
// Assume you're drawing a square and want to texture it with tile 3,4
vertex square[] = {
{ {-1.0, -1.0, 0.0 }, { 3.0 * numXPixelsPerTile * pixelXDelta, 4.0 * numYPixelsPerTile * pixelYDelta } },
{ { 1.0, -1.0, 0.0 }, { 4.0 * numXPixelsPerTile * pixelXDelta, 4.0 * numYPixelsPerTile * pixelYDelta } },
{ { 1.0, 1.0, 0.0 }, { 4.0 * numXPixelsPerTile * pixelXDelta, 5.0 * numYPixelsPerTile * pixelYDelta } },
{ { -1.0, 1.0, 0.0 }, { 3.0 * numXPixelsPerTile * pixelXDelta, 5.0 * numYPixelsPerTile * pixelYDelta } }
};
glBufferData(GL_ARRAY_BUFFER, sizeof(square), square, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(vertex), 0); // Position
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)sizeof(Point3D)); // Texture Coordinate
This assumes your vertex shader has attribute 0 as position and attribute 1 as texture coordinates. Then your fragment shader just becomes:
#version 330 core
in vec2 TexCoord;
out vec4 color;
uniform sampler2D image;
void main()
{
color = texture(image, TexCoord);
}
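For completeness, a minimal matching vertex shader could look like the sketch below (an addition here, not part of the original answer; it simply forwards attribute 0 as the position and attribute 1 as the texture coordinate):
#version 330 core
layout (location = 0) in vec3 position; // attribute 0: vertex position
layout (location = 1) in vec2 texCoord; // attribute 1: per-vertex texture coordinate
out vec2 TexCoord;                      // consumed by the fragment shader above
void main()
{
    TexCoord = texCoord;
    gl_Position = vec4(position, 1.0);
}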
NOTE: I may be off by 1 on the right and top side. You might need to subtract 1 pixelXDelta and 1 pixelYDelta from those coordinates - I can never remember.
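An added aside (not from the original answer): if adjacent tiles bleed into each other when the texture is linearly filtered, one common workaround is to inset each tile's UV rectangle by half a texel. A rough sketch, reusing the constants defined above, with tileX and tileY as hypothetical tile indices:
const int tileX = 3, tileY = 4;            // hypothetical tile indices (same tile as the square above)
const double insetX = 0.5 * pixelXDelta;   // half a texel horizontally
const double insetY = 0.5 * pixelYDelta;   // half a texel vertically
double u0 = (tileX * numXPixelsPerTile) * pixelXDelta + insetX;        // left edge of the tile in UV space
double u1 = ((tileX + 1) * numXPixelsPerTile) * pixelXDelta - insetX;  // right edge
double v0 = (tileY * numYPixelsPerTile) * pixelYDelta + insetY;        // bottom edge
double v1 = ((tileY + 1) * numYPixelsPerTile) * pixelYDelta - insetY;  // top edge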
- I kind of wanted to compute the texture coords using the frag shader. Thanks for the answer though. (Oliver Barnwell, May 29, 2015 at 15:20)
- Why? What are you trying to accomplish? If you add that information to the question, we can give you a more suitable answer. (user1118321, May 29, 2015 at 15:22)
- On second thoughts, after reading through your answer more thoroughly I believe your method will work well for the application I am developing. (Oliver Barnwell, May 29, 2015 at 15:34)
Yes, there are a couple things you can simplify and improve in your shader code.
- width and height are constants, so you can either use const or #define to make sure they are resolved at compile time. Since scalarX and scalarY only depend on width/height, those can also be compile-time constants.
- I suppose x and y are set to constant values in your code for demonstration purposes only, as it would be strange to hardcode a shader to always access the same tile. Shouldn't those be coming from a uniform variable?
- You have this no-op multiplication color = vec4(1.0) * texture(...) there. I wouldn't rely on the shader compiler to optimize that out. Some are still pretty dumb, especially on mobile platforms. Just remove it yourself.
- Avoid using integer types. GPUs work better with floating-point values. In fact, until very recently, GLSL didn't even have an int type. Mobile GLSL still doesn't support integer types and operations. Prefer to use float and float vector types whenever possible.
- Giving more descriptive names to your variables will remove the necessity for comments.
Putting it all together:
#version 330 core
in vec2 inTexCoord;
out vec4 outColor;
uniform sampler2D image;
uniform vec2 tilePosition;
const float tilesWidth = 8.0;
const float tilesHeight = 8.0;
const float scaleX = 1.0 / tilesWidth;
const float scaleY = 1.0 / tilesHeight;
void main()
{
outColor = texture(image, vec2(
(inTexCoord.x + tilePosition.x) * scaleX,
(inTexCoord.y * scaleY) + tilePosition.y * scaleY));
}
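On the C side, the tilePosition uniform would then be updated before each draw, along these lines (a sketch; shaderProgram is a placeholder name for your linked program object, not something from the answer above):
GLint tileLoc = glGetUniformLocation(shaderProgram, "tilePosition");
glUseProgram(shaderProgram);
glUniform2f(tileLoc, 1.0f, 4.0f); // select the tile at column 1, row 4 of the atlas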
- Thanks for your detailed answer, could I ask why glsl didn't support the int type until recently? (Oliver Barnwell, May 30, 2015 at 22:18)
- @OliverBarnwell, probably because GPUs are not generic processors. GPUs are designed to operate on vertexes and textures, which are most commonly expressed in terms of floating-point values. So it is expected that hardware will be optimized (or even limited) for that kind of data. Modern Desktop GPUs nowadays are more generic and closer in architecture to your main CPU, so they also provide efficient integer calculations and are able to run generic computing programs like CUDA or OpenCL, but this was not always the case. (glampert, May 31, 2015 at 3:57)