godot ViewportTexture workaround

In Godot 3 and 4, ViewportTextures do not have mipmaps. Unfortunately, if you’re like me, you’re using the ViewportTexture class as a way to render a UI in world space. Without mipmaps, that UI looks really bad whenever it’s scaled down, because of aliasing artifacts.

The workaround for this problem is to take more texture samples for each pixel that you want to render and average them, approximating the pre-averaged texels that a mipmap would normally give you. In other words, we want to emulate what a mipmap would do in a shader.
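
For context, here’s roughly how the viewport texture gets wired into the material in the first place. This is a sketch for Godot 4 (in Godot 3 the call is set_shader_param instead), and the node names are placeholders for whatever your scene uses:

# Sketch: feed the SubViewport's texture into the world-space quad's
# ShaderMaterial, which runs the shaders shown below. Node names are
# placeholders.
@onready var ui_viewport: SubViewport = $UIViewport
@onready var panel: MeshInstance3D = $Panel

func _ready() -> void:
	var mat := panel.get_active_material(0) as ShaderMaterial
	mat.set_shader_parameter("albedo_texture", ui_viewport.get_texture())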

Shader code is definitely not the most efficient way to do this. Alternative solutions include:

  • Modifying the engine source so that ViewportTexture does generate mipmaps
  • Implementing a compute shader to generate the mipmaps on the GPU

Unfortunately, using the generate_mipmaps function in the Image class is too slow, because in order to get an Image you have to copy the texture from the GPU to the CPU. I also don’t currently possess the knowledge to implement the other two solutions, so I’ve opted for the shader solution for now.
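
For reference, the CPU route in Godot 4 would look something like the sketch below (update_mipmapped_copy is a hypothetical helper, not an engine API). The get_image call is the killer: it stalls on a GPU-to-CPU readback every time the UI changes.

# Sketch of the too-slow CPU route.
func update_mipmapped_copy(viewport: SubViewport) -> ImageTexture:
	var img := viewport.get_texture().get_image() # GPU -> CPU readback: slow
	img.generate_mipmaps()                        # average texels on the CPU
	return ImageTexture.create_from_image(img)    # upload back to the GPU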

how it works

I’m going to assume that you already know what mipmaps are, but as a small reminder: mipmaps improve image quality by storing precomputed averages of groups of texels, so the renderer can quickly fetch the average color of a group whose size depends on how far we are zoomed out. In our shader, we can compute that same average ourselves every frame by doing the work the mipmap would have done ahead of time. For example:

shader_type spatial;

uniform sampler2D albedo_texture: filter_linear;

// Compute the color that mip level `lod` of albedo_texture would contain.
vec4 sample(vec2 uv, int lod) {
	ivec2 size = textureSize(albedo_texture, 0);
	vec2 uv_step = 1.0 / vec2(size);

	// Mip level n averages blocks of 2^n x 2^n texels, so sample a
	// neighborhood of that size centered on uv.
	int block = 1 << lod;
	int count = 0;
	vec4 color = vec4(0.0);
	for (int i = -block/2; i <= block/2; i++) {
		for (int j = -block/2; j <= block/2; j++) {
			vec2 delta = vec2(uv_step.x * float(i), uv_step.y * float(j));
			color += texture(albedo_texture, uv + delta);
			count += 1;
		}
	}
	return color / float(count);
}

// Blend between the two nearest mip levels, like trilinear filtering would.
vec4 samplef(vec2 uv, float lod) {
	int low_lod = int(lod);
	int high_lod = int(lod + 1.0);
	return mix(sample(uv, low_lod), sample(uv, high_lod), mod(lod, 1.0));
}

void fragment() {
	// textureQueryLod returns the LOD that would be used if the texture had
	// mipmaps; negative values mean magnification, where mip 0 is enough.
	float lod = max(textureQueryLod(albedo_texture, UV).y, 0.0);
	ALBEDO = samplef(UV, lod).xyz;
}

In this example, sample lets us calculate what the color would be in the nth mipmap of albedo_texture if it existed. It does this by doing what the mipmap would do; for the first mipmap, we average each 2x2 block of pixels, for the second we average each 4x4 block of pixels, and so on. Then, given the LOD value that we want to use, we linearly interpolate between the two closest mipmaps to get the resulting color.

However, this approach is slow. We have to run the sampling loop twice per fragment to get the two color values to interpolate between (we could restructure the code to gather both colors in a single loop, but I left it implemented the naive way for readability). The result is better than not doing this at all, but the image still ends up rather blurry when you view it from the side.
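
If you’re curious, that restructuring might look something like the sketch below; samplef_fused is a hypothetical replacement for samplef above, assuming lod >= 0 and the same albedo_texture uniform:

// Sketch: gather both mip estimates in a single loop. The smaller block is
// centered inside the larger one, so its samples are a subset of the larger
// block's samples.
vec4 samplef_fused(vec2 uv, float lod) {
	vec2 uv_step = 1.0 / vec2(textureSize(albedo_texture, 0));

	int low_block = 1 << int(lod);   // block size for the lower mip level
	int high_block = low_block * 2;  // block size for the next mip level
	vec4 low_color = vec4(0.0);
	vec4 high_color = vec4(0.0);
	int low_count = 0;
	int high_count = 0;
	for (int i = -high_block/2; i <= high_block/2; i++) {
		for (int j = -high_block/2; j <= high_block/2; j++) {
			vec4 c = texture(albedo_texture, uv + uv_step * vec2(float(i), float(j)));
			high_color += c;
			high_count += 1;
			if (abs(i) <= low_block/2 && abs(j) <= low_block/2) {
				low_color += c;
				low_count += 1;
			}
		}
	}
	return mix(low_color / float(low_count), high_color / float(high_count), mod(lod, 1.0));
}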

It’s possible to improve both the image quality and the speed of the shader by doing everything in a single loop over the texels that actually fall under the pixel. A friend, crazy_stewie, made such a single-pass solution, which I’ve commented below:

shader_type spatial;
 
uniform sampler2D albedo_texture: filter_linear;
 
vec3 sample(sampler2D sampler, vec2 uv) {
	// Project the pixel onto the texture to find out how many texels we want to
	// sample. We're approximating the shape of the projection here as a
	// parallelogram; in perspective this projection would actually result in a
	// different shape, but this is close enough.
	vec2 uvdx = dFdx(uv);
	vec2 uvdy = dFdy(uv);
	vec2 tex_size = vec2(textureSize(sampler, 0));
	vec2 extents = vec2(length(uvdx*tex_size), length(uvdy*tex_size));
 
	// In each direction along the parallelogram, we want to sample at minimum
	// once and at most 32 times.
	extents = clamp(round(extents), vec2(1, 1), vec2(32, 32));
	uvdx /= extents.x;
	uvdy /= extents.y;
 
	// We want to start sampling from the top-left of the parallelogram, but
	// we need to offset this by half of the distance so that the sample points
	// are centered on the original UV coordinate. In other words, we want to
	// sample the fences and not the fence posts.
	uv -= (extents.x*0.5 - 0.5)*uvdx + (extents.y*0.5 - 0.5)*uvdy;
	vec3 result = vec3(0);
	for (float i = 0.0; i < extents.x; i++) {
		for (float j = 0.0; j < extents.y; j++) {
			result += texture(sampler, uv + i*uvdx + j*uvdy).rgb;
		}
	}
 
	return result / (extents.x*extents.y);
}
 
void fragment() {
	ALBEDO = sample(albedo_texture, UV);
}

In other words, instead of adhering closely to the concept of mipmaps, we dynamically change how many texels we sample based on how far away from the texture we are.

Here’s what the viewport texture looks like before:

The viewport texture with scaling artifacts.

And after:

The viewport texture without scaling artifacts.

You can’t see it in a still image, but the pixel shimmering effect you would normally get without this approach is also gone.