
tracking texture coordinates from screen position

Started by Joyton
3 comments, last by Joyton 2 years, 10 months ago

How can I track an object's texture coordinates from a screen position? I've been using an image with RG color channels on the objects and dividing the color by the image size. I don't think that's a very good method. Is there a simpler solution? Mine works, but the tracking isn't very accurate.
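Roughly, the decode step amounts to something like this (a sketch, assuming an 8-bit readback of the pixel under the cursor; the names are illustrative):

#include <cstdint>

// Recover the UV painted into the R and G channels of the pixel read
// back from under the cursor. With 8 bits per channel the result is
// quantized to steps of 1/255, which is where the limited tracking
// accuracy comes from.
void DecodeUv(uint8_t r, uint8_t g, float* u, float* v)
{
    *u = r / 255.0f;  // R channel encodes U
    *v = g / 255.0f;  // G channel encodes V
}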


The alternative is that you project a ray into the mesh space, do collision detection at the triangle level, then figure out the texture UV coordinates for the triangle (if you already have the positions/indices, this is easy) and calculate the barycentric coordinates to weight the UV values appropriately.
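A minimal sketch of that idea, using the Moller-Trumbore intersection test and assuming you already have the ray transformed into mesh space plus the three vertex positions and UVs of a candidate triangle (all names here are illustrative):

#include <optional>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

static Vec3 Sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 Cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float Dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moller-Trumbore ray/triangle test. On a hit, returns the UV for the
// hit point by weighting the triangle's vertex UVs with the barycentric
// coordinates of the intersection.
std::optional<Vec2> RayTriangleUv(Vec3 orig, Vec3 dir,
                                  Vec3 p0, Vec3 p1, Vec3 p2,
                                  Vec2 uv0, Vec2 uv1, Vec2 uv2)
{
    const float kEps = 1e-7f;
    Vec3 e1 = Sub(p1, p0), e2 = Sub(p2, p0);
    Vec3 pv = Cross(dir, e2);
    float det = Dot(e1, pv);
    if (det > -kEps && det < kEps) return std::nullopt;  // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 tv = Sub(orig, p0);
    float b1 = Dot(tv, pv) * inv;                        // barycentric weight of p1
    if (b1 < 0.0f || b1 > 1.0f) return std::nullopt;
    Vec3 qv = Cross(tv, e1);
    float b2 = Dot(dir, qv) * inv;                       // barycentric weight of p2
    if (b2 < 0.0f || b1 + b2 > 1.0f) return std::nullopt;
    if (Dot(e2, qv) * inv < 0.0f) return std::nullopt;   // hit is behind the ray origin
    float b0 = 1.0f - b1 - b2;                           // barycentric weight of p0
    return Vec2{ b0*uv0.u + b1*uv1.u + b2*uv2.u,
                 b0*uv0.v + b1*uv1.v + b2*uv2.v };
}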

You may also be able to add B and A channels that repeat faster than the R and G channels – say, every 64 texels – to improve the precision of the result.
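For example, the decode might combine the two scales per axis like this (a sketch, assuming 8-bit channels and a 64-texel repeat; the names are illustrative):

#include <cmath>
#include <cstdint>

// Combine a coarse channel (spanning the whole texture in 0..255) with
// a fine channel that wraps every kPeriod texels. The coarse value
// picks the tile, the fine value refines the position inside it.
float DecodeAxis(uint8_t coarse8, uint8_t fine8, float textureSize)
{
    const float kPeriod = 64.0f;                      // fine channel repeats every 64 texels
    float coarse = (coarse8 / 255.0f) * textureSize;  // rough texel position
    float fine   = (fine8   / 255.0f) * kPeriod;      // position within a 64-texel tile
    // Snap the coarse estimate to the tile boundary consistent with the
    // fine value, then add the fine offset back in.
    float tile = std::floor((coarse - fine) / kPeriod + 0.5f);
    return (tile * kPeriod + fine) / textureSize;     // back to a 0..1 UV
}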

I'm assuming you're already using uncompressed RGB textures and have set texture filtering to NEAREST, which gets rid of interpolation artifacts in the wrap regions.

enum Bool { True, False, FileNotFound };

Joyton said:
I've been using an image with RG color channels on the objects

I know that trick. But you have to use up a lot of texture memory full of object IDs.

I wonder if you could do better with a shader intended to only return single hit information.

Assume Vulkan here. Suppose you render to a very tiny frame buffer, maybe only one pixel: the one where the cursor is. The vertex shader clips out and discards everything that doesn't hit that pixel. The fragment shader uses the Z-buffer as normal, and if the currently rendering fragment wins, it stores the index of the triangle being worked on.

It should be possible to encode the winning vertex index info into the frame buffer's output color. There's an obscure feature, “FragmentBarycentricNV”, which turns off interpolation and provides access to the input vertex info.
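For illustration, the fragment stage of that idea might look something like this (an untested sketch: it assumes the render area is scissored down to the one pixel under the cursor, the color attachment is R32_UINT, and the capability that gl_PrimitiveID needs in a fragment shader is enabled; the shader is held in a C++ string just for packaging):

// Vulkan-style GLSL for the fragment stage. The depth test picks the
// winning fragment, and the winner writes the index of its triangle
// into the single-pixel R32_UINT color attachment.
static const char* kPickFragmentShader = R"(
#version 450
layout(location = 0) out uint triangleId;
void main()
{
    triangleId = uint(gl_PrimitiveID);  // index of the winning triangle
}
)";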

There may be other ways to do this. It ought to be possible to write a shader that runs over the same data used for rendering but just tells you what geometry corresponds to a screen pixel. I'm not a shader expert, though; if this were practical I'd expect it to be in common use, and I can't find any references to it being used.

hplus0603 said:

The alternative is that you project a ray into the mesh space, do collision detection at the triangle level, then figure out the texture UV coordinates for the triangle (if you already have the positions/indices, this is easy) and calculate the barycentric coordinates to weight the UV values appropriately.

You may also be able to add B and A channels that repeat faster than the R and G channels – say, every 64 texels – to improve the precision of the result.

I'm assuming you're already using uncompressed RGB textures and have set texture filtering to NEAREST, which gets rid of interpolation artifacts in the wrap regions.

I am using DX9 and have no way of knowing anything about vertices :(

This topic is closed to new replies.
