With Uniform Buffer Objects, Shader Storage Buffer Objects, and various other means, it is possible to communicate state information to a Shader without having to modify any OpenGL context state. You simply set data into the appropriate buffer objects, then make sure that those buffers are bound when it comes time to render with the shader.

There are lines of communication for which this cannot work: specifically, the Opaque Types in GLSL (samplers, images, and atomic counters). These all derive their data from objects bound to locations in the OpenGL context at the time of the rendering call. This has two performance-bearing consequences.

The immediate performance cost is that you must bind textures and images to the context before rendering an object that needs those textures/images. The reason for this is that you need to switch textures for different objects. This process has an intrinsic cost, one that must be paid by essentially every object that needs to use different textures from another.

A second consequence is that you must issue more rendering calls. If objects could fetch which textures to use solely from memory (UBOs, SSBOs, etc.), then one could render a number of objects with the same multi-draw drawing command. Indeed, with Indirect Rendering, it becomes possible to generate those rendering commands on the GPU. In a perfect world, this would reduce rendering to little more than a compute dispatch operation followed by a single multi-draw-indirect operation. Yes, you would still need to bind buffers, but you would do that once at the beginning of the frame, and then let each object find its data based on state provided by the rendering command.

This can only work if the shader can get its textures from values in memory, rather than from values bound to the context. That is the purpose of bindless texturing: to allow shaders to find textures from data either in memory or passed to the Vertex Shader and possibly transferred through the rendering pipeline.

Bindless texturing is a bit complex, so what follows is a general overview of the process and the concepts it uses. The conceptual foundation of bindless texturing is the ability to convert a Texture object into an integer that represents that texture. The integer in this process is called a handle, and it is an unsigned, 64-bit integer. Note that OpenGL Objects like textures are already named by integer numbers, but a bindless handle is different from the texture's name. The handle can be provided to any particular Shader stage via the usual mechanisms, and once it gets where it needs to go, it can be converted back into a sampler for texture fetches.
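The host-side half of that process can be sketched as follows. This is a minimal, illustrative fragment, not a complete program: it assumes a current OpenGL context with GL_ARB_bindless_texture available, a fully allocated texture object `tex`, and a uniform buffer `ubo` — all names here are placeholders, and the code cannot run outside a GL context.

```c
/* Sketch only: assumes GL_ARB_bindless_texture, a current GL context,
 * a complete texture `tex`, and a buffer object `ubo`. */

/* Convert the texture object into its 64-bit bindless handle. */
GLuint64 handle = glGetTextureHandleARB(tex);

/* Shaders may only use the handle while it is resident. */
glMakeTextureHandleResidentARB(handle);

/* The handle is now ordinary data: store it in a buffer so shaders
 * can fetch it from memory instead of from a binding point. */
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(handle), &handle);

/* ... issue draw calls that read the handle ... */

/* Residency can be revoked once the handle is no longer in use. */
glMakeTextureHandleNonResidentARB(handle);
```

Note that a handle, once queried, remains valid for the lifetime of the texture; only its residency is toggled per frame or per scene as needed.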
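On the shader side, converting the integer back into a sampler might look like the following GLSL sketch. The block name `Material`, the member `diffuseTex`, and the binding index are invented for illustration; the `bindless_sampler` layout qualifier lets the 64-bit handle stored in the buffer be read directly as a `sampler2D`.

```glsl
#version 450
#extension GL_ARB_bindless_texture : require

/* The handle arrives as plain buffer data; with the bindless_sampler
 * layout, the block member is backed by the 64-bit handle in memory. */
layout(std140, binding = 0) uniform Material {
    layout(bindless_sampler) sampler2D diffuseTex;
};

in vec2 uv;
out vec4 fragColor;

void main() {
    /* Alternatively, receive the handle as a uvec2 and construct the
     * sampler explicitly: sampler2D(someUvec2Handle). */
    fragColor = texture(diffuseTex, uv);
}
```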