
Framebuffer Objects (FBOs) are a mechanism for rendering to images other than the Default Framebuffer. They are OpenGL Objects that allow you to render directly to textures, as well as to blit from one framebuffer to another.

Semantics
Framebuffer objects are very complicated. As such, we need to explicitly define certain terminology.

Image: For the purposes of this article, an image is a single 2D array of pixels. It has a specific format for these pixels.

Layered Image: For the purposes of this article, a layered image is a sequence of images of a particular size.

Texture: For the purposes of this article, a texture is an object that contains some number of images, as defined above. All of the images have the same format, but they do not have to have the same size (different mip-maps, for example). Textures can be bound to shaders and rendered with.

Renderbuffer: A renderbuffer is an object that contains a single image. Renderbuffers cannot be bound to shaders or otherwise rendered with; they can only be attached to FBOs.

Framebuffer-attachable image: Any image, as previously described, that can be attached to a framebuffer object.

Framebuffer-attachable layered image: Any layered image, as previously described, that can be attached to a framebuffer object.

Attachment point: A named location within a framebuffer object that a framebuffer-attachable image or layered image can be attached to. Attachment points can have limitations on the format of the images attached there.

Attach: To connect one object to another. This is not limited to FBOs, but attaching is a big part of them. Attachment is different from binding. Objects are bound to the context; they are attached to each other.

Framebuffer Object Structure
As standard OpenGL Objects, FBOs have the usual glGenFramebuffers/glDeleteFramebuffers creation style. As expected, there is also the usual glBindFramebuffer function to bind an FBO to the context.

The target parameter for this object can take one of 3 values: GL_FRAMEBUFFER, GL_READ_FRAMEBUFFER, or GL_DRAW_FRAMEBUFFER. The last two allow you to bind an FBO so that reading commands (glReadPixels, etc.) and writing commands (any command of the form glDraw*) can happen to two different framebuffers. The GL_FRAMEBUFFER target simply sets both the read and the write to the same FBO.
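As a minimal sketch of the creation-and-binding pattern (the variable name fbo is illustrative):

```c
// Create a framebuffer object name and bind it for both reading and drawing.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// ... attach images and render here ...

// Rebind the default framebuffer (object 0) and delete the FBO when done.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);
```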

When an FBO is bound to a target, the available surfaces change. The default framebuffer has buffers like GL_FRONT, GL_BACK, GL_AUXi, GL_ACCUM, and so forth. FBOs do not have these.

Instead, FBOs have a different set of images. Each FBO image represents an attachment point, a location in the FBO where an image can be attached. FBOs have the following attachment points:


 * GL_COLOR_ATTACHMENTi: These are an implementation-dependent number of attachment points. You can query GL_MAX_COLOR_ATTACHMENTS to determine the number of color attachments that an implementation will allow. The minimum value for this is 1, so you are guaranteed to be able to have at least color attachment 0. These attachment points can only have images bound to them with color-renderable formats. No compressed image formats are color-renderable, so compressed images cannot be attached to an FBO.


 * GL_DEPTH_ATTACHMENT: This attachment point can only have images with depth formats bound to it.


 * GL_STENCIL_ATTACHMENT: This attachment point can only have images with stencil formats bound to it.


 * GL_DEPTH_STENCIL_ATTACHMENT: This is shorthand for "both depth and stencil".


 * Note: If you use GL_DEPTH_STENCIL_ATTACHMENT, you should use a packed depth-stencil internal format for the texture or renderbuffer you are attaching.

Attaching Images
Now that we know where images can be attached to FBOs, we can start talking about how to actually attach images to these. Of course, in order to attach images to an FBO, we must first bind the FBO to the context.

You can attach images from any kind of texture to the framebuffer object.

Remember that textures are a set of images. Textures can have mipmaps; thus, each individual mipmap level can contain one or more images.

A 1D texture contains 2D images that have a vertical size of 1. Each individual image can be uniquely identified by a mipmap level.

A 2D texture contains 2D images. Each individual image can be uniquely identified by a mipmap level.

Each mipmap level of a 3D texture is considered a set of 2D images, with the number of these being the extent of the Z coordinate. Each integer value for the depth of a 3D texture mipmap level is a layer. So each image in a 3D texture is uniquely identified by a layer and a mipmap level.

Cubemaps contain 6 targets, each of which is equivalent to a 2D texture. Thus, each image in a cubemap texture can be uniquely identified by a face and a mipmap level.

Array textures are much like 3D textures. Each image in an array texture can be uniquely identified by a layer (the array index) and a mipmap level.

The terms layer, face, and mipmap level above are significant, as they match the parameters of the following functions used for attaching textures:

void glFramebufferTexture1D(GLenum target, GLenum attachment, GLenum textarget, GLuint texture, GLint level);

void glFramebufferTexture2D(GLenum target, GLenum attachment, GLenum textarget, GLuint texture, GLint level);

void glFramebufferTextureLayer(GLenum target, GLenum attachment, GLuint texture, GLint level, GLint layer);

The target parameter here is the same as the one for binding. However, GL_FRAMEBUFFER doesn't mean both read and draw (as that would make no sense); instead, it is the same as GL_DRAW_FRAMEBUFFER. The attachment parameter is one of the above attachment points.

The texture argument is the name of the texture object you want to attach from. If you pass zero as texture, this has the effect of clearing the attachment for this attachment point, regardless of what kind of image was attached there.

Because texture objects can hold multiple images, you must specify exactly which image to attach to this attachment point. The parameters match their above definitions, with the exception of textarget.

When attaching a non-cubemap, textarget should be the proper texture type: GL_TEXTURE_1D, GL_TEXTURE_2D_MULTISAMPLE, etc. When attaching a cubemap, you must use the Texture2D function, and the textarget must be one of the 6 targets for cubemap binding.
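For example, a sketch of both forms of 2D attachment (the texture names colorTex and cubeTex are illustrative, and are assumed to have color-renderable formats):

```c
// Attach mipmap level 0 of a regular 2D texture to color attachment 0.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// Attach one face of a cubemap: textarget selects the face.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_CUBE_MAP_POSITIVE_X, cubeTex, 0);
```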


 * Legacy Note: There is a function, glFramebufferTexture3D, specifically for 3D textures. However, you shouldn't bother with it, as the TextureLayer function can do everything it can and more.

Renderbuffers can also be attached to FBOs. Indeed, this is the only way to use them besides just creating the storage for them.

Once you have created a renderbuffer object and made storage for it (given a size and format), you can attach it to an FBO with this function:

void glFramebufferRenderbuffer(GLenum target, GLenum attachment, GLenum renderbuffertarget, GLuint renderbuffer);

The parameters work mostly the same as with texture attachment. The renderbuffertarget param must be GL_RENDERBUFFER. The renderbuffer parameter is the renderbuffer object's name.
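A sketch of the full renderbuffer workflow (the names rbo, width, and height are illustrative):

```c
// Create a renderbuffer, allocate depth-stencil storage for it,
// and attach it to the currently bound FBO.
GLuint rbo;
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, rbo);
```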

Layered Images
A layered image, as previously defined, is an ordered sequence of images of a particular size. A number of different kinds of textures can be considered layered.

A single mipmap level of a 1D or 2D array texture is a layered image. A single mipmap level of a 3D texture is likewise a layered image. Also, a mipmap level of a cubemap is a layered image. For cubemaps, you get 6 layers, one for each face. And the order of the faces is the same as the order of the enumerators: GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, GL_TEXTURE_CUBE_MAP_POSITIVE_Z, GL_TEXTURE_CUBE_MAP_NEGATIVE_Z.

For cubemap arrays, the number of layers bound in layered rendering is the number of array layers in the cubemap array times 6. The order of the faces within an array layer is the same as above.

Each texture, when taken as a layered image, has a specific number of layers. For array and 3D textures, this is the depth of the texture. For cubemaps, this is always exactly 6 layers: one per face.

To bind a texture as a layered image, use the following command:

void glFramebufferTexture(GLenum target, GLenum attachment, GLuint texture, GLint level);

The parameters have the same meaning as above. Indeed, this function can replace many of the uses of glFramebufferTexture1D, 2D, or Layer, as long as you do not intend to attach a specific layer of an array texture, cubemap, or 3D texture as a regular, non-layered image. If the texture is one of these kinds of textures, then the given mipmap level will be attached as a layered image with the number of layers that the given texture has.
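For example, a sketch of attaching an entire cubemap level as one layered image (cubeTex is an illustrative name for an existing color-renderable cubemap texture):

```c
// All 6 faces of mipmap level 0 become layers 0-5 of a single layered
// color attachment; a geometry shader can then route primitives to a
// particular face by writing to gl_Layer.
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, cubeTex, 0);
```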

Layered image rendering is used with Geometry Shaders.

Framebuffer Completeness
Each attachment point in an FBO has specific restrictions on the format of images that can be attached to it. However, it is not an immediate GL error to attach an image to an attachment point that doesn't support that format. It is only an error to try to use an FBO that has been improperly set up. There are also a number of other issues with regard to sizes of images and so forth that must be detected in order to be able to safely use the FBO.

An FBO that is valid for use is said to be "framebuffer complete". To test framebuffer completeness, call this function:

GLenum glCheckFramebufferStatus(GLenum target);

You are not required to call this function. However, using an incomplete FBO is an error, so it's always a good idea to check.

The return value is GL_FRAMEBUFFER_COMPLETE if the FBO can be used. If it is something else, then there is a problem. Below are the rules for completeness, and the associated return values you will receive if they are not followed.
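The check itself is a one-liner; a typical sketch:

```c
// Check the draw framebuffer for completeness before rendering to it.
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    // The returned enumerator identifies which completeness rule
    // was violated; handle or report it here.
}
```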

Attachment Completeness
Each attachment point itself must be complete according to these rules. Empty attachments (attachments with no image attached) are complete by default. If an image is attached, it must adhere to the following rules:


 * The source object for the image still exists and has the same type it was attached with.


 * The image has a non-zero width and height.


 * The layer for 3D or array textures attachments is less than the depth of the texture.


 * The image's format must match the attachment point's requirements, as defined above. Color-renderable formats for color attachments, etc.

Completeness Rules
These are the rules for framebuffer completeness. The order of these rules matters.


 * 1) If the target of glCheckFramebufferStatus references the default framebuffer (that is, FBO object number 0 is bound), and the default framebuffer does not exist, then you will get GL_FRAMEBUFFER_UNDEFINED. If the default framebuffer exists, then you always get GL_FRAMEBUFFER_COMPLETE. The rest of the rules apply only when an FBO is bound.


 * 2) All attachments that are set as a draw buffer or read buffer (see below) must be attachment complete, as defined above. (GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT when false).


 * 3) There must be at least one image attached to the FBO. (GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT when false).


 * 4) All draw buffers (see below) must specify attachment points that have images attached. (GL_FRAMEBUFFER_INCOMPLETE_DRAW_BUFFER when false).


 * 5) If the read buffer is set, then it must specify an attachment point that has an image attached. (GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER when false).


 * 6) All images must have the same number of multisample samples. (GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE when false).


 * 7) If a layered image is attached to one attachment, then all attachments must be layered attachments. The attached images do not have to have the same number of layers, nor do the layers have to come from the same kind of texture (a cubemap color texture can be paired with an array depth texture). (GL_FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS when false).

Notice that there is no restriction based on size. The effective size of the FBO is the intersection of all of the sizes of the bound images (ie: the smallest in each dimension).

These rules are all code-based. If you ever get any of these values from glCheckFramebufferStatus, it is because your program has done something wrong in setting up the FBO. Each one has a specific remedy for it.

There's one more rule that can trip you up:


 * The implementation likes your combination of attached image formats. (GL_FRAMEBUFFER_UNSUPPORTED when false).

OpenGL allows implementations to state that they do not support some combination of image formats for the attached images; they do this by returning GL_FRAMEBUFFER_UNSUPPORTED when you attempt to use an unsupported format combination.

However, the OpenGL specification also requires that implementations support certain formats; that is, if you use these formats, implementations are forbidden to return GL_FRAMEBUFFER_UNSUPPORTED. This list of required formats is also the list of required image formats that all OpenGL implementations must support. These are most of the useful formats. Basically, don't concern yourself with GL_FRAMEBUFFER_UNSUPPORTED too much. Check for it, but you'll be fine as long as you stick to the required formats.
 * Legacy Note: GL_FRAMEBUFFER_UNSUPPORTED was initially, in the days of EXT_framebuffer_object, much less forgiving. The specification didn't have a list of required image formats. Indeed, the only guarantee that the EXT_FBO spec made was that there was at least one combination of formats that implementations supported; it provided no hints as to what that combination might be. The core extension ARB_framebuffer_object does differ from the core specification in one crucial way: it uses the EXT wording for GL_FRAMEBUFFER_UNSUPPORTED. So if you're using 3.0, you don't have to worry much about unsupported. If you're using ARB_framebuffer_object, then you should be concerned and do appropriate testing.

Multiple Render Targets
Modern shaders allow the user to render to multiple render targets simultaneously. To facilitate this, FBOs (and the default framebuffer) have a mapping that allows the user to define which fragment shader outputs go to which buffers. The way this works is somewhat complicated.

When linking a fragment shader, the user will assign each fragment shader output variable to a number. This number is between 0 and GL_MAX_DRAW_BUFFERS-1 under normal circumstances, or between 0 and GL_MAX_DUAL_SOURCE_DRAW_BUFFERS - 1 when a fragment shader is outputting values to the second color of a buffer, for dual source blending. The draw buffers list in the FBO (or the default framebuffer) is used to map between the values set into the fragment shader and the attachment names in the FBO (or buffers in the default framebuffer).

For example, let's say your fragment shader defines the following outputs:

out vec4 mainColor;

out vec2 subsidiaryInfo;

When you link your shader, you use glBindFragDataLocation or glBindFragDataLocationIndexed to assign 0 to mainColor and 1 to subsidiaryInfo. You can also use layout(location = ...) syntax to define this directly in the shader, as you would for attribute indices:

layout(location = 0) out vec4 mainColor;

layout(location = 1) out vec2 subsidiaryInfo;

It is up to the draw buffers state in the FBO to state where these get rendered to. To set this mapping, you use this function:

void glDrawBuffers( GLsizei n, const GLenum *bufs );

This function sets up the entire mapping table in one shot. The indices in the list correspond to the values set with glBindFragDataLocation or the layout(location = ...) qualifier. This means that the list can only be as large as GL_MAX_DRAW_BUFFERS, or GL_MAX_DUAL_SOURCE_DRAW_BUFFERS when using dual source blending. The entries in the bufs array are enumerators referring to buffer names in the framebuffer.

When the default framebuffer is active, these enumerators are from the list of the default framebuffer buffer names. GL_AUXi, GL_BACK_LEFT, and so on. When an FBO is active (in the GL_DRAW_FRAMEBUFFER slot), these enumerators are GL_COLOR_ATTACHMENTi values (less than GL_MAX_COLOR_ATTACHMENTS, of course). In either case, an entry in the list can be GL_NONE, which means that the output (if the shader outputs a value for it at all) is discarded.

If you are only setting up one draw buffer, you may use glDrawBuffer. It takes one enumeration value and sets the fragment data location 0 to draw to that buffer. All other fragment data location values are set to GL_NONE.

The state set by glDrawBuffers is part of the FBO (or default framebuffer). So you can generally set this up once and leave it set.
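For example, a sketch that routes the two fragment outputs above to two color attachments of the bound FBO:

```c
// Fragment output location 0 -> color attachment 0,
// fragment output location 1 -> color attachment 1.
const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);
```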

Framebuffer Blitting
The reason for the separation of GL_DRAW_FRAMEBUFFER and GL_READ_FRAMEBUFFER bindings is to allow data in one buffer to be blitted to another buffer.

A blit operation is a special form of copy operation; it copies a rectangular area of pixels from one framebuffer to another. It is not the same as a simple pixel copy, as it has some very specific properties with regard to multisampling.

Performing a blit between framebuffers is quite simple. You bind the source framebuffer to GL_READ_FRAMEBUFFER, then bind the destination framebuffer to GL_DRAW_FRAMEBUFFER. After that, you call this function:

void glBlitFramebuffer(GLint srcX0, GLint srcY0, GLint srcX1, GLint srcY1,

GLint dstX0, GLint dstY0, GLint dstX1, GLint dstY1,

GLbitfield mask, GLenum filter);

The pixels in the rectangular area specified by the src values are copied to the rectangular area specified by the dst values. The mask parameter is a bitfield that specifies which kinds of buffers you want copied: GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT, GL_STENCIL_BUFFER_BIT, or some combination. The filter parameter specifies how you want filtering performed if the two rectangles are not the same size.

Now one thing to keep in mind is this: blit operations only read from the color buffer specified by glReadBuffer and only write to the color buffers specified by glDrawBuffers. If multiple draw buffers are specified, then multiple color buffers are updated. This assumes that mask included GL_COLOR_BUFFER_BIT. The depth and stencil buffers of the source framebuffer are blitted to the destination if the mask specifies them.

The glReadBuffer state is stored with the FBO/default framebuffer, just like the glDrawBuffers state.

Note that it is perfectly valid to read from the default framebuffer and write to an FBO, or vice-versa.
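Putting the steps above together, a sketch of a color blit between two FBOs (srcFbo, dstFbo, and the size variables are illustrative names):

```c
// Bind the source for reading and the destination for drawing,
// then copy (and stretch, if the rectangles differ) the color contents.
glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, dstFbo);
glBlitFramebuffer(0, 0, srcWidth, srcHeight,
                  0, 0, dstWidth, dstHeight,
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);
```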

Format Considerations
Blitting is not the same as performing a pixel transfer operation. The conversion between source and destination format is more limited. Blitting depth and stencil buffers works as expected: values are converted from one bitdepth to the other as needed. Conversion between color formats is different.

A blit operation can only convert between formats within 3 groups. Signed integral and unsigned integral formats make up two groups, with all normalized and floating-point formats making up the third. Thus, it is legal to blit from a GL_RGB8 buffer to a GL_RGB32F buffer and vice versa. But it is not legal to blit a GL_RGB8 image from or to a GL_RGB8I format image.

The data during blitting is converted according to simple rules. Blitting from a floating-point format to a normalized integer format will cause clamping, either to [0, 1] for unsigned normalized or [-1, 1] for signed normalized.

Multisampling Considerations
Multisampling is supported with the default framebuffer (through WGL/GLX_multisample) and/or FBOs (through multisampled renderbuffers or textures, where supported).

As explained in the article on Multisampling, a multisampled buffer must be resolved into a single sample before it can be displayed. Normally, this resolving operation is automatic, occurring during framebuffer swapping (though reading from the framebuffer can cause it to happen anyway).

Each FBO or framebuffer has a specific number of multisample samples (remember: an FBO cannot be framebuffer-complete if all of the attached images do not have the same number of samples). When you blit between two FBOs with the same number of samples, the copy is done directly; the destination buffer gets the same information the source had.

It is an error to blit between buffers with different numbers of samples, unless one of them has zero samples. You get a zero-sample framebuffer by not attaching multisampled images to that FBO, or by not using a multisampled default framebuffer.

In this special case, two things can happen. If the read framebuffer is the one with zero samples, then the draw buffer has all of its samples per-pixel replaced with the values from the read framebuffer. However, if the draw framebuffer is the one with zero samples, then it causes the multisampled framebuffer to have its multisamples resolved into a single sample per pixel. This explicit resolution is very useful when dealing with multisampled buffers.
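The explicit resolve described above can be sketched as follows (msaaFbo, resolveFbo, width, and height are illustrative names; resolveFbo is assumed to have zero samples):

```c
// Because the draw framebuffer has zero samples, this blit resolves
// the multisampled source into one sample per pixel. The rectangles
// must match, and NEAREST filtering is required for a pure resolve.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
```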

As with all multisample behavior, none of this works at all unless GL_MULTISAMPLE is enabled.

Feedback Loops
It is possible to bind a texture to an FBO, bind that same texture to a shader, and then try to render with it.

This is bad. Mostly.

It is perfectly valid to bind one image from a texture to an FBO and then render with that texture, as long as you prevent yourself from sampling from that image. If you do try to read from and write to the same image, you get undefined results: it may do what you want, the sampler may get old data, it may get half old and half new data, or it may get garbage data. Any of these are possible.

Do not try this.

It is possible to get the same effect by doing a glCopyTexImage2D or a glCopyTexSubImage2D operation. Similarly, if you try to read and write to the same image, you get undefined results.

EXT_Framebuffer_object
The original form of FBOs was this extension. It lacked quite a bit of the above functionality, which later extensions granted. The biggest difference is that it has more hard-coded restrictions on framebuffer completeness. All of the images have to be the same size in the EXT spec, for example. Some of these limitations were hardware-based. So there may be hardware that supports EXT_FBO and not ARB_FBO, even though it supports things like EXT_framebuffer_blit and other parts of ARB_FBO.