Metal by Tutorials

Second Edition · iOS 13 · Swift 5.1 · Xcode 11


14. Multipass & Deferred Rendering
Written by Marius Horga

Up to this point, you’ve been running projects and playgrounds that only had one render pass. In other words, you used a single command encoder to submit all of your draw calls to the GPU.

For more complex apps, you may need multiple render passes before presenting the texture to the screen, letting you use the result of one pass in the next one. You may even need to render content offscreen for later use.
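
For reference, here’s the shape of a single-pass draw(in:) as you’ve written it in earlier chapters (a minimal sketch; pipelineState and commandQueue stand in for your project’s own properties):

func draw(in view: MTKView) {
  guard let descriptor = view.currentRenderPassDescriptor,
    let commandBuffer = commandQueue.makeCommandBuffer(),
    let renderEncoder = commandBuffer.makeRenderCommandEncoder(
      descriptor: descriptor) else { return }
  renderEncoder.setRenderPipelineState(pipelineState)
  // ... all draw calls go through this single encoder ...
  renderEncoder.endEncoding()
  guard let drawable = view.currentDrawable else { return }
  commandBuffer.present(drawable)
  commandBuffer.commit()
}

In this chapter, you’ll break that single encoder up into several.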

With multiple render passes, you can render a scene with multiple lights and shadows, like this:

Take note, because in this chapter, you’ll be creating that scene. Along the way, you’ll learn a few key concepts, such as:

  • Shadow maps.
  • Multipass rendering.
  • Deferred rendering with a G-buffer.
  • The blit command encoder.

You’ll start with shadows.

Shadow maps

A shadow represents the absence of light on a surface. A shadow is present on an object when another surface or object obscures it from the light. Having shadows in a project makes your scene look more realistic and provides a feeling of depth.

Shadow maps are nothing more than textures containing shadow information about the scene. When a light shines on an object, anything that is behind that object gets a shadow cast on it.

Typically, you render the scene from the location of your camera, but to build a shadow map, you need to render the scene from the location of the light source: in this case, the sun.

The image on the left shows a render from the position of the camera with the directional light pointing down. The image on the right shows a render from the position of the directional light.

The eye shows where the camera was positioned in the first image.

You’ll do two render passes:

  • First pass: Using a separate view matrix holding the sun’s position, you’ll render from the point of view of the light. Because you’re not interested in color at this stage, only the depth of objects that the sun can see, you’ll only render a depth texture in this pass. This is a grayscale texture, with the gray value indicating depth. Black is close to the light and white is far away.

  • Second pass: You’ll render from the camera as usual, but you’ll compare each fragment’s depth, as seen from the light, against the depth map. If the fragment’s depth is greater than the value stored in the map at that position (lighter in color), something sits between it and the light, so the fragment is in shadow; the test is sketched right after this list. The light can “see” the blue x in the above image, so it is not in shadow.
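
In shader terms, the test boils down to a few lines (an MSL sketch with illustrative names; you’ll write the real version later in this chapter):

// Light-space depth test: a fragment farther from the light
// than the stored shadow map depth is in shadow.
float shadowMapDepth = shadowTexture.sample(s, xy);
float fragmentDepth = in.shadowPosition.z / in.shadowPosition.w;
bool inShadow = fragmentDepth > shadowMapDepth;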

Shadows and deferred rendering are complex subjects, so there’s a starter project available for this chapter. Open it in Xcode and take a look around.

The code is similar to what’s available at the end of Chapter 5, “Lighting Fundamentals”.

For simplicity, you’ll be working on the diffuse color only; specularity and ambient lighting are not included with this project.

Build and run the project, and you’ll see a train and a tree model, both on top of a plane:

Add these properties in Renderer.swift, at the top of Renderer:

var shadowTexture: MTLTexture!
let shadowRenderPassDescriptor = MTLRenderPassDescriptor()

Later, when you create the render command encoder for drawing the shadow, you’ll use this render pass descriptor. Each render pass descriptor can have up to eight color textures attached to it, plus a depth texture and a stencil texture. shadowRenderPassDescriptor will point to shadowTexture as its depth attachment.

You’ll need several textures throughout this chapter, so create a helper method for building them.

Add this new method to Renderer:

func buildTexture(pixelFormat: MTLPixelFormat,
                  size: CGSize,
                  label: String) -> MTLTexture {
  let descriptor = MTLTextureDescriptor.texture2DDescriptor(
                              pixelFormat: pixelFormat,
                              width: Int(size.width),
                              height: Int(size.height),
                              mipmapped: false)
  descriptor.usage = [.shaderRead, .renderTarget]
  descriptor.storageMode = .private
  guard let texture = 
    Renderer.device.makeTexture(descriptor: descriptor) else {
    fatalError() 
  }
  texture.label = "\(label) texture"
  return texture
}

In this method, you configure a texture descriptor and create a texture using that descriptor. Textures used by render pass descriptors have to be configured as render targets. Render targets are memory buffers or textures that allow offscreen rendering for cases where the rendered pixels don’t need to end up in the framebuffer. The storage mode is private, meaning the texture is stored in memory in a place that only the GPU can access.

Next, add the following to the bottom of the file:

private extension MTLRenderPassDescriptor {
  func setUpDepthAttachment(texture: MTLTexture) {
    depthAttachment.texture = texture
    depthAttachment.loadAction = .clear
    depthAttachment.storeAction = .store
    depthAttachment.clearDepth = 1
  }
}

This adds an extension on MTLRenderPassDescriptor with a method that configures the depth attachment of a render pass descriptor to use the provided texture. This is where you’ll attach shadowTexture to shadowRenderPassDescriptor.

You’re creating a separate method because you’ll have other render pass descriptors later in the chapter. The load and store actions describe what action the attachment should take at the start and end of the render pass. In this case, you clear the texture at the beginning of the pass and store the texture at the end of the pass.

Now, add the following to the Renderer class:

func buildShadowTexture(size: CGSize) {
  shadowTexture = buildTexture(pixelFormat: .depth32Float,
                               size: size, label: "Shadow")
  shadowRenderPassDescriptor.setUpDepthAttachment(
                               texture: shadowTexture)
}

This builds the depth texture by calling the two helper methods you just created. Next, call this method at the end of init(metalView:):

buildShadowTexture(size: metalView.drawableSize)

Also, call it at the end of mtkView(_:drawableSizeWillChange:) so that when the user resizes the window, you can rebuild the textures with the correct size:

buildShadowTexture(size: size)

Build and run the project to make sure everything works. You won’t see any visual changes yet; you’re just verifying things are error-free before moving on to the next task.

Multipass rendering

A render pass consists of sending commands to a command encoder. The pass ends when you end encoding on that command encoder. Multipass rendering uses multiple command encoders and facilitates rendering content in one render pass and using the output of this pass as the input of the next render pass.
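
Concretely, multipass rendering means several encoders sharing one command buffer (a sketch only; you’ll build the real shadow and main passes over the rest of this chapter):

// Two render passes on one command buffer.
// Each encoder is one pass; endEncoding() closes it.
guard let commandBuffer =
  commandQueue.makeCommandBuffer() else { return }

// Pass 1: render depth into shadowTexture.
guard let shadowEncoder = commandBuffer.makeRenderCommandEncoder(
  descriptor: shadowRenderPassDescriptor) else { return }
// ... encode shadow draw calls ...
shadowEncoder.endEncoding()

// Pass 2: render the scene, reading shadowTexture as input.
guard let descriptor = view.currentRenderPassDescriptor,
  let mainEncoder = commandBuffer.makeRenderCommandEncoder(
    descriptor: descriptor) else { return }
// ... encode main draw calls, with shadowTexture bound ...
mainEncoder.endEncoding()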

The shadow pass

During the shadow pass, you’ll be rendering from the point of view of the sun, so you’ll need a new view matrix.

The main pass will also need the light’s combined view-projection matrix, so give it a home first. Add this to the Uniforms struct in Common.h:

matrix_float4x4 shadowMatrix;

Back in Renderer.swift, add a property for the shadow pipeline state at the top of Renderer:

var shadowPipelineState: MTLRenderPipelineState!

Now add the method that encodes the shadow pass:
func renderShadowPass(renderEncoder: MTLRenderCommandEncoder) {
  renderEncoder.pushDebugGroup("Shadow pass")
  renderEncoder.label = "Shadow encoder"
  renderEncoder.setCullMode(.none)
  renderEncoder.setDepthStencilState(depthStencilState)
  // 1
  renderEncoder.setDepthBias(0.01, slopeScale: 1.0, clamp: 0.01)
  // 2
  uniforms.projectionMatrix = float4x4(orthoLeft: -8, right: 8, 
                                       bottom: -8, top: 8, 
                                       near: 0.1, far: 16)
  let position: float3 = [sunlight.position.x, 
                          sunlight.position.y,
                          sunlight.position.z]
  let center: float3 = [0, 0, 0]
  let lookAt = float4x4(eye: position, center: center, 
                        up: [0,1,0])
  uniforms.viewMatrix = lookAt
  uniforms.shadowMatrix = 
       uniforms.projectionMatrix * uniforms.viewMatrix
  
  renderEncoder.setRenderPipelineState(shadowPipelineState)
  for model in models {
    draw(renderEncoder: renderEncoder, model: model)
  }
  renderEncoder.popDebugGroup()
  renderEncoder.endEncoding()
}
Going through the numbered comments:

1. The depth bias nudges the depths recorded in the shadow map slightly, which helps prevent self-shadowing artifacts (shadow acne) when you compare depths later.

2. A directional light’s rays are parallel, so instead of the camera’s perspective projection, you use an orthographic projection that bounds the scene, plus a view matrix looking from the sun’s position toward the center of the scene. You save the combined matrix into shadowMatrix for the main pass to use.

In draw(in:), before the main pass, create a dedicated encoder from shadowRenderPassDescriptor and run the shadow pass:

guard let shadowEncoder = commandBuffer.makeRenderCommandEncoder(
                descriptor: shadowRenderPassDescriptor) else {
  return
}
renderShadowPass(renderEncoder: shadowEncoder)

The shadow pipeline state renders only depth: it has no fragment function and no color attachments. Add this method to Renderer:
func buildShadowPipelineState() {
  let pipelineDescriptor = MTLRenderPipelineDescriptor()
  pipelineDescriptor.vertexFunction = 
       Renderer.library.makeFunction(name: "vertex_depth")
  pipelineDescriptor.fragmentFunction = nil
  pipelineDescriptor.colorAttachments[0].pixelFormat = .invalid
  pipelineDescriptor.vertexDescriptor =
      MTKMetalVertexDescriptorFromModelIO(
                Model.defaultVertexDescriptor)
  pipelineDescriptor.depthAttachmentPixelFormat = .depth32Float
  do {
    shadowPipelineState = 
       try Renderer.device.makeRenderPipelineState(
                     descriptor: pipelineDescriptor)
  } catch let error {
    fatalError(error.localizedDescription)
  }
}
Call it in init(metalView:), alongside the other setup:

buildShadowPipelineState()

The pipeline references a vertex function named vertex_depth. Create a new Metal file for it and add:
#import "../Utility/Common.h"

struct VertexIn {
  float4 position [[ attribute(0) ]];
};

vertex float4 
       vertex_depth(const VertexIn vertexIn [[ stage_in ]],
                    constant Uniforms &uniforms [[buffer(1)]]) {
  matrix_float4x4 mvp = 
        uniforms.projectionMatrix * uniforms.viewMatrix
        * uniforms.modelMatrix;
  float4 position = mvp * vertexIn.position;
  return position;
}

There’s one gotcha: renderShadowPass overwrites the matrices in uniforms with the sun’s matrices. In draw(in:), at the start of the main pass, restore the camera’s projection matrix:

uniforms.projectionMatrix = camera.projectionMatrix

The main pass

Now that you have the shadow map saved to a texture, all you need to do is send it to the next pass — the main pass — so you can use the texture in lighting calculations in the fragment function.

In draw(in:), in the main pass, bind the shadow map so the fragment shader can read it:

renderEncoder.setFragmentTexture(shadowTexture, index: 0)

In Shaders.metal, the fragment shader also needs each fragment’s position in light space. Add this member to VertexOut:
float4 shadowPosition;
In vertex_main, fill it in using the shadowMatrix you saved during the shadow pass:

out.shadowPosition = 
     uniforms.shadowMatrix * uniforms.modelMatrix 
     * vertexIn.position;

Add the shadow map as a new parameter to fragment_main:
depth2d<float> shadowTexture [[texture(0)]]
Finally, in fragment_main, before computing the final color, add the shadow test:

// 1
float2 xy = in.shadowPosition.xy;
xy = xy * 0.5 + 0.5;
xy.y = 1 - xy.y;
// 2
constexpr sampler s(coord::normalized, filter::linear,
                    address::clamp_to_edge, 
                    compare_func::less);
float shadow_sample = shadowTexture.sample(s, xy);
float current_sample = 
     in.shadowPosition.z / in.shadowPosition.w;
// 3
if (current_sample > shadow_sample) {
  diffuseColor *= 0.5;
}

Going through this code:

1. shadowPosition is in clip space, where x and y run from -1 to 1, but texture coordinates run from 0 to 1 with y pointing down. Rescale and flip to get shadow map coordinates.

2. Sample the shadow map at that position, and compute the current fragment’s depth from the light by dividing z by w.

3. If the fragment is farther from the light than the depth the shadow map recorded, it’s in shadow, so darken its diffuse color.

Deferred rendering

Before this chapter, you’ve only used forward rendering: every model is shaded as it’s drawn, and every fragment is lit against every light. Now assume you have a hundred models (or instances) and a hundred lights in the scene, say a metropolitan downtown, where the buildings and street lights easily reach those numbers. The cost explodes, because fragments that end up hidden behind closer geometry get shaded anyway. Deferred rendering splits the work in two: a first pass renders each surface’s attributes (color, normal, position) into a set of textures called the G-buffer, and a second pass computes lighting once per visible screen pixel using those textures.
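
To see where the forward-rendering cost comes from, here’s the shape of its fragment work (an MSL sketch with illustrative names; shadeLight is a stand-in for the per-light shading from Chapter 5, not code from the project):

// Forward rendering: every rasterized fragment loops over every
// light, even fragments that are later overdrawn.
// cost ~ rasterized fragments x lights
float3 color = 0;
for (uint i = 0; i < fragmentUniforms.lightCount; i++) {
  color += shadeLight(lights[i], normal, worldPosition, baseColor);
}

With deferred rendering, that loop runs once per screen pixel, and only for the surfaces that are actually visible.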

The G-buffer pass

All right, time to build that G-buffer up! First, create four new textures. Add this code at the top of Renderer:

var albedoTexture: MTLTexture!
var normalTexture: MTLTexture!
var positionTexture: MTLTexture!  
var depthTexture: MTLTexture!
var gBufferPipelineState: MTLRenderPipelineState!
var gBufferRenderPassDescriptor: MTLRenderPassDescriptor!
Then add a method that builds all four textures. Note that buildTexture already appends " texture" to the label you pass in, so you only pass the base name here:

func buildGbufferTextures(size: CGSize) {
  albedoTexture = buildTexture(pixelFormat: .bgra8Unorm, 
                          size: size, label: "Albedo")
  normalTexture = buildTexture(pixelFormat: .rgba16Float, 
                          size: size, label: "Normal")
  positionTexture = buildTexture(pixelFormat: .rgba16Float, 
                          size: size, label: "Position")
  depthTexture = buildTexture(pixelFormat: .depth32Float, 
                          size: size, label: "Depth")
}
The G-buffer pass needs its color attachments configured much like the depth attachment. This method references colorAttachments directly, so it belongs in the MTLRenderPassDescriptor extension at the bottom of the file:

func setUpColorAttachment(position: Int, texture: MTLTexture) {
  let attachment: MTLRenderPassColorAttachmentDescriptor = 
    colorAttachments[position]
  attachment.texture = texture
  attachment.loadAction = .clear
  attachment.storeAction = .store
  attachment.clearColor = MTLClearColorMake(0.73, 0.92, 1, 1)
}
Back in Renderer, add a method that creates the G-buffer render pass descriptor and wires up all four textures:

func buildGBufferRenderPassDescriptor(size: CGSize) {
  gBufferRenderPassDescriptor = MTLRenderPassDescriptor()
  buildGbufferTextures(size: size)
  let textures: [MTLTexture] = [albedoTexture, 
                                normalTexture, 
                                positionTexture]
  for (position, texture) in textures.enumerated() {
    gBufferRenderPassDescriptor.setUpColorAttachment(
          position: position, texture: texture)
  }
  gBufferRenderPassDescriptor.setUpDepthAttachment(
          texture: depthTexture)
}
Call it wherever you call buildShadowTexture(size:), such as in mtkView(_:drawableSizeWillChange:), so the G-buffer textures get rebuilt along with the drawable:

buildGBufferRenderPassDescriptor(size: size)

The pass also needs a pipeline state whose pixel formats match each attachment. Add:
func buildGbufferPipelineState() {
  let descriptor = MTLRenderPipelineDescriptor()
  descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
  descriptor.colorAttachments[1].pixelFormat = .rgba16Float
  descriptor.colorAttachments[2].pixelFormat = .rgba16Float
  descriptor.depthAttachmentPixelFormat = .depth32Float
  descriptor.label = "GBuffer state"

  descriptor.vertexFunction = 
    Renderer.library.makeFunction(name: "vertex_main")
  descriptor.fragmentFunction = 
    Renderer.library.makeFunction(name: "gBufferFragment")
  descriptor.vertexDescriptor = 
    MTKMetalVertexDescriptorFromModelIO(
               Model.defaultVertexDescriptor)
  do {
    gBufferPipelineState = 
      try Renderer.device.makeRenderPipelineState(
        descriptor: descriptor)
  } catch let error {
    fatalError(error.localizedDescription)
  }
}
Call it in init(metalView:):

buildGbufferPipelineState()

Now add the method that encodes the G-buffer pass. Notice that it binds the shadow map, so the fragment function can bake shadow information straight into the G-buffer:

func renderGbufferPass(renderEncoder: MTLRenderCommandEncoder) {
  renderEncoder.pushDebugGroup("Gbuffer pass")
  renderEncoder.label = "Gbuffer encoder"

  renderEncoder.setRenderPipelineState(gBufferPipelineState)
  renderEncoder.setDepthStencilState(depthStencilState)

  uniforms.viewMatrix = camera.viewMatrix
  uniforms.projectionMatrix = camera.projectionMatrix
  fragmentUniforms.cameraPosition = camera.position
  renderEncoder.setFragmentTexture(shadowTexture, index: 0)
  renderEncoder.setFragmentBytes(&fragmentUniforms, 
                  length: MemoryLayout<FragmentUniforms>.stride, 
                  index: 3)
  for model in models {
    draw(renderEncoder: renderEncoder, model: model)
  }
  renderEncoder.popDebugGroup()
  renderEncoder.endEncoding()
}
In draw(in:), after the shadow pass, create an encoder from the G-buffer descriptor and run the pass:

guard let gBufferEncoder = commandBuffer.makeRenderCommandEncoder(
                        descriptor: gBufferRenderPassDescriptor) else {
  return
}
renderGbufferPass(renderEncoder: gBufferEncoder)
Now for the shader side. Create a new Metal file for the G-buffer functions, and add:

#import "../Utility/Common.h"

struct VertexOut {
  float4 position [[position]];
  float3 worldPosition;
  float3 worldNormal;
  float4 shadowPosition;
};

struct GbufferOut {
  float4 albedo [[color(0)]];
  float4 normal [[color(1)]];
  float4 position [[color(2)]];
};

Each member of GbufferOut carries a [[color(n)]] attribute, so a single fragment function writes to all three render targets at once. Add the fragment function:
fragment GbufferOut gBufferFragment(VertexOut in [[stage_in]],
             depth2d<float> shadow_texture [[texture(0)]],
             constant Material &material [[buffer(1)]]) {
  GbufferOut out;
  // 1
  out.albedo = float4(material.baseColor, 1.0);
  out.albedo.a = 0;
  out.normal = float4(normalize(in.worldNormal), 1.0);
  out.position = float4(in.worldPosition, 1.0);
  // 2
  // copy from fragment_main
  float2 xy = in.shadowPosition.xy;
  xy = xy * 0.5 + 0.5;
  xy.y = 1 - xy.y;
  constexpr sampler s(coord::normalized, filter::linear, 
                      address::clamp_to_edge, 
                      compare_func::less);
  float shadow_sample = shadow_texture.sample(s, xy);
  float current_sample = 
         in.shadowPosition.z / in.shadowPosition.w;

  // 3
  if (current_sample > shadow_sample) {
    out.albedo.a = 1;
  }
  return out;
}

Going through the numbered comments:

1. Write the material’s base color into the albedo target, and the world-space normal and position into theirs. The albedo’s alpha channel is otherwise unused, so you’ll use it to carry the shadow flag: 0 means lit.

2. This is the same shadow test you wrote in fragment_main earlier.

3. Instead of darkening a color here, store 1 in the albedo’s alpha when the fragment is in shadow; the lighting pass will read it later.

The Blit Command Encoder

To blit means to copy from one part of memory to another. You use a blit command encoder on resources such as textures and buffers. It’s generally used for image processing, but you can (and will) also use it to copy image data that is rendered offscreen.

To check the G-buffer on screen, you can copy the albedo texture straight into the drawable’s texture. In draw(in:), after the G-buffer pass, encode a blit:

guard let blitEncoder = commandBuffer.makeBlitCommandEncoder() else {
  return
}
blitEncoder.pushDebugGroup("Blit")
blitEncoder.label = "Blit encoder"
let origin = MTLOriginMake(0, 0, 0)
let size = MTLSizeMake(Int(view.drawableSize.width), Int(view.drawableSize.height), 1)
blitEncoder.copy(from: albedoTexture, sourceSlice: 0, 
                 sourceLevel: 0, 
                 sourceOrigin: origin, sourceSize: size, 
                 to: drawable.texture, destinationSlice: 0, 
                 destinationLevel: 0, destinationOrigin: origin)
blitEncoder.popDebugGroup()
blitEncoder.endEncoding()
By default, a view’s drawable textures are framebuffer-only, which restricts them to use as render targets for display. To let the blit encoder write to the drawable’s texture, set this in init(metalView:):

metalView.framebufferOnly = false

The Lighting pass

Up to this point, you’ve rendered the scene’s color attachments to multiple render targets, saving them for use in the lighting pass’s fragment shader. This ensures that only visible fragments get lit, cutting out the per-light work you would otherwise do for every fragment of every model in the scene.

The lighting pass, also called the composition pass, draws a full-screen quad and computes lighting per pixel from the G-buffer textures. Add these properties at the top of Renderer:

var compositionPipelineState: MTLRenderPipelineState!

var quadVerticesBuffer: MTLBuffer!
var quadTexCoordsBuffer: MTLBuffer!

The quad is two triangles in normalized device coordinates, with matching texture coordinates. Add these arrays to Renderer:

let quadVertices: [Float] = [
  -1.0,  1.0,
   1.0, -1.0,
  -1.0, -1.0,
  -1.0,  1.0,
   1.0,  1.0,
   1.0, -1.0
]

let quadTexCoords: [Float] = [
  0.0, 0.0,
  1.0, 1.0,
  0.0, 1.0,
  0.0, 0.0,
  1.0, 0.0,
  1.0, 1.0
]
In init(metalView:), create buffers from those arrays:

quadVerticesBuffer = 
    Renderer.device.makeBuffer(bytes: quadVertices, 
      length: MemoryLayout<Float>.size * quadVertices.count, 
      options: [])
quadVerticesBuffer.label = "Quad vertices"
quadTexCoordsBuffer = 
    Renderer.device.makeBuffer(bytes: quadTexCoords, 
      length: MemoryLayout<Float>.size * quadTexCoords.count, 
      options: [])
quadTexCoordsBuffer.label = "Quad texCoords"
Now add the method that encodes the composition pass:

func renderCompositionPass(
             renderEncoder: MTLRenderCommandEncoder) {
  renderEncoder.pushDebugGroup("Composition pass")
  renderEncoder.label = "Composition encoder"
  renderEncoder.setRenderPipelineState(compositionPipelineState)
  renderEncoder.setDepthStencilState(depthStencilState)
  // 1
  renderEncoder.setVertexBuffer(quadVerticesBuffer, 
                                offset: 0, index: 0)
  renderEncoder.setVertexBuffer(quadTexCoordsBuffer, 
                                offset: 0, index: 1)
  // 2
  renderEncoder.setFragmentTexture(albedoTexture, index: 0)
  renderEncoder.setFragmentTexture(normalTexture, index: 1)
  renderEncoder.setFragmentTexture(positionTexture, index: 2)
  renderEncoder.setFragmentBytes(&lights, 
    length: MemoryLayout<Light>.stride * lights.count, 
    index: 2)
  renderEncoder.setFragmentBytes(&fragmentUniforms, 
    length: MemoryLayout<FragmentUniforms>.stride, 
    index: 3)
  // 3
  // Each vertex is two floats, so the quad has
  // quadVertices.count / 2 (six) vertices.
  renderEncoder.drawPrimitives(type: .triangle, 
                               vertexStart: 0, 
                               vertexCount: quadVertices.count / 2)
  renderEncoder.popDebugGroup()
  renderEncoder.endEncoding()
}
Going through the numbered comments:

1. Bind the quad’s vertex and texture coordinate buffers.

2. Bind the three G-buffer textures, plus the lights and fragment uniforms, for the fragment function to read.

3. Draw the quad’s six vertices; the two triangles cover the whole screen.

In draw(in:), replace the temporary blit, which was just for checking the G-buffer, with the composition pass. It renders into the drawable, so use the view’s currentRenderPassDescriptor (here unwrapped as descriptor):

guard let compositionEncoder = 
    commandBuffer.makeRenderCommandEncoder(
                        descriptor: descriptor) else {
  return
}
renderCompositionPass(renderEncoder: compositionEncoder)
Add the pipeline state for this pass:

func buildCompositionPipelineState() {
  let descriptor = MTLRenderPipelineDescriptor()
  descriptor.colorAttachments[0].pixelFormat = 
      Renderer.colorPixelFormat
  descriptor.depthAttachmentPixelFormat = .depth32Float
  descriptor.label = "Composition state"
  descriptor.vertexFunction = Renderer.library.makeFunction(
    name: "compositionVert")
  descriptor.fragmentFunction = Renderer.library.makeFunction(
    name: "compositionFrag")
  do {
    compositionPipelineState = 
      try Renderer.device.makeRenderPipelineState(
          descriptor: descriptor)
  } catch let error {
    fatalError(error.localizedDescription)
  }
}
Call it in init(metalView:):

buildCompositionPipelineState()

Finally, the shader functions. Create one more Metal file and add the pass-through vertex function:
#import "../Utility/Common.h"

struct VertexOut {
  float4 position [[position]];
  float2 texCoords;
};
vertex VertexOut compositionVert(
  constant float2 *quadVertices [[buffer(0)]],
  constant float2 *quadTexCoords [[buffer(1)]],
  uint id [[vertex_id]]) {
  VertexOut out;
  out.position = float4(quadVertices[id], 0.0, 1.0);
  out.texCoords = quadTexCoords[id];
  return out;
}
The vertex function simply indexes into the quad arrays with [[vertex_id]] and hands the texture coordinates on. Now add the fragment function:

fragment float4 compositionFrag(VertexOut in [[stage_in]],
     constant FragmentUniforms &fragmentUniforms [[buffer(3)]],
     constant Light *lightsBuffer [[buffer(2)]],
     texture2d<float> albedoTexture [[texture(0)]],
     texture2d<float> normalTexture [[texture(1)]],
     texture2d<float> positionTexture [[texture(2)]],
     depth2d<float> shadowTexture [[texture(4)]]) {
  // 1
  constexpr sampler s(min_filter::linear, mag_filter::linear);
  float4 albedo = albedoTexture.sample(s, in.texCoords);
  float3 normal = normalTexture.sample(s, in.texCoords).xyz;
  float3 position = positionTexture.sample(s, in.texCoords).xyz;
  float3 baseColor = albedo.rgb;
  // 2
  float3 diffuseColor = compositeLighting(normal, position, 
                          fragmentUniforms, 
                          lightsBuffer, baseColor);
  // 3
  float shadow = albedo.a;
  if (shadow > 0) {
    diffuseColor *= 0.5;
  }
  return float4(diffuseColor, 1);
}

Going through the numbered comments:

1. Sample the albedo, normal and position textures at this pixel’s coordinates.

2. Compute the diffuse lighting for all lights with the compositeLighting helper, which wraps the per-light diffuse calculation from earlier chapters, using the reconstructed normal and world position.

3. Read the shadow flag you stored in the albedo’s alpha channel during the G-buffer pass, and darken the pixel if it’s in shadow.

Time to give the deferred renderer something to chew on. Using the createPointLights(count:min:max:) helper, which scatters random point lights between min and max, add a first batch in init(metalView:), where sunlight is appended to lights:

lights.append(sunlight)
createPointLights(count: 30, min: [-3, 0.3, -3], max: [1, 2, 2])

To see the lights play over the geometry, rotate the first model a little every frame in draw(in:):

models[0].rotation.y += 0.01

There’s a catch with many lights: setFragmentBytes(_:length:index:) is intended for small amounts of data, under about 4KB, and a few hundred Light structs exceed that. Store the lights in a real buffer instead. Add a property at the top of Renderer:

var lightsBuffer: MTLBuffer!

Create the buffer in init(metalView:), after the lights are set up:

lightsBuffer = Renderer.device.makeBuffer(bytes: lights, 
  length: MemoryLayout<Light>.stride * lights.count, 
  options: [])

Then, in renderCompositionPass, replace:

renderEncoder.setFragmentBytes(&lights,
  length: MemoryLayout<Light>.stride * lights.count,
  index: 2)

with:

renderEncoder.setFragmentBuffer(lightsBuffer, 
                                offset: 0, index: 2)

With that in place, you can crank up the numbers. Change the createPointLights call to:

createPointLights(count: 300, min: [-10, 0.3, -10], 
                  max: [10, 2, 20])

Where to go from here?

If you want to improve your app’s performance further, there are a few approaches to try. One is to render the lights as light volumes, using stencil tests to select only the fragments a given light actually affects, so each pixel is shaded against only the lights that reach it rather than all of them.
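
As a taste of the light-volume idea: a point light’s volume is a sphere whose radius you can derive from its attenuation, beyond which its contribution is negligible. Here’s a sketch, assuming the Light struct’s float3 attenuation holds the constant, linear and quadratic terms as in earlier chapters:

// Sketch: bounding-sphere radius for a point light. Solves the
// attenuation equation 1 / (c + l*d + q*d*d) for the distance d
// at which brightness falls below a visibility threshold.
// Assumes a nonzero quadratic term q.
func lightVolumeRadius(light: Light,
                       threshold: Float = 5.0 / 256.0) -> Float {
  let c = light.attenuation.x  // constant term
  let l = light.attenuation.y  // linear term
  let q = light.attenuation.z  // quadratic term
  let maxChannel = max(light.color.x,
                       max(light.color.y, light.color.z))
  // Solve q*d^2 + l*d + (c - maxChannel / threshold) = 0
  // for its positive root.
  let discriminant = l * l - 4 * q * (c - maxChannel / threshold)
  return (-l + discriminant.squareRoot()) / (2 * q)
}

You’d then draw a sphere of that radius for each light, marking the pixels it covers in the stencil buffer before shading them.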
