
18. Rendering with Rays
Written by Marius Horga


In previous chapters, you worked with the traditional pipeline model, a raster-model, which uses a rasterizer to color the pixels on the screen. In this chapter, you’ll learn about a different rendering technique, a ray-model, which you’ll use to render clouds.

Getting started

In the world of computer graphics, there are two main approaches to rendering: The first one is geometry -> pixels. This approach transforms geometry into pixels using the raster-model. The raster-model assumes you know all of the models and their geometry (triangles) beforehand.

A pseudo-algorithm for the raster-model might look something like this:

for each triangle in the scene:
  if visible:
    mark triangle location
    apply triangle color
  if not visible:
    discard triangle

The second one is pixels -> geometry. This approach shoots rays from the camera, out of the screen and into the scene, using the ray-model, which is what you’ll use for the remainder of this chapter.

A pseudo-algorithm for the ray-model may look something like this:

for each pixel on the screen:
  if there's an intersection (hit):
    identify the object hit
    change pixel color
    optionally bounce the ray
  if there's no intersection (miss):
    discard ray
    leave pixel color unchanged

In ideal conditions, light travels through the air as a ray, following a straight line until it hits a surface. Once the ray hits something, any combination of the following events may happen to the light ray (the reflection and refraction cases are sketched in code right after this list):

  • Light gets absorbed into the surface.
  • Light gets reflected by the surface.
  • Light gets refracted through the surface.
  • Light gets scattered from another point under the surface.
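
The reflection and refraction cases map directly onto Metal Shading Language built-ins. Here’s a small sketch for illustration only; it isn’t part of the chapter’s playground, and the air-to-glass index of refraction is an assumed value:

#include <metal_stdlib>
using namespace metal;

// Both direction vectors are assumed to be normalized.
float3 reflectedDirection(float3 incoming, float3 normal) {
  // The ray bounces off the surface.
  return reflect(incoming, normal);
}

float3 refractedDirection(float3 incoming, float3 normal) {
  // The ray bends as it passes from air (1.0) into glass (assumed 1.5).
  float eta = 1.0 / 1.5;
  return refract(incoming, normal, eta);
}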

When comparing the two models, the raster-model is the faster rendering technique and is highly optimized for GPUs. This model scales well for larger scenes and implements antialiasing with ease. If you’re creating highly interactive rendered content, such as first- and third-person games, the raster-model might be the better choice since pixel accuracy is not paramount.

In contrast, the ray-model is more parallelizable and handles shadows, reflections and refractions more easily. When you’re rendering static, faraway scenes, a ray-model might be the better choice.

The ray-model has a few variants; among the most popular are ray casting, ray tracing, path tracing and raymarching. Before you get started, it’s important to understand each.

Ray casting

In 1968, Arthur Appel introduced ray casting, making it one of the oldest ray-model variants. However, it wasn’t until 1992 that it became popular in the world of gaming; that’s when id Software programmer John Carmack used it for Wolfenstein 3D.

A pseudo-algorithm for ray casting might look something like this:

For each pixel from 0 to width:
  Cast ray from the camera
  If there's an intersection (hit):
    Color the pixel in object's color
    Stop ray and go to the next pixel 
  If there's no intersection (miss):
    Color the pixel in the background color

Ray tracing

Ray tracing was introduced in 1979 by Turner Whitted. In contrast to ray casting, which shoots roughly a thousand rays into the scene, ray tracing shoots a ray for each pixel (width * height), which can easily amount to a million rays or more; a 1920 x 1080 screen, for example, needs over two million primary rays.

A pseudo-algorithm for ray tracing might look something like this:

For each pixel on the screen:
  For each object in the scene:
    If there's an intersection (hit):
      Select the closest hit object
      Recursively trace reflection/refraction rays
      Color the pixel in the selected object's color
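
Shooting one ray per pixel maps naturally onto a compute kernel, where each GPU thread handles one pixel. Here’s a minimal sketch of that setup; it isn’t the chapter’s playground code, the kernel name and texture binding are illustrative, and the camera is assumed to sit at z = -3.0 as in the raymarching example later in this chapter:

#include <metal_stdlib>
using namespace metal;

kernel void primaryRays(
  texture2d<float, access::write> output [[texture(0)]],
  uint2 gid [[thread_position_in_grid]])
{
  // Map the pixel coordinate into the [-1, 1] range on both axes.
  float2 size = float2(output.get_width(), output.get_height());
  float2 uv = float2(gid) / size * 2.0 - 1.0;
  // One primary ray per pixel: the camera sits at (0, 0, -3) and the ray
  // points through this pixel on an image plane one unit in front of it.
  float3 direction = normalize(float3(uv, 1.0));
  // Visualize the ray direction as a color so the sketch produces output.
  output.write(float4(abs(direction), 1.0), gid);
}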

Path tracing

Path tracing was introduced as a Monte Carlo method for finding a numerical solution to the integral in the rendering equation, which James Kajiya presented in 1986. You’ll learn more about the rendering equation in Chapter 20, “Advanced Lighting”, and you’ll implement a path tracer in Chapter 21, “Metal Performance Shaders”.

A pseudo-algorithm for path tracing might look something like this:

For each pixel on the screen:
  Reset the pixel color C.
  For each sample (random direction):
    Shoot a ray and trace its path.
    C += incoming radiance from ray.
  C /= number of samples

Raymarching

Raymarching is one of the newer approaches to the ray-model. It attempts to make rendering faster than ray tracing by jumping (or marching) in fixed steps along the ray, which shortens the time it takes to find an intersection.

A pseudo-algorithm for raymarching might look something like this:

For each step up to a maximum number of steps:
  Travel along the ray and check for intersections. 
  If there's an intersection (hit):
    Color the pixel in object's color
  If there's no intersection (miss):
    Color the pixel in the background color
  Add the step size to the distance traveled so far.

Objects in a raymarched scene are often described implicitly, as an equation rather than as triangles. For example, a circle of radius R centered at the origin can be described by the function:

F(X,Y) = X^2 + Y^2 - R^2

and a sphere of radius R centered at the origin by:

F(X,Y,Z) = X^2 + Y^2 + Z^2 - R^2

F evaluates to zero for points on the surface, to a negative number for points inside and to a positive number for points outside.

Signed distance functions

Signed distance functions (SDFs) describe the distance between any given point and the surface of an object in the scene. An SDF returns a negative number if the point is inside that object and a positive number otherwise. For example, for a circle of radius 0.25 centered at the origin, a point 0.4 units from the center has a signed distance of 0.15 (outside), while a point 0.1 units from the center has a signed distance of -0.15 (inside).

The following snippet colors every pixel whose uv coordinate falls inside such a circle:

// 1: The circle has a radius of 0.25 and is centered at the origin.
float radius = 0.25;
float2 center = float2(0.0);
// 2: The signed distance from this pixel's uv coordinate to the circle's edge.
float distance = length(uv - center) - radius;
// 3: A negative distance means the pixel is inside the circle, so color it.
if (distance < 0.0) {
  color = float4(1.0, 0.85, 0.0, 1.0);
}

The raymarching algorithm

Go to the Raymarching playground page. Inside the Resources folder, open Shaders.metal.

struct Sphere {
  float3 center;
  float radius;
  Sphere(float3 c, float r) {
    center = c;
    radius = r;
  }
};

struct Ray {
  float3 origin;
  float3 direction;
  Ray(float3 o, float3 d) {
    origin = o;
    direction = d;
  }
};

// The signed distance from the ray's current position to the sphere's surface.
float distanceToSphere(Ray r, Sphere s) {
  return length(r.origin - s.center) - s.radius;
}

Recall the raymarching pseudo-algorithm from earlier:

For each step up to a maximum number of steps:
  Travel along the ray and check for intersections.
  If there's an intersection (hit):
    Color the pixel in object's color
  If there's no intersection (miss):
    Color the pixel in the background color
  Add the step size to the distance traveled so far.

Here’s how that loop looks inside the kernel:

// 1: A unit sphere at the origin, and a ray that starts at z = -3.0 and
//    points into the scene through this pixel's uv coordinate.
Sphere s = Sphere(float3(0.0), 1.0);
Ray ray = Ray(float3(0.0, 0.0, -3.0), 
              normalize(float3(uv, 1.0)));
// 2: March along the ray for up to 100 steps. Each step advances the ray by
//    the distance to the closest surface; when that distance gets very small,
//    the ray hit the sphere, so color the pixel white.
for (int i = 0; i < 100; i++) {
  float distance = distanceToSphere(ray, s);
  if (distance < 0.001) {
    color = float3(1.0);
    break;
  }
  ray.origin += ray.direction * distance;
}

So far, the scene contains a single sphere. To fill the scene with spheres, first replace these lines:

Sphere s = Sphere(float3(0.0), 1.0);
Ray ray = Ray(float3(0.0, 0.0, -3.0), 
              normalize(float3(uv, 1.0)));

with a smaller sphere and a ray that starts far from the origin:

Sphere s = Sphere(float3(1.0), 0.5);
Ray ray = Ray(float3(1000.0), normalize(float3(uv, 1.0)));

Then, add a scene distance function that repeats the sphere endlessly by wrapping the ray's position:

float distanceToScene(Ray r, Sphere s, float range) {
  // 1: Wrap the ray's position so the space repeats every `range` units.
  Ray repeatRay = r;
  repeatRay.origin = fmod(r.origin, range);
  // 2: The distance to the nearest repeated copy of the sphere.
  return distanceToSphere(repeatRay, s);
}

Finally, inside the marching loop, replace:

float distance = distanceToSphere(ray, s);

with:

float distance = distanceToScene(ray, s, 2.0);

To vary the color with the hit point's position, replace the line that writes to the output texture:

output.write(float4(color, 1.0), gid);

with one that scales the color by the hit position relative to the camera at 1000:

output.write(float4(color * abs((ray.origin - 1000.0) / 10.0), 
                    1.0), gid);

To animate the scene, add a time argument to the kernel's parameter list:

constant float &time [[buffer(0)]]

Then replace the ray setup:

Ray ray = Ray(float3(1000.0), normalize(float3(uv, 1.0)));

with a camera position that moves as time passes:

float3 cameraPosition = float3(1000.0 + sin(time) + 1.0, 
                               1000.0 + cos(time) + 1.0, 
                               time);
Ray ray = Ray(cameraPosition, normalize(float3(uv, 1.0)));

Because the camera no longer sits exactly at 1000, base the shading on the vector from the camera to the hit point instead. Replace:

output.write(float4(color * abs((ray.origin - 1000.0) / 10.0), 
                    1.0), gid);

with:

float3 positionToCamera = ray.origin - cameraPosition;
output.write(float4(color * abs(positionToCamera / 10.0), 
                    1.0), gid);
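
On the Swift side, something needs to bind a value to that buffer(0) slot before each dispatch. Here’s a minimal sketch of how that might look; the function and the `encoder` parameter are illustrative, not code from the playground:

import Metal

// A sketch: advances an accumulated time value and binds it to the
// compute kernel's buffer(0) argument before the next dispatch.
func bindTime(_ time: inout Float, to encoder: MTLComputeCommandEncoder) {
  time += 0.01
  encoder.setBytes(&time, length: MemoryLayout<Float>.stride, index: 0)
}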

Creating random noise

Noise, in the context of computer graphics, represents perturbations in the expected pattern of a signal. In other words, noise is anything in the output that wasn’t expected to be there, such as pixels whose colors make them look misplaced among their neighbors.

Start by adding a function that turns a 2D coordinate into a pseudo-random value, then sample it in the kernel and write the result to the output:

float randomNoise(float2 p) {
  // The fractional part of a large sine expression jumps around
  // pseudo-randomly as p changes, which serves as a simple hash.
  return fract(6791.0 * sin(47.0 * p.x + 9973.0 * p.y));
}

float noise = randomNoise(uv);
output.write(float4(float3(noise), 1.0), gid);

To see the noise as larger tiles, quantize uv before sampling it, so the kernel reads:

float tiles = 8.0;
uv = floor(uv * tiles);
float noise = randomNoise(uv);

Next, add a function that smooths the noise by blending each point with its four neighbors:

float smoothNoise(float2 p) {
  // 1: The four neighboring sample positions, one unit away in each
  //    cardinal direction, plus the center point itself.
  float2 north = float2(p.x, p.y + 1.0);
  float2 east = float2(p.x + 1.0, p.y);
  float2 south = float2(p.x, p.y - 1.0);
  float2 west = float2(p.x - 1.0, p.y);
  float2 center = float2(p.x, p.y);
  // 2: A weighted average: each neighbor contributes 1/8 of the result,
  //    and the center contributes the remaining 1/2.
  float sum = 0.0;
  sum += randomNoise(north) / 8.0;
  sum += randomNoise(east) / 8.0;
  sum += randomNoise(south) / 8.0;
  sum += randomNoise(west) / 8.0;
  sum += randomNoise(center) / 2.0;
  return sum;
}

In the kernel, replace:

float noise = randomNoise(uv);

with:

float noise = smoothNoise(uv);

To remove the hard edges between tiles, add a function that interpolates the smoothed noise across each cell:

float interpolatedNoise(float2 p) {
  // 1: The smoothed noise at the four corners of the cell containing p.
  float q11 = smoothNoise(float2(floor(p.x), floor(p.y)));
  float q12 = smoothNoise(float2(floor(p.x), ceil(p.y)));
  float q21 = smoothNoise(float2(ceil(p.x), floor(p.y)));
  float q22 = smoothNoise(float2(ceil(p.x), ceil(p.y)));
  // 2: Bilinearly interpolate between the corners, using smoothstep so the
  //    transition eases in and out instead of changing linearly.
  float2 ss = smoothstep(0.0, 1.0, fract(p));
  float r1 = mix(q11, q21, ss.x);
  float r2 = mix(q12, q22, ss.x);
  return mix(r1, r2, ss.y);
}

Back in the kernel, replace:

float tiles = 8.0;
uv = floor(uv * tiles);
float noise = smoothNoise(uv);

with:

float tiles = 4.0;
uv *= tiles;
float noise = interpolatedNoise(uv);

Finally, add a fractional Brownian motion (fbm) function that layers several octaves of noise on top of each other:

float fbm(float2 uv, float steps) {
  // 1: Start from zero and give the first octave a fairly strong amplitude.
  float sum = 0.0;
  float amplitude = 0.8;
  for (int i = 0; i < steps; ++i) {
    // 2: Add the current octave of noise.
    sum += interpolatedNoise(uv) * amplitude;
    // 3: Increase the frequency and reduce the amplitude for the next octave.
    uv += uv * 1.2;
    amplitude *= 0.4;
  }
  return sum;
}

In the kernel, replace:

float noise = interpolatedNoise(uv);

with:

float noise = fbm(uv, tiles);

Marching clouds

All right, it’s time to apply what you’ve learned about signed distance fields, random noise and raymarching by making some marching clouds!

struct Plane {
  float yCoord;
  Plane(float y) {
    yCoord = y;
  }
};

// The vertical distance from the ray's current position down to the plane.
float distanceToPlane(Ray ray, Plane plane) {
  return ray.origin.y - plane.yCoord;
}

// In this scene, the ground plane is the only object the ray can hit.
float distanceToScene(Ray r, Plane p) {
  return distanceToPlane(r, p);
}
In the kernel, the clouds start out as plain fbm noise:

uv *= tiles;
float3 clouds = float3(fbm(uv));

To make the clouds drift over time, replace those lines with a time-shifted copy of uv:

float2 noise = uv;
noise.x += time * 0.1;
noise *= tiles;
float3 clouds = float3(fbm(noise));

// 1: Base colors for the land and the sky; tint the noise with the sky
//    color so it reads as clouds.
float3 land = float3(0.3, 0.2, 0.2);
float3 sky = float3(0.4, 0.6, 0.8);
clouds *= sky * 3.0;
// 2: Flip uv vertically, place the camera above and behind the scene, and
//    define a ground plane at y = 0.
uv.y = -uv.y;
Ray ray = Ray(float3(0.0, 4.0, -12.0), 
              normalize(float3(uv, 1.0)));
Plane plane = Plane(0.0);
// 3: March the ray through the scene; if it hits the ground plane, show
//    the land color instead of the clouds.
for (int i = 0; i < 100; i++) {
  float distance = distanceToScene(ray, plane);
  if (distance < 0.001) {
    clouds = land;
    break;
  }
  ray.origin += ray.direction * distance;
}

Where to go from here?

In this chapter, you learned about various rendering techniques such as ray casting, ray tracing, path tracing, and raymarching. You learned about signed distance fields and how to find objects in the scene with them. You learned about noise and how useful it is to define random volumetric content that cannot be easily defined using traditional geometry (meshes). Finally, you learned how to use raymarching and random noise to create dynamic clouds.
