WebGLRenderer / PerspectiveCamera / Lighting Basics
In GIS, libraries like Leaflet and MapLibre handle Scene, Camera, and Renderer internally, but with Three.js you need to assemble these components yourself. We will build the initialization process equivalent to a GIS
API's new Map(), understanding each component as we go.
Lighting is a 3D-specific concept that doesn't exist in 2D maps. You will experience how the combination of surface angles and light direction creates a sense of depth, and grasp the overall flow from an Earth-sized sphere to its rendering on screen.
We chose Three.js over raw WebGL as the rendering engine for our WebGIS engine: it provides a scene graph, cameras, materials, and controls out of the box, letting us focus on GIS-specific logic rather than low-level GPU plumbing.
On the other hand, to avoid excessive dependency on Three.js, mathematical logic such as coordinate calculations is separated into core/. This design makes it easy to swap the rendering engine or write tests.
Three.js requires at minimum three things: a Scene (a container for 3D objects), a Camera (defining the viewpoint), and a Renderer (the mechanism that draws to the screen). In this chapter, we set these up step by step and display an Earth-sized sphere.
In this book's implementation, these are consolidated into the ThreeRenderer class.
This class is responsible for: (1) creating the WebGLRenderer, (2) configuring the Scene and background color, (3) initial placement of the PerspectiveCamera, (4)
lighting, (5) OrbitControls setup, (6) custom zoom, and (7) per-frame render processing.
export class ThreeRenderer {
readonly renderer: THREE.WebGLRenderer;
readonly scene: THREE.Scene;
readonly camera: THREE.PerspectiveCamera;
readonly controls: OrbitControls;
constructor(canvas: HTMLCanvasElement) {
// Initialize Renderer, Scene, Camera,
// Lighting, and Controls
}
}

The WebGLRenderer draws the 3D scene using the browser's WebGL API. The key setting during initialization is enabling logarithmicDepthBuffer (a logarithmic depth buffer).
A standard depth buffer divides the range between near and far linearly. In planetary-scale scenes, the near/far ratio can become extremely large. With a linear buffer, most of the precision is concentrated near the camera, causing Z-fighting (a flickering artifact where surfaces alternate visibility) in the distance. The logarithmic depth buffer distributes precision logarithmically, achieving stable rendering even at vast scales. Furthermore, in this book, near/far values are dynamically adjusted based on the camera's surface distance, ensuring optimal depth precision at all zoom levels.
const renderer = new THREE.WebGLRenderer({
canvas,
antialias: true,
logarithmicDepthBuffer: true
});
renderer.setSize(w, h);
renderer.setPixelRatio(window.devicePixelRatio);

A standard WebGL depth buffer (Z-buffer) divides the near-to-far range linearly. However, in WebGIS we must handle extreme distance ranges: overhead views from space put the far plane at tens of thousands of kilometers, while near-surface views need a near plane of a few centimeters. With fixed near/far values, a single ratio would have to cover both extremes, and a linear depth buffer would cause Z-fighting (a flickering artifact where surfaces at different depths alternate visibility). In this book, near/far values are recalculated every frame based on the camera's surface distance, keeping the near/far ratio within an appropriate range at all times.
The logarithmic depth buffer solves this by storing depth values on a logarithmic scale. Expressed mathematically, a common formulation is depth = log2(C · w + 1) / log2(C · far + 1), where w is the view-space distance to the fragment and C is a constant that shifts precision toward the camera (Three.js's built-in implementation differs in detail).
In Three.js, simply specifying logarithmicDepthBuffer: true enables it.
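To see how a logarithmic mapping concentrates precision near the camera, here is a small sketch. It uses a common formulation of logarithmic depth, not Three.js's exact shader code, and the constant C is an illustrative tuning parameter:

```typescript
// A common logarithmic depth formulation (not Three.js's exact shader code):
// map a view-space distance w to [0, 1] with precision concentrated near the
// camera. C is an illustrative constant that shifts precision toward the camera.
function logDepth(w: number, far: number, C = 1.0): number {
  return Math.log2(C * w + 1) / Math.log2(C * far + 1);
}
```

With far = 10,000 km, the first meter of distance alone maps to roughly 4% of the depth range, whereas a linear buffer would give it only one ten-millionth.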
Column: The Cost of Logarithmic Depth Buffers
Logarithmic depth buffers incur a slight GPU performance cost because an additional depth value logarithmic transformation is performed in the fragment shader. However, for planetary-scale scenes, avoiding Z-fighting is essential, and this cost is well within acceptable limits. CesiumJS uses logarithmic depth buffers, but with a more advanced implementation that combines multi-frustum rendering and GPU double-precision emulation.
The Scene is a container for 3D objects. The Earth, lights, camera, and
all other objects are added to the Scene. The background color is set to a dark navy (0x000011) to represent outer space.
const scene = new THREE.Scene();
scene.background = new THREE.Color(0x000011);

PerspectiveCamera is a camera with perspective projection, similar to the human eye. It takes four parameters:
const camera = new THREE.PerspectiveCamera(
60, // FOV (degrees)
w / h, // Aspect ratio
1.0, // near (initial; dynamically updated in render())
WGS84.a * 10 // far (initial; dynamically updated in render())
);The values in the constructor are only initial values; they are updated to surface-distance-based near/far values within each frame's render() call.
near is set to max(surfaceDistance × 0.001, 0.5), and far is the horizon distance √(2Rh) multiplied by a margin factor (R is the Earth's radius, h the camera's height above the surface).
This ensures optimal depth buffer precision whether viewing from space or skimming the surface.
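The per-frame adjustment can be sketched as follows. The 0.001 factor, the 0.5 m floor, and the √(2Rh) horizon distance come from the text; the function name, the surfaceDistance parameter, and the margin value are illustrative assumptions:

```typescript
const WGS84_A = 6378137; // WGS84 semi-major axis in meters (WGS84.a in the text)

// Illustrative helper: derive near/far from the camera's distance to the surface.
function computeNearFar(surfaceDistance: number): { near: number; far: number } {
  // near: proportional to the surface distance, with a 0.5 m floor
  const near = Math.max(surfaceDistance * 0.001, 0.5);
  // far: horizon distance sqrt(2Rh) times a margin factor (value assumed here)
  const margin = 3.0;
  const far = Math.sqrt(2 * WGS84_A * surfaceDistance) * margin;
  return { near, far };
}
```

Each frame, render() would assign these to camera.near and camera.far and call camera.updateProjectionMatrix() so the new planes take effect.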
The camera's initial position is set at three times the Earth's radius from the origin, looking at the origin (the center of the Earth). In a real application, you would convert specific latitude/longitude coordinates (e.g., above Tokyo) to ECEF coordinates for placement (covered in Chapter 3), but in this chapter, we simply position the camera along the Z-axis.
By combining two types of lights, we achieve lighting with a sense of depth: a DirectionalLight (0xffffff, intensity 1.0) as the key light that follows the camera, and an AmbientLight (0x888888, slightly dim) as a fill light that prevents areas not reached by the DirectionalLight from going completely black.

// DirectionalLight: directional light that follows the camera
const dirLight = new THREE.DirectionalLight(
0xffffff, 1.0
);
dirLight.position.set(0, 0, 1);
camera.add(dirLight); // Attach as child of camera
scene.add(camera); // Add camera (with light) to scene
// AmbientLight: uniform ambient light for the entire scene
const ambient = new THREE.AmbientLight(
0x888888
);
scene.add(ambient);

By making the DirectionalLight a child of the camera (camera.add(dirLight)), light always comes from the camera's front direction regardless of where the camera is pointing.
When you rotate around the Earth, the visible side is always illuminated.
SphereGeometry is a geometry that approximates a sphere as a collection of polygons (triangles).
Using the WGS84 semi-major axis (approximately 6,378 km) as the radius, we render at the same scale as the actual Earth.
In this book, 1 Three.js unit = 1 meter. This means the Earth's equatorial radius is approximately 6,378,137 Three.js units. CesiumJS also uses a 1:1 scale. Introducing a scaling factor would make tile texture resolution calculations scale-dependent and complicate coordinate alignment with external data such as 3D Tiles.
The segment count (64×64) affects the smoothness of the sphere surface. Higher values produce smoother results, but the vertex count is proportional to n × m, so 64 was chosen as a balance with performance (approximately 4,000 vertices).
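The vertex-count claim can be checked with a quick calculation. A lat/long sphere grid of the kind SphereGeometry builds has (widthSegments + 1) × (heightSegments + 1) grid points, with seam and pole vertices duplicated in this scheme:

```typescript
// Vertex count of a lat/long sphere grid with duplicated seam/pole vertices,
// matching the grid construction used by SphereGeometry.
function sphereGridVertices(widthSegments: number, heightSegments: number): number {
  return (widthSegments + 1) * (heightSegments + 1);
}

sphereGridVertices(64, 64); // → 4225, the "approximately 4,000 vertices" above
```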
const geometry = new THREE.SphereGeometry(
WGS84.a, // radius = 6,378,137 m
64, 64 // horizontal / vertical segments
);
const material = new THREE.MeshStandardMaterial({
color: 0x2244aa
});
const sphere = new THREE.Mesh(geometry, material);
scene.add(sphere);

MeshStandardMaterial is a PBR (Physically Based Rendering) material that responds to lighting. Because the DirectionalLight creates natural shading, the sphere gains a sense of volume.
The color is set to navy (0x2244aa) to give an ocean-like appearance.
3D scene rendering is not a one-time operation; it repeatedly draws every frame (typically 60 fps) using requestAnimationFrame.
In this chapter the scene is static, but in later chapters when camera movement and animations are introduced,
the state will be updated each frame before rendering.
const animate = () => {
animationId = requestAnimationFrame(animate);
renderer.render(scene, camera);
};
animate();

In this book's implementation, the render loop is separated into a RenderLoop class.
On the GlobeViewer side, the callback calls each layer's update and ThreeRenderer.render.
export class RenderLoop {
private animationId = 0;
private running = false;
constructor(
private readonly onFrame: () => void
) {
}
start(): void {
if (this.running) return;
this.running = true;
const animate = () => {
if (!this.running) return;
this.animationId =
requestAnimationFrame(animate);
this.onFrame();
};
animate();
}
stop(): void {
this.running = false;
cancelAnimationFrame(this.animationId);
}
}

The update → render order is important. Layers first add and remove tiles, and then ThreeRenderer draws the result.
this.loop = new RenderLoop(() => {
const ctx = this.createContext();
for (const layer of this.layers)
layer.update(ctx);
this.threeRenderer.render();
});

To handle browser window resizing, three operations must be performed synchronously. If any one of them is missing, the rendering will be distorted after resizing.
function handleResize(w: number, h: number) {
// 1. Update the aspect ratio
camera.aspect = w / h;
// 2. Recalculate the projection matrix
camera.updateProjectionMatrix();
// 3. Resize the renderer
renderer.setSize(w, h);
}

If you don't call updateProjectionMatrix(), the internal projection matrix won't reflect the aspect-ratio change, and the rendering will remain stretched.
When navigating between pages in an SPA, failing to release the previous page's resources causes memory leaks.
In SvelteKit, the function returned from onMount is called on unmount,
so we reliably release Three.js resources here.
return () => {
cancelAnimationFrame(animationId);
geometry.dispose(); // Release GPU vertex buffers
material.dispose(); // Release material
renderer.dispose(); // Release WebGL context
};

Calling dispose() releases buffers and textures allocated on the GPU.
JavaScript's garbage collector alone cannot free GPU memory, so explicit disposal is required.
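When a scene holds many meshes, it is easy to miss one. The pattern can be sketched in a library-agnostic way; the interfaces below are structural stand-ins for illustration, whereas real code would traverse a THREE.Scene and call the actual dispose() methods:

```typescript
// Structural stand-ins for Three.js objects so the sketch is self-contained.
interface Disposable { dispose(): void }
interface MeshLike { geometry?: Disposable; material?: Disposable | Disposable[] }

// Dispose the geometry and material(s) of each mesh and return how many
// GPU resources were released. Materials can be arrays in Three.js.
function disposeMeshResources(meshes: MeshLike[]): number {
  let released = 0;
  for (const mesh of meshes) {
    if (mesh.geometry) { mesh.geometry.dispose(); released++; }
    const materials = Array.isArray(mesh.material)
      ? mesh.material
      : mesh.material ? [mesh.material] : [];
    for (const material of materials) { material.dispose(); released++; }
  }
  return released;
}
```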
In this chapter, we implemented the basic Three.js setup: a WebGLRenderer with logarithmicDepthBuffer: true, a Scene with a space-like background, a PerspectiveCamera whose near/far planes are updated every frame, camera-following lighting, an Earth-sized sphere, and a render loop driven by requestAnimationFrame.

In the next chapter, we will cover camera controls with OrbitControls and the implementation of custom zoom that accounts for the Earth's shape (ellipsoid).