Attacking Pixels - Adam Robinson

Hi, I’m Adam Robinson, a software engineer and maker based in London.

Take a look at my projects and experiments.


Building a 3D Portfolio with Three.js

Posted 3 years ago - 10 min read

Tags: Three.js

Check out the live Netlify build of my 3D Portfolio.

Getting started...

This project started as a way for me to integrate my Kinect 3D scanning work while also furthering my understanding of the three.js library. My initial concept was to let a user walk around inside my 3D head scan as a kind of creepy museum; however, the scale of this task and the time required to model all of those ‘museum’ assets in Blender became too much for a side project (I may have to return to this idea in the future). I decided to scale back and settled on a timeline, with a 3D avatar of myself walking the user through what I had been up to over the years. This allowed for a more infographic-esque approach, cutting down on the 3D assets required for the scene and letting me focus on the three.js mechanics. With a viewing frustum moving primarily along a single axis of travel, development time was cut down considerably.

Key concepts

I’ll scan over some of the concepts I needed to wrap my head around to produce this scene. A fair amount of trial and error was required to find what would work best for my use case. Through my iterations I found many three.js tricks and optimisations which could be handy for your own projects; I’ll brush past them in this post, but you can find a focused list here: [improving three.js performance].

Timeline Geometries

The timeline pathway & markers were both constructed with PlaneBufferGeometry, which I found to be the most efficient 2D geometry construct in three.js. The pathway used MeshPhongMaterial, as it was the least demanding material for the renderer that could still receive shadows from my avatar.

let pathway = new THREE.Mesh(
  new THREE.PlaneBufferGeometry(150, 28000),
  new THREE.MeshPhongMaterial({ color: 0x999999 })
)

pathway.rotation.x = -Math.PI / 2
pathway.receiveShadow = true
pathway.position.set(0, 50, 1000)


Unlike the pathway, the markers used MeshBasicMaterial (seen below) and were not set to receive shadows; the avatar and pathway are the only assets in the scene that receive shadows. The same instance of materialWhite is reused throughout the scene on text & SVGs for increased performance.

// single white material instance
const materialWhite = new THREE.MeshBasicMaterial({
  color: 0xffffff,
  side: THREE.DoubleSide,
})
// simple expandable array for adding marker lines
const markerArray = [
  { type: "MAIN", position: 0 },
  { type: "SMALL", position: spacing * 1 },
  { type: "MAIN", position: spacing * 2 },
  { type: "MAIN", position: spacing * 3 },
  { type: "SMALL", position: spacing * 4 },
  { type: "SMALL", position: spacing * 5 },
  { type: "SMALL", position: spacing * 6 },
  { type: "SMALL", position: spacing * 7 },
  { type: "SMALL", position: spacing * 8 },
  { type: "MAIN", position: spacing * 9 },
  { type: "MAIN", position: spacing * 10 },

// Small & Main marker PlaneBufferGeometries
const mainTimelinePoint = new THREE.PlaneBufferGeometry(35, 150)
mainTimelinePoint.rotateY(-Math.PI / 2)
const secondaryTimelinePoint = new THREE.PlaneBufferGeometry(35, 100)
secondaryTimelinePoint.rotateY(-Math.PI / 2)

for (let index = 0; index < markerArray.length; index++) {
  if (markerArray[index].type === "MAIN") {
    const mainMesh = new THREE.Mesh(mainTimelinePoint, materialWhite)
    mainMesh.position.set(0, 0, markerArray[index].position)
    scene.add(mainMesh)
  } else {
    const smallMesh = new THREE.Mesh(secondaryTimelinePoint, materialWhite)
    smallMesh.position.set(0, 0, markerArray[index].position)
    scene.add(smallMesh)
  }
}

The markers below the logos are added to the scene in a for loop on load. All meshes added to the scene use the same two instances of the SMALL or MAIN PlaneBufferGeometry. This is important: creating new geometry instances within the loop would result in a linear increase in the number of geometries in the scene, dramatically affecting performance.
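As a side note, the hand-written markerArray can also be generated programmatically from an ordered list of types. A minimal sketch in plain JavaScript; the spacing value here is hypothetical (in the real scene it comes from the layout):

```javascript
// Build the marker list from an ordered list of types instead of by hand.
// `spacing` is an assumed value for illustration only.
const spacing = 1000
const types = [
  "MAIN", "SMALL", "MAIN", "MAIN", "SMALL", "SMALL",
  "SMALL", "SMALL", "SMALL", "MAIN", "MAIN",
]
const markerArray = types.map((type, i) => ({ type, position: spacing * i }))

console.log(markerArray.length) // 11
console.log(markerArray[2]) // { type: 'MAIN', position: 2000 }
```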

Raycasting - Limiting the scene

In order to limit where my avatar could navigate, I made use of raycasting. A ray cast from the avatar towards a transparent wall / collider was used to signal when the avatar had reached the start or end of the timeline. This is demonstrated in the video below.

The code below shows the start and end colliders, which are transparent in the scene. BoxBufferGeometry was used rather than PlaneBufferGeometry, as there were edge cases where the avatar’s animation frames were not correctly clipping with the mesh. This seemed to be rectified by using a box with a negligible depth of 20.

// Colliders / Walls - Start & End

const geometry = new THREE.BoxBufferGeometry(100, 210, 20)
const material = new THREE.MeshBasicMaterial({
  opacity: 0,
  transparent: true,
})

this.colliders = []

const start = new THREE.Mesh(geometry, material)
start.position.set(0, 150, -250)

const end = start.clone()
end.position.set(0, 150, spacing * 11)

// register both walls for the raycast check
this.colliders.push(start, end)

The code below demonstrates how the raycast intersection value was used to stop the movement of the avatar via a blocked boolean. When the avatar is blocked by a collider, its animation is set to Idle via the action setter, and the camera is switched to the side-on view.
// Collision check in three.js animate() method

if (this.player.move) {
  const pos = this.player.object.position.clone()
  let dir = new THREE.Vector3()
  this.player.object.getWorldDirection(dir) // ray direction follows the avatar's facing
  if (this.player.move.forward < 0) dir.negate()
  let raycaster = new THREE.Raycaster(pos, dir)
  let blocked = false

  const intersect = raycaster.intersectObjects(this.colliders)
  if (intersect.length > 0) {
    blocked = true
  }

  if (!blocked) {
    if (this.player.move.forward > 0) {
      // forward speed
      this.player.object.translateZ(dt * 170)
    } else {
      // backward speed
      this.player.object.translateZ(dt * -120)
    }
  } else {
    // set animation to Idle
    game.action = "Idle"
    // release controls
  }
}
Avatar Animation

My avatar was made using my 3D head scans added to a body built in Blender. I built the model in a T-pose, as this allowed me to use Mixamo to automate the rigging and animation. Mixamo accepts OBJ files and exports animated files in FBX. .gltf or Draco-compressed .glb file formats are superior for web transfer due to their reduced size and binary format; however, I was unable to devise a pipeline that would convert Mixamo’s output FBX into usable .glb files, due to issues with multiple textures. I also found that, through decimation of the model in Blender, the FBX file sizes were acceptable (~5 MB). I could sacrifice detail here and go back to further decimate the model, as this would drastically cut the poly count of the scene.

The following code was used to load the initial idle.fbx animation and the subsequent walking forwards and backwards animations into animations = {};

// field variables
  animations = {};
  anims = ['Walking', 'WalkingBackwards'];
  assetsPath = './assets/';

// loading idle animation
loader.load(`${this.assetsPath}fbx/avatar/idle.fbx`, function(object) {
  object.mixer = new THREE.AnimationMixer(object)
  game.player.mixer = object.mixer

  game.player.root = object.mixer.getRoot()
  object.scale.set(80, 80, 80)
  object.position.set(0, 50, -180)
  object.traverse(function(child) {
    if (child.isMesh) {
      child.castShadow = true
      child.receiveShadow = false
    }
  })
  game.player.object = new THREE.Object3D()
  game.player.object.add(object)
  game.animations.Idle = object.animations[0]
  // call out to helper function
  game.loadNextAnim(loader)
})

// helper function loads in subsequent animations
loadNextAnim(loader) {
  let anim = this.anims.pop();
  const game = this;
  loader.load(`${this.assetsPath}fbx/anims/${anim}.fbx`, function(object) {
    game.animations[anim] = object.animations[0];
    if (game.anims.length > 0) {
      // recurse until the anims queue is drained
      game.loadNextAnim(loader);
    } else {
      delete game.anims;
    }
  });
}

The animation to be shown on the avatar was set using the following action setter, which makes use of the AnimationMixer assigned to the player object above. This setter ensures that the previous animation stops and there is a smooth transition into the next one.
set action(name) {
  const action = this.player.mixer.clipAction(this.animations[name]);
  this.player.mixer.stopAllAction(); // stop the previous animation
  this.player.actionName = name;
  action.fadeIn(0.5).play(); // fade smoothly into the new clip
}

Text & SVGs to a Single Buffer Geometry

I used Oxanium as the font for this project; a font with more curvature would have produced a much higher polycount for the scene. Three’s TextGeometry function produces 3D geometry from input text. I monitored the polycounts produced by different fonts by console logging the resulting geometry’s vertex counts. This process involved a lot of trial and error and required me to strike a balance between aesthetics and performance. Choosing a ‘square’ or ‘pixelated’ font here would result in the lowest polycount.
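To give an idea of how the comparison worked: a TextBufferGeometry stores its vertices in a position attribute, so the attribute’s count is a quick polycount proxy. A minimal sketch; vertexCount is a hypothetical helper, and the stub object below stands in for a real geometry instance:

```javascript
// Rough polycount proxy: number of vertices in the geometry's position attribute.
function vertexCount(geometry) {
  return geometry.attributes.position.count
}

// A stub standing in for a THREE.TextBufferGeometry instance (count is made up).
const stubGeometry = { attributes: { position: { count: 1284 } } }
console.log(vertexCount(stubGeometry)) // 1284
```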

The font first needs converting to three.js’s JSON format for use with FontLoader.

The following code ensures that all text loaded into the scene is contained within a single buffer geometry, reducing load on the renderer. In order to merge TextBufferGeometries you will need BufferGeometryUtils, which is not included in the core three.js build and must be added to your application separately.

let textArray = [
  {
    text: `M.Sc Computer Science`,
    xdepth: timelineDepth,
    yheight: descriptionHeight,
    zDistance: spacing * 9,
  },
  {
    text: `January 2019 - Present`,
    xdepth: dateDepth,
    yheight: dateHeight,
    zDistance: spacing * 10,
  },
  {
    text: `  Full Stack Developer
        Coming 2020`,
    xdepth: timelineDepth,
    yheight: descriptionHeight,
    zDistance: spacing * 10,
  },
]

let mergedGeometry

loaderFonts.load("./assets/fonts/Oxanium.json", function(font) {
  let geometries = {
    let geometry = new THREE.TextBufferGeometry(text.text, {
      font: font,
      size: 20,
      height: 1,

    geometry.rotateY(-Math.PI / 2)
    geometry.translate(text.xdepth, text.yheight, zOffset + text.zDistance)

    return geometry

  mergedGeometry = THREE.BufferGeometryUtils.mergeBufferGeometries(geometries)

  let mesh = new THREE.Mesh(mergedGeometry, materialWhite)


The following code ensures that multiple SVGs and their subsequent shapes are loaded into the scene is within a single buffer type geometry therefore reducing load on the renderer significantly. This was one of the best optimisations I devised for my use-case.

const svgLogoArray = [
  { filename: "xyz", yAxis: 340, zAxis: spacing * 8 + -60 },
  { filename: "UOB", yAxis: 340, zAxis: spacing * 9 + -140 },
  { filename: "defty", yAxis: 340, zAxis: spacing * 10 + -70 },

var singleLogoGeometry = new THREE.Geometry()

for (let s = 0; s < svgLogoArray.length; s++) {
  let depth = this.randomOffsetVal(150)
  let index = s
  loaderSVG.load(`./assets/svg/${svgLogoArray[s].filename}.svg`, function(
    data
  ) {
    let paths = data.paths

    for (let i = 0; i < paths.length; i++) {
      let path = paths[i]
      let shapes = path.toShapes(true)

      for (let j = 0; j < shapes.length; j++) {
        let geometry = new THREE.ShapeGeometry(shapes[j])
        let mesh = new THREE.Mesh(geometry, materialWhite)
        mesh.rotation.set(Math.PI / 2, Math.PI / 2, Math.PI / 2)
        mesh.position.set(
          depth,
          svgLogoArray[index].yAxis,
          zOffset + svgLogoArray[index].zAxis
        )
        mesh.receiveShadow = false
        mesh.castShadow = false
        // merge each shape mesh into the single shared geometry
        singleLogoGeometry.mergeMesh(mesh)
      }
    }

    var bufferGeometrySVG = new THREE.BufferGeometry().fromGeometry(
      singleLogoGeometry
    )
    var meshSVG = new THREE.Mesh(bufferGeometrySVG, materialWhite)
    scene.add(meshSVG)
  })
}

Loading Screen

Three.js Loading Screen

To ensure the user isn’t confronted by a hanging application, I implemented a loading screen with three.js’s LoadingManager. This isn’t the most accurate reflection of the assets being loaded, but it serves its purpose: displaying incoming filenames and showing the user that something is happening while the larger files are transferred. Without it, the application could appear broken on slower internet connections.

const loadingManager = new THREE.LoadingManager()

loadingManager.onLoad = function() {
  const loadingScreen = document.getElementById("loading-screen")
  loadingScreen.addEventListener("transitionend", this.onTransitionEnd)
  let elem = document.querySelector("#loading-screen")
  elem.classList.add("fade-out") // trigger the CSS transition (fires transitionend above)
}

loadingManager.onProgress = function(url, itemsLoaded, itemsTotal) {
  document.getElementById("loadingtext").innerHTML =
    "Loading file: " +
    url +
    ".\nLoaded " +
    itemsLoaded +
    " of " +
    itemsTotal +
    " files."
}

loadingManager.onError = function(url) {
  console.log("There was an error loading " + url)
}

// loadingManager instance passed to asset loaders
const loaderImage = new THREE.TextureLoader(loadingManager)
const loaderAvatar = new THREE.FBXLoader(loadingManager)
const loaderSVG = new THREE.SVGLoader(loadingManager)
const loaderFonts = new THREE.FontLoader(loadingManager)
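If you’d rather show a percentage than raw file counts, the onProgress values map straight onto one. A small sketch; progressPercent is a hypothetical helper, not part of the project code:

```javascript
// Convert LoadingManager progress counters into a whole-number percentage.
function progressPercent(itemsLoaded, itemsTotal) {
  return Math.round((itemsLoaded / itemsTotal) * 100)
}

console.log(progressPercent(3, 20)) // 15
console.log(progressPercent(20, 20)) // 100
```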

I hope these code extracts are helpful! Feel free to drop me a message or a comment if you have any questions about what I’ve detailed here, or if you’d like something explained that I’ve missed.

Happy coding!

Adam G Robinson
Crafter. Explorer. Coder. 🇬🇧