LookDev Stills
Level LookDev + Animation Loop Test
Just triggering all the animations simultaneously in a loop for demonstration purposes.
Introduction
Hello Mascot! is a product that I helped develop for Adapter Digital's portfolio, which complements their ongoing efforts to diversify the kinds of media they can deliver to their clients. I will describe my journey to getting the internship as well as give a general breakdown of the project. Let's begin!
This project began last year during my first internship at Adapter Digital, a digital agency based in Bangkok. I arrived around the time they were debuting their software product X-Sight Analysis.
The Brief
The brief went a little something like this: a real estate development project (The Forestias) would like a new, interesting way to elevate their high-end residential offerings with digital interactive experiences. Some of their recreational facilities will have mascots, and they would like a way for visitors to interact with them. The solution had to come in the form of an interactive installation.
Constraints
The installation consists of a 1080x5760-pixel (16:3) screen and a single motion capture camera. The demo running on the installation would use the Krungsri banana mascot, because the studio already had the original rig for it and there were no copyright issues with the character. Lastly, the technology we had to use was Unity.
Pre-Production
Pre-production was important because my internship was short, so I had to really nail the scope and timeline for this project to a tee. I had about 4 weeks to create the visuals and hook up the mechanics driven by the mocap camera. No pressure, am I right? I couldn't find all of the pre-production material I prepared, but here are some screenshots I did find.
The initial idea was something more generic, like a Talking Tom format (if you remember that mobile game), where you would be able to ask questions and trigger events like emotes. This idea was safe, but it wasn't very innovative. So I developed it into an experience where you interact with the mascot in their own little world, with the screen acting as a window of sorts. The fun or gameplay loop was not much of a concern; we wanted the world to feel like an interactive sandbox to drive the fun.
As the weeks went on, I was not able to provide enough variety of interactive elements in the world to really drive that vision home because of time, but thank goodness it's a demo, and the outcome still demonstrates that vision's potential to customers.
Production
Character
I was given the source files for Krungsri's banana character. I immediately realized that the rig provided would not be suitable for the project: its topology, UVs, and polygonal density made it suboptimal for rendering in real time on a large screen. So I had to make my own.
Source Character Rig
Low Poly Model + Rig + Run Animation
High Poly Sculpt
Baked Normals and ID Mask
LookDev
Face Shader
What is interesting about the character's shaders is the one that drives the character's face. The scheduling of the blinks and mouth shapes is driven entirely by wave functions. The CPU can interrupt at any time and override a specific eye + mouth expression on an event. Each mouth and eye expression is indexed, and multiple sine, cosine, and modulo functions are manually tuned and combined to get the desired mouth and eye index over time. The facial expression shader is then composited onto the body's base yellow claymation shader. Maybe in a future article I will write in detail about how I made the shaders for the character.
Eyes
Mouths
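To give a rough idea of the wave-function scheduling, here is a minimal C# sketch of the same logic. In the actual project this math lives inside the face shader; the property names, expression counts, and tuning values below are placeholders for illustration, not the real ones.

```csharp
// Illustrative sketch only: in the project this scheduling runs in the face
// shader, and the CPU just pushes an override index when an event fires.
using UnityEngine;

public class FaceExpressionDriver : MonoBehaviour
{
    [SerializeField] Renderer faceRenderer;        // renderer using the face shader
    [SerializeField] int mouthExpressionCount = 6; // placeholder count

    int overrideEye = -1;   // -1 means "no override, let the waves schedule it"
    int overrideMouth = -1;

    void Update()
    {
        float t = Time.time;

        // Blink: a sharply peaked sine wave selects the "closed eyes" index
        // for a brief moment every couple of seconds.
        int eye = Mathf.Pow(Mathf.Abs(Mathf.Sin(t * 1.3f)), 16f) > 0.5f ? 1 : 0;

        // Mouth: slower combined sin/cos waves, wrapped with modulo, cycle
        // through the indexed mouth shapes over time.
        float wave = Mathf.Sin(t * 0.7f) + 0.5f * Mathf.Cos(t * 2.1f);
        int mouth = Mathf.FloorToInt(Mathf.Repeat(wave * 2f + t * 0.25f, mouthExpressionCount));

        // Gameplay events can interrupt and override the schedule.
        if (overrideEye >= 0) eye = overrideEye;
        if (overrideMouth >= 0) mouth = overrideMouth;

        // "_EyeIndex" / "_MouthIndex" are hypothetical shader properties.
        faceRenderer.material.SetFloat("_EyeIndex", eye);
        faceRenderer.material.SetFloat("_MouthIndex", mouth);
    }

    public void SetOverride(int eye, int mouth) { overrideEye = eye; overrideMouth = mouth; }
    public void ClearOverride() { overrideEye = -1; overrideMouth = -1; }
}
```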
Environment
Concept
So I made a sketch. It isn't amazing by any means, but it provides guidance on what props, shaders, and materials the scene will need, as well as the loose placement and overall composition. It also gives me an opportunity to decide where to put triggers in the level for the interactive events.
Blockout
Blocking out the scene is super duper important. It allows me to use proxies to establish the volume, proportions, and composition of the scene early on. That way, when I start creating the props, I have a better understanding of scale, variation, level of detail, etc., and when it's time to set dress, the outcome is predictable. Blockouts also work as a sandbox for programming the gameplay because they abstract away all the visual noise and non-essential objects.
Timelapse
Like all kinds of visual media, it's a highly iterative process. I had to really trust the process and power through the not-so-flattering early stages of the environment's look development. What really transformed the scene was set dressing, obviously, but not so obviously, the post-processing.
Set Dressing
I export the blockouts out of the engine and bring them into my DCC as reference for modeling the props. The workflow for all the props is quite typical industry-standard stuff.
Materials
Workflow
I used a high-poly to low-poly workflow. I first model/sculpt the high-poly, then model a low-poly. I use the low-poly meshes to bake normals, cavity, ambient occlusion, alpha, color ID, and whatever else I foresee needing for creating materials and shaders. I do most of my material authoring in Substance Designer. As for shaders, I use Unity's HDRP Lit and Unlit shaders as a base and build extra visual features around them.
Authoring
I use Substance Designer for authoring materials because it is totally procedural, meaning all material and pattern definitions are parametric. This is great because it allows me to reuse node groups across many assets with different UVs and color IDs. It also means making changes is easier and less time-consuming compared to a painting approach. Lastly, the target output resolution of the texture maps can be changed without degrading quality.
There are some things that are just more intuitive to do painterly; when that happens, I use Substance Painter.
Reference
I was heavily inspired by stylized platformer games, some of which I even grew up on, such as LittleBigPlanet.
What I liked about the art direction of LittleBigPlanet is that you can immediately read exactly what each material is, which is not true of all stylized visuals. The cardboard, plastic, felt cloth, stitching, metallic joinery, and so on: every surface had a material description comprehensive enough to communicate to the player exactly what an object was made of. It really grounded the visuals you saw on screen in reality, emphasizing this notion of a window into something that is fiction but grounded in real life. I wanted to incorporate a similar level of tactility into my materials to give that same window-into-a-shoebox effect.
Base Materials
I made a set of base materials in Substance Designer. These base materials were used for the props and the landscape. Since all the materials are parametric, changing colors and other material properties could be done per prop.
Base Material Graphs (Substance Designer)
Hero Props and VFX
Flowers
Coconut Tree
Fish
Rocks
Statue
God Rays
Bees
Primary Props
Hills
Clouds
Secondary Props
Shaders
Vegetation
The tree leaves were just convex geometry on top of a trunk mesh, nothing too special. For the grass I used polygonal cards, and for the ivy I used polygonal strips. I leveraged the baked maps produced from the high-poly meshes to provide normals and alphas for the low-poly cards. I also baked other maps, such as thickness and curvature, for more advanced shader effects like subsurface scattering.
Wind Shader For Vegetation and Clouds
Vegetation
Clouds
River Shader
Post-Processing
Post-processing was really key to getting the final look. Since Unity's HDRP is physically based, the default tonemapped image has a tendency to desaturate the intended diffuse colors of the objects. For the kind of look I was going for (vibrant, with lots of lifted shadows), it was essential to color grade. You can see the difference it makes in the timelapse: there is a transition where everything has this ugly yellow cast and feels really dull, then all of a sudden it's bright and vibrant.
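As a hedged illustration of the kind of grade involved, here is a small sketch that builds an HDRP color-grading volume from code. The actual grade was tuned by hand in a Volume profile inside the editor; the values below are placeholders, not the ones used in the demo.

```csharp
// Sketch only: creates a global HDRP volume with saturation pushed back up
// and shadows lifted. All numbers are placeholder values.
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public static class LookDevGrade
{
    public static Volume CreateGradeVolume()
    {
        var go = new GameObject("Global Color Grade");
        var volume = go.AddComponent<Volume>();
        volume.isGlobal = true;

        var profile = ScriptableObject.CreateInstance<VolumeProfile>();
        volume.profile = profile;

        // Restore saturation lost to tonemapping and warm the image slightly.
        var color = profile.Add<ColorAdjustments>(true);
        color.saturation.Override(15f);
        color.colorFilter.Override(new Color(1.0f, 0.98f, 0.95f));

        // Lift the shadows for the soft, toy-like look.
        var lgg = profile.Add<LiftGammaGain>(true);
        lgg.lift.Override(new Vector4(1f, 1f, 1f, 0.05f));

        return volume;
    }
}
```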
Dialogue Boxes
Gameplay Programming
For implementing the game mechanics, I had someone in the lab to assist me: P'Kiew. He is an experienced C# Unity developer. I was so lucky to have him help and mentor me. He contributed by hooking up the mocap, the core game systems, and the game mechanics in the final week of the internship (week 4).
Design Pattern
The pattern we mostly used was the singleton. We used singletons as managers, each taking responsibility for a certain aspect of the virtual world. Nothing here was super fancy, as Unity handles a lot of the complicated logic for us, such as physics, triggers, and colliders. Our focus was mostly on hooking up routines and animations based on player input or triggered events in the level. The challenge was mostly tuning parameters after play tests.
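Here is a minimal sketch of that manager-singleton shape; the class name and method are illustrative, not the actual managers from the project.

```csharp
// Minimal Unity manager-singleton sketch. Other systems call into the
// manager instead of holding references to each other.
using UnityEngine;

public class InteractionManager : MonoBehaviour
{
    public static InteractionManager Instance { get; private set; }

    void Awake()
    {
        // Keep a single instance alive for the lifetime of the scene.
        if (Instance != null && Instance != this) { Destroy(gameObject); return; }
        Instance = this;
    }

    // Example entry point a level trigger or the pose bridge might call.
    public void TriggerEmote(string emoteName)
    {
        Debug.Log($"Emote triggered: {emoteName}");
    }
}
```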
Pose Estimation (Google’s API)
We leveraged Google's MediaPipe API for body pose detection and estimation, integrating it into Unity's C# .NET environment. This integration allowed us to use key joint positions, such as the wrists and shoulders, to dynamically control various aspects of the virtual world. For instance, we used these joint positions to control the horizontal movement of the character, adjust the levitation strength of the toppled rocks, and trigger in-game dialogue with the character.
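As a rough, hedged sketch of how a tracked body position can drive the mascot, here is what the horizontal-movement hookup might look like. The component, field names, and mapping are assumptions for illustration, not the project's actual MediaPipe bridge.

```csharp
// Illustrative sketch: maps a normalized body position (as a pose backend
// might report it) to the mascot's horizontal position in the world.
using UnityEngine;

public class PoseDrivenMovement : MonoBehaviour
{
    [SerializeField] Transform character;
    [SerializeField] float worldHalfWidth = 8f; // how far the mascot can wander left/right

    // Called each frame with the tracked person's body center in normalized
    // screen space (0..1), however the pose backend provides it.
    public void OnPoseUpdated(Vector2 bodyCenterNormalized)
    {
        // Map the player's position in front of the screen to the mascot's
        // x position, smoothed so the motion doesn't jitter.
        float targetX = (bodyCenterNormalized.x - 0.5f) * 2f * worldHalfWidth;
        Vector3 pos = character.position;
        pos.x = Mathf.Lerp(pos.x, targetX, Time.deltaTime * 5f);
        character.position = pos;
    }
}
```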
Play Testing
We did lots of informal play tests, letting people in the office (or myself) test the current build of the project to develop features and iron out bugs.
Conclusion
Wish there was more time.
My internship, being only as long as my summer break (4 weeks), was admittedly short. This is because it was an independent internship and not a university-sponsored one. However, I was surprised by how much was accomplished in the time I had. I wish I had more time to polish the features and turn it into a game with a fun gameplay loop.
I wish I had spent more time talking to the team. Everybody in the innovation/design lab at Adapter Digital was so incredibly kind and welcoming. I couldn't have asked for a better internship experience ❤️.