Vision by sound

Konstantinos Egkarchos / June 4, 2015

So how do you do something like this?

“A narrative horror adventure featuring a young blind woman who must solve mysteries and escape a deadly presence, all without sight.”

“Perception is a first-person narrative horror adventure that tells the story of Cassie, a blind heroine who uses her extraordinary hearing and razor-sharp wits to unravel the mysteries of an abandoned estate that haunts her dreams.”

I had seen a similar implementation in a university project, but it seems to have advanced a lot since then. The game itself looks pretty neat, and if you are into this style of horror with no weapon to protect yourself, I advise you to back it. As always, the gears in my mind started turning: “How do you do such a thing?” Perception is made with Unreal Engine, but since I come from the Unity3D environment, I will unravel my thinking with that toolset in mind.

Let’s start with the simple things. The wind is just a simple emissive particle, with a gradient color from black to blueish and back to black. It adds a lot to the aesthetic of the game and looks pretty realistic too! I shiver just looking at it.

While the wind is blowing it’s supposed to make a sound and “light up” the objects it surrounds. It also seems that Cassie, our protagonist, has a cane that she can extend and tap on the ground to make a noise and “light” the objects around it. Now, that is a bit tricky.

My initial thought was a shader. Every material should be black by default and should hold the position where the sound was spawned. Then the offset of the soundwave can be calculated by measuring how much time has passed. Simple in theory, but how is it implemented? I’m a bit of a noob when it comes to shaders, but the logic fits and this should work. I’m looking forward to hearing your feedback on optimisation.

Here is the sample project.

The scene includes a walled room and a spaceship. There are two files of interest here. First, let’s take a look at the shader. The vertex function calculates the clip-space position of the vertex as well as its world position, and passes both to the fragment function.

struct v2f {
  float4 pos : SV_POSITION; // clip-space position
  float4 wp : TEXCOORD0;    // world-space position
};

v2f vert(float4 v : POSITION) {
  v2f o;
  o.pos = mul(UNITY_MATRIX_MVP, v); // model space -> clip space
  o.wp = mul(_Object2World, v);     // model space -> world space
  return o;
}

The fragment function calculates the distance of the pixel from the position where the sound was emitted, then computes a weight for blending the two colors (the default black and the highlight). I multiplied the sound time offset by 4 to make the wave travel faster, subtracted that from the distance, and took the absolute value to find how close the pixel is to the soundwave front. Dividing by 0.2 makes the soundwave thinner, and clamping keeps the value in the 0 to 1 range, which is what matters for blending. Finally, the pixel color is returned with the lerp function.

float4 _SoundPos;                      // world-space sound origin, set from C#
float _SoundTimeOffset;                // seconds since the sound spawned
fixed4 _HighlightColor, _DefaultColor; // highlight and default (black) colors

fixed4 frag(v2f i) : COLOR {
  float dist = distance(i.wp, _SoundPos);
  // Wavefront radius grows at 4 units/s; /0.2 thins the band; clamp to [0,1].
  float weight = clamp(abs(dist - (_SoundTimeOffset * 4)) / 0.2, 0, 1);
  return lerp(_HighlightColor, _DefaultColor, weight);
}

Then, in the main.cs script, which is attached to my character, I set some global shader variables. When the player spawns a sound, I save its world position and the timestamp at which it spawned; then, in Update, I use the elapsed time since that timestamp to compute the soundwave offset and pass it to the shader.
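To make that concrete, here is a rough sketch of what such a script could look like. The shader property names match the ones above; the class name, input handling, and the idea of tapping with the spacebar are my own assumptions, not necessarily what the sample project does.

```csharp
using UnityEngine;

// Sketch of the main.cs idea described above (assumed details noted inline).
public class Main : MonoBehaviour {
    float soundSpawnTime; // Time.time when the last sound was spawned

    void SpawnSound(Vector3 position) {
        soundSpawnTime = Time.time;
        // World position the soundwave expands from.
        Shader.SetGlobalVector("_SoundPos", position);
    }

    void Update() {
        // Assumption: tap the cane with Space, spawning a wave at the player.
        if (Input.GetKeyDown(KeyCode.Space))
            SpawnSound(transform.position);

        // Seconds since the sound spawned; the shader scales this by 4
        // to get the current wavefront radius.
        Shader.SetGlobalFloat("_SoundTimeOffset", Time.time - soundSpawnTime);
    }
}
```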

There are lots of things one could do to improve it, like adding a fade at the end so that the soundwave doesn’t travel infinitely.
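For example, the fade could be done in the fragment function by dimming the highlight color over the sound’s lifetime before the lerp. This is just a sketch; `_SoundDuration` is a hypothetical extra uniform, not something in the sample project.

```hlsl
// Hypothetical: fade the highlight to black over _SoundDuration seconds,
// so the wave dies out instead of travelling forever.
float fade = saturate(1 - _SoundTimeOffset / _SoundDuration);
return lerp(_HighlightColor * fade, _DefaultColor, weight);
```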