Soundstage depth in video games
Hello, friends! In this short post I want to share a thought that occurred to me about how depth is created in the soundstage of a 3D scene. First of all, I'm addressing sound engineers and video game developers; I would be glad to hear your expert opinion.
As we all know, the sound around us is shaped by a number of parameters; the main ones are the frequency response of the source, the environment, and the direction of arrival. There are already many technologies that emulate surround sound in headphones, and a number of companies offer solutions that account for the delay between the left and right ears, as well as the changes in frequency response at different positions of the head relative to the sound source.

But one parameter that seems key to me is missing: the propagation delay from the source to the listener, that is, the time it takes the sound wave to travel from the source to the listener. As a rule, we hear a distant object at a lower volume, but without the corresponding time delay. Of course, visual contact with the source is needed for the delay to be noticeable, but it seems to me that such delays would add depth to the scene. What do you think?
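To make the idea concrete, here is a minimal sketch of what I have in mind: on top of the usual distance attenuation, the playback of a sound is shifted by the time it takes the wave to cover the distance. The sample rate, the attenuation law, and the function name are my own assumptions for illustration, not taken from any particular engine.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air at ~20 °C
SAMPLE_RATE = 44_100     # Hz, assumed output sample rate

def apply_propagation_delay(buffer: np.ndarray, distance_m: float) -> np.ndarray:
    """Delay a mono buffer by distance_m / SPEED_OF_SOUND seconds,
    in addition to simple inverse-distance attenuation."""
    delay_seconds = distance_m / SPEED_OF_SOUND
    delay_samples = int(round(delay_seconds * SAMPLE_RATE))
    # Clamp the distance so sources closer than 1 m are not boosted.
    gain = 1.0 / max(distance_m, 1.0)
    silence = np.zeros(delay_samples, dtype=buffer.dtype)
    return np.concatenate([silence, buffer * gain])

# Example: an explosion 340 m away arrives roughly one second after the flash.
explosion = np.random.uniform(-1.0, 1.0, SAMPLE_RATE).astype(np.float32)
delayed = apply_propagation_delay(explosion, 340.0)
extra = len(delayed) - len(explosion)
print(f"Added delay: {extra} samples ({extra / SAMPLE_RATE:.3f} s)")
```

This is exactly the lightning-and-thunder effect: the player sees the event immediately, and the sound catches up a moment later depending on how far away it happened.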