Research Blog

Ethan Robinson, Edinburgh Napier University


SOVRA: Subjective Orientation in VR Audio

SOVRA (Subjective Orientation in VR Audio) establishes a method for capturing listeners’ experiences of spatial audio in VR. Supported by a Creative Informatics Small Research Grant, Research Associate Ethan Robinson reflects on the outcomes of the project.


This project investigated how binaural presentation of sound in a Virtual Reality (VR) environment could be effectively captured and analysed. The sounds played to the participants were processed using a Head Related Transfer Function (HRTF) model, which used head size and the distance between the ears to emulate how sound is filtered by the head and ears before it reaches the brain. The project builds on a previous study conducted by Dr Julián Villegas and Camilo Arevalo from the University of Aizu. These experiments have been repeated at Edinburgh Napier University by Ethan Robinson and Dr Iain McGregor. Comparing and contrasting the results from both studies will improve the efficacy of the project, which will be further extended by an identical study in Colombia. The results of the project have many applications: by attaching spatial audio cues to the important aspects of a situation, the awareness of those who need them can be improved. In circumstances where spatially identifying the source of a sound is paramount, such as for the emergency services, the results of this project could be used to provide end-users with enhanced auditory awareness.
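Full HRTF convolution is beyond the scope of this post, but the two cues such a model relies on, interaural time and level differences, can be illustrated with a simplified spherical-head sketch. The function names, the Woodworth head-radius approximation, and the equal-power gain law below are illustrative assumptions, not the processing actually used in the study:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature


def interaural_time_difference(azimuth_deg, head_radius=0.0875):
    """Woodworth's spherical-head approximation of the ITD in seconds.

    Positive azimuth means the source is to the listener's right,
    so the left ear receives the wavefront later.
    """
    theta = np.radians(azimuth_deg)
    return (head_radius / SPEED_OF_SOUND) * (theta + np.sin(theta))


def render_binaural(mono, azimuth_deg, sample_rate=44100, head_radius=0.0875):
    """Pan a mono signal to stereo using an ITD delay plus a simple
    equal-power level difference (a crude stand-in for an ILD)."""
    mono = np.asarray(mono, dtype=float)
    itd = interaural_time_difference(azimuth_deg, head_radius)
    delay_samples = int(round(abs(itd) * sample_rate))

    pan = np.sin(np.radians(azimuth_deg))  # -1 = full left, +1 = full right
    left_gain = np.sqrt((1.0 - pan) / 2.0)
    right_gain = np.sqrt((1.0 + pan) / 2.0)

    delayed = np.concatenate([np.zeros(delay_samples), mono])
    padded = np.concatenate([mono, np.zeros(delay_samples)])
    if azimuth_deg >= 0:  # source on the right: delay the left ear
        left, right = left_gain * delayed, right_gain * padded
    else:                 # source on the left: delay the right ear
        left, right = left_gain * padded, right_gain * delayed
    return np.stack([left, right], axis=0)
```

A real HRTF additionally encodes pinna and torso filtering, which is exactly the individual variation the project found to matter: a delay-and-gain model like this cannot capture it.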

Key actions and events

Fifty participants were recruited from staff and students at Edinburgh Napier University. During each session, the procedure was explained before the practice session began. In the practice part of the experiment, the participant was given the VR headset and headphones to wear, followed by the controller (see Fig. 1). A brief demonstration of how the experiment operates was then given, so that the participant understood the procedure. The project was presented at the Creative Informatics small grants and PhD RA showcase in October 2022 as a short talk on the project’s impact, methods, and initial findings. At Edinburgh Napier University’s Driving Forward research exhibition, the SOVRA project was exhibited for attendees to interact with. The three-day event was open to the public, industry, and local schools.


This project was conducted in collaboration with the University of Aizu, Japan, where the experiment was completed with fifty-four participants. At Edinburgh Napier, we have conducted the experiments with fifty participants drawn from a wider demographic range, to capture a more representative sample of the population. The project will be repeated at a university in Colombia, where an additional fifty participants will take part in the experiment.


As there was no indication of where the VR headset should be centred for the experiment, the first eight experiments were completed with a half-metre bias to the left. This set of data made the analysis more complicated for the researcher. For ease of analysis, the bias was rectified from participant 9 onwards, with the VR headset centred on the position of the user’s head during the experiment.

Discussions with Aizu were short and irregular, due to the time difference and the busy schedules of all involved, so the method for the experiment had to be largely derived from the information received. This mainly consisted of setting up the hardware and software to work in conjunction and to record the results of the experiment correctly. To reduce the impact of bias, the first forty participants were instructed in how to use the equipment, with examples of how to draw specific paths that they might hear. Upon discussion with the University of Aizu, it was discovered that they had been providing corrections to participants during the practice stage of the experiment. To match this, corrections were then provided to participants 41 through 50.

A person in a VR headset manipulating VR paddles.
Figure 1: Participant drawing their perception of the sound’s trajectory.

Key findings and impact

Most of the participants remarked that they did not hear much, if any, of the spatial movement of the sounds in the experiment. Many disclosed that they used the pitch changes to identify where they believed the trajectory should have been. Others simply did not hear the sounds outside of their head at all, perceiving them as mono, non-spatialised sounds. Figures 2, 3, 4, and 5 contain the results of the practice session and experiment for all fifty participants. There are discrepancies in the results: the common correlations between the trajectories do not provide an accurate model confirming the experiment’s hypothesis. After discussions with the partners, it was identified that the HRTF models used to process the sounds were largely unsuccessful for the participants at Edinburgh Napier University. Whilst the results of the project could be utilised in Japan, they would not be suitable for general use in the UK. Further experiments should be conducted with different HRTF models to determine the suitable HRTF range for this audience.

Multiple red, yellow and grey charts showing data results from experiments.
Figure 2: Napier results for the practice session as viewed from the left-hand side.


Charts showing traces of participants' drawings.
Figure 3: Napier results for the practice session as viewed from above.
Charts showing traces of participants' drawings.
Figure 4: Napier results for the experiment as viewed from the left-hand side.
Charts showing traces of participants' drawings.
Figure 5: Napier results for the experiment as viewed from above.

Next steps and future work

The experiment will be repeated at other universities elsewhere in the world, providing additional results and allowing a much larger data set to be analysed. Follow-on grant applications will be developed in collaboration with the University of Aizu. As a method of capturing spatial listening experiences the approach shows a lot of promise, and could possibly be utilised to generate HRTFs, as well as to capture other aspects associated with listening and hearing (see Figs 6 and 7).

A VR image of a person's head surrounded by a wavy line with partial text on a screen in front of them.
Figure 6: A screen capture of the trajectory drawn by the participant in the practice session.
A VR image of a person's head with a line extending in front of them and partial text on a screen in front of them.
Figure 7: A screen capture of the trajectory drawn by the participant in the experiment.