Conference · Haptics · Multimodality · Sonification

Got a new paper published, on the effects of auditory and haptic feedback on gaze behaviour!


About a month ago I wrote a blog post about a conference paper with the title “An Exploratory Study on the Effect of Auditory Feedback on Gaze Behavior in a Virtual Throwing Task With and Without Haptic Feedback”, which had just been accepted for the Sound and Music Computing 2017 conference. Now, that paper has been formally published! You can find our paper here and the full conference proceedings here. The study leader, Emma Frid, presented the paper last Thursday (6/7 2017) afternoon in Espoo, Finland. The other authors are Roberto Bresin, Eva-Lotta Sallnäs Pysander and I.

As I wrote in the earlier blog post, this particular paper is based on a small part of an extensive experiment. The experiment, in which 20 participants took part, was based on a simple task: picking up a ball and throwing it into a goal area on the opposite side of a virtual room. The task was completed after 15 hits. The same task was solved in several different conditions, some of which included haptic rendering and some of which included movement sonification of the throwing gesture (two different sound models were compared). During all interaction with the interface, different parameters, including gaze data collected through an eye-tracker, were continuously logged. In the part of the experiment on which the published paper is based, we wanted to find out whether the participants’ visual focus in the interface changed depending on experiment condition (e.g. whether participants looked more at the goal when haptic and/or auditory feedback was presented). Due to poor quality of the sampled gaze data for some of the participants (fewer than 80% of the gaze points had been registered), only gaze data from 13 participants could be used in the analysis.
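To make the data-quality criterion concrete, here is a minimal sketch of how such a filter could be implemented. This is not the analysis code from our study; the data layout, the 'valid' column and the function name are assumptions for illustration only.

```python
import pandas as pd

# Assumed threshold from the study: keep participants with at least 80%
# registered gaze points.
VALIDITY_THRESHOLD = 0.8

def filter_participants(gaze_logs: dict) -> dict:
    """Keep only participants whose share of valid gaze samples meets the threshold.

    `gaze_logs` (hypothetical layout) maps a participant id to a DataFrame
    with a boolean 'valid' column marking whether the eye-tracker registered
    that sample.
    """
    kept = {}
    for participant, samples in gaze_logs.items():
        valid_ratio = samples["valid"].mean()  # fraction of registered gaze points
        if valid_ratio >= VALIDITY_THRESHOLD:
            kept[participant] = samples
    return kept
```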

Largely due to large inter-subject variability, we did not get any significant results this time around, but some interesting patterns emerged. Results indicated, for example, that participants fixated fewer times on the screen when solving the task in the visual/audio conditions than in the visual-only condition, and fewer times in the visual/haptic/audio conditions than in the visual/haptic condition. The differences between haptic conditions were, however, small, especially for one of the sound models, which presented a swishing sonification of the throwing gesture. When considering total fixation duration (how long the participants focused on the screen), the tendency was that participants focused less on the screen when this sound model was used (the indications were stronger when haptic feedback was not provided). Even though these results were not significant, they indicate that movement sonification may have an effect on gaze behaviour. When looking at gaze behaviour for each participant individually, we could also see that the participants could be divided into a few clusters showing similar behaviour. Although the large inter-subject variability made it impossible to find any general patterns, we could find indications of effects of auditory feedback within the clusters. See the article linked above for a more detailed analysis, illustrations and discussion.
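For readers curious about how participants might be grouped by gaze behaviour, the sketch below shows one simple way to cluster per-participant fixation metrics. It is not the clustering procedure used in the paper; the metrics, values and number of clusters are made-up illustrations, and in practice the metrics would typically be standardised before clustering.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-participant gaze metrics: each row is a participant,
# columns are (number of fixations on screen, total fixation duration in seconds).
fixation_metrics = np.array([
    [42, 18.3],
    [57, 25.1],
    [38, 15.9],
    [61, 27.4],
    [45, 19.8],
])

# Group participants with similar gaze behaviour into a small number of clusters.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(fixation_metrics)
print(labels)  # cluster assignment per participant
```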

Even though we did not get any significant results, the indications we got that movement sonification can affect visual focus are still interesting. If it is true that you look more at the screen when you do not have access to movement sonification, this could mean that you can focus on other parts of an interface, perhaps solving different tasks in parallel, when movement sonification is available in this kind of environment. It is definitely worth conducting similar studies with many more participants in order to see whether the indications we got would become significant. Experiments with more users could also show whether participants focus more on the goal when they have access to movement sonification and/or haptic feedback; if so, this would indicate that the information provided by haptic and audio feedback, respectively, is enough to tell you that you are performing an accurate throwing gesture (you don’t need to look at the ball to confirm it). Results from interviews held at the end of the test sessions already indicate this!

This is the very first paper Eva-Lotta and I have had accepted to the Sound and Music Computing conference. Emma and Roberto, however, have had papers accepted to this conference numerous times. Check out their ResearchGate profiles for their earlier contributions to the conference, and much more related to, for example, sound design.
