Cognition · conference · Haptics · Multimodality

Preparing submissions for the SweCog 2017 conference, held at Uppsala University!


This week, I’m preparing submissions for this year’s edition of the SweCog (Swedish Cognitive Science Society) conference, which covers a broad range of topics related to cognitive science. When I participated last year, at Chalmers in Gothenburg, I did not present anything (in fact, none of the participants from Uppsala University did), but the situation this year is quite different since Uppsala University is hosting the event!

I really enjoyed last year’s conference, largely due to the wide variety of topics covered and the very interesting keynote lectures. It was also (and still is, I assume) a single-track conference, meaning that you do not have to choose which paper session to attend. As I remember it, there were ten paper presentations in total, three keynote lectures and one poster session during the two-day conference. You can read more about my experiences from SweCog 2016 in this blog post, summing up that event. I also wrote summaries from day 1 and day 2.

Since the only thing required is an extended abstract of 1-3 pages (and max 500 words), I’m working on several submissions. A topic that was not covered during last year’s conference was collaboration in multimodal environments, and specifically how different combinations of modalities can affect communication between two users solving a task together. Since that is one of my main research interests, I now see my chance to contribute! The deadline for extended abstract submissions to SweCog 2017 is September 4, so there is still plenty of time to write. The conference will be held October 26-27 at Uppsala University. Since registration is free for SweCog members (membership is also free), I expect to see many of my old KTH colleagues at Uppsala University during the conference days! 😉  You can find more information about the conference here.

Before I started planning contributions to SweCog 2017, I invited some of my “multimodal colleagues” from KTH to join the writing process. As a result, Emma Frid and I will collaborate on an extended abstract about a follow-up to the study I present here. Our contribution will focus on how multimodal feedback can affect visual focus when two users are solving a task together in a collaborative virtual environment. Since I have not yet heard from any other colleague, I plan to write another extended abstract on my own, about how multimodal feedback (or rather, combinations of visual, haptic and auditory feedback) can affect the way users talk to each other while working in collaborative virtual environments. Maybe I will also throw in a third one, about the potential of haptic guiding functions (see this blog post for an explanation of the concept) in situations where sighted and visually impaired users collaborate.

 

conference · Haptics · Multimodality · sonification

Got a new paper published, on the effects of auditory and haptic feedback on gaze behaviour!


About a month ago I wrote a blog post about a conference paper titled “An Exploratory Study on the Effect of Auditory Feedback on Gaze Behavior in a Virtual Throwing Task With and Without Haptic Feedback”, which had just been accepted for the Sound and Music Computing 2017 conference. Now, that paper has been formally published! You can find our paper here and the full conference proceedings here. The study leader, Emma Frid, presented the paper last Thursday (6/7 2017) afternoon in Espoo, Finland. The other authors are Roberto Bresin, Eva-Lotta Sallnäs Pysander and I.

As I wrote in the earlier blog post, this particular paper is based on a small part of an extensive experiment. The experiment, in which 20 participants took part, was based on a simple task: picking up a ball and throwing it into a goal area at the opposite side of a virtual room. After 15 hits, the task was considered solved. The same task was solved in several different conditions, some of which included haptic rendering and some of which included movement sonification of the throwing gesture (two different sound models were compared). During all interaction with the interface, different parameters, including gaze data collected through an eye-tracker, were continuously logged. In the part of the experiment on which the published paper is based, we wanted to find out if the participants’ visual focus in the interface changed depending on experiment condition (e.g. if participants looked more at the goal when haptic and/or auditory feedback was presented). Due to poor quality of the sampled gaze data for some of the participants (less than 80% of the gaze points had been registered), only gaze data from 13 participants could be used in the analysis.
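To make the exclusion criterion above concrete, here is a minimal sketch of that kind of quality filter: a participant’s recording is kept only if at least 80% of the gaze samples were actually registered by the eye-tracker. The function names and the data layout are my own illustrative assumptions, not code from the study.

```python
# Hypothetical sketch of a gaze-data quality filter: keep a participant
# only if at least 80% of the expected gaze samples were registered.

QUALITY_THRESHOLD = 0.8

def valid_ratio(samples):
    """Fraction of gaze samples the eye-tracker actually registered.

    `samples` is a list of (x, y) tuples, with None marking a sample
    the tracker failed to register (e.g. during a blink).
    """
    if not samples:
        return 0.0
    registered = sum(1 for s in samples if s is not None)
    return registered / len(samples)

def filter_participants(recordings, threshold=QUALITY_THRESHOLD):
    """Keep only participants whose gaze data meets the threshold."""
    return {
        pid: samples
        for pid, samples in recordings.items()
        if valid_ratio(samples) >= threshold
    }

# Example: "p2" loses too many samples and is excluded from analysis.
recordings = {
    "p1": [(0.1, 0.2), (0.3, 0.4), (0.5, 0.6), (0.2, 0.2), (0.1, 0.1)],
    "p2": [(0.1, 0.2), None, None, (0.5, 0.6), None],
}
kept = filter_participants(recordings)
```

In the actual study this kind of screening reduced the sample from 20 to 13 participants; real eye-tracker logs typically flag invalid samples with vendor-specific validity codes rather than `None`.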

Largely due to large inter-subject variability, we did not get any significant results this time around, but some interesting patterns emerged. Results indicated, for example, that participants fixated fewer times on the screen when solving the task in the visual/audio conditions than in the visual-only condition, and fewer times in the visual/haptic/audio conditions than in the visual/haptic condition. The differences between haptic conditions were, however, small, especially for one of the sound models, which presented a swishing sonification of the throwing gesture. When considering total fixation duration (how long the participants focused on the screen), the tendency was that participants focused less on the screen when this sound model was used (the indications were stronger when haptic feedback was not provided). Even though these results were not significant, they indicate that movement sonification has an effect on gaze behaviour. When looking at gaze behaviour for each participant individually, we could also see that the participants could be divided into a few clusters in which they showed similar behaviour. Although the large inter-subject variability made it impossible to find general patterns, we could find indications of effects of auditory feedback within the clusters. See the article linked above for a more detailed analysis, illustrations and discussion.
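As a purely illustrative aside, here is one simple way participants could be grouped into behavioural clusters like those mentioned above, using the relative change in screen-fixation count when audio is added as the feature. Both the feature and the thresholds are my own assumptions for the sketch; this is not the clustering analysis used in the paper.

```python
# Illustrative sketch: group participants by how their screen-fixation
# count changes between a visual-only and a visual/audio condition.
# Feature choice and thresholds are assumptions, not the paper's method.

def audio_effect(fix_visual_only, fix_visual_audio):
    """Relative change in fixation count when audio feedback is added."""
    return (fix_visual_audio - fix_visual_only) / fix_visual_only

def cluster_participants(fixations, threshold=0.15):
    """Assign each participant to a coarse behavioural cluster."""
    clusters = {"fewer_fixations": [], "more_fixations": [], "no_change": []}
    for pid, (v_only, v_audio) in fixations.items():
        effect = audio_effect(v_only, v_audio)
        if effect <= -threshold:
            clusters["fewer_fixations"].append(pid)
        elif effect >= threshold:
            clusters["more_fixations"].append(pid)
        else:
            clusters["no_change"].append(pid)
    return clusters

# (visual-only count, visual/audio count) per participant, made-up data
fixations = {"p1": (40, 28), "p2": (35, 36), "p3": (50, 62)}
clusters = cluster_participants(fixations)
```

With large inter-subject variability, looking for effects within such clusters (rather than across the whole sample) is one way weak but consistent patterns can still show up.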

Even though we did not get any significant results, the indications that movement sonification can affect visual focus are still interesting. If it is true that you look more at the screen when you do not have access to movement sonification, this could mean that, with movement sonification available, you can focus on different parts of an interface, and maybe solve different tasks in parallel, in this kind of environment. It is definitely worth conducting similar studies with many more participants to see if the indications we got would become significant. Experiments with more users could also show whether participants focus more on the goal when they have access to movement sonification and/or haptic feedback; if so, this would indicate that the information provided by haptic and audio feedback, respectively, is enough to understand that you are performing an accurate throwing gesture (you don’t need to look at the ball to confirm it). Results from interviews held at the end of the test sessions already point in this direction!

This is the very first paper Eva-Lotta and I have had accepted to the Sound and Music Computing conference. Emma and Roberto, however, have had papers accepted there numerous times. Check out their ResearchGate accounts for their earlier contributions to this conference and much more related to e.g. sound design.

eHealth · Interact · conference

Paper on critical incidents and eHealth design accepted to Interact 2017!


Months ago, I wrote a blog post about a workshop at NordiCHI 2016, to which I submitted my first ever research contribution from the patient’s perspective. You can find the workshop position paper here. After that workshop, the participants decided that we should continue our discussions and also do research together when possible. The first result of our collaboration, a short paper submitted to Interact 2017, has now been accepted for publication and presentation at the conference (it was conditionally accepted about a month ago)!

Christiane Grünloh is the lead author of this paper, and the others are (in order) Jean Hallewell, Bridget Kane, Eunji Lee, Thomas Lind, Jonas Moll, Hanife Rexhepi and Isabella Scandurra. The title of the Interact paper is: “Using Critical Incidents in Workshops to Inform eHealth Design”.

The paper focuses on the workshop, and especially on how this kind of workshop, which gathers researchers, practitioners and patients (me, in this case) who each contribute a critical incident related to eHealth, can be used to generate ideas that inform future eHealth design. More details about the format can be found in the paper when it’s published, and in the blog post linked above. Christiane will present the paper at the conference, and it seems like the presentation (as well as most other presentations) will be broadcast!

Here is the abstract, summarizing the main points:

Demands for technological solutions to address the variety of problems in healthcare have increased. The design of eHealth is challenging due to e.g. the complexity of the domain and the multitude of stakeholders involved. We describe a workshop method based on Critical Incidents that can be used to reflect on, and critically analyze, different experiences and practices in healthcare. We propose the workshop format, which was used during a conference and found very helpful by the participants to identify possible implications for eHealth design, that can be applied in future projects. This new format shows promise to evaluate eHealth designs, to learn from patients’ real stories and case studies through retrospective meta-analyses, and to inform design through joint reflection of understandings about users’ needs and issues for designers.

 

 

conference · Pedagogical development · Pedagogy

Paper on unexpected student behaviour and learning opportunities accepted to FIE 2017!


Late last week it was confirmed that a conference paper I co-authored has been accepted for publication and presentation at the 2017 FIE (Frontiers in Education) conference! Åsa Cajander (lead author), Diane Golay, Mats Daniels, Aletta Nylén, Arnold Pears and Anne-Kathrin Peters from the IT department at Uppsala University, and Roger McDermott from the School of Computer Science and Digital Media at Robert Gordon University, are the other authors on the paper. The title of the paper is “Unexpected Student Behaviour and Learning Opportunities: Using the Theory of Planned Behaviour to Analyse a Critical Incident”.

In the paper we use the Theory of Planned Behaviour to analyze a critical incident that occurred at the end of a course at Uppsala University. The incident relates to students refusing to present at, and participate in, a voluntary “design final” at the end of the course, where an external jury would choose the best project. During the course, project groups presented their work a couple of times in seminar groups, and after each presentation the groups were awarded points by both peers and teachers. After the last presentation, the project groups with the highest number of points in their respective seminar groups (three groups in total) were given the opportunity to present during the final.

The main idea behind introducing the point system and the design final was to add an engaging gamification component, providing an extra incentive to perform well during the entire course. The reactions from the students, however, were unexpected, in that some groups refused to take part in the design final and quite a few students did not see the point of the gamification-related components.

Here is the paper abstract, outlining our main approach in analyzing the critical incident (I will come back to this topic and write more about the results and outcomes when the paper has been published in the conference proceedings):

One of the challenges in being a teacher is to set up an educational setting where the students receive relevant learning opportunities for the specific course, the students’ education in general, and for their future. However, efforts to create such educational settings do not always work in the way that faculty has intended. In this paper we investigate one such effort seen from a critical incident perspective. Central to the analysis in this paper is how the Theory of Planned Behaviour (TPB) can provide explanations for the incident. The critical incident can be summarised as students refusing to take part in a non-compulsory, but from the faculty perspective highly educational, activity. We describe the incident in depth, give the background for the educational intervention, and analyse the incident from the perspective of TPB. This paper makes two major contributions to engineering education research. The first is the development of a method for analysing critical teaching and learning incidents using the TPB. The critical incident analysis illustrates how the method is used to analyse and reason about the students’ behaviour. Another contribution is the development of a range of insights which deal with challenges raised by learning interventions, especially those involved with acquiring hidden or “invisible skills” not usually seen or acknowledged by students to belong to the core subject area of a degree program.

The tension between the teachers’ expectations and the students’ reactions is very interesting from a pedagogical point of view. In this particular paper we analyze a critical incident through a specific theoretical lens (the Theory of Planned Behaviour), but we are planning broader articles on this subject as well. One interesting aspect to delve deeper into is the difference between universities: one of the main reasons why gamification was tested at Uppsala University was that it had been extremely well received by students at another university taking a very similar course with similar gamification components!