This week, I’m preparing submissions for this year’s SweCog (Swedish Cognitive Science Society) conference, which covers a broad range of topics related to cognitive science. When I participated last year, when the conference was held at Chalmers in Gothenburg, I did not present anything (in fact, none of the participants from Uppsala University did), but the situation this year is quite different, since Uppsala University is hosting the event!
I really enjoyed last year’s conference, much thanks to the large variety of topics covered and the very interesting keynote lectures. It was also (and still is, I assume) a single-track conference, meaning that you do not have to choose which paper session to attend. As I remember, there were ten paper presentations in total, three keynote lectures, and one poster session during the two-day conference. You can read more about my experiences from SweCog 2016 in this blog post summing up that event. I also wrote summaries from day 1 and day 2.
Since the only thing required is an extended abstract of 1-3 pages (and max 500 words), I’m working on several submissions. A topic that was not covered during last year’s conference was collaboration in multimodal environments, and specifically how different combinations of modalities can affect communication between two users solving a task together. Since that is one of my main research interests, I now see my chance to contribute! The deadline for extended abstract submissions to SweCog 2017 is September 4, so there is still plenty of time to write. The conference will be held October 26-27 at Uppsala University. Since registration is free for SweCog members (and membership is also free), I expect to see many of my old KTH colleagues at Uppsala University during the conference days! 😉 You can find more information about the conference here.
Before I started planning contributions to SweCog 2017, I invited some of my “multimodal colleagues” from KTH to join the writing process. As a result, Emma Frid and I will collaborate on an extended abstract about a follow-up to the study I present here. Our contribution will focus on how multimodal feedback can affect visual focus when two users solve a task together in a collaborative virtual environment. Since I have not yet heard from any other colleagues, I plan to write another extended abstract on my own, about how multimodal feedback (or rather, combinations of visual, haptic, and auditory feedback) can affect the way users talk to each other while working in collaborative virtual environments. Maybe I will also throw in a third one, about the potential of haptic guiding functions (see this blog post for an explanation of the concept) in situations where sighted and visually impaired users collaborate.