
Preparing submissions for the SweCog 2017 conference, to be held at Uppsala University!


This week, I’m preparing submissions for this year’s edition of the SweCog (Swedish Cognitive Science Society) conference. The conference covers a broad range of topics related to cognitive science. When I participated last year, at Chalmers in Gothenburg, I did not present anything (in fact, none of the participants from Uppsala University did), but the situation this year is quite different since Uppsala University is hosting the event!

I really enjoyed last year’s conference, largely due to the wide variety of topics covered and the very interesting keynote lectures. It was also (and still is, I assume) a single-track conference, meaning that you do not have to choose which paper session to attend. As I remember it, there were ten paper presentations in total, three keynote lectures and one poster session during the two-day conference. You can read more about my experiences from SweCog 2016 in this blog post summing up that event. I also wrote summaries of day 1 and day 2.

Since the only thing required is an extended abstract of 1-3 pages (and a maximum of 500 words), I’m working on several submissions. A topic that was not covered during last year’s conference was collaboration in multimodal environments, and specifically how different combinations of modalities can affect communication between two users solving a task together. Since that is one of my main research interests, I now see my chance to contribute! The deadline for extended abstract submissions to SweCog 2017 is September 4, so there is still plenty of time to write. The conference will be held October 26-27 at Uppsala University. Since registration for the conference is free for SweCog members (membership is also free), I expect to see many of my old KTH colleagues at Uppsala University during the conference days! 😉  You can find more information about the conference here.

Before I started planning contributions to SweCog 2017, I invited some of my “multimodal colleagues” from KTH to join the writing process. As a result, Emma Frid and I will collaborate on an extended abstract about a follow-up to the study I presented here. Our contribution will focus on how multimodal feedback can affect visual focus when two users are solving a task together in a collaborative virtual environment. Since I have not yet heard from any other colleague, I plan to write another extended abstract on my own, about how multimodal feedback (or rather, combinations of visual, haptic and auditory feedback) can affect the way users talk to each other while working in collaborative virtual environments. Maybe I will also throw in a third one about the potential of haptic guiding functions (see this blog post for an explanation of the concept) in situations where sighted and visually impaired users collaborate.

 


Summary of the first day of SweCog 2016

The first day of the SweCog conference started around 10:00 with registration and ended around 20:00 with the conference dinner (lunch was included in the conference, but everyone had to pay for their own dinner). The day included one keynote, five paper presentations, one poster session and one panel discussion. Quite a few different topics were covered, and most of them were related to machine learning and/or AI. The summaries presented below are short popular-science descriptions; the intention is not to cover all important aspects of the different presentations, but rather to give a broad overview of the topics covered during the first day of the conference.

 

Machine learning and AI

The keynote, two paper presentations and most of the panel discussion concerned different aspects of machine learning. The opening keynote, held by Prof. Christian Balkenius from Lund University, was about spatial indices that can be used to bind memories to different locations. He introduced, for example, deictic codes for associating information with landmarks in the environment. The core of the talk concerned a computational architecture including a kind of auto-associative memory in which the stored items are associated with spatial indices. Several examples of how this can be applied in robots were shown. The topic was also discussed further during the panel discussion, which in addition brought up the question of how much autonomy one can actually achieve in AI.
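To make the idea concrete for myself, here is a minimal toy sketch in Python (my own simplification, not the architecture presented in the talk) of an associative memory where stored items are bound to spatial indices and recalled from a partial cue:

```python
# Toy sketch (my own simplification, not Balkenius' actual architecture):
# items are stored under discrete spatial indices, and a partial cue
# retrieves the best-matching stored item at that location.

from collections import defaultdict

class SpatialAssociativeMemory:
    def __init__(self):
        # one list of stored feature tuples per spatial index
        self.store = defaultdict(list)

    def bind(self, spatial_index, features):
        """Associate a feature description (an item) with a location."""
        self.store[spatial_index].append(tuple(features))

    def recall(self, spatial_index, cue):
        """Return the stored item at this location that overlaps most with the cue."""
        candidates = self.store.get(spatial_index, [])
        if not candidates:
            return None
        overlap = lambda item: sum(a == b for a, b in zip(item, cue))
        return max(candidates, key=overlap)

memory = SpatialAssociativeMemory()
memory.bind("kitchen_table", ("red", "cup", "ceramic"))
memory.bind("kitchen_table", ("blue", "plate", "plastic"))
print(memory.recall("kitchen_table", ("red", "cup", "unknown")))  # ('red', 'cup', 'ceramic')
```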

The paper presentation by Claes Strannegård from Chalmers also concerned memory and learning, but from the animat (artificial animal) perspective. He described a scenario in which a generic animat, equipped with sensors, motors (abilities such as moving, eating and drinking), vital needs and internal sensors that check the status of these needs, develops as a result of the available resources and the specific environment in which it “lives”. The knowledge database is empty from the beginning, so the animat needs to “learn” which actions it must take in order to satisfy the vital needs (and which actions to avoid). The animat’s only goal is to survive, and performance is measured as lifetime: how long it takes until the animat no longer succeeds in satisfying the vital needs (a vital parameter reaches 0).
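As I understood the basic loop, it could be caricatured roughly as follows (a deliberately minimal toy version of my own, not Strannegård’s actual model):

```python
# Toy animat loop (my own minimal illustration, not the model from the talk):
# the animat starts with an empty knowledge base, tries actions, and remembers
# which action helped which need; performance is simply how long it survives.

import random

needs = {"energy": 1.0, "water": 1.0}            # vital parameters, 1.0 = fully satisfied
actions = ["eat", "drink", "move", "rest"]
effects = {"eat": ("energy", 0.4), "drink": ("water", 0.4)}  # environment, hidden from the animat

knowledge = {}   # need -> action that has worked before (empty from the beginning)
lifetime = 0

while lifetime < 500 and all(value > 0 for value in needs.values()):
    lifetime += 1
    for need in needs:
        needs[need] -= 0.1                        # every need decays each time step
    urgent = min(needs, key=needs.get)            # attend to the most pressing need
    action = knowledge.get(urgent, random.choice(actions))
    before = needs[urgent]
    affected, gain = effects.get(action, (None, 0.0))
    if affected is not None:
        needs[affected] = min(1.0, needs[affected] + gain)
    if needs[urgent] > before:                    # the action helped, so remember it
        knowledge[urgent] = action

print("The animat survived", lifetime, "time steps")
```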

The last presentation that could be connected to machine learning and AI was the one held by Mattias Forsblad from Linköping University. This presentation differed from the other two in that the focus was almost entirely on human memory and search strategies. It was also one of the few presentations that used video to illustrate the key ideas. The video showed a woman searching through a couple of bags and jacket pockets to find her bus and exercise cards. The main point was that the search was constrained by clear rules, for example related to time of year (no need to search through winter jackets since it was summer) and type of object (there are not many places where you can expect to find a bus card). The consequences for search trees (objects can often be found in a few familiar and clearly delimited places) were discussed in relation to machine learning.

 

Modalities and interaction

The presentation closest to my own research area was the one given by Mattias Arvola from Linköping University. He discussed transmodal interaction: how different modalities integrate and affect each other during activities. The concept seems closely related to crossmodal interaction, but it focuses more on how information and meaning are transformed as we switch between modalities during activities. This was discussed from both a design and a user perspective. One example from the design perspective was a piece of software (Simpro) whose first version was purely text-based. A second version used still images and text, the next version added sound, and in the latest version they explored the possibilities of VR. New modalities were added during the lifespan of the project, and after each iteration new information and meaning could be conveyed. An example from the user perspective was a haptic Pong game designed specifically for deaf-blind users. A study showed that it was possible to learn how to play the game using only haptic feedback. Thus, feedback that is usually provided graphically, for example about location, could be successfully conveyed through haptics. This example got me thinking about an audio-only version of the popular game Towers of Hanoi, which shows that you can solve a rather complex game based on audio feedback alone.

The other presentation related to this topic was held by Robert Lowe from the University of Skövde. He discussed awareness and sharing of affective values during joint action (although he never actually used the specific term “awareness”). The point was made that it is absolutely necessary to be aware of each other’s actions, intentions and affective valuations when performing joint actions to reach a common goal. A series of experiments was presented showing the applicability of the so-called associative two-process theory in the context of social interaction. One example brought up was two people moving a heavy object between two places. This reminded me of a study on the joint carrying of a stretcher in a complex environment with many obstacles. In that study, it was argued that task performance improved significantly in a haptic condition (compared to a visual-only condition) where the users could feel each other’s forces on the stretcher. This example also relates to Mattias’ presentation on transmodal interaction: the new information gained from the haptic feedback added something unique to the interaction.

 

Clojure – the modern Lisp

The presentation that stood out the most during this first conference day was the one held by Robert Johansson from the Karolinska Institute. In several parts, it felt like an ordinary programming lecture in which code examples were shown and run in real time. The main argument presented was that Lisp (and specifically the dialect Clojure, which works with both Java and JavaScript) should be used more in cognitive science. There were very few connections to cognitive science in the presentation; one of the few made was that Lisp could be used to effectively teach clinical psychologists how to program e-Health applications. An example e-Health web application for doctors, based entirely on Clojure, was also mentioned.


Some remarks on the SweCog 2016 conference


Yesterday I got back from this year’s SweCog (Swedish Cognitive Science Society) conference, which was hosted by Chalmers University of Technology. It was two very interesting days, which I will try to sum up in coming posts. This post just provides an overview of the event.

I had never heard of this conference before I started my work as a postdoc in Uppsala this autumn, partly because this is only the third time SweCog has been held in its current form. Before 2013, SweCog was a national graduate school, financed by Vetenskapsrådet (the Swedish Research Council), meant to provide a rich research environment for graduate students focusing on cognitive science. After the funding period ended, it was decided to turn it into a yearly conference. The conference has no registration fee and no external funding, so the hosting institution has to cover the expenses.

We were eight researchers from my department (Visual Information and Interaction) who traveled to the conference as a group and stayed at the same hotel. Five of us went back to Stockholm together, while some stayed in southern Sweden for a while. About 50 researchers from different (mostly Swedish) universities attended the conference, and there were two keynotes, ten paper presentations, one panel discussion and one poster session. The conference had only one track, so you never had to choose between sessions running in parallel.

Since the conference gathered people whose research interests were in some way related to cognitive science, the topics varied widely between presentations. There were quite a few presentations related to different aspects of machine learning, one was about the power of the programming language Lisp, and one provided a philosophical view of concepts and their changing nature. As a result, it was a little hard to find a common thread between the presentations, but on the other hand many interesting areas were covered.

Overall, I’m very pleased with the conference and I hope to be able to participate again next year. In fact, it is very likely that I will, since it was revealed during the concluding session (the SweCog annual meeting) that the next university to host the event will be Uppsala University!