The first day of the SweCog conference started around 10:00 with registration and ended around 20:00 with the conference dinner (lunch was included in the conference fee, but everyone had to pay for their own dinner). The day included one keynote, five paper presentations, one poster session and one panel discussion. Quite a few different topics were covered, and most of them were related to machine learning and/or AI. The summaries presented below are short popular-science descriptions – the intention is not to cover all important aspects of the different presentations, but rather to give a broad overview of the topics covered during the first day of the conference.
Machine learning and AI
The keynote, two paper presentations and most of the panel discussion concerned different aspects of machine learning. The opening keynote, held by Prof. Christian Balkenius from Lund University, was about spatial indices that can be used to bind memory to different locations. He introduced, for example, deictic codes for associating information with landmarks in the environment. The core of the talk concerned a computational architecture including a kind of auto-associative memory in which the stored items are associated with spatial indices. Several examples of how this can be applied in robots were shown. The topic was discussed further during the panel discussion, which also raised the question of how much autonomy one can actually achieve in AI.
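To make the binding idea concrete – this is my own minimal sketch, not the architecture Balkenius presented – one can picture a memory where items are written and read via spatial indices, so that re-attending to a landmark is enough to retrieve whatever was bound to it:

```clojure
;; Minimal sketch of memory bound to spatial indices (my own illustration,
;; not the keynote's architecture). Items are stored under a landmark-based
;; index and retrieved by re-attending to that landmark.
(def memory (atom {}))

(defn bind-at!
  "Associate an item with a spatial index (here simply a landmark keyword)."
  [index item]
  (swap! memory assoc index item))

(defn recall-at
  "Retrieve whatever was bound to the given spatial index."
  [index]
  (get @memory index))

(bind-at! :red-door {:object :keys})
(recall-at :red-door)   ;; => {:object :keys}
```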
The paper presentation by Claes Strannegård from Chalmers also concerned memory and learning, but from the animat (artificial animal) perspective. He described a scenario in which a generic animat – equipped with sensors, motors (abilities like moving, eating and drinking), vital needs, and internal sensors that check the status of these needs – evolved as a result of the available resources and the specific environment where it “lived”. The knowledge database is empty from the beginning, so the animat needs to “learn” which actions it must take in order to satisfy the vital needs (and which actions to avoid). The animat’s only goal is to survive, and performance is measured as lifetime – how long it takes until the animat no longer succeeds in satisfying its vital needs (a vital parameter reaches 0).
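The learning loop can be sketched roughly as follows – a toy version with hypothetical needs, actions and dynamics, not Strannegård’s actual model:

```clojure
;; A toy animat loop (my own sketch with hypothetical needs, actions and
;; dynamics -- not Strannegård's actual model). The value table starts
;; empty, values are learned from each action's effect on the vital
;; needs, and performance is measured as lifetime in ticks.
(def actions [:eat :drink :move :rest])

(defn act
  "Hypothetical world dynamics: every tick drains both needs a little;
   eating and drinking replenish the corresponding need."
  [needs action]
  (let [drained (-> needs
                    (update :energy - 0.05)
                    (update :water - 0.05))]
    (case action
      :eat   (update drained :energy + 0.2)
      :drink (update drained :water + 0.2)
      drained)))

(defn clamp01 [x] (-> x (max 0.0) (min 1.0)))

(defn fitness [needs] (+ (:energy needs) (:water needs)))

(defn choose
  "Greedy choice over learned action values; unseen actions default to 0,
   and shuffling breaks ties, so early behaviour is exploratory."
  [values]
  (apply max-key #(get values % 0.0) (shuffle actions)))

(defn step [{:keys [needs values age]}]
  (let [action (choose values)
        needs' (-> (act needs action)
                   (update :energy clamp01)
                   (update :water clamp01))
        reward (- (fitness needs') (fitness needs))]
    {:needs  needs'
     ;; nudge the chosen action's learned value toward the observed reward
     :values (update values action (fnil #(+ % (* 0.1 (- reward %))) 0.0))
     :age    (inc age)}))

(defn lifetime
  "Run until a vital need hits 0; cap at max-age to keep the toy finite."
  [max-age]
  (loop [a {:needs {:energy 1.0 :water 1.0} :values {} :age 0}]
    (if (or (some zero? (vals (:needs a))) (>= (:age a) max-age))
      (:age a)
      (recur (step a)))))
```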
The last presentation that could be connected to machine learning and AI was the one held by Mattias Forsblad from Linköping University. It differed from the other two in that the focus was almost entirely on human memory and search strategies. This was one of the few presentations that used video to illustrate the key ideas: the video showed a woman searching through a couple of bags and jacket pockets to find her bus and exercise cards. The main point was that the search was constrained by clear rules, e.g. related to time of year (no need to search through winter jackets since it was summer) and type of object (there are not many places where you would expect to find a bus card). Consequences for search trees (objects can often be found in a few familiar and clearly delimited places) were discussed in relation to machine learning.
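In code, this kind of rule-based pruning might look something like the sketch below – made-up places and rules, purely to illustrate the general idea, not the study’s actual analysis:

```clojure
;; Loose sketch of rule-constrained search (made-up places and rules).
;; The rules prune candidate locations before any searching happens,
;; leaving a small, familiar search space.
(def candidate-places
  [{:place :handbag       :season :any    :holds #{:bus-card :keys}}
   {:place :jacket-pocket :season :winter :holds #{:bus-card :gloves}}
   {:place :gym-bag       :season :any    :holds #{:exercise-card}}
   {:place :bookshelf     :season :any    :holds #{:books}}])

(defn plausible-places
  "Keep only places that match the current season and could
   plausibly contain the sought object."
  [object season]
  (->> candidate-places
       (filter #(contains? (:holds %) object))
       (filter #(#{:any season} (:season %)))
       (map :place)))

(plausible-places :bus-card :summer)   ;; => (:handbag)
```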
Modalities and interaction
The presentation closest to my own research area was the one given by Mattias Arvola from Linköping University. He discussed transmodal interaction – how different modalities integrate and affect each other during activities. The concept seems closely related to crossmodal interaction, but it focuses more on how information and meaning are transformed as we switch between modalities during activities. This was discussed from both a design and a user perspective. One example from the design perspective was a piece of software (Simpro) whose first version was purely text-based. A second version used still images and text, the next version added sound, and the latest version explored the possibilities of VR. New modalities were added during the lifespan of the project, and after each iteration new information and meaning could be conveyed. An example from a user perspective was a haptic pong game designed specifically for deaf-blind users. A study showed that it was possible to learn how to play the game using only haptic feedback. Thus, feedback that is usually provided graphically, e.g. about location, could be successfully conveyed haptically. This example got me thinking about an audio-only version of the popular game Towers of Hanoi, highlighting that you can solve a rather complex game based on audio feedback alone.
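Out of curiosity, here is how such an audio-only Hanoi might encode its feedback – pure speculation on my part, not an existing implementation: pegs could map to stereo pan positions and discs to pitches, with each move rendered as a tone (printed here as a stand-in for actual audio):

```clojure
;; Speculative sketch of audio-only Towers of Hanoi feedback (my own idea,
;; not an existing implementation). Pegs map to stereo pan positions,
;; discs to pitches; println stands in for actual audio output.
(def pan {:left -1.0 :middle 0.0 :right 1.0})

(defn pitch
  "Smaller discs sound higher (arbitrary mapping)."
  [disc]
  (/ 1760.0 disc))

(defn cue!
  "Stand-in for audio output: report the tone that would be played."
  [disc from to]
  (println (format "disc %d: %.0f Hz, pan %.1f -> %.1f"
                   disc (pitch disc) (pan from) (pan to))))

(defn solve
  "Classic recursive Towers of Hanoi, emitting one audio cue per move."
  [n from via to]
  (when (pos? n)
    (solve (dec n) from to via)
    (cue! n from to)
    (solve (dec n) via from to)))

(solve 3 :left :middle :right)
```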
The other presentation related to this topic was held by Robert Lowe from the University of Skövde. He discussed awareness and sharing of affective values during joint action (although he never actually used the specific term “awareness”). The point was made that it is absolutely necessary to be aware of each other’s actions, intentions and affective valuations when performing joint actions to reach a common goal. A series of experiments was presented showing the applicability of the so-called associative two-process theory in the context of social interaction. An example brought up was two people moving a heavy object between two places. This reminded me of a study on the joint carrying of a stretcher in a complex environment with many obstacles, where, it was argued, task performance improved significantly in a haptic condition (compared to a visual-only condition) in which the users could feel each other’s forces on the stretcher. This example also relates to Mattias Arvola’s presentation about transmodal interaction – the new information gained from the haptic feedback added something unique to the interaction.
Clojure – the modern Lisp
The presentation that stood out the most during this first conference day was the one held by Robert Johansson from the Karolinska Institute. In several parts, his presentation felt like an ordinary programming lecture, with code examples shown and compiled in real time. The main argument was that Lisp (and specifically the dialect Clojure, which works with both Java and JavaScript) should be used more in cognitive science. There were very few connections to cognitive science in the presentation – one of the few made was that Lisp could be used to effectively teach clinical psychologists how to program e-Health applications. An example e-Health web application for doctors, built entirely in Clojure, was also mentioned.
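For readers who have never seen the language, the snippet below gives a rough flavour of the kind of code one might compile live in such a talk – my own generic example, not one from the presentation:

```clojure
;; A small taste of Clojure (my own generic example, not taken from the
;; talk). Data is plain vectors and maps, and functions are composed
;; and evaluated interactively at the REPL.
(def answers [{:item 1 :score 2} {:item 2 :score 3} {:item 3 :score 1}])

(defn total-score
  "Sum the :score field over a questionnaire's answers."
  [answers]
  (reduce + (map :score answers)))

(total-score answers)   ;; => 6
```

Everything is evaluated interactively at the REPL, which is arguably part of what makes Lisp attractive for teaching programming.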