The second conference day started around 9:00 and ended at 14:45. The SweCog annual meeting was held during the last half hour, so that was not really a part of the conference. This day also included a keynote and five paper presentations. Again, most of the presentations were related to machine learning. As in the previous post, about the first conference day, I will give brief descriptions just to give an idea of the different topics covered.
Machine learning and AI
This day’s keynote and three of the paper presentations concerned different aspects of AI. The main topic of the keynote was language technology, and the problem discussed was how to develop a kind of “cognitive assistant” that can understand words in their current context. Thus, semantics is the real challenge for these types of AI. An interesting example of a Google search for “Chris Andersson” was brought up. A human can directly understand from the search results that there must be several people with the same name (due to different affiliations being presented, different profiles, etc.), but this is much harder for an AI. A solution based on graph theory was presented, where “Chris Andersson” and each search result became nodes. Internal references between results were marked as edges. The result was a graph in which each cluster represented the search results sharing one of the many people named Chris Andersson.
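As I understood the approach, the clusters fall out of the graph structure itself: results that reference each other end up in the same connected component. Here is a minimal sketch of that idea with hypothetical data (the result ids and references are made up for illustration; the actual system presented was of course far more sophisticated):

```python
# Sketch of the graph-based disambiguation idea: search results become
# nodes, cross-references between results become edges, and connected
# components correspond to distinct people sharing the same name.
from collections import defaultdict

def cluster_results(results, references):
    """Group search results into clusters via connected components.

    results:    list of result ids
    references: list of (a, b) pairs where result a references result b
    """
    adjacency = defaultdict(set)
    for a, b in references:
        adjacency[a].add(b)
        adjacency[b].add(a)

    seen, clusters = set(), []
    for node in results:
        if node in seen:
            continue
        # Depth-first search from this node collects one component.
        stack, component = [node], set()
        while stack:
            current = stack.pop()
            if current in component:
                continue
            component.add(current)
            stack.extend(adjacency[current] - component)
        seen |= component
        clusters.append(component)
    return clusters

# Hypothetical example: results 0, 1 and 2 reference each other (one
# Chris Andersson), while 3 and 4 do (another person with the same name).
clusters = cluster_results([0, 1, 2, 3, 4], [(0, 1), (1, 2), (3, 4)])
print(len(clusters))  # two clusters -> two distinct people
```

The point of the sketch is just that no explicit “person” label is needed anywhere: the cluster structure alone separates the namesakes.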
The first paper presentation focusing on AI was held by Sam Thellman from Linköping University. The focus was human-robot interaction and, more specifically, whether the type of embodiment matters in the interaction. Several experiments were presented where different types of embodiment, virtual or physical, were compared. These experiments did not show any effect of embodiment – it was more important whether the robot was socially influential or not (no matter if it was physically present or virtual). One interesting example brought up, shedding light on the effect of actually having a physical robot in front of you giving instructions, was that people were more willing to throw books in a garbage can if they got instructions from a physical robot. The connection was never made to human interaction (mediated compared to physical presence) – a discussion about that connection would have been very interesting!
The second paper presentation related to AI was held by Gordana Dodig-Crnkovic from Chalmers. This was one of the more complex presentations since it concerned morphological computing – a means of using physical properties and constraints of a robot’s (or organism’s) body to automatically control behavior. When using morphological computing you build a computational model from the bottom up, using knowledge of the parts to build the complete model. Examples were taken from cell biology. Each cell requires its own input and produces a certain output, which of course constrains interaction with adjacent cells. Cells then work together to build larger structures, which in turn interact and have their own constraints. I found this presentation really fascinating since concrete links were made between AI and cell biology in this way. Gordana also brought this up for discussion during the first day’s panel discussion.
The last presentation on the AI topic was held by Ulf Persson from Chalmers. That presentation was about formalizing analogies as a way to transfer knowledge from one domain to another. The reason for exploring this was to be able to create AI organisms that can move around between different parts of an environment, making decisions (based on earlier knowledge) which can lead to plans for survival. The presentation began with a couple of known analogies (like electrons <-> planets and the somewhat awkward “neurons that fire together wire together” <-> …). The main part of the presentation contained mathematics on a level too complex to bring up here, but the conclusion from the calculations presented was that analogies are only approximate and that the big technical challenge is to find a procedure for discovering analogies.
Interaction between speaker and audience
A presentation that really got me thinking about my own behavior was the one held by Mikael Jenssen from University of Gothenburg. He talked about “speaker-audience interactive synchrony”. The main argument was that the people in the audience move in synchrony with e.g. the speaker’s rhythm and body language. Note that the movements produced by the audience don’t have to mirror the speaker’s moves in this case (mirroring could be seen as a special case, though). An experiment was presented in which they found a correlation between an unclear voice and movements in the audience. Quite soon I started to look around to see whether I could detect any movements in the audience that seemed to correlate with e.g. the speaker’s talking speed. I stopped after a while, though, since I realized that this particular audience was a horrible sample group – we had just been informed about how the synchronization worked!
Concepts in motion
This day of the conference ended with a purely philosophical discussion about concepts and their changing nature. Joel Parthemore from University of Skövde talked about concepts and the fact that they are in incremental motion. This has implications for our understanding of the world, since concepts are the primary means of gaining this understanding. Several times during the presentation Joel contrasted his own view of concepts in constant motion against the common view that concepts are stable and precisely defined. The importance of being able to apply a concept in many different contexts was also discussed, as well as the need to be able, at least in small ways, to adapt a concept to new contexts. Just to be on the safe side, Joel also made some references to AI technology.