
Engaging with everydAI: How to move AI from imperfect algorithms to perfect user interactions

We've answered all the questions you asked during the Engaging with EverydAI webinar. You can find the responses from our experts, grouped by theme, using the links below:

The Engaging With EverydAI webinar
The Everyday AI research
Spotify and AI
Developing AI systems
AI and Policy
AI and Individuals
Studying AI
Theory Of AI

 

Engaging With EverydAI Webinar

Can I watch back a video of the Engaging With EverydAI event?

 

Everyday AI Research

Selected questions related to the Everyday AI research:

Where can I download the Everyday AI research?

Is our thinking about AI systems falsely assuming "what the user wants" = the passive, intuitive whims (Kahneman’s "System 1") instead of active deliberation?

How do you see the integration of AI and non-AI systems in human and machine interaction?

AI systems cannot feel the user's context. Is there a way we can give the user the feeling that the AI understands the context?

What about a choice/switch between convenience and interaction? Sometimes you just don't want to interact but instead have a suggestion based on your behaviour.

Regarding co-intelligence, where are we on this journey timewise? Is it near, or is it 20 years away?

View more questions related to the Everyday AI research

 

Spotify and AI

What's the turnaround time from the moment an ethnographic insight is derived from the field until it's addressed by a feature in production at Spotify?

I am stuck in my music recommendations on Spotify... Can Spotify please add some kind of feature so I can get random music suggestions - completely different? :)

 

Developing AI systems

Selected questions related to developing AI systems:

What are common mistakes people make when starting with AI that can block them down the road and should therefore be avoided?

Should all AI projects be open source projects to improve transparency and accessibility?

'Building a capability' sounds nice if you have the pockets for it. How can SMEs get started with AI?

AI does not seem to provide forgetting functionality. Human beings forget as a mechanism to adapt from situation to situation. Forgetting should be a feature.

Is AI stereotype-free? How can we make sure it is not reproducing our own ideological biases and that it interacts with everyone in an "equal" way?

View more questions related to developing AI systems

AI and Policy

Selected questions related to AI and policy:

How does AI affect the public policies of a country and municipality? Do you think that in the future, it will be necessary to use AI to make public decisions…

How can we trust AI systems and their choices if they cannot explain it themselves? e.g. an AI-"judge" in a court of law or driving an autonomous vehicle

What is the difference between AI used in Europe and in China?

Is it to be expected that the development of AI will be guided by commercial interests and thus by consumption and the interests of large-scale industry?

People speaking about a dystopian AI future miss the most relevant threat of AI - giving too much power to systems that actually have a very shallow understanding.

View more questions related to AI and policy

 

AI and Individuals

Selected questions related to AI and individuals:

How flexible should and could AI be regarding localization: variations between cultures, religions, legislation, standards of living...?

Are we getting passive from AI? How can we keep up our ability to be proactive, take our own decisions and stick to our decisions and "sisu" together with AI?

I would like to hear more about what an end-user can do to reflect their acts regarding everyday AI. Wouldn't the only ethical thing be NOT to use those systems?

Are there good examples of how AI helps/guides people to make better decisions? And how do you know a decision is good?

View more questions related to AI and individuals

 

Studying AI

How can I be part of a cross-discipline, cross-functional team dedicated to AI solutions to help the world?

I am starting my master's studies in interaction design this autumn in Malmö. Does Reaktor collaborate with students in projects, work, research or more? Thanks :)

Theory Of AI

Selected questions related to the theory of AI:

What is your definition of AI? As you said, linear regression can be said to be AI today. I don't believe machine learning algorithms are so much of a black box, as you say.

After playing with OpenAI's GPT-3, I feel amazed and wonder how many aspects of consciousness we have already achieved. What do you think?

How would you construct an evolving literature acquisition AI for libraries?

What is the definition of intellect? Intellect is just about combining contents in the mind already. Computers can do that.

View more questions related to the theory of AI

Answers To Your Questions

 

Engaging With EverydAI Webinar

Can I watch back a video of the Engaging With EverydAI event?

Yes, you can watch the full webinar here.

Is the Alice Labs logo connected to Alice in Wonderland? I hope AI will associate more with "Alice in Wonderland" and not a "black box".

Kirsi Hantula - Researcher, Alice Labs:
A good catch - our logo is indeed connected to Alice in Wonderland. In a way, I agree with the comparison that you make between Alice in Wonderland and the black box. At least in the sense that Alice was only able to discover incredible new things in Wonderland because she got assistance (friendly advisors, tools etc.).

The same goes for using AI. If users are given appropriate tools to cooperate with AI, they may discover fine new things. In the context of everyday use, this applies to situations where people actively want to broaden their taste or knowledge about a certain topic, for example. With better tools for AI-user collaboration, people may find incredible new things.

 

Back to top

 

 

Everyday AI Research

Where can I download the Everyday AI research?

You can access the research here.


The title and contents of this webinar remind me of research titled Music in Everyday Life by Tia DeNora (2000). It has a psychological and sociological framework to address how people actually use music in their lives. The findings reveal that individuals often do not listen to music in the ways composers and performers perhaps imagined but in a number of selfish, functional and changing ways to assist their living and acting in various everyday situations.

  • Has your research revealed similar kinds of unintended usages of AI services/systems/products?
  • Is there perhaps a need for AI-based services that would allow and even encourage such personal, versatile and surprising usages?


Kirsi Hantula - Researcher, Alice Labs:
This is precisely what we are saying. We want to suggest that to accommodate this very human fickleness and fluctuation, AI service developers ought to build AI systems that do not necessarily even strive to fully understand the complexities of how people use these systems in different situations. Instead, service developers could take as their starting point the idea that AI consumer systems ought to be more ‘layered’, i.e. the systems ought to give users possibilities to adopt a passive, guiding, or collaborative role towards algorithmic decision-making, based on their particular needs and aspirations in the particular situation.


Do you think that change in algorithmic focus could appear naturally through consumer preference, or would it require 1+ leader(s) in AI to change their focus? Such as how Apple is a leader in endorsing privacy.

Kirsi Hantula - Researcher, Alice Labs:
If the question refers to the need of giving users more agency concerning AI systems (which is what we talk about in the study), I do think that we also need forward-looking companies that will start exploring how to improve user-system interaction.

In our study, we noticed that currently, quite a few users are experiencing what we called ‘algorithm fatigue’, i.e. people are feeling a bit disillusioned and disappointed by the development of consumer AI systems.

Since many people have used various AI-powered services for over a decade by now, they are thinking that these systems ought to be capable of providing a better service than they currently are. One way for service providers to address such increasing user expectations might be to start experimenting with new tools that will allow users to steer and assist algorithmic decision-making.


Is our thinking about AI systems falsely assuming "what the user wants" = the passive, intuitive whims (Kahneman’s "System 1") instead of active deliberation?

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
I think so. The systems are often modelled from behaviour in a rather superficial fashion. Markets and economic dynamics of, say, social media companies play a big role.

Kirsi Hantula - Researcher, Alice Labs:
As long as we are talking about AI-powered consumer systems only and not making sweeping generalisations about all kinds of AI systems, I think this is the case.

However, we should keep in mind that even users of AI consumer systems might often - maybe even in most everyday use situations - prefer to remain passive towards the AI system.

For me, the biggest problem with the predominant thinking today is that it is based on an overly simplistic idea that people always want to be passive towards the AI system, when in reality there are also recurring situations when people would want to have agency in their interaction with AI. The key is to acquire a deep understanding of users’ expectations and aspirations in different everyday use situations to know when they would rather be passive and when active. And then, start experimenting with new easy-to-use tools to provide people with more agency when they wish to have that.

How do you see the integration of AI and non-AI systems in human and machine interaction?

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
AI should be one tool in a box, a technique among others to consider. In isolation, it doesn’t have as much value as it would when combined with interaction design. The user could enter their mood, and the AI could take that into account in the recommendations. Or the user could find things with a search and then ask the AI to find similar ones. There is a lot of untapped potential here.
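
As a purely illustrative sketch of the combination described here (the items, mood tags, and scoring are our assumptions, not an implementation from the webinar), a service could blend the AI's own relevance scores with a mood the user has entered:

    # Hypothetical sketch: re-ranking AI recommendations by a user-entered mood.
    # The candidate list and "ai_score" values stand in for a real recommender.
    candidates = [
        {"title": "Fast-cut action thriller", "moods": {"energetic"}, "ai_score": 0.9},
        {"title": "Slow nature documentary", "moods": {"calm"}, "ai_score": 0.7},
        {"title": "Feel-good comedy", "moods": {"calm", "light"}, "ai_score": 0.6},
    ]

    def rerank(items, user_mood, mood_weight=0.5):
        """Blend the recommender's score with the user's explicit mood input."""
        def score(item):
            mood_match = 1.0 if user_mood in item["moods"] else 0.0
            return (1 - mood_weight) * item["ai_score"] + mood_weight * mood_match
        return sorted(items, key=score, reverse=True)

    for item in rerank(candidates, user_mood="calm"):
        print(item["title"])

Even a simple weighting like this would let the user steer the outcome without replacing the underlying model.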

Kirsi Hantula - Researcher, Alice Labs:
I agree that there are situations (think of social situations where family members or close friends get together to maintain and strengthen their mutual ties, for example) where people predominantly prefer not to rely on AI.

Outside of these kinds of situations, AI may be quite a useful tool for people in many everyday use situations also, as long as they are given more possibilities to choose how active or passive they want to be concerning the AI. That way, AI becomes more like a tool.


How do we achieve this user-AI collaboration without defaulting to an “add another device” solution? e.g. a smartwatch measures stress which drives streaming suggestions.

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
Perhaps we shouldn’t start by adding more AI but instead add more possibilities for humans to give “traditional” input with components etc., in combination with the AI.

 

AI systems cannot feel the user's context. Is there a way we can give the user the feeling that the AI understands the context?

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
It could be done implicitly by showing what the AI thinks is similar and letting the user give feedback on whether the similarity is correct. This way, the AI would not need to understand the context at all, just present what it considers similar content.


A question to Kirsi re: Karen’s relaxing TV evening. Is not the shortcoming of the AI recommendation of a film just a matter of the “data”?

Kirsi Hantula - Researcher, Alice Labs:
I find it difficult to believe that we will be able to solve the problem of inaccurate recommendations solely by increasing the amount of data gathered. In the example that I gave about Karen, for example, not only did she want to find a relaxing film, but she also needed to find a film that was short enough.

In practice, there are many things that affect people’s preferences in different everyday use situations, and these are very difficult to predict accurately based on data, even if there were new data streams that could be used in producing AI-based recommendations.

Therefore, I think it would be easier to start experimenting with tools that allow users to steer recommendations in the right direction when they feel this is necessary.


What about a choice/switch between convenience and interaction? Sometimes you just don't want to interact but instead have a suggestion based on your behaviour.

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
If you don’t want to interact, a suggestion is what you should get. I would personally be ready to interact after suggestions anyway, since that’s what often happens with humans when a suggestion isn’t quite what I was looking for. At that point, the killer factor is frictionless interaction.

Kirsi Hantula - Researcher, Alice Labs:
This was our point in the study also. There are a lot of everyday use situations where users would prefer to remain passive and let the AI serve them. The important point here is that these situations do not account for all situations. Currently, however, adopting a more active role in algorithmic decision-making (by guiding and steering the system) is not possible for users.


New interactive services will add to the complexity of a system. What difficulties are there in making users want to learn the systems?

Kirsi Hantula - Researcher, Alice Labs:
The key is to design systems that allow users to choose when they want to be more active and when they’d rather remain passive. When people want to adopt a more active position towards the system, they are naturally more eager to learn how to use the new tools.

I also suspect that most of the new user tools that could be provided might be quite simple and similar to the tools users have used in other contexts.


On top of a forgetting feature, I think a "do not follow" mode should be available, not only in hindsight but already before the user starts to behave weirdly.

AI seems to miss certain "significant" interactions we humans have which affect our decision-making (e.g. something that may have happened in our dreams).

How can we incorporate these interactions in AI tools?

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
Adding controls for the users to any kind of tool should be relatively straightforward. But perhaps it should be thought the other way around: how can we incorporate AI into the tools we already have?


If you don't understand outputs from an AI, why not make the system ask the user and run the process once more? That would add nuance to recommendations, for example.

Kirsi Hantula - Researcher, Alice Labs:
That might be one way. Based on our study, however, we know that many users would also be willing to use a little bit of time to steer the AI proactively before any results have been produced. They are willing to do this to receive recommendations based on their preferences at the moment and not on the historical user data accumulated about them (or more precisely, about users like them).


Regarding co-intelligence, where are we on this journey timewise? Is it near, or is it 20 years away?

Kirsi Hantula - Researcher, Alice Labs:
It depends on how we define ‘co-intelligence’. When we use the term ‘co-intelligence’ in the context of our study, we refer to new ways of interaction between the user and the AI system that could take place in real-time and more seamlessly, causing immediate changes in the final outcomes.

I do not think that there are huge technical challenges that prevent service providers from experimenting with designing those kinds of tools since there is no need to try immediately to build enormously complicated interaction tools. Based on our study, users would often be happy with quite simple tools that could allow them to better indicate their present needs and desires to the AI system.


Suppose the AI had a connection to Karen’s iWatch that had detected stress. Do you think that AI could suggest a relaxing film if it gets access to the data?

Kirsi Hantula - Researcher, Alice Labs:
This would probably be possible in theory, but I think that an AI would still have enormous difficulties understanding what counts as ‘relaxing’ for Karen that particular evening. In addition, Karen may also have other aspirations for the film, besides it being relaxing. In the example that we gave in the presentation, she also knew that she would have to get up early the next morning, so she also wanted the chosen film to be short.

In practice, there are a myriad of things that keep changing people’s preferences in different situations, and these are very difficult to predict accurately based on data, even if there were some new data streams that could be used in producing AI-based recommendations. Also, I suspect that many users might object to the idea that their iWatch and streaming service share data - this is not something that everybody wants.

 

Back to top

 

 

Spotify and AI

What's the turnaround time from the moment an ethnographic insight is derived from the field until it's addressed by a feature in production at Spotify?

Heli Rantavuo - Staff Researcher, Spotify:
Members of product, engineering and design teams take part in each research process, so they stay connected to the latest insights and apply them in their work on a continuous basis. If a need that we uncover is complex to build for, it might take longer to address, whereas if it connects with an existing proposition, we would probably be able to address it in a near-term sprint.


I am stuck in my music recommendations on Spotify... Can Spotify please add some kind of feature so I can get random music suggestions - completely different? :)

Heli Rantavuo - Staff Researcher, Spotify:
You could try exploring by Genre and other thematic categories from your Search tab, or the tips given in this link.

 

Back to top

 

 

Developing AI systems

What are common mistakes people make when starting with AI that can block them down the road and should therefore be avoided?

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
A broad question, but some things come to mind:

  • There seems to be a common misconception that any data is valuable, no matter how it is collected. “We have ten terabytes of logs of user behaviour; we need a data scientist or AI to dig out the gold.”

    As in science, observational data, even in large amounts, is of limited value. You need to plan experiments carefully to make them valuable. And you need a theory to generalize.

  • As opposed to core algorithms, context is often undervalued in every sense: costs, data-generating processes, organization, maintenance, user interface, etc.

  • Hype has confused markets; there are new possibilities, but interest has been skewed towards technology and away from real needs and possibilities.


I'm curious how AI can improve journalism and the user experience for readers.

Kirsi Hantula - Researcher, Alice Labs:
In our study, we did not look at AI-based content recommendation on news sites. I do suspect, however, that our general finding of users wanting to alternate between being a passive receiver of AI-based recommendations and taking a more active role in guiding the AI holds true in the context of news consumption as well.

To understand this more deeply, we ought to develop a more nuanced understanding of the typical situations where people consume news: the tensions that develop between the AI and the user, and their expectations towards algorithmic decision-making in various use situations, for example.


Should all AI projects be open source projects to improve transparency and accessibility?

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
Open source (of code, models etc.) is just one part of the equation — what about the data that is used to train the algorithm? Having open source makes things more trustworthy, but it clearly isn’t enough.

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
“Should” is not well defined. There are all kinds of constraints, and it is hard to decide what to keep for the ideal world. Interestingly, in AI, you have not only the code but also trained models and data. Any of these can be open or closed, independently of the others. And then you have the target system, like Netflix, which is typically closed.

'Building a capability' sounds nice if you have the pockets for it. How can SMEs get started with AI?

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
Lots of APIs exist now for speech and image recognition and language models. These can be used in standard software products. And AI creeps in elsewhere, even without you knowing it. Otherwise, I don’t think one needs to be involved in AI development if there are no clear needs.

An empirical approach - and thinking of data not as a byproduct but as a justified set of measurements or a representative collection of samples from business-critical processes - will take you in a direction where self-built AI may eventually be applicable.
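
As a purely illustrative sketch of the first point (our example; the library and model choice are assumptions, not a recommendation from the webinar), an SME can try a ready-made pretrained model through a high-level API before building anything in-house:

    # Hypothetical sketch: using an off-the-shelf pretrained model via the
    # Hugging Face transformers pipeline instead of self-built AI.
    from transformers import pipeline

    # Downloads a default pretrained sentiment model on first use.
    classifier = pipeline("sentiment-analysis")
    print(classifier("The delivery was late but the support team was great."))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99}]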


Would it be more accurate to view AI as a tool or a toolbox? Is AI function-specific (like a hammer) or a set of functions to be applied as needed (a toolbox)?

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
I see it exactly this way. It’s a new tool, and for many people that have grasped the AI hammer, all the world has started to look like a bunch of nails — this will disappear in time once the novelty wears off.

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
Yes, a collection of tools at various levels: algorithms, platforms, trained models, …

Kirsi Hantula - Researcher, Alice Labs:
From the perspective of people’s everyday practices, I would add here that while the tools/toolbox that AI systems provide for users might include pretty simple tools for interaction, people will probably use them in various unpredictable and inventive ways. That is how people use a hammer also. While we usually use it the way it is meant to be used, in the absence of better tools at hand, we may sometimes use it for a wide variety of other activities.

My point here is that the tools provided for users of AI systems ought to allow for this type of flexibility, innovativeness and exploration, even if they were simple and easy to use.


AI does not seem to provide forgetting functionality. Human beings forget as a mechanism to adapt from situation to situation. Forgetting should be a feature.

An interesting question. We recommend watching the webinar video for a full answer.


Forgetting and letting go is an interesting AI subject. When training AI, are all memories lost in every seed? I don't have a context for this. Probably it depends.

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
It depends. There’s something called transfer learning: you can tune a system for new data or new tasks while saving parts of it. Forgetting happens all the time during training, and it is adjustable and implementable in production systems in many ways. But the idea of making forgetting explicit to the user, as part of the UI, is interesting.
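
As a purely illustrative sketch of the transfer learning mentioned here (our example, using torchvision's pretrained ResNet-18; not something shown in the webinar), parts of a trained model can be kept while the rest is retrained, and the replaced part effectively "forgets":

    # Hypothetical sketch: transfer learning in PyTorch - keep the pretrained
    # feature extractor frozen and retrain only a new final layer.
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze everything the model has already learned.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the classification head for a new 10-class task; only this
    # layer will be updated in further training.
    model.fc = nn.Linear(model.fc.in_features, 10)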


Spontaneous and inexplicable humans can be stimulating and fun but also dangerous. It begs the question if we want - or need - to create more of that.

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
I don’t see these types of AIs as a problem in themselves — the problems come out of what we do with the results.

Kirsi Hantula - Researcher, Alice Labs:
It is also good to keep in mind that our study addressed tensions that people typically encounter in their interactions with some consumer AI systems. I entirely agree that, especially in certain subdomains where AI outperforms humans, we do not need to add human spontaneity and unpredictability to the equation.


What have people done to make the interaction with AI better?

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
What is done is mostly part of everyday design work around systems that have AI-style computation as their components.


Almost all consumer algorithms today are based on simple "likes" or ratings from 1 to X. An idea for increasing AI-human collaboration would be to allow humans to rate in a much wider way.

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
Putting things in order relative to each other or grouping them comes to mind, at least.
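
As a purely illustrative sketch of ordering things relative to each other (our example; Elo-style updates are one standard way to turn pairwise preferences into a ranking, not something proposed in the webinar):

    # Hypothetical sketch: learning an ordering from "I prefer A over B"
    # feedback with Elo-style rating updates.
    from collections import defaultdict

    K = 32  # step size for rating updates

    def expected(r_a, r_b):
        # Modelled probability that the item rated r_a is preferred over r_b.
        return 1 / (1 + 10 ** ((r_b - r_a) / 400))

    def update(ratings, preferred, other):
        surprise = 1 - expected(ratings[preferred], ratings[other])
        ratings[preferred] += K * surprise
        ratings[other] -= K * surprise

    ratings = defaultdict(lambda: 1000.0)
    for preferred, other in [("song_a", "song_b"), ("song_a", "song_c"), ("song_b", "song_c")]:
        update(ratings, preferred, other)

    print(sorted(ratings.items(), key=lambda kv: -kv[1]))  # best first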


In Finland, we have a mobile app for the national broadcasting system, Yle, it is called Yle Uutisvahti. I can't use it because of the "AI" algorithms ...

...that make me follow obscure keywords and do not let me block sports from the news. ...

When I'm reading a newspaper, the paper interface, remember, I can always skip the sections I don't like.

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
Well, there you have it: Yle Uutisvahti would benefit from combining “traditional” category-based subscription / non-subscription with AI recommendations. At least in your case, perhaps even more generally.


How can one improve and develop the "black box" if there is no knowledge/understanding of the logic?

Kirsi Hantula - Researcher, Alice Labs:
We can start by building a deep understanding of the tensions and problems people currently encounter in their interactions with AI consumer systems. Based on our study, we know that in different use situations, people have different informational needs, as well. For example, if an AI system repeatedly makes the same mistake, users would want to understand why it makes that mistake and what they could do to make the system stop repeating the mistake.

 

Is AI stereotype-free? How can we make sure it is not reproducing our own ideological biases and that it interacts with everyone in an "equal" way?

We recommend watching the video of the Engaging with EverydAI webinar, where we address this.


Stereotypes are nothing more than generalizations. There's nothing wrong with them in general, but as generalizations in the real world, they're estimations.

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
Very true. Some stereotypes may be culturally overused by humans, though, and that is discussed a lot currently… and many don’t want these same stereotypes to be replicated in automated systems.


Aside from data quality, amount and silos, what are the biggest technological limitations in AI solutions? Shallow understanding, brittleness, model update cost?

Watch the webinar video for our full thoughts on this.

Back to top

 

 

AI and Policy

How does AI affect the public policies of a country and municipality? Do you think that in the future, it will be necessary to use AI to make public decisions…

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
AI could help us see alternative solutions to specific questions, but in the end, it should be seen as a tool that helps people make decisions, not one that decides by itself.

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
Depending on what you mean by AI, it is already used in public decisions because AI-like models provide input for decision-makers. Yes, they are just tools.


How do you think we should audit/keep control of AI systems? How do we make sure what happens inside the black box is according to our ruleset and regulations?

Watch back the webinar video to see this addressed.


How can we trust AI systems and their choices if they cannot explain it themselves? e.g. an AI-"judge" in a court of law or driving an autonomous vehicle

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
An AI judge would be bad: who bears the responsibility for the decision? However, an AI finding similar cases for a human judge to look at and compare the current case with would be helpful.

Autonomous vehicles: we should start with limited, easy environments and slow speeds.

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
Trust is empirical: you trust a system if it performs well in tests, just as you trust a Covid-19 vaccine or a numerical weather forecast model that performs well in tests. This is acceptable if you frame AI as a tool, which is what it is. The ultimate responsibility is on the people and organizations who use the tool.


How can AI help us with our challenge with climate change and other environmental issues?

See our response to this in the recording of the Engaging with EverydAI webinar.

 

What is the difference between AI used in Europe and in China?

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
Same technology, but the EU has stricter regulations that will tighten even more soon. As China has more power centralized internally, it applies AI more freely to societal issues.

Kirsi Hantula - Researcher, Alice Labs:
I would highlight two things here.

Firstly, compared to Europe, AI technologies in China are far more deeply rooted in the fabric of people’s everyday lives. Besides the usual AI-powered content or product recommendation systems (i.e. online shopping, social media content recommendation, streaming services etc.), various smart home devices, for example, are much more common amongst the urban middle-class Chinese than in Europe.

Furthermore, social crediting has become a normal part of people’s everyday lives through the widespread use of services, such as Alipay and WeChat Pay. This way, the network of (often invisibly, and sometimes incomprehensibly) connected AI services seems to be even more encompassing in China than in Europe.

Secondly, there are subtle differences in how Chinese and European users of AI solutions judge and view these systems and what they see as their benefits because of cultural and societal differences.

At the level of everyday interactions with AI systems, however (which we studied in the Engaging with EverydAI report), we were surprised to notice that users’ everyday tensions and frustrations when using AI systems were quite similar in the USA, Finland and China.


Can we somehow recommend life choices, even at a product level, that lead to the lowest available emissions? Many people are unaware of these.

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
We could, and it wouldn’t necessarily involve any AI at all.

 

Is it to be expected that the development of AI will be guided by commercial interests and thus by consumption and the interests of large-scale industry?

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
Not necessarily. If you compare AI to some other recent tech fads, like the web or mobile apps, commercial interests are a great driver, but the public sector and non-profits aren’t that far behind. In some cases, they might be even more innovative than companies.

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
Some technologies and practices, like selling digital music, inherently favor large scale, for you have fixed costs of production and almost zero distribution costs. AI has some of that, but currently, the availability of data for complex models favors large scale. However, reality will be more diverse.

Look at the open-source movement, for example, with its vast richness of small-scale projects or Wikipedia. It will be a complex mix of commercial, non-commercial, global, and niches. The current dynamics with global social media companies are somewhat worrisome, though.


We know China and the US are ahead of the EU with AI because of the amount of data they have and maybe a lack of regulations. What is happening in Finland's AI field?

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
Nobody has a clear picture of the whole field in Finland. My impression is that markets are gradually maturing; customers are starting to have in-house capability, at least for buying, if nothing else. We have a strong academic side; it has helped. Our language has a small number of speakers and is structurally different, and that affects the NLP (natural language processing) side.


AI ethics are an important issue to consider. It's a shame that the field has mainly been taken over by people aiming to rig AI to advance their own ideologies.

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
But governments are also acting.


People speaking about a dystopian AI future miss the most relevant threat of AI - giving too much power to systems that actually have a very shallow understanding.

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
Yes. It’s silly how much power people are willing to give away with little evidence of good results.

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
That’s one kind of dystopia. It is also common to project more on AI than there currently is. My worries related to AI are societal: We have lots of technological disruption going on, AI adds to that, and fast change brings its problems. And new technology will bring new ways to govern, organise, communicate, etc. Not all of these are automatically good.

Back to top

 

 

AI and Individuals

How flexible should and could AI be regarding localization: variations between cultures, religions, legislation, standards of living...?

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
Like the localization of any technology, some of those will or would take a lot of work. Specific to AI, localization would also require local data. There’s no reason in principle not to go local; sometimes it will happen and sometimes not, depending on the scale of the solution, markets, etc. Of course, the ideal for human interaction would be to be as local as possible. It translates to trust, familiarity and other good things.

Kirsi Hantula - Researcher, Alice Labs:
Parallel to improving these things at the level of the AI itself, some of the problems that users currently experience (in terms of AI-generated results feeling inaccurate or insensitive to their circumstances) might be addressed by giving them more tools to directly guide or steer the system.


How necessary is it for AI to work on enough personal data? Where is the sweet spot between privacy and personalisation? Nobody wants it to know you better than yourself.

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
To make any kind of recommendations work, we need to give personal data and preferences as input, and there is no going around that. Humans do this, as well: you learn rather fast what your kids want for breakfast. However, I don’t believe in an AI knowing you better than yourself but more like a good shop assistant: “What about this shirt?” Sometimes it works; sometimes it doesn’t. The mistakes are often harmless, but at times the suggestion may have great value.

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
Everything is relative. Your notes and your calendar know parts of you better than you. Otherwise, you wouldn’t need them to tell you what to do next.


Are we getting passive from AI? How can we keep up our ability to be proactive, take our own decisions and stick to our decisions and "sisu" together with AI?

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
I don’t think we are that passive, at least yet. In the study, people tended to bypass AI recommendations and pick their films manually. It’s important, though, to keep AI in the tool role and not just blindly follow it.


Is it possible to have everyday small AI for task automation for everyone? Sharing our uses and having them recommended into our work/life flow?

Kirsi Hantula - Researcher, Alice Labs:
I can’t comment technically here. But from the point of view of our study, I would say that overall, in people’s everyday life, there are recurring use situations where people would want to transfer some tasks to an AI that would take care of them. This would allow them to concentrate on other activities.

Currently, the common problem in these situations is that AI systems often cannot provide satisfying enough outcomes. This forces people to turn their attention to the system, even if they would rather remain passive towards it.


I would like to hear more about what an end-user can do to reflect their acts regarding everyday AI. Wouldn't the only ethical thing be NOT to use those systems? ;)

Kirsi Hantula - Researcher, Alice Labs:
As an individual, it is difficult not to use AI-powered systems since they are already widely adopted across a variety of services, not only by commercial companies but also by public agencies.

As a user, the most important thing is to be aware of the effects that an AI might have on your actions, decisions, and thoughts, and to reflect on when this is ok and when it is not. In situations when it is not ok, users are often clever at finding ways around the system.

 

Are there good examples of how AI helps/guides people to make better decisions? And how do you know a decision is good?

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
How about route planning? I don’t know if that’s actually AI, but the results are highly visible and disputable. If you force the route to change, the system gives you an estimate for your proposed solution, which, of course, is worse than what it suggested initially.

To verify solutions made by AIs, I believe we should invest in giving people data exploration tools, which would help users understand the solution landscape better and assess the solution’s validity.

 

Back to top

 

 

Studying AI

How can I be part of a cross-discipline, cross-functional team dedicated to AI solutions to help the world?

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
If you are a developer or otherwise have a career in one of those “cross functions”, join a company that applies AI. It will gradually commoditize, so the hard part is probably helping the world, not being near AI. If you have a career choice to make, IT is obviously a choice. If you are still earlier in your education, finding fun in math and general familiarity with computers is good.

I guess that there will be less talk about AI in a few years, not more. We will have better concepts on technologies currently called AI, some of them will become too familiar, and hype will cool down. So I’m not sure you are asking the right question. ;)

 

I am starting my master's studies in interaction design this autumn in Malmö. Does Reaktor collaborate with students in projects, work, research or more? Thanks :)

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
It’s complex. As consultants, we don’t know what we will be working on, say, two months from now, so we haven’t offered any master’s study projects, since they are not ultimately in our hands but depend on our customers. However, we have participated as subjects in studies that have been part of thesis projects etc. I guess the best advice is that if you have an idea, contact us and let’s try to find a way to make it work out. Sometimes it succeeds, sometimes not.

 

Back to top

 

 

Theory Of AI

Media and academia tend to present a very simplistic vision of AI, most often the robot. What are your favourite metaphors?

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
Regression is a simplistic metaphor, but it gives the right and realistic frame for many aspects and flavors of AI. For many applications of deep learning, pattern recognition is a good cover term. For some things, it helps to think of alien senses that are tuned for artificial worlds of text, internet, etc., instead of sensing light, sound, etc.

 

What is your definition of AI? As you said, linear regression can be said to be AI today. I don't believe machine learning algorithms are so much of a black box as you say.

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
It is true that simpler models are less black box, and linear regression is also used for causal inference, so it can definitely be interpreted if the setup is right. Comments about a black box refer to more complex models, mainly deep learning. I don’t have a definition for the current meaning of AI but would like to write it in quotes, “AI”, to honour the conceptual shift that has happened in the last 10–20 years. ;)

There is some research on explainable AI that tries to make decisions or other aspects of AI more transparent. To a degree, that can be done. A model that recognizes birds can “explain” that the beak and upper chest are critical for recognizing this particular species. But those explanations are just approximations. Ironically, beginner (human) ornithologists rely on explainable rules and details, while experts rely on overall appearance. This is a common pattern with all human expertise.
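
As a purely illustrative sketch of such approximate explanations (our example, using scikit-learn's permutation importance; not a method discussed in the webinar), a model can be "explained" by measuring how much shuffling each input feature hurts its predictions:

    # Hypothetical sketch: approximate explanations via permutation importance.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    data = load_iris()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    # How much does accuracy drop when each feature is shuffled?
    result = permutation_importance(model, data.data, data.target,
                                    n_repeats=10, random_state=0)
    for name, score in zip(data.feature_names, result.importances_mean):
        print(f"{name}: {score:.3f}")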


How optimistic are you about general-purpose AI? So far, all we have are simple prediction models that don't have any real independent intelligence.

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
I’m personally pessimistic about animal- or human-level GAI. It will most likely eventually happen, but I don’t expect it in the near future (20–50 years).

We don’t understand all aspects of biological systems; deep learning mimics them only superficially. Innate implicit pre-knowledge may have an unexpectedly big role in organisms. Our theories of combining conceptual and non-conceptual cognition are immature.

Our current algorithms require vast amounts of data and, despite apparently advanced miniaturization, too much computing power (in terms of energy, at least). Our AI shines in restricted, formalized domains, while humans have huge implicit, cultural knowledge acquired over tens of years and, evolutionarily, over thousands or millions of years. AI and humans will be complementary for a long time.

That said, intelligence is poorly defined, and technological development is fundamentally unpredictable. It is likely to take us into a place where the concept of GAI will look somehow naive.


After playing with OpenAI's GPT-3, I feel amazed and wonder how many aspects of consciousness we have already achieved. What do you think?

Karri-Pekka Laakso - Lead Designer (interaction), Reaktor:
I see it as surprisingly convincing but still a shiny gimmick. A better version of the old Eliza therapist living in Emacs, for example.

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
Yeah, GPT-3 looks amazing but lacks deeper representations of the world. You can see it in the errors it makes. Consciousness can mean either certain organizing principles of nervous systems related to attention and central control or the experience of being aware and qualia.

The latter is a metaphysical question and belongs mostly to philosophy, and I believe it will partly stay there for a long time, if not forever. The former falls into the area of psychology, cognitive science and AI. GPT-3 does have something like attention internally, but otherwise, not much on this front.


How would you construct an evolving literature acquisition AI for libraries?

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
Literature acquisition is an open, real-world problem — AI doesn’t solve such issues; we would first need to formalize parts of it to be solvable by AI. Doing that is not trivial and requires domain expertise that I lack. But natural language processing (NLP) and large language models (BERT, GPT, etc.) will likely play some role in the near future in the discoverability, classification, and comparison of content. They may be used behind more mundane, traditional services, so you don’t even know you are using them.
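
As a purely illustrative sketch of the discoverability role mentioned above (our example; the model name and sample catalogue are assumptions), language-model embeddings can rank catalogue descriptions against a free-text query:

    # Hypothetical sketch: content discovery with language-model embeddings.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    catalog = [
        "A field guide to the birds of northern Europe.",
        "An introduction to machine learning for librarians.",
        "A history of printing and the book trade.",
    ]
    query = "practical AI methods for library professionals"

    # Rank catalogue entries by cosine similarity to the query.
    scores = util.cos_sim(model.encode(query, convert_to_tensor=True),
                          model.encode(catalog, convert_to_tensor=True))[0]
    for score, text in sorted(zip(scores.tolist(), catalog), reverse=True):
        print(f"{score:.2f}  {text}")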


What is the definition of intellect? :-) Intellect is just about combining contents in the mind already. Computers can do that. :-)

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
I agree that intelligence is hard to define, and the current de-facto definition of AI is very vague.


Current AI models can't understand the world. That doesn't mean different models won't be able to.

Janne Sinkkonen - ‎Senior Data Scientist, Reaktor:
True.

 

Back to top