Project Melo

3 weeks, Fall 2018

Collaboration with Jeongmin Seo, Steph Chun and Haewan Kim

Participated in research, experience design, concept development and testing

Speech responsive system

Smart assistant

Conversational interface

Project Melo is a personal assistant that learns about you through your conversations with it. The more you share with your assistant, the more it can do for you.








When was the last time you used a voice assistant?


Today, we use voice assistants mainly when we need hands-free interaction, mostly for task-oriented commands. In the long run, however, we aim to expand their use beyond these. Our team found that users' current motivations for using voice assistants are weak. In other words, we asked: how might we increase the usage of voice assistants?




What if personal assistants actually feel more personal?

We hypothesized that if smart voice assistants were more personal and responsive to MY needs and interests, we would be motivated to use them more often!

We conducted three types of research to guide our design decisions.



1

Diary Study: How do we use voice assistants?



To understand how current voice assistants are used, we each used a voice assistant (Google Assistant or Siri) for a week and logged our experience in a diary.

From this experience, we identified when we were motivated to use current voice assistants and discovered that they offer limited opportunity for personalization.




2

Contextual Inquiry: How do we start a conversation?



We conducted three rounds of conversations with strangers to investigate how people talk to each other when they meet for the first time.

Then, through post-conversation interviews, we gained insight into when people felt comfortable or awkward, paying particular attention to the colloquial techniques we often overlook and that current voice assistants lack.




3

Role play: How should we converse with an assistant?



Using our drafted script, we conducted a role play over the phone, with one person acting as the smart assistant and the other as a user trying it for the first time.

We wanted to learn whether our scenario was an effective personalization process and analyze user responses to the proposed assistant experiences.

Design direction

With our refined scenario, we designed an interface around these concepts: a personal assistant with a visual character that interacts with the user conversationally and reflects on each interaction to adapt its look and feel.

Meeting your assistant

First impressions matter. Meeting your personal assistant should feel as organic as getting to know a new person. Based on our research, we drafted a first script for the initialization process. Our scenario begins with a short introduction to set clear expectations for the onboarding process. Then the assistant asks for the user's name, as if the two were meeting for the first time.






Developing the assistant

Current voice assistants feel like branded products. In Project Melo, every assistant looks, talks, and behaves differently, reflecting how the user interacts with it.



Our research showed that facial expressions and gestures are crucial to a quality conversation, so we added a visual character for the assistant. Its name is set through the user's own words, and its personality adapts to how the user talks to it, strengthening the user's attachment to the assistant through a more conversational relationship.

Providing useful and personal suggestions

Instead of asking about the weather and receiving a number, what if your assistant could tell you more? For instance, a Melo assistant can analyze your photo album to show what you wore in similar weather, offering a more valuable, contextualized recommendation.

Features for a seamless conversation

To work around the technical limitations of voice assistants, we added details that help the user hold a comfortable conversation. We added an edit button for the parts of the conversation where accurate recognition is essential. We also added an "I don't want to talk about this" button so the user can make a comfortable escape whenever they feel uneasy discussing or providing certain information.





Testing


Using the Wizard of Oz technique with our high fidelity prototype, we conducted three rounds of testing to evaluate the success of our design.

The goal of testing the initialization process was to determine whether our solution increases interest and motivation for first-time users, and whether it influences the way they talk to the assistant.

What worked


“I loved how it has a face. There is something I can talk to now.”

“Calling the name several times… felt like bringing this thing into existence.”

The personalization process received very strong responses. Participants' engagement notably increased when they observed the assistant's visual responsiveness, which further deepened their attachment to the assistant.



“I started to get into the conversation because there was enough hint to understand that it was responsive.”

It was interesting to observe participants responding to the assistant even without an explicit need to do so. They began asking the assistant questions and expressed micro-reactions during the conversations such as "mhm," "cool," or "yeah," which were uncommon in our interactions with existing voice assistants.





What could have been better


“At the end of the conversation I would start interacting more to explore the options it suggested”

“By the end, I kind of forgot what the point of this process was. It was clear that I could personalize it but I wasn’t sure what it can do”

One piece of feedback we received was that the assistant's function didn't come across well by the end of the process. Given more time for this project, we would add steps at the end for the user to practice and explore its features; the current initialization process doesn't give the user enough opportunities to learn the assistant's capabilities.