UX Case Study: Google Nest Hub Home Control

“Hey Google, I just want to turn off my lamp”



Google Nest (previously called Google Home) is Google's line of smart speaker products, providing a personal assistant through voice. The Google Nest Hub is a series of smart speakers with an attached screen, allowing users to see information and interact with a visual interface through the touch screen. Google Nest also provides access to control your smart home devices.


Voice Based Control

Although Google Nest promotes the use of voice commands, executing certain tasks by voice can sometimes take much more effort. Compared to something as simple as flicking a light switch in a traditional house, using a voice command to turn a lamp on or off takes more time and effort. That said, voice commands are helpful in many cases, especially for users with physical disabilities or injuries. With this in mind, I tried to analyze some painful scenarios for general users:


1. Mute switch is on

Google Home uses a physical switch to turn off the microphone for a higher degree of privacy. In the muted state, the microphone is mechanically disconnected (no electricity goes through the mic), so there is no way it can catch any voice command, not even a command to unmute the device. The scenario in which the user flips the mic back to the unmuted state means the user is already in front of the device, so it is impractical to use a voice command again for a simple task such as turning a lamp on.


2. Shhh... somebody is sleeping.

In many cases, users have to stay silent even though they want to execute certain tasks through Google Home devices. For someone living in a shared or small space, there are occasions when they do not want to use their voice to execute a task. Providing people who often encounter this scenario with an alternative input method would reduce their pain in using smart speaker devices.

3. Accent errors

Accents have been one of the greatest problems in voice recognition. Users who are not native English speakers experience multiple command errors when first using Google Nest, since the machine learning model has little to no data from which to learn the user's accent. After an error occurs, it is noticeably hard to teach the machine the correct action for a given command, especially for general users who do not know how the algorithm works. This creates more pain, since users have to use their voice again or tap the only switch on the screen to undo the error.


Ideation

From the issues stated above, I further analyzed the user journey of the established Google Home interface and ideated a new user journey that would reduce the pain for general users. During the ideation process, these are the considerations I took into account:

  1. Learned behavior of flicking light switch

  2. Minimizing clickstream for user

  3. Both voice command and non-voice command have to be available

  4. Creating a more efficient way to feed the algorithm with the least pain for users


Solution

After taking some time to think, I decided on a dedicated 'one-swipe-away' page for home control as the solution. A 'one-swipe-away' page would provide a quick and efficient alternative for controlling smart devices, and here's why:

  1. In the existing interface, the home control page takes a few swipes from the home screen to access, and the control buttons are all nested inside the 'rooms' button. A dedicated 'one-swipe-away' page lets users quickly control, and even monitor, their smart devices.

  2. Once users make a mistake, it is easier to correct the action by navigating the home control page directly, rather than being stuck with the single button on screen.

  3. By correcting the error command instantly, the user feeds the correction directly to the algorithm, allowing it to learn from the error and deliver a better experience sooner.
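The correction loop in point 3 can be sketched as a simple feedback log. This is a hypothetical illustration, not a real Google API: the names `CorrectionEvent`, `log_correction`, and the action strings are my own. The idea is that each on-screen correction pairs what the recognizer heard with the action the user actually wanted, giving the model a labeled example to learn from.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: CorrectionEvent and log_correction are illustrative
# names, not part of any real Google smart-home API.

@dataclass
class CorrectionEvent:
    heard_command: str      # what the recognizer thought the user said
    corrected_action: str   # the action the user actually tapped on screen
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_correction(events: list, heard: str, corrected: str) -> CorrectionEvent:
    """Record an on-screen correction so it can later be used as
    labeled feedback for the speech recognition model."""
    event = CorrectionEvent(heard_command=heard, corrected_action=corrected)
    events.append(event)
    return event

# Example: the recognizer heard "turn on the lamb"; the user then
# tapped the lamp toggle on the home control page.
events = []
log_correction(events, "turn on the lamb", "living_room.lamp.on")
```

Logging the pair at the moment of correction is what makes the touch interface valuable to the voice model: the user's tap is, in effect, a free ground-truth label.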

Wireframing & Predesign


Final Prototype


Play With Live Prototype

