Google I/O Reactions: What is New with the Google Assistant
By Jenn Bailey
Contents
- Meeting the next generation Assistant
- Enhancing speed
- Demoing the next generation Assistant
- Adding more personalization
- Enabling a new driving mode
- Learning all about the Google Assistant
- Enhancing Existing Content for the Google Assistant
- Utilizing the Assistant in Your Android App
- Building Interactive Experiences with Interactive Canvas
- Providing an SDK for the smart home
- Where to go from here?
I was very excited and honored this year to be chosen to attend the 2019 Google I/O conference at the Shoreline Amphitheater in Mountain View, California. One of the technologies I was most excited to hear about was the Google Assistant. The Google Assistant is a virtual assistant created by Google that has grown to support 19 languages in 80 countries. There are over one million Actions for the Assistant, and it is available on over a billion devices. Throughout its evolution, the Assistant has intrigued me greatly because it allows me to interact with my devices using only my voice. I am a busy mom and professional who is on the go a lot. I have Google Home devices throughout my house and can talk to Google from every room, which has helped me in many ways to be a happier and more productive person. When I am not within range of a Google Assistant-enabled device, I find myself calling out ‘OK, Google’ in vain :].
Meeting the next generation Assistant
Enhancing speed
As I settled in for the Keynote at Google I/O, I was not disappointed by the new announcements about the Assistant. One of the biggest challenges I experience using the Google Assistant is that it needs internet connectivity to understand what I’ve said to it. This can be particularly frustrating when the request doesn’t actually require connectivity, e.g., setting a timer. Currently, processing speech for the Assistant is very complicated: It involves several machine learning models, including one to map incoming sound bites to phonetic units, a second to assemble those units into words and a third to predict the sequence of words. These models require 100 gigabytes of storage, along with network connectivity. Google made the groundbreaking announcement that it was able to shrink them down to half a gigabyte, small enough to store locally on the device. This means the Assistant can process speech even in Airplane mode, increasing its speed up to 10x. I was ecstatic to hear about this.
Demoing the next generation Assistant
After this exciting announcement, the Keynote segued into a demonstration of the new, faster Assistant, which was quite impressive. You can see this demonstration here. The demonstration shows the Google Assistant rapidly handling back-to-back commands, including searching for specific photos, ordering a Lyft, setting a timer, taking a photo, checking the weather and various other requests. I was impressed that “Hey, Google” only had to be said once. The Assistant was also able to navigate photos and check on a flight time while responding to a text message. The ability to multitask using the Assistant is greatly improved, and the Assistant can now handle more complicated speech scenarios, such as composing and sending an email. I can only imagine how different it will be to utilize the next generation Assistant without the need for a network round trip.
Adding more personalization
The Google Assistant will be more personalized in the future with features such as ‘Picks for You,’ which chooses recipes on a personalized basis. This utilizes a technique called ‘reference resolution,’ which allows the Assistant to understand phrases such as ‘mom’s house.’ Obviously, this phrase would most often refer to someone’s mother’s house. However, it could also be the name of a grocery store or a restaurant. By using personalized reference resolution, the Assistant can make associations such as this one.
The next generation Assistant also has the ability to set personalized reminders. As a mom, I think this will be a wonderful addition to our household: I’ll be able to remind my teenager of things when I’m not around. Lastly, it was exciting to hear that you no longer have to say ‘Hey, Google’ to stop alarms!
Enabling a new driving mode
Earlier this year, the Assistant was added to Google Maps, and it was announced at I/O that the Google Assistant will now work with Waze as well. In the future, there will also be an enhanced driving mode for the Assistant. Its dashboard will bring the most relevant activities to the forefront, displaying things such as the option to navigate to a destination for an upcoming appointment in your calendar, or podcasts you often listen to at certain times of day during your commute. You will also be able to send texts and answer phone calls without leaving navigation mode.
Are you excited to try it? The next generation Assistant will appear first on the Pixel 4, which is rumored to be available in October of 2019 :].
Learning all about the Google Assistant
After hearing all the announcements in the Keynote, I was excited to see some of the new features for developers in the talks and the Google Assistant dome.
There were many great presentations on how to get started with the Google Assistant. These talks were a great overview for those who are new to the Assistant, and they broke down the different groups of people who may want to utilize it:
- Content owners and web developers – Templates and markup to enhance search, including how-to tutorials and an FAQ feature.
- Android app developers – App Actions and slices.
- Innovators in the conversational space – Conversation Actions with Interactive Canvas for building experiences on smart displays.
- Hardware developers – The Smart Home SDK.
Enhancing Existing Content for the Google Assistant
The talk Enhance Your Search and Assistant Presence with Structured Data went into detail for web developers about how to use structured data and announced two new content types that it now supports. Structured data makes it easier for developers who have existing web content to create rich search results from that content without duplicating it across various platforms, which can help a developer reach a wider audience. Structured data already supported podcasts, recipes and news; this year, it adds how-to and FAQ templates. For people who have great how-tos on YouTube, video objects can be used with the how-to template, and creating a how-to template is as simple as filling in a Google Sheet. The how-to guided experience looks great on a smart display.

This talk also demonstrated how to use the Actions on Google Simulator and Actions on Google Analytics to view and test structured markup as it is being developed. This can be a simple way to bring existing content to life in Google Search. This is all great information for web developers, but what about Android developers? The most exciting talk was yet to come.
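To make the FAQ template a little more concrete, here’s a minimal sketch of what FAQ structured data can look like: a JSON-LD block embedded in a page’s HTML using schema.org’s FAQPage type. The questions and answers below are placeholder content I made up for illustration, not markup from the talk, and you’d want to check real markup against Google’s structured data documentation and testing tools:

```html
<!-- FAQ structured data: a JSON-LD block added alongside the page's
     normal HTML. The Question/Answer text below is placeholder content. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Does the Google Assistant work offline?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "The next generation Assistant stores its speech models on the device, so it can process speech even in Airplane mode."
      }
    },
    {
      "@type": "Question",
      "name": "Which devices get the next generation Assistant first?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "It is expected to appear first on the Pixel 4."
      }
    }
  ]
}
</script>
```

Because the markup lives alongside your existing content, the same page keeps serving regular visitors as before while Search and the Assistant pick up the structured version.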