Today’s guest post is from Greg Raiz, chief executive of Boston-based Raizlabs, a leading mobile design and development firm.
I woke up July 3rd excited: I had a golden ticket to Google NYC to get my hands on Google Glass, the new head-mounted technology from Google. Half Geordi La Forge, half Android phone strapped to your head. Going into the experience, I have to admit, I was a skeptic.
My skepticism stemmed not from the potential of portable technology but from the question of social acceptance. Technology can only work if we can work it into our lives. Every new technology starts off as something strange; even the now-common iPhone started its life as a not-so-common over-the-shoulder battery pack and antenna. Acceptance takes time and iteration.
One thing Google got right is the understanding that the initial setup experience needs a helping hand. The Glass device isn’t being shipped; instead it can only be picked up at a few locations, where Google has trained staff to take new users through the experience. The technology is still very much for early adopters, and the technical hurdles are easier to conquer with a guide. While I’m fairly technical, setting up a Bluetooth connection, wireless tethering, and a Google+ account on a device with no keyboard or mouse presents some unique challenges. The Glass guides spend an hour with each “Glass Explorer” explaining the UI, setup, fitting, and use of Glass.
The out-of-box experience is great, and the care put into the packaging design is clearly visible. Google is paying attention to the details, and the on-boarding experience is no exception.
How it feels
Glass feels like wearing glasses or sunglasses. A small semi-translucent cube sits above and to the right of your right eye. The glass itself doesn’t inhibit normal vision, and I was able to look around easily. To see the Glass interface you have to look slightly above your typical eye-line. The frame and design are fairly lightweight and wouldn’t cause discomfort if you wore them for longer periods of time. Glass isn’t obviously compatible with prescription glasses, and of the 20 or so people I’ve given demos to, the ones with prescription glasses had the most trouble.
Beyond the physical feel of the device, the main feelings are emotional. The device is conspicuous-looking, and despite its compact size it’s very obvious you’re wearing it.
Leaving the Google offices I had an immediate mixed feeling of curiosity and embarrassment. I was curious to play with Glass but felt absolutely conspicuous talking to myself while walking down a New York street. While I’m not the only crazy person walking around NYC talking to myself, this initial feeling subsided but didn’t go away. This is a social-convention issue and a long-term concern for any sort of mass adoption. Social norms have kept technologies like Bluetooth headsets acceptable only in certain situations. The Glass community has felt this as well, and there is already a term for this social faux pas: “Glassholes.” How quaint.
Having worn the device around town, in the office, and at home, I find the common reaction is “Creepy.” I believe this has to do both with the obvious technology and with the fact that you make less eye contact. When I wore the sunglass visor attachment, people noted that it looked less creepy.
The app experience is, frankly, pretty terrible. I say this not as an insult to the Glass team, since the current technology is absolutely amazing, but in absolute terms of where the software is today versus where it would need to be in its final form. The technology is alpha and the functionality is fairly limited. Glass doesn’t do one thing well; instead it does a lot of cool things poorly.
The core scenarios demoed involve taking photos and videos. Cool. Making a call or sending a text. Also cool. The problem is that end-user expectations may diverge in the details.
I expect that I can take a photo and email it. Nope.
I also expect that I can message anyone on my Google Apps for Domains account. Nope.
I expect third party apps to add verbs to the grammar of what the speech engine can understand. Nope.
I expect to be able to compose messages for Twitter/Facebook. Nope.
Glass comes with a number of apps, or cards, built in, and additional cards can be added through a Glass web portal. Apps on Glass don’t behave like apps from traditional app stores, and it may be better to describe them in terms of cards. These include:
— Making and receiving a phone call
— Using Google hangouts
— Recording a 10 second video clip
— Checking weather
— Searching Google for something
— Viewing nearby places
— Getting directions to a location (currently only if you have an Android phone)
— Viewing tweets and mentions from Twitter
— And more ...
New apps currently insert cards into your timeline. Presumably future apps will persist beyond just cards.
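At launch, these third-party cards were pushed through Google’s Mirror API: a web service rather than on-device code, where your server POSTs a JSON timeline item and Google delivers the card to the wearer. As a rough illustration of that model, here is a hedged Python sketch that only builds the JSON body a Glassware service would send; the endpoint is the real Mirror API timeline URL, but the token handling is omitted and the `make_timeline_card` helper is my own invention for illustration.

```python
import json

# Mirror API timeline endpoint (real apps POST here with an OAuth 2.0
# bearer token; this sketch only constructs the request body).
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def make_timeline_card(text, notify=True):
    """Build the JSON body for a simple text card (illustrative helper)."""
    item = {"text": text}
    if notify:
        # A notification makes Glass chime when the card arrives.
        item["notification"] = {"level": "DEFAULT"}
    return json.dumps(item)

body = make_timeline_card("Hello from a Glassware service")
print(body)
# A real service would then send:
#   POST https://www.googleapis.com/mirror/v1/timeline
#   Authorization: Bearer <token>
#   Content-Type: application/json
```

Note the consequence of this design: a card can appear in your timeline without any app running on the device, which is why Glass “apps” feel more like feeds of cards than installed programs.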
The Beta Dancing Bear
While Google doesn’t officially call the product a beta, it’s clear that it’s early. Google in general has a history and culture of testing products in plain sight. Gmail, Android, Google Wave, and others were released to the wild well before the technology was polished. This is in stark contrast to companies like Apple, which keep things under wraps until the unveil.
I’m reminded of Alan Cooper’s book, The Inmates Are Running the Asylum. Cooper describes the attraction of a dancing bear at a circus. What amazes the audience isn’t that the bear dances well. In fact, it’s clear that the bear dances quite poorly; what’s amazing is that the bear can dance at all. In much the same way, Glass is an amazing dancing bear.
The speech recognition on Glass uses a finite grammar of verbs to invoke actions. “Snap a photo” is not OK. “Take a picture” is OK. This is subtle but important, as a flexible grammar would allow a more natural interaction. Google Voice Actions has been a part of Android since Eclair and has a similarly strict grammar, while Google Now is very fluid and natural. For Glass interactions to work well, they have to be fluid.
Currently, third-party apps appear to add timeline cards, but it’s not yet clear whether they can extend the grammar. For example, the Twitter app lets you see mentions in your timeline, but it doesn’t seem to let you compose a new tweet via the “Ok, Glass” syntax.
TL;DR: Cut to the chase, what do you think?
It’s too early to tell. If you’re curious, consider it an expensive toy. It will improve, and there is evidence that it’s already improving. The Glass Explorer community shows a number of people exploring hacks, mods, and suggestions for the Glass team.
My initial impression is that the technology will find a foothold in professions where hands-free interaction is critical: repair and service industries, surgeons, field techs, and so on. Will the technology become mainstream? Maybe, but only as the technology disappears. I mean this in two ways. The physical technology has to vanish into a contact lens or into the bezel of a stylish pair of (sun)glasses, and the user interface has to vanish so that using the technology is as easy as speaking your request.
One of the nicer features of Glass is its ability to instantly take a photo, and I hope this type of camera performance is integrated into future mobile phones, as it changes the way you take pictures and videos. We’re excited to be early explorers of this new technology and will experiment with the app-development side of things to see what we can make it do.