With stunts such as tech evangelist Robert Scoble showering with Google Glass, the device has initially been viewed through the lens of privacy concerns and gadget novelty. However, it could deliver tangible productivity and quality gains in more serious domains.
Indeed, in technology, 2013 will be remembered as the year Google Glass captured the collective imagination. Whilst smartphones and tablets have rapidly overtaken the PC as the primary interface for most of the information we consume today, these new devices might be an interim step towards a new age of deeper human-machine interaction.
At first sight, it is tempting simply to draw parallels between the functionality of Google Glass and that of a smartphone. However, a review of some of the design insights behind the device reveals a potentially more powerful platform for human-machine interaction. Glass is, in effect, a democratized version of the optical head-mounted display (OHMD) long familiar to fighter pilots, and it could lead to more profound changes in the way people make decisions in real time.
Sergey Brin, Google’s co-founder and lead proponent of Glass recently commented that his “vision when they started Google 15 years ago was that eventually you would not have to have a search query at all – you would just have information come to you when you needed it. And now 15 years later this [Glass] is now the first form factor that can deliver that vision”.
To date, much of the commentary has focused on the public use of Google Glass and the debate over the perceived erosion of privacy caused by the built-in camera. Many question what the personal benefits of wearing such a conspicuous device will be, particularly if it draws irritation from those nearby. Another angle emerging through beta testing, however, is how Glass might aid productivity and quality in the work environment, as it analyses images and delivers information on demand in front of the worker's eyes to assist with the task at hand.
Following the developer version released out of Google X (Google's "stealth lab") to a select group of 2,000 developers in 2013, a consumer launch is now pegged for late 2014 / early 2015.
The hands-free capability, combined with access to potentially limitless online data, is expected to deliver significant productivity gains for individuals and businesses once the technology matures. Just as fighter pilots use an OHMD to acquire information and react in a fraction of a second, one can imagine the improvements when employees do the same to serve customers and make decisions seamlessly, because they will no longer need to take their eyes off the action to check a dashboard or a lengthy manual.
It is from that perspective that a new study from Cognizant examines the use of Google Glass by insurers. It envisages how the device could soon transform how insurers work and engage with customers, from claims adjusters and risk engineers connecting in real time with the home office, to service reps guiding customers through the claims submission process, for example with a step-by-step video or an audio recording of the incident. Rather than trying to recall the scene of an accident after the fact, consumers could detail the exact spots where the cars involved were damaged.
The document details four realistic use cases for Glass in the insurance industry:
- Improving Productivity and Efficiency of Claims Adjusters when appraising accident sites or automobiles.
- Improving Productivity, Efficiency and Throughput of Risk Engineers when inspecting properties and conducting risk assessments.
- Improving the Claims Submission Experience for Customers who could obtain guidance through the claims submission process, including which photos are most important to take.
- Improving Direct Visibility into Aggressive Driving.