Google got a lot of people thinking when it unveiled Project Glass and its promo video showing how Google glasses of the future could work. It was cool. Plain and simple.
Of course, as is so often the case, things aren’t really that simple. It was a concept video, and few truly know what Google’s glasses can do in their current state, or how close Google really is to the fictional reality portrayed in the video. Experts in the field of augmented reality have expressed a fair amount of doubt, though there is still plenty of excitement coming from them as well.
AR technology concept broker/analyst Marianne Lindsell tells WebProNews, “Like the ‘Stark HUD’ concept (produced, as far as I can see, as a sort of teaser for the Iron Man II film) I do suspect that the Google Project Glass video has a strong ‘Hollywood’ element.”
“I would guess that Google are both testing the market and managing expectation,” she says. “However, also like Stark HUD, there has clearly been some use of technology in the production – and in the case of Project Glass, the tech/Hollywood ratio is, I suspect, much higher, if less than 100%.”
“The least realistic parts of the Google Glass video clip in my opinion are the field of view (a large FOV is needed – but can such a small device provide it?), the brightness (possibly – but there are some good techs out there), instant responsiveness, and to some extent the (presumption of?) excellent registration (which many AR concepts depend on, but Google have cleverly side-stepped in the clip by largely avoiding such concepts).”
“Focusing at an appropriate distance is possible (I have seen it) – but not in such a tiny piece of hardware (yet!),” she adds. “Even good registration is possible in some situations – but any specs will be at the mercy of smartphone-like inertial and magnetic sensors (compasses are notorious), unless it can take its cues from the surroundings by image recognition and analysis (which some techs already do surprisingly well).”
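Her point about registration deserves a closer look. Glasses that anchor graphics to the real world from inertial and magnetic sensors alone have to reconcile a gyroscope that is smooth but drifts with a compass that is absolute but jittery. Below is a minimal sketch of the classic complementary-filter approach to that blending; the function name, parameters, and blend factor are illustrative assumptions, not details of any shipping AR product.

```python
def fuse_heading(prev_heading_deg, gyro_rate_dps, mag_heading_deg,
                 dt_s, alpha=0.98):
    """Complementary filter (illustrative sketch): trust the gyro
    short-term (smooth but drifting) and pull slowly toward the
    magnetometer heading (absolute but noisy -- compasses are
    notorious, as Lindsell says)."""
    # Integrate the gyro rate for a smooth short-term estimate.
    gyro_heading = prev_heading_deg + gyro_rate_dps * dt_s
    # Compass correction along the shortest angular path.
    error = (mag_heading_deg - gyro_heading + 180.0) % 360.0 - 180.0
    # alpha near 1.0 means the compass only nudges the estimate.
    return (gyro_heading + (1.0 - alpha) * error) % 360.0

# e.g. heading 90 deg, turning at 30 deg/s, noisy compass reads 95 deg:
print(fuse_heading(90.0, 30.0, 95.0, dt_s=0.02))  # ~90.69
```

The image-based cues Lindsell mentions would slot into the same structure: another absolute reference correcting the drift, just a far steadier one than a magnetometer.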
Some Things To Consider
Lindsell highlighted some very interesting points about the Google Glasses in a comment she left on another WebProNews article. I’ll repost that comment here:
The glasses -concept- is definitely possible (as is the head tracking). I have seen a number of products that convince me of that – but the sleek designer package probably isn’t (yet).
There are usability thresholds in many areas that such a product will need to meet to be truly useful:
1) Field of View – the Google Glass product seems way too small to provide a useable FOV (no-one is yet aiming high enough here)
2) Brightness – a huge dynamic range is needed – think about readability on a sunny day – and brightness takes power
3) Exit pupil – an optical engineering parameter that needs to rate highly or the slightest jiggle of the glasses on your face will rob you of the display
4) Focus – optics will be needed to focus the display at a useable distance
5) Transparency – too opaque and the readouts block out your view (mind that lamp post!) – too transparent and you can’t make out what the marker is saying
6) Zonation and format – you probably -never- want any readouts to appear within your central view area – designing them to appear in the optimum place on the periphery is vital. No large windows please! – prefer conformal indications and markers.
7) Probably more important than all of the above will be the off/standby switch – the default position should be standby – with a quick and easy way to switch ‘on’ when required
8) Responsiveness and Registration – such a device will be -very- sensitive to delays (see the latency sketch after this list). A note for OS suppliers!
9) Driving – special case – needs an even more safety-oriented (and accredited) design – but by no means impossible – think HUDs in fast jets
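To see why item 8 is so unforgiving, consider the back-of-the-envelope arithmetic: any end-to-end delay between head motion and display update smears a world-anchored marker off its target. The figures below are illustrative assumptions, not measurements of any real device.

```python
def misregistration_deg(head_rate_dps, latency_ms):
    """Angular error of a world-anchored marker: during the system's
    end-to-end latency the head keeps turning but the image does not."""
    return head_rate_dps * latency_ms / 1000.0

# A casual head turn (~100 deg/s) with 50 ms of latency leaves a
# marker about 5 degrees off target -- more than twice the width of
# your thumb held at arm's length.
print(misregistration_deg(100, 50))  # -> 5.0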
When someone, let’s assume Google for now, first clears all of the above hurdles, then we may have a useable product, although you may not be as keen on it when you see how big the packaging is.
I’m not quick to believe that Google’s sleek, small package is possible. Even then, I am assuming that the device will need to be connected to your smartphone.
Of course it’s always possible that the Google device uses a laser to project the display using one of the eyepieces. That -might- allow a smaller package.
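The optics behind that guess (and behind the focus point above) come down to the thin-lens equation: place a microdisplay just inside the focal length of an eyepiece lens and the eye sees a virtual image floating at a comfortable distance. The focal length and image distance below are hypothetical numbers chosen to make the arithmetic plain, not Google’s specs.

```python
def display_distance_mm(focal_mm, virtual_image_mm):
    """Thin lens: 1/f = 1/d_o + 1/d_i, solved for the microdisplay
    (object) distance d_o. A virtual image has a negative d_i."""
    return 1.0 / (1.0 / focal_mm - 1.0 / virtual_image_mm)

# A hypothetical 20 mm eyepiece focusing the readout at 2 m:
print(display_distance_mm(20.0, -2000.0))  # ~19.8 mm
```

In other words, the display would sit a fraction of a millimetre inside the focal plane; the geometry is straightforward, and doing it in a frame that slim is the hard part.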
The concept of course remains valid, and the gauntlet is well and truly thrown down to all major players, to overcome the challenges.
As for all the different things such a product would be useful for – I submit that we have only scratched the surface of AR as a whole.
Who would have imagined the WWW when first connecting two computers together? (With due credit to Mr Berners-Lee.)
AR is a whole new way of teaming technology with people. For that, the technology needs to be -really- people-friendly!
“Many of these parameters will have a threshold level the tech must achieve in order to be useable and acceptable to the consumer market,” Lindsell told us in an email. “I am not about to nail my colours to the mast on exactly where to call these levels, but suffice to say that whilst many products out there have some way to go, some of them are, as far as I can see, showing signs that they may get there. This is why I think there may be some real tech behind the Google Glass Project. What we don’t know of course is how far along Google are yet. I think the clue is that it is far enough for them to test the market and attempt to manage expectation.”
So What About Those Contact Lenses?
We recently looked at a fascinating presentation one of the Project Glass engineers gave at Google’s Solve For X event, in which he spoke about using technology similar to Project Glass’ in contact lenses (since then, Google has hinted at the technology for prescription glasses as well). We asked Lindsell if the contact lens approach would eliminate any of the doubt some experts have expressed about the technology Google is using.
“Probably not,” she says. “Of course there are a few universities (and even Microsoft) actively researching electronic display contact lenses, but it is still early days. There are significant hurdles in terms of how to power them, and even greater ones in terms of how to focus the image at a suitable distance.”
“Producing a picture matrix with sufficient resolution, over a sufficiently wide FOV, is also a major challenge, and although I can’t speak for ‘hidden’ projects, I am not aware that we are even within sight of the right ball park yet (apologies for mixed metaphors),” she continues. “But then again – electronic focus is possible (I have seen it) – though not in a miniature package. Contact lenses -may- seem like they would help with the FOV and form factor problems, but in reality I think they would have to solve those problems, in miniature, first. I think the jury is out on when contact lenses may be able to deliver AR (though I’m thinking 10 years+), although I might predict that in the interim electronic (non-AR) contact lenses may find use as a health sensor.”
We may not know how much of what has been presented in Project Glass is really feasible at this point, but Google’s promo video has clearly generated a lot of enthusiasm. We asked Lindsell if she expects a lot of excitement and involvement from developers as a result.
“I think this is where Google have really scored,” she says. “People sit up and listen when Google speak. It is my firm hope that they will be able to market an attractive product before this interest dies down. And here’s the rub – truly useable AR specs will require -a lot- of engineering, and this needs funding, which means market interest. There’s a chicken and egg situation here – the market is only interested in what is realistically possible (hence your own interest I suspect) – but even organisations with the ability to fund development need to prove there is a strong demand to release those funds, as well as a sense that the end product is truly feasible.”
“There may be some hope,” she adds. “I have seen demonstrations of many existing AR specs technologies first hand (including Vuzix, Laster, Trivisio, BAe Systems and a few others) and although I have yet to see a single system meet what I might call a people-friendly acceptability factor, I have seen the current state of development of some of the component technologies.”
“This is why I think that AR specs will be possible,” she says. “What I am far less sure about is the final form factor – but even here let’s not rush to judgement, as prototype devices are certain to be clunky and unpalatable, whereas there has been significant R&D and the final package may be acceptable (even if not quite as tiny as Project Glass). How far Google have really got with this is anyone’s guess, but if they don’t have something up their sleeve, it would have been very brave of them to put about the Project Glass video clip, with such a tiny device – especially for Sergey Brin to be seen wearing them so openly.”
We wrote about that here, showing some photos photographer Thomas Hawk was able to get.
“If there is a secret here my guess would be laser projection (not onto the retina – which would require eye tracking, but creating a virtual image using the eyepiece lens) or possibly a cunning use of novel LED tech (there continues to be much R&D here – think blue LEDs and Shuji Nakamura – there was a wonderful Sci Am article about it a couple of years back),” she says. “By the way – that was the one big elephant in the room I forgot to mention in my earlier list – style. Obviously crucial to the market, and for that reason I would take the Oakley announcement very seriously, although I suspect they would do much better to team up.”
Read about the Oakley announcement here.
“So yes, I think Google have created a lot of interest – and I just hope they can maintain it long enough to release product,” says Lindsell. “Does Apple have something in the works? My guess would be yes – but it would be ultra hush hush, and I doubt if they will declare it until they are ready, in spite of Google’s announcement. Will they be working harder now in the background? Very probably, yes.”
There are rumors going around, in fact, that Apple may be working with Valve on such technology.
It may be Google that has generated this wave of excitement related to the possibilities of augmented reality, but there are plenty of others working in the space, and it’s entirely possible that we’ll see even more interesting products coming from elsewhere.
“I see many AR technologies emerging,” Lindsell tells us. “From location-based to marker-based services, image recognition and interpretation, object tracking (now in 3D – see metaio), facial recognition (not just face tracking), zoning, fencing, pre- and post-visualisation/transformation, on-the-spot translation, sophisticated auditory cues and environments, use of haptics (early days here – much potential), sensory substitution, crowd sourcing in near real time, and even the use of VR in registration with sensor media to provide context. And there are so many ideas that people have yet to have – so much potential in AR yet to be realised. But there are key enabler technologies required first.”
“One of these is the AR specs,” she continues. “I think we are barely scratching the surface of how we might use AR. I really think that AR is the business end of a generational process of taking IT out of the office and conforming it to the user as ‘wearable tech’ that is constantly available.”
“Think of everything that IT enables us to do now,” Lindsell concludes. “Computing was originally seen as wartime code-breaker technology. The cold war space race then helped it come of age (think chicken and egg again) because we needed help with the complexities of pre-launch checks for the hugely complex moon-rockets. Ever since, there has been a march towards ? (no-one knows quite what!). All we know is that we use IT as an extension of ourselves – almost like add-on modules to help our brains (and occasionally other parts of us). So the real question is one of human and cultural evolution: what would we like some help with, and how can we increase our reach to get it?”