In a market still finding its footing through highly specialized hardware/software ecosystems, what killer feature can HoloLens, the original AR workhorse, use to cut through the competition? Microsoft’s next-generation holographic processing unit, powered by machine learning, could prove to be a game-changer.
The future of AR is bright, but the present is still riddled with uncertainty. Future developments notwithstanding, neither hardware nor adoption is at a particularly compelling stage: headsets suffer from a general lack of immersiveness and essential features, to say nothing of the typical obstacles any emergent technology faces in cost and content availability.
As a consequence, many hardware manufacturers in the enterprise space provide solutions focused on a handful of features that require little to no engagement with the user’s physical surroundings, concentrating instead on visual remote communication and bare-bones heads-up displays. And while these solutions have measurably improved efficiency and productivity, there is still significant untapped potential in a sufficiently powerful computer interacting with physical environments in meaningful ways.
Unique Problems, Bespoke Solutions
As cloud computing and neural networks continue to work their way into every industry under the sun, from medical diagnosis to astronomy to autocross, it seemed inevitable that augmented reality would also reap the benefits of such a bountiful and versatile resource. In July of 2017, HoloLens Director of Science Marc Pollefeys blogged about his team’s advances in building the newest iteration of the Holographic Processing Unit (HPU), which will include a co-processor chip supplementing the device’s normal on-board computing with deep neural networks (DNNs).
Microsoft researchers such as Alex Kipman expect that the inclusion of lightweight, local DNNs will drastically improve AR headsets’ ability to interact with physical environments on a deeper level than simply recognizing rudimentary symbols and surfaces or marginally streamlining workflows. A headset capable of object recognition on the level of an advanced neural network could not only readily identify unique objects in realistic environments with little to no help from the user, but also bring developers closer to providing rich interactive overlays featuring occlusion and realistic feedback. This could prove invaluable in building software solutions for enterprise training, from pick-and-pack logistics procedures to surgical practice.
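To give a sense of what such on-device object recognition boils down to, here is a minimal, purely illustrative sketch: a single dense layer mapping a feature vector (the kind a headset’s vision pipeline might produce) to object labels via softmax. The labels, weights, and feature values are hypothetical stand-ins, not anything from Microsoft’s HPU; a real classifier would be a trained multi-layer network running on dedicated silicon.

```python
import math

# Hypothetical labels for objects a warehouse-training app might recognize.
LABELS = ["wrench", "valve", "gauge"]

# Illustrative weights and biases -- a real model would learn these.
WEIGHTS = [
    [0.9, -0.2, 0.1],   # wrench
    [-0.3, 0.8, 0.2],   # valve
    [0.1, 0.1, 0.7],    # gauge
]
BIASES = [0.0, 0.1, -0.1]

def softmax(scores):
    # Numerically stable softmax: shift by the max before exponentiating.
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features):
    # One dense layer: weighted sum of features plus bias, per label.
    scores = [
        sum(w * f for w, f in zip(row, features)) + b
        for row, b in zip(WEIGHTS, BIASES)
    ]
    probs = softmax(scores)
    best = max(range(len(LABELS)), key=lambda i: probs[i])
    return LABELS[best], probs[best]

# A feature vector leaning heavily toward the first label.
label, confidence = classify([1.0, 0.2, 0.1])
print(label)  # -> wrench
```

The point of running this locally rather than in the cloud is latency: an overlay that occludes or annotates a physical object has to keep up with head movement, which round trips to a server cannot.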
Meanwhile, continuing advances in natural language processing could eliminate the need for a human on the back end of AR remote guidance systems and “remote scribe” services. Instead, machine learning could perform contextual searches of large datasets, from operations manuals to medical records, helping users complete tasks with equal or greater efficiency and less manual support.
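The retrieval side of such a system can be sketched very simply. The toy example below scores a few hypothetical manual snippets against a spoken query using TF-IDF-style term weighting; the corpus, section names, and query are invented for illustration, and a production assistant would use a far richer language model than keyword overlap.

```python
import math
from collections import Counter

# Hypothetical operations-manual sections, keyed by section id.
MANUAL = {
    "valve-replacement": "shut off the supply valve before removing the pump housing",
    "filter-cleaning": "remove the filter cartridge and rinse with clean water",
    "pump-inspection": "inspect the pump seals and replace worn gaskets",
}

def tokenize(text):
    return text.lower().split()

def idf(term, docs):
    # Rarer terms get more weight; smoothed to avoid division by zero.
    hits = sum(1 for d in docs if term in tokenize(d))
    return math.log((len(docs) + 1) / (hits + 1)) + 1

def score(query, doc, docs):
    # Sum term-frequency * inverse-document-frequency over query terms.
    counts = Counter(tokenize(doc))
    return sum(counts[t] * idf(t, docs) for t in tokenize(query))

def best_section(query):
    docs = list(MANUAL.values())
    return max(MANUAL, key=lambda k: score(query, MANUAL[k], docs))

print(best_section("how do I remove the supply valve"))  # -> valve-replacement
```

Distinctive query terms like “supply” and “valve” outweigh common words like “the”, so the query lands on the relevant procedure without any human dispatcher in the loop.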
The practice of designing bespoke processors is becoming increasingly common among AR hardware manufacturers. As more and more processors are developed to solve the unique and specific computing demands of emergent technologies such as AR, hardware and software will continue to improve productivity and add value to XR solutions across a wide variety of enterprises and use cases.