Cognitive systems will be a game-changer for IoT, allowing organisations to easily access and gain insights from the influx of IoT data, according to a Microsoft executive.
“We’re at the front end of being able to access and refine data and be able to look at it with a natural interface – that will be huge in the IoT space,” said Andrew Shuman, Microsoft’s corporate vice president of products for technology and research.
In fact, Shuman said, there’s no reason why companies can’t already take advantage of today’s cognitive capabilities, rather than waiting for them to become more ‘intelligent’.
“There are all sorts of scenarios for IT professionals and application developers to start using natural interaction models,” he said.
“They have to find the right trade-off between where the technology is today and the user interface they want to build, but I don’t think there’s anything limiting them today from being able to do a more natural kind of interaction model, whether it be spoken, or typed, or visual.”
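A minimal sketch of what such a multi-modal entry point could look like, using placeholder types rather than any specific Microsoft API, might be:

```python
# Minimal sketch of a multi-modal input front end. The request shape and
# handler are illustrative assumptions, not a real cognitive-services API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionRequest:
    modality: str                     # "speech", "text" or "image"
    text: Optional[str] = None
    image_bytes: Optional[bytes] = None

def from_typed(text: str) -> InteractionRequest:
    return InteractionRequest(modality="text", text=text)

def from_speech(transcript: str) -> InteractionRequest:
    # In practice the transcript would come from a speech-to-text service.
    return InteractionRequest(modality="speech", text=transcript)

def from_camera(image_bytes: bytes) -> InteractionRequest:
    return InteractionRequest(modality="image", image_bytes=image_bytes)

def handle(request: InteractionRequest) -> str:
    # A real system would forward this to a language or vision service;
    # here we just branch on modality to show the common entry point.
    if request.modality in ("text", "speech"):
        return f"interpret utterance: {request.text!r}"
    return f"analyse image of {len(request.image_bytes or b'')} bytes"

if __name__ == "__main__":
    print(handle(from_typed("show me yesterday's sensor readings")))
    print(handle(from_speech("which machines are running hot")))
```

The point is simply that spoken, typed and visual input can converge on the same handler before any cognitive service is involved.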
Cognitive’s IoT potential
Shuman shared his thoughts with IoT Hub on the sidelines of Microsoft's Ignite conference in Atlanta this week.
“The first way I think about it is how do you build cognitive services that allow for a natural interaction over huge data sets,” Shuman explained.
He said that the increase in endpoints that IoT brings will create more touchpoints where natural interactions can take place.
“I also think over time that there will be an interesting place for IoT that uses more human gestures and interactions, so as you start to see IoT that can have spoken-word recordings, or videos, or photos, you could look at how you might leverage those data sets in different ways,” he said.
Shuman sees voice input and image recognition as two specific areas where cognitive services could make an impact in the IoT space.
“Object recognition via photos will become a very powerful mechanism, and you’re already seeing it in particular use cases,” he said.
“Not just for security systems or crowd management, but for more mundane things as well, such as being able to find things that are perhaps misplaced or lost in an office or building space, or tracking the usage of high-end equipment that’s in a building.”
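As a rough illustration of that “find the lost equipment” scenario, the snippet below assumes an image-recognition service has already returned a list of tags per camera frame; the equipment register, tag names and locations are invented for the example:

```python
# Hypothetical sketch: matching tags returned by an image-recognition service
# against a register of tracked equipment to record where it was last seen.
from datetime import datetime, timezone

EQUIPMENT_REGISTER = {"defibrillator", "projector", "forklift"}

last_seen: dict[str, tuple[str, datetime]] = {}

def process_detection(camera_location: str, detected_tags: list[str]) -> None:
    """Update the last-seen record for any registered equipment in frame."""
    for tag in detected_tags:
        if tag in EQUIPMENT_REGISTER:
            last_seen[tag] = (camera_location, datetime.now(timezone.utc))

# Example: tags as an image-analysis service might return them for one frame.
process_detection("Level 3, Meeting Room B", ["person", "chair", "projector"])
print(last_seen.get("projector"))
```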
Development challenges
Shuman said that enabling human-like interaction with large data sets will require a new style of systems development.
“I think it will require good developers to connect the dots between natural phrases, utterances or typed input and the underlying data, and to be able to match that,” he said.
“This capability, I believe, is a more medium to long-term part of cognitive services, which is to get to that underlying data understanding and to be able to suggest different projections of data.”
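One way to picture that “connect the dots” step is a toy matcher that maps a phrase onto a projection of a small data set; the keyword matching below is only a stand-in for what a real language-understanding service would do, and the sensor data is made up:

```python
# Illustrative sketch of mapping an utterance onto the underlying data:
# a toy entity matcher applied to a tiny set of sensor readings.
import re

READINGS = [
    {"sensor": "boiler-1", "metric": "temperature", "value": 82.4},
    {"sensor": "boiler-2", "metric": "temperature", "value": 91.7},
    {"sensor": "pump-1", "metric": "vibration", "value": 0.3},
]

def answer(utterance: str) -> list[dict]:
    # Crude entity extraction: pull a known metric name out of the phrase.
    metric = next((m for m in ("temperature", "vibration")
                   if re.search(m, utterance, re.IGNORECASE)), None)
    if metric is None:
        return []
    # Return the "projection" of the data suggested by the utterance.
    return [r for r in READINGS if r["metric"] == metric]

print(answer("Which sensors are reporting temperature right now?"))
```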
Developers will also face challenges in emulating the nuances of person-to-person interaction within a technology context, he said.
“The human interaction model between two people is so rich, and can go in so many different directions,” he explained.
“There’s so much clarification that two people ask when they’re talking, and there’s so much misunderstanding that you have to work out through facial cues like nodding or leaning in.
“So finding even more of those things that have different meanings within a conversation is something that will be incredibly interesting to see if developers can crack.”
He also sees software more broadly shifting to become more probabilistic.
“We’ve all had that experience of a computer guessing wrong, or guessing too aggressively wrong,” he said.
“Whereas between two people, they would clarify and confirm, so something as simple as asking ‘did you mean…’, which you see in search engines, is a simple way to start that journey.
“As long as you can do it very quickly and very effortlessly, that’s the art of designing a really beautiful software program for any given use case.”
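That confirm-before-acting pattern can be reduced to something as small as a confidence threshold; the scores and threshold below are arbitrary and purely illustrative:

```python
# Sketch of the confirm-before-acting pattern: act on a high-confidence guess,
# otherwise fall back to a quick "did you mean...?" clarifying prompt.
def respond(guess: str, confidence: float, threshold: float = 0.8) -> str:
    if confidence >= threshold:
        return f"Running: {guess}"
    return f"Did you mean '{guess}'? (yes/no)"

print(respond("turn off the level 2 HVAC", confidence=0.93))
print(respond("turn off the level 2 HVAC", confidence=0.55))
```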
Peter Gutierrez attended Microsoft Ignite in Atlanta as a guest of Microsoft.