Last week I was in Japan at NEC’s iExpo, and a few of the demonstrations shown there made me think about the way we interface with today’s systems. These weren’t the typical lame demos that many vendors show at conferences, like simply showing that when you pick up the phone, the user’s presence status changes to “on phone.” NEC ran a series of demonstrations showing how, by mixing unified communications with things like facial recognition systems, ID scanners and biometrics, the way users and systems work together can radically change many of our business processes.
I also want to make a comment on NEC as a company. The breadth of what NEC sells outside Japan is only about 30% of the product portfolio available in Japan. For a company that’s been trying to raise its US profile, expanding its portfolio here would seem to be a real help: a solution NEC can deliver single-source in Japan would require several US vendors to deliver the same products here. This would obviously require NEC to learn to sell solutions to line-of-business managers, and many of the products would be sold by being pulled through the solution, but I see that as a necessary requirement within a few years anyway.
As far as the demos go, one example was a two-foot-tall robot that NEC created (I don’t remember its name, but perhaps someone from NEC could post it here so people could look it up). The robot’s eyes are cameras, it can recognize spoken commands in Japanese (which is no easy task) and it is capable of acting on a set of commands initiated by “triggered” events. NEC is also working on emotion-detecting software that goes a level past facial recognition, interpreting the expression on someone’s face. A practical example of the use of this robot would be making it part of the staff at a day care center. The robot could mingle among the kids, and if a child needed someone or something, the child could make a request. The request would be interpreted, the child’s face scanned to identify him or her, and this information combined with the presence status of the day care staff (or maybe even the parents), with a request or message then sent off via any number of communications methods (hence the importance of the tie-in to unified communications). This could easily be applied to elderly care, hotel environments, airports, retail stores or any other environment where services need to be provided to someone without using a keyboard or mobile phone as an input device.
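To make the day-care workflow concrete, here is a minimal sketch of the routing logic it implies: a triggered event carries the results of a face scan and a speech-recognition pass, the child is identified, staff presence is consulted and a communication channel is chosen. All of the names, the directory and the presence data below are hypothetical stand-ins for what a real unified communications platform would provide.

```python
from dataclasses import dataclass

@dataclass
class Request:
    face_id: str      # hypothetical output of a facial-recognition scan
    spoken_text: str  # hypothetical output of speech recognition

# Toy stand-ins for the directory and presence services a real
# unified communications system would expose.
CHILDREN = {"face-001": "Aiko"}
STAFF_PRESENCE = {
    "ms.sato":   ("busy",      "voicemail"),
    "mr.tanaka": ("available", "instant_message"),
}

def route_request(req: Request) -> str:
    """Combine identification with presence to pick a recipient and channel."""
    child = CHILDREN.get(req.face_id, "unknown child")
    # Prefer the first staff member whose presence status is "available".
    for staff, (status, channel) in STAFF_PRESENCE.items():
        if status == "available":
            return f"{channel} to {staff}: {child} says '{req.spoken_text}'"
    # Fall back to a store-and-forward method if nobody is available.
    return f"voicemail to all staff: {child} says '{req.spoken_text}'"

print(route_request(Request("face-001", "I need help")))
```

The point of the sketch is the combination: neither the face scan nor the presence lookup is interesting alone, but together they let the system pick both a recipient and the right communications method.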
There are many ways to input information into a system, the most common today being a keyboard, the dial pad on a mobile phone and, in some cases, SMS on a mobile device. These are widely used for entering information, but they can be slow, difficult to use while mobile or just not practical in certain situations (like toddlers at a day care). The most common additional types of input devices I see are:
* Speech recognition. This works well for users in hands-constrained environments or for users who aren’t capable of using a device: a worker on a factory floor, a mobile worker in a car, a small child or a bedridden individual. It’s true that past speech recognition systems were, I admit, not very good, but the technology will only continue to get better.
* ID badge scanners or proximity cards. This is a bit big-brotherish, but these can be used to track workers as they move around a building or campus. They can also be loyalty cards or ID bracelets issued by airlines, schools, hotels or casinos that people carry with them. Most people think of these as authentication devices, but the ability to provide a customer’s or worker’s location can add another level of intelligence to a business process.
* Facial recognition. This would be used primarily for dual-factor authentication, or to identify people for security or other purposes, such as healthcare.
* Environmental systems. The environmental systems in schools can provide additional information on the temperature of a room, the noise level or other conditions that alter a person’s comfort or situation.
None of these on its own is the “next big thing”; it’s the combination of these inputs, integrated with traditional input methods and unified communications, that can significantly alter the way people interact with each other and with machines. We’re still a number of years away from this being mainstream, but new ways of getting information into systems are sorely needed to drastically change the way we live and work.
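One way to picture that combination is as event fusion: each input device emits its own small event, and a notification only goes out once enough context has accumulated across sources. The sketch below is a hypothetical illustration under assumed names (the event sources, the "who plus where" trigger rule and the nurse/ICU data are all invented for the example), not any vendor's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    source: str  # e.g. "badge", "face", "speech", "environment"
    data: dict

class InputFusion:
    """Collect events from heterogeneous input devices and fire a
    unified-communications notification once enough context exists."""

    def __init__(self, notify: Callable[[dict], None]):
        self.context: dict = {}
        self.notify = notify

    def ingest(self, event: Event) -> None:
        self.context[event.source] = event.data
        # Trigger once we know *who/where* (badge) and *conditions* (environment).
        if "badge" in self.context and "environment" in self.context:
            self.notify({**self.context["badge"], **self.context["environment"]})

messages = []
fusion = InputFusion(messages.append)
fusion.ingest(Event("environment", {"room": "ICU-3", "temp_c": 27}))  # no trigger yet
fusion.ingest(Event("badge", {"worker": "nurse-42", "location": "ICU-3"}))
print(messages)  # a single fused notification: identity, location and conditions
```

Either event alone is noise; fused, they become something a business process can act on, which is the argument above in miniature.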
Written by Zeus Kerravala, The Yankee Group