mass of information
mass of options
mass of applications
mass of user choices
A complex & imperfect user experience
Our relationship with technology today still requires us to adapt our behaviour to the various devices we use rather than the devices adapting to us.
App interfaces remain the same regardless of context. Voice assistants require us to ask long, contextualised questions, as if the assistant weren’t “next to us” and aware of the situation.
Our software-only solution enables any microphone-enabled device to become aware of the context it is situated in, using audio processing and AI.
Phones, watches, speakers, laptops and vehicles become aware of our actions, our behaviours, and the context we find ourselves in, seamlessly adapting and responding.
Simple telephony
Front and rear cameras
GPS, accelerometer, gyroscope, magnetometer, proximity sensor, barometer, thermometer, depth sensor
Fingerprint scan, iris scan, HR monitor, pedometer
The next step:
Understanding
(rather than merely sensing) context and events
Events are very short actions that can occur at any time. There are thousands of them, and they may or may not be related to the current context. Context, by contrast, describes what is happening around the user: Are they working? Are they eating? Extracting context from events alone would result in verbose, erratic behaviours.
Here are some examples in which our software retrieves context from everyday real-life situations using only the microphone embedded in a phone. HyperSentience builds on events but analyzes context directly. By making predictions on average once a minute rather than every second, HyperSentience optimizes for precision and minimizes wrong predictions.
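To illustrate the idea of aggregating many short, noisy events into one steadier context prediction per minute, here is a minimal sketch. It is not HyperSentience's actual pipeline: the event labels, the event-to-context mapping, and the majority-vote aggregation are all illustrative assumptions.

```python
from collections import Counter

WINDOW = 60  # seconds of per-second event labels per context prediction

# Hypothetical mapping from short audio events to a broader context.
EVENT_TO_CONTEXT = {
    "keyboard_typing": "working",
    "mouse_click": "working",
    "cutlery_clink": "eating",
    "chewing": "eating",
    "sizzling": "cooking",
    "chopping": "cooking",
    "speech": "talking",
}

def predict_context(event_stream):
    """Emit one context label per WINDOW seconds of event labels."""
    window = []
    for event in event_stream:
        window.append(EVENT_TO_CONTEXT.get(event, "unknown"))
        if len(window) == WINDOW:
            # Majority vote over the window smooths out erratic
            # second-by-second events into one stable prediction.
            context, _count = Counter(window).most_common(1)[0]
            yield context
            window.clear()

# A minute dominated by kitchen sounds yields a single "cooking" prediction,
# even though a few seconds of speech occur within it.
events = ["sizzling"] * 30 + ["chopping"] * 20 + ["speech"] * 10
print(list(predict_context(events)))  # ["cooking"]
```

The point of the sketch is the trade-off: predicting once per window instead of once per event sacrifices a little latency for far fewer spurious context switches.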
Our current demo already detects the contexts of working, eating, cooking and talking. Plenty of other contexts are on their way, from traveling to domestic activities, crowded spaces, music concerts and more. This is just the beginning.