Wearables

Wearables are the next stage of evolution in the connected world of IoT. Technology has advanced to the point where smart devices and the Internet are woven into everyday life. Wearing smart equipment on the body for purposes such as tracking health and fitness data or using GPS to locate people and places enables multitasking and makes our lives better.


AuroSys has been a leader in wearables, designing and developing extensive embedded products and apps for clients around the globe. We have conceived numerous innovative products at our well-equipped Research & Development Labs and delivered comprehensive solutions to our customers using our domain and technical expertise in the Internet of Things. AuroSys has successfully delivered solutions for connected home monitoring systems, connected mailbox systems, energy monitoring and management systems, healthcare, logistics, industrial/building automation and more. Our thought leadership and experience in this industry help keep our customers at the top end of the curve in this age of emerging technology.


Our Domain Expertise

  • Sensors
    • Inertial
    • Motion
    • Heart rate
    • EEG
  • Advanced Optics
  • Gesture Tracking
  • Image Recognition
  • Speech Recognition
  • Natural Language Processing
  • Power Consumption
  • Bluetooth, WiFi Direct and NFC

Some of the Industry Specific Use Cases

Today’s range of body sensors can already measure an impressive array of parameters:

  • Stride length, distance, step count, cadence and speed;
  • Heart rate, heart rate variability, heart rate recovery, respiration rate, breathing volume, skin temperature, skin moisture levels, activity intensity;
  • Body temperature;
  • Calories burned, distance travelled;
  • Sleep quality, sleep patterns;
  • Wearer’s brainwaves – can be used to control electronic devices/services by thought;
  • Back posture: sitting position, chest and shoulders;
  • Force of impact to the head (used in contact sports);
  • Exposure to the sun (UV measurement);
  • Biomechanical data collected while running (e.g. L/R pressure etc.);
  • Altitude and rate of ascent/descent;
  • Location (3D);
  • Motion parameters including speed and acceleration;
  • Repetitions of specific physical activities (e.g. sit ups, dips, press ups).
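A wearable typically bundles readings like those above into a single snapshot for upload. As a minimal sketch, assuming hypothetical field names (not a real device API), such a snapshot might look like this:

```python
from dataclasses import dataclass, asdict

# Hypothetical snapshot covering a few of the parameters listed above.
# Field names and units are illustrative only.
@dataclass
class SensorSnapshot:
    heart_rate_bpm: int
    skin_temp_c: float
    step_count: int
    altitude_m: float

    def to_payload(self) -> dict:
        """Serialize the snapshot for upload to a cloud service."""
        return asdict(self)

snapshot = SensorSnapshot(heart_rate_bpm=72, skin_temp_c=33.5,
                          step_count=8421, altitude_m=120.0)
payload = snapshot.to_payload()
print(payload["heart_rate_bpm"])  # 72
```

A real device would add timestamps and device identifiers and batch many snapshots per upload to save power.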

One example combines an in-body sensor that could measure key nutritional parameters about the user with a cloud-based service that could analyze those parameters and give the user feedback about what they should be eating.

If it were possible for an in-body sensor to send a semi-real time report into the cloud about the user’s diet – because it could measure those key parameters directly – then the user could choose to make this data available to a third party service provider for analysis.

The service provider would then be able to make recommendations as to what the user should buy at the supermarket – the user’s location would tell the service when the user was shopping for food.

Furthermore, the same service could make recommendations for a personal or family-optimized menu at mealtimes. Because the service would know whether the user had any mineral or other deficiencies or excesses, then a suitable menu could be recommended. This could be in the form of tablet supplements or just a recommendation like “How about salmon tonight – you should have some in the freezer?”
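The recommendation step described above could, in its simplest form, be a set of rules mapping measured deficiencies to food suggestions. This sketch uses invented thresholds and food mappings purely for illustration; it is not nutritional guidance:

```python
# Illustrative rule-based recommendation: compare measured intake against
# daily targets and suggest a food for each shortfall. All numbers and
# food mappings here are made up for the sketch.
DAILY_TARGETS = {"iron_mg": 8.0, "vitamin_d_ug": 15.0, "omega3_g": 1.1}
FOOD_SOURCES = {"iron_mg": "spinach", "vitamin_d_ug": "salmon", "omega3_g": "salmon"}

def recommend(intake: dict) -> list:
    """Return food suggestions for nutrients below their daily target."""
    tips = []
    for nutrient, target in DAILY_TARGETS.items():
        if intake.get(nutrient, 0.0) < target:
            tips.append(f"How about some {FOOD_SOURCES[nutrient]}? "
                        f"Your {nutrient} intake is below target.")
    return tips

print(recommend({"iron_mg": 9.0, "vitamin_d_ug": 5.0, "omega3_g": 1.5}))
```

A production service would of course replace the static rules with dietary models and the user's actual measured history.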

One of the biggest costs of an insurance company lies in the processing of insurance claims.

If a user was wearing a suitable pair of smart glasses, which might be required for navigation purposes anyway, then those glasses would be able to record the entire journey using a rolling window to minimize data storage requirements. This would mean that any accident could be recorded on video. The same pair of smart glasses could also record exactly what the user was looking at, at the time of the accident, as well as other information such as speed.
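The rolling-window recording idea boils down to a bounded buffer: once the window is full, each new frame evicts the oldest one, so storage never grows. A minimal sketch with placeholder frame objects:

```python
from collections import deque

# A rolling window keeps only the last N frames, so storage stays
# bounded. Frames here are placeholder strings; a real recorder would
# store a few minutes of encoded video.
WINDOW_FRAMES = 5

buffer = deque(maxlen=WINDOW_FRAMES)

for i in range(8):
    buffer.append(f"frame-{i}")

# Frames 0-2 were discarded automatically once the window filled up.
print(list(buffer))  # ['frame-3', 'frame-4', 'frame-5', 'frame-6', 'frame-7']
```

On an accident trigger, the device would simply persist the current buffer contents, capturing the moments leading up to the event.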

Far more interesting would be what happens if an in-body sensor were to be combined with such a pair of smart glasses. In this case, it would be technically possible for the insurance company to determine if the user was driving after a lack of sleep (because the sensor would monitor sleeping patterns) or if the user’s blood contained too much alcohol.
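The fitness-to-drive check described above could be expressed as a couple of simple threshold rules over the sensor readings. The thresholds below are invented for illustration and are not legal or medical limits:

```python
# Illustrative fitness-to-drive flags combining a sleep reading and a
# blood-alcohol estimate from a hypothetical in-body sensor.
MIN_SLEEP_HOURS = 6.0   # invented threshold
MAX_BAC_PERCENT = 0.05  # invented threshold, not a legal limit

def driving_risk_flags(sleep_hours: float, bac_percent: float) -> list:
    """Return risk flags an insurer's service might log for a journey."""
    flags = []
    if sleep_hours < MIN_SLEEP_HOURS:
        flags.append("insufficient-sleep")
    if bac_percent > MAX_BAC_PERCENT:
        flags.append("elevated-blood-alcohol")
    return flags

print(driving_risk_flags(4.5, 0.0))   # ['insufficient-sleep']
print(driving_risk_flags(8.0, 0.02))  # []
```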

If such technology were to become available then we think that users who were willing to use the technology – which would offer many positive benefits – would enjoy significantly reduced car insurance premiums.

It is interesting to think what might become possible if smart glasses are combined with police databases and facial recognition software.

We are not too far from the point when a police officer could use a pair of smart glasses to automatically obtain information about a person within the officer’s field of view – simply by asking or by setting a default.

This could be possible in real time as a police officer was speaking to a person with the resulting information being projected onto the officer’s field of view.

The smart glasses could take a picture of that person and send it to a cloud-based police service where facial recognition technology would match the picture with an entry in a police database. The police officer might not know who he was talking to, but this technology would be able to alert the officer if the person was a suspect in an ongoing case, had a criminal record or, hopefully in most cases, had no police history at all.
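At its core, the matching step is a nearest-neighbour search over face feature vectors with a distance threshold. This toy sketch uses tiny made-up vectors and identifiers; real systems use learned embeddings with far more dimensions:

```python
import math

# Toy face matching: each known face is a short feature vector; a
# capture matches if its distance to some record is under a threshold.
# Vectors, identifiers and the threshold are all invented.
DATABASE = {
    "suspect-0412": [0.1, 0.9, 0.3],
    "no-record-77": [0.8, 0.2, 0.5],
}
THRESHOLD = 0.25

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(capture):
    """Return the closest database entry, or None if nothing is near enough."""
    best_id, best_dist = None, float("inf")
    for person_id, vec in DATABASE.items():
        d = euclidean(capture, vec)
        if d < best_dist:
            best_id, best_dist = person_id, d
    return best_id if best_dist <= THRESHOLD else None

print(identify([0.12, 0.88, 0.31]))  # suspect-0412
print(identify([0.5, 0.5, 0.5]))     # None
```

The threshold is the critical design choice: too loose and innocent people are flagged, too tight and genuine matches are missed.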

Thinking a stage further ahead, then we can foresee the facial recognition camera technology that is already installed at most security gates at airports being integrated into smart glasses, so that a police officer walking down a crowded street would automatically be alerted to the presence of a suspect walking towards him.

One of the major use cases for Google Maps is navigation. But so far most of Google’s navigation services have been focused on cars or pedestrians in urban areas.


We think that an opportunity exists for navigation services to be developed specifically for those who engage in outdoor pursuits.

To take one example, it is very difficult and potentially dangerous to ski in bad visibility. Mist, cloud and snow can combine into a single featureless morass where it is extremely hard – even for an expert skier – to assess speed, angle of slope or what the upcoming terrain is like. If a Google Glass-like technology were to be integrated into a pair of ski goggles then this problem could be alleviated.

Based on official route data that has been recorded by park authorities, or on user-uploaded data that has been validated for accuracy, the skier would be able to ski in white-out conditions with the safe route and terrain precisely marked on their field of view.

While by no means a green light to ski as if conditions were normal, such a system could help save lives, especially in off-piste skiing situations where the skier has to avoid cliffs and other serious hazards.
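A core primitive behind such guidance is an off-route alert: compare the skier's position against the validated route and warn when the drift exceeds a limit. The waypoints and drift limit below are invented, and a real implementation would measure distance to route segments, not just waypoints, and use proper geographic coordinates:

```python
import math

# Sketch: warn when a position drifts too far from the validated route.
# Waypoints are toy (x, y) coordinates in metres; the limit is invented.
ROUTE = [(0.0, 0.0), (10.0, 0.0), (20.0, 5.0), (30.0, 5.0)]
MAX_DRIFT_M = 3.0

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def off_route(position):
    """True if the position is farther than MAX_DRIFT_M from every waypoint."""
    return min(distance(position, wp) for wp in ROUTE) > MAX_DRIFT_M

print(off_route((10.5, 1.0)))   # False: close to waypoint (10, 0)
print(off_route((15.0, 12.0)))  # True: far from all waypoints
```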

We can see similar applications in mountain walking, sailing and off-road running.