Face sentiment detection for product placement
To optimize product placement and pricing, Google Cloud Vision can detect facial expressions and infer four emotions (joy, anger, surprise and sorrow) in just a few seconds, with five confidence levels: “very unlikely” (the default when nothing is detected), “unlikely”, “possible”, “likely” and “very likely”. Amazon Rekognition identifies a broader spectrum of feelings (happy, sad, angry, confused, disgusted, surprised or unknown, the default when nothing is detected), with a score from 0 to 100.
Overall, Amazon Rekognition outperformed Google Cloud Vision in an independent test conducted by CloudAcademy in this article. The video below, filmed at the 2017 NRF show, shows Google Vision accurately detecting only one sentiment.
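Because the two services report emotions in different shapes (five likelihood buckets vs. a 0–100 confidence), comparing them side by side requires a common scale. Here is a minimal sketch of that normalization; the sample payloads are illustrative stand-ins shaped like each service's face-detection response, not real API output.

```python
# Google Cloud Vision reports one of five likelihood buckets per emotion.
# The 0-1 mapping below is an assumption chosen for comparison purposes.
VISION_LIKELIHOOD = {
    "VERY_UNLIKELY": 0.0,   # also the default when nothing is detected
    "UNLIKELY": 0.25,
    "POSSIBLE": 0.5,
    "LIKELY": 0.75,
    "VERY_LIKELY": 1.0,
}

def normalize_vision(face_annotation):
    """Map Vision's likelihood buckets for the four emotions to 0-1 scores."""
    return {
        emotion: VISION_LIKELIHOOD[face_annotation[f"{emotion}Likelihood"]]
        for emotion in ("joy", "sorrow", "anger", "surprise")
    }

def normalize_rekognition(face_detail):
    """Map Rekognition's 0-100 emotion confidences to 0-1 scores."""
    return {e["Type"].lower(): e["Confidence"] / 100.0
            for e in face_detail["Emotions"]}

# Illustrative payloads (not captured from the live APIs).
vision_face = {"joyLikelihood": "VERY_LIKELY", "sorrowLikelihood": "VERY_UNLIKELY",
               "angerLikelihood": "UNLIKELY", "surpriseLikelihood": "POSSIBLE"}
rekognition_face = {"Emotions": [{"Type": "HAPPY", "Confidence": 92.4},
                                 {"Type": "SURPRISED", "Confidence": 31.0}]}

print(normalize_vision(vision_face))
print(normalize_rekognition(rekognition_face))
```

Once both outputs live on the same 0–1 scale, the dominant emotion per face can be compared directly across providers.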
Occupancy for your store operations
It’s critical to match the staff on your floor to the traffic in your store.
Combining an optical device (LIDAR), laser-scanning technology (Rhino) and data processing (3D Fusion software), the goal is not to count people exactly but to measure overall occupancy.
LIDAR works by rapidly firing laser pulses (up to 900,000 per second) at an area and measuring the time the light takes to bounce off that area and travel back to the source. These millions of points create, in aggregate, a digital map of the environment.
Generally, LIDAR is used by autonomous vehicles to navigate environments but at the 2017 NRF show, Google demonstrated that it could be useful in other fields like retail.
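To make the occupancy idea concrete, here is a minimal sketch, not the NRF demo’s actual pipeline, of how an aggregated point cloud could be reduced to an occupancy figure: project returns onto a floor grid and report the fraction of cells containing points above floor level.

```python
def occupancy_ratio(points, floor_w, floor_d, cell=0.5, min_height=0.2):
    """Fraction of floor cells occupied by returns higher than min_height.

    points: iterable of (x, y, z) LIDAR returns in metres.
    floor_w, floor_d: floor width and depth in metres.
    cell: grid cell size in metres (0.5 m is an arbitrary choice here).
    """
    cols = int(floor_w / cell)
    rows = int(floor_d / cell)
    occupied = set()
    for x, y, z in points:
        if z < min_height:          # ignore returns from the floor itself
            continue
        if 0 <= x < floor_w and 0 <= y < floor_d:
            occupied.add((int(x / cell), int(y / cell)))
    return len(occupied) / (cols * rows)

# Synthetic cloud: a "shopper" standing near (1, 1) on a 4 m x 4 m floor,
# plus one floor-level return that gets filtered out.
cloud = [(1.0, 1.0, 1.5), (1.1, 1.0, 1.2), (1.0, 1.1, 0.9),
         (3.0, 3.0, 0.05)]
print(occupancy_ratio(cloud, 4.0, 4.0))   # -> 0.015625 (1 of 64 cells)
```

This matches the point made above: the output is an aggregate occupancy measure for staffing decisions, not an exact head count.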
3D try before purchase
The first purpose of this new technology is to let your customers visualize products directly from their home. The second is that, when they visit your store, they can find what they need right down to the exact shelf.
Project Tango, driven by Google’s ATAP department, is a revolution in mobile computing, as it combines 3D mapping with spatial positioning and indoor mapping.
In short, it uses Augmented Reality (AR) so you can “see” 3D objects as a layer on top of the environment you are viewing through your mobile phone.
With Tango, objects around you, such as a dress (e.g. from GAP) or a piece of designer furniture (e.g. a table), can be integrated into the simulation along with your actual body motion. As these two demonstrations show, you can instantly see from home how an item will look in your interior.
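The core of that overlay step can be sketched with a textbook pinhole-camera projection; this is an illustration of the general AR principle, not Tango’s actual API. Given the phone’s pose, a 3D anchor point is transformed from world space into camera space and projected to pixel coordinates, so the virtual object can be drawn over the live video at the right spot. The focal length and image centre below are arbitrary example values.

```python
import math

def project(point_w, cam_pos, cam_yaw, f=800.0, cx=540.0, cy=960.0):
    """Project a world-space point to pixel coordinates.

    cam_pos: camera position in world space (metres).
    cam_yaw: camera rotation about the vertical axis, in radians.
    f, cx, cy: assumed pinhole intrinsics (focal length, image centre).
    """
    # Translate into the camera frame, then undo the camera's yaw.
    x, y, z = (p - c for p, c in zip(point_w, cam_pos))
    cos_y, sin_y = math.cos(-cam_yaw), math.sin(-cam_yaw)
    xc = cos_y * x + sin_y * z
    zc = -sin_y * x + cos_y * z
    yc = y
    if zc <= 0:
        return None  # behind the camera, nothing to draw
    return (cx + f * xc / zc, cy - f * yc / zc)

# A virtual table anchored 2 m in front of a camera at the origin,
# half a metre below eye level.
print(project((0.0, -0.5, 2.0), (0.0, 0.0, 0.0), 0.0))
```

As the phone moves, the pose (`cam_pos`, `cam_yaw`) is updated by the device’s spatial tracking and the projection is recomputed each frame, which is what keeps the virtual furniture “pinned” to the room.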
Shopping assistance to fight the paradox of choice
Offering a web, mobile or robot assistant reduces anxiety for shoppers and increases their happiness. Here, the goal is to guide consumers to the right products by identifying their goals.
Using IBM Watson, two prototypes were designed by combining three services: Speech to Text, Natural Language Classifier (NLC) and Text to Speech. The approach can use open questions to interpret compatible tastes (e.g. Nespresso) or more specific answers to establish a diagnosis (e.g. “Ask SkinCeuticals” from L’Oréal).
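The three-service pipeline described above can be sketched end to end. The stubs below are hypothetical stand-ins for the Watson API calls, and the keyword classifier plus the small product catalogue stand in for a trained NLC model; only the shape of the pipeline (speech in, classification, speech out) is taken from the prototypes.

```python
# Illustrative catalogue mapping taste keywords to products (assumed,
# not a real Nespresso mapping).
INTENTS = {
    "intense": "Nespresso Ristretto",
    "mild": "Nespresso Volluto",
    "decaf": "Nespresso Decaffeinato",
}

def speech_to_text(audio):
    # Stand-in for the Speech to Text service: the "audio" here already
    # carries its transcript.
    return audio["transcript"]

def classify(text):
    # Stand-in for the Natural Language Classifier: naive keyword match.
    for keyword, product in INTENTS.items():
        if keyword in text.lower():
            return product
    return None

def text_to_speech(text):
    # Stand-in for Text to Speech: return the reply instead of audio.
    return text

def assistant(audio):
    """Run one turn of the voice-assistant pipeline."""
    product = classify(speech_to_text(audio))
    if product is None:
        return text_to_speech("Could you tell me more about your taste?")
    return text_to_speech(f"You might enjoy {product}.")

print(assistant({"transcript": "I like an intense espresso in the morning"}))
# -> You might enjoy Nespresso Ristretto.
```

The open-question and diagnosis styles mentioned above differ only in the classifier’s training data; the surrounding pipeline stays the same.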
As discussed in a previous post, voice is the next web browser for commerce and retail. There will be no need to open a browser on a computer, mobile or tablet, as people embrace this new personal robot assistant.