Face sentiment detection for product placement
To optimize product placement and pricing, Google Cloud Vision can detect facial expressions and infer four emotions: joy, anger, surprise, and sorrow. It does this in just a few seconds, reporting each emotion on one of five confidence levels: “very unlikely” (the default when nothing is detected), “unlikely”, “possible”, “likely”, and “very likely”.
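As an illustration, the snippet below is a minimal sketch of reading those likelihoods with the google-cloud-vision Python client; the image file name is a placeholder, and a Google Cloud project with the Vision API enabled (and a recent client version) is assumed.

```python
# Minimal sketch using the google-cloud-vision Python client (illustrative only).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("shopper.jpg", "rb") as f:  # hypothetical local image
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)

# The API reports each emotion as one of five likelihood buckets (plus "unknown").
likelihood_name = ("UNKNOWN", "VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY")

for face in response.face_annotations:
    print("joy:", likelihood_name[face.joy_likelihood])
    print("anger:", likelihood_name[face.anger_likelihood])
    print("surprise:", likelihood_name[face.surprise_likelihood])
    print("sorrow:", likelihood_name[face.sorrow_likelihood])
```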
Amazon Rekognition, on the other hand, identifies a larger spectrum of feelings: happy, sad, angry, confused, disgusted, surprised, or unknown, each detected with a confidence score between 0 and 100.
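For comparison, here is a minimal sketch of the same idea with Rekognition via boto3; the image file name is again a placeholder, and configured AWS credentials are assumed.

```python
# Minimal sketch using boto3 (illustrative only).
import boto3

rekognition = boto3.client("rekognition")

with open("shopper.jpg", "rb") as f:  # hypothetical local image
    response = rekognition.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # required to receive the Emotions block
    )

for face in response["FaceDetails"]:
    # Each emotion comes back with a confidence score from 0 to 100.
    for emotion in sorted(face["Emotions"], key=lambda e: e["Confidence"], reverse=True):
        print(f'{emotion["Type"]}: {emotion["Confidence"]:.1f}')
```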
2017 comparison between Google Cloud Vision and Amazon Rekognition emotion analysis. Source: CloudAcademy.
2017 NextUser’s YouTube video: Facial Emotion Recognition
Occupancy for your retail store operations
It is critical to match staffing on the floor to the traffic your store actually draws. Combining optical devices (LIDAR), laser scanning technology (Rhino), and data processing (3D Fusion software) is not about counting individual people exactly; it is used to measure the overall occupancy of your stores.
LIDAR works by rapidly firing laser pulses (up to 900,000 times per second) at an area and measuring the time it takes the light to bounce off that area and travel back to the source. These millions of points build a digital map of the environment.
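The ranging principle itself is simple: each measured round-trip time converts directly to a distance. The sketch below illustrates the time-of-flight calculation; the pulse timing is a made-up value for illustration.

```python
# Time-of-flight ranging: distance = (speed of light * round-trip time) / 2.
SPEED_OF_LIGHT = 299_792_458  # metres per second

def distance_from_round_trip(seconds: float) -> float:
    """Distance to the reflecting surface for a measured round-trip time."""
    return SPEED_OF_LIGHT * seconds / 2

# Example: a pulse returning after ~66.7 nanoseconds hit a surface about 10 m away.
print(f"{distance_from_round_trip(66.7e-9):.2f} m")
```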
LIDAR is generally used by autonomous vehicles to navigate their surroundings, but at the 2017 NRF show, Google demonstrated that it can also be useful in other fields such as retail.
2017 NextUser’s YouTube video: LIDAR for Retail – Light Sensor Detection from Google
3D try before purchase
Augmented Reality (AR) technology lets customers visualize products from their homes and then find them down to the exact shelf in the store. Project Tango, from Google's ATAP department, combines 3D mapping with spatial positioning and indoor mapping. It uses AR to display 3D objects as a layer on top of the environment seen through the device. Tango can also integrate the objects around customers, along with their actual body motion, into the simulation.
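At its core, this kind of overlay relies on projecting a 3D anchor point (the virtual product placed in the room) into the camera image. The sketch below shows a basic pinhole-camera projection to illustrate the idea; the camera intrinsics and anchor position are made-up values, not Tango's actual API.

```python
import numpy as np

def project_point(point_camera: np.ndarray, fx: float, fy: float, cx: float, cy: float):
    """Project a 3D point in camera coordinates (metres) to pixel coordinates."""
    x, y, z = point_camera
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# A virtual product anchored 2 m in front of the camera, slightly to the right.
anchor = np.array([0.3, -0.1, 2.0])
print(project_point(anchor, fx=500.0, fy=500.0, cx=320.0, cy=240.0))
```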
2017 NextUser’s YouTube video: 3D mobile visualization
Shopping assistance to fight the paradox of choice
Offering a web, mobile, or robot assistant reduces anxiety for shoppers and increases their happiness. The goal here is to guide consumers to the right products by identifying their goals.
Using IBM Watson, we designed two prototypes by combining three services: Speech to Text, Natural Language Classifier (NLC), and Text to Speech. This approach can use open questions to interpret compatible tastes (e.g., Nespresso) or more specific answers to establish a diagnosis (e.g., “Ask SkinCeuticals” from L’Oréal).
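As a rough sketch, the classification step of such a pipeline could look like the snippet below, using the ibm-watson Python SDK; the API key, service URL, classifier ID, and transcript are placeholders for a classifier trained on shopping intents.

```python
# Minimal sketch, assuming the ibm-watson Python SDK; credentials and IDs are placeholders.
from ibm_watson import NaturalLanguageClassifierV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

nlc = NaturalLanguageClassifierV1(authenticator=IAMAuthenticator("YOUR_API_KEY"))
nlc.set_service_url("https://api.us-south.natural-language-classifier.watson.cloud.ibm.com")

# Text obtained from the Speech to Text step of the pipeline.
transcript = "I like something strong and a bit fruity in the morning"

result = nlc.classify(classifier_id="YOUR_CLASSIFIER_ID", text=transcript).get_result()
print(result["top_class"], result["classes"][0]["confidence"])
```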
Voice is becoming a browsing interface of its own for commerce and retail: there is no need to open a browser on a computer, phone, or tablet when people can simply talk to a personal robot assistant.
2017 NextUser’s YouTube video: Nespresso – NextUser with Natural Language Classifier (NLC)
NextUser’s YouTube video: SkinCeuticals – Shopping Assistant Prototype v1
The retail industry is adopting AI, AR, and LIDAR technologies at an unprecedented rate, transforming how customers interact with products and brands. By leveraging these technologies, retailers can enhance customer experiences, improve operational efficiency, and gain a competitive advantage.
Get started with NextUser to boost your performance!