New Developments in Facial Direction Estimation Using Deep Learning

Category Science

tldr #

This study by scientists from Shibaura Institute of Technology, led by Professor Chinthaka Premachandra, developed a precise, horizontal wide-range angle detection approach (with k > 5) for facial direction estimation. The scientists gathered point cloud data from various orientations using a depth sensor integrated with a gyro sensor and employed a deep learning-based classification model to estimate the orientation. The proposed classification method, designed for seven or more classes, achieved remarkable performance, with classification accuracy rates of over 98%, 95%, and 91% for 7, 9, and 11 classes, respectively.


content #

In recent years, artificial intelligence has demonstrated tremendous potential for the development and advancement of a wide variety of technologies. A good example is facial direction estimation, which finds applications in driver assistance systems that prevent distracted driving, methods to prevent cheating in examinations, and software for creating three-dimensional (3D) virtual avatars. Traditional facial orientation estimation techniques recognize the characteristic parts of the face, including the nose, eyes, and mouth, and detect their movements. However, such two-dimensional (2D) image-based methods raise privacy concerns and fail when facial features are hidden by a mask or the face is turned sideways.

The research was primarily conducted at Shibaura Institute of Technology

The solution may lie in optimizing facial detection using point cloud data (a discrete set of data points in 3D space) acquired with a depth sensor. In fact, some previous studies have employed an estimation model based on the deep learning of 3D point cloud data in five face directions: frontal, diagonal frontal, right, left, and horizontal. However, considering the level of accuracy required for driver assistance systems, which crucially verify the driver's status, this five-class (k = 5) classification is insufficient to detect the face direction satisfactorily.

The paper was published in the IEEE Sensors Journal

To address this limitation, scientists from Shibaura Institute of Technology, led by Professor Chinthaka Premachandra of the Graduate School of Engineering and Science, have developed a more precise, horizontal wide-range angle detection approach (with k > 5). They accurately measured the horizontal angle of the face during the acquisition of the training data using gyroscopic sensors. Their paper was made available online in the IEEE Sensors Journal.

Facial orientation estimation can be used in driver assistance systems and face recognition applications

In this study, the scientists gathered point cloud data from various orientations using a depth sensor, which was integrated with a gyro sensor during data collection. This data was employed to train a deep learning-based classification model, which was utilized for face orientation estimation.
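To make this pipeline concrete, the sketch below shows one minimal PointNet-style classifier that maps a raw face point cloud to orientation classes. It is an illustrative assumption: the article does not specify the network architecture used in the study, and all layer sizes and names here are hypothetical.

```python
# Minimal sketch of a PointNet-style classifier for face-orientation classes.
# Hypothetical architecture for illustration; not the network from the paper.
import torch
import torch.nn as nn

class PointCloudOrientationNet(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        # Shared per-point MLP: lifts each (x, y, z) point to a 256-d feature.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.BatchNorm1d(256), nn.ReLU(),
        )
        # Classifier head applied to the global (max-pooled) feature.
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3) -> (batch, 3, num_points) for Conv1d
        x = self.point_mlp(points.transpose(1, 2))
        # Symmetric max pooling makes the model invariant to point ordering.
        x = x.max(dim=2).values
        return self.head(x)  # (batch, num_classes) orientation logits

model = PointCloudOrientationNet(num_classes=7)
logits = model(torch.randn(4, 1024, 3))  # 4 sample clouds of 1024 points each
print(logits.shape)                      # torch.Size([4, 7])
```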

The scientists changed the horizontal angle of the face relative to the camera from +90 degrees to -90 degrees, using step sizes of 30, 22.5, 18, and 15 degrees. As a result, the classification of face direction was represented by seven or more classes (k = 7, 9, 11, 13).
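The relation between the angular step size and the number of classes follows directly from covering the 180-degree range with both endpoints included; the quick check below (not from the paper's code) reproduces the class counts.

```python
# Sweeping the face from -90 to +90 degrees in fixed angular steps
# yields 180 / step + 1 classes (both endpoints included).
for step in (30, 22.5, 18, 15):
    k = round(180 / step) + 1
    print(f"step size {step:g} degrees -> k = {k} classes")
# step size 30 degrees -> k = 7 classes
# step size 22.5 degrees -> k = 9 classes
# step size 18 degrees -> k = 11 classes
# step size 15 degrees -> k = 13 classes
```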

The scientists used a depth sensor integrated with a gyro sensor during data collection

"Precise training data for each orientation was obtained from the integration of the depth and gyro sensors, which reduce the number of point cloud samples required for constructing the classification model. Furthermore, applying a weight reduction process to reduce the weight of point cloud data enhanced training efficiency and resulted in fast face orientation estimation," explains Prof. Premachandra.

The classification of face direction was represented by seven or more classes

The proposed classification method, designed for seven or more classes, achieves remarkable performance in face direction detection through deep learning. For example, it has demonstrated classification accuracy rates of over 98%, 95%, and 91% for 7, 9, and 11 classes, respectively, representing a significant improvement over conventional face orientation estimation techniques.

Overall, this study serves as a vital reference for research into more precise facial orientation estimation methods, which can be utilized in driver assistance systems and face recognition applications.

The classification method demonstrated classification accuracy rates of over 98%, 95%, and 91% for 7, 9, and 11 classes, respectively

