Facial Expression Recognition to Detect Student Engagement in Online Lectures
DOI: https://doi.org/10.34148/teknika.v13i2.853

Keywords: Facial Expression Recognition, Convolutional Neural Network, Student Engagement, Online Lectures

Abstract
In synchronous online lectures, lecturers typically deliver material live through video conferencing technology, yet many students do not pay attention while participating. This research therefore developed an application to help lecturers gather data on how closely students attend to the presented material during online lectures. The application employs a convolutional neural network (CNN) to recognize each student's facial expression and classify it into one of two classes: engaged or disengaged. Each captured facial image is preprocessed to facilitate classification: the image is converted to grayscale, the face is detected with a Haar Cascade classifier, and a median filter is applied to reduce noise. In designing the CNN, three hyperparameter tuning scenarios were implemented, each aimed at finding the best-performing combination of hyperparameters. The experimental results indicate that the CNN model from the second scenario achieves the highest facial expression recognition accuracy, at 86%. The resulting application was then trialed to measure the level of student participation in online lectures, and the trial results show that it can help lecturers evaluate student engagement during online lectures.
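For illustration, the following minimal sketch shows how the pipeline described in the abstract could be assembled in Python with OpenCV and TensorFlow/Keras. The 48x48 input size, the median filter kernel of 3, and the exact layer stack and filter counts are assumptions made for the sketch; the paper's tuned hyperparameters from the second scenario are not reproduced here.

import cv2
import numpy as np
from tensorflow.keras import layers, models

# Preprocessing: grayscale -> Haar Cascade face detection -> median filter,
# matching the stages described in the abstract.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(frame, size=48):  # size=48 is an assumed input resolution
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                   # no face found in this frame
    x, y, w, h = faces[0]             # take the first detected face
    face = gray[y:y + h, x:x + w]
    face = cv2.medianBlur(face, 3)    # median filter to reduce noise
    face = cv2.resize(face, (size, size))
    return face.astype("float32")[..., np.newaxis] / 255.0

# A small binary CNN classifier (engaged vs. disengaged); the architecture
# below is a placeholder, not the paper's tuned model.
model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

In a deployment along these lines, preprocess would run on each participant frame captured from the video conference, and model.predict on the resulting batch would yield a per-student engagement probability for the lecturer's dashboard.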