Jornal de Pediatria (English Edition)
Vol. 99, Issue 6, Pages 546-560 (November - December 2023)
Review article
Face-based automatic pain assessment: challenges and perspectives in neonatal intensive care units
Tatiany M. Heiderich a (corresponding author: tatianymh@fei.edu.br), Lucas P. Carlini a, Lucas F. Buzuti a, Rita de C.X. Balda b, Marina C.M. Barros b, Ruth Guinsburg b, Carlos E. Thomaz a
a Centro Universitário da Fundação Educacional Inaciana (FEI), São Bernardo do Campo, SP, Brazil
b Universidade Federal de São Paulo (UNIFESP), São Paulo, SP, Brazil
Abstract
Objective

To describe the challenges and perspectives of the automation of pain assessment in the Neonatal Intensive Care Unit.

Data sources

A search for scientific articles published in the last 10 years on automated neonatal pain assessment was conducted in the main Databases of the Health Area and Engineering Journal Portals, using the descriptors: Pain Measurement, Newborn, Artificial Intelligence, Computer Systems, Software, Automated Facial Recognition.

Summary of findings

Fifteen articles were selected and allowed a broad reflection on three points: first, the literature search did not retrieve the various automatic methods that exist to date, and those that do exist are not yet effective enough to replace the human eye; second, computational methods are not yet able to automatically detect pain on partially covered faces, and they still need to be tested during the natural movement of the neonate and under different light intensities; third, for research to advance in this area, databases with more neonatal facial images must be made available for the study of computational methods.

Conclusion

There is still a gap between the computational methods developed for automated neonatal pain assessment and a practical application that is sensitive, specific, and accurate, and that can be used at the bedside in real time. The studies reviewed described limitations that could be minimized by developing a tool that identifies pain by analyzing only the free facial regions, and by creating a feasible synthetic database of neonatal facial images that is freely available to researchers.

Keywords:
Pain measurement
Newborn
Artificial intelligence
Computer systems
Software
Automated facial recognition
Introduction

Facial expression analysis is a non-invasive method for pain assessment in premature and full-term newborns frequently used in Neonatal Intensive Care Units (NICU) for pain diagnosis.1 When newborns experience a painful sensation, the facial features observed are brow bulge, eye squeeze, nasolabial furrow, open lips, stretched mouth (vertical or horizontal), lip purse, taut tongue, and chin quiver.1 These features are present in more than 90% of neonates undergoing painful stimuli, and 95-98% of term newborns undergoing acute painful procedures exhibit at least the first three facial movements.1 The same characteristics are absent when these patients suffer an unpleasant but not painful stimulus.1,2

Several pain scales have been developed for the assessment of neonatal pain. These scales contemplate facial expression analysis and are commonly used in the NICU, as follows: Premature Infant Pain Profile (PIPP) and PIPP-Revised (PIPP-R);3,4 Neonatal Pain, Agitation, and Sedation Scale (N-PASS);5 Neonatal Facial Coding System (NFCS);1 Échelle de la Douleur Inconfort Nouveau-né (EDIN Scale);6 Crying, Requires increased oxygen administration, Increased vital signs, Expression, Sleeplessness (CRIES Scale);7 COMFORTneo Scale;8 COVERS Neonatal Pain Scale;9 Pain Assessment in Neonates Scale (PAIN Scale);10 and Neonatal Infant Pain Scale (NIPS Scale).11 In clinical practice, it is necessary to evaluate the scope of application of the different scales in order to choose the appropriate scale flexibly.12

Due to the wide spectrum of different scoring methods and instruments available for the diagnosis of neonatal pain, health professionals need an extensive set of skills and knowledge to conduct this task.13 Although some health professionals recognize the occurrence of pain in the neonatal population, the facial assessment of pain is still performed empirically in real clinical situations. One way to minimize this problem would be the use of a computational tool capable of identifying pain in critically ill newborns by evaluating facial expressions automatically and in real time.

In recent years, computational methods have been developed to detect the painful phenomenon automatically,14-26 helping health professionals monitor the presence of pain and identify the need for therapeutic intervention. Even with the advancement of technology, these studies, all related to automatic neonatal pain assessment, have not addressed the practical difficulties of identifying pain in newborns who remain with devices attached to their faces. This gap is due to the difficulty of assessing facial expression in a neonate whose face is partially covered by devices, such as enteral/gastric tube fixation, orotracheal intubation fixation, and phototherapy goggles. These problems highlight the need to develop neonatal facial movement detection techniques.

In this context, this study aims to describe the challenges and perspectives of the automation of neonatal pain assessment in the NICU. Specifically, the authors propose to discuss: (i) the accessibility of the literature on computational methods for automatic neonatal pain assessment (when the literature review is carried out in the main Health and Engineering Databases); (ii) the computational methods available so far for the automatic evaluation of neonatal pain; (iii) the difficulty of evaluating a face that is partially covered by assistive devices; (iv) the reduced number of neonatal facial image databases, which hinders the advance of research; and (v) the perspectives for pain evaluation through the analysis of segmented facial regions.

The authors believe that this critical and up-to-date review is necessary for both the medical staff, who aim to choose an automatic method to assist in pain assessment over a continuous period; and for software engineers, who seek a starting point for further research related to the real needs of neonates in Intensive Care Units.

Method

In order to describe the challenges related to finding the available scientific literature that enables evidence-based clinical practice, the authors searched for scientific articles published in the last 10 years on the automatic assessment of neonatal pain.

The authors searched the main Health Area Databases27 and Engineering Journal Portals28 (VHL - Virtual Health Library; Portal CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior; Embase/Elsevier; Lilacs; Medline; PubMed; SciELO; DOAJ - Directory of Open Access Journals; IEEE Xplore - Institute of Electrical and Electronics Engineers), the Semantic Scholar database, and the arXiv free distribution service.

The literature search took place in August and September 2022, using the Health Descriptors (DeCS structured vocabulary found on the Virtual Health Library site - VHL),29 in English: Pain Measurement, Newborn, Artificial Intelligence, Computer Systems, Software, Automated Facial Recognition, with the Boolean operator AND.

The search for scientific articles included literature published in the last ten years on facial assessment of neonatal pain, selected using the following associated descriptors: Pain Measurement AND Newborn AND Artificial Intelligence; Pain Measurement AND Newborn AND Computer Systems; Pain Measurement AND Newborn AND Software; Pain Measurement AND Newborn AND Automated Facial Recognition.

Review articles, manuscripts that did not address automated facial assessment for neonatal pain diagnosis, and duplicates were excluded.

The results were descriptive and aimed to identify the computational methods that have advanced the automation of the facial assessment of neonatal pain in recent years. For this, data related to the methodology applied in each study were tabulated, as follows: the pain scale on which each study based its pain diagnosis; the database and the sample used in the research; the facial regions that participated in the pain assessment and diagnosis automation process; the sensitivity and specificity of each study's results; and the limitations and future perspectives reported by each author.

Challenging issues

Availability of literature related to automatic pain assessment in newborns

In this research, the authors identified relevant studies for the automation of neonatal pain assessment. When performing the literature search in 11 databases (Table 1), 19 articles were found, six of them in more than one database. Two studies by Zamzmi et al.20,26 were added because they were cited in Grifantini's report,30 totaling 15 articles14,20,23,26,30-40 selected for review (Table 2).

Table 1.

Number of publications found in the last ten years.

Descriptors  Pain Measurement  Pain Measurement AND Newborn  Search 1 = Pain Measurement AND Newborn AND Artificial Intelligence  Search 2 = Pain Measurement AND Newborn AND Computer Systems  Search 3 = Pain Measurement AND Newborn AND Software  Search 4 = Pain Measurement AND Newborn AND Automated Facial Recognition  Duplicated in the four different searches of the same base  Total publications in the last ten years  Non-inclusion criteria  Inclusion/[reference]
Databases
arXiv  92
DOAJ  2,786  14
Embase/Elsevier  189,086  7,825  264  1,212  2,017  199  3,692  3,692
IEEE Xplore  778  13  3/30-32
Lilacs  1,138  61
Medline  55,144  869  13  15  14  1/14
Portal CAPES  50,045  767  16  15  1/30
Pubmed  51,779  862  14  21  18  3/14,30,33
Scielo  424  18
Semantic Scholar  2,900,000  32,500  56  378  1,050  167  1,645  1,636  9/23,31,34-40
VHL  56,751  946  19  21  19  2/14,30
TOTAL  410,923  43,876  332  1,607  3,126  372  11  5,426  5,407  19

Note: DOAJ, Directory of Open Access Journals; IEEE Xplore, Institute of Electrical and Electronics Engineers; Lilacs, Literatura Latino-Americana e do Caribe em Ciências da Saúde; Medline, Medical Literature Analysis and Retrieval System Online; Portal CAPES, Portal de Periódicos da Coordenação de Aperfeiçoamento de Pessoal de Nível Superior; Scielo, Scientific Electronic Library Online; VHL, Virtual Health Library.

Table 2.

Summary of the 15 articles found.

Reference  Authors' names/Country  Title  Study Type  Objective 
14  Heiderich, T et al. (2015)/Brazil  Neonatal procedural pain can be assessed by computer software that has good sensitivity and specificity to detect facial movements.  Software development for neonatal pain assessment.  Develop and validate computer software to monitor neonatal facial movements of pain in real-time. 
23  Carlini, L et al. (2021)/Brazil  A Convolutional Neural Network-based Mobile Application to Bedside Neonatal Pain Assessment.  A smartphone application for neonatal pain assessment.  Propose and implement a mobile application for smartphones that uses Artificial Intelligence (AI) techniques to automatically identify the facial expression of pain in neonates, presenting feasibility in real clinical situations. 
30  Grifantini, C (2020)/USA  Detecting Faces, Saving Lives.  Report: Discuss how facial recognition software is changing health care.  Report on research using facial recognition technology, with machine learning algorithms and neural networks, and could be incorporated into hospitals to reduce pain and suffering and save lives. 
30 cited 20  First citation in Grifantini, C (2020) Zamzmi, G et al. (2019)/USA  Convolutional Neural Networks for Neonatal Pain Assessment  Investigate the use of Convolutional Neural Networks for assessing neonatal pain.  Investigate the use of a novel lightweight neonatal convolutional neural network as well as other popular convolutional neural network architectures for assessing neonatal pain. 
30 cited 26  Second citation in Grifantini, C (2020) Zamzmi, G et al. (2022)/USA  A Comprehensive and Context-Sensitive Neonatal Pain Assessment Using Computer Vision  Present an automated system for neonatal pain assessment.  Present a pain assessment system that utilizes facial expressions along with crying sounds, body movement, and vital sign changes. 
31  Egede, J et al. (2019)/United Kingdom  Automatic Neonatal Pain Estimation: An Acute Pain in Neonates Database.  Present an automated system for neonatal pain assessment.  Present a system for neonatal pain assessment that encodes pain-indicative features. 
32  Martinez-B, A et al. (2014)/Spain  An Autonomous System to Assess, Display and Communicate the Pain Level in Newborns.  Present an automated system for neonatal pain assessment - Web application.  Present a system that automatically analyses the pain or discomfort levels of newborns. 
33  Roué, J et al. (2021)/France  Using sensor-fusion and machine-learning algorithms to assess acute pain in non-verbal infants: a study protocol.  Study protocol, clinical trial, prospective observational study.  Identify the specific signals and patterns from each facial sensor that correlate with the pain stimulus. 
34  Cheng, X et al. (2022)/China  Artificial Intelligence Based Pain Assessment Technology in Clinical Application of Real-World Neonatal Blood Sampling.  Prospective study - a client-server model running on mobile devices.  Analyze the consistency between the NPA results of a self-developed automated NPA system and nurses' on-site NPAs (OS-NPAs). 
35  Domingues, P et al. (2021)/Brazil  Neonatal Face Mosaic: An areas-of-interest segmentation method based on 2D face images.  Creation of a facial mosaic to aid in the facial assessment of neonatal pain.  Propose the separation of the face into predefined polygonal regions relevant to pain detection in neonates. 
36  Han, J et al. (2012)/Netherlands  Neonatal Monitoring Based on Facial Expression Analysis.  Development of a system to analyze various facial regions.  Design a prototype of an automated video monitoring system for detecting discomfort in newborns by analyzing their facial expression. 
37  Mansor, M et al. (2014)/Malaysia  Infant Pain Detection with Homomorphic Filter and Fuzzy k-NN Classifier.  Present an automated system for neonatal pain assessment.  Evaluate the performance of illumination levels for infant pain classification. 
38  Parodi, E et al. (2017)/Italy  Automated Newborn Pain Assessment Framework Using Computer Vision Techniques.  Proposed algorithm for neonatal pain assessment.  Propose a computerized tool for neonatal pain evaluation based on patients’ facial expressions. 
39  Wang, Y et al. (2022)/China  Full‑convolution Siamese network algorithm under deep learning used in tracking of facial video image in newborns.  Explore a new tracking network for neonatal pain assessment.  Explore the full-convolution Siamese network in neonatal facial video image tracking application. 
40  Dosso, Y et al. (2022)/Canada  NICUface: Robust Neonatal Face Detection in Complex NICU Scenes  Creation of robust NICU face detectors.  Create two neonatal face detection models (NICUface) by fine-tuning the most performant pre-trained face detection models on exceptionally challenging NICU scenes. 

Note: AI, Artificial intelligence; Fuzzy k-NN, Fuzzy K-Nearest Neighbor; NICU, Neonatal Intensive Care Unit; NPA, Neonatal Pain Assessment; OS-NPAs, on-site NPAs.

It is worth noting that these two added articles were not found using the selected descriptors. This dissonance revealed one of the challenges of the literature search: depending on the keywords used, researchers may not find relevant studies on the topic.

One way to maximize the retrieval of scientific documents would be to systematize the search process across all databases, using words common to all search systems. Researchers are also advised to use keywords in their publications that address concepts from both the Health and Engineering Areas. A minimal sketch of such a systematized query builder is shown below.
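
As an illustration only, the following Python sketch builds the four Boolean query strings used in this review so that identical queries can be submitted to every database; the descriptor lists come from the Method section, while the helper function itself is hypothetical.

```python
# Build the four Boolean queries from the review's descriptors (Method section).
BASE = ["Pain Measurement", "Newborn"]
VARIANTS = ["Artificial Intelligence", "Computer Systems",
            "Software", "Automated Facial Recognition"]

def build_queries() -> list[str]:
    """Return one 'A AND B AND C' query string per descriptor variant."""
    return [" AND ".join(BASE + [variant]) for variant in VARIANTS]

for query in build_queries():
    print(query)  # e.g., "Pain Measurement AND Newborn AND Artificial Intelligence"
```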

Table 3 shows the methods used in each study, the pain scale on which each study based its pain diagnosis, the database, the facial regions needed for face detection and diagnosis of neonatal pain, and the diagnostic accuracy of each method. Each study used a different method for pain detection. Interestingly, these methods did not find it necessary to detect some facial regions, such as the cheeks, nose, and chin, for pain diagnosis. Even so, the studies were not shown to be effective enough to be used in clinical practice, because the methods developed so far have not been tested at the bedside. Each study's limitations and future perspectives are summarized in Table 4.

Table 3.

Result of the literature search.

Reference  Method/Scale which was based  Database used (Sample)  Discrimination of facial regions  Sensitivity and specificity 
14  Developed in the Delphi environment, based on image recognition of pain-related facial actions./Scale: NFCS  Own Base - UNIFESP University Hospital (30 newborns between 35 and 41 weeks of gestational age).  Bulging brow; narrowing of the lid slit; deepening of the nasolabial furrow; open lips; mouth stretching.  The software exhibited 85% sensitivity and 100% specificity in detecting neutral facial expressions in the resting state, and 100% sensitivity and specificity in detecting procedural pain in neonates. 
23  A computational model with face detection, data augmentation, and classification model with transfer learning to a CNN architecture pre-trained by adding fully connected layers specifically trained with neonatal face images. The application was developed for the Android operating system using the Android Studio IDE./Scale: NFCS  UNIFESP University Hospital (30 newborns between 35 and 41 weeks of gestational age) and Infant COPE (26 Caucasian neonates).  Not applicable.  This model achieved 93.07% accuracy, 0.9431 F1 Score, and 0.9254 AUC. 
30  Report/Scale: Not applicable.  Not applicable.  Not applicable.  Not applicable. 
30 cited 20  Compares the use of a novel Neonatal Convolutional Neural Network along with others (ResNet50 and VGG-16) for the pain assessment application./Scale: NIPS  Infant COPE (26 Caucasian neonates) and NPAD (31 neonates between 32 and 40 weeks of gestational age).  Not applicable.  Assessing neonatal pain using LBP features achieved 86.8% average accuracy; assessing neonatal pain using HOG features with support vector machines achieved 81.29% average accuracy; the proposed N-CNN, which extracts features directly from the images, achieved state-of-the-art results and outperformed ResNet, VGG-16, as well as handcrafted descriptors. 
30 cited 26  Existing static methods have been divided into two categories: handcrafted-representation-based methods and deep-representation-based methods./Scale: NIPS  Infant COPE (26 Caucasian neonates).  Not applicable.  The system achieved 95.56% accuracy using decision fusion of different pain responses that were recorded in a challenging clinical environment. 
31  Uses handcrafted algorithms and deep-learned features./Scale: NIPS and NFCS  Own Base - APN-db (213 newborns between 26 and 41 weeks of gestational age).  Brow bulge, eye squeeze, nasolabial furrow, open lips, stretch mouth (vertical), stretch mouth (horizontal), lip purse, taut tongue, chin quiver.  The system performs well with an RMSE of 1.94 compared to human error of 1.65 on the same dataset, demonstrating its potential application to newborn health care. 
32  The behavioral parameters related to movement and expression are measured using computer vision techniques./Scale: NIPS, BPSN, DAN, NFCS, PIPP and CRIES  Not reported.  Head movement, expression of pain, frowning, lips movement, eyes open/closed, cheek frowning.  Not reported. 
33  Uses facial electromyography to record facial muscle activity related to infant pain./Scale: N-PASS, PIPP-R, NFCS, FLACC and VAS  Own Base (painful procedures will be recorded in a minimum of 60 newborns and infants averaging 6 months of age).  Forehead, cheek, eyebrow puffing, eye pinch, and nasolabial sulcus.  Tests will be performed in further studies. 
34  It was implemented with the client-server model and designed to run on the mobile nursing personal digital assistant device./Scale: NIPS  Own Base (232 newborns with a mean gestational age of 33.93 ± 4.77 weeks).  Frown, eye squeezing, nasolabial fold deepening, mouth stretching, and tongue tightening.  The accuracies of the NIPS pain score and pain grade given by the automated NPA system were 88.79% and 95.25%, with kappa values of 0.92 and 0.90 (p < 0.001), respectively. 
35  Identifying, transforming, and extracting the regions of interest from the face, assembling an average face of the newborns, and using similarity metrics to check for artifacts./Scale: NFCS  UNIFESP University Hospital (30 newborns between 35 and 41 weeks of gestational age).  Eyebrows, eyes, nose, the region between the eyes, mouth, nasolabial folds, cheeks, and forehead.  Not reported. However, all images could be mapped and segmented by region. 
36  The system consists of several algorithmic components, ranging from face detection, determination of the region of interest, and facial feature extraction to behavior stage classification./Scale: Unmentioned  Own Base (newborns with different conditions)  Eyes, eyebrows, and mouth.  The algorithm can operate with approximately 88% accuracy. 
37  The Local Binary Pattern features are computed and the Fuzzy k-NN classifier is employed to classify newborn pain./Scale: Unmentioned  Infant COPE (26 Caucasian neonates)  Not applicable.  Using the HOMO method, the sensitivity is 96.667%. Specificity ranged from 96.5% to 97.2%, and accuracy ranged from 93.3% to 97.2%, depending on the illumination. The fastest time consumption was obtained by conventional validation under 100 illumination levels, at 0.065 s. 
38  Facial features were extracted through different image processing methods: placement and tracking of landmarks, edge detection, and binary thresholding./Scale: NFCS, PIPP and DAN  Own Base - Ordine Mauriziano Hospital (15 healthy full-term neonates between 48 and 72 hours of life).  Eye squeeze (between mid-eyebrow and mid-lower eyelid), cheek raise (between eye medial corner and nose corner), brow bulging (between eyebrows medial border).  The overall result is not reported, but some operators' evaluations were particularly inconsistent regarding some parameters, such as face furrowing, for which the scores had very low consistency (about 40%). 
39  The commonly used face detection methods are introduced first; then, the convolutional neural network in deep learning is analyzed, improved, and applied to the facial recognition of newborns./Scale: Used in Hubei hospital  Own Base - Hubei hospital (40 newborns no more than 7 days old).  Not applicable.  The accuracy of the improved algorithm is 0.889, higher by 0.036 in contrast to other models; the area under the curve (AUC) of the success rate reaches 0.748, higher by 0.075 compared with other algorithms. 
40  Compare five pre-trained face detection models, proposing two new NICUface models./Scale: Unmentioned  CHEO (33 newborns), COPE (27 newborns), and NBHR (257 patients)  Not applicable.  The proposed NICUface models outperform previous state-of-the-art models for neonatal face detection and are robust to many identified complex NICU scenes. 

Note: APN-db, Acute Pain in Neonates database; AUC, Area Under the Curve; BPSN, Bernese Pain Scale for Neonates; CHEO, Children's Hospital of Eastern Ontario; COPE, Classification of Pain Expression; CRIES, Crying, Requires increased oxygen administration, Increased vital signs, Expression, Sleeplessness; DAN, Douleur Aiguë du Nouveau-né; FLACC, Face, Legs, Activity, Cry, Consolability; Fuzzy k-NN, Fuzzy K-Nearest Neighbor; HOG, Histogram of Oriented Gradients; HOMO, Homomorphic Filter; IDE, Integrated Development Environment; LBP, Local Binary Pattern; NBHR, Newborn Baby Heart Rate; N-CNN, Neonatal Convolutional Neural Network; NFCS, Neonatal Facial Coding System; NIPS, Neonatal Infant Pain Scale; NPA, Neonatal Pain Assessment; NPAD, Neonatal Pain Assessment Dataset; N-PASS, Neonatal Pain, Agitation, and Sedation Scale; PIPP, Premature Infant Pain Profile; PIPP-R, Premature Infant Pain Profile-Revised; RMSE, Root Mean Square Error; UNIFESP, Universidade Federal de São Paulo; VAS, Visual Analogue Scale; VGG, Visual Geometry Group.

Table 4.

Result of the literature search.

Reference  Each study's limitation  Reported perspectives 
14 
  • The software is unable to detect points corresponding to the lower area of the face, such as chin movements, or specific points on the tongue;

  • The face with medical devices is not detected.

 
  • It should be noted that the ideal tool would indicate when pain intensity deserves treatment or treatment adjustments;

  • The use of an electronic eye, less dependent on humans, could help to integrate pain assessment with pain treatment in the context of neonatal care.

 
23 
  • Translational research needs to be done to assess the accuracy of the app for different neonates and clinical situations, determining whether the neural network's performance is less prone than human performance to the subjective variables that modify pain assessment;

  • The face with medical devices is not detected.

 
  • Apply different explainable AI methods to better understand facial regions that might be relevant to pain assessment and use all the depicted face images to enlarge the face image samples and train our computational model.

  • Evaluate more recent CNN architectures.

  • Perform hands-on testing of the mobile application.

 
30 
  • Not applicable.

 
  • Not applicable.

 
30 cited 20 
  • The current approach was evaluated on a relatively small number of infants;

  • The face with medical devices is not detected.

 
  • Explore the use of CNNs to develop a highly accurate pain assessment application;

  • Improve the effectiveness of pain intervention while mitigating the short- and long-term outcomes of pain exposure early in life;

  • Realize a multimodal approach to pain assessment that allows pain to be assessed in circumstances when not all pain responses are available due to clinical condition, activity level, or sedation;

  • Integrate contextual information, such as medication type/dose, to obtain a context-sensitive pain assessment.

  • Collect a large multimodal dataset during hospitalization in the NICU;

  • Investigate the possibility of using neonate sounds as soft biometrics.

 
30 cited 26 
  • High False Negative Rate;

  • The current approach was evaluated on a relatively small number of infants;

  • The current work does not provide a comparison between the assessment of the proposed automatic system and human;

  • The face with medical devices is not detected.

 
  • Investigate several directions for minimizing False Negative Rate;

  • Employ or implement advanced noise reduction methods;

  • Enlarge the training data using traditional augmentation methods and Generative adversarial networks;

  • Follow another approach, assessing the level or intensity of the detected pain class;

  • Evaluate the approach on a larger dataset of infants recorded during both procedural and postoperative pain;

  • Investigate the association between neonatal pain and the brain's hemodynamic activities using Near-infrared Spectroscopy;

  • Explore the association between neonatal pain and changes in skin color as well as the association between pain and eye movement/pupil dilation;

  • Test how well the automatic system performs as compared to human judgments;

  • Adding points to the total score of infants, based on age, pain history, or other factors, to compensate for their limited ability to behaviorally or physiologically communicate pain.

 
31 
  • In terms of PCC, the performance is relatively low;

  • The face with medical devices is not detected.

 
  • The goal is pain intensity estimation;

  • Future work will focus on incorporating other pain modalities, particularly body movements which are part of the NFLAPS and collecting additional data.

 
32 
  • Does not report whether the system will be able to evaluate faces with medical devices.

 
  • Providing the relatives (parents) with a tool that allows remote supervising of their newborn's wellness.

 
33 
  • The face with medical devices is not detected.

 
  • Evaluate the intensity of the pain, classifying it as mild, moderate, or severe pain;

  • Future applications may also include patient populations incapable of expressing pain (children with disability, adults with dementia, or mechanically ventilated patients).

 
34 
  • It is not possible to accurately assess pain scores or pain grades with AI technology;

  • The automated NPA system currently requires an additional nurse to record the video;

  • Does not report whether the system will be able to evaluate faces with medical devices.

 
  • Automate the entire process by recording video with bedside cameras in the future.

  • In the future, the AI technology embedded in the electronic medical record system will enable real-time pain intervention by medical staff.

  • The downstream health education system can further provide pain-knowledge education to newborns' family members, making the whole pain management process traceable, standardized, and intelligent.

 
35 
  • The metrics to validate the presence of artifacts need further testing. Mutual information was not able to validate, Pearson's correlation coefficient validated part of the cases, and only the root mean square error showed promise.

 
  • Test more metrics and techniques to detect the presence of artifacts in each facial region.

 
36 
  • The system should be more illumination independent and should be able to handle partial component occlusions.

  • The face with medical devices is not detected.

 
  • Perform an analysis that can be applied in a real hospital situation.

 
37 
  • The face with medical devices is not detected.

 
  • Implement the system in clinical practice, since the classification result shows that the proposed technique could be employed as a valuable tool for classifying the newborn between pain and normal with Fuzzy k-NN Classifier.

 
38 
  • The limited size of the dataset requires extension and further experimentation;

  • Rapid head movements are a problem for landmark tracking;

  • A thorough exploration of the parameters and alternatives of the KLT algorithm seems necessary to ensure more stability to the system;

  • The face with medical devices is not detected.

 
  • Conduct additional video acquisition campaigns that will lead to a substantial extension of the original dataset;

  • Increase the number of expert operators performing manual pain assessment.

  • To better analyze the consistency and repeatability of the process, multiple scoring sessions on the videos should be performed by the same operators at different periods;

  • Adopt landmark selection algorithms best suited for children's faces;

  • Video processing can be effectively complemented by the analysis of audio information.

 
39 
  • The model cannot be updated online;

  • The amount of data in the newborn image database is not very large, and the images in the dataset had to be augmented during the experiments;

  • It needs more experiments with faces covered by devices.

 
  • In follow-up research, the algorithm can be further optimized, based on the model built in this research, to improve its real-time performance and recognize the emotions of newborns in real time.

  • When the facial expressions of newborns are collected to build a standard newborn image database with a large volume of data, the algorithm can be combined with other auxiliary information, such as crying and body movements, for multimodal classification and recognition.

 
40 
  • Not all methods were able to robustly identify patients' faces in complex scenes involving phototherapy lighting, ventilation support, view in dorsal decubitus, and prone position;

  • One of the proposed settings works better on smaller faces, and the other works better in a brighter environment.

 
  • Enhance the accuracy of NICUface; complement the networks using an ensemble network, combining the strengths of the models;

  • Use these models for the implementation of other neonatal monitoring applications (e.g., in-home monitoring or intelligent monitoring applications from smartphones).

 

Note: CNN, convolutional neural networks; Fuzzy k-NN, Fuzzy K-Nearest Neighbor; KLT, Kanade–Lucas–Tomasi; NFLAPS, Neonatal Face and Limb Acute Pain; NICU, Neonatal Intensive Care Unit; NPA, Neonatal Pain Assessment; PCC, Pearson Correlation Coefficient.

Computational methods available for automatic neonatal pain assessment

In 2006, a pioneering study was conducted to classify facial expressions of pain. The authors applied three pattern recognition techniques: principal component analysis, linear discriminant analysis, and support vector machines. The face image dataset was captured during cradling (a disturbance that can provoke crying that is not in response to pain), an air stimulus on the nose, and friction on the external lateral surface of the heel. The model based on the support vector machine achieved the best performance: pain versus non-pain, 88%; pain versus rest, 94.6%; pain versus cry, 80%; pain versus air puff, 83%; and pain versus friction, 93%. The results of this study suggested that the application of facial classification techniques to pain assessment and management was becoming a promising area of investigation.41
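
For readers less familiar with these classical techniques, the sketch below wires together a PCA-plus-SVM pipeline of the kind the 2006 study describes, using scikit-learn; the synthetic data, component count, and kernel are hypothetical stand-ins, not the study's actual configuration.

```python
# Minimal PCA + linear-SVM pain/non-pain classifier (illustrative only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: 60 flattened 64x64 grayscale face crops with balanced labels
# (1 = pain, 0 = non-pain); a real study would load curated images instead.
X = np.random.rand(60, 64 * 64)
y = np.repeat([0, 1], 30)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Project pixels onto a few principal components, then classify with a linear SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="linear"))
clf.fit(X_train, y_train)
print(f"pain vs. non-pain accuracy: {clf.score(X_test, y_test):.3f}")
```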

In 2008, one of the first attempts to automate the facial expression assessment of neonatal pain was made in a study developed to compare the distances between specific facial points. However, the user had to mark each facial point manually, so the method was of great interest for clinical research, but not for clinical use.42

In 2015, Heiderich et al. developed software to assess neonatal pain. This software was capable of automatically capturing facial images, comparing corresponding facial landmarks, and diagnosing pain presence. The software demonstrated 85% sensitivity and 100% specificity in the detection of neutral facial expressions, and 100% sensitivity and specificity in the detection of pain during painful procedures.14

In 2016, a study based on machine learning15 proposed an automated multimodal approach that used a combination of behavioral and physiological indicators to assess newborn pain. Pain recognition yielded 88%, 85%, and 82% overall accuracy using solely facial expression, body movement, and vital signs, respectively. The combination of facial expression, body movement, and changes in vital signs (i.e., the multimodal approach) achieved 95% overall accuracy. These preliminary results revealed that using behavioral indicators of pain along with physiological indicators could better assess neonatal pain.15
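
As a hedged illustration of how such a multimodal combination can work, the sketch below fuses per-modality pain probabilities with a weighted average (late fusion); the modality names, scores, and weights are hypothetical stand-ins, not the study's actual models or parameters.

```python
# Late fusion: combine per-modality pain probabilities into a single score.
def fuse_pain_scores(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average over the modalities present in `scores`."""
    total_weight = sum(weights[m] for m in scores)
    return sum(weights[m] * scores[m] for m in scores) / total_weight

# Hypothetical outputs of three single-modality classifiers, each in [0, 1].
scores = {"face": 0.91, "body": 0.62, "vitals": 0.55}
weights = {"face": 0.50, "body": 0.25, "vitals": 0.25}
print(f"fused pain score: {fuse_pain_scores(scores, weights):.2f}")  # 0.75
```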

In 2018, researchers created a computational framework for pattern detection, interpretation, and classification of frontal face images for automatic pain identification in neonates. Classification of pain faces by the computational framework versus classification by healthcare professionals using the pain scale "Neonatal Facial Coding System" reached 72.8% accuracy. The authors reported that some disagreements between the assessment methods could stem from unstudied confounding factors, such as the classification of faces related to stress or newborn discomfort.16

In the same year, another group of researchers presented a dynamic method related to the duration of facial activity, combining temporal and spatial representations of the face.17 In this study, the authors used facial configuration descriptors, head pose descriptors, numerical gradient descriptors, and temporal texture descriptors to describe facial changes over time. The dynamic facial representation and the multi-feature combination scheme were successfully applied to infant pain assessment. The authors concluded that profile-based infant pain assessment is also feasible, because its performance was almost as good as using the whole face. In addition, the authors noted that gestational age was one of the most influential factors for infant pain assessment, highlighting the importance of designing specific models depending on gestational age.17

Other researchers implemented a computational framework using triangular meshes to generate a spatially normalized, high-resolution atlas, potentially useful for the automatic evaluation of neonatal pain.19 These atlases are essential to describe characteristic and detailed facial patterns, preventing irrelevant image effects or signals (undesirable particularities inherent to the imperfect data acquisition process) from being erroneously propagated as discriminative variations.

Also in 2019, researchers created a network for neonatal pain classification, called Neonatal Convolutional Neural Network (N-CNN), designed to analyze neonates' facial expressions. The proposed network achieved encouraging results, suggesting that automated neonatal pain recognition may be a viable and efficient alternative for pain assessment.20 This was the first CNN built specifically for neonatal pain assessment that did not use transfer learning as a methodology.
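
As a hedged sketch of what a from-scratch (no transfer learning) pain classifier looks like, the PyTorch model below maps a face crop to pain/no-pain logits; the layer sizes and input resolution are illustrative and do not reproduce the published N-CNN architecture.

```python
# Illustrative from-scratch CNN for binary pain classification (not the
# published N-CNN; layer sizes and the 120x120 input are hypothetical).
import torch
import torch.nn as nn

class TinyPainCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling to a 64-dim vector
        )
        self.classifier = nn.Linear(64, 2)  # logits: [no-pain, pain]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = TinyPainCNN()
dummy_face = torch.randn(1, 3, 120, 120)  # one illustrative RGB face crop
print(model(dummy_face).shape)            # torch.Size([1, 2])
```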

In addition, another group of researchers developed an automated neonatal discomfort detection system based on video monitoring, divided into two stages: (1) face detection and face normalization; and (2) feature extraction and facial expression classification to discriminate infant status into comfort or discomfort. The experimental results showed an accuracy of 87% to 97%. However, even though the results were promising for use in clinical practice, the authors reported the need for new studies with more newborn data to evaluate and validate the system.21

A major technological advance occurred in 2020 with the development of a new video dataset for automatic neonatal pain detection called iCOPEvid (infant Classification of Pain Expressions videos). The creators of this dataset also presented a system to classify the iCOPEvid segments into two categories: pain and non-pain. The results were superior to those of human classification on the same task; however, the addition of CNNs to further improve the results was not successful, and the authors therefore reported the need for further studies using CNNs.18

In 2020, a new study proposed an application of multivariate statistical analysis, in the context of images of newborns with and without pain, to explore, quantify, and determine behavioral measures that would help in the creation of generalist pain classification models, both by automated systems and by health professionals. The authors reported that, using behavioral measures, it was possible to classify the intensity of pain expression and identify the main facial regions involved in this process: frowning of the forehead, squeezing of the eyes, deepening of the nasolabial groove, and horizontal opening of the mouth made the model similar to a face with pain, whereas features such as mouth closure, eye opening, and forehead relaxation made the model similar to a face without pain. The developed framework showed that it is possible to statistically classify the expression of pain and non-pain through facial images and to highlight discriminant facial regions for the pain phenomenon.19

In 2021, two studies were conducted using deep neural networks. One compared the N-CNN and an adapted ResNet50 architecture to find the model best suited to the neonatal face recognition task; the modified ResNet50 performed best, with an accuracy of 87.5% on the COPE image bank.22 The other study used neural networks for newborn face detection and pain classification in the context of mobile applications, and was the first to apply explainable Artificial Intelligence (AI) techniques to neonatal pain classification.23
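
A minimal sketch of the transfer-learning recipe mentioned here, assuming PyTorch/torchvision: freeze an ImageNet-pretrained ResNet50 and retrain only a new two-class head. The learning rate and the commented training loop are hypothetical, not the study's settings.

```python
# Transfer-learning sketch: freeze a pretrained ResNet50, retrain a new head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():
    param.requires_grad = False                    # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)      # new head: pain vs. no-pain

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)  # hypothetical lr
criterion = nn.CrossEntropyLoss()

# Hypothetical training loop over a DataLoader of labeled neonatal face crops:
# for images, labels in loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```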

Subsequently, new research reviewed the practices and challenges of pain assessment and management in the NICU using AI. The researchers reported that AI-based frameworks can use single or multiple combinations of continuous objective variables, that is, facial and body movements, cry frequencies, and physiological data (vital signs), to make high-confidence predictions about time-to-pain onset following postsurgical sedation. The authors reported that emerging AI-based strategies have the potential to minimize or avoid damage to the newborn's body and psyche from postsurgical pain and opioid withdrawal.24

Another study group created an AI system, called "PainChek Infant", for the automatic recognition and analysis of the faces of infants aged 0 to 12 months, allowing the detection of six facial action units indicative of the presence of pain. PainChek Infant pain scores showed a good correlation with the Neonatal Facial Coding System-R and the Observer-administered Visual Analogue Scale scores (r = 0.82-0.88; p < 0.0001). PainChek Infant also showed good to excellent interrater reliability (ICC = 0.81-0.97, p < 0.001) and high levels of internal consistency (α = 0.82-0.97).25

In 2022, a pain assessment system was created using facial expressions, crying, body movement, and vital sign changes. The proposed automatic system generated a standardized pain assessment comparable to those obtained by conventional nurse-derived pain scores, achieving 95.56% accuracy according to the authors. The results showed that the automatic assessment of neonatal pain is a viable and more efficient alternative to manual assessment.26

Additionally, in 2023, a systematic review discussed the models, methods, and data types used to lay the foundations for an automated pain assessment system based on deep learning. In total, one hundred and ten pain assessment works based on unimodal and multimodal approaches were identified for different age groups, including neonates. According to the authors, artificial intelligence solutions in general, and deep neural networks in particular, are models that perform complex functions but lack transparency, which is the main reason for criticism. The review also demonstrated the importance of multimodal approaches for automatic pain estimation, especially in clinical settings, and highlighted that the limited number of studies exploring the phenomenon of pain beyond extraordinary situations, or considering different contexts, may be one of the limitations of current approaches regarding their applicability in real-life settings and circumstances.43

All studies reported significant limitations that preclude the use of their methods in clinical NICU practice, such as: (1) the inability to detect points corresponding to the lower facial area, chin movements, specific tongue points, and rapid head movements; (2) the small number of neonates available for evaluating and testing the algorithms; and (3) the inability to robustly identify patients' faces in complex scenes involving phototherapy lighting and ventilation support. Given these limitations, there is an emerging need to evaluate and validate each neonatal pain assessment automation method proposed to date.

A limited number of databases of neonatal facial images

The small number of published studies on automated neonatal pain analysis and assessment using Computer Vision and Machine Learning technologies may be related to the limited number of neonatal image datasets available for research.17

Currently, there are few datasets for the facial expression analysis of pain in newborns. The publicly available databases are COPE;41 the Acute Pain in Neonates database (APN-db);31 Facial Expression of Neonatal Pain (FENP);44 freely available YouTube data, used as early as 2014 in a systematic review study;45 USF-MNPAD-I (University of South Florida Multimodal Neonatal Pain Assessment Dataset);46 and the Newborn Baby Heart Rate Estimation database (NBHR),47 which provides facial images of newborns but is primarily aimed at monitoring physiological signs.

All these databases are widely used in the academic scientific environment; however, they have some limitations, such as the small number of images; images of only one ethnic group; low reliability (no information about ethics committee approval); and images of a specific clinical population, mainly term newborns, which precludes studies with preterm and critically ill newborns.

Moreover, except for the USF-MNPAD-I database,46 these databases only contain images of newborns with an unobstructed face, and none with devices attached to the face. This scarcity of databases with images of critically ill newborns hampers the development of new methods to automate pain assessment in this very specific population.

Perspectives

This article attempted to report the difficulties faced so far in the creation of an automatic method for pain assessment in the neonatal context. Based on the several frameworks analyzed, it is evident that there are gaps in the development of practical applications that are sensitive, specific, and accurate, and that can be used at the bedside.

For research to advance in this area, a larger number of neonatal facial images is needed to test and validate algorithms. The authors believe that a convenient way to overcome this practical issue would be the creation of synthetic databases, which might contemplate not only an increased number of facial images but also different races, sexes, types of patients, and types of devices used, aiming at a better generalization of the algorithms.
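
Until such synthetic databases exist, classic image augmentation is a modest, commonly used way to stretch a small image set, as the reviewed studies themselves suggest; the torchvision sketch below is illustrative (hypothetical transform parameters) and is not a substitute for the generative models that a true synthetic database would require.

```python
# Classic augmentation to enlarge a small neonatal face image set (illustrative
# parameters only; a true synthetic database would need generative models).
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),  # varied light intensity
    transforms.RandomRotation(degrees=10),                 # small head-pose changes
    transforms.RandomResizedCrop(120, scale=(0.8, 1.0)),   # occlusion-like crops
])
# augmented = augment(pil_face_image)  # apply per (hypothetical) PIL face image
```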

Another limitation in the studies is the difficulty of detecting pain in partially covered faces. As previously stated, the devices attached to the face of the newborn hinder the visualization of all points and facial regions that are indispensable for the automatic evaluation of pain.

This problem only happens because the computational methods developed so far assume that all facial regions need to be detected, evaluated, and scored to make the pain diagnosis possible. Trying to guess what the image of the facial region behind the medical device looks like may not be the best alternative to solve this problem.

One possibility would be to identify pain only by analyzing the free facial regions. The development of a system that evaluates segmented parts of the face would make it possible to assess and classify pain weighted only by the free facial regions, without requiring the identification and classification of all facial points, as is currently done holistically.

The creation of a classifier by facial region would allow the identification of which regions are more discriminating for the diagnosis of neonatal pain. Consequently, it would be possible to give scores with different weights for each visible facial region and maximize the process of pain assessment of newborns who remain with part of the face occluded.
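
A minimal sketch of this region-weighted idea follows, assuming per-region classifiers already exist and each returns a pain probability for a visible region; the region names and weights are hypothetical illustrations of the proposal, not values from any reviewed study.

```python
# Region-weighted pain scoring over visible facial regions only (illustrative).
REGION_WEIGHTS = {"brow": 0.30, "eyes": 0.30, "nasolabial": 0.20, "mouth": 0.20}

def pain_score(region_scores: dict[str, float]) -> float:
    """region_scores maps each *visible* region to a pain probability in [0, 1];
    weights are renormalized over the regions that are actually visible."""
    total_weight = sum(REGION_WEIGHTS[region] for region in region_scores)
    return sum(REGION_WEIGHTS[r] * s for r, s in region_scores.items()) / total_weight

# Mouth occluded by tube fixation, so it is simply omitted from the input.
print(f"{pain_score({'brow': 0.9, 'eyes': 0.85, 'nasolabial': 0.7}):.2f}")  # 0.83
```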

In addition, new methods of facial assessment need to be tested during the natural movement of the neonate and with different light intensities. It would be important to test how the computer method for automatic pain assessment works together with other assessment methods such as manual assessment using facial pain scales, body movement assessment, brain activity, sweating, skin color, pupil dilation, vital signs, and crying.

It is worth mentioning that, for decision-making, neither clinical practice (based on pain scales) nor computer models alone would be sufficient to reach a more accurate decision process. This is because, without interpreting the information used for decision-making by humans and machines, one cannot affirm that the assessment was made with precision. Therefore, studies that seek to understand this information, extracted from both human and machine eyes, can help to create models that combine these two types of learning. Examples of such studies are those of Silva et al.,48 Barros et al.,49 and Soares et al.,50 which used gaze tracking of observers during newborn pain assessment, and research using eXplainable Artificial Intelligence (XAI) models, such as those of Carlini et al.23 and Coutrin et al.51

With technological advances, it will be possible to create a method capable of identifying the presence and intensity of neonatal pain, differentiating pain from discomfort and acute pain from chronic pain, thus providing the appropriate care and treatment for each patient, according to gestational age and within the complexity of the NICU environment.

Funding source

This work received financial support from the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), related to the doctoral scholarship of the student Tatiany Marcondes Heiderich (Brazil/Process n. 142499/2020-0), and from the Centro Universitário da FEI - Fundação Educacional Inaciana.

Acknowledgments

To CAPES and Centro Universitário da FEI - Fundação Educacional Inaciana for the scholarships.

References
[1]
RVE Grunau, KD. Craig.
Pain expression in neonates: facial action and cry.
Pain, 28 (1987), pp. 395-410
[2]
R Guinsburg.
Assessing and treating pain in the newborn.
J Pediatr, 75 (1999), pp. 149-160
[3]
BR Stevens, CR Johnston, PR Petryshen, AB. Taddio.
Premature infant pain profile: development and initial validation.
[4]
BJ Stevens, S Gibbins, J Yamada, K Dionne, G Lee, C Johnston, et al.
The Premature Infant Pain Profile-Revised (PIPP-R).
Clin J Pain, 30 (2014), pp. 238-243
[5]
P Hummel, M Puchalski, S Creech, M.G. Weiss.
N-PASS: neonatal pain, agitation, and sedation scale - reliability and validity.
Presented at: Pediatric Academic Societies’ annual meeting, Seattle, WA.
[6]
T Debillon, V Zupan, N Ravault, JF Magny, M. Dehan.
Development and initial validation of the EDIN scale, a new tool for assessing prolonged pain in preterm infants.
Arch Dis Child Fetal Neonatal Ed, 85 (2001), pp. F36-F41
[7]
SW Krechel, J Bildner.
CRIES: a new neonatal postoperative pain measurement score. Initial testing of validity and reliability.
Paediatr Anaesth, 5 (1995), pp. 53-61
[8]
M van Dijk, DW Roofthooft, KJ Anand, F Guldemond, J de Graaf, S Simons, et al.
Taking up the challenge of measuring prolonged pain in (premature) neonates.
Clin J Pain, 25 (2009), pp. 607-616
[9]
IL Hand, L Noble, D Geiss, L Wozniak, C. Hall.
COVERS Neonatal pain scale: development and validation.
Int J Pediatr, 2010 (2010).
[10]
D Hudson-Barr, B Capper-Michel, S Lambert, T Mizell Palermo, K Morbeto, S Lombardo.
Validation of the Pain Assessment in Neonates (PAIN) Scale with the Neonatal Infant Pain Scale (NIPS).
Neonatal Netw, 21 (2002), pp. 15-21
[11]
J Lawrence, D Alcock, P McGrath, J Kay, SB MacMurray, C. Dulberg.
The development of a tool to assess neonatal pain.
Neonatal Netw, 12 (1993), pp. 59-66
[12]
Z Zeng.
Assessment of neonatal pain: uni- and multidimensional evaluation scales.
Front Nurs, 9 (2022), pp. 247-254
[13]
AB Serpa, R Guinsburg, C Balda Rde, AM dos Santos, KC Areco, CA. Peres.
Multidimensional pain assessment of preterm newborns at the 1st, 3rd and 7th days of life.
Sao Paulo Med J, 125 (2007), pp. 29-33
[14]
TM Heiderich, AT Leslie, R Guinsburg.
Neonatal procedural pain can be assessed by computer software that has good sensitivity and specificity to detect facial movements.
Acta Paediatr, 104 (2015), pp. e63-e69
[15]
G Zamzmi, CY Pai, D Goldgof, R Kasturi, T Ashmeade, Y. Sun.
An approach for automated multimodal analysis of infants’ pain.
2016 23rd International Conference on Pattern Recognition (ICPR), pp. 4148-4153
[16]
GF Teruel, TM Heiderich, R Guinsburg, CE. Thomaz.
Analysis and recognition of pain in 2d face images of full term and healthy newborns.
Anais do XV Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2018), Sociedade Brasileira de Computação - SBC, (2018), pp. 228-239
[17]
R Zhi, G Zamzmi, D Goldgof, T Ashmeade, Y. Sun.
Automatic infants’ pain assessment by dynamic facial representation: effects of profile view, gestational age, gender, and race.
J Clin Med, 7 (2018), pp. 173
[18]
S Brahnam, L Nanni, S McMurtrey, A Lumini, R Brattin, M Slack, et al.
Neonatal pain detection in videos using the iCOPEvid dataset and an ensemble of descriptors extracted from Gaussian of Local Descriptors.
Appl Comput Inform, (2020), pp. 1-22
[19]
PA Orona, DA Fabbro, TM Heiderich, MC Barros, Balda RdeC, R Guinsburg, et al.
Atlas of neonatal face images using triangular Meshes.
Anais do XV Workshop de Visão Computacional (WVC 2019), Sociedade Brasileira de Computação - SBC, (2019), pp. 19-24
[20]
G Zamzmi, R Paul, Salekin MdS, D Goldgof, R Kasturi, T Ho, et al.
Convolutional neural networks for neonatal pain assessment.
IEEE Trans Biom Behav Identity Sci, 1 (2019), pp. 192-200
[21]
Y Sun, C Shan, T Tan, X Long, A Pourtaherian, S Zinger, et al.
Video-based discomfort detection for infants.
Mach Vis Appl, 30 (2019), pp. 933-944
[22]
L Buzuti, T Heideirich, M Barros, R Guinsburg, C. Thomaz.
Neonatal pain assessment from facial expression using deep neural networks.
Anais do XVI Workshop de Visão Computacional (WVC 2020), Sociedade Brasileira de Computação - SBC, (2020), pp. 87-92
[23]
LP Carlini, LA Ferreira, GAS Coutrin, V.V. Varoto, TM Heiderich, RCX Balda, et al.
A convolutional neural network-based mobile application to bedside neonatal pain assessment.
2021 34th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), pp. 394-401
[24]
MS Salekin, PR Mouton, G Zamzmi, R Patel, D Goldgof, M Kneusel, et al.
Future roles of artificial intelligence in early pain management of newborns.
Paediatr Neonatal Pain, 3 (2021), pp. 134-145
[25]
K Hoti, PT Chivers, JD. Hughes.
Assessing procedural pain in infants: a feasibility study evaluating a point-of-care mobile solution based on automated facial analysis.
Lancet Digit Health, 3 (2021), pp. e623-e634
[26]
G Zamzmi, CY Pai, D Goldgof, R Kasturi, T Ashmeade, Y. Sun.
A comprehensive and context-sensitive neonatal pain assessment using computer vision.
IEEE Trans Affect Comput, 13 (2022), pp. 28-45
[27]
Graziosi MES, Liebano RE, Nahas FX. Pesquisa em Bases de Dados - Módulo Científico. In: Especialização em Saúde da Família UNA-SUS, pp. 25-33. [cited 2022 Aug 14]. Available from: https://www.unasus.unifesp.br/biblioteca_virtual/esf/1/modulo_cientifico/Unidade_13.pdf
[28]
EESC - USP. Revistas Científicas na Área da Engenharia. [cited 2022 Aug 30]. Available from: https://eesc.usp.br/biblioteca/post.php?guid=95&catid=fonte_eletronica.
[29]
Biblioteca Virtual em Saúde - BVS. DeCS/MeSH - Descritores em Ciências da Saúde. [cited 2022 Aug 30]. Available from: https://decs.bvsalud.org/.
[30]
K Grifantini.
Detecting faces, saving lives.
[31]
J Egede, M Valstar, MT Torres, D. Sharkey.
Automatic neonatal pain estimation: an acute pain in neonates database.
2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), pp. 1-7
[32]
A Martinez-Balleste, JO Casanovas-Marsal, A Solanas, F Casino, M. Garcia-Martinez.
An autonomous system to assess, display and communicate the pain level in newborns.
2014 IEEE International Symposium on Medical Measurements and Applications (MeMeA), pp. 1-5
[33]
JM Roué, I Morag, WM Haddad, B Gholami, KJS. Anand.
Using sensor-fusion and machine-learning algorithms to assess acute pain in non-verbal infants: a study protocol.
BMJ Open, 11 (2021).
[34]
X Cheng, H Zhu, L Mei, F Luo, X Chen, Y Zhao, et al.
Artificial intelligence based pain assessment technology in clinical application of real-world neonatal blood sampling.
Diagnostics, 12 (2022), pp. 1831
[35]
PH Domingues, RM da Silva, IJ Orra, ME Cruz, TM Heiderich, CE. Thomaz.
Neonatal face mosaic: an areas-of-interest segmentation method based on 2D face images.
Anais do XVII Workshop de Visão Computacional (WVC 2021), Sociedade Brasileira de Computação - SBC, (2021), pp. 201-205
[36]
J Han, L Hazelhoff, PHN de With.
Neonatal monitoring based on facial expression analysis.
Neonatal Monitoring Technologies, IGI Global, (2012), pp. 303-323
[37]
MN Mansor, AK Junoh, A Ahmed, H Kamarudin, A. Idris.
Infant pain detection with homomorphic filter and fuzzy k-NN classifier.
Appl Mech Mater, 643 (2014), pp. 183-189
[38]
E Parodi, D Melis, L Boulard, M Gavelli, E. Baccaglini.
Automated newborn pain assessment framework using computer vision techniques.
Proceedings of the International Conference on Bioinformatics Research and Applications 2017 - ICBRA 2017, pp. 31-36
[39]
Y Wang, L Huang, AL. Yee.
Full-convolution Siamese network algorithm under deep learning used in tracking of facial video image in newborns.
J Supercomput, 78 (2022), pp. 14343-14361
[40]
YS Dosso, D Kyrollos, KJ Greenwood, J Harrold, JR. Green.
NICUface: Robust neonatal face detection in complex NICU scenes.
IEEE Access, 10 (2022), pp. 62893-62909
[41]
S Brahnam, CF Chuang, FY Shih, MR. Slack.
Machine recognition and representation of neonatal facial displays of acute pain.
Artif Intell Med, 36 (2006), pp. 211-222
[42]
M Schiavenato, JF Byers, P Scovanner, JM McMahon, Y Xia, N Lu, et al.
Neonatal pain facial expression: Evaluating the primal face of pain.
[43]
S Gkikas, M. Tsiknakis.
Automatic assessment of pain based on deep learning methods: a systematic review.
Comput Methods Programs Biomed, 231 (2023).
[44]
J Yan, G Lu, X Li, W Zheng, C Huang, Z Cui, et al.
FENP: a database of neonatal facial expression for pain analysis.
IEEE Trans Affect Comput, 14 (2023), pp. 245-254
[45]
D Harrison, M Sampson, J Reszel, K Abdulla, N Barrowman, J Cumber, et al.
Too many crying babies: a systematic review of pain management practices during immunizations on YouTube.
BMC Pediatr, 14 (2014), pp. 134
[46]
MS Salekin, G Zamzmi, J Hausmann, D Goldgof, R Kasturi, M Kneusel, et al.
Multimodal neonatal procedural and postoperative pain assessment dataset.
Data Brief, 35 (2021).
[47]
B Huang, W Chen, CL Lin, CF Juang, Y Xing, Y Wang, et al.
A neonatal dataset and benchmark for non-contact neonatal heart rate monitoring based on spatio-temporal neural networks.
[48]
GV Silva, MC Barros, JD Soares, LP Carlini, TM Heiderich, RN Orsi, et al.
What facial features does the pediatrician look to decide that a newborn is feeling pain?.
Am J Perinatol, 40 (2023), pp. 851-857
[49]
MC Barros, CE Thomaz, GV da Silva, AS do Carmo, LP Carlini, TM Heiderich, et al.
Identification of pain in neonates: the adults’ visual perception of neonatal facial features.
J Perinatol, 41 (2021), pp. 2304-2308
[50]
JD Soares, MC Barros, GV da Silva, LP Carlini, TM Heiderich, RN Orsi, et al.
Looking at neonatal facial features of pain: do health and non-health professionals differ?.
J Pediatr (Rio J), 98 (2022), pp. 406-412
[51]
GAS Coutrin, LP Carlini, LA Ferreira, TM Heiderich, RCX Balda, MCM Barros, et al.
Convolutional neural networks for newborn pain assessment using face images: A quantitative and qualitative comparison.
3rd International Conference on Medical Imaging and Computer-Aided Diagnosis - MICAD 2022.
Copyright © 2023. Sociedade Brasileira de Pediatria