
Artificial intelligence (AI) diagnostic tools: utilizing a convolutional neural network (CNN) to assess periodontal bone level radiographically—a retrospective study



The purpose of this investigation was to develop a computer-assisted detection system based on a deep convolutional neural network (CNN) algorithm and to evaluate the accuracy and usefulness of this system for the detection of alveolar bone loss in periapical radiographs in the anterior region of the dental arches. We also aimed to evaluate the usefulness of the system in categorizing the severity of bone loss due to periodontal disease.


A data set of 1724 intraoral periapical images of upper and lower anterior teeth in 1610 adult patients was retrieved from the ROMEXIS software management system at King Saud bin Abdulaziz University for Health Sciences. Using a combination of a pre-trained deep CNN architecture and a self-trained network, the radiographic images were used to determine the optimal CNN algorithm. The diagnostic and predictive accuracy, precision, confusion matrix, recall, F1-score, Matthews correlation coefficient (MCC), and Cohen kappa were calculated for the deep CNN algorithm in Python.


The periapical radiograph dataset was divided randomly into 70% training, 20% validation, and 10% testing datasets. With the deep learning algorithm, the diagnostic accuracy for classifying normal versus diseased teeth was 73.0%, and 59% for classifying the severity of bone loss. The model showed a significant difference in the confusion matrix, accuracy, precision, recall, F1-score, Matthews correlation coefficient (MCC), Cohen kappa, and receiver operating characteristic (ROC) between the binary and multi-classification models.


This study revealed that the deep CNN algorithm (VGG-16) was useful for detecting alveolar bone loss in periapical radiographs and had a satisfactory ability to grade the severity of bone loss in teeth. The results suggest that model performance depends on the classification level and the image characteristics captured. With additional optimization of the periodontal dataset, a computer-aided detection system is expected to become an effective and efficient aid in the detection and staging of periodontal disease.



Periodontitis (PD), a multifactorial and complex inflammatory disease of the tooth-supporting tissues, is characterized by loss of periodontal tissue support [5]. It is considered the second most prevalent oral disease globally (20–50%) and is the primary cause of tooth loss in adults [19]. Though the microbial plaque biofilm initiates the process, progression is largely due to an exaggerated host immune-inflammatory response [5]. It is a major public health problem with a significant impact on an individual's quality of life [19].

Despite the latest advances in treatment modalities, there has not been a significant improvement in the methodology for detecting alveolar bone loss and assessing the severity of the bone loss in the compromised teeth. Radiographs, such as panoramic/periapical and bitewing radiographs as well as periodontal probing, are widely used as objective diagnostic tools for diagnosing and predicting periodontally compromised teeth (PCT). Clinical diagnostic and prognostic judgment depends greatly on empirical evidence [20].

Artificial intelligence (AI) has primarily been used in dentistry to improve the accuracy and efficiency of diagnosis, which is critical to achieving the best procedural outcomes and providing superior patient care [8]. AI approaches may be beneficial because they make the diagnostic process more effective when combined with clinical assessment. Through image recognition, classification, and segmentation, AI may enhance dental efficiency. Convolutional neural networks (CNNs), the core model of modern deep learning in computer vision, support image recognition and segmentation and can be used as a supplement to radiographs to detect periodontal disease. Through their multiple convolutional and hidden layers, deep CNN algorithms can detect edges, learn hierarchical feature representations, and capture regional patterns from images of periodontally compromised teeth (PCT).

To date, only a few studies have investigated the use of deep learning in the diagnosis of PCT. CNN-based methods have been proposed for detecting radiographic bone loss (RBL) on dental panoramic radiographs [2,3,4,5,6,7,8,9,10,11, 16,17,18,19,20,21,22, 25]. Studies have also evaluated deep CNNs for determining peri-implant marginal bone loss on dental periapical radiographs [6], and a CNN has been used to detect periodontally compromised posterior teeth on intraoral radiographs [15].

The purpose of the current study was to evaluate the potential usefulness and accuracy of deep CNN algorithms for detecting alveolar bone loss in incisor teeth on periapical radiographs and for grading the severity of the bone loss in periodontally compromised incisors.


The study was conducted in the College of Dentistry, King Saud Bin Abdulaziz University for Health Sciences, and approved by the Institutional Review Board of King Abdullah International Medical Research Center at Riyadh, Saudi Arabia (SP20-234-R).

A data set of 1724 intraoral periapical images of upper and lower anterior teeth from randomly selected adult patients with periodontitis, taken between 2015 and 2020, was retrieved from the ROMEXIS 6.0 software (Planmeca, Finland). Periapical images were excluded if the patient was aged 12 years or younger, if the image showed severe noise or haziness, or if the teeth were only partially visible, severely distorted, had undergone apical surgery with root resection, carried a full restorative crown, or deviated in shape from normal anatomical structures.

All periapical images were annotated and examined by three independent examiners, including a periodontist, who collected, deciphered, and categorized them to determine the severity of bone loss in the periodontally compromised incisor teeth. All examiners were calibrated for annotation and categorization of the severity of bone loss, and radiographs on which the three examiners' diagnoses did not agree were excluded. We categorized the severity of bone loss based on the traditional classification of the International Workshop for Classification of Periodontal Diseases and Conditions (1999), in which the root is divided into three parts from the CEJ to the root apex: the coronal third, the middle third, and the apical third. The severity of bone loss was defined as mild when the bone loss lay in the coronal third of the root, moderate when in the middle third, and advanced when in the apical third of the root length [17]; teeth with no vertical or horizontal alveolar bone loss were classified as healthy.
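The thirds-of-root rule above can be expressed as a small decision function. This is an illustrative sketch, not the study's code; the function name and millimetre measurements are assumptions:

```python
def classify_bone_loss(bone_level_mm, root_length_mm):
    """Map radiographic bone loss to a severity class.

    bone_level_mm: bone loss measured from the CEJ to the alveolar crest.
    root_length_mm: root length measured from the CEJ to the root apex.
    Thresholds follow the thirds-of-root rule of the 1999 classification.
    """
    if bone_level_mm <= 0:
        return "healthy"      # no vertical or horizontal bone loss
    fraction = bone_level_mm / root_length_mm
    if fraction <= 1 / 3:
        return "mild"         # bone loss within the coronal third
    elif fraction <= 2 / 3:
        return "moderate"     # bone loss within the middle third
    return "severe"           # bone loss reaching the apical third
```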

Using a combination of transfer learning models with a CNN architecture, the radiographic images were used to determine the optimal CNN algorithm in Python. The data set was divided randomly into a 70% training dataset, a 20% validation dataset, and a 10% testing dataset. Each image was exported manually in high-quality PNG format, examined, and manually cropped to show only the tooth boundaries. The images were labeled for binary classification (healthy or diseased) and multi-classification (normal, mild, moderate, severe). Each image was converted to greyscale, resized to 150 × 150 pixels, and normalized so that pixel values lay between zero and one.
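The random 70/20/10 split and the pixel normalization can be sketched as follows. This is illustrative only; with 1724 images, simple truncation yields 1206/344/174, close to (but not exactly) the study's reported 1206/345/173 split:

```python
import random

def split_dataset(paths, seed=42):
    """Shuffle image paths and split them 70/20/10 into train/validation/test."""
    rng = random.Random(seed)      # fixed seed for a reproducible split
    paths = list(paths)
    rng.shuffle(paths)
    n = len(paths)
    n_train = int(0.70 * n)
    n_val = int(0.20 * n)
    train = paths[:n_train]
    val = paths[n_train:n_train + n_val]
    test = paths[n_train + n_val:]  # remainder (~10%)
    return train, val, test

def normalize(pixels):
    """Scale 8-bit greyscale pixel values into the range [0, 1]."""
    return [p / 255.0 for p in pixels]
```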

This study used a CNN-based model, the VGG-16 (Visual Geometry Group) network architecture, with the TensorFlow and Keras libraries. This architecture is among the most popular and effective deep learning models for image classification problems [28]. Reusing previously learned knowledge on a newly collected dataset is better than solving a new image classification problem from scratch [24], and transfer learning helps deep convolutional networks avoid local optima and over-fitting [26].

We used a transfer learning approach to classify our dataset. Transfer learning is the process of sharing knowledge learned in one domain with another domain. Because we fine-tuned the network, the proposed model consists of 13 convolutional layers and 2 dense layers. The first dense layer contained 256 neurons; the output layer had 4 neurons for the multi-classification and 2 neurons for the binary classification. A dropout layer that randomly drops 50% of the neurons was added to prevent overfitting. We used the RMSprop optimizer with a categorical cross-entropy loss and trained the model for 100 epochs with a batch size of 16. Obtaining results quickly with highly accurate prediction and diagnosis is an essential factor in periodontal treatment, and there is ongoing research to improve the accuracy and speed of radiology AI [23]. The entire process of the methodology is presented in Fig. 1.
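A minimal Keras sketch of a fine-tuned VGG-16 head like the one described above (an assumed configuration, not the authors' code: the learning rate and input pipeline are not reported, and greyscale images are typically stacked to three channels to match VGG-16's expected input):

```python
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG16

# VGG-16 convolutional base (13 conv layers), pre-trained on ImageNet,
# without its original classifier head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(150, 150, 3))
base.trainable = False  # freeze first; selectively unfreeze top blocks to fine-tune

num_classes = 4  # 4 for the multi-classification model, 2 for the binary model

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # 256-neuron dense layer
    layers.Dropout(0.5),                    # randomly drop 50% of neurons
    layers.Dense(num_classes, activation="softmax"),
])

model.compile(
    optimizer=optimizers.RMSprop(learning_rate=1e-4),  # learning rate assumed
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Training would then follow the reported schedule:
# model.fit(train_ds, validation_data=val_ds, epochs=100, batch_size=16)
```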

Fig. 1

Scheme of the general process in the methodology

Statistical analysis

The dental radiographic image dataset was evaluated for two different classifications: the binary classification (healthy or diseased) and the multi-classification (normal bone, mild bone loss, moderate bone loss, and severe bone loss). For both classifications, the images were divided into a training dataset (n = 1206; 70%), a validation dataset (n = 345; 20%), and a test dataset (n = 173; 10%). The training dataset was used by the CNN model to learn RBL detection and to distinguish between normal and abnormal periodontal bone levels in both types of classification.

The validation dataset was used to analyze the CNN performance and generate the best weights for the deep CNN algorithm model. Finally, the test dataset was used to evaluate the CNN prediction models using a confusion matrix, accuracy, precision, recall, F1-score, Matthews correlation coefficient (MCC), and Cohen kappa. The κ values for Cohen kappa were classified as follows: less than 0, poor; 0.00–0.20, weak; 0.21–0.40, fair; 0.41–0.60, moderate; 0.61–0.80, substantial; and 0.81–1.00, almost perfect agreement.
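As an illustration (not the study's code; function names are hypothetical), Cohen's kappa and the interpretation bands above can be computed as:

```python
def cohen_kappa(y_true, y_pred, labels):
    """Cohen's kappa: agreement between two labelings beyond chance."""
    n = len(y_true)
    p_observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Expected chance agreement from the two sets of marginal frequencies.
    p_expected = sum(
        (y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels
    )
    return (p_observed - p_expected) / (1 - p_expected)

def kappa_strength(k):
    """Interpretation bands used in this study."""
    if k < 0:
        return "poor"
    for upper, label in [(0.20, "weak"), (0.40, "fair"), (0.60, "moderate"),
                         (0.80, "substantial"), (1.00, "almost perfect")]:
        if k <= upper:
            return label
```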


Demographic data of the patients

The dataset consisted of a total of 1724 periapical radiographic images. Almost half (n = 814, 47.21%) were classified as normal/healthy teeth and 910 (52.78%) as non-normal/diseased teeth with alveolar bone loss based on the binary classification. Of the non-normal teeth, multiclass classification categorized 511 (29.64%) as mild, 290 (16.82%) as moderate, and 109 (6.32%) as severe bone loss.

Model performance result for PCT classification

Confusion matrices and Accuracy

The confusion matrices for the alveolar bone levels, with and without normalization, obtained when training and testing the CNN model for the binary and multi-classification are shown in Figs. 2 and 3. The shade darkens in proportion to the number of correctly classified images. The diagonal components are the numbers of images predicted correctly, where the predicted label matches the true label; the off-diagonal components were misjudged by the classifier. The deep CNN had an accuracy of 73.04% and 59.42% for the binary and multi-classification, respectively, a significant difference in predictive accuracy between the two approaches (p = 0.037).
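For reference, overall accuracy is the sum of the confusion matrix's diagonal divided by the sum of all entries, and the normalized matrices plotted in the figures divide each row by its total. A small sketch (illustrative; the matrix values are made up):

```python
def accuracy_from_confusion(matrix):
    """Overall accuracy = correctly predicted (diagonal) / all predictions."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

def normalize_rows(matrix):
    """Row-normalized matrix: the per-true-class proportions that are shaded."""
    return [[v / sum(row) for v in row] for row in matrix]
```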

Fig. 2

Binary class confusion matrix using a deep CNN classifier

Fig. 3

Multi-classification confusion matrix using a deep CNN classifier

The total diagnostic accuracy for the binary classification was 73.04%, with higher accuracy for the presence of alveolar bone loss (72%) than for its absence (59%). The total diagnostic accuracy for the multi-classification was 59.42%, with the highest diagnostic accuracy for normal alveolar bone, followed by mild and moderate bone loss, and the lowest for severe alveolar bone loss.

Precision, recall, and F1-scores

The ML classification performance and model prediction efficiency and effectiveness were evaluated using accuracy, precision, recall, and F1-score (Tables 1 and 2). The precision, recall, and F1-scores for the binary classifier were above 70%, indicating that the model is a good classifier for the presence and absence of alveolar bone loss; a score of 0.75 indicates good ability to predict the correct class. For the multi-classification, precision ranged from 45 to 83%, and the recall and F1-scores ranged from 45 to 70%. Mild bone loss had the lowest sensitivity and the lowest F1-score (0.45), while normal alveolar bone levels had the highest F1-score (0.70) across all the stages. However, the values differed significantly for the multi-classification, indicating that the model may only fairly classify the severity of bone loss (Table 2).
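These per-class scores follow directly from the confusion matrix (rows = true class, columns = predicted class). A self-contained sketch, not the study's code:

```python
def per_class_scores(matrix):
    """Precision, recall, and F1 for each class of a confusion matrix."""
    n = len(matrix)
    scores = []
    for c in range(n):
        tp = matrix[c][c]                              # true positives
        fp = sum(matrix[r][c] for r in range(n)) - tp  # predicted c, but wrong
        fn = sum(matrix[c]) - tp                       # true c, but missed
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores.append({"precision": precision, "recall": recall, "f1": f1})
    return scores
```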

Table 1 Experimental results for the transfer learning model
Table 2 Statistical evaluation of the learning model

Matthews correlation coefficient (MCC)

Due to the imbalance in the number of images in each category of the bone level loss dataset, the MCC index was used to evaluate both models' performance. A correlation of 0.51 for the binary and 0.65 for the multi-classification signifies that the predicted and true classes are moderately correlated.
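For the binary case, the MCC is a single closed-form expression over the four confusion-matrix cells (the multi-class case uses a generalized form). An illustrative sketch:

```python
import math

def mcc_binary(tp, tn, fp, fn):
    """Matthews correlation coefficient for a binary confusion matrix.

    Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect),
    and is robust to class imbalance, unlike plain accuracy.
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0  # degenerate matrix (an all-one-class prediction)
    return (tp * tn - fp * fn) / denom
```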

Cohen Kappa

The assignment of the presence/absence of bone loss between the ML model and the periodontists showed moderate agreement (κ = 0.512), while the agreement for the multi-classification was lower (κ = 0.41).

Sensitivity and specificity

The detection of healthy versus diseased alveolar bone using machine learning as a diagnostic marker had a sensitivity of 73% and a specificity of 79.1%, with the detection of bone loss by the periodontist as the gold standard.
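Sensitivity and specificity are computed against the reference standard (here, the periodontist's reading). A sketch with hypothetical counts, chosen only to illustrate the formulae:

```python
def sensitivity_specificity(tp, tn, fp, fn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    where TP/TN/FP/FN are counted against the reference-standard labels."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```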


In this study, we evaluated the diagnostic performance of a CNN-based model VGG-16 to detect periodontal bone loss and classify the alveolar bone levels in teeth affected by periodontal disease. The presence of alveolar bone loss was detected with high accuracy by the system. Our findings revealed that deep CNN had a diagnostic accuracy of 73.04% in detecting an alveolar bone loss in the anterior teeth of both arches. The total diagnostic accuracy for the multi-classification was 59.42%, with the highest diagnostic accuracy for the presence of normal alveolar bone, followed by mild, moderate, and severe alveolar bone loss.

Our findings are supported by Lee et al. [15], who developed a deep learning model to classify periodontally compromised posterior teeth from periapical radiographs; they reported a diagnostic accuracy of 81% for premolars and 76.7% for molars, comparable to our 73% for anterior teeth. Another study by Lee et al. demonstrated a diagnostic accuracy of 0.85, no significant difference between the RBL percentage measurements of the deep learning model and the examiners, and sensitivity, specificity, and accuracy over 0.8 for the different stages [14]. Other models have used deep learning to detect RBL, calculate the RBL percentage, or assign periodontitis staging from panoramic radiographs [7,8,9,10,11,12,13,14,15,16]. Although these models achieved good accuracy and reliability in assessing the bone level, it is generally not recommended to rely on panoramic radiographs because of image distortion, overlapping structures, and low resolution [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18, 21].

The proposed model is the first to assign a severity category based on the traditional classification of the International Workshop for Classification of Periodontal Diseases and Conditions (1999), in which bone loss is characterized as mild when it lies in the coronal third of the root, moderate when in the middle third, and advanced when in the apical third of the root length [17]. Healthy indicated no vertical or horizontal alveolar bone loss.

The current study was based on periapical radiographs, the standard radiographic images for periodontal diagnosis. A major highlight of our study is that we did not exclude caries, restorations, or endodontically treated teeth, to increase complexity and simulate real-life scenarios as much as possible. The current results confirmed that the CNN model, designed to detect the presence and absence of RBL and to categorize the bone loss by severity, may save time and support confirmatory decisions for RBL detection and severity categorization. The implication is that clinicians would not have to assign a bone loss stage by manually calculating the RBL percentage for each tooth, a very time-consuming process.
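The manual calculation the model would spare clinicians can be expressed in one line (a sketch; the function name and millimetre measurements are illustrative, not from the study):

```python
def rbl_percentage(cej_to_crest_mm, cej_to_apex_mm):
    """Radiographic bone loss as a percentage of root length,
    with both distances measured from the cemento-enamel junction (CEJ)."""
    return 100.0 * cej_to_crest_mm / cej_to_apex_mm
```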

Deep CNN algorithms have yielded promising results in various fields of medicine, including imaging [4,5,6,7,8,9,10,11,12,13,14,15]. In our study, we performed supervised deep learning using a CNN-based model, VGG-16 (Visual Geometry Group), on a periapical radiographic dental dataset, and its predictive accuracy was close to that of the periodontists. The deep CNN algorithm was better trained and optimized to distinguish bone loss from no bone loss in the upper and lower incisor teeth and to detect normal bone levels. However, the accuracy of categorizing severity was lower: the diagnostic accuracy for severe bone loss was the lowest overall, and the trained algorithm was poorly optimized for its detection. Further studies on the mechanisms underlying deep CNN algorithms are necessary. The sensitivity, specificity, and F-measure demonstrated acceptable performance for the binary classification, which supports the appropriateness of this method. Future research could explore additional techniques for improving the system output, such as more advanced augmentation, extending the dataset, and using more recent CNN architectures.

In summary, this study found that a deep CNN trained on a limited number of labeled images had a good capability of detecting the presence of bone loss and a satisfactory ability to grade its severity. Applying such a model to assist in detecting bone loss may reduce diagnostic effort by saving assessment time and automating screening documentation. A good-quality tooth edge is important when the periodontal tissue is damaged, as it increases the diagnostic and predictive accuracy of a PCT model. The deep CNN algorithm can automatically extract features from PCT images and identify diverse characteristics in the input image, from spots, corners, and edges to progressively more complex patterns, structures, and shapes [15]. VGG-16 has a powerful advantage in this detection problem, supporting its use in the present study [24].

A strength of our study is the consideration of imbalanced data, for which accuracy may not be the best measure of classifier performance. We therefore also report precision and recall (also known as sensitivity), which are more appropriate measures for imbalanced data.


A limitation of this study is that the machine was only tested to detect and categorize alveolar bone loss, not to diagnose periodontitis. It is noteworthy that only 2-dimensional periapical radiographs are insufficient for a complete diagnosis or prediction of PD. In order to make a more accurate diagnosis and prediction of PD, radiographic and clinical data must be reviewed comprehensively, including the patient's history and a comprehensive periodontal examination. Deep CNN algorithms based on periapical radiographic images alone do not provide sufficient evidence to diagnose and predict periodontally compromised teeth, but they may be useful as a reference.

We had a heterogeneous distribution of images across the severity classes, with fewer moderate and severe images, which may have reduced the diagnostic accuracy of the multi-classification. Developing a deep learning algorithm with improved performance requires careful algorithm design and a balanced, high-quality training dataset. To mitigate the class imbalance, we collected only high-quality images classified by the periodontist, and transfer learning and preprocessing techniques, including image augmentation and enhancement, were used to avoid overfitting and regularize the model [15].

The performance of the machine learning algorithms changes with an increasing data size [27]. Additional research on a larger dental image dataset with an equal distribution of the groups and deep CNN algorithms for classification is required.

The images were cropped and resized to 150 × 150 pixels due to practical constraints. Another limitation is that our experiments were run on Google Colab, whose memory (12 GB) is exceeded when the input dimensions of the images are increased. To process approximately 1800 images within these limits, we reduced the input dimension to 150 pixels. We also tried an input dimension of 224 pixels with a reduced number of images, but the accuracy was lower than with the current setting. We recommend investigating the impact of periapical image dimensions on deep learning performance.

Understanding the difference in human dentition is crucial in studies. On a superficial level, the teeth of different individuals may appear similar but on closer examination, they reveal significant variation in both size and shape [3]. This variation in the image dataset for upper and lower teeth might affect the model result. Maintaining a high quality that covers this variation within each class is important and studies are required to reduce the impact of this factor on the performance of the deep learning prediction and diagnoses.

Because the differential diagnosis between healthy teeth and incipient PCT was made using only periapical radiographs, this study could not diagnose or distinguish incipient PCT from healthy teeth. Clinical trials comparing the identification of periodontally compromised teeth using conventional clinical and radiographic findings, with and without the support of the CNN, should be conducted.


A deep CNN algorithm (VGG-16) was found to be useful for detecting alveolar bone loss in periapical radiographs, as well as for grading the severity of bone loss in teeth. The results indicate that model performance depends on the classification level and the image characteristics captured. By optimizing the periodontal dataset, a computer-aided detection system should be able to aid in the detection and staging of periodontitis.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available [raw data generated from the College of Dentistry, KSAU-HS] but are available from the corresponding author on reasonable request.



CNN: Convolutional neural network

MCC: Matthews correlation coefficient

PCT: Periodontally compromised teeth

AI: Artificial intelligence

RBL: Radiographic bone loss


  1. Åkesson L, Håkansson J, Rohlin M. Comparison of panoramic and intraoral radiography and pocket probing for the measurement of the marginal bone level. J Clin Periodontol. 1992;19(5):326–32.

  2. Albandar JM, Abbas DK. Radiographic quantification of alveolar bone level changes: comparison of 3 currently used methods. J Clin Periodontol. 1986;13(9):810–3.

  3. Alt KW, Pichler SL. Artificial modifications of human teeth. In: Dental anthropology. Springer; 1998. p. 387–415.

  4. Bindal P, Bindal U, Kazemipoor M, Jha S. Hybrid machine learning approaches in viability assessment of dental pulp stem cells treated with platelet-rich concentrates on different periods. Appl Med Inform. 2019;41(3):93–101.

  5. Cecoro G, Annunziata M, Iuorio MT, Nastri L, Guida L. Periodontitis, low-grade inflammation and systemic health: a scoping review. Medicina. 2020;56(6):272.

  6. Cha J-Y, Yoon H-I, Yeo I-S, Huh K-H, Han J-S. Peri-implant bone loss measurement using a region-based convolutional neural network on dental periapical radiographs. J Clin Med. 2021;10(5):1009.

  7. Chang H-J, Lee S-J, Yong T-H, Shin N-Y, Jang B-G, Kim J-E, et al. Deep learning hybrid method to automatically diagnose periodontal bone loss and stage periodontitis. Sci Rep. 2020;10(1):1–8.

  8. Chen Y-W, Stanley K, Att W. Artificial intelligence in dentistry: current applications and future perspectives. Quintessence Int. 2020;51(3):248–57.

  9. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402–10.

  10. Jung S-K, Kim T-W. New approach for the diagnosis of extractions with neural network machine learning. Am J Orthod Dentofac Orthop. 2016;149(1):127–33.

  11. Kim J, Lee H-S, Song I-S, Jung K-H. DeNTNet: deep neural transfer network for the detection of periodontal bone loss using panoramic dental radiographs. Sci Rep. 2019;9(1):1–9.

  12. Krois J, Ekert T, Meinhold L, Golla T, Kharbot B, Wittemeier A, et al. Deep learning for the radiographic detection of periodontal bone loss. Sci Rep. 2019;9(1):1–6.

  13. Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 2017;284(2):574–82.

  14. Lee CT, Kabir T, Nelson J, Sheng S, Meng HW, Van Dyke TE, et al. Use of the deep learning approach to measure alveolar bone level. J Clin Periodontol. 2021.

  15. Lee J-H, Kim D-H, Jeong S-N, Choi S-H. Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm. J Periodontal Implant Sci. 2018;48(2):114–23.

  16. Li H, Zhou J, Zhou Y, Chen Q, She Y, Gao F, et al. An interpretable computer-aided diagnosis method for periodontitis from panoramic radiographs. Front Physiol. 2021;12:934.

  17. Lindhe J, Ranney R, Lamster I, Charles A, Chung CP, Flemmig T, et al. Consensus report: chronic periodontitis. Ann Periodontol. 1999;4(1):38.

  18. Mol A. Imaging methods in periodontology. Periodontology. 2004;34(1):34–48.

  19. Nazir M, Al-Ansari A, Al-Khalifa K, Alhareky M, Gaffar B, Almas K. Global prevalence of periodontal disease and lack of its surveillance. Sci World J. 2020;2020.

  20. Nguyen TT, Larrivée N, Lee A, Bilaniuk O, Durand R. Use of artificial intelligence in dentistry: current clinical trends and research advances. J Can Dent Assoc. 2021;87(l7):1488–2159.

  21. Pepelassi EA, Tsiklakis K, Diamanti-Kipioti A. Radiographic detection and assessment of the periodontal endosseous defects. J Clin Periodontol. 2000;27(4):224–30.

  22. Sunnetci KM, Ulukaya S, Alkan A. Periodontal bone loss detection based on hybrid deep learning and machine learning models with a user-friendly application. Biomed Signal Process Control. 2022;77:103844.

  23. Tadavarthi Y, Vey B, Krupinski E, Prater A, Gichoya J, Safdar N, et al. The state of radiology AI: considerations for purchase decisions and current market offerings. Radiol Artif Intell. 2020;2(6):e200004.

  24. Tammina S. Transfer learning using VGG-16 with deep convolutional neural network for classifying images. Int J Sci Res Publ (IJSRP). 2019;9(10):143–50.

  25. Tsoromokos N, Parinussa S, Claessen F, Moin DA, Loos BG. Estimation of alveolar bone loss in periodontitis using machine learning. Int Dental J. 2022.

  26. Wu Y, Qin X, Pan Y, Yuan C. Convolution neural network based transfer learning for classification of flowers. In: 2018 IEEE 3rd International Conference on Signal and Image Processing (ICSIP). IEEE; 2018.

  27. Wu Z, Ramsundar B, Feinberg EN, Gomes J, Geniesse C, Pappu AS, et al. MoleculeNet: a benchmark for molecular machine learning. Chem Sci. 2018;9(2):513–30.

  28. Yauney G, Rana A, Wong LC, Javia P, Muftu A, Shah P. Automated process incorporating machine learning segmentation and correlation of oral diseases with systemic health. In: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2019.



We would like to acknowledge Engr. Amin Alawad, IT Manager at College of Dentistry, King Saud bin Abdulaziz University for Health Sciences for his support in this research and Dr. Susanna Wright of King Abdullah International Medical Research Center for her assistance in editing the manuscript.

Author information

Authors and Affiliations



GA: formulated the research plan, performed the literature search, collected the data, and wrote up the study. MAW: performed the literature search, collected data, and wrote the manuscript. FFF: performed the literature search, analyzed data, and wrote the manuscript. MAJ: performed the literature search, collected and analyzed data, and wrote the manuscript. RA: performed the literature search, collected data, and edited the manuscript. MA: performed the literature search, collected data, and edited the manuscript. All authors read and approved the final version of the manuscript.

Corresponding author

Correspondence to Fathima Fazrina Farook.

Ethics declarations

Ethics approval and consent to participate

The study protocol was approved by the Institutional Review Board of King Abdullah International Medical Research Center in Riyadh, Saudi Arabia (SP20-234-R). The methods were carried out in accordance with the relevant guidelines and regulations. Informed consent has been obtained from all participants for the use of their radiographs for research purposes.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and Permissions

About this article


Cite this article

Alotaibi, G., Awawdeh, M., Farook, F.F. et al. Artificial intelligence (AI) diagnostic tools: utilizing a convolutional neural network (CNN) to assess periodontal bone level radiographically—a retrospective study. BMC Oral Health 22, 399 (2022).



  • CNN
  • Artificial intelligence
  • Teeth
  • Bone level
  • Periodontitis
  • Machine learning
  • VGG-16