Table 1 AI methods for osteoporotic fracture prediction

From: AI algorithms for accurate prediction of osteoporotic fractures in patients with diabetes: an up-to-date review

| Author | Year | Country | Modality | Subjects | AI algorithm | Train/validation/test set | Best result |
|---|---|---|---|---|---|---|---|
| Whittier et al. [36] | 2022 | Canada | HR-pQCT | 5873 patients | Fuzzy c-means clustering | 446 train and 5873 test | HR = 2.96 |
| Shimizu et al. [37] | 2022 | Japan | Database | 7033 patients | DT, feature selection and relative importance, ANN, and SVM | 75% train and 25% test | AUC = 0.74 |
| Kong et al. [38] | 2022 | Korea | Database | 1595 participants | DeepSurv | 1416/1595 train (fivefold CV) and 179/1595 test | C-index = 0.614 |
| Dong et al. [39] | 2022 | USA | Database | 4461 subjects and 15,524 spine radiographs | GoogLeNet | 76.5% train, 8.5% validation, and 15% test | AUC-ROC = 0.99, AUC-ROC = 0.82, sensitivity = 59.8%, PPV = 91.2%, and F1 score = 0.72 |
| Chen et al. [40] | 2022 | China | Database | 14,419 patients | XGBoost combined with MLP | 80% train and 20% test | AUC ≈ 0.9, accuracy = 90.38%, and F1 score = 0.9037 |
| Ulivieri et al. [41] | 2021 | Italy | Database | 172 women | Two derivative algorithms of ANN | 90/172 train and 82/172 test, then reversed (82/172 train and 90/172 test) | AUC = 0.896, accuracy = 82.93%, sensitivity = 82.14%, specificity = 83.72% |
| Nissinen et al. [42] | 2021 | Finland | DXA | 2949 + 459 women and 115 men | Convolutional neural network (CNN) | 2949/3523 train (tenfold CV) and 574/3523 test | AUC = 0.64, accuracy = 52.0%, sensitivity = 67.8%, specificity = 51.4% |
| de Vries et al. [43] | 2021 | Netherlands | Database | 7578 patients | CR, RSF, and ANN-DeepSurv model | 100% train and 100% test | C-index = 0.625 |
| Wu et al. [44] | 2020 | USA | Database | 5130 men | RF, NN, LR, and gradient boosting | 80% train (tenfold CV) and 20% test | AUC = 0.71, accuracy = 0.88 |
| Villamor et al. [45] | 2020 | Spain | Database | 137 patients | SVM, RBF, LR, SNN, and RF | 101/137 train (tenfold CV) and 36/137 test | Accuracy = 0.86 |
| Galassi et al. [46] | 2020 | Spain | Database | 137 patients | SVM, LR, DT, and RF | 70% train (twofold CV) and 30% test | Accuracy over 87%, specificity over 92%, and sensitivity over 83% |
| Engels et al. [47] | 2020 | Germany | Database | 288,086 individuals | SL, XGBoost, LR, RF, SVM, and RUS | 80% train (tenfold CV) and 20% test | AUC = 0.72 |
| Almog et al. [48] | 2020 | USA | Database | 6,329,986 patients | Crystal Bone | 50% train (threefold CV) and 50% test | AUC = 0.81 |
| Su et al. [49] | 2019 | USA | Database | 5994 men | CARTs | Tenfold CV | AUC = 0.726 |
| Muehlematter et al. [50] | 2019 | Switzerland | CT | 60 stable and 60 unstable vertebrae of 58 patients | MLP, ANN, RF, SVM, and naïve Bayes classifier | 2/3 train (tenfold CV) and 1/3 test | AUC = 0.97 |
| Ferizi et al. [51] | 2019 | USA | Database | 92 women | Linear models, SVM, DT, KNN, and EL | 22/23 train (23-fold CV) and 1/23 test | Specificity = 0.83 (adjusted), accuracy = 0.71 (adjusted), precision = 0.68, F1 score = 0.67 (adjusted) |
| Kruse et al. [52] | 2017 | Denmark | Database | 10,775 women | Standardized variable means, Euclidean distances, and Ward's D2 method of HAC | Not required | Nine (k = 9) clusters identified |
| Kruse et al. [53] | 2017 | Denmark | Database | 4722 women and 717 men | Classification tree, BAT, BGLM, PLS, KNN, LogitBoost, BGAM, HDDA, RF, C5.0, CIT, LMT, SGB, QDA, LDA, BFDA, BMARS, NSC, SVMRW, NN, NNFE, XGB, CIRF, and AB | 75% train (fivefold CV) and 25% test | AUC = 0.92 |
| Schuler et al. [54] | 2010 | Australia | CT | 100 | InShape model | Not presented | R = 0.91 |

Abbreviations: AI, artificial intelligence; CT, computed tomography; ML, machine learning; CV, cross-validation; AUC, area under the curve; BAT, bootstrap aggregated trees; BGLM, Bayesian generalized linear model; PLS, partial least squares; KNN, k-nearest neighbours; LogitBoost, boosted logistic regression; BGAM, boosted generalized additive model; HDDA, high-dimensional discriminant analysis; RF, random forest; CIT, conditional inference tree; LMT, logistic model trees; SGB, stochastic gradient boosting; QDA, quadratic discriminant analysis; LDA, linear discriminant analysis; BFDA, bagged flexible discriminant analysis; BMARS, bagged multivariate adaptive regression splines; NSC, nearest shrunken centroids; SVMRW, support vector machines with radial weights; NN, neural network; NNFE, neural network with feature extraction; XGB, extreme gradient boosting; CIRF, conditional inference random forest; AB, adaptive boosting; HAC, hierarchical agglomerative clustering; DT, decision tree; EL, ensemble learning; MLP, multi-layer perceptron; ANN, artificial neural network; SVM, support vector machine; CARTs, classification and regression trees; SL, super learner; XGBoost, extreme gradient boosting; LR, logistic regression; RUS, random undersampling; RBF, radial basis function; SNN, shallow neural network; GB, gradient boosting; CR, Cox regression; RSF, random survival forests; DXA, dual-energy X-ray absorptiometry
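
Most rows in Table 1 share the same evaluation protocol: a percentage train/test split, k-fold cross-validation on the training portion, and AUC as the headline metric. As a reading aid for those columns, here is a minimal, hypothetical Python/scikit-learn sketch of that generic protocol; the random-forest model and the synthetic data are placeholders, not the pipeline of any cited study.

```python
# Generic sketch of the protocol many Table 1 rows report: 80/20
# train/test split, tenfold CV on the training set, AUC on the test set.
# The model choice and synthetic cohort below are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

# Placeholder cohort: 1000 "patients", 20 features, binary fracture label.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# 80% train / 20% test, as in e.g. Chen et al. [40] and Wu et al. [44].
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = RandomForestClassifier(n_estimators=300, random_state=0)

# Tenfold cross-validated AUC on the training set.
cv_auc = cross_val_score(model, X_train, y_train, cv=10, scoring="roc_auc")
print(f"10-fold CV AUC: {cv_auc.mean():.3f} +/- {cv_auc.std():.3f}")

# Final fit, then AUC on the held-out 20% test set.
model.fit(X_train, y_train)
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {test_auc:.3f}")
```

Survival-style entries (DeepSurv, CR, RSF) report a C-index instead of AUC because they model time to fracture rather than a fixed-horizon binary label.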