Deep Convolutional Neural Networks for Endotracheal Tube Position and X-ray Image Classification: Challenges and Opportunities.
Published: August 30, 2017. Source: J Digit Imaging. 2017 Aug;30(4):460-468.
1. Thomas Jefferson University Hospital, Sidney Kimmel Jefferson Medical College, Philadelphia, PA, 19107, USA. Paras.email@example.com.
The goal of this study is to evaluate the efficacy of deep convolutional neural networks (DCNNs) in differentiating subtle, intermediate, and more obvious image differences in radiography. Three different datasets were created, which included presence/absence of the endotracheal (ET) tube (n = 300), low/normal position of the ET tube (n = 300), and chest/abdominal radiographs (n = 120). The datasets were split into training, validation, and test sets. Both untrained and pre-trained deep neural networks, including AlexNet and GoogLeNet classifiers, were employed using the Caffe framework. Data augmentation was performed for the presence/absence and low/normal ET tube datasets. Receiver operating characteristic (ROC) curves, areas under the curve (AUC), and 95% confidence intervals were calculated. Statistical differences between AUCs were determined using a non-parametric approach. The pre-trained AlexNet and GoogLeNet classifiers had perfect accuracy (AUC 1.00) in differentiating chest vs. abdominal radiographs, using only 45 training cases. For the more difficult datasets, presence/absence and low/normal position of the endotracheal tube, additional training cases, pre-trained networks, and data-augmentation approaches helped increase accuracy. The best-performing network for classifying presence vs. absence of an ET tube was still very accurate, with an AUC of 0.99. However, for the most difficult dataset, low vs. normal position of the endotracheal tube, DCNNs did not perform as well, but still achieved a reasonable AUC of 0.81.
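The non-parametric AUC reported throughout the abstract can be illustrated with a minimal pure-Python sketch: the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative case (the normalized Mann-Whitney U statistic), and a percentile bootstrap gives an approximate 95% confidence interval. The function names and classifier scores below are hypothetical, invented for illustration; they are not the study's actual method or data.

```python
import random

def auc_mann_whitney(scores_pos, scores_neg):
    """Non-parametric AUC estimate: fraction of (positive, negative)
    pairs where the positive case scores higher; ties count half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def bootstrap_ci(scores_pos, scores_neg, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) confidence interval for the AUC.
    Resamples each class with replacement and recomputes the statistic."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        bp = [rng.choice(scores_pos) for _ in scores_pos]
        bn = [rng.choice(scores_neg) for _ in scores_neg]
        stats.append(auc_mann_whitney(bp, bn))
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical classifier scores for tube-present (pos) vs tube-absent (neg) cases
pos = [0.9, 0.8, 0.75, 0.6, 0.55]
neg = [0.7, 0.4, 0.3, 0.2, 0.1]
print(auc_mann_whitney(pos, neg))   # 0.92
print(bootstrap_ci(pos, neg))
```

With realistic sample sizes (n = 300 per dataset, as in the study), analytic methods such as DeLong's test are typically preferred for comparing two AUCs; the bootstrap here is only a compact way to show the non-parametric idea.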
Artificial intelligence; Artificial neural networks (ANNs); Classification; Machine learning; Radiography